
April 16, 2025 45 mins

George and Jennifer welcome two of Carleton's reference librarians, Sean Leahy and Sarah Calhoun. We talk about how AI is changing the ways students read and conduct research. We also touch on the origins of April Fools' Day and the nature of reality and truth.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
Welcome to the Year of Curiosity podcast from Carleton College,
where we take a year-long dive into a complex topic and invite curious guests to share their experiences and their questions.
This year, we're diving into the world of artificial intelligence. How will AI change the ways we learn, work, and live?
What will we gain and lose as this technology becomes more pervasive and accessible?

(00:22):
Join us as we pursue these questions and many others with an open mind and a curious attitude.
I'm George Cusack, the director of Writing Across the Curriculum at Carleton.
And I'm Jennifer Wolfe, a biologist and the director of Carleton's Perlman Center for Learning and Teaching.
Welcome. Our AI-generated tagline for the week is "intelligently interrogating AI inquiries."

(00:48):
George, thank you for picking that particular tagline. You're quite welcome.
How are you doing? I'm all right. How are you today? I'm good.
What's made you curious lately? Well, as you know, I have a 16-year-old who's just started looking at colleges.
Uh, and they've already come across one school that has, uh, an aggressive chatbot on its website.

(01:10):
Uh, once it pegs you as a prospective student. Uh, and so this got me thinking about how
this gives the college a personality, and it kind of leads students to think of the college,
the institution, as a friend before they even come on campus.
And so it's got me thinking about, okay, well, there's things that are good about that, obviously in terms of welcoming and belonging,

(01:35):
but there's also things that are potentially bad about thinking of your institution as an entity with a personality.
It's going to become a friend that asks you for money, if nothing else. Exactly that.
Yes. Does Carleton have that?
Uh, not to my knowledge yet. I don't think so either. What about you?
What's making you curious? Well, we're recording on April 1st, April Fool's Day, and I know you are not listening to it on April Fool's Day,

(02:02):
but I did think about just absolutely making something up.
Um, but then I decided against that for various reasons, um, and decided just to be curious about the origin of April Fool's Day.
Um, I'm sure that this has been the topic of many, uh, PhD theses.
Um, and I did not consult them. I went to my friend Wikipedia and the Encyclopedia Britannica, which still exists,

(02:30):
and Britannica actually says it is unknowable, which sounds mysterious and kind of cool.
Um, but some people traced the origin to pagan festivals, others to a reference to the 32nd day of March in The Canterbury Tales.
So I thought that was kind of cool. And we know it was firmly established by the 1500s,

(02:54):
because there's a 1561 poem by the Flemish poet Eduard de Dene about a nobleman who sends his servant on frivolous errands on the 1st of April,
um, and the servant soon realizes that he has been sent on a fool's errand because it's April 1st.
And so there you have the origin of that little phrase, too.

(03:16):
And I'm thinking that maybe my trip down this rabbit hole for half an hour's time this morning is also a fool's errand,
but I kind of had fun with the tail end of spring break. It seems like a worthy way to spend that time.
Well, I needed to wake up and my coffee wasn't at hand.
So this did it. But I want to get our guests in on this conversation.
So today we have two guests from the Carleton Library.

(03:41):
We have Sarah Calhoun, who is a reference and instruction librarian for humanities and digital scholarship,
and Sean Leahy, a reference and instruction librarian for social sciences and data.
So Sean and Sarah, first, thank you so much for coming to be with us at this busy time of year.
Um, we'd love to hear what you're curious about. Yeah.

(04:01):
Um, first, to go back, um, in case you're wondering about the Encyclopedia Britannica, we do also have a subscription to it,
which has more information than the free version that you all often come across in Google.
So, FYI, a little further back down the rabbit hole for you.
Yeah. Um, so, big picture, what I'm curious about:
I'm thinking about how to balance innovation,

(04:22):
which in some ways is good for the few, with preservation of government and the environment, for the good of the many.
But also, because it is April 1st today: um, what is the weather report, and is it an April Fool's joke?
Because there is going to be a lot happening today. So yeah, I saw that we were supposed to have snow.
Yeah, yeah, we are. And then some rain, and probably something else, and 80-degree temperatures or something on top of that.

(04:50):
Yeah. I feel like there's always a week in late March or early April where Minnesota experiences multiple seasons within a couple-day period.
Yeah, yeah. And they're not in a chronological sequence.
No, no. Yeah. There've been many a May Day parade where I've been in snow or oppressive heat or rain.

(05:11):
So we're not even two. We're not even through April yet. That's mostly belts.
Sean, how about you? Um, I am curious about,
I was just reading about it, uh, yesterday in preparation for this, but that, uh,
eye-popping gift Reed Hastings gave to Bowdoin to set up an AI initiative there.
So very curious to see. I mean, I don't know how closely you will be able to kind of track it, but, you know,

(05:37):
to see a leading liberal arts college get that kind of money to kind of really invest a lot of time and energy in, uh, AI, that's kind of exciting.
And also, I'm curious to know, I mean, we're very much in this AI moment where we don't know what the future is going to be.
Um, how much of this is hype, how much is reality, etc.
And so, you know, with these kind of big initiatives. Yeah. What will they look like in ten years time?

(06:02):
Well, it's particularly interesting, since Colby College, just up the road, has the Davis Institute for AI.
So that's two liberal arts schools with major, well-funded, and in one case quite well-established, AI institutes within,
what, I believe about 70 miles of each other.
So, yeah, I mean, the AI capital of the U.S.: natural resources, lobsters and AI.

(06:26):
Yeah. There we have it. So one of the things we like to ask all our guests about is their AI origin story.
So can you think back to a time when AI first became a big deal or entered your consciousness?
And let's shake it up and ask Sean first.
Okay. Um, well, for me, it really is.

(06:47):
You know, if we don't count the Spielberg movie, which was a huge disappointment to me when it came out, uh, both creepy and disappointing.
Yes. Yeah, yeah. Um, but, uh, between then and, uh, I would say November 2022, when ChatGPT, uh, was released and,
you know, generative AI or the concept of generative AI kind of took over the world.

(07:09):
Um, that's, I think, when it really, um, kind of, uh, all started for me, like many people.
I also had, uh, a parental leave that lasted, uh, a few blissful months of, you know, total ignorance about all things
AI, uh, so that when I came back from that, uh, in July of last year, it was sort of like, oh, it's still, it's still here.

(07:30):
It's still a thing. So how about you, Sarah? Similar to Sean in many ways, uh, though also rewinding a bit before that:
I co-supervise a group of student workers, the digital scholarship interns, with some of my colleagues in digital scholarship
and digital humanities. Um, with them, back in 2017, we started reading, um, some articles about bias in AI, like with race and gender.

(07:55):
Um, AI use by police specifically, um, Clearview AI, which is apparently still around, though they were starting to fold for a while.
Um, and then also more recently we have been reading about, um,
Google's firing of their ethical AI team in December 2020, and the environmental costs of training AI.
So it's been kind of there for a couple of years, as something that we revisit and discuss over time.

(08:17):
Yeah, certainly a lot to think about and a lot to be concerned about. Yeah, yeah.
So to start with a somewhat basic question, um, for folks who don't know, what does it mean to be a research librarian, and how do you become one?
And I guess, in particular, what does it mean to be a research librarian in 2025?
Great question. Um, well, uh, we're figuring out the latter part of that question.

(08:40):
Okay. Sure. Every day, it's new. Uh, you know, to become, uh, a research librarian, um, or even a librarian kind of at any,
um, higher education institution, uh, library school helps, going to library school.
So both Sarah and I have our, uh, library science degrees.
That's a graduate degree. Um, but, uh, you know, it's interesting.

(09:01):
We have interns in the library, um, every year.
And one part of their experience is they have to go around and talk to the
librarians in the reference and instruction department, or what we call the R&I department,
um, uh, to find out what our journey to becoming librarians was.
And I think part of that is that everyone's journey tends actually to be, with the exception of that degree, very different.

(09:22):
So for a lot of folks, it can be a second or third career. In my case, I did it not that long after graduating from college.
Um, but there's a lot of different things that I think bring people to, um, becoming librarians and research librarians in particular.
Um, there are, you know, there are those librarians who might work at a large R1 university where they really are supporting a lot of, you know,

(09:45):
in-depth research, the work of graduate students and faculty and their work is, you know, I would say primarily on the research support side.
Then for librarians like Sarah and me here at Carleton, um, you might be, uh,
you know, you might be more interested in the, um, teaching side of things.

(10:05):
You might be more interested in working with undergraduate students.
The the appeal, in addition to supporting research, I think is also, um, on that kind of pedagogical side.
So, um, the pedagogical side of the research process.
So, um, I think that's certainly, that's sort of what brought me to being, um, an R&I librarian.
But, um, Sarah? Yeah. Yeah. I mean, my journey was pretty similar.

(10:29):
This is also about a third career for me. Um, and I also have another degree in Asian studies and then went on to get the library science degree.
So that's pretty common to see.
And of course, there are many different types of librarians, all of whom kind of follow a similar path of getting a library science degree.
Um, so in addition to reference librarians like us,
we also have librarian colleagues in archives and special collections, cataloging, collection development, all through the library.

(10:56):
These specialized people come in with their specialized knowledge of how to make information available to the broader campus.
So being at Carleton, do you miss sort of the opportunity at a huge library to, you know, ride the carriage elevators down into the dark catacombs?
I know, I say this having, yeah, having.

(11:16):
I came here from an R1; I worked at Yale before I got here.
Um, and again, because a lot of the goal of a library is to make information accessible,
and trying to just decipher a lot of these very difficult systems, um, is not easy, shall we say.
Um, one of the things I like about working at Carleton is that it is so, as Sean was saying, pedagogy and teaching focused,

(11:42):
that we do get a lot more of that, like very hands on teaching work that we can do with students.
So that actually raises something I've always wondered about librarians, which is, um, uh, obviously your whole job,
as you just summarized it, is to make information accessible to people.
At the same time, is there a certain thrill in knowing stuff that, you know, 99% of the world doesn't?

(12:05):
You know, sort of uncovering things that you think, okay, I'm probably the the first person to read this in 50 years or a hundred years.
That sounds like my job as a scientist. Yeah, yeah.
Yes and no. Um, I think one of the thrills is finding this knowledge and then being able to share it.
So it's not the finding. Okay?

(12:26):
And a lot of times, the only way that it gets found is because one of my colleagues has already done the work to describe and organize it.
So like when you go into an archive,
the reason something is findable is that the archivists have spent a lot of time like putting everything in folders and making a finding aid for you.
So there's a lot of that, um, kind of like gratitude toward colleagues and toward the system of doing this.

(12:48):
And then one of the really fun parts for me is having these consultations with students or sometimes faculty,
when they're working on a very kind of niche research topic of like, okay, what can we find about?
Of course, now I'm blanking on all topics, but, you know, when we get to sit down and really dig through primary sources and find a thing of like,
oh, here's this testimony from a court case in 1940 that has exactly what you need about a topic, so that that can be the really fun part.

(13:15):
But again, it's usually only because some other librarian spent so much time with it.
So, so the fantasy trope of librarians hoarding information, Smaug-style, doesn't hold any appeal for either of you.
I don't know if it does for you. No, no, no, no. I'm kind of,
I'm kind of relieved at that, I have to say. And I've mentioned this on the podcast before,

(13:36):
I loved the library time when I was a child, and we had a class on how to use the library.
We would go once a week and there, um, and it was taught by our school librarian, whose name I kid you not was Mrs. Book.
That's amazing. And I thought that was normal.
Um. So clearly I need to change my name.

(13:57):
Right? Right. Um, and so that was great.
And the other thing that came out of that is this woman was so obsessed with our pushing in chairs when we left the table that now,
I never leave a table without pushing in the chair.
And I get really kind of irritated when people leave chairs out all willy-nilly.

(14:20):
And I wonder if that's also taught in library school, or this was a quirk of Marlene Book.
Uh, yeah. I mean, it certainly sounds like she was playing to type there, for the fastidious librarian.
But, you know, I think, uh, yeah, it's sort of, right,
I will say, yeah, the notion of the shushing librarian is obviously something we push back against.

(14:46):
And I've probably been shushed myself more, well,
certainly more often than I've ever shushed somebody, because I don't think I ever have had to.
But, um, but yeah. No, I think, uh, you know, did library school, uh, you know, what did it impart in sort of those ways?
I'm not sure I can recall. Not part of the curriculum? Absolutely.
That's a very diplomatic answer. So, coming back to that question of what it means to be a research librarian in 2025:

(15:15):
Um, we've obviously talked a lot on this podcast about how AI is affecting reading and writing, especially the way students read and write.
Uh, how is it affecting the way students research or the way you work with students as you teach them how to use the library for research?
Um, yeah, I'll, I'll start. And I would say that, um, maybe even surprisingly, it hasn't had, uh,

(15:39):
a super significant impact yet on the way I work with students, and I think across our department, in R&I,
it hasn't had, um, there haven't been major changes so far.
Certainly, you know, I've had research consultations with students where we will, um,
use an AI tool to kind of help further along their research, um, when appropriate.

(16:01):
But, you know, sometimes it's just that some students have come and they want to try it out.
There haven't been that many examples, but they were told, like, well, why don't you try Gemini to do this
aspect of, like, the methodology? Um, and then for myself, I mean,
I do incorporate it sometimes when I'm trying to get up to speed on a topic, because
students come to us with many different topics that we certainly don't have expertise in.

(16:26):
So just like I might Google it or Wikipedia it, or look at an encyclopedia about a topic to get some surface level introduction.
I'll use Gemini for that sometimes. Um, I have also used it to check my work.
You know, if I've had a research consultation, I was like, yeah, that went pretty well.
I wonder what you know, ChatGPT would have done. And what did I miss?
Uh, yeah. Let's see. I haven't given myself grades yet, but that is another use for it.

(16:50):
I suppose you could probably ask ChatGPT to give you a grade.
That's true. Yeah, that's, uh. But grades are meaningless.
That's true. That's also true. Um, but, uh, so, yeah, I haven't seen, you know, I think a significant change yet, but it is changing.
Um, and, um, I think one thing that's going to be more prominent in our work is the fact that obviously, we know these

(17:13):
commercial, uh, a lot of these commercial, um, tools are out there and very popular.
But all of the databases, or most of the, you know,
research platforms that we subscribe to at the library, are themselves starting to roll out
AI enhancements, research assistants, that sort of thing.
So, um, it is becoming ubiquitous, not just because it's in the news or it's, you know, we get ads about it every day,

(17:38):
but because everywhere we'll turn within the library, we'll probably have an AI component to it in the future.
So, um, you know, the way, you know,
I know Sarah and I have been talking about it is that this is just sort of the same as it always has been,
where we have access to hundreds of databases worldwide, because they each have different capabilities,

(18:03):
functionalities, affordances, uh, as well as limitations.
And so you can find the same journal across 7 or 8 different databases.
But why would you use ProQuest instead of Web of Science, and that sort of thing?
And so I think we'll still see our job as being to help students navigate, um, those differences in AI.

(18:23):
Um, because as we know, AI doesn't mean one thing.
It means many things. Um, so I can see that being part of the work we do.
You mentioned that a major part of your job is obviously making it easier for people to find information and find resources.
Um, and that's what AI promises to do: make the whole process of research easier.
Do you have a kind of, uh, line or mental rule of thumb for when it becomes too easy,

(18:50):
when, you know, certain parts of research should require a certain amount of effort for the payoff?
Oh, that's. Yeah, that's a great question.
I mean, I think the way I kind of think about it is, yes, I do think that like, friction is a good part of the research process,
that there's not just learning there, but also opportunity for unexpected results and serendipity.
All that. And that, I would say, is, you know, um, there's lots of reasons why I worry about or don't like

(19:18):
AI, um, let's say, but one thing specific to, let's say, the Carleton context or the undergraduate context is that notion that, oh, okay,
well, it produces a really convincing answer, and so, um, you know, I can work from here, kind of building off,
right, taking what the AI said as maybe mostly true. But the kind of, like, certainty maybe that comes from looking at a nice,

(19:44):
clean literature review that, uh, an AI has produced, versus the, yeah,
the effort, the frustration, the unexpected finds, um, unexpected connections, uh, that, um,
I think are essential to learning how to do research, as well as, you know, the kind of joy of research.
So that's what I worry about, I suppose, or that would be the point I'd really want to drive home to students.

(20:15):
Okay, it's one tool among many. Like, keep looking, think about it as, um, either a starting point or something to bounce your ideas off of.
Um, a concrete example of this. So I've been doing some genealogy work, uh, just kind of on my own time for fun, as librarians do.
Um, and I've been reading through a lot of shipping news from Maine.

(20:38):
This is from, like, mid-to-late 1800s Maine.
Maine, the AI capital of the U.S. It all comes back to Maine.
Yeah. Um, so as part of this, I've been having to learn how to read shipping news from the late 1800s,
which kind of has its own very idiosyncratic vocabulary and structure.

(20:59):
Um, so I've been going through it as I've been doing that.
Every now and then I'll come across a little phrase or an abbreviation that I cannot figure out what it means.
And so I've been using Google Gemini to be like, okay, I'm reading shipping news. What does this mean?
And that's been very helpful. So I've learned that D.O. is an abbreviation for ditto.
Um, which, okay, that makes total sense.
Yeah, exactly. But again, not an obvious one.

(21:21):
So that's been very helpful going through reading more and more.
And then I came to one specific entry that didn't make any sense.
I was trying to put it in chronological order in a timeline for this one ship captain that I was researching.
Um, and it wasn't working. It wasn't working. I asked Gemini, like, is it possible that this means this,
and that he was sailing from point A to point B? Gemini responded, no, absolutely not.

(21:45):
It means that he was at point C. And I was like, uh, I don't know if that's true.
I ended up finding a second entry for the same trip that he was taking
that corroborated my gut feeling. So I went back to Gemini, and I was like, oh no, actually, it does mean what I thought it did. And Gemini
was like, no, you're wrong. That's really not what it means.
Yeah. So then it started gaslighting me. Um, and I was like, no, no, really, this isn't what it means.

(22:10):
So we, I had this little back and forth, and then eventually we just had to agree to disagree.
Um, and this has also happened, I think, in the archives.
Our archivist, um, has mentioned that this happened to him, too.
He was searching for something, found the correct answer, and had been working with Gemini, and
Gemini was like, nope, you're wrong. So it does, every now and then, get hung up on an answer and will start gaslighting the humans.

(22:34):
Um, so it's one of those things, I think, uh, like what Sean was talking about, that you
want to come in with a certain amount of curiosity, and sometimes, um, doing a second read,
having that friction, can be helpful to kind of drive your own mind, because Gemini will not always be responsive as a discussion partner.

(22:55):
Um, and will sometimes send you down rabbit holes that you shouldn't go down.
Well, Gemini is a more mainstream tool, and that's one that we encourage our students to use a lot.
And so helping them think about what's coming back, I know, is really important.
Yeah. Can you talk a little more about how students are using, um,
tools like Gemini and ChatGPT and what other kinds of problems they might run into or what, um, benefits it has for their work, too.

(23:22):
Yeah. I mean, in some ways, as Sean was saying, we haven't seen a ton of this.
And some of that has to do with the assignments faculty are asking students
to do, and also with the guidelines and policies that are in classes.
Um, and so in some ways, as Sean was saying, it's a little more incidental.
But there have been a couple of classes that are starting to explore it.

(23:47):
The first big one that I'm going to be working with is, um, History 298, which is the History Methods class this spring.
So I'm going to be working with them. And you know, we haven't fully fleshed out the assignment and, um, the exercise yet,
but what I'm hoping to do is work with the students to have them first do essentially some slow research,
so like the human-driven research, first to see how that goes, and then ask Gemini to essentially, like, help as that assistant.

(24:15):
So how can you use this as a research assistant? Can Gemini help you write a query, a search query, a Boolean search query, to use in Catalyst,
our catalog? Can it recommend some journals to go look in?
What are the major journals? Um, because, like, one of the very big known pitfalls, it's been around for years now,

(24:36):
Um, is that a lot of times, because, um, generative AI is looking to make the humans happy,
unless it's arguing with you, unless it's arguing with you, in which case it's like, no, no, I am right.
Even if it's a librarian, yeah, I know better.
Yeah. Um, but yeah, it's making the human happy.
Yeah. Um, it's trying to figure out, like, what is, in that area,

(25:00):
what is the most likely author, what is the most likely title, but not necessarily which of those go together.
And so a lot of times it will take and recombine these pieces of information to create new citations for books or articles that perhaps don't exist.
So this is something that will periodically come up and cross our desks.

(25:20):
Um, I feel like that pitfall is becoming more widely known.
Um, but it's still something that, you know, it's good to remind people about.
Um. Yeah. I'm thinking about a good, uh, misattribution of an author.
You know, like, I don't know, Harry Potter by Stephen King. Yeah, yeah.

(25:40):
Would be a good example. Yeah. Not hard-hitting research, right?
Yeah, but both of them are very popular and very common, you know, titles slash authors.
So of course they must go together. It seems like one thing.
So I know you mentioned that scholarly publishing and the kind of research that faculty do is
less a component of your job here at Carleton than it might be at an R1, but obviously,

(26:03):
keeping track of scholarly publishing is still part of your job.
Um, how do you see AI affecting scholarly publishing as an industry, or the kinds of research the faculty are doing?
Um, there are definitely a lot of publishers that are actively looking for guidance on that right now.
Um, in part because there's the the question around, can scholarship be used to train AI models?

(26:31):
Um, and I'm also on the Copyright Committee, and this is an outstanding question.
And by outstanding I mean both good and outstanding, as in still under discussion.
Yeah. Um, and there are a bunch of court cases that are making their way through the system now.
Uh, the first one has gone through that said, like, no, you cannot use this licensed material to train, but it's only one of many.

(26:55):
And there are a lot of trackers of court cases out there. Um, there are some groups that are starting to set some standards.
Um, there's the Committee on Publication Ethics, or COPE, which has guidance on authorship and AI tools.
Um, there are some licensing models out there that are working along the same lines as Creative Commons.

(27:16):
So like authors can choose to license their content for training, but maybe not in other ways.
Um, but again, these are small early efforts down that road.
So it's something I think, to keep an eye on. Um, and probably for any faculty or anyone who's actively publishing right now,

(27:37):
you'd probably want to have that conversation with your publisher about how you do and don't want your material to be used.
Um, uh, you know, I'm also curious about, you know, and some people have been talking about this,
the sort of big-picture impact it will have, right?
In terms of output, the sheer amount of output that's being produced.

(27:59):
You know, is AI a solution to the peer review problem, in that, you know,
you can't find peer reviewers? But, um, at the same time, it's going to help create, uh,
maybe even make worse, the problem of the, you know, profusion of, uh, publications that are out there.
And so, you know, there are a lot of problems with scholarly publishing, lots and lots of problems with scholarly publishing.

(28:20):
Um, and is AI going to provide any sort of solutions to this, or exacerbate these problems?
Um, in addition to: is it actually going to help progress knowledge and understanding?
And I think some people are very skeptical about that, that more publications doesn't mean more progress.
I mean, um, so, you know, how that's going to shake out, you know,

(28:43):
we won't know necessarily for years, but that's certainly a conversation that I think is very,
uh, worth having, you know, um, amongst ourselves, um, within the Carleton context, but also within our disciplines and that sort of thing.
Yeah. Well, there's certainly a lot of uncertainty and many questions,
and it feels like the haystack is getting bigger and bigger and bigger while we're looking for the needle.

(29:06):
And we're glad you're here to support us in that. Um, so I want to know, um, as you look into the future, are there ways that you think AI,
um, may affect your work that you're excited about or afraid of?

(29:26):
Sean and I look nervously at each other. Okay.
Do you want to go first? This is not binding. Yeah, yeah.
Okay. Um. Well, excited? I'm not so sure.
You know, I think that there are, there are,
I think for me, a lot of the things that are exciting about it are overshadowed by, um, some of the problems that it might introduce,

(29:49):
um, like I said, whether it's helping with peer review, but then also making things worse with the profusion of, uh, publications.
But I think for librarians, you know, what's on my mind, uh, you know,
are some of the kind of direct and indirect threats to our work, to the work we do, to our labor.
Um, automation has always been a part, well, not always, but automation is certainly part of librarianship.

(30:13):
And libraries have explored automation in all its forms, um, over time.
But, you know, I think that there are some significant threats to different parts of the library workflow, things like cataloging and indexing,
which is the work of creating descriptive information, descriptive metadata,
about books and articles and sound recordings and primary sources so that these things can be found in catalogs or on databases.

(30:37):
That work, um, of creating those, you know, we'll call them tags, creating those tags and assigning those tags,
um, that's work that's traditionally been done by people, and then that work gets shared.
Um, uh, but now you see groups, um, moving to automate that process through AI.
So, um, you know, the example I can point to is MeSH, the Medical Subject Headings,

(31:00):
which is an important sort of classification system that, um, had been done by manual indexers up until a couple of years ago.
By library professionals with experience. Um, I knew an indexer once.
Um, and so they are moving a lot of that work to, um, or offloading that to, artificial intelligence,

(31:20):
which then, you know, I'm sure is being reviewed by indexers.
But again, you know, um, in time, cataloging and indexing might be something that's impacted. Um, for our own work,
you know, I think we see over time, at libraries across the country, no matter the size,
you're seeing fewer and fewer students coming to the reference desk, fewer and fewer students maybe making research consultations.

(31:44):
And I can only imagine this might exacerbate that problem. So, you know, how do we push back against that, or show our value,
air quotes, I'm making air quotes, our value, um, uh, and, uh, figure out ways to kind of, um, yeah,
counteract, um, the perception that AI might be a better tool than talking to a librarian.

(32:04):
Um, you know, will it be justification for cost cutting and right-sizing?
That's something I worry about. Um, you know, when you have large,
I think it was Cal State, systems investing heavily in these large AI partnerships,
um, which would include access to, you know, these powerful reasoning models,
which, if you use one, and I have, you know, when I've tried them out, I'm like,

(32:28):
oh, that sounds suspiciously like what I do when I'm talking to a student.
So if it's mimicking a librarian, you know,
human librarians might all of a sudden seem, um, look expensive and redundant.
So these are the things. And human librarians aren't usually available at 3 a.m.
That's also true. That's also true, though. Sort of sort of weird.

(32:50):
So if you've noticed on the library websites, we do actually have a chat widget there which is staffed by human beings.
So that is not an AI chat. Um, but it's through a consortium.
And so they are available at 3 a.m. because they have a professional staff and there are people who are working in different time zones.
So sometimes you'll be chatting with someone, a professional librarian in the UK for instance.

(33:10):
But that means, and we also staff that periodically for a couple of hours a week,
um, and it's becoming more common that we get chat questions coming in from smaller colleges that don't have a staff.
Um, and so a lot of times, then, essentially their reference work is being outsourced to us.

(33:31):
It is interesting, and, like, there's no solution to that.
And, you know, there hasn't been a large conversation, but it is something I've been noticing over the years, that that's been increasing.
Um. Oh, right. But, uh, other things I'm looking forward to, um, the, um,
again, on the Copyright Committee, and I'm going to pull back the curtain a little bit here, with Dan, who's in the room with us right now:

(33:55):
The two of us, plus many other brilliant people across campus, are keeping track of a lot of these court cases that are going through right now.
Um, and so, just, like, trying to keep up on all of the AI-related cases, um, we're going to be hiring an intern for this coming year again,
another digital scholarship intern, to help us kind of track a lot of the work that's happening in various different ways.

(34:19):
Um, and then also just kind of, you know,
the small questions of what is reality and what is truth, and how do I teach truth to a student when they're, you know, like, is this real?
Is it not real? So these kinds of questions keep me awake at night sometimes.
Well, as we get kind of towards wrapping up, we always like to ask if there is something that you would want to, or not want to, offload to

(34:45):
AI. If AI could do anything you wanted, what would it be?
We're in tax season right now. I would love to offload my taxes.
Um, that's, that's like, it's going to be possible pretty soon. The question is whether it's going to be reliable.
I know, right. How good does it need to be for you to trust it when it says, I have to pay this much?
You're going, yeah. Yeah. And also, because we may or may not be getting some weird weather again.

(35:10):
And I live in Minneapolis, where there is always a question of who shovels the sidewalks.
If AI could shovel the sidewalks, I would be a happy, happy camper.
How about you, Sean? Um, I think, uh, meal planning would be my number one thing.
You know, maybe, if it had, you know, AI is all-seeing,

(35:30):
if it knew what was in all of my cupboards and in my pantry, and it could just tell me what I can make that night,
uh, that would be very helpful. Or what I need to go get.
Um, I still like going to the grocery store, but I don't like having to think about what to cook, uh, each night so that I would offload happily.
Theoretically, those tools exist already. You just need to buy an internet accessible fridge.

(35:53):
That's true. That's a good point. Yeah. Yeah.
So, um, maybe I'll rethink that and just throw all my money at a smart fridge.
Can a smart fridge tell how long something's been in your fridge and whether it's gone rotten?
That's a good question. Do not throw away your carrots.
It's no, never throw away. Yes, I never. But I think it's time for recommendations.

(36:18):
George, do you want to kick us off? Sure.
Uh, this is a bit of a long one, but I am going to recommend that folks experiment with, uh, Gemini's deep research function.
Uh, and this is, well, in keeping with our conversation today.
Uh, for those of you who haven't heard of this, uh, it's a function on Gemini, though ChatGPT has its own version as well, I believe.

(36:39):
Also called deep research.
Um, but you can ask the chatbot a research question, like, say, for instance, what are the origins of April Fool's Day?
Uh, and then it will come up with a seven-part research plan,
which is essentially seven sub-questions it sees under that. It'll ask you to approve it.

(36:59):
And then if you approve it, then in about five minutes it'll, uh, it'll go through whatever sources it has access to,
uh, using, clearly, guidelines on what constitutes a good and a not-good source on the web.
Uh, and it'll produce for you, not a remarkably good research report, but a remarkably collegiate-sounding research report.

(37:23):
Uh, and so, uh, complete with verifiable links.
Uh, so that's not to say it never hallucinates, but it no longer invents sources out of whole cloth.
Um, so what I find interesting about this is, uh, on the one hand, it's terrible.
Uh. And it's terrible, specifically in all the ways that the worst stereotype of an unengaged student assumes college research is supposed to work.

(37:52):
Uh, so, I mean, it does the very smart and good practice of breaking your research question down into sub-questions,
but then it just maniacally goes through those seven questions, regardless of what it finds at any point.
Uh, and then it sews them up and tries to create an overall collegiate-sounding argument, uh, for what it found, no matter what it found.

(38:18):
Uh, so I'll give you an example. Uh, I did run that
what is the origin of April Fool's Day report. Uh, and Gemini, as part of its report,
and I've made the whole report accessible on Google, so we'll put it in the show notes,
wrote: "The enduring popularity and longevity of April Fool's Day,
juxtaposed with the absence of a definitive origin story, underscore the intriguing nature of folklore.

(38:43):
Traditions often persist and evolve within a culture, even when their precise historical genesis is unclear.
The human inclination towards lighthearted mischief and the fostering of social bonds through shared experience might serve
as more potent drivers for the continuation of such customs than a meticulously documented historical starting point."
So that is such a perfect stereotype of terrible student writing.

(39:06):
It's, you know, it takes one semi-clever observation,
isn't it funny that we don't know the origin of April Fool's Day, and tries to build a profound argument around it by
using incredibly overinflated and convoluted language to hide the fact that there's just nothing there. It did,
you know, what for a human would be hours of research, uh, on exactly the same sources you found.

(39:29):
It went to Britannica, the free version. It went to Wikipedia, and it found nothing.
There's no answer to that question. And so it made up this slop.
Uh, on the other hand, what I find interesting about this, other than just the schadenfreude of watching AI fail at something,
uh, is that you can actually see how the technology could produce something really quite,

(39:54):
almost alarmingly good if it had better programming on what a research report is actually meant to sound like,
if it was better programmed in how you actually, you know, periodically come up for air and ask the human, okay, this is what I found.
Is this good or should I go in a different direction? And if it had access to better sources?
Right now it is literally searching via Google.

(40:14):
Uh, but you can already imagine how if it had access to scholarly databases and better material, it might produce better results.
So that's my recommendation for this week. Excellent. Well, I'm going to stick with the April Fool's Day theme here.
And I'm just going to recommend that people have fun going back and looking at classic April Fool's Day hoaxes,

(40:35):
you know, the spaghetti trees and things like that. And so I'm going to take us in the Carleton Wayback Machine.
I don't know how long all of you have been here, but I was here in 2010 when then-President Rob Oden announced that a brand new
administrative building would be immediately constructed on the Bald Spot.
And there had been surveying the week before. And then there were construction barriers and an excavator on the, um, Bald Spot.

(41:03):
And he released a video talking about the critical and present need for more administrators at Carleton.
And they even got the architects who designed Cassat Hall, which is one of the dorms, to make this really over-the-top architectural diagram.
And this building is yellow and black and glass.

(41:24):
And it is something um, we will link to the announcement and the video.
They of course did this all without AI, so it was much more work back in those days.
Um, but it was, and of course, it's fun to look at those things, and it's really fun to see in the comments the people who fall for the hoax.

(41:44):
Um, just, how could you do this? The Bald Spot is a Carleton treasure.
Um, but I always have fun with that. How about you, Sarah?
Uh, sticking with hoaxes, I have a second answer. But the first answer: there was, uh, oh, a couple of years ago,
maybe it was 2020, 2021, the college released its new logo, the frisbee.

(42:06):
The frisbee one, which, I mean, talk about people freaking out.
I'm an alum, and, like, a few groups of alumni were just like, what is this?
And absolutely lost it. Um, uh, that was a pretty good one; they got a lot of people.
Well, I seem to recall it was actually President Byerly's first year, I think it was.
Yeah. Yeah. She came out swinging. Yeah.

(42:28):
And it was amazing. Um, in terms of things to share, Sean and I are librarians.
And so we have, in fact, built a bibliography.
This is amazing. I was looking at this. I'm excited to explore it.
This is the short version. Um, but two things in particular I want to highlight.
There is a zine from librarian Violet Fox called A Librarian Against AI, or, AI Think I Should Leave.

(42:55):
That's, of course, a nice little pun.
Um, and then there's also an article by history and digital humanities professor Cameron Blevins called A Large Language Model Walks into an Archive.
Um, but both of them have some really helpful suggestions about ways to think about AI and then also ways to use it in scholarship.
Oh, cool. How about you, Sean? Um, uh, I would recommend, uh, and maybe this has come up before, but the podcast feature on NotebookLM, um,

(43:22):
not because I like it, in particular because I find it just so infuriating, uh, so obvious and yet so, like, a mirror up to society, and, like, what must
we be as humans that we could be so easily replicated in this AI podcast format?
Oh, yeah. Um, so, yeah. Yeah, exactly. Yeah. Interesting there.

(43:43):
Right? Yeah. That's good. Uh huh. Yeah.
Uh, you know something I find myself saying to faculty a lot these days is that you are not as unpredictable as you think you are.
Yes. Right. Yeah. AI has a knack for revealing that.
Yeah. So it's fun to throw random things that you would never podcast into,
um, NotebookLM, and see what comes out. Like, yes.

(44:05):
The front page of the LTC website is a good one.
Yeah. So we not only do scholarship, but we do scholarship of teaching.
And I would also recommend, if you have, you know, a couple of years of a friend group chat, uh, explore that and dump that in there.
Oh, that was my most successful use. You know, it's good.

(44:28):
Or the family group chat or. That can be fun, too. Yeah. Yeah.
Well, um, this brings us to our end, and this has really been a fun conversation.
Thank you so much for taking the time to come and talk with us today.
Absolutely. This was great. Yeah. Thanks. The Year of Curiosity podcast is recorded on the campus of Carleton College.

(44:50):
Your hosts are Jennifer Ross Wolfe and me, George Cusack.
Our producer is Dan Hurlbert, who records and edits each episode along with his team of hard working students.
Our show notes are compiled and edited by Wiebke and are available on our website.
Carleton academic technologist Mary Drew maintains our Podbean account, which gets our episodes out to whatever platform you're currently listening on.

(45:11):
Our theme music was composed by Nathan Wolfe, Carleton class of '27, and our mascot, Maisie, was generated by Jennifer Ross Wolfe using Adobe Firefly.