
May 7, 2025 48 mins

Jennifer and George talk to Lauren Back '27 and her dad Greg Back, the co-founder of CatchLight Capital Partners, about the ways AI is changing the landscape in academia and Silicon Valley.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
[Auto-generated transcript. Edits may have been applied for clarity.]

Welcome to the Year of Curiosity podcast from Carleton College,
where we take a year-long dive into a complex topic and invite curious guests to share their experiences and their questions.
This year, we're diving into the world of artificial intelligence. How will AI change the ways we learn, work, and live?
What will we gain and lose as this technology becomes more pervasive and accessible?

(00:25):
Join us as we pursue these questions and many others with an open mind and a curious attitude.
I'm George Cusack, the director of Writing Across the Curriculum at Carleton.
And I'm Jennifer Wolfe, a biologist and the director of Carleton's Perlman Center for Learning and Teaching.
Welcome. Our AI-generated tagline for the week is: understanding tomorrow's AI through today's conversations.

(00:49):
George, how are you doing today? I'm doing all right. How are you, Jennifer?
I'm doing well. What have you encountered lately that's made you curious?
Uh, so I'm going to get a little philosophical this week.
And, as we were actually just discussing before the taping, uh, we're all a little frazzled at the end of the term, so that might not go well.
But either way, um, I've been thinking about a study that has just been released, uh, an internal study

(01:12):
from within the company Anthropic, which, for those who don't know, is the company behind the chatbot
Claude. Um, and they examined over 3,000 interactions with Claude.
Uh, and what they determined is that, uh, Claude expresses, in their assessment, a consistent set of values.

(01:32):
Uh, so, to be specific, in the words of the researchers themselves,
we find that Claude expresses many practical and epistemic values and typically
supports pro-social human values while resisting values like moral nihilism.
Uh, and so that's got me thinking about what it actually means to have or express values.
Right? Um, the researchers, to their credit, they don't anthropomorphize Claude.

(01:56):
They don't say Claude has a moral code or anything like that.
Um, but if what Claude produces is just strings of predictive text that are essentially averaged from the internet as a whole,
and they're kind of managed by the back-end guardrails that Anthropic puts on it, then whose values are those that it's expressing?

(02:21):
Uh, you know,
what does it even mean to express values if you don't actually understand anything you're saying in the way we think of human understanding, right?
This sounds unsurprising to me because the values are coded into the guardrails.
Is this what you're saying? Right? It's hard to tell if that's a finding, you know,
in the sense of they've discovered something about how the AI works, or if it's really just kind of a victory lap that,

(02:46):
hey, our guardrails work, right? Um, somewhat believably.
Yeah. Yeah, that's that's where my head is today. Uh, how about you?
Well, you'll remember a few episodes ago I was a little grumpy about the woolly mouse.
Yes, I do, and I'm still all for hair growth technology.
I know, I know, but I was unimpressed with the level of hype that came along with the woolly mouse.

(03:11):
And soon after that, of course, came the announcement that Colossal Biosciences, the same company, has de-extincted the dire wolf.
Yes. Um, which, of course, is having a cultural moment through Game of Thrones.
And so what they did, um, although extant, alive-now gray wolves are probably 10 to 15 million genetic changes away from the dire wolf.

(03:43):
Um, Colossal has made 15 genetic changes, um, to a gray wolf and has declared this a dire wolf.
And I think this would be equivalent to declaring the woolly mouse an actual mammoth.
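For scale, here's a quick back-of-the-envelope sketch using the figures as stated in the episode (treating every genetic change as comparable, which is a big simplification):

```python
# Rough scale of the edit gap Jennifer describes, using the episode's figures.
edits_made = 15                              # changes Colossal reports making
stated_gap = (10_000_000, 15_000_000)        # gray wolf vs. dire wolf differences

for total in stated_gap:
    print(f"{edits_made / total:.5%} of ~{total:,} differences")
# -> 0.00015% of ~10,000,000 differences
# -> 0.00010% of ~15,000,000 differences
```

By that arithmetic, the announced edits cover on the order of a ten-thousandth of one percent of the stated gap.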
And so you do like to get salty about the work you do with genetics.

(04:07):
Um, and so one great headline was, um, along the lines of: is this a scientific breakthrough or a colossal breakthrough in hype?
Um, there's an article from the L.A. Times that is very skeptical and lets us know
that George R.R. Martin is actually a co-author on the preprint of the paper,

(04:31):
um, and is listed as a consultant. This is the author of Game of Thrones.
And so again, I'm. You know this isn't Jurassic Park.
So I guess the good news is the cute little direwolf puppies are not going to get out loose and,
you know, chase down anybody's SUV or anything, but, well, they are wolves.

(04:52):
That's true. They are some of the most expensive wolves ever to be, I suppose.
Yeah. Well, if we're going to pile on, I'll just add and I'm not the first nerd to express this,
but they named one of the wolves Khaleesi, which has nothing to do with dire wolves in the literature.
In the Game of Thrones universe, Khaleesi is a character that has no connection to dire wolves.

(05:17):
Interesting. But anyway, what would you have named the wolves?
Uh, Remus was right down the middle. Sure.
Uh, I don't know. Ghost seems like the obvious one, given that you've resurrected it.
Oh. Fair enough. Yeah. All right. Um, well, we could talk about this all day, but we should get our guests in on this conversation.

(05:39):
Um, so today we have, uh, a set of perspectives that we have not featured on the show before.
And this is a Carleton student and a Carleton parent.
We have Lauren Back, who is a Carleton sophomore, a newly declared cognitive science major and philosophy minor.
Congratulations on declaring. And she's co-author of a Substack newsletter on the intersection of AI and humanity.

(06:06):
And we have her dad, Greg Back. Greg has worked at the intersection of business and tech for the last 30 years.
Since the dot-com era, he's been a venture capital investor, and he's the co-founder of CatchLight Capital Partners,
which has several dozen portfolio companies, including a few dozen with some element of AI.
Well, welcome, Lauren, and welcome, Greg. Thank you.

(06:29):
Thank you. Lauren, what are you curious about this week? Yeah.
You know, I'm going to answer something I've had kind of an ongoing curiosity about, which is: how does our perception of time function?
How does the brain integrate stimuli to create our perception of a sort of cohesive now?
You know, if I'm tapping my foot, um, and I'm watching myself doing that,

(06:52):
then it should take longer for those tactile signals to reach my processing centers in my brain than the visual signals.
Um, because my foot receiving the input is farther away than my eyes.
You know, the eyes get the benefit of the speed of light.
Um, and so how does that work? We think of, you know, the now as, like, sort of a point going along a line,

(07:16):
but like, it seems as though there isn't an exact point.
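As a back-of-the-envelope sketch of the latency gap Lauren is describing, here are some illustrative numbers (the path length and nerve conduction velocity below are assumed, textbook-order values, not measurements):

```python
# Rough comparison of tactile vs. visual signal timing for a foot tap.
# All values are illustrative assumptions for order-of-magnitude purposes.

foot_to_brain_m = 1.5      # assumed nerve-path length from foot to cortex
nerve_speed_m_s = 60.0     # assumed conduction velocity of a touch fiber
eye_to_foot_m = 1.5        # assumed viewing distance to the tapping foot
light_speed_m_s = 3.0e8

tactile_ms = foot_to_brain_m / nerve_speed_m_s * 1000
light_ms = eye_to_foot_m / light_speed_m_s * 1000

print(f"tactile conduction: ~{tactile_ms:.0f} ms")   # ~25 ms
print(f"light travel time:  ~{light_ms:.6f} ms")     # effectively instantaneous
```

Even before any neural processing of either signal, the transmission legs alone differ by tens of milliseconds, which is the asynchrony the brain somehow reconciles into a single "now."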
And even more broadly, and I'm digressing a little bit, but we're all about that.
Um, you know, our eyes are constantly moving, um, in a way that's called saccades.
Um, and our brain is constantly editing out that movement.
So there's a lot of information that we're just losing, and somehow there's this cohesive picture that's being formed.

(07:43):
Yeah. So we think of our brains as reliable reporters and they actually are not.
And sometimes some types of information, um, take precedence over other types.
Do you know about the McGurk effect? Oh, okay, I know the name.
Yeah. So this is the idea and I will not get it exactly right. But this is if I'm looking at you saying baa baa baa baa baa.

(08:05):
And I'm seeing your lips move in a particular way.
The visual information will override the aural information and you'll hear a different thing depending on what you're seeing.
And there are other experiments you can do, like trying to make people read, um, words really quickly.
But the words are particular colors. Oh yeah, and you'll say the color of the letters rather than the actual word.

(08:31):
So what I would say to students, and maybe this is an oversimplification, is: brains are dumb.
Um, but it sounds to me like you are a perfect cognitive science major.
Well, I don't know about that. Yeah, they're dumb, but they're also really cool if you think about it.
Yeah, our peripheral vision, it's really bad.
It's very blurry. Our brain is constantly like constructing this picture or narrative of, like our, you know,

(08:59):
selves situated in an environment that seems coherent and cohesive despite all this,
like, you know, sensory processing, jumbled information that we're getting, that's really impressive.
We could talk about this all day and, um, maybe come to my office hours and, uh, keep going.
Um, but we should ask your dad. Greg, what are you curious about?

(09:20):
I'm curious about what you guys are just chatting about. Of course.
Uh, you can tell that, uh, Lauren and I have, uh, some interesting topics, uh, when we get together.
Uh, but, uh, I'll take it from the theoretical on the mind to the very, very practical.
I'm curious what's going on in the minds of President Trump and President Xi of China.
Uh, these are two people who have concentrated enormous amounts of power.

(09:44):
And yet, on the one hand, we've got a leader who is incredibly unpredictable.
And on the other hand, we have a leader who is incredibly opaque.
Um, and so for both reasons, it's hard to understand, you know, where the puck is going, to use that analogy.
And, yeah, they might have more power now to influence lives, billions of lives, than anyone in history.

(10:06):
Yeah. So I think that's, uh, that's fascinating to me.
Um, I also think the way that the relationship plays out between Trump and Xi is likely to have an impact on AI.
So just kind of bridging that curiosity right into today's topics.
Yeah. Well, that's an interesting distinction you make between unpredictable and opaque.
And I know that you've said it; I see it in sort of how you compare those two.

(10:31):
Um, yeah. There's not a lot of data upon which to see where President Xi might be heading.
Right. You know, he's got, uh, a lot of scripted interactions with the world, kind of the opposite of President Trump.
Exactly. Yeah. Um, but you can see every so often [INAUDIBLE] do a big flip flop as he did with Covid.
You know, he went from zero Covid to kind of let it rip overnight.

(10:52):
Yeah. And so, you know, there's uh, there's a high degree of, you know, quantum uncertainty there.
Well, we always like to ask people about their AI origin story when they come on the podcast.
And so, Greg, I'm going to ask you, you certainly are working in an AI world quite a lot.
When did I first become a big deal for you? I imagine it was earlier than for many of us.

(11:16):
Yeah, well, I had a head start, actually a false start. You know, going back to college here, um, one of the jobs I was looking at right out of college,
uh, had an expert system, an early form of AI,
and they were pitching it as, you know, kind of game-changing, and ultimately that did not end up being the case.
I actually chose a different option.

(11:38):
Um, but it's a sign of something we see in technology a lot, which is, um, that being too early is indistinguishable from being wrong.
Um, and so, um, roll forward, though, to about a decade ago: I saw AI showing up really in the image arena, computer vision,
uh, ImageNet coming out of Stanford, you know, starting to beat humans,

(12:03):
um, these systems, um, at image recognition.
And so that was when I started investing in AI, uh, you know, applied AI, uh, fairly simple AI, not, uh, foundation models.
Large language models like ChatGPT weren't around yet. And then, finally, roll forward:
uh, third was ChatGPT, like everybody else. Uh, you know, the ChatGPT moment, two and a half years ago. I gave it a try and then, you know,

(12:28):
ran down the hall and told Lauren about it and said, okay, this is one of those things that, you know, we're going to remember.
Yeah. So, Lauren, how about you? Okay. Obviously, my dad just mentioned that he told me on day one about GPT-3.5.
Uh, and that's when it really hit me that, oh, this is a tool that could significantly reshape how various aspects of our lives function.

(12:52):
And I just remember in those first few days, thinking about it first and foremost,
with respect to academic life, obviously, I was a high school student, right.
Um, you know, we get assigned many essays. This is a tool that can generate language in a way that seems more human than ever before.
Um, obviously, I was technically aware of, you know, artificial intelligence, uh, before this point in the sense that,

(13:16):
you know, a lot of AI has kind of, um, you know, we think of it as defined in a very specific way, you know,
which is, like, oh, generative AI, LLMs, you know. But if you think about it as, like, a system able to perform tasks that,
you know, normally require human intelligence, that had already existed throughout my childhood.

(13:37):
Um, but yeah, 3.5 was the aha moment of sorts. As you just mentioned, you know, what we think of as the current era of AI,
starting with ChatGPT 3.5, that has now been about half of your high school years and half of your college years.
Um, so from your point of view, especially as someone who was thinking about how AI might change the way we do things,

(14:00):
Um, what's been your experience about how different teachers and different professors seem to approach AI or how it's affected your academic work?
Right. Uh, so in high school, there was virtually no mention.
Um, you know, I was even in the first, uh, you know, weeks, days.
I was just thinking about, like, should I bring this up to people?

(14:24):
Um, you know, because, like, you don't want to be, you know, perceived as someone who was cheating in class and such.
Um, but, uh, as time has moved on, I've noticed that, um, there's this theme on syllabi in college that professors put,
which is basically something along the lines of, um, a statement on AI usage.

(14:47):
And a lot of them tend to say, like, please don't use AI in my class.
Asterisk: um, if you see a way that you think it would benefit your learning, you know, please come talk to me.
Um, and while I appreciate that asterisk, I.

(15:08):
I question to what extent it's being used.
Um, you know, because students don't want, uh, I don't know.
I don't think students would want to be perceived, um, as being unethical or like, what if the professor says no?
You know, these are tools that people can access super easily?

(15:29):
Um, not to mention making more work for the student who wants to.
Yeah, exactly.
So even if the student is thinking about it in a thoughtful way, implementing it and such, you know, that just creates another step.
It's a barrier to access that I'm not convinced that everybody would choose to.
Sure. And I'm always curious because, I mean, Carleton professors, um, pride themselves on being accessible.

(15:55):
Yeah. Flexible. Uh, and so I'm sure any faculty member who put that on their syllabus really sincerely meant it, you know.
No. Oh, yeah. Open-minded. I really want my students to come to me. From a certain point of view, though,
is there a worry that, unintentionally or not, it's maybe kind of a trap?
Uh, um, you know, subconsciously, I would suspect, yes.

(16:18):
For a lot of people. I completely agree with that sentiment; like, I believe the majority of my professors are open-minded people who, sure,
like, if I make an argument, would, you know, accept that and think about that, or at least hear it out.
Yeah. At least. Yeah. Um, I think it would also depend on the ask.

(16:41):
Okay. Yeah. So, you know, that's how you're thinking about the conversations with your professors.
What about your fellow classmates? How how did the conversation among your peers go in high school?
And how is it changed now that you're here in college?
Yeah, I mean, in high school, really in the first, and I'm going to say this a million times, in the first few weeks of hearing about it,

(17:05):
the only thing I really heard from peers with respect to usage was like through the grapevine,
hearing that, like a friend of a friend used it for like, college applications, that was basically it.
And that was like, oh, this is such a big deal. Like what they could call it, I don't know.
Yeah, we hear that a lot, the friend-of-a-friend report.
And I'm never sure about the fact that it's always not me, not someone I even talk to.

(17:29):
But yeah, you know, if it's always thirdhand, whether that's a sign that a whole lot of people are using it and it's just sort of being coded as, well,
you know, I hear it; or if it's just the opposite, if in fact there actually is no friend of a friend.
Yeah. Interesting. That's really interesting, especially with respect to, you know,
how I kind of see people discuss it now, because it's a lot more framed as, like, self-usage. And I've.

(17:56):
Yeah. Um, and I've found it's definitely very different, um, based on the individual, because, um, you know,
there's sort of a contrast I've noted in, like, fear of speaking about it,
which was more present in the beginning, and then more of, like, a cavalier attitude.
You know, I have some friends who will just openly say all the time, like, oh,

(18:19):
I'm going to use this for this assignment, or I'm going to use it for this email, etc., etc.
Um, and then also I had an interesting conversation with a friend the other day, uh, where we sort of settled on a maxim, uh,
with regards to ethics, um, you know, in dishonesty or honesty, um,

(18:40):
even apart from, like, you know, syllabus use. Um, we sort of settled on the maxim
of, you know, would you ask a friend to do it for you? So if you think about that in the context of,
you know, different circumstances, like, oh, um, I was up really late last night.
I didn't have time to do this reading. Would I ask a friend for their notes on the reading so I can be more prepared?

(19:03):
Yes. Therefore, that seems like a fair usage.
Sure. Uh, would you ask a friend to write your whole essay?
No, of course you wouldn't do it. Um, so that's that's kind of where we were, so.
Okay, I like that. Yeah. So, Greg, what about you?
Um, as you just said, you've been aware of AI and machine learning technology a lot longer than most people.

(19:28):
Uh, so from your perspective in Silicon Valley, what are your views on the benefits and the potential risks that AI might be bringing?
Yeah, maybe I'll just start with, uh, bridging from the academy here.
Um, you know, I think, you know, institutions, uh, like Carleton are here,
obviously, to teach, to help students learn how to think, and the whole how-to-think thing,

(19:50):
I think that remains central. And I'm actually fortunate.
I'm feeling fortunate that my daughter has been through, you know, uh,
kind of the development of her own reasoning capabilities through high school and into college,
kind of prior to the ability to rely on a bot as a crutch.

(20:10):
Yeah. We're wondering who, like, the last generation or the last students will be. Yeah.
Yeah. I mean, we'll come back to, you know, risks overall in a bit.
But, you know, my biggest concerns are about cognition, cognitive development, and societal stresses, really.
Um, but, you know, with the premises in terms of, um, use here in the academy: the first, I would say, is, uh, AI is here to stay.

(20:36):
There's no mechanism to shut it down at this point. So it's a reality.
You know, even if the US-China rivalry were not an impediment to global governance, you know, you could slow it down.
But even then, there's open source and other patterns. So, uh, second is, it's getting better all the time.
It's a one-way ratchet. Mhm. So, um, I think OpenAI has said many times, you know, the bot you just interacted with is the worst you'll ever use.

(21:00):
Um, and the third is, and this is actually the key point, uh, you know,
the academy is here to teach students how to think, but also, I think, to prepare students to be productive members of society.
And that's kind of the setup point to my last one, which is: someone who wants to do the same thing that you want to do after you

(21:21):
graduate is going to be using AI to accelerate or otherwise augment their work.
So, I think if you put all those elements together, it's incumbent on the academy to encourage trial,
uh, and responsible use of AI as a tool.
So, in my view, I would separate kind of reasoning from maybe research.

(21:44):
I think reasoning, particularly in your core areas, you know, you have to really train your brain.
Um, you know, outside that, research, you know, you don't need to spend hours Googling something.
You can use a deep research tool now. Yeah. So that would be my kind of recommendation.
Are you involved with AI in your work as a venture capital investor?

(22:06):
Oh, I am indeed. Um, as I mentioned at the outset, I've got a few dozen companies in the portfolio.
Um, none are in the, uh, you know, chatbot space.
Um, but there's some interesting ones. One is doing, uh, autonomous trucking software, for example.
Oh, yeah. And, um, it's it's fascinating.

(22:26):
Uh, one of my other favorite sayings is: the future is already here,
it's just unevenly distributed.
And, uh, you know, it's William Gibson, the science fiction writer, I think, said that about 30 years ago.
And, uh, if you come to San Francisco, we're based in Silicon Valley area, you know, go to San Francisco, get a ride in a Waymo.

(22:47):
Sure. And it's mind-blowing. They're everywhere.
Um, and, uh, somebody recently did a deep dive into their safety, and they're about ten times safer
than human drivers. Uh, so it's got a lot of potential.
Um. We're encouraged. Uh, you know, I'm kind of a humanist, but also an optimist.
You kind of have to be to be in venture capital. Um, otherwise you freeze up.

(23:11):
Um, and, uh, so I'm optimistic overall that there's a lot of value that can be created by AI.
I think, you know, there's a question of how you distribute that value.
And there's those cognitive and social risks we keep coming back to, which we can get into if we have time.
Sure. Yeah. And I'm glad you brought that up, because when we talk about AI, I mean both on this podcast,

(23:34):
but also I think in academia in general, it's almost always a shorthand for generative AI,
Uh, you know, chat bots and related apps.
Um, and it almost always tends to focus on the concern you've just identified,
which is learning, and how outsourcing intellectual labor as a student or an academic to AI might affect us.

(23:58):
We don't talk much about other applications of AI, even though there are obviously many, many of them.
And I don't think we think nearly enough about what the effects are as, not your labor personally, for most people,
but the labor around you that you already outsource, becomes AI-managed, you know?
So when paying somebody to drive you someplace isn't a human interaction anymore, but it's an interaction with an AI.

(24:25):
Uh. That's interesting. Yeah. Well, Greg, you've shared, um, some of your perspectives on the academy,
and we know that you're involved in the Parents Advisory Council and the Career Center Advisory Board.
And Lauren, I wondered if we could hear a little of your response to what your dad has said about

(24:45):
the work of thinking, and what it's like to have a parent who is so involved in Carleton,
um, and the life of the college. Yeah, well, for the latter, I think that's great.
I just get to see him a couple extra times. This is great. Yes, yes.
Um, yeah, I, I was listening to what you were saying about research versus cognition, which I thought was very interesting.

(25:09):
Um, and I would agree with your sentiment on that, you know, because how I'm thinking about things is, like, it's a question of, um, you know, what
you're gaining versus losing with respect to your values.
Um, you know, because there are potential gains, which I think especially could be applied to the research, um,

(25:30):
perspective or component, um, such as, like, efficiency of tasks leading to more time for meaningful nonacademic pursuits.
You know, if it takes me less time to research things, then I could, you know, spend time connecting with other people.
You know, I, um, I always mention my, like, three-pillar recipe, um, and I just came up with that name.

(25:54):
Um, but it sounds like a cake. Uh, yeah, a little bit, uh, but it's a cake for, like, a fulfilling life, essentially.
There we go.
Um, because, um, I've identified that, you know, as I go about my days, um, I like there to be, you know, intellectual and creative pursuits.

(26:17):
But I also like there to be social and human connection, strengthening pursuits, and also personal and reflective pursuits.
You know, and Carleton is a fairly, um, rigorous school.
I think it's safe to say, I've heard. Um, you know, so, um, if it's a question of choosing to, you know, do research manually via Google,

(26:44):
Sure. Um, or via Google Scholar, versus utilizing deep research, which could be equally effective.
Um, but could free up some time for, you know, connecting with, um, classmates, friends.
Yeah. You know, I think that's a fair trade off. And I think, um, an interesting parallel even, is like the internet.

(27:06):
Um, you know, um, I don't know if these kinds of conversations about research were happening then, because... 100%. Yeah.
Well, yeah, that was my instinct. Yeah. Um, you know, but, like, you could make the exact same comparison.
If it's a question of going to the library and leafing through books for a really long time versus using the internet to find sources.
We all had so many copy cards and quarters in our pockets to make copies.

(27:31):
I mean, I only tried to do research with a physical book, like, once at Carleton, and I failed.
It took forever and I still failed. Um, you know, so that's quite interesting.
But then there's also, you know, potential losses, such as the practicing of cognitive skills, um,
which I think raises a really interesting question: is practicing certain cognitive skills intrinsically valuable?

(27:58):
You know, my intuition is to say yes.
Um, as I find the mind's capabilities quite fascinating.
But it's a really interesting question.
With the prospect of a future world in which the use of AI tools are more integrated into many facets of society.
You know, like you were talking, uh, about, um, you know, preparedness for the future world. And I.

(28:22):
I think it's possible that incorporating such tools into one's life, um, you know, may not diminish preparedness.
Um, or in some cases could even increase it, especially with respect to the research,
you know, because, um, you know, the use of a calculator is the example everybody thinks about,

(28:44):
like if you're going to have access to, you know, a pocket calculator your whole life, then does it really matter if you know the times table?
So that's what I think gets back to this is-it-intrinsically-valuable question as well.
Right. Well, and I think it also raises, to me, the essence of that question:
are you offloading intellectual work, or are you offloading an intellectual skill that you're not going to practice anywhere else?

(29:10):
Yeah. So, for example, um, I think everyone agrees it's good to have a solid functioning memory.
Um. Oh, okay. We've, oh, okay. You clearly have a lot to say on that.
But, um, so now that we all have smartphones, it's a common joke that no one remembers anyone's phone number anymore.
I don't know my children's phone numbers. Uh oh. My phone is my outboard brain.

(29:34):
Yeah, exactly. Yeah. But you know, what I would say is there's a trade off.
We all know about 100 specific routines for how to get to a specific function on a specific app on our phones.
Uh, you know, so we don't remember those numerical sequences anymore.
We don't have that as an exercise to bolster our memory, but we have other things.

(29:55):
Um, so, you know, to me, the kind of question is, well, okay, if I take some of the intellectual work away from research, that's fine.
You know, what are the cognitive skills that you're not practicing that way?
And where else can we build them in, in a way that might be more productive? Um, just to build on that,

(30:17):
I think one of the key issues we may face going forward as individuals and at the societal level is kind of this skill versus will thing.
So on the skill side, I actually believe in the importance of these, not just for, you know,
kind of intrinsic value, but also for social-emotional health, and so on and so forth.
We want to feel somewhat capable and productive. Yeah.

(30:39):
Um, and I actually think we have the potential with AI to actually improve those skills.
You guys have probably heard of the two-sigma effect, um, where, you know, a personal tutor can move a student's performance up by two sigma.
Um, everybody can have that in the future. Everybody can have a personal tutor.
Everybody can have a personal doctor, by the way, as well. Sure.
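For intuition about the size of that claim, here is a minimal sketch of what a two-sigma shift means if scores are roughly normally distributed (a simplifying assumption; the effect size is the one cited in the conversation):

```python
# What moving up two standard deviations means on a normal curve
# (assumes roughly normally distributed scores, a simplification).
from statistics import NormalDist

scores = NormalDist(mu=0, sigma=1)    # standardized score distribution
new_percentile = scores.cdf(2.0)      # a student shifted up by two sigma

print(f"from the 50th percentile to ~{new_percentile:.1%}")  # ~97.7%
```

In other words, an average student with one-on-one tutoring ends up outperforming roughly 98% of the untutored group, which is why a universal AI tutor gets cited as such a big potential win.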

(31:02):
Um, which will be interesting. And I think a huge boon.
Um, what I worry about is will. Like, will we, if we have the opportunity to do the hard work of training our brains with our personal tutor,
but we also have the opportunity to just let the bot do the work, you know?

(31:23):
Will we do the hard work? You know, how many people will lean into the opportunity?
It's kind of a multiplier in some respects, in my thought: uh, you can improve your cognition,
you can accelerate it, you can augment it using AI, or you can substitute.
Yeah. And the latter would be unfortunate. Yeah.

(31:45):
I think this is a lot of the message that we're thinking about here at Carleton:
helping students understand that it's their brain that we want to change,
not the number of essays that are out there in the world.
I know that's a cliché now, and one we've said before, um, but trying to make that brain exercise feel worthwhile,

(32:09):
being very transparent about why it's important, and recognizing that the motivation for different students may not be the same as what mine is.
It's a new frontier. Yeah.
Well, that raises a question, you know, to circle back to your point of view from Silicon Valley,
Greg. Um, the way

(32:31):
those of us outside, you know, based on just what we see in the media, tend to perceive Silicon Valley.
It's a very scattered picture, for starters. It just tends to be, you know, quotes from big personalities like Elon Musk.
Um, and the perception or the assumption is that it's all about forward at all costs with AI.

(32:51):
Uh, you know, it seems to me we hear relatively little assessment or speculation on the risks of AI, you know?
And yet here you are articulating exactly the same set of concerns that we tend to have here.
Um, so can you just say a little bit more about how the tech community thinks about AI, and how these risks sort of factor in?

(33:11):
I think most folks in Silicon Valley are just like us; you know, there's optimism and there's concern.
You know, I think the people that you hear the most about are people at the extremes, you know, the zoomers, you know, who say, you know, hey,
this is the next evolution, essentially, or the doomers who say, well,

(33:32):
maybe that's the case, but that next evolution might be really a problem for humanity.
You know, most people are in the middle, uh, they've got that balanced view.
I mean, just like in politics, they may not be, you know, hitting the press every day.
Um, so I think something that people generally do agree upon pretty broadly is that AI can create a lot of economic value.

(33:55):
It can kind of bake a large pie, if you will.
We can come back to some examples that are behind the scenes, that are not chatbot-oriented, but which are pretty cool.
Um, I think, uh, probably most people would agree there needs to be,
you know, some fair distribution of the benefits that come from that.
Um, and then beyond that, there are many ethical issues that people are trying to sort out.

(34:19):
Uh, all the same ones that you see every day, you know, intellectual property rights, um,
even, you know, environmental considerations, you know, how much energy it's consuming.
As I mentioned earlier, I tend to be an optimist. You know,
I think there's a cognitive bias actually in play here, which is, uh, with something new,

(34:42):
we look at the scary stuff. You see this in social media all the time, but there's a lot of good stuff going on as well.
So, for example, you know, you hear about how much energy data centers are using,
but somebody is using AI in materials science to improve a superconductor
that's going to go into a magnet that's going to generate, you know, basically fusion energy, let's say at MIT.

(35:07):
I mean, this is literally the kind of stuff going on right now behind the scenes.
So there might be a near-term, you know, energy hit.
But I'm actually pretty optimistic in a lot of these dimensions.
I'm wondering, since we have you both here, if either of you have questions for each other?
Yeah, I was wondering, Dad, how do you use it in your work?

(35:28):
Um, is it mainly deep research? Do you employ other uses?
Yeah, I, uh, deep research is my favorite. Um, it just saves a lot of time.
And, you know, you can again redeploy that time, you know, for real cognitive work, for time with people.
Um, so really, really nice. I would say in certain specialized areas, I'll rely upon it for expertise,

(35:52):
which is kind of like, you know, applied knowledge with maybe a reasoning overlay.
And just to give you an example, you know, I just put in place a, uh,
an updated fund opportunity allocation policy, you know, for when a new opportunity for investment comes in:
where do I put it? Um, a legal document, a couple pages, uh, you know, did some work there with the prompt.

(36:16):
Uh, I got a great output. Sent it to our lawyer, and he said, yeah, no edits.
Great. Looks great. Yeah. So that's a nice accelerator. Um, and yeah.
So, you know, pretty much like most people do. Uh, I'd say maybe the last category: I'm a big fan of,
uh, a professor at the University of Pennsylvania in this regard named Ethan Mollick.

(36:37):
Yeah, he's a favorite. Yeah, he's a favorite of yours. Okay. Terrific. Yeah.
I mean, he says, use it for that one annoying thing. Yeah. You know.
Yeah. I just love inserting one annoying thing in there. Hey, you know, I've got this problem in my backyard, you know,
with these plants or whatever the case, you know, just run it through and see what happens.
And oftentimes you're pleasantly surprised. Great.

(36:58):
Do you have questions for Lauren? Well, we chat so much about this, you know, uh, I have a pretty good feel for how she uses AI.
Um, in fact, you know, we think very similarly.
We're always kind of pattern seekers here.
And, um, the more meta, the better. The higher the level of, you know, the pattern, the happier we are.

(37:22):
Uh, so we have a lot of conversations like this all the time. I think I have a pretty good feel for how she's using it.
And it's in a very responsible way. Yeah. Um, at least in my view.
And, yeah, I think we've seen that with many of our Carleton students, who are just really being very thoughtful and responsible about it.
And I'm happy for that. Well, as you mentioned, one of the promises of AI is to free up time to spend with people or,

(37:50):
um, do more cognitive work or however you would like to spend your time.
Is there something that you would or would not want to offload in your daily life to AI if you could?
Yeah. Um, one thing I would definitely like to offload, which some people are already doing and I have not gotten on this train, um, is email writing.

(38:11):
Oh yeah, I see it as such a chore. I don't know what it is, if it's the formality as opposed to texting or something.
It's, like, a small task that requires a significant amount of cognition,
but it isn't very interesting. I don't like doing that at all.
Um, one thing I would not want to offload is the ideation.

(38:34):
And, like, the rough-draft crafting process for what I call potential brain-baby papers,
um, which is a paper that, like, gets me really excited.
And it's often under the circumstances when it's a class I'm very excited about.
And also it's an open ended prompt.
So not all papers do this, but I really like sitting down with the intention of ideating for a, uh, potential paper of sorts.

(39:01):
And, you know, I'm just talking about the ideas for a little while.
Um, and then a paper direction comes, which then turns into a sort of rough outline and thesis.
And I love it so much.
And you'll notice I referred to the process in sort of a passive way, which isn't to say that it's not like an actively generative process throughout,

(39:22):
um, originating from me. But also, I find it sort of magical, because thoughts that I'm very excited about just sort of enter my head.
Isn't that fun? Yeah, in my field, in science,
we talk about the papers that write themselves and the papers that you have to just crank through, and that's such a joy.
Yeah. And things fit together and your ideas start flowing. Yeah. And it's so energizing.

(39:43):
Yeah. So I would not want to offload that particular wordsmithing, if that's not as fun. As the writing person in the room,
I'm obligated to point out that it's the papers that you have to grind and crank out
that give you the mental framework you need for the papers that write themselves.
Hey, I'm with George on this one.

(40:06):
Yeah. All right, all right. What about you, Greg? What? Would you offload or not offload?
Yeah, uh, I'll give, uh, one of each, I think. On the offload,
um, I'm going to go with an embodied AI anti-entropy robot.
Uh, yes. Embodied AI. I want to get a smart robot.
You're seeing the, you know, talk about robots showing up.

(40:28):
Um, it's gonna take a while for that to happen. So, you know, that too-early-is-indistinguishable-from-being-wrong
thing, I think, is relevant. But I'd love one to go out there and clean the fence and the solar panels and the
gutters on the house so I can have more time to play hoops or ping-pong with Lauren.
Um, uh, we're not there yet, but maybe in a decade.

(40:51):
Um, so that's one thing I'd like to offload. Um, on the not-offload,
I think it's the point of view.
Um, so, uh, you know, I sometimes kind of view our business as a point-of-view business, really, at the end of the day.
You're dealing with a wide range of uncertainty, and you have to come to a point of view that this is a yes and this is a no,

(41:16):
and, um, that's hard, but it's also the most fun part of the job.
So I like the deep research part to kind of feed the beast, um, but not to offload the conclusion.
You know, the takeaway. Yeah.
That almost comes back to that question of what it means for an AI to express values that I talked about at the beginning.

(41:40):
There's a sort of sense, as, you know, I said, there seems to be a certain smugness in that report, and maybe I'm projecting,
but that says, you know, our AI simulates the responses of a good person.
Um, but at the same time, if what you're trying to produce in collaboration with

(42:00):
the AI is a reflection of your values,
having a kind of randomized, mechanized set of values in the mix almost makes it worse.
Yeah, well, I'm a little bit encouraged by that.
I mean, I haven't read that report, but I did see the headline.
And I think Anthropic is trying to do it the right way.

(42:21):
You know, if we could kind of Anthropic-ize, you know, global AI, I think that would be a win for humanity.
I'm inclined to agree that, of all the big players in the chatbot space, I think Anthropic's heart is most in the right place.
Uh, yeah. Yeah. Well, as we start to wrap up, we always ask for recommendations.

(42:46):
What do you recommend this week, George? All right. So I'm going to punt a bit this week.
And I have to admit that, um, I got behind and didn't really come up with anything until right before we started taping.
So my punt is I'm going to recommend, uh, the TV series Black Mirror, which is on Netflix.
Uh, and I consider that a bit of a punt because it's a hit series that most people who would watch it are aware of.

(43:07):
And yeah, it's literally seven seasons and a movie. I haven't watched it yet.
Um, well, then I would recommend it to you in person. Thank you.
Uh, yeah. So if you've never heard of it, it's a Twilight Zone-esque kind of anthology show,
but every episode focuses on some aspect of technology and the possible implications of technology.
Its signature is that it's mostly, as the name would imply, extremely dark.

(43:31):
I'd say it's about two-thirds really dark episodes that you get to the end of and think: that was great,
I never want to see that again. Uh, and about one-third lighter episodes that are kind of palate cleansers.
But, um, it's a really fascinating show, and really good at kind of thinking through, in
a one-hour sci-fi television format, what are really, at heart, interesting wicked problems related to technology.

(43:57):
So that's my recommendation. How about you, Jennifer? I'll put it on the list. I'm going to recommend going for a walk in nature.
It is spring, and, um, a lot of spring plants and small animals, you know, like bugs, um, but also birds, are waking up.
And in Minnesota, spring wildflower season is amazing, and I'm sure wherever everyone is, you have aspects that are exciting. Here,

(44:24):
if you look really carefully, the trout lilies are starting to come out.
They're not in bloom yet, but they're growing. Some of them are actually protected species.
And so that's kind of a rare opportunity.
Um, and if you don't want to be totally unplugged while you walk, there are some really great, um, plant and nature ID apps that make use of AI,

(44:46):
such as iNaturalist and PlantNet, so you can learn a bit more about what you're seeing and maybe share with your nature-curious friends.
So get outside and, uh, enjoy what's going on out there.
Lauren, what do you recommend? Yeah. You know, I was thinking about this yesterday,
and I wasn't sure where I wanted my recommendation to fall on the multi-dimensional spectrum of intellectual to whimsical.

(45:11):
So I'll recommend a few things. We recommend all of the above on this show.
Yeah, I figured I'd be briefer and do more. Uh, so the first is the book Existential Physics by Sabine Hossenfelder.
It's a popular science book that I found really interesting.
I'll also recommend zine making as an art form, which I have dabbled in a little bit recently.

(45:32):
And the beauty of it is that you can be as chaotic as you want in the process.
Oh yeah. And now, can you just define specifically what you mean by zine?
Uh, the, um, the ins and outs are a little bit hard to define there.
It's derived from magazine, but they're sort of like a DIY, shorter-form,

(45:53):
Um, scrappy art form.
You know, it can be handwritten. Um, yeah.
So the idea is it's more personal and idiosyncratic than. Yes, than what you'd think of as a magazine or.
Yes. And the main thing is they're not, like, mass-published.
They're like self-published. Yeah, sure. Um, and then my last recommendation will be whimsical socks.

(46:17):
Just because I think it's a good one. I love that. Excellent. I knitted a pair of whimsical
knee-high socks that I cannot wait until next winter to wear. They're orange and blue and all kinds of different colors, so it should be fun.
Greg, what do you recommend? Yeah, I'm going to recommend, uh, maybe, I don't know, I haven't listened to all the podcasts yet.
Maybe somebody has recommended this author already, but Steven Pinker.

(46:39):
Oh, yeah. Um, just, you know.
You know, we talked before about how, you know, maybe our cognitive bias is to, you know, think about all the risks, the challenges, the fear.
You know, he points out that, kind of behind the scenes, you know, humanity is actually doing pretty well.
Just in my lifetime, the number of people lifted out of extreme poverty is unbelievable.

(47:04):
Um, and so if you're looking for a palate cleanser for all of Black Mirror, there we go.
I got one: one Steven Pinker book, and, uh, you'll cure that.
And maybe your existential angst around AI as well. Who knows?
Good. Well, thank you both for being here. Thanks a lot for sharing a little bit of your time together with us today.

(47:25):
We've had fun. It's been a treat. The Year of Curiosity podcast is recorded on the campus of Carleton College.
Your hosts are Jennifer Ross Wolfe and me, George Cusack.
Our producer is Dan Hurlbert, who records and edits each episode along with his team of hard working students.
Our show notes are compiled and edited by Wiebke and are available on our website,

(47:48):
carleton.edu/ai. Mary Jo maintains our Podbean account, which gets our episodes out to whatever platform you're currently listening on.
Our theme music was composed by Nathan Wolfe, Carleton class of '27, and our mascot, Maisie, was generated by Jennifer Ross Wolfe using Adobe Firefly.