
December 18, 2024 · 38 mins

In this episode of NEJM AI Grand Rounds, hosts Raj Manrai and Andy Beam interview Larry Summers about artificial intelligence’s transformative potential and its implications for society. The conversation explores Summers’ perspective on AI as potentially the most significant technology ever invented, his role on OpenAI’s board following the November 2023 leadership transition, and his thoughts on how AI will reshape economics and human society. The episode provides unique insights into AI’s development trajectory, the challenges of technological prediction, and the intersection of economics and artificial intelligence.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
I think it's possible that this is going to be the most important technology that ever got invented.
I think that it's an even money chance that when historians write about the second fifth of the 21st century, Russia and Ukraine and Donald Trump and

(00:29):
Xi Jinping will be secondary stories, and that the real story will be the dramatic discontinuity that was associated with human-like intelligence being achieved by non-humans.

(00:53):
I think that, in a nearer-term sense, it may not be unreasonable to think that this is to the Internet as the computer was to the calculator.
The calculator was a very important thing.
The computer was a much, much more fundamental thing.

(01:18):
Welcome to NEJM AI Grand Rounds.
I'm Raj Manrai and I'm here with my co-host Andy Beam.
Today we are delighted to bring you our conversation with Larry Summers.
Andy, I think it's fair to say that Larry Summers truly does not need an introduction.
He's been so influential.
He's been the Secretary of the Treasury of the United States, the President of Harvard University, Director of the National Economic

(01:40):
Council, and so many other things.
And as we dig into the episode itself, he's been a member of the Board of Directors of OpenAI for just over a year.
I've listened to him a lot, but I think this conversation still surprised me.
Maybe it was his enthusiasm, or his predictions for what would be remembered by historians of this period of the 21st century; the way he described how

(02:05):
these models are continuing to evolve, and also our uncertainty about where we're going, really was quite powerful.
Yeah.
I'm going to break the fourth wall a little bit here, Raj.
And how cool is it that we got to talk to Larry Summers about AI?
Like how amazing was that?
Again, I agree with you.
I'm a big fan of his.
I've listened to all the podcasts he's been on and still, there were several things that I learned in this conversation.

(02:26):
We got a little glimpse into what happened with some of the OpenAI drama, with the board turnover and with the leadership turnover.
And so that was news to me.
You know, I think you did a great job at asking him about his appearance in the movie, The Social Network.
And so that was like super great to hear. So much fun. So much fun.
So many gems like that in this conversation that I'm excited

(02:46):
for the listeners to hear.
And again, just like what a treat to get to talk to Larry Summers
about such a wide range of topics.
Really, really, really fun conversation.
Totally agreed.
The NEJM AI Grand Rounds podcast is brought to you by Microsoft, Viz.ai,
Lyric, and Elevance Health.

(03:07):
We thank them for their support.
And with that, we bring you our conversation with Larry Summers on AI Grand Rounds.
Larry Summers, thank you for joining us on AI Grand Rounds today.
We're super excited to talk to you.
Larry, welcome and thank you for being here.
This is a podcast about AI, so please forgive our nerdy question framing, but

(03:31):
we always start with the same question.
Could you please tell us about the training procedure for Larry Summers' neural network?
What data and experiences led you to where you are today?
And please take us back as early as you can.
You know, we say that we like to start with the initial conditions.
I'm the child of economists.
My parents were economics professors.

(03:52):
I have uncles who were prominent economists.
So, I found myself writing in a graduate school essay that while some children were taught to believe in God, I came to believe in the power of systems analysis.
So, I'm a person who's always believed very much in data, empirical analysis,

(04:20):
logic, argument as the way to get closer to truth, and to believing that you're likely to do much better in contexts where you have more understanding than where you have less.
And that's why I've devoted myself to a career as an economist and tried

(04:46):
to focus on studying economics for its own sake, but very much also as a tool for leading to human betterment in spheres ranging from the better allocation of resources in health care

(05:07):
to the avoidance of financial catastrophes that lead to millions of people becoming unemployed.
And so my training has been from a kind of analytical social science perspective.

(05:28):
So, trading the invisible hand of God for the invisible hand of Adam Smith.
Is that fair?
That would be a way of putting it.
You know, I've always, as a sort of ideological matter, thought of the task as being to find a well-handled middle ground where you recognize that

(05:53):
the invisible hand can't do everything, but that there are enormous dangers to heavy hands, and to try to think of ways in which governments and policy makers can provide helping hands.
I think that's a great point to transition.
So, you know, we want to dig into some of your work and your recent roles, particularly around artificial intelligence.

(06:15):
And of course, we want to start with large language models and their oversight.
So, large language models like ChatGPT. We had Peter Lee from Microsoft on the podcast last year.
And he gave us a preview of GPT-4 before it came out.
And I can still remember, to this day, losing sleep after talking to him, thinking about what this means.
You know, we've seen the evolution of models at OpenAI, but also I

(06:38):
think this very competitive ecosystem of other proprietary and open-source models has emerged since then.
And we like to talk about the scale hypothesis.
And what I mean by that is this idea that with increasing compute and data, we are going to keep seeing improvements to the fundamental capabilities of these models.
So, I think many of us agree that we're on some type of, you know,

(07:02):
S-shaped curve, but where exactly on the sigmoid we are is debated.
And maybe I can ask you how you think about the continued growth of these models, and whether you have some intuition about whether we will keep seeing major improvements to capabilities with increased scale, as we've seen for the past few years.
I think closely related to that are your perspectives on what the

(07:25):
key bottlenecks are for continued exploration of larger and larger scale.
You know, energy, data, compute, they all come up.
But how do you think about these problems and the sort of fundamental increase in capabilities at scale of these models?
Well, let me first say that there are two groups of people. There are the people who know that they don't know, and the people who don't know that they don't

(07:49):
know, and I'm in the first group.
I know that I don't know for sure.
I think there's a general lesson that's helpful for us all to keep in mind with respect to technologies: that things take longer to happen than you think they will.
And then they happen faster than you thought they could.

(08:09):
And there are all kinds of examples of that.
I first met Jeff Bezos when he came to a CEO lunch at the Treasury in 1998.
And I remember then-Secretary Rubin and I saying to each other that the owners of malls had better watch out.

(08:29):
And we were right.
That was a legitimate insight, but that wouldn't have been a financially important, legitimate insight for another 15 or 20 years after we had it.
I remember hearing, now a decade ago, about these automated vehicles that had

(08:50):
driven up Route 101 from San Jose to San Francisco with no driver, and I would've been surprised at that point if you had told me that no truck drivers would have lost their jobs by the beginning of 2025.
On the other hand, I'm sure there will ultimately be pervasive impacts.

(09:14):
It's been a long time since I read a paper newspaper, and that's another example of a change.
But I remember, not that many years ago, whenever I wrote an op-ed, regarding it as an important part of the negotiation whether my op-ed was going to appear in the printed paper or only online.
(09:38):
So, it's very hard to know what the exact timing is going to be.
I would be very surprised if, in the fullness of what these kinds of models are going to accomplish, we were past the fourth inning.

(10:00):
And I say that because it's relatively early days in terms of the amount of time that there has been since there were really serious models going.
I say that because the capacity of these models to self-improve is, I

(10:24):
think, a qualitative difference from previous general-purpose technologies.
Electricity was great, but electricity didn't self-generate more electricity.
Fire didn't generate more controlled fire.
But AI is already going to, within a year or two, be likely to take

(10:49):
on the tasks that many of those doing the software for AI are now doing.
And so that self-improving aspect represents a very big change.
Could I just maybe dig into that a little bit?
Because, like, what you hear people talk about, especially, you know, like in the

(11:11):
Bay Area, is that this is not a technology.
It's the technology.
Sam Altman has said things like: this will capture the light cone of all future value.
And so, like, in what sense should we take that literally but not seriously, or seriously but not literally? Do we think that this kind of technology is qualitatively different, or is it yet another kind of technology that will

(11:32):
increase productivity, enhance human wellbeing, and things like that?
I think it's possible that this is going to be the most important technology that ever got invented. I think that it's an even money chance that when

(11:56):
historians write about the second fifth of the 21st century, Russia and Ukraine and Donald Trump and Xi Jinping will be secondary stories.
And that the real story will be the dramatic discontinuity that was

(12:20):
associated with human-like intelligence being achieved by non-humans.
I think that, in a nearer-term sense, it may not be unreasonable to think that this is to the Internet

(12:40):
as the computer was to the calculator.
The calculator was a very important thing.
The computer was a much, much more fundamental thing.
So, it seems to me there are lots of prospects for thinking that this is going to be very, very important.

(13:03):
There are, I think, legitimate questions about what the constraints are going to be, and we don't really know.
I think it is clear that there are substantial gains to be had simply from scaling, without any innovation in design, and without

(13:32):
anything more than mobilization of data that has not yet been mobilized.
Relative to what people thought a year ago, I would say that a reasonable best guess, as I understand it, is that diminishing returns are likely to

(13:54):
come somewhat faster from scaling, but that, as for modifications of the technology to make possible longer chains of reasoning and more human-like reasoning processes, any positive surprise there, I think,

(14:21):
has been far larger than any negative surprise in the efficacy of scaling.
And I think there has also been substantial progress and substantial surprise in what one might think of as compression or distillation of these models, in

(14:45):
which it's possible to do more with less, building on a very large model, than one might previously have supposed.
I think it's also important to understand that the history of profound technologies

(15:10):
is that the initial focus is always on how they can perform previously defined tasks better than they've been performed before.
But ultimately, the real impact comes from the definition of new tasks that advance progress.

(15:32):
The first moving picture cameras were put at the back of theaters to record plays.
And then people realized there were much better things to do with moving picture cameras.
It was originally envisioned that there'd be a market for only five mainframe computers.

(15:54):
It was originally envisioned that there would be a market for fewer than a million phones that people carried around with them.
And so, I think that we surely have not exhausted what is going to be possible with software that is able to act as people's agents.

(16:23):
Can I follow up on something there?
I actually am surprised that in your estimation essentially everything will be footnotes to AI.
I think Raj and I tend to agree with that.
But I wanted to follow up, like play that forward, and get you to help us think about the consequences of that.
So, if we have human-level intelligence, one immediate implication, and you can definitely correct me if this is wrong, is that the marginal cost of

(16:45):
everything essentially goes to zero.
And I've heard folks like Demis Hassabis talking about the idea of radical abundance in the age of AI: that because we have these superintelligent machines that can do so many things, we will all kind of live in this utopia.
But, like, one thing that I go to is that radical abundance in and of itself is not a purely good thing.
Like, we have radical abundance of calories, and that has been a public

(17:06):
health disaster by some measure.
So is there something about the radical abundance idea of AI, outside of the safety concerns, that we should be thinking about proactively? That having a radical abundance of intelligence may actually have downsides that we aren't really appreciating yet?
Sure, look, I think you can overdo this idea of radical abundance.

(17:26):
There's still only so much beachfront property.
There's still only so much copper in the ground.
There's still only so much of a

(17:46):
some inherent scarcity to them.
Some substantial part of what people value is goods that derive some of their value from their exclusivity.
Not everyone can go to what's regarded as the top school.
Not everyone can eat at what is regarded as the coolest restaurant.

(18:12):
And so the human instinct to compare and the human instinct to want things whose value is determined importantly in relative ways both mean that this idea that we're all gonna abound in ecstasy with every need met and

(18:36):
nothing to strive for does not strike me as being a plausible rendering of a place that the world is likely to get to.
I don't think the right paradigm for thinking about

(19:02):
obesity actually has anything much to do with the universal abundance of calories.
I think if you look at the upper 75% of the American population, there was no substantial difficulty in affording an adequate caloric intake 50 years ago,

(19:29):
and yet levels of obesity have risen. So, I don't think the right way to think about that phenomenon has anything much to do with the abundance of calories. I think it has to do with changes in lifestyle.
It has to do with marketing practices and the design of products that are in various

(19:50):
ways addicting. And those raise all kinds of issues of consumer protection and paternalism and how society should be responding to all of that.
But I think thinking of that as a problem of abundance is not actually a

(20:14):
helpful way of thinking about obesity.
So, I tend to be a person who believes that there should be a strong presumption
in favor of things that give people things that they want to have, unless

(20:34):
there is a compelling kind of downside.
I think there is potentially, my guess is that it is a long way down the road, but I think there are important issues raised by questions of human satisfaction and what the role of work

(21:03):
is in thinking about human wellbeing.
And on the one hand, I very much think that we should probably be thinking about what it is people are going to do with all that time that is available.

(21:26):
You know, I have never seen a systematic study of the experience of those who inherit great wealth, and therefore don't need to work to support themselves, and indeed can't influence their command over the ability to purchase things very much

(21:48):
with any work that they're able to do.
And I'm not sure that those lives are on average more satisfying than the lives of those who are less apparently fortunate in their inheritance.
So, I think the question of purpose amidst abundance is a potentially large question.

(22:14):
I have a bit more reservation about the UBI concepts that some in the AGI community are enthusiastic about, for those kinds of reasons.
I think that's great.
I want to switch gears just a little bit, from the capabilities

(22:34):
and the implications of the models to their oversight.
So, I want to zoom in on November 2023.
This is just about a year ago.
I think I got an alert on my phone that Sam Altman had been removed as CEO of OpenAI.
So of course, over the weekend, I was glued to my phone.
We are using these models every day as researchers.
We're using them in clinical studies.

(22:55):
There are pilots; there are big studies that are underway.
And of course, patients and doctors are using these models every day, I think at a scale that is still vastly underappreciated.
So many of us have come to rely on the technology, and these models have become central in our lives already.
And so, particularly in a sensitive domain like medicine and health care, this really got me thinking. We know that Sam Altman

(23:18):
came back pretty soon to OpenAI and you were appointed to the board of directors, but this really got me thinking about the sort of stability of these corporations, of these companies, as we think about how we can apply these models in medicine and as they're being used.
And so, as part of that period in November 2023, you joined the board of OpenAI, and my question for you is: what made you say yes to that job?

(23:43):
Raj, in the wake of the corporate governance transition, I'll call it, at OpenAI, I was approached by the people who were involved in forming new arrangements to ask whether I would take on a position on the board. I think they came to me precisely because I was

(24:08):
outside the situation and didn't have the extensive prior loyalties in any particular direction, and was thought to be someone who could grasp the various issues, both technological and social, and who'd had a certain amount of experience

(24:28):
between academic life, the private sector, and government with complex situations.
I asked myself, really, two questions.
Did I think that I could make a contribution in this way?

(24:50):
And did I think it would be intellectually fulfilling?
And since I answered both those questions affirmatively, I decided to take on that responsibility. And I've been very glad that I did, and feel proud of the little bit that I've been able to contribute to OpenAI over the last year.

(25:17):
I should say to your listeners that the first thing that the new board did, or that the new members of the new board did, was review in a very extensive way the circumstances surrounding that transition. Millions of dollars

(25:39):
were spent, tens of thousands of documents were reviewed, and witnesses were questioned for many hours by a major law firm that had substantial experience with this kind of thing.
And I can tell you with an extremely high degree of confidence that none of

(26:04):
the issues that were involved in the request for Sam Altman's resignation went to anything about the safety or the safeguards surrounding OpenAI products.
There were complicated issues of personality between him and members of

(26:31):
the prior board that led to that decision.
And there was a kind of collective judgment about how the institution could be best taken forward that led to the reversal of that decision. But you needn't worry, or needn't have, with the benefit

(26:54):
of hindsight, worried during that weekend about the legitimacy or the quality of the products that you were using in your medical work.
Got it.
Thank you.
So, I think we want to transition now to the lightning round.

(27:23):
We always do this on each of these episodes.
Larry, the rules are simple.
We're going to ask a series of kind of rapid-fire questions.
Some are very trivial.
Some are less trivial.
You can decide which ones are trivial, which ones are not.
And our only guidance is that we try to have you respond in just one to two sentences, quick, rapid-fire reactions.

(27:45):
Does that sound alright?
Are you ready for this?
Sounds good!
Alright.

(27:49):
Okay, so the first question is: Is the portrayal of your meeting with the Winklevoss twins in Aaron Sorkin's The Social Network at all accurate?
It is not literally accurate, but it conveys the nature of that meeting.
And I would only say this: you learn certain things as a university president.

(28:14):
One of them is that if an undergraduate is wearing a coat and tie on a Wednesday afternoon, there are two possibilities.
One is that they have a job interview.
The other is that they are an asshole.
I don't think those guys had job interviews that Wednesday afternoon.
Ah ha ha ha.
Fantastic.

(28:34):
Okay, maybe that might be the most memorable lightning round answer we've ever gotten.
Alright, I'm gonna hand it over to Andy.
So that was great. Okay.
So now we're gonna do a riff on TylerCowen's overrated underrated for you.

(28:45):
Overrated or underrated: Arrow's impossibility theorem. Underrated. Underrated.
So, it's interesting because I feel like most folks in my circle, if you don't know anything about economics, you know Arrow's impossibility theorem and you use it to shut down any attempt at consensus making.
So in what way is it underrated?
I think it speaks very, very powerfully

(29:09):
to the difficulty of any kind of collective decision making, particularly in an increasingly complex and multidimensional world.
And I think it is understood by a limited number of social scientists, but it is not as universally part of the canon

(29:34):
of human knowledge as it should be.
And I should say, I also have a bias.
Kenneth was my uncle.
I didn't know if that was family allegiance, but I think that that was a very well-stated case for the underrated nature of Arrow's impossibility theorem.
Alright, the next question.
Which is a harder job, President of Harvard University or Treasury Secretary of the United States?

(29:56):
President of Harvard University. It's got a lot more politics in it than working at the Treasury in Washington, given the extreme decentralization of Harvard and of universities in general.
Alright.
If you could have dinner with one person, dead or alive, who would it be?

(30:21):
Probably John Maynard Keynes, because he embodied the practical, intellectual, economics-oriented life that I have tried to lead, and he could express himself so

(30:41):
cogently, powerfully, and eloquently.
Alright, this is our last lightning round question.
Which is the best Harvard undergraduate house, and why is it Leverett House?
I was a tutor in Lowell House, so I'm gonna stick with Lowell House.

(31:02):
Excellent.
Thank you, Larry.
You have survived the lightning round.
You've passed it with flying colors.
So, we want to ask just one or two questions that kind of zoom out a little bit to wrap up.
So, you're probably aware of this essay called "Situational Awareness."
Again, like Tyler Cowen and some other folks have really been big on this.
It's written by this 21-year-old named Leopold Aschenbrenner,

(31:25):
but it's an economic analysis
of the next five years of AI.
And in it, he makes the conclusions that power will be limiting, and that essentially scale is going to keep working and take us to AGI.
So, I don't have a question about the article specifically necessarily, but more: how useful are the tools of economics going to be

(31:46):
for understanding the next decade?
Are we, as ML researchers would say, in distribution or out of distribution?
Can we make reliable predictions about what comes next?
Or is your sense that the next 10 years is going to be beyond our predictive capacity from an economic perspective?
I don't think of the test of economic analysis as being the

(32:07):
ability to make predictions.
For example, one of the great ideas of economics, the efficient market hypothesis, essentially has as its central idea that you can't predict the evolution of a future speculative price, because if you could predict
it, it already would have moved.

(32:28):
And therefore, in a properly defined sense, speculative prices are random walks, or martingales.
So, I'd reject the way in which you framed that question.
But, God, I think the principles of scarce resources, thinking at the

(32:50):
margin, recognizing the importance of opportunity cost, understanding that there's no such thing as a free lunch, that incentives shape behavior.
I think those principles are going to be as important as they've ever been.

(33:10):
The contexts are likely to change in a variety of ways.
AI and digital technology more generally promote economies of scale.
They promote what economists call non-convexities.
And that's going to make the nature of the mathematical analysis different,

(33:31):
and in some ways more difficult than it has been in the past.
It may make the pure invisible hand less effective in getting to the best possible outcomes.
But there's nothing that I see that would suggest that economic analysis

(33:52):
is not going to have impact.
And I would rather expect that with more actors, more perspectives, and more competition, some of the forces on which economics tends to focus

(34:14):
are likely to become more important.
While everybody likes to criticize economics, I am struck by how much the other social sciences increasingly emulate economics and emulate methodological approaches that economists take, I think to their substantial benefit.

(34:38):
I think just one last question here that builds directly off of that.
I think Andy brought up, a few moments ago, Arrow's impossibility theorem.
So, maybe this is a good one to close on.
So, you know, I think a lot about how these models are trained.
And there's this kind of heavy compute phase where they're trained essentially to do next-token prediction with massive corpora of data, but then there's this kind of lighter compute,

(34:59):
but very important, phase where humans are brought in to label examples of good or helpful outputs and then to rank outputs.
And so this sort of transmutes the question of what values are embedded in these models to whose values are embedded in these models.
And I think a lot about what those human values are that are embedded in these models.
And I think a lot of work has come out of the United States, but we're seeing

(35:21):
increased activity outside of the U.S. as well in building some of these models.
And so again, thinking about what Arrow's impossibility theorem teaches us generally, we have a few different parties in medicine and health care, right?
We have the payer, we have the patient, we have physicians. And the question that I really think about a lot, and maybe you can give us some sort of parting words about this, is: what are the lessons about studying values and preferences?

(35:45):
Again, I think in economics this is very, very old and has been done for many, many decades.
And my guess is that we're going to start borrowing some of those methods for evaluating, for thinking about how we influence what comes out of AI models.
But what are the lessons from thinking about values and preferences from economics, including, of course, Arrow's impossibility theorem,

(36:06):
to help us think about what human values are embedded in these AI models?
It's a very deep question and I'm not sure I can give you a good answer.
My instinct, Raj, is that for a very large number of the questions that

(36:28):
one's likely, in medicine, to be looking to AI models for, the values issues are likely to be somewhat secondary.
I've said of AI that House, as portrayed in the TV show, was an enormously powerful

(36:50):
figure who was able to contribute a great deal.
That in some sense, what he did is the kind of thing that an AI system will be able to do before an AI system is able to hold a patient's hand as they're fearfully going into surgery.

(37:14):
And the ability to make diagnoses more quickly, more accurately, and more precisely, and to suggest steps forward in the treatment of a patient, is not something that I think is fundamentally more difficult around values.

(37:38):
When there are judgments that are reached, then there are human choices that will have to be made.
But I think that is likely to be something that remains in a human domain for quite some time to come.

(37:59):
So, I rather suspect that we're going to have some substantial time before we're going to have to face the issues that you are describing.
Amazing.
I think that's a great note to end on.
Larry Summers, thank you so much for being on AI Grand Rounds.

(38:21):
Thank you so much.
Thank you.
This copyrighted podcast from the Massachusetts Medical Society may not be reproduced, distributed, or used for commercial purposes without prior written permission of the Massachusetts Medical Society.
For information on reusing NEJM Group podcasts, please visit the permissions

(38:41):
and licensing page at the NEJM website.