
November 12, 2025 48 mins

What if the fastest route to real AI value isn’t training a new model—but learning how to use the ones we already have, better? We sit down with author and professor John K. Thompson to chart a clear path from AI’s roots to practical wins you can ship this quarter, while keeping an honest eye on what stands between today’s LLMs and true AGI.

We start with the origin story—from Turing’s early ideas to the Dartmouth workshop—and show why those founding questions still matter. Then we move into the present: how context windows let you infuse models with your voice, policies, and playbooks without fine‑tuning; why unique information assets are your real moat; and how cross‑functional teams (operators plus technologists) turn prompts into production results. John explains the power of causal AI to answer “what should we do?” and shares concrete examples, from proposal generation that compresses months of work into minutes to manufacturing setups that slash daily waste by two‑thirds.

Along the way, we cut through common myths. AGI isn’t arriving next week; we’re missing durable memory, robust causal reasoning, and integrated “composite AI” that blends generative, foundational, and causal methods. GenAI coding is a productivity edge for scaffolding and tests, but complex logic still needs expert hands, strong reviews, and measurable KPIs. For leaders, the blueprint is simple: build around the model first with retrieval, guardrails, and evaluation; organize AI and data science as one team; choose tools that fit practitioners; and measure outcomes relentlessly.

If you’re serious about unlocking AI’s upside without getting lost in hype, this conversation offers frameworks you can use today and a realistic map for tomorrow. Enjoy the episode, and if it resonates, follow the show, share it with a colleague, and leave a quick review to help others find it.

Follow John K. Thompson on LinkedIn

What's New In Data is a data thought leadership series hosted by John Kutay, who leads data and products at Striim. What's New In Data hosts industry practitioners to discuss the latest trends, common patterns for real-world data, and analytics success stories.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_01 (00:12):
Hello, everybody.
Thank you for tuning in to today's episode.
Super excited about our guest.
We have John K. Thompson.
John, how are you doing today?

SPEAKER_00 (00:20):
Doing great.
It's spring here in Chicago.
It's sunny and warm and everybody's running around.
So it's great.

SPEAKER_01 (00:27):
It's wonderful.
So glad to hear about the weather in Chicago. On the day we're recording here, it's April 16th.
I know sometimes those Chicago winters can sort of bleed into May, so that's nice to hear.
It's getting you some good spring weather.

SPEAKER_00 (00:44):
Yeah, the city goes crazy.
The first day that it gets above 50, everybody's out at the lakefront, riding bikes, rollerblading, roller skating, doing everything.
So the city has come to life.

SPEAKER_01 (01:01):
Yeah, absolutely.
When people ask me what my favorite city is, I always ask, can I qualify the time of year? Because it's summer in Chicago specifically.
I actually had my wedding in Chicago, but it was in the winter, and it was still very nice. It was a nice snowy wedding.
But John, today I wanted to discuss your new

(01:23):
book, The Path to AGI.
A super fun read. I think it's very executive oriented; it gives you a very high-level view of the value and the way to achieve AI, but it also gets into the required technical details in depth.
It's a really great book that I enjoyed.

(01:44):
But first, John, I'd love for you to tell the listeners about yourself.

SPEAKER_00 (01:48):
Sure.
Yeah, and thanks for that opportunity, John.
I've been involved in AI for 38 years, and often people say, oh, it hasn't been around that long.
And I'm like, well, yeah, it has.
It's been around for about 70 years.
I started working in data so many years ago, and it just seemed to me that everything

(02:10):
that we did that had really any interest or value came from data.
So I started on that journey a long time ago, and I've been involved in advanced analytics, statistics, and AI for that amount of time.
And writing the book was really fun. I really enjoyed it.
I've had a number of people come back and say,

(02:32):
wow, 420 pages.
What happened?
Did you get possessed?
Or what went on there?
And I was just like, look, I got sucked into the topic and just ran with it.
So I'm an all-around AI nerd and AI generalist, I guess is the way to say it.

SPEAKER_01 (02:47):
Yeah, one of my favorite parts of the book was how you went all the way back to the foundations, the very early meetings that were the precursor to AI, and the concepts of computers really thinking and making decisions autonomously,

(03:08):
conceptually.
I'd love to hear about that part of the book.

SPEAKER_00 (03:11):
Yeah, writing the book is a labor of love.
Anybody that's written a book knows that.
And one of the things that made me a little sad was that Alan Turing was one of the first people to go to the Royal Society and start talking about machines that think and machines that mimic human intelligence, but he never wrote anything down, he

(03:34):
never submitted those papers.
So there's really no written history of him starting out the movement towards AI.
And I'm such a history buff.
I would love to read those papers and see his thoughts.
But he was actually talking about it in the late 40s.
So then we move up into the 50s and the

(03:55):
Dartmouth Summer Project, and all the great thinkers, Claude Shannon and John McCarthy and Marvin Minsky, and all these people coming together at Dartmouth over a summer to talk about what AI is and what it could become.
And what I realized in doing that research is that what we

(04:15):
think about AI is what they defined.
Going into that workshop in 1955 and 1956, they had written down everything that we have been trying to do, and for the most part have accomplished, over the last 69, 70 years.
So it's great to go back and see where this stuff

(04:36):
actually started.

SPEAKER_01 (04:37):
Yeah, it really gives you a sense of how long it really takes to solve these types of problems.
And now we're in this era where we have all this technology at our fingertips, right?
You can provision essentially limitless cloud resources just from your browser now, and

(05:00):
access to all these massive foundation models that have been pre-trained with petabytes of data, both public and maybe not so public.
That's the part about the book that really made me excited in a way that I wasn't before.
I was like, wow, imagine if I was back then in

(05:24):
the 1960s, just thinking about this and being inspired by it, but not having the infrastructure to actually execute it.
We could only really theorize it back then.
So it's a really good perspective, because now you can just use these LLMs in any customer- or internal-facing application.

(05:44):
But your book is really incredible at sort of the why you would even want to do that, and whether you should even do that, and the right labels on it.
So definitely a really great read.
I do recommend it both to folks who are on the business side, on the executive side, but also data

(06:06):
engineers, software engineers who really want the big picture behind AI and what it can accomplish.
And I like the cover too.
So that always helps.

SPEAKER_00 (06:19):
Thank you.
Yeah.
That cover, I think they sent me like 10 or 12 different variations, and that was the one that jumped out.
And then I brought in a few different people, including my wife, who made different suggestions on the color of that arc.
So it came together very much as a group project.

SPEAKER_01 (06:39):
You know, they say don't judge a book by its cover, but people do anyway.
So it's important.
But before we get more into the book, you also have another update.
You've taken on full-time teaching.
I'd love to hear about that as well.

SPEAKER_00 (06:58):
Yeah, I've become an adjunct professor at the University of Michigan, at the School of Information.
Our daughter went to Michigan and graduated from the School of Information.
And I've been talking to them for about five years about different things that I could do to help the school.
And one of the deans came back and said, well, we'd like

(07:19):
to have you teach, in addition to being a board member.
And I said, wow, that's really interesting.
And I'm honored and flattered.
And I thought, I've taught at DePaul and different kinds of universities, but never as a full-time faculty member.
So I sat down and wrote a class based on my

(07:42):
second book, and I'm just finishing it now.
I'm getting ready to send the final to the students, and it's been great.
It's really been fun.
As we talked about, I live in Chicago, and the class is at the University of Michigan in Ann Arbor.
So I've been driving up there every week, teaching a three-hour class, and then coming back to Chicago, and

(08:04):
it's just been such a wonderful experience.
One of my friends had an observation.
He said, well, it's quite intriguing, you're interacting with people that are as young as 17 years old.
Some of the undergraduates are 17, 18, and I have graduate students.
Then I go to work in my day job, and I

(08:25):
interact with people that are in their 30s, 40s, and 50s.
And then I do work with boards; over the last couple of years, I've been in front of a couple hundred boards, and those are sometimes people up into their 80s.
So I'm getting quite a spread of exposure to different slices of education, corporations, board

(08:49):
governance, and hearing all these different people give me their impressions and ideas of AI.
And it's quite enlightening.

SPEAKER_01 (08:56):
It's pretty cool to get all those perspectives.
They all think about it in different ways and at different levels of technical detail: risk mitigation, looking for opportunities to extract more internal optimization.
I'd love to hear more about that, actually.

(09:18):
So you're working with all these different groups of people, and it's giving you great perspectives.
What's your main takeaway from that?
I mean, what is the right level to really engage with a set of problems?

SPEAKER_00 (09:30):
Yeah, that's a great question.
The undergraduates want to do everything all the time with Gen AI.
It's like, well, do I need to go to the store?
Can I have Gen AI bring me my Mountain Dew, or whatever it happens to be?
And I'm like, well, probably not yet, but maybe you could.
I suppose you could get a robot to bring it to you.
And the mid-career professionals are all about

(09:52):
efficiency, effectiveness: how do we get things done faster?
How do we automate everything away?
And then once you get up into the senior levels and the board, it's all risk mitigation.
What's our risk?
How do we not end up on the cover of the Financial Times, or something like that?
So those are the things I see in broad brushes.

(10:12):
But when we dive down into it, we have some really intriguing conversations with the subject matter experts, the people that are running the actual operations of corporations: supply chain, pricing, distribution, sales, marketing, manufacturing, all those things.
Those are quite intriguing.
And then when you get the subject matter experts in the

(10:35):
room and you bring the technologists in, then it's really a great time, because I've never seen this kind of dynamic before.
You put it very well earlier, John.
We've got these large models that have been trained on petabytes of information that are accessible by almost

(10:55):
everybody.
If you can write cogent sentences in your natural language, you can prompt these models.
So once we get the technologists in the room and the subject matter experts in the room, and we break down the barriers of communication and start talking about how each of them can bring their skills and experience and technologies and understanding

(11:17):
of operations together, you really start to see impressive movements in building solutions that can pretty much take over almost any process you want to focus on.

SPEAKER_01 (11:28):
And it's so interesting to see how, like you said, different people have different approaches, where the college student might throw AI at a problem where there isn't really a lot of value in throwing AI at it, just to kind of play around with it.
And I see that as well, where young engineers who

(11:49):
see all the hype sort of want to just dive into writing something, right?
Even if it's not the best idea and there's no business purpose to it.
They'll build it with AI, they'll build a Model Context Protocol server, because everyone's talking about MCP, and for all its faults, it's very popular, and at least at a framework level it does seem like

(12:10):
something that'll be persistent in implementations.
And then you look at the operators who are deploying AI, and it's always very precise in terms of making sure it has internal buy-in and a number on its back, so to speak, to make sure it's

(12:31):
delivering value. Because it seems like, yeah, the developer tooling's there, you can build stuff that calls LLMs, but you have to think about why we would even do it.
And you have a lot of great conceptual ways of thinking about that in your book.
What is the essence, and what are the core things that need to happen, to really get business value out of AI?

SPEAKER_00 (12:52):
That's a great question.
We've gone through the last two, almost three years of just rocketing innovation.
And I'm not quite sure yet, but I do feel like we're starting to see that curve come down a little bit.
It's not like every week there's another crazy breakthrough.

(13:13):
So we don't have to keep spending so much energy on trying to stay on that almost vertical curve.
We start to see that flatten a little bit.
And that gives people in business and operations a chance to take a little bit of that attention and start to look at the things that are salient and interesting

(13:35):
to them.
As you said, you mentioned young engineers and young professionals.
I often have them come to me and say, I don't know where to start, I don't know what to focus on, I'm not sure where I can make a difference.
And I always say, well, partner with someone who's in the operations of the business and go have lunch with them.
And then the next question is, how do I extract

(13:56):
their problems from them?
And I'm like, you don't have to extract them, you just have to ask them: what is the challenge, what's keeping them up at night?
What can't they get done?
And they'll tell you.
So it's really quite intriguing.
I was talking to a company a couple of days ago; they're one of the premier information providers in the world.

(14:17):
And they were actually very befuddled about what to do in their business about AI.
And I said to them, look, you're in a privileged position.
You are a trusted processor of much of the world's financial information.
You have connections to insurance and finance and all these different places.

(14:37):
Really, what you need to do is take a step back and say, we have an opportunity to rewire some of the way people think about credit risk and finance and granting loans and insurance and all this kind of stuff.
And if you just take a step back and look at where you are, you can see, at least I can see, a very clear path forward that

(15:00):
is going to drive innovation and change and unleash value.
So it really is different for each organization, and almost each person, really.
But it comes down to: what are the unique information assets that you have?
What are the unique information assets that your ecosystem of partners has?

(15:20):
And how can you put those together in a way that's going to drive and unlock value for you?
And this all comes from a premise that I believe you and I have talked about: with Gen AI and the large language models and all those innovations, what we've done is we've basically unlocked the additional 90% of the world's information that can be actively used now.

(15:43):
If we had this conversation three years ago, we'd be talking about ledger entries and quantitative data and columns and rows and numbers.
We wouldn't really be talking about audio and video and images and text.
We just wouldn't be, because we didn't really do much with it.
It's hard for people to get their heads wrapped around the sea change that has happened in the last three

(16:06):
years.
So I think people need to open their aperture.
They need to take a moment and see where those unique information assets are, how they can actively engage with them, and how they can change the game today.

SPEAKER_01 (16:23):
Yeah, that's incredible advice.
And this information is so much more accessible than it was before.
All these technical hurdles have been removed in terms of access to machine learning and AI, even just taking out a

(16:43):
lot of steps required for pre-training and things along those lines, like labeling data sets.
I'm sure that's still necessary for many use cases, but I would love to hear from you: what are some of the practical unlocks that executives should be aware of, thanks to the innovations with AI?

SPEAKER_00 (17:04):
Yeah, one of the things that really opened my eyes to it, and I suggest it to everybody that I talk to, is: go take the prompting 101 class at Coursera.
And everyone's like, really? You took that class?
And I'm like, yes, I took that class, as a matter of fact.
What it really opened my eyes to is the value of the context window in a model, that you can

(17:27):
sit there and, I mean, what I did in that class, they have exercises.
So I basically loaded up five of my books and papers and videos, and it kind of made a digital John, just to see, hey, how does this really work?
So even without touching the core of a model, without even

(17:48):
doing any fine-tuning whatsoever, you can take information that you control, stick it in the context window, and the model will do an amazing job of mimicking whatever phenomenon you want it to.
It did a really good job of answering questions the way I would have answered them, and in my voice.
So I'm saying to people, look, you

(18:09):
don't have to be a techie.
And that's one of the great messages: you don't have to be a techie.
You can sit down and say, hey, I'm a small businessman, and I'm trying to open another distribution channel, maybe in a country I don't know about, or maybe in a different part of the United States, or maybe even with customers I don't really even understand.

(18:29):
And if you can gather enough information and upload it into the context window of a model, you can probably get a pretty good idea of how you need to protect yourself legally to open this new distribution channel.
Now, three years ago, was that possible?
Could that even happen?
Could I, as an individual, do that?
I mean, I probably could, because I'm a nerd, but most

(18:52):
small business people, no way, not possible.
They would have had to go get some programmers and some developers and some data management people and maybe some architects, and they would all be looking at it and go, gosh, we've got legal agreements and we've got distribution, we've got pricing, we've got all sorts of tables and graphs and things.
How do we bring all this together with Gen AI?

(19:15):
Upload it and away you go.
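The technique John describes here, putting your own material into the context window instead of fine-tuning, can be sketched in a few lines. This is a minimal illustration only: the document names, the character budget, and the prompt wording are hypothetical, and the final step of sending the prompt to a hosted model is deliberately left out.

```python
# Sketch of in-context customization with no fine-tuning.
# The documents and the 12,000-character budget are hypothetical
# placeholders; a real system would count tokens, not characters.

def build_context_prompt(documents, question, max_chars=12000):
    """Pack owned documents into the context window ahead of the question."""
    packed, used = [], 0
    for name, text in documents.items():
        snippet = text[: max_chars - used]  # truncate to stay in budget
        packed.append(f"--- {name} ---\n{snippet}")
        used += len(snippet)
        if used >= max_chars:
            break
    context = "\n\n".join(packed)
    return (
        "Answer in the author's voice, using only the material below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# Illustrative inputs: stand-ins for the books, papers, and notes
# someone would actually upload.
docs = {
    "book_excerpt.txt": "Causal AI answers the question: what should we do?",
    "style_notes.txt": "Prefer plain, direct sentences over jargon.",
}
prompt = build_context_prompt(docs, "How should a team start with AI?")
```

The point of the sketch is the shape of the technique: everything the model needs travels in the prompt, so no weights change and no ML specialists are required.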

SPEAKER_01 (19:17):
I wanted to latch onto that one comment you had about fine-tuning, right?
It's not as necessary as people think.
People think, oh, for me to customize an LLM, I need to have some PhD-level engineers who are gonna do some matrix calculus and change the vectors.
No, you don't need that.
It's all in the context window, which is

(19:38):
where the magic is.
And if you look at some of the popular AI deployments, it's just kind of stacking LLM calls in an intelligent way, sequencing that context in the right order, and coming up with a plan, right?
So people jump to the fine-tuning, but it's not necessary, right?
If you're building the LLM and selling the LLM, maybe,

(20:01):
yeah, right.
But that's not where the value is right now, right?
For some folks it is, but you'd better have 20 billion in capital ready to compete in that market.

SPEAKER_00 (20:11):
Yeah.
I mean, the first thing I ask people in this kind of dialogue that we're having right now is: do you want to, as you said, create a new model, protect that model, sell that model, and monetize it?
Or do you just want to try to get something done?
Basically, do you want to do things in the model, or do

(20:32):
you want to do things around the model?
And what I'm saying is that you can do most of the things around the model to prove that maybe it doesn't work.
Maybe it does work.
If you prove it does work and there's some real value there, then maybe you want to do it in the model.
But in the model is not where you want to start.
You want to start around the model.
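The "around the model" approach John contrasts with fine-tuning can be sketched as a small pipeline: retrieve from your own information assets, assemble context, call a fixed model, and check the output before it ships. Everything here is a hypothetical stand-in; `call_model` is a stub where a real hosted LLM client would go, and the guardrail phrases are invented for illustration.

```python
# Sketch of working "around the model": retrieval, prompt assembly,
# and a guardrail wrapped around a fixed, untouched model.

def retrieve(query, knowledge_base):
    """Naive keyword retrieval over owned documents (no fine-tuning)."""
    terms = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:2]  # top two matching documents

def call_model(prompt):
    # Stand-in for a real LLM call; echoes the prompt tail for the demo.
    return "DRAFT: " + prompt[-60:]

def guardrail(answer, banned=("guarantee", "legal advice")):
    """Reject answers containing phrases the business has ruled out."""
    return not any(phrase in answer.lower() for phrase in banned)

def answer(query, knowledge_base):
    context = "\n".join(retrieve(query, knowledge_base))
    draft = call_model(f"Context:\n{context}\n\nQuestion: {query}")
    return draft if guardrail(draft) else "ESCALATE: needs human review"

# Illustrative knowledge base: stand-ins for internal documents.
kb = [
    "pricing policy for new distribution channels",
    "supplier onboarding checklist and risk steps",
    "holiday schedule for the Chicago office",
]
print(answer("how do we price a new distribution channel", kb))
```

Each stage can be swapped out or measured independently, which is why starting around the model is the cheaper way to prove whether there is value before anyone touches weights.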

SPEAKER_01 (20:51):
Absolutely.
And you cover a lot of this in your book, both from a practical execution standpoint and a technical deep dive.
In your book, The Path to AGI, it would be great to just go through some of the high-level sections.
I don't want to spoil anything, but you could probably guess from the title where it's going

(21:12):
with AGI.
But yeah, we'd love to hear more about that.

SPEAKER_00 (21:16):
Sure.
And thanks for that question, John.
I started writing the book, and the idea was that there would be these core sections.
In the book now, it's section two, section three, and section four.
And it's foundational AI, past, present, future; generative AI, past, present, future; and causal AI, past, present,

(21:38):
future.
So there's a pattern there.
Obviously, I'm a pattern person.
And that was supposed to be the core of the book.
But then when I wrote it, I was like, it starts out as if you're almost jumping off Mount Everest.
So I wrote the first section, which was data for AI, because I thought there's gotta be some way to ramp this up better than what I had in the

(22:01):
manuscript.
So then you end up with section one, which is data, and then foundational AI, generative AI, causal AI.
And then the last section is what is gonna happen over the next few decades to get to AGI.
So that's the layout of the book right there.

SPEAKER_01 (22:19):
AGI. Let's just define that and get your take on that as well.

SPEAKER_00 (22:26):
Sure.
We've been talking about it a lot.
I don't think we've actually said it: artificial general intelligence.
I think most of the people that listen to your podcast are pretty well versed in this stuff anyway, but just to be clear, it is artificial general intelligence.
And we see a lot of people out there, Ray Kurzweil, Elon Musk, Sam Altman,

(22:47):
saying, hey, AGI is here, AGI will be here next week, AGI will be here in six months.
And I don't agree with them at all.
Obviously, if you've gotten partway through the book, you get that.
But the one thing that I do agree with every person on is the definition of what AGI is.
And AGI, in everybody's consensus definition, is where AI

(23:11):
is as intelligent as the above-average college graduate.
So it's where AGI is reasonably intelligent.
And I've had a number of people come back after they've read part of the book and said, well, you're an AGI hater, or you're against AGI.

(23:34):
And I'm like, no, not at all.
I mean, I'm pro-AGI.
I think it's a great idea.
I think we will get there.
I just think there's a lot more work than what people are saying and understanding and realizing at this point.
So AGI is all about AI acting and reacting and interacting as if it's as good as an above-average
(23:58):
college graduate.
So that's the definition.
And there's lots of people that are giving pros and cons on it.
I actually am very pro on it.
I think we'll get there.
It's just gonna take a while.

SPEAKER_01 (24:10):
What would you say the current limitations are, very concretely, that basically don't allow us to define what we have today as AGI?

SPEAKER_00 (24:20):
Well, one of the things that we started to see a little movement on last week was memory.
AI for the most part doesn't have any memory.
You come in and you ask it questions, and you prompt it, and it responds to you just as if it had never met you before.
So it's a lot harder than what people think.

(24:42):
You and I have known each other for what, three or four years now?
And I remember when we met, we were talking about Striim and different things, and in one of our previous conversations, we were talking about the incident that had befallen you and many people in California with the wildfires, and those

(25:03):
kind of things.
And we've built a relationship, and all those different conversations are in my head, and when we get together and talk, I'm always thinking about those things.
Well, how's John doing on rebuilding his house?
And how has that affected his family, and those kinds of things?
And AI doesn't have any of that.

(25:23):
AI doesn't have any emotions; it has no context for what we're talking about.
And those can be resolved, and they will be resolved, in the next couple of years, or three to five years, something like that.
But there's many other things.
I think even Sam Altman has said, well, maybe Sam hasn't said it, but many other people have said that large

(25:46):
language models, Yann LeCun has definitely said it, language models are not the path to AGI.
There's so many more things that AGI requires than just predicting the next word or letter.
Causal AI has to come into it, foundational AI has to come into it.
And what we're seeing now is that

(26:08):
it's kind of an old phrase, but it's kind of like a roll-your-own.
I see it with young engineers.
They're like, hey, I took Gen AI and grafted it onto a logistic regression, or I took causal AI and put it in with Gen AI.
Everybody's grafting these things onto each other, but not many people are poking their heads up above the parapet

(26:30):
and saying, what's gonna happen is that vendors are gonna start to look at this, and they're gonna say, hey, we need an AI platform that has it all.
And what I'm saying is it's gonna take at least 30 years for a vendor or a cadre of vendors to integrate this stuff all together, so you have an AI platform that has all the different flavors of

(26:52):
AI in it.
And I had one of my graduate students ask me in class last week; they're like, well, that doesn't sound too hard.
And I said, okay, well, let's take one feature and integrate it across foundational AI, causal AI, and generative AI.
And we went through a design session that lasted about an
(27:13):
hour.
And at the end of it, they're like, God, we're nowhere near even understanding it, are we?
I'm like, nope.
So that was just a thought experiment in a class at a university, but there's a myriad of those things that have to happen before we're anywhere near it.
I mean, we're not even close to composite AI, let alone, you

(27:34):
know, artificial general intelligence.
So I'm pro on AGI, and I think we will get there.
Rodney Brooks, I don't know if you remember or know who Rodney is.
He was the founding director of MIT CSAIL, and he's founded a couple of robotics companies.
He's on record as saying that he thinks AGI won't be achieved for

(27:56):
130 years.
So he's even further out than I am.
And he and I have been trading emails back and forth.
And his position is, why have people worked on AI for 70 years?
And the reason that some of the best and brightest computer scientists have worked on it is because it's hard and it's interesting and it's fun.

(28:17):
So when I say AGI is 120 years away, I don't say that as a detractor, or as a Luddite, or as someone who wants to take away from it.
I say this is a hard problem.
And if you're really smart and engaged and excited, you should jump in the pool.
This is where the fun is.

SPEAKER_01 (28:34):
Absolutely.
And that's one of the things that people who are building with AI come to terms with: the probabilistic nature.
That nature of it always makes it just a little bit unpredictable, right?
Which is in its definition, right?

(28:55):
And that's one of the things that software engineers face when they make that leap from building data-driven applications to suddenly AI-driven applications: you lose that determinism, that predictability, and it's just a little bit unsettling for some teams.
And it just kind of changes the way people have to work.

(29:16):
And I think that's sort of the root of everyone's, you know, lack of commitment to saying, yeah, AGI is here.
Because there's always just that little, you know, 2% chance that AI is just gonna completely make something up, or do the wrong plan, or, you know, go down and execute something that's

(29:37):
completely off.
Right.
So one of the other things I wanted to ask you about, which is well covered in your book, The Path to AGI: you also wrote a book called Causal Artificial Intelligence: The Next Step in Effective Business AI.
So you wrote a book on that as well.
Tell us about causal AI.

SPEAKER_00 (29:56):
Yeah.
You know, causal AI is, you know, a great tool.
You know, kudos to Judea Pearl and his team at UCLA and all his PhD students and collaborators for creating an entirely new branch of calculus.
I mean, I think I'm a reasonably smart kid, or

(30:17):
guy, but man, I'm nowhere near creating new math.
And they have definitely done that.
And the great thing about causal AI is that while it's still probabilistic and still, you know, has some of those fuzzy components to it, you're actually bringing in information and you're understanding at a greater

(30:37):
level, a much greater level, that A did cause B.
You know, one of the hard things about causal AI is that most people think they understand causality.
And I think at some level we do.
You know, I put my finger on a hot stove and I burned my finger.
Okay, well, you know, you put your finger on a stove and you

(30:58):
burned it.
So that's pretty easy, to understand causality there.
But when you start to try to break causality down to its mathematical level and have it on a predictable, understandable, repeatable basis, that's hard.
So, you know, we need causality to be better than it is today

(31:19):
because causality is one of the elements we require for AGI.
Because without causality, without actually understanding the true cause and effect of things, you know, AI can never be anywhere near the experience that we have in the real world.
So causal AI is a burgeoning field; I think there's 15 or

(31:43):
20 vendors out there working on it right now.
And there's some really interesting software being built.
Um, but it's early days.
And I think it's been slowed down a little bit because Gen AI has sucked all the air out of the room for the last two years.
But, you know, those companies are still there, they're still being funded, they're still doing interesting work.
But, you know, that will come to the fore in probably two to

(32:06):
seven years, you know, or maybe five to seven years.
And we need it.
We have to have it.
That is a core component of what the AI stack is going to be in the future.
And it's very exciting.
One of the things that's truly exciting about causal AI is that you can go back in history and take any data set that's been collected.

(32:26):
And if it has a reasonable objective, or a reasonably close objective to what you're trying to achieve, you can integrate that data set into your current analysis.
So you could take some of Darwin's data from, you know, whenever he was alive, I guess it was back in the 18th century, and you can condition it in a way that you can bring it into the causal analytics you're doing today.

(32:48):
So one of the great things about causal AI is it makes all the structured data that we've ever had in history available for use today, which is mind-bending when you really think about it.
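The "A did cause B" point, going beyond correlation by adjusting for a confounder, can be sketched as a toy example in plain Python. Everything below (the variables, the data-generating process, the coefficients) is invented for illustration; it shows one minimal instance of the backdoor-adjustment idea, not any vendor's method:

```python
import random

random.seed(0)

# Toy data-generating process (invented for illustration).
# A hidden confounder C drives both the "treatment" A and the outcome B.
# The true causal effect of A on B is 2.0.
n = 20_000
C = [random.gauss(0, 1) for _ in range(n)]
A = [c + random.gauss(0, 1) for c in C]                        # A depends on C
B = [2 * a + 3 * c + random.gauss(0, 1) for a, c in zip(A, C)]

def ols_slope(x, y):
    """Slope of a simple least-squares fit of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Naive, purely correlational estimate: biased by the confounder.
naive = ols_slope(A, B)

# Backdoor adjustment: remove C's influence from both A and B,
# then regress the residuals on each other.
slope_ca = ols_slope(C, A)
slope_cb = ols_slope(C, B)
resid_a = [a - slope_ca * c for a, c in zip(A, C)]
resid_b = [b - slope_cb * c for b, c in zip(B, C)]
adjusted = ols_slope(resid_a, resid_b)

print(f"naive estimate:    {naive:.2f}")     # well above the true 2.0
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true 2.0
```

The naive regression folds the confounder's influence into the estimate; adjusting for C first recovers something close to the true effect of 2.0, which is the "what actually caused what" answer causal methods aim for.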

SPEAKER_01 (33:01):
And how does that differentiate from generative AI?

SPEAKER_00 (33:04):
Well, generative AI is much more, uh, unstructured.
You know, you can actually bring in all kinds of stuff in generative AI, but these are two movements, two parallel movements, that we've never really seen before.
You know, in generative AI you're bringing in the unstructured information; in causal, you're bringing in the structured.

(33:25):
But what they both do is they make almost the entire knowledge repository of the world available to you to use actively in your analytics today, which is really cool.

SPEAKER_01 (33:38):
Yeah, absolutely.
And you work with companies that have deployed this and seen real value from it.
I'd love to hear one of those stories.

SPEAKER_00 (33:46):
Yeah.
Well, when I was at EY, I left there about a month ago now, I guess it is.
You know, we built a gen AI platform called EYQ that serves 300,000 people on a daily basis.
And they use it for all kinds of productivity applications: uploading documents, comparing legal documents, and different things like that.

(34:07):
We actually built, well, a team in EY actually built, something called Deal and Delivery Assist.
And that allows them to bring in, you know, the best proposals that were ever built at EY.
And it took months for a number of people at EY to build these proposals.
Um, and with Deal and Delivery Assist, it takes one person to

(34:30):
answer about 11 questions and put in a well-formed prompt, and out comes a fully formed proposal.
So, you know, from multiple people over multiple months to one person for a few minutes, a pretty impressive productivity application right there.
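The pattern John describes, a fixed set of questions folded into one well-formed prompt, can be sketched roughly like this. The question list and template are hypothetical; EY's actual Deal and Delivery Assist is not public, so this is only an illustration of the questionnaire-to-prompt shape:

```python
# Hypothetical questionnaire for a proposal-generation assistant.
# The field names and template text are invented for illustration.
QUESTIONS = [
    "client_name",
    "industry",
    "scope_of_work",
    "timeline",
    "budget_range",
]

def build_proposal_prompt(answers: dict) -> str:
    """Fold questionnaire answers into one well-formed prompt for a model."""
    missing = [q for q in QUESTIONS if q not in answers]
    if missing:
        raise ValueError(f"unanswered questions: {missing}")
    return (
        "You are drafting a consulting proposal.\n"
        f"Client: {answers['client_name']} ({answers['industry']})\n"
        f"Scope: {answers['scope_of_work']}\n"
        f"Timeline: {answers['timeline']}\n"
        f"Budget: {answers['budget_range']}\n"
        "Using our best past proposals as style references, produce a "
        "fully formed proposal with sections for objectives, approach, "
        "team, timeline, and fees."
    )

prompt = build_proposal_prompt({
    "client_name": "Acme Corp",
    "industry": "manufacturing",
    "scope_of_work": "supply-chain analytics rollout",
    "timeline": "Q3 start, 6 months",
    "budget_range": "$500k-$750k",
})
print(prompt)
```

The real system would then send this prompt to the model alongside retrieved examples of the firm's best past proposals; the value is in the curated questions and reference material, not the template itself.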

SPEAKER_01 (34:46):
And I think it does require someone who's knowledgeable in kind of the parameters required for successful deployment of causal AI.
And, you know, I'd love to understand from you: let's say I'm a business leader, and I directionally understand AI as this big unlock, and I just have

(35:07):
this initiative and I want to use causal AI, right?
Where do I start?
Do I go looking for vendors?
Do I try to hire the right engineers?
Do I need a PhD?
You know, what would be the steps there?

SPEAKER_00 (35:20):
Yeah.
I think the way to do it, if you were just, you know, a regular company, not a tech company, not a Silicon Valley organization or something like that: I would talk to some of the early-stage causal vendors and get them to educate you on where there's a good application for causal AI.

(35:41):
I mean, one of the best stories that I've ever heard around causal was a bakery in the UK.
And, you know, a big bakery.
You know, these people are cranking out millions of buns a week.
And what they would do each day is they'd shut down; they had a day shift, they didn't have a night shift.
So they'd turn everything off, everybody go

(36:03):
home.
The next day they would come back and the humidity was different, the heat's different, the ovens operate differently every day.
Uh, you know, and what they would do is they would crank everything up and they would run the factory and keep tweaking, you know, the heat and the speed and the dough and all this different kind of stuff.
And they would throw away about half a million dollars' worth

(36:25):
of product every day.
So that seems absolutely wasteful.
Um, but they brought in a causal vendor that said, okay, we can bring in all the known factors every morning and we can spit out, you know, what we think is the optimal setting for all the different speeds, the dough, the ovens, and

(36:47):
everything.
And they cut that waste: they eliminated two-thirds of it.
So that's a really good use case for causal.
If you don't understand, you know, how things are, any of those kinds of questions about what we should do. If you have a question that starts with "what," it's usually

(37:09):
an application for causal.
So, you know, that's the kind of thing you're looking for.
And what I would say is: go find these vendors, bring them in, explain what your challenge is, let them educate you on it, do a POC.
And if it works, then really think about, okay, now we know that we can make this stuff work in the real world.
It's not just some pie-in-the-sky conceptual thing.

(37:33):
Then you can start to plan forward and say, yeah, I want to hire some people, I want to have this as a core capability.
I'm either going to use this vendor or I'm not.
You know, that's the way I would start out if I were running a business.
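The bakery example is, at heart, a "what should we do?" question: given today's conditions, pick the settings the model predicts will minimize waste. Here is a minimal sketch, with a made-up waste model standing in for whatever the causal vendor actually fitted; every name and coefficient below is invented for illustration:

```python
import itertools

# Hypothetical waste model, standing in for a fitted causal model of the
# bakery line. Waste grows quadratically as settings move away from a
# humidity-dependent sweet spot (all numbers invented for illustration).
def predicted_waste(oven_temp, line_speed, humidity):
    best_temp = 220 + 0.5 * humidity    # degrees C
    best_speed = 100 - 0.2 * humidity   # buns per minute
    return (oven_temp - best_temp) ** 2 + 2 * (line_speed - best_speed) ** 2

def recommend_settings(humidity):
    """Grid-search the settings that minimize predicted waste for today."""
    temps = range(200, 241, 2)
    speeds = range(80, 111, 1)
    return min(itertools.product(temps, speeds),
               key=lambda s: predicted_waste(s[0], s[1], humidity))

temp, speed = recommend_settings(humidity=40)
print(temp, speed)  # the sweet spot the toy model implies for 40% humidity
```

In a real deployment the waste model would come from causal analysis of historical runs, so the recommended settings reflect genuine cause and effect in the line rather than correlations in the logs.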

SPEAKER_01 (37:44):
Yeah.
And, you know, you had some good comments in your book about this as well, which gets much deeper into the details of how this can be executed, along with kind of the concepts for an executive or an operator to understand.
So I definitely recommend looking, you know, at your book as well.

(38:05):
I've already recommended it to quite a few folks.
Um, and you see these operators where everything they've done so far has generally been something at the intersection of software engineering and data engineering of some sort.
Hey, we're gonna have these pipelines that ingest data from external and internal sources.

(38:26):
We're gonna process that data into some, you know, form of models, right?
And those models could just be your dimensional models or, you know, data warehouse models, or you can evolve them into machine learning models where you're essentially, you know, generating some output from them automatically, and then you're just deploying that in an enterprise context as

(38:49):
software.
So now those same operators are trying to understand, you know, we're fully leaning into Jevons paradox, where now their cloud vendor has 10 AI tools and frameworks that they want to sell them, and they think, okay, you know, it's all accessible to me, might as well

(39:09):
use it.
Right.
So I see this: there's obviously, in the enterprise, a lot of governance work.
There's a lot of approvals in terms of what models you can use and, you know, how you share data with them.
Well, what's your recommendation to those business leaders who just have all these tools accessible to them now and

(39:30):
are really choosing and evaluating the best one?

SPEAKER_00 (39:32):
You know, that's a great question.
And it's something that we just talked about recently in my class.
Uh, you know, we really don't talk much about data science teams anymore.
I've had conversations recently with organizations.
They said, well, we have a data science team and we have an AI team.
And I'm like, you do?
You know, but I've heard it multiple times now that people

(39:55):
have done this, and I'm like, that seems really odd to me.
But you know, they should be together, they should be one, to tell you the truth.
So, you know, I'm not a big believer in forcing AI teams or data science teams into standardized tool sets.

(40:15):
You know, I've had lots of data scientists that have wanted to use R, many want to use Python, you know, some have wanted to use proprietary tools and things like that.
So I don't advocate that you go out and, you know, use 13 different tools to do the same thing.
But, you know, if you've got different data scientists and they want to use

(40:37):
different tool sets, you shouldn't force them to standardize.
Because it's kind of like saying to, you know, a musician, you're a guitarist, but on this song I want you to play the saxophone.
You know, you're kind of cutting off your nose to spite your face.
So while you don't want to have everything in the world in your shop, you certainly don't want to force talented, you know,

(40:58):
analytics professionals to use things that are going to be suboptimal for them.
So what I would say is: don't have an AI group and a data science group.
They're really one thing there.
Uh, and then find out from them, you know, what's going to make you the most productive.
And that's the tool set you should use.

SPEAKER_01 (41:14):
Yeah, that's a great point.
Sometimes I'll see teams, uh, I shouldn't say teams, it's really at an executive level, try to prematurely consolidate, right?
And they say, well, why do we have 10 database vendors?
You know, how did that happen?
Let's just have one database vendor and pick the biggest one,

(41:35):
right?
And imagine the software teams here then say, oh, okay, well, we're gonna migrate and refactor all our applications, go from the object store we use to this big relational database.
And, um, you know, I always go back to Michael Stonebraker's quote, you know: one size does not fit all.

(41:56):
And I think this applies, like you're saying, to AI and data science as well.
Definitely interesting to hear that perspective.
But that's where executives have to be really context-sensitive, right?
And understand, you know, why each team chooses the tools.
And it's really to get the job done for them.

SPEAKER_00 (42:15):
That's right.
That's right, you know, and we're moving forward into a world where, you know, if you talk to many non-technical people, they certainly believe that there's this model, you know, somewhere out there in the cloud that they're using.
And sometimes that's true.
They are using a model, but more than likely in the future,

(42:36):
that's not true.
You know, we saw that Mistral has been working on mixture-of-experts models for years now.
And now Llama has released their mixture-of-experts environments.
So, you know, more than likely they're sending in a prompt that's being accepted by a model, that's being parsed, and then sent to many different models.
So, you know, we're probably at the most simplistic world we're

(43:01):
ever going to be at right now.
You know, in the future, on the back end of these models, there's gonna be hundreds, if not thousands, of them.
So, you know, the idea that things are going to be simple, or should be simple, or could be made simple, probably isn't true.
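The dispatch idea John describes, a front-end model parsing a prompt and sending it on to specialist models, can be caricatured with a keyword router. This is a deliberately crude sketch: real mixture-of-experts routing is a learned gating function inside the network, applied per token, and the experts and keywords below are all invented for illustration:

```python
# Stand-in "expert" models (invented for illustration).
def code_expert(prompt: str) -> str:
    return f"[code expert] handling: {prompt}"

def legal_expert(prompt: str) -> str:
    return f"[legal expert] handling: {prompt}"

def general_expert(prompt: str) -> str:
    return f"[general expert] handling: {prompt}"

# Keyword rules standing in for a learned gating function.
ROUTES = {
    ("python", "bug", "function"): code_expert,
    ("contract", "clause", "liability"): legal_expert,
}

def route(prompt: str) -> str:
    """Send the prompt to the first expert whose keywords match."""
    words = prompt.lower().split()
    for keywords, expert in ROUTES.items():
        if any(k in words for k in keywords):
            return expert(prompt)
    return general_expert(prompt)

print(route("Fix this Python function"))
print(route("Review this contract clause"))
```

The point of the sketch is only the shape: one entry point, many back ends, with the caller never knowing which model actually answered.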

SPEAKER_01 (43:18):
The other really popular use case for generative AI is coding.
You know, teams are using it to write code.
Um, I actually just saw a tweet today from Garry Tan, the CEO of Y Combinator, saying that over 90% of source code from its portfolio companies is generated by AI now, which tells

(43:42):
you that, well, Y Combinator, of course, does angel investments; they're one of the most popular and prestigious in Silicon Valley for early-stage companies.
So it seems like they're really viewing generative AI coding as a competitive advantage, because they can have these, like, small, brilliant teams, right, of maybe

(44:06):
two to four technical co-founders who can, you know, build almost an entire startup.
And they're all extremely bright people, right?
They're not just, you know, vibe coders.
You know, they're really good software engineers.
So he makes it sound like it's a competitive advantage, and it's a way that they're gonna definitely infiltrate and undercut the market.
What's your perspective on that?

SPEAKER_00 (44:29):
You know, I do believe that, yes, you know, Gen AI is good for coding; there's no doubt about it.
But, you know, what we found in our real-world experience is that Gen AI is good for simple coding, you know, clearing registers and setting up different, you know, housekeeping tasks, and all the things that as a software

(44:50):
engineer you need to do; you know, that coding is very easy, it's very straightforward, it doesn't vary much.
So Gen AI is good for that.
But when we get into sophisticated logic and very hard problems to solve, it usually falls apart.
At least today it falls apart.
So, you know, we generated and committed millions of lines

(45:13):
of code to our code base, but we found out that that was about 20% of what we generated.
We threw away 80% of it because it just wasn't good, or didn't scale, or didn't work.
So it'll get better over time.
I don't know how fast it will get better, you know.
But I talk to people and they're like, oh gosh, you know, I'm really sad.

(45:33):
I told my son, or daughter, to become a developer, and now there's no jobs for developers.
I'm like, that's not true; that's not the case.
We're gonna need as many developers as we can train for the foreseeable future.
Yes, you can do interesting things, and brilliant people can do good things, and they will continue to do good things, but it's not the end of developers as we know it.

(45:56):
Absolutely, yeah.

SPEAKER_01 (45:57):
And I think it's gonna be a bit of an art, it's gonna be a little bit subjective, and you have to be clever about where you want to use that gen AI coding.
It can probably automate a lot of the QA testing framework for you, which is great.

SPEAKER_00 (46:14):
Nobody likes, well, very few people like to do that, I guess.

SPEAKER_01 (46:18):
Yeah, yeah.
And even people who do love to do it, right, can suddenly just say, hey, analyze this code, tell me the paths, and help me, you know, kind of map it out so I can write my tests faster.
But you're a hundred percent right that, you know, having it be the smartest person in the room and implement the sophisticated logic to solve your business

(46:38):
problem.
If you're relying on gen AI code for that, you're gonna run into problems, because it's gonna do a bunch of stuff that no one understands.
Um, and it'll be very hard to debug.
And it's famously very hard to debug generative AI code, especially if you're using it a little irresponsibly and, you know, giving it very vague instructions and having

(46:58):
that generate thousands of lines of code, and it becomes impossible to debug, basically.
So definitely one of those interesting use cases that's clearly very lucrative.
Companies are making hundreds of millions of dollars off of it.
And on the other side, you know, there's new startups coming in and using that as a competitive advantage.
So it's great advice that you have for the business

(47:20):
leaders, because large enterprises will need to adopt this at some point too, so they can stay competitive and keep, you know, their efficiency up to market standards.
But it's absolutely important for them to understand that it's not going to come in and solve your hardest problems for you.
It can only help you kind of streamline that process, with other smart people solving your hard business problems.

SPEAKER_00 (47:42):
We as humans are still at the center of it all, and will be for the foreseeable future.
So this, you know, world where AI takes over, that's a long way off.
Absolutely.

SPEAKER_01 (47:53):
Uh, well, John K. Thompson, author of many books, most recently The Path to AGI.
Thank you so much for joining this episode of What's New in Data.
John, where can people follow along with you?

SPEAKER_00 (48:04):
Thanks, John.
It's so great to be here with you.
I love every time we get together and talk.
So grateful for the opportunity.
Thank you.
Uh, LinkedIn, that's the best place to connect with me: John K. Thompson, John Thompson.
And if you want to, you know, check out any of the books, Amazon's the best place for that.
Awesome.

SPEAKER_01 (48:23):
And the link out to your LinkedIn will be down in the show notes for the listeners.
John, likewise, I always really enjoy catching up with you.
Uh, hopefully we can do it soon.
We don't have to wait for your next book to do it.
And, uh, hope to see you soon.
Thanks, John.
Take care.
Bye bye.