Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
The same thing is starting to happen and will continue to happen with large
language models, and that is a disaster for a lot of the expectations that
people have for large language models.
So what'll end up happening is they'll be branded by the
partnerships that they have.
(00:24):
And so for a medical database, right?
Sure, you know, based off of what it already has, it'll pass
the MCATs or whatever, right?
And it'll do basic stuff.
But every day there's new knowledge accrued in medicine and the latency
of that is important, right?
And so if I'm, and I'm just using these hypothetically, I don't
(00:45):
have any insights or anything.
If I'm Cleveland Clinic, and I think I'm adding, creating new value every
day because of the doctors and the research we do, am I going to give
that to Bard and to open source Facebook and to Microsoft OpenAI?
Or am I going to say, look, I think we're worth a hundred million dollars a year.
(01:06):
Mayo Clinic is going to say the same thing.
Harvard's going to say the same thing.
And so then all of a sudden, they're all going to realize there's not
enough money for everybody to get paid.
And then the prices will come down some and they'll rush to it a little bit
to get out front, but you're always going to have a situation
where something's excluded.
(01:29):
Welcome to NEJM AI Grand Rounds.
I'm your co-host Raj Manrai and I'm here with my good friend and co-host Andy Beam.
Andy, this is episode number 10 and we have a very special
guest today, Mark Cuban.
Mark is a man who needs no introduction.
He's a shark on Shark Tank, the owner of the Dallas Mavericks,
a businessman and investor who also has been quite outspoken on the thorny social
(01:52):
and technical challenges in health care.
We had a great conversation about Mark's work on lowering drug prices
with the Cost Plus Drugs company, AI large language models like
ChatGPT, and of course, basketball.
Andy, I'm especially happy that you got to ask Mark Cuban a question about
Skip Bayless and large language models, which is a sentence I did not think
I would say this year or maybe ever.
(02:12):
All in all, this was a really fascinating and fun conversation.
Yeah, it was such a fun conversation, Raj, and I can't believe I got to
ask Mark Cuban about Skip Bayless.
However, what struck me the most, and what I think is Mark's particular
superpower, is his ability to walk into a super complex space like health care
and then instantly distill it down to a set of simple principles.
(02:32):
For example, he talks about his company Cost Plus Drugs and the product
that they're actually selling is not drugs, but it's actually trust.
They get consumer trust through radical transparency.
And this is something that you almost never see in health care.
And it's actually this trust, um, is the thing that they're selling.
You know, when I heard him say this, this was a big light bulb moment for me.
And I think, again, reflects what I think is a very unique skillset that he has.
(02:55):
He also does the same thing with large language models in health care.
Consumers want trust and therefore branded LLMs that
have been vetted will likely win.
So I may disagree with him on some of these points, but his ability
to so clearly articulate his position provides really fertile
ground to have that discussion.
And just before we start, I wanted to provide a note to our listeners.
As many of you probably know, Mark is a passionate guy and just as
(03:18):
a heads up, there's some colorful language at the end of this episode.
So you might not want to listen to it with the kids in the car.
And mom, if you're listening, I'm sorry.
The NEJM AI Grand Rounds podcast is sponsored by Microsoft and Viz AI.
We thank them for their support.
And with that, we bring you our conversation with Mark Cuban.
(03:40):
So, uh, thanks for joining ustoday on AI Grand Rounds, Mark.
It's great to have you on.
Thanks for having me on, guys.
Mark, let me also welcomeyou to NEJM AI Grand Rounds.
It's an honor to have you here.
This is a question that wealways get started with, and
please first forgive our AI puns.
Could you tell us about the trainingprocedure for Mark Cuban's neural network?
Take us back to the earlydata and experiences that led
(04:00):
you to where you are today.
And maybe also touch on when you firstgot interested in artificial intelligence.
So just from a tech perspective, I gota job out of college in Dallas at a
software store in the early days of PCs.
And about nine monthsin, I was rolling, and
I got fired for making the executivedecision to go out and close the sale.
(04:21):
That's really what got me into tech.
And the best part ofit was to get the job
the guy asked me a simple question.
You don't know the answer to a question,
what do you do?
And I told him, well,I do what I always do.
I read the manual.
And I don't have a problemreading the most mundane manuals.
It was like, all right, you got the job.
And that kind of set the tone for myapproach to technology going forward.
(04:44):
I always looked at it that whenever anew technology came out, there were two
types of people, the people who createdthe technology, they had the edge.
And everybody else.
And if I spent the time readingand learning and experiencing, and
I did it enough, I could at leastbe caught up with everybody else.
And so that turned into a systems integration company called MicroSolutions,
(05:06):
and taught myself to write software from, you know, BASIC, to a little bit of C.
At the time, Visual Basic, dBase, you know, database languages,
did a lot of really coolintegrations with audio and video.
That was a lot of first out, um, becameone of the first local area network
integrators for Novell, and that's reallywhere we made our money at one point.
(05:27):
Way back when I was the largest systemsintegrator for Token Ring for IBM.
I mean, just on and on and on.
Then I sold that to CompuServe, whichwas part of H&R Block at the time.
And then took some time off, tradedstocks, because it was weird back
then, it's totally different now.
People didn't really understandthe technology they were buying.
And so the fact that I was hands onwith local and wide area networking
(05:49):
technology gave me a big edge.
Then from there, my buddy from collegeat Indiana, Todd Wagner came to me one
day and said, look, this is the mid-90s,late 94, early 95 and said, look, there's
this new thing called the Internet.
That's happening.
You're the tech geek that I know.
We've got to be able to figureout a way to listen to Indiana
basketball down in Dallas, Texas.
So I started off thinking, okay, Iwas going to use some MTFS files,
(06:12):
stuff, way back configuration files.
Then they were going to do slow loadsover the Internet, over, you know, um,
anyways, long story short, we started
doing some basic streaming on demand,going to different radio stations
and music sources and created asite called AudioNet, where we grew
that, added video, became Broadcast.com.
(06:33):
We were the YouTube of our time, where we basically
dominated all audio and video.
We're a top 10, 15 website on the entire internet, sold that to Yahoo.
They destroyed it, bought theMavericks and fast forward.
I always was interested in AI,but AI was very much in the early
(06:54):
days from my exposure to it.
And if this, then that, right,it was logic based as opposed
to neural network based.
And so I tried to write some basicthings where if you're looking
to design a stream or looking to
you know, should you choose amulticast versus a unicast and what
protocols should you use, you know,run it through this and it was awful.
So that died very quickly.
(07:15):
And then fast forward with theMavs, always looking for an edge.
That's really what pushed me towardstrying to learn as much as I could
and read about AI as much as I could.
And then I read a book called TheMaster Algorithm, and that was
about seven years ago, I guess.
And that's what really gotme into it because it really
(07:37):
started to make sense for me.
At that point, trying to distinguishbetween machine learning and neural
networks, and even though there's no realgood definition, you know, people like
to call everything machine learning now.
I always looked at it as machinelearning was linear and neural
networks was not, was everything else.
And so basically refreshed on myJavaScript, taught myself how to
(07:59):
do a three layer neural networkin JavaScript from tutorials.
Read up, watched all the tutorials I could get with a goal of being able
to someday be able to take a lot of different inputs from as many different
sources as I could and put it into a neural network and hope it spit
out something of value for the Mavs.
And got to the point where I was like,no, I was going to have to spend all
(08:20):
day, every day trying to build models.
If I was going to do it and got tothe point where I can hire people
that continue to try to do that.
And the interesting thing about
that as it applies to the NBA andsports in general, it's become
a very efficient marketplace.
It's really, really hard to get an edge,even trying to, I think the biggest
push that we've made more recentlyis trying to use computer vision.
(08:45):
And a world of occlusion, which iswhat the NBA is and trying to figure
out ways to, you know, extractdata and put it into a network.
So, you know, that, that's been my journey.
That's a long, roundabout description of my journey.
Oh, but I love it.
It seems like basketball is a continuous sort of thread from the very
beginning, even up to now, right?
(09:05):
Well, I'll tell you thecraziest story there, right?
So.
When I was in Indiana, I did everything backasswards.
My mission was to take my hard classes when I was a freshman and sophomore
and not drink since I was underage and then take freshman classes when I was a
senior, so I could party like a rock star.
And so I snuck in and was able to register for a graduate-level MBA,
(09:28):
graduate-level statistics class when I was 18, and there was a prof
by the name of Wayne Winston, and he always used sports examples,
which made it easy for me to learn.
Got an A in the class,that's not the point.
Fast forward 20 years later, right after I bought the Mavs, I'm watching
Jeopardy!, and I'm like, oh wait, that was my prof, that was my stats prof
(09:48):
that was just killing it on Jeopardy!
Fast forward three months later,I'm at the Mavs Pacers game in
Indiana, and I hear Mark, Mark, Mark.
And had I not seen him on Jeopardy!,I would not have recognized him.
And I saw Wayne, we got together,and he became the first full-time
analytics employee for an NBA team.
And that helped us win I couldn't even begin to tell you
(10:09):
how many playoff matchups and got us to the Finals in 2006.
And that was long before anybody else.
And we were just doing basicregression analysis, but for
lineups, it was a big deal.
And so that's a little fun aside story.
That's cool.
So, you know, you've seena lot of technology trends
come and go over the years.
How do you compare the recentrise in AI with previous tech
(10:31):
booms, like Internet or mobile?
I mean, it's all relative, butI think it blows them all away.
Because it's consumerizedmuch more quickly.
So in the early days ofthe Internet, it was hard.
Right?
Trying to get people to understandwhat it took to set up just
to connect to the Internet.
(10:51):
People forget you had to have amodem, which came with every PC.
That wasn't a big deal.
But you have to have a dial up account.
Most people's dial up accountswere AOL or CompuServe or Prodigy.
These things people don't even remember.
But to try to get online, you had to have a TCP/IP client.
You had to have an Internet service client.
There were all thesesteps that were difficult.
And then when you got tostreaming, it was the same way.
(11:13):
You had to have all thoseclients to connect online.
And then you had to have astreaming client as well.
And then you had to find theright file to connect to.
Not like we see it todaywhere it's all automated.
Well, all of a sudden up pops these LLMs like ChatGPT and my 13-year-old
son is figuring out how to use prompts to get out what they want.
That's a big part ofwhat makes it different.
(11:35):
And that's a good and bad ina lot of respects as well,
because it's so black box.
And so, while we have yet to see thebest of it, I think the fact that
it's been consumerized so quicklyis what makes it so impactful and
even more quickly than the Internet.
Yeah, I think that's well put.
It's, uh, forgot the numbers, butsomething like a hundred million
(11:59):
users is the fastest time ever.
It's a hundred millionusers in a couple of days.
Most people used to complain howhard it was to get on the Internet.
You know, and how you could, youknow, and it's so funny how trust is
such an underpinning issue becauseback in the early days, oh, I'd never
put my credit card online, right?
I'm not going to give mycredit card to Amazon.
That's ridiculous.
It's all going to get.
So each new platform, if you will, hasits stumbling points, but LLMs have
(12:24):
just blown all that out of the water.
So we're gonna, we're gonna dig intoLLMs, but first we wanted to talk about
your work with, uh, Cost Plus Drugs.
Good.
So, you know, everyone agrees U.S. health care costs are unsustainable.
Prices make essential treatment inaccessible to many Americans.
I think we have this tendency inacademia to keep estimating the scale
(12:45):
of the problem, writing about it,theorizing new risk adjustment models.
But you said, let me start somethingjust to take on this massive societal
problem of patients not beingable to afford prescription drugs.
I don't know if you're a Game of Thrones fan, but when trying to tackle something
as thorny a problem as this, one image that comes to my mind is Jon Snow
just waiting for this sort of charging Bolton army.
(13:08):
Uh, you know, I think what'simpressed me is your commitment
to transparency and trust here.
So maybe we could begin with you tellingus about the mission as you see it of Cost
Plus Drugs and how you got started, whichI think I read, uh, started with a cold
email you received from a radiologist.
Yeah, so, um, Dr.
Alex Oshmyansky, a radiologist, um, sent me an email and I've, I've invested
(13:30):
in a ton of businesses from cold emails.
It's actually insane in hindsight.
And he had a compounding pharmacythat felt like the pricing for
generic drugs was out of whack.
And by making them, we could
save a lot of people, a lot of money,but that was hard to scale. As I started
digging in and doing like I did, youknow, reading the manuals across all
(13:50):
these different elements of the industry
it became obvious that this was abusiness where the incumbents did
everything possible to obfuscateeverything that they could, and that
they were really predicated on keepingthings the way they had always been done,
plus the obfuscations that they added.
And to me, when you see an industrythat hasn't really changed, it's
(14:13):
usually the easiest to disrupt.
It's just a question of where'sthe money and how can you create
a business that can be profitable?
And so after talking to Alex, then the next step was, okay, what is
it that would really separate us and really differentiate us so that
people would wanna work with us?
And the most obviousthing was transparency.
(14:35):
You know, there was not anything thatwould keep us from being transparent.
That was just an industry stepthat the incumbents used to sustain
profitability or increase profitability.
So as we put together the idea, I said,okay, if we're transparent about our
costs and we do a markup that we discloseso that everybody knows what the price
(14:55):
is, let's call it Cost Plus Drugs.
Let's go out and seeif we can get that URL.
We could, and we did.
And so it's costplusdrugs.com
and it's really a very simple business.
We buy drugs and we sell them on a transparent pricing basis.
So when you go to costplusdrugs.com and you put in the name of the
medication, imatinib, right?
(15:17):
You'll see, and I'm, I'm, I don't remember the exact price, but
let's just say it costs us $60.
Then you'll see, we add 15%, $9, so it's $69.
And then we add $5 for pharmacy fee because the pharmacist has to check
everything and $5 for shipping.
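To lay that arithmetic out explicitly, here is a minimal sketch of the transparent cost-plus pricing described above; the 15% markup and the two $5 fees come from the example in the conversation, and the figures are illustrative rather than an official fee schedule.

```python
# Minimal sketch of the transparent cost-plus pricing described above.
# The 15% markup and flat fees are the illustrative figures from the
# conversation, not an official Cost Plus Drugs fee schedule.

MARKUP_RATE = 0.15      # disclosed 15% markup on acquisition cost
PHARMACY_FEE = 5.00     # flat pharmacist check fee
SHIPPING_FEE = 5.00     # flat shipping fee

def cost_plus_price(acquisition_cost: float) -> dict:
    """Return an itemized, fully transparent price for a drug."""
    markup = round(acquisition_cost * MARKUP_RATE, 2)
    total = round(acquisition_cost + markup + PHARMACY_FEE + SHIPPING_FEE, 2)
    return {
        "acquisition_cost": acquisition_cost,
        "markup": markup,
        "pharmacy_fee": PHARMACY_FEE,
        "shipping_fee": SHIPPING_FEE,
        "total": total,
    }

# The $60 example from the conversation: 60 + 9 markup + 5 + 5 = 79.
print(cost_plus_price(60.00))
```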
And now we've added an additional option where, partnering with Kroger's
(15:37):
and a lot of independent pharmacies, rather than having to buy mail order,
you can go to an affiliated pharmacy.
We actually have a separate website called teamcubancard.com,
I didn't name it, um, where you can do the same thing.
And pay the same price and pick it up locally.
And by having the transparencyand having a markup of only 15%,
(16:00):
the net result was our prices werestunningly lower and people were
finally able to understand what theywere going to pay before they paid it.
Because part of the craziness ofthe industry, as you guys know,
if you think about the process ofgetting a prescription, your doctor
says, you need this medication.
Okay, the next question out of theirmouth is what pharmacy do you use?
(16:22):
Not, can you afford it?
Not, here's what it costs.
Not, you know, what insurance.
None of that.
It's what pharmacy.
And then when you go to the pharmacy, youstill don't know what you're going to pay.
You still don't know what yourco-pay is unless you've got
really, really good insurance.
And, on the flip side, on the other sideof the counter, the pharmacy doesn't know what
they're going to be charged yet and theydon't know how much of anything they're
(16:45):
going to make and in many cases theyactually lose money on a per prescription
basis and so the whole industry wasjust ripe for transparency and that's
the foundation of costplusdrugs.com.
It's a very simple business.
Very, very simple.
We buy drugs and we sell drugs alltransparently. And by doing that,
(17:06):
our total marketing spend from dayone has been zero, not a nickel.
And I remember when I wastalking to Alex, he's like,
well, we need a marketing person.
We need to pay.
I'm like, no, you don't understandif you're on chemotherapy and you
have a need for imatinib, right?
And you're saving $200or $500 or $800 a month.
(17:27):
What are you going to do?
Everybody else that you know, you'regoing to tell because you're in the same
Facebook community groups, you're talkingto your doctor, you're talking to others
in similar circumstances and you're goingto tell them because that's what we do.
You know, when that, if there's onearea where we're collectively aligned
in this country, still today is inthe distrust and the dislike of the
(17:50):
financials of the health care industry.
And so I knew everybodywould speak about it.
And so that's where weare at Cost Plus Drugs.
You can go to costplusdrugs.com and buy there and have it mailed to you.
You can go to teamcubancard.com and find a local pharmacy.
And we're actually in the process offinishing a manufacturing plant in
Dallas, where we'll be making injectablesof generics that are in short supply.
(18:15):
And we're going to start withpediatric cancer drugs, which are
horrifically short-supplied rightnow and causing hospitals to do a lot
of abnormal things and really, youknow, negatively impacting patients.
Now we'll have a capacity of oneto 2 million vials, so we're not
going to end the shortages, butonce we can get this thing up and
running, it's a great first step.
So, I think the thing that clicked for me.
(18:38):
When I saw you describe thiswas not like the business model.
Okay.
So we're going to, we're goingto sell generics cheaper.
So, that's like the business model.
But the thing that you saidthat really stuck with me is
that's not actually our product.
Our product is transparency.
And that's actually our product.
Our product is trust.
And the way we buildtrust is transparency.
Right.
(18:58):
And so that to me was likethe non-obvious leap here.
And it made total sense whenyou said it, is there something
in your approach, that's the abstractionthat you make when you look at this
very complex system, that that is thekey ingredient that's missing, or is
that just like the Mark Cuban of it?
No, no, that's the key ingredientthat was missing, you know, for the
reasons I just mentioned, right?
(19:19):
You can't go to a pharmacyand know what the price is.
And one of the benefits that'saccrued from all this is now
we've become the benchmark price.
So that there have been studies done,Harvard and Vanderbilt, all, you know,
et cetera, where they've comparedthe price that CMS pays for various
(19:39):
medications versus if they bought fromus and the savings, you know, I forget,
urology drugs, I think it was, that theCMS would save $1.2 billion dollars.
For some specialty generics, itwas like $6 billion dollars a year.
I mean, it's just insane.
And you get example after example where...
It's just mind boggling.
(19:59):
I had a friend who was, you know, tragically paralyzed and came to me
and said, look, I've got to get this drug droxidopa and I lost my insurance
and they're telling me it's going to be $10,000 every three months.
And I'm like, let me check.
I mean, I don't know what droxidopa is.
I'm not a doctor.
And so I check it's a genericand it's available generically.
(20:20):
And so, I have our guys go look.Within a day, we got the price
down to $60 dollars a month.
Now it's even lower and it's all becausewhen people just let things get done
the way it's always been done and nobodychecks to see if there's a better way
or opportunity, it just stays the same.
And health care is like that acrossthe board and, you know, right now
(20:42):
medications were the best place to start because you can inventory them.
You know, generally, health care, you can't inventory a doctor, right,
and just put them on the shelf and say, we're not paying you anything
until we have a patient come in.
I haven't found any doctorsthat work on commission yet.
And so, you know, we'll get tohealth care and other elements of it, but
medications were the best place to start.
(21:04):
I guess one more follow-up questionthere is, you know, Raj and I both
come from a technical background.
Raj was a physics undergrad, I wascomputer science, and for whatever
reason, we found health care interestingand the problems worthy of study.
My experience has been, though,that that's not typically the case.
When you're coming from a tech background,you show up with a set of assumptions
about how you're going to solveproblems, and then once someone shows
(21:26):
you the full scope of the complexity.
And it's not just technical problems.
It's like a socio-political thing.
A lot of people runscreaming the other way.
So what is it about you that made yousee these problems and not say, no
way, I'm not going anywhere near that.
I'm going to think hard aboutsort of the non-technical things
here also, in addition to what thetechnical solutions might look like.
So, I always, when I look at a problem,I always look at process first.
(21:49):
Cause process is theunderpinning of all technology.
Right?
You can't benefit from the technologyif you don't get the processes right
and you don't have the inputs right.
And so it was always like I just said earlier, you know, what's the
cause of this issue and having looked at it and read, you know, MedCap
reports and, you know, all this other stuff, just looked at analysis of, you
(22:10):
know, MCR, all the garbage that comes with analyzing health care, right?
You start to realize thatnobody knows anything.
Everybody guesses.
Even Medicare rates were set from like a 1993 benchmark price amount
that was just pulled out of thin air.
And so I just kept onquestioning the processes.
And then when you look at health care now,there's so many, like we saw a lot of
(22:32):
them at the a16z stuff, but everybody'strying to throw technology at a problem.
That's not really solvable by technology.
You know, you can gainimprovements to the processes
on the edges using technology, but itdoesn't change the fundamental problems
that are the real issues for health care.
You see it as, you know, fordoctors and handling patients and
(22:54):
the problems, but the real problemsare the insurance companies. Right?
If you were to extract the insurance companies and how payers dealt
with paying for care, 99% of doctors' problems would be over.
You know, it's only because you have to meet the goals and the
requirements of the payers that the whole structure is bastardized.
(23:19):
And so I've looked at different systems.
I'm a big believer that we just dump insurance companies, which is not going
to happen in, you know, in the next 10 years,
and replace it with means-tested payments to the government, right?
Where the government acts as the insurer of last resort.
And if you're making two times the federal poverty level or less, it's free.
(23:39):
If you make what I make, then it's 100% of the cost no matter what, right?
I just pay the bill, and everybody else is means tested in-between.
It may be 2% of your income, 5%, and then when you
pay it off, you pay it off.
You know, and if it's something that's a specialty medication or therapy,
um, and it's outrageous, outrageous just in terms of being expensive,
not outrageous in terms of value, then, you know, we put a threshold
(24:02):
above which you don't have to pay.
And the government just picks that up because taxpayers pay for illness
of everybody in this country.
No matter what you pay for it on the front end or the back end, and
it's just a matter of modeling it to figure out the best way to do it.
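To make the shape of that proposal concrete, here is a minimal sketch of a means-tested payment schedule along the lines described above; the free tier below two times the federal poverty level comes from the conversation, while the sliding percentages, income bands, and catastrophic cap are hypothetical numbers chosen only for illustration.

```python
# Illustrative sketch of the means-tested model described above.
# Only the "free below 2x the federal poverty level" rule comes from the
# conversation; every other number here is a hypothetical placeholder.

FEDERAL_POVERTY_LEVEL = 15_060  # approximate single-person level

def means_tested_payment(income: float, cost_of_care: float) -> float:
    """Patient share of a bill under a hypothetical means-tested schedule."""
    if income <= 2 * FEDERAL_POVERTY_LEVEL:
        return 0.0                      # free below 2x the poverty level
    if income >= 20 * FEDERAL_POVERTY_LEVEL:
        return cost_of_care             # high earners pay the full bill

    # In between, pay a sliding percentage of income toward the bill,
    # rising from 2% to 10% of income as income rises (illustrative).
    low, high = 2 * FEDERAL_POVERTY_LEVEL, 20 * FEDERAL_POVERTY_LEVEL
    rate = 0.02 + (income - low) / (high - low) * 0.08
    share = min(cost_of_care, rate * income)

    CATASTROPHIC_CAP = 25_000            # hypothetical threshold; above this,
    return min(share, CATASTROPHIC_CAP)  # the government picks up the rest

print(means_tested_payment(income=25_000, cost_of_care=8_000))   # 0.0
print(means_tested_payment(income=90_000, cost_of_care=8_000))   # partial share
```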
Did I understand that correctlyor do I hear Mark Cuban
advocating for something thatsounds a lot like single payer?
(24:22):
No, it's not, though, because the difference is single payer is nobody pays, right?
Other than... You're a graduated single payer system though, right?
It's means tested. Yep.
Right?
So that everybody that makes over twotimes the federal poverty level would pay
on a means tested basis, some percentageof their income, as they use the system.
And the reason you do it that way, ifyou start reading, as I did, like an
(24:43):
idiot, some of the proposals for singlepayer, the first thing it says at the
beginning of every federal proposalis it's created, run, and maintained
by Health and Human Services, HHS.
And whoever is in charge ofthat is the political position.
So that means every four years, there'sgoing to be somebody trying to modify
what single payer is or Medicarefor all is and how it's defined.
(25:04):
And that's going to be a disaster.
So you have to look at the processesfrom, like you mentioned earlier, from
the socio-political side and tryingto anticipate what those problems
are going to be. And you're alwaysgoing to get a problem where every
four years, somebody else has a newway of doing it, which you can't put
patients in that position. But if you make it, you
know, to paraphrase, to your need, from your ability to pay, right.
(25:28):
It's not every day that you hear billionaires quoting Karl Marx.
So I, yeah, I know.
It's, it's, it's great.
Yeah.
Yeah.
So, but you know, I, again, likeI, I got into it with, um, somebody
who really leans far right
in terms of how do you deal withhealth care. And I, and I just try to convey
as much as you want to apply empathy,
it's just a math model.
(25:50):
Right?
We're paying for every bit ofhealth care and wellness that
we've experienced in our lives.
Um, for everybody in this country,we pay for it one way or the other.
We pay for it via education, we payfor end-of-life, we pay for it not
having primary care, whatever it maybe, that cost gets picked up somewhere.
And it's possible to model that,just nobody, it's just not easy.
(26:13):
Right.
And so, you know, maybe that's atransition to AI because, you know,
being able to ingest a lot of thisdata and trying to extrapolate where
the, you know, what data is relevantis such an enormous project, maybe
that's an opportunity in an application.
Mark,
so, you've run a lot of organizations,but correct me if I'm wrong.
The pharmaceutical industrywas not a major focus area for
(26:35):
you before Cost Plus Drugs.
So, you're, you're jumping into this area, you know, with this plethora of arcane
acronyms, PBMs, ICDs, CPTs, PDPs, HSAs.
And, you know, as Andymentioned, right, exactly.
So our listeners won't be able to see it,but we're all sort of like, uh, shrugging.
It's just, you know, a lot of acronymsnow, particularly with analytics, right?
(26:57):
Yeah.
Nothing compared to the pharmaceutical.
It's complexity, right?
It's complexity and it'shuman and obfuscation.
Like it's a lot, an obfuscation.
But so, you know, exactly as Andy, Ijust want to spend another moment on
this before we jump into the next topic.
You know, Andy, Andy said that we see all these very smart people come from math and
(27:17):
physics and tech and computer science, and then they just get overwhelmed and sort
of disillusioned and run the other way.
You know, I think you said it,which is you just, just read, right?
You read the manual, you read, youtalk to people, you learn, but what,
you know, what, what can you tell ourlisteners who are intending to do good
in healthcare, but just get overwhelmedby this complexity and don't even want
(27:40):
to bring their, their sort of theirskillset to solving these problems?
You ignore it, right?
You don't, you don't try to change what's already there.
You try to reimagine what it should be.
And can you get there?
So when you, that's been partof the problem with a lot of
the investments in health care.
They try to take what appears to be on thesurface, the obvious application of new
(28:00):
technology to gain better results, right?
I'm gonna use telehealth, I'mgonna use the Internet, I'm gonna
use this, I'm gonna use that.
There's so much money in thisindustry, um, across all the facets
of it, that trying, I mean, it'spossible to make money on the edges.
And if your goal is an exit, thenthis is a great industry to do things
(28:24):
on the edges in order to get an exitbecause the incumbents will buy you
to, to, you know, avoid any potentialcompetition or other issues, right?
So from an exit perspective,
it's a great industry to get into as a, for a technologist. But from a
disruption perspective, you have to ignore everything that's going on and
just ask yourself, can I do it better if I were going to recreate it today?
(28:48):
And the crazy part is if you go back in history, 75 years, they were doing
exactly what we're doing, right?
You'd go to the pharmacy, the pharmacistpurchased it from the manufacturer and
they show you the price and they'd sellit to you, you know, after the doctor
prescribed it, it was super simple.
And for medications, it'sthe same for health care.
It's kind of the same way too.
You know, I looked at potentiallybuying a hospital in North Texas.
(29:11):
And with the idea of going back to back in the day, where the origin
of insurance was a local hospital would charge all people in the area
X number of dollars per month as a backstop in case there was some
care that they couldn't afford.
And then it just got complicated, right?
And then when there were price freezes,it became a benefit for a workforce.
(29:34):
But why couldn't we go back to thatoriginal model where you get rid
of the insurance companies and youtake a hospital and you look at all
the costs and you don't have all the, you know, what do they say?
20 to 25% of costs in health care are administrative costs.
And you don't have those costs and you just simplify it.
And you go to the community and you go tothe individuals, you go to the families
(29:57):
and you go to the businesses and say,look, local business, instead of you
pushing your people through the ACA, orinstead of you self-insuring or you buying
healthcare, we're going to charge you$200 per family, per month, whatever it
is, and we're going to go get reinsurancebehind that, which is an extra
however much, and thenwe'll make the numbers work.
(30:18):
Well, the problem was when I lookedat this hospital, there were so many
extraneous things that they werealready doing that it was impossible.
The biggest challenge is how do youdetermine what services are needed?
Do you need gynecology, you know, andobstetricians, or do you need an ICU?
And so, I wasn't qualifiedto do that at this point.
(30:39):
But I just think a simplification isgoing to be what changes health care
as opposed to trying to improvethe processes using technology.
I don't think that's the end game.
I think what I'm hearing you sayis that a lot of action in this
space is essentially retrofitting.
Like you have this outdated house,this outdated building, and you're
(30:59):
trying to retrofit something ontoit for a house that maybe wasn't
made for electricity or plumbing.
And forget that, just build a new house.
You just build a new house, right?
You know, people, it's really easyto understand people's motivations
when it comes to health care.
They want to be healthy.
And when there's an issue, theywant to be able to get healthy at
(31:20):
a price that doesn't wipe them out.
They'll even pay more if theyfeel safer and trust it and think
there'll be better outcomes.
But there's nobody you can trust.
That trust is missing and you're not going to go into the existing system and find
somebody who makes $75,000 or $40,000 a year and put them
(31:40):
in that system and say everything's okay and have them believe you.
That is just not going to work.
But if you do it in a transparentmanner and create a hospital
and say, here's all my costs.
Here's my P&L and here's my generalledger using accounting terms, right?
And I'm gonna make those opento everybody without showing
personal patient information.
(32:02):
Then you start building trustthat wasn't existing, and then
you can start making decisions.
Now, doctors don't necessarily like togo along with that, and you know, doctors
want to earn more money, etc, etc.
So there's going to be a pushand pull to find the equilibrium
from a cost perspective.
But, at that point, maybe laws willchange, or you try to make changes to
the laws so doctors can be equity owners.
(32:24):
Or even...
Patients, even your patients inthat community can be equity owners.
There's no reason if you open up ahospital that everybody within a 10-mile
radius that becomes a subscriber andis paying their X dollars per month.
You can't allocate a third ofthe equity to those people.
So, they participate in any appreciation.
So, I like it that I thinkthat's a good transition to the
next question we want to ask.
(32:44):
So, you know, I think the recurringthemes I'm hearing in your
discussion of Cost Plus Drugs aretransparency, patient empowerment,
and how together they build trust.
I want to stay on those meta themes,but switch gears from Cost Plus Drugs
to another topic, which I thinkwill also move us right to AI.
So, you know, this is the idea.
It's very important to me as aresearcher to, you know, Andy and
(33:06):
I, uh, discuss this work on this.
And a lot of, a lot of researchersare working on this, uh, today.
And this is the idea of what'snormal in your health exam,
including in your blood labs.
So our listeners, uh, can't see this, butnormal, I'll put in quotes here, right?
Um, because it's, it's very,it's very much debated and
misunderstood in medicine.
Even for these routine markersthat are used every day.
(33:28):
Um, so you commented on Twitter a few years ago, you know,
you knew, you knew it was coming.
You knew it was coming.
You knew it was coming.
It was coming.
And I got torched.
So yeah, so, so I want to discuss that.
So you, you know, you commented that weshould all get our blood tested broadly
and quarterly to have our own baseline.
If you can afford it.
If you, if you can afford it.
So you got some pushback andone concern was that this
(33:49):
would lead to false positives.
A normal test result wouldlead to another test.
Anxiety, potentially a proceduredown the line you don't need.
You know, so Andy and I, as he mentionedearly on, we're postdocs sitting
in lab discussing this like feud ofMark Cuban with all these doctors.
And we just started analyzingit and thinking about it.
And then we spent maybe a day ortwo just talking about this debate.
(34:10):
So, what's interesting to me is thatif you had said something in a similar
spirit that wasn't a testing policy,
but was instead about medicine's ignoranceof what normal baseline variation is
or what population prevalences are,
I don't even think this is news.
So, you know, who are thosecomparable demographics we assume
are on our, our blood reports?
Is it people the sameage and place of birth?
(34:32):
Is it people who can do,you know, it's not right.
Is it people who can do onelegged fadeaways like Dirk?
That's a silly example, but this isa real fundamental problem, right?
How do we define normaland what is normal?
We're having this
big conversation around themisuse of race as a way of
indexing your normal variation.
So, it's actually shocking to me thatwe make so many of these decisions
in medicine without understandingnormal variation or assuming certain
(34:55):
group characteristics are kind of thestand-ins for the individual person.
So forward looking question for you, you know, maybe now, and as
we move into talking about LLMs and GPT-4 and its descendants.
Can you see signs that point us to a reality in medicine that is more
welcoming to data to inform clinical decision making around normal variation?
Look, you know, as you walk downthe street up there and look at
(35:18):
how many people have iWatches.
You don't buy an iWatch to tell the time.
It's the data accumulator that goes into your iHealth, right?
And now you're able to look at things like gait.
I mean, I wear, I got to get my thousand move points every day,
or I feel like I failed, you know?
And so that's your indicationthat people are willing to do it.
(35:40):
Now that's not cheap, right?
And there's also therisk for false positives.
But even if you have a symptom and you go into a doctor and get a blood test, you
have a greater risk of a false positive having a more dramatic impact, right?
Because that, that false positive, hopefully, is random.
And it's just as likely going to hit you on your singular test as it
(36:02):
is if you have a series of tests.
But with that series of tests, you know what the outlier is.
And so, like I mentioned back then, I didn't know I had a
Synthroid problem until I saw my TSH scores start to scale up.
And so, it got me ahead of the game.
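The value of a personal series of tests can be made concrete: with repeated measurements you can flag a new result against your own baseline rather than only a population reference range. Below is a minimal sketch of that idea; the z-score threshold and the example TSH values are illustrative assumptions, not clinical guidance.

```python
# Minimal sketch: flag a new lab value against your own baseline history,
# the way a series of TSH results lets you see a trend starting to scale up.
# The threshold and example numbers are illustrative, not clinical guidance.

from statistics import mean, stdev

def is_personal_outlier(history, new_value, z_threshold=2.5):
    """Return True if new_value deviates sharply from the personal baseline."""
    if len(history) < 3:
        return False                      # not enough data for a baseline
    baseline_mean = mean(history)
    baseline_sd = stdev(history)
    if baseline_sd == 0:
        return new_value != baseline_mean
    z = abs(new_value - baseline_mean) / baseline_sd
    return z > z_threshold

# Hypothetical quarterly TSH values (mIU/L): a stable baseline, then a jump.
tsh_history = [1.8, 2.0, 1.9, 2.1]
print(is_personal_outlier(tsh_history, 2.0))   # False: consistent with baseline
print(is_personal_outlier(tsh_history, 4.5))   # True: worth a closer look
```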
And so, again, not everybody isgoing to be able to afford it.
And then, going on another tangent, Iwas at an event that Elizabeth Holmes
(36:25):
was at and she was talking about it.
And so I connected with herand she invited me to see
everything and potentially invest.
I'm like, no, I'm not goingto invest because you broke
all these rules of business.
You know, she had this huge buildingand all this security for a startup.
I'm like,
something's wrong here.
And then I saw she was running her analysis, her blood analysis,
on Windows 7 servers.
(36:46):
I'm like, but anyways,but that concept, right?
I love the concept because just apinprick to be able to get your blood
drawn and I get my blood drawn everythree to six months to this day.
And now, as you were alludingto, there's more markers.
There's companies like Grail that arecoming out and we'll all just get smarter
as these things start to happen, butyou can't just dismiss it because you
(37:10):
don't trust your fellow doctors to beable to analyze the data correctly.
And that's what this comes down to.
I'm smarter than you.
Now, I understand for big populations,you have to be able to deal with the socio-
political issues because you don't want tohave people feel like they're marginalized
because they can't get their blood tested.
And so there's a balance there as well.
(37:32):
But again, It's becoming moreaffordable and we're making bigger
investments and managing our own data.
And I think getting your bloodtested is just a natural progression.
So, I admit to not having KarlMarx and Elizabeth Holmes on my
bingo cards in this conversation.
I got more!
I got more!
(37:52):
Just wait until the lightning round.
Before we get there though, Iwant to switch gears and talk
about LLMs and specificallytheir applications in health care.
So, um, you know, as you and I talked a little bit before about, uh, the emergence
of ChatGPT, um, Bard from Google, and now there's this whole ecosystem of
LLMs that are reshaping large sectors of the economy, large sectors of science.
(38:14):
Um, so before we dig into thoseapplications, I'm actually just
curious, given your sort of penchantfor productivity, how do you use
them in your own life right now?
Not as much as I expected to, honestly.
I use them more as a typing hack,you know, where, okay, I know I'm
going to have to type a lot, whetherit's a plan, whatever it may be.
So you use the prompt and it spitsout something that's, you know,
(38:34):
five pages at a time or whatever.
And that saves me a whole lot of typing.
I don't use it as much as a searchengine as I did at the beginning
because you can't go back to the sourcesand I don't know when it's lying.
And so at least I can look at, in a searchengine, and I always use this example, are
(38:55):
vaccines safe, are COVID vaccines safe?
And I can look at the authors and lookup their history in a search engine.
I can't do that with an LLM.
And so it's mostly for, for sillystuff, or to write a business plan, or
to kind of act as an adversary, pickapart this business, or, you know, how
would you compete with Cost Plus Drugs?
Because it's, I analogize it to, I used tolove to walk through bookstores, you know,
(39:19):
back in the day, because I couldn't affordto buy the books, but I would sit there
and glance through them because all ittook was one idea that helped my business.
And it was well worth the time.
And if I was able to buy it, buy it.
And LLMs are a lot the same where beyondthe typing hacks, you might just get one
rudimentary idea or one advanced concept.
(39:39):
Now, that's for the stuff I do.
If I was a programmer,
it's a whole different beast now, particularly with the
tools that are available.
If I was teaching myself to program and starting to get into
that, I'd be all over ChatGPT.
And, and so maybe it's worthcutting right to the chase there.
Um, do you think that we'rein a hype bubble for LLMs?
If you start with this like huge, it'syour search engine, it's your everything,
(40:01):
but actually it's just a better sort ofgrammar check for me or, you know, text
generation when I'm writing and maybe it'sworth qualifying for your average person.
I think we'll go into thescientific verticals later, but
for your average person, have weoversold what current LLMs can do?
Yes and no, right?
So, let's put in terms of collegekids or high school kids, right?
I like it and I tell my kids to useit because if you don't have domain
(40:23):
knowledge, you're gonna look like a moron.
If you ask it to write a paper aboutthe Civil War, and it gives you
the dates 1961 to, you know, 1965,and you don't catch it, you know...
Also, some high schoolers are putting in,like, as an AI language model, I cannot
tell you blah blah blah blah blah asthe beginning of their essay, too, so...
Right, right, right.
You gotta filter that stuff out, yeah.
Right, you gotta filter stuffout, and there's ways to...
(40:45):
You know, deal with the sensitivitiesand all that kind of stuff, right?
To game it in some respects and kidswill get smart like that, but it
doesn't change the hallucinations,particularly in the more complex things.
Like I have a company called Foodguides.com, which is for
people with acid reflux and similar issues, and just a way to find foods
and stuff that are safe for you.
And we put together an LLM as a test there.
(41:08):
And we found that if somebody nested more than five questions to the
prompt, it always hallucinated.
So we had to put a limit to five questions.
And you're starting to read more research that says the exact same thing.
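As an illustration of that kind of guardrail, here is a minimal sketch of a pre-prompt check that refuses requests stacking too many questions; the five-question cap comes from the anecdote above, while the counting heuristic and function names are assumptions made for the example.

```python
# Minimal sketch of the guardrail described above: cap how many questions a
# single prompt can stack before it is sent to the model. The five-question
# limit comes from the anecdote; counting question marks is a rough heuristic.

MAX_QUESTIONS = 5

def count_questions(prompt: str) -> int:
    """Very rough heuristic: treat each question mark as one question."""
    return prompt.count("?")

def guard_prompt(prompt: str) -> str:
    """Refuse prompts that stack too many questions; otherwise pass through."""
    n = count_questions(prompt)
    if n > MAX_QUESTIONS:
        raise ValueError(
            f"Prompt appears to contain {n} questions; "
            f"please ask at most {MAX_QUESTIONS} at a time.")
    return prompt

# Two questions in one prompt passes; a chain of six or more would be refused.
guard_prompt("Which snacks are safe for acid reflux? Are tomatoes okay?")
```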
I don't mind it for kids or everyday people, if you will, because as long
as you understand that it hallucinates and you have to check your hole card.
I have a semi-spicy take that I think hallucinations are
(41:30):
this like transient issue.
Essentially, it's an engineering problem that we haven't solved yet, but I feel
pretty confident that in the next two years, we will have mostly solved it.
So, it depends on how incremental information is ingested, right?
So right now, everybody's using vectors
to try to look up databases because that's a simple, just regular database lookup.
(41:51):
Right.
You know, I need to access this piece of information, who is the president,
you know, and it just looks it up.
Right.
But that's not really knowledge and there's no wisdom associated with that.
And so I think being able to modify ChatGPT so that you
actually ingest everything.
And that helps train it, which is expensive, obviously,
(42:13):
but it'll get cheaper.
I think that's why Facebook's open source model really has legs.
Because just using databases to access,
I don't think that solves the problem of hallucinations.
I think it makes it worse.
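For readers unfamiliar with the "vectors" approach being contrasted here, this is a minimal sketch of embedding-based lookup: facts are stored as vectors, a query is embedded, and the nearest stored fact is retrieved and handed to the model as context. The toy bag-of-words embedding and sample facts are assumptions for illustration; production systems use learned embedding models and a vector database.

```python
# Minimal sketch of the "vectors as database lookup" pattern discussed above:
# store facts as vectors, embed the query, return the nearest fact as context.
# The bag-of-words "embedding" and sample facts are purely illustrative;
# real systems use learned embedding models and a vector database.

import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' so the example is self-contained."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

facts = [
    "imatinib is a generic oncology medication",
    "the federal poverty level is updated every year",
    "the dallas mavericks won the nba finals in 2011",
]
index = [(fact, embed(fact)) for fact in facts]

def retrieve(query):
    """Return the stored fact closest to the query, to be placed in the prompt."""
    q = embed(query)
    return max(index, key=lambda item: cosine(q, item[1]))[0]

print(retrieve("when did the mavericks win the finals"))
```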
Oh, interesting.
So I, maybe I'll come back to that, butwhile we're staying pretty broad here.
I'd like your opinion on the AI revolutionthat we're currently experiencing.
(42:35):
So, you were at the forefront ofthe personal computer revolution,
the streaming internet revolution.
And those were, I think, paradigmshifts in large sectors of the economy,
large sectors of everyday life.
Again, I think you live in a similarbubble that Raj and I live in, and
all of us should walk around with theassumption that the AI moment that
we're having now is of a similar scale.
And I wonder, like, if you agreewith that, or if, again, we're just
(42:57):
living in this echo chamber wherewe all know, I agree with it, but
I, what I think everybody's missing, and I think you and I alluded to this
in our conversations before, is the business side of it, that the business
side of it will skew everything that'll happen with large language
models, because one of the lessons of the Internet for IP companies was:
(43:20):
Google and others were built on the back of their IP.
If Google was not able to spider everything, if the Wall Street
Journal and The New York Times and every newspaper in the country,
you know, turned spiders off day one,
search engines wouldn't be that valuable, would they?
Right.
And having talked to people atMicrosoft, excuse me, and others who
(43:41):
are working with LLMs, their perceptionis, well, we'll just spider away.
And you're already, you saw SarahSilverman sue, you're seeing the music
companies, you're seeing a lot of IPowners work together, even though The New
York Times, like, extracted themselvesfrom a group that was doing that.
And this also applies tomedicine in particular.
People are not just goingto contribute their IP.
(44:02):
As a result, just like when we were starting AudioNet
that turned into Broadcast.com,
in the early days of the streaming industry, just like
the early days of the Internet, people gave us all their content.
Because they wanted the reach.
They wanted to be able to reachpeople that were online that they
otherwise couldn't reach with anewspaper or radio station or whatever.
And we would say, look, radio stationand then TV station, you can't reach
(44:26):
anybody in the office where there'sbroadband and so give us your content.
We'll show you the numbers of simultaneous listeners and
you'll know your extension of reach.
And then they got to the pointwhere those numbers were big enough
where they said, well, shoot.
You know, I'm not goingto give it to you anymore.
You're going to have to payme, or I'm going to keep it
myself because there's value.
The same thing is starting to happen and will continue to
(44:47):
happen with large language models.
And that is a disaster for a lot of the expectations that people
have for large language models.
So, what'll end up happening is they'll be branded by the
partnerships that they have.
And so, for a medical database, right.
(45:08):
Sure.
You know, based off of what it already has, it'll pass
the MCATs or whatever, right?
And it'll do basic stuff.
But every day there's new knowledge accrued in medicine and the latency
of that is important, right?
And so if I'm, and I'm just using these hypothetically, I don't
have any insights or anything.
If I'm Cleveland Clinic and I think I'm adding, creating new value every day
(45:33):
because of the doctors and the research we do, am I going to give that to Bard?
And to, you know, open source Facebook and to, um, Microsoft OpenAI? Or
am I going to say, look, I think we're worth a hundred million dollars a year.
Mayo Clinic is going to say the same thing.
Harvard's going to say the same thing.
And so then, all of a sudden, they're all going to realize there's not
(45:56):
enough money for everybody to get paid.
And then the prices will come down some and they'll rush to it
a little bit to get out front.
But you're always going to have a situation where something's excluded.
And I don't know when we get to the point where whatever it was that a doctor
came up with based off of a patient or research experience, that the AI will
(46:17):
figure that out without that experience. So, it's interesting because, like, I think
like many other people I think of LLMs as like the new search engine and therefore,
the trajectory of Google is the right model. But I think what you're saying is
actually, the right model is, like, Spotify.
So, for Spotify, solving the technical part was a prerequisite, but the licensing deals
that they struck with content creators and, and music and record labels were the
(46:41):
enabling factor that made them successful.
And I think that that's whatI hear you saying, right?
Yeah.
And they're still losing money.
You know, all these years later, andthose, those content providers keep
on raising what they charge Spotifyand Spotify finally raised their
prices after trying to absorb it.
And, so, you're exactly right.
And that's a better way to put itthan I'd been putting it, because
(47:01):
why wouldn't you, if you're acontent owner, if you're an IP owner.
And then you have the Taylor Swifts of the world that just walk
away and renegotiate an entire...
Well, yeah, just rerecord everything.
And now it's hers, right?
In particular, you know, for
an agent that I use for content and entertainment, I'm like, hey, I'm
willing to license my voice, right?
I'll record whatever, and I have, I'm an investor in a company called Synthesia.io,
(47:23):
which does a lot of that, right?
And so, I record stuff forthem where you can have me say
anything you want in my voice.
That's very handy forus in this interview.
Right?
I like the answers that you give.
This is the best podcastI've ever been on.
As long as you pay the commercialrate, I'm good with that.
But you get the point, right, thatthis is not a direct line like people
(47:44):
expect it to be because IP drives it.
And as much as we don't know what thatneural network's going to create for
the LLM, right, we do know that ifthere's something that's excluded from
it, the chances aren't great, unlessit's very horizontal in basis, that it's
not great for that model to get there.
(48:06):
And that while it may not...
It may or may not hallucinaterelative to that IP.
It's going to lean one way or the other interms of what it does output to prompts.
And so that's like a comment onwhere the raw inputs are going to go.
But I think I also hear you saying thatlike branding, getting back to your sort
of central health care thesis is also goingto be a trust mechanism for patients.
(48:30):
Yes.
So, if I, you know, why am I goingto trust random Joe Schmoe's LLM when
there's a Cleveland Clinic LLM that hasbeen vetted by, trained on that data.
And then it's going to be Mayo versusCleveland versus Harvard versus you name
it, or they partnered or we partnered.
Just like you see now with researchand different projects, right?
And there's a reason why thoseplaces pay premiums to, you know,
(48:52):
guys like you to go work for them.
Because they want that brand and trust and they want the best they can get, and
no matter what we do with LLMs right now, there's a huge difference between
knowledge and wisdom, and we have no basis on which to judge wisdom being output
from an LLM. Could I get you to tell me the difference between knowledge and wisdom?
(49:13):
Sure, knowledge... if I said this to you before... I know, this is one of the
things that I didn't get to follow up on the last time we spoke. Okay,
so knowledge is knowing that a tomato is a fruit. Wisdom is knowing not
to put a tomato in a fruit salad.
Interesting.
I heard that from a rugby player guy.
I'm gonna have to meditateon that one for a while.
(49:35):
Am I wrong?
Have you ever seen atomato in a fruit salad?
Uh, nope, I have not seen a tomato in a fruit salad.
That is wisdom.
You have to know to be able to take a fact.
So to use a computer science metaphor,it's like integration tests versus
unit tests or something like that.
You know, something like that.
(49:55):
Okay.
So, I guess one thing that I'd like toask your opinion on before we move to
the lightning round is I think you'veclearly shown that you don't need AI to
lower health care costs, but I'm curiousif you think that, you know, that's one
of the things that we all agree that we have an unsustainable health care system.
Something has got to rein in costs.
Is AI at least a partial solution to therunaway health care costs that we have
(50:18):
now, or are they going to make them worse?
Because I can imagine a scenario wherenow you have an infinite billing machine.
You can run infinitetests and that could be...
It's not even the billing machine, it'sjust who is the payer and how do they
make determinations about the servicesthey offer and what they charge?
And the payer makes those determinations.
Here's what I'm going to pay, andhere's what I'm going to pay for.
(50:38):
And...
AI is a complexity in a medicalenvironment that allows them to obfuscate.
Look, the ultimate blackbox is their ultimate dream.
We can't even determinesources right now, right?
I mean, you can at somelevels, but, right?
So your best answers are the ones that
come from your most expensive assets, whether they're researchers,
(51:01):
doctors, whatever it may be.
And those are the ones you'regoing to charge the most for.
There's no system within that, there's noenvironment within the health care system
where the processes are so efficientthat people are going to charge less.
Okay.
I mean, I, I, I, I hope thatthat's not the answer, but I
think that you're probably right.
I think there's an opportunity tochange that, but that goes back
(51:23):
to the building the house, right?
Right.
You've got to build a newhouse that's designed with it.
And I think people will try that.
You're seeing more and morecash based care centers that
I think have the right idea.
They just don't, it's because you can'tinventory doctors who aren't being used.
That's the Catch 22.
(51:44):
Okay, so I think, Raj, unless youhave any follow ups, I think it's
time for the lightning round.
Let's go!
Okay, um, so the goal here is like, we'regonna ask you a bunch of random questions.
Uh, you can take them as seriouslyor unseriously as you want to.
Uh, but because it's thelightning round, try to keep
(52:06):
your answers brief if possible.
Yeah, okay.
Um, so I am an avid pickleball player, have a 4.0 rating.
Um, I know that you are an owner of a professional pickleball team.
Uh, so the question is, uh, will a pickleball tournament ever have the same
viewership as a major tennis tournament?
And I know you like data, so I have some numbers for you.
The U.S. Open gets about 800,000 viewers, which is a, you know, a major
(52:28):
tournament, uh, the, uh, the sort of second tier tournaments like
Indian Wells get 400,000 viewers.
So the answer is no, but just remember the medium is the message because
those numbers are defined by the platform that you produce it on.
So. Got it.
Okay.
All right, Mark, who is thebest NBA player of all time?
(52:49):
And why is it LeBron James?
Okay.
It's Dirk.
Love it.
And that's my, that's my answer.
I'm sticking to it.
We'll allow you a little bit more leeway.
If you'd like to elaborate on thequestions, you know, the M.J., um, LeBron
thing, if I have a team where I need akiller to finish out a game and get that
last bucket, then I'm going with M.J.
(53:10):
If I have a team that has got a lotof great players and I need someone
to make the right basketball play andpotentially, you know, hit the game
winning bucket, I'm going with LeBron.
That's fair.
That's fair.
Um, if you had, uh, so if you had to pick one of these two individuals to fight in an MMA bout, who would it be?
Mark Zuckerberg or Elon Musk?
(53:32):
I'll go with Mark because he's training.
I mean, training's everything.
Yeah, so you're, but I'm saying you yourself have to fight one of them.
Who would you pick?
Oh, I don't care.
I'd kill them either way.
I would train first.
I wouldn't just walk in and think, you know, oh, hey, you know, I'm bigger than these guys.
I can beat them.
I'd have to...
So we got a twofer there, I think.
I think you're putting your money on Zuck, uh, in the Coliseum showdown.
(53:53):
Uh, and if they're fighting you, it doesn't matter because it's going to be a wash.
Yeah, as long as I train, I'm good.
All right.
I think I know the answer to thisone because you alluded to it.
But have analytics and AI made the NBA game better or worse?
I think it's made it better, but now it's an equal market.
It's almost impossible to get an advantage.
There's no new data, there's nothing.
So you've seen everybody adopt pretty much the same approach to basketball.
(54:16):
So a uniformly better product, but now there's essentially no, like, information arbitrage opportunity because...
Okay.
Got it.
Um, so I've wanted to ask you this one for a long time, uh, but which NBA commentator could be most easily simulated by a large language model, and why is it Skip Bayless?
(54:38):
Oh, because he says the same dumb shit over and over and over again, right?
Same shit.
Different day.
Yeah.
So we can, we can just throw an LLM, uh, in, in, in for Skip and we should, right?
Right. We'd actually get better commentary.
Lemme just tell you, I think the most undervalued LLM opportunity is to, uh, absorb, and you could charge people for this obviously, absorb everybody's,
(55:02):
um, email and texts and voicemail messages and make that their internal, you know, their, their eternal LLM.
So a hundred years from now, Skip Bayless can have his show.
40 years from now, Skip Bayless can have a show, and all of us could, for that matter.
No clutch gene, no clutch gene, no clutch gene.
You watch him way too much.
(55:24):
All right.
What, what skill set do you value most for founding a medical AI company?
The technical machine learning skill, medical knowledge, or business skills?
Curiosity.
You've got to always keep on learning.
That's the key.
All right, so this is our last lightning round question.
Um, so I know you said you're not running for president. You can correct the record
(55:44):
here if you want to, you heard it first here. But if you were given absolute power for one day, what change would you make to the U.S. health care system?
Um, I'd build a reinsurance program and get rid of payers. You couldn't outlaw them, but you can make it so that you price them out of the market.
Awesome.
Yeah.
All right.
Mark, congrats on surviving the lightning round.
(56:06):
We just have a few concluding questions left.
So the first one is, will AI replace or augment physicians?
No, because as long as, as long as evolution is evolution, and as long as there is lead time or just delays, right?
There's always going to be something new, and you're not going to be able
(56:28):
to ingest and process fast enough.
All right, so this question requires a little setup, so I'm going to have to read a couple sentences for you before we get to the question.
So, have you read Marc Andreessen's "Why AI Won't Cause Unemployment" blog post?
Yeah.
Um, so the, the crux is, is that there's essentially two sectors in the economy, highly regulated and unregulated.
(56:49):
And as a function of GDP, the regulated versions are going to essentially dominate the economy because they're rising faster than the rate of inflation.
In the unregulated sectors, essentially the marginal cost of everything is going to zero.
The example he uses is you can buy like a 50-inch HDTV for a hundred bucks now.
Previously that was like a $5,000 thing.
So just the math works out where these highly regulated sectors of the economy
(57:12):
are essentially going to be the entire economy, and sort of by definition, you can't replace, you can't move AI in.
You can't do technically innovative things in that space.
And so, he uses this argument to say that AI isn't going to cause mass job replacement.
I guess, one, do you agree with that?
Oh no, mass.
So, I'm thinking ahead.
No, I don't think it's going to cause mass job replacement.
No.
Okay.
(57:32):
Okay, do you think it's for them?
I agree with him there.
Okay, the regulation essentially ensconces...
No, because he talks a lotabout regulatory capture.
Right?
And that's what we see a lot with health care.
And there's tons of regulatory capture trying to go on right now in the pharmacy business.
Right?
And we basically...
You know, disassociate from that.
So, you can disrupt a regulatory-captured business, but if it,
(57:56):
if it's not disrupted, then it continues to grow faster.
But I think the piece that applies to AI that he's missing is we will have technically literate politicians at some point
who will start to use AI for government as a service and to use AI to try to optimize what happens in government, not just from a regulatory perspective, because that's
(58:16):
not really where you benefit, but in optimizing processes and, and creating.
So, if you look at it, if you were going to disrupt the United States government and its impact on its citizens, you would say, okay, here's our tax pool that we have as of day one, and we're going to model out as best we can.
Here's all the services that we think our citizens need.
(58:36):
And you can even use technology to have polls or whatever to get, you know, real-time feedback or near real-time feedback, and start to weigh these things and try to, you know, use AI to determine across this model, and I'm trying to optimize it: what gives me the best outcomes for the citizens, and what processes can I include that use technology and AI to accomplish those?
(58:58):
So, where he's wrong is regulatory capture is, enhanced isn't the right word, but increased, because we're so human in what we do, right?
The number one job of a politician is not to improve things, it's to keep their job.
The number one job of a political party is to sustain and retain power.
And so, if you start to undermine those things, you start to undermine the
(59:20):
opportunity for regulatory capture.
So, that's why I've been a big proponent of ranked-choice voting and other types of models, because it takes away a lot of the extremes and allows more opportunity for people to participate.
So, if you start thinking in terms of government as a service and optimizing government, because we finally have people like yourselves that are technically
(59:40):
literate, you start to diminish the regulatory capture, which undermines his point, if I read his points correctly.
I guess, but the bet then is that there's going to be technical literacy within the government, and that...
At some point, yeah, because by default, you know, my generation, we're idiots, right?
We went from sex, drugs, and rock and roll to Fox News.
(01:00:02):
How the fuck that happened, I have no idea, right?
It's the most embarrassing thing ever.
I thought we'd all be 65 years old smoking joints and playing the guitar and, you know...
And singing "Give Peace a Chance," and instead we're like, well, what happened to Tucker Carlson?
That's just ridiculous.
That's just ridiculous.
And so, I think with Gen Z and maybe the Millennials, but particularly Gen Z
(01:00:24):
and Gen Alpha, whatever they're calling it afterwards, where you're born technically literate and not having it there for the things that you want and need is, you know, ridiculous to them, you're going to see it.
So, in science, we have this phrase, uh, that, uh, a field advances one funeral at a time, that there will be some luminary who makes some discovery, and essentially the field is captured by the ideas of...
(01:00:47):
I don't think it's a luminary discovery.
Sometimes it's just, you know, you know, cars came around and people realized it; the Internet came along.
It wasn't, the Internet was not some discovery.
AI was not some discovery, right?
It was that somebody productized it.
And somebody will productize government as a service and say, look, all these things that you do right now that are ridiculously difficult or take, you know,
(01:01:09):
a lot of time and money, we're going to automate.
We're going to simplify that right now.
We're going to be able to look at your existing...
So I'm going to ingest the entire state of Texas.
Every single one of your websites.
I'm going to take all the code from all the software that's behind it, and everything that you do on a technical basis, um, using software.
I'm going to ingest those into this large language model, and I'm going
(01:01:31):
to give it prompts to say, take all that information, take the data that we have for our citizens, and output a new platform, a new product that accomplishes all the same things on a least-cost, optimized output basis.
That, if we sat in one of your classes and just tried to design that, we wouldn't say that's extraordinary, right?
We would say that's a normal potential program to try to create.
(01:01:54):
That's what's going to happen.
Isn't this just like health care though, where we have the technical solutions, but, you know, there's regulations that dictate you have to take the minimum bid, so contractors will underbid?
That's because of the politicians, right?
As long as you've got the system that we have in place... who's, who's the dude from New York that lied about everything?
Uh, Santos.
Santos, right?
(01:02:15):
And there's equivalent Democrats that have been charged with whatever, but they have to keep them because they need to keep the majority.
You're right.
And so, when you start making decisions based off of that, that's because of the underlying political system.
If you want to destabilize all these downstream issues that we've just talked about, you have to change the political system.
If you look at where there's ranked-choice voting, that's where you saw Republicans
(01:02:39):
vote against Trump in impeachment, right?
That's where you see them vote against other issues that, in other Republican locations, you wouldn't see.
You know, so looking at disruption, you have to look at the underpinning of the problem.
And while regulatory capture is an issue, that's a function of how we elect our officials.
Interesting.
Again, not where I thought that was going.
(01:03:01):
And I feel like my job has been done, because I got you to drop at least one F-bomb.
I think that was, that was on the checklist.
So, I'll hand it over to Raj for, I think, probably...
Wait, wait, before you go, am I way off base, or what do you think?
I don't know.
I, I'm just skeptical that, technically, like, I would not place my bets on the government becoming extremely technically literate, even 50 years from now, I mean...
So by what standard?
(01:03:24):
Um, so it may be technically literate by the standards of 1980, uh, in 50 years. But Mark is saying this is our kids, right?
This is our kids growing up in 50 years, right?
They're growing up with GPT-4 and, you know, it's going to be a different group.
Yeah.
I mean, look, all it takes is one, one, somebody going in that's literate on GPT-4 and saying to the local university, here's access to everything that we do
(01:03:48):
online, and here's all the software that we have going back to COBOL, going back to assembly language, whatever it is.
I want you to import all this software and spit it out in Python or whatever it is, right?
And what the most optimal output is.
That's not hard.
It's not simple, but it's not hard.
It's not technically hard.
(01:04:09):
I agree with you there.
And also, that's the point.
All it takes is one person.
All it takes is one person who's literate enough to say, fuck, why wouldn't we do this?
Now, if I did run for president, that would be one of the first damn things I would work on, right?
Because that's... If I may, Cuban 2024: fuck, why didn't we do this?
(01:04:29):
Let's do it right.
Let's get this shit right.
Alright, so we have one last question, and I, I mean, you might disagree with the premise actually of this, of this question, uh, but I'm hoping you can leave us on a note, to bottle up some of your energy and do good things in medicine and health care.
So, I think you've said that the most important talent that you have is your ability to sell.
(01:04:49):
Can that be taught, or is it like humor, something that you've either got or you don't? Or can you teach people to sell?
It's really easy to teach people to sell, and selling is really a simple proposition.
How can I help you?
Just all the things we just talked about in terms of politics and regulatory capture, the hard part isn't the technology.
The hard part is selling it.
And once you teach somebody that it's not about, you know, selling ice to
(01:05:13):
Eskimos, to, to use a poor phrase, but it's about, here's your needs.
And let me make sure I understand your needs.
And here's how I think I can make your life better.
That's what we did with CostPlusDrugs.com, right?
We didn't really invent anything.
We just made it transparent, and the selling takes care of itself.
As long as people trust us, it sells itself.
(01:05:34):
So I think selling can definitely be taught.
All right.
That's what I was hoping you were going to say.
Go ahead, Andy.
Yeah.
I might want to ask just, like, one follow-up question, and just 'cause I'm curious how you keep this going.
So, you have been so successful at selling.
Obviously, you've had a lot of success in business and other sectors.
My experience has been that when people reach a certain level of success, they run out of people to challenge them.
(01:05:54):
But it seems to me, at least based on following you on Twitter, that hasn't been a problem for you.
So how do you make sure that there's still enough people around you who can challenge your ideas, and you're not just surrounded by yes-men?
I'm never surrounded by yes men.
I mean, I'm so competitive.
I'm always looking for a game.
Now, my basketball game's not where it used to be, so it doesn't take as much to challenge me there, but intellectually...
(01:06:15):
I'm always looking for a way to challenge myself, that's why I do stuff like this, right?
And that's why I asked you, what do you think?
You know, because I don't want someone just to agree with me.
You know, I, I learn more when people disagree with me.
And it's like going back to that blood thing on Twitter, you know, back in the day.
We're doing it now.
We're calling out Elon or calling out whoever.
You know, it's unfortunate that it brings out the trolls, but it's
(01:06:38):
still, there's still value there.
Right.
Awesome.
Mark, thank you so much for being on AI Grand Rounds.
This was a real pleasure.
Yeah, this is fun.
I really enjoyed it, guys.
Great.
Thanks for coming.
We had a lot of fun, too.
And I'm so glad I got to ask you about Skip Bayless, because that's one of my favorite, favorite YouTube clips of all time.
You just completely... You know, it was not on my card to be asked about Skip Bayless.
(01:06:59):
When the Mavs beat the Heat in 2011, and then you went on the show and just...
It was great.
You torched it.
I put up, I had a picture of my son who was like two at the time, and I forget exactly what it said, but I used it to call out Skip, right?
He's like, come on the show anytime, anytime.
So I happened to be in Miami where they were filming and
(01:07:22):
I'm like, okay, I'll come on.
And I started talking and I kept on waiting for the hammer to fall, right?
Because they prepare for these shows, and him and Stephen A. were on there.
And I figured at worst, Stephen A. is gonna try to rip me, but...
They just do vocal cord warmups.
Like they just shout, they don't actually do, like, information.
Yeah.
He's like, no, no clutch gene, no clutch gene, and then blah, blah, blah.
You're like, well, no clutch gene, huh, Skip.
(01:07:43):
You know, or those guys.
And then he, like, got the frozen, the frozen water, whatever.
Right.
Oh my God.
I really expected it.
Something... I, I kept on, you know, trying to stay on my toes 'cause they were gonna hit me with something, and it just never came.
Awesome.
Well thanks again, Mark.
This was great.
I really enjoyed it, guys.
Have me on again.
Thank you.
And let me know the feedback.
Let me know the feedback you guys get in terms of, oh, he's an idiot.
(01:08:06):
Because if someone thinks I'm an idiot, I probably am, but I'm willing to learn.
Right?
You know, just so whateverfeedback you get, let me know.
Will do.
Appreciate it, guys.
Thanks.