
January 15, 2025 36 mins

Nicholas Thompson is the former editor-in-chief of Wired and current CEO of The Atlantic. There, he negotiated a controversial partnership with OpenAI that The Atlantic’s newsroom referred to as “a devil’s bargain.” In his free time, he uses AI to help himself run faster and write better. Through it all, he maintains a worldview perhaps best described as “techno-enthusiasm.”



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thanks for tuning in to TechStuff. If you don't
recognize my voice, my name is Oz Woloshyn, and I'm
here because the inimitable Jonathan Strickland has passed the baton
to Karah Preiss and myself to host TechStuff. The
show will remain your home for all things tech, and
all the old episodes will remain available in this feed.
Thanks for listening. Welcome to TechStuff. I'm Oz Woloshyn,

(00:23):
and I'm Karah Preiss. So it's Wednesday, and starting on
today's Tech Stuff, and every Wednesday going forward, we're going
to bring you an in depth conversation with one of
the brightest and farthest seeing minds in all of technology.
For me personally, hosting this podcast with you is kind
of a dream come true. Partly because I love spending time

(00:45):
with you, but also because I love getting the opportunity
to sit down with people who are in many cases
building the future and asking them what they're looking at,
how they're building, what they're scared of, and what they're
excited about, and then bring that back.

Speaker 2 (01:02):
And it is my dream to have you do all
of the work and to respond to it.

Speaker 1 (01:06):
So thank you. So for our first Wednesday episode of
TechStuff, there was no one I wanted to reach
out to more than Nicholas Thompson.

Speaker 2 (01:14):
I really like Nick Thompson. I remember when he was
at Wired.

Speaker 1 (01:18):
Yeah, he was the editor-in-chief of Wired, and he's
been a long time chronicler of tech. In fact, I
really like this thing he does on LinkedIn almost every
single day, which is a kind of selfie video called
The Most Interesting Thing in Tech This Week.

Speaker 2 (01:33):
It's a podcast series, and actually we found ourselves mentioned for
a very particular reason.

Speaker 3 (01:40):
Hosted by Karah Preiss and Oz Woloshyn.

Speaker 1 (01:42):
And so, look, one day back in twenty nineteen,
one of Nicholas Thompson's Most Interesting Things in Tech was
our very own podcast that we hosted together, Sleepwalkers.

Speaker 3 (01:53):
But what I like about it is it's real reporting,
and analysis, but there's realism about the complicated trade-offs.
They're both optimistic and pessimistic.

Speaker 2 (02:08):
Yeah, that was a very wild moment for us. I mean,
you and I do like getting press hits. My mother's
a publicist. I know what a big deal it is.

Speaker 1 (02:14):
Certainly, when Nicholas put this video up on LinkedIn, obviously,
the first thing I did was to get his email
address and write to him and ask him to have
a coffee, which he agreed to do. And subsequently, Wired
magazine actually syndicated our podcast Sleepwalkers as a column,
which was just very very very cool and very exciting.
And I think in some ways it is part of

(02:36):
the reason we're back in the seat a few
years later, because he contributed to giving us the confidence
and maybe even the credibility to be hosting tech stuff today.

Speaker 2 (02:45):
Absolutely, I think for us to have the real deal
put his stamp of approval on things, I think was
very exciting for us. But you know, we can't assume
that everyone knows who Nicholas Thompson is, so what does
he do now?

Speaker 1 (02:59):
So Nicholas Thompson went on to become the CEO of
The Atlantic. So yeah, he's kind of bounced around throughout
a long career in journalism. And Nicholas has written about politics,
about the law, and of course technology. He's been a writer,
he's been an editor, and an author of books. He
wrote The Hawk and the Dove: Paul Nitze, George Kennan,

(03:20):
and the History of the Cold War. And because he
never ever stops, he's writing a new book called Running
for Your Life on middle age marathons and the quest
for peak performance.

Speaker 2 (03:31):
I just think about what I do in a day,
what he does in a day. But you know, I'm
very excited to hear from him. I think
we also gravitate towards him as a person because
he is incredibly multifaceted.

Speaker 1 (03:44):
Highly, highly, highly energetic. Disturbingly energetic, in fact. He
really is just a ball of kind of optimistic energy.
And we had a lot to cover together. We talked
about the deal he struck with OpenAI in twenty
twenty four as Atlantic CEO, which included licensing the magazine's
archive to train AI.

Speaker 2 (04:03):
That was drama.

Speaker 1 (04:04):
That was drama. So I asked him about that, and
there are a few other kind of big questions, but
I started the conversation asking about running and how he
used tech to beat his best ever marathon time well
into his forties.

Speaker 3 (04:17):
I had to somehow convince myself at a subconscious level
that I could go faster than I thought I could.
And the funny thing about running, which I didn't quite
understand then, is what slows you down often isn't physiological pain.
It's your body creating an illusion of physiological pain. Because
it's worried that you'll lose homeostasis if you continue a

(04:39):
pace for a certain period of time. And if you
can convince your mind that you can do more, well,
then you can do more. But what do you use
to convince your mind? Well, you have to use your mind.
So I started using an arm heart rate monitor, and
so I actually had very accurate readings of my heart
rate as opposed to the highly inaccurate readings that we
normally have, and that allowed me to both sort of

(05:01):
titrate the effort during workouts and during races, but also
to have confidence right when you're running a race and
you're running at a fast pace and your heart rate
is oh look, my heart rate is only one thirty five, right,
I'm okay, I can go harder. That is extremely useful. Now,
of course I use AI. I upload everything I've eaten.
I ask it for nutritional advice. Oh yeah, you
you know. This is what I had for breakfast, This
is what I had for lunch. This is the workout

(05:21):
I ran yesterday. What would you recommend I do between
now and my next workout on Friday.

Speaker 1 (05:26):
It's great. How much of a paradigm for human-machine interaction
do you think this kind of experience is, your experience
with optimization through running?

Speaker 3 (05:36):
Huge. I mean, if you think about AI, it's very good
at tasks where it's better than the best human available
and the answer doesn't have to be one hundred percent accurate,
where a fast answer involving all of the inputs can
be ninety five percent accurate and it's good. Right, And

(05:56):
that's the case for what should I eat for dinner tonight? Right? Right?
Like, even if it tells me I need a little
extra protein and maybe I don't need extra protein, who cares?
But it still knows a lot more about nutrition than
I know about nutrition, and it can analyze the content
of the foods I've eaten in a much better way
and it's an extremely useful tool.

Speaker 1 (06:17):
Yeah, because I mean, you're not a professional runner, although
you were kind of in the elite or sub-elite
category.

Speaker 3 (06:22):
Sub sub-elite, which is better, because sub-elite
sounds ridiculous. Elite, you know, excellent. Like, I've
never been the elite elite, well, not in running.
Perhaps in other ways you are, but not
elite elite.

Speaker 1 (06:37):
Also, I mean, you've talked about a runner,
you know, who had a digital twin. Talk about that.

Speaker 3 (06:43):
Yeah, so Des Linden, who is a wonderful runner. She set
the world record for women in the fifty K, and
she's a force of nature. And so she had
TCS build a digital

Speaker 1 (06:56):
Twin of her heart.

Speaker 3 (06:58):
And it's still early days to see how useful that
can be. Right now, it sort of just explains in
much finer detail how she recovers from a workout and
how she benefits from her workout. But you can imagine
if I had a digital twin of my heart, I'm
sure I could optimize workouts in a way that I
can't now.

Speaker 1 (07:18):
Well, I guess you look at F1. There's only
a certain number of hours the cars are allowed to
be on the track, so that's why simulation is so
important for F1. Similarly, in running, I mean, you
don't want to be running way too much, right? So
in a sense, this idea of simulating training allows you
to do way more training than you could otherwise do, right?
That's the real point.

Speaker 3 (07:36):
Well, I mean, there are also specific things. Like, so
right now I run ultras and I'm trying to run
a fast fifty-miler, and the problem is, in ultra training,
you never run fifty miles in a workout, and so
you can't actually test your body and see whether you're
going to make yourself puke if you take in five hundred
calories an hour for five hours. If you could
do that through a digital twin, you could say, okay,

(07:57):
here's how my digestive system works. Here's the rate at
which I burn calories. Here's how fast I'll be running.
What is the optimal number of carbohydrates that I can
take in without throwing up? That would be phenomenal.

Speaker 1 (08:10):
Running is one of your key passions. And then there's writing.
I think I read that you've put the interviews that
you're doing for your book through an LLM to kind
of have new connections and themes suggested.

Speaker 3 (08:22):
Yeah, so this is a really interesting process. I try
to use large language models in every way possible as
I write this book about running, with the exception of
writing any words. Not one word in the book will
be written by AI. But I try to use it
for everything else, to see where it's good and where
it's not good, and also to accelerate the process. Because
when you write a story at The Atlantic or The
New Yorker, you have this team of editors behind you, right,

(08:42):
helping you all the time. When you're writing a book, it's
a much more solo project. And so the way the
book is structured is it's partly about my life. It's
partly about my father, and then it's partly about different
runners who I've encountered or competed with along the way.
And some of them are people that I've interviewed episodically
over a long time period. And so one of the characters,

(09:03):
for example, is Bobbi Gibb, the first woman to run
the Boston Marathon, the mother of a friend of mine in
high school. I have all these interviews. Yeah. And so
the most useful task is I wrote a section on
her in the book and say it's three thousand words,
and then I've fed all the interviews into a large
language model, and I said, here's the section I've written.
Here are all the interviews. Is there anything I've written
that is inaccurate based on what you've said? Are there

(09:25):
any quotes from her.

Speaker 1 (09:26):
Factually inaccurate, or an inaccurate characterization?

Speaker 3 (09:30):
Both. Yeah, it's less good on factually inaccurate. But, like, is
anything I've said kind of unfair? Which is a test
you should do as a journalist anyway, But is anything
I've said unfair? And are there any quotes that she's
given me that are better than the quotes I've included?
And in fact it said yes, you should include this,
and you should include that, And then I went back
and I said, okay, great. Now I have also, at

(09:50):
different points I've said, you know, here's the whole manuscript,
you know, with the privacy protections on, so it's not
fed back in: where should I add this? And it's
less good at that, But there are specific narrow tasks
where it's amazing. It's just like having a very smart
research assistant right there.

Speaker 1 (10:08):
It's interesting you added that little caveat about the privacy
settings, because I think you had another experience, with a
book you've already written, that had to do with AI and
was less positive.

Speaker 3 (10:17):
Right. Well, yeah, so there's this big debate about my job
as CEO of The Atlantic: how are we licensing The
Atlantic's data to AI models? And there is a direct
process whereby they, in the last few years during which
they've trained their models, have come to our site, have
scraped it, have uploaded it, and that is something that
we have some control over, right, and we can say

(10:39):
don't do that. We can say we're going to license it,
we can say we will sue you for doing it.
But what is so interesting is that a huge percentage
of The Atlantic content in these models doesn't come from
reading The Atlantic. It comes from, well, The Atlantic's website
was already captured as part of this process by which
somebody captured the whole open web, or someone copied and

(11:00):
pasted an article into Reddit or put it
on Instapaper or whatever. And the same thing happens
with books. So my book published by Macmillan, The Hawk
and the Dove, you know, on Paul Nitze and George Kennan
and the history of the Cold War, I never licensed it
to an AI model. But, you know, it went out
to libraries, and then somebody, you know, whoever, scraped
the text of all those books. Like, there are all

(11:22):
these data sets that include the words in my book
that have been fed into all these large language models.

Speaker 1 (11:27):
And that feels weird having realized that, what do you do?

Speaker 3 (11:30):
Well, it's complicated, because there's a question
of fair use. The AI companies argue that what they've done
is fair use: they've just taken data and they've transformed it, right,
because you can't write The Hawk and the Dove with
one of these things. It's transformative. It's just like they
went into the library and read. And then, not only that,
because it's from a data set and they didn't know
the contents, it's kind of a secondary copyright violation. It's

(11:53):
a little bit different if they had come and taken
Hawk and the Dove, photographed it and fed it in.

Speaker 1 (11:57):
Yeah, they found a wallet in the street rather than
taking it out of your pocket.

Speaker 3 (12:01):
Or they went to a thrift store and they bought
a big bucket of things and there were some wallets
in there, right. So you don't have a lot of options.
The option that I am most supportive of is being
pursued by a company called ProRata, and what they
are doing is they are building a kind of reverse
AI tool that will evaluate the answer given by a

(12:23):
large language model, weigh the sources that went into it,
and then overlay a payments process. So it's a little bit
like ASCAP, right. And the idea is, if an answer,
like OpenAI answers a question, and their answer, based
on the work that ProRata has done, derives from

Speaker 1 (12:41):
The Hawk and the Dove.

Speaker 3 (12:42):
You know, one percent derives from The Hawk and the Dove,
and, you know, OpenAI makes one penny off of it,
then I should be given some fraction of one percent
of the one penny, right. And that's what ProRata
is trying to

Speaker 1 (12:53):
Develop that as a business model.

Speaker 3 (12:54):
I'm on the board of them, as a full disclosure. But
there are a couple of companies out there that are
trying to solve this compensation issue, because a lot of
value has been created from copyrighted materials for which the
copyright holders were given nothing. There's not been a
fair exchange of value, and that's a problem you

Speaker 2 (13:12):
Have to solve.
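The arithmetic Thompson sketches here, a payout proportional to how much of an answer derives from a given work, can be written down in a few lines. This is a back-of-the-envelope illustration only: the function name, the fifty percent rights-holder pool, and the attribution weights are all hypothetical assumptions, not ProRata's actual, non-public model.

```python
# Hypothetical sketch of an attribution-weighted payout, in the spirit of
# the ProRata idea described above. All names and numbers are illustrative.

def payouts(revenue_per_answer, attributions, rights_holder_share=0.5):
    """Split a rights-holder pool across sources by attribution weight.

    revenue_per_answer: dollars the AI company earns on one answer.
    attributions: mapping of source -> fraction of the answer derived from it.
    rights_holder_share: assumed fraction of revenue paid out to sources.
    """
    pool = revenue_per_answer * rights_holder_share
    total = sum(attributions.values())
    return {src: pool * weight / total for src, weight in attributions.items()}

# One answer earns one penny; 1% of it is attributed to the book.
shares = payouts(0.01, {"The Hawk and the Dove": 0.01, "other sources": 0.99})
print(shares["The Hawk and the Dove"])  # a fraction of one percent of a penny
```

The amounts per answer are tiny, which is the point: the model only pays out meaningfully when aggregated over a very large volume of answers, much as ASCAP aggregates per-play royalties.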

Speaker 1 (13:13):
So you're now CEO at The Atlantic, but you were
previously editor of Wired. And I have to ask you,
I mean, what's your advice, honestly, to us as we
start this new podcast, as to how to
approach this set of stories and problems?

Speaker 3 (13:26):
I'm drawn to what I call tech enthusiasm, where I
love tech, right, and I think two things are true.
One, tech is amazing, and two, in the long arc of time,
technology makes the world better for people. That said, I
was never a perfect fit with the sort of
pure optimism of early Wired. And also, the
position of pure optimism kind of fit the tech

(13:50):
industry when they were the underdogs, but once they were
the dominant forces in the world, it was a less
appropriate response. In any case, you should choose your own
view of how you come to tech, right, and your own truth.
But like my view of tech is I'm constantly trying
to learn about it. I'm constantly trying to understand it,

(14:11):
and every now and then I stop and I'm like, wait,
this is horrifying. But then I'm like, you know what,
like I'm enthusiastic. I'm just gonna keep going. We're gonna
keep looking at this stuff. Because one of the risks
in AI is that you look ahead and you're like, God,
this is I'm terrified of this. And then you say,
you know what I'm gonna do. I'm gonna like just
I'm gonna be like King Canute, I'm gonna say the

(14:33):
AI is not happening. Like on principle, I refuse and
I'm not going to use AI because I don't like
what it's doing to the world. Well, that's not the answer,
because it's not going to go away just because you
find it scary. You're just gonna miss the moment where
you can shape it in a way that maybe makes
it less scary.

Speaker 1 (14:51):
When we come back: The Atlantic's decision to partner with
OpenAI and how that decision was received in the newsroom.
I also really enjoy your newsletter, which is another content
output that we haven't mentioned.

Speaker 3 (15:11):
I didn't even mention that, Yeah, that's a fun one.

Speaker 1 (15:14):
In June, you picked up an essay by Leopold Aschenbrenner
called Situational Awareness, which I was, you know, both quite
curious about and also slightly put off by the tone of.

Speaker 3 (15:26):
Oh, I mean, I don't think I've ever
read an essay that has the ratio of insight to
alienation of that essay. It ends up like
ten times as insightful as a lot of what you read
and like ten times as alienating, right. And so, you know, it
begins with this sense that you know, because the author
got really high grades at Columbia, you should trust him

(15:47):
to like see the entire future, and he sees the
future nobody else does, and you're like, okay.

Speaker 1 (15:51):
He's also an OpenAI researcher.

Speaker 3 (15:53):
Right. So he worked at OpenAI, left
OpenAI, and he is, in fact, exceptionally bright, right.
And I've spent time talking to him; he's exceptionally fun to
talk to. And when you talk to him, it's a
little easier than when you read the essay. But in any
case, the essay says, hey, everybody, wake up: this is
what happens if there's exponential AI. Here's how the improvement
curve works, and here's what it will be able to

(16:15):
do soon. And now we've watched AI go from being
as smart as a toddler to being as smart as
a high school student to being as smart as a
pH d student. Now let's just extrapolate to when it's
smarter than a Nobel Prize winner, and then when it's
building a model by itself that we have no control over.
And maybe he's wrong. Lots of people challenge his assumptions,

(16:36):
and the question of whether AI will scale exponentially is
hotly debated. The part that I found dangerous, that I
think probably contributed to a series of mistakes that we
are making right now, is the view: okay, if you
play it out, then whoever controls AI will control the world,
and so therefore we need to really make sure that

(16:56):
it's controlled in the United States and not in China.
So therefore we need to have a very antagonistic relationship
to China. We need to make sure they can't hack
in and get our systems, and we need to set
our foreign policy to prevent them from getting AI. And
you can see not that Leopold Ashenbrener is responsible for,
you know, the Biden administration Chips Act and anti China
policies and technology, but he contributed to a conversation that

(17:17):
I think has led to a set of policies that
most people think are good, very aggressive policies by the
United States to try to slow down China's AI industry,
but that I think are bad.

Speaker 1 (17:30):
You credit that Situational Awareness essay, as you say,
with a real policy shift.

Speaker 3 (17:33):
Like, Leopold Aschenbrenner was probably in high school
when Trump started going after Huawei. But if you
look at the conversation this summer: why did SB 1047
in California, which was strict AI regulation, why
was that knocked back by Gavin Newsom, governor of California?

(17:54):
I think in no small part because of a fear
that if we regulate the AI industry, China will get ahead, right,
And I do think that the sort of China Hawk
element of the AI industry contributed to the defeat
of the most systematic attempt at regulation. And I do

(18:15):
think that Aschenbrenner's essay played a small role in that.

Speaker 1 (18:20):
Which I guess segues to: you're no longer just an editor,
you're also a CEO, right? I am, yes. And I'm
wondering, how do you think differently about AI as an
editor versus as a CEO?

Speaker 3 (18:32):
Well, as a CEO, you have all these other hard questions, right?
So as an editor and a writer, you're just, like,
finding things that are interesting; you're feeding your curiosity.
Whereas as a CEO, I have to think about how it
will literally change my company and prepare for that. Right? So,
what will it make easier? What will it make harder?

(18:53):
How will it change jobs in the future. If you
assume that AI will be powerful, should you be
hiring a different kind of person? Right, So you'd be
hiring somebody who's more flexible about what they do, who has
many different skills as opposed to very narrow skills, right?
So you make those decisions. That's one category of decisions.
Probably the most pressing is you have to anticipate how

(19:13):
it will change your field and then how you operate
in it. So how will it change the production of media?
But one of the things that's doing is it's changing
how search engines work. Right, We're going from search engines
to answer engines. Answer engines don't drive traffic. We get
the plurality of our readers from search engines. So if
search engines go away and we move to answer engines,
where will our readers come from. Gosh, there won't be
as many of them. Okay, can your business survive and

(19:36):
thrive in that ecosystem? So that is a hard problem.
So what will it mean if AI becomes as good
as Leopold Aschenbrenner thinks? And I asked him this question,
like, how long will, you know, serious publications have a moat?
And he's like, oh, three years, right. I was like, great, right,

(19:56):
But if you believe in his view, in three years,
anybody will be able to say, hey, make me a
magazine that's just like The Atlantic, right, in, you know,
two seconds, and so a guy in Macedonia will make
a pseudo Atlantic, and so you'll have these new competitors
that you're dealing with, right. So then there's a third category,
which is how do we interface with the large language
model companies? And this is the question related to what

(20:17):
you asked me earlier about my book, and that is, like, okay,
which ones do we make deals with?

Speaker 1 (20:21):
Which ones do we sue?

Speaker 3 (20:21):
Right? And then there's kind of a fourth question of
you know, what products can we build and are there
things that we know about media that we can build
using AI that we can then productize and turn into companies.

Speaker 1 (20:31):
One of the questions this conversation raises, of course, is
if I'm a reporter at The Atlantic and I hate
this conversation, I might think, you know, the CEO of
the company thinks that in two or three years there
may no longer be a need for me, and this
company may be replaced by a Macedonian spoof. How do
you respond to that?

Speaker 3 (20:50):
Well, so, first off, I don't believe that it's going
to go that fast.

Speaker 2 (20:55):
Right.

Speaker 3 (20:56):
That is the view of Leopold Aschenbrenner and others, right,
when he said, you know, you have a moat for
three years. I disagree. I think the moat is much longer, right.
And why do we have a moat? Well, first off,
there's no indication whatsoever that AI can write with any
kind of style and voice. Like, it is terrible at
it; ask it to try to write in style and voice. It
can write poems that are kind of silly. It cannot

(21:19):
report, right, and it can't go out and have a
conversation with a source. The stuff that makes Atlantic stories
Atlantic stories, it can't do, right? We just had Robert
Worth out reporting with Ukrainian fighters in the streets
of Ukraine. Do you really think, even if you believe
the most optimistic AI scenarios, that somehow your AI bot

(21:44):
is going to be able to get these guys on
the phone and, like, will be able to talk as honestly?
There's no way in hell that's

Speaker 1 (21:49):
Going to happen.

Speaker 3 (21:50):
So the Atlantic and serious long form publications that write
with style, that do complicated stories with interesting narratives and
do reporting is going to be around for long, long, long, long,
long long time doing the things that it does. Now
all of that said, I would be a fool not
to think about how AI is going to advantage competitors.

(22:14):
The publications that will be started by Macedonians with really
good prompt engineering skills, and that will exist in a
web where search is totally different. Right, And so I
think that the Atlantic will be publishing the kinds of
stories that we.

Speaker 1 (22:27):
Publish as far as I can see.

Speaker 3 (22:30):
And I also think that preparing for a world of AI
is something that is extremely important for me as CEO.

Speaker 1 (22:35):
So, that said, your own magazine, I think, referred to
the deal you made with OpenAI as a devil's bargain, right?

Speaker 3 (22:42):
Yes. This was a deal that members of our editorial
team were not fully supportive of. Like, you know, I
don't tell them what to write, and they don't tell
me what to do. And I am one hundred percent, fully, completely,
absolutely of the belief that that deal was good for
the short term of The Atlantic, for the long term of The Atlantic,
and for the long term of the journalism industry. And I

(23:03):
believe that.

Speaker 1 (23:04):
Can you explain exactly what the deal was?

Speaker 3 (23:06):
Yes. So the deal is that OpenAI agrees to
pay The Atlantic a sum of money over a period of time.
In return, it is given the right to train on
The Atlantic's material, meaning that the models that are developed
in that window, not afterwards, are allowed to train on
Atlantic content. And when they build a search engine, they

(23:26):
will be able to link to Atlantic stories and reference them.
And so if you go to the search engine and
ChatGPT and you ask about something that has happened,
you will get links to Atlantic articles. You will not
get links to The New York Times, because The New
York Times is suing them and does not have a deal. And
so we have gone through a process whereby we have
been giving feedback on how that search engine works:
when it doesn't work, when the links are appropriate, when they're not.

(23:48):
So it is our belief that the elements of the deal
made it worth it: some influence on shaping the search product, which
is massively important to media, right; some referral traffic, which
is extremely important because, as I mentioned, as we switch
from search engines to answer engines, our traffic will decline substantially;
and then an exchange of value over the data that
was used to train. The reason why many journalists, including

(24:10):
many at The Atlantic, didn't like it is that, you know,
they don't trust open ai as a company. They feel
like it wasn't a fair exchange of value. Right, there
are lots of reasons why they opposed it. Now we
have no fair exchange of value. We have gotten nothing
from the other large language model companies that have trained
on our data, and there are many of them. So
the OpenAI deal was the one major deal with

(24:33):
a big AI company that we signed and that we announced.

Speaker 1 (24:35):
And I guess one of the concerns, of course, that
the training data perhaps becomes less relevant, and that this
may be a very advantageous deal for OpenAI in
the short term, and at the other side of it,
they won't need to renew.

Speaker 3 (24:48):
Well, that's interesting, right, because then the argument there, if
that was one's argument, then you would say, well, actually,
we should have made a longer deal.

Speaker 1 (24:55):
Did you consider cutting a longer deal?

Speaker 3 (24:57):
No, we didn't, And the reason we didn't is that
the price for training on high quality media content is
going to change substantially in the next couple of years.
And it's going to change based on a couple of factors,
one of which is, will there be legislation mandating an
exchange of value; another of which is, will The New
York Times and the other lawsuits be successful. If they are,

(25:19):
the price of this training will go up. If they're not,
the price will go way down. And so we made
a two-year deal on the expectation that maybe in
two years the price will go up, and therefore we'll
be able to get more money. It may be a risk,
and in fact, the prices that are being reported in
the press for training have gone down substantially since you know,
we made that deal in May. And that may be

(25:40):
because the AI companies think they're going to win their lawsuits, right.
It may be because they think that they don't need
us because synthetic data is so good. It may be
that they figured out how to train models, and like,
as an environmentalist, I'd like them to be able to train
models on less data because it uses less energy. It may be that
the AI companies are getting enough from elsewhere that they
don't need, you know, Atlantic stories, right, or they need

(26:02):
the Atlantic stories less, so their perception of the value
of the tokens that we have is dropping. So maybe
I should have signed a five-year deal if I
could have seen into the future. On the other hand,
if the New York Times wins their lawsuit, or the
European Union passes legislation, or any of a number of
other things happen, the price will go way up, in
which case great. One of the questions that will come

(26:23):
up in the lawsuit is can you prove that there
is value to the content that we scraped, and clearly
there is because.

Speaker 1 (26:31):
You're paying somebody for it, right?

Speaker 3 (26:34):
So that was something we said publicly, because there was
a perception, Wait, you're actively working against the New York Times,
why don't you stand in solidarity with our brothers in
Times Square? And it's like, well, hold on a second,
this actually does help them. Now, maybe it would have
helped them more if we join the lawsuit, but our
job is to find the best deal for the Atlantic,
as much as we love the New York Times and
want to help the larger cause of media.

Speaker 1 (26:54):
Were there any people you spoke to who
were of the opposite opinion to you, who came around
to your opinion through this process?

Speaker 3 (27:00):
I think one of the arguments that, for better or worse,
shifted people's minds is, well, they've already done this scraping, right,
And so if what you want is an OpenAI
that has no knowledge whatsoever of the Atlantic, you can't
ever get that. That doesn't exist. It's just sort of

(27:21):
an unfortunate fact. I wish we could have prevented it, that
we had somehow, through some combination of heroic efforts to
remove stories from Reddit, right, like, you know,
prevented that from happening. But I think a lot of
people realized, oh, wait. I also think another argument did work.
So Jessica Lessin, who's very smart and a good friend,
published in the Atlantic the day before we announced the deal,

(27:43):
this argument saying, hey, media companies should not make deals, right,
and look at what happened to all the companies that
made deals with Facebook Watch. A lot of sort of
young social media based companies of ten years ago were screwed.

Speaker 2 (27:55):
Right.

Speaker 3 (27:55):
And so the conclusion that I think many people have
drawn is don't do deals with big tech companies. And
I think an argument that was somewhat persuasive encountering that was,
hold on, don't do bad deals. But how do you
think the Atlantic gets subscribers? What is the number one
mechanism we have for driving subscriptions? It is Facebook ads.

Speaker 1 (28:15):
Is that really?

Speaker 3 (28:16):
And so I think this kind of absolutist, pure position,
no deals with tech companies, once you get a little
more granular, becomes, oh wait, okay, no stupid deals. Now
can still argue that this deal we made was a
stupid deal, right, But I think we had some success
in kind of moving people from the absolute position of
no deals.

Speaker 1 (28:37):
More insights from Nicholas Thompson when we come back. I
want to close with a quote from David Foster Wallace
that I also found in one of your newsletters, which was,
the technology is just going to get better and better
and better, and it's going to get easier and easier
and more and more convenient and more and more pleasurable

(28:57):
to be alone with images on a screen given to
us by people who don't love us but want our money,
which is all right in low doses, right? But if
that's the basic main statement of your diet, you're going
to die in a meaningful way. You're going to die.

Speaker 3 (29:12):
It's one of the most prescient and wonderful quotes. And
he was just talking in an interview. But the
people who don't love you but do want your money, right? Like,
how can you say it better than that? And you know,
I am a tech enthusiast. I love taking things apart.
I love trying to understand them. I also try
really hard to make sure my kids are off their phones.

(29:34):
I make sure I have lots of time off
my phone. I like to spend time in the mountains,
right? So I think what he said is perfect, and
I probably would go for medium doses, not low doses.
But you do also have to disconnect, and you do
also have to be human, and I think he said
it better than anybody. He said that, I think, in
like nineteen ninety six. So, you know, an

(29:57):
incredible writer who saw out into the future.

Speaker 1 (30:00):
But that brings me back to the beginning of the
conversation and the running, because it's something you both use
technology to excel at and also, in a way, something
which is very meditative.

Speaker 3 (30:12):
Totally disconnect. And as I said earlier, the process of
getting faster is like getting your body and your brain
more in sync with each other. And so when you
do a workout, you want a minimal number of mental distractions,
and so much of the benefit is keeping your brain
and your body in sync as you run whatever pace
it was, five forty six, right? And that

(30:34):
is a lot of what makes you better at the sport.
And so if you are allowing anything to interfere with
that mental physical process, you are doing a disservice to
your training. And so there is a technological element of running.
I do analyze my training. I do like, look at
my historical heart rate data, you know, before a race,
I will, you know, look very carefully at how I've
done in certain you know, workouts and what it indicates,

(30:55):
because that helps me choose the pace that I'll run at. Right,
there's a whole process, but it is
also extremely important to disconnect, both as part of the
training and as part of meditation.

Speaker 2 (31:13):
That was a really interesting interview.

Speaker 1 (31:15):
Why, thank you, Cara.

Speaker 2 (31:16):
I was going to thank you for doing it, but
you know, that's what you want to do, that's your job.
It's actually funny. Tory, one of our producers, was just
saying that every software engineer that she knows is an
avid rock climber just for this reason, like get away
from the.

Speaker 1 (31:29):
Tech, away from the phone.

Speaker 2 (31:30):
Yeah, exactly exactly.

Speaker 1 (31:32):
Also why I'm such a big sauna enthusiast. The one place.

Speaker 2 (31:36):
Oh, I thought that's because you're Ukrainian.

Speaker 1 (31:37):
Well, it's partly that. But not being able
to have your phone in the sauna has to be
part of why the sauna is so great.

Speaker 2 (31:44):
I have done something recently, and not, you know, to
sound very twenty twenty one, but I don't
sleep with my phone in my room anymore.

Speaker 1 (31:53):
I'm very proud of you. Yes, I bought an alarm clock,
but I haven't gotten around to setting it up. That's pathetic.

Speaker 2 (32:01):
What I love in the discussion of, like, the way
that technology optimizes human performance is, like, there obviously is
something inherent to a great athlete's performance, like Michael Jordan's.
Michael Jordan is Michael Jordan regardless of his shoes, in
a certain sense. But technology does sort of.

Speaker 1 (32:20):
Truly enhance performance, truly push the human being and the
human body.

Speaker 2 (32:24):
Well and redefine like what it is to be a runner,
what it is to be an athlete.

Speaker 1 (32:28):
I really like what Nick said about the way you
get better at running is to quiet your mind, take
the fear away. And the only way you can quiet
your mind is with your mind. But for him to
be able to see that his heart rate, even though
he was maybe approaching panic in terms of how hard
he was pushing himself, because of his heart rate monitor,
he kind of knew that he was okay, and that

(32:51):
allowed him to generate better and better times. So it
wasn't the technology per se. He didn't have,
like, you know, air boosters in his trainers. But the
technology allowed him to quiet his own mind, in a
strange way.

Speaker 2 (33:03):
Yeah. One of the things that I was thinking about
is that, like, there are two ways that technology affects us.
There are things that make us less human and there
are things that make us more human. Like human enhancement
can both be you become superhuman or you become more
of who you are through personal optimization. Yeah, and I

(33:23):
just thought that was very interesting. The other thing is,
I actually was so happy you were interviewing him,
because I remember, and I'm not going to compare it to
some of, like, the great, you know, world events, but
I do remember where I was sitting on West Broadway
when I got the alert, and I remember saying that

(33:43):
the Atlantic making a deal with OpenAI to
basically allow them to mine the Atlantic's, what do you
call it, catalog or archive, is a turning point in the
history of journalism, where someone has decided that the way

(34:09):
to make money is to make a deal with the devil.
And you getting this as our first interview, I think, again,
it might not seem like it's that big of a
deal to people, but it matters in the conversation about,
you know, what is the future of journalism, and how do
these newsrooms monetize in a way that does not cannibalize

(34:31):
the thing that the newsroom does. And then to see
the Atlantic, and other newsrooms now too, and other
content providers, make that pact, I think is a
real turning point.

Speaker 1 (34:44):
And actually, one of the people who wrote a piece
in the Atlantic that really is a broadside against media
companies partnering with AI companies was Jessica Lessin, the CEO
and founder of The Information, who we're going to be talking
to on the show soon. But this is a hot, hot,
hot button issue obviously. I mean Nick's point was basically,

(35:06):
this is happening anyway, and I got us, A, some
compensation and, B, some ability to show up in
OpenAI's search engine, which will be useful for brand awareness
and to drive subs in the future.

Speaker 2 (35:19):
It is if you can't beat them, join them.

Speaker 1 (35:21):
It's a little bit. If you can't beat them, join them.

Speaker 2 (35:23):
All right, Before I go into too much future tripping,
I think this is a good place to leave it.
And that is all for tech stuff today. This episode
was produced by Shena Ozaki and Eliza Dennis, with help
from Lizzie Jacobs and Victoria Domingez. It was executive produced
by me, Kara Price, Oz Woloshyn, and Kate Osbourne for

(35:44):
Kaleidoscope and Katrina Norvel for iHeart. Our engineers are Biheed
Frasier at iHeart and Kathleen Kanti at CDM Studios. Kyle
Murdoch wrote our theme song, Thanks again to Nicholas Thompson.

Speaker 1 (35:56):
Join us on Friday for tech Stuff's The Week in Tech.
We'll run through our favorite headlines, talk with our friends
from 404 Media, and try to tackle a question: when
did this become a thing? And please rate and review
on Apple Podcasts, Spotify, or wherever you listen, and reach
out to us at tech stuff podcast at gmail dot

(36:17):
com with thoughts and feedback. We really do want to
hear from you. Thank you.
