Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
Welcome to Tech Stuff. This is The Story. I'm Kara Price.
Today I want to spend some time discussing how the
Internet has shaped us as individuals. I was exposed to
the Internet very early on: with both a techno-optimist
for a father and a school in the early
two thousands that insisted every student have a laptop starting
(00:34):
in fifth grade, I was bred to be chronically online.
I started thinking about how this influenced me after reading
a new book called Searches: Selfhood in the Digital Age.
The book is by Vauhini Vara, whose debut novel, The
Immortal King Rao, was a Pulitzer finalist in twenty twenty three.
But before diving into creative writing, Vara was a tech reporter.
She worked at The Wall Street Journal in the early
(00:55):
two thousands, when Silicon Valley was rapidly becoming the force
it is today, and was among the first to cover companies like
Facebook and Oracle. Vara's curiosity about tech stayed with her
through the years. In twenty twenty one, she wrote Ghosts,
an experimental essay in which she collaborated with a nascent version
of ChatGPT to write intimately about her sister's untimely death
(01:16):
when Vara was a teenager. When she wrote Ghosts, being
vulnerable with your chatbot of choice wasn't yet a thing.
In her latest book, Searches: Selfhood in the Digital Age,
Vara fully dives into how technology has impacted her life
in the most intimate ways. She shares the Amazon reviews
that she writes to hold herself accountable when buying from
the site. She asks ChatGPT to review her chapters
(01:39):
as she writes them. All of these are exercises to help
her answer a question that haunts many of us: How
have we been shaped by the Internet? And that's where
I started my conversation with Vauhini Vara. I wanted to
start with something very broad: your relationship to the Internet
growing up and how that relationship has changed since then.
Speaker 2 (02:02):
Yeah, I mean I was born in nineteen eighty two,
so I was in middle school when I first encountered
the Internet. It was at the home of like these
family friends where the daughter was my age. She sort
of like pulled me into this room with one of
those beige desktop computers and was.
Speaker 3 (02:21):
Like, I have something to show you, and.
Speaker 2 (02:24):
You know, opened up a little window, pressed some keys,
tapped a button, and that screechy AOL log-in
sound started going, like that, and it turned out she
was showing me what AOL was and what AOL chat
rooms were, where, at the time, for those who can
remember this, like you would just go into these rooms
(02:46):
in these little kind of windows on your desktop screen
and usually kind of like pretend to be whoever you
wanted to be, like a version of yourself, but not
really your actual offline self. And I found that really enchanting.
I found it kind of miraculous. I was like this
nerdy middle schooler. We had just moved to a suburb
(03:07):
of Oklahoma City from Canada. I had very few friends,
and it felt really exciting to like get to go
online and be whatever I wanted to be without like
the encumbrances of who I was offline.
Speaker 1 (03:21):
You start as someone who is like playing with the
early Internet to now being someone who is a tech reporter,
a teacher, a novelist. Why, after all this time being online,
do you choose to write a book about selfhood in
the digital age?
Speaker 2 (03:40):
So the thing that I understand now that I didn't
understand then is this: I had this idea back
then that the Internet was like this safe space for
self-exploration, self-expression, where I didn't have to
worry about the kinds of power dynamics I had to
worry about in my middle school halls, right? Like, there
was a way in which that was true and beautiful,
(04:02):
and there's also a way in which it was an
illusion then, just like it's an illusion now, because actually,
anytime I was engaging with a technology company's product, I
was providing that product with information about me that they
could then use to go get more wealth and more power.
Speaker 1 (04:21):
And in the book you describe how your own personal
and professional trajectory unfolded alongside the tech boom of the
nineties and early two thousands.
Speaker 2 (04:29):
How so? When I was in middle school, in the eighth grade,
my family moved to Seattle, and we lived in a
suburb of Seattle that was one town away from the
town where Jeff Bezos and his wife MacKenzie were building
Amazon out of their garage, and they would go have
their meetings at this Barnes and Noble on Bellevue Way,
(04:51):
which was the same Barnes and Noble where I would
like go and sit on the carpet and read magazines
and books. I was very aware of Amazon at the time,
I was also aware of Microsoft because one of its
co founders, Paul Allen, was engaging in this big campaign
to buy the Seattle Seahawks and was putting a lot
(05:13):
of money into this local political campaign that was bound
up in that, and so I got an early look at the
use of tech money to influence politics, just by
virtue of growing up in Seattle. I then went to
college at Stanford, and I arrived there right after the
founders of Google had built Google out of Stanford. My
(05:34):
senior year of college, Facebook shows up on campus. I'm
one of the first, like, I don't know, five thousand
or so users of Facebook, because I happened to be
there when Facebook was just getting started.
Speaker 3 (05:46):
It was still really new. Nobody really knew what this
thing was.
Speaker 2 (05:50):
But it took off super super quickly, and by virtue
of being the news editor at the time, I edited
our paper's first stories about what Facebook was and how
it was growing so fast. We interviewed the journalist who
interviewed Mark Zuckerberg, and already then he was talking about
like creating this space where people could connect, which is
(06:10):
kind of still the rhetoric he uses now. And so
I kind of like fell into this world of Silicon
Valley while being at school.
Speaker 3 (06:16):
And then I graduated from.
Speaker 2 (06:17):
College and my first job was as a tech reporter
at the Wall Street Journal, where I had previously interned,
and there I ended up in the San Francisco bureau.
I happened to be the youngest person in the office
because I was the newest fresh college grad working there.
And so I ended up saying to my colleagues and
to my bureau chief, Hey, there's this new company called Facebook,
(06:39):
and we're not really covering it. I think it's worth covering.
And because they were a private company and actually not
that influential yet in the grand scheme of things, my
editor kind of finally relented and was like, Okay, fine,
if you want to cover this Facebook company, go ahead
and cover them. And then fast forward a couple of years.
I ended up becoming actually a little disillusioned, and maybe that's
(07:00):
too strong a word, but maybe not, with the state
of the tech industry at the time. Like, it
felt like these companies like Facebook were already growing really fast,
and you could imagine this future in which that speed
and wealth and power became a problem. And I think
we were having a hard time contending with how to
write about that journalistically. And so what I did was
(07:21):
I just took a leave of absence and I
went and studied creative writing at the Iowa Writers Workshop.
And I thought I was sort of like leaving this
tech stuff behind entirely, but apparently I hadn't mentally, because
I ended up writing this novel about a tech CEO
who takes over the world. It ended up being kind
of like my way of trying to contend with the
(07:41):
stuff that I couldn't contend with in journalism.
Speaker 1 (07:44):
So tech has been inescapable for you, yeah, for much
of your, well, for all of your life, it seems, actually.
So eventually you would be among the early users of
ChatGPT, with a version that looked very different from
the one we know today. Can you tell me how
that came about and also how it inspired you to
write Ghosts, which ultimately became a viral essay back in
(08:08):
twenty twenty one.
Speaker 2 (08:09):
So back in like twenty nineteen, twenty twenty, I was
starting to read about this technology that OpenAI was developing,
which was first called GPT, then GPT-2, then GPT-3.
And so when GPT-3 came around, I wrote to
Sam Altman, I had previously profiled him for this magazine
(08:29):
called California Sunday, so I had his number. So I
texted Sam Altman and I said, hey, I heard that
you're working on this thing.
Speaker 3 (08:37):
Would you give me access to it so I can
play around with it?
Speaker 2 (08:40):
And he eventually gave me access to it. It worked
differently from ChatGPT back then. So the way it
worked was the interface was this kind of web app
where you would type some text and then hit a
button and then it would just sort of keep writing
for you.
Speaker 3 (08:55):
It would kind of complete.
Speaker 2 (08:56):
That text, and it did that in a way that
was often kind of like more interesting, I would say,
than the text that ChatGPT produces. It would be more surprising,
what you might describe as more creative. And the thing
that came to mind was that it was promising that
it could provide language for us when we find ourselves
(09:18):
at a loss for language. And as a writer, there
aren't that many things where I find myself at a
loss for language. But a thing that I have always
struggled to write about, to talk about is the death
of my sister from cancer when we were in college
and my grief over it. I think it's like an
incomprehensible thing to me that happened, And so I wrote
(09:42):
this sentence, which was: When I was in my freshman
year of high school and my sister was in her
junior year, she was diagnosed with Ewing sarcoma. And
then I hit the button and it produced this text
for me that was like, had nothing to do with
me or my sister.
Speaker 3 (09:55):
Really, it was a story kind.
Speaker 2 (09:57):
Of from the perspective of somebody whose sister had been
diagnosed with Ewing sarcoma in high school. But the last
line of the little story it produced was she's doing
great now. So it took like this essay that I
wanted to write about my sister's death and kind of
wrote something that was the exact opposite of what I
was interested in. So I kept trying it over and over,
(10:20):
and each time I would delete what GPT-3 had written
and add more writing of my own.
And what I found was that as I wrote more,
this technology did a relatively better job like kind of
matching the subject and tone of what I was writing,
(10:42):
such that there were these lines that it came up
with, that it generated, that sounded to me like, if
a human had written them, they would be really well-written lines,
you know, like lines that moved me that I thought
were really intellectually engaging. And yet ultimately it wasn't expressing
(11:03):
my experience on my behalf.
Speaker 1 (11:06):
And so by the end of it, why do you
decide to write the whole thing yourself?
Speaker 2 (11:10):
It kept not describing what it was like to be me,
to be my sister, having this experience, right? Understandably, like,
it sounds silly in retrospect, but of course, like this
machine that's built to put words one after another based
on statistical predictions about which words should come after which
other words was not going to somehow reach into my
(11:33):
soul and express the thing that I couldn't express. And
then furthermore, like even if it could, I think, for
me as a writer, what is important is the actual
act of writing, the attempt to take something confusing
and try to articulate it, right? And so you know,
there is, it turns out, no machine that
(11:54):
can do that for me, right, because the thing that
I'm trying to do is the act of writing it.
Speaker 3 (12:00):
And so ultimately I was the one who had to
do that.
Speaker 1 (12:10):
After the break: how to resist big tech. Stay with us.
So from your perspective as someone who's been a tech
(12:31):
journalist for a long time, who teaches creative writing, but
who also thinks about these things on a really philosophical level.
How do you engage with, but not give yourself completely
over to, these seemingly innocuous technologies?
Speaker 2 (12:46):
I mean, I don't have a great answer to that question.
I think that there's a way in which no matter
what we do, even when we're using these technologies, say
we're doing Google searches for how to resist big tech,
or we're chatting with ChatGPT about all the problems
with ChatGPT, or we're organizing our friends on Instagram
(13:11):
or Facebook or whatever Meta-owned social network, right, to
take political action. All of those things are really beautiful
acts of self-expression, in some ways, of resistance.
Speaker 3 (13:23):
Right.
Speaker 2 (13:24):
And it's a fact that those are legitimate forms of
self-expression, of community building, even of resistance. And yet
each of those actions that I just described benefit the
companies that we are attempting to resist.
Speaker 3 (13:38):
Right.
Speaker 2 (13:38):
So when I search in Google for how to resist
big tech, Google then knows that I'm interested in resisting
big tech, and it can target messages to me based
on knowing that fact about me. When I organize friends
on social media for political action, Meta then knows my
political orientation and can target messages to me based on
(13:59):
what it knows about my background. And so any self-expression,
any community action that takes place on these platforms is
sort of necessarily bound up with them. Interestingly, I'm finding
this with my book as well. Like I published this
book thinking of it as a critique, I still think
(14:20):
of this book as being a book that contains
a critique of big tech and uses big technology companies'
products to make that critique.
Speaker 3 (14:30):
But a lot of the headlines that.
Speaker 2 (14:31):
Have appeared about my book and some of the interview
questions I've gotten have said things like, you know, this
author collaborated with ChatGPT to explore her selfhood. Or
I've been asked questions like, so what did ChatGPT
teach you about yourself that you didn't already know?
Speaker 3 (14:47):
Right?
Speaker 2 (14:48):
And so it turns out that my book itself, which
I constructed as a critique, the life of the book,
like the public life of the book, turns out to
also be bound up in that sort of complicit
relationship that we have with these companies, which at first
I found really upsetting and now I find kind of fascinating.
Speaker 1 (15:09):
The description of collaborating with AI has become a very
common way of describing how we interact with it. At
face value, it makes sense, especially in cases where someone
is approaching it to work through something that in the
past would have happened with someone else, like a creative
idea or an emotional problem. So what do you think
(15:31):
of this description of collaborating with a large language model.
Speaker 2 (15:36):
You know, there's an extent to which we talk to
Google and we talk on Meta's social media platforms about
the problems we're facing, our deepest, darkest secrets. But
my sense is that people are going even further with
ChatGPT and products like that because they
have the impression that they're talking to something that is
(15:59):
like another person, right, but doesn't have the baggage of actual
human beings.
Speaker 1 (16:04):
Right.
Speaker 2 (16:05):
So we might be talking to ChatGPT and feeling
like it's using the language that our therapists might and
giving us good advice.
Speaker 1 (16:13):
Right.
Speaker 2 (16:13):
But whereas we pay our therapist, and
our therapist's job is to help us understand something about ourselves,
and they have training in order to do that well,
ChatGPT's job is to get information from us,
to get us to keep using the product, to like
(16:33):
the product, to use it more and more. And so
I think that's why that term collaboration is really problematic.
It's a term that people like Sam Altman, the CEO
of OpenAI, are using a lot.
Speaker 3 (16:46):
I think partly.
Speaker 2 (16:47):
Because they used to say things like AI is going
to take over all our jobs, there aren't going to be
any jobs left. Nowadays they talk more about collaboration. They say,
don't worry, AI is not going to get rid of
all of us. What's going to happen is that it's
going to be a really helpful tool we can use
in our daily lives. It's going to be a collaborator.
It's going to be helpful, It's going to be useful.
They use language like that a lot, which I think
(17:09):
is meant to kind of defuse the threat of AI
taking our jobs, right or AI consolidating more wealth and
power in the hands of people who already have it.
Speaker 1 (17:21):
Given that you did use artificial intelligence at points in
your book, and I can imagine that tech companies would
love to claim that you collaborated with AI and it
helped you explore your selfhood. Do you think they can
do that?
Speaker 2 (17:34):
Well, it's complicated, because using artificial intelligence products in the
book helps me show what the promise is that these
companies behind the products are making to us, and what
we as users do in response to those products, like
how we engage with those products in response to those promises,
(17:57):
and also helps me show how those promises fall short
and how ultimately the way in which we're engaging with
these products is bound.
Speaker 3 (18:08):
Up in the exploitation of us.
Speaker 2 (18:09):
That was what I was interested in showing by using
the products in the book. A critical reader will read
the book and see that, For example, when I ask,
you know, image generation products from companies like Microsoft and
OpenAI to generate images to go along with stories
I'm telling about my mom's childhood in rural India, like,
(18:34):
those images are not factual representations, unbiased representations, representations that
bear any relationship to the reality of my mom's experience.
Speaker 3 (18:46):
Right, Like I think, I think.
Speaker 2 (18:48):
I expect readers to see that and think, Okay, I
see that. I understand what's happening here, right, I understand
that there's a critique here.
Speaker 3 (18:58):
I don't believe that these.
Speaker 2 (19:01):
products, to any significant extent, help with self-expression,
help with, you know, communication with others, and so that
was not something I was particularly interested in exploring. However,
I was interested in the promise these companies were making
that their products could do that.
Speaker 1 (19:22):
Yet we seem to be moving more towards a future
where people believe that their self-expression is bolstered by
these products. How do we resist the belief that these
products are actually aiding in our self-expression?
Speaker 2 (19:43):
I mean, I think there's a bigger problem that these
companies are exploiting. I think there's a bigger problem in
society in which the kind of dominant cultures, and
dominant sort of ways of
expressing oneself and communicating, are held up as the
(20:06):
best ways. So, for example, with ChatGPT, OpenAI has this
kind of style guide that's public, and they explain
how ChatGPT works, and
they say that ChatGPT is built to sound kind
of like a colleague, you know, at an office, presumably
like a white-collar office. It's meant to be polite,
(20:29):
it's meant to be empathetic, it's meant to be
rationally optimistic, I think is the phrase. So there are
all these ways that ChatGPT is built to use
language that itself is political, right, like the fact that
they're saying this is how ChatGPT should talk, and
(20:51):
the way it should talk should be like this:
the language tends to be American English, right,
like office-building, white-collar American English. Because ChatGPT
talks that way, and because ChatGPT is held
up as something that can help us with our writing,
what ChatGPT is really doing when we go to
it and say, hey, I wrote this, can you
(21:12):
help me make it better? That request is embedded with this
assumption that we also have in our broader culture:
that a certain kind of corporate, white-collar,
you know, white-guy-in-the-cubicle-next-door kind
of language is the best kind of language to use.
I write in the book about how my dad has
(21:33):
this habit of writing me emails and then having
ChatGPT edit them and then sending me the edited version.
And he wrote this perfectly well-written, well-reasoned email
to me at one point in which he uses this phrase.
You know, he says something like, you have to talk
to people to understand where they're coming from.
Speaker 3 (21:53):
Then only can you
Speaker 2 (21:56):
write well about their experiences, something like that.
The ChatGPT version inverted the "then" and the "only."
So ChatGPT changed it to: you have to spend time
with people and understand their experiences.
Only then can you understand what they went through. And
the interesting thing about that to me is that "then
only" is a very common and very correct construction in
(22:18):
Indian English, and my dad's from India. "Only then" is
American English. And so ChatGPT is like purportedly improving
my dad's English, but what it's actually doing is just
turning my dad's English into American English. And there's this
value judgment embedded in that, that American English
(22:39):
is a better English, right? And so I think when
we turn to these kinds of products and think that
they're improving our self-expression, like that's not actually what
they are designed to do. They're designed to like use
a certain kind of language, and so yes, they can
turn our language into that, but of course, like I
(23:00):
both question the usefulness of that in self-expression, and
I worry that if there's a future in which, like
we all start to talk like that, it's going to
be culturally problematic, politically problematic. There are all kinds of
issues that arise if that happens.
Speaker 1 (23:16):
I want to go to another, I
think very important piece of your book, which you know
has to do with the handwringing over AI taking jobs
of creative workers. To what extent do large language models
threaten our current definition of what a novel is? And
(23:38):
do you think it's important that people like yourself, people
who write both fiction and nonfiction, consider the way in
which large language models might affect not only writers, I think,
but more importantly readers and what readers expect from the
written word.
Speaker 2 (23:55):
Yeah, I mean I love that question because I think
a lot of scholars of the novel would argue that
what makes a novel a novel is its novelty. And typically,
like I think, as a culture, historically, what we've celebrated
in the novel is like finding new ways to express
(24:17):
ourselves as human beings. So I think AI has come
along at a moment in which that has been complicated
by commercial changes to and commercial pressures on the book,
on the novel and other forms of literature, where the
publishing industry has become more and more consolidated and dominated
(24:42):
by just a few companies that do, you know, want
to turn a profit, and because of that, are increasingly
publishing things that.
Speaker 3 (24:54):
Are going to do well on Amazon.
Speaker 2 (24:56):
And finding that the things that do well on Amazon
are books that are not particularly unconventional, that do
follow certain well-worn paths. Like, if the goal
of a book isn't to find some new way of
expressing something, but rather to like follow a formula, that
(25:18):
is something that I think a large language model could
be trained to do.
Speaker 3 (25:23):
That said, I think all kinds.
Speaker 2 (25:26):
Of readers, including readers of the sort of genre books
like romance and fantasy that are thought of as following
certain conventions, have a lot of room for experimentation. Like
they want to see something new, they want to see
something human feeling in what they're reading. They want to
know that there's an actual human being behind the text.
(25:47):
And so I think the future of
books is really going to be determined in some ways
by readers more than writers, because they're the ones who
buy the books and create the market.
Speaker 1 (25:56):
And so to sort of answer the question about how
one exercises autonomy in an age of less and less agency,
just because we're all sort of trapped in this bubble
of technology that's been created around us that we now
have to use. It's about kind of insisting on the
(26:18):
type of stories, information, images, movies, TV shows, poetry, whatever
it may be, insisting that it's human-centered and human-created.
Like that seems to be where the autonomy lies.
Speaker 2 (26:35):
Yeah, I mean the thing that I find really exciting
is like, we as human beings, as communities, can create
whatever future we want, right Like, there are certain futures
that big technology companies are invested in moving us toward,
but we can be invested in something totally different. In fact,
we can be invested in something that's in resistance to
(26:56):
what they want for our.
Speaker 3 (26:58):
Future, and we can build that.
Speaker 2 (27:00):
And Yeah, I think it has partly to do with
the things we choose to engage with, the things we
choose to spend our time.
Speaker 3 (27:08):
On, how we.
Speaker 2 (27:11):
Choose to connect with friends, with family members, with community members,
right Like, all of those things are important.
Speaker 3 (27:18):
We can also think about our choice of products.
Speaker 2 (27:21):
Right. An example I like to give because it's an
easy one to wrap our heads around is Wikipedia.
Speaker 3 (27:26):
Wikipedia is the seventh most.
Speaker 2 (27:29):
Visited website in the world, and it's run by a
foundation. It doesn't cost that much to run it, relatively speaking,
and it's built on volunteer labor. But that volunteer labor
I don't think of as exploited labor, because everybody is
sort of coming together to provide this communal service that
everyone can benefit from, and nobody is trying to make
(27:52):
a buck off of it. What I mean is there
is no corporation behind Wikipedia that is trying to use
Wikipedia as a tool to
Speaker 3 (28:00):
amass wealth, to amass power.
Speaker 2 (28:02):
And so there are lots of different kinds of entities
like that building technologies that we can think of as
potential models. We can put our funds together, as communities,
as individuals, as workers, as users.
Speaker 3 (28:15):
We can build things.
Speaker 2 (28:16):
In new ways and not have them be bound up
in you know, technological capitalism in the way that our
dominant technologies are right now.
Speaker 1 (28:25):
Absolutely. And I think one of the things that
I find, I don't know, I guess so deeply disconcerting
is this idea that publishers are now allowing for the
training of large language models on their pre existing content.
I think, again, you know, where does autonomy factor in?
(28:45):
It's now like a new point of contention for both
union workers and well, mostly union workers actually to say, like,
to what extent do I get to participate in the
way in which my company does business?
Speaker 2 (29:00):
Yeah, I think that's something that's really hard about this,
because you know, I hear sometimes from people who are like,
you know, I am a teacher and I don't want
to use AI, but my district says that we should.
Or more commonly, I work at a company. I have
questions about AI, or concerns, but my boss says that I
need to use it.
Speaker 3 (29:18):
Right? What then do you do? And listen,
Speaker 2 (29:21):
I think there are perfectly legitimate uses of these products, of
products that sort of fall under the AI umbrella, by
the way, But for those who are interested in resisting
use of these products, who happen to get paid by
institutions that say, no, you have to use these products,
like that puts you in a pretty tight spot. I
think autonomy is very bound up in knowledge, right. I
(29:45):
mean I write in my book a lot about like
these gaps in knowledge between big technology companies and us.
Speaker 3 (29:52):
But one meaningful thing that we.
Speaker 2 (29:53):
Can do is collect information about how these companies actually
function, ask ourselves questions about what these companies are
promising with their products, what they're actually delivering, what we're
delivering to them, and how they might make money from that,
how that might help them amass more power. Right,
(30:15):
Just to sort of like ask ourselves questions and understand
the dynamics that are underlying our use of these products,
I think is a really valuable thing to do because
I think you know, for many of us, once we
understand these dynamics, we feel that we're in a better
position to enact individual or collective resistance.
Speaker 1 (30:36):
One of the things that I thought was so interesting
about your book is like, in a certain way, it's
sort of a self help guide for people who are online,
right in that like you're asking the reader to think
about the history of themselves online because you're considering the
history of yourself online. And so, what are the most
(30:59):
profound ways in which the Internet and the kind of
ensuing AI-powered Internet have changed your
life and your idea of selfhood?
Speaker 3 (31:13):
Yeah, I mean.
Speaker 2 (31:16):
These products have given me sites of self-expression, of
communication and communion with other people, have allowed me to
gain more information about the world, about myself, have made
Speaker 3 (31:31):
My life more convenient. All of these things are true.
Speaker 2 (31:36):
And with each of those things that I just mentioned
there is a significant cost, and that cost is really
hard to see. And so I think experiencing life with
these products, but, I think, even more importantly, just
writing it all down in this book, has helped me
to clarify for myself, like just how bound up the
(31:58):
things that I get out of these products are with
the exploitation of myself and of all of us and
our planet. And the reason I wanted to write about
this wasn't because I thought anybody should care particularly
about my own experience. It was that I know that other
people are like me. I know that there are other
(32:18):
people who know all the problems with Amazon and still
use Amazon, who know about the problems with Google and
still use Google. And I wanted to talk really frankly
about my own complicity as a way to kind of
like open this window for all of us, for readers
to have a conversation about our collective complicity and figure
(32:40):
out what to do about it, because I don't think
I come out of the book with an answer to
the question of, now, what should we do about it?
In part because I don't think that I want to
be this sort of single authority figure, like a
kind of, you know, twin to a Sam Altman, right,
who has so much conviction about the future.
(33:01):
I don't want to be standing here saying and here
is the one single future we should move toward, and
here are the five steps we should take to get there.
I want us all to be interested in our own complicity.
Speaker 1 (33:15):
Thank you so much for talking to me, and I
just think your book is such an interesting exploration
of digital life, and one that I think, if someone
isn't thinking about it, they're not thinking enough.
Speaker 3 (33:26):
Thank you so much for having me. This was a
really interesting conversation.
Speaker 1 (33:45):
For Tech Stuff, I'm Kara Price. This episode was produced
by Eliza Dennis and Adriana Tapia. It was executive produced
by me, Oz Woloshyn, and Kate Osborne for Kaleidoscope and
Katrina Norvell for iHeart Podcasts. Jack Insley mixed this episode
and Kyle Murdoch wrote our theme song. Join us on
Friday for The Week in Tech. Oz and I will
(34:06):
run through the tech headlines you may have missed. Please rate, review,
and reach out to us at Tech Stuff Podcast at
gmail dot com.