
May 25, 2023 56 mins

Welcome to a special bonus episode of How To Citizen. We are sharing Baratunde's appearance on the What Could Go Right? podcast, created by The Progress Network. Baratunde discusses technology, specifically generative artificial intelligence: how it might help or hinder human progress, and how it aligns with or deviates from our concept of citizen as a verb.

As always, find How To Citizen on Instagram or visit howtocitizen.com to join our mailing list and find ways to citizen besides listening to this podcast! Please show your support for the show by reviewing and rating. It makes a huge difference with the algorithmic overlords and helps others like you find the show!

How To Citizen is hosted by Baratunde Thurston. He’s also host and executive producer of the PBS series America Outdoors, as well as a founding partner and writer at Puck. You can find him all over the internet.

 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
What's up. This is a special dispatch, I'll call it,
to the How To Citizen community, to the feed, from
Baratunde. Hi. I really hope you enjoyed season four. I'm
actually still reverberating from so many of the conversations we had,

(00:22):
from the opening with adrienne maree brown to the close
with doctor Sam Raider and everyone in between. I'm popping
in because we have a bonus track on this album.
I've been doing a lot of podcasting as a guest
on other people's shows, and we've decided to throw one
of those episodes right here into our feed. It's from

(00:44):
a group called The Progress Network. I love, I love
that already. Who's against progress? And their show is called
What Could Go Right? And they invited me on and
I wasn't fully sure what we would be talking about,
but we ended up going deep on artificial intelligence, the
ChatGPT, the DALL-E, the Midjourney, the large language models,
the job displacement, the creative possibilities, the plagiarism. There's so

(01:08):
much potential with this, and we did a whole season
on tech, season three, and explored some of these themes.
In this moment, with things moving even faster, I am
confident that what we're calling artificial intelligence, and all the
machine learning technologies underneath, will have a massive effect on
every part of our lives. I think we still have

(01:29):
a chance to shape it rather than just be shaped
by it. And I'm eager to see more citizen uses
of these tools, including citizen ownership and citizen oriented initiatives.
All the principles we talk about, showing up and participating,
investing in relationships, understanding power, and valuing the collective. They
don't have to disappear because of a more machine dominated world,

(01:56):
but they will if we're not consciously engaged in this stuff.
So there's a lot of alarm and concern on the
one hand that you'll hear from me in this conversation,
as well as a lot of possibility, potential, and hope
for what we can still create. If you know anything
by now, you know that I am about us co

(02:16):
creating a better story of ourselves together, and that might
need to include increasingly together with machines. But I don't
want to do that at the expense of our relationship
with ourselves, with each other and with nature. So tune
into this. Let us know what you think. We're How
To Citizen on Instagram. I'm Baratunde on Instagram, TikTok and

(02:38):
all the social sharecropping sites where I till these digital fields.
And of course we just have email, old school owned
and operated, comments at howtocitizen dot com. Check out our
website. We'll constantly be shifting things around there, and I'm wishing
you a great summer as we go through a lot,

(03:00):
and get to build a lot. Last little plug: September thirteenth,
I have discovered is the premier date of America Outdoors.
That's my second season with the PBS series. A lot
of the themes you're used to hearing us banter about
here and try to activate, I'm also able
to witness and amplify that happening in relationship with nature

(03:22):
all across the country. So that's a PBS show September thirteenth.
In the meantime, please enjoy What Could Go Right? and
my conversation about artificial intelligence. Peace. If we could eliminate
so much work and so much drudgery and so much labor, again
always promised, never delivered, but maybe this time is different,

(03:44):
then what will we do with all that time? Would
I let my report write itself, so I could take
a walk with my wife and just connect more. Would
I let those emails send themselves so I could call
my sister instead? That's a great trade off. Let the
email write itself, let it respond to the email that
someone else's robots send, and let the robots do all the
emailing and just give me a weekly report on what

(04:05):
I said this week, so that I could meditate more.
That's glorious. That's really great. Most of the employers of
the world, they won't see it that way, and they'd
be like, oh, great, you don't have to focus on
writing emails. You can do xyzning instead.

Speaker 2 (04:21):
What Could Go Right?

Speaker 3 (04:23):
I'm Zachary Karabell, the founder of the Progress Network, and
I'm joined as always on this podcast by Emma Varvaloucas,
the executive director of the Progress Network. And What Could
Go Right, in addition to being the title of our
weekly newsletter, is also our podcast, which is a weekly
attempt to change the tone of our collective dialogue from

(04:44):
one of dyspeptic, dystopian despair to a more positive case
for a brighter future. So, Emma, who are we going
to talk to today? I'm very excited about this conversation.
I know who we're going to talk to today, but
why don't we share that with everyone else?

Speaker 4 (05:00):
All right, let me tell everyone else who we're going to
talk to today, which is Baratunde Thurston. He is
many things, among them the author of the books How
to Be Black and I'm Judging You: The Do-Better Manual.
He worked at The Onion and on The Daily Show
with Trevor Noah. Right now, he's the host of a
podcast called How To Citizen, as well as a TV
show called America Outdoors with PBS, and he's a founding

(05:22):
editor of Puck. He's been writing a lot recently about
AI technology and its role in the future of humanity,
so we're going to delve into that with him today.

Speaker 3 (05:30):
So with that, let's talk to Baratunde. Baratunde, it is such
a pleasure to have this conversation with you. You are one
of these kind of eclectic Renaissance-y, maybe emphasis on
the E, type humans who have ranged far and wide
in your interests. You're eclectic, you're informative, you're curious, you're

(05:52):
open minded and passionate all of the above. You've been
writing a lot lately about AI and generative AI, ChatGPT,
and OpenAI, and there has been a fair share
of attendant technophobic hysteria, as there always is when something
seemingly dramatically new comes along in.

Speaker 1 (06:11):
The technological universe.

Speaker 3 (06:13):
People freaked out about the printing press, about the telegraph,
about the telephone, about the television, about the Internet, about
the cell phone, about the smartphone.

Speaker 1 (06:20):
The horse, don't forget the horse.

Speaker 3 (06:24):
The horse, absolutely freaked out about the horse too, this
four-legged harbinger of doom. So tell us in
a nutshell, are we right to be dramatically worried? I
think you had a line in one of your columns
where you're simultaneously fascinated and terrified. I think I'm getting
that right. If not, it's a good way to characterize it.

Speaker 1 (06:44):
Regardless, it is. That's a fair characterization. It's good to
be here with What Could Go Right? I just love
the ethos and title of that. Most of the default
settings on the news are what has gone wrong. So
thank you for contributing to a different psyche and mindset
around the state of the world and our remaining possibilities
with AI. Your categorization is accurate. I am, I'm fascinated

(07:07):
and I'm terrified. The nutshell is that I see
a lot of solid reasons to be concerned about what
the rapid deployment of these tools means for everything from
copyright and ownership of our creativity to the very nature
of the human experience as one that is just so

(07:27):
increasingly mediated by a technology creating distance between us and
the physical world, between us and our fellow humans, and
even between us and ourselves. And I think just our
experience with social media the past decade or so, the
fragmentation and the balkanization, we just in so many ways
were not ready. Also excited, so excited about creative possibilities,

(07:52):
about having another tool and sort of colleague and copilot
in the quest for justice and the ability to try
to live together better and to find answers to some
really deep questions, whether they are scientific or existential. So
it's all possible, but the concern is very real. I'm
not going to try to sugarcoat it.

Speaker 4 (08:12):
I wanted to talk a little bit about the RNC
AI ad. Did you see that? It was all AI
image generated.

Speaker 1 (08:19):
I did not see it.

Speaker 4 (08:21):
Basically, it was like imagine Joe Biden becomes president again,
and then they ran through all these nightmare scenarios of
what that might be like.

Speaker 2 (08:28):
So the first.

Speaker 4 (08:28):
One was war over Taiwan. The second one was fifty
regional banks collapse, and they flashed an image from like
the boarded up windows of BLM. Obviously it was ai created,
but that was the image that you got in your head.
And then they did some really heavy drug users in
San Francisco and a couple of other dystopian things, and

(08:50):
the response to it was like really intense. Right, there's
this response of like, the RNC, you know,
now has this new tool in their power. People are going
to be confused and they're gonna blah blah blah. And
I was interested to ask you about it because my
reaction was like that was a bad ad.

Speaker 2 (09:06):
It wasn't it actually was.

Speaker 4 (09:08):
It didn't feel, of course I'm not the intended audience,
but it just it honestly did not pull me in
at all.

Speaker 1 (09:14):
This is speculative, happy to do it, just gotta name it.
Most political ads are garbage. I think we can all
agree on that. Like, regardless of where you sit on
any political spectrum, ideologically, they're just bad. They're the worst
Hollywood movie trailers. In a world where, and it sounds

(09:34):
like they just unleashed the "in a world" guy on
their projection of the worst possible future with Joe
Biden as president. That's typical. It's uninspiring, it's fear based,
it's unimaginative, which I think makes it appropriate to
be built on generative AI systems because it can only

(09:57):
mine what we already know in so many many ways.
And so whatever they trained that data on, whatever they
trained that ad on, is like just the history of
shitty political ads. Cool. I think what would be more interesting:
years ago, the artist Molly Crabapple teamed up with a
few political voices, climate people. Alexandria Ocasio-Cortez did the

(10:18):
voice over for this animated, like, the inverse version of what
you just described.

Speaker 5 (10:23):
Ah, the bullet train from New York to DC. It
always brings me back to when I first started making
this commute in twenty nineteen. I was a freshman in
the most diverse Congress in history up to that point.
It was a critical time. I'll never forget the children
in our community. They were so inspired to see this
new class of politicians who reflected them navigating the halls

(10:47):
of power. It's often said you can't be what you
can't see, and for the first time they saw themselves.
I think there was something similar with the Green New Deal.

Speaker 1 (10:58):
We knew that, and it was, imagine the future where
a Green New Deal was, like, fully adopted and implemented
and executed on, and it was inspiring. It was beautiful.
It was a bit like, all that would be magical,
but it appealed to a different piece of our human

(11:19):
mind and our human spirit, and a human artist helped
bring it to life. And I'm not saying machine art
can't do that, but it just feels really predictable, also ironic.

Speaker 5 (11:30):
That's what we chose to become: a society
that was not only modern and wealthy, but dignified and
humane too. By committing to universal rights like healthcare and
meaningful work for all, we stopped being so scared of
the future, we stopped being scared of each other, and
we found our shared purpose. Ileana heard the call too,

(11:53):
and in twenty twenty eight she ran for office in
the first cycle of publicly funded election campaigns, and now
she occupies the seat that I once held. I couldn't
be more proud of her a true child of the
Green New Deal. When I think back to my first
term in Congress writing that old school Amtrak in twenty nineteen,
all of this was still ahead of us, and the

(12:15):
first big step was just closing our eyes and imagining it.

Speaker 6 (12:20):
We can be whatever we have the courage to see.

Speaker 3 (12:37):
So look, the fears around this are this is the
version one point zero of the Singularity, or people love
to use the terminator. Skynet is sentient right that this
is the first shot across the bow, and that's what
we're heading. We're heading to AI that is indistinguishable, that

(12:58):
can pass the Turing test. These are all the tropes of
artificial intelligence land. I think I've used at least three
of my quotient of cliches in the past one minute.
But yes, maybe I am indeed myself AI generated.
AI-generated cliches for five hundred, Alex.

Speaker 1 (13:15):
So a lot of what AI is doing right now.

Speaker 3 (13:20):
I'm as susceptible occasionally, in moments of procrastination and boredom
to click on the listicles, you know, the ten worst
movie scenes or the ten scenes that ruin their careers. Yeah,
I don't know that life would be radically different if
I read a really good AI generated listicle like that,
or a really good AI generated tour guide. A friend
of mine I was just in Pakistan a few weeks
ago and sent me this like Lahore tour and I.

Speaker 1 (13:41):
Was like, oh, this is cool. Where'd you get this?
He said, it was ChatGPT. We just said, things
to do in Lahore. I was like, that was pretty good.
A lot of it can look impressive.

Speaker 4 (13:50):
He is.

Speaker 1 (13:50):
I don't know whether that's a problem.

Speaker 3 (13:52):
But you mentioned the horse before, right, the fact that
human labor was substituted for horse labor, or
that horses were then made less relevant by cars. Yeah,
there was disruption, and there was a certain amount of
creative destruction if you believe in the Schumpeterian view of reality.

Speaker 1 (14:10):
But I don't know that.

Speaker 3 (14:11):
was, like, ultimately bad destruction, to use that phrase here.

Speaker 1 (14:16):
Yeah, when cars came along, they mostly displaced horses. And
automation in general is never a one to one substitute.
It doesn't just like every human job becomes a robot
job or a software job and it's bad. It's generally that
people who had certain types of occupations can't do that

(14:37):
Part of the occupation anymore, and the new technological capability
creates new jobs. Right. The telephone operators transitioned right in
many ways to a different type of role. They weren't
physically connecting inbound call to destination, but they became information

(14:58):
services operators. They became receptionists, secretaries. There was still a
need for a human role in some cases. What often
I think gets missed, though, is that the new capability
just opens the pathway for many more people to engage
in the thing. Right, So, writing used to be a
high tech endeavor that only monks did. You had to

(15:19):
be trained. There were like fifty people in the world
who could write. They lived in a castle. It was
very Game of Thrones, right, they were masters. And when
you know, writing became much more available to the public.
You still have monks, you still had people in an academy.
You still had masters and castles. But also anybody could
write any bullshit, right? I could write bullshit. I could
write a postcard. And so, what's gonna, like, the people

(15:41):
who can generate tour guides of Lahore is greater than
professional travel agents, which has been the case for a while.
It's greater than the wikiHow crap writers. It's greater
than the people who've already blogged a billion times about it
online. It's like, your friend who's never written a
travel guide a day in their life can generate one
for you. And that's gonna create a flood of new

(16:06):
content and new information. Not entirely bad, I think. The
worry side of it for me is, we will create,
we've already created so much stuff. I can't watch the
TV that's being made. We will never finish Netflix.
It's an endless scroll, and so what's the point. And

(16:30):
if we need machines to help us overproduce, we're also
gonna need machines to help us overconsume. There's just
too much. So we're gonna watch the shows on one
point five speed, and we're gonna have a generative bot
summarize and give us the abstract, the CliffsNotes, the,
kind of, the cheat codes on all these experiences.

(16:51):
So we'll rely on machines to create, we'll rely on
machines to consume an experience. And our role in that
loop is like a phone operator. We're just connecting an
input to an output, but not experiencing the call ourselves.
That's an existential worry I have. And the other

(17:12):
is economically. So much of this is just driven by capitalism.
It's like, increase throughput, increase output, create more efficiency. Why?
The great lie of all these tools is that they
give us our time back. They don't. The BlackBerry took
more time than it gave. Email took more time than
it gave. Slack, which was supposed to solve email, takes more
time than it gives. We just are on more. We

(17:36):
have to be more responsive, and so we have this
colonization of our time and of ourselves in service of
ultimately a venture backed, profit driven escapade in which we
don't really have an interest ourselves, some benefit, very little interest.
So the concentration of wealth to create more wealth, that's

(17:57):
not very inspiring to me. That's very concerning, and
it's like, all right, so we just become better constant
workers connecting machine output to machine input. Eh, there's got
to be a better use for all this stuff. Venture backed,
profit driven escapade, I like that. Not that I agree,
but I like it as a phrase. Yeah, that's cool.
And I don't use the word escapade very much, so

(18:17):
I should get a point for.

Speaker 4 (18:18):
That. I guess what that premise takes as necessarily
true is that we are all responsible for consuming
all the content that's already produced. But I think that
we're not, right? Like, the idea is that there's the diversity,
the plurality of content out there, and you can pick
and choose what's useful to you. Like, I certainly don't
feel like there's a pressure incumbent upon me to

(18:39):
like watch all the TV shows that have been produced,
or all the podcasts, or read all the articles. I
just the fact that the things that I want to
learn about are there or see are there seems to
be a benefit to me. Does that ring true to
you at all?

Speaker 1 (18:54):
Yes and no. I think, you know, the pressure that
I feel comes from the desire to feel like I'm
a part of a common experience. There's something shared in
these human experiences. And in a simpler time, we had
more common reference points, just in a different time. Not

(19:16):
necessarily simpler, though in some ways I think it was, but
certainly we could agree it's different. And so that common
reference point might have been a religious one, it might
have been an informational one, it might have been a
labor one, a common experience. For generations, parents and children had very
similar lives, and whatever advice parents could pass on was
based on a set of experiences their child was likely

(19:37):
to have. Work too hard in the field, your teeth
are gonna fall out at thirty, you're gonna die at
forty three. Good luck, kid. That was a lot of
the human experience. And now within a generation there were
radically different experiences, less and less in common between an older
sibling and a younger sibling. And so what we have

(19:59):
to share between each other and pass on just
gets more difficult. I think the flood of TV
shows is a simple example of this larger trend. And
so what show am I watching? What show were you watching?
What do we talk about at the water cooler? Which
no one has a water cooler anymore, but the idea
of that ancient, like sitting around the fire, common human storytelling.

(20:22):
I think there's something deep within us that still demands
and needs that to not feel too untethered. And so
we tether to each other, We tether to the fire,
we tether to a tale. And if everybody can generate
their own tale constantly, then how do we also
create a shared story? Maybe it's a TV show, maybe

(20:43):
it's a new religion. Maybe it's a set of principles
and values. But I think everybody creating a custom lens
on reality will have some negative consequences because we won't
see the same things.

Speaker 3 (20:54):
So I want to circle back, and maybe one thing
I'd want to throw out there, maybe we'll come back
to this, is I'm not entirely sure that any of
us entirely know what capitalism is or means.

Speaker 1 (21:01):
I mean, we use the word, or any of these isms. They're very
It's the very.

Speaker 3 (21:05):
Loose terms for multi-level, multifaceted systems. But this
idea of a shared reality. We've gone over the course
of a couple hundred years from a billion people on
the planet barely in eighteen hundred to eight billion plus
people right now. And it's not just that there were a
billion people two hundred some-odd years ago. Those,
they weren't really connected to each other, so it was

(21:25):
more like several hundred groups of tens of millions at most.

Speaker 1 (21:30):
Yeah, little tribes mostly. I don't think it's possible for
eight billion people to have the same story, and I
don't think it's desirable to throw everyone into a massive
group chat. Hello, Twitter. Right, these things will naturally fragment
and subdivide themselves, and that's okay. A family has subcultures,
much less a species. The thing that I want, I'm

(21:54):
not an advocate of pull the plug on the machines.
I think that's really simple. When I say capitalism, I'm
talking about a desire to produce outcomes that are optimized
for a particular measurement of success, which is profit, at
the expense of many other measures that matter to us,

(22:17):
quality of life, happiness, sustainability of the home planet, a
future that we don't fear in terms of habitability, like
very basic human needs, stuff that capitalism has helped to
serve but now threatens at the extreme to completely undermine.
There is no market for any good or service with
no Earth, at least not a growing one. It's literally

(22:40):
a shrinking market. So these tools and the race to
project them into the world are driven by a particular
perspective on maximizing public market capital defined as returns on investment,
not defined as quality of air or depth of loving relationship.

(23:01):
There are other things we could try to optimize for
we haven't really tried. And when we talk about shared
narrative and the risks, yeah, there's gonna be like a
Breitbart bot. There's going to be an alt-right bot.
There's going to be like a BLM, Bureau of Land
Management I'm referring to, as well as a Black Lives
Matter bot, and that's going to create some chaos. There's

(23:24):
subreddits too for all these different groups, and it's created
some chaos. It's not necessarily the end of the world,
but the combination of the speed with which this stuff
gets deployed, the narrow, I think, drive behind it. These
are, so far, not community-defined efforts. If we said,
all right, listen, these large language models are generated basically

(23:47):
off of the wisdom and data, at least maybe wisdom,
certainly the data, generated by the world. We've scraped Wikipedia,
which is contributions from everyone. We sucked in a whole
bunch of photographs taken by millions of people. But the interest,
the literal financial interest in it is just a couple
of shareholders. It's like one of the largest tech companies
in the world. That seems like a huge disconnect, and

(24:09):
I would feel less alarmed if we had collectively some
deeper stake in these systems, as well as some more
kind of democratic, small d method of determining how we
are going to manage what goes in and how we
decide to create what comes out. Instead, we're just jammed

(24:29):
with it, right? Hey, it's GPT two. Nope, GPT three
point five. No, GPT four. Hey, now we can make
videos like wow, this is so fast, and I just
feel subject to the whims of an almost literal handful
of people. And I don't like that. For eight billion
people in a place where we're supposed to be democratizing
tools and all having a voice, it seems like a

(24:52):
lot of this stuff is being imposed, and so I
just I don't have a deep answer to it. But
I have a deep concern, and I don't think that
is how I want the future defined for so many
of us. So I'm trying to weave in a couple
of the things we've been talking about here and see
if I can get more specific about what might
bother me. The last, it is not last, but I

(25:13):
think it's my other deep one on the concern side. We
can get to the exciting, I'm excited about stuff too,
but on the concern side is one of the things
I wrote about in Puck. I had this clear memory
of GPS. I used to manually make trip plans
with my mom. She ordered things from Triple A and
they helped you route a path on paper. They ship
it to your house, these little vertical maps called TripTiks.

(25:35):
Then I ran the GPS off a laptop in the
car with CD-ROMs. Then we got MapQuest, then
Google Maps and voice-over GPS, and that all made navigation simpler.
It made it smoother, it made it faster, made it
more optimal. And now I never know where I am. Like,
I don't know anything anymore. I will defer to the

(26:00):
screen, which claims to be all seeing and all knowing, versus
my eyes. And if my eyes perceive that looks like
a traffic jam, but the screen says it isn't, I'll
just wait in traffic. If my eyes say there's no
bridge ahead of me, but the screen says you can drive,
I'll drive off the bridge. I haven't done this, but
other people have. And I think there is a deeply

(26:22):
troubling metaphor for our deference to these systems which are
sold and marketed as intelligent, as human like, and as
better than us. So then we don't have agency. We
just, we follow. We all become followers of a system
that we didn't choose to create, and we deny our

(26:43):
knowledge of ourselves. I have this smart ring. I wake
up in the morning and my wife says, how'd you sleep?
I say, I slept pretty good. Then I checked the
app and the ring says I didn't sleep pretty good,
and I'm like, correction, apparently I had a terrible
night's sleep. I need to try harder to sleep tonight
because the ring told me I don't feel the way
I feel, and so I find that deeply troubling, man.

(27:07):
And spending all this time with intelligence of an artificial
nature creates gaps with a different form of nature that
I think we need to try to maintain. Otherwise, we
just follow these generated instructions for what to do with
every moment of our lives. How to tourist city, what
to cook for dinner, where to turn in our vehicles,
who to mate with. It's very optimal, it's very optimized,

(27:31):
it's very efficient. It's also dead. If life is just
following instructions from a computer, we may get to live longer,
but we have less life. I don't know. I don't know.
That's kind of weird.

Speaker 4 (27:42):
The one thing I would say to that is that
I think that we're a little bit more in control
about our choices and how much tech is in our
lives than what you just described. For instance, like, if
you realize that, like, you are sleeping just fine and
the smart ring is telling you otherwise, you probably
get to a point where, like, screw you, smart ring,
you're not useful to me anymore.

Speaker 2 (28:02):
With the GPS.

Speaker 4 (28:02):
Example, one time I was in Bulgaria with my sister.

Speaker 2 (28:06):
We were driving.

Speaker 4 (28:07):
Google Maps told us to go straight.

Speaker 1 (28:09):
We looked at.

Speaker 4 (28:10):
This freshly painted sign from the Bulgarian government that told
us to turn left, and we were like, no, like,
the Internet gods know all, we will go straight. And
of course the straight road led us like into the
woods in the mountains, gravel road, we, you know, popped
a tire, we had to get tow-trucked. Left was
the way that we should have gone. So that seems
to prove your point.

Speaker 1 (28:29):
But the other side of it is you trusted the government. Yes, trust.

Speaker 4 (28:31):
The freshly painted sign from the Bulgarian government. But the other
side of that is, now that I've had that experience,
I'm like, Emma, if your gut reaction is telling you something,
go with that instead of the Internet gods. So there
is an adjustment process here and a little bit more
of a choice aspect to it.

Speaker 1 (28:46):
And here's the good, I think. Here's the opportunity and
the invitation in this moment for us: as we have
a much more artificial existence, we will value natural existence
even more. Like I get to spend a lot of
time in the outdoors, in the woods, on rivers and mountains.
I make this beautiful show on public television in the

(29:08):
US called America Outdoors, and I just, I love that.
I was fly fishing with foster kids two weeks ago,
standing in a stream, just popping trout out. They trained
me well, and it is the opposite feeling of being efficient
and making sure that every physical step I take in

(29:29):
the world is maximally optimized to minimize discomfort and pain.
It was pretty uncomfortable in that stream at times it
was cold, but it was so real and so grounded.
And my experience with the ring, your experience with the
Bulgarian freshly painted sign being way smarter than the smart
tech of Google, which is one of the smartest entities
ever created. Like, the collective wisdom inside the Google machine is

(29:54):
extraordinarily intelligent, more so than any single human being. But
that sign was right and Google was wrong. So yeah,
you're absolutely right. You've got that extra caution. And there's
gonna be some tickle that we all feel where we
get to choose, Okay, what instruction, what feeling, what intuition

(30:15):
do I want to listen to? And we've got to
practice that, though. I think for me, most of the time,
I'm going to follow the GPS. Otherwise why do I
have it, right? If I question everything the machine tells
me to do, then that machine's not doing its job.
So on net, like, overall, we're going to tend toward following.
That's the whole premise of creating these systems. But if

(30:38):
we can decide, maybe not now, maybe not today, maybe
I need to break, put some more agency back into this.
Maybe we need to redesign this thing. Maybe it needs
to ask us questions. I don't know, there's just gotta
be something more, And I think it's going to force
many of us to ask why, right, what if we
could eliminate so much work and so much drudgery and

(30:59):
so much labor, again, always promised, never delivered. But maybe
this time is different. Then what would we do with
all that time? Would I let my report write itself,
so I could take a walk with my wife and
just connect more. Would I let those emails send themselves
so I could call my sister instead? That's a great
trade off. Let the email write itself, Let it respond

(31:20):
to the email that someone else's robots send. Let the
robots do all the emailing, and just give me a
weekly report on what I said this week, so that
I could meditate more and do some psychedelics and to
try to achieve some higher plane of existence. That's glorious,
that's really great. Most of the employers of the world,
they won't see it that way, and they'd be like, oh, great,
you don't have to focus on writing emails. You can

(31:42):
do XYZ-ing instead. You can proofread the robots' emails.

Speaker 3 (31:59):
Okay. So, on the flip side of all this, and
I do love that. It's totally right about the GPS.
You know, one of the effects of our technologies in
this case is no one's ever lost and no one
ever knows where they are. And that is true for
a plethora of our daily realities. Again, part of the
ethos of doing what we go right is we do

(32:20):
have a very human tendency to look at problems, both
evolutionarily and culturally. That's not always a bad tendency, right.
We are constantly on high alert for that which will,
if not now, soon be a threat and be a danger.
The flip side is we always lose sight of the past, right,
because the past isn't particularly real. And the future isn't either,

(32:42):
But the future we can invest with hopes and dreams,
and the past we can imbue with unrequited desires.
And for a lot of human history, labor went just to
producing food. If you looked at the amount of energy
and human hours to produce a loaf of bread two
hundred years ago versus now, right, the freeing of human labor
from doing these things has in fact freed up all this

(33:04):
other labor to do email, right, meaning, you are right
that one type of work pushes out another type
of work.

Speaker 1 (33:11):
And to watch Netflix. Right, we are more entertained than ever,
you know.

Speaker 3 (33:15):
Our leisure time has gone up and our work time
has become more elastic. But I guess I would push
back and offer the following. I'm not saying I'm right
about this, it's just something I think about, which is,
if you really did, in your sort of small-d
collective global democracy way, try to get buy-in from

(33:35):
a planetary population about what these technologies are doing, or
what the arc of modernity and capitalism has done, I think.

Speaker 1 (33:44):
Net you would probably find more buy in than not to.

Speaker 6 (33:46):
Like.

Speaker 3 (33:47):
These things have allowed for far less physical labor, longer lives,
and the ease of procuring material necessities and material desires.

Speaker 1 (34:00):
So while a lot of the.

Speaker 3 (34:00):
Benefits have, unfairly or inequitably, I don't know whether it's
unfair but it's certainly inequitable, flowed to the few and not
to the many.

Speaker 1 (34:08):
I'm not sure that I agree.

Speaker 3 (34:09):
That the system itself would be radically different
if it weren't quite so foisted upon you.

Speaker 1 (34:15):
But was in fact, like voted upon. When I talk
about small D democracy, I'm not talking about updown votes.
And I think, just as you rightly pointed out, like
what do we mean when we say capitalism, we're not
always clear about that. What do we mean when we
say democracy? A lot of our experience of democracy is outsourcing, right,
It's delegation to a chosen few to, to oversimplify, make

(34:38):
decisions on our behalf, and there's all kinds of corruptions
of their motives and perversions of interest with money and gerrymandering,
and a lot of things that have diluted the ability
of our representative democracies to actually reflect the will of
the people. Again, not a partisan statement. I'm a very
partisan person, but that is academically and mathematically demonstrable.

(35:00):
So these citizen assemblies and deliberative democracy are like one
form and practice where random assortments of people are selected,
almost like juries, to develop policies, to weigh in on budgets,
to advise or determine political decision making on behalf of
a community, could be a city, could be a nation.
I don't think an eight billion person citizen assembly makes

(35:22):
any sense, but smaller collectives can roll up to something
that better reflects what people actually want. We have a
failure of democracy around abortion access and reproductive rights in
the US, around sensible regulation of firearms safety and access.
In this country, vast overwhelming numbers of people want a

(35:43):
certain thing. Our democracy, in quotes, has not delivered it.
It's failed in those particular points, and there are a
lot of other examples. If we could have a better process,
we could have a better outcome that better reflects the nuance,
the ranges, the exceptions that most people actually feel. And
so when it comes to something like AI, we can,

(36:06):
through a process bring in the hopes, the dreams, the concerns,
the fears of many more people than a couple of
machine language researchers and a couple of VC firms who
think that what we really need to optimize is illustrations, maybe,
or asking workers what they would want to

(36:27):
use these tools for rather than just imposing it. You
might find that there's basic, low-tech ways this
happens already, through surveys. But that's why I refer to
more small-d democracy in the process. Part of what
I'm hinting at is a different way of surveying and polling,
a different way of gathering, soliciting input, and demonstrating alignment

(36:49):
of desires or non desires in a way that our
current political system and sometimes our business system are unable
or have been increasingly unable to deliver on. On the
what could go right: what could go right with all
this AI stuff is, we are buried, we're overwhelmed with

(37:10):
confusion around our financial services, around our health insurance situations.
My friend Ron J. Williams has this view of like
radical comprehensibility, and that is aided significantly by these tools. I unleashed
ChatGPT on my one hundred and eighty page health insurance
coverage policy document. It was never meant for me to

(37:32):
understand that document. These businesses, so many of them, they
make money by making sure I don't understand how to
get what they're actually offering me, and they hope I
won't figure it out. Like it's a really shitty way
to operate, but it also can be really profitable. If
we make people jump through thirty hoops and we know
fifty percent of people only jump through two on average, we're

(37:55):
good and you could pick a business. There's a lot
of them that operate this way. These tools give me
an army of robocallers, of legal scholars, of sentence diagrammers,
and researchers to throw against that wall without investing all
the time it would take to become an expert myself,

(38:15):
or just wait on hold that long to argue with
an agent. I love that. I love that adversarial relationship.
I love evening the playing field. And then
can imagine that for coordinating action, for climate, for making
policing much more sane, and actually a public safety interest,

(38:38):
not just an enforcement of property rights interest or a
refuge of racism and slave catching in the US as
we've practiced it. That we could unleash these tools in
the arms of civilian oversight boards, in the arms of judiciaries,
in the arms of city councilors and activists to say,
all right, let's analyze this in a much faster way.

(39:01):
When the Justice Department has to come into a city
and they get the right to review, I forget the
specific language for what they do, but they basically take years
combing through and observing, and then they write a long
report and consent decrees. We don't have to do that
one at a time. We could just say all police agencies,

(39:21):
all meat inspection plants. Everywhere there's imbalances now and
the folks who kind of abuse the advantage win because
they just have more time. Nope, we can balance that out.
So I think that's a great possibility, and I want
to see that. I want to see it for climate
stuff in particular, because we have to coordinate such complicated actions.

(39:42):
Last example, I'm actually sitting on this, right, and I'm
holding up this item. It's called FutureCard. It's a
Visa debit card that gives you five to six percent cash
back for your lower carbon purchases. You buy an electronic
item through Back Market, this secondhand marketplace, instead of new,
you get more points for that. I'm a small investor
in this, disclosure. They built something on GPT: chat dot green.

(40:07):
It's a green GPT, and it lets you figure out all
the subsidies for solar, for wind, for graywater systems that
you're eligible for, and a lot more. It's basically a
green economy expert at your fingertips, and you can throw
all kinds of stuff at it from your life, and
it answers all these questions that would take me weeks
to figure out the Riverside County rules on this and

(40:29):
the LA County rules on that, and in the city of,
oh my goodness. And that's not designed to make it hard.
It just is hard. So to simplify to make more accessible.
It's incredible the effect on literacy. If you're not literate,
these tools can help you behave as if you were
and get access to the information and the power that

(40:50):
comes through literacy without having to go through all the
hoops that you might not have time to do.
A friend of mine is not so great at English.
He's using it to improve his email communications, so he
shows up more professionally, people understand him better, his dealings
are smoother. Scale that, scale that. So, yeah, it will

(41:11):
be truly unlocking potential, freeing us up, enabling folks who've
been disabled by their circumstance, by oppression, by whatever. I'm
here for.

Speaker 3 (41:21):
That's a really good note to wrap up a
conversation that's in the middle of a conversation. So hopefully
we can keep having the conversation. And clearly, like a
lot of things that we talk about, and none of
these are terminal conversations, right. These are not like election analyses.
These are like long term who are we?

Speaker 7 (41:41):
What are we?

Speaker 3 (41:42):
How are we going to shape the physical and spiritual
and technological environment that we are all swimming in? And
I think one thing that I do like about your
writing and your speaking is that you are curious. And for me,
curiosity is the openness to the new, and the openness
to learn about the new and to not allow yourself

(42:05):
to constantly just reimpose an a priori framework on that
which is unfamiliar and new. Otherwise all you're doing is
just stamping reality with a very rigid template.

Speaker 1 (42:16):
These calls to pause AI, for example, for six months,
an arbitrary time unit, and an unclear meaning around pause.
My instinct instead is toward engagement, right. So the way
we resolve this, I don't think is just unplug the
machine and then figure it all out and then plug

(42:39):
the machine back in. And I think what Sam Altman
and OpenAI have said, which I find kind of
shocking but don't entirely disagree with, is we had
to release this in beta because we couldn't figure this
all out in the lab. And so there is some
level of collaboration, like the concerns I'm raising, the excitement
you're feeling, and everything in between. That is a somewhat

(43:03):
democratic process in the sense that we're participating. I want
the terms of that participation made much more clear and
explicit, and blah blah blah. But I wouldn't
be able to offer these critiques or these hopes if
I hadn't been able to play with the thing myself,
and that is a really important part. If I could
offer one last I think hopeful possibility here. What has

(43:25):
jumped out the most for me from play: I've played
extensively with GPT four, with Midjourney five. Those are
my primary experiences, and then a little bit of production
tools around video editing, audio editing for my own workflows.
But those first two examples, I can ask it something
and it can give me an okay response if my prompt

(43:46):
is eh. If I focus on the prompt, really make
it the most specific, the most nuanced, kind of
speak it in the language I know it is primed
to understand, better than just my own natural way
of speaking, I get much higher quality responses. And I've
done these experiments with like average prompt versus super prompt,

(44:09):
and it just puts the focus on how you ask
and it gives us this superpower, this performance enhancing drug,
this thing that I think of increasingly as magic, where
it's, oh, we're casting spells. And with some, you can cast
the spell and turn your friend into a frog. Whoops,
sorry, I invoked the wrong spell. Or you can turn

(44:32):
water into wine, which is a wonderful spell unless you
really just need hydration, in which case that's not the spell
you want at the moment. But that's the point.
This is going to force us, slash invite us,
to be much clearer about what we actually want,
and I think there's a micro little metaphor in that,
the clarity of the question. Like, what are you going

(44:54):
to use this power for? Literally how are you going
to ask? And that will determine what we get out
of this in your next chat GPT session or over
the next decade, as we ask ourselves what are we
going to do with all this power.

Speaker 4 (45:11):
I think it's a perfect way to end. I'm glad
you said the bit about the collaboration at the end
as well, because I was going to remark upon that too,
that now when you open chat GPT there's at least
that little warning thing at the bottom that's like chat
GPT may make up facts like.

Speaker 1 (45:25):
Please, you know, be aware.

Speaker 4 (45:27):
Yeah, but again it's what you said. This comes out
of a dialogue with the public, and whether or not
we're satisfied with the amount or scale of dialogue is another thing,
but there is some So yeah, anyway, I don't want
to take too much away from your final statements because
I think they're really poetic and we'll see what the
magic of AI does for us or does to us.

Speaker 1 (45:47):
So thank you so much for coming on, and we'll
determine it. We won't just see it, we'll create it ourselves.

Speaker 3 (45:55):
So I love that, love that, love that. I guess, again,
there are simply people who articulate their concerns and hopes
about the world in a way that I think is
more likely to lead to a constructive future, and there
are those who do so in a way that seems
like it's going to lead to a more conflicted, less

(46:15):
constructive future. But Baratunde clearly embodies that both hard-edged
but open minded approach to what do we do about
the world. And we didn't even get into a lot
of the work that he's done over the past years,
which is more about polarization and race and inequality. And
even then, when dealing with these issues that are really fraught,

(46:37):
he looks for and tries to create connection.

Speaker 4 (46:41):
And he's funny, right, That's the other big charm and
talent that not everybody has. He was part of the Onion,
he was a producer on The Daily Show with Trevor Noah,
and there's certainly something to be said about the power
of humor.

Speaker 3 (46:53):
So hopefully we will continue to have more conversations with
him and with others like that. But again, to me,
that kind of epitomizes what we're trying to do and
what we're trying to amplify.

Speaker 1 (47:08):
Let's talk about the news, shall we.

Speaker 4 (47:14):
First news story that we have for today is busting
the myth of the broke Millennial. And this is an
article in the Atlantic by Jean Twenge. She has a
new book coming out, so it's actually an excerpt from
her new book. But the excerpt was fascinating. Right because
I'm a millennial, I can say personally that that is
a feeling among my friends that we got dealt the
short end of the stick coming of working age during

(47:36):
the financial crisis, that we had the pandemic. And there
was very much this feeling of: we're poor,
we're broke, we're saddled by student debt, we can't
buy a house. Houses are really expensive anyway, so screw it.
This life ain't for us. But the article is really
about how it is true that millennials were behind when
they first started working, but now they have caught up.

(47:57):
Jean Twenge calls it a breathtaking financial comeback that
started in the late twenty tens, and so she does
all this comparison of data of median household incomes compared
to Gen X and Boomers at the same age. Basically,
it's actually greater than Gen X and Boomers at the
same age. Also, adjusted for inflation, among millennials one out of

(48:18):
three of us have a college degree by our
late twenties. It's the first generation ever to have that
kind of number, which means that our income is up,
and there's fewer millennials in poverty than Gen X and
Boomers at the same age, and home ownership rates are
almost the same as like forty eight percent to around
fifty percent, which all which is to say, this reputation
as sad and broke is no longer true, and it's

(48:40):
important that we rectify that and look.

Speaker 3 (48:43):
I think part of why that image
remains so profound is that media people, or people who
are creating content, tend to be overrepresented in places
like New York and coastal cities like San Francisco, LA.
Those areas are actually pretty unaffordable relative to even pretty

(49:04):
high incomes, and so there's an understandable tendency to magnify
that as.

Speaker 1 (49:09):
A global reality.

Speaker 3 (49:10):
You yourself had the experience of having a pretty decent
job in New York City, but that being inadequate to
a lifestyle in New York City, and that is a
real problem of New York. So it's not like there
aren't pockets of places where cost of living and income
are just not favorably aligned under any circumstances. But that
doesn't make it actually nationally true.

Speaker 4 (49:32):
Right, or even just that it was true that millennials
were having a hard time, but it's no longer true
and regardless of where you live, right, And it's really
important. If that were true, right, it
does call for what we see, which is, should we
really be trusting capitalism with our economic fortune? Should we be

(49:52):
looking to other systems of government? It starts those trains
of thought.

Speaker 1 (49:56):
Right.

Speaker 4 (49:56):
But if it's not true, if the American dream is
still alive for millennials, and will continue down through Gen
Z and Gen Alpha, hopefully, that means that we need
to take another look at things and think that actually,
maybe the system is working all right.

Speaker 3 (50:11):
Exactly. And recognize, look, if you're an American, it's been
a weird twenty plus years since the collapse of the
Nasdaq bubble in March of two thousand, through nine
eleven, through everything else that has happened subsequently. It's been
a really challenging cultural time, irrespective of the periods
when it's also been challenging economically. What we have.

Speaker 4 (50:31):
Next is gen Z blowing past other generations. That's the
headline from Yahoo when it comes to four oh one
k's and retirement saving. So this is actually, if I
come from the narrative of the broke millennial, I think
like gen Z took a look at the world around them,
They're like, ah, let me save a bunch of my money.
Sixty two percent of eighteen to twenty four year olds
contribute to four oh one k's in twenty twenty one,

(50:52):
compared to only thirty percent in two thousand and six.
And actually, despite what I just said about gen Z
being scared, those are the personal anecdotes that I hear from
people that are that age, that they were freaked
out by what happened to millennials. But Yahoo actually says
that the reason so many more gen Zs are contributing is something
that's really a small, boring change: automatic enrollment

(51:13):
in four oh one k plans went way up. In
two thousand and six, only eleven percent of employers offered
automatic enrollment; by the end of twenty twenty one, it's
about fifty percent. So, simple fixes.

Speaker 3 (51:26):
That is kind of wild, and especially when you consider
that every single statistical model of investing and saving shows
that the earlier you start, the delta between starting in
your early twenties versus early thirties and what you will
then have by the time you're in your sixties is massive.

Speaker 1 (51:43):
So it's not just.

Speaker 3 (51:45):
isn't it good that there's a little more preemptive savings;
it's that the change it will make across a lifetime,
if past trends hold, which obviously they might not, really
sets that cohort up particularly well in the future. So
another piece of news, and that one I had no
idea about.

Speaker 1 (52:05):
That's really cool.

Speaker 4 (52:06):
So let's talk a little bit about the right to repair.
It's a topic we I don't think have covered on
the podcast at all.

Speaker 7 (52:13):
We live in a free market, but when it comes to
repairing electronics like smartphones.

Speaker 1 (52:18):
You are not free to choose where to go.

Speaker 7 (52:20):
If you were the hapless person with a broken gadget,
you'd immediately go to the Apple store, and that's exactly
what Apple wants you to do. The company and many
others restricts how and where you can.

Speaker 1 (52:31):
Repair your stuff.

Speaker 7 (52:32):
Anything that has a chip in it right now is
probably impossible to repair without using the manufacturer.

Speaker 1 (52:38):
That means tractors and cars, it means your smartphone, It
means increasingly the refrigerators and washing machines that people have
in their homes. When something breaks and the only solution
is to take it back to the manufacturer, they can
charge you whatever they want.

Speaker 8 (52:56):
So this is a MacBook Pro that the Apple Store
said would cost twelve hundred dollars to fix and wasn't
worth doing. So if I walked in off the street
with this problem, what would you charge to for the
repair you.

Speaker 3 (53:08):
Just did, depending on the model, anywhere from seventy five
to one fifty.

Speaker 4 (53:12):
June twenty two, twenty twenty two saw the United States'
first ever right to repair law. That was in Colorado,
and it was actually for wheelchairs. And there's been a
spate of laws since then, but the one that I
wanted to highlight today from the month of April, again
from Colorado, and it's the right to repair for farmers.
And I wanted to highlight it because it's one

(53:33):
of those things where, when you learn it, you're like, that's crazy.
Like, I did not know that farmers did not have
the right to repair their own equipment, So now they will,
at least in Colorado. Generally, retailers like John Deere have
made farmers come to their own stores to get tractors
repaired, or other farm machinery. Now they will be legally

(53:56):
obliged to provide farmers with diagnostic tools, software documents, and
repair manuals starting January first, and similar resources must be
made available to independent technicians. So it seems like a win
for farming.

Speaker 3 (54:09):
Absolutely, yet another case where there should be some balance
between personal autonomy and corporate bottom line. Yeah, so that's
some good news of the week, Emma, thank you.

Speaker 4 (54:20):
That is. I do also have to give a correction
on news that we covered previously. I think it was
a couple of weeks ago on the fentanyl test strips,
and at the time I said that only two states
allowed the sale of fentanyl test strips. That's because I
completely misread a Cato Institute report. That is incorrect: thirty
six states now allow the sale of fentanyl test strips,

(54:40):
which allow people to test to see if the drug
they're about to take has fentanyl in it. That's an
addition of sixteen since January twenty twenty two, so the
movement is definitely new. And at the time I said
there's some movement here, not that much. I'm thankful to
be wrong in a good direction.

Speaker 3 (54:56):
It's actually that you underplayed your already good story.

Speaker 7 (55:01):
I did.

Speaker 1 (55:02):
I did.

Speaker 4 (55:02):
And the remaining states that where it's not legal, there's
pending legislation, so we're actually probably going to be seeing
soon that it's legal across the US.

Speaker 6 (55:11):
Cool.

Speaker 1 (55:12):
Good to know.

Speaker 3 (55:13):
Thank you all for joining us for this.

Speaker 1 (55:15):
What Could Go Right? and our conversation with Baratunde.

Speaker 3 (55:18):
This is weekly. Please sign up for the newsletter,
also called What Could Go Right? Go to the Progress
Network dot org and you can sign up and it's free.
So thanks Emma, and we'll talk again next week.

Speaker 2 (55:29):
Thanks, I agree.

Speaker 4 (55:40):
What Could Go Right is produced by Andrew Steven, executive
produced by Jeff Umbro and The Podglomerate. To find
out more about What Could Go Right, The Progress Network,
or to join the What Could Go Right newsletter, visit
theprogressnetwork dot org.

Speaker 2 (55:53):
Thanks for listening.