Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Good News for Lefties and America. Hello, and welcome to
another edition of Good News Deep Dive, where we try
to find good news and get into it a little
more in depth than we do with just the news headlines.
It's important to know that the stories are actually happening
(00:30):
that are positive, and then we want to get into why, because we almost never have time during the daily episode that we present to you to tell you why and how good it is. And you know, this is a little bit offbeat, because I don't know how good AI really is, and I've been thinking about it a lot
(00:52):
because it is being pushed on us. It is pervading
our lives. It's become a huge portion of our economy right now. And I think there
is a good aspect to it, but I think there
are also a lot of aspects to it that we're
not talking about, and so on the program today, I
have invited Jacob Ward to speak with us. He is
(01:14):
a former technology correspondent for NBC News who previously worked as a science and technology correspondent for CNN, Al Jazeera, and PBS, and he is the author of The Loop: How Technology Is Creating a World Without Choices and How to Fight Back. He also runs the newsletter and podcast The Rip Current.
Jacob Ward, welcome to Good News for Lefties. Thanks so
(01:36):
much for being here.
Speaker 2 (01:37):
Thank you very much. It's great to be here.
Speaker 1 (01:39):
So let's clarify what we're talking about when we talk about the AI that is out there right now and being pushed, especially the generative AI that most people are familiar with through ChatGPT. This is a pretty specific thing. What is AI, and what is it not?
Speaker 2 (02:03):
So when we think about this current moment of generative AI, right, where you can make any silly, you know, dinosaurs-having-sex-with-trucks kind of imagery, or write knock-knock jokes for your wedding vows or whatever it's going to be, right, that is a specific flavor, a specific use of what's
(02:25):
called a transformer model. And so these are the thing that revolutionized AI in about twenty seventeen, twenty eighteen. Prior to that, you basically had to sit a bunch of humans down and have them model good decision making around something, you know, fill-in-the-blanks kinds of statements, or pictures of dogs and cats, tell us the difference between the two, right?
(02:47):
And once you had done that enough, you could do what was called human-reinforced learning and teach one of these pre-transformer-model systems how to use the example of human judgment to make choices. Well, what the transformer also made possible is basically pouring a huge amount of undifferentiated data into the top of a funnel, and then it can create little shorthand rules for itself and pull out
(03:12):
of the bottom some insight. And as a result, what you can do is take an ungodly amount of information and find patterns in it that you, you being the AI, can regurgitate really quickly, in a way that human beings never could. And so when we think about, you know, what AI is and what it isn't, one thing that really upsets me is just how often we use
(03:34):
these sort of human terms for AI systems, like reasoning and understanding. You know, they would love to sort of anthropomorphize these systems. I could bore you to tears about why our brains want to do that. But the truth of the matter is that these systems are essentially just predicting, statistically, based on all the
(03:58):
patterns they've seen in human writing, what will come next in the conversation, right? And you can poke and prod it a little bit to make it sort of, you know, focus in on one topic or another. You can give it some guidelines, don't go here, don't do this, and you can also tell it to constantly come back and remind itself what it's talking about, right? So that runs
(04:19):
out after a couple of hours. But really, I mean, the thing to understand is that these systems are wicked stupid compared to us.
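To make the guest's point concrete, here is a minimal sketch of that next-token prediction idea, assuming Python with the open source Hugging Face transformers library and the small GPT-2 model; it is an illustration of the technique, not the commercial systems discussed in this episode. The model's entire job is to rank which token is statistically most likely to come next after a prompt:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, openly available language model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Knock knock. Who's there?"
inputs = tokenizer(prompt, return_tensors="pt")

# One forward pass yields a score for every vocabulary token at every position.
with torch.no_grad():
    logits = model(**inputs).logits

# Rank which tokens are statistically most likely to come next after the
# prompt. There is no understanding here, only pattern matching over text.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(token_id)), float(score))

A chat system is, at its core, this same step run in a loop: pick a likely next token, append it to the text, and predict again.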
Speaker 1 (04:26):
Right, right. I mean, that's the bottom line, isn't it? It's not really intelligence as we know it, as applied to human beings. And yet that's kind of what we're being told, in a way.
Speaker 2 (04:37):
Well, that's right. And what we're seeing more and more, what I spend so much of my time as a journalist and as a sort of advisor to companies thinking about, is how quick the big companies making these systems are to try to get you to treat it as so much more than it really is, and to try to get you to really depend on it for
(05:00):
something much more than just logistical help. And so, you know, just this week, you and I are speaking on Thursday, November sixth, there was a video that came out from OpenAI in which they announced some changes to sort of their mission and their infrastructure costs, which are ungodly, and we could talk about those. But one of the takeaways was they really want this thing to be as much
(05:23):
an emotional companion as a productivity tool. Wow. And that is where I really get worried, because your average human being, including myself, has trouble understanding that this system doesn't speak English. It doesn't really know English. All it does is read patterns and make a prediction about what should come next in the conversation. There's no good human analogy to this. But
(05:44):
it's like if you had somebody so robotically ingenious that
they could just make the sounds of another language well
enough to fool someone else into thinking they speak that language.
That's kind of where we're at. There's no reasoning, there's no understanding. It is just that. And so the idea that we're going to, you know, have something as stupid as that standing in
(06:06):
for your kid's best friend, or your kid's first, you know, love interest, yeah, or your, you know, therapist, right? It really bothers me.
Speaker 1 (06:16):
Yeah, it is very worrisome. And not only is it ubiquitous and being pushed on people to do very personal things, but I also see it being accepted, you know, by very powerful, very wealthy people, as something that
(06:37):
they should use, like, every day. I mean, I literally heard a very wealthy person the other day say that, you know, their chief technology person urged them to stop using Google and to use AI for search. And then that to me
(06:58):
is crazy, because I know from having used it myself and seeing other people use it that it comes up with complete falsehoods on a regular basis, and you have no idea what the source of those is. I mean, yeah, sure, you can go on to Google and get links of people saying nonsense, but at least you have an idea where that's coming from.
Whereas with generative AI and things like ChatGPT,
(07:21):
because of the language model, it's being presented in a way that makes it seem like, oh yeah, this is just a natural thing, and our brains glom onto that, and it gets through our defenses much more easily. I see that as a very, very dangerous thing.
Speaker 2 (07:40):
Yeah, it's a really big deal. And I mean, one of the things I think we just sort of have to better understand about ourselves in the United States, and we don't particularly like to think about it this way, because we like to believe that we are in charge of ourselves, and, you know, we love to blame drinkers for their drinking problem and gamblers for their gambling problem, is that we're not good at saying that your brain has its own pathways and
(08:01):
sometimes acts contrary to your interests. But you know, the essence of my book is looking at, you know, more than one hundred years of behavioral science, which shows that one of our great evolutionary gifts is the reason that you and I are sitting here in clothing and in shelter, you know, having had a nice meal. You know, I just
(08:22):
finished my nice bowl of rice pudding before sitting with you.
Speaker 1 (08:25):
Well, that sounds delightful, by the way. Rice pudding, delightful. Right, I love that we're here.
Speaker 2 (08:30):
Thank goodness I'm not fighting for my life every day. And one of the reasons I'm not is that we have developed this incredible ability to take cognitive shortcuts. We are great at it. Our brain loves to outsource its decision making so it doesn't have to think about it, doesn't have to worry about it, and so it can say, oh,
(08:52):
I know this one, all right, I know this story, I know a bus ride, so it doesn't have to show you a single detail of the bus.
Speaker 1 (08:57):
Or, I know how this group of people acts.
Speaker 2 (09:00):
I know how this group of people acts, exactly. A huge, huge other problem, right? We have this unconscious bean-counting machine that is in there developing our biases for us, and in the case of something like AI, it is the perfect thing for hacking a system that we've had for millions of years now. It basically wants us to go with our gut. It doesn't want us to think it through. It doesn't want
(09:21):
us to, you know, make a rational choice. It wants us to say, oh, I know this one, and I'll just go with the goosebumps I'm getting from this, that's how I'm going to make this choice, right? And it is a great gift. It has kept us alive because it kept us efficient and moving quickly and bonding with our fellow tribespeople and all of that. But man, is it trouble now. And the
(09:43):
big thing to also remember is that these systems are entirely made and held inside for-profit companies. Those companies are now trying to get this system to really be a daily part of your life, and that, I think, is something we have so little defense against.
Speaker 3 (09:57):
Let's pause, we'll be back in just a minute. This is Good News Deep Dive. I'm Beowulf Rocklin. And we're back with more Good News for Lefties Deep Dive. I'm Beowulf Rocklin.
Speaker 1 (10:10):
I'm wondering, is the reason that this is so pervasive now, in this moment, in twenty twenty five, does that have a lot to do with the fact that those shortcuts have been so appealing to the gatekeepers of technology? Because it really seems to be everywhere, and I've just been overwhelmed by its presence in popular culture. Every
(10:36):
product that you see technologically is trying to push some form of AI. It's becoming a larger and larger part of the economy. I mean, is it, not literally, but is it engaged in a sort of capture of the decision-making class?
(10:57):
It seems to me it has, to a frightening extent.
Speaker 2 (11:00):
I definitely feel that. I definitely feel like there is a very specific frailty that the human brain has around outsourcing our decision making that this thing takes perfect advantage of. Absolutely. And we should also remember, right, this stuff is trained on, you know, the incredible amount of surveillance data that we turned over, or were
(11:22):
convinced to turn over to some of the same companies
that are now making these systems over the course of
the social media era, and so you know, we were
already at the far end of a system that was
taking advantage of our most primitive sort of lizard brain.
Now all of those patterns have been fed into this system,
and so it's going to be the most incredibly appealing
sort of system. At the same time, it is also,
(11:43):
I would argue, a function of, I mean, it's a couple things. There's the incentive structure of these companies, which suddenly have to make back a huge amount of money. You know, people talk about Nvidia being, you know, the first company in the world worth a trillion dollars, or five trillion dollars now, the most valuable company in the history of companies. And this incredible amount of money, a
(12:03):
trillion dollars that OpenAI is expecting to spend on its infrastructure costs. Well, remember, that's money they're going to have to pay back at some point. That's a debt, right? And so these companies are going
to have a huge incentive to make money off of
you when they can. So they're going to push this
really hard. And I would say, I don't believe that anybody, I think it's a very rare sociopath, who
(12:26):
gets into a position of power and does things for
truly evil purposes. Almost everybody has a belief that they
are pursuing, and in the case of the leadership of
a lot of these companies, I think they honestly believe
that they are democratizing a kind of accelerating power that's
going to unlock a huge amount of creativity and scientific discovery.
And on the far end of that is going to
be a utopia. You know, I really believe they have
(12:48):
this kind of faith-based conviction that that's what's coming.
And I would just argue, now, there are lots of people I would love to throw this tool at. As you mentioned, I have a podcast called The Rip Current, at the rip current dot com, and I have a piece up there in my newsletter about, you know, just give this stuff to
(13:08):
scientists for five years. Let's do just scientists for five years. Let's not make AI girlfriends. Let's not, you know, have it doing your kids' homework yet. Let's just give it to scientists, because those folks can make incredible use of this stuff. A pattern-recognition tool that can find the statistics in huge bodies of data? Let's go. I'm all for it. But you know, I've
(13:31):
talked to people who say that they could probably, you know, end lead poisoning in this country by using survey demographic data, birth records, and addresses, and you could go in and preemptively strip out the lead paint from kids' apartments before they come home from the hospital. You know, like, incredible things could be done with this that could never be done with human analysis. But here's the thing. Nobody's making
(13:52):
money off stripping lead out of apartments. You know, that's not how we're going to make money. And so it's, you know, it's the profit motive, of course, there you go, that I think is twisting us in a difficult way.
Speaker 1 (14:06):
Yeah. And that is, and continues to be, the pervasive problem, I think, behind technology, because there really are many wonderful things. But if we talk about how it has developed over the past several years, maybe in the past generation or so, there's the phenomenon that Cory Doctorow has put out there in the public of enshittification.
Speaker 2 (14:29):
It really has. That's a great way of shorthanding it. That's right. Keep going.
Speaker 1 (14:34):
I'm sorry, yeah, no, no, no. It's that things have gotten progressively worse around technology, because it is more profitable to make them poorly and, you know, to put up these barriers in order to find these different profit streams. And that really is ultimately what's wrong with the technology.
(14:55):
So, as many bad things as we have said about AI, and this is nominally supposed to be a positive program about good news, there actually is good news, because it has the potential to do some incredible things. And you just talked about one. Like, what other ways, if we let the scientists have it, or
(15:17):
let humanitarians have the technology, not that they don't have it, but if we just let them have it as opposed to people with gazillions of dollars, like, what sorts of things could we be looking at? What sorts of possibilities could we have that could actually benefit humanity?
Speaker 2 (15:38):
So here's a mundane example and a very sort of exciting and lofty example. The mundane example is, by virtue of the system that a transformer model makes possible, you could basically take the randomness of human need and matchmake it with
(16:00):
social services in a way that would reverse the way we currently do it. Right now, if you are someone struggling to make ends meet, or you've got a family member in crisis, or whatever it is, right, it's on you to go find the services you need, to navigate the bureaucracy and figure it out.
Speaker 1 (16:20):
I will add that it is a huge process, and that, at least in the United States, there's no central location where you go to get it all. You have to go here and there and patch all these pieces together.
Speaker 2 (16:34):
You don't know what you've missed out on, right? You are sailing in the fog. Well, I have spoken to people who are actively working on the notion that you could matchmake people with the services they need, in reverse. You just have
someone arriving. Let's say a person moves to a state for the first time, and, you know, this is
(16:55):
again a mundane example, but you've got a list of state jobs that have gone unfilled. And as much as we're having some trouble with employment right now and worrying a lot about AI taking away jobs, there's a huge number of jobs out there that need people.
You know, bus drivers, sanitation engineers. At the state level and the regional level, there are
(17:19):
huge numbers of jobs going unfilled that we could use. And so there is a system, I believe it's New Hampshire, that is trying to basically make it such that when you move there or you arrive as a new immigrant, you get matchmade with, you know, hey, would you like to do this job? I mean, like, what a tremendous idea. Now, to make that possible, you've
(17:40):
got to make sure that you're not creating a kind of surveillance panopticon that can go the wrong way. But we have examples of countries like Estonia, a very small country but a very technologically advanced one, where they've created a whole sort of Christmas tree system. There's a bunch of ornaments hanging on the tree, and each ornament is a different data pool about your life. Here's your police records, your banking records, here's, you know,
(18:01):
this stuff, and only you can access all of them.
But you could conceivably use something like that to figure out, here's a need you might have, you know, let us fill that need for you. There's an air traffic control for social services that AI could make possible. It'd be unbelievable. So that's the mundane example, right? Here's the lofty example.
(18:22):
I was talking to a guy, Aza Raskin. You may know Tristan Harris; they did a documentary together called The Social Dilemma, where they were highlighting the problems with social media. These are a bunch
of former tech people who now feel great regret for
what they have foisted on the world, and they've gone
on and become very good activists and advocates. They speak
(18:43):
to Congress and they try to educate people on how
tech companies make decisions and how we might fight back
against their influence. So Aza Raskin does that work. And then on the side, his side hustle is, he's got a whole project where he's using these transformer models in the same way we were talking about a moment ago, how you could have a very smart but robotic human imitate the language of another
(19:07):
nation without understanding what they're saying. Yeah, well, he's using a similar kind of architecture to try and decode animal language. Oh wow. Because you can do the same kind of thing. You could even begin communicating back with these animals without really understanding what you're saying yet, but eventually you can get to meaning, right? And he's already at a place where he's starting to decode meaning.
(19:27):
He's learning, for instance, that whales have names for each other. There's a whole language of individual identifiers that they use.
There's also a crazy thing about, I can't remember what kind of animal it is, one that is clearly teaching. He's decoding things about how they
(19:48):
are teaching their young to do stuff. You know, so there's some really cool stuff that we can do around decoding nature. I mean, you point one of these kinds of pattern recognition systems at the stars, or at, you know, precancerous mole images or whatever, right, it can find the patterns, and we love that. We love that,
(20:09):
we want that to happen. So there's some great stuff coming. I don't want to keep dragging us back to a tough place, and we'll get into some other things that I'm positive about. But the essential tension here, I think, is really well summarized by a guy named Dario Amodei, who runs Anthropic. He's the CEO of Anthropic, which is one of the competitors to OpenAI and others, and he told the
(20:30):
news outlet Axios in an interview, he said, you know, you should really be worried about what we're making, because, and I'm paraphrasing here, but he said we might very well cure cancer and create enormous riches, but twenty percent of people might not have a job. These things are held in balance here. And so this faith-based conviction that a lot of these folks have, or at least
(20:51):
profess to have, you know, really holds that it's going to be worth it to get to the far side.
And I think there's going to be all kinds of
things that are going to be miraculous and amazing on
the far side of that. So this is really cool
stuff to look forward to. We just got to keep
an eye on all the brain hijacking they're going to
do to make money.
Speaker 3 (21:06):
We'll be right back with more. This is Good News for Lefties Deep Dive. I'm Beowulf Rocklin. Thanks for sticking around. We're back with more on this Good News for Lefties Deep Dive. I'm Beowulf Rocklin.
Speaker 1 (21:20):
Have these AI evangelists, knowing that this is likely going to be the case, that there's going to be this huge social disruption, that twenty percent, maybe more, of people aren't going to have jobs in the future that they themselves envision, have they made contingency plans for this? Do
(21:42):
they care about that portion of humanity, or not at all? Or do they just assume that the technology is going to solve that problem as well?
Speaker 2 (21:52):
So, you know, obviously it's not a monolith. There's lots of different perspectives, but there's a really interesting range of conviction around it. Some people clearly just don't care and really believe that in the quote unquote real world, there are winners and losers, and some people are just going to lose. The social Darwinists, yeah. There are those folks out there,
(22:13):
and, you know, many of them are in charge. Yeah, you know those folks. We know those folks. Yeah.
There are folks like Sam Altman, the founder of OpenAI, who, when he begins to gesture at this stuff, starts to talk about
a real shift in the way we will be governed
and in the social contract. And he has talked for many years about trying to create, you know, a
(22:35):
system in which there'd be some kind of new democratic input from the world that would make it possible to kind of reinvent how we are all governed. And, you know, he's gestured at the idea of a kind of universal basic income and various things. Now,
I consider that kind of a college freshman's idea of how this stuff is going to work, you know what I mean? Like, these are very, very intelligent
(22:56):
people with not a huge amount of political science training. Yeah, and so, to my mind, there's some really naive stuff around. Yeah.
Speaker 1 (23:04):
I can't exactly see the current people in power being willing, at this point in time, to democratize the system to that extent.
Speaker 2 (23:13):
Plus, you know, as some people very much smarter than me have shown. I had a wife-and-husband political science duo on my podcast, John Petty and Alistavent, an incredible duo. They are mathematicians and political scientists, and they would tell you they've shown in the math that you can't get people to agree on almost anything. It's just not plausible, you know. So the idea that somehow
(23:34):
this thing is going to, like, vacuum up a common will from the world's population, it's not a realistic idea. So there's that kind of idea. And then there are people who seem to believe, you know, I mean, one of the weirdest ones I've seen is this guy Alex Karp, who is the CEO of Palantir right now. Palantir, to your listeners, is, you know,
(23:56):
a scary company in a lot of ways. They make stuff for the Defense Department, one of the early and very sophisticated users of AI, and they've been commissioned to create all kinds of sort of surveillance apparatus while at the same time being drawn into some civilian database work. Depending on who you ask, they're creepy as hell. But Alex Karp is the head of it.
So the other day or I think it was this week,
(24:18):
just like a day or so ago, he was asked on CNBC about Michael Burry, I want to say, the guy from The Big Short who has now shorted the AI stocks. The same guy who called the mortgage crisis is now calling BS on the AI bubble. And Alex Karp was asked about that, and he flew into a rage on live television about how
(24:41):
shortsighted that is, and so stupid, and we're generating so much wealth, and what are you talking about? But then he said this funny thing at the end of it, the sort of unguarded thing. He said the thing that we'll really need to be judged on is whether our contribution to GDP only goes to people like me or goes to everyone. Which is really not a thing I would necessarily expect from a guy in that position.
(25:01):
But I think what that reveals is this recognition that, like, if they don't deliver value to all of us, if this really does turn out to be a Darwinistic thing, I think they recognize that they're going to be in a great deal of trouble. And I feel like there's some comfort to be taken from the idea that
(25:21):
that guy, who doesn't have a lot of people around him saying no, right, even he seems to recognize how unpopular he would be if in the end it only ends up enriching people like him.
Speaker 1 (25:32):
You know, I agree with you, and I just wish that, in order to perhaps help preempt the sort of situation that he's alluding to, he could say it a little more frequently and a little louder.
Speaker 2 (25:46):
That's right. You know, I have a new book project that I'm working on. I have to figure out how to make a living in the meantime, because this stuff is hitting me early. But the book idea, which I'm calling Great Ideas We Should Not Pursue, is trying to figure out how we start evaluating this stuff differently. Yeah,
(26:08):
than just, is there market fit and can you make money off it? Right? And when I bounced that idea around,
I was on a different podcast recently with a former Uber guy and a former Google guy, and these are guys who talk about tech startups and the ups and downs of that. They do kind of sports-coverage-style analysis of companies. Anyway, one of them, his head just popped off when I told him. You know, I said to him, I was like, you're gonna hate this, and I told him
(26:29):
that title, and it's true, he hated it, because his attitude is a very common one, which is, we should just build and build and build and build, we'll sort out the problems later, you know. And I don't know, I really want there to be something different than just, let's build it and figure it out later. Yeah. And I like to imagine that sort of work coming out of it. But well,
(26:51):
I just want to deliver for your audience what they want.
Speaker 1 (26:57):
I appreciate that.
Speaker 2 (26:58):
Maybe, as we start to wind down, let me just at least try and give you a little bit of good news. So here's one thing that I love right now. There's this great term going around among young people that they use for AI. They'll use it to refer to a fake video, or sometimes to an old person who is using AI too much. Yeah, and
(27:18):
that term is clanker. Don't be a clanker, or, look at that clanker. Isn't that great? Right? It's so great. And I just want to pay homage to the fact that it's always the kids that are going to save us, and the fact that they have spotted that and named it so quickly, you know,
is just great.
Speaker 1 (27:42):
That to me is a much more reassuring trend than any of this six seven stuff that's going on, which, as far as I can tell, is completely without meaning.
Speaker 2 (27:53):
Can I just tell you, as one who is six seven, that has been a particularly tough meme for me.
Speaker 1 (28:00):
Well, I'm only six four, so I'm not even in the same league.
Speaker 2 (28:04):
Your trend is coming, but this was my year. But there's a great recognition, I think, on the part of young people that this stuff is kind of crap, and that the vision being sold on this stuff is kind of crap, you know. And I really
hope that that continues to be a trend. I have
no numbers on it. I don't know how prevalent that
(28:26):
idea is. I don't know that that's going to offset the huge number of lonely kids who are going to have their first romantic experience with a chatbot, you know. Like, I don't know, you know, but they see that stuff pretty well, and I really like that part of it. And I would also say, another thing I've been really focused on lately, and trying to do myself, is to get away from the alarmist language,
(28:47):
although I am very alarmed, and move more towards this idea of amplifying what we like about being human. Yeah, the best parts of being here.
Speaker 1 (28:56):
There's some pretty cool stuff, actually. I mean, yeah, there's a lot to be worried about. But you know, being alive in the world today, in a lot of respects, is pretty cool.
Speaker 2 (29:07):
Oh man. Anytime I want to make myself feel better about something, I look back at the history of medicine, looking at, you know, what it was. You know, whenever people want to play that party game of, what era do you wish you lived in, maybe people talk about, you know, the Jazz Age
(29:28):
or whatever. Never mind the fact that I would not be allowed to marry my wife back then. You look at what dentistry was like in that age, what being shot in the gut was like in the Jazz Age, dying that way. Oh my god. You know, like, it is awesome to be alive right now. This is fantastic.
You know, I'm meeting more and more people, really serious thinkers, who are trying to think about, okay, well, rather than focusing so much on what we're going to conceivably lose here, let's focus on things like purpose. How do we create purpose for young people and for ourselves? How do we create, you know, connection? I'm a recovering drinker, and in AA they say the opposite of addiction
(30:12):
is connection, right? I think, by virtue of what we're seeing this AI play on, we're starting to recognize what we want to make better, what we want to make stronger, right? That'll be one of the nice things.
Speaker 1 (30:24):
Yeah, because, and really this is part of my goal on this program, to focus attention on the positive, not just for its own sake, but because if we can see that far horizon, if we can, you know, define its features, then we can move toward it. If we're not looking at it, then we're not going there. We need to
(30:47):
be aware of it, and we need to be able to see it, in order to practically attain it.
Speaker 2 (30:52):
Yeah. I love that. I love that. I think that's absolutely right, and I think there's something good coming. It feels to me like it's in the kind of alert reaction you're seeing young people have to this stuff. I think, you know, while I worry about how all of this AI-generated video is going to really screw up, for instance, my bread and butter, which is trying to create video coverage
(31:12):
of important stuff. I mean, there's going to be some really tough things. But I also think, yeah, we're better at recognizing and protecting the best parts of being human than I think we give ourselves credit for sometimes. And I'm excited to see the ways in which people, especially people younger than you and I, figure that out.
Speaker 1 (31:31):
Yeah, positive words and good news indeed. Jacob Ward, he is the author of The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, and he is the host of the podcast The Rip Current. You can find more at the ripcurrent dot com. Thank you so much for being with us today, Jacob. A
(31:51):
pleasure speaking with you, and we did manage to find
some hope in there.
Speaker 2 (31:55):
I appreciate that. Well, really fun to find a silver lining with you. I really appreciate it. All the best.