
June 20, 2023 54 mins

There is a good chance that in March of 2023, humans crossed a threshold into a transformative new era when a new, smarter type of AI was let loose in the wild and an AI arms race began.  



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Stuff You Should Know, a production of iHeartRadio.

Speaker 2 (00:11):
Hey, and welcome to the podcast. I'm Josh, and there's
Chuck, and Jerry's here too. And that makes this a
timely, topical... not timely as in fork tines. I meant
to say timely, topical episode of Stuff You Should Know.
That's a great forecast of.

Speaker 1 (00:29):
How it's going to go, too. I think fork-cast.
Oh God, is this really us, or is it AI
generated Josh and Chuck?

Speaker 2 (00:42):
Until this year I would have been like, don't be preposterous.
Now I'm like, just give it some time.

Speaker 1 (00:51):
You know how we would know? It's if
one of us said, of course it's the real us.
We met in the office and bonded over our Van
Halen denim vests.

Speaker 2 (01:01):
Yeah, we'd be like, sucker, you just fell for the
oldest trap in the book, the Sicilian switcher.

Speaker 1 (01:08):
Yeah, aka fake Wikipedia entry stuff?

Speaker 2 (01:12):
Is that still up?

Speaker 1 (01:13):
I haven't been to our Wikipedia page in years, so
I don't know.

Speaker 2 (01:17):
Well, regardless, we're not talking about Wikipedia, although it does
kind of fall into the sure it figures in the
rubric of this. Not sure if I use that word correctly,
but it felt right. We're talking today about what are
in the biz known as large language models, but more
colloquially known by basically their public-facing names, things

(01:38):
like ChatGPT or Bard or Bing AI. But essentially
what they all are are algorithms, artificially intelligent algorithms that
are trained on text, tons and tons and tons of
text written English language stuff, that are so good at

(02:00):
recognizing patterns in those things that they can actually simulate
a conversation with you, the person on the other side
of the computer, asking them questions.

Speaker 1 (02:10):
Yeah, this is it's gonna be fun doing this episode
over every six months, right until we're replaced totally.

Speaker 2 (02:19):
So I think we should say, though, like, this is
such a huge, wide topic, and we're just in the ignition phase, like
the fuse just caught, right? Yeah. So we're going to
really try to keep it narrow, just strictly to large
language models and the immediate effect they're planning on having

(02:39):
or they're going to have, hopefully not planning on anything yet,
but I really would like to do one on how
to keep AI friendly and keeping it from running away. Yeah,
I say, we just kind of avoid that whole kind
of stuff. And really I'm talking to myself right now
at least for this episode.

Speaker 1 (02:56):
Okay, Yeah, you know, we're going to kind of explain
how these things work and what the initial applications look
like and kind of where we are right now, and
then what it could mean for like jobs and the
economy and stuff like that. But you're right, it is
a whole ball of wax, as you well know.
And this is a great time to plug the End
of the World with Josh Clark, which is still out there.

(03:19):
You can still listen to it.

Speaker 2 (03:20):
The truth is out there in the form of the
End of the World with Josh Clark.

Speaker 1 (03:24):
Yeah, it's a great ten part series that you did,
and AI is among those existential risks that you covered.

Speaker 2 (03:34):
Yeah, it's episode four, I believe. And Chuck, like, just
from having done that research and forming my own opinions
over the years about this, yeah, like, it's staggering
to me that we're like we've just entered like what's
going to be the most revolutionary transitional phase in the

(03:58):
entire history of humanity. You can argue everything else took
place over very long periods of time. We started playing
with stone tools and then we started building cities. All
this stuff took place over thousands and thousands, hundreds of thousands,
millions of years. We just entered a period where stuff's
going to start happening within weeks pretty soon. As of
twenty twenty three, the whole thing just started.

Speaker 1 (04:19):
Yeah, and none of this was around like this when
you did The End of the World and that was
what like five years ago.

Speaker 2 (04:26):
Yeah, it was twenty eighteen. All this was being
worked on, but we hadn't hit that point. Like all
this was pretty much predicted and projected, and it was
clear that this was the direction people were going.

Speaker 1 (04:36):
And it's here, baby, it is.

Speaker 2 (04:37):
It's nuts, but it's actually here. So we're talking about
large language models, which are a type of neural
network that's easiest to think of in terms of
like a human brain, where you have neurons that are
connected to other neurons, but they're not connected to some
other neurons, and all of those neural connections kind of

(04:58):
are activated by input and put out something like
your conscious experience, or you say a sentence or something
like that. It's very similar in its most basic nature.

Speaker 1 (05:10):
I guess, yeah, I mean Olivia helped us out with this,
and she did a great job, I think. And Google
themselves basically say, you know, it's really sort
of like how when you go to search for something
on our search engine tool here (a weird way I
could have said that, I don't think so), our handy

(05:31):
search bar. Then, you know, basically what we're doing is
auto-completing, like an analysis of, like, probability, like,
what you're typing. If you type in, you know, John Coltrane,
or start to type in John Cole, that might
finish it out as John Coltrane a Love Supreme or

(05:51):
John Coltrane jazz, and they're saying, you know, what
is happening now with these LLMs is it's the same thing.
It's just got way more data, way more calculations
in the algorithm. So it's not just completing like a
word or two. It's potentially you know, hey, rewrite the
Bible or whatever you tell it to do.

Speaker 2 (06:13):
Yeah, And the big difference is in the amount of
info the neural network is capable of taking into consideration. Yeah,
so may I for a minute, oh please, So imagine
with one of those autocomplete suggestion tools like they have
on Google Search. If there's five hundred thousand words in

(06:34):
the English language, that means that you have five hundred
thousand words that a person could possibly put in. That's
the input into the neural network, and then there's five
hundred thousand possible words that that network could put out.
So you have five hundred thousand connections to five hundred
thousand other connections. So it's like, I think two hundred
and fifty billion connections you're starting with right there. That's

(06:56):
just the autocomplete suggestion, because based on those connections,
in studying words in the English language and phrases in
the English language, it places emphasis more on some connections
than others. So John Coltrane is, what's his album, I can't
remember... A Love Supreme, classic album. So

(07:16):
John Coltrane is much more closely related to A Love
Supreme in the mind of a neural network than John Coltrane
Charlie Brown disco is, right, just to take something off
the top of my head. And so based on that
analysis and that weight that it gives to some things
rather than others, it suggests those words. What the large

(07:38):
language models like ChatGPT that we're seeing today
do is the same thing. They have all those same connections,
but the analysis they do, the weight that they put
on the connections, is so much more advanced, yes, and
exponential, that it's actually not just capable of suggesting
the next word, it's capable of holding a conversation with you.

(08:00):
That's how much it understands how the English language works.
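
A minimal sketch of the next-word idea described above, assuming invented numbers: the model stores learned weights between a context word and possible continuations and suggests whichever carries the most weight (and the two hundred and fifty billion figure mentioned earlier is just five hundred thousand possible inputs times five hundred thousand possible outputs). This is a toy illustration, not any real model.

```python
# Toy illustration of weighted next-word suggestion. All weights are made up.
weights = {
    "coltrane": {"a love supreme": 0.62, "jazz": 0.31, "charlie brown disco": 0.0001},
}

def suggest_next(context_word: str) -> str:
    """Return the continuation with the highest learned weight for this context."""
    candidates = weights.get(context_word.lower(), {})
    if not candidates:
        return "<no suggestion>"
    return max(candidates, key=candidates.get)

print(suggest_next("Coltrane"))  # -> "a love supreme"
```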

Speaker 1 (08:04):
Yeah, Like if you said, you know, write a story
about wintertime, and you know, it got to the word snowy,
it would go through, you know, I mean,
and this is like instantaneously it's doing these calculations. It
might say like you know, oh, hillside or winter or
snowy day, like these are all things that make sense

(08:26):
because I've learned that that makes sense I being you know,
the chatbot or whatever. But it probably won't be snowy
chicken wing because that doesn't seem to fit the algorithm.
And it learns all this stuff by reading the internet.
And you know, put a pin in that, because that's

(08:46):
pretty thorny for a whole lot of reasons, but not
the least of which is the fact that some companies,
and again we'll get to it, are starting to say
like wait a minute, Like, we created this content and
now you're just scraping it and then using it and
charging people to use it, and we're not getting a
piece of it. So that's just one tiny little thorn.

(09:08):
But in order to do this, like you said, it's
like it needs to know more, and you came up
with a great example, like the word lemon. In a
very basic way, it might understand that a lemon is
roundish and sour and yellow. But if it needs to
get smart enough to really write as if it were

(09:29):
a human, it needs to know that it can make lemonade,
and that it grows on a tree in these agricultural zones,
and that it's a citrus fruit, because it has to
be able to group lemon together with like things. And
those groups are either like, you know, hey, it's super
similar to this, like maybe other citrus fruits, or it's

(09:50):
you know, sort of similar to this but not as
similar as citrus fruits, like desserts. And then you get
to chicken wings, although actually that's not true because lemon
chicken wings.

Speaker 2 (10:00):
You could have lemon pepper chicken wings.

Speaker 1 (10:01):
Right, yeah, that's what I'm saying. So yeah. But
the instance you used is like Greenland, which I guess
doesn't grow lemons.

Speaker 2 (10:08):
No, but I mean, I'm sure they import lemons, so
there's some connection there. But based on how connected, how
often these words show up together, and the billions and
billions of lines of text that these large language
models are trained on, it starts to get more and
more dimensions and make more and more connections. Right. So

(10:30):
as that happens, words start to cluster together, like lemon
and pie and ice box all kind of cluster together.
And by taking words and understanding how they connect to
other words, you can take the English language, just the
words of the English language, and make meaning out of it.
That's all we do. And large language models

(10:51):
are capable of doing the same thing. But it's really
really important for you to understand that the large language
model doesn't understand what it's doing. It doesn't have any
meaning for the word lemon whatsoever. All of these dimensions
that it weighs to decide what word it should

(11:13):
use next, they're called embeddings. They're just numerical representations. Yeah,
so the higher the number, the likelier it is it
goes with the word that the user just put in
or that the large language model just used. The lower
the number, the further away it is in the cluster, right?
it doesn't understand what it's saying to you. And as
we'll see later, that accounts for a phenomenon that we're

(11:36):
going to have to overcome for them to get smarter,
which is called hallucinations. But that's a really critically important
thing to remember.
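
A minimal sketch of the embeddings idea just described, assuming made-up vectors: each word is just a list of numbers, and words whose numbers point in similar directions cluster together, the way lemon sits near other citrus and far from Greenland. Real models learn thousands of dimensions from text; these three are invented for illustration.

```python
import math

# invented 3-dimensional "embeddings" for illustration only
embeddings = {
    "lemon":     [0.9, 0.8, 0.1],
    "lime":      [0.85, 0.75, 0.15],
    "dessert":   [0.4, 0.6, 0.2],
    "greenland": [0.05, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Higher means the two words sit closer together in the cluster."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

for word in ("lime", "dessert", "greenland"):
    print(word, round(cosine_similarity(embeddings["lemon"], embeddings[word]), 3))
# lemon comes out closest to lime, less close to dessert, farthest from greenland
```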

Speaker 1 (11:43):
Yeah, Another critically important thing to remember is and you
probably get this from what we said so far if
you already know a little bit about it. But
there's no programmer that's teaching these things and typing in
inputs and then saying here's how you learn things. Like
it's doing this on its own, and it's learning things
on its own. And what we're talking about eventually, like,

(12:07):
you know, where it could get super scary, is when
it gets to what's called emergent abilities, where it's so
powerful and there's so much data that the nuance that's
missing now will be there, right? Exactly.

Speaker 2 (12:21):
So, yeah, that's when things are going to get even
harder to understand, you know, to remind yourself that you're
talking to a machine, you know.

Speaker 1 (12:31):
Yeah. And the other thing too, though, even though I
said humans aren't inputting this data: one of the
big things that is allowing this stuff to get smarter
is human feedback. It's called RLHF, which is
reinforcement learning from human feedback. So at the end of
whatever you've told it to create, you can go

(12:53):
back in and say, well, you got this wrong and
this wrong, this is what that really is, and it says,
thank you, I have now just gotten smart.
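
A drastically simplified sketch of the feedback loop just described. Real RLHF trains a separate reward model on human preference rankings and then fine-tunes the language model against it; this toy only shows the shape of the step where human thumbs up or down nudges which answer gets preferred. The function and data names are made up for illustration.

```python
from collections import defaultdict

# running "reward" score for candidate answers, nudged by human feedback
reward = defaultdict(float)

def record_feedback(prompt: str, answer: str, human_liked_it: bool, step: float = 0.1):
    """Nudge the stored reward for (prompt, answer) up or down based on feedback."""
    reward[(prompt, answer)] += step if human_liked_it else -step

def pick_answer(prompt: str, candidates: list) -> str:
    """Prefer whichever candidate has accumulated the most positive feedback."""
    return max(candidates, key=lambda ans: reward[(prompt, ans)])

record_feedback("What lays the largest eggs?", "The elephant.", human_liked_it=False)
record_feedback("What lays the largest eggs?", "The ostrich.", human_liked_it=True)
print(pick_answer("What lays the largest eggs?", ["The elephant.", "The ostrich."]))
```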

Speaker 2 (13:01):
Exactly. So one of the reasons why these things are
suddenly just so smart and can say thank you, I've
just gotten so much smarter is because of a paper
that Google engineers published openly in twenty seventeen describing what's
now like the essential ingredient for a large language model
or probably any neural network from now on. It's called

(13:22):
a transformer. And rather than analyzing each bit of text,
let's say you say one of the very famous things:
Marvin Minsky was one of the founders of the field
of AI, and his son Henry prompted ChatGPT to
describe what losing a sock in the dryer is like
in the style of the Declaration of Independence. Right? So

(13:45):
depending on how Henry Minsky typed that in, before transformers
the neural network would analyze each word and do it
one increment at a time, maybe not even words, sometimes
strings of just letters together, phonemes even, if you
can believe it, phonemes even. And what the transformer

(14:07):
does is it changes that. It allows it to analyze
everything all at once. So it's so much faster, not
just in putting out a coherent answer to your question
or request, but in also training itself on that text.
So you just feed it the internet and it starts
analyzing it and self correcting. It trains itself, It learns

(14:27):
on its own, and that unfortunately also makes AI, including
large language models what are known as black boxes. Yeah,
we don't know how they're doing what they're doing. We
have a good idea how to make them do the
things we want, but the in between stuff, we cannot
one hundred percent say what they're doing. How they come
up with these conclusions, which also explains hallucinations in them

(14:51):
not really making sense to us.
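
A bare-bones sketch of the attention step at the heart of a transformer, the part that lets every token look at every other token in one pass instead of word by word. This is a generic scaled dot-product attention calculation with toy-sized, made-up vectors, not the internals of any specific model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each row of the output is a weighted mix of all token values, computed at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the whole sequence
    return weights @ V

# four tokens, each represented by a 3-number vector (values invented for the example)
tokens = np.array([[0.1, 0.3, 0.2],
                   [0.5, 0.1, 0.4],
                   [0.2, 0.2, 0.9],
                   [0.7, 0.6, 0.1]])

print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (4, 3): all tokens updated together
```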

Speaker 1 (14:53):
Yeah, and you know, the T in GPT stands for transformer.
It's generative pre-trained transformer. And the reason they call
it GPT for short is because if they called it
generative pre-trained transformer, everybody would be scared out of
their minds.

Speaker 2 (15:10):
We just start running around to nowhere in particular.

Speaker 1 (15:14):
Yeah, should we take a break, I say, we do.

Speaker 2 (15:17):
I think that we kind of explained that fairly well.

Speaker 1 (15:20):
Yeah, fairly robust beginning, my friend. All right, so Open

(15:50):
AI launched their ChatGPT very recently, in November
of twenty twenty two, and just in that brief window,
which was like six or eight months ago, things are kind
of flying high, and all kinds of companies are launching
their own stuff. Some of it is, well... First of all,
OpenAI is now at ChatGPT four, yes, and

(16:14):
I'm sure you know more will be coming in quick succession.
But companies are launching, and we're going to talk about
all of them, kind of, like broad stuff like
ChatGPT, and really specific stuff like, well, hey, I'm
in the banking business. Can we just design something for
banking or just something for real estate? So they're also

(16:35):
getting specific on a smaller level in addition to these
large, like Google and Microsoft and Bing and all that stuff.

Speaker 2 (16:41):
Yeah, And to get specific, all you have to do
is take an existing GPT, a large language model, and
add some software that helps guide it a little more. Yeah,
and there you go or just train it on specific
stuff like medical notes. That's another one. One of the
other things that's changed very quickly between November of

(17:05):
twenty twenty two and March of twenty twenty three, when
I think GPT four became available. Just think about that.
That's that's such a short amount of time. Yeah, all
of a sudden. Now you can take a picture and
feed it into a large language model and it will
describe the picture. It will look at the picture essentially

(17:26):
and describe what's going on. There's a demonstration
from one of the guys from OpenAI who doodles
like on a little like scrapbook piece of paper some
ideas for a website. He takes a picture of that
paper that he's written on, feeds it into chat GPT four,
and it builds a website for him in a couple

(17:49):
of minutes, that functions the way he was thinking of
on the doodle scratch pad.
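
One way to picture the "take an existing model and add some software that guides it a little more" idea mentioned a moment ago: a thin wrapper that prepends domain instructions (here, real estate, echoing an example that comes up later in the episode) before handing the user's request to a general-purpose model. `query_llm` and the instruction text are hypothetical placeholders, not a real API.

```python
# Domain instructions wrapped around a general model; all names here are invented.
REAL_ESTATE_GUIDE = (
    "You are an assistant for a real estate office. "
    "Write in a professional tone, mention square footage and neighborhood, "
    "and never invent details that were not provided."
)

def query_llm(prompt: str) -> str:
    """Placeholder for a call to whatever large language model you are using."""
    raise NotImplementedError("connect this to your model of choice")

def domain_assistant(user_request: str) -> str:
    """Guide a general model toward one domain by wrapping the request in instructions."""
    return query_llm(f"{REAL_ESTATE_GUIDE}\n\nRequest: {user_request}")
```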

Speaker 1 (17:54):
I wonder if the only way to slow this stuff
down is to literally slow down the Internet again. Go
back to like the old days when a picture would
load like three lines at a time, right, and the way they'd
describe a picture would be like someone's hair, someone's nose,
someone's chin. Don't forget the in-between. Yeah, an hour later you

(18:14):
have a complete picture.

Speaker 2 (18:15):
Right. I don't think there's any way to slow this
down because we're in not to be alarmist, but we're
in a second worst case scenario for introducing AI to
the world, which is, rather than state actors doing this,
which would be really bad, we have private companies doing it,
which is just slightly less bad. But they're competing in

(18:38):
an arms race to get the best, brightest, smartest AI
out there as fast as they can, and they're not
taking into account like all of the downsides to it.
They're just throwing it out there as much as they can.
Because one of the ways that these things get smarter
is by interacting with the public. They get better and
better at what they do from getting feedback from people

(19:01):
using them.

Speaker 1 (19:02):
Yeah, even if it's for just some goofy fun thing
you're doing, it's learning from that. And you talked about
the advancements made between the launch of three point five
and GPT four, and three point five scored in the
tenth percentile when it took the uniform bar exam, and
four has already scored in the ninetieth percentile, and they

(19:26):
found that ChatGPT four is really, it's great at
taking tests, and it's scoring really well on tests, particularly
you know, standardized tests. All. I think it basically aced
all of the AP tests that you would take to
get into AP classes except well, it took a couple
of AP class but the max score is five, and

(19:49):
I think it got five's kind of on everything except
for math. It got a four and math it's kind
of it's we it's kind of weirdly counterintuitive because it's
a number space thing, but it has more trouble with math,
like rudimentary math, than it does with like constructing a

(20:10):
paragraph on you know, Shakespeare or something, or as Shakespeare
does better with like math word problems and more advanced
math than it does just at basic math apparently, or.

Speaker 2 (20:20):
Like describing how a formula functions using you know, writing.
The thing is though, and this is another great example
of how fast this is moving. They've already figured out
that all you have to do is do what's called
prompting where you where you basically take the answer that
the the incorrect answer that the large language model gives you,

(20:43):
and then basically re explain it by breaking it down
into different parts, and it learns as you're doing that,
and then all of a sudden it comes up with
it gets better at math. So they've figured out tools,
extra software you can lay over a GPT that basically
teach it to do math or prompt it in the
correct way so that you get the answer you're looking
for that's based on math.
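
A sketch of the prompting fix just described, under the assumption of a hypothetical `query_llm` stand-in (not a real library call): instead of asking for the answer in one shot, the prompt asks the model to break the arithmetic into explicit steps before answering.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("wire this up to your model of choice")

question = "A shirt costs $25 and is discounted 20%. What is the final price?"

# one-shot prompt that models often fumble on arithmetic
naive_prompt = f"{question} Answer with just the number."

# step-by-step prompt that tends to coax out the right arithmetic
stepwise_prompt = (
    f"{question}\n"
    "Work through it step by step: first compute the discount amount, "
    "then subtract it from the original price, then state the final answer."
)

# answer = query_llm(stepwise_prompt)
```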

Speaker 1 (21:05):
Yeah. I mean, every time I read something that said, well,
right now it's not so great at this, I just
assumed that meant we'll have that worked out in
the next few weeks.

Speaker 2 (21:15):
Yeah, pretty much. I mean, because as these things get
like bigger and smarter, and the data sets that they're
trained on get wider, they're just going to get better
and better at this, because, again, they learn from
their mistakes.

Speaker 1 (21:29):
Yeah, just like humans, right, exactly like humans. So you
mentioned these hallucinations kind of briefly, and this is
one of the big problems with them so far that
again I'm sure they will figure this out in due time.
But one example that Livia found was to prompt it
with what mammal lays the largest eggs. And one of

(21:51):
the problems is, when it gives hallucinations or wrong answers, it's,
you know, not saying, like, well, I'm not so
sure about this. It's saying this is true, just like
anything else I'm spitting out, right, with a lot of confidence.
So the answer there was: the mammal that lays the
largest eggs is the elephant, and elephant's eggs are so
small that they are often invisible to the naked eye,

(22:11):
so they're not commonly known to lay eggs at all. However,
in terms of sheer size, an elephant's eggs are the
largest of any mammal.

Speaker 2 (22:18):
Which makes sense in a really weird way if you
think about it.

Speaker 1 (22:21):
Sure, those little invisible eggs.

Speaker 2 (22:23):
Yeah, because mammals don't lay eggs obviously, But the way
that it put it was if you didn't know that
mammals don't lay eggs, or you didn't know anything about elephants,
you'd be like, oh, that's interesting, and take that as
a fact, because it's saying this confidently. And I saw
written somewhere that one GPT actually argued with the user
and told them they were wrong when they told the
GPT that it was wrong. Yeah, which is not a

(22:46):
behavior you want at all. But that's what's termed as
a hallucination. And a good way to
understand a hallucination that I saw is that, again, this GPT,
this large language model, doesn't have any idea what it's
saying means. It's just picked up, it's noticed, patterns that

(23:09):
we've not noticed before, and it's putting them together in
nonsensical ways. But they're still sensible if you read them.
It's just factually they're not sensible because it doesn't have
any fact checking necessarily, it just knows what it's finding
kind of correlates with other things. So there's some sensible
stuff in there, like the phrase invisible to the naked eye,

(23:32):
or laying eggs or elephants and mammals. Like this stuff
all makes sense. It's not like these are just strings
of letters. Yeah, it's just putting them together in ways
that are not true. They're factually incorrect, and that's a hallucination.
It's not like the computer is thinking that this is true.
It doesn't understand things like truth and falsehood. It just creates,

(23:58):
and some of the time it gets it really wrong.

Speaker 1 (24:00):
Yeah, it didn't know what an elephant is.

Speaker 2 (24:03):
No, it just knows that it correlates, in some
really small way that we've never noticed before, to the word eggs.

Speaker 1 (24:11):
Yeah, and this is, uh, that's a problem
if it's just like, oh, well, this thing isn't quite
where it needs to be at because it thinks elephants
lay eggs. But there have already been plenty of real
world examples where people are using this and it's screwing
things up for their business or for commerce or something
where the, yeah, or their client. Well, that's one.
There was an attorney who was representing a passenger who

(24:35):
was suing an airline and used ChatGPT to do research,
and it came up with a bunch of fake cases
and it came up with a bunch of fake cases
that this attorney didn't bother to fact check, I guess.
And there were like a dozen fake cases that this
attorney submitted in his brief.

Speaker 2 (24:52):
And it wasn't like, so, like, from what I understand,
the brief was largely compiled from what the
GPT spit out. It wasn't like the GPT just
made up the names of cases. It made up the
names of cases and then described the background of the
case and how they related to the case at hand, right,
So it just completely made these up out
of the blue. And yeah, that lawyer had no idea.
(25:12):
of the blue. And yeah, that lawyer had no idea.
He said in a brief later that he had no
idea that this thing was capable of being incorrect. So
it was like one of the first times he used it,
and he threw himself on the mercy of the court.
And I'm not quite sure exactly what happened. I think
they're still figuring out what to do about it.

Speaker 1 (25:29):
Maybe just go spend some quality time with your little
chatbot, exactly.

Speaker 2 (25:34):
Similarly, Meta had a large language model that basically got
laughed off of the Internet because it was very science
focused and it would make up things that just didn't exist,
like mathematical formulas, like there was one that it called the
Yoko, or no, the Lennon-Ono correlation or something like
that, completely made up, this thing that I read, and

(25:57):
I was like, oh, that's interesting. I had no idea.
I have never heard this stuff before, and I would
have just thought that it was real had I not
realized and known ahead of time that it was a
hallucination that this math thing does not exist anywhere. And
it even attributed it to a live mathematician and said that
this was the guy who discovered it. So like, it

(26:18):
really can get hard to discern what's true and what's not,
which again is a really big problem if we haven't
gotten that worked out yet.

Speaker 1 (26:26):
Did they say that mathematician's name was Math B. Calculus? Yeah.
Another example, and this is, you know, we're going to
talk a little bit about you know, replacing jobs in
the various ways that can and already is happening. But
CNET, for instance, said, oh, you know what, let
me try this thing out and see if we can

(26:47):
get it to write an actual story. And so they
got an AI tool to write one on what is
compound interest, and there was just a lot
of stuff wrong in it. There was some plagiarism, you know,
directly lifted. So there's you know, these things aren't fool
proof yet, and it's definitely not something that should be

(27:08):
utilized for like a public facing website that's supposed to
have like really solid vetted articles about, uh, well, especially
CNET, about tech, right, of all things.

Speaker 2 (27:21):
That's something that the National Eating Disorders Association found out
the hard way. They apparently entirely replaced their human-staffed
hotline with a chatbot, and supposedly they were accused of
doing this to bust the union that had formed there,
and so they released the chatbot into the world

(27:41):
and it started offering advice to people suffering from eating disorders.
It gave standard, you know, weight loss advice, which you
probably get from your doctor who didn't realize you had
an eating disorder, but in the context of an eating disorder,
it was all like trigger, trigger, trigger, one right after
the other. Right, Like, it was telling these people with
eating disorders to like, weigh yourself every week and try

(28:03):
to cut out five hundred to one thousand calories a
day and you'll lose some weight, and just stuff that
that would set everybody off. And very quickly they took
it offline and I guess brought their humans back, hopefully
at double the pay.

Speaker 1 (28:16):
Yeah. But I mean this stuff is already being
solved as well, because they point out that GPT four
has already scored forty percent higher than three point five,
again just a handful of months ago on these accuracy tests,
so that is even getting better. And you know,
where I guess people want it to get to

(28:39):
is to the point where it doesn't need human supervision
to spit out really really accurate stuff exactly.

Speaker 2 (28:45):
That's pretty much where they're hoping to get it. And
I mean it's just they have the model, they have
everything they need. It just has to be
tinkered with.

Speaker 1 (28:54):
Now, should we take another break? I think so, all right,
we'll take another break and then get into sort of
the economics of it and whether or not your
job may be at risk right after this.

Speaker 2 (29:29):
So one of the astounding things about this, that
really caught everybody off guard, is that these large language models,
the jobs they're coming after are white collar knowledge jobs.
They're so good at things like writing, they're good at researching,
they're good at analyzing photos. Now and that's a huge

(29:50):
sea change from what it's been like traditionally. Right,
whenever we've automated things, it's usually replaced manual labor. Now
it's the manual labor that's safe in this, yeah, this
generation of automation; it's the white collar knowledge jobs that
are at risk. And not just white collar jobs, but
artists, yeah, like people who have nothing to do

(30:12):
with white collar jobs, they're at risk as well.

Speaker 1 (30:17):
Yeah, I'm sure the farmers are all sitting around going,
how's that going for you?

Speaker 2 (30:22):
Yeah, how's that taste? Uh?

Speaker 1 (30:24):
So yeah, art. When DALL-E
came out, that was an art tool where
a lot of people, a lot of people I know,
would input... I guess I never did it. I never
do anything like that, not because I'm afraid or anything,
but I'm just not interested, basically. But I guess
you would submit, like a photograph of yourself and then

(30:46):
it would say, well, here's you as a superhero or
here's you as a Renaissance painting or whatever. And you know,
it's sourcing images from real artists throughout history, from Getty
Images and places like that, and there are already
artists that are suing for infringement. Getty Images is suing
for infringement and saying you can't even if you're mixing

(31:09):
up things and it's not like a Rembrandt. Let's say
you're using all of the artists from that era and
mashing it up together in a way that, like we
think basically is illegal.

Speaker 2 (31:21):
Yeah, they say this doesn't count as transformative use, which
is typically protected under the law. Right, this is instead
just some sort of mash-up that a machine
is doing. To me, it's almost splitting hairs. But
I also very much get where they're coming from, not

(31:41):
just a place of panic, but like they have a real
basis in fact that these things
are not transforming, because they don't understand what they're doing.

Speaker 1 (31:52):
Yeah, and companies are taking notice very quickly. There are
some companies I'm sure everyone's going to kind of fall
in line, that are already saying, well, no, you got
to start paying us for access to this stuff. We
paid human beings to create this content for lack of
a better word, and put it online for people to access.

(32:13):
But you can't come in here now and access it
with a bot and use it and charge for it
without giving us a little juice. And there are a
lot of companies that are already saying like, you can't
use this. If you're an employee of our company, you
can't use chatbots at all because some of our company's
secrets might end up being spilled somehow, or you know,

(32:37):
our databases are all of a sudden exposed. So companies
are really moving fast to try to protect their IP,
I guess.

Speaker 2 (32:44):
Well, yeah, and one of the, I mean, some of
the companies that are behind the GPTs that are out
right now, the large language models that are out right
now, are well known for not only not protecting their
users' information, but for raiding it for their own use. Like,
for example, Meta is one of the ones. They

(33:06):
have their large language model called LLaMA, and there's a
chatbot called Alpaca. And it makes total sense that you
are probably signing away your right to protect your information
when you use those things on whatever computer you're using
it on or whatever network you're using it on. I
don't understand exactly. I haven't seen anything that says this

(33:27):
is how they're doing it, or even that they are
definitely doing this. I think it's just that the powers
that be no, like they would totally do this if
they can, and they probably are, so we should just
stay keep our employees away from it, you know, as
much as we can.

Speaker 1 (33:42):
Yeah, it's like we said, it's being used on smaller
levels. One of the uses that Livia dug up
was like, let's say a real estate agent, instead of
taking time to write up listings, has a chatbot do it,
and then they can go through afterward and make adjustments
to it as needed.

Speaker 2 (34:01):
Well, in exchange, that database now knows exactly what you
think of that one ugly bathroom.

Speaker 1 (34:07):
That's right, or doctors may be using it to compile
lists of possible diseases or conditions that someone might have
based on symptoms. These all sound like uses that are like, hey,
this sounds like it could be a good thing in
some ways, and it can be in some ways. But

(34:28):
it's the wild West right now, so it's not like
there's anyone saying, well, you can't use it
for that, you can only use it for this, you
know what I'm saying.

Speaker 2 (34:37):
Plus, also, everything that we've come up with as just
Internet users in the general public has been what we
could come up with given three months, with no
warning that we should start thinking about this. It's just like, hey,
this is here, what are you going to do with it?
And people are just finding new things to do with
it every day, And yeah, some of them are benign,

(34:58):
like having it draft a blog post for your business.
I thought they were already doing that based on some
of the emails that I get from like businesses, right, Yeah,
but they definitely are now if they weren't before. And
that's totally cool because it's just taking
some of the weight off of the humans that are

(35:19):
already doing this work. Right, what's going to be problematic
is when it comes for the full job, or enough
of the job that the company can transfer whatever's left
of that person's job to other people and make them
just work a little harder while they're supported by the AI.

Speaker 1 (35:39):
Yeah, here's some stats that were pretty shocking to me.
I didn't know it was moving this fast. But there's
a networking app called Fishbowl and in twenty twenty three,
just earlier this year, they found that forty percent of
what they call working professionals are already using
either ChatGPT or some kind of AI tool

(36:01):
while they work, whether it's generating idea lists or brainstorming lists,
or actually writing stuff or maybe looking at code. And
this is the troubling part. Of those forty percent,
almost seventy percent are doing that in secret and hadn't
told their bosses that they were doing that.

Speaker 2 (36:21):
Right, those are just working professionals. We haven't even started
talking about students yet.

Speaker 1 (36:25):
Yeah. I mean you combine that with work from home,
you got a real racket going on.

Speaker 2 (36:29):
For sure, you know. Yeah, no, totally again though, I mean,
like if you can use it to do good work,
and you can now do more work. I think you
should be paid for more work. Like if your productivity's
gone through the roof, great, you figured it out. I've
got no problem with that. It's the opposite that I
have the problem with.

Speaker 1 (36:48):
Well, let's skip students for a second and
talk about that, since you brought it up. Because here's
the thing: the United States doesn't have a
great track record of ignoring the bottom line in
favor of just keeping hardworking humans at their jobs. So

(37:10):
I think it was Goldman Sachs that said they
found that there could actually be an increase in the
annual GDP by about seven percent over ten years because
productivity increases. And I guess the idea is that productivity
is increasing because let's say you've got twenty to thirty
percent of stuff being done by AI. That opens up

(37:32):
twenty to thirty percent of your time for your employees
to maybe innovate or, you know, do other capitalistic things.
But to me, and this is just my opinion,
and again we're really early in all this, but it's
a bottom line world, and especially a bottom line country
that we live in, and I imagine what it would

(37:55):
likely mean is bye-bye jobs, more than it means, well, hey,
you've got more time, and why don't you innovate at
your job. Because for most jobs, it'll probably be like, oh,
wait a minute, if we can teach it to do
forty percent of your job, I bet we could train it
to do one hundred percent.

Speaker 2 (38:12):
Yeah, or we can get rid of, you know, a
bunch of you and just keep some of you to
do the other sixty percent.

Speaker 1 (38:19):
You know. But now, see, these people are out of
jobs, and it's going to bite them in the rear, though,
because it's not ultimately going to be... well, who knows.
It doesn't seem like it could be good for the
overall economy if all of a sudden all these people
are out of jobs. Because people being out of jobs
means they're not spending, and that means the economy is going to tank.
And it's not like a situation where

(38:40):
you know, the tractor replaced the plow and then the
robot tractor replaced the tractor. But hey, now we've got
these better jobs where you're designing and building these robot
tractors and they're higher paying and they're great. It's not
like that because you know, the farmer was replaced who
drove that tractor and isn't skilled in the practice of

(39:02):
designing robot tractors. And in this case, in most cases,
there's not some other job waiting for
someone who got fired in the world of designing AI.
Does that make sense?

Speaker 2 (39:17):
No, it makes total sense. But yeah, and in this case,
one of the big differences is instead of the farmer
having to go figure out how to work a computer,
the people working computers now have to go figure out
how to be farmers in order to sustain themselves. Right,
But you're right, we don't have a track record of
taking care of people very well, at least who are

(39:38):
out of a job. And I mean, without getting on
a soapbox here, what's either going to come out of this,
because there's going to be one or the other. The
status quo as it is now, or as it was
up to twenty twenty two. We don't know
that that's going to be around anymore. Instead, we'll either
do something like create universal basic income for people to

(39:59):
be like, hey, your industry literally does not exist anymore
and it just happened overnight. Basically, we're just gonna make
sure that everybody's at least minimally taken care of while
we're figuring out what comes next, or it's gonna be
like good luck, chump, you're fired, You're out on your own. Instead,
we're gonna take all this extra wealth, this extra two

(40:20):
trillion dollars that's gonna be generated, and push it upward
toward the wealthy instead, and for everybody else, the
divide between wealthy and not wealthy is just going to
exponentially grow. One of those two things is gonna happen,
because I don't see how there's just gonna be a
regular middle ground like there is now where it's kind

(40:40):
of shaky how we're taking care of people, because
there's just gonna be so many layoffs and fairly skilled
workers being laid off too. We've just never encountered that before.

Speaker 1 (40:52):
Yeah. I mean, that's the thing that the largest
corporations might want to think about. All it's gonna
take is one CEO of a huge corporation to say,
wait a minute, I think I can get rid
of seventy five percent of the VPs in
my company, right, and, like, who except the

(41:18):
person at the very very top of that food chain
is protected. And the answer is nobody.

Speaker 2 (41:24):
No, no. No one, essentially.

Speaker 1 (41:26):
At the end of the day, because they make a
lot of money. It's one thing to lay
off a bunch of, you know, technical writers that are
all sitting in their cubicles. But if you start laying
off those VPs who get those big bonuses, that's
more bonus money. And you know, are we looking at
a situation where a corporation is run by one human?

Speaker 2 (41:44):
I mean, it's entirely possible, Like you can make a
really good case that what it is going to wipe
out is the middle management, right, vps, This is exactly
like you said, and that we still will need some
humans to do some stuff.

Speaker 1 (41:57):
Like the board take care of the board, right, sure.

Speaker 2 (41:59):
Of course, yeah, but yes, I mean who knows, we
have no idea at this point. Ultimately, it could very
easily provide for a much better healthier society, at least
financially speaking, it could do that, especially given a long
enough period of time.

Speaker 1 (42:19):
I'm a cynic when it comes to that kind of
trust though.

Speaker 2 (42:21):
I am as well for sure. But if you look
back in history at the history of technology overall, especially
if you just turn a blind eye to human suffering
for a second and you just look at the progress
of society. Right, in a lot of ways it has
gotten better and better thanks to technology. There's also
a lot of downsides to it. Nothing's black and white,

(42:43):
it's just not that's just not how things are. So
there's of course going to be problems. There's going to
be suffering, there's going to be people left behind, there's
going to be people that fall through the cracks. It's
just inevitable. We just don't know how many people for
how long and what will happen to those people on
the other side of this transition.

Speaker 1 (43:02):
Yeah, I was talking with somebody the other day about
the writer's strike in Hollywood. The WGA is striking right now.
For those of you who don't know, it's kind of
all over the place. But one of the things that
they have argued for in this round of negotiations is, hey,
you can't replace us with AI, and the studios all

(43:25):
came back and said, well, how about this, We'll assess
that on a year to year basis. And that's frightening
if you're either a writer in Hollywood or
you're somebody who loves TV and films, and quality TV
and films, because, I don't know, I think ideation

(43:46):
and initial scripts, maybe even right now, I could
see that happening, where they're like, all right, now
we'll bring in a human to refine this thing at
a much lower wage. That's probably what they're most
afraid of, rather than being wholesale replaced, because like you said,
these programs are all about just data and numbers.

(44:09):
They don't have human feelings, and that's what
art is. And so I think I would be more
concerned if I was writing pamphlets for Verizon or something,
or if I was.

Speaker 2 (44:22):
Some pamphlet writer for Verizon.

Speaker 1 (44:23):
Just went, gulp. No, I'm so sorry. But like
BuzzFeed back in the day, instead of having a dozen
writers writing clickbait articles, why not have just one human
that is a prompt engineer that's managing a virtual AI
clickbait room that's just pumping out these articles that you
know they were paying someone down to forty grand a

(44:46):
year to write previously.

Speaker 2 (44:47):
Yeah, I mean it's a great question, like that was
a horrific, horrible job to have not too many years ago.
So it's great to have a computer do it, but
that means that we need these other people to go
on to have writing jobs that are more
satisfying to them than that. But that's not necessarily the case,
because as these things get smarter and better, they're just

(45:10):
going to be relied upon more. We're not going to
go back. There's no going back now. It just happened
like it just happened basically as of March twenty twenty three.
And one of the big problems that people have already
projected running into is if computers replace humans, say writers,

(45:33):
basically entirely, eventually all the stuff that humans have written
on the Internet is going to become dated. It's going
to stop, yeah, and it will have been replaced and
picked up on by generative pre-trained transformers. Right. And
eventually all the writing on the Internet after a certain
date will have been written by computers, but will be

(45:54):
being scraped by computers. When humans go ask the computer
a question, the computer then goes and references something
written by a computer. So humans will be completely taken
out of the equation in that in that respect, we'll
be getting all of our information, at least non historical
information from non humans, and that could be a really
big problem, not just in the fact that we're losing

(46:16):
jobs or in the fact that computers are now telling
us all of our information, but also that there's
some part of what humans put into things that
will be lost, that I think we're going to demand.
I saw somebody put it, I think, I can't
remember who it was, but they said, we're going, like,

(46:37):
people will go seek out human written stuff. There will
always be audiences for human-written stuff. Yeah. Maybe, like
you said, we'll rely on computers to write the Verizon pamphlets,
but we're not going to rely on computers to write
great works of literature or to create great works of art,
Like we're just not going to They'll still do that.
They're going to be writing books and movies and all that,
but there will always be a taste and a market

(46:59):
for human created stuff. This guy said, I think he's right.

Speaker 1 (47:04):
Yeah. And Justine Bateman, I don't know if you saw that.
I don't know if it was a blog post or...

Speaker 2 (47:11):
Are you having a hallucination right now? Did you mean
Justine Bateman?

Speaker 1 (47:15):
Yeah, yeah, Justine Bateman, Jason Bateman's sister, the actor,
and she's done all kinds of things since then. I
know she has a computer science degree, so she's very
smart and knows a lot about this stuff. But she
basically said, and this is beyond just the chatbot stuff,
she was like, right now, there are major Hollywood

(47:36):
stars being scanned, and there may be a brand new
Tom Cruise movie in sixty years. Yeah, long after he's
dead starring Tom Cruise. He may be making movies for
the next two hundred years. And like, is this what
you want? Actors? Do you want to be scanned and
have them use your image like this in perpetuity for

(47:59):
you know, there will be money involved. It's not like
they can just say, Okay, we can just do whatever
we want. But what if they're like, here's a billion dollars,
Tom Cruise, just for the use of your image in perpetuity,
because we will be able to duplicate that so realistically
that people won't know. Human voices, same thing, that's already happening.

(48:20):
What yeah, it is. So that stuff is kind of scary.
And you know, I didn't really know
this was kind of already happening in companies, but Olivia
found this stuff: IBM CEO Arvind Krishna said just last
month, in May, that he believed thirty percent of back
office jobs could be replaced over five years, and IBM

(48:44):
was pausing hiring for close to eight thousand positions because
they might be able to use AI instead. And then
Dropbox talked about the AI era when they announced a
round of layoffs. So it is happening right now in
real time.

Speaker 2 (49:00):
Pretty amazing. Yeah, I mean, there's proof positive right there.
Like that guy couldn't even wait a couple months, a year.
Like this really started up in March, and he's saying
this in May. Already in May, they're like, wait, wait,
stop hiring. We're gonna eventually replace these guys with AI
so soon that we're going to stop hiring those positions

(49:21):
for now until the AI is competent enough to take over.

Speaker 1 (49:24):
I mean, how many people does IBM employ? What, what's
thirty percent of that?

Speaker 2 (49:28):
I don't know. I would say at least
one hundred people, right. So, yeah, like you said, it's
happening already. And then one other thing to look out
for too, that I believe is already at least theoretically
possible since AI can write code now: they'll be able
to create new large language models themselves, So the computers

(49:50):
will be able to create new AI.

Speaker 1 (49:54):
Well, that's the singularity, right.

Speaker 2 (49:56):
No, the singularity is when one of them, uh, understands
what it is and becomes... Yes, that's the singularity.

Speaker 1 (50:05):
But this leads to that though, doesn't it It does.

Speaker 2 (50:07):
It's hypothetically yes, but we just understand what's going on
so little that you just can't say either way, really.
You definitely can't say that, no, it won't happen,
it's just fantasy. And you also can't say yes, it's
definitely gonna happen.

Speaker 1 (50:19):
Yeah. And here's the thing, man, I'm not a paranoid technophobe.

Speaker 2 (50:26):
You don't by any measure have a tinfoil cap on.

Speaker 1 (50:28):
No, by any measure. I'm a pretty positive thinker, and
this this is pretty scary to me.

Speaker 2 (50:37):
I'm just gonna leave that there, agreed, Chuck. Okay, if
you want to know more about large language models, everybody,
just start looking around Earth, and when you see
people running from explosions, go toward them and ask what's
going on?

Speaker 1 (50:55):
You almost said, type it into a search engine.

Speaker 2 (50:57):
Yeah, steer clear of those. Yeah, there's so much more
we could have talked about. But if you
ask me, this is round one. I think we definitely
need to do at least one or so.

Speaker 1 (51:06):
More on this. Okay. Yeah, and then one day, like
I said, AI Josh and Chuck will just wrap it all
up and spank it on the bottom and say no
problems here.

Speaker 2 (51:14):
Hopefully they'll give us a billion dollars rather than like
a month free of Blue Apron instead.

Speaker 1 (51:20):
Yeah, I mean we could talk here.

Speaker 2 (51:23):
Well, since Chuck said we can talk real confidential-like,
that means it's time for Listener Mail.

Speaker 1 (51:28):
I'm gonna call this conception. Not inception, but conception.

Speaker 2 (51:35):
Oh I saw this one. I don't know how I
feel about this.

Speaker 1 (51:39):
Hey, guys. Last year, my wife and I were attempting
to get pregnant. A couple of months in we made
plans to stay with some friends in another town for
a weekend, and when we arrived, it happened to
coincide with my wife's ovulation cycle. As shy people, we
both felt a little bit awkward about, you know, hugging
and kissing in a friend's guest room, but we really

(51:59):
didn't want to miss that chance and that time of
the month, so we went about getting in the mood
as quietly as possible, and my wife suggested we play
a podcast from my phone so that, you know, if
any noise made it outside the room, it would sound
like we were just doing a little pre-bedtime listening.
I knew I needed something with nice, simple production values,
so we wouldn't get distracted, of course, by the whiz

(52:21):
bang sounds and whatnot. And since you were my intro
to the world of podcasts, I've always had a steady
supply of yours downloaded. I picked the least interesting sounding
one in the feed at the time, How Coal Works.

Speaker 2 (52:33):
Okay, I thought that one turned out to be surprisingly interesting. Yeah,
I could see how he would have thought that though.

Speaker 1 (52:40):
Yeah, for sure. We put that on and we did
our business. Six weeks later we got a positive pregnancy test,
and now over a year later, we've welcomed our son
into the world. Name is Cole, and of course we
named him Cole. That is what this person said.

Speaker 2 (52:57):
Wait a minute, wait a minute, they really did name
him Cole?

Speaker 1 (53:00):
No, he said it as a joke. Oh, but great
minds, right, good joke for both of them. It's almost
like you're both chatbots. And this person said, you're fine
to read this, but give me a fake name. And
so I just want to say thanks to Gene for
writing in about this.

Speaker 2 (53:20):
Gene as in gene transfer.

Speaker 1 (53:23):
Sure, thanks a lot, Gene.

Speaker 2 (53:25):
We appreciate that, I think. Again, I'm still figuring that
one out. And if you want to be like Gene,
I'm making air quotes here, you can send us an
email. To wrap it up, spank it on the bottom.
Only humans can do that.

Speaker 1 (53:41):
I wonder if when you said spanking on the bottom,
if that created any issues.

Speaker 2 (53:45):
Yeah, I hadn't thought about that. Maybe playfully how about that? Sure,
and send it off to stuff podcast at iHeartRadio dot com.

Speaker 1 (53:58):
Stuff you Should Know is a production of iHeartRadio. For
more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
or wherever you listen to your favorite shows.
