
November 1, 2018 35 mins

In October, 2018, a portrait created by an artificial intelligence program sold at auction for more than $400,000. But there's controversy surrounding the story - did the team that created the work of art steal from the artist?




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Get in touch with technology with TechStuff from how
stuff works dot com. Hey there, and welcome to TechStuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
How Stuff Works and I love all things tech, and
today we're going to tackle a story that unfolded recently.
As of the recording of this show, I'm sitting in

(00:26):
the recording studio in October, two thousand eighteen. It's not
my normal studio either, So if you hear other noises,
that's because we've got noisy people walking around the office
and I'm in a different studio. That's commentary. But this
story unfolded just at the very end of October. That
was when the auction house Christie's put a special item

(00:48):
up on the auctioning block. It was a somewhat blurry
portrait of a man dressed in antiquated clothing. It looked
like a painting that could have come from the eighteenth
century from one of any number of artists, but it
was in fact a much more recent painting. The artist
was not a famous painter. In fact, the artist wasn't

(01:10):
a person. It was an artificially intelligent algorithm that created
the portrait through the process of machine learning. And what's more,
the group of human artists who supplied the AI generated
portrait had taken a great deal of direction, let's say,
from a different computer programmer, but perhaps did not do

(01:31):
as much as they should have to attribute that coder's work
to the creation of this portrait. So what
we have here sounds a bit like a twenty first
century futuristic art heist, only this isn't about stealing a
work of art, but rather a means of generating art itself,
and it's creating a lot of interesting conversations about concepts,

(01:54):
ranging from what is art in the first place, to
the practical applications of machine learning to the nature of
open source code. So let's dive down into this, because
when it comes to discussing how technology interacts with
our lives, this is a doozy of a story. It
highlights not just technological issues but human ones that just

(02:15):
happened to intersect with technology. So to begin with, let's
talk about the tech behind generating this portrait in the
first place. It is an application of machine learning. That's
one of those topics we've talked about a lot on
tech stuff, especially recently. But basically, machine learning is all
about designing processes that allow machines to parse data in

(02:38):
some useful way and then apply the results of those
operations to future problems. But that's pretty darn vague, right?
That doesn't really tell you anything useful. If you
dive down a bit further, it's about creating a
framework within which machines can learn to perform a task
without having to be programmed to do it. So let's
use an example, and it's one I've talked about a

(03:01):
lot because it was one of the early examples of
what machine learning could do once it reached a certain
level of sophistication. Back in two thousand twelve, Google showed
how their computer science teams had taught an AI algorithm
or neural network to recognize images of cats. Now, this
was perhaps a funny way of showing an approach to

(03:21):
a difficult problem. So if you want a computer to
recognize an image of a cat, if it's a specific
image of a cat, you have a couple of different options.
One is, you can program the computer so that when
it encounters a specific arrangement of pixels for this particular image,
it recognizes that as the image of a cat.
You have programmed the computer to say, when you

(03:45):
see this arrangement of pixels, then that means this is
a cat. The computer doesn't understand what a cat is,
it doesn't have any context. It doesn't understand what any
other picture of a cat might be because that would
be a different arrangement of pixels. So you could program
a computer to do this and it would be able
to do it with that one image. But if you

(04:07):
gave it a different image of a cat, or even
an image of the same cat, but it's a different picture,
the computer would not be able to identify it. You
would have to repeat the entire process from beginning to
end to get the same result. And once you start
adding up images, you realize this is not really an
efficient means of teaching a computer anything. Or you could

(04:30):
create an artificial neural network that examines the pixels in
an image, and each neuron might be looking at a
different element of the data to determine if that data
was consistent with images of cats. So we've talked
about this recently too. An artificial neuron can take in
multiple binary points of data, zeros and ones, and

(04:53):
then create a single binary output. So it might be
looking at specific features that might have to do with ears,
for example, and if it detects that the ears are
consistent with those of a cat, it might pass a
positive response further down the neural network, and a full
collection of all these looking at multiple points of data

(05:13):
would allow the computer to come to a decision does
this image represent a cat or does it represent something else. So,
in this way, by feeding thousands or tens of thousands
or hundreds of thousands of images to a computer, you
can train it to recognize cats. And the more you
train it and the more closely you're able to tweak

(05:35):
the network so that it weights certain elements more than others,
the better it gets. So the tweaking makes the network
more capable, and it eventually gets to a point where it
can identify a picture as either being a cat or
not a cat with pretty good results. Back in

(05:56):
two thousand twelve when Google was talking about this, it
was still a little janky. It could sometimes recognize a cat,
and sometimes it would think that a person was a
cat or that a cat was a person, So it
was not infallible, but it was pretty good. Now, because
I've covered artificial neural networks in recent episodes of tech Stuff,

(06:17):
I'm not gonna go through the whole thing all over again.
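For anyone who wants to see that single-neuron idea as code, here is a minimal Python sketch. It isn't anything from the episode itself; the feature names, weights, and threshold are invented purely for illustration.

```python
# A single artificial neuron, as described above: several binary
# inputs, a weight for each input, and one binary output. The neuron
# "fires" (outputs 1) only when the weighted sum of its inputs
# clears a threshold.

def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hypothetical feature detectors: [pointy_ears, whiskers, fur].
# With these made-up weights, two strong cues are needed to fire.
print(neuron([1, 1, 0], weights=[0.6, 0.6, 0.3], threshold=1.0))  # → 1
print(neuron([1, 0, 0], weights=[0.6, 0.6, 0.3], threshold=1.0))  # → 0
```

Stacking many of these, with each layer's outputs feeding the next layer's inputs, is what gives you the kind of network described above.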
That high-level overview I just gave you is a pretty
good starting point. It's just important to remember that the
general output here comes from training a network using an
input data set, in the case
of that example, hundreds of thousands of images of cats.
Machine learning can actually take a few different approaches. The

(06:40):
one that I sort of outlined earlier would kind of
fall into the category of supervised machine learning. See in
that approach, we human beings are trying to teach a
machine through algorithms and data sets to recognize something that
we already know the answer for. Right? You can look
at a picture, and you can recognize whether that picture

(07:02):
is of a cat or not, so you already know
the answer. You're not asking the computer to give you
new information. You're trying to teach the computer to do
something that you already can do. So we human beings
are able to supervise the machine as it is learning
this process and make those minor adjustments that are

(07:23):
needed throughout the system in order for it to get
better at its job. That is supervised machine learning. We
can keep working with it until it reaches what we
consider to be an acceptable level of success, which doesn't
mean it has to be perfect. It just has to
be good enough for whatever it is we're building it for.
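To make the "supervised" part concrete, here is a tiny Python sketch of the idea, not anything from the episode: we already know the label for every example, and we nudge the model's weights whenever it gets one wrong. The features, labels, and learning rate are all invented for illustration; real systems use far bigger networks and data sets.

```python
# A miniature supervised learner: for every example we already know
# the right answer (the label), and we adjust the weights whenever
# the prediction is wrong.

def train(examples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(examples, labels):
            total = sum(xi * wi for xi, wi in zip(x, weights)) + bias
            prediction = 1 if total >= 0 else 0
            error = target - prediction  # the "supervision" signal
            weights = [wi + lr * error * xi for wi, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(x, weights, bias):
    total = sum(xi * wi for xi, wi in zip(x, weights)) + bias
    return 1 if total >= 0 else 0

# Toy "cat or not" data: features are [has_whiskers, barks].
data = [[1, 0], [1, 1], [0, 1], [0, 0]]
labels = [1, 0, 0, 0]   # whiskers and no barking: that's our cat
w, b = train(data, labels)
print([predict(x, w, b) for x in data])  # → [1, 0, 0, 0]
```

The `error` term is the supervision: it only exists because a human supplied the correct answers up front.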
But there's another approach called unsupervised machine learning, and as

(07:46):
you might imagine, this is different from the previous one.
In this approach, you only have input data, and your
goal as a human is to learn more about that
data itself. So you don't have a correct answer in mind.
You don't already know what the data represents, say a
cat in a photo. It's a different type of problem
you're looking at. The machine is learning about the

(08:09):
nature of the information itself, including how different points of
data relate to one another or correspond with other data,
and you in turn can learn more about the information
as well. So within this category you have a couple
of subcategories. There are clustering problems. With a clustering problem,
you're learning about the groupings within data. So one example

(08:32):
might be that you have a population of customers. Let's
say you own a business. You've got customers. You have
data that represents all these different customers, and you're using
the collective behaviors of those customers to sort them into
meaningful groups so that you can better serve each of
those groups. Maybe you learn that there are four basic
types of customers, and that helps you plan out your

(08:55):
business so that you can cater it to those four types.
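That customer-grouping idea can be sketched with a toy clustering pass. This is a bare-bones, one-dimensional version of k-means written for illustration; the spending figures and the choice of three groups are invented, not from the episode.

```python
# Unsupervised clustering in miniature: no labels, just data, and
# the algorithm discovers the groupings on its own.

def kmeans_1d(points, k, iterations=10):
    pts = sorted(points)
    # Seed the centers with evenly spaced values from the sorted data.
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in pts:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the average of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# Hypothetical annual spend per customer.
spend = [20, 25, 30, 200, 210, 950, 1000]
groups = kmeans_1d(spend, k=3)
print(groups)  # → [[20, 25, 30], [200, 210], [950, 1000]]
```

Nobody told the algorithm what "low spender" or "big spender" means; the groupings fall out of the data itself, which is the whole point of unsupervised learning.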
But another type of problem in unsupervised machine learning is
called an association problem. Now, in those problems, you want
to learn rules that describe large parts of the data
that you're feeding into the system. So, for example, let's
go back to you run a business. You've got this
big pool of customers, and you're feeding all the customer

(09:18):
behavior data into your system. It might tell you that, hey,
it turns out that a lot of the customers who are buying
widgets go on to buy sprockets. So that would tell you, hey,
now I know more information. I know that if I
sell a widget to someone, there's a good chance I
can upsell that and include a sprocket as well. So

(09:38):
I'm going to tailor my business approach to try and
take advantage of that. Now, the reason I went through
all of this is to explain that the type of
artificial intelligence algorithm that was used to produce the painting
I was talking about at the top of the show,
falls into a group called generative adversarial networks,
or GANs. These are used in unsupervised

(10:03):
machine learning applications. So it's in that second category I
was just talking about. So what is with this name?
What is a generative adversarial network? Well, for one thing,
it actually uses a pair of deep neural networks.

(10:23):
These two nets are in competition with one another. That's
why it's called an adversarial network. You have these two
different constructs that are working against each other. The approach
was first proposed by researchers at the University of Montreal,
and we chiefly associate the concept with a guy named

(10:44):
Ian Goodfellow. Ian Goodfellow wrote the definitive paper on the
subject back in two thousand and fourteen, and it is fascinating.
So from a very high level, what's happening is that
you have a neural network called the generator and you
have a second neural network called the discriminator. So
you're feeding the discriminator your input data. Let's again go

(11:08):
with pictures of cats, so actual pictures of cats, photographs
of cats if you will. You're feeding photographs of
cats to the discriminator. The generator's job is to create
an image that fools the discriminator into thinking that
that's a legitimate photograph of a cat, but in fact

(11:29):
it was created or generated by the generator. So you've
got two processes going on at the same time. The
generator is trying to create essentially a forgery or a counterfeit.
It's creating something from scratch to fool the discriminator
into thinking this is a legitimate piece of data from

(11:50):
the training data set. The discriminator is looking at each
image and thinking, all right, now does this represent a
real picture or is this something that is coming from
the generator that's designed to fool me, And the two
are working against each other. Both networks learn as this
goes on. If the discriminator gets an image and rejects it,

(12:11):
that becomes feedback to the generator, and the message is, essentially,
this was not good enough, and the generator starts to
try again, taking a slightly different approach. If the discriminator
accepts it, the generator says, ah ha, you're onto something.
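The feedback loop being described here can be sketched in code, with heavy caveats: in this toy Python version, single numbers stand in for images, a running average stands in for the discriminator's network, and fixed nudges stand in for gradient descent. Every constant is invented; this only shows the shape of the two-player loop, not a real GAN.

```python
import random

# A drastically simplified adversarial loop. "Real" training data
# clusters near 10; the generator starts out clueless and only
# improves when the discriminator rejects its fakes.

random.seed(0)
REAL_MEAN = 10.0

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

gen_mean = 0.0        # the generator's current idea of what to produce
disc_center = 0.0     # the discriminator's current idea of "real"

for step in range(500):
    # The discriminator keeps refining its notion of real data.
    disc_center += 0.05 * (real_sample() - disc_center)
    # The generator produces a fake; the discriminator accepts it
    # only if it sits close to what it currently believes is real.
    fake = random.gauss(gen_mean, 0.5)
    if abs(fake - disc_center) >= 1.0:
        # Rejection is the feedback: nudge the generator toward
        # whatever the discriminator considers realistic.
        gen_mean += 0.05 * (disc_center - gen_mean)

print(round(disc_center, 1), round(gen_mean, 1))  # both end up near 10
```

In a real GAN both players are deep networks updated by backpropagation, but the structure is the same: rejections push the generator's output toward whatever the discriminator has learned to call real.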
But then you can tweak the discriminator and say this
was wrong. You got this part wrong, and it

(12:33):
can start to try and look for signs that might
otherwise fool it. The goal here is that you are
going to have a generator producing better and better versions
of whatever it is you're trying to create. And that
could be a picture, it could be text, it could
be music. You could feed any sort of data to

(12:56):
both of these systems in an effort to produce a
computer generated version of that thing, and once
it reaches a certain level of quality, the discriminator won't
be able to tell the difference, and then you've got
yourself a computer generated whatever it might be, in this case,
a painting. I'll explain more about the specifics of this

(13:18):
case in just a moment, but first let's take a
quick break to thank our sponsor. So a couple of
years ago, there were computer scientists at Microsoft as well
as TU Delft University, and they were working together with
the banking company ING to create a brand

(13:41):
new painting in the style of the painter Rembrandt. This
project involved processing high resolution digital scans of three hundred
forty six different images of Rembrandt's works, specifically portraits of men.
That information was fed to a deep learning algorithm that
analyzed Rembrandt's style and also the techniques that were common

(14:05):
across all the images. What were the common elements that
were found in those numerous paintings? And eventually this
system was told to produce a
new painting based on those common factors. And
so it narrowed down the approach to be a portrait
of a Caucasian male because that's what most of

(14:26):
Rembrandt's portraits were of, somewhere between the ages of thirty
and forty, wearing white and black clothing, because again that
was the vast majority of the portraits that Rembrandt created,
and the focus of the subject was off to the right,
like looking slightly off to the right, because a lot
of the subjects in the other paintings were doing the same.

(14:49):
The algorithm also analyzed the faces of all those portraits
and came up with sort of a
mishmash average of them to produce the face of the
fictional Dutch gentleman in the new painting. To go a
step further, the team then added depth to this painting.
It was a two dimensional image, and then they decided
to add some depth. They included some ridges and some

(15:09):
bumps that would have been created from brush strokes onto
a two dimensional surface. So if you're using paint, then
it's actually a three dimensional object. You know, if you
get close enough, you can see raised areas and
dips and trenches and stuff like that that the brush leaves.
And it all depends upon your painting technique how these

(15:31):
get laid out on canvas. So the team added those
details in to make it look even more authentic. Ultimately,
the design was printed using thirteen layers of ultraviolet-based
ink, and the result is a work that looks
like it could have come from Rembrandt, complete with techniques
Rembrandt used in actually making his brushstrokes. And that's just

(15:53):
one high profile example of computers generating paintings after being
fed information about works that human artists have created. Now,
let's get back to the story of the recently auctioned painting. And
to do that, we have to talk about a young
man named Robbie Barrat. Barrat is nineteen years old and
is attending Stanford and has been doing some really interesting

(16:16):
work in machine learning. It was his code that would
be the basis for the computer generated portrait that was
recently auctioned off. Barrat's work was going a step further
than copying the style of an established artist. Barrat's algorithms
would work to create new images after having analyzed numerous
real world examples. So just a couple of years ago,

(16:38):
the state of the art in GANs
might produce some really disturbing images. There are
early pictures of GAN attempts at making realistic human faces
that were not terribly successful, and that's because those networks
were able to recognize certain basic visual elements in images,
but not understand the relationships between multiple elements within

(17:02):
an image, so you could end up with a face
with really extreme features like pronounced asymmetry. But over just
a short amount of time, people have developed much more
sophisticated GAN algorithms and performance has improved, and there are of
course artists who have gone in a different direction, specifically
emphasizing some of these more absurd elements in order to

(17:25):
get that kind of a result when you're actually producing art.
Barrat created GAN algorithms that could generate all sorts of
interesting images. He was enabling computers to make art themselves.
And sure, these computers were learning to create art after
being fed numerous paintings and images from human artists. But

(17:45):
you could argue that if you want to become a
human artist, you have to do the same thing. You
have to study art that was created by other people.
So computers are no different. The computers weren't replicating specific works,
they weren't trying to make a copy. They were learning
various styles. Barrat would frequently put these images and also

(18:07):
the algorithms he used to create those images up on
GitHub for free and open source. He also had
people download these and upload their own art, and
it was all in the spirit of this open source community.
This way, not only could people use the tools that
Barrat had created, they could understand how those tools worked,

(18:28):
and perhaps in the future they can make their own tools,
tweaking the approach that Barrat had used, maybe making art
that was even more indistinguishable from human art, or perhaps
going in a totally different direction, making something truly new
and alien. By the way, some of the images created
by Barrat's algorithms are a little unsettling. They can be

(18:51):
surreal and absurd, and some of them even come across
a little sinister to me. But that's my own interpretation.
I mean, that is what art is all about:
the interpretation of the person looking at it. But they
remind me of some of the horror movie effects you
might see where the visual effects artists will distort a
person's face for the effect of horror, like in the
movie The Ring. Anyway, Barrat created several GAN algorithms and

(19:16):
put them up online for others to use, and this
in itself was not unusual. There are many in the
digital art field who work on AI who have done
similar things. So he creates this code. Now let's take a
trip across the world from Stanford over to France. That's
where three artists in their mid twenties were working in

(19:37):
a group they called Obvious, and their stated goal
is to promote GANism, that is, the art that has
been generated through AI algorithms running on this GAN approach. Now,
according to an article on Medium written by one of
these artists, they quote want to send out an update

(19:57):
of the state of the research in AI, end quote.
They want to tell the
world what is going on in the world of AI
research through showing off artwork made by AI, so kind
of a creative artistic way of talking about artificial intelligence.
The group says that the value of the art may

(20:18):
not be in the art itself, but rather the discussions
that the art inspires, like what is it that makes
art art? Can machines be creative? Who ultimately would you
say is the artist in a work that was created
by a machine? What does that art mean? Who does
it belong to? That's a big one. So the artists

(20:40):
reached out to Barrett when they were tackling this project.
They wanted to use a GAN algorithm to generate a
portrait in a style similar to what you see in
eighteenth century paintings out of Europe. The students have made
it clear that Barrat had been a big part of
their inspiration. More on that in just a second now.
Members of Obvious began using gan code to generate portraits,

(21:03):
and they created several of them, eleven in fact of
a fictional noble family they named the Belamy family, B-E-L-A-M-Y.
The name Belamy itself was a bit of a pun
and a reference to Ian Goodfellow, the guy who wrote
that main paper on GANs in the first place. Belamy
can be broken down into bel and ami, which, allowing

(21:27):
for the different spellings, would mean good friend
or good fellow, which is kind of cute, right? Well,
the artists produced these portraits, and they are all of
hollow-eyed nobles that stare right into the void
in a way that... actually, that's getting off track. Never
mind. It creeps me out a little bit. But

(21:48):
the last in the line of portraits would be Edmond
de Belamy, the fictional noble whose portrait would go up
on auction in October and fetch way more money than
was anticipated. And so Obvious had fed to the algorithms
numerous paintings from the eighteenth century to guide its efforts,

(22:10):
and once they started producing these, they had each one
signed with a line of code referencing the algorithm. They
framed the machine generated portraits in golden frames, and when
Edmond de Belamy went up for auction, the best guess
was that it would probably fetch between seven thousand and
eleven thousand dollars. Instead, the winning bid was for more

(22:31):
than four hundred thirty thousand dollars. So that raises a
good question who the heck should get that money. Who
was responsible for this painting and that would become something
of a controversy. I'll explain more in just a second,
but first let's take another quick break to thank our sponsor.

(23:00):
So as the group Obvious was getting press coverage for
the AI produced Belamy portraits, this is before they had
even put one up for auction, some people, including Barrat,
expressed some disappointment with the group. They said that it
looked like they had used Barrat's code to produce these portraits,

(23:21):
and yet they weren't quick to attribute him. They didn't
give him credit, at least not readily and not visibly
in a lot of locations. And so his code, while
it was open source and he didn't begrudge anyone
the ability to use it, would have usually meant that
people would give him credit. Typically in the open source community,

(23:44):
it's considered bad form, or even gauche if you prefer,
to not give credit where credit is due. As to
how much of the code was actually used unaltered, that
is a bit of an open question. The artists that
Obvious have admitted that they did use his code and
they changed it a little bit. Some other artists say

(24:05):
they believe that even more of the code was unaltered.
One such artist, a New Zealander named Tom White, said
he downloaded Barrett's code and ran it unaltered to see
if he could produce images similar to those that Obvious
had generated, and he said they look pretty close. So
I took a look as well. I would say

(24:26):
that the ones that White had created with that
AI have a little bit more of the weird facial
distortion thing going on than the ones that were made
by Obvious, but they are fairly similar. Throughout the project,
members of Obvious reached out to Barrat for
help in getting the GAN algorithms to run properly on computers.

(24:48):
Those communications are up on GitHub, so I mean
they definitely happened. Anyone can see them. So that's definitely
a sign that a significant portion of the code used
to create the expensive painting came from Barrat. So we
get into that tricky question: who owns the art before
it gets purchased at auction? Does the computer

(25:10):
scientist who created the code own anything that the code produces?
I mean, the code has to have a programmer. Without
a programmer, there's no code. So without the code, you
get no artistic output. But then again, you could say
that human artists learn from their teachers. There's a long
history of artists taking on apprentices, and those apprentices later

(25:33):
on go on to become great artists of their own.
So maybe you could argue that Barrat was a teacher
and the AI was the student, and therefore Barrat wouldn't
own the art. He didn't make it. He just taught
the student how to make art, not in a traditional sense,
but that's how it happened. But here's another problem. AI

(25:56):
cannot own stuff. Artificial intelligence can't have property. We have
no legal means to assign ownership, so that a program,
or an algorithm or an artificial neural network could own property.
And even if we did, what good would it do.
The AI doesn't want or need anything. It doesn't even
have will or self awareness. So maybe Obvious could claim

(26:21):
ownership because they were the ones who fed the information
to the algorithm. They're the ones who gave the algorithm
the access to all the different portraits. They made some
changes to the code, and the algorithms ran on computers
that they controlled, so if the code was using their assets,
maybe they own the output. But this is also complicated.

(26:43):
They didn't build the algorithm. They made use of it,
but they didn't design it from the ground up. But
if someone else had run the code and used
the same general pool of images to train it,
they might have seen similar results, which means someone else
could have done the exact same thing that obvious did,
and so that raises questions as well. Maybe there's nothing

(27:08):
special about owning the machine. In other words, in the
digital world, using open source code to make something new
and then profit from it, sell it, that happens regularly,
but again it's all in how you do it. If
you follow the general rules of etiquette, you're typically pretty good.
But if not, people think of that as being kind
of a jerk face. It's frowned

(27:33):
upon in the open source community. Barrat is quoted in
a piece on The Verge as saying, quote, I'm more
concerned about the fact that actual artists using AI are
being deprived of the spotlight. It's a very bad first
impression for the field to have end quote. So he's
not saying he's upset about missing out on money, but

(27:55):
rather that the whole field is getting misrepresented.
The Verge piece also does a great job pointing out
how many in the AI digital art field feel that
Obvious is painting a misleading picture, to use a pun:
that if you were to look at the press release
that the group has put out and the way that

(28:16):
they've presented the art, it would seem as if these
programs were largely undirected or even fully autonomous, and they aren't.
Just because it's called unsupervised machine learning doesn't mean that
there's no human component. So there's a debate going on
within the digital art world on where in the spectrum
these algorithms should fall. Are they closer to being tools

(28:41):
like what a paint brush would be to a traditional painter,
or are they more closely connected to a collaborator, maybe
someone who's assisting a painter. But they certainly are not
fully autonomous robots. Now. In a way, this question of
ownership actually makes me think of an earlier incident involving
a different art form. It involved a monkey, a digital camera,

(29:06):
and a lawsuit. So back in two thousand and eleven,
a photographer named David Slater was working on an assignment
in Indonesia, and that's where he met Naruto. Naruto was
a seven year old crested macaque, so Naruto was a
monkey. Now, on this assignment, Naruto at one point grabbed

(29:27):
Slater's camera, and while handling Slater's camera, Naruto took a
photo of himself. So it's a monkey selfie, and it's
a great photo. If you've not seen it, you've got
to look up monkey selfie because it is amazing. The
monkey obviously didn't understand what it was doing, but the
selfie is just about perfect. So then this image goes

(29:50):
up online and it goes viral. It gets posted all
over the place, including on Wikipedia, and David Slater would
reach out to Wikipedia and say, hey, you can't just
put my photograph up on your site without asking for
permission or paying a licensing fee. And Wikipedia said, dude,
you didn't take the photograph. It doesn't belong to you.

(30:12):
It was taken on your camera, but you didn't snap
the picture. A monkey took the photos, so you don't
have copyright to that image. In fact, no one has
copyright to that image because news flash, animals can't hold
copyrights to any work. But then PETA, aka People
for the Ethical Treatment of Animals, would sue David Slater

(30:34):
and a publishing company called Blurb for copyright infringement, saying, Hey,
Naruto took that photo, so Naruto should hold the copyright.
The judge in that case would ultimately say that animals
can't hold copyright, backing up what Wikipedia had said, and
that this whole argument was invalid. PETA appealed the decision

(30:55):
and it was scheduled to go to
a higher court, but ultimately the various parties came to
a settlement out of court. And this is where I
kind of roll my eyes at PETA. But this situation,
while silly on the surface, raises questions that also applied
to artificial intelligence. In a case like this, who has

(31:15):
the right to use or exploit a work? Now, I
would argue that in the case of artificial intelligence, it gets
even thornier than that. Right now, we're talking about paintings.
But as I said earlier, GAN algorithms could produce all
sorts of different stuff, including text. So we could have
a computer generated novel or a screenplay in the future,

(31:37):
and sure, the first versions of those will probably be terrible,
And to be fair, we already have a surplus of
terrible books and terrible movies and terrible TV shows that
are made by real human beings. We don't
need robots to make more of those, but we could
also end up with some that are interesting or that
say something surprising that people will value. In those cases,

(32:01):
who has a claim to that intellectual property? Who should
profit from it? Maybe it should be the person who
wrote the code in the first place. But if that's
the case, let's take this thought experiment in another direction.
Let's say someone creates code for an AI that does
something entirely different. It's not generating any content. Let's
say it's the artificial intelligence you would need to power

(32:24):
an autonomous car. Now, let's say one of those cars
is found to have caused a really bad accident. So
should the person who wrote the code be held responsible?
What if the scenario that led up to the accident
was so unusual that no one would have ever predicted it.
Because it's one thing to overlook a common event, Like

(32:45):
if someone were to program an autonomous car and say, oh, crap,
I totally forgot about stop signs, that would be demonstrably bad,
And you could say, well, that is endangerment,
That is definitely not cool. But it's a totally different
thing if you just don't predict an accident that involves
a lot of unique factors, because those happen too. There's

(33:08):
stuff that happens on the road every single day that
happens in a way that nobody anticipated. And because we
have so many people driving so many cars on so
many roads under so many conditions on a daily basis,
it's inevitable that we're going to have moments where those
unique situations pop up and it would be impossible to

(33:29):
identify or predict them. So in those cases, would you
still hold someone who made the code responsible because
they weren't able to predict something that nobody could predict?
Or does that hold them to an unreasonable standard? Is
it the fault of the car manufacturer? Is it the

(33:50):
fault of the person who designed the road. I mean,
there's so many different questions and we don't have all
the answers, But I think in this case, with the painting,
we have this high profile example of AI producing something.
It leads us to get into a deeper conversation about
those ideas, and my guess is we will ultimately come

(34:11):
up with answers that are not entirely satisfactory for all situations,
but maybe some people will even go so far as
to vehemently disagree with them. But more importantly, we
will actually have answers, right? So, yeah, it might
be answers that not everyone is happy with, but at

(34:33):
least they would be answers right now we have nothing.
So this is a good case study for us to say,
we've got to start thinking about this stuff because the
era of AI playing a more pivotal role in our
lives is right around the corner, and it would be
better for us to figure this out now rather than
have to react to it when it's too late.

(34:55):
I'm curious to hear what you guys have to say
about this subject. Why don't you pop on over to
TechStuff podcast dot com. That's our website. Get in
touch with me and let me know what you think.
If you have suggestions for future episodes of tech Stuff,
I'd love to hear those too. Make sure you go
over to t public dot com slash tech stuff. Check
out our store there. Lots of cool things over there.

(35:18):
Get yourself something fun for the holidays, because every purchase
you make goes to help the show, and I greatly
appreciate it, and I'll talk to you again really soon
For more on this and thousands of other topics,
visit how stuff works dot com.
