
May 31, 2024 37 mins

We’re back for a brand new series of Too Long Didn’t Read!

This month Smera and Jonah are joined by Megan Hughes of the Turing’s Centre for Emerging Technology and Security (CETaS), who informs us about misinformation during election periods and reassures us not to always believe the hype. Smera and Jonah also discuss some of the latest breakthroughs in robotics and what they could mean for our workforces, and they explore the tricky topic of how generative AI is being used to create pornographic material, much of which is harmful.

Find out about the Centre for Emerging Technology and Security - cetas.turing.ac.uk/ 

Discover the Turing - turing.ac.uk

 

Topics discussed in this episode:  

  • Introduction [00:00:00] 
  • New Format for Season 2 [00:01:00] 
  • Misinformation During Elections [00:02:00] 
  • Examples of AI Misinformation [00:07:00] 
  • Impact of Misinformation on Elections [00:10:00] 
  • Authenticating Real Information [00:14:00] 
  • CETaS Report on AI Election Threats [00:19:00] 
  • Robotics and Figure 01 [00:22:00] 
  • Impact of Robots on Jobs [00:27:00] 
  • Generative AI and Pornography [00:29:00] 
  • Responsible Use of Generative AI [00:34:00] 
  • Positive AI News: AlphaFold3 [00:35:00] 

 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jonah Maddox (00:02):
Too Long, Didn't Read.
Brought to you by the Alan Turing Institute, the National Institute for Data Science and AI. Welcome to Too Long, Didn't Read: AI news, developments and research delivered directly to your ear holes, from the experts and me. I'm Jonah, a content producer here at the Turing, and

Smera Jayadeva (00:22):
I'm Smera, a researcher in data justice and global ethical futures.

Jonah Maddox (00:26):
Smera, in season one, you were the resident expert, brilliantly answering questions on everything AI, from the ethics of AI labor to the history of the chip wars, and even briefly stopping to advise Santa on a potential AI workflow. But you have been given, I'm gonna say, a promotion. You're my co-presenter now!

Smera Jayadeva (00:46):
Yeah, yes, that's right. We have a slightly new format this season. This time you and I will be discussing the AI news, but we'll also be seeking various expert voices from a wide range of AI and data science disciplines, ultimately to create an even more comprehensive podcast.

Jonah Maddox (01:04):
Exciting stuff.
On this episode of TLDR, we will be talking about misinformation: not a rising star in an academic beauty pageant, but a very serious threat to democracy.

Smera Jayadeva (01:15):
We'll also see if the age of dynamic, real-time robotics is actually upon us, and what it means for your job.

Jonah Maddox (01:22):
And we look beyond the headlines when talking about generative AI and sex. That's funky music on the guitar. 2024 is the year of elections. Over 80 countries and half the global population will be voting this year.

(01:43):
Since there's no collective noun for a group of elections, let's go with a group. Electiontastic.

Smera Jayadeva (01:50):
Yeah, taking on elections is a huge endeavor. Take India: it's seen as the world's largest democracy. Elections began in late April and will go on till June. And we're talking a massive population of nearly a billion people.

Jonah Maddox (02:04):
Yeah, and as if sorting through the candidates' campaign promises wasn't tricky enough before, we now have good old AI to consider, and how it can help people spread falsehoods.

Smera Jayadeva (02:15):
Yeah, and we discussed some of this in the first episode of our first season. Remember, Jonah, way back when? But essentially we face three strands of information manipulation. First we have disinformation, which is the big bad boy, wherein information is falsely construed with the intent of manipulating audiences.

(02:35):
Then we have misinformation, which is the same but without the intention of actually causing harm. Finally, we have the complexities of malinformation, where information is exaggerated or conflated to obscure the truth or the narrative. This is also where secret or classified information is often shared at a strategic time just to influence voters.

Jonah Maddox (02:56):
Check out series one, episode one for the fuller explanation on this; we'll link it in the show notes.

Smera Jayadeva (03:02):
And there are also a few points to keep in mind when it comes to the misuse of data and information. For one, any group or individual manipulating information doesn't necessarily have to follow a single route. Even if they're intentionally planning on manipulating information, one can begin by exaggerating historical events and follow it up with intentionally

(03:24):
false and misleading information, only to galvanize voters towards their cause. For instance, you know, I mentioned India earlier, and I actually was in India during the elections, and there's a good chance that by the time this recording is out, India is probably still going to be counting the votes. But if one were to track the misinformation or disinformation

(03:44):
campaigns in the country, it's rarely a day without reports of false information making the rounds on social media platforms or communication channels, be it Twitter (or X) or even WhatsApp. In fact, the World Economic Forum said India has the highest risk of mis- and disinformation in 2024.

Jonah Maddox (04:05):
So, it's rife, and it's relevant, and we're going to deal with it. It's probably time we should, uh, bring on our special guest to navigate this. TLDR Expert Guest! This month, we are joined by an expert who has worked as an analyst within the Defence and Security Research Group at RAND Europe. She's led projects which assess the impact of emerging technologies

(04:26):
on the information environment, and worked to identify the implications of disinformation and conspiracy theories in Europe. That sounds cool. Her research has informed strategy and policy at the UK Home Office, the UK Ministry of Defence, the European Commission and the United Nations Development Programme. From the Turing's Centre for Emerging Technology and Security (CETaS), we are very happy to welcome research associate Megan Hughes.

(04:48):
Woo! That sounded a bit like the beginning of Blind Date there, where Graham kind of brings them on like, "from the Turing Centre for Emerging Technology...". Hello Megan, I'll let you speak now.

Megan Hughes (05:02):
Hi! Thank you so much for having me. Looking forward to hopefully an interesting discussion.
Definitely.

Jonah Maddox (05:07):
So can you give us a very brief explanation of what CETaS is and what a research associate does?

Megan Hughes (05:12):
Sure.
Yeah.
So CETaS is the Centre for Emerging Technology and Security at the Alan Turing Institute. And I'm a research associate within the team, and we work on policy research relating to emerging technology and national security. So we look at kind of the implications of emerging technologies, and we try to advise actors like the government on what they should do in response.

Jonah Maddox (05:34):
Okay.
So, we've learned sort of quickly in the intro from Smera about mis-, dis- and malinformation. But could you tell us a bit more about how it plays out during election time?

Megan Hughes (05:42):
Sure. Yeah.
So I'll kick us off with a kind of traditional influence operation. If we look back at the 2016 US presidential election, we can see quite a typical example of a state-sponsored influence operation. So this was when Russian actors looked to influence US

(06:02):
voters ahead of the elections. And they did a number of things. So it wasn't just a kind of misinformation, disinformation campaign; it was much broader than that. So we had things like hack-and-leak techniques, where hackers got into the Clinton campaign emails and then shared these emails over a period of a few weeks to kind of distract from the main campaign messages.

(06:24):
But specific to misinformation, Russian actors created a network of fake accounts, of bots. We're looking at about 50,000 of them, and they were all sharing divisive content and fake news stories, and reposting hashtags to make them go viral, like "Hillary for prison". That was one of them. And they were also publishing political advertisements criticizing Clinton.

(06:47):
So that's looking a few years ago, and that's looking at something that, like I said, I'd kind of term a traditional influence operation. When we look to the past few years, and we look at AI examples from elections that have taken place since the start of 2023, in research I've been doing, I can talk to you about three examples of AI misinformation.

(07:09):
So you've got AI-generated voice clones. I don't know if you saw coverage of the Biden robocalls. So this is where we had deepfake audio clips of Joe Biden urging voters not to turn out and vote in the New Hampshire primaries.
We've also got an example, if we look to Poland in their recent election, where the

(07:31):
opposition party actually published a deepfake audio clip of the prime minister reading a set of real leaked emails. So you can see the kind of generated voice content we're seeing come up. There's general AI-generated content as well: over in the US there are reports of whole news sites that have been generated by AI, sharing completely fake news stories.

(07:52):
So that's more text-based content that's quite easily shareable. And lastly, coming closer to home, looking at the London mayoral elections, we saw AI-powered bots that were, again, sort of similar to the tactics in the Russian operation, circulating hashtags. So the hashtag "London voter fraud" was circulated quite a lot ahead of the elections.

(08:12):
So those are some examples of techniques and tactics that we've seen being employed. Right. And is there any evidence of them having the desired effect? So that's really interesting. So when we look at misinformation generally, if we kind of take the AI out of the context, a lot of studies have shown that

(08:33):
only a small minority of people actually see the majority of misinformation. So I think there was a study in 2016 on X, formerly Twitter, and it showed that only 1 percent of X users were actually exposed to 80 percent of the fake news content on the platform. And if you're exposed to misinformation, it doesn't necessarily

(08:57):
mean you'll be persuaded by it. So fake news, we know, is more likely to enhance existing views. It's not as likely to radically change your behavior, so not as likely to kind of influence voting intentions. And studies have quite consistently found that, in relation to elections, misinformation hasn't meaningfully affected the outcomes of elections.

(09:18):
And that's because there are loads of factors that contribute to someone's voting choices. What we can look at is what's new with AI, so looking forward to kind of upcoming elections. Well, AI might make a difference in the amount of disinformation and misinformation that might be disseminated. So it might help actors reach more people.

(09:40):
It also might help to personalize misinformation. So this is called micro-targeting, and it's a concept where personalized campaigns are aimed towards individuals or groups, and it has been shown to be quite effective. I think something that's quite relevant is the platforms on which people are finding their news.

(10:01):
So we know that, of young people between 16 and 24, the majority of them find their news online. I think it's 80 percent who find their news online, and most of that is through social media. Not to kind of, you know, scare anyone, because it's perfectly easy to see BBC News on social media. It doesn't mean that people are just getting all of that news from fake sites.

(10:22):
But what's important is that traditional social media sites use graph models, where they show you content based on the content that your network, your social network, is sharing and liking and engaging with. When we look at TikTok, which is obviously going to be a big player when it comes to sharing information before elections, TikTok

(10:44):
doesn't use that model so much. So TikTok actually shows you information that comes from outside your social network; it actually uses algorithmic recommendations to bring in new content. So if we look at kind of what's new with AI, in terms of being able to personalise disinformation or misinformation, and in

(11:05):
being able to reach more audiences: could we see more effective use of misinformation on platforms like TikTok? Maybe, but that's not to kind of sow worries.

Smera Jayadeva (11:16):
So would you say TikTok's the answer to echo chambers, then? We're breaking what came before.

Megan Hughes (11:22):
I wouldn't recommend spending all your time looking for information on TikTok. I think, yeah, maybe it's the answer to echo chambers, but I know that groups like Meta are exploring changing the way they're kind of using the graph models, the social graph models. So who knows. But I think you're definitely right that echo chambers exist on traditional social media sites.

(11:43):
And we know that people are likely to kind of share things that they agree with, with people that agree with them.
Right.

Smera Jayadeva (11:50):
So surely voters are used to being sold something when it comes to electoral promises and manifestos; that's the whole basis of electoral campaigning. But doesn't that mean we've always been vigilant towards such, you know, trends in communication?

Megan Hughes (12:05):
Sure.
In the interest of talking about a really, you know, timely, hot topic, can I suggest we go back to ancient Rome? Yeah, you know, just down the road, yeah. Just down the road, you know, finding the really relevant facts here.
But there is an anecdote.
There's a point.

(12:25):
So, so if we go back to the Roman Republic: it's facing civil war. Octavian, who is Caesar's adopted son, wants to really get the public on side so he can win against Mark Antony, one of Caesar's most trusted advisors. So what does he do? He spreads a bunch of rumors that Mark Antony is a drunk.

(12:46):
And that, because he's having an affair with Cleopatra, he doesn't have any of the traditional Roman values that would make a good leader. I hope you see the point now. I'm trying to point out that this is a very, very early example of a misinformation, disinformation even, campaign. So we can trace this back thousands of years. Misinformation is definitely not a new problem.

(13:09):
It's something that, as you say, we've been dealing with for a while. When we look at the impact of new technologies like AI, there are some differences. So, you know, I mentioned being able to disseminate misinformation more easily and to, you know, more people, but there's also a concept called the liar's dividend.
I'm not sure if you've come across this.

(13:30):
No.
So this concept was coined by a couple of US law professors, and the concept is that people can now claim that true information is false, and you can avoid accountability by relying on public scepticism and the belief that the information environment is completely inundated with false information.

(13:50):
So that's something that, you know, we might expect to see. We've seen an example of it actually in relation to elections in Tamil Nadu in India. A clip came out of a minister accusing his own party members of illegal financing, or fraud basically, and he came out and said: no, I dismiss that.

(14:11):
That's not true.
I never said that.
But a later analysis of the clip by technical experts found that it's quite likely the clip was authentic. So that's one example we've seen. We've not seen lots of examples of this, but it's definitely something where, you know, there's potential there for it to happen.

Jonah Maddox (14:29):
Yeah.
So as that begins to happen, people's trust in truth will sort of disappear, bit by bit break down. It's funny, isn't it, that you think of this as a sort of highbrow topic, but it's basically just playground tactics, isn't it?
Completely.

Smera Jayadeva (14:43):
With all of this, how do we authenticate real information? I know you said there are a couple of experts, but if there's so much of this going around, are there any ways we can, you know, try to ascertain the truth, at least for an audience that might not have that much time? So is there maybe someone out there doing this work for them?

Megan Hughes (15:00):
I think there are a few things. So there's things that platforms can do and there's things that we can do. And I think the first piece of advice I'd give is to maintain a healthy level of skepticism. It's important not to believe all the hype and not to worry too much, because, just as you mentioned, Jonah, if we get kind of really confused about the state of the information

(15:22):
environment, and we think, you know, the waters are completely muddy and we can't find true information anywhere, that's not going to help anyone. And it creates a kind of sense of public anxiety that might actually undermine things like real election results. In terms of kind of practical things that people can do, and platforms can do as well, we've seen that pre-bunking is a method that can be quite effective.

(15:46):
So this is a prevention-rather-than-cure method, where you anticipate the use of disinformation and you warn people about it before it spreads, and you provide factual information on a topic, so people are kind of aware that disinformation might be coming their way.

Jonah Maddox (16:01):
I read about pre-bunking; that's not a word I'd encountered before.

Megan Hughes (16:05):
Yeah, and it's actually been effective. They've done some early studies on climate disinformation. Um, and I think that platforms like Meta have actually started using pre-bunking techniques online. So it's proven kind of effective, and platforms are deploying techniques.

Jonah Maddox (16:21):
Looking at the stat you gave us about how few people are actually exposed to misinformation: that means the majority of information we're getting is information, and we should be told, yeah, you can believe a lot of what you're getting. Is that happening?

Megan Hughes (16:34):
I think you're completely right.
And it's really important that, you know, we do need to be encouraging trust in the information environment. So I think when you log on to Facebook, in the campaign period, if you share a post that's to do with a political party, for example, if I remember rightly, a little comment comes up saying, you know, have you checked this

(16:54):
source, or have you checked the content? And I think that's a great example of something that could be done, to just kind of make people pause and think: oh, okay.

Smera Jayadeva (17:03):
So on the different methods that we can use, either as individuals or that platforms are taking on: I've also heard about the Coalition for Content Provenance and Authenticity, so essentially content watermarking. Is that going to have any real impact? What can we see in the future when it comes to C2PA?

Megan Hughes (17:21):
I think it's a great question. I think C2PA is a step in the right direction. So it's a group of organizations that have come together, and they have committed to developing technical specifications to be able to trace the origin of media. And there's lots and lots of ongoing research on watermarking, but there

(17:43):
are a lot of problems with it. So there's the adoption problem. So if one platform adopts a form of watermarking and they're putting notices out saying, you know, oh, this content is AI-generated, there might be an assumption by users that any content that then isn't watermarked is legit. And that might not be strictly true.

(18:05):
So there's an adoption problem there. And even if watermarking kind of becomes very good, I think we can assume that sufficiently capable and sufficiently motivated actors will get around it. So it's a step in the right direction, but it won't be a kind of great solution that solves all of our problems.

(18:26):
So Megan, could you tell us about this CETaS report? Sure, yeah. So this has been a great project to work on, and it's ongoing. So we've got a publication coming out soon; that's a briefing paper, and then a longer-form report due out later this year. And what we've been looking at is the impact of AI-enabled threats

(18:47):
to the security of elections. And we've been looking at examples of AI misuse from 2023 to date, and the kind of takeaway that I'd like listeners to think of is that examples are quite scarce, and where they do exist they're really hyped up by mainstream media.

(19:07):
The risk isn't really in AI use during elections. You know, there's a small risk, but the major risk is the heightening of public anxiety and the undermining of the general information environment. And what we don't want is for people to lose trust in genuine, authentic sources and information. So that's the kind of key top line I'd want people to take away from our report.

Jonah Maddox (19:31):
Yeah, yeah, that's a really good point. Let's make sure that we're not contributing to the hype about misinformation with this podcast. So I suppose that kind of leads us to any final thoughts from you. A concluding statement, if you will.

Megan Hughes (19:46):
Sure.
I think the key message is, you know, misinformation has been around for thousands of years. AI is relatively new to us all, but it is just a tool, so people will use it for good and for bad. But please don't worry that it's going to hugely impact all of the upcoming elections in this very important year for democracy.

(20:09):
There's a lot of hype, but we're yet to see any real evidence that AI has actually impacted any election results. So just think critically, check your sources, think about the content of news, and that's it.

Smera Jayadeva (20:21):
All right.
So just before we leave, there's one final question. Hypothetically, in a world where you are facing off in prime ministerial elections in the UK, Megan: should you be elected, what legislation would you spearhead?

Megan Hughes (20:38):
Oh, that's a really good question. I have to really think carefully, because there's a lot of public accountability with a public podcast. I think that the Online Safety Act has made some good steps, but I think I would like to see stronger legislation surrounding pornographic deepfakes,

(21:01):
because, you know, we've spoken about AI in the context of election security, but 95 percent of online deepfakes are pornographic material, often of women. So, you know, that's a huge problem. I think it got discussed a lot with what happened to Taylor Swift, but the conversation kind of spiked and then has dropped down a bit.

(21:22):
So I think that's a really important topic, and we need to have really strict laws in place to deal with it.

Smera Jayadeva (21:29):
I mean, that's a great point.
I'd vote for you just on that.

Jonah Maddox (21:34):
There's my campaign.
We'll be coming to a bit of the deepfake stuff later in the episode. Thank you very much, Megan. You've been a wonderful guest. We'll let you get back to saving the world.

Megan Hughes (21:44):
Thank you very much for having me.
This has been lots of fun.

Smera Jayadeva (21:51):
Okay, Jonah.
So for our second story, I really wanted to talk about robotics. Robotics. So I saw a really interesting video the other day from Figure AI about their new robot named Figure 01, and OpenAI software has been integral to the development of this robot. And the reason why I think I was so surprised by it is because of the way in which the

(22:13):
robot responds to some of the tasks that the person's asking it to do, not only in terms of its movement, but also the way the robot spoke. I think that was the first time I actually confronted the fact that, you know, this isn't something that's, you know, a few decades away; this is something that we're actively working on right now.

Jonah Maddox (22:31):
Yes.
It's a pretty amazing video. We will link it, of course, in the show notes. For our listeners that haven't seen it, the launch video for Figure 01 has someone asking this shiny chrome robot for some food. It gives him an apple, and then proceeds to clean up a mess while explaining why it chose the apple: because it was the only edible thing on the table.

(22:51):
I know the task of giving someone an apple doesn't sound hugely impressive, but you do need to watch it to see how different it is, at least from how I thought humanoid robots were progressing.
It's mad.

Smera Jayadeva (23:03):
Yeah.
And you know, this startup Figure AI is backed by some of the biggest names in tech. Jeff Bezos, Microsoft: a lot of companies have invested, I think, over a billion into the development of this technology. And what's key to this new shift that's happening is that OpenAI's recent generative AI software has been a key part of the entire puzzle.

(23:24):
It's making the robot more dynamic, and it's making that text-to-speech, that natural language speech, a lot more impressive for the general audience. And I think it really shows how quickly tech has been evolving. I mean, if we see, you know, the industrial revolution times of the early and mid-1800s to, you know, the quick jumps, the

(23:45):
rapid jumps that we saw from the year 2000 to now, where, you know, we had some basic computing and now we have really, really, really smart phones. And I just wonder, if we're seeing this right now, what can we expect in like the next two or three years?

Jonah Maddox (24:00):
Yeah.
So are we going to see a massive increase in robots around us now? Are we prepped for this?

Smera Jayadeva (24:06):
As I said before, you know, generative AI has been instrumental in giving that boost to the robotics industry, to make it more dynamic and respond in real time. But if you watch the product videos, it's far from our imagined idea of a perfectly mobile, you know, robot that's able to respond that quickly. If you see some of these videos, especially of the ones

(24:27):
that look like little dogs. Yeah, it's a bit creepy to say the least. But that's just talking about, you know, more performance-related aspects. I think there's also the general challenges of generative AI, some of which we've already covered.

Jonah Maddox (24:41):
Yes, the impact on vulnerable communities.
Or is it the biases?
Or the safety concerns?
Or the explainability?

Smera Jayadeva (24:48):
Or all?
Yes.
It's pretty much all of that.
I mean, this isn't to say there aren't great uses for robotics, though. We can use them to navigate difficult terrains. For instance, NASA is working on a robot to navigate celestial bodies, so you don't need to put a human at risk on the moon; instead, a robot may be able to walk around and, you know, pick up some space material

(25:12):
to bring back for research purposes.
Yep.
But it is a giant leap for machines. Jokes aside, there are studies showing that there is success with AI and robotics in healthcare, for mobility access and so on.
Interestingly, we can also integrate them into the larger internet of

(25:33):
things network infrastructure. And this might bring us one step closer to what we envision as smart homes and smart cities, where all our devices are interconnected and they're perpetually consuming data about our every movement, our every decision. You know where I'm going with this.

Jonah Maddox (25:51):
You say it like it's a bad thing, but I feel like I'm still so naive to how this data collection really impacts me. It's too easy to accept the T&Cs we're bombarded with. So what can we expect in the next few months?

Smera Jayadeva (26:05):
So for the next few months, for manufacturers, and this ranges from Amazon to Boston Dynamics to Hyundai to Nvidia to Tesla, you know, everyone's getting in on it. It's a rather even playing field as of now.

Jonah Maddox (26:17):
So if we're imagining a sort of Jetsons-esque future, then presumably the production costs need to come down.

Smera Jayadeva (26:23):
Well, if we continue on an unregulated path where robots are affordable, it would actually come at the simple cost of your data, your agency, or even your job. Who needs them? Do you think it is that dire? It is interesting, especially from a market analysis point of view. And, you know, if you take the language of these websites, these

(26:47):
robotics websites, it might lead you to believe we need these machines to fill up these jobs and so forth, and that we, in fact, are the lazier humans. But that's me reading between the lines. Fundamentally, many of the repetitive manufacturing jobs which robots could replace are not only very, very low paying, but incredibly taxing.

(27:07):
So if one wanted to upskill and move out of, say, working in a warehouse where they have rather repetitive tasks, they might not have the time, because they're stuck in endless shifts just to make ends meet. Thus creating the working poor.

Jonah Maddox (27:21):
Side note. Right, or side thought. If you were to lose work like production lines, you could lose the creativity that's born from it, right? Here's an interesting nugget: Berry Gordy, who founded Motown, yeah, was inspired by the production line. He worked on one building cars in Detroit, and he thought you could do the same with a musician: like, bring them in, send them up the production

(27:42):
line, and come out with a hit. He even had a quality control system, like the car factory did, where they would make sure each song was like the best it could be before it left the hit factory, even re-recording them with different singers and things like that. So yeah, remove all repetitive jobs and we might not get another Motown.

Smera Jayadeva (28:00):
Oh, wow.
But I mean, are you saying we should continue keeping workers in very repetitive factory jobs, Jonah? In case we get another Motown?

Jonah Maddox (28:08):
Easy for me to say, yeah. Although I must say, I used to be a very unskilled builder's laborer, and that is easily the time that I've been most prolific in making music and art and feeling really creative. Not quite to Motown standard, but...

Smera Jayadeva (28:26):
In all seriousness, there needs to be a lot more analysis and review of what's going to happen to the state of our markets and, you know, what economic models will look like with greater automation. You know, we have a lot of fundamental assumptions about labor costs, about knowledge, about information and so forth, but it really needs to, you know, get a proper deep dive as we see greater and greater automation.

Jonah Maddox (28:54):
Clickbait. I know what you are up to, with your tantalizingly open-ended question and air of seductive mystery. I thought I was kind of impervious to it, until this month, when I found myself paragraph-deep into an article titled "OpenAI is exploring how to responsibly generate AI porn".

Smera Jayadeva (29:13):
Let me guess: they're not actually exploring how to generate porn at all?
Basically, you're right.
Yes.

Jonah Maddox (29:20):
So what happened was, this month OpenAI released draft guidelines for how the tech inside ChatGPT should behave. And with regards to not-safe-for-work content, it says, basically, we don't do that. However, the article that I read, which was in Wired and also the Guardian, focuses on this note lifted from the document, and I quote, we're exploring whether we

(29:41):
can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.

Smera Jayadeva (29:54):
See, you can kind of see where the article got excited.

Jonah Maddox (29:57):
Yes, I can see where they got it from. But they were also told by an OpenAI spokesperson that "we do not have any intention for our models to generate AI porn". So this segment is kind of at risk of becoming clickbaity itself.
Clickbait of a clickbait.

Smera Jayadeva (30:12):
But it does raise some important questions, I think, about the future of generative AI and where we need to be more careful. The platforms want users to have maximum control, but also don't want them to be able to violate laws or other people's rights. I think we touched upon it in our series where we looked at deepfakes being used for generative porn, and since then there have been the very,

(30:35):
very public questions about it: the deepfakes of Taylor Swift.

Jonah Maddox (30:37):
Yes, yeah, as Megan touched on earlier in the episode. And we'll obviously link the episode where Smera and Jesse talk about that from last series as well in the show notes. So a month or so ago, the UK government created a new offense that makes it illegal to make sexually explicit deepfakes of over-18s without consent. And OpenAI are

(30:58):
very clear that they do not want to enable users to create deepfakes, but it is happening on some platforms. I read an unpleasant article about the rapid rise in the number of schools reporting children using AI to create indecent images of other children in their school, which is very sad.

Smera Jayadeva (31:13):
I know. But while we're talking about something within schools: in April, we saw the first of what will hopefully be a larger crackdown on sex offenders using AI. A 48-year-old man from the UK was prosecuted and banned from using AI tools after creating more than a thousand indecent images of children.

Jonah Maddox (31:33):
Yeah. So we need better tech, better regs, and a better education towards sex and respect in general.
Aside from the illegal and abusive uses of AI when we're talking about sex, I can't see a future where some form of pornography isn't created by AI. I imagine it's often the fringe communities, that the tech isn't specifically made for, who improvise to make what they want and end up discovering

(31:57):
some new use case that no one thought of. Surely it's going to play a part somewhere in the future of AI.

Smera Jayadeva (32:03):
I would actually be more worried about AI-driven porn. There is no transparency on the data used to train some of the generative AI models, and we also have the problem of poor explainability, if we can even say there's any form of explainability. In this case, there may be a chance that someone's photographic data may have been used to train a model, and maybe somewhere down the

(32:24):
line, there's some gen AI porn which looks very oddly familiar to you. And I personally do not want to wake up to a future 20 years down the line where a photo I uploaded on Facebook, completely non-harmful, ends up being part of a training data set that has very unwelcome uses.

Jonah Maddox (32:41):
Yeah.
And I wonder if there's something in the idea that, if AI companies do explore the more questionable avenues, the resulting new architecture developed could enable people with ulterior motives to jailbreak the system and use it for their own, even more dubious, means.

Smera Jayadeva (32:55):
Oh yeah, definitely.
I mean, better tech doesn't mean we eradicate crime, as much as criminal justice AI systems might make you want to believe. The more interconnected our networks, I think there are more risks of cyber operations, be it data theft, data leaks, or even model replication, where they can reproduce some of these models and the outcomes, at the risk of

(33:17):
the person whose data is being used.

Jonah Maddox (33:19):
Yeah.
Okay, let's wrap it up there. I suppose, just to bring it full circle back to clickbait, and having learned from Megan about being aware of what we read and where we get our information, I suppose the message here is to be vigilant. Although this clickbaity headline led us down a valid rabbit hole, sometimes you could find yourself in a more spurious place, ew. Think before you click.

Smera Jayadeva (33:41):
At least we're on the right track when it comes to the law.
It's good to see that, you know, there are active steps being taken to make sure that people are protected, and that there are court rulings now that can be upheld in future cases. Hopefully it's not the case, but, you know, knowing how the world tends to use tech, it wouldn't be surprising if we hear more about this as this technology improves.

Jonah Maddox (34:01):
Yeah, we'll keep you posted. Well, that's about it for this month. But before we go, Smera, I want to continue a tradition from the last series, and that is our positive news segment. So what made you feel optimistic about AI this month?

Smera Jayadeva (34:18):
There's a lot that's been happening, but there's one story I want to focus on. It's this big breakthrough with DeepMind's AlphaFold 3, essentially.

Jonah Maddox (34:26):
I've heard of it.

Smera Jayadeva (34:27):
So the big breakthrough is that this AI system can now map out protein structures quicker than ever, to give cures for diseases. So, essentially, improved drug discovery. Would you like to know exactly how that works? Because I spent some time going into the physics and the biology behind it.

Jonah Maddox (34:42):
I absolutely would, because I did read the sort of headline of this story and thought, that sounds positive. But then I read the rest and understood nothing. So I would love to hear some help there, please.

Smera Jayadeva (34:52):
Okay, keep in mind I'm not a doctor by any means. If I was, my parents would be so proud of me. But okay. Basically, proteins are the workhorses of the cell. They're important for everything, and each protein is made up of complex amino acid sequences. The issue is that these sequences, and how they make up the protein, are governed by these very complex physical and chemical interactions,

(35:14):
which has meant that humans trying to map it out have taken a lot of time. Apparently, it's been like a 50-year grand challenge for medicine and biology. But now there's a computer that can do it for us. And if it can map out proteins, it's the future of drug discovery. Why, you ask, is it the future of drug discovery? It's because drug molecules bind to specific sites on proteins.

(35:35):
So, if we know where those sites are on a protein to bind the drug molecule to, then we find a way to make that drug effective.

Jonah Maddox (35:42):
Very nice.
Shout out AlphaFold 3. Shout out AlphaFold 3. You like it. So, that's it for this month. Thank you very much again to Megan Hughes, our excellent guest; thank you to Jesse behind the scenes; thank you to Smera. I should also just mention, Smera, that this week I watched you perform at the Pint of Science event in London, where you were

(36:07):
performing your imagined future. You came from Mars, from the year 2060 or something?

Smera Jayadeva (36:12):
Yeah, 2064.
Yeah, I came down from Mars.
It was a very hectic moment of traveling for me. I don't usually come back down to terrestrial Earth, but I luckily got the funds from a specific sponsor.

Jonah Maddox (36:23):
It was Lidl, right? Yeah, it was really good. And yeah, for those interested in that, our YouTube will have the Pint of Science video in the future. That's Smera. Well done.

Smera Jayadeva (36:32):
Thank you to everyone who listened this far, and we can't wait to see you next month with a new set of stories that we will cover in detail.

Jonah Maddox (36:40):
Bye.