
April 13, 2025 · 16 mins

Artificial intelligence has been the hot topic of debate in the business world for the last few years.

But increasingly, it’s an area that is encroaching on the creative industries.

The latest OpenAI update is so advanced, fans online have used it to eerily replicate the hand drawn art style of Japanese anime favourites, Studio Ghibli.

It’s just the latest sign of AI coming for the arts, with recent headlines also highlighting concerns over entirely artificial models in ad campaigns, and fake movie trailers that look close to the real thing.

What protections are in place for our creative sector, or could it become one of the first industries to fall to our new AI overlords?

Today on The Front Page, University of Sydney Business School Associate Professor Sandra Peter is with us to take us through the impact of these emerging technologies.

Follow The Front Page on iHeartRadio, Apple Podcasts, Spotify or wherever you get your podcasts.

You can read more about this and other stories in the New Zealand Herald, online at nzherald.co.nz, or tune in to news bulletins across the NZME network.

Host: Chelsea Daniels
Sound Engineer: Richard Martin
Producer: Ethan Sills

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Kia ora.

Speaker 2 (00:05):
I'm Chelsea Daniels and this is the Front Page, a
daily podcast presented by The New Zealand Herald. Artificial intelligence
has been the hot topic of debate in the business
world for the last few years, but increasingly it's an
area that is encroaching on the creative industries. The latest

(00:27):
OpenAI update is so advanced, fans online have used
it to eerily replicate the hand drawn art style of
Japanese anime favorites Studio Ghibli. It's just the latest sign
of AI coming for the arts, with recent headlines also
highlighting concerns over entirely artificial models in ad campaigns and

(00:48):
fake movie trailers that look close to the real thing.
What protections are there in place for our creative sector,
or could it become one of the first industries to
fall to our new AI overlords? Today on The Front Page,
University of Sydney Business School Associate Professor Sandra Peter is
with us to take us through the impact of.

Speaker 1 (01:09):
These emerging technologies.

Speaker 2 (01:16):
Sandra, you've recently written about OpenAI's latest update. Can
you explain what's happening here with these Studio Ghibli style images?

Speaker 3 (01:25):
So OpenAI's latest update to ChatGPT was a
significantly improved image generation capability. This means that it allowed
users to create really convincing images in the style
of... And everyone's been trying out the Ghibli style because
it's been one of those very, very clearly identifiable styles,

(01:49):
and it was also used by Sam Altman to tell
people about this thing, and it's been really enormously popular,
so much so that basically their systems have crashed since
they've released it. And I think it was about another
ten million people that have joined OpenAI's efforts
based on this new image generation model.

Speaker 1 (02:09):
And so how is this possible?

Speaker 2 (02:11):
Has OpenAI just scraped all two dozen Ghibli movies in
order to replicate this style?

Speaker 3 (02:17):
They haven't. They've scraped the Internet, and the internet has
a lot of things on it, and we can
talk about what goes into training some of these models,
but the idea with generative AI is that they've changed
the way that they create these images. So they've kind
of moved from the traditional, what we call, diffusion models,

(02:38):
where we have models that gradually refine noisy data,
to something called autoregressive algorithms. And no one wants to
go into the details of that. All you need to
know is that basically it treats images like language.
So ChatGPT now predicts words in a sentence, but it

(02:59):
can also predict visual elements in an image. This means
that you can basically use all the things that ChatGPT
has learned about the Ghibli style, for instance, the fact that
it's got things like soft pastels, it's a Japanese type of animation,
and so on, and it can use those to more
accurately create these images from precise prompts that people give it.
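
To make the "images as language" idea concrete, here is a minimal, illustrative sketch of autoregressive generation. It is not OpenAI's actual model: the tiny vocabulary of visual tokens, the hand-written next-token probabilities, and the 4x4 grid are all invented for illustration. The point is only that the image is produced one discrete token at a time, each conditioned on what came before, the same way a language model predicts the next word.

```python
# Toy autoregressive "image" generator (illustrative only).
# A real model would learn token probabilities from data and
# condition on the text prompt plus every earlier token.
import random

# Hypothetical next-token probabilities for a handful of visual tokens
# (stand-ins for learned patch codes).
NEXT_TOKEN_PROBS = {
    "sky":    {"sky": 0.5, "cloud": 0.4, "roof": 0.1},
    "cloud":  {"sky": 0.6, "cloud": 0.3, "roof": 0.1},
    "roof":   {"roof": 0.4, "window": 0.4, "grass": 0.2},
    "window": {"roof": 0.3, "window": 0.3, "grass": 0.4},
    "grass":  {"grass": 0.8, "window": 0.2},
}

def sample_next(prev: str) -> str:
    """Pick the next visual token from the toy conditional distribution."""
    probs = NEXT_TOKEN_PROBS[prev]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate_image(width: int = 4, height: int = 4) -> list[list[str]]:
    """Build a tiny 'image' token by token, left to right, top to bottom."""
    tokens = ["sky"]  # fixed start token, analogous to a prompt/start symbol
    while len(tokens) < width * height:
        tokens.append(sample_next(tokens[-1]))
    return [tokens[r * width:(r + 1) * width] for r in range(height)]

if __name__ == "__main__":
    random.seed(0)
    for row in generate_image():
        print(" ".join(f"{t:>6}" for t in row))
```

A diffusion model would instead start from a grid of pure noise and repeatedly denoise the whole image at once; the sequential, token-by-token prediction above is what lets the newer approach reuse everything the language model already "knows" about a named style.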

Speaker 2 (03:23):
What are the copyright implications here of this happening?

Speaker 3 (03:26):
Oh, there's a lot of copyright implications. And can I
actually say, when it comes to generative AI, this comes
on top of many other controversies that we've had. But
the ability to work with styles, and when I say styles,
I don't just mean the Ghibli style. Everything becomes a
style for generative AI. That means things like bananas or cats,

(03:49):
or bloody corporate emails, they all become styles. The ability
to work with these styles becomes the heart of
this controversy, because for many artists, our distinctive approaches
to how we create art are not a style that
can be applied in a specific prompt. However, traditionally,

(04:09):
copyright law doesn't protect styles, only very specific expressions,
because we don't want to stifle creative expression. Right? If
you could copyright things like Impressionism, that would limit what
people can do. But there's a very clear difference between
a general style and the highly distinctive style that
a person might have. If you remember, a while back,

(04:32):
there was a guy called Greg Rutkowski. This was a
Polish artist, and people were using his style over and
over again to generate images on Stable Diffusion. Now, if
you try to do them in the style of Greg,
this threatens his livelihood and his craft. So creators
have taken legal action against this on a number of fronts.

Speaker 4 (04:59):
The newest trend is to create, and when I say create
I mean imitate, Studio Ghibli images, truly just coming for
one of our most beloved examples of human manual creativity.
Everyone loves Studio Ghibli because it's beautiful and also because
it's painstakingly created by humans who love art. And now
ChatGPT is trying to imitate that and creating

(05:20):
images that are just worse. They're just soulless versions
of Studio Ghibli. It's also worth noting that Miyazaki, the
mastermind behind Studio Ghibli, once said of AI, I
would never wish to incorporate this technology into my work
at all. And once, when he was presented with an
example of how AI could be applied to his work,
he saw the footage and said, quote, I strongly feel
that this is an insult to life itself.

Speaker 2 (05:46):
It emerged a few months ago that, allegedly, Meta CEO
Mark Zuckerberg approved of his company using pirated versions of
copyright-protected books illegally uploaded to sites like LibGen to
train its AI software. If true, it seems like it
might be breaching all sorts of laws, I suppose. But
is there actually anything that can be done about it?

Speaker 3 (06:09):
This is one of those huge "if true" kind of things,
and also something that is still being debated and contested.
There is a good likelihood that things like LibGen were
ingested to train these models. We don't know at what
stages and at what scale. It's also true that when
we ingest what is commonly known as the Internet, we
will end up ingesting things that have copyright that we

(06:32):
don't necessarily think of as being ingested. I think an easier
way to think about that might be the controversy around
generative AI models being able to write really, really good
dialogue for movies because they've ingested OpenSubtitles.org, which
has pirated subtitles for all the movies in all the languages.
And even though it wasn't trained on actual movies or

(06:53):
on movie scripts, ingesting the pirated subtitles means that it's
now very good at doing this. Yes, things can be done.

Speaker 1 (07:02):
The law will.

Speaker 3 (07:03):
Evolve much as the technology has, except that technology moves
a lot faster than the law. But there's a lot
of work under way on new legislation to try
to balance the fact that we need an enormous amount
of data to create these models with protecting things like artists'
identities and their creative work. And it's obviously not just

(07:25):
Studio Ghibli, right? There are the same concerns in written text, in music.
We've had Billie Eilish and Pearl Jam voicing concerns about
generative AI in music. So this is very widespread and
something that is being debated around the world. Now, the
question of how we'll be able to kind of balance
the fact that we do need a lot of data

(07:48):
and there is scarcity in data, with the fact that there
are varied approaches to this legislation around the globe.
There have been calls in the US to allow companies
to use copyrighted information, to let this go away for the
companies training generative models, because we have an AI race.
This will all have to be

(08:08):
worked out in the next couple of years.

Speaker 2 (08:20):
We've also recently seen the YouTube channel Screen Culture demonetized.
You may have seen some of their AI generated trailers
that they make look like the real thing.

Speaker 1 (08:31):
How advanced is AI getting?

Speaker 2 (08:33):
Are we going to be able to tell the real
from the fakes in a few years' time? And what
does that mean for film studios and the like?

Speaker 3 (08:41):
That's a complex question. Let me take a step back and talk
about the bigger picture. This is an arms race,
and at the moment we're not really winning it, in
that we are able to create very, very good fakes,
deep fakes, and any type of content that is indistinguishable

(09:03):
from the real thing. We were already able to do
this really quite well with images a few years ago.
We are now increasingly good at doing it with voice,
with audio; we are increasingly good at doing it
with video. I think the thing to focus on here
is, first, that most of the detectors, that all of
the detectors that are out there online, cannot reliably tell

(09:25):
you if a work is AI generated or not. If
you remember the Pope in a white Balenciaga jacket controversy,
it's been an arms race since. And most of
us can create convincing content using over-the-counter, commercially
available software. So if you were trying to create a
deep fake of my voice, it would likely cost you
two dollars a month with commercially available software, and it

(09:47):
would be a fantastic clone of my voice that can
fool my mom. If you were looking to create video,
you'd probably need a minute and a half of video
of me in the public domain to create deep fakes
of me. So it is an arms race, and you
know we're not good at telling these things apart.

Speaker 1 (10:02):
And it's not just movies.

Speaker 2 (10:04):
Recently I saw global fashion giant H&M announce
plans to use AI by making digital twins of thirty
of its models. Now, the company has said that the
models would own the rights to their likeness. But this
is something that's been worrying fashion industry insiders for
some time. Does there need to be better regulation in

(10:25):
place before companies start to innovate and use AI in
this way?

Speaker 3 (10:30):
Obviously there needs to be better regulation in this space,
but I think first it will be down to companies
to figure out how they want to use these technologies ethically.
Technology always evolves faster than the law, so the work
in the legal space will take some time. So I
think it's really really important for executives, for leaders in

(10:50):
this space to upskill themselves around artificial intelligence, to understand
what the ethical challenges are, what the practical challenges are,
but also what some of the huge opportunities are in
that space, and to make informed choices about what they
do in their organizations. This has been a long time coming, right?
We've had digital humans like Lil Miquela back in

(11:12):
twenty sixteen. They were on Instagram, then they became, you know,
digital flesh and blood. They did ads for things like,
I think it was Prada back in the day, and
Balmain. We had them develop music careers on Spotify.
So this is not new, but it's up to companies
in the first instance, to really figure out how they

(11:33):
want to do this the right way, and it's also a
huge opportunity for companies to lead in an ethical way
in that space.

Speaker 5 (11:41):
One of the most iconic brands bringing in the holiday season,
but take a closer look at the new Coca-Cola
commercial and you might notice that it was made with
artificial intelligence. Social media has certainly caught on, with posts showing
how lifeless that Christmas commercial is. Coca-Cola has put
out an ad and ruined Christmas and their entire brand.

Speaker 3 (12:03):
It's less festive, more creepy holiday vibes. There's a push for
marketing efficiency, right? How do we create more with less?

Speaker 1 (12:13):
That's just sort of Business 101.

Speaker 2 (12:17):
I've also seen, you know, entire ad campaigns generated
using AI. Campaigns that would usually require entire teams, days
of shoots, makeup artists, stylists, lighting technicians, directors, photographers, all
replaced by just typing words into this kind of generator.

Speaker 1 (12:34):
There's something sad about that, isn't there?

Speaker 3 (12:36):
Well, there's something sad about the volume that we can
make in this space. There are, again, opportunities in that space
as well. If you think about the kind of
deep fake of me, we're not using it obviously to
create videos of me, but if we need to do
a little pickup, it might be easier to do
it by typing words in rather than booking the

(12:57):
whole studio and everything else again and disrupting everybody's day.
So there are opportunities in that space, but there's also
a real danger that we might be just rehashing, recombining
all things and not giving artists, creators, directors, producers, writers
the opportunity to really bring our humanity to this. I

(13:19):
think the useful way to think about AI is as
an assistant. This is not a technology that should come
to replace what we do. It's not there to take
our jobs, but it's there to enhance us. So I
would encourage people to think about what they can do
with AI that we weren't able to do before, and
think of it as adding to our capabilities rather than

(13:41):
replacing us. There will be disruption, and I think all
of us know that that will happen.
I always say it's not coming for your job, but
it's definitely coming for your job description. But it's an
important moment, I think, especially for people who lead businesses,
lead creative things, to upskill themselves on what this technology is.

(14:04):
I would, of course, say the University of Sydney
does the best work around AI fluency anywhere in the world.
So do come and do this with us, but find
ways to understand what this technology can do. It's
really quite different to what we've been able to do before.
Even the idea of style engines. It's not intuitive to
us to think that, you know, things like bananas or

(14:25):
corporate emails become styles. Cat-ness is a style, right? So
try to understand what the tech can do and then
really be very mindful and very deliberate in how you
implement it in your organization. I think it's up to
people who lead, whether it's the creative industries or whether
it's any business or government, to lead the way on
how we think we ethically want to be doing this.

(14:47):
It's not happening to us. We are making this future
happen and the next two years will be crucial.

Speaker 2 (14:53):
Are there any protections that you'd like to see in
place in order to better protect the creative industries? Because,
given how fast the technology
is evolving, there is a fear that it could stifle
that creative production and that creative process.

Speaker 3 (15:12):
I'm not a lawyer, so I leave
the lawmaking and the details of this to those
who know better. But I would want to see protections
around artists, right? I do want to see protections around
how their work is being used to train these models,
because I think at the moment it's a bit of

(15:33):
the Wild West out there, and I am worried that
AI generated content might end up in the long term
severely diluting the earnings of original creators, which also means
that fewer people will want to join the creative arts.
So ultimately, I think owners' consent in using any of

(15:54):
this should be a requirement.

Speaker 1 (15:56):
Thanks for joining us, Sandra. Thanks for having me.

Speaker 2 (16:02):
That's it for this episode of The Front Page. You
can read more about today's stories and extensive news coverage
at nzherald.co.nz. The Front Page is
produced by Ethan Sills and Richard Martin, who is also
our sound engineer.

Speaker 1 (16:18):
I'm Chelsea Daniels.

Speaker 2 (16:20):
Subscribe to the Front Page on iHeartRadio or wherever you
get your podcasts, and tune in on Monday for another
look behind the headlines.