Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog
(00:24):
wherever you get your podcasts. Thanks to our partners at fly.io. Launch your AI apps in five minutes or less. Learn how at fly.io.
Daniel (00:44):
Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard. And today, we're really excited to talk about AI in filmmaking and content production, as we have with us Sami Arpa, who is CEO and cofounder at Largo AI. Welcome,
(01:09):
Sami.
Sami (01:10):
Thank you. Thank you for having me.
Daniel (01:11):
Yeah, it's great to have you here. I remember specifically, you know, of course, we're always looking for interesting folks to join us on the show and talk about, you know, how AI is being used in various verticals and industries. And I remember seeing a Variety article about this sort of Sylvester Stallone backed team
(01:37):
at Largo AI, and, you know, it talked about the world's fully AI automated film company, which was very intriguing to me.
I'm sure we'll get into a lot of those details. But maybe before we hop into the specifics about Largo AI, I know that you all have been in the industry for some time and been
(02:00):
doing this work. Could you just give us maybe a high level picture of how AI and advanced technology have been evolving in recent times in the film industry? Of course, for a long time, many people have known about CGI and certain technology that's actually fairly advanced that's been used
(02:22):
in filmmaking for some time. But maybe give us your state of AI in filmmaking and how that's evolved in recent years.
Sami (02:32):
Yeah, absolutely. So the film industry has always been advanced with technology, but the adoption of AI has not been easy. I can tell that from our journey during the past six, seven years. So, I mean, we can think of the primary adopters of AI
(02:55):
in the industry as Netflix and Amazon, because they started to use AI for recommendation systems. That was already twenty years ago.
And then they started to create their own content, original content. And that also created another step for AI, because they
(03:16):
could analyze the content and select and order the type of content that they already know will work. That was also with AI, because specifically, for example, Netflix was having the system of micro genres, which they still have. Knowing the audience behavior, and also understanding content
(03:38):
at an earlier stage by using AI tools, you could order the right content right from the start. So this way they could get a larger audience with narrower catalogs. And so then, if you put this as two phases, then we can also name a third phase.
This is the phase that we see starting with ChatGPT, like
(04:03):
in every industry, in the film industry as well, where we saw more applications, and that has also changed the adoption of the industry. We measure the adoption of the industry for usage of AI during the past six, seven years. It was 2% when we started Largo AI. Now it is around 30%, and that is great progress. And it is
(04:29):
progress despite things like strikes. We had two big strikes in Hollywood, and partly that was also against AI.
Daniel (04:37):
Yeah, and maybe just to dig in a little bit there. What some people might think of as the thing that pops into their mind with AI and film is maybe what they've seen around the actual video generation side, or changing the
(05:01):
visual effects. But it sounds like you're talking about something a little bit more wide reaching and operational across the film industry. So could you give us a little bit of a picture, maybe the question is, an overall categorization of how AI might be used in different parts of the film
(05:24):
industry? I know you're digging into certain parts of that, but maybe you could help us understand more generally the different categories or ways that it could be used.
Sami (05:33):
Yeah, I think for that, it's important to understand the chain of development of a film. A film is a very big project, and it takes many years. As the audience, we just see the end result on the screen, but any film has four main stages of development. The first part is development of the overall story,
(05:56):
then pre-production, the stage where we attach people to the story and also raise the budget. Then we have production, and then the post-production and distribution stages.
So, at any stage, there are many people involved, and there's a lot of work, and many projects cannot finish all this
(06:19):
process. Actually, for example, going from development to pre-production, already 90% of the films are eliminated. Or even coming to development, there's a big number of projects that are eliminated, like script writers writing screenplays that are never picked up by the producers. But at each of
(06:44):
these steps, there is a lot of work, and eventually, obviously, the goal is to bring that to the screen, but only a few projects are coming to that stage. So, of course, the sexy part is text to video, the visual parts, but for all the other parts, there are applications of AI, and actually, there is a big benefit
(07:07):
of using AI.
That's something we have focused on as well. We have focused more on the earlier stages for the usage of AI: understanding the content, character casting, and predicting financial results to help raise the budget for the project. But of course, there are strong applications for post-production and production with
(07:31):
new text-to-image tools, and we will see more promising applications of that as well. Recently, Google Veo 3 has been released, which is amazing. Actually, the results look amazing.
So that will still change significantly the post-production
(07:52):
and production parts as well. So we still observe that there are not enough strong applications at that stage, especially in live-action movies. But the point here, to summarize: for every step, there are important efforts. Some parts are not visible to the audience, and for every step, there are types of AI tools
(08:15):
that we can apply.
Daniel (08:17):
Yeah, and of course, probably most of the audience has not been directly involved in any of these stages of film production, maybe except, you know, watching it on Netflix or wherever the venue might be. But could
(08:37):
you give us a sense of the investment and how much effort is put in proportionally at each of these stages leading up to distribution, in terms of both time and people and scale?
Sami (08:57):
Yeah. The biggest investment, in terms of money, is for the production and distribution stages. Production, really, is just to produce the content on the set, and I include the post-production budget in that as well. And then distribution is the part for spending marketing budgets. So, investment
(09:20):
wise, these are the biggest parts.
But time wise, for most films, they are not, actually. For many projects, the development and pre-production stages take much more time. There are projects even having six, seven years of development or pre-production. The biggest
(09:41):
challenge here is to convince many people to gather around a project. And while doing that, you need to make a lot of iterations.
So producers, screenwriters, even the director might be involved at that stage. So you create a story, and you engage people around that story, and you need to find people to put money into
(10:06):
that project. That might be studios, investors, etcetera. That's normally the biggest, most difficult step for most projects. And in general, most projects are not good, actually.
That's also a reality, which we can see in our AI system as well: we have producers, they put their projects in to get
(10:30):
financial results, and in most cases, it says that the project will fail. Because, yeah, finding a good project is not an easy task, and it requires a lot of work, a lot of study. That's why AI tools at that stage can be very, very helpful, very
(10:52):
critical. And for a producer or filmmaker, the biggest hurdle for the future is failing at the current project, because that's also what opens the door for the next projects, or not. That's why making sure that that project will be successful is very
(11:16):
critical.
Daniel (11:17):
Yeah, and that's very
interesting to me that you're
sort of focused on this. I guess what I'm hearing is there are sort of noisy early phases of these projects where you've got maybe good projects mixed with a lot of noise. There's difficulty in parsing through that, and also, for
(11:38):
those that, you know, maybe have written the story or are promoting the production of a project, it's hard maybe to stand out. From your perspective, does this sort of technology, and we'll get into, you know, exactly what you all are doing, but generally, bringing technology to these early stages, does that change the
(12:01):
dynamics for, you know, smaller studios or script writers or maybe lesser known folks, such that they could use technology to help them play on more of a level playing field with the big studios or well known folks?
(12:21):
How is that dynamic sort of shifting, or is it?
Sami (12:25):
Absolutely, yeah. AI and new technology create much more opportunity for smaller production companies and newcomers. If you think from the studio perspective, especially at the early stage, they have a lot of resources to understand if a piece of content can be successful or not. They have
(12:48):
experience, they can get a lot of research done, including focus groups at very early stages. So those kinds of things are not available for newcomers or small-capacity production companies. At the development stage, you don't have your movie budget. You have not
(13:12):
raised it yet, so if you are lucky, you can still find some people or some institutions investing at the development stage. If not, they use their own resources to understand if a piece of content will be successful or not. That's why using AI tools at that stage is a very cost- and time-effective way to understand
(13:35):
the potential success of content. That's also the case for later stages.
It will be the same later as well, by the way, because we will see the cost of production and post-production reduce significantly with AI tools. If a $100,000,000 budget film can be produced for a $1,000,000 budget, that will change the whole
(13:55):
ecosystem, right? Because $1,000,000 to $10,000,000 budget films can be produced by independent producers, but not $100,000,000 ones. We will also see all those changes in the next years.
Sponsor (14:13):
Okay, friends, build the future of multi-agent software with AGNTCY. AGNTCY is an open source collective building the Internet of Agents. It is a collaboration layer where AI agents can discover, connect, and work across frameworks. For developers, this means standardized agent
(14:33):
discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows. Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. AGNTCY is dropping code, specs, and services. No strings
(14:57):
attached. You can now build with other engineers who care about high quality multi-agent software. Visit agntcy.org and add your support. That's agntcy.org.
Daniel (15:15):
Well, Sami, we've kind of talked about, or referenced, some of Largo AI and, you know, the way that you're digging into these early stages. Now, digging in specifically to what you all are doing, could you give us a little bit of the back story of Largo, kind of how it came about, what those initial ideas were? Obviously, you mentioned it's existed for, I think you
(15:38):
said, six years. So this is, you know, before the latest kind of boom of AI, at least as far as the general public has perceived it. So, yeah, give us a little bit of the backstory. I'd love to understand how all this came about.
Sami (15:55):
Yeah. Sure. I will connect that with my personal background, because that's how Largo started, the story of Largo started. I did my PhD at EPFL, which is a university in Lausanne, Switzerland, in the field of computational aesthetics, understanding art and generating art by using
(16:16):
computers. And film has also been one of the subfields over there.
So that's the technical, scientific part, but parallel to that, I've also been a director and producer on some small projects. So I've been on the creative side as well, just
(16:37):
working hands on. While working on my own projects, with a bit of an engineer's hat on as well, I was always curious why we don't have a way of representing films similar to music. Because music, we can represent with musical sheets, with the partitions, which gives
(17:03):
a way of understanding certain formats or rhythms and the types of structures for different genres. That doesn't make music less creative. In reverse, actually, it makes it more creative, because you can focus on a specific structure and go
deeper on
Daniel (17:22):
And when you say that, the sort of partitions or structures, you would mean like a pop song or something like that has a verse, chorus, verse, bridge, or something? Or even, musically, there are bars and other things like that?
Sami (17:38):
Exactly, all those things. In film, there is a bit of structure, but these are too formulaic, so they're not the same type of low-level structures. That was a bit of my curiosity too, and that's something I shared with my PhD advisor at that time as well. So, we worked a bit on that.
(18:00):
We were thinking how we can create similar types of structures for film as well, that can be useful both for people working on it creatively, but also for machines. Because understanding and learning from films is also very difficult for machines, because, yeah, a film is two hours of content, all the
(18:20):
frames, millions of pixels, or a screenplay is hundreds of pages of content, and you need to associate this with all the metadata, but there are not even enough samples of data to learn confidently from those. So if we can represent anything in a more structured way, in a smaller space, a smaller vector space, it's even easier for the machine to learn from that. So, that was
(18:43):
our starting point, and with that starting point, we created this, we call them genre recipes, emotion recipes. So this is like we put any film into a nine-dimensional space of genres. And then, for example, let's say drama.
Drama is one of these patterns. We find how drama is evolving over
(19:04):
the story, from start till the end. Same thing for comedy, for romance, for thriller, horror. So this way, on a timeline, we get a map of a film, of a TV series, or of any content. That can be from a screenplay or from the direct video material of the content.
(19:28):
So, this is like a baseline representation of a film for us. That was our starting point. And with that, of course, we then engage many other points, metadata like the actors, the budget, all the other content that becomes a representation of a film. So, we start to use those as a base, both to provide
(19:49):
feedback to creatives, so they can see the real structure of the content, but also to the machine. Once the machine is learning from that type of data, it becomes much easier to learn. For example, easier to learn financial predictions, box office predictions, streaming predictions.
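To make the genre-recipe idea concrete, here is a minimal sketch of how a story could be mapped into a nine-dimensional genre space that evolves over the timeline. The genre axes, the keyword scorer, and the window count are illustrative assumptions, not Largo AI's actual method; a real system would use trained classifiers over the screenplay or video.

```python
# Sketch: a "genre recipe" -- per-genre intensity curves over the story
# timeline, flattened into a compact feature vector a model can learn from.
# The nine axes and the keyword scorer below are illustrative stand-ins.
from typing import List
import numpy as np

GENRES = ["drama", "comedy", "romance", "thriller", "horror",
          "action", "sci-fi", "fantasy", "documentary"]  # assumed 9 axes

KEYWORDS = {  # toy proxy for a real per-genre scoring model
    "drama": ["cries", "loss", "family"], "comedy": ["laughs", "joke", "awkward"],
    "romance": ["kiss", "love", "date"], "thriller": ["chase", "secret", "danger"],
    "horror": ["scream", "blood", "dark"], "action": ["fight", "explosion", "run"],
    "sci-fi": ["spaceship", "robot", "alien"], "fantasy": ["magic", "dragon", "kingdom"],
    "documentary": ["interview", "archive", "narrator"],
}

def genre_recipe(scenes: List[str], n_windows: int = 20) -> np.ndarray:
    """Return an (n_windows, 9) matrix: genre intensity along the timeline."""
    windows = np.array_split(np.array(scenes, dtype=object), n_windows)
    curves = np.zeros((n_windows, len(GENRES)))
    for w, window in enumerate(windows):
        text = " ".join(window).lower()
        for g, genre in enumerate(GENRES):
            curves[w, g] = sum(text.count(kw) for kw in KEYWORDS[genre])
    totals = curves.sum(axis=1, keepdims=True)
    # Normalize each window so the nine genre scores sum to 1 (avoid /0).
    return np.divide(curves, totals, out=np.zeros_like(curves), where=totals > 0)

# Usage: the flattened curves become a small, fixed-size representation that
# downstream models (box office, streaming forecasts) could train on.
scenes = ["INT. HOUSE - NIGHT. A scream in the dark.",
          "EXT. CITY - DAY. A chase through the streets."] * 30
features = genre_recipe(scenes).flatten()  # shape: (20 * 9,) = (180,)
```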
Daniel (20:05):
Yeah, yeah, that makes sense. I can sort of imagine this graph of drama or comedy going up or down on the arc of a movie. So that gives kind of an understanding level, I guess, of the movie. How then does that connect more concretely? How can that connect to concrete
(20:30):
kind of value for those involved in promoting or writing a story or producing a movie?
Sami (20:37):
Yeah. I mean, for producing a movie, there are three important elements. The first one is the content itself. The second part is the people involved, primarily the cast. And the third part is the financial part, the budget and the expected return for that.
(20:58):
And our forecast insights are also in these three main categories. So the content analysis provides insights related to the weak and strong points of the content and the audience's emotional reaction to that content. The character casting part is about understanding the characters and then making
(21:18):
casting propositions. So the AI is making propositions for the cast. It says, for this character, that actor will be the best fit, for example. And then the third part is the financials part. So here, for that part, it makes the predictions directly: how much money the film will make with the given content, and then all
(21:41):
the invested money and the other metadata, like the attached cast, director, etcetera. And for that part, of course, there are many sub-parts, because that part is also relevant to understanding the audience, because how much money you make is relevant to understanding the audience, selling the rights,
(22:01):
marketing, all these things. So it goes deeper to predict the demographics of the audience for specific countries. And along with that, we also have the simulated focus groups, which is one of our most exciting tools. There you can really get quantitative and qualitative feedback from the audience.
Daniel (22:22):
Yeah, that's great. I'm
wondering, in the earlier
stages, like you talked about, of determining what type of content to make, casting, all of that, I'm assuming the assets, I'm thinking more from, I guess, the technical side now. The
(22:43):
assets that you have to work off of, I guess, are the script and maybe some other things. Could you talk a little bit about the inputs to this? Like, what's required to really get good results out of a system like this as a starting point?
Sami (23:01):
Yeah, it really depends on the stage, but if starting at very early stages, the system would need at least a treatment. The treatment is a very short version, an early version of the story. Typically, it can be even like two, three pages. At a bit later stages, it will be a screenplay. That's like the full storyline of
(23:23):
the film.
Coming to the screenplay stage, together with that, the system would also ask for basic packaging information. Because a screenplay, if you think about financial forecasts, a screenplay can make any amount of money, bad or good. It can make any amount of money because how much it makes is also very dependent on the
(23:46):
people attached to that project, and also the budget, how much budget has been put in. With the current standards, let's say, if you try to make a perfect sci-fi screenplay with a $1,000,000 budget, we can tell that the results will be very,
(24:07):
very bad. So it's not difficult to tell, even for regular people, but, yeah, I mean, the AI will give the same warning as well. So actually, in that manner, we always say content is king. It is very important, but once we put in all the features of making a film and look at the AI learning, we see the parameter
(24:30):
that is impacting the financial results most is the budget. And it doesn't mean having a high budget, it means the right budget, especially for a good return on investment. Sometimes some films are having too much budget compared to what they need, and
(24:50):
then it becomes very difficult to make it profitable for the people who make the project.
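As a rough illustration of how content features and packaging information might combine in such a forecast, here is a hedged sketch. The field names, weights, and toy model are hypothetical, not Largo AI's interface; the point is simply that the budget anchors the absolute scale of any prediction.

```python
# Sketch: a toy financial forecast combining content features with packaging
# metadata. All names and weights are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Packaging:
    budget_usd: float             # planned production budget
    cast_popularity: float        # 0..1 aggregate score for the attached cast
    director_track_record: float  # 0..1 score from past titles

def forecast_revenue(content_score: float, pkg: Packaging) -> dict:
    """Content sets a multiplier; the budget anchors the absolute scale."""
    multiplier = 0.5 + 2.0 * content_score \
                 + 1.0 * pkg.cast_popularity + 0.5 * pkg.director_track_record
    revenue = pkg.budget_usd * multiplier
    return {"expected_revenue_usd": revenue,
            "expected_roi": revenue / pkg.budget_usd - 1.0}

# Even a strong script forecasts a small absolute return with a tiny budget,
# echoing the point that the right budget, not just good content, matters.
print(forecast_revenue(0.9, Packaging(1_000_000, 0.2, 0.3)))
```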
Daniel (24:57):
Yeah. And just
practically for maybe some of
the practitioners out there that are working, maybe not in the film industry, but they might be working on simulating other things in other verticals or different types of production processes or whatever that might be, maybe unrelated to film. But just for
(25:19):
their benefit, it sounds like the system that you've built with Largo works on various types of projections. There are various stages to it. I'm assuming, sometimes there's this misconception now, I think exacerbated by GenAI, that you have kind of one model, you put one thing in and then you get
(25:41):
everything out.
I'm assuming that your system, which has been developed over years, kind of involves multiple models that maybe do different things. Like you mentioned the one around detecting or mapping these genre distributions or semantics
(26:02):
across a film. I'm assuming there are different stages of these things with different models involved. Maybe the financial forecasting model would be different from the model that's producing the genre results. Could you give us, at a high level, an understanding of how this all fits together as a system?
Sami (26:24):
Yeah, absolutely. You are right that we have a lot of models. So using the models that we use for genre prediction for financial forecasts, it wouldn't work. Financial forecasts are
(26:44):
typically very, very different models, or shallow models, compared to content understanding models, which are much deeper models. So, that's also the important thing with the current AI wave, that LLMs are really great at answering
(27:05):
many things.
But even if you go to LLMs like ChatGPT, we see that they have many models. Actually, each model is better at a different type of solution. The same thing, of course, for us as well. So, for each type of task, we have different models. And also, we have two main categories. One category is the models that are learning from past
(27:26):
data, and it uses this learning to make the predictions for new content. That's one way of learning. The second is learning the audience. There, what we do is basically we are creating digital twins of real people. So, we don't learn anything
(27:49):
content related.
We learn the people themselves, and then we show the content to these digital twins of real people. And actually, that one has the advantage of almost not missing outliers. Because the one big danger of just learning from past data is, yeah,
(28:11):
outliers, which we can often have in the film industry. Okay, so for general content, we can predict successfully, but we can always have something completely new, where we don't know well the audience behavior for that type of content. The model will miss that. But with that approach, the digital twin approach, we can
(28:32):
even capture outliers, because you are much closer to humans. You are already creating their digital twins, and those digital twins have quite a short lifetime, like one year, so you are very close to the current behavior of people. And it is also very successful at capturing new approaches.
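Here is a minimal sketch of how such a simulated focus group of digital twins could work in principle. The persona fields, the taste vectors, and the overlap-based reaction score are assumptions for illustration only, not Largo AI's implementation.

```python
# Sketch: a simulated focus group of "digital twins" scored against a film's
# genre recipe. Persona fields and the reaction metric are illustrative only.
import numpy as np

N_GENRES = 9  # matches the nine-dimensional genre space discussed above

class DigitalTwin:
    def __init__(self, age: int, country: str, taste: np.ndarray):
        self.age, self.country = age, country
        self.taste = taste / taste.sum()      # preferred genre mix (sums to 1)

    def react(self, genre_recipe: np.ndarray) -> float:
        """Interest in [0, 1]: overlap between taste and the film's genre mix."""
        film_mix = genre_recipe.mean(axis=0)  # average curves over the timeline
        return float(np.minimum(self.taste, film_mix).sum())

def simulate_focus_group(twins, genre_recipe):
    scores = np.array([t.react(genre_recipe) for t in twins])
    young = np.array([t.age < 30 for t in twins])
    return {"mean_interest": float(scores.mean()),
            "under_30": float(scores[young].mean()),
            "30_plus": float(scores[~young].mean())}

# Usage with random twins; real twins would be fit to surveyed people and
# refreshed regularly (the conversation mentions roughly a one-year lifetime).
rng = np.random.default_rng(0)
twins = [DigitalTwin(int(rng.integers(18, 70)), "CH", rng.random(N_GENRES))
         for _ in range(500)]
recipe = rng.dirichlet(np.ones(N_GENRES), size=20)  # stand-in (20 x 9) curves
print(simulate_focus_group(twins, recipe))
```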
Daniel (28:49):
Well, I'm really
intrigued by the way that you've
built up this system of tools that helps in various ways throughout the film creation, film production process. I'm wondering, in terms of, and this is probably something on a lot of people's minds in relation to AI models
(29:12):
and content, especially art or movies or images, that sort of thing. Obviously, you need some sort of reference data with which to train models and help them produce results. Maybe for financial projections or something, you know how much
(29:33):
a movie has brought in or something, and that's public information. I'm not sure actually how much of that is public information.
Sami (29:40):
Not fully, but here we
Daniel (29:41):
Yeah. Basically, my question is, how do you go about creating the datasets you need in an industry where, of course, there's a lot of, you know, proprietary or copyrighted content, that sort of thing? What does that look like for you as a company?
Sami (30:01):
Yeah, I mean, there are open data that we can learn from. For example, movie summaries, that's pretty open. Or movie metadata, like who has been engaged with which films. There is already a lot of open data, or box office data, like how much they have done, which, for most of them, is publicly
(30:23):
announced. But there are also types of data that are not publicly available, and one of the most important of them is streaming data.
Streaming platforms do not provide data. Netflix has started to publish some data recently in terms of viewership,
(30:43):
but it is still very limited. And also, yeah, not having that type of data is shaping the industry, even going outside of the AI perspective, because we know many producers are complaining about not having that data, because the value of a film is very much related to the size of its audience. And that
(31:08):
relationship is very clear at the box office, because you are just putting the film in the box office and you get money according to how many tickets have been sold. But that relationship, at least from the producer side, is not clear on the streaming platforms.
Of course, the platforms themselves, they know, they can make a valuation on their side, but it becomes a bit one-sided. That has been a
(31:31):
bit of a problem. We do streaming forecasts as well, and the way we approach that is by analyzing social noise in the past, and we created models to correlate social noise with household viewership. And from that, we even started to
(31:51):
create fair value calculations. So basically, if streaming platforms were paying according to household viewership share, how much should they have been paying, considering also their subscription revenues?
We make this kind of fair value calculation. Of course,
(32:13):
it is not related to what they are actually paying, because they are paying according to their own calculations. That's just the way we calculate. We say, if it was open, like the box office, that would be the share of the film. So, the data part is like that.
(32:35):
Obviously, different models require different types of data. For content models, we look more at content data; for financial models, at content, metadata, and the financial results. And then again, here, our data dependency is reduced a bit with our simulated focus groups, this digital twins approach, because there you don't need the past films' data anyway, because we
(32:57):
just get people, those people's digital twins, so their reaction becomes our data and it already tells us how the film will perform.
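The fair value idea mentioned a moment ago lends itself to a back-of-the-envelope formula: pay each title its household viewership share of the platform's subscription revenue, times whatever fraction of revenue goes to content. The numbers and the content-payout fraction below are made up for illustration; in Largo's approach the viewership itself is estimated from social noise, which is not shown here.

```python
# Sketch of the "fair value" idea: if a platform paid titles in proportion to
# household viewership share out of its subscription revenue, what would this
# title earn? The content-payout fraction and the example numbers are made up.
def fair_value(title_household_views: float,
               total_household_views: float,
               subscription_revenue: float,
               content_payout_fraction: float = 0.5) -> float:
    """Revenue share implied by viewership, analogous to a box-office split."""
    share = title_household_views / total_household_views
    return share * subscription_revenue * content_payout_fraction

# Example: 2M of 400M viewing households, $8B subscription revenue in the
# period, assuming half of revenue is allocated to content -> $20M fair value.
print(fair_value(2e6, 400e6, 8e9))
```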
Daniel (33:09):
And one of the things
that has been going through my
mind as you've talked about this platform that you've built, which is fascinating, you know, what was occurring to my mind is, well, why don't we just make this thing a loop, if we have this whole process which can give us these projections and put the right casting together and
(33:31):
all of those things? There's one thing to say, well, we can take a script or a screenplay as the input of this process and then create all of those projections and help people plan. What's preventing us, or maybe there's nothing preventing us, from just looping that feedback back and modifying the screenplay or script to update the projections in a sort of
(33:56):
more favorable way? Has that been discussed or been part of the conversation?
Sami (34:02):
Yeah, I mean, it's not
that easy for several reasons.
Of course, with the models, you can put them into a loop and make continuous improvements, even automatically. But even then, I mean, reaching a point of perfection is not easy.
(34:23):
Because even at the current stage, our financial forecast models have around 80% accuracy, which might look low if you think many machine learning models are coming in at 95, 97, 99% accuracies. It's difficult to go
(34:45):
over 80% because there are many elements that you cannot control.
Because a film becomes a success together with audience behavior, and audience behavior might change very quickly, even in the short term. A big natural disaster happens that changes the whole ambience, or some political situation changes
(35:07):
overall behavior. A heat wave arrives, for example; for a box office movie, they were not calculating for that, and then people go to the beach instead of movie theaters. So there are a lot of factors that you still cannot fully determine, because it is tied to audience behavior with many factors. That's one
(35:29):
element.
The second thing is the dynamic of creatives. Because films are done with many people. Many people are contributing to certain decisions. It's not like somebody can tell, hey, let me make this script better and the people better, etcetera, and let's go to the next stage. No, because you have many companies
(35:52):
that are involved.
So, still, you need a lot of agreements to be made among many people. So, I think that's also a blocking point, even if we have a machine looping and making it better. That wouldn't easily be the case. Maybe more in the future, but yeah. But then,
(36:14):
if the machine is looping all the time without a human touch, that might also create too many alike movies as well, of course, and that's kind of dangerous as well.
Daniel (36:24):
Yeah, yeah, maybe on
that point specifically, the
other question I had, which you actually already just mentioned in passing, was outliers. I think, you know, there might be some people out there that would think, well, I've seen what sort of AI does to content, let's say, on LinkedIn. I go on LinkedIn and there's just, like, a feed of AI
(36:47):
generated posts that are sort of all similar. Right? They just sort of look the same. Right? And I think, you know, here we're talking more forecasting, maybe simulation, focus groups, that sort of thing. But there might be people that would say, well, that's really good. You can hone that in, and obviously that helps.
(37:10):
There's a really beneficial part to that, as we talked about, helping bring up smaller studios, giving them tools, augmenting them with technology. That's really amazing.
But then there might be other people that say, well, if we start doing that sort of projection, everyone will be kind of shooting for the same thing or trying to hit the same metrics. What about the artistic piece of it? I'm
(37:36):
sure, even hearing your background, that is likely a very important piece of why you love this sort of art and content, right? So yeah, we'd love to hear your perspective on that.
Sami (37:49):
Yeah, I think that's a very important point. Firstly, in our product, we don't do the reverse process, for that reason. It's always a forward process, that's what I mean. So we always get human content as an input, and we provide all the AI insights, and
(38:10):
then they take a decision, and then they, again, go forward. So we don't tell them, hey, you should do this type of content, write this kind of story. That's the reverse side, so we don't do this reverse-side formulation. I think one reason for that is exactly that danger, because we think if you do the forward process, AI will augment
(38:32):
creativity. On the reverse side, it might create too much alike content. That is definitely one thing. We can see as well, in the results that we are looking at in the forward process, that the variations of the content and the improvements are really great,
(38:52):
because then, with the AI insights, again, humans are improvising over that.
It gives them inspiration to do something different. That is amazing to see. That's why I'm telling you, I don't think we should put it in one basket, that AI will just make all content the same, or that
(39:13):
it will augment creativity. I think it really depends on how you use it, yeah. And this is also one thing related to fear, because we see that there are a lot of people having a fear of that, especially in the film industry. Part of the strikes was relevant to that as well, the strikes that happened in Hollywood. So, I mean, in our
(39:35):
view, it's still very difficult to beat a human, like a very good script writer or filmmaker. It's very difficult to beat their version of using AI. So that's what we see. Because a regular person can go and write a screenplay as well now, using ChatGPT, but that's always very average.
(39:57):
If a very good scriptwriter is also using AI and writing a screenplay, it is difficult to reach that level. So we will see that the bar will get higher and higher, but again, to go above that bar, we need really skilled people in that field.
Daniel (40:12):
Yeah, well, you already started going there, but as we kind of draw to a close here, I'd love to hear your perspective on what you're really excited about as this technology gets adopted more and more in this industry. What excites you, looking to the next year or two? What do you expect to see? What are you excited to see?
Sami (40:34):
Well, what I am excited about is the production budgets. I think the production budgets will go down. That means we will see more films being made. We will have some content inflation, but because of that, I think there will also be more competition. We will augment the creativity over there.
(40:57):
I think we will see much better films. It doesn't mean we didn't have good films. We definitely have a lot of great films from great directors, but we will see much more of those. So that's great news for the audience. But of course, that creates a bit of a problem for the industry itself, because the way that they will
(41:18):
work will change.
I think it will be more of a frequency game. So a good filmmaker, let's say they were making one film per year, maybe now they will do two or three of them.
Daniel (41:30):
Awesome. Yeah, well, I
certainly look forward to
consuming some of that great content that you're helping produce. So, yeah, thank you for your work. Thank you for digging into this over the years and really innovating in this industry, in a way also that I think is responsible, in
(41:51):
promoting the human augmentation of the process with the human as pilot. Really appreciate your perspective there. Thank you for joining, Sami, and hope to have you on the show
again.
Sami (42:04):
Yeah. Thank you very much.
I really enjoyed the
conversation. Thank you.
Jerod (42:14):
Alright. That is our show
for this week. If you haven't
checked out our Changelog newsletter, head to changelog.com/news. There you'll find 29 reasons, yes, 29 reasons, why you should subscribe. I'll tell you reason number 17: you might actually start looking forward to Mondays. Sounds like somebody's got a case of the Mondays. 28
(42:38):
more reasons are waiting for you at changelog.com/news. Thanks again to our partners at fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.