January 4, 2024 46 mins

In this episode, we talk with Imaginario AI co-founder and CEO Jose Puga on how AI is going to help us search, transform, and create videos faster and cheaper.

We cover:
‣ How Imaginario uses AI to search through videos based on context instead of tags and keywords
‣ The importance of content repurposing and customization
‣ What roles need to level up to stay ahead of AI
‣ Predictions on AI and the role of creatives in 2024

And a whole lot more!

📧 GET THE VP LAND NEWSLETTER 
Subscribe for free for the latest news and BTS insights on video creation 2-3x a week: 
https://ntm.link/vp_land


Connect with Jose @ Imaginario AI:

Imaginario AI - https://www.imaginario.ai
YouTube - https://www.youtube.com/@imaginario2030
Facebook - https://www.facebook.com/imaginarioai
Instagram - https://www.instagram.com/imaginario.ai
Twitter - https://twitter.com/Imaginario2030
LinkedIn - https://www.linkedin.com/company/imaginario-ai/
Jose @ LinkedIn - https://www.linkedin.com/in/jose-m-puga-a397922b

#############

📺 MORE VIDEOS

Final Pixel: How this virtual production studio lives up to its name
https://youtu.be/t0M0WVPv8w4

HIGH: Virtual Production on an Indie Film Budget
https://youtu.be/DdMlx3YX7h8

Fully Remote: Exploring PostHero's Blackmagic Cloud Editing Workflow
https://youtu.be/L0S9sewH61E

📝 SHOW NOTES

Midjourney
https://www.midjourney.com

Runway ML
https://runwayml.com

Pika
https://pika.art

ElevenLabs
https://ntm.link/elevenlabs

Descript
https://ntm.link/descript

Synthesia
https://www.synthesia.io

Respeecher
https://www.respeecher.com

Zapier
https://zapier.com

Apple Vision Pro
https://www.apple.com/apple-vision-pro

Magic Leap
https://www.magicleap.com

Grand Theft Auto
https://www.rockstargames.com/gta-v

Fortnite
https://www.fortnite.com

Imaginario.ai Brings Artificial Intelligence to Video Editing
https://lift.comcast.com/2022/09/22/imaginario-ai-brings-artificial-intelligence-to-video-editing

How is AI disrupting content marketing and how can this help you
https://www.linkedin.com/pulse/how-ai-disrupting-content-marketing-can-help-you-jose-m-puga


#############

⏱ CHAPTERS

00:00 Intro and Overview of Imaginario
02:48 Multimodal AI and Contextual Understanding
03:45 Data Storage and Hosting Options
04:30 Content Repurposing and Enriching Metadata
07:00 Use Cases for Raw Footage and Completed Media
08:30 Training AI Models for Specific Use Cases
10:01 The Vision for Imaginario and AI-Powered Creativity
13:05 AI Agents in Video Creation
15:13 The Impact of AI Tools on Creatives
29:19 The Future of Metaverse and AR
38:40 The Dominance of Netflix and the Importance of Social Media
40:18 AI Tools for Content Creation
42:54 2024 Projections


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jose Puga (00:00):
I'm not sure why people get scared about AI taking away their jobs, when in actual fact you should be able to do more with less.

Joey Daoud (00:06):
Welcome to VP Land, where we explore the latest technology that is changing the way we create media, from virtual production, to AI, to everything in between.
I am Joey Daoud, your host.
You just heard from Jose Puga, CEO of Imaginario AI.
Imaginario is an AI-powered platform that helps creators, content marketers, and studios search, curate, and transform media at scale.

(00:27):
In my conversation with Jose, we talk about how Imaginario is using AI to analyze and search through videos based on context and imagery, not just tags and keywords.

Jose Puga (00:38):
So we're trying to essentially bring curation and transformation closer to a human level, rather than relying on, you know, thousands of different labels and keywords to sift through all your content.

Joey Daoud (00:48):
How we'll be able to generate synthetic media in the future based on our own media libraries.

Jose Puga (00:53):
Models are going to get smaller, smarter, and more focused on specific tasks.
Text-to-video generation is getting better and better.
And if anything, it's going to become also cheaper to create.

Joey Daoud (01:02):
What the role of creatives is going to be in the future.

Jose Puga (01:05):
I see creatives as Creative Directors.
At the end of the day, we're talking about storytelling.

Joey Daoud (01:09):
And a whole lot more, including what roles should start leveling up their skills to stay ahead of AI automation.
Now, if you're an

Jose Puga (01:16):
editor, I think you should start retraining and looking for other jobs, because this is the first area where AI, at least in video workflows, will, will attack.
But embracing AI, I think it's a matter also of survival for many creatives, rather than an option.
And if they're not doing this, I don't think they're going to do well in the future personally.

Joey Daoud (01:33):
Show notes for everything that we talk about are available in the YouTube description or over on our website, vp-land.com.
And now enjoy my insightful conversation with Jose.
Well, Jose, thanks for joining me.
I really appreciate it.
So yeah, I wanted to kind of jump into two things.
First part, let's talk about, uh, Imaginario.ai.

(01:55):
And then second part, let's dive deeper into just AI in general.
But for the first part, can you kind of just give a high-level overview of what Imaginario is and does?

Jose Puga (02:03):
Yes, so Imaginario, uh, essentially is run by video and AI experts.
So in my case, I come from a media entertainment background.
My co-founder is a CTO with experience in robotic perception and autonomous driving.
And we're essentially bringing that technology to marketing and video creative workflows.
What we do on one hand is that we have an API that allows creative and marketing teams to find anything they need in their video libraries at scale, and then repurpose them or transform them into any sort of screen size.

(02:24):
Or if they need to enrich the understanding of their videos, they can do that as well.
On the other hand, we pair that API or that backend with custom video AI agents, so then people can create their own personal agents.

(02:46):
That's still in R&D mode, but uh, we do have an app and an API that's live for different users to test.

Joey Daoud (02:48):
And one of the big parts is being able to upload and have your AI index footage, not just based on objects and words, but context as well?

Jose Puga (02:57):
Exactly.
So what we do is that we apply what is called multimodal AI, right?
So what is multimodal AI?
We don't look just at dialogues, which traditionally could be keywords, right?
That's the easiest sort of way to search inside a video.
But we also look at visuals down to one frame per second, as well as sounds.
So let's say there's an engine, car engine, or an explosion, we can pick that up, uh, inside a video.

(03:19):
And once we look at those three modalities, so dialogues, uh, visuals, and sounds, we also understand the passing of time.
So we can compare shot by shot every x number of seconds, and then understand how the passing of time impacts meaning inside a video.

(03:42):
So we're trying to essentially bring curation and transformation, uh, closer to a human level rather than relying on, you know, thousands of different labels and keywords to sift through all your content.

Joey Daoud (03:47):
And so, I mean, you mentioned it's an API, and so I imagine you can plug it in with, uh, whatever your own interface or backend app is, which I'm curious about more.
But also, in some of your demos, I saw that there was- I'm assuming it was an Imaginario interface, like a user interface.
So from a user perspective, where are you storing your data, or, like, what data options are available?

(04:09):
And, like, if you just wanted to use Imaginario to index your footage and search for it, is that an option, or does it have to plug into another data system, either something you built yourself or wherever else you're storing your data?

Jose Puga (04:18):
Yeah, so there are two options, right?
So you can either, uh, host the content with us, and we are using AWS for that, or you can just use us to create an index, right?
Um, now this index is, as I said before, not driven by labels or time-based metadata.
It's, uh, what we call mathematical representations or mathematical objects.

(04:38):
And then all you have to do is just query our API for us to give you the time codes.
But you can host the content wherever you want.
Now, today, of course, we have the full solution that you can access through the website.
This sort of custom solution where we just hold the index, that's more for enterprise at the moment.
So that's not, uh, available to SMBs and creators, uh, just yet.
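To make the idea concrete, here is a minimal sketch of how an index of "mathematical representations" like the one Jose describes could answer a query with time codes. The toy `embed` function, the clip spans, and the scoring are all assumptions for illustration; a real multimodal system would use neural encoders over dialogue, visuals, and audio, not word counts.

```python
import math

# Toy stand-in for a multimodal embedding: a bag-of-words vector.
# (Hypothetical; real systems use learned neural embeddings.)
def embed(text: str) -> dict[str, float]:
    vec: dict[str, float] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The index: one embedding per clip, keyed by (start, end) time codes.
clips = {
    (0.0, 4.0): embed("two hosts talk about a podcast studio"),
    (4.0, 9.0): embed("a car engine revs and the car drives away"),
    (9.0, 15.0): embed("an explosion lights up the night sky"),
}

def search(query: str, top_k: int = 1) -> list[tuple[float, float]]:
    # Rank clip spans by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(clips, key=lambda span: cosine(q, clips[span]),
                    reverse=True)
    return ranked[:top_k]

print(search("car engine sound"))  # best-matching (start, end) time codes
```

The point of the sketch is the shape of the interaction: the caller sends a free-text query and gets back time codes, while the media itself can live wherever the customer hosts it.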

Joey Daoud (04:58):
Okay.
And then what can you do once you kind of find what you're looking for?
Your origin story of sort of the idea, uh, resonates with me, where you worked as a journalist and you saw video editors sifting through hours and hours of interview footage to try and find stuff.
I have a background in video editing.
I've done a lot of that sifting.
Um, so it's always on my radar of, like, what tools; like, text-to-video editing has been a huge game changer in all of the apps.

(05:20):
When you're sorting through the footage on Imaginario, are you able to, like, build out stringouts, rough cuts?
Like once you find what you're looking for, like, what's the next step?

Jose Puga (05:29):
Yeah, so you can create individual clips for different social media platforms.
So that's one use case, right, which is, uh, repurposing long-form content.
And not just one piece of content, but multiple pieces of content, uh, into compilations.
So, high-end compilations, let's say around a topic.
If you're, you know, running your own podcast, it could also be the best moments of character x and y if you are a broadcaster or a streaming service, and then you put together those compilations for YouTube, TikTok, and others, right?

(05:53):
Uh, in some cases, you will need to resize and also, uh, insert key frames, where essentially it's kind of like reshooting inside the shot, where you need to follow a character and then the, let's say, emotional reaction of another character.

(06:15):
So that's one use case, but uh, also another use case in media and entertainment, uh, is the one of enriching metadata, right?
So you will have thousands of hours of content, and then what these media companies want to do is build, uh, recommendation algorithms or just improve, you know, their media asset management systems.
And for that they need metadata, and trustworthy metadata.
So then they can use our API.

(06:36):
And the output is not necessarily a video, but it's a description of what's happening every three to five seconds, uh, inside that video.
And then they can use that for so many different, uh, use cases, from contextual advertising, which is pairing the right ad with the right scene, to powering, you know, uh, global storage solutions, to recommendation algorithms and more.

(06:58):
Again, it varies depending on if it's an enterprise or if it's an SMB, so a small production company, or if it's a creator, right?
And the type of content changes as well.
It varies.

Joey Daoud (07:00):
Yeah, and I mean, so on a smaller scale right now, currently, the use case is more for, like, you already have produced media, like a library of content that's already kind of completed, and you're looking to find it or repurpose it.
Uh, or is it more in the stage of raw footage, uh, like a digital asset manager?

(07:21):
Well, I guess that would kind of apply to both, but, um, raw footage, like you've got all of your just raw B-roll clips or raw interview clips, and you're still in, like, the editing stage, putting that together and finding your material.

Jose Puga (07:30):
It's both.
It's both.
So, because as you know, in editing, first, it's about curation.
So you need to re-watch content traditionally, right?
You select those key moments.
And then you enter proper editing mode, you know, which is adding transitions, subtitles, whatever that is.
Uh, so we do both.
Um, and the use cases, again, they vary depending on the company.
Like we've been speaking with around 20 production companies in the last month or so.

(07:52):
Uh, and they are all about bringing back to life their archives, because they have, you know, terabytes of content where they have potential B-roll content they can use and new content.
Uh, and not just visual scenes, but also sound effects, right?

(08:13):
And then with our tool, they can use all of that, not just to use in their own projects, but also to resell them on the stock footage, uh, platforms as well.
So production companies are also looking for ways to monetize and create further value from these assets.
Because at the end of the day, these are assets that they're just sitting on and doing absolutely nothing with, which is a shame.
Like there's so much knowledge and so much, you know, artistry inside this content that it's a shame that it's not being properly exploited, right?

Joey Daoud (08:33):
Uh, and then you sort of touched on it before, but also on your explainer video, um, you mentioned, like, another use case was, uh, being able to train your AI models for specific industries or specific use cases.
So can you expand on that a little bit more?

Jose Puga (08:45):
Yeah, so, when it comes to fine-tuning our models, which is training for specific use cases, or let's say, uh, around your own library, there are different options.
Where we are today is not where we wanna be tomorrow.
Where we are today is that you can send us a specific type of data.
We tend to focus more on high-quality data, like textbook-level, uh, data, right?
So we will give certain parameters about the type of data that we need, which again, it's not complex.

(09:06):
Like you will be talking about clip-and-text pairings or image-and-text pairings.
When I say text pairings, I mean a textual description of what's happening in an image, right?
Like let's say you are after, you know, Messi, and you just wanna follow Messi and everything Messi is doing inside your archive.
Then we just need photos of Messi, you know, and just a description of what he's doing in that image.

(09:27):
And with a few thousand of those images we can train and fine-tune models.
So in that case, we can do this at a tenth of the cost, uh, without sacrificing quality, uh, and again, without the need for data annotation, complex model training, and the traditional sort of, uh, flows that you need in place if you want to build this in-house.

(09:49):
On the other hand, all our clients from Universal Pictures to Warner Brothers, Syniverse, production companies, none of them have asked for fine-tuning.
So that tells you that our baseline model is pretty good.
Like they're very happy with it, which is great, right?
Now in the future, uh, talking about where we wanna head towards, is that we want self-training.

(10:09):
So for people to self-serve themselves, where they just ingest, you know, their entire archive, our AI models will be able to select the best data and then from there, train the models, even create synthetic data with your own permissions, so then you can further train and fine-tune the models.
That's what I mean by custom AI agents.
Custom in the sense that they will be personalized, understand your editing style, your, uh, curation style, or taste.

(10:31):
And we wanna get to that point where editing becomes less and less relevant, at least for these sort of short-form use cases and for understanding and reasoning.
For long-form content, like feature films, of course, you're still gonna have editors and very, you know, high-end creatives, right?
But we think that for regular short-form content, this is not gonna be needed in the near future.
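The clip-and-text pairings Jose describes can be pictured as a simple dataset. The field names and file paths below are hypothetical, purely to illustrate what such fine-tuning records might look like; they are not any real API's schema.

```python
import json

# Hypothetical fine-tuning records: each pairs a media reference with a
# plain-text description (the "text pairing" Jose mentions).
pairs = [
    {"clip": "archive/match_042.mp4", "start": 12.0, "end": 18.5,
     "text": "Messi dribbles past two defenders and shoots"},
    {"image": "archive/press_photo_17.jpg",
     "text": "Messi holds up the trophy at a press event"},
]

# Serialize as JSON Lines, a common on-disk format for training data:
# one self-contained JSON record per line.
jsonl = "\n".join(json.dumps(p) for p in pairs)
print(jsonl)
```

A few thousand records in this spirit, paired with the archive they describe, is the scale of data Jose says is enough to fine-tune a model for a specific subject.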

Joey Daoud (10:49):
Yeah.
And you're sort of teasing what I wanted to ask next of, like, what are the next sort of stages for Imaginario?
I think it was a Comcast article where you said that you see your company as an AI-powered Canva.
I think you kind of laid the groundwork, but, like, what's next beyond sorting through data, what's sort of like the next vision that you have?

Jose Puga (11:06):
What we are quite focused on today is to build, uh, AI agents, because again, we believe that the future is going to be the way humans, uh, interact with software, and specifically point-and-click graphical user interfaces are going to change.
Software today, as we know it, is dumb in the sense that it requires your input, um, and therefore it's reactive, right?

(11:29):
It's not proactive.
It's not going to come and tell you, hey, I need this from you to give you the best recommendations, or you just turn on your computer and immediately your computer knows who you are and what you like and what your day looks like.
So that's what we mean, that we're moving from reactive kind of like user interfaces to proactive user interfaces.

(11:50):
And we believe that agents, or multi-agents, are gonna make this possible, uh, agents that you will be able to customize, almost like having a colleague in your team, right?
Especially if you're a business owner, they will know your data, your company data, your personal data, and you'll be able to customize those agents.
So we have built in-house one of those agents, and, uh, but of course it's still a bit of hit and miss depending on the use case.
So that's why we haven't released it to the world yet.

(12:12):
What we want to do is then not just have a chat-based interface, right, like ChatGPT.
We believe that's one part of the user experience.
But we also think that you should be able to zoom in or zoom out from a point-and-click user interface.
So we believe that the way that humans will interact with computers will be based both on conversational, uh, design, uh, experiences, right, and visual interactions.

(12:33):
That's what we wanna build today.
And then of course, on the training side, we are pretty much focused on, uh, tackling specific use cases and training for those specific use cases, rather than, you know, trying to build a large language model with 1 trillion parameters, which we think is the thing today.
But in the future, it's not going to be about that.
Even in the near future.

(12:54):
We believe in small language models, uh, multimodality, self-training, and open source overall.
But again, that's our belief.
I know many people think otherwise.
That was a very long answer.

Joey Daoud (13:05):
No, that's great.
Uh, for the agent part, can you connect that or explain a little bit more of, uh- because when I think of agents, I'm thinking of, like, chat agents, or you're just communicating with, like, a chat interface.
Can you connect that or bring that to me of, like, where that fits in with the video library or in media creation, video creation?

Jose Puga (13:22):
Yeah.
So if I give you an example in video repurposing, right?
Like, uh, you have a podcast, or let's say you have 200 hours, or 200 one-hour episodes.
All of that content is sitting on your Google Drive.
So first you need to define the input.
Then you pull the content from there.
Then you have a few tasks, right?
Because let's say the goal is to push this to TikTok, adapt them to TikTok.

(13:42):
The first task might be look at top trending topics on content marketing.
Let's say you're in the content marketing space.
Then based on those topics, uh, find those specific moments.
Um, then you need to track speakers and resize the content.
So there are a few tasks in the middle, right?

Joey Daoud (14:02):
When you say find trending content, you mean, like, go to the actual web, or like TikTok, and see what topics are trending and what you have in the library to match that?

Jose Puga (14:06):
Exactly.
So a key part of all of this is not just chaining together, or linking together, different tasks and input and output platforms, right, but also layering external analysis.
You need to understand what's happening inside different, you know, platforms.
We will be partnering with other third-party analytics companies.
And then you have your own insights, right?

(14:26):
So from your own YouTube channel or TikTok channels.
And then the idea is that the AI learns from those sources and then can make the best decisions on how that workflow should look, right?
Not just where to pull it and what to do, but also how to make that editorial decision, so that it's essentially like a digital twin.

(14:46):
Like it's you, cloned, but just with augmented capabilities, right?
And we wanna head towards that future, um, where essentially, you don't need to go to one tool, let's say Premiere, to edit.
You need to go to Google Drive to get your content.
You need to go to, you know, Riverside. I love Riverside, but Riverside to record your content.

(15:07):
But you just talk with one or two agents, and they just chain all the different apps, and they know you, uh, exactly, right?
Like they know exactly what sort of output you need.
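The chained workflow Jose outlines (pull content, find trending topics, pick moments, resize, publish) can be sketched as a simple task pipeline, where each task reads and extends a shared state. The task names, the state keys, and the stubbed results are all assumptions for illustration, not Imaginario's agent design.

```python
from typing import Callable

# A task takes the pipeline state and returns it, possibly extended.
Task = Callable[[dict], dict]

def pull_from_drive(state: dict) -> dict:
    state["episodes"] = ["ep1.mp4", "ep2.mp4"]  # pretend download
    return state

def find_trending_topics(state: dict) -> dict:
    state["topics"] = ["content marketing hooks"]  # pretend external analysis
    return state

def find_moments(state: dict) -> dict:
    # Pretend search result: one (file, start, end) hit for the topic.
    state["moments"] = [("ep1.mp4", 61.0, 93.0)]
    return state

def resize_for_tiktok(state: dict) -> dict:
    # Tag each moment with a vertical 9x16 render of its time span.
    state["outputs"] = [f"{f}@9x16[{s}-{e}]" for f, s, e in state["moments"]]
    return state

def run_pipeline(tasks: list[Task]) -> dict:
    state: dict = {}
    for task in tasks:  # chain tasks: each one's output feeds the next
        state = task(state)
    return state

result = run_pipeline([pull_from_drive, find_trending_topics,
                       find_moments, resize_for_tiktok])
print(result["outputs"])
```

The agent idea Jose describes would sit on top of a chain like this, choosing and reordering the tasks (and making the editorial calls inside them) instead of a person wiring them by hand.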

Joey Daoud (15:13):
Yeah, I think that's a big gap now, too.
'Cause, like, focusing on this specific example of short clips, and, I mean, yeah, we put short clips up of the podcast.
These will be short clips.
But there are tons of tools out there, you know, where it's like, oh, give us your YouTube link or your podcast episode, and we'll, like, create the short clips for you.
They take a clip and they kind of do a decent job of condensing it.
The soundbite they pick, maybe it's good, maybe it's not.

(15:33):
But the biggest issue is they don't make those editorial decisions that we do, and we do it by hand, where it's like, take this soundbite, move it ahead of the other soundbite, because that soundbite is a better hook.
It sounds more interesting, and we, like, reshuffle things out of order, like editing, what editing is.
And that's been kind of a big gap lacking in AI tools.

(15:55):
And it sounds like this is that next step in, like, making those small-scale editorial decisions for a short clip, for something 30 to 120 seconds.
You said you defined shorts as under two minutes.

Jose Puga (16:01):
I would say under five minutes for shorts.
But normally it's under two.
Yeah, under two minutes.
Uh, it's just that there's this gap between two minutes and 15 minutes that nobody knows how to define, right?
But yeah, say like mid-form.

Joey Daoud (16:13):
Whenever, whenever I hit the limit on, uh, YouTube or TikTok (except TikTok keeps expanding how long you can upload), uh, everything else, Instagram Reels and, uh, YouTube Shorts, is about 60 to 90 seconds, so-

Jose Puga (16:23):
And, Joey, based on what we just said, like, we, from day one, we believed in human curation.
So a human in the loop.
I come from the media industry, and I do have high standards when it comes to curation.
So, we don't think that AI can just- it can do a decent job with conversational content.
There are a few platforms out there.
You just type a link and then you get TikToks.

(16:45):
And we're gonna be doing that for some customers, because they don't wanna spend, you know, more than five minutes doing this.
And we understand that.
Um, however, we are really catering more to production companies, marketing agencies, uh, marketing and creative teams inside large and mid-size media companies.

(17:05):
So where the editorial aspect is, uh, very important, right?
You need human curation.
As the AI learns more from you, and of course with all the terms and conditions very clear about privacy and data, then that AI can do a great job for you and just help you do your job better, right?

Joey Daoud (17:19):
This is a platform where you're getting, uh, data, but you're getting, like, a specific data training set of, like, someone's media that they upload, and assuming, you know, they give their permissions.
You mentioned, uh, synthesizing videos in the future.
What does that look like?

Jose Puga (17:33):
Yeah, that's a, that's a great question.
So when you train, uh, these models, regardless of if it's a language model or a visual model, or you're in a multimodal space, um, there are different sources for this, right?
Like if you look at OpenAI, it's a mix of data that they've commissioned, then open source data sets, then you have some synthetic data as well.

(17:53):
So it's a mix; like, you need to curate the data, of course.
And we strongly believe in curating data before training.
It's not just a matter of just pushing all the data you find online and just hoping for the best, right?
Because that's when the processing costs go up and you need hundreds of millions of dollars in funding.
So, um, anyhow, so-

Joey Daoud (18:13):
A lot of NVIDIA chips.

Jose Puga (18:14):
So yeah- so yeah, NVIDIA chips.
I believe we have more than 90-something of those, right, the A100s.
But anyhow, this is not me being, you know, jealous about other companies, but we just think that the future, it's not about, uh, having to spend hundreds of millions of dollars in, you know, in training models.
Uh, we believe academia will come back in 2024.
We believe that open source will get stronger.

(18:35):
But anyhow, going back to your question about synthetic media.
Text-to-video generation is getting better and better.
And if anything, it's gonna become also cheaper to create.
So if you can mimic or recreate customers' data to then train your models, at some point that will become cheaper than licensing that data set from others, right?
And that's where- we wanna head towards that.

(18:58):
Now, in some cases, you'll still need to license that data or commission it, depending on the use case.
But if we can take advantage of synthetic data, by all means, we will do it.
Now, a key challenge here is also, uh, bias inside data, right?
So any personal data has bias.
We are human beings after all.
So it's also about having the safeguards to balance out the different data sets and just trying to reduce bias depending on the use case as well.

Joey Daoud (19:19):
And now are we talking about- uh, you did a shoot at a client's office, you forgot to get a wide shot, and you're synthesizing that type of shot?
Or are we talking about, you need a pickup line of a sit-down interview saying something, and you're gonna generate that pickup line of the sit-down interview?
Where are we talking about, sort of, in this range of, like, what kind of media we're gonna create?

Jose Puga (19:38):
Yeah, so, so it depends on the type of content.
So for instance, on one extreme, you have conversational content, right?
So these are podcasts, uh, interviews, and, uh, webinars and content marketing, kind of like B2B content marketing, uh, videos, right?
And then on the other extreme, you have highly sophisticated content that is more visually sophisticated, like feature films, TV shows, and others.

(19:59):
And in the middle you might have some corporate videos or event videos or a mix of- or documentaries, let's say.
So it depends on the type of content.
But what's most valuable for us are the inputs.
So the start time, right?
And then the end time of that clip, for example.
So this is some type of data that we need, and then descriptions of what's happening inside a video.

(20:20):
That's ideally what you want.
And then finally, of course, dialogues or timestamped scripts.
So this is the sort of data that we need to train our models.
Some of that data is sitting with media companies, but we just use, like, uh, users' data in terms of exports, right?
So you use the platform, and at the end of the day, at the end of the workflow, you are downloading that or pushing it to TikTok, and that has already been curated.

(20:42):
So somebody already probably adjusted it a bit at the beginning, at the end, added subtitles.
And then with that, you can then use that for training as well, and actively learn from users.
'Cause that's the tricky part, is, like, what's a well-rounded idea in conversational content and what's a great scene in sophisticated content, right?
And you need to learn from curators to do that.

(21:02):
Or hooks, for example; you were talking about hooks at the beginning of the video, it's like, what's a great hook that can summarize what's happening inside that snippet, but that would also generate views on TikTok and other platforms, right?
So that's the sort of data that you need to overlay between, um, trends and insights, as well as personal, like, subjective curation of content, right?

Joey - Shure Mic (21:19):
All right, real quick, just want to jump in here.
If you are enjoying this content, then you will like the VP Land newsletter.
It's not just a podcast; we have a newsletter.
It goes out twice a week.
It covers all sorts of things in the virtual production industry and the latest AI tools that are affecting how we make videos and media, and a whole bunch of stuff in between.
So you'll highly enjoy that.
You can get it for free over at vp-land.com, or the links are going to be wherever you're listening to or watching this, in the show notes.

(21:40):
All right, and now back to my interview with Jose.

Joey Daoud (21:46):
Widening out a bit, you know, with not just Imaginario, but just the broader rise of AI tools and everything that's happening right now, where do you see the role of creatives in the future?

Jose Puga (21:56):
I see creatives as creative directors.
Um, essentially at the end of the day, we're talking about storytelling, right?
So for me, at the core of everything that we do in, in media and in content in general, there needs to be storytelling.
Ideally, educational value as well, depending on the type of content.
For instance, for content marketing, educational content is super important, that sort of educational value.

(22:16):
High-end content is, is very important.
Yeah, and I think creatives will be able to orchestrate different workflows and be able to create Hollywood-grade films with much more- okay, not one person probably, but a small team, a small production company will be able to create quite sophisticated content in record time.
We believe that the problem might become distribution, though.

(22:37):
So if anything, uh, video production is going to increase exponentially, and it's gonna get more sophisticated.
More creators are gonna be able to create, go beyond conversational content, and be able to create fantastic long-form content even, right?
Uh, which is quite exciting, because if you do an analogy, it's a bit like television in the 1930s, 1940s,

(22:57):
where most of the content was, uh, studio content, talking heads, like us now, right?
But ideally, in the near future, you wanna do something that's closer to a news report or a high-end documentary.
Uh, and that really adds value to people, where people can actually learn something, you know, from that content, or be entertained as well, right?
So I don't think the role of editors will go away.

(23:19):
Now, if you're an editor, and I've said this before, that is used to doing, like, cookie-cutter editing, I think you should start retraining and looking for other jobs, or just become a more high-end, sophisticated editor, because this is the first area where AI, at least in video workflows, will, will attack, right?

(23:40):
So it's search, curation, re-editing, recutting for short form.

Joey Daoud (23:40):
I mean, editor aside, or let's say very basic editor aside, where you're just repurposing type clips, but not heavy, you know, feature or long-form story editing, what other roles do you think will, like, after the editing, get affected or shift in how we do things?

Jose Puga (23:58):
Like managers or creative directors, essentially.
So you need to define- have aquite a, a good understanding of
pre-production, production andpost-production, and be able to
define those roles and agentsagain, that you want to work in
tandem with your team, right?
You need to be knowledgeableabout AI.
So if anything, if you don't,you are pushing back because you
believe in artistry.
Great.

(24:18):
Like there's still space for high-end content.
But embracing AI, I think, is a matter of survival for many
creatives rather than an option.
And if they're not doing this, I don't think they're gonna do
well in the future, personally.
However, I believe that storytelling, curation overall,
and human ingenuity and creativity is

(24:40):
still, it's still here, right?
And if anything, AI learned from us; we can also learn from them,
uh, from these agents and these sorts of models, and just work
in tandem to be more productive, create more content and better
content, higher quality content for a lower cost.
I don't see anything wrong with that.
I'm not sure why people get scared about AI taking away
their jobs when, in actual fact, you should be able to do more

(25:01):
with less, right?

Joey Daoud (25:02):
Yeah, definitely more with less.
I feel like the importance of story is only gonna shine more,
uh, especially with, you didn't say this, but my interpretation
is, we're just gonna have more noise, you know?
And it's gonna be a little bit tougher for everyone to find the
signal of what's good.
I feel this sort of reminds me of when, you know, iPhones
started shooting video, or even before that, when, like, you know,
digital cameras and VHS tapes, and it became easier for

(25:23):
everyone to make movies.
You know, you didn't have to shoot 16 millimeter, you
didn't have to pay for processing fees.
You were able to do it at a lot lower cost.
More people made movies.
There were a lot of crappy, mediocre movies, but then it
also gave rise to a lot of good stuff coming out.
And I think this is just gonna be the next step in that,
where it's easier to make stuff, so there's gonna be a lot more
stuff being made.
A lot of it's just gonna be garbage, but someone who might

(25:45):
not have had the resources before to tell a great story will
have it. But ultimately, it always comes back to good
storytelling.

Jose Puga (25:51):
Exactly.
And I think that competition is good, right?
Like at the end of the day, consumers can be other companies
learning, you know, from other companies that are creating
educational content.
If anything, this is gonna help in terms of quality; it's gonna
just push the quality up.
It's gonna increase the quality of content, there's gonna be more
competition.
Um, and I think at the end of the day, the winners are gonna

(26:12):
be, on one hand, consumers, and on the other hand, distribution
platforms.
So if you look at high-end, uh, you know, streaming services,
uh, like Netflix and a bunch of others, there are like three or
four of them, I think they will be the big winners in this
race, at least in the high-end, you know, media
space.
And when it comes to content marketing, honestly, I'm
excited, because that means that people will not necessarily need

(26:34):
to go to university.
Uh, they can take a YouTube, you know, course, or just learn from
other companies like HubSpot, for example.
They've done a great job when it comes to educational content,
right?
We believe that creators, uh, or small companies, from two to ten
employees, need to become more active.
And as a business owner, and I can tell
you, Joey, you probably feel the same, you just don't have the
time to, you know, create really high quality content and
do this at scale, uh, to be top of mind and, uh, increase
brand awareness.
Uh, and I think if anything, if these tools become more
accessible, then great; uh, whoever's more creative and
engaging will win.
And I don't see anything wrong with that personally.

(27:16):
I think it's better for everyone.

Joey Daoud (27:17):
Do you think there's gonna be a lot of
consolidation of the tools as well? Or that some of the
platforms, sort of where you could just do it all in one place, are going
to be the more dominant players?
Uh, because right now it's like, uh, if you wanna try to
make some sort of AI film, in quotes, it's like: generate
images in Midjourney, then go to Runway or Pika and animate the
images, and then go generate your voice in ElevenLabs or some

(27:39):
other platform.
And it's just a bunch of tools that you have to stitch
together right now,
not like a central platform yet.
Do you think it's gonna be the future where
whoever can integrate the tools more is
gonna be the dominant player, which could be an existing
player, like an Adobe or something, or just something
new that we don't even know yet?

Jose Puga (27:59):
I think the problem with Adobe is that part of their
business model, just to talk about Adobe for a second, uh,
it's a fantastic company and they have incredible AI, but
they rely on people licensing individual tools, right?
And as I was saying initially, I think that, relying on different
tools as part of your workflow, in that sense there will be

(28:20):
consolidation.
So this multi-agent, or even one single custom agent, will be
able to chain together different tools,
and even not rely on some of those tools, handling the tasks
itself. Now, consolidation from a tool perspective, I'm not
sure.
Like if you look at Zapier, for example, their entire business
model is just integrating inputs, tasks and outputs, right?
So it might be the case that you just need a

(28:42):
good integrator that can understand goals and just be
able to customize those goals; then the AI will figure out
where to pull those models from and what to do, right?
We don't think there's one player that has the right to win
in this space.
Uh, I think it's quite early to tell as well.
But we also think that, if anything, processing costs are
gonna go down.
There's Moore's law, right?

(29:02):
And that's gonna keep happening.
Uh, models are gonna get smaller and smarter, and be able to be
highly personalized.
And the user interface is gonna be all about integrations,
right?
So that's what we believe in, and that's the sort of future that
we're building with Imaginario.
We're not there yet, of course, we're just starting, but yeah.
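The goal-driven tool chaining Jose describes, an integrator that wires one tool's output into the next tool's input, could be sketched roughly like this. This is purely illustrative: every function here is a hypothetical stand-in, not a real Midjourney, Runway, Pika, or ElevenLabs API.

```python
# Hypothetical sketch of a tool-chaining "integrator". Each step is a
# stand-in for a real service (image generation, animation, voiceover);
# none of these functions call real APIs. They only illustrate the
# input -> task -> output chaining idea discussed above.

from typing import Callable

def generate_image(prompt: str) -> str:
    # Stand-in for a text-to-image tool (Midjourney-style).
    return f"image({prompt})"

def animate(image: str) -> str:
    # Stand-in for an image-to-video tool (Runway/Pika-style).
    return f"video({image})"

def add_voiceover(video: str, script: str) -> str:
    # Stand-in for a text-to-speech tool (ElevenLabs-style).
    return f"{video}+voice({script})"

def run_pipeline(goal: str, steps: list[Callable[[str], str]]) -> str:
    """Chain each tool's output into the next tool's input."""
    result = goal
    for step in steps:
        result = step(result)
    return result

clip = run_pipeline(
    "a sunrise over a city",
    [generate_image, animate, lambda v: add_voiceover(v, "Good morning")],
)
print(clip)  # video(image(a sunrise over a city))+voice(Good morning)
```

The point of the sketch is that the integrator only needs to understand the goal and the ordering; which specific model sits behind each step is swappable, which is the Zapier-style input/task/output framing Jose mentions.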

Joey Daoud (29:19):
I wanna ask you too, um, you'd written about this: the metaverse
was a hot topic a couple years ago, and then it sort of died down a
little bit, definitely in 2023.
Where do you see it going?
Especially since we've got, uh, the Apple Vision Pro coming out next
year.
Spatial video is now a thing.
Augmented reality and the metaverse might be in a little dip and
then coming back up.
So what are your thoughts on the future of just the metaverse,

(29:39):
AR?
You've written a bit about this.
Where do you still see this going five, ten years from
now?

Jose Puga (29:45):
So, my view is a bit controversial, and probably some
of my investors that love AR and VR might not agree with me.
Uh, that's one that I have in mind, though.
I do believe that XR is the future and will happen, and it
will be everywhere, right?
Now, the main constraints, though, and I'm talking from an
experiential point of view, right?
So being able to experience, like, immersive or augmented-

Joey Daoud (30:07):
Like from a user perspective?

Jose Puga (30:08):
Yeah, from the user's perspective.
Now many things need to happen.
Um, and I did study this; it was my final thesis for my MBA, so
I did a lot of work here.
I even predicted that Magic Leap would pivot to B2B, by the-

Joey Daoud (30:22):
There you go.

Jose Puga (30:23):
Way before that happened.
But anyhow, my view on this is that many things need to
change.
First is infrastructure, right?
So we need high-end, high-speed broadband everywhere to take
this technology outside the home.
That hasn't happened, and there are many things that need to
happen.
Not even 5G has been fully adopted.
And here the bottlenecks are, you know, telecoms and other
infrastructure players. Probably with Starlink and other, you

(30:45):
know, solutions,
this might change in the future, but we need to solve this from
an infrastructure point of view.
Then the second hurdle, and everybody knows about this, is
that head-mounted displays are clunky, too big,
uncomfortable.
Great if you're into gaming, but this is not something you can
wear like, you know, a pair of frames.
That needs to change.
They need to get smaller, cheaper, and be able to connect

(31:05):
with your phone, probably augment your laptop or your
phone, uh, probably as a peripheral, I would say.
There are some efforts here, and there are some companies that
are pushing the boundaries when it comes to the form factor,
right, and adoption.
But also, I don't think we're quite there yet.
Uh, there's still latency as well.
People don't talk much about this, but they should talk about
how socially acceptable it is to wear, let's say, one of
those devices, these sorts of devices, in public, right?
So if you are-

Joey Daoud (31:32):
Yeah.
I think that was the biggest issue with Google Glass.

Jose Puga (31:34):
Yeah, well, glassholes, you remember the
term glassholes, right?
So those people that were wearing Google Glass while
commuting to work.
So that is another problem, that if you have
someone staring at you with, with, you know, some smart
glasses,
you don't know what they're looking at. Like, they
might be doing something dodgy or trying to look through your

(31:54):
personal profile.
So there are a lot of privacy issues, like social acceptance as
well, and cultural matters as well.
Every culture is different around privacy and people being
able to access that sort of personal information,
uh, on the go, right?
So I think that's another factor.
So there are a few things still that need to happen before
there's wider adoption of this.
Now, I think there's still growing interest, and I still

(32:16):
think, uh, in two particular areas.
So that's future of work.
That's one.
So where you can augment your screens, for example, for
productivity, right?
Um, remote collaboration as well.
So virtual offices, that's another area where I'm still
trying to see the winner of COVID-19 when it comes to the future of
work.
But distributed teams and virtual teams, they're here to
stay.
Even if some large corporates want to tell us, no, you have to

(32:38):
go back to the office.
You look at the SMB space, especially startups, there are
many distributed teams, right?
And it works.
We are one, and it works.
Those are two areas.
And I would say also in corporate training.
So, for instance, in manufacturing, or where you
need some kind of special training where it's too costly
to bring people to a specific location, that's where we
believe VR, especially VR or even AR, you know, could be

(33:01):
quite, uh, useful.
But is it ready for primetime?
Is it gonna disrupt, like, the smartphone?
No, I don't think so.
I don't think that's gonna happen in the next five to ten
years.
Uh, it's gonna take longer, but again, it's my own point of
view.
I'm not sure, what do you think about this, Joey?

Joey Daoud (33:17):
I mean, do you think the idea, like, Meta's vision was,
uh, it's gonna be a metaverse, there's gonna be, like,
a fully immersive 3D world, like, uh, a Ready Player One Oasis kind
of thing.
And Apple, I mean, it's not out yet, but from their demo video
and their pitch, definitely did not mention the metaverse once.
I don't even think they mentioned virtual
reality once.
Uh, and their pitch was very much augmented reality.

(33:40):
This is, like, in a blend with your real world.
They have a huge focus on the eyes, so you're not separated.
So, I mean, do you think the metaverse, like the
virtual world, is maybe way further off, if that even ever
happens, and the more immediate use is just gonna be some sort
of augmented screen? Like just a way to have more screens,
possibly have some sort of virtual meeting in a 3D space,

(34:02):
but sort of still grounded in our real world?

Jose Puga (34:05):
I think it's going to start like that, the latter.
So essentially focused on productivity, collaboration, 3D
immersive environments that help with training and education as
well, for example.
However, at some point we will move towards an immersive
metaverse, but I'm even doubting if this will be
enough.
Like, we probably need to do it without glasses, right?

(34:25):
Like, it needs to be either 3D projections or something that
is immersed within smart cities or infrastructure.
But for that, you need government intervention.
You need, you know, the government to spend money.
This will be a mix of, you know, governments and private capital
pushing cities to the next stage.
Uh, places like Dubai, for example, you know, there's quite
a strong focus on smart cities there.

(34:48):
Will this work in Los Angeles?
I'm not sure, right?
So I think there are a lot of things that need to happen.
But we will eventually get there.
I'm a hardcore believer that the 3D, immersive
metaverse will happen.
It's just calculating the timing.
You need ecosystem alignment.
Uh, it doesn't depend on just one company.
And you need people to adopt it.
And for that, it's not just about technology, but it's also

(35:11):
about their cultural background, their own habits.
Is it uncomfortable or not? What about latency and user experience?
And we tend to forget about that.
Like, people that love technology, they focus too much
on the latest trends and the latest technology, but they
don't focus on the form factor or on adoption.
And that's the most important thing.
For instance, Steve Jobs completely got this right, like
with the, um, with the iPhone, and before that, with the, uh,

(35:33):
what was it called again?
The, uh, not the, the iPod.
Yeah.
So there were attempts.
Steve Jobs didn't build the first, uh, iPod, right?
Or the first, uh, sorry, the first MP3 player.

Joey Daoud (35:43):
MP3 player.
Yeah.

Jose Puga (35:44):
Yeah.
Um, he had to align music labels with a DRM system, with,
uh, an app store, with the right user experience.
Everything needed to be aligned.
Uh, a full, integrated vertical stack that can work, right?
Um, that can deliver a flawless experience.
So it's not just one factor.
I think Apple is probably the only one that can get away with

(36:05):
bringing to this world an immersive, you know, metaverse.
Now, do we live in the metaverse already?
Yes, but it's not immersive.
It's optional and it's 2D, right?
So right now, for instance, this is a virtual representation of
two people talking.
We're in different parts of the world.
There are gaming environments as well, like Fortnite, and other
sorts of gated, uh, you know, communities, especially in

(36:28):
gaming.
That's the metaverse as well.
The metaverse is not just 3D immersive plus adding a
blockchain layer to that, right?

Joey Daoud (36:34):
And then, tying this back to media and video, um, how do you see this,
uh, potentially changing the ways or types of media that we
create?
I mean, we've got spatial video; the iPhone 15 Pro can
shoot spatial video.
We can't really do anything with that currently, unless you have
the Vision Pro, which is not out yet.
Or I've seen some people hack a Meta Quest 3 to watch it.
But yeah, some journalists said that they did it and it, like,

(36:55):
brought them to tears, seeing some of their memories, like, in
this 3D environment.
And also, just to tie in every buzzword, photogrammetry and
being able to scan a room, and then possibly, you
know, in the future you can, like, scan your childhood room, and 50
years from now you can walk around in it like you're there.
How do you think this all ties back into potentially
changing the way that movies are made and what kind of movies we

(37:18):
are making?

Jose Puga (37:19):
Well, that's a, that's a huge question, Joey.
But, uh, probably, like,
storytelling is not gonna be linear; there will be
multiple, different types of, uh, storylines within a single
film, right?
And it's all about influencing the audience towards a specific
sort of ending or a specific sort of outcome.
So I think filmmaking is gonna look more and more like gaming,

(37:41):
if you ask me, and not necessarily open world gaming,
but gaming overall.
So you need to achieve certain levels, uh, talk with people and
look for, you know, treasures or whatever that is, right?
I think at some point the gaming, like, you
know, the gaming way of working will be impacting, uh,
Hollywood and vice versa, which is already happening in gaming,

(38:03):
right?
Like if you look at Grand Theft Auto and these franchises that
cost more than Hollywood films, I think gaming is borrowing more
from Hollywood than Hollywood from gaming, and it should be
going both ways.
Because the natural progression of 2D media is 3D, and we will
get there.
It's just that it's not here yet, right?
Now, when it comes to search, curation, and what we do, we have
this as part of our roadmap.

(38:24):
We've already been doing tests with 3D asset search.
And the beauty of our technology is that, I'm not trying to sell
Imaginario again, but it does work with 3D assets and
photogrammetry as well, and whatnot.
But of course, you need to add other layers and modalities
to this, right?
So it's not like 2D media, which is the lower hanging fruit.

Joey Daoud (38:40):
Yeah.
I mean, I see this all converging and blending as well,
and that's sort of one of the theories of this, VP Land,
this whole idea: it started on virtual
production, but these are just all overlapping.
And even in a conversation in another episode with, uh, Final
Pixel, it was like, with virtual production, yes, we
have to build these 3D environments to film our
scene, but since we built these 3D environments, we can

(39:01):
easily repurpose this to make it an immersive experience, where
it's like, hey, you want to, like, walk around the world that we
filmed this movie in, 'cause we had to build this world for this
movie.
We could turn it into a video game, or we could turn it into
something else where you could walk around.
It's just like all of these elements are overlapping
and converging, in all facets, and what used to be kind
of separate media is just sort of, like, evolving into one big
thing.

Jose Puga (39:20):
Yeah.
And the beauty of AI is that it can help you regenerate.
Well, it's repurposing, but it's also regenerating new worlds,
new storylines, new characters,
plots and twists that you didn't even think about, and that you
can still guide and customize, right, um, as a creative
company, as a creative agency or production company.

(39:41):
So again, I think it's an exciting future, and also
for, again, the likes of Netflix and others, which are
the ones dominating the distribution, right?
Uh, because distribution has traditionally been the
bottleneck in media entertainment, at least where
production is fragmented, right?
And the closer you get to the consumer, the more consolidation
there is.
So unfortunately, uh, unless production companies make use

(40:02):
of, uh, social media better and more proactively, and there are
tools that can help you surface your content, again, I think
Netflix and the likes are gonna win, you know, in the long run.
And I don't think it's a coincidence also that Netflix is
experimenting with, uh, gaming, for example, right?
So offering games and-

Joey Daoud (40:18):
No.
Yeah, not at all.
Yeah.
Yeah.
Big push into gaming.
All right, last one.
What AI tools or updates lately, and I should preface this is
December 2023,
'cause this changes so quickly.
So if-

Jose Puga (40:29):
I know.
It's crazy.

Joey Daoud (40:30):
What, uh, what's been on your radar?
What's kind of been interesting, uh, for you, uh, recently?

Jose Puga (40:34):
When it comes to tools, or are you talking about my
2024, uh, projections?

Joey Daoud (40:39):
Oh, that's a good one.
Let's talk about tools, and then let's end on, uh, your 2024
projections.

Jose Puga (40:43):
Yeah.
So I think, uh, when it comes to tools, on one hand, I think what
Runway ML is doing is pretty incredible.
Like, the text-to-video generation tools that they have;
I think they've improved the user interface, like a much better
user experience.
It's much better, easier to use.
I think stock footage companies should be worried.
I think there is, like, some kind of collaboration between,
I'm not sure if it's Shutterstock and, uh-

Joey Daoud (41:04):
Shutterstock.
Yeah.

Jose Puga (41:05):
And Runway ML, but I can completely see Runway ML
taking over that business.
So that's one to watch.
Now, the costs will need to come down, right?
This is VC funded, and, uh, I'm pretty sure they're probably not
making money with text-to-video generation.
But once the unit costs make sense for Runway ML, I think
they can easily take over the stock footage, uh, space.
I know they have much larger ambitions, but that's something

(41:27):
you can use today.
Midjourney, uh, it's getting incredible.
I'm preaching to the choir, probably, but when it comes
to up-res-ing content to 4K or even beyond that, it's becoming
better and much more powerful.
And if you chain together the likes of Midjourney as well
as, uh, Runway ML, you can create really high quality stock
footage. We've done it ourselves, actually, for our pitch

(41:49):
deck.
Uh, funny enough, we were in LA recently and we used some of
these, uh, images and videos.
Then I would say tools like Descript,
uh, and there are a few others that are just focused on
conversational content.
They're already commoditized.
For us, it's all about going beyond conversational content,
language-based media, right?
Um, podcasts are hugely important, but content will
get more sophisticated.

(42:10):
So I do think that that's pretty important.
And then what else?
Well, synthetic voices and avatars,
from Synthesia to ElevenLabs, uh, or Respeecher in the
synthetic voice space.
Uh, if anything, the quality also is going to increase.
There are some pretty amazing, uh, tools out there, like the
ones I mentioned, that can mimic your voice.
Uh, however, I do see some bottlenecks when it comes to

(42:32):
training.
So it still asks you to train on, I think, half an hour of
your voice, or 20 minutes.
It takes, in some cases, 24 hours to process that.
So I think there's still a lot to be done on the user
experience part of things and active learning.
So that's on one hand, on tools. But do you want me to
talk about my projections?
I can talk a lot.
I'm Latin American, so, um-

Joey Daoud (42:54):
That was great.
Uh, yeah.
So what are your, uh, 2024 projections?

Jose Puga (42:56):
First, I think there are gonna be more and more
state-of-the-art foundation models in 2024
that have fewer parameters, uh, so that they use, for instance,
anywhere between one and five billion parameters.
There are some models like Phi 1.5 and Phi 2, released by
Microsoft.
Uh, and then you have companies like Mistral or Deci AI that are

(43:17):
creating what's called small language models that can
outperform larger models today.
I think Mistral's model, the latest one they've released, uh,
has around 7 billion parameters.
And they've been able to deliver results that are better
than GPT-3.5, which has hundreds of billions of parameters.
So if anything, models are gonna get smaller, smarter, uh, and
more focused on specific tasks.

(43:38):
The focus is gonna be also more on synthetic data, as I said
before,
but also on being able to curate, uh, data, and data that
has educational value, that has content quality, that can, for
instance, understand common sense reasoning, right?
Most people just want common sense reasoning, models that understand
general knowledge including science, daily activities, or

(43:59):
theory of mind, right?
And then, based on this textbook-quality data, they will be able
to achieve,
I would say, equally high results as many of the GPT models.
Then I think that data annotation companies are in big trouble,
uh, right now.
So if you are in the business of, let's say, managing
annotators in India or wherever you are, uh, I think that's a

(44:20):
business that is gonna change completely with
multimodal models coming out of academia and also from these
large labs.
Also, model quality.
I think everybody's competing for quality today, like GPT-4
can do this or Mistral can do that.
The reality is that users are caring less and less when it
comes to, um, how can I say, like, the accuracy, because

(44:42):
open benchmarks are quite limited.
And, uh, I think at the end of the day, in the AI space, it's
gonna be all about distribution, user experience,
and what people perceive as a quality experience overall, or
brand quality perception, right?
And, uh, yeah, those are some of the
predictions that I have. I'm not sure if that helps
a bit.
But, uh, yeah, we don't believe in the large language model play

(45:04):
where you need hundreds of millions of dollars to play in
this space.
I think it's a VC move.
Is it sustainable?
No, we don't believe in that.
We think open source and academia are gonna catch up, and
accuracy is already pretty high in many areas.
It's just gonna become better and cheaper.

Joey Daoud (45:18):
Yeah.
Well, I really appreciate it, Jose.
Uh, thanks a lot.
Where can, uh, people find out more about, uh, Imaginario?

Jose Puga (45:23):
So they can go to our website, Imaginario.ai.
We have free trial, uh, packages, uh, or a free forever
tier as well.
So if you're a production company or a marketing agency,
um, not necessarily just in the repurposing space, but you have
a ton of content that you wanna bring back to life and be able
to search and repurpose or use in different ways, you
can, uh, also, you know, just sign up there, or just send me an

(45:44):
email at jose.puga@imaginario.ai.
And, yeah, happy to give you a demo.

Joey Daoud (45:50):
All right.
Well thanks a lot.
I appreciate it.

Jose Puga (45:51):
Thank you, Joey.
Have a good one.

Joey - Shure Mic (45:53):
And that is it for this episode of VP Land.
I hope you enjoyed my conversation with Jose.
Let me know what you thought over on YouTube in the comments
section.
I would love to know what you think about this episode.
And be sure to subscribe over on YouTube or in whatever your
favorite podcast app of choice is.
And if you made it to this point in the podcast, you will
probably like the newsletter that we send out twice a week.
So be sure to subscribe to VP Land.

(46:15):
You can go to vp-land.com or just click on the link in the
description of wherever you are listening or watching this.
Thanks again for watching.
I will catch you in the next episode.