September 9, 2025 41 mins

What if you could reverse engineer viral content and use AI to build your own content machine without writing a single line of code?

In this session, we go beyond theory and into execution. Step by step, you'll learn how to scrape top-performing YouTube content, analyze it using Gemini and ChatGPT, extract what works, and generate AI prompts that produce high-performing visuals and copy. All built with accessible tools like n8n, Airtable, and the OpenAI API.

Our guest, Nadia Privalikhina, is not just a power user; she's a systems thinker with a bias for action. With experience leading innovation and AI at scale, she's now building cutting-edge automation systems that blend marketing intuition with serious technical chops. Her content regularly turns heads and earns clicks on LinkedIn. This is your chance to see exactly how she does it.

Expect live demos. Real prompts. Actual outputs. And the exact workflow behind a system that can transform how your business creates content.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Isar Meitis (00:00):
Hello and welcome to another live episode of the

(00:03):
Leveraging AI Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and if you are listening to this podcast, it means you are consuming content that somebody else generates, in this particular case, me. But generating content in general is a great way to build an audience and create relationships.

(00:24):
And if you nurture these relationships and you continuously provide them value, it will over time lead to business. I've been a big believer in that for many years, when I worked for large companies and now when I am doing my own thing. But either way, creating valuable content that really helps a specific audience, that is, your target audience, is a great way to grow your business.

(00:47):
However, knowing exactly what to post and how you should post it is not that easy. Many organizations and many people have been running very, very fast on the content treadmill, but going nowhere, because they're not necessarily sharing content that helps their target audience in a meaningful way. But to do that properly, it's actually a lot of work, right?

(01:09):
You gotta do research and figure out what's actually connecting with these people. You gotta do research and figure out what's trending right now and what they might be interested in. You gotta then analyze that content and figure out which components they might be interested in, and only then you need to start generating the content and generating the images and all. It's a lot of work, and so doing that manually is something that many companies and

(01:29):
definitely individuals are not doing, or at least not doing effectively. However, AI is a great platform that knows how to do all these things. It knows how to do research, it knows how to analyze data, it knows how to create content. So if you can take each and every one of these components with the right prompts and then combine them together into a single automated process, you can create an incredibly

(01:50):
effective content machine, which is exactly what we're going to show you how to do today. Now, our guest today, Nadia Privalikhina, is an expert in AI automation. She's been building AI automations for multiple clients over the past few years. Her background is as a software engineer, which means

(02:10):
she definitely understands the technical side and how to build things that flow effectively with minimal interference and issues. But she also spent a while running an e-commerce business, which means she definitely understands what motivates people to buy and how to connect with them in order to drive them to take action. This combination makes her the perfect person to show us exactly how to do this process.

(02:33):
Now, when I saw her post about this on LinkedIn, I'm like, oh my God, this is absolutely brilliant. I need to bring her on the show. So here we are, and I'm personally very excited for this episode. I know it's pure gold. Again, you will see how to do all the things we've just talked about in an automated way, leveraging AI together with automation capabilities. And so, Nadia, welcome to Leveraging AI.

Nadia (03:36):
Thank you so much for such a nice introduction. Today we are going to talk specifically about how to create YouTube thumbnails; that is the use case we are covering, but I believe the same concept can be applied to different industries. And yeah, so the

Isar Meitis (03:50):
automation that Nadia showed is how to take what's working for other people on YouTube, combine it with the stuff that you want to include in your thumbnails, and then create thumbnails on the fly for your content. The same thing can be broadened to anything that you want to create, by researching, analyzing, and then creating

(04:11):
based on the same exact parameters. You'll see it; it feels like magic once you see the results. For all of you who are joining us live, whether you are joining us live on LinkedIn or on the Zoom call, feel free to ask any questions. First of all, introduce yourself right now in the chat. Say where you're from and who you work for.

(04:32):
Send your LinkedIn link if you are in the Zoom chat so other people can connect with you, and tell us in two words what your biggest interest is in this particular use case. And if you're not here with us live, the question is: why aren't you here with us live? We do this every single Thursday, noon Eastern time. We have amazing people like Nadia who are gonna share

(04:53):
exactly how to do different use cases. And then you can ask questions, which is the last thing that I'm going to say, and then I'm gonna give it back to Nadia. Feel free to ask questions in the chat on LinkedIn or in the Zoom chat, and I will bring them up to Nadia, and we're gonna answer your questions. With that, Nadia, the stage is yours. Walk us through your magic.

Nadia (05:10):
Yeah, thank you.
So I think I will share my screen, because it is the best way to showcase what we are going to talk about. For those

Isar Meitis (05:18):
of you who don't see the screen, by the way,
because you're listening to the podcast after the fact, we're gonna explain everything that's on the screen so you can follow along, despite the fact that you're driving or walking your dog or on the treadmill or whatever it is that you're doing. However, if you have the opportunity to also watch this on YouTube, that's another great benefit, because you can see afterwards what we've done.

Nadia (05:39):
Okay.
And right now I'm sharing the screen with my LinkedIn post that attracted all that attention. And what happened is that I am part of Liam Ottley's private group, where we learn how to use AI and how to sell AI services to businesses. And actually, Liam runs a big channel with more than half a million subscribers.

(05:59):
And one of the problems that he faced is that they need to constantly create thumbnails. They do research, they create those different variations, and then post those variations. Yeah. So there was a hackathon, and I created this system. I will quickly run through it right now, but here you can see the first

(06:19):
examples of what types of thumbnails the system produced. And yeah, I haven't mentioned that I won that hackathon. I took first place with those images, and since then a lot of people have reached out saying that those are amazing, because, if you don't know how Liam looks, he looks exactly like those images that I'm sharing right now, for those who are listening. And that

Isar Meitis (06:41):
was pre Nano Banana, right? So now this is probably even easier.

Nadia (06:47):
I would still say that character consistency can be a problem. So after Nano Banana was released, I tried to do the same with it, but character consistency is still a problem. So I will tell you how I approached this problem and how I solved it.

Isar Meitis (07:03):
So again, for those of you who are not seeing this: there are multiple thumbnails with a very clear style. They have this kind of beige or off-white background, with graphics that all look the same as far as sketches, and images of the person in multiple directions and angles. But his face is in most of these thumbnails,

(07:25):
and they look very consistent from a style perspective. And they look highly professional, and yet no person has created them, which is the whole magic here.

Nadia (07:34):
And maybe I'll even share a few of the examples a little bit closer. So here are those finished thumbnails, and how amazing they are. Okay, lemme exit this screen, and now I will tell you how I approached this problem and how you can do this. Awesome. So, the first, just a

Isar Meitis (07:53):
quick question.
How long was the hackathon?
Like, how much time did you have to work on this?

Nadia (07:56):
Well, in theory we had almost one month, but because I have my clients, and I believe everyone else faced the same problem, we finished it in around three or four days.
Oh wow.
Okay, cool.

Isar Meitis (08:12):
Yeah.

Nadia (08:13):
Okay.
And now I will share.
So to prepare this, and how you would create a thumbnail or any other content: you would first want to understand what works, and what works for others. And this is what I did. So now you can see this on the screen. I'll give it a moment and it'll display the thumbnails.

(08:34):
I scraped all the thumbnails from multiple creators on YouTube and created my database of winning thumbnails. Initially, my idea was to analyze those thumbnails, because you have models that can transform visuals into text, for example. And then my idea was to fine-tune a model, but actually

(08:55):
it didn't work, even though I used that fine-tuning for my own LinkedIn content. Interesting. But in this case, with someone else, for some reason it didn't work. So what I ended up doing is I scraped multiple channels. Obviously there is still some human work; I had to pick those channels on my own. And then the system could, let me show you what's

(09:19):
going on behind the scenes of that scraping part. So now I'm sharing my n8n workflow. It is super easy and quick. And someone mentioned that it's probably not legal to scrape YouTube, but actually YouTube gives you quite a lot of data points, including thumbnails, that you can get absolutely legally, with some limitations. Interesting. So first I set the channels that I want to monitor, and then for

(09:43):
each of those channels, I get all of their videos from the past year. And you can see, when you're saying get the video,

Isar Meitis (09:50):
you're getting the thumbnail and the description.

Nadia (09:53):
Let me show what we have here. YouTube provides a lot of information about those videos. So you can see we have the title, we have the channel name, there is a description somewhere, then thumbnails.
Yeah, thumbnails here.
And this and this

Isar Meitis (10:09):
node in n8n is just, that's what it does. It just grabs information from specific channels.

Nadia (10:15):
Yes.
So it takes the channel ID, and I say that I want to get all the data from the past year in this example, and then I get all of those videos. I believe I entered around 10 channels, and almost every channel posted more than 100 videos. So I got all of those 100 videos for each channel, which makes it up to 1,000 items in this case.

(10:37):
Okay.
And later, obviously, we want to filter them. And yeah, there is another data point I will not be sharing today, because it would expose my API key. But there is another option to get more stats about those videos: view counts and comments and whatever you want. I would say there is still a lot of information that can be pulled. And

Isar Meitis (10:59):
that's, and that's from what tool did you use that?
Is that like an appify?
Because it, see

Nadia (11:03):
No, it is also YouTube's official.
Oh, so it's,

Isar Meitis (11:05):
it's YouTube's own, uh, resources.
Okay, cool.

Nadia (11:09):
Yes.
Yeah, the built-in node just doesn't give you an option to list all of those.
So in this case, I use

Isar Meitis (11:17):
Yeah. So, to explain to people how these work: all these tools work the same, whether you're using n8n or Make or Zapier or any other. All these tools don't expose every single function of the API that exists in the tools they connect to, in this particular case, YouTube, which means you can connect to the simple node and it will give you several functions that you can run in the node.

(11:38):
But you can use an HTTP call, which is very, very easy, and you don't have to know how to code. You just have to get the documentation and drop it into your favorite large language model and say, I want to connect to this with an HTTP call and I want to find that information. It will give you the code, and you can drop it here in n8n, and then you can call the rest of the functions that did not exist and were not prebuilt into the tool itself.
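To make the HTTP-call idea concrete, here is a minimal sketch in Python of the two YouTube Data API v3 requests that sit behind a scrape like this. This is not Nadia's exact workflow: the API key variable and channel ID are placeholders, but the endpoints and parameters come from the official YouTube Data API.

```python
# Minimal sketch, not Nadia's exact workflow: the two YouTube Data API v3
# calls behind the scrape, made with plain HTTP. YT_API_KEY and the channel
# ID are placeholders.
import os
import requests

API_KEY = os.environ["YT_API_KEY"]

def list_recent_videos(channel_id: str, published_after: str) -> list[dict]:
    """List a channel's videos (title, description, thumbnails) since a date."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={
            "key": API_KEY,
            "channelId": channel_id,
            "part": "snippet",        # snippet = title, description, thumbnails
            "type": "video",
            "order": "date",
            "publishedAfter": published_after,
            "maxResults": 50,         # page through with nextPageToken for more
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["items"]

def video_stats(video_ids: list[str]) -> list[dict]:
    """Fetch view/like counts and ISO-8601 durations for up to 50 videos."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={
            "key": API_KEY,
            "id": ",".join(video_ids),
            "part": "statistics,contentDetails",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["items"]

videos = list_recent_videos("UC_PLACEHOLDER_CHANNEL", "2024-09-01T00:00:00Z")
stats = video_stats([v["id"]["videoId"] for v in videos])
```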

Nadia (11:59):
Awesome explanation.
Thank you so much for that.
And after we get that information, in my case, I wanted to filter out short-form videos, but for some people it may be the opposite case, so they would want to see what's trending for short-form videos. And I had to use a little bit of code for that, but I believe it's also possible to do without code; it's just that, because I have a technical background, for me it was easier to use code here.

(12:22):
Then I just filter out short videos that are shorter than two minutes. And after that, because we scraped all the videos from the past year, we can also get data about what the average views are for this specific channel and what their average like rate is.

(12:42):
And this will help us to identify what the outliers are for this channel.
Oh, cool.
So in my,

Isar Meitis (12:48):
I see where this is going.

Nadia (12:50):
Yeah.
It's not that simple. I mean, it is quite simple, but you need to have your own strategy. This is where the human part comes in. So yeah, we get the average views per hour for this channel and the average likes per video, and then this information helps us to identify which videos are good and which videos are not that good.
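As a rough illustration of that strategy in code, the filter-and-outlier step could look like the sketch below. The field names and the 1.5x multiplier are assumptions for the example, not Nadia's exact numbers.

```python
# Rough sketch of the filter + outlier step. Field names and the 1.5x
# multiplier are illustrative assumptions, not Nadia's exact strategy.
def find_outliers(videos: list[dict], multiplier: float = 1.5) -> list[dict]:
    # Drop short-form content: anything under two minutes.
    long_form = [v for v in videos if v["duration_seconds"] >= 120]
    if not long_form:
        return []

    # Channel baselines: average views-per-hour and average likes per video.
    avg_vph = sum(v["views"] / v["hours_live"] for v in long_form) / len(long_form)
    avg_likes = sum(v["likes"] for v in long_form) / len(long_form)

    # Keep only videos that clearly beat the channel's own averages.
    return [
        v for v in long_form
        if v["views"] / v["hours_live"] > multiplier * avg_vph
        and v["likes"] > avg_likes
    ]
```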

(13:11):
And after that, I use that information and save it to my database. I will just, and again,

Isar Meitis (13:16):
to explain to people: when you say database, people are like, oh my God, I don't know any databases. It just saves information to Airtable. And for those of you who haven't used Airtable: Airtable is like Excel with a better and easier to use interface, right? So it allows you to do more filtering, to control the look and feel more, and to add images, like you see in the example here. You see all the thumbnails from all the different channels that this process has

(13:38):
brought, but it's saved into Airtable, which is a very user-friendly tool. You don't need to know how to run databases or set them up or so on. You literally go to Airtable, create a new space in Airtable, define what you want to have in it: is it images? What are the columns going to be, and what fields do you want in it? And then you can just feed that from whatever source. In this particular case, from n8n.
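For the curious, here is what that "save to Airtable" step can look like outside of n8n, as a small sketch with the pyairtable client. The token, base ID, table name, and field names are placeholders, not Nadia's actual base.

```python
# Sketch of the "save to Airtable" step with the pyairtable client.
# Token, base ID, table name, and field names are placeholders.
from pyairtable import Api

api = Api("YOUR_AIRTABLE_TOKEN")
table = api.table("appXXXXXXXXXXXXXX", "Thumbnails")

table.create({
    "Title": "Example video title",
    "Channel": "Example channel",
    "Views": 120000,
    # Attachment fields take a list of {"url": ...} dicts; Airtable downloads
    # and stores the image itself, so the thumbnail shows up in the grid.
    "Thumbnail": [{"url": "https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg"}],
})
```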

Nadia (13:59):
Absolutely.
I have non-technical clients who are navigating Airtable quite easily.

Isar Meitis (14:05):
Yeah.

Nadia (14:07):
And so I save those thumbnails. And then I have one more workflow that actually analyzes those thumbnails. So I use an agent in this case, but why I use an agent is maybe not so important; you can use, let's say, OpenAI to analyze what is on that image. But in my case, I decided that I will do this with agents.

(14:27):
It's just a small limitation of a node that I found, or of what I wanted to use, that I had to pass the image and use an agent for that. So my agent analyzes those thumbnails and saves them, again, to an Airtable.

Isar Meitis (14:41):
Can you explain what exactly it analyzes?
Like what is it looking for?

Nadia (14:45):
Okay, let me see.
So the prompt says that you are a world-class YouTube strategist with attention to detail. You live and breathe YouTube content marketing, strategy, and packaging, and are skilled at creating viral packaging for high-performing YouTube videos. You always output valid JSON. This is one trick, because we

(15:06):
just needed more structured output from this tool, but it's not mandatory, I would say, in this specific case. And then I give an additional standard prompt where I say that this agent will be given a YouTube video that performed well. And, okay, I also feed it the transcript, title, and thumbnail.

(15:26):
And the mission of this agent is to deconstruct why this thumbnail works for the video, and to identify the thumbnail archetype and so on.

Isar Meitis (15:35):
Before you close this, again, to explain to people: what is JSON and how does it work? JSON is a very simple data format that basically describes the data as if it is a table. It tells you which fields there are and what values they can have. And the benefit of using JSON is that more or less every tool out there knows how to use it. And so if you export and/or import that, then you get

(15:58):
consistent results, because it's gonna come in a very well-defined format. And again, you don't need to know what it is. You can go to whichever tool and say, I need a JSON that will give me these parameters, and it will write the code for you, and you can paste it in here. The benefit of that: once the data gets into an automation tool like n8n or Make or whatever, you can identify each

(16:19):
and every one of the fields.
You can map them to the columns in the table in Airtable and so on. So it just gives you structured output versus just free text.
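As an illustration only (this is not Nadia's exact schema), the agent's JSON for one analyzed thumbnail might look like the record below. Because every record carries the same fields, n8n can map each one straight to an Airtable column.

```python
# Illustrative only: the kind of structured record the analysis agent returns
# for one thumbnail. The field names are an assumption, not the real schema.
thumbnail_analysis = {
    "archetype": "big face plus object",
    "background": "plain off-white studio backdrop",
    "foreground": "creator pointing at a workflow diagram",
    "text_on_thumbnail": "I AUTOMATED THIS",
    "why_it_works": "high-contrast face, short text, single focal object",
}
```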

Nadia (16:29):
Exactly.
And, uh, maybe I will, okay, let's move on. And so this system could look at all the thumbnails and transform them into a text representation. Okay. And then I save that information, again, to Airtable. And so, by the end of that second workflow, what I have is a lot of structured data.

(16:50):
I have titles of highly performing videos, channel names, a summary, and then the thumbnail description. And for the thumbnail description, it says what is in the background, what the text on the thumbnail is, and, yeah, what is in the foreground, and a few other things. And those are the main components.
And so you basically

Isar Meitis (17:12):
deconstructing the thumbnail into its various components. And because these are all the top performing videos from the specific channels, that hints that these are solid descriptions and/or thumbnails, or a combination of the two, which gives you hints about what you need to do in order to create successful ones yourself.

Nadia (17:33):
Yes.
Now we are just preparing the data for the future automation. Yeah. Awesome. This is the data preparation step. What we do now: Airtable allows you to export that data. I don't remember where exactly it is, but you can actually export the whole table into a CSV file, and that's what we do next. And you can do it with any type of content; I did the same with LinkedIn content.

(17:53):
So you export it, and then, in my case, I went to Google AI Studio. Or you can go to Gemini, but Google AI Studio is a similar version; it is just Gemini for developers, where you can safely test your ideas and experiment with that model and how it works. So what I did is I went to this Google AI Studio,

(18:14):
and I gave it that CSV file with the descriptions of the best performing videos, alongside the thumbnails, because we have already transformed the visual thumbnails into text, and so Gemini can now work with that text. And why Gemini? Because it has a huge context window. It has a context window of more than 1 million tokens, which

(18:35):
means that you can basically feed it, in my case there were around 100 videos, but you can potentially give it 2,000 videos for analysis. And then, after I gave it this CSV file, I instructed it: you now see top performing thumbnails for videos, their descriptions, explanations, and so

(18:57):
on.
And your mission now is to generate a prompt for an LLM. This is also a secret trick that I use: I always ask Gemini to create prompts for me, because it just knows how to create prompts better than me. And I also give it a link to the prompting guide for Gemini. Google publishes those prompting guides.

(19:17):
They are officially available. So I give this prompting guide along with the CSV file. And what Gemini did at this step is that it came up with a good prompt. It created a structured prompt with a persona. So again, it says that you are a world-class YouTube strategist, and it is not my prompt here anymore.

(19:38):
This is what Gemini created. Yeah, and it created the task. So the task is to analyze the provided video title and summary and generate. In this case, in the context, I mentioned that I want to generate those thumbnails based on the title of the video and the transcription of the video. And so all that information came into the prompt.

(19:59):
Later I made a few modifications to inject the branding guidelines that I wanted it to have, but the structure is the same. I just iterated on the first version and asked it to add a few more components.
Okay.
I believe this is the second one. Yeah. I asked it to deconstruct and reverse engineer how I can get a

(20:20):
clear description of a thumbnail based on title and transcript. And I used one of those prompts later on inside the automation.
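In API form, that meta-prompting step could look roughly like the sketch below, using Google's google-generativeai Python library. The model name, file name, and instruction wording are assumptions, not Nadia's exact setup.

```python
# Hedged sketch of the meta-prompting trick: upload the CSV of winning
# thumbnails and ask Gemini to write a prompt for another LLM. Model name,
# file name, and wording are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_KEY")

csv_file = genai.upload_file("top_thumbnails.csv")   # exported from Airtable
model = genai.GenerativeModel("gemini-1.5-pro")      # any large-context Gemini

response = model.generate_content([
    "The attached CSV describes top-performing YouTube thumbnails: title, "
    "summary, background, foreground, text, archetype. Following Google's "
    "official prompting guide structure (persona, task, context, output "
    "format), write a reusable prompt for an LLM that, given a new video's "
    "title and transcript, outputs a detailed thumbnail description.",
    csv_file,
])
print(response.text)   # the generated prompt, to be pasted into the workflow
```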

Isar Meitis (20:28):
So the prompt, again, just to understand what it does: it takes all the inputs from the CSV file that was created based on all the data that we collected previously, and now it gets an input for a new video.

Nadia (20:40):
I ask it now to create a prompt for, basically, itself. I say that it'll be given, yes, the title and transcription of the video. And

Isar Meitis (20:50):
it needs to then define what the exact, yeah, okay, what the exact thumbnail needs to be, based on the best practices that it learned from all the previous material, combining it with the new need for the new video.

Nadia (21:06):
something like that.
Yes.

Isar Meitis (21:07):
Okay.

Nadia (21:09):
And then, uh, is there a

Isar Meitis (21:10):
section for, like, brand guidelines or something like that? Because again, we saw everything was perfectly branded when you showed us.

Nadia (21:17):
Yes. Later I injected it manually. Got it.

Isar Meitis (21:20):
So you added manually a segment about the branding of it.
Exactly,

Nadia (21:24):
Yes.

Isar Meitis (21:24):
Yeah.

Nadia (21:26):
And after this step, I have a prompt that I can use. But let's go back; now on the screen, you see the final automation that I created. It starts with a form. So we need to somehow tell the system: hey, this is our title, and hey, this is what our video is about. And so it starts with a form where we fill in that information, and later it can summarize the transcript if needed.

(21:49):
And then I use that exact prompt from Gemini. It has this persona about a world-class YouTube thumbnail designer, and a task, and instructions, and so on. And then it comes up with a few concepts. We don't want to have only one thumbnail, because most of the channels test multiple thumbnails, and we want this

(22:10):
system to create multiple concepts. So it comes up with a few concepts, based on concept archetypes and so on. And it outputs visual descriptions: what should be in the background, what should be the primary subject, is it a person or just an object that fits this video,

(22:30):
and what are the elements, if there are some arrows or additional things, I dunno, a pile of money or something, to evoke some emotions. And then text elements. This is almost all we need to generate and visualize those concepts. But what else do we need? If it is a personal brand, then we obviously want to have a

(22:52):
person on those thumbnails. And if you don't know, OpenAI has this image model which allows you to reference images. So now we have Nano Banana; before, we didn't have it, but still, OpenAI has that possibility to reference not only one image but multiple images. And it can also be done via API, which means that it can be

(23:13):
automated. Yep. Now, on the screen, you see one of the examples that OpenAI provides: there are four objects, and then if you give it a prompt to compile those objects into one image, it combines all of them and puts them into one image. And we use a similar approach with thumbnails.
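Here is a hedged sketch of that multi-reference call with the OpenAI Python SDK; gpt-image-1 accepts several input images on the edits endpoint. The file names, prompt text, and size are placeholders, not Nadia's exact values.

```python
# Hedged sketch of the multi-reference image call: several reference
# thumbnails in, one combined thumbnail out. File names, prompt, and size
# are placeholders.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="gpt-image-1",
    image=[  # reference thumbnails carrying the creator's face and style
        open("ref_thumb_1.png", "rb"),
        open("ref_thumb_2.png", "rb"),
        open("ref_thumb_3.png", "rb"),
    ],
    prompt=(
        "You are an expert YouTube art director. Create a single "
        "photorealistic thumbnail from this concept: creator on the left, "
        "off-white background, bold four-word headline, one supporting "
        "graphic. Match the visual style of the reference images."
    ),
    size="1536x1024",  # gpt-image-1's landscape output, roughly 3:2
)

with open("thumbnail_draft.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```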

(23:36):
Okay.
So what I did here is I used a few thumbnail examples from the channel. Okay, let me open it. So here is one example, and there are a few more. So now I use those three examples, and then, there are just a few supporting blocks, but later I ask an OpenAI

(23:57):
model. So I reference those images, those three images, and then I tell it that it is an expert YouTube art director and a master of photorealistic concept execution using generative AI. Its sole mission is to create a single masterpiece-level YouTube thumbnail based on the detailed concept provided below.

(24:18):
You must perfectly use the visual style guide and fuse the specific concept into one cohesive, hyperrealistic image, and so on. And here I also give it additional references. So I mention that the images that I provided are the reference for the creator of this channel, and I also give additional styling guidelines

(24:38):
right here in the prompt. So we keep styling guidelines in multiple places, because in the first place they affect the layout of the image, and in the second place, when we actually generate that image, the styling guidelines affect how it looks and what colors it uses. So, let's actually, so this is one image, another

(25:00):
input image, and then we give it a concept. And finally, I will not show it right here, but let's go here. Okay. This is,

Isar Meitis (25:10):
so these are a few. So again, what you're not seeing is that basically what this step does in the automation is it takes basically everything we've prepared so far, but mostly it has a prompt on exactly what to include in several different variations of the thumbnail. So it takes these, as Nadia called them, archetypes

(25:30):
of different things that work, basically. So it's gonna generate several different versions to use. It's using images from previous thumbnails to get a visual reference for the brand guidelines as well as the look of the person. And then it combines all of that into outputs. So every time you run this, you're gonna get X, I dunno how

(25:52):
many, three, four different options for thumbnails that are already aligned with the brand, that already have the person, the text, the background, the supporting graphics, whatever needs to be in there, already in the image.

Nadia (26:04):
Yeah, exactly. So we first distilled what is working into a prompt. Then that prompt created a few variations, a few text representations of variations. Yeah. And then we kind of converted that text into an actual image, together with a few reference images that we use only to reference the face of the creator

(26:24):
that should be on the thumbnail. But if you are watching, you see that the final images are not perfect. So even though we give ChatGPT, the model behind ChatGPT, reference images, it doesn't preserve the character so well. And that is why we need additional steps. And anyway, this image is not in the proper format, and so on.

(26:48):
So we want to do the face swap, and the face swap was one of the hardest things to figure out. But what we found is that there is this Flux Kontext model. You probably know about it: before Nano Banana, Flux Kontext was one of the best models for visual editing. And with it you can modify some text on an existing image,

(27:11):
you can transform that image into a different style. And one good thing about it is that you can actually create a fine-tuned version of this Flux Kontext. And what that allows you to do: let's say we have an image with not a perfect face. Let's say we have this image, and we

(27:31):
want to have an image with an accurate face. So we would train a model where the input image has someone else's face, and the final image has the face that we want to see. And in this way, we can later on run that fine-tuned model on the images that our GPT

(27:52):
model produced.
I hope it makes sense.
Interesting.

Isar Meitis (27:55):
Yeah, so let me explain this in a minute. So, first of all, Flux is an open source model that generates images and can edit images, probably the best open source model that there is out there right now. They've been around for a while, and they created a really big buzz because it was really easy to train them. They're called Flux LoRAs, and it's a relatively simple process.

(28:16):
Many tools allow you to do it out of the box. You can just upload multiple images that are examples of either how you want the image to be, or, like Nadia said, examples of before and after. This is my before, this is my after. This is my before, this is my after. You load as many of these as possible, or not as many as possible; you just need a reasonable amount.

(28:36):
Like, one or two is not enough, but 15 to 20 is definitely enough. And then, if you have befores and afters, with all the afters showing the correct face, it learns how to do that. And then you give it a before image and it knows how to change it to an after image. I actually haven't seen anybody use it for this kind of use case, so I find what you're doing absolutely brilliant. I think this is so freaking cool.
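For a sense of how the fine-tuned face-swap step can then be automated, here is a sketch using fal's Python client. The endpoint ID and argument names are assumptions; check the page of the model you actually trained on fal.ai for its exact schema.

```python
# Hedged sketch of running a fine-tuned Flux Kontext LoRA on fal.ai. The
# endpoint id and argument names are assumptions; check your trained model's
# page on fal.ai for the exact schema. Requires FAL_KEY in the environment.
import fal_client

result = fal_client.subscribe(
    "fal-ai/flux-kontext-lora",          # assumed endpoint name
    arguments={
        "image_url": "https://example.com/thumbnail_draft.png",  # GPT output
        "prompt": "replace the face with the trained subject's face",
        "loras": [{
            "path": "https://example.com/face-swap-lora.safetensors",
            "scale": 1.0,                # weight of the before/after training
        }],
    },
)
print(result["images"][0]["url"])        # the face-corrected thumbnail
```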

Nadia (28:58):
Thank you. Because it appeared not so long ago, around a month or two ago, I didn't find many tutorials that talk about it. So it was quite challenging to figure it out on your own. Yes. And here you can see one example. So this was the input image, and the face here is what GPT produced. It is quite good, but still the face is not the face of the

(29:19):
person that we want to see. And after running this model, it produced a much more familiar face.

Isar Meitis (29:25):
Do you have, so just as a question for the audience: do you also train it just on the face separately, or are all the images just before and after with him, with a good image and a bad image?

Nadia (29:38):
So you need to have, obviously, a photo shoot or existing thumbnails. You can take them from that database from the initial steps. When you scrape those thumbnails, you can use them as the final image. The problem here is to create the initial image. Something that,

Isar Meitis (29:55):
Yeah, to create the before images. So how did you, that was my next question. Again, to explain my question: in the dataset, when you're training it, you need a good image, which, okay, you have the guy's YouTube channel, you have his existing thumbnails, these are easy. But how do you get the one that is not his face? You have to kind of destroy or ruin the good images in order

(30:16):
to do that.
How do you do that?

Nadia (30:18):
The old way would be to use Photoshop. The new way is to use the exact same model, Flux Kontext. The LoRA, or not the LoRA, just Flux Kontext. Or now you can use Nano Banana. You would give this model the final image and ask it to come up with another one, to just swap the face with someone else's. Oh, I see, the face. I see what you're saying. I see. Yeah. And this way your final image becomes your starting image for

(30:42):
the training.
Yeah, yeah, yeah,

Isar Meitis (30:43):
Yeah.

Nadia (30:45):
And this is how we've done it. So after this step, the thumbnail is almost perfect. The remaining steps are to just resize it to fit YouTube's 16:9 ratio, and also to upscale that image. But that is quite, so two questions about

Isar Meitis (31:06):
this.
The resizing is done with what tool?

Nadia (31:10):
Resizing. Let me see. Okay, because what the ChatGPT model, the OpenAI image model, produces is a three-by-two size image, and we don't want to lose some of the, how to explain, top and bottom pixels of the image. That is why what I use is outpainting.

(31:32):
And you can use other models for outpainting. The platform that I used for the Flux training is called fal.ai. Yeah. And it hosts a lot of visual models as well. So you would go there and just find a model that can do outpainting. I used, what did I use? I used some model ID for outpainting.
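The arithmetic behind that choice is worth seeing once. Assuming a 1536x1024 (3:2) source image and YouTube's 16:9 target, cropping would throw away about 16 percent of the image height, while outpainting only has to generate about 142 new pixel columns on each side:

```python
# Why outpaint instead of crop: worked numbers, assuming a 1536x1024 (3:2)
# source image and YouTube's 16:9 thumbnail shape.
w, h = 1536, 1024

# Option A: crop top/bottom down to 16:9 -> real pixels are lost.
crop_h = int(w * 9 / 16)                 # 864
print(f"crop loses {h - crop_h} rows, {(h - crop_h) / h:.0%} of the height")

# Option B: outpaint left/right out to 16:9 -> nothing is lost.
target_w = int(h * 16 / 9)               # 1820
pad = (target_w - w) // 2                # 142 generated columns per side
print(f"outpaint {pad} px per side to reach {target_w}x{h}")
```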

(31:56):
Oh,

Isar Meitis (31:56):
Interesting.
Yeah.

Nadia (31:59):
And then I also found a model for upscaling.

Isar Meitis (32:02):
Which one did you use for upscaling?
That's another interesting one.

Nadia (32:04):
That was a basic model from fal.ai. I don't even remember its name. It's not something that people talk about.

Isar Meitis (32:13):
Yeah, yeah. There are, I actually have one or two very good open source ones that do a very good job at upscaling. So I'm always curious to see what people are using.

Nadia (32:21):
Yeah, you can definitely do it on your local machine.

Isar Meitis (32:25):
No, I'm running it on fal and I'm paying them, either that or Replicate or one of those; like, it's running somewhere, it doesn't run on my computer. Mm-hmm. But, uh, I see. Very, very cool. So, yeah, I wanna run through a quick recap so people understand the entire process, and then I will see if there are any questions. So first of all, there's a question: how long did it take for this process to be developed?

(32:47):
So we talked about this, GW, in the beginning. Maybe you missed that. It took a few days to create this thing, and I'm sure while doing other stuff. So I assume you didn't just sit and do this for a few days; I'm sure you have other stuff to do other than developing the automation. Is that a true statement?

Nadia (33:01):
Yeah, that's true. I spent a few nights developing it. Yeah.

Isar Meitis (33:06):
Yeah.
Okay.
So probably overall, in working hours, are we talking, I don't know, 10 to 15 hours-ish?

Nadia (33:13):
Maybe a little bit more than that. Maybe a little bit more; my nights are long. Okay,

Isar Meitis (33:16):
cool. Long nights. A few long nights. Awesome. Yeah. It's brilliant. So let's do a quick recap of what this entire process does and why it is so amazing, and then I'm gonna mention a little bit how you can generalize it. The way it starts is with research, right? So Nadia started manually saying, okay, who has successful YouTube channels? 'Cause that's already gonna be a good place to start.

(33:37):
And she hand-picked channels that are successful, and then she used, inside of n8n, a scraper to bring in all the thumbnails and their information, and then used another call to YouTube to get additional information, such as how many views they have, which is a critical aspect of the next step. And then the next step was filtering only the ones that are positive outliers, right?

(33:58):
So if the average episode has 10,000 views, and some episodes suddenly have a hundred thousand, that means it caught people's attention. And by the way, the actual absolute numbers don't matter. Like, if the regular view count is 500 and then suddenly you have something with 3,000, that's a good outlier.
So then not only are you starting with people who know what

(34:19):
they're doing, because their channels are doing well, you're picking the ones that were really doing well from that channel, which means either the description or the thumbnail or the topic, something in there, was working very, very well. And we use that as a way to train the AI on what is working from a thumbnail, description, and title perspective on YouTube.

(34:39):
So that just sets the stage; that builds the machine. Then the other half is saying, okay, I want to create a new YouTube video about topic X. What should I put in the description? What should be the thumbnail? What should be written on the thumbnail, and so on. And so Nadia asked Gemini to create a prompt for that process, and she used that prompt in the process.

(35:00):
And that prompt knows how to take the following inputs: it takes the outputs of the previous step, what is working and what's not working, the best practices, and it gets the input from the user on what the topic of the new podcast is, and it generates basically a very detailed description of the thumbnail.

(35:20):
Then that gets fed into an image generation tool that generates the actual thumbnail. Then the next step swaps the person's face to a better version of that person's face, using a trained AI model from fal, using Flux Kontext. Flux, again, is an open source model that you can train very easily just by uploading examples to

(35:43):
it, and then that makes the face better. And then the final step was to get the right aspect ratio and the right resolution, because in many cases the image generation tools generate relatively low resolution images in standard outputs as far as aspect ratios. Some of them actually know how to generate the image in the correct aspect ratio to begin with, and some of them don't.

(36:03):
So if you found one that gives you the right quality of thumbnails, but not necessarily the right aspect ratio, then you can change that as well. And again, just to dive a little deeper: instead of cropping the image, which means you lose something, and you never know what you're losing because you're not creating the thumbnails on your own, Nadia did it the other way around. She used outpainting, which basically generates new pixels

(36:23):
around the image in order to extend it to whatever the aspect ratio needs to be. Again, absolutely mind-blowing and brilliant. I'm not surprised you won first place. So there's a question. It's a general question, but I think it's a good one to add here. They're asking: are you using the cloud n8n, or do you self-host n8n?

Nadia (36:40):
I self-host n8n, in the cloud, if that makes sense. Yeah. I'm not hosting it locally.

Isar Meitis (36:46):
Yeah, yeah. So, to explain the question to those of you who're listening: you can go to n8n.io and just sign up and then use it like any other software, right? And then you're gonna pay for usage, so the more automations you run, the more money it's going to cost you. Option number two: n8n is open source. You can go and install it wherever you want, like on a third-party cloud, which is the way I run it.

(37:07):
I run it on a hosting platform called Railway. I think most people who use n8n a lot host it somewhere and don't use n8n's cloud. It gives you several different benefits. Benefit number one is money: for the same cost of hosting, I'm paying $6 a month, I can run as many automations as I want. It doesn't matter. So that's a big benefit. The other big benefit is from a data security perspective:

(37:27):
the data doesn't go to a third-party server where you don't know where anything runs; it just stays on your server. And if you run it on a local machine, then a hundred percent the data doesn't go anywhere, because it just stays on your machine. So these are the two big benefits of running it this way. It sounds fancy, like, how? I don't even know what open source is, and I dunno how to host something on a third-party server. Again, I was just looking for the easiest way to do this.

(37:48):
The easiest way I found is Railway: there's a dropdown menu, you say create an n8n instance for me, you click go, and that's it, you have one. So the knowledge you need is exactly zero, and you can have as many instances of n8n as you want.

Nadia (38:01):
I'm also using Railway. I just wanted to add that when you work with images and videos, your server will consume a lot of memory. So you need to keep that in mind. And because I automate those images and videos quite frequently, my usage cost goes up.

Isar Meitis (38:19):
Yeah. But again, it doesn't go up with usage. It goes up once to the level you need, and then you can run it a thousand times; it's not gonna cost you significantly more money. Right.

Nadia (38:28):
It's something you need to keep an eye on, because n8n stores your executions. Yes. And when you create multiple images or videos, those executions are stored for 30 days. They stay. Uh, so do you have

Isar Meitis (38:40):
a step that then deletes them at the end?

Nadia (38:42):
No.

Isar Meitis (38:43):
You do not. Yeah, because that's an interesting thing I'm doing in other automations. Now we're drifting a little bit, but in some of my automations, like in cases where I upload information to, let's say, the vector stores of a custom GPT, the final step of the automation goes and deletes the file. After I already have the output that I wanted, I go and delete the memory, because otherwise, again, you start

(39:04):
accumulating more and more stuff.
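As a minimal sketch of that cleanup step, assuming the file went to OpenAI through its files API (the file ID is whatever the earlier upload step returned):

```python
# Minimal sketch of the cleanup step: once the automation has its output,
# delete the file that was uploaded earlier. The id is a placeholder.
from openai import OpenAI

client = OpenAI()
client.files.delete("file-abc123")   # id returned by the earlier upload step
```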
But okay. So, Nadia, this was absolutely amazing.
Again, this is such a brilliantuse case.
And now I said I will generalizeit for a minute.
Think about what we've learnedtoday from Nadia.
We've learned how to researchand scrape data to learn
something about it.
We learn how to filter it tofind all in the stuff that is

(39:24):
relevant and that is the bestperforming.
We learned how to turn that intoa prompt that can generate new
variations of the winningconcepts.
And then we've learned how toapply that in order to create
both texts and graphics thatwill mimic the successful thing.
You can take that to anythingfrom ads to posts, to, post on

(39:45):
social media to blog posts.
Like literally any content thatyou wanna generate can follow
the same exact process thatNadia was showing.
And then all you need is some.
Basic, well, not basic.
You need solid NA 10 skills toput it all together.
but once you put it together,once you have a machine that can
do it day in, day out, nonstop,which makes it really brilliant.

(40:07):
Uh, this was amazing.
Thank you so much.
If people want to follow you,connect with you, work with you,
learn from you, what are thebest ways to do that?

Nadia (40:15):
Thank you, Isar, so much for explaining how this all works, because you explained it much better than me. I feel I'm on the technical side anyway. Yeah. For people to find me: I am active on LinkedIn, it's just my name, Nadia Privalikhina, and I'm also active on YouTube. I post long-form videos and tutorials about image and video generation as well.

(40:35):
It is Nadia AI Insiders.

Isar Meitis (40:38):
Awesome. And I wanna thank everybody who joined us live, both on LinkedIn and on Zoom. I know you have other stuff that you could do on Thursdays at noon Eastern, and I appreciate you being here. I appreciate you asking questions and introducing yourselves and chatting in the chat. So thanks, everyone. For those of you who are listening to this after the fact, come join us next Thursday. It's noon Eastern every Thursday.

(41:00):
You can join us either on Zoom or on LinkedIn, and we share the magic, right? It's brilliant people like Nadia who are gonna tell you exactly how to do really, really cool stuff with AI that you can start implementing immediately afterwards. That's it. Everybody have an awesome rest of your day.