Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
GMT20251027-221125_Record (00:00):
Hello
and welcome to the Leveraging AI
(00:02):
Podcast, a podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career. This is Isar Meitis, your host, and we have a really fun episode today. I very rarely do episodes in which I focus on a specific tool versus specific use cases, because I believe we all start with the use cases.
(00:23):
So yes, we are going to focus on use cases. However, I decided to share this tool because it is extremely valuable, extremely powerful, extremely easy to use, and really, really fun. I couldn't resist telling you about it, how I'm using it, how other people are using it, and so on. So this episode is going to be about how to create visual
(00:46):
assets at scale for your business and/or your personal life. These visual assets could be anything from static images to ads with or without text, or any kind of combination of things that you want. I promise you that you will find this absolutely fascinating. And like I said, once you find out how much fun it is, you'll just
(01:07):
start creating stuff just for the fun of it, which is by itself worth learning. So let's jump right in.
(01:58):
So the tool that I'm talking about is called Weavy. There's a link in the show notes that will take you directly to the tool. And the idea behind Weavy is that it combines multiple different tools, so basically any image generator, any video generator, any prompting of any LLM, all into one very easy-to-use user
(02:18):
interface, plus some additional really cool tools that enable you to build these processes or use cases in which you can automate the entire process from beginning to end. So I picked several different examples that I played with in the past few weeks, and I'm gonna walk you step by step through exactly how it works.
(02:38):
What are the different components? Now, the reason I picked these specific use cases, or built them specifically for this demo, is that while each and every one of them represents a relevant use case in real life, each and every one of them will also show you at least one new capability that I did not show in the other use cases. So it's gonna give you a very broad idea of how this actually works. Now, since this is a very visual tool, feel free to later on, when
(03:02):
you're not driving or doing your laundry or walking your dog, switch to our YouTube channel and watch it over there. There's gonna be a link in the show notes to our YouTube channel as well. So let's dive right in. First of all, let me explain to you what Weavy is before we dive into this initial use case. Weavy is a canvas to which you can add multiple steps and string them together into a
(03:25):
process that becomes a reusable machine that generates consistent visual outputs. These could be images or videos, with or without text, et cetera. You can choose any image generation tool, any large language model, any video generation tool, and combine them together to generate the outputs that you want, combined
(03:45):
with several different internal tools that I'm going to show you as we review the different processes. So this particular process is called Baby Clothing, but in reality it's a product promotion process, and I will show you what it does. The input that it gets is just an image of a onesie, in this particular case; it could have been a shirt, it could have been anything else from the internet that I just dropped into this.
(04:08):
You can see that the type of the box is File, so I just uploaded the file, and what it's doing then is trying to understand what's in it. So it's reading the actual image, understanding what's in the image, and then I'm gonna use the understanding of what's in that image in the following steps. So this is where I add my own prompt. So this is a prompt box, as you can see, and I wrote the
(04:29):
following prompt. This is me adding a prompt: describe 10 distinct 16-by-9 new scenarios for the scene, each with a different environment. Each should be a description of a potential ad. The baby must wear the same exact onesie in all images. And I asked it to separate all of them with an asterisk between the different ideas.
(04:51):
So what I connected to it is the original image plus the prompt that I just read to you, and I connected both into a large language model box. Which large language model? Whichever one I want. Right now it was done by GPT-4o, but in the dropdown menu, you can see I can choose from almost any large language model out there. And so what it is actually doing is it is now going to
(05:12):
generate 10 ideas for ads based on the image of the onesie that I uploaded. So I'll give you several examples of what it created. Because the onesie says "Thankful for Family," it understood on its own, because it's a large language model and it saw the image and the description, that this is a family-oriented campaign.
(05:33):
And so the examples. I'm gonna read a few of them, but not all of them. The first one: the image depicts a baby wearing a green onesie that reads "Thankful for Family," lying between two adults on a bed. The second one says: in a sunlit meadow, a baby giggles in the onesie as butterflies flutter around, celebrating family joy.
The next one, and this is my favorite one as far as the
(05:55):
output, as you'll see in a minute: on a cozy couch by a crackling fire, the baby yawns in the onesie, surrounded by cuddly pets, and so on and so forth. Now, you see that the prompts it generated are really, really short, because that's what I requested in my prompt. I said: keep every description under 30 words. I could have made this significantly more detailed, but as you'll see in a minute, that is not required.
(06:17):
So the output that I got so far, up to this step, is 10 different prompts with ideas for ads for a baby onesie, based on just the image of the onesie that I uploaded, nothing else. The next tool is called Array. What Array does is literally just break the 10 different prompts that showed up in one box into separate boxes, and
(06:39):
then there's a tool called List that really breaks it down. So now what I have is 10 separate boxes, each and every one of them with a prompt that I did not write, and they will change dynamically if I change the image of the original onesie. This is the magic of all of this. The dominoes can fall all the way through with me changing just the one input image.
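To make the Array and List steps concrete, here is a minimal sketch in Python of what that splitting amounts to; the sample text below is a stand-in, not Weavy's actual output.

```python
# A stand-in for the single LLM output box: ideas separated by asterisks.
llm_output = (
    "A baby in the onesie lying between two adults on a bed*"
    "In a sunlit meadow, a baby giggles as butterflies flutter around*"
    "On a cozy couch by a crackling fire, the baby yawns in the onesie"
)

# The Array/List step: split on the asterisk delimiter into separate prompts.
prompts = [idea.strip() for idea in llm_output.split("*") if idea.strip()]

for i, prompt in enumerate(prompts, start=1):
    print(f"Prompt {i}: {prompt}")
```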
(07:00):
So now I have 10 different prompts. What did I do with the prompts? I connected a box with Nano Banana, Gemini 2.5 Flash, which is an image generation tool. What kind of image generation tools could I have picked? Here in the menu, you can see all the different ones that exist. There are, uh, three different options from Google, one from ChatGPT, Rev, Higgsfield, Flux,
(07:22):
many open-source ones, SD3, Ideogram. Literally any model that exists out there. And if it doesn't exist yet, they will add it later on or sometime in the near future. So I can pick from, I dunno, 15 or 20 different image generation models. I picked Nano Banana in this particular case for a reason: because Nano Banana allows me to put in several different image inputs in addition to the prompt.
(07:44):
So how are these images created? Each and every one of these images (and again, those of you who are watching the screen can see that there are 10 different images, created based on the 10 different prompts) gets three different inputs. One input is the prompt that was automatically generated. The other two are the original image of the onesie
(08:05):
and a reference image I created separately, to give it some additional idea of how this looks on a baby. So every image gets the onesie as a flat item, the onesie on a baby, and then a prompt that I did not write, but that it generated on its own.
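If you want to reproduce one of those Nano Banana calls outside of Weavy, here is a rough sketch assuming Google's google-genai Python SDK; the model id and file names are my assumptions for illustration, not something shown in the episode.

```python
# A rough sketch of one Nano Banana call: one prompt plus two reference
# images. Model id and file names are assumptions for illustration.
from google import genai
from PIL import Image

client = genai.Client()  # expects an API key in the environment

prompt = ("On a cozy couch by a crackling fire, the baby yawns in the "
          "onesie, surrounded by cuddly pets.")
flat_onesie = Image.open("onesie_flat.png")        # the product shot
onesie_on_baby = Image.open("onesie_on_baby.png")  # the reference image

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # Nano Banana
    contents=[prompt, flat_onesie, onesie_on_baby],
)

# Save whichever parts of the response are images.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("ad_scene.png", "wb") as f:
            f.write(part.inline_data.data)
```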
And you can see how cute this is. You can see the baby lying on the blanket. You can see the baby in the park. You can see the baby next to an old car at a picnic.
(08:25):
You can see the baby with the family at an outdoor dinner. Everybody's smiling. It's kinda like a sunset, beautiful lighting on everybody, and so on and so forth; a baby at the market, and so on. My favorite one, as I said, is the yawning baby on the couch. It's just the cutest thing ever, and you literally want to get teleported right there into this room to look at the yawning baby
(08:46):
with the pets lying next to him. So that could have been the end of it. I could have added text on top of it; there's a way to do that. But instead, I wanna show you how to take this and create a video. So again, I don't wanna be a blocker in the process. I wanted to be able to create videos regardless of what is in the image, because I don't know what's going to be in the image, because the prompts are generated dynamically based on
(09:07):
the item I uploaded in the beginning. So what I'm adding now, I'm starting with a box called Image Descriptor. And what it does, as the name suggests, is describe in detail what's in the image. You can use it with any image, not just the generated ones. So it says: a cozy image featuring a yawning baby in an olive-colored "Thankful for Family" onesie,
(09:28):
seated next to an orange tabby kitten, both on a beige couch with a tan blanket on it. A sleeping bulldog rests against the kitten, and a golden retriever lies in the foreground. In the background is a lit gas fireplace with a wood mantle and decorative objects on it. A very good description.
(09:48):
Those of you who are not watching can kind of imagine how it looks; that's how good this is. So now I have a description of what's happening in the image. Then I wrote a prompt that, again, I built as a generic prompt on purpose: you are an expert video ad script writer. Please create a script for a short video that starts with the image that I attached and continues for about eight
(10:09):
seconds. The goal is to highlight the baby's onesie and provide a cozy family feeling. Include instructions for the background music and sound effects. And the reason I ask for that is because if I wanna use Veo 3 or another video generation tool like Sora that knows how to create sound as well, I want that to be included. So now I have the image of the baby on the couch, I have the
(10:32):
description, in words, of exactly what's in the image, and my prompt on what I want the tool to do with it. And I connected all three of them into, again, a large language model tool. In this particular case, if I click on it, you can see on the right that I selected Anthropic's Claude 3.7 Sonnet, because I think it is more creative than the other ones. And it created a prompt that is a script for an ad, and then all
(10:56):
I have to do is connect it to a video generation tool. And one of the cool things here is that because I have all the video generation tools, I can test all of them in parallel and see the different outputs. So on the top you can see one from Higgsfield that is really, really cute. And again, all we're seeing is the yawning baby, and we see the fireplace moving and the pets moving around a little bit. The second one has a lot less motion and is more boring, and
(11:18):
that's from Runway. And the third one is from MiniMax, which is also really cool, with the baby moving more, but it distorts a little bit as it zooms in and out, which I don't like. By the way, for each and every one of them, I could have run it X number of times until I got an output that I like. So if I now wanna get a completely different output, all I need to do is
(11:39):
change the onesie in the very first step, and it will automatically come up with 10 new ideas that are related to what's on the onesie as far as the topic, break them down into 10 different prompts, create 10 different images, pick the best image, and generate three different cute videos in three different video generation tools, when all the work that I have to do is change one picture.
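If you think of each Weavy box as a function, the whole machine is just function composition. Here is a toy sketch of that chain in Python; every helper is a hypothetical stand-in for a node on the canvas, not a real API.

```python
# Toy sketch of the whole chain; each function is a hypothetical stand-in
# for a Weavy node (in reality these would call the models shown above).
def describe_image(image_path: str) -> str:
    return f"a onesie that reads 'Thankful for Family' ({image_path})"

def run_llm(prompt: str) -> str:
    # Would return 10 ideas separated by asterisks, per the prompt format.
    return "idea one*idea two*idea three"

def generate_image(prompt: str, references: list[str]) -> str:
    return f"image rendered for: {prompt}"

def product_ad_machine(product_image: str, reference_image: str) -> list[str]:
    idea_prompt = (
        "Describe 10 distinct 16:9 ad scenarios, separated by asterisks. "
        "Product: " + describe_image(product_image)
    )
    ideas = [p.strip() for p in run_llm(idea_prompt).split("*") if p.strip()]
    # Swapping product_image is the only change needed to regenerate it all.
    return [generate_image(idea, [product_image, reference_image])
            for idea in ideas]

print(product_ad_machine("onesie_flat.png", "onesie_on_baby.png"))
```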
(12:02):
This is one aspect of this. I want to go to a separate use case that will repeat some of the stuff but will use some other capabilities. So in this particular case: many of my clients are in the branded merchandise universe, and what they do is, you want to purchase a product and you want to put your logo on it, and you want to be able to see how it is going to look.
(12:23):
So what I created here is a machine that does exactly that, and creates promotional assets to promote it. So the first input is a product that you can upload. In this particular case, I took a white metal water bottle that I found on the internet. The second input is my company's logo, my second company's: Data Breeze, which is the software company that generates
(12:46):
automatic reconciliation processes with AI agents. Uh, that has nothing to do with what we're doing right now, but it's a logo and a brand name, uh, as a PNG with no background. And I wrote a prompt, and again, you'll see that the prompt is very generic: you're an expert in promotional product logo placement. I would like you to look at the product in the image I'm providing you and consider multiple aspects: where should
(13:09):
the logo be placed in order to make the product appealing while the logo still appears on the product in a clear way? Then apply your decision and place the logo in that location on the product. Make sure that the logo is accurate and that it is laid on the surface of the product in a photorealistic way. The entire product and the entire logo must be visible in
(13:30):
the image. So again, I don't know which product and I don't know which logo, but because I wrote the prompt very generically, it'll work. Then you can see that I took Gemini Nano Banana, which, again, is the same image generation tool, but I could have tried other ones as well. And I'm giving it the three inputs: I'm giving it the generic prompt, I'm giving it the product, and I'm giving it the logo, and I
(13:52):
get an output that is absolutely amazing. It looks completely photorealistic, and it has my logo, which looks curved on the water bottle, placed in a logical place, which in this particular case is mid-height on the water bottle. But then I used the same trick that I used before: I wanna show the product with the logo in different scenarios.
(14:13):
So I wrote a prompt that, again, is generic: you are an expert marketer in the promotional products industry. Your goal is to come up with 10 different detailed descriptions of scenarios that can highlight how to use this specific branded product. Again, I don't name the product, because I don't know what somebody will upload to this process. Then I gave it several different examples, and then: make at least two of the ideas funny.
(14:34):
And if the product requires manipulation for some of the images, make sure you mention that in your description. As an example, if it is a water bottle and somebody's drinking or filling it up, the cap should be off. The reason I added that is because I did not do that in the beginning, and then it didn't work well, so I added that instruction at the end. And just like in the previous steps, I'm attaching it to a
(14:54):
large language model. What is the input of the large language model? The prompt I just read to you, plus the image of the product with the logo on it that was auto-generated by this process. In a very similar way, it creates 10 separate prompts. Then I put it into an Array, which creates a List. So now I have 10 different separate prompts, each and every one of them in a separate box, and in a very similar way to
(15:15):
what I did before, I'm connecting them to Nano Banana. And Nano Banana in this case has two different inputs: the prompt, one of the 10 prompts that was generated, and the image of the water bottle with the logo on it. You can see how cool this is. And so for those of you who are not watching, I will describe it. In the first one, there is a businesswoman dressed up in business attire
(15:39):
in an office; the background has an office space with two people in a meeting room, and she's standing next to a fancy water fountain, filling up the water bottle, and the logo is clear in the image, and the cap is off because she's filling the water bottle, because I prompted for that before.
Then there is another image of a marathon runner, again female, with a lot of people cheering on the side.
(16:01):
It actually did something very, very cool: in the background, on the shirts of the other people cheering her on, it also put Data Breeze, so it's everywhere in the image. It's not over the top, but it's there. And another lady on the side of the track is filling up her water bottle from a jug of water. The next one is a business person dressed in a suit.
(16:23):
And he's holding the bottle; he's sitting next to a desk, and something in the background seems like a Zoom meeting that he is on, and he's showing off his fancy bottle. The next one is a marathon runner. The next one is a couple on a hike; they have matching bottles, and they're standing happy, smiling on top of the mountain. And I didn't create the prompts for any of these.
(16:44):
They were all auto-created. So here I did something similar to what I did before, but I wanted to show you other things that you can do with this tool when it comes to video generation. So I started with a similar process to what I did before: I asked the Image Descriptor to describe one of the images. In this particular case, what we see is the bottle right in the middle; in the background, there's beautiful scenery of a sunset next to a lake, and you
(17:08):
can see it's standing on a yoga mat, and you can see the legs of a woman leaning down to pick up the bottle, but we're not seeing her, because the focus is on the bottle. It's actually an awesome ad if we wanted a static image to promote this. So the Image Descriptor described this image, and then I wrote a similar prompt to what I wrote before: you're an expert video marketer who specializes in creating
(17:30):
short videos that capture people's attention. Your goal is to write a detailed script for an eight-second video, et cetera, et cetera, et cetera. So again, I'm not telling it what to write about, because it's gonna learn that from the image and the description that I'm uploading. Then I send it to a large language model, in this particular case Google Gemini 2.0 Flash. Why? Just to show you that I can, so I'm mixing and matching
(17:50):
different tools. And then I connected the output, which is now a detailed script of exactly what needs to happen, second by second, over this entire video. So it's a very long description, broken down into seconds, of exactly what needs to happen. I took the output of that script that was auto-generated, connected it to the image of the feet of the woman standing on the yoga
(18:12):
mat, and created a video. If you're watching: she's picking up the water bottle and she's drinking, and there are birds flying in the background, and the whole thing is really, really beautiful. Now, here's what I forgot to add: the whole idea of opening the water bottle. As you can see, she's trying to drink from the water bottle while the cap is still on, but that's an easy fix that I already showed you how to fix with prompting.
(18:34):
So this created this video. But here is where the next step of the magic happens. As you know, all these video generation tools have a limit on how many seconds of video they can create. But sometimes you want to create a longer video: you need additional scenes, and you want the second scene to be either a completely smooth transition from the first scene or at least something that is connected.
(18:55):
So they have a tool called Extract Video Frame, and what it does is let me select which frame from the video I want: I can scroll the video left and right until I find the frame that I want. In this particular case, I chose the very last frame, so now I know exactly what the last frame in the video is. And what I did then is I started with the same process: Image
(19:15):
Descriptor. So I have a detailed description of what happens in that last frame, and I have the last frame itself.
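If you wanted to reproduce the Extract Video Frame step on your own machine, OpenCV can do it in a few lines; a sketch, assuming a clip saved as scene1.mp4:

```python
# A sketch of the Extract Video Frame step using OpenCV: grab the very
# last frame of a clip so it can seed the next scene. File names assumed.
import cv2

cap = cv2.VideoCapture("scene1.mp4")
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Jump to the final frame and read it.
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)
ok, frame = cap.read()
if ok:
    cv2.imwrite("scene1_last_frame.png", frame)
cap.release()
```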
I then added several different text messages that I wrote. The first one says: you are working on the second scene of the video. The first scene of the video is: and then there's a colon and nothing after that. And then the last text box is: you need to write a script for
(19:38):
the scene starting with: and then, again, nothing after that.
Then I'm using a really cool tool that makes it even more powerful, with the really weird name of Prompt Concatenator, which basically means it knows how to combine several different prompts together. So what did I combine? The very first prompt is my original prompt, literally just
(19:59):
taken from there, the one that said: you are an expert video marketer who specializes in creating short videos, blah, blah, blah. So I'm reusing the same prompt that I used in the beginning. The second prompt that I've used in the list is: you are working on the second scene of the video. The first scene of the video is: and then I took the description that was created in the original step,
(20:21):
the one that created the original video; there's a script for that. I added that into my prompt. The third thing that I wrote and connected into this merger of prompts is: you need to write the script for the second scene, starting with: and now I connected the description of the last frame. And I connected it all into a large language model, and it created a script
(20:44):
that is now the second scene, perfectly connected to the first scene, because it had the input of the description and the exact first-scene script, and so on and so forth.
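Under the hood, the Prompt Concatenator amounts to joining strings in a fixed order. A minimal sketch of that merge, with placeholders standing in for the auto-generated pieces:

```python
# A minimal sketch of the Prompt Concatenator: join prompt fragments in
# order. The two dynamic pieces are placeholders for auto-generated text.
def concat_prompts(*parts: str) -> str:
    return "\n".join(p.strip() for p in parts if p.strip())

base_prompt = ("You are an expert video marketer who specializes in "
               "creating short videos.")
scene1_script = "<script of the first scene, generated earlier>"
last_frame_description = "<description of the last frame of scene one>"

scene2_prompt = concat_prompts(
    base_prompt,
    "You are working on the second scene of the video. "
    "The first scene of the video is: " + scene1_script,
    "You need to write the script for the second scene, starting with: "
    + last_frame_description,
)
print(scene2_prompt)
```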
And then I added that to a new video generation tool. And in this video generation tool, I connected two things: the output of the new script for scene two, and the last frame
(21:05):
from the previous video as the first frame of this video. So if I merge them together later on with any kind of editing tool, whether CapCut or something else, they will seamlessly connect to one another, because the last frame of the first video is the first frame of the second video, and I can keep on adding more and more scenes.
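As a sketch of that final stitch, here is how the two scenes could be joined with the moviepy library (assuming moviepy 1.x and its moviepy.editor module; file names assumed), though any editor works:

```python
# A sketch of stitching the scenes with moviepy (any editor would do).
# Because scene two starts on scene one's last frame, the cut is seamless.
from moviepy.editor import VideoFileClip, concatenate_videoclips

scene1 = VideoFileClip("scene1.mp4")
scene2 = VideoFileClip("scene2.mp4")

full_ad = concatenate_videoclips([scene1, scene2])
full_ad.write_videofile("full_ad.mp4")
```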
I can obviously provide any guidance that I want in the
(21:27):
middle, but the cool thing about the way it is right now is that it's completely generic. Which means, going back to what I said in the beginning, I can now change the product from a water bottle to a baseball cap, and it will change the ideas for the images, it will change the images, and it will change the videos, including two different scenes showing off the hat in one of
(21:48):
those images. This is absolute magic. What I'm showing you would've taken a team of people a few days to create. It took me about an hour to build this process, and now regenerating it takes five seconds of my time, plus about 10 minutes for the different models to run one after the other, because they do need to run in sequence.
(22:10):
Let's go to a completely different kind of use case. This one is more of an interior design use case. So the way I'm starting here, on the left, is with an image of a room. It's a nicely designed entry foyer of a house. There are two doors; there's a green wall in the background, a mirror, and a mantle with several different things on it. And what I've added is a text message that's
(22:32):
saying: remove the floor tiles and keep the floor plain off-white. And then I used a different image generation tool called Flux Kontext, which is an open-source model that is amazing at keeping consistency across different images. And as you can see in the second image, or as you can believe me if you're just listening to this, the image is the exact same image, only without the tiles
(22:53):
on the floor. Then I created an image of an orange couch, and I combined the two of them together. So now what I did is I combined the input of the room without the fancy floor, and I added the couch into that room, and it knows how to size it in a way that makes sense. I then took an image of tiles from the internet that shows a
(23:15):
different kind of tile, and I wrote a prompt: cover the floor with the black and light floral pattern carpet, and keep the rest of the interior view intact, maintaining architectural clarity. And as you can see, it did. So now I have the original room, but with a new kind of floor and with the orange couch in it. But you can go even further and add more steps.
(23:37):
I've used a mask technique, which I will show you in the next example, so I'm not gonna dive into it right now, to replace what was the mirror with a window looking outside into the woods. And now I have a completely different view of the room. And you can experiment with interior design, or any other kind of design, this way very, very quickly, and iterate very,
(23:58):
very quickly across multiple options in just seconds, which used to take hours to do with Photoshop. So now to my final example. And I told you this can be fun, so I wanna show you something fun that I did. Several different people that I know, including my sister and several others, have used Nano Banana, once they've seen it, to try different hairstyles on themselves.
(24:19):
So you take an image of yourself, and then you take an image of a specific hairstyle, and you ask Nano Banana to combine the two together. Now, as you know, because my image is on the cover of the podcast, I don't have any hair; I'm completely bald. So I found it really, really funny to play with different hairstyles on me. So I wanna show you how simple it can be to get different
(24:40):
hairstyles. So I started with an image from a conference I was at last week; it's a selfie of me and one of the fans of the podcast who wanted to take a selfie with me. So both of us are in the image, and you can see the background of the conference hall, with all the different lighting and so on. So this was the input: just an image, an actual live image of me
(25:00):
plus another person. Now, there's the concept of masking when it comes to graphic manipulation, and that basically means cutting things either in or out of the frame. Now, doing masking, if you have Photoshop, is still an art, and you still need to know what you're doing. What they have here in Weavy is a really cool tool called Mask by Text.
(25:20):
So instead of selecting visually, you explain in words what you want the system to do. It has two inputs: one input is an image, and the other input is a prompt. So I took this image of me and the other guy, the selfie that has both of us, and I wrote a prompt: keep just the guy on the right. That's it. That's the prompt. And now what you can see is that the mask is basically just a black
And now what you can see, themask is basically just a black
(25:41):
and white representation of whatit is going to do.
So you can see a cutout.
Of me without anything else.
And then there is a tool that'scalled Merge Alpha.
Again, you don't need to knowwhat that is, but it's basically
taking the original image andapplying the mask on it, which
means it's just gonna showmyself and nothing else out of
the image.
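Merge Alpha is a classic compositing move: the black-and-white mask becomes the image's alpha channel. A small sketch with Pillow, assuming the selfie and the mask are saved as files:

```python
# A small sketch of the Merge Alpha step with Pillow: the mask becomes
# the alpha channel, so only the white (kept) region stays visible.
from PIL import Image

selfie = Image.open("selfie.png").convert("RGBA")
mask = Image.open("mask.png").convert("L").resize(selfie.size)

selfie.putalpha(mask)      # black areas become fully transparent
selfie.save("cutout.png")  # just me, nothing else from the image
```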
This is a process that would've been significantly more painful
(26:03):
in any other way. So just doing this is worth it, uh, on its own. But now comes where the fun actually begins. So I took pictures of really famous hairstyles from history. I've got Marilyn Monroe, I've got Amy Winehouse, I've got B.A. from The A-Team, I've got another Marilyn Monroe, I've got, uh, Farrah Fawcett,
(26:24):
I've got, uh, a bronze statue of Caesar, I've got another Amy Winehouse, and then a bunch of other famous people, Cleopatra, uh, and so on. Even Gwen Stefani, with her space buns on both sides of the head. And all I did is use a third-party tool, again, just a tool that exists inside of Weavy, that
(26:44):
comes from an open-source tool called Face Swap. That's it. So all I had to do for each and every one of these images is connect the face of just me, the cutout I made using the really easy masking tool. And the other input is a known hairstyle.
And what you can see, and again, if you're not watching this, trust me, is absolutely hilarious.
(27:05):
So in the first one, I've got the great, really big hair of Amy Winehouse, in the same style, with the same smile and everything, and the same green dress, only it's me. And then me as Marilyn Monroe, in black and white, in the famous dress. Uh, and then me as Farrah Fawcett, and then me as the bronze statue of Caesar with
(27:28):
Caesar's hairstyle, and so on and so forth. Gwen Stefani, et cetera.
Okay, so now that we've seen multiple different options, I wanna do a quick summary of everything that we have seen and why this is so powerful. Almost every one of the steps that I showed you would've taken a lot of time using traditional tools. Combining them all together in many cases required several
(27:50):
different people with different expertise: people who are graphics experts, people who are prompting experts, people who are image generation experts, people who are video generation experts, and so on and so forth. And it replaces all of them with a very flexible workflow, with simple inputs and simple outputs, that a monkey like me can use.
(28:11):
There are zero technical skills required to use all the stuff that I showed you. And once you start playing with it, you understand that by mixing and matching the different tools that they provide, you can go from any input to any visual output you want in a way that is now scalable. You want to do the same process every single day, or every single week, or whenever?
(28:31):
All you have to do, if you do this correctly, is replace the input that you created, whether it's a prompt or an image, or several images and several different prompts, whatever the input is, and it will give you all the different outputs that the process knows how to generate. This is an insane time saver, and it is even more incredible when it comes to ideation and experimentation.
(28:53):
Now, by the way, they do have all the different upscalers in there. So while the original image that is generated is usually not high resolution, because it's generated by Nano Banana or SD3 or whatever, you can upscale it with really powerful upscalers and get significantly higher-resolution images. You can also change the aspect ratio of any image and outpaint it, so it's not blank.
(29:13):
So you can take a square image, change it into 16-by-9 or 9-by-16 or whatever you want, and use an open-source model to outpaint the rest so it still looks like a complete image, and so on and so forth.
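Mechanically, changing the aspect ratio before outpainting just means centering the image on a larger blank canvas and letting a model fill the margins. A sketch of that canvas step with Pillow, with sizes and file names as assumptions:

```python
# A sketch of the canvas step behind aspect-ratio changes: center a square
# image on a 16:9 canvas; an outpainting model then fills the blank sides.
from PIL import Image

square = Image.open("square_image.png")  # e.g. 1024 x 1024
target_w, target_h = 1820, 1024          # roughly 16:9 at that height

canvas = Image.new("RGB", (target_w, target_h), "white")
offset = ((target_w - square.width) // 2, (target_h - square.height) // 2)
canvas.paste(square, offset)
canvas.save("canvas_16x9.png")  # ready for the outpainting model
```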
Literally, anything you can imagine, you can do with no technical skills; set it up once and then run it as many times as you want. This is the magic of AI at its best. So
(29:35):
go back to the show notes, click on the link to get Weavy, and start playing with this. I think you get X number of free tokens to start playing with this for free. One more thing that I will say that is really important to know: there's a growing community around this tool, and many of these people are sharing their templates. So if you don't know how to use something, just go to a large
(29:56):
language model, or to Google if you're still old school, and say: hey, I'm looking for a Weavy template that does 1, 2, 3, and you'll most likely find one. And then you can duplicate it, learn how it works, make whatever changes you want, and make it your own, or just take components out of that template and combine them with components of other templates to create your entire workflow.
(30:16):
So the community aspect of this makes it even easier and even more fun. That is it for today. I really hope you found this helpful. As I mentioned, I highly recommend, if you listened to the entire podcast, that when you have a little bit of time, you go watch the YouTube video as well. It will give you a much better understanding of how this works, but hopefully, uh, I was able to describe everything that's on the screen and how it is working.
And if you are enjoying this podcast, please hit the
(30:39):
subscribe button so you don't miss any episode, whether it's myself, a guest, or the news that we share every single week. I do the very best that I can to bring you the best, most practical content on the web right now, and on podcasts for sure. So subscribe and share the podcast with other people. There's a share button on your podcast player.
(30:59):
Just click share, share it with a few people who can benefit from it. I would really appreciate that. And until next time, have an amazing rest of your week.