Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
GMT20250417-154445_Recor (00:00):
Hello,
and welcome to another live
(00:03):
episode of the Leveraging AI Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
And we have a really incredible and really fun episode for you today.
Today we are going to look at multiple use cases on how to use the newly released image generation capability on ChatGPT.
(00:27):
Now.
We named this particular episode, in the beginning, 10 incredible and amazing ways to use it.
And we actually came up with 15.
So you are in luck today.
We're gonna have 15 different use cases on how you can use this really incredible, out-of-this-world capability that ChatGPT gave us just a couple of weeks ago.
So let's start with a little bit of background before we dive
(00:48):
into the actual tools and how to use them and all the different use cases.
And by the way, those of you who are joining us live, either on LinkedIn or on Zoom, please go ahead and share who you are, where you're from, what you're doing; introduce yourself so you can meet other people as well.
And also, if you want, share what you want to get outta this episode, how
(01:09):
have you used it so far, and things like that.
I will read the notes and I'll try to relate to that.
And also, as we go forward, if you have any questions, please ask them in the chat and I will try to relate to them in a normal manner.
But now, into the background.
So if you think about what we had until not too long ago, we had DALL-E, which was built into ChatGPT, and you could tell it to
(01:31):
create images, but it was mediocre at best, and it was very far behind other image generation capabilities.
We also had image generation tools, which are incredible, such as Flux and Midjourney and Stable Diffusion and tools like that, which really provide incredible capabilities of image generation.
(01:52):
But they are standalone image generators, while ChatGPT and DALL-E have the ability to actually have a conversation and understand context.
So let's explain.
If I'm trying to create a design for a presentation, if I'm trying to create an image for a post that I want to do on LinkedIn, if I'm trying to create anything, say an ad.
(02:13):
If I'm doing this in an image generation tool, I need to work very hard to capture in my prompt everything that I need from that image.
If I am in a chat, whether it's Gemini or ChatGPT and so on, I can actually explain and have a conversation about it because it's a chat.
I can provide it the background, the information, the
(02:36):
context of what I'm actually doing, and hence it will give me better results, more customized for my particular needs.
And so it's a huge benefit.
But again, the disadvantage was that the image generation capabilities of both Gemini and ChatGPT were below par.
A few weeks ago, Gemini came out with Imagen 3, which is actually really good, and I started using it for almost
(02:58):
everything, instead of using Midjourney and Flux, which were my main go-to tools before.
Then ChatGPT came out with their solution, which totally blew my mind, literally from the day I started using it.
And I started using it for more and more things.
And now I must admit, I use it for probably 95% of everything that I'm doing, and the other 5% is divided between Midjourney
(03:20):
and Flux and Gemini's Imagen 3.
So the good news is you now have one unified tool, ChatGPT, or Gemini if you're in the Google universe and prefer to go that path, that can really understand what you're actually working on.
What is the project?
Who's the target audience?
You can do research before you generate, even deep research, in all these tools, and give it as much information before you actually
(03:42):
generate the images, and then the images will come out exactly the way you want.
But the other thing these tools became very good at, and ChatGPT is really incredible at this, is just understanding what you're trying to do.
Even when you don't give it all that information, even when you just give it a reference image and one short, two-sentence prompt, you get incredible results.
(04:02):
So what we're going to dive into today is, as I mentioned, 15 use cases, things that you can do with this capability that will blow your mind if you've never played with it before.
And we're gonna start with things that are incredible.
We're gonna go to some regular things, and we're gonna end with a few things that are really incredible that literally blew my mind, and that I showed to professional customers of mine that do this for a living, and they would spend hours doing
(04:25):
something that this tool can now do in minutes, and so let's dive right in.
For those of you, by the way, who are listening and not watching, I will try to explain everything that's on the screen.
But if you have the opportunity to watch this on our YouTube channel, then in your show notes on the actual device you're on, there is a link to our YouTube channel where you can actually watch this and see all the different examples that we're
(04:48):
going to show.
But I will do everything within my power so that if you are driving or walking your dog or doing your dishes or whatever you're doing when you're listening to this podcast and you cannot watch it on YouTube, I'll try to explain what we're watching.
One last thing before we dive in.
This is going to be highly educational, and I'm gonna show you the prompts that I'm using and exactly how I created all the images.
I'm gonna talk about the good, the bad, the ugly, the different
(05:09):
steps, all the magic that I've learned how to do in working with this tool literally every single day in the past few weeks.
But this is just creating images.
If you wanna learn how to use this in your business in a broader way, learn how to build strategy, learn how to write documents, how to analyze documents, how to do data analysis, both qualitative and quantitative data analysis,
(05:30):
combining data from multiple sources into reports and dashboards.
If you wanna know how to actually implement AI from a business-wide perspective, how to apply this to different departments, how to train your people, and so on.
We have been teaching the AI Business Transformation course for two years.
Yes, since April of 2023.
(05:52):
We have done this course at least once a month for this entire two years, and hundreds or maybe thousands of business people and business leaders have taken the course and completely transformed their businesses based on the knowledge that they acquired.
So everything you're gonna see in this episode today is awesome, but it's just image generation and it's going to be very quick. If you want more structured training that can
(06:12):
literally and dramatically transform your career and your business, don't miss this course.
We teach this course all the time, but most of the time it's privately taught, meaning an organization, a consortium, a buying group, or a specific company will hire us to train just their people.
And so we do publicly open courses only once a quarter-ish.
(06:33):
So the previous course was in January, and now the next course is in May.
And so on May 12th, we're opening another cohort.
So if it's something you're interested in, again, in your show notes, you can open it right now, wherever you are. If you're driving, wait for the next stoplight and click on that, and you can open it and see all the information about the course, and you can sign up right there and then.
We would love to have you if you haven't done anything like this, and even if you have but it was theoretical and you're not sure
(06:55):
exactly how to apply it, come and join us.
Now, let's dive into all of our use cases.
So I'll start my PowerPoint just to guide us through it, but we'll jump back and forth between the PowerPoint and actual ChatGPT.
And again, if you have any questions as I'm going along, you can ask them in the chat.
Okay.
We are ready to go.
So, product promotion.
(07:16):
This is maybe the biggest magic that this thing can do, and it's what we're seeing on the screen right now.
Again, for those of you listening: on the left, there is a low quality image that I downloaded off the internet of a sunscreen by Neutrogena.
It's poor quality, it has a plain white background, and it's literally just a download from the internet. On the right,
(07:36):
what you see is something that looks like a professional ad: a close-up of a girl's hand holding the same kind of product, with the beach blurry in the background, and she has the colors of the US flag painted on her fingernails, and it says, 4th of July Sale, Neutrogena, Buy One Get One Free.
Now, I went from one to the other, from the ugly, low quality
(08:01):
image on the internet to the really beautiful, amazing looking ad on the right, in four or five steps of simple prompts, and let's go through these prompts to see how this works.
And so the first thing I did is I uploaded the ugly image. And then, the trick, if you want to get the text right more times
(08:23):
than not, and this tool is incredible at creating text, but to create the text accurately, there's a little trick to do this.
So what I did in step one is I wrote the following prompt: please provide a line-by-line text of what is written on the product.
And then it literally wrote it down for me: Neutrogena, dermatologist recommended brand, blah, blah, blah, blah, blah, all the different components.
(08:44):
And then I went and wrote the actual prompt.
Now, I would like you to create a close-up professional product photography of a female hand holding the product.
The background is the ocean and the beach, but it is blurry, as the focus is on the hand and the product.
And then I gave the information of the camera: Nikon Z7, 50 millimeter lens, aperture 2.8.
(09:06):
And then I copied and pasted what the product text says, the same thing it wrote for me, and it created the first image.
So the first image just shows the hand, without all the fancy stuff around it, holding the product.
It looks perfect; it looks like professional photography.
It's fantastic.
And then I went ahead and I wanted to see if I can change
(09:26):
the SPF.
I wanted to see how much it actually understands what's in the image.
Again, a huge benefit of a chat, that is, a model that actually has logic, versus just an image generator.
So all I wrote is: please recreate the image with SPF 50 instead of 70.
And it did.
So now I had an image with SPF 50, same everything else.
(09:47):
And then I started to have fun with it.
I said, please redo the image and change the girl's nail polish to a US flag pattern.
And it did a really good job.
Now it looks really cool.
I'm like, oh, that's cool.
So then I went ahead and I asked for the following: extend the image on the bottom to make it a 9 by 16 image and write, in a fun font,
(10:08):
4th of July Sale, Neutrogena BOGO.
So it did that.
But what it did is it actually wrote the Neutrogena BOGO in red, which wasn't very obvious on the actual image.
So I asked it to change the font: make the Neutrogena BOGO also white with a bold outline, like the text above it.
Make sure that all the fingernails are painted, because
(10:31):
it missed one in the previous one.
And that's it.
And I had this final ad that I could literally use in a professional setup.
It looks really cool and I can use it today. In that scenario, that's mind blowing.
This would have taken a professional photo shoot and a lot of editing in order to get this picture.
And now you can do this in four to five prompts, starting with a
(10:52):
poor quality image of a product.
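For anyone who wants to script this same reference-image-plus-prompt workflow outside the ChatGPT app, here is a minimal sketch against OpenAI's Images API. It is an assumption-laden illustration rather than the exact method from the episode: it presumes the gpt-image-1 model is available to your account through the API, and the file names and prompt text are placeholders.

```python
# Minimal sketch: low quality product photo -> polished ad-style image via the OpenAI Images API.
# Assumptions: openai Python SDK 1.x installed, OPENAI_API_KEY set in the environment,
# gpt-image-1 enabled for your account, and "neutrogena_raw.png" as a placeholder file name.
import base64
from openai import OpenAI

client = OpenAI()

prompt = (
    "Close-up professional product photography of a female hand holding the product "
    "from the attached reference image. The background is the ocean and the beach, "
    "blurry, with the focus on the hand and the product. Nikon Z7, 50mm lens, f/2.8. "
    "Keep the label text exactly as it appears on the reference."
)

# The edits endpoint accepts a reference image alongside the prompt.
result = client.images.edit(
    model="gpt-image-1",                     # assumption: model name as exposed by the API
    image=open("neutrogena_raw.png", "rb"),  # the low quality source photo
    prompt=prompt,
)

# The model returns base64-encoded image data.
with open("product_ad_v1.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```

Each later tweak from the episode, the SPF 50 swap, the flag-pattern nails, the 9 by 16 extension with the sale text, would just be another edit call that feeds the previous output back in as the new reference image.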
Let's move to the next one.
So, creating icons.
That was something where, again, you usually had to go to some kind of a website and grab icons, sometimes pay for them if you wanted something to be professional.
And if you wanted something unique, something that's only
(11:13):
yours, you would have to have somebody create it, somebody on your design team, or pay a third-party person who knows how to create high quality icons.
But what I did here is I actually took two existing icons from my website and I uploaded them to ChatGPT, and I created this last icon.
(11:35):
So let's see how this was done.
So all I did is I uploaded these two images of icons that I have on my website, literally right click, copy, and then I put them in there and I said: let's try something different.
I had tried 50 different things in this chat, so ignore that. But then: attached are images of two icons from my website.
I would like you to learn the exact style of these icons and
(11:58):
create new icons following the styles and the colors.
The icons that should be created are blah, blah, blah, blah, blah.
And the first one I asked for is the AI podcast icon, and it created this, which looks absolutely incredible.
Again, for those of you who are not seeing it, it follows the same exact style of the original icons, these 3D, shiny-looking icons, and it has a professional, kinda like,
(12:22):
podcast microphone with headphones around it, and it says AI in a speech bubble.
It's a perfect icon for a podcast, and I'm actually going to use it on my website.
The other very cool thing, which no other professional image generation tool that I know of does: it actually created a transparent background, because it understands that this is an icon
(12:42):
that I need to use as an icon, meaning it needs to have a transparent background.
So it created that on its own, as a PNG.
When I download this, I can immediately use it wherever I want to use it.
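If you would rather generate icons like this programmatically, here is a small, hedged sketch of what that could look like with the same Images API. The transparent-background option and the exact parameter names are assumptions on my part; check the current API reference before relying on them.

```python
# Sketch: generate a style-matched icon and save it as a PNG with a transparent background.
# Assumptions: openai Python SDK 1.x, OPENAI_API_KEY set, gpt-image-1 available, and that
# the API exposes a background="transparent" option for this model.
import base64
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="gpt-image-1",
    prompt=(
        "A 3D, glossy podcast icon in the same shiny style as my website icons: "
        "a microphone with headphones around it and a speech bubble that says 'AI'."
    ),
    size="1024x1024",
    background="transparent",  # assumption: transparent PNG output is supported
)

# Save the returned base64 data as a ready-to-use PNG with an alpha channel.
with open("ai_podcast_icon.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```

In the ChatGPT app you simply attach the two reference icons; through the API you would pass them to the edits endpoint instead, as in the earlier sketch, so the model can copy their style.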
Now, I wanna touch on something else that has to do with this, and it's kinda like use case number three, which is understanding and applying styles.
In addition to the fact that I just created an icon, it
(13:05):
actually learned the style, the essence, the colors, the type of thing that I'm trying to create.
And yes, in this particular case, we've applied it to an icon, but you can apply this to anything else.
If you're trying to create textures, if you're trying to create a specific style of image, if you're trying to
(13:27):
create old-looking photographs, if you're trying to create a specific impression of something, you can upload the original ones, and it is very good at understanding and capturing the essence of it and applying it in a new way.
And we're going to see more examples of that.
So at number four, we have applying a known style.
(13:51):
So in addition to you giving it the style, you can ask for basically any style that you want.
So when this tool came out, everybody started creating Studio Ghibli-style anime images with it.
But what I've done in this particular case, I wanted to show that you're not limited to Ghibli.
You can use any style that is a known style or artist that is
(14:12):
available on the internet.
So on the top left is an image of a family, actually my family, on a trip in Chattanooga that we came back from a day before this capability came out, so it was an inspiration for me.
And then I went ahead and asked it to recreate the image in multiple styles.
So the first one is Ghibli.
The next one is Lego, which is my favorite out of all of them.
(14:33):
So it's actually my family sitting on a rock in Chattanooga, all built out of Lego.
It's absolutely insane.
And then we have Dalí, and then Van Gogh, and then Andy Warhol.
And each and every one of these was created with a simple prompt.
So let's take a look very quickly.
So we have very simple and short prompts.
(14:55):
So for the first one, I just uploaded the image and said, please turn this image into a Studio Ghibli anime, and it did.
The second one: now let's use the original image and make it into Lego.
So what it did initially is it tried to build them from Lego pieces, which is not what I wanted.
I actually wanted, like, minifigures.
So I went ahead and changed it again.
I said, great, but now make the people as Lego minifigures and
(15:18):
the surroundings more Lego-style, and now it actually looks like it's built out of Lego.
And then, so on and so forth, I did that for all the rest of them, Salvador Dalí and the other ones.
And when I asked it to create Van Gogh, it actually did a good job, but not incredible.
So what I did in the end, for the image that you've seen, is I actually uploaded an image of a Van Gogh painting and I gave it
(15:39):
as a sample.
I said, make the brush strokes more prominent and more directional, like the attached image.
So instead of it coming up with Van Gogh from the information in its background, I actually uploaded an image, and I got a much better outcome that is aligned with what I was actually looking for.
Why is this important?
Because you can use this for any expressive use that you want,
(16:02):
whether you want to create something for fun, whether you want to create something for holiday season greetings for people you send these to, or whether you do this professionally and you need this for a campaign that you're running with your company.
You can use any one of these things.
And yes, people are asking: is it okay that they're using actual people's references and so on?
(16:23):
That's a very big open-ended question.
They do have some limitations on what styles it will and will not do, from an IP protection perspective.
I must admit that sometimes it actually blocks me in really weird situations that do not make any sense.
And yet it does allow me to do things like
(16:44):
this, like Van Gogh or Dalí or Studio Ghibli, for that matter.
So it's very open compared to previous models that existed before, and it allows you to do more or less everything that you want.
Somebody asked about Disney characters.
Yes, it will do Disney characters.
It will do the Simpsons; it will do all of these things.
Okay?
The next use case is following brand guidelines.
(17:06):
So now we're taking it to the next level and making it more professional.
And I must admit, this absolutely blew my mind.
So what we're looking at right now is two images side by side.
One is kind of a spring shoe sale for women by Walmart.
The other one is from Target, and neither of them is real.
And both of them are a hundred percent AI.
(17:27):
And the way I created them is what shows the real power of these tools when it comes to understanding the chat and context and additional information for creating these images.
So how did I create these two amazing images?
What I did is I started with a simple prompt that said: you are an expert graphic designer.
(17:47):
We are working on a partnership with Walmart.
As a first step, I would like you to review their brand guidelines. And then I literally gave it a link to the document that has the detailed Walmart brand guidelines.
And then I said: visit all the sub-pages and learn their style guide in detail.
Please create a short bullet point summary of what you found.
So then it created a short summary of Walmart's style
(18:09):
guide in the actual chat, and then I said, okay, great.
Now I would like you to create an ad for a new shoe sale for spring, using these guidelines. The ad should show two smiling female friends sitting on a bench with cool sandals.
It is also professional product photography.
(18:30):
The angle is looking upwards, with their shoes in the middle of the frame.
Intricate sandal details.
Bright sunny day, blue sky, in the park. Nikon Z7, aperture 4.5, 50 millimeter lens.
It is a Walmart ad, so use all the guidelines you learned previously. And it created this amazing ad; it's absolutely stunning
(18:51):
and it looks as if Walmart actually created this.
All I wanted to do is I wanted them to wear dresses instead of long jeans, because I think that will make the sandals stand out even more and it will look more spring-like.
So I said, okay, good start.
Let's try having the girls wear spring dresses and sit with one leg over the other.
This way it will show each of the girls with one foot on the ground and one
(19:14):
not, giving additional angles, basically showing the sandals at different angles.
So that's what it created, and it looks amazing.
Again, it looks like an ad and it looks perfect, and it has the Walmart logo and it follows all their brand guidelines on the type of text and how to add colors and things like that.
There's, again, another question: does that violate any policies and so on?
(19:35):
I don't think so.
I'm just following their brand guidelines, and again, I'm not gonna post this, but it just shows that it literally read their professionally produced brand guidelines, which are very detailed, and learned how to follow them.
Then I tried a different angle.
I uploaded a PDF of the Target brand book.
It's many pages of all their different details, and I said,
(19:56):
this is great.
Now let's copy this ad to Target.
I attached Target's brand guidelines; please study them carefully and then recreate the ad by using Target's brand guidelines. Everything else should stay the same.
And I got the same ad with the red background and the logo and a different font, with more spacing.
Like, it literally follows their brand guidelines.
And it's just mind blowing to me how easy it is to do.
(20:20):
I did not need a camera, I did not need the models.
I could change what they're wearing, as you saw.
I can change the sandals, I can do all these different things and create at least inspiration for what the ad needs to be, and show it to the relevant people and get approval before we actually get the models and do the actual photo shoot, or eliminate the photo shoot altogether.
(20:41):
The next one is infographics.
So this is the first tool ever that can create long, complex text accurately.
The next best thing that we had so far was Ideogram, or IdeoGram, depending on how you wanna pronounce it, which is a great tool that still knows how to create images with good text in them.
But it will not create an infographic.
(21:01):
It will not create a full page full of text.
It will not create sophisticated diagrams accurately.
And ChatGPT now will.
Now, this use case is actually really interesting, because what I did here is actually a much longer process than creating the infographic that you see on the screen, which has two different segments and talks about running AI agents safely on a virtual
(21:24):
machine.
So let's see what happened here.
What happened here is I'm now testing many different agents, and to test the agents, I wanted to not run them on my computer, because, to be honest, I'm not a hundred percent sure what's going to happen.
And so I wanted a safe environment to run this.
So I did a deep research process on
(21:44):
ChatGPT. Deep research is an incredible way for you to find and summarize information from hundreds of websites across the internet.
I did a completely different episode about this a few weeks ago that you can go back and listen to.
I compared all the different deep research tools, and you can go and check it out.
But what I said is the following: I'm looking for a
(22:04):
safe way to run Manus and other AI agents on my Mac.
It needs to be in a separate universe from all my regular applications and everything I use.
The best option would be to run it on a remote server.
The second best would be a parallel machine on my Mac.
I am open to other suggestions and would like to know what are
(22:24):
the pros and cons of the different options, how much it would cost, if anything, and what are the technical skills required to set it up.
And then it went ahead and did the whole research.
And then it gave me multiple different options, and I went back and forth multiple times, asking it about different options and so on.
And again, for those of you not seeing the screen, I'm scrolling; it's a very long conversation that I had with it
back and forth.
And the best option came to bea.
Virtual machine on Google Cloud.
Now, I am a geek, but I'm nottechnical enough to do any of
these things.
So I literally asked it forstep-by-step instructions and I
followed its instructions.
And in the beginning, I couldn'tget it to work, and I went back
and forth and eventually I wasable to create a virtual
(23:06):
machine.
The whole process took me abouthalf an hour, and now I have a
virtual machine running on aGoogle Cloud where I can install
and run all these agents.
And if something goes south andgoes terribly wrong, nothing
happened to any of myenvironments.
And so that's something worthdoing on its own if you want
experiment with agents.
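For anyone who wants to set up the same kind of sandbox, here is a rough sketch of the commands that such a step-by-step guide likely boils down to, wrapped in Python. These are not the exact steps from the episode: the instance name, zone, and machine type are placeholders, and it assumes you have the gcloud CLI installed and authenticated against a project with billing enabled.

```python
# Sketch: create an isolated Ubuntu VM on Google Cloud for experimenting with AI agents.
# Assumptions: gcloud CLI installed and authenticated, a billing-enabled project selected,
# and placeholder values for the instance name, zone, and machine type.
import subprocess

INSTANCE = "agent-sandbox"   # hypothetical instance name
ZONE = "us-central1-a"       # pick a zone near you
MACHINE_TYPE = "e2-medium"   # small and cheap; size up if an agent needs more

def run(cmd):
    """Print and run a gcloud command, stopping on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create an Ubuntu 22.04 LTS instance (Ubuntu is the OS that worked in the episode).
run([
    "gcloud", "compute", "instances", "create", INSTANCE,
    f"--zone={ZONE}",
    f"--machine-type={MACHINE_TYPE}",
    "--image-family=ubuntu-2204-lts",
    "--image-project=ubuntu-os-cloud",
])

# SSH in to install and run agents; anything that breaks stays inside this VM.
run(["gcloud", "compute", "ssh", INSTANCE, f"--zone={ZONE}"])
```

When you are done experimenting, deleting the instance (gcloud compute instances delete agent-sandbox --zone=us-central1-a) tears the sandbox down so it stops costing anything.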
But what I did at the end of this very long conversation is I
(23:29):
literally said: please create a beautiful infographic of this guide.
Use the latest version of the conversation, using the Ubuntu operating system, which is the one I actually managed to get to work.
Use the attached brand guidelines, my brand guidelines in this particular case, as the color scheme for the infographic, and use the Multiplai logo that is attached as a document as well.
(23:50):
Start with a background of what agents are, why there is a risk in using them, and why this solution is even necessary.
And then describe the steps that we have taken in order to create the environment.
And now I have the ability to test different agent platforms with zero risk. And it created the infographic, which is
(24:11):
incredible.
It's clearly done.
It follows exactly what I said.
The text is perfect.
It follows my brand guidelines and colors.
It created icons for the different steps of the process.
It's just absolutely amazing.
And so that's unique.
You can now create sophisticated infographics based on anything you want.
(24:31):
Next: inspiration from images.
So I was trying to have fun, but I was also trying to have fun in a sophisticated way that would involve text and reproducing images with new text in them.
So I found this image; actually, somebody sent me this image of a homeless guy that's holding a cardboard sign.
He's standing in the middle of the street, and the sign says: 16
(24:53):
wives, seven hungry dogs, three thin cats, 25 kids, and still horny.
Please help with loose change.
I found this really hilarious, and he's a really smart guy, and I wanted to do the same thing and come up with other smart and funny things to write on these kinds of signs.
So I went to ChatGPT, uploaded the image, and I asked it to come up with new, funny
(25:15):
things to write on a homeless sign, and then I asked it to apply one and recreate the image with the new text.
And it created this image that says: too ugly to strip, too honest to steal, too sober for karaoke, needs beer and therapy. Spare change, question mark.
Again, I find this to be incredible.
If you look at the image, it's almost identical to the original
(25:37):
one.
It definitely follows the same kind of thing: he's standing in the middle of the road.
The road has two sides, one with, like, stones and one paved.
There's an old-school BMW behind him.
Like, it looks very clear that it got the inspiration from the original image.
It's not exactly the same, but you can recreate images with things that you wanna change in the image, including text in the
(26:00):
image.
Think about having the billboards in Times Square saying what you want them to say.
Stuff like that is now possible with just one prompt.
Sean said that he's still waiting to be wowed by AI humor.
Overall, I think it's getting better.
I think this is actually not bad.
But let's continue.
So, number eight, which is actually a fail for me, but I
(26:22):
saw people do a really good job with it, which is professional headshots.
So what I did is I actually uploaded several images of me from different angles, and I asked it to create professional headshots of me.
I asked for one with a t-shirt, the same t-shirt that I'm wearing right now, my Multiplai t-shirt, and then one wearing a suit.
(26:43):
And the reason it's a fail: the actual professional headshots look amazing.
They just don't look like me, which is the whole point of this.
But I saw other people online do this and get much better results.
So I don't know if it's my face that is not easy to reproduce, but it's worth you knowing that this is an option.
So let me show you how this is done.
So let's go into ChatGPT
(27:04):
and see the process that I did here.
So the process started with me uploading several different images, front, two sides, and back, images of me that I just took with my phone.
And then I wrote: you are the world's best headshot photographer.
You have 20 years of experience in capturing the essence and glamour of people and making them look amazing in headshots.
As a first step, I would like you to look at the different
(27:26):
angles of this person and describe him in as much detail as possible.
Include things like the aspect ratio of his face, the shape of his nose, the direction of his eyebrows, the color of his eyes, et cetera, et cetera.
Try to create as detailed a description of this person as possible, so you can replicate it later.
And you can see this is a recurring theme.
What I found is that when you ask it to describe things in detail,
(27:49):
it actually does better afterwards.
We did it with the text, and I'm doing this now with myself.
And then it created a very detailed description, including general structure, facial features, eyes, nose, mouth, ears, skin tone and texture, hair, all of that kind of stuff.
It's pretty long and very detailed.
And then I uploaded my logo so it can capture it and put it on
(28:11):
the shirt more accurately than just grabbing it from the image that it had, and I wrote the following.
Now let's try to create the first headshot.
Let's start with the same t-shirt that I'm wearing, with the same company logo.
I'm also attaching the company logo itself for you to copy.
And I want it to look like a professional headshot, but wearing the t-shirt I'm wearing in the original image.
(28:32):
Try to capture all the details you mentioned above as well.
And it created the image.
And again, it's a great headshot.
It just doesn't look exactly like me.
It looks somewhat like me and it has a lot of the features, but it doesn't look like me.
And then I asked it to change it to a professional look with a suit and tie, and it did that, and I replaced the tie and
(28:53):
I tried different things.
None of them actually came out looking like me, but the clothing change actually works very well.
So worth trying, if you need professional headshots.
I will say something else: I assume, and it is very likely, that if instead of trying twice I try 20 times, one of them will actually look like me, and then I'm done, and it's gonna save me the time and the money of actually going and
(29:14):
getting professional headshots.
So it's worth trying 20, 30 times.
It's still gonna be a lot cheaper and a lot less time than actually getting a professional photo shoot.
And because I have uploaded multiple angles of me, I will be able to get, like, a 45 degree shot.
I will be able to get it for my passport if I need to, instead of paying $25 at CVS, and all these kinds of things.
(29:35):
So if you manage to do it, it's actually very helpful.
Let's move on.
Number nine.
This one is really one of my favorites because it really blew my mind, which is combining images.
So what I did, and again, I was just experimenting with different ways I can use this in a business context.
So what I did is I took an image of a basketball player holding a
(29:57):
basketball, with a dark background, and he has a very serious face.
And I was starting to think what would be completely out of place to put in his hands instead of the basketball.
So I took a very fancy, very kid-friendly cake that has lots of sprinkles on it and colorful candles and so on, and I asked it to combine the two, and it created the image of the
(30:19):
basketball player with the dark background, with his really tough face, holding the cake.
And it looks amazing, and it looks almost exactly like the original cake.
So let me show you again.
This was literally just one prompt.
And so what I did is I uploaded the two images, so the image of the basketball player from the internet and the image of the cake from the internet, and I said: please have the
(30:43):
basketball player hold the cake instead of the basketball.
Try to make the cake as identical as possible to the original image I attached.
And as you can see, if we zoom in a little bit, you can see that it even captured that these are not regular sprinkles on this cake; they have these weird shapes.
It captured the drip of the chocolate.
(31:05):
It captured the colors of the candles, it captured the actual plate color; it's almost the same exact cake.
And it's just amazing that it knows how to combine these two images together, because it understands what I'm asking.
And again, that's something that doesn't exist in the regular image generators.
(31:26):
Let's move on.
I'm just reading some of the comments.
Somebody else says it did work for them, like, the professional headshot.
For some people it works; for some people it doesn't.
I think, again, with a lot of trial and error, you can probably make it work.
So, product placement. This one is another big one.
This is an actual client of mine, and they have this product that
(31:48):
is called iPort, which allows you to put an iPad in a stand that can be placed in different places. It can be removed from that; it's detachable.
It has a lot of really cool features. But to create images of it in different scenarios, so they can sell it to different kinds of clients, they need to actually take the product, place it somewhere, and take pictures.
Not anymore.
(32:09):
What I did is I uploaded several images of the product into ChatGPT, and then I just described the background that I wanted.
So let's go and take a look at what I did in ChatGPT.
So, scrolling up: you're an expert product marketing photographer and designer.
We are about to work together on a project for a product called iPort.
You can learn about it here, and I actually gave it a link to the
(32:31):
website.
I want you to review all the product pages on this site and let me know what you can learn about the product.
And I'm doing this, again, because if it understands what the product is, it will have an easier time actually generating the outcome that it needs to generate.
Again, a huge benefit that a plain image generator doesn't have.
So it provided all this information.
(32:52):
And then I said, okay, great.
We're going to focus on the mounted iPad product.
Here are a few images that I want you to look at.
And I uploaded multiple images.
I also attached the logo so it knows what the logo looks like, and then I asked it to give me a description.
Again, I had it do the description of what it looks like from different angles, in different directions, and everything.
(33:13):
So it came back with shape and structure, color and finishes, design details, functionality, usage, angles observed, and a summary.
So again, a detailed description of the product.
I asked it for some additional information, and then I went to create the first image.
I would like you to create a promotional product photography image of an iPort on a counter at Starbucks.
(33:33):
The shot is from the barista's side, looking into the restaurant; the barista's fingers are operating the iPad mounted on the iPort.
There are happy customers in line waiting to order their drinks.
The iPort needs to be in high resolution and the focus of the image. And it created the image.
And it's amazing.
It's exactly everything that I asked for.
(33:55):
And creating that image with a camera would've meant taking over a Starbucks, having a professional photographer, and having actors be there and stand in line, and taking all these pictures and then picking one.
And here it's just one prompt.
And then I created all the other images that you can see on this slide, with any scenario: in an office building, with a glossy table, on the wall, like whatever
(34:15):
I actually want.
Interior or exterior design: this is another amazing use case.
And what's on the left is an actual backyard.
It's real, it's an actual building, and you can see that the yard doesn't look like anything exciting.
But what we did is we literally asked it to give it some life and
(34:37):
suggest new designs for that backyard.
And it created these different variations that look really inviting and warm, and something I would love to have myself.
But it kept the exact aspect ratio and the size and the wall and the window, everything from the original actual yard in the image, just changing the design of it.
(34:58):
And you can do the same thing for interior design as well.
So let's see, what are the prompts over here?
So again, uploading the image of the backyard.
And then I said: recommend a cozy backyard.
I want to have one tree in the corner.
And then it just recommended all the things that could be done.
And then it asked me questions: would you like to visualize this?
But yes, not too many trees.
(35:20):
I want to reduce the amount of maintenance.
So it reduced the number of trees in the description.
And then we asked it to create the photo, and it created the photo, and then we asked for other stuff and created more variations of it.
This work was done by Joyce, my amazing assistant, for an actual house.
So this is incredible.
And you can do the same thing for interior design and ask for
(35:42):
recommendations, or ask it to show the sofa that you're considering buying in your living room, and stuff like that.
And it will do that as well.
It's a really amazing use case to get inspiration or to actually do the design that you want to do.
Number 12: changing angles.
So this is somewhat related to what we did before, but what I wanna show is how amazing this tool is at understanding
(36:05):
what it's actually looking at.
So again, different from an image generator that generates an image and that's it, this actually understands what you're creating.
So on the left, you can see an image of a room, by the way created with AI. On the right is a top-down view.
So the first image on the left is looking into the room from eye level.
(36:25):
The second image is looking top-down into the room from, like, the corner where the wall meets the ceiling, looking into the room, and you can see how accurately it actually captures all the details.
Now, it's not a hundred percent, but it's close enough that it could have been used as a find-the-differences game for kids, because the sofa is the same sofa, the cushions are the same
(36:47):
cushions, the chairs are the same chairs.
The table is not exactly the same, but it's almost the same.
It even captured the two rugs on the floor that are overlaid one on top of the other, and the curtains and the lighting and the window and the plants.
Like, it captured all these details and just created it from a different angle.
Why is that helpful?
Because you can use this, again, when you're doing design and you
(37:08):
wanna see how this will look from a different angle.
If you want to think about a product from different angles, and use your product and show it in different angles in different situations, you can do that as well.
And again, this, by the way, was a very simple prompt to go from one to the other.
So I will show you that very quickly.
This is the first image, and then for the second image I literally
(37:30):
said: show me the rest of the room from a bird's eye view.
That's it.
And it gave this incredible outcome of the room from a completely different angle.
The next one is user generated content, also known as UGC.
And this is another thing that's gonna change an entire industry, because brands can now, quote unquote, fake UGC.
(37:53):
Now, previously brands did this anyway, but they actually had to pay people to do this and hire people to do that.
And now you don't need to.
So what's in the image on the left, again, is a low resolution image of a cream that I took off the internet.
And in the two images on the right, you see a young girl holding the product.
I asked it to not look professional. In one of them,
(38:14):
she's just holding the product and smiling as if she's talking about it.
In the other one, she has a little bit of cream on her face.
It totally looks like something she took as a selfie with her cell phone.
And I did that on purpose, in order to make it look like UGC.
And I don't have the link for that one, but literally all I did is I followed the same process that I described before. I asked it to
(38:37):
tell me what the text on the product is, so it shows up accurately when I render it again.
And then I asked it to create a user-generated-content-style image of a young girl holding the product, as if it's user generated, as if she shot it as a selfie with her phone.
And now I can create as many versions of this as I want, until there are a few that I like, and then I can use them for
whatever I want.
By the way, the next step beyondthat, which we're not gonna do
in this session, but you candefinitely do, is you can now
use this image.
And upload it to a tool thatalso knows how to create video.
And then you can have a video ofthis girl talking about your
product when the girl doesn'texist, and the product in the
image exists, but not in anactual image.
(39:20):
This is generated by AI.
So you can create user generated content, including videos, including voice, including making it look completely realistic, as if a person shot it with their phone.
And you can do that in seconds, with multiple people, in multiple places around the world, in any language that you want, with every kind of person: old, young, female, male, different
(39:42):
ethnicities, et cetera.
All of that with one simple prompt.
Number 14 blew my mind because it combines a lot of the things that we have done before.
So let's see what's going on here in number 14. First of all, I followed the entire design process.
(40:03):
Several of my clients are apparel brands, some of them for adults, some of them for babies, with different kinds of products, with different styles.
Some of them just do t-shirts for multiple brands and so on, because they have the licensing to do that.
So, different companies that do different things.
And what I tried to do here is I tried to follow the entire design process.
(40:23):
So I started with a mood board on the left, and then, using the mood board, I created graphics that can go on onesies.
And then I uploaded what's on top here, which is a 2D flat image of a onesie.
So it's not an actual photo, it's just the graphic design, which is how designers actually work when they create these designs.
(40:44):
And from me giving that to it, plus the combination of the first few steps, it created the design on the onesie as flat graphics, and then I asked it to put it on an actual baby.
And it did.
Now, I'll show you the prompt in a second, but what I wanna show you that is really incredible here is, if I zoom in on the
(41:07):
actual onesie that I uploaded, the flat graphics one, you will see what the stitches look like, how the overlay of the shoulders and the neck opening works so you can slip the baby's head through it.
You can see the buttons on the bottom and the stitching on the bottom in the original image, and it captured all of that in the
(41:27):
design that it created as the graphics, as well as in the actual product it put on the baby.
So all of this is mind blowing.
This process takes professional companies weeks to do, with multiple people, and now ChatGPT can do it in a matter of minutes.
So let's see how this process works.
(41:48):
So I started with: you're an expert clothing designer at Gerber Children's Wear.
As a first step, I would like you to do quick research on the company and its design guidelines.
Get back to me with a summary of what you find, so we can establish a baseline of what good designs are at Gerber.
And then it came back with core values and product
(42:10):
categories and material choices and design guidelines, color palettes, pattern graphics, functionality, key takeaways, all of that, and it did it just by me asking it to.
And again, I do this because if it has more context, it is gonna generate better results that are aligned with what I need.
And I see now that there are a lot of people writing, so I
(42:33):
apologize for missing some of the comments.
I will get to them in a second.
So then I said: great.
I need your assistance in planning the next round of onesie designs.
As a first step, I would like you to create a mood board that will connect us from the spring season we're in right now, with its beautiful scenery, blue skies, warmer weather, to clothing for babies and the household
(42:55):
clothing of babies and householdenvironment.
And then it created thisdescription of what it thinks it
needs to be, the color palette,the texture materials, the
motives and patterns, the designelements, the nursery
inspiration, and then it gave meother examples, and then I
selected one and it dove intothe details of what this one
(43:15):
should be, from the three different options that it gave me.
And then I said, okay, great.
Now let's create the mood board itself.
And it created this amazing mood board that has everything that I asked for.
It has the color palette, it has some of the designs.
It actually looks like a board with two onesies hanging on it, or three, in different sizes.
And the templates, and even the nursery theme that I asked for;
(43:37):
there's, like, a crib and the skies and so on.
It all looks amazingly professional.
And then I said, okay, I really like the board, and I also really like the colors you picked.
I would like you to create more designs in the styles of the balloons and the rainbows and the sun behind the clouds, which are some of the things that appeared in the mood board.
And just create patterns that we can later on use and add to
(44:00):
onesies.
So it gave me multiple options.
And then I uploaded this, right? I uploaded an image from the internet of a design diagram of a onesie that shows the back and the front in just lines.
And then I said: I would like you to use the image I attached as a reference.
This is how we create designs, using a 2D variation of a onesie.
(44:23):
So please use the exact one I've used; just change its color to a color from the palette that you created, and apply one of the designs.
Remember, this is the front and the back.
So the image I uploaded has the front and back of the onesie, but I only need the front in this particular case.
So I asked it just to create the front, and I asked it to create
(44:45):
one with the sun hiding behind the clouds, because I like that pattern.
And it created this incredible design that follows everything I requested and could be used professionally right now.
And then all I did is say: this is fantastic.
Great work.
Now let's do the next step.
I would like you to create a photorealistic, photo-shoot-style version of this, with a baby wearing it as he's smiling and
(45:08):
standing in his crib. And it created this image.
This is mind blowing, again.
This process would've taken weeks for a professional design team, and now it takes minutes using ChatGPT.
Let's go back to the presentation.
Number 15 is just for fun.
Many of you have seen this, but it still blows me away that this
(45:29):
is doable.
So you can do a lot of fun things.
In this particular case, the big craze after the Ghibli-style anime thing was action figures.
So again, Joyce created this; her son is really, well, both her kids are actually really into TaeKwonDo.
And to surprise him, she took an image of him wearing his TaeKwonDo clothes and created this action figure package for
(45:50):
him.
And I find this really fun, and it's mind blowing how accurately it captures her son's details, as well as the whole TaeKwonDo gear and everything.
So how is this done?
In a very simple way, with a similar process to everything else we've done.
So if I scroll back, she uploaded his image and she said: create image.
Create a toy of the person in the photo.
(46:12):
Let it be an action figure.
Next to the figure, there should be the toy's equipment, each in individual blisters: a TaeKwonDo helmet, a pair of sparring gloves, TaeKwonDo armor, et cetera, et cetera.
Describe what needs to be there.
And it created this really cool and fun image.
To summarize: this tool literally changes everything we know about how
(46:36):
you can use AI to create visual aids for anything, from PowerPoint presentations to ads including text, infographics, fun stuff, final designs, steps in the professional design process, interior design, exterior design, all the things that we did, in minutes.
(46:56):
And the biggest benefit is that it understands the context of what you're actually trying to create.
And if it doesn't, you can continue the conversation, which, again, is something you cannot do in pure image generation tools like Midjourney and DALL-E and Stable Diffusion.
If you haven't played with it, just go ahead and play with it.
(47:20):
It takes a while to create each and every one of the images.
It takes a minute to two minutes, sometimes five, to create the images.
You can ask for different aspect ratios, but start just by having fun, and then start thinking about what business use cases you can apply this to.
I can guarantee you that if you have any need to do any kind of design in your work, whether this is your
(47:43):
actual work and you're a graphic designer, or you're not a graphic designer but you can now do this in your work, either to get inspiration to give to your design team, where you can very quickly get to a very solid draft that they can then perfect, so instead of explaining to them what they need to create, you can actually show them in seconds. Or even if you're the CEO of the company or the head of something or somebody in
(48:05):
product and you wanna be able to do all these things, you can now do it in minutes in ChatGPT.
That's it.
I'll remind you about the course: if you wanna learn more stuff like that, across everything that AI can do, come join us on May 12th.
You can look for the link in the show notes.
For those of you who have joined us live, thank you so much for spending the time with us.
(48:25):
There are multiple people on LinkedIn right now, from multiple places, and multiple people on the Zoom.
I appreciate every single one of you.
I know you could do other stuff on a Thursday afternoon with your time, and so I really appreciate you being here and joining us.
Those of you who haven't joined us, you can come join us.
We do this every Thursday at noon, going into detail like we
(48:45):
did today on a specific business-related AI use case that you can learn how to implement in your business today.
So come join us live.
We also do the Friday AI Hangouts every Friday at 1:00 PM, where it's just an open, ask-me-anything kind of environment.
Everybody participates and brings use cases, and you can come and join us for those as well.
That's it for today.
Have an awesome rest of your day.