
May 20, 2025 • 52 mins

👉 Fill out the listener survey - https://services.multiplai.ai/lai-survey

Most business leaders know AI can generate stunning visuals.
But turning that potential into real, consistent, on-brand creative?
That’s where most teams get stuck.

In this live session, we’re pulling back the curtain. Ross Symons will walk us through the exact workflows he teaches marketing teams worldwide. You’ll learn how to pick the right image-generation tools for the job, how to maintain visual consistency, and even how to integrate your product photos into ad-ready AI-generated assets.

Ross has spent the past decade blending creativity with code — from viral origami animations to helping global brands reimagine content with AI. He’s not just using the tools — he’s pushing them to their limits and building systems others now rely on. And now, he’s here to show you how it’s actually done.

If you're a business leader, marketer, or content creator looking for real, tactical value from AI — you’ll want to be in the room for this.

About Leveraging AI

If you’ve enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Isar Meitis (00:00):
Hello, and welcome to another live episode of the

(00:04):
Leveraging AI Podcast, the podcast that shares practical, ethical ways to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host, and as you probably know, everybody loves creating visual content with AI.
First of all, it's really fun.
You can create really cool stuff that you just couldn't create before.
But more importantly, if you know what you're doing, you can

(00:24):
also make it a very effective business tool to promote whatever it is you want to promote.
But the key in that sentence is if you know what you're doing.
And to be fair, most people do not exactly know what they're doing.
We're kind of winging it when it comes to creating graphics with AI or videos with AI, myself included.
And there are some very serious issues when you try to create

(00:46):
images with AI for business purposes.
One of the biggest ones is consistency.
And consistency is key when you're doing business-related image generation for several different reasons.
One, it's your brand, right?
There are some brand guidelines you're trying to stay aligned with.
And the other is, if you're selling a product, well, guess what?
The product needs to look like the product, your logo needs to look like your logo,

(01:08):
the people who represent it need to look consistent, and so on.
And the same is also true when you're doing things just for fun.
If you're just trying to do storytelling, whether with your kids or for any other purpose, then the characters need to stay consistent.
And so knowing how to do that is an interesting mix between science and art, right?
On one hand, you need to understand the science of how these AI tools work so you can make things look consistent.

(01:31):
And on the other hand, you need the other side of your brain to be creative and come up with ideas that will be interesting and will capture people's attention.
And so, as I mentioned, it's this balance between right brain, left brain kind of use of AI, and this is why I'm really, really excited about our guest today, Ross Symons.
So Ross started his career as a developer writing code and
(01:54):
shifted to the creative side, where he's been really successful.
And so he really is this amazing mix of left brain, right brain success, which makes him the perfect person to, first of all, explore this on his own, but also share with us specific ways on how to do this effectively.
So Ross became obsessed with creating images with AI before

(02:17):
even ChatGPT came out, so way before most of us.
And he has poured himself into this, and he is delivering amazing training to people and creative teams in enterprises on how to leverage AI to generate business-related images and videos.
And today he's gonna walk us through the process and the key points that we all have to know in order to create effective AI

(02:40):
visual content that maintains style and brand and character consistency, which all of us desperately need, because it's the future.
And it allows us to create amazing content much faster and much cheaper than we could have done so far. Now, since this capability is something everybody needs, I'm really excited to welcome Ross to the show.
Ross, welcome to Leveraging AI.

Ross Symons (03:45):
Thank you, Isar, thank you so much for having me.
Brilliant introduction.
Thank you so much.
I'm quite humbled by that.

Isar Meitis (03:51):
I'm good at introductions.
All I need then is smart people to do everything else.

Ross Symons (03:54):
Well, whether I'm smart or not, we'll decide, you know, through the course of the day, but thank you so much for having me.
Yes.
You know, as Isar mentioned, I come from a technical, creative background.
I did many things across my journey, and I run a company called Zen Robot, and we focus on AI training for creatives and marketing teams, and we also do creative production, which is, you know, a field I was in many years ago.

(04:16):
Yeah.
You know, just being part of this whole journey and the whole AI space and sharing ideas with creatives, I'm very active on LinkedIn, and I feel like, you know, when I create a piece of content, it's work for me, but it doesn't feel like work.
It really is just something that I have such a passion for.
There are just so many options, so many different ways of using this technology.
I think it stems from me having a computer in front of me since

(04:38):
I was, you know, six years old.
So, you know, just continuously working with it at the time.
You know, you get this thing and you think it's just for games, but if you sit behind it for long enough, you realize it's pretty powerful.
So, yeah, it's an honor to be here.
Thanks for having me.

Isar Meitis (04:52):
No, I'm really excited.
Like I said, you know, I create images with AI tools, you know, the usual suspects: Midjourney, Flux, Ideogram, ChatGPT, Gemini, depending on what I'm trying to create, and we can talk about all of that afterwards.
I'm sure we will.
But I get what I want in many cases, and I struggle in others.
And part of it is because I don't have a very well-defined

(05:12):
system and process.
I experiment more or less every time from the beginning, depending on what I'm trying to do.
And I know you're the other way around, or not the other way around, you're just a few steps ahead of most of us in this exploration process.
And so I think learning how to approach this in a way that is consistently working will be fantastic.

Ross Symons (05:30):
Amazing.
Cool.
So let me share my screen here.
So what I'm gonna do today is I'm gonna show you.
Look, to explain what you've just asked in like 45 minutes is near impossible, but I'm gonna do my best to try and build a foundation that, you know, hopefully anybody listening to this will be able to walk away from and go, ah, okay.

(05:51):
That guy said the one thing, and this is gonna help me going forward.
So we'll get into the brand stuff towards the end, but I wanna take you through three tools that I use regularly.
One is Midjourney.
The other one is ImageFX, which is, I guess, Google's image generation tool, and the image generation portion of ChatGPT, which, you know, I dunno if it's DALL·E 3 or if it's
(06:12):
image generation, you know, it's ChatGPT image.
Regardless of what it's called, that's the thing.
So what we're gonna do is, well, look, firstly, a couple of challenges that I find: obviously consistency.
Like you mentioned, consistency in imagery and consistency in style is always something that, being part of a brand team, is very important.
So I'm gonna take you through and show you how I would build

(06:33):
on a prompt.
So starting with a very basic prompt and slowly adding, I guess, parameters and just more context to the prompt.
'Cause with all of these AI tools, all of it is about context.
It is about how much information you can put in, how much relevant information you can put in.
There's an old saying in development, which is garbage in, garbage out.
And it's exactly the same with all of these tools.
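Ross's garbage-in, garbage-out point can be made concrete: each pass adds more physical detail to the same base prompt. A minimal Python sketch of that layering (the function name and descriptor strings are illustrative, not any tool's actual API):

```python
# Build an image prompt by layering physical detail onto a base subject.
# Vague emotional words ("beautiful", "exciting") are deliberately left out:
# the model needs physical attributes, not feelings.

def build_prompt(subject: str, *detail_layers: str) -> str:
    """Join a base subject with progressively more specific descriptors."""
    return ", ".join([subject, *detail_layers])

base = "closeup portrait of a woman"

# First pass: just framing and light.
lit = build_prompt(base, "looking directly at camera", "natural soft light")

# Later pass: camera and physical attributes added on top.
full = build_prompt(
    base,
    "looking directly at camera",
    "natural soft light",
    "85mm lens",
    "shallow depth of field",
    "highly detailed skin texture",
    "light freckles",
)

print(lit)
print(full)
```

The point of the sketch is only that the prompt grows by concrete, physical descriptors; which descriptors matter depends on the tool.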

(06:55):
And many people get frustrated because, you know, I've asked so many people, have you used Midjourney?
They're like, yeah, I tried, but I just got bad results.
I'm like, well, the problem's not Midjourney, unfortunately.
It's actually you.
And I'm not passing any judgment.
It's just that there's a new way to engage with these tools, and it's a new understanding that we have to develop in working with them.
And a lot of the workshops that we do are about that.
It's not about which tools you use.

(07:16):
Because that's one of the other things, and this is why I'm gonna go through these three tools.
I don't see these as the most powerful.
They are just tools that you can use.
But hopefully I give you an understanding of, you know, what you are able to do, which tools are better, because that's another thing.
We get asked all the time, what's the best tool to use for images?
It's like, I don't know, what's the best car in the world?
You know?
Yeah.
What's the best clothing brand?
There's no right answer.

(07:36):
And we are at the stage now where, you know, the maturity of these AI tools has reached a point where there are best tools for specific jobs and better tools for other jobs, which is great.
I mean, right at the beginning it was so frustrating, using tools where you try to create something and it was just like, come on, surely it can be better than this.
'Cause, you know, it's a computer, and surely the computer should be doing better than this.
Like, we're in the future and it's not working.

(07:58):
So right now we are at a place where I think, from a visual perspective, maybe not so much with video, but definitely with imagery, you can create pretty much anything.
So, I'm.

Isar Meitis (08:09):
I'll add a couple of things.
First of all, I want to thank everybody who's joining us live.
So I have people with us live on the Zoom and people live with us on LinkedIn.
And I want to thank you for spending the time with us.
Feel free to ask any questions, so write them in the chat, either on LinkedIn or on Zoom.
And for those of you who are not watching us live and cannot see the screen, I wanna say two things.
One, we'll explain everything that we're seeing on the screen.
We're gonna read the prompts, we're gonna explain in words

(08:30):
what we're actually seeing, but also, if you have the option to watch this, meaning if you're not driving your car or walking your dog right now, we have a version of this on our YouTube channel.
There's a link to the YouTube channel in the show notes.
So you can literally click a button from where you are right now and go and watch the YouTube version of this as well.
Or you can do both.
You can listen to us while you're in the car, and then when you get home, if you wanna see the actual images

(08:50):
and the outcome of everythingthat we're doing, you'll be able
to do both.
The other thing that I wanna saythat is to piggyback on two
things that you said.
One, everything in these tools,everything including text and
everything is context, iseverything.
Like if you give it more contexton who you are, what you're
trying to do, what's the targetaudience, what's your brand,
what are like, all of that justadds to your ability to get
better results.

(09:11):
And the second thing that you said that is very important, and I actually say it a little differently than you: yes, there are different tools that are better for different things, but from a visual perspective, all these tools should be good enough for probably 90% of use cases. Once you get to edge use cases, yes, Midjourney is gonna be better for this and Ideogram is gonna

(09:31):
be better for that and Chacha isgonna be better for this.
But when it comes to, I don'tknow, 80 to 90% of what we need
on the day to day frommarketing, all these tools will
be good enough, some will beeasier, which is a whole
different story, right?
You can get to the outcome.
It's just gonna be easier withthis tool versus that tool.
But now I will let you dive intothe details because I'm sure
people are really curious to seewhat you have to share.

Ross Symons (09:51):
Okay, cool.
So yeah, like I said, I've got these three tools that we are gonna work through.
So let's start with Midjourney.
So I have created a simple prompt here, which is a portrait of, lemme just check here, a portrait of a woman.
Yep.
So now if you look at this, is it a portrait of a woman?
Absolutely.
Is it a hyper-realistic portrait of a woman?
Absolutely not.
Is it a painting?
Maybe.
Is it an illustration?
I don't know.

(10:11):
Context.
How much information have I given the machine?
I've told it I want a portrait of a woman, and it has delivered.
And I think this is where it gets frustrating for people, 'cause they're like, this is not what I asked for.
And the reality is, it's exactly what you asked for.
They don't get it.
These tools have been trained on vast amounts of data.
I don't even think you can really comprehend how much information has gone into training these models.

(10:31):
So the deeper you can get into what it is you're asking for, and the more specific you can be from a physical standpoint... that's another thing that people forget: you are describing physical attributes of an image.
Leave out the fluff, leave out the beautiful and the fantasy and all these wonderful words that you think are gonna help; they would help a human feel an emotion towards something, you know, excitement.

(10:52):
Create an exciting image of a portrait of a woman.
What does exciting mean?
Maybe there are elements that would, you know, suggest that this person is excited, but are you excited because you're seeing the image, or is the image showing you someone that is excited?
It doesn't know.
It can't translate.
You get where I'm going with that?
So this is, you know, an example of a portrait of a woman.
Amazing.
Cool.
We've got what was asked for here.

(11:12):
So, exactly the same prompt in this tool, which is Google's ImageFX.
Okay.
Portrait of a woman.
Exactly the same thing.
A couple of things to point out here: firstly, these are hyperrealistic, in my opinion, hyperrealistic photographs.
They're all looking directly at the camera, studio sort of vibe.

(11:33):
There is one thing that I have to point out here: there is racial diversity here, which the other tool doesn't produce.
I think it's just white girls in the other one.
That's the truth of the matter.
So,

Isar Meitis (11:42):
So again, for those of you who are listening: we have a redhead, we have a darker-skinned woman with short hair, we have a very white-skinned girl, and we have an older woman; she's older and she's wearing more formal clothing as well.
But all of them are definitely portraits that look like a picture from a camera by a professional photographer.

Ross Symons (12:00):
And all a portrait of a woman.
There we go.
And all a

Isar Meitis (12:02):
portrait of a woman.
Correct?
Exactly.

Ross Symons (12:04):
Cool.
So far, all of these tools are adhering to what we've asked for.
Let's go to a good old friend, ChatGPT.
Now here, ChatGPT works kind of differently.
I was under the impression that it also used a diffusion process, which is the process, I'm not gonna get into that, but it's the process of generating an image.
ChatGPT, for example, is a large language model, so it uses text as opposed to imagery.

(12:25):
So behind the scenes they have worked out a very clever way of generating images.
It's gonna be, I think, the standard going forward for a lot of these tools.
But if you've ever used ChatGPT image, it used to be terrible.
It was really bad.
DALL·E 2 and DALL·E 1 were unusable, in my opinion.
It was fine for making little kid pictures and, you know, random sketches and whatever, but when they upped their game to this, which was about two and a half months ago, it just

(12:47):
went viral and everybody knew what it was doing.
'Cause you could turn yourself into a boxed action figure, or yourself as your dog, or yourself as a cat, or a Studio Ghibli version of yourself.
Anyway, we're not gonna get into that.
So I've just said, yeah, go ahead, please create an image.
So just for the sake of speed, I've generated all these images already so that we don't have to wait for the process.

(13:08):
On that note, out of the three that I've just shown, Midjourney, ImageFX and ChatGPT, ChatGPT is the slowest.
It also only produces

Isar Meitis (13:14):
By a big spread.
And you get only one image everysingle time.
So you only

Ross Symons (13:18):
One image.
So, two things there.
Context is extremely important here, and you're probably gonna waste your time and get frustrated, because you're not gonna get the thing unless you're asking for something specific, like I said, the Studio Ghibli version of yourself, which was a trend a while ago, whether you saw it or not.
So again, portrait of a woman.
Do you have a portrait of a woman?
Absolutely.
This, I think, is maybe kind of similar to what was produced here.

(13:38):
I mean, not exactly the same, but they could fall into the same thing.
Another thing that ChatGPT does, which is a bit frustrating for me, is it puts a sepia tone on the images.
I don't know if you've noticed that it creates this yellowish tone, which I have to take into Lightroom or Photoshop and remove every time. They will sort that out, but that's just one thing I have noticed.
Anyway, okay, let's not get too bogged down in this.
So now we've got a portrait of a woman.
Let's go.

(13:58):
Okay, so now, let's go to the next version.
I've added context to this.
So I've said a closeup portrait of a woman.
Okay.
Did it adhere to the prompt?
Absolutely.

Isar Meitis (14:11):
So again, for those of you who are not seeing this, the image is now the face of the woman filling up the entire screen.
You can't even see her entire hair.
You cannot see her neck.
You cannot see what she's wearing, whereas all the previous images showed all of that.

Ross Symons (14:25):
Yeah, exactly.
Because contextually we haven't told it, well, we haven't programmed it, in this case, 'cause Midjourney requires some referencing, we haven't told it to choose a specific character.
We've just said create a closeup portrait of a woman.
So again, it's definitely adhered to the prompt.
Let's look at this.
This is ImageFX, a closeup portrait of a woman.
Again, we've got diversity here.

(14:46):
The portraits are a little bit more AI, I mean, you can see there is something about that AI skin look, which is just too perfect.
But there's a workaround for that as well.
Oh, my bad.
Oh, there we go.
That just disappeared.
There we go.
So that is ImageFX, looking good, by the way.

Isar Meitis (15:03):
Another interesting thing: in this particular case, yes, they're a little more close up than we saw in the first version of the ImageFX images, but they're not as cropped in as the Midjourney ones.
So Midjourney really cropped to just the face.
Yeah.
And Google did just a slightly more zoomed-in version than what we got before.

Ross Symons (15:22):
In my opinion, it honestly feels like Google is going for the very, very safe option, where it's kind of like, yes, we don't wanna suggest too much in terms of diversity; in terms of the actual image, it's not gonna go too far.
I often get blocked with images as well.
It's like, this does not adhere to our, you know, content policies.
And I try and search for the word that could have blocked me, and I still can't find it, because there are some words that, yeah, it naturally is gonna block. The

Isar Meitis (15:42):
Same thing happens to me in ChatGPT sometimes, weirdly, like on an image of me.
But it's me, okay?
And I told you it's me.
Why are you blocking this?
And then you ask for something that should have been blocked, and it's like, yeah, no worries.
Like a Mickey Mouse version of whatever.
It's like, yeah, sure.

Ross Symons (15:57):
Totally.
So we've got a portrait of a woman.
Then I've just said, now, because we're in one context window, okay, this is another thing.
When you're creating images in ChatGPT, just know that if you create one image and you wanna create a completely different image, you need to open another browser window.
This is a very important factor, because what it is gonna do, and you'll notice through this thread, I've said a closeup portrait of a woman, it's taken the woman that I created initially, and it's

(16:18):
created a closeup portrait of her.
So now, if I get upset because I'm like, oh, but that's not what I asked for, it doesn't understand. Contextually, you are in the same browser window as you were at the beginning.
You've asked me to create something; this is what I'm creating.
And I think, from a safety perspective, it wants to create consistency. With ChatGPT, if you're gonna create one single image and try and keep it in one thread, you will get consistency, provided you have the right prompt at the
(16:41):
beginning.
And if you aren't getting the right result, move to a different tab and open it up in a completely different thread.
So you can see where I'm going with this now, as I'm building on, say

Isar Meitis (16:50):
One thing about what you said right now, which to me is the biggest benefit of using ChatGPT versus the other tools, and by the way, you can use the Gemini image generator, Imagen 3, in a Gemini chat, which then gives you that benefit that you're getting from ChatGPT as well: it does follow a conversation, which none of the other tools do.
(17:11):
Like, if you're in Midjourney, you just gotta prompt better to get a better version or a slightly different version, where in ChatGPT you can explain in simple English and iterate on stuff, because it's a single conversation and it understands what you mean.
Does it always work?
No.
Does it work most of the time?
Yes.
Can you do magical things like upload your brand guidelines

(17:31):
PDF 20 page long and ask you toadhere to that and it knows how
to read that?
Yes.
And you cannot do that in MidJourney.
So when I said some tools arenow easier to use, they don't
necessarily give you a betteroutcome, but they're easier to
use.
One of the reasons Shafi iseasier to use because it just
understands English.
You don't need to know how toprompt, as good as you need to
know how to prompt in, midjourney as.
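The difference Isar is describing, stateless prompting versus a tool that follows a conversation, can be sketched as a chat that carries its whole history into every request. A toy illustration in Python (this is not any real image API, just the idea):

```python
# Toy illustration of why a chat-based image tool can iterate:
# each request carries the full history, so "now a closeup" is
# interpreted against the image already described earlier.

class ImageChat:
    def __init__(self) -> None:
        self.history: list[str] = []

    def request(self, instruction: str) -> str:
        """Record the instruction and return what the model 'sees'."""
        self.history.append(instruction)
        # A real model would condition on the whole history; here we
        # just join it to show the effective context of the request.
        return "; ".join(self.history)

chat = ImageChat()
chat.request("portrait of a woman")
effective = chat.request("now a closeup of the same woman")
print(effective)

# A stateless tool, by contrast, would see only the last instruction,
# which is why Ross opens a fresh thread for an unrelated image.
```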

Ross Symons (17:51):
Exactly.
And Midjourney, from that angle, for me, allows you so much more control.
It does give you more options.
It's quicker.
I find when I'm working on a very creative project, Midjourney is my first port of call.
I'll use ChatGPT to help me write my Midjourney prompts, maybe, and then bring it in here and be like, okay, cool, let's iterate on that and change.
Okay, so you can see where I'm going with this now.
I've created a whole bunch of prompts here, and as we are going up, we have a closeup portrait of a woman looking

(18:13):
directly at the camera.
So now these maybe could have fallen into the same category, but we've just specified we want the woman looking directly at the camera.
Great.
Okay, so now we've added more to the prompt.
We've said closeup of a woman looking directly at camera, natural soft light, 85 millimeter lens, shallow depth of field.
So whether you know what a lot of that is... natural soft light, I mean, that's pretty self-explanatory.

(18:33):
But the 85 millimeter lens is avery closeup, it's good for
closeup photography.
This is something that I don'tthink many people.
Bother with, but it's notessential to know.
But if you want to get reallygranular in terms of creating
and generating images,understanding a little bit about
photography, about angles, aboutlighting, and which lenses to
use and which cameras are bestfor certain things will
definitely help you tell betterstories.

(18:53):
'cause at the end of the day,that's what we're in this for.

Isar Meitis (18:56):
And I agree with you a hundred percent.
There's actually a question that I didn't ask because it came a little too early, but it's the perfect time to ask now.
So Stefan, who is a regular on these shows and also on my Friday Hangouts, asked, when I show them stuff like that, and I show them that I'm using actual camera definitions: how the hell am I supposed to know that if I'm not a photographer?
And the reality is, you don't have to, but it helps a lot.

(19:17):
And you can look for a photography cheat sheet for AI image generation, and you'll get an amazing page that will explain which lenses, which lighting, which cameras, which film, if you wanna be really specific, you need to use, and even setups of specific aperture values and speeds and stuff like that.
So you can find multiple cheat sheets out there today and just

(19:39):
experiment, you know, and just play around and say, oh, now I understand what he's doing, and then you'll know.
But the cheat sheet is a very good starting point.
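A cheat sheet like the one Isar describes can be as simple as a lookup table of photographic terms appended to a prompt. A hypothetical sketch (the entries below are common photography conventions, not settings from any specific tool or cheat sheet):

```python
# Tiny photography "cheat sheet" for image prompts: map a shot type
# to camera/lighting terms so you don't have to memorize them.

CHEAT_SHEET = {
    "portrait": "85mm lens, shallow depth of field",
    "landscape": "24mm wide-angle lens, deep depth of field",
    "product": "100mm macro lens, softbox lighting",
}

def with_photography(prompt: str, shot_type: str) -> str:
    """Append cheat-sheet camera terms for the given shot type."""
    return f"{prompt}, {CHEAT_SHEET[shot_type]}"

print(with_photography("closeup portrait of a woman", "portrait"))
```

The useful habit is the lookup itself: once the terms live in one place, experimenting with different shot types is a one-word change.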

Ross Symons (19:46):
Absolutely.
You know, we've got a masterclass that we do monthly, and that's a massive part of it.
It's like a cheat sheet where you've just got a list of all the things, 'cause I'm not gonna remember all of these.
So it's kind of just a touch point that you can use.
So as I'm scrolling up here, I'm getting more granular in terms of the details.
So the last one is a closeup portrait of a woman looking directly at camera.
So we've kept the lens, the soft, natural

(20:06):
light, shallow depth of field.
Now we are adding physical attributes, which are gonna enhance the realism.
Okay: highly detailed skin texture, light freckles, realistic pores, clean minimal hairstyle, no makeup, on a neutral background.
So you can see now how this becomes a lot more real. For me, this is as close to photorealism as you'll get, when

(20:28):
you're making the images bigger and scaling them up.
Choosing a specific tool, like there's one tool I use called Magnific, which is amazing, you lose a little bit of detail in getting it bigger. Are you gonna use this for a billboard ad?
I don't know.
Maybe we are gonna get there at some point.
But for social media, website, banner ads, anything where you need online content, this is perfect.

(20:48):
So yeah, that is Midjourney.
That's done.
Okay.
Let's go to ImageFX.
And I'm gonna scroll through to the same prompt.
Okay.
And this did something very interesting, which I thought was just fascinating.
So it's exactly the same prompt, closeup with all the details.
The character looks almost exactly the same in all these
(21:10):
images.
Now, has it picked a character?
Where did it draw this inspiration from?
I have no idea, because I haven't specified the ethnicity.
I haven't specified... yes, the hairstyle I have, but these could be sisters.
Yeah.
It's just very, well, well,

Isar Meitis (21:23):
It's exactly the opposite of what we've seen so far, right?
So far it gave us the biggest diversity in images, and when we became very specific, they all kind of look the same.

Ross Symons (21:30):
Exactly.
And what you can and can't do here, well, the difference between this and Midjourney, for example, is that maybe in Gemini you could use an image reference.
So: please reference this image and then build an image, or create an image based on the aesthetic features of the image I've uploaded.
You can't do it directly in here.
I just like to use this because it's quick; it's easier for me to use.
I started here.
I don't use Gemini much, so that's just, you know, the reason I do that.

(21:51):
So let's go to ChatGPT. Now, I've just kept building on here.
So you can see the closeup portrait of a woman, and the realism that gets achieved here is pretty crazy.
I mean, I think it adhered to the prompt pretty well.
It is a very realistic image. I mean, the first one was pretty realistic already.
Yeah.

Isar Meitis (22:09):
I think this one definitely, going back to both the character there is in the face, as well as the level of detail, the smaller details, like, you know, the black lines under her eyes because she doesn't have makeup, because that's what you asked for, a little bit of wrinkles here and there. The last two images are just incredibly good,

Ross Symons (22:26):
Detailed.
And that's because I've added, you know, light freckles, realistic pores, clean minimal hairstyle.
It's those details, because we've never in our lives before... yes, we've had to describe something physically, but not on a level that a machine now has to understand, if that makes sense.
Yeah.
So it's this new language we are developing.

Isar Meitis (22:43):
I wanna add one more thing, which is something that is very interesting to me, especially with this specific example that you gave, and I love the way you're building this: all these women on Midjourney and on Google look like models; they're all extremely beautiful.
The one on ChatGPT looks like a common person, like somebody that is your next-door neighbor.
(23:04):
Yeah.
And you didn't ask for one or the other in any of the tools.
And you can obviously specify one or the other for any of the tools if you want to go for that look or the other.
But it's interesting to me how the self-selection is very obvious between the different tools.

Ross Symons (23:19):
It's amazing.
It's crazy.
Yeah, I agree with you.
And the thing that I was alluding to earlier about being specific, I mean, if you say, you know, create an image of an ugly person: ugly is relative.
Create an image of a beautiful person: someone might look at it and go, I don't see that as a beautiful person at all.
But someone else might look at that and go, ooh, you know, not quite my tempo, you know.
But it's all about

(23:39):
describing physically what's in your mind. You know, maybe someone with massive scars on their face is, to someone, unattractive, you know?
But someone might find that quite kinky, so, you know, it all comes down to taste; it comes down to context.
So, yeah, I don't wanna harp on too much about this.
Now, I've done the same thing; this is just to show that you can add text into images.

(24:01):
It was something that, up until recently, was almost impossible to do.
If you ever tried to put text in any of these tools, it would very seldom come out, unless it was a single word, like hello or hi or welcome, or whatever the case was, whether you've used this, you know, to try that or not.
Hear it from me: it was bad.
The text was just all over the place.
And that's because I think these tools are image generation tools

(24:22):
and not text and design tools.
which you'll see in examplesgoing forward now.
So yeah, this is just a basicexample of, and I've built on
the prompt, which is, a cozyliving room now, cozy living
room.
We've got, you know, it is, it'sa cozy living room.
Are they?
This one is animated, looks likesomething from a, you know,
story of some kind.
again,'cause contextually wehaven't told it to be specific

(24:43):
about what it is that we'relooking for.
Then I've gone into add a dog onthe couch.
this isn't even a, this lookslike a painting or an
illustration of some kind.
Same here that looks like a cat

Isar Meitis (24:57):
and it's not on the couch, it's on the floor. And the next one is not on the couch either. You know,

Ross Symons (25:00):
So where I got to here: I started with Midjourney, because Midjourney is the worst at putting text into images. So the prompt was: a cozy living room in the late afternoon, warm natural light, soft shadows, a golden retriever sitting on the couch (which it got right), books and plants in the background, realistic textures, and a framed quote on the wall that says "Welcome home, friend" in handwritten text.

(25:22):
So, if you can tell me what that says,

Isar Meitis (25:25):
it's the welcome,

Ross Symmon (25:26):
says welcome.

Isar Meitis (25:26):
Other than that, it's not exactly

Ross Symons (25:29):
what it needs to be. Not exactly. And it's tried on all four images. That's one thing with these tools: they usually generate four images, because one of them might be the direction you want to go in, but chances are it's probably not. And this one kind of got it; I think this is not a bad job. But now the dog is not on the couch, and it's not a

Isar Meitis (25:45):
golden retriever,

Ross Symons (25:46):
It's not a golden retriever. It's got a weird head. There are too many plants in there. Anyway, you see where I'm going with this. So that's just what that tool was able to do. If we move forward now, let's go to ImageFX. Very safe options, very realistic images, because I assume, if somebody types that in, they want a photo of whatever it is they're looking for. So: a cozy living room in the late afternoon, there's a dog on the couch, a golden

(26:07):
retriever on the couch. So again, very similar. And you see, it's interesting how, in both cases, the more granular we became, the more similar the images became, which I've only noticed as we're speaking now. Which is just an interesting,

Isar Meitis (26:19):
but that's specifically on Gemini, like in ImageFX. Yes. The four images became almost identical.

Ross Symons (26:26):
Almost identical, from the same angle, with the same dog and the welcome home sign on the back wall. "Welcome home, friend," which it got right. I think it did a pretty good job there, on all of them. So all of them are spelled

Isar Meitis (26:37):
correctly. And all of them look like handwritten text. Yeah.

Ross Symons (26:39):
Yeah. So let's start thinking about this from a branding perspective. If you want to sell a specific couch, or a specific dog for that matter, or you want to sell your art on a wall, this is how you start thinking about it. Maybe you don't want "welcome home, friend." Maybe you want to place your image, or your art piece, or a pot plant you're trying to sell, or a cushion of some kind. For a single use case

(27:00):
it's impossible to cover everything, but it is very useful across the board. So that's from an interior perspective. The last thing I want to go through, and I'm conscious of time, is an anthropomorphic fox. Anthropomorphic: try and say that ten times in a row. An anthropomorphic fox, which is just a humanesque,

(27:21):
Anthropomorphic just means a human-style fox. Okay. We've got an illustration where he's got horns, which is kind of weird, and a top hat or some sort of cap on, and a coat. Cool-looking fox. Again, we've asked it for that, and to get more specific we've asked a second time; this time he's wearing a cloak. Okay, so now it's the fox wearing a cloak, standing in a misty forest

(27:42):
again. So this is starting to get a little bit storybook-ish. Yeah, we're still in Midjourney here. Now we've got him in a version which didn't listen to the prompt at all: looking toward the viewer with a calm expression. None of these images are looking toward the camera, or toward the viewer, which again is where I think it gets frustrating for people using AI for the first time,

(28:04):
particularly in Midjourney, because it does require iteration. It requires an understanding that you're going to be sitting there for a while, especially at the beginning. But the structure I'm using, where I'm building on the prompt, makes it so much easier to go back at a specific point and say: okay, change the fox to a squirrel; change the misty forest to a beach. At least then you're sticking to some sort of format. Also, if you want things to be more prominent:

(28:25):
For example, if you want a wide-angle shot, put that at the beginning of the prompt as opposed to the end. Lead with the things you want most. This works particularly well with Midjourney, and I use it with angles and composition. So if you have a wide-angle shot of a person on a beach or a fox in a forest, put the wide angle at the beginning as opposed to the end. Just a nice little hack there. So now we've added more details.
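For listeners who think in code, Ross's front-load-what-matters tip can be sketched as a tiny helper. This is only an illustration: Midjourney has no ordering API, and the prompt fragments and weights below are made up to show the idea of emitting the most important elements first.

```python
# Sketch of the "put what matters first" ordering tip for Midjourney-style
# prompts. This just assembles a comma-separated prompt with the
# highest-priority fragments first; the weights are illustrative.

def build_prompt(components: dict[str, int]) -> str:
    """components maps a prompt fragment to an importance score.

    Fragments are emitted most-important first, since earlier words tend
    to carry more weight in the generated image."""
    ordered = sorted(components.items(), key=lambda kv: kv[1], reverse=True)
    return ", ".join(fragment for fragment, _ in ordered)

prompt = build_prompt({
    "wide angle shot": 100,  # composition first, per the tip
    "an anthropomorphic fox wearing a green cloak": 90,
    "standing in a misty forest": 50,
    "cinematic lighting, 85mm lens": 40,
})
print(prompt)
```

Swapping the scores is the code equivalent of moving "wide angle" from the end of the sentence to the beginning.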

(28:46):
This is getting a bit more real. We've asked for the same fox wearing a green coat, standing in the forest, holding a glowing crystal in one hand, cinematic lighting, 85mm lens. Cinematic lighting automatically implies that it's going to be realistic, so you don't have to add things like "realism" or explain the details that make it look real. So this fox, not quite Fantastic Mr. Fox, but

(29:07):
he's there. He looks real-ish. He's quite charming. He's still not looking at us, although, well, we didn't specify that we want him looking at the camera. Getting to the last section of this, we've asked it to look directly at the camera, and I put that at the beginning of the prompt, so I made sure. Yeah,

Isar Meitis (29:20):
let's read the entire prompt, because it's actually very different, and the outcome is very different.

Ross Symons (29:24):
So I said: looking directly at the camera, an anthropomorphic fox with detailed fur texture, wearing a green coat with gold embroidery, standing in a misty forest, holding a glowing crystal in one hand, cinematic lighting with an 85mm lens. It didn't get it right with all of them, but he's a lot closer to looking toward the camera. Cinematically, you're breaking the fourth, I'm getting deep here,

(29:47):
but you're breaking what they refer to as the fourth wall, when the character in the movie looks directly at the camera. That very seldom happens. So if you bring cinematic lighting in, it's automatically going to make the subject look away from the camera.

Isar Meitis (29:58):
Yeah. One more thing I want to say to people. Here we're going for the look of the fox, right? We said it has a calm look. If you're trying to capture detail in faces and expressions, the best way to do it in Midjourney, and I think it's the only tool of the four that has this out of the box, is actually to do a close-up of the fox's face and then outpaint from that. So you can zoom out on a picture in Midjourney and ask it

(30:21):
to then add whatever you describe. And that's an awesome way to start with a highly detailed fox face, because the whole prompt is describing only his face and his look and so on. Then you zoom out, and you can add the forest and the mist and all the other stuff. So that's another little trick when you're trying to create

(30:41):
expressions or a very detailed face of a person, or an animal in this particular case: start close, and then zoom out from there.

Ross Symons (30:47):
Yeah. Outpainting, and I'm just doing a quick example here (well, this is essentially inpainting): you paint out the section you want to change, and whatever you put in the prompt box will be filled into that area. A very handy little trick that people use. So yeah, I don't really want to get deep into the subject-reference stuff, but just quickly: I have an image here, and I

(31:09):
like the style. Let's say I like the style of this image: there are pink clouds in the background, there's an ocean, there's a, you know, door leading away from a chair. I like this image. I want to take the same fox that I had, use the exact same prompt, but add that style into it. So simply, you add the prompt in, and you drop in the URL of the image you're using, which is the image I had there,

(31:30):
and that gives you options: you can put it in as a style reference, or use it as an image prompt. This is where you start getting real control over what your image is going to be, and also consistency. Maybe you're not going to be doing a fox in a green jacket for your brand (maybe you are), but the idea I'm trying to get across here is that this is how you

(31:51):
cross-pollinate: you grab images from anywhere on the internet, upload them, and use them as style references to guide your final image. You can already see here that simply dropping that in has changed the style of the image entirely. Midjourney also has something called sref codes (style reference codes), which are super handy and help you guide the style. So here it just looks very cool, and it's

(32:12):
definitely adhered to that. And you can crank up how strong the style is and how much you want it to change. Again,

Isar Meitis (32:19):
for those of you not watching, just to give you a little understanding of what we're doing: in Midjourney specifically, you can upload an image, or use a previous image you created with AI, and say, I want you to capture either the character (the subject) or the style of this image, and then apply that to the next image you're generating.

(32:39):
And that's basically what Ross did. He took a beautiful sunset, pink clouds over the ocean, and said, I want to use this style with the fox you have from the other image. So you're basically mixing the subject from one image with the style of a different image to get a new image that adheres to both, which is a very powerful tool for getting exactly what you want. Because instead of trying to explain with a million words the

(33:01):
style, which is something very amorphous and sometimes hard to explain, you can just bring an image and say, okay, this is the style I'm looking for, but I want this character or this product in this style, and then it's going to do the thing for you.
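For reference, the style-mixing step described above boils down to a single line of prompt text. A minimal sketch follows, with placeholder URLs; `--sref` (style reference) and `--sw` (style weight) are Midjourney parameters, but check the current Midjourney documentation for your version before relying on the exact flags.

```python
# Sketch of how a Midjourney prompt mixes a subject with a borrowed style.
# The URLs are placeholders. --sref points at the style image; --sw
# controls how strongly that style is applied.

def mix_style(subject_prompt: str, style_image_url: str, style_weight: int = 300) -> str:
    """Append a style-reference image and weight to a subject prompt."""
    return f"{subject_prompt} --sref {style_image_url} --sw {style_weight}"

prompt = mix_style(
    "an anthropomorphic fox in a green cloak, standing in a misty forest",
    "https://example.com/pink-clouds-ocean.png",  # hypothetical style image
)
print(prompt)
```

Cranking `style_weight` up or down is the "how strong the style is" dial Ross mentions.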

Ross Symons (33:13):
Exactly. And off the back of that, you mentioned style and character consistency. So I picked this character, the fox in his little green cloak. And with the image that I had, I basically wanted the fox in a different scene, but I wanted

(33:33):
the fox to be a consistent character. So what I'm scrolling through now is the same fox in different positions, with the same style used across all of them, and this is how you start bringing consistency into elements, objects, and particularly characters. When you're bringing a subject in (I didn't do it exactly the right way here), give it a clean background, so you just have the character. Get rid of the background, Photoshop it out, make it white, and

(33:55):
then it focuses entirely on just the character themselves. And the more detail you have in the image itself, the more it's going to place that detail into the final image of the character. That's just a quick rundown of what's going on there with Midjourney. Then, in the example we have here, an anthropomorphic fox,

(34:16):
it's done exactly the same thing, but here we have no control. I don't know, these images are kind of cool. We've just built on that. Again, something I wanted to point out here: the difference between this image and the other one is purely the aspect ratio, the orientation. And that has a massive influence. This is where understanding a

(34:37):
little bit about film and a little bit about how to take photographs helps. If you say you want a portrait of a fox or a portrait of a person, place it into a portrait orientation; don't make it a wide shot. It understands better that way. Just a neat little hack that often helps you get the results you want. So we're just building on our fox here. He's getting a little more detailed, a bit more real, definitely adhering to the prompt, but very similar.

(34:57):
You can see that it's just picked a character and kept him across the images. Very safe: this is the character we're going with. So, you know, do with it as you please. If we go and have a look at what ChatGPT has done... oh, we actually missed ChatGPT with the "welcome home." You can see the "welcome home" here is perfect. That's one thing ChatGPT does well. Yeah, so I

Isar Meitis (35:15):
think, from a text perspective, ChatGPT right now is the best tool for adding text to images. Number two, which was number one for a very long time, is Ideogram. And then number three is Gemini. So if you want to get the text right in the image itself, go to ChatGPT. Nineteen out of twenty times it's actually going to get it right, even if it's a lot of text.

Ross Symons (35:34):
Yeah, exactly. So here we've got our fox, and it's made it illustrative. We've added more details, and I think it's come out with an image that's pretty cool. Again, very similar, maybe a little more moody than the one we got in ImageFX, with a yellow sepia tone, which will get changed anyway. So yeah, if there are any

(35:54):
questions or anything, you're very welcome to speak up. But okay, so, oh yeah,

Isar Meitis (35:59):
the next one, I think, is awesome. So I think now we're going to dive into product photography, right?

Ross Symons (36:02):
Yeah. So look, this is not a real brand; it's a fictitious brand I created. I came up with the word Fleece. Why that popped into my head, I have no idea, but it's Fleece and it's a fragrance brand. Cool. So what I did is I took the image, cut it out, and in ChatGPT I said, cool,

(36:22):
okay, place this... This is just one workflow; from here there are a million directions to go. Is there a best way to do it? I'm not going to say there is, but this is one way of doing it. Getting your product into a photo is very difficult to get perfect. If you're looking for perfection, then hire a designer. But if you want content that's usable on social media, quick-hit content, this is one process of going about

(36:44):
it. So you cut the image out and make sure there's no background. Okay. Then I've asked (because you can just speak in normal English, or any language): place this fragrance bottle on a matte stone pedestal, soft lighting, editorial-style shot. Cool. So, the difference here: you can clearly see it's just squashed the bottle. For some reason it made the bottle too small. I mean, it still has the branding, which is great;

(37:04):
all the text is very legible, and it's a nice image, but the bottle is not right. If you're going to be selling this on Amazon or whatever, and the product doesn't look the same, people are going to get it and be like, what's going on here? This is not what I asked for. You get where I'm going. Yeah.

Isar Meitis (37:17):
Two things about this that I've learned work really well in ChatGPT specifically, when it comes to placing an actual product into an actual image. One is: I ask it to pull the text off the product and write it out as plain text. Tell me, row by row, what does it say, what's the style of the font, and how big is the font?
(37:37):
And then you kind of get adescription in English of what
is written on the bottle.
In this case, it's just twothings, but usually a bottle
has, you know, the size and howmuch liquid it has in it, the
percentage of alcohol, if it'sa, sunscreen, then all the
different and like, like there'sa lot of stuff that's going on,
usually on, on a product.
And if it writes.
On its own, all this stuff downit usually then you tell it.

(37:58):
Okay, now use everything you'vewritten, write it back in the
same style on the product.
It usually gets it right again.
About eight out of 10 times it'sactually gonna get the text.
Exactly right.
The other thing that I'velearned, going back to the same
thing, is ask it to describe theproduct in detail, including
color palettes, specific colorscheme aspect ratios.
Like I have a whole thing thatI'm asking you to describe, and

(38:19):
when it describes the product, that basically becomes part of the next prompt. That actually works really well. It doesn't always come out perfect, but it increases the chances of getting very close to the real thing.
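Isar's two tricks (transcribe the label, then describe the product, then feed both back) can be sketched as reusable prompt templates. The exact wording below is an illustration, not a fixed recipe, and the sample product description is hypothetical.

```python
# Sketch of the two-step product workflow as prompt templates.
# Step 1 asks the model to transcribe the label and describe the product;
# step 2 folds that description back into the render prompt so the text
# and proportions survive the re-render.

EXTRACT_PROMPT = (
    "Pull the text off this product and write it out row by row, noting the "
    "font style and relative size of each line. Then describe the product in "
    "detail: color palette, materials, and the aspect ratio of each part."
)

def build_render_prompt(product_description: str, scene: str) -> str:
    """Combine the model's own product description with the target scene."""
    return (
        f"Place this product, described as follows, {scene}. "
        f"Reproduce the label text and the stated proportions exactly. "
        f"Product description: {product_description}"
    )

# Hypothetical description, of the kind step 1 would return.
description = "square glass bottle, pale blue liquid, matte black cap, label reads 'FLEECE'"
prompt = build_render_prompt(description, "on a matte stone pedestal with soft editorial lighting")
print(prompt)
```

The point of the round trip is that the model's own words become part of the next prompt, so the label text rides along explicitly instead of being inferred from pixels.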

Ross Symons (38:31):
It's faster than doing a photo shoot, that's for sure. Yes. So you see where I'm going with this. I said to it, make the bottle look exactly the same as the image above; it still didn't adhere to that. Make the bottle taller, longer; then it just messed the whole thing up. Now it's way taller. Okay. So this is my sort of creative mind at work, and this is something you need to think about when you're using these tools. I don't think it's going to be

(38:53):
like this forever, but for now we have to switch between tools. So what I did is I went into Midjourney. I took this image, which is one of the images I created, cut it out with no background, and I asked ChatGPT to write a prompt for me. Then I put the prompt and the image into Midjourney using the omni reference,

(39:13):
which is the subject reference. And you can see the text is not great, but the images look amazing. I think they look great; in terms of trying to sell a product, this is great, but you can't have the broken text. Now, one option is you take the image into Photoshop and do a bit of Photoshopping. I'm not trying to get rid of designers' jobs here. I have a lot of friends who are designers, and I know a lot of people get upset about this, but the reality is, a designer should know how to do this without having to go into

(39:35):
Photoshop. That's the truth. So anyway, I'm going to rush through this. We've got a whole bunch of these images; I tried a whole bunch of different weights and whatever, but it wasn't working. It screwed the bottle up a little bit; it started getting a little strange here. I changed the background, then went with an ocean background, and then, to Isar's point, I took the image back into ChatGPT.

(39:55):
I said, explain in detail what this bottle looks like. That's when I started getting a bit of a breakthrough: now it started looking a little more like the bottle. Eventually it was like, hey, now we're getting somewhere. Again, the text is not perfect, but which tool does pretty well with text? ChatGPT. So now I've got this image: go back to ChatGPT, drop it in, and say, make this image more

(40:18):
realistic. So this is the image I dropped in, and boom, we've got it. It's still got that sepia tone, which I hate, but you get where I'm going with this. To me, it looks like the bottle. But this is how you have to think about using the tools: you have to mix and match. You have to know that one's better at that, and that one's faster, and move between them, as opposed to just asking the same question and bumping your head.

(40:38):
You're in the same context window, and ChatGPT doesn't understand that you want something different. I then dropped in another image, another one created in Midjourney, and asked it to make it look more realistic. And now it's not loading the image.

Isar Meitis (40:52):
Of course. But yeah, we can see it when it's not zoomed in. So I want to pause for a second... oh, you know what? Let's go to your next thing, because I see it's important. Yeah. So

Ross Symons (40:59):
The last one is: turn it into a banner-style ad that could be in a magazine, on social media, whatever. And all I said was, "Stillness has a scent. Fleece." To me, this looks like something you'd find in a magazine. It's not loading, but yeah, you get it. It's used the same image, hasn't changed much, polished it up a bit, and there you go. There you have something you can use now. Swap out the product, swap out the background.

(41:21):
Go play. This is what's available. This is happening right now. This is what you can do, and you can do it at scale. If you've got four people running this, the amount of content you can generate in a day is probably what you could do in a month about two years ago.

Isar Meitis (41:33):
I agree. And I'll say two more things. First of all, a quick recap of everything we talked about, because I think it's important for people; we shared a lot. First, you've got to learn how to use these tools. And you can start very simple: start with a very simple prompt, but then what you add in is a structure that includes describing the subject, the fine details of the subject, describing the lighting,

(41:54):
describing the fine tones of the lighting. Is it soft lighting? Is it bright lighting? Is it daylight? Is it a studio? What kind of lighting do you have? Then describe the background in more and more detail. In Midjourney specifically, as you mentioned, it's important to think of it this way: the highest weight in the image goes to what you mention first, and the weight drops as you go further into the

(42:15):
sentence. So start with the thing you care about most in the image, and work your way down the sentence from there. And you'll see people are sharing cool pictures of you, Ross, as a fox in the chat; you can check that out. So build the prompt with the details of the different components of what you want to do, and provide as much context as possible.

(42:35):
In ChatGPT, it's easier to provide context because you can have an entire conversation. You can give it a story about your brand, your target audience, and previous images that actually worked on social media or in an ad campaign. You can give it all of that, and it will take it all into account, whereas in Midjourney you have to explain everything in a single prompt, because that's the way Midjourney works.

(42:56):
On the flip side, for a single image, Midjourney provides a lot more control. You can bring reference images, which you can also do in ChatGPT, but not with the same level of understanding of what a reference is. They've now introduced a better way to do character reference, which allows you to keep even more consistent characters in Midjourney. So advanced Midjourney usage, with actual parameters of

(43:19):
Midjourney, allows you to get more granular control over different components of the image, which is not as easy to get to in ChatGPT. But I think the bottom line is, and now I go full circle back to what we said in the beginning: you can get decent results with all of these tools today. It's probably easier to get to an eight out of ten in ChatGPT,

(43:45):
just because you can explain yourself without being an expert. You can probably get better results, as far as image quality and a more sophisticated look, in Midjourney than in either Gemini or ChatGPT. And I can tell you about myself, and then I'm going to ask Ross the same question: today, right now, I've switched completely.

(44:05):
So until two months ago, I created probably 70% of my images with Midjourney, another 20% with Ideogram, and the rest with Flux. Now I'm creating probably 65 to 70% of my images with ChatGPT, just because I can get to a good-enough image there, and the rest are split between Flux, Midjourney,

(44:27):
and Ideogram, depending on exactly what I need; I only go there when I fail to get the output I want from ChatGPT. So I'm curious how you see things and what your workflow is right now.

Ross Symons (44:37):
Yeah, so if I have to break it into percentages, I'm still majority Midjourney, because I just enjoy the tool. I find myself on the Explore page all the time, and I draw a lot of inspiration from that. I would say it's probably 40% Midjourney, and then the rest split between the other

(44:59):
two, ImageFX and ChatGPT. But again, it depends entirely on what I'm doing. Yeah. Now, we spoke about it earlier: that JSON structure, a coding-language structure you can now use to build prompts for your images. That changes everything for me, and maybe for someone who's more technically minded, because I can see, almost in a codified format, what the image might look like.

(45:22):
I can read it almost like sheet music. I'm like, okay, cool, I can see where to change this and change that. The developer in me just appreciates it.
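A JSON-structured image prompt of the kind Ross describes might look like the sketch below. There is no official schema; the field names here are just one reasonable layout, chosen to mirror the fox prompt from earlier. The point is that every element becomes a named slot you can swap independently, like reading sheet music.

```python
import json

# One possible JSON layout for an image prompt. The field names are
# illustrative; no tool mandates this schema. Each key is a slot you can
# edit in isolation (swap the subject, change the lens) without rewriting
# the whole sentence-style prompt.
prompt_spec = {
    "subject": "anthropomorphic fox with detailed fur texture",
    "wardrobe": "green coat with gold embroidery",
    "setting": "misty forest",
    "props": ["glowing crystal held in one hand"],
    "camera": {"lens": "85mm", "framing": "portrait", "gaze": "directly at camera"},
    "lighting": "cinematic",
}

# Serialize and paste the result into the image tool as the prompt.
prompt_json = json.dumps(prompt_spec, indent=2)
print(prompt_json)
```

Changing the fox to a squirrel, or the forest to a beach, is then a one-key edit rather than a hunt through a long sentence.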

Isar Meitis (45:29):
It provides a lot more structure that's very visible; versus just a sentence with commas in between, there's a very clear structure to everything. There's an interesting question in the chat, which is: how long do you invest in a, quote-unquote, good image?

Ross Symons (45:43):
Well, good is relative, but a relevant image: it depends on what I'm using it for, personally. If it's for a brand campaign... on one image, I spent three weeks. That was for one key visual, going backwards and forwards with the client using multiple tools. That was four months ago, so I could probably do it in a week and a half now. That's the reality of it, because I know the tools better. But

(46:05):
that's what it can really take. Once I have an idea, though, and I'm able to ask ChatGPT to spit out different versions of that idea, it's quite simple. I can get the image I'm looking for within a couple of minutes.

Isar Meitis (46:16):
Yeah, I agree. And by the way, these two answers are awesome because they serve two different purposes, right? One is, I need something quick for an ad or social media. The other is the client. A client has a very specific output they're looking for, they want it to be perfect, and that's just going to take more iterations, trying to understand what they want, showing them different samples, understanding their style,

(46:37):
and a lot of other stuff. So I think those three weeks, or that week and a half, are not three weeks of working in Midjourney. They're three weeks of back and forth with the client: getting their feedback, meeting with them, getting their comments. The actual time spent in the tool is probably a few hours, I don't know. Two? Four? Yeah,

Ross Symons (46:52):
Exactly. Yeah. Oh, I forgot that I actually created this as well. So this is just a quick, let me just play this quickly, a quick

Isar Meitis (46:57):
video. Oh, so this is the next level of all of it. A bit out of the scope of what we planned, but yeah. So again, for those of you listening: we took the ad that was created for social media, the one with the square frame around it, with the tagline "Stillness has a scent" and the brand name Fleece, which Ross invented. So don't go trying to buy this, because you won't find it.

(47:19):
And then he took the image of the bottle next to the ocean, on a rock by the beach, and used one of the video-generation tools, I don't know which one, to bring it to life. And it looks awesome: a slow shot that zooms in and pans to the side. The background is blurry, but you can see the waves coming in. It's just a beautiful scene.

Ross Symons (47:39):
Also, just so you know, I took this into After Effects and separated the two images from the video and the outline itself. This is another thing I think people forget: you don't have to be fully AI. The reality is, lean on what you understand about the traditional tools you've used, if you're a designer or anything else, and use whatever gets the work done fastest and most effectively.

(47:59):
For me, I'm an After Effects motion-graphics artist, an animator. So for me it's logical to just pull all of these in there, and I can get way better results in half the time as well. So yeah, just a little note on that.

Isar Meitis (48:10):
There's a great question from Stan on LinkedIn, and I think this is going to be the last thing, because we're running out of time: can you show the prompt you're using in ChatGPT to have it create a better prompt that you can then use in Midjourney?

Ross Symons (48:22):
Ah, yes. That's a great question, and I hope I have it here somewhere. Let me see if I can find it.

Isar Meitis (48:29):
Yeah. And by the way, I do a lot of that as well: crossing between tools, creating prompts in one tool to use in a different tool, then looking at the output and going back and forth.

Ross Symons (48:39):
Ah, here we go. Can you see my screen? Yeah. Cool. So I said, create an image. I hadn't realized, but ChatGPT has this "create an image" button, which saves you from having to type "please create an image of." So: create an image of a mock fragrance bottle, the brand's name is Fleece, in a studio environment. And then it created this, which I clipped the images out of; it's just a product shot from multiple angles. I then said, describe this bottle in detail for Midjourney.

(49:01):
It then said, here's a detailed Midjourney prompt for the Fleece bottle. But it said "a studio shot": it actually described the entire image as opposed to just the bottle. So I then said, just describe the bottle and nothing else. And it said: square glass perfume bottle with rounded edges, filled with pale blue liquid. It has a matte black cylindrical cap, with a white rectangular label with the word "fleece."

(49:22):
"With the word fleece" is very important, because that's what helped Midjourney go, okay, cool, I need to put whatever's in the inverted commas onto this bottle. On the label itself: "fleece" in bold black uppercase letters; the bottle has clean lines, thick walls, and a flat base. Now, that is a very descriptive prompt that you can take into Midjourney, along with the image reference, which I've

(49:43):
used as a style and object reference. That's how you get as close to the product shot as possible in Midjourney. Then take it out of Midjourney, put it into ChatGPT, and it can create the final product for you.

Isar Meitis (49:52):
I love it. I'll say one more thing that, again, I've learned works well. When you start doing products, ask it for aspect ratios and the ratios between things. So what's the aspect ratio of the bottle, what's the aspect ratio of the cap, and what's the ratio between the bottle and the cap, or whatever it is. This could again be a dog on the couch, right? It puts into the description an understanding of what this thing is, and it dramatically increases the

(50:13):
odds of not squishing the stuff you don't want squished. It doesn't guarantee anything, but it increases the chances.
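The ratio tip amounts to handing the model a few explicit numbers. A tiny helper like this, with made-up measurements for a hypothetical bottle, shows the kind of clause you would append to the prompt.

```python
# Sketch of the aspect-ratio tip: measure the product once, then state the
# ratios explicitly in the prompt. The measurements below are made up for a
# hypothetical bottle; substitute your own.

def ratio_clause(name: str, width: float, height: float) -> str:
    """Describe one part's width-to-height ratio in plain English."""
    return f"the {name} has a width-to-height ratio of {width / height:.2f}"

bottle_w, bottle_h = 6.0, 9.0   # cm, hypothetical
cap_w, cap_h = 3.0, 2.0         # cm, hypothetical

clause = "; ".join([
    ratio_clause("bottle", bottle_w, bottle_h),
    ratio_clause("cap", cap_w, cap_h),
    f"the cap is {cap_h / bottle_h:.0%} of the bottle's height",
])
print(clause)
# the bottle has a width-to-height ratio of 0.67; the cap has a
# width-to-height ratio of 1.50; the cap is 22% of the bottle's height
```

Appending a clause like that to the render prompt is what gives the model "an understanding of what this thing is" in numbers rather than adjectives.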
Ross, this was fantastic. I think we covered a lot of ground and gave people a lot of examples, and I really appreciate you coming, spending time with us, and sharing your experience. If people want to follow you, learn from you, take your course, or connect with you, what are the best ways to do that?

Ross Symons (50:35):
Yeah, man, please, thank you.
And no, it's been an absolute honor.
Thanks, thanks for having me.
I'm very active on LinkedIn.
I'm Ross Symons on LinkedIn.
I run a company called Zen Robot, which you
can find on LinkedIn as well.
And yeah, we are running a four-week masterclass in, I guess,
gen AI for content creation, from the beginning of image
generation all the way through to creating images and a final,
or multiple,

(50:58):
portfolio pieces that you'll have at the end of it.
So yeah, you'll see the links on my pages, but
otherwise you can go to zenrobot.ai and that'll push you to
the course page as well.

Isar Meitis (51:08):
Awesome.
Thank you so much.
Thanks, everybody, for joining us live.
Great chat, and cool images people were posting, which never
happened before.
So you inspired people to do stuff.
Amazing.
Literally, I've been doing this once or twice a week for a
very long time now, and this is the first time that people are
actually posting images.
So that was really cool.
Thanks again, everybody, for joining us.
Thank you so much, Ross, for being here and sharing all your

(51:28):
amazing experience.
And the last thing that I'll say: go experiment yourself.
Like, the tools are now production ready.
That's all I can say.
All of them, regardless of which tool you pick, and
especially if you try with all of them, they're production
ready. Not for everything.
Like, if you need to do a building-side printout, then
maybe that's not the tool, but you can still use it in the
inspiration phases and in the approval phases with the client

(51:49):
and then go to upscaling or actual photo shoots and stuff
like that.
Awesome stuff.
Thanks everyone.
Have a great rest of your day.
Thank you.