Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:00):
You are listening to the Why Smart Women Podcast, the podcast that helps smart women work out why we repeatedly make the wrong decisions, and how to make better ones.
From relationships, career choices, and finances, to photo jackets and chaos movies.
Every moment of every day, we're making decisions.
Let's make some good ones.
(00:21):
I'm your host, Annie McCubbin, and as a woman of a certain age, I've made my own career of really bad decisions.
Not my doctor to find it.
And I wish this podcast had been around to save me from myself.
This podcast will give you insights into the workings of your own brain, which will blow your mind.
(00:44):
I acknowledge the traditional owners of the land on which I'm recording, and you are listening, on this day.
Always was, always will be, Aboriginal land.
Well, hello, smart women, and welcome back to the Why Smart Women Podcast.
Today is... what's the date, David?
SPEAKER_00 (01:02):
It's the 20th. Is it the 20th of November today?
Or is it the 19th?
SPEAKER_02 (01:05):
It's the 19th of November.
You will be listening to this on the 20th of November.
Um, I am broadcasting from Dee Why on the Northern Beaches of Sydney, New South Wales, Australia, and it is a pristine, perfect spring day.
And as I look at the news in the evening, and I look at the reporters all over Europe, I note it's getting cold over
(01:28):
there.
So sorry about that.
Sorry, not sorry.
You all need to come to Australia for a holiday.
So, um, last night we were preparing for an acting class, and David, in the previous couple of hours, had had an
(01:51):
experience where he'd done a casting for a job, and the casting environment was not positive, was it, David?
SPEAKER_00 (02:01):
Oh no.
I mean, it was probably one of the worst casting conversations that I've had with the, uh, you know, the director and the, uh, the apparatus, that I've ever had, and I was really quite surprised about it.
Um, I would even say it was kind of insulting, you know, when it was an opportunity to have a conversation at depth, to, um,
(02:23):
understand your perspective on the role.
Yeah, that's right.
You know, to get alignment around values, to talk about approaches to actually performing this particular role.
Um, instead of having that rich, um, clarifying conversation, it just felt like a, um, tick-the-box: what do you know about us?
(02:44):
What can you tell us that we can put to our advantage if you were to join us?
Um, and so it was a very disappointing casting.
SPEAKER_02 (02:51):
Oh, and I think the tone of it, uh, you know, when you discussed it later, is that they were dismissive, rude, disinterested, and also a bit bullying.
Would you agree with that?
And we don't need that in the arts, do we?
SPEAKER_00 (03:06):
Well, the arts are hard enough.
I felt that right from the very, very beginning of the casting.
Um, the, uh, director was intent on keeping me in my place.
Yeah.
Uh, you know, when the video conference began, I sort of greeted everybody.
I'd done a bit of research on everybody, sort of knew their
(03:27):
names, got a sense of who they were, uh, and said hello.
And then the director sort of wrested back control and said, well, you know, let me do the introductions.
SPEAKER_02 (03:38):
Yes, it's very important, David, that you don't overstep the line.
Um, that's a real status issue; that's a real grab for power.
SPEAKER_00 (03:45):
Grab for power.
So yeah, look, it was really disappointing, a bit insulting, and, um, it does not make me think that this particular organisation has got a good culture.
SPEAKER_02 (03:58):
Yeah, and I think that's that.
And so we were talking about this, um, prior to starting our acting class that we teach on Tuesday nights, and, um, we were talking about how we could wreak revenge upon the company and the director.
That's what we were talking about.
SPEAKER_00 (04:16):
In a tongue-in-cheek way.
SPEAKER_02 (04:17):
I mean, of course. Although I don't mind a bit of it.
SPEAKER_00 (04:20):
And, um, well, that was the tone of the conversation.
Yeah.
You know, you were waving swords and shields in the air, we're going to wreak revenge, and I was sort of doing the, oh well, you know, you've got to take the good with the bad.
But the very lovely, centred yoga teacher overheard Annie, and then thought it was a good time to give her some, um, uh, a performance conversation.
SPEAKER_02 (04:39):
Okay, so yeah, so we had this discourse, and she was like, you should forgive.
And I was like, well, should we?
Whose rule book is this?
I don't want to forgive.
And then she was saying, well, the only person that's damaging is you, because you're hanging on to, you know, the
(05:02):
sort of negative emotion.
So this, you know, this whole notion of, you know, be the bigger person, and let it go, and don't hold a grudge, et cetera.
And my point about it was that forgiveness is very, very loaded when it comes to women.
(05:23):
Um, you know, we are socialised, are we not?
To, um, smooth the edges, and hold the emotional centre, and maintain the relationship, and also understand where the other person's coming from, right?
So anyway, we had this sort of discourse, and also prior to that, which also maybe put me in mind to have an argument, well,
(05:46):
a discussion, was that I had done some yoga the week before and the music wasn't working, and she had said to the room, that's because Mercury's in retrograde.
SPEAKER_00 (05:56):
Hmm.
SPEAKER_02 (05:57):
Which is an astrological bit of nonsense anyway.
So I was sort of like, mmm, right.
SPEAKER_00 (06:02):
So how did that
impact on the music?
SPEAKER_02 (06:06):
Well, David, don't
you know anything about Mercury
in retrograde and astrology?
SPEAKER_00 (06:11):
I guess I don't.
SPEAKER_02 (06:12):
Well, can I help you out there?
When Mercury is in retrograde, it does all sorts of terrible things, one of which is ruining communication lines.
Oh, I didn't know that, did you?
Okay, anyway.
So I guess I'd already had a bit of a mmm going when she sort of thrust the idea that I need to be forgiving.
I find the notion of revenge, you know, sort of deeply
(06:35):
satisfying, having a go back.
I don't want people to not be... you know, people should be held accountable for what they do.
Absolutely.
I don't want to drift around the place like some sort of monk wandering out of a cave.
I don't want that.
SPEAKER_00 (06:47):
No, no, no, and I could see Annie had been sort of hooked into this conversation when I heard the question, how old are you?
Exactly.
How old are you?
And the yoga teacher provided that information.
Then Annie informed her that Annie herself was twice her age, and, uh, has a different perspective.
SPEAKER_02 (07:04):
I do have a different perspective, and, as you know, I find these sorts of memes around... look, of course there's upsides to forgiveness.
I mean, I know that, certainly, of course, if you're hanging on to something, it's absolutely pointless, and that does clutter up your brain, of course.
I think there's a place for forgiveness.
Yeah, but I don't think it should be reflexive.
(07:27):
I don't think you should automatically go, oh... because I think what it is, a lot of the time, is just women being submissive.
unknown (07:32):
Yeah.
SPEAKER_02 (07:32):
Just going, I've just got to forgive.
No, you don't.
You don't have to do anything.
I can do what I like.
If I want to forgive, I will.
I will try and apply my critical thinking, but I don't know.
SPEAKER_00 (07:41):
Okay.
Well, I mean, what's the damage that's done when people, uh, forgive too easily and too soon?
SPEAKER_02 (07:48):
Well, I think people forgive too easily and too soon when they want to go towards the old peace at any price.
And if you forgive before you've processed what happened, then all that happens is all the fury and the rage goes underground, right?
Also, a lot of this forgiveness nonsense is definitely, um, aimed at us from an exterior, you know, there's
(08:10):
somebody who's saying to us, like it happened to me yesterday, well, you should, um, you know, you should move into a forgiving space.
And also, um, it can just reinforce a power imbalance.
You know, somebody who's been vile, and you move quickly into forgiveness.
Well, you know, where's the lesson that they have to learn, that they can't do that to you, right?
SPEAKER_00 (08:30):
Yeah, 100%.
SPEAKER_02 (08:32):
So that was sort of, you know, that was where I, um...
That's where you were last night.
Yeah, that's where I was.
Um, yeah.
Look, the thing is that people need to be held accountable if they've done the wrong thing.
And going into some sort of premature forgiveness, um,
(08:54):
is just a way to go into some sort of conflict avoidance.
Okay.
It's just avoidance.
SPEAKER_00 (09:01):
So forgiving somebody might be right when the process of forgiving them actually makes you feel calmer, you know, not tighter; calmer at the thought of letting it go.
Um, it could be right when you've made sure that people are accountable for what they have done, you know, even if people's contribution has been imperfect.
Um, when you are choosing to forgive, and not just pretending
(09:23):
to forgive, or performing forgiveness because that's what's expected of you, and when it genuinely gives you back, you know, energy or space or freedom, then it's the right thing.
But it's not the right thing when the process of doing it actually compounds that feeling.
Yeah, yeah.
SPEAKER_02 (09:42):
You know, when someone's demanding forgiveness because it's proof that you're a really good person, it's just bullshit.
SPEAKER_00 (09:48):
Yeah, yeah.
SPEAKER_02 (09:48):
So that was my thing.
So anyway, that happened last night, and then I was telling David about this this morning.
And what I normally do prior to doing the podcast is I have my very own opinions, but then what I'll do is I'll have a little bit of a search on AI.
SPEAKER_00 (10:06):
All right.
SPEAKER_02 (10:06):
And just to see what it says about, um, you know, whatever it is that I'm talking about.
SPEAKER_00 (10:12):
Yeah, to clarify
your thinking, get other
perspectives.
SPEAKER_02 (10:15):
Yeah, so I don't
just get stuck in my own.
SPEAKER_00 (10:17):
You want to use it as a kind of collaborative tool.
SPEAKER_02 (10:20):
I do.
Yeah, so I did my usual search on AI, and then I was showing that to David.
Um, and what happened was quite interesting.
Do you want to describe what happened?
SPEAKER_00 (10:33):
Well, I mean, look, I noticed Annie's first prompt, um, into ChatGPT.
Um, she wrote: what are the positive and negative aspects of forgiveness?
And, um, what came out of ChatGPT was, um, kind of what you'd expect, you know.
SPEAKER_02 (10:51):
It certainly didn't... I've been investigating these sort of, what I call, um, social memes for a long time.
So I've been investigating, you know, way before ChatGPT or any of the others; I was on to the fact, if I may say, that, um,
(11:14):
acceptance and gratitude had serious downsides.
So I've got some pretty well-formulated opinions, through a lot of research, on things like gratitude, acceptance, and forgiveness.
So what I'm looking for when I go to, um, an AI is... is that what I'm saying?
Yeah, yeah.
I want something that doesn't just, um, give me what I already think.
SPEAKER_00 (11:45):
Yeah.
SPEAKER_02 (11:45):
Right?
SPEAKER_00 (11:46):
Yeah, yeah, yeah,
yeah.
SPEAKER_02 (11:47):
There's no point in that.
Like, I've got some well-formulated ideas.
It's what I'm doing on the podcast, right?
SPEAKER_00 (11:51):
Yeah.
And, um, I mean, there was nothing in your first prompt that actually, um, was groundbreaking, or paradigm-shifting, or anything like that.
SPEAKER_01 (12:03):
No, nothing.
SPEAKER_00 (12:04):
And I think, you know, as I was watching Annie doing this, I thought, um, you know, this is so common to the way that a lot of people are using AI at the moment.
And, um... I hate AI.
You do?
Yeah, yeah.
That doesn't surprise me at all.
I hate it.
What do you hate about it?
SPEAKER_02 (12:22):
I just, uh, I sort of hate its, um... I don't like its tone.
SPEAKER_00 (12:30):
Um you don't like
its tone.
SPEAKER_02 (12:32):
No, I don't like its tone.
I don't like its sort of unbridled optimism.
I just don't like it.
It's like a really crawly... I guess that's an Australian term, for our European and Asian listeners.
SPEAKER_00 (12:45):
So you find it
sycophantic?
SPEAKER_02 (12:46):
It's really
sycophantic.
SPEAKER_00 (12:48):
Like Annie, you're
so smart.
What a great idea.
No one's not.
SPEAKER_02 (12:51):
It's like one of those employees that always wants to go and get your lunch for you and tell you how fantastic you are, and it's just really crawly and fawning.
SPEAKER_00 (13:00):
I'm just trying to imagine a world in which you would be unhappy with someone saying, Annie, can I get you lunch?
SPEAKER_02 (13:05):
I'm okay about the
lunch.
SPEAKER_00 (13:06):
The lunch, yeah.
SPEAKER_02 (13:08):
I'm not okay about someone constantly telling me how great I am.
Actually, I quite like that.
But can you give that a go, by telling me how great I am?
SPEAKER_00 (13:15):
Annie, you're a
genius.
SPEAKER_02 (13:16):
Thank you.
Do you mean that?
SPEAKER_00 (13:17):
I feel warmed by your being near me.
Your outstanding balance of perspectives and insights.
SPEAKER_02 (13:24):
Thank you.
SPEAKER_00 (13:24):
You helped me um
make sense of this crazy world
in which we live.
SPEAKER_02 (13:28):
Yeah.
unknown (13:28):
Yeah.
SPEAKER_00 (13:29):
Is that doing it for
you?
SPEAKER_02 (13:30):
No.
SPEAKER_00 (13:30):
No?
Yeah.
SPEAKER_02 (13:31):
You didn't commit to
that.
SPEAKER_00 (13:32):
Look, I was just paying lip service to it.
And funnily enough, that's, you know, the way that AI will work.
Um, AI's, you know, prime mission is not to, um, be truthful; it is to be convincing in the way that it works.
SPEAKER_02 (13:47):
So can we just ignore it?
Like, not have it be part of our life?
SPEAKER_00 (13:52):
If you can find yourself a rock to go and live under, um, then you possibly could ignore it.
But whether we like it or not, AI is creeping into, has already crept into, so much of the things that we just do habitually in daily life.
Like what?
Well, I mean, look, one little test for somebody is to, um, go to their applications page, take a
(14:14):
screenshot of all the applications that they're currently using, and upload that screenshot into a ChatGPT, or a Copilot, or a Claude.
Are they all LLMs?
They're all LLMs.
Large language models.
Large language models? Gee, you say that like an expert.
And you can ask it: how many of these are actually using AI in order to deliver the services that I'm currently using?
(14:35):
And will it tell you?
And it'll tell you.
It'll go through, and you'll be surprised how many of the tools that we currently use today have actually got a little bit of AI in them.
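[Editor's note: the screenshot test David describes is done in the chat apps themselves, but the same question can be posed programmatically. A minimal sketch of how the message for an image-plus-question request might be assembled, using the base64 data-URL content format common to several LLM chat APIs; the function name and wording are illustrative, not anything used on the show, and no request is actually sent here.]

```python
import base64

def build_screenshot_question(image_bytes: bytes, question: str) -> list:
    """Assemble a chat-style message list pairing a screenshot with a
    question about it, using the data-URL image format that several
    LLM chat APIs accept."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{encoded}"},
                },
            ],
        }
    ]

# Placeholder bytes standing in for the real apps-page screenshot:
messages = build_screenshot_question(
    b"\x89PNG...",
    "How many of these applications use AI to deliver their services?",
)
```

The resulting `messages` list is what you would pass to the chat endpoint of whichever model you use.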
Now, I mean, Annie's, uh, incompetence using AI...
Mean!
Well, no, look, it's just the truth.
It's actually very similar to the way that a lot of, um,
(14:56):
people in business, leaders, managers, uh, are relating to AI at the moment.
You know, the truth, and the data, backs up my own, um, sort of anecdotal experience, that a lot of leaders are performing confidence.
You know, they're saying, oh yeah, I've got this, I'm using it, I'm being more productive, when they're quietly feeling outpaced.
SPEAKER_01 (15:17):
I don't blame them.
Yeah, look, I feel outpaced.
SPEAKER_00 (15:22):
The pressure that is on these people... I think a lot of boards are saying to the market, yeah, we're acting on AI, we're, you know, putting things in place, but the evidence actually doesn't match the rhetoric.
Um, one of the big consulting firms, I think it was, oh, was it KPMG or Deloitte?
You know, 70% of boards are saying, we're making meaningful steps in, um, the, uh, application of AI to, you know,
(15:46):
achieving our mission.
Um, but surveyed, only about 30% of the leaders actually feel competent with it.
SPEAKER_02 (15:52):
And it's all too quick.
Everything's happening too quickly.
We're not... our brains aren't equipped.
SPEAKER_00 (15:59):
Yeah, look, 100%.
You know, the boards are in, sort of, you know, the cycles of, uh, quarterly or half-yearly, um, strategy adjustments, but AI is actually changing every day.
And so, I mean, people are, by, you know, reality's rules,
(16:20):
ignorant of what's coming down the pipeline, because everything's changing so quickly.
SPEAKER_02 (16:24):
Exactly.
It's bad.
SPEAKER_00 (16:26):
And that hesitation...
Bad.
You know, that hesitation is understandable.
You've got very capable people just trying to steady themselves while the ground is shifting
(16:47):
beneath their feet.
Um, and what is emerging is a reality that AI isn't exposing technical gaps first.
It's actually exposing leadership gaps.
SPEAKER_02 (16:51):
So it's not exposing technical gaps.
Okay, so it's not that I'm technically incompetent; it's that I'm not leading.
SPEAKER_00 (17:00):
Um, well, you don't have the competencies that are required for you to lead.
SPEAKER_02 (17:06):
And, you know, why do I need to lead AI?
Why can't... isn't it capable? It seems pretty bossy.
SPEAKER_00 (17:13):
Uh, look, AI is simply, um, you know, speaking very, very generally, a pattern-seeking, pattern-generating machine.
It seeks patterns to learn, um, you know, what it is that it's dealing with, and then it just continues those patterns as a way of saying, you know, this is what's going to happen.
So if I was to say to you, the quick brown fox jumped over the
(17:35):
lazy...
Dog.
Do you want to try that again?
Yeah.
If I was to say to you, the quick brown fox jumped over the lazy...
Dog.
Now, did the quick brown fox actually jump over the lazy dog, or did you just finish that sentence because that was just, predictably, you know, statistically, the right word to finish that sentence?
It's the right word.
It's the right word.
(17:55):
Okay.
So that's the way that AI works.
Like night and day.
Up and down.
Black and white. Hot and cold.
Yeah.
Yeah.
Lovely antithesis.
It recognises a pattern, and then it simply completes the pattern.
And so don't expect AI to have any ethics, any values.
(18:19):
Um, don't expect it to understand context.
I mean, you're able to finish "the quick brown fox jumped over the lazy..." because you've got that context.
If you want AI to do something in which it will give you back what it is that you want, you need to direct it as precisely as a theatre director directs a performer.
(18:44):
You have to give it a character and context.
You have to get clear on what the intention is.
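[Editor's note: David's "quick brown fox" point, that the model statistically completes a pattern rather than knowing anything, can be illustrated with a toy next-word predictor. This is a deliberately tiny sketch of the pattern-completion principle only; real LLMs are vastly more sophisticated, and all names here are illustrative.]

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, which word most often follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def complete(follows: dict, prompt: str) -> str:
    """Finish the prompt with the statistically likeliest next word."""
    last = prompt.lower().split()[-1]
    candidates = follows.get(last)
    if not candidates:
        return prompt  # no pattern learned for this word
    next_word, _ = candidates.most_common(1)[0]
    return f"{prompt} {next_word}"

model = train_bigrams("the quick brown fox jumped over the lazy dog")
print(complete(model, "the quick brown fox jumped over the lazy"))
# prints: the quick brown fox jumped over the lazy dog
```

The predictor has no idea what a fox or a dog is; it just continues the most familiar pattern, which is exactly the behaviour being described.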
SPEAKER_02 (18:53):
Is that what you're
saying?
SPEAKER_00 (18:54):
Yeah, that's right.
Okay.
So a lot of people, you included, talk to AI like it's a machine that is going to give them an answer.
But that's not how the systems behave.
They don't answer questions; they imitate answers to questions.
They take shape around the direction that you give them.
And the closest, you know, analogy that we have is the
(19:14):
director-actor relationship.
When you direct an actor, you don't tell them what to do.
You don't dictate emotion.
I mean, you might if you're a bad director.
A bad director, a high school director.
But if you're a professional director, and you want to be working with respect with a professional performer, you will have a conversation, and you'll set the intention.
(19:36):
You'll give the actor context for the scene.
You'll break the work into beats; you'll create a space where the performer can discover something real for themselves.
SPEAKER_02 (19:45):
But the AI's not like an actor. It's not real.
No, no, no, no, it's not.
It hasn't got a beating heart.
SPEAKER_00 (19:50):
Look, it is as real as an actor, insofar as it has strengths and weaknesses.
Um, it will do what you want it to do.
Like, you know, what's the difference between an actor, you know, dealing with a difficult text, you know, uh, a difficult character like Richard III, or Portia?
Um, an actor that has direction, and an actor that has no
(20:13):
direction, just instinct.
What's the difference between the performances?
SPEAKER_02 (20:16):
A vague, generalised performance.
SPEAKER_00 (20:19):
If you don't give
them good direction?
SPEAKER_02 (20:20):
Yeah, and if you give them... actors are terrible without direction.
SPEAKER_00 (20:22):
Yeah, actors are terrible with bad direction.
AI is terrible and embarrassing.
SPEAKER_02 (20:27):
And, um... Yeah, but how do you know?
I don't get it.
Like, how do you then know which one of the LLMs to go to?
SPEAKER_00 (20:34):
All right.
Look, uh, I mean, that's an interesting question.
Which one should I choose?
Should I choose ChatGPT?
Should I choose Claude?
Should I choose Perplexity?
Should I choose Gemini, or Poe?
I mean, okay, so AI is across all of these things.
Um, my advice... look, there's two schools of thought.
One is to pick one and stick.
So go deep and wide with OpenAI, and ChatGPT, and
(20:56):
custom GPTs.
SPEAKER_02 (20:57):
Why would you do
that?
SPEAKER_00 (20:58):
Because the deeper that you go with an AI...
Oh, the more it gets to know the context.
Well, the more it gets to know your context, um, and the more it gets to know, uh, what you expect of it.
Okay?
So if you, um...
I wish they'd never bloody invented it.
Look, one way of describing what we've done: uh, you know, for
(21:20):
many years we've been worried about, uh, sort of, alien intelligence coming in and taking over the planet Earth.
What humanity has done, collectively, is that we have manufactured that intelligence.
You know, we do have an alien intelligence; it's in action on the planet at the moment.
People are using it and profiting from it.
SPEAKER_02 (21:39):
It can't do
everything.
SPEAKER_00 (21:39):
And it can't do
everything.
SPEAKER_02 (21:40):
I'll tell you what it can't do.
SPEAKER_00 (21:42):
What can't it do?
SPEAKER_02 (21:42):
This morning, when I was on, um, a walk back from the beach, there was a boy standing next to a car, and it was a P-plate, which is a provisional licence here in Australia.
And across the road from him were two fire trucks, two fire engines, and standing around the car were five, um, fire and rescue
(22:08):
um, people.
SPEAKER_00 (22:09):
Okay, I can picture
that.
SPEAKER_02 (22:10):
Yeah, and they were trying to break into his car for him with a coat hanger.
Oh, good, right, okay.
And they were all there, and there they were, and I thought, God, we're an extraordinarily well-resourced part of the country, aren't we?
SPEAKER_00 (22:26):
Yeah, there's two fire trucks, and a wire coat hanger.
SPEAKER_02 (22:30):
And there they all were, and he was standing there, and it was quite a jolly little... it was a very sweet scene.
I really appreciated, you know, living in Australia at that moment.
But there's something that, you know... no, AI can't do that.
SPEAKER_00 (22:46):
What, can't open a car with a, uh, a coat hanger?
SPEAKER_02 (22:49):
No, it can't.
SPEAKER_00 (22:51):
Okay.
That's tremendously insightful.
SPEAKER_02 (22:54):
Uh I know.
SPEAKER_00 (22:55):
It can't do anything physical.
Do you know it will, though, one day?
And here's the other scary thing that's on the way.
SPEAKER_02 (23:00):
But will it want to?
I mean, they sort of looked engaged.
Will it want to help? They wanted to help, and they were awesome.
SPEAKER_00 (23:07):
No, it, um... okay, so robotics plus AI is really mind-blowing.
So we're now building autonomous robots that can move around the planet, and, I think, could probably manipulate a wire coat hanger into a broken-down car along the way.
SPEAKER_02 (23:27):
But would they ever have the empathy to want to help?
SPEAKER_00 (23:34):
No, of course not.
I mean, okay, so this is the point.
Um, artificial intelligence, you know, for all its strengths, has no human emotion whatsoever.
It has no emotion, no ethics, no values.
However, you can provide those to it.
You can give it to them in the prompt.
A lot of people are talking about prompt engineering at the
(23:56):
moment.
SPEAKER_02 (23:56):
Ah, because you changed the prompt for me this morning.
SPEAKER_00 (23:59):
Oh, yeah, yeah,
yeah.
SPEAKER_02 (24:00):
And that's how we started off on this.
You changed the prompt, didn't you?
I had ChatGPT, and I thought it was pretty pedestrian.
SPEAKER_00 (24:07):
Yeah, yeah.
Because you simply asked the question, you know, what's the difference between...
And I was hoping it would give me a new thing, and it didn't.
Yeah, yeah.
SPEAKER_02 (24:15):
And then what did
you do?
You changed the prompt.
SPEAKER_00 (24:17):
Well, I mean, what I did was I actually applied what our approach is, and that is to treat artificial intelligence like it's a performer.
So the first thing that we did was, I cast the character of the AI.
I told it that it was playing a psychologically literate, warmly pragmatic insight curator.
A guide who understands humanity.
SPEAKER_02 (24:35):
Now, where did you
come up with that language?
SPEAKER_00 (24:37):
Where did I come up
with that language?
SPEAKER_02 (24:38):
Yeah, because that's
very specific.
SPEAKER_00 (24:39):
Yeah, look, it is very specific.
SPEAKER_02 (24:41):
Psychologically
literate, warmly pragmatic,
insight curator.
Where did you come up with that?
SPEAKER_00 (24:46):
Can I keep going, and then I'll tell you where I came up with it?
Please do.
Because what this prompt is giving, at the start, is a character.
Right?
So this is: this is what you are, and you understand human behaviour, emotional dynamics, and relational power.
You think like a coach, you speak like a storyteller, and you communicate like someone who knows confidence and clarity are
(25:06):
acts of self-protection, not self-judgment.
So basically, I'm giving the AI a character, and a backstory, and a perspective.
And I give it a worldview.
And the worldview is that forgiveness is complex.
SPEAKER_02 (25:21):
You gave it that.
SPEAKER_00 (25:22):
Yes, I gave it that.
And so that's the character.
Um I told it what the intentionwas.
SPEAKER_02 (25:27):
Hang on, can we scroll up again?
Yeah.
Where did you... I'm interested in that language at the top, because it's very...
Okay.
SPEAKER_00 (25:35):
So I would say that, um, I've moved beyond beginner level when it comes to working with AI.
And when I've got a, um, question like the one that you're asking, um, I know that, to get a good result, I need to provide it with a character, an intention.
SPEAKER_01 (25:52):
Yeah.
SPEAKER_00 (25:52):
Um, I need to, you know, give it the structure of what it's doing, I need to give it performance notes, I need to give it constraints, um, you know, rehearsal logic.
Yeah.
I mean, I'm basically using the same concepts that we use when we are directing actors.
Directing actors.
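[Editor's note: the components David lists, a character, an intention, context, performance notes, and constraints, can be assembled into a single prompt string. A sketch; the character wording comes from this episode, while the function name, intention, and constraints are illustrative assumptions, not the actual custom GPT.]

```python
def build_directed_prompt(character: str, intention: str, context: str,
                          constraints: list, task: str) -> str:
    """Assemble a director's-brief style prompt: cast the character
    first, state the intention, give the scene context, list the
    constraints (performance notes), then finish with the task."""
    notes = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {character}.\n"
        f"Your intention: {intention}\n"
        f"Context: {context}\n"
        f"Constraints:\n{notes}\n"
        f"Task: {task}"
    )

prompt = build_directed_prompt(
    character=("a psychologically literate, warmly pragmatic insight "
               "curator who thinks like a coach and speaks like a storyteller"),
    intention="help the reader weigh forgiveness without being reflexive",
    context=("forgiveness is complex: empowering in some circumstances, "
             "costly in others"),
    constraints=["no platitudes", "name the power dynamics at play",
                 "offer one concrete next step"],
    task="What are the positive and negative aspects of forgiveness?",
)
print(prompt)
```

The same bare question from earlier in the episode now arrives wrapped in a character, a worldview, and constraints, which is the whole difference David is describing.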
SPEAKER_02 (26:07):
But so you came up with "psychologically literate"?
SPEAKER_00 (26:09):
Well, actually, I didn't.
Oh.
I didn't, in this instance.
Because what I have done is, I have built a custom GPT.
So again, for those who don't know: you can use the standard, uh, OpenAI ChatGPT.
You can also build an assistant that will do things for you.
And I have built an assistant that helps me to write prompts
(26:33):
that work.
SPEAKER_02 (26:34):
So how hard was that?
For our listeners, how difficult was that process?
Because you've been locked in this studio, as far as I could see, for about six weeks working on this, right?
SPEAKER_00 (26:48):
Yeah, yeah, yeah.
SPEAKER_02 (26:48):
From dawn till dusk.
Occasionally I throw a glass of water at you, and some dry bread.
Yes, which is... with a delicious dinner at night, though, may I say.
SPEAKER_00 (27:00):
And I'm enormously
grateful for this.
SPEAKER_02 (27:02):
And thank you.
SPEAKER_00 (27:03):
Yeah, yeah.
So, um, if your knowledge of, uh, working with AI says you need to get your prompts right, well, you know, here's the process that you can go through.
Let's say we're using ChatGPT, but once again, all of them are going to behave somewhat similarly.
Um, you'll go to ChatGPT and you'll say: okay, I want you to
(27:27):
help me, um, write effective prompts, you know, effective prompts that do this, that, and the other.
And then it will give you the, um, the character, and the intention, and the background.
It basically gives you the prompt to write the prompts.
SPEAKER_02 (27:45):
Yeah, I get it.
So: the psychologically literate, warmly pragmatic insight curator, who thinks like a coach, speaks like a storyteller, communicates like someone who knows confidence... it understands these worldviews: forgiveness is complex, empowering in some circumstances, costly in others.
The scene.
Yeah, it's sort of like looking at a play.
(28:07):
Yeah.
It is like a play; it's really interesting.
I would never have thought that, because I've ignored you for the past six weeks, because I'm, you know, sort of, um, organically or instinctively bored by the whole thing.
But this is interesting.
SPEAKER_00 (28:22):
If you just sort of tone down the "artificial" in artificial intelligence, and actually appreciate that what we're working with is unintelligence...
SPEAKER_02 (28:31):
An intel... an intelligence.
SPEAKER_00 (28:32):
It is an
intelligence.
SPEAKER_02 (28:33):
Yeah.
SPEAKER_00 (28:34):
Um, and to get the best out of unintelligence...
An intelligence.
An intelligence.
SPEAKER_02 (28:39):
You're saying
unintelligence.
SPEAKER_00 (28:41):
An intelligence.
To get the best out of your LLM, um, you want to give it instructions, so that, of the limitless directions that it can go in, it goes in the direction that you want it to go in.
So, yes.
Um, I'd be very happy if anybody wanted to reach out and
(29:01):
say, um, you know, could we have access to your prompt generator?
Oh, yeah.
SPEAKER_02 (29:04):
Anybody who needs help with this... because I've got a live-in helper here who knows stuff, and sort of, uh, um, can sort of override my antipathy.
Um, so if you want any help, do get in touch to let us know.
Um, because we can also... David can also sort of talk to you from anywhere in the world, even though we're in Sydney, New
(29:24):
South Wales, Australia, because he is really good at this.
I don't mean to sound surprised, but I have ignored it, because, uh, because, you know, it's what I'm like.
SPEAKER_00 (29:35):
Yes.
Um, but I think that you're going to become, um, more interested as better prompts give you much better, uh, results, in terms of the kind of research that you want, uh, the kind of angles that you can take, the way that you can shape arguments, and things like that.
SPEAKER_01 (29:50):
Okay, that's true.
SPEAKER_00 (29:52):
I didn't quite... I didn't actually finish the two choices I spoke about earlier in this episode.
SPEAKER_02 (29:57):
Go on.
SPEAKER_00 (29:58):
Oh well I okay.
SPEAKER_02 (29:58):
No, go on, do it, do it.
SPEAKER_00 (30:00):
Well, let me just finish this. Okay. One school of thought is that you get to know one LLM really, really well.
SPEAKER_02 (30:05):
Yeah, is that true?
SPEAKER_00 (30:06):
Yeah, yeah.
SPEAKER_02 (30:07):
Look, I I Which is
your favourite then?
SPEAKER_00 (30:09):
Okay, so the LLM that I've got to know really well is the OpenAI one, so ChatGPT. It was the first one that I started with. Um, they have continued to evolve and add new features to keep me interested. However, I have gone fairly deep and wide with Perplexity, and that's fantastic for research.
Um, I do have a relationship with Claude, who helps me
(30:31):
generate code. Um, I do work with Copilot, because Copilot is right inside the Microsoft suite, and I do look at how we can get insights out of Excel and, you know, put them into PowerPoint and those sorts of things. And also, um, Gemini, which is the Google one.
SPEAKER_02 (30:49):
That's a lot. You started with one and you just mentioned six.
SPEAKER_00 (30:54):
That's right. And so this is the alternative point of view. Go deep and wide with one, or you can do what I've done, which is to go as deep and wide as you can with one in particular, but find out what the strengths of the other ones are. So it's not that you've just got one actor; now you've got six. You've got an ensemble. You've got an ensemble cast. An ensemble cast. For opening night.
(31:14):
That's right.
So my, um, research prompts I will generate in ChatGPT. I will put those research prompts into Perplexity, and that will do the research. I might consolidate all of that research into a single document and put that into Claude, and then ask Claude to build me, um, you know, an HTML interactive dashboard, um, that
(31:36):
helps us work through a process. So you can go from one to one to one. My suggestion is that you don't do that until you have built a solid relationship with your foundational LLM.
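[Editor's note: the chained workflow David describes — draft the prompt in one model, run the research in a second, build the deliverable in a third — can be sketched in a few lines of Python. The three functions below are hypothetical stand-ins, not real APIs; in practice each stage would call ChatGPT, Perplexity, and Claude respectively.]

```python
def draft_research_prompt(topic: str) -> str:
    """Stage 1 (e.g. ChatGPT): turn a rough topic into a structured research prompt."""
    return (f"Act as a research analyst. Investigate: {topic}. "
            "Cite sources and summarise key findings.")

def run_research(prompt: str) -> list[str]:
    """Stage 2 (e.g. Perplexity): execute the research prompt and return findings."""
    return [f"Finding based on: {prompt}"]  # placeholder for real model output

def build_deliverable(findings: list[str]) -> str:
    """Stage 3 (e.g. Claude): consolidate findings into a single document."""
    return "\n".join(["# Research summary"] + findings)

# Chain the stages "one to one to one", as described above.
topic = "gender bias in AI hiring tools"
report = build_deliverable(run_research(draft_research_prompt(topic)))
print(report)
```

The point of the sketch is the shape, not the stubs: each model's output becomes the next model's input, which is why a weak prompt at stage one degrades everything downstream.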
So the challenge here is to find a way to work with AI every day that feels like it's an extension of your capability, an
(31:58):
extension of your leadership. Because working with AI starts with fluency, not expertise, right? You don't need to learn how to code, you just need enough literacy to ask the smart questions, spot bad assumptions, and continue to drive towards excellence.
(32:19):
I think that because some people have seen just how quickly AI can work, and the outputs immediately appear very, very convincing and compelling and credible, um, they start to think that it's a quick process. It can definitely be quicker, um, when you are automating the
(32:41):
research tasks and, you know, using the incredible reach and depth that an AI can go to. Um, but to think that you're gonna get it in one go is as stupid as a director thinking he's gonna get a great performance out of a cast with one run-through of the play. You need to iterate, you need to spend time on it.
(33:02):
Um, I would suggest that you spend time thinking about what your workflow is going to be. Um, pose the question, do the research. You know, take that research and draw from it a report or some conclusions or some findings or the skeleton structure of a PowerPoint presentation. Um, you know, take some of the ideas into image
(33:26):
generation or video generation and bring your ideas to life. You can get a workflow that can be very solid, but it is about repetition. Don't worry that you don't feel you're in control of the whole thing. Just start, set little short-term objectives, like, um, you know, "I want to improve the quality of my
(33:48):
prompts," and learn how what you put in your prompt is going to directly impact your outputs. So, um, yeah, I mean, that's the most practical advice I can give, you know, in this format, around how to really start working effectively with artificial intelligence.
SPEAKER_02 (34:04):
We all know that there's shocking bias in AI, in terms of hiring decisions in companies, in terms of healthcare. It distinctly, um, you know, puts women at a disadvantage.
SPEAKER_00 (34:16):
Yeah, yeah. It puts any demographic that is not the mainstream, not the core, at a disadvantage. Because again, think about what I was saying earlier about what AI fundamentally is. It's, uh, you know, it's autocorrect, it's predictive, it's statistical. And when searching for, you know, the best candidate, it will
(34:38):
look at who have been the candidates selected in the past. And it'll use historical data. Historical data taken from a time when gender biases were entrenched.
SPEAKER_02 (34:47):
Yeah, so men are the scientists and women are the nurses.
SPEAKER_00 (34:50):
That's right.
SPEAKER_02 (34:51):
And also, if it goes back, you know, historically, in terms of medical research, we know that medical research is absolutely geared towards men and not towards the specifics of women's, you know, symptoms and the biological differences between men and
(35:11):
women. We know there's been way more research to do with men.
SPEAKER_00 (35:15):
That's right. And also, um, you know, people of colour have drawn the short straw as well around medical things.
SPEAKER_01 (35:22):
Oh, terrible.
SPEAKER_00 (35:23):
You know, and, uh, quite specifically around skincare products and things like that that fail to take into consideration the different effect of having, you know, more pigmentation in your skin.
SPEAKER_02 (35:37):
Exactly, exactly.
SPEAKER_00 (35:38):
Yeah, when you do a, um, when you do a search, you know, "give me an image of, um, you know, leaders discussing strategy," I guarantee that you'll probably see more men around the table than there will be women. And they will probably all have beards, and they will probably all be of a certain age.
SPEAKER_02 (35:53):
How old is that,
like 42?
SPEAKER_00 (35:54):
Oh yeah, yeah, roughly. That's your prime age, right? That's right, it gives you the mean. And I can tell you that if there are any women in the photograph of leaders sitting around the table talking strategy, they are probably, you know, about thirty years old, slim, very attractive, and they'll have straight hair. Yeah, there won't be any interesting sizes and shapes.
SPEAKER_02 (36:17):
Um what about will
they have hair like me, curly
hair?
SPEAKER_00 (36:20):
No.
No, no, no, because curly hair, um, is wild and creative and non-strategic.
SPEAKER_02 (36:27):
Yeah, yeah, that's
right.
SPEAKER_00 (36:29):
That's right.
I mean, you know, someone withcurly hair.
SPEAKER_02 (36:31):
Like I'm a wild... I should be like a wildly artistic, deeply disorganized sort of maverick, shouldn't I, really, with my hair.
SPEAKER_00 (36:42):
Show me a portrait of the HR team celebrating, um, at the annual general conference. That's where you'll turn up.
SPEAKER_01 (36:53):
Really?
SPEAKER_00 (36:54):
Yeah.
SPEAKER_01 (36:56):
Really?
SPEAKER_00 (36:56):
Yeah, women with
long curly hair.
SPEAKER_01 (36:58):
Okay.
Right.
Okay.
SPEAKER_00 (37:00):
And they'll probably be, um, you know, a bit diverse in their look as well.
SPEAKER_02 (37:06):
Yeah.
Yeah.
Anyway, it's really annoying. I mean, I do get your point. Uh, this actually... I regret that I've just left you to sort of moulder in here for six weeks while I ignored you. Yeah. I did keep the food up, didn't I?
SPEAKER_00 (37:24):
You did keep the food up. Look, uh, Annie, what I want you to know is that AI is not a threat to your life, right? It's not a threat to all of the things that you are competent in. In fact, I reckon it's a spotlight for it. Because I think it's the skills that many women innately have. You know, that sort of natural emotional intelligence, you know, clarity about what's important, the capacity for
(37:46):
nuance, um, being aware of relational issues. This is exactly what organizations seeking to harness AI actually need.
There's nothing in the data that suggests that brusque certainty is the winning trait of the AI era. It's actually quite the opposite. It's the leaders who can communicate clearly, humbly,
(38:08):
stay grounded under pressure, and collaborate with real boldness; they're the ones who are going to be shaping the future.
SPEAKER_02 (38:16):
Oh, shaping the
future.
SPEAKER_00 (38:18):
And I do think that, um, where women leaders are demonstrating strengths, those are the strengths that are gonna get the most out of AI.
SPEAKER_02 (38:33):
Okay.
All right. I forgive you for spending six weeks in here working that out.
SPEAKER_00 (38:41):
And I forgive you
for leaving me to moulder.
SPEAKER_02 (38:45):
Mulder. You Muldered in here, didn't you?
SPEAKER_00 (38:49):
Okay, call me Fox.
SPEAKER_02 (38:50):
You even told me to leave you alone, like, frequently. "Oh, can you please not interrupt?" But it was necessary. Yeah, but it's still quite gendered, because I'm also doing stuff. I tell you what, uh, when I've put my book that I'm currently writing in, you know, to sort of look at structure, the stuff it gives me back is rubbish.
(39:12):
It's so awful. It's like the worst romance novel.
SPEAKER_00 (39:15):
Okay.
SPEAKER_02 (39:16):
People are kissing
the downy heads of babies.
I ought to shoot myself withdreadful.
SPEAKER_00 (39:20):
So let's see if you've been listening. Um, when AI gives you a really bad... bad prompt. No, a really bad output, a really bad result. What do you do? No, no, not what do you do.
SPEAKER_02 (39:31):
Oh, I know why.
SPEAKER_00 (39:32):
Let's assign blame. Is it its fault or is it yours? It's mine. Yes. Yeah. It's your fault, and what could you do instead?
SPEAKER_02 (39:41):
I could give it a
better prompt.
SPEAKER_00 (39:43):
Well, you could give it a better prompt.
SPEAKER_02 (39:44):
I could develop my own helper assistant that helps me with good prompts.
SPEAKER_00 (39:49):
Yeah, okay, so now imagine you're directing an actor.
SPEAKER_02 (39:51):
I can't do code.
Imagine.
I'd give it a character.
SPEAKER_00 (39:54):
Okay, okay.
SPEAKER_02 (39:54):
So I'm I'd give it a
context and I give it a
character and I give it aworldview.
SPEAKER_00 (39:58):
Right, I'm your LLM.
SPEAKER_02 (39:59):
Yeah.
SPEAKER_00 (40:00):
Right?
What do you want to do? Do you want to do some research? Do you want a sample paragraph? Do you want a poem? Do you want music?
SPEAKER_02 (40:07):
I want, um... I want to segue from one incident in my novel. I want an idea for a segue into another.
SPEAKER_00 (40:19):
Okay, alright.
Oh, okay, okay.
So you're looking for editorialadvice?
SPEAKER_02 (40:23):
No.
SPEAKER_00 (40:24):
No?
SPEAKER_02 (40:24):
No. It's a narrative leap.
SPEAKER_00 (40:28):
A narrative leap. Okay, so, um, I'll be your avatar, I'll be your LLM. In order to actually give you what you want, what would I have to be? You want a narrative device, yeah?
SPEAKER_02 (40:43):
Yeah. You'd have to, um... you'd have to have a good grasp of the language, you'd have to be articulate, you'd have to understand the context of the story that I'm writing.
SPEAKER_00 (40:55):
Okay, okay. Now slow down. So what's the role? What am I? Uh, so, Annie, imagine that you're preparing your prompt and you are prompting an actor. I'll be your actor. What are the things that you would tell me if you wanted me to perform what it is that you want me to perform? What would you have to do?
SPEAKER_02 (41:11):
Well, I'd have to cast the character.
SPEAKER_00 (41:14):
Yes, yeah.
So who am I?
SPEAKER_02 (41:15):
You're... you're literary, and you're a storyteller.
SPEAKER_00 (41:18):
Yeah, and what are my values, or what's my worldview? Um, what kind of storyteller am I?
SPEAKER_02 (41:24):
Um, well, it's not mechanical. You're an emotional storyteller who takes people, the reader, from one state that they're in to another state, sort of. Yeah, yeah.
SPEAKER_00 (41:40):
Okay, so it sounds like I'm a literary storyteller. You're articulate. Yeah, who is articulate and has a feel for emotion. And I think that transitions should actually have emotional fidelity. Yeah, yeah, yeah.
SPEAKER_02 (41:58):
And then what I want you to do is help me, the author, uncover a whole lot of creative options for a segue between two scenes about a wild birth.
SPEAKER_00 (42:09):
Right. So that's what I am. Yeah. That's what I'm going to do.
SPEAKER_02 (42:12):
Yeah.
SPEAKER_00 (42:13):
Um, are there any things that you want me to bear in mind as I'm doing that? You know? Are there any things that you don't want me to do? Any things that you want me to leave out?
SPEAKER_02 (42:24):
Uh, well, I don't want that crappy cliche business. I hate all that. I don't want cliches about birth. Okay, so, uh, and I don't want you to overexplain. I don't want it to be overwritten.
SPEAKER_00 (42:37):
Yep, yep.
SPEAKER_02 (42:38):
And I want it to be sort of, you know, intuitive. Uh, you know, let the imagery guide it. I just don't want it to be mechanical.
SPEAKER_00 (42:48):
Yeah, yeah, okay, okay. And, you know, we could go further, but I'm starting to get the ingredients of a prompt.
SPEAKER_02 (42:54):
And that's what I would need to put in. Yeah. And what I was putting in was: give me a segue in my book about a wild birth.
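[Editor's note: the prompt "ingredients" Annie and David just worked out — role, worldview, task, constraints — can be sketched as a simple template. The function name and wording below are illustrative, not a prescribed format or a real API.]

```python
def build_prompt(role: str, worldview: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from the four ingredients discussed above."""
    lines = [
        f"You are {role}.",          # who the model is (the casting)
        f"Your worldview: {worldview}",
        f"Task: {task}",
        "Constraints:",               # what to avoid or leave out
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# Annie's segue request, restated as a directed prompt rather than
# "give me a segue in my book about a wild birth".
prompt = build_prompt(
    role="a literary storyteller, articulate, with a feel for emotion",
    worldview="transitions between scenes should have emotional fidelity",
    task=("offer the author several creative options for a segue "
          "between two scenes about a wild birth"),
    constraints=["no cliches about birth", "don't overexplain",
                 "let the imagery guide it, not mechanics"],
)
print(prompt)
```

The contrast is the lesson: the one-line version gives the model nothing to play, while the structured version casts the role before asking for the performance.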
SPEAKER_00 (43:02):
Yeah.
It's like saying to an actor,just act better.
SPEAKER_02 (43:05):
Ah, yeah.
Yeah, that's the worst note.
SPEAKER_00 (43:07):
Yeah, "do a monologue in an interesting and, um, vaguely comedic way."
SPEAKER_02 (43:11):
Yeah, that's the "be funnier."
SPEAKER_00 (43:13):
Yeah, be funny.
SPEAKER_02 (43:13):
Can you be funny?
Yeah, I've had that.
SPEAKER_00 (43:15):
It's a bit like that
casting that I went to
yesterday.
SPEAKER_02 (43:18):
Yeah.
SPEAKER_00 (43:18):
You know, it's just like, "be this," and it's like, be what? Be what, exactly.
SPEAKER_01 (43:23):
Terrible prompts.
SPEAKER_00 (43:23):
And look, this is actually a point not to be missed. Yeah. The same communication skills that a director uses with an actor, yeah, that a, um, prompt engineer or a leader or a manager uses when prompting an LLM, they are exactly the same skills. Dressed up differently, but they are the same skills that we
(43:44):
employ when we are simply drawing a performance out of another human being.
SPEAKER_02 (43:50):
You have not wasted the past six weeks. I commend you on your analysis, because I mean, I never, ever would have come to that conclusion, that if you're prompting AI, you've got to treat the AI like a director treats an actor. Teachers and artists would never have come up
(44:10):
with that. And I can actually see, I don't mean to be daggy about it, but I can actually see how that could help me.
SPEAKER_00 (44:17):
Yeah. And look, the best evidence that something works is something working for you. Yeah. So, Annie, yeah, I am going to encourage you to start experimenting with AI and directing AI in my absence.
SPEAKER_02 (44:31):
Well, where are you
going?
SPEAKER_00 (44:32):
I'm gonna go moulder
somewhere else.
SPEAKER_02 (44:33):
Go moulder somewhere else. I'd quite like to go to the Maldives.
SPEAKER_00 (44:37):
Muldering in the Maldives.
SPEAKER_02 (44:38):
Muldering in the Maldives. I'd quite like that. Anyway, that's very good. That's very helpful. Thank you, David. You're welcome, Annie. Well done.
SPEAKER_00 (44:44):
I hope I hope that
was useful.
SPEAKER_02 (44:46):
Well done. Let us know if you want any help from David. It's very helpful.
SPEAKER_00 (44:49):
There is a, um... there is a web page that includes a lot of this in more detail, with time to go through it in your own time if you'd like. And, uh, we'll put that in the show notes. Harry will put it in the show notes. It's simply ku.college, forward stroke, directing AI. Directing AI at ku.college.
SPEAKER_02 (45:09):
Um I'm finishing
now.
SPEAKER_00 (45:11):
Okay, okay.
And I will be quiet.
SPEAKER_02 (45:13):
Good. Um, thank you so much, smart women, for tuning in. Um, I hope that was helpful. I actually found it helpful, that's for sure. So I hope you did too. So, wherever you are in the world today, stay safe, stay well, and keep your critical thinking hats on.
(45:35):
See you later.
SPEAKER_01 (45:36):
Bye.
SPEAKER_02 (45:37):
Thanks for tuning in to Why Smart Women with me, Annie McCubbin. I hope today's episode has ignited your curiosity and left you feeling inspired by my anti-motivational style. Join me next time as we continue to unravel the fascinating layers of our brains and develop ways to sort out the facts from
(45:57):
the fiction in the over 6,000 thoughts we have in the course of every day. Remember, intelligence isn't enough. You can be as smart as paint, but it's not just about what you know, it's about how you think.
And in all this talk of whether or not you can trust your gut: if you ever feel unsafe, whether it's in the street, at work, in
(46:19):
a car park, in a bar, or in your own home, please, please respect that gut feeling. Staying safe needs to be our primary objective. We can build better lives, but we have to stay safe to do that.
And don't forget to subscribe, rate, and review the podcast, and share it with your fellow smart women and allies.
(46:39):
Together we're hopefully reshaping the narrative around women and making better decisions. So until next time, stay sharp, stay steady, and keep your critical thinking hat shiny. This is Annie McCubbin signing off from Why Smart Women. See you later.
This episode was produced by Harrison Hess.
(47:00):
It was executive produced and written by me, Annie McCubbin.