
May 14, 2025 37 mins

Jackie Ferguson is no stranger to transformative conversations. As co-founder of The Diversity Movement and host of the Diversity: Beyond the Checkbox podcast, she’s helped redefine what inclusive leadership looks like in the modern workplace. But what happens when that workplace is powered by artificial intelligence?

In this episode of AI: Voice or Victim, hosts Greg Boone and Erica Rooney sit down with Jackie to explore how AI can either widen or close equity gaps—depending on who’s in the room and who's doing the prompting.

About Jackie Ferguson

Jackie Ferguson is the host of the top-rated podcast Diversity: Beyond the Checkbox, where she leads honest conversations on leadership, equity, and belonging with thought leaders and change makers across industries. As a certified diversity executive and co-founder of The Diversity Movement, Jackie helps organizations build inclusive cultures through education, content, and strategy.

Connect with Jackie on LinkedIn.


👉 Don’t forget to subscribe, leave a review, and share this episode with someone navigating the AI revolution.


Subscribe to AI: Voice or Victim for more conversations that move you from AI anxious to AI curious. Hosted by Erica Rooney and Greg Boone, aka AISerious™, the show helps people and organizations embrace AI ethically, strategically, and with humanity at the center.

Follow us and join the movement to shape the future — before it shapes us.


🔗 Follow us and dive deeper:


On the web: https://voiceorvictim.com/


Greg Boone on LinkedIn: https://www.linkedin.com/in/gregboone

Erica Rooney on LinkedIn: https://www.linkedin.com/in/ericarooney/


© 2025 Walk West Production


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:07):
What happens is, if we don't have a leader that's saying, you know what, I tried this thing and it didn't work, I couldn't figure it out, I gotta go back and try it again, then you don't feel comfortable as an employee doing that. You don't wanna tell your manager that you made a mistake or went down a whole road and have to come all the way back, if they've never done that, or if they've never shared that they've done that.

(00:31):
'Cause we all have, right? And so we want that vulnerability and transparency and honesty from our leaders.
AI isn't the future. It's now. And whether you're in HR, sales, operations, or leadership, the choices you make today will determine whether you thrive or get left behind.

(00:53):
Welcome to AI: Voice or Victim. I'm Greg Boone, marketing executive and AI Serious.
And I'm Erica Rooney, author, speaker, and gender equality advocate, and I'm AI curious. We are here to cut through the noise and show you how to leverage AI in your career, your business, and your brand. In every episode, we will break down real-world use cases and give you AI-driven

(01:16):
strategies that you can apply immediately. Ready to stay ahead of the curve? Let's jump in.
Today's guest is a powerhouse of empathy, advocacy, and inclusive leadership. Jackie Ferguson is the co-founder of The Diversity Movement, a bestselling

(01:37):
author and host of one of the top diversity podcasts in the world. She has spent her career creating conversations that change hearts and minds, and today she is here to help us explore how AI can be a tool for inclusion, not exclusion. This episode of AI: Voice or Victim is all about bridging DEI and tech, and Jackie

(01:58):
is the perfect person for this journey.
Jackie, welcome to the podcast.
Thank you.
I'm so glad to be here.
Oh my gosh.
It's incredible to have you here.
And I wanna know: you've built this career around inclusive communication.
Mm-hmm.
How do you see AI playing a role either in expanding or constraining that mission?

(02:18):
Yeah.
It's interesting because AI is just taking over how we work, right? It's integrated into everything. I think that with regard to inclusion, you have to think about some of the concerns that people have with AI, which is bias, and adoption of specific

(02:42):
demographics being lower than others. But I think that what's important is thinking about how it can level the playing field in lots of different areas: the ways that people work, the ways that people are learning, the ways that people do their jobs and how that can be expanded, right,

(03:04):
because, for example, I'm a writer, but I can write significantly faster, or use my chat to help write and then edit, which allows me to do my work faster and then get into more things, learn different things. And so I think that AI is such a great tool, and everyone should be working to

(03:26):
adopt it and incorporate it into the work.
Hmm.
I love it.
I kind of wanna rewind it and take a step back, because we have got a wide range of AI listeners, right?
Yes.
All the way from the AI anxious to the AI Serious.
Right.
I wanna talk a little bit more about the bias that we're hearing about, 'cause I don't think everybody really understands what that means. So can you tell me, what kind of bias are you seeing?

(03:48):
What should we be concerned about, and what do leaders need to be thinking about?
So there's a couple of steps, right, in using AI. One is you create your prompt, and then you get this output. The very important part is you have to look at it and edit it from a human perspective. Generative AI and large language models are pulling information from

(04:09):
what's already on the internet, and then inputs of the people that are using it. So there are a few things there. One, depending on the information that's provided, right, where AI is pulling from, you can get a biased view. For example, one of the things that I read and use as an

(04:32):
example a lot is, if you put in, you know, give me the top philosophers, sometimes what you'll get are Western philosophers. You won't get Eastern philosophers, right? So very important philosophers like Confucius are excluded. So you wanna make sure that in your prompts you're creating a level

(04:54):
playing field and eliminating bias there by saying global, right? Just being deliberate about your language and the words that you use to create that prompt, so that you get a broader view in the output. And then you have to look at it and say, you know, if that were the prompt, and that's what I got,

(05:15):
hmm, these are all from the Western tradition, right? So then you wanna adjust your prompts. So you have to be the human. Human intervention is extremely important in how you use AI to prevent bias.
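By way of illustration, here is a minimal sketch of the kind of deliberate prompt wording Jackie describes, assuming the OpenAI Python SDK; the model name and exact prompts are illustrative assumptions, not from the episode:

```python
# Sketch: compare a generic prompt with a deliberately global one,
# then eyeball both lists for Western-only skew (the "be the human" step).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def top_philosophers(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

generic = top_philosophers("Give me the top 10 philosophers.")
global_view = top_philosophers(
    "Give me the top 10 philosophers from a global perspective, "
    "including Eastern, African, and Latin American traditions."
)

print("Generic prompt:\n", generic)
print("\nDeliberately global prompt:\n", global_view)
```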
The other thing, and we can talk about this, it's a whole different topic, but just bringing it into the conversation: one of the things that we're

(05:35):
seeing is McKinsey did a study that shows that women are adopting AI at 25% less of a rate than men. So if it's majority men that are creating those inputs, right, and creating the prompts, then what happens is you're not getting that input from women,

(05:56):
the perspective from women. And then large language models, again, are developed and improved in part by those inputs. And so if you don't have the same rate of women as men creating those inputs, you don't get, you know, the outputs that are

(06:17):
level and even and inclusive.
Yeah. This is super interesting, because in a previous episode we were talking about the importance of prompt engineering and how that plays into everything, but we were also talking about how women aren't adopting AI.
Yeah.
And we were looking at it more through the lens back then of, like, okay, women, you're gonna be not getting the jobs that all the men are

(06:40):
getting, et cetera, et cetera. But we didn't touch on this lens of, okay, now all the information that we're all gonna be getting
Yeah.
out of AI, even the women who are using it.
That's right.
It's gonna be all skewed towards the men.
That's exactly right. And we don't want that. Right. So that's one of the things that's identified as a concern, and, you know, as with all technology, it's gonna evolve.

(07:03):
So we want those inputs to be able to evolve it in a more inclusive way. It's a real issue that women are not adopting it at the same rate. McKinsey also did a study that showed that 92% of companies are investing in AI reskilling, upskilling, and training for their employees.

(07:25):
And so if women don't participate in that at the same rate, it affects both them as individuals and how they're able to grow their careers. It affects businesses and, you know, the outputs and the productivity of the organization. But, you know, it also affects the AI algorithms and the large language models

(07:50):
and how they're able to develop. So we definitely want more women participating and sharing. Also, you know, when you get those outputs, then what happens a lot of times is we go back in and we tweak what the answer is to get a better answer, right? And so we need women doing that, with their perspectives, with their
(08:13):
experience, um, with their needs, withtheir habits, and so that we get a, a
better, um, a better technology overall.
I wanna touch on a fewthings you say here, right?
I think the, um, the,the, the innate bias.
The systems themselves.
Like people have to understand, like, youcalled up, you made a great point, right?

(08:33):
It's global in nature in how it was trained. It was trained on, you know, everyone.
Sure.
Not just the folks that are in your town or in your state or in your country, right?
Right.
And so, because humans have a bias, whether it's conscious or unconscious, now the machine has been trained on it. You know, folks say, well, the machine is biased. Well, it's biased because humans are biased.
Correct.

(08:53):
Right. And so that's how it was built. And so I think thing one is making sure people understand fully that it's trained on kind of a global view, right? And not all parts are equal. Not every country, you know, has the same laws and rules and things, to your point. So it was an excellent call-out. I think, again, on AI: Voice or Victim, we're talking a lot about how we need to have more women voices so we have fewer women victims.

(09:16):
Right?
Right.
As a part of what we're describing here today. And then, to the concept of what you're describing, if we have fewer women that are actually using it, there's the concept of reinforcement learning from an AI perspective. So kind of what you were hinting at is, there's the things that it was trained on, and then there's the things that it is actively learning. And so if it was trained on biased things, and then, it's not that

(09:40):
it realizes, but if the reality is that fewer women are actually adopting and using it, then the reinforcement learning comes from the men, or from the folks that are speaking in a way, or using it in a way, that's detrimental to women, and that will also compound the issue of potentially becoming a victim. Right. So, you know, you need the women to be the human in the loop. You need a broader set of folks, and not just women, but just in general.

(10:03):
Mm-hmm.
Whether it's, you know, there's gonna be age discrimination, there's gonna be all these types of things. So if you don't have a broader demographic that's actively engaging at the same rate, I could see how you're gonna outpace that. So I appreciate you providing that context.
Absolutely.
And one thing, just to level-set, is the word bias, right? Because so many words in our society right now are being

(10:24):
weaponized or made negative, right? But bias, all of us have bias. We are skewed based on our experiences, the people around us, you know, our family, our friends, our work environment. And so, because our brain processes so much information so quickly, we

(10:48):
put things in categories very fast. And so all of us have bias. It's not a negative; the outcomes from bias are negative, but it's not negative in that people are intentionally biased, right? Necessarily. Some people are, but not all. But you still have to mitigate that in how you're

(11:10):
writing, how you're using AI, the outputs of AI, so that you're helping to train this very large technology to be better and to be more even and balanced among all people.
I always wonder, like, why do we think women are so hesitant to lean into AI?

(11:32):
You know, I've never been that person.
Yeah.
So, like, I've played around with it, but I know plenty of women that just aren't touching it, and I don't understand why. What's your perspective?
There's a couple of reasons, and I think this was also in a McKinsey study. McKinsey's actually done a few studies, along with Boston Consulting Group and Deloitte, on this, and there are a couple of reasons.

(11:53):
One, women feel that it might not be as ethical, right, to use AI, versus, you know, I can do this work on my own, right? And they don't understand the implications of, yes, for sure you can do it on your own. I can write a thing on my own. I have, many times. But if I can do that same thing in 15% of the time,

(12:17):
what can I do with the rest of my time? How can I make more of an impact? So that's one. The second thing is, what women in the workplace experience very often is, when they make mistakes, the impacts are harsher. And that's certainly the case for Black women in the workplace. And so they're hesitant to use it because they're not quite sure: what if they

(12:41):
make a mistake, what if the output's not exactly right? They're hesitant to make mistakes because of their experiences and the experiences of others that they've seen in the workplace. And so new things sometimes are tough to adopt because, they are,

(13:02):
you know, good at their job as is. They don't wanna make mistakes. They don't wanna try a new thing that could cause them to, you know, lose their job or be written up, whatever the thing is. And so, unfortunately, because of that imbalance, right, in our corporate world,

(13:22):
women are just judged more harshly, and they're afraid of that.
And so, especially with the fact that AI hasn't even been gamed out all the way, they're like, I'm not gonna touch that. Right. And you said something that got my wheels turning, and that was the fact that men are promoted based off their potential; women are promoted based off their performance.
That's right.
And so how will women be promoted for something that we can't yet perform on?

(13:45):
Exactly.
And that's just gonna be interesting as it paves the way forward for future AI roles and how those are being put into companies. So, right, I don't know. Greg Boone, what are your thoughts on that?
Well, I mean, I think first and foremost, you gotta have a lot more men that are allies, right? In general, it's not just about AI, right? I was recently at a women's leadership conference,

(14:09):
here in Raleigh. And I was taking a picture, you know, and the person that was running the camera says to me, she said, look at you being an ally. Like, I wasn't really thinking about it that way.
Mm-hmm.
I was just here to support the team, right? We had multiple folks that wanted to join. I wanted to see what the conference was about. So first and foremost, I think you gotta have more leaders that are allies to women in general.

(14:32):
Right.
Just like most underrepresented groups have to have some form of an ally
Absolutely.
to really make significant gains, right? So that's thing one. I think thing two is, and you write about this in one of your recent articles, you have to have more AI leaders; you have to have more CEOs and C-suite folks from the top down

(14:53):
that are using it, empowering folks, giving them, you know, pathways to actually grow within their organizations using these tools, right? So one of the things that I will say to the team quite a bit is, hey, you need to be AI curious because I'm AI Serious, right?
Yeah.
But I'm also saying it's okay if you make mistakes.

(15:14):
It's okay if certain things... like, you gotta give them enough air cover, you know, to fail fast and to learn and explore, right? I think one of the other things I talk quite a bit about is we have to also move from this abstract concept of what AI actually is, right? Everyone on this show today, you know, maybe half of the listeners
(15:35):
may have already had some interaction with AI, but they don't really know what it is, right? And, you know, my co-host here is helping me work on different analogies. But what I always say to folks is, it's that moment like when Henry Ford was talking about the car, and he says that if he would've asked people what they wanted, they would've said faster horses.
Mm-hmm.
Right. Not because they didn't necessarily, you know, want to have a car.

(15:55):
They had no idea what a car was. Right.
Right.
They thought this crazy guy was talking to them about this box that they were gonna sit in, that was gonna go way faster than a horse.
Right, right.
It'd be much more convenient, blah, blah, blah. So now we're talking about going from horses to space shuttles.
Mm-hmm.
And people seem to be very surprised that the adoption level isn't great. They have no idea what you're talking about.
Right,

(16:15):
right.
And AI can mean so many different things in so many different situations, right? What you're talking about is more conversational in nature, using more of a text interface, if you will, right? And bringing this back to bias, even in image generation, right, generative AI is multimodal, meaning it will generate... you know, it can take in text input, it can take in an image,
Right.
It's the same fundamentalconcept of generating content.
Right.
But even within image generation,I was having a moment where I kept
trying to, I was trying to justcreate an image of an executive.
I said, gimme a, a c, a, A, a, confidentCEO in their, in their mid fifties.
Hmm.

(16:56):
Time and time again, it was the same white male that kept coming up, to the point where I had to force it and say, give me a woman
Mm-hmm.
of color, right? Blah, blah, blah.
Right.
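As a hedged illustration of the explicit image prompting Greg describes, here is a minimal sketch assuming the OpenAI Python SDK; the model name, prompts, and parameters are illustrative assumptions, not from the episode:

```python
# Sketch: a generic prompt tends to reflect whatever the training data
# over-represents; adding explicit detail steers the output.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

generic_prompt = "A confident CEO in their mid-fifties, professional headshot."
explicit_prompt = "A confident woman of color CEO in her mid-fifties, professional headshot."

for prompt in (generic_prompt, explicit_prompt):
    result = client.images.generate(
        model="dall-e-3",   # illustrative model choice
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    print(prompt, "->", result.data[0].url)
```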
And so I think it's helpful, for one, for folks to put a face on it and understand what we're talking about. We're not talking about robots coming in to necessarily take your job. What we're talking about is having a PhD in your pocket, on your

(17:20):
phone, democratized intelligence, the ability, you've referenced McKinsey and BCG and others, to create McKinsey-like reports, McKinsey-like information, to then learn faster, being able to reinforce the things that you want to do with your career. So there's a lot of different things. But I think in general, going back to the original point of the question, you gotta have more men that are allies and you have to have more

(17:42):
executives that allow folks to lean in.
Absolutely, Greg.
And you know, the thing with leadership is, leaders have to, one, create a communication plan on AI.
Why are we doing this?
How is it gonna help me?
Right?
Why is it good for the company?
The second thing is training, right?

(18:03):
So, as you're thinking about how you roll it out, providing training and then practice is the other piece, right? So Greg, you talked about making it okay for people to fail, and the problem in the workplace generally, and not just with women, is they're afraid to make a mistake, because,

(18:24):
as a Gen Xer, right, when I went into the workplace, your leader had all the answers. They never made a mistake. They were in a good mood every day, or a terrible one, right? But that was their shtick, right? They weren't a whole person in the workplace. They didn't make mistakes. And so what happens is, if we don't have a leader that's saying, you know what,

(18:45):
I tried this thing and it didn't work, I couldn't figure it out, I gotta go back and try it again, then you don't feel comfortable as an employee doing that. You don't wanna tell your manager that you made a mistake or went down a whole road and have to come all the way back, if they've never done that, or if they've never shared that they've done that. 'Cause we all have, right?

(19:07):
And so we want that vulnerability and transparency and honesty from our leaders, especially when it comes to change, what I call change leadership. Some people call it change management, but change with people can't really be managed; it can be led through, right? And so leaders have to, one, be human, be vulnerable, try a thing.

(19:31):
Hey, I've been using this for this amount of time. It was a little clunky at first. I've gotten better. Here are the classes that I've taken. Here are the ways that I've used it. And just create that connection and openness and comfort with their employees, to be able to say, okay, I am gonna try this thing.

(19:52):
And if I mess up, or if it's not good, or if someone can say, yeah, AI wrote that.
Right, right, right.
It's okay. Bring it back. Let's work on the editing. And I think, whether it's AI or any new thing in the workplace, that's what's important to employees: to feel comfortable and know that

(20:12):
they have the right leader that's gonna be behind them and say, it's okay if you mess up, if you try it. Let's work on it. Let's practice. And I think that's one of the most important things with adopting a new thing, whatever that is.
One of the things that I would imagine would work really well, if you had a very apprehensive team, is to sit them down and get them in an exercise

(20:37):
where you were all practicing together.
Mm-hmm.
You can fail together, right? Like, make it so that it's not business critical, like this is your one big client communication and you're putting it through AI for the first time. No, let's not do that. But let's sit down and just get people comfortable and say, hey, why don't we take 10 of our documents, throw 'em in here and say, hey, can you

(20:58):
streamline this?
Mm-hmm.
Or tell me what I'm missing, right? You know, when I wrote my book, one of the things somebody suggested to me was to take your outline and run it through and say, give me a five-star Amazon review and then give me a one-star Amazon review.
Mm.
So that you could see what pieces they would pick out as something that could be really great, that you wanna make sure you double-tap on, or like,

(21:20):
okay, you got a one-star because maybe you gave too many real-life examples.
Yeah.
Who knows, right? But it's a safe way to start playing with it. So if you're a leader, start thinking of ways that you can bring people together in that small group and fail in AI together.
Absolutely.
Right?
Yeah.
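For listeners who want to try the low-stakes outline exercise Erica describes, here is a minimal sketch assuming the OpenAI Python SDK; the file name, model, and prompt wording are illustrative assumptions, not from the episode:

```python
# Sketch: run the same outline through two opposing "review" prompts to
# surface strengths and weaknesses before anything business-critical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

outline = open("book_outline.txt").read()  # hypothetical input file

def review(outline_text: str, stars: int) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                f"Here is a book outline:\n\n{outline_text}\n\n"
                f"Write a {stars}-star Amazon review of the finished book, "
                "explaining what would earn it that rating."
            ),
        }],
    )
    return response.choices[0].message.content

print("Five-star take:\n", review(outline, 5))
print("\nOne-star take:\n", review(outline, 1))
```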
Now, one of the issues, though, is right now we have space to do that.

(21:42):
But as the technology grows and advances, and more organizations are using it on a regular basis, then you're gonna be behind. So you need to get into it now. You need to practice and play with it with lower stakes right now, because the stakes are going up month over month over month, and you're gonna be behind, and then

(22:04):
there's gonna be a different pressure. So you need to get in and have fun with it and play with it and work problems now, because you need to be good at it a year from now.
So what Jackie just said is, you need to be a voice right now so you don't end up becoming a victim.
Exactly.
Hey, look, tie it in there. But look, I'm gonna pull it back into this, because

(22:24):
snarky women exist, right? There are women...
Oh, don't look at me.
Kicking... who, they? I have no idea what you're talking about.
We climb, right?
Mm-hmm.
They kick while we climb. And I remember when I first started using AI, and I was using it for all my social media posts, I would hear all this chitter-chatter in the background: oh, you can tell what's AI and what's not. And I wanted to be like, yes, it's AI.

(22:46):
And guess what? I was able to cook a full-ass meal with my family because I used AI, right? And you know, a lot of those are the laggards, the people who don't wanna lean into it and adopt it. So, like, what do we do about those snarky women?
You know, they're gonna figure it out.
This is like the AI and the empathy piece.

(23:07):
I'm pulling from you here, right? I would say nothing, because they're gonna figure out the time that they're taking. I'll use a very clear example. So I did, for The Diversity Movement, a micro-videos platform. We did 600 micro videos, which required, you know, identifying

(23:29):
topics, scriptwriting, editing. Then you bring people in, you record their video, you edit it. It's a long process. It took us a very long time and was very expensive to do. I imagine I could have saved 90% of my money and time using AI for that.
Oh God, it makes me cringe.
It's painful.

(23:49):
That really does, 90%. And then what could I have done with the rest of that time? I could have started a whole other business. So those naysayers are going to get quieter and quieter, because business is just gonna change. And the expectation of what you can do in that eight-hour day is gonna change,

(24:12):
especially if there's, you know, heavy admin, heavy process that can be automated. It's the people that can get more done in that eight hours that are gonna be valuable. And especially in an economy that's a little bit shaky right now, they want the people that are most valuable,

(24:34):
that provide the most productivity for them.
Yeah. I mean, one of the things I tell folks all the time, I say by the end of 2025, it's more likely than not that most hiring managers, in the US at least, are gonna ask you what AI tools you are using to be more productive.
Absolutely.
And the idea that you're gonna say, I'm using none, you're basically just raising your hand and saying, hey, I'm gonna be the least productive employee you've ever met.

(24:55):
That's right.
Right.
Please hire me.
Mm-hmm.
And what do you think is gonna happen, right? And again, you know, I think one of the things I wanna take a step back on is, like, how do you get folks to be more curious? I do love the idea, I think you write about this, around AI literacy. Like, I try my best not to use "change management," 'cause people don't like to be changed or converted, right?
Right.
And so I would say AI awareness, AI adoption, AI literacy.

(25:19):
Right.
Because what's not happening, again, taking a step back to the abstract: a lot of companies are excited about using AI to have process improvements, productivity gains, right? And they're saying, hey, we're gonna use AI. We're gonna be this much better. We're gonna drop things this much faster. But what they're not understanding is that for 90% of their workforce, all they hear

(25:39):
when they hear that is, you're gonna replace me. You've not taken the time. You're talking about a use case downstream, and then you're highly surprised that upstream you're having so many people roll against you. The thing I remind folks all the time, I've been in digital transformation and kind of change management, if you will, for a long time, in consulting and for the last two decades.

(26:00):
And I tell folks all the time, it's like, this is the first technology in my career that is touching every single employee.
That's right.
A lot of times when you're doing a digital transformation, it's contained to an IT group or a marketing group, right? And so you're only having to focus on one or two potential saboteurs or naysayers. Now you're trying to transform your entire organization.

(26:21):
You're leaving it to the IT department, right, to do this thing that's more of a people and culture change or shift, not just a technology shift. And people are highly surprised at the lack of adoption or the progress they're making. Like, the thing that's underneath some of these surveys and things they talk about,

(26:42):
90% of companies are doing AI, or 80% of this, or developing use cases. All right, but is that one person in the company, or everyone? Like, no one's quantifying. Are we talking about 2% of your company is digging into this? You're just asking one person, are they using it?
Yeah, and I would argue, because of FOMO and everything

(27:03):
else, who's gonna say no?
Who are the 8% that said they're not doing it?
That's true.
I don't know, but I would like to meet them. I really would.
Interesting.
But since we were just talking about snarky things, I do think it's a good time to introduce our favorite game to the podcast.
Okay.
Which is called The Last Chat.
We're doing last chat.
Let's do last chat.

(27:24):
All right. Already, we're doing Last Chat. Last Chat is where everyone involved has to pull out their phone and read their last ChatGPT or Gemini prompt and what they put in.
What was the last thing you looked for?
And they're gonna have to explain themselves.
So we're getting AI Serious now, so I have to put on my shades.
AI Serious shades.
Yeah, we gotta put, now we're getting serious.
I know.
Mine.

(27:44):
No, mine.
You gotta pull out the phone, because now we just think you're making up stuff.
Okay.
Right. This is not one of those trust exercises. You will know that I did not make this up.
Well, you know, as we get into this, it's one of those things, right? When you talk about snarky, and folks saying, well, I know this person used, you know, ChatGPT or this and that, the way I think about that sometimes is, like, are you saying I was dumb?

(28:07):
Like, is your take on that that I could never say anything that articulate? Is that what you're saying to me? That's how I take it. I don't know.
Look, so, let's see. I'm going last. Is that okay?
All right.
Yeah, yeah. All right. Jackie, you wanna go first?
Yes. It was the introduction that I made of Greg to my product lead.

(28:27):
Very good. Check. Done.
Wait, wait, wait, wait, wait. You didn't take the time?
Nope, she shouldn't.
She should write that. Very compelling.
Now you know how many times I reread that email, because it was just so thoughtful. I read it at least 10 times. Oh, which I'm also lying about right now.
I put my notes in, and there you go.

(28:48):
Write me an email.
I may or may not have used AI to help reply to the email.
Right. I love that.
So I passed it back in kind.
Well, I use it all the time. I've been on it a lot this morning, and my latest one said: my co-host always uses the same analogy, that if you ask people if they needed a faster horse back in the day, they couldn't imagine

(29:11):
a car, you know, that whole thing? He's said it, I think, on every single episode.
Could you just read your chat?
And I said, what's a new analogy that will give him the same result? And it says, I love that you're looking to update the faster horse analogy. It's iconic, but definitely due for a fresh twist. And then it gave me some examples.

(29:32):
Oh, so that I didn't particularly love. So I said, those are lame. Give me some better ones. And then I haven't gotten that part yet.
So is this your digital twin, Cheryl, or is this...?
No, this is just straight-up chat.
Okay.
I do have a digital twin,
but it... But it's learned about you?
Yes.
And so the snarkiness didn't come from the chat, it came from you directly?
No, it came from me channeling the chat.
Okay.
So you reinforced the learning. Got it, got it. It's upgrading your

(29:54):
typewriter instead of inventing Google Docs.
All right. So, okay, we'll work on that.
Mine is a little bit deeper.
Okay, let's go.
Let's go deep.
So mine was: list out the AI adoption rates, in percentages, for men and women knowledge workers in the US. Break down by industry, geography, role, and title.

(30:14):
Put it in a table for ease of comparison. List any detailed comments per row in the last column of the table. And so I used deep research. This is ChatGPT-4o. Then it asks a couple of clarifying questions: about which specific technologies or tools, are you talking generative, machine learning. It asks about geography. It did ask me again about whether this was for the US, even though I just told it, it was

(30:37):
for the US, so we're not gonna judge there. But this was deep research. This wasn't a fast response. This thing reasoned for 16 minutes. It was searching the internet, it was doing the consolidation, it was, you know, refining, making sure that it came back with... and that's one of the things: there's the fast things that can come back, but where I'm using it most is deep research.

(31:00):
I'm trying to understand, at a McKinsey level, BCG level, Deloitte level, how should I really be thinking about this?
Okay, I got a question. AI curious over here, right? I have heard that Perplexity is better for the deep research, that it's more accurate than ChatGPT-4. Now, again, I don't know, right? This is just word on the street.

(31:20):
I think, I mean, so to that point, right, Perplexity was one of the first large language models to be connected to the internet.
Yes.
Claude, or Anthropic's Claude, recently announced that they're connected now to the internet, and folks are, you know, excited about that. When ChatGPT first came out, it was trained on static data; it wasn't connected to the internet.

(31:41):
And then the partnership with Microsoft and Bing and all that gave you that accessibility. Gemini out the gate was tied to it, right? Because it's a Google product, so obviously that made sense. Now, having said that, first of all, Google had launched deep research for folks like me that were premium subscribers back in December. So deep research as a function has been around for a while.
Right now

(32:01):
what I would say is people are getting more familiar with different large language models. So some folks will like Perplexity, some folks will like Claude, 'cause, they've said it, I've heard it just gets them right, it feels more personal, right? And that's one of the weird phenomena: people are starting to gravitate to different ones because they're feeling a certain vibe, right?

(32:22):
I don't think that there's any data that says that Perplexity versus ChatGPT versus Gemini, from a deep research standpoint, is any better than the other. I think that what people also will gravitate to is, how is the experience, how is it presenting back the information, right? What level of detail and chain of thought is being documented, so I understand how it came to its conclusion.

(32:44):
One of the big knocks early days was it was a black box, right? And so you had no clue what it did. It just spit out an answer.
Right?
Right.
I also prefer the deep research, because by design it's intended to be thoughtful and to think about it. Now, you can hack that too and just tell it, hey, think deeply about this. You can use the faster ones, and what it'll do is it will actually think

(33:05):
through what it's doing, so.
Mm-hmm.
I don't know, maybe some people just like Perplexity, right?
I mean, look, I don't know either. I'm on the curious side of things, so I'm just playing around. But what I do think is, it's gonna be very controversial. Like, in the future, it's gonna be like a PC or a MacBook.
Yes.
You know what I mean?
That makes sense.
Like, people are gonna be like, uh-uh, nope, I'm all in over here.
Mm-hmm.
Or I'm all in over there. Because the more you work with it, the more familiar it becomes with

(33:28):
you, it becomes like your bestie, right? Like, there's a reason why you like hanging out with her more than other people, right? And it's 'cause she knows you and she gets you, right?
So what I would say, two things on that point, just going back to an adoption standpoint: if we're talking about at a company level, what I will see is that a lot more folks... so, for example, we're a Google Workspace company, so we already have Gemini tied into our email, our Google Docs, right?

(33:52):
There's just so many things, so it's gonna be natural. The same thing for folks that have Microsoft and use Copilot. If you're a Microsoft shop, it is gonna be more natural, it's gonna be more safeguarded, right?
Sure.
So using it in that environment will make a lot of sense. I think the thing that I'm urging folks to do, especially those that are becoming more curious, is don't try to go play around with

(34:13):
10 different large language models.
Right?
Go pick one or two.
Right?
They're quickly reaching a point where there's a lot of commonality across them. The other part of it is, to Erica's point, and I've run into this multiple times, ChatGPT knows me a lot better, Gemini knows me a lot better, because I use those more.
Mm-hmm.
But then I still will use Perplexity and Claude.

(34:34):
But the problem is they don't know me.
That's right.
Right.
And so you spread yourself too thin. Now you didn't get the benefit of it actually understanding what you look for, your writing style, your voice.
Right.
And so it's actually not beneficial. And then, over time, it's getting easier and easier to create software and use technology.
Right.

(34:54):
There may end up being a dozen different large language models. There may be thousands upon thousands of AI apps that you can use.
Correct.
And then you get that, you know, that analysis-paralysis or paradox-of-choice
Yep.
situation. So I would say start small. Just choose one or two. Let it get to know you, you get to know it.

(35:15):
Right.
And then play around with it.
They also all have different models to do different things.
Right.
And so I like some of the image generation stuff. There's some cool things. One of the things I did to help demystify AI was not just talk about the text version of things; I showed using Suno and making a song. It's still kinda the same type of experience.

(35:35):
You're prompting it. To make a video, you're prompting it, it's creating something. Using Lovable to create a website or an app, you're still prompting it. The concepts are the same across generative AI; it's the applications and how you use it. So sorry to be long-winded, but
No,
I wanted to give greater context to that.
Absolutely.
Well, all that being said, we always love for our listeners to have something

(35:57):
tangible they can walk away with.
Mm-hmm.
So we love to hear, what is your perspective? What is one action that someone should take in the next 24 hours?
That's all you got, 24 hours.
24 hours, to further their knowledge, their expertise, their experience with AI.
I would just say go in and practice.
Give a new prompt.
Give a personal prompt.

(36:18):
And just try something new. Try something you haven't done. If you use it to write emails, like I just said, use it for something else. Use it to help you think through a problem or a new process and see what it comes up with.
I love that.
Incredible.
Jackie, thank you so much.
Thank you.
It's good to be here.

(36:43):
Thanks for joining us on AI: Voice or Victim. If you want to stay competitive in the AI age, start now. Take one insight from today's episode and put it into practice in the next 24 hours. Make sure to follow us, share your thoughts, and subscribe for more actionable AI insights.
See you next time.