Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
The views expressed in the following program are those of
the participants and do not necessarily reflect the views of
Sauga 960 AM or its management.
Speaker 2 (00:18):
Well, good evening, everyone, and welcome to the Brian Crombie Radio Hour.
I have a fascinating young lady to introduce you to tonight,
with whom we're going to talk a little bit about marketing.
We're going to be talking about cybersecurity and issues around that,
and we're also going to talk a little bit about AI,
so it's going to be interesting. I want to introduce
you to Carrie Ann Mercer. She is a seasoned global
marketing leader. She's been an executive advisor for over twenty years.
(00:39):
She's got experience leading high-impact marketing strategies in
SaaS, and I guess we'll have to explain what SaaS is, cybersecurity, fintech,
insurtech. Never heard insurtech. I've heard fintech before,
I've never heard insurtech before. And now artificial intelligence,
which is the topic du jour. She's a fractional chief
(01:00):
marketing officer and advisor. Carrie Ann brings what she
describes as a rare ability to connect brands, businesses, and
culture in ways that accelerate growth, innovation, and trust, so
we'll talk a little bit about that too. She currently
serves as the fractional Chief Marketing Officer and executive advisor
to something called the Cambridge Forum, a consortium bringing together
global leaders to discuss how technology can be harnessed as
(01:23):
a force for good and a catalyst for empowering communities,
individuals and for spurring development. She's also the fractional Chief
Marketing Officer for EthicRhythm, which is all about ethical algorithms,
which is another topic du jour, so that'll be an
interesting conversation, and for Kroscher, both of which deliver AI-
(01:45):
powered services and/or products. Carrie Ann, welcome to the show.
Speaker 3 (01:51):
Thank you pleasure to be here. We've got a lot
to talk about.
Speaker 2 (01:54):
We sure do. What a fascinating background you've got; I
really appreciate it. And you're coming to us, I understand, from Ottawa?
Speaker 4 (01:59):
Correct?
Speaker 2 (02:00):
Yep? Where you golf and play, or at least watch, some
hockey and sports, and enjoy the cultural environment of Ottawa?
Is that correct?
Speaker 4 (02:10):
That is correct?
Speaker 3 (02:11):
Even in the winter, I try and get out and
embrace our weather.
Speaker 2 (02:14):
But I saw you recently post about this Cambridge Forum
that you attended, I think in the UK. Tell me
a little bit about that if you could.
Speaker 4 (02:23):
Yeah, sure, I'd love to.
Speaker 3 (02:24):
So, Cambridge: this was the first annual Cambridge Forum.
It was a conference that was put together by a
wonderful gentleman by the name of doctor Ihab Shanty. He
is the co-founder and CEO of Kroscher, and we
can talk about that; that's the insurtech piece. And
he has a vision like many, but he has a
really specific vision, and that is to enable a world
(02:46):
where technology can be harnessed as a force for good.
Speaker 2 (02:50):
So it can be harnessed as a force for good.
Speaker 4 (02:54):
Right.
Speaker 2 (02:54):
So people are scared and worried about technology these days.
Speaker 3 (02:58):
Absolutely, absolutely, but he has a vision and
a belief that we can use technology and this year
we focused on AI to actually empower communities and individuals
and use it for inclusivity, which I thought, you know,
when he started to share what his vision was for
(03:19):
the conference, I was like, this is amazing, tell me more.
I want to support you. So we set out with
a mission to bring together technologists, innovators, academics, policymakers, entrepreneurs
to talk about how we can harness advanced technology and
this year we focused exclusively on AI and insurtech
(03:41):
to build out programs that were inclusive and empowered communities.
So yes, it was held in Cambridge, UK, which, by
the way, is one of the globe's tech centers for
AI and innovation, and so it's no coincidence that Cambridge
University is there. It was held September third and fourth,
and it brought together about one hundred and fifty people
(04:04):
from across the globe. So we had speakers who were
professors and department heads, PhDs, from Cambridge University. We had
speakers from UNESCO, which is the United Nations organization that promotes
peace and cooperation through education, science and culture. We had
(04:28):
different entrepreneurs, we had policymakers, and we had representation from
big tech like Google for example. So it was really
a great collection of speakers and attendees all coming together
to have really open and honest discussions about the good,
the bad, the ugly, and how we can actually all
work together to leverage AI and make it, you know,
like I said, a catalyst for good and empowerment.
Speaker 2 (04:56):
So good, empowerment, inclusion, all nice things. How can AI
help any of that?
Speaker 4 (05:02):
Well, that's a great question. And so at the conference, we had a number of
examples of how AI was used for good, and I've written down
a number that I wanted to share with you today. Because there was so
much good happening at the conference, I had to go
back through my notes and remember everything and all of
the discussions and the panels, but there
(05:27):
were a few that were really good. We had a
panel to discuss the use of AI, and one of
those panel members was actually the founder of a
company called Calm Shore, which means community insurance, and
they're using AI to create efficiencies for their business,
(05:50):
which allows them to then return profits to their community.
So they're using AI to reduce, you know, costs and
to speed up, for example, insurance claims, to speed up,
you know, onboarding and the purchasing of insurance products for
(06:13):
a community-based insurance, and they use AI tools to
be able to see, for example, if you were purchasing
proper insurance. They would use AI to see the amount of theft or
the climate in that particular area where you lived, to see
what the rates and the hazards may or may
not be. And so they're using that in order to
keep your rates lower and onboard you faster. And then
the whole idea of community insurance means that it's like
(06:45):
a cooperative, and they put the profits back into the
community which it serves, and so the consumer is
able to then choose which charitable organization they would like
to, you know, give those profits back to. So,
(07:06):
isn't that brilliant?
Speaker 2 (07:08):
What is the community they're serving? Is this a geographic community,
or occupational, or what community?
Speaker 3 (07:15):
Well, it could be all of the above, right.
So they're launching: they did a soft launch at the event,
and they're going to be officially launching in November in the
UK region. So it is going to serve different communities
that don't have or have been historically underserved by insurance products.
Speaker 2 (07:35):
But if you're making, you know, onboarding more quickly, if
you're using AI to evaluate potential, you know,
people to be covered, if you're, you know, doing whatever
you need to do in regards to claims, aren't you
just getting rid of people and jobs? And so therefore,
you know, is that really good, Carrie Ann, that you're
(07:57):
restructuring, eliminating employment, and then maybe giving
a little bit back to communities? How can that
possibly be good?
Speaker 3 (08:06):
That's a great question, and it's one that, you know,
people are continuing to talk about and to discuss: whether
or not, you know, AI is used to displace roles.
I view it as a way to augment roles and to speed up efficiencies.
Speaker 3 (08:25):
So it's never going to be, in my mind, a
replacement for the strategic work and the work that requires
a human. It could help with efficiencies for redundant processes
that are repeatable, right, so it could help with that,
but in my mind, it doesn't displace the need for
that human connection. So I've got...
Speaker 2 (08:47):
You know, I've seen probably lots of reports of
companies laying people off, that call centers are being replaced,
that, you know, basic research is being done by AI
rather than by junior accountants or junior lawyers or
junior consultants. Seriously, you don't think it's going to displace jobs?
Speaker 3 (09:06):
I think it's going to change the way
that we work. And so maybe what needs to change
is the jobs that we have been doing and the
way that we go about doing them, right. So when
I think about my role as
a marketer, and I think about the way that AI
(09:28):
over the past two years, a year and a half,
and particularly over the last six months has really taken off.
It is not necessarily displacing roles, but it is allowing
teams to scale faster. So, for example, if you are
creating campaigns or programs and you need to do it
(09:50):
at scale, you know, quickly, to be able to take
advantage of a certain opportunity in the market, for example.
You could build, for example, templates, right, that allow you to get to
market faster. Those templates can be uploaded into an AI
tool and used to help with that second or third
or fourth piece of content that you need to deliver.
So it's not doing everything, but it is certainly helping
you with scale and speed.
You still need a human at the end of all
of that to review it, right, to do a brand
check to make sure that you know everything is audited correctly,
that it is representative of the organization correctly.
So I think the role the marketer is going to play is going to be different, right.
So I don't necessarily think it's going to be like
we're all going to move to AI, and we're all
going to lose our jobs. I think our jobs are
going to change and shift.
Speaker 2 (10:53):
So Geoffrey Hinton, who is, you know, the gentleman from
U of T that won the Nobel Prize for being
supposedly the godfather or grandfather or whatever it is of AI,
has said that he thinks that job displacement is going
to be massive. He's really worried about it. And he said,
in response to the question about what would you tell
your own kids or grandchildren, he said, to be a plumber,
(11:15):
because you need a skill that can't be replaced by
AI or a robot. And he was worried about...
Speaker 3 (11:22):
A smart man.
Speaker 2 (11:23):
Yeah, what do you think of that?
Speaker 3 (11:24):
Yeah, I think he's a very smart man. I'm not
going to sit here and disagree with him by any means,
but I do think that there's an opportunity to adjust
and to reskill and to still be gainfully employed.
Speaker 2 (11:39):
So I think the interesting thing, and you're probably potentially
the survivor in this whole process, because of the creativity that
hopefully goes into marketing. When you think about a large
language model, what it's doing, which is what AI is,
is predicting the next word based on, sort of, you
know, how words have been connected together in the past,
(12:02):
and so what it's doing is effectively doing what has
happened the vast majority of time in the past, which
means that if you ask it for a marketing program,
it's going to give you something very similar to what
has been planned before. Like, if you used it for, or
suggested, a framework or a program or something
like that. It's going to be the framework or the
(12:22):
programs or the pitch or the whatever that has been
done the most in the past, because that's what it's
going to predict is the right thing to do in
the future. What I think it's going to be challenged
to do is come up with something new and innovative
and creative. Isn't that the role that hopefully the humans
will still have, to come up with something creative?
Speaker 3 (12:42):
Yeah, the creative and the strategic side is not going
to be displaced by AI.
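[Editor's note: a minimal sketch of the next-word prediction Brian describes above, assuming a simple bigram counter rather than a real large language model. The training text and function names are hypothetical; real LLMs use neural networks over tokens, but the "most frequent past continuation wins" intuition he is gesturing at is the same.]

```python
# Toy illustration of "predict the next word from past co-occurrences".
# A bigram model counts which word followed which in its training text,
# then always proposes the most frequent continuation -- which is why,
# as discussed above, it tends to reproduce what was done most often before.
from collections import Counter, defaultdict

training_text = (
    "launch the campaign launch the product plan the campaign "
    "plan the budget review the campaign"
).split()

# Count next-word frequencies for every word seen in training.
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the historically most common continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # -> "campaign": the most common past continuation
print(predict_next("plan"))  # -> "the"
```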
Speaker 2 (12:47):
And did you talk about that at the conference?
Speaker 4 (12:50):
We sure did, We sure did.
Speaker 3 (12:52):
We talked a lot about treating AI as technology, not
magic, and applying discernment, right. And so the discernment piece
has to come from humans: avoiding blind use, focusing on
situations where it generally offers positive impact. Right, So that
(13:13):
discernment piece, which is, you know, only the human aspect
of it, needs to be added in, right. Be cautious,
you know, have an approach that's precautionary. You know, implement
due diligence again, prioritize inclusiveness, fair benefit sharing. Right, all
(13:33):
of these things were discussed around the use of AI.
Speaker 2 (13:37):
Fascinating conversation. We're going to take a break for some
messages and come back with Carrie Ann Mercer talking about AI.
We're talking about something called, what is it, the Cambridge
Society or Cambridge Forum? The Cambridge Forum, I apologize. And
her marketing experience with both, just after a
couple of minutes for messages. Back in two with Carrie
(13:57):
Ann. Stay with us, everybody.
Speaker 1 (14:00):
Stream us live at sauga960am.ca.
Speaker 2 (14:20):
Welcome back, everyone, to the Brian Crombie Radio Hour. We're
talking about marketing. We're talking about AI. We're talking about
something called the Cambridge Forum. Our guest is Carrie Ann Mercer.
She is a global marketing leader. She is working as
the fractional Chief Marketing Officer and executive advisor to an
organization called the Cambridge Forum that had a big forum
(14:41):
in Cambridge, UK recently, and they talked about AI, AI
for good, I guess, was what you were saying, and
you mentioned it was for inclusiveness. So I've got to
ask you how AI can be a force for inclusiveness.
And there was a big report that you might have seen
that came out this past weekend that said that the
(15:04):
Internet and AI has a gender bias, and the gender
bias is a male gender bias, and so if you
ask it who's won more soccer awards, it'll come out
with a male name and won't think about the female
name, even if the female athlete has won more soccer awards,
(15:25):
et cetera. And the same in almost every aspect
of society. So it has a male gender bias,
and so therefore they're trying to figure out what other
biases might it have. And the hypothesis is that it probably has
a white racial bias, it may have a Christian religious bias,
et cetera. If the Internet, and therefore AI, because
(15:50):
it is gathering its knowledge from the Internet and that's
how it's built its large language model, has these biases,
how can it be inclusive?
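[Editor's note: a minimal sketch of how the bias Brian describes can arise. The corpus and counts below are hypothetical; the point is only that a frequency-based system reproduces whatever skew its training text contains.]

```python
# Toy sketch of how a model trained on skewed text inherits that skew.
# In this hypothetical corpus, male athletes are simply mentioned more
# often, so a frequency-based "who won?" answer comes out male even if
# the female athlete actually won more awards, as described above.
from collections import Counter

corpus = [
    "he won the award", "he won the award", "he won the award",
    "she won the award",
]

# Count which pronoun co-occurs with "won the award" in the training data.
pronoun_counts = Counter(sentence.split()[0] for sentence in corpus)

# The model's "answer" is just the most frequent association it has seen.
answer = pronoun_counts.most_common(1)[0][0]
print(answer)  # -> "he", purely because of the skewed training text
```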
Speaker 3 (16:02):
It's a great question. So we really discussed
the need for inclusivity at all levels at the forum,
and that's why we had, you know, folks from the United
Nations there, from UNESCO. Right. So we had a couple of
female speakers. Doctor Hayat Sindi was one of them, and she, you know,
is an innovator and an ambassador to the United Nations
for promoting women and girls in tech. And so we
need to have people from all diverse backgrounds part of
(16:43):
the conversation of what we need to do with AI
moving forward. It is moving, you know, at rocket speed,
lightning speed, right. And you're right, today it has been
historically built and developed by, you know, an engineering
group and a math and science group that has been historically white males.
Speaker 4 (17:11):
Right.
Speaker 3 (17:12):
And so when I talk about inclusivity and when I
talk about the use of AI for inclusivity, one of
the examples that I shared with you was the company
that's launching in November, called Calm Shore, and they're serving
underserved communities with insurance products that traditionally
(17:38):
have not been, takaful. Takaful is for Islamic
finance. So within Calm Shore, they're serving a community of Islamic and
(17:58):
faith-based folks who are looking for insurance products that
align to their faith. And they have been historically not
able to, you know, partake, and that's where the insurtech
comes in: the, you know, ability to insure, or buy insurance
products, that align to their faith. And so Calm Shore,
because they are leveraging AI, and because they give back
(18:21):
to the community, and it's a shared risk, and it
follows, you know, the principles of takaful. And
I'm not an expert on takaful at all, so
if you want to get into that, I
will certainly invite someone else to come and speak about that.
But it follows the basic principles of, you know, shared
(18:42):
risk and shared community of takaful. And so using
AI, for them, allowed them to do that, because
it reduced the amount of cost it takes
and the amount of time it takes to onboard someone
for a new insurance product. And so, you know, there's also
financial insurance products and financial investing products that
AI is being leveraged for, for finding companies for investing
purposes that are inclusive of those who want to have a
faith-aligned choice, or takaful, for example.
Speaker 2 (19:21):
Example, and maybe not an AI. But the algorithms in
social media have you know, everything that's saying. They take
you down the rabbit hole and they end up showing
you more and more stuff that you want to see,
confirming your bias, so that you're hooked and you listen
and watch more. And that's what why you know, numerous
people are saying, we got to get rid of these algorithms.
And so my question is, can you have an ethical algorithm?
Speaker 3 (19:47):
I would say yes. And I'm going to, you know,
take the optimistic side and say yes,
I think that the algorithms and the tools can be ethical.
It's in the hands of whom they're with.
Speaker 4 (20:04):
That's what's unethical. Right.
Speaker 3 (20:06):
So it's that human piece again, right. If you take
a tool, good or bad, and you put it in
the hands of someone who's evil, you're going to get evil.
Speaker 2 (20:15):
It's just, they're just trying to make profit, and so
Facebook and, you know, other people, all they want
to do is get you to keep watching. So, you know,
the ethical thing, or the natural thing, the thing
that would have happened in society before we had algorithms,
would be: if you're a liberal, you would be shown
conservative information; if you're conservative, you'd be shown liberal information.
But people get pissed off about that and they'll turn
(20:36):
off their system and not watch it. But that's what
we should do: we should be confronting people with
other points of view.
Speaker 3 (20:43):
Yeah, I mean, social media is, you know, a whole
other can of worms. If we want to talk
about social media, what's happening on social media, right, particularly
lately with politics, you know, in Canada, south of
the border, around the globe, right, that's a whole other
can of worms. We would need a couple other shows,
(21:03):
Brian to really dive into that one.
Speaker 2 (21:06):
Okay, so EthicRhythm. Yes, trying to change the algorithms
in social media. They're consulting to people in regards to
the algorithms in AI. Is that correct?
Speaker 3 (21:18):
Right. So what they're doing is
they're taking AI tools, like, for example, a company called
Palantir Technologies that does process automation, process intelligence. It helps
with automating certain processes within your operations, right. It's mostly
focused on large enterprises. You know, for example, say supply
(21:43):
chain management, or say like a construction business. So it
brings together and analyzes vast amounts of data quickly, so
where a human might spend days trying to go through it,
bring it together, make sense of it. Some of the
AI tools allow you to get through that in you know,
hours, for example, right. So then it's allowing the leader,
(22:08):
say the VP of operations, to make decisions for her or
his business faster, so they're able to act in real
time for their business, which, at the end of the day,
could, you're right, make profitability better.
Speaker 2 (22:23):
So we've talked about two of the companies that you're
involved with as well as the Cambridge Forum. There were
numerous different sessions, a couple hundred people at this forum.
What were some of the other interesting sessions and topics
of discussion?
Speaker 3 (22:38):
Oh boy, there were a number of panel discussions.
I shared with you the story of Doctor
Hayat Sindi. She is the founder and CEO of the
i2 Institute and the ambassador for STEM at UNESCO.
She shared a story about growing up in innovation. She was born in Saudi
Arabia in the seventies. She loves science, so you
can imagine a female in the seventies in Saudi Arabia
wanting to learn about science. She told a story about
how she was begging her father to let her move
(23:21):
so she could learn and study math and science.
Well, it ends up he agreed. She now has a PhD in
science from Cambridge University, and
her story was one of resilience and hope, not just
about you know, women in STEM, but about you know,
courage to break boundaries, courage to, you know,
just move forward and keep pushing through to bring, you know,
(23:49):
new innovations to very diverse groups around the globe.
Speaker 2 (23:54):
Okay, that's interesting.
Speaker 4 (23:56):
Yeah, yeah, I love that.
Speaker 3 (23:59):
There was another session by a gentleman by the
name of Doctor Ramit Debnath, and he spoke about
designing responsible AI for public good. And here he
talked about, and I've got to get this right,
(24:22):
responsible AI: designing it not just from a technical perspective,
but from a design perspective and from a community perspective, right.
So extracting public good from it. So think beyond just
the technology; think about the implications of it in society,
(24:42):
and, you know, the concept of responsible AI and
how do we get responsible AI, because that's what's going
to drive trust, right. So if responsible AI can
drive trust, then it can be effectively applied to help
communities and to, you know, help the inclusivity of
(25:07):
you know, underserved populations, for example. And he also
talked a lot about the impact that, you know, AI
has on the environment, and it isn't always
(25:27):
the best, right. So the real-world impact, like for
climate and urban planning, for example: the processing power
for using those very large, you know, computer
models to get to the information. You have to be,
(25:48):
again, discerning when you're using it.
Speaker 2 (25:52):
Do you think people are?
Speaker 3 (25:55):
I think some people are. I mean, we saw this
group in Cambridge all get together; one hundred and fifty
people there were trying to do that.
Speaker 4 (26:02):
Absolutely.
Speaker 2 (26:03):
Yeah. I'm going to take a break for some messages,
and when we come back I'm going to ask you about, you know,
can AI be ethical? Can AI be a force for good?
Because I think this is the big issue. Stay with
us, everyone; back in two minutes with Carrie Ann Mercer talking
about something called the Cambridge Forum, where people from around
the world got together for a couple of days talking
(26:24):
about really using AI for good and inclusivity. Stay
with us, everyone; back in two minutes.
Speaker 1 (26:35):
No radio? No problem. Stream us live at sauga960am.ca.
Speaker 2 (26:40):
Welcome back, everyone, to the Brian Crombie Radio Hour.
My guest tonight is Carrie Ann Mercer. She's a marketing expert,
has been for a long period of time. She's currently
working as a fractional Chief Marketing Officer for a
(27:02):
couple different organizations, one of them being something called the
Cambridge Forum, which got together a whole bunch of people
from around the world in September to talk about really
AI and AI being a force for good. So I'm
gonna press you again if I could. Oprah recommended a
summer read; every year, I guess, she picks one
book to be a summer read, and this one was
something called Culpability. I'm not sure if you heard about
(27:25):
it or read it, but it was really about how
AI can screw things up and AI became very friendly
with one of the members of a family and really
took that member of the family down a bad hole.
AI was driving an autonomous vehicle and the autonomous vehicle
may have made a bad choice. And really what it
(27:48):
is, is it's confronting you, all of us, all the readers.
And actually Oprah ended up having a whole bunch of
people get into book clubs to discuss it, about:
can AI be good? And so, you know, when
Geoffrey Hinton warns us that we're all going to lose
jobs, or a lot of people will lose jobs, when
he says we need to have, you know, global governance
(28:11):
for AI. When this book comes out and really challenges
people in regards to, uh, you know, what AI can do,
when at the same time, the Big Beautiful Bill, supposedly,
that Donald Trump passed in the United States actually reduced regulation
for AI, because his argument was that if we regulate it,
then China will end up beating us, and so therefore
(28:32):
we need less regulation of AI. You know, are your
people that are talking about goodness, building community and inclusiveness
in AI, are they just the woke weirdos when the
reality is this is going to just cost everyone a
lot of jobs. And really, you know, some people have
suggested it will almost create a human race that is
different than the current human race.
human race that is different than the current human.
Speaker 1 (29:02):
I hope not.
Speaker 4 (29:03):
I don't believe that to be true.
Speaker 3 (29:04):
You know, I spent time over a week in Cambridge
and planning, you know, for six months for this conference,
and you know, we're able.
Speaker 4 (29:16):
To work with a lot of these people, work with you.
Speaker 3 (29:19):
Know, a lot of these really great smart thinkers from
around the globe, from you know, very well accomplished you know,
PhDs from Cambridge universities, ambassadors to the u N you know,
UNESCO policymakers, entrepreneururs who are looking for ways to make
(29:39):
good of all of this, right, And so, do I
think we have challenges?
Speaker 4 (29:43):
I had? Absolutely?
Speaker 3 (29:45):
Do I think that there is a path forward. I'm
hopeful that there that there is, and I don't say that,
you know, I'm not naive to think that there's gonna
be really horrible things that are going to happen. We
see it, right, We're seeing it. We see it in
the news every day with you know, the use of
(30:05):
AI and.
Speaker 4 (30:05):
Children right right.
Speaker 3 (30:08):
So one of the panels that we had is: what
do we do with AI? What do we actually do
with it? How do we move forward with it?
And there were a number of key takeaways from this
panel and I want to share a few of them
with you because this is how they're thinking and some
of you know, the global leaders are coming together and
thinking about this. So again, as they mentioned: treat AI
as a technology, not magic, and apply discernment. The second was,
you know, implement due diligence and a precautionary approach, right,
(30:39):
So before widespread deployment, because of the potential harms of AI,
you know, put mechanisms in place to test it, right,
controlled setting testing, not just a free for all here
and there, which is really a lot of what you
see at times with OpenAI, right. So be mindful and
thoughtful of prioritizing inclusiveness and fair benefit sharing.
So everyone potentially is going to be affected by AI
should have a voice, right. If you're going to be
impacted and affected by it, you should have a voice, right.
Not just direct users, not just innovators, and not just the developers.
This includes, you know, considering those indirectly impacted, right,
educators for example: what does this mean for teaching in
the classroom? Right? So there needs to be some kind of
shared and equitable benefit of leveraging these tools, right.
Ensuring that the data is created in less biased ways.
The next is prevent strengthening of existing technology oligarchies. This
one is really important if all of the innovation and
all of the development is held by a few companies.
(32:00):
It's dangerous, right? So, ensure human-centered AI is in education,
but not too early, right; children need to not be
involved in early concepts of AI. It should be used later.
You know, the other discussion and takeaway in this particular
panel was: avoid overreliance on AI in education, right.
So AI should not replace teachers; it should not
replace learning. It is there to use like anything else,
(32:44):
like searching through you know, the library. You could use
AI to search through the library for particular articles that
you're researching, for example. The other thing they talked about was
educate AI developers on ethical issues and impacts, and regulate and
hold AI creators liable for harms caused. Right, there is
(33:09):
something happening in the US right now, and I don't
want to mess this up, because it's very serious. It's super sad.
A teenager committed suicide and his suicide plan was created
using OpenAI. Terrible, it's horrible, right.
So one of the things that came out of this
was regulate and hold AI creators liable for harms caused,
and avoid using AI in high-risk situations for now, right,
and keep AI away from children, right. So what are
those guardrails that we need to put
around AI as a community to safeguard others. And so
(33:55):
those discussions were all coming up at the Forum, right,
and those discussions need to be had, and they need
to be had continuously, because AI is not going to
slow or stop. So we all need to work together to
figure out the boundaries around this to make it safe.
Speaker 2 (34:15):
But come on, it seems like almost everything we're doing
right now is the opposite of these nice principles that
you were talking about. So you mentioned that you don't
want to have oligarchs in control at all. You know,
something like five companies focused on AI are the majority
of the stock exchange right now, and the most valuable
company is a company that's providing the chips for AI,
(34:38):
and all of them sat behind the president at his
inauguration. And people are, you know, constantly worried about
tech bros, and the tech bros are the oligarchs
that are taking over the world right now.
So, you know, if anything, we're going in the opposite direction
from what you said your smart people at
the Cambridge Forum were suggesting. You talked about education and
the impact on education. You know, I've heard numerous
different teachers and educators concerned about students that
have stopped thinking, and, you know, they're getting
AI to write their essays for them and do all
the research for them, and that they are, you know,
scrambling to try to catch up
with some means of trying to evaluate whether or not
with some means of trying to evaluate whether or not
(35:19):
AI was used to to.
Speaker 4 (35:21):
Create that essay.
Speaker 2 (35:22):
You talked about regulation and the need for regulation, which,
as I mentioned, Jeffrey Hinton, the Nobel Prize winner, was
arguing for. But we're going the opposite direction. The United States, China,
et cetera. Are deregulating AI and not allowing people to
put regulations on AI. And then you know, you talked
about the risks or risk high risk communities. You know,
there's more and more conversations about how AI is becoming
(35:46):
people's friends and their closest confidants, and people are going
to AI for relationship advice, and people that have got
mental health issues are going to AI. It's everything is
happening is the opposite of what you're suggesting.
Speaker 3 (36:03):
We need no no the stakes are high right. Technology
is reshaping society at crazy speeds. We need to have
these conversations. Groups like the Cambridge Form have to come
together and discuss this and bring this to the forefront
and come together collectively to solve for what's happening. We
(36:26):
don't have a choice, right And so when I say
I'm hopeful and I'm inspired, it's because we have groups
of like minded people that want to do good and
they're coming together to talk about these things. They're not
just letting it happen. They're coming together. They're getting in
a room, they're collaborating, they're disagreeing, and they're they're looking
(36:50):
for a path forward that aligns with their ethics and morals.
Speaker 4 (36:55):
I think that's a good thing.
Speaker 2 (36:56):
I want you to read this book Culpability, and then
have a well review with you and see what you
think after reading that book. Karen, can you tell me
a little bit about you? You know, you you've spent your
career in marketing and uh and now you're what a
fractional chief marketing officer for several different organizations.
Speaker 3 (37:15):
You know.
Speaker 2 (37:15):
That's one of the other things people say: that
the future of our employment is going to be that we're
going to have a couple of different gigs, that we're
not going to have long-term careers. Is that
where you are and what your career has been?
Speaker 3 (37:29):
My career has been both in-house marketing executive
as well as fractional chief marketing officer. So I like
to work for and do work with companies that are
culturally aligned to who I am.
Speaker 2 (37:47):
Right.
Speaker 3 (37:48):
I like strong leaders, I like moral leaders, and so I like to
choose and give my intellect, which is my intellectual property,
to organizations that I feel really are trying, or the
leaders are trying, to do good, right. And so, in my career,
I've been lucky enough to grow up in tech
(38:11):
in Ottawa. You know, one of my first jobs in
tech was at Corel Corporation way back when. And so
I have grown up as a marketer in tech.
At this stage in my career, twenty-five years later, I prefer fractional work.
I have done in-house roles. I spent a number
of years with Entrust in cybersecurity as their, you know,
global vice president of brand. And at this point, for me,
I like the fractional work. I like
to have the flexibility to choose to work with whom
I choose to work with. And I also like just
the diverse types of marketing that I get to
do with each of my clients.
Speaker 2 (39:03):
Do you think that's going to be a trend in
our society that most of us or more of us
are going to be doing fractional work and have numerous
different gigs?
Speaker 3 (39:10):
I don't know. A lot of my colleagues who are
coming towards, you know, this stage of their professional career,
right, like they've kind of, you know, made it to
the VP or C-level suite.
They tend to round out their career as you know,
consultants and advisors and fractionals. So maybe that's just the
stage that I'm in and it's not necessarily, you know,
indicative of a changing marketplace. I have three children; two
are still in university, one has graduated and works,
(39:47):
and I don't see him looking at this point in
time to, like, servicing different clients. He's with one company
and he likes the culture there, and it's a good
fit for him. So I'm not sure that that's the
way we're going.
Speaker 2 (40:01):
You've got three kids. You've got three kids,
you say; you are, and you have been,
a chief marketing officer, an executive. You obviously travel. Can
a female have it all?
Speaker 4 (40:16):
Can we have it all?
Speaker 3 (40:18):
Define it all?
Speaker 2 (40:20):
Okay, I'm asking you to define it. But you know,
you've got kids, yeah, a family, you travel. I think
you play golf. You are a senior executive. Can you
have it all? Some people think in today's world, particularly females,
can't have it all, that they have to choose family
(40:42):
or choose career.
Speaker 3 (40:44):
Well, I don't know that I necessarily believe that. I think
you can choose to do what you choose to do.
And I have chosen to raise a family. You know,
I have been a single mom for the last ten
years and I still have been an executive and managed
to balance both. Now that's my choice, right.
Speaker 3 (41:06):
My family has always come first. My children have always
come first, and I have learned how to balance out both.
So I think if you choose that, you make
it work, right.
Speaker 3 (41:27):
You know, growing up in tech in Ottawa, which
has been a very engineering dominated community and has been
a very male dominated community.
Speaker 2 (41:37):
Right.
Speaker 3 (41:37):
There have been challenges along the way in my career.
Speaker 2 (41:40):
Absolutely, absolutely. So tell me then about that. Can females
be successful in tech? You know, I don't
know whether you've got a STEM background, but you've
been marketing in tech. Can a female be
successful in tech?
Speaker 3 (41:55):
Well, I feel I've been successful in tech, so I'm
going to say yes. Yep, I'm going to say yes.
I also have you know, many colleagues, female colleagues who
have had wonderful careers in tech. And on the flip side,
I've also had, you know, female colleagues for whom
it wasn't a good fit; they might have
moved into a different industry, they might have moved into
a different role, right. But, you know, certainly for me,
I've had a good career. I'm happy where
I am. I've met some amazing people and have built
some wonderful friendships from the people that I've worked with.
(42:39):
I've been a part of some really great corporate cultures,
and I've also been a part of some not so
great corporate cultures, you know. And I
learned from that, and I keep moving on.
Speaker 2 (42:51):
Can I ask, is it sons or daughters?
Speaker 4 (42:55):
I have one daughter and two sons.
Speaker 2 (42:57):
So, the most important question then, given what we talked about:
the risks of AI and your positivity about how
AI can actually be a force for good,
the challenges with the gig economy, the issues in regards
to layoffs that may or may not happen, replacement because
of AI and stuff like this, and the challenges
(43:17):
that, maybe even more so than anyone else, females might have.
What's your advice to your sons and your daughter about
their employment and what they need to do relative to AI?
Speaker 3 (43:31):
That's a great question. My children
and I always talk about AI. We use it, right?
They've all used different tools. I use different tools.
We talk about it, the good and the bad.
My eldest son is an accountant. He's a tax accountant.
Speaker 4 (43:54):
He doesn't really use AI at all in his work.
Speaker 3 (44:01):
My youngest son is in his third year of a BCom,
and there have been times where he has used
ChatGPT to help him with research backgrounds for a paper.
Speaker 3 (44:18):
Right. So we're precautionary about what he uses it for,
and that he's not replacing, you know, learning the skill set
of thinking, of building his essay, of building, you know,
the story that he wants to tell about the particular
project he's working on, by only relying on AI.
(44:39):
Right, the skill set has to come from him. It has to
come from him first.
Speaker 4 (44:44):
Right.
Speaker 3 (44:45):
If he wanted to use, for example, an AI tool
to find particular citations or articles to back up his
points in an article, I think that's okay, right,
because it allows him to get through vast amounts of
data quickly.
But we certainly don't want it to replace how he thinks about
writing the essay or how he thinks about delivering the project. Right,
you still need that critical thinking.
Speaker 2 (45:13):
Any different advice for your daughter, you know.
Speaker 3 (45:17):
It's interesting because my daughter started out
in data and did not like it at all, and
so she did a one-eighty and is now
in English and she wants to teach and that is
her passion. And so you know, for me, the conversations
are really around, you know: is it in the classroom?
Can it be in the classroom? And so when I
came back from the Cambridge Forum, you know, discussions
were around, you know, there's a lot of fear around
using it for children, and just having discussions around that.
So I don't think that my advice is any different
for my three children. They're just on three different paths; one of them,
(46:02):
you know, will interface with AI a little bit more
and maybe the others at a different stage.
Speaker 2 (46:10):
Thank you so much for joining us. I really appreciate it.
I think this has been an interesting conversation.
Speaker 2 (46:16):
I do. If I could make an advertisement for this book,
Culpability, one more time: it's by Bruce Holsinger and was recommended by
Oprah to you and to other people, because, you know,
I think that one of the things that fiction is
good at is really confronting us all to sort of
live through someone else's life and see what we think.
And here AI really impacted a family.
(46:39):
I'm not going to say it was a negative impact;
it was just an impact. And it's a real question
in a whole bunch of different sectors of life and parts of life. And
I think that what you've been talking about is something
that we all as a as a society need to
talk about.
Speaker 4 (46:56):
Thank you, Brian, my pleasure.
Speaker 2 (46:58):
That's our show for tonight, everybody. Thank you for joining.
I recommend you all get it and try it a
little bit, because just like when computers came, just like
when phones came, just probably like when cars came, just
like when steam engines came, you know, you've got to
try it out a little bit and see what it's
all about, so that you've got an opinion and you've
(47:18):
got some experience. And so I recommend that's one of
the things that one should do. And I think that
it's just as important for people that are more my
age than just young people, because we all got to
think about it and how it's going to impact our
lives and our careers. But I also think we need
as a society to talk about it, and so this
is one good first step in doing so. Back in two.
(47:39):
Stay with us everybody.
Speaker 1 (47:45):
Stream us live at sauga960am.ca.
Speaker 2 (47:48):
Endlessly fascinated and curious about the world around us,
about business, about politics, about the arts, about culture, developments,
social issues, and the people. Most importantly, about the people
(48:10):
that are shaping our future. Every night, I get the
opportunity to have conversations, to dive into conversations with thought leaders,
with change makers, entrepreneurs, politicians, and everyday heroes who unpack
the ideas and stories that I think matter to all
of us. What's the future of our economy, what's the
future of our country? How do we build stronger communities?
(48:32):
What can we learn from music, theater, sports, politics, the arts.
How does policy shape our real lives? What does leadership
look like? Real leadership in today's environment. If it's interesting
and relevant and topical and we're talking about it, you'll
hear about it here at six o'clock on the Brian
(48:52):
Crombie Radio Hour. I want to have real conversations,
in depth conversations. These are ideas we're sharing. This is
the Brian Crombie Radio Hour. Thanks for joining me.
Speaker 2 (49:08):
Welcome back, everyone, to the Brian Crombie Radio Hour.
I wanted to end this show with Wendy Selker on
AI and innovation with a couple of comments of my
own, if I could. I ended up reading this book
that Oprah recommended, called Culpability by Bruce Holsinger. If you've
got a chance, grab it. Oprah recommended it, and
(49:31):
I'm recommending it now. It's one of the best reads
I've had in a long period of time. It's a
fiction book, and it's a book about AI
and how it influences the world and a family
and people's lives, and I loved it. I read
it over the course of one weekend, and
it was a great book. And the reason why I
(49:52):
talk about it is because I think as we think
about AI, we've got to think about how it impacts
our lives. I listened to a great podcast with
Geoffrey Hinton, who is sort of the godfather of
AI. He got the Nobel Prize for AI, was the gentleman
(50:13):
at U of T that sort of invented it and fathered
it, and then went on to Google and really worked
on it. He warned about some of the problems of AI,
and he talks about regulation and the need for worldwide
regulation and a lot of attention to and thought that
(50:34):
needs to be given to it. And you know, this
big beautiful bill that the Republicans and the President have
just passed in the United States argues the opposite: that
you need no regulation, because the Chinese don't have the
regulation supposedly, and so therefore the only way that the
United States can possibly compete with the Chinese is to
have no, or very little if any, regulation on AI.
(50:58):
This book Culpability really puts it all in perspective, because
you have to really think about whether AI should be
unfettered or whether it should be regulated. And one of
the very first examples they give, and I'm not
going to give away the story, but it's right at
the beginning of the book, is the trolleybus conundrum.
And it's: if a trolleybus in San Francisco is running away
downhill and has the option of going to one side
and killing two older people, or going to the other
side and killing a whole family, should it go one
way or the other, assuming, obviously, that it has no
other options? And so, what's
(51:43):
the value of a life? What's the value of relative lives?
It doesn't say it in the book, but
it's sort of suggesting that AI has got to be
trained by us to make these moral, ethical decisions,
decisions that most of us would be in a
(52:05):
quandary to actually make if we were ever confronted by them.
But it has to sort of be designed in, programmed in,
to AI, and it may learn things over time that
create its programming. And so how do you give an
algorithm ethics, morality, goodness?
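[Editor's note: a deliberately simplified sketch of the trolleybus choice written as a programmed rule. This is hypothetical, not how any real autonomous vehicle is built; it only makes concrete the point above, that someone must write down explicit moral weights before the machine can decide.]

```python
# An uncomfortable sketch of the trolleybus conundrum as code. Any such
# rule forces a programmer to encode explicit values for human lives --
# which is exactly the moral quandary described above.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    lives_lost: int

def choose_path(left: Outcome, right: Outcome) -> Outcome:
    # A purely utilitarian rule: minimize lives lost. Every alternative
    # rule (weighting by age, by family ties, ...) is a moral judgment
    # someone would have to program in before the vehicle ever moves.
    return left if left.lives_lost <= right.lives_lost else right

left = Outcome("swerve left: two older pedestrians", lives_lost=2)
right = Outcome("swerve right: a family of four", lives_lost=4)
print(choose_path(left, right).description)  # the rule picks the left path
```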
(52:34):
I think it's a real issue, and it really comes
out in this book, and so I highly recommend it
if you want to have a good read and a
great story and at the same time confront some of
the challenges that I think are going to be critically
important for us in the future. That said, you know,
Wendy I think has really raised a lot of the
(52:56):
issues in regards to how critically important AI is.
She says only twenty-five percent of companies
are spending any time on it; seventy-five percent think
it's irrelevant to their lives. But yet she thinks it's
going to be bigger than the Industrial Revolution in the
way it changes our society, our lives, our companies, our jobs.
And I think that that's true. And I think that
(53:18):
as Bill Gates has said, you know, give it a try,
communicate with it, try it out. I think that we,
even old people like myself, have got to end up
making some use of it and trying it out, checking
it out and seeing what it can do. If it's
this powerful a potential help, app, technology, vehicle, whatever
(53:42):
it is, I think we need to understand it, regulate
it appropriately, adopt it, adapt it, adapt
to it, and think about how we make it good,
how we give it ethics, give it morality, give it goodness,
give it empathy, because I think that's what makes humans human.
If these things are going to be making a whole
bunch of decisions, we have to try to have them
(54:13):
think a little bit more like us, which may not
be completely logical at times, but I think is critically
important when making life-and-death decisions and
a lot of others. Anyway, I think it's a great book.
I think Wendy was a great interviewee about where AI
is going, and I think AI is the future. Get
(54:40):
on it, do what Bill Gates says, and just give
it a try, see what you think. Anyway, that's a
couple of thoughts for me to end my story on
AI and innovation today. Brian Crombie for the Brian Crombie
Radio Hour. Thanks for joining me. Grab me on podcasts,
on YouTube, on social media, or on my website, briancrombie.com.
Thanks, and here on Sauga 960 every night at six o'clock.
Speaker 1 (55:08):
Stream us live at sauga960am.ca.