Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to the World at Work podcast, where business leaders and job
seekers come together to create winning cultures and fulfilling careers.
I'm your host, Katie Currins, and I am here with Tim Dick, founder of
Best Culture Solutions. Tim, good to be with you again.
(00:22):
Yeah, likewise.
We're
up bright and early on a Monday
morning talking about the world at work. So how have you been? I have
been good. Very busy, but in all the right
ways. So looking forward to maybe catching
(00:42):
my breath. But I guess when things are good, I should just enjoy the ride,
right? Yeah, just go with it. Just go with it.
Yeah. No, I was just thinking in my brain, speaking of rides: have you heard all the rumors swirling around about Disney? There's just a constant stream. With the Muppets coming, all of a sudden there were rumors about it taking over
(01:05):
the Hall of Presidents. There was a rumor that Carousel of Progress was closing. I'm not sure I can keep up with what's rumor and what's truth anymore. Have you been seeing any of these crazy clickbait headlines?
Yes, and it's funny you say that, because it's true. On any given day, you can find a clickbaity headline about anything
(01:29):
that's going on at the theme parks. For those who don't know us or the podcast super well yet, Katie and I are both former Disney cast members, so we often like to talk about what's going on there to set the table. But on any given day, you can find headlines that say something is closing, and it makes it sound or look like something very different from what is really
(01:50):
happening is happening. For those of you who do follow along on this stuff: is the Carousel of Progress closing? Yes. But is it closing-closing, like they make it sound? No, it's closing to get updated. They often put those types of headlines out there and make it sound like something it isn't, and it draws people in. Yeah. Yes, yes.
(02:12):
It's always something. It could be as simple as a sign going down just to get touched up with a paint job. But what's troublesome to me is how many times lately I have had people message and say, oh my gosh, have you heard this? And they'll send a screenshot, and the information they're getting, and I use the word 'information' loosely lately,
(02:34):
is coming from an AI overview in their Internet search browser. So it's not even anything specific; it's being pulled from possibly anything: blogs, chat groups, these unofficial sites. I've just noticed an uptick in the reliance on this
(02:56):
AI overview. And while I have used AI in a number of ways, personally and professionally, it's starting to creep in more and more, even at work. So I was curious, from your perspective, how you're seeing AI evolve in the workplace, and where companies are really turning
(03:18):
with AI at the moment. Yeah, good question.
So, with us setting the table by talking about AI as something that can lead us to bad information rather than the right information, let's get real: AI has been in development for years, if not a decade or
(03:41):
decades. People have thought about tools like AI for a long, long time, and it's been developed for a long, long time. But really, it's only been in common use in the workplace, or by everyday people, in the last three, four, five years, as it has become more
(04:03):
widespread and kept growing. And now that we've had a chance to use it at that widespread level, we're learning things about it. The reason it's top of mind for me right now is that we have been working with companies lately to figure out how to
(04:24):
use AI effectively and what the right, ethical way to use it is. And what we're finding is a lot of things. First and foremost, AI is not always accurate. So the example that you used, where AI was asked to scour headlines for happenings at a Disney theme park, only for that scouring to give you inaccurate information, that's a very
(04:46):
low-stakes mistake. But we're finding that a lot of the time, if you use AI to gather a bunch of information or do a bunch of research for you, it often returns information that is not accurate, because it's just scouring headlines, and it doesn't know yet how to properly weed out the bad stuff or to get
(05:08):
it right. The biggest challenge we're finding with AI is that when it's finding information on the Internet for us, it's not getting us accurate information. To give you an example, I once tested AI to do some research on Carol Quinn and motivation-based interviewing. What it returned to me was very boilerplate; it
(05:30):
used a lot of buzzwords around recruiting and interviewing and tried to attach them to motivation-based interviewing. And it was all wrong. It was basically leading you down the path of the more commonly known behavior-based interviewing. For those who have studied motivation-based interviewing, or MBI, the reality is that it is
(05:52):
designed to close the gaps and holes that exist in behavior-based interviewing. So for me to get an answer that basically suggests elements of behavior-based interviewing means it was entirely incorrect. It was pulling out whatever it could find about MBI, or just cherry-picking high-level ideas about recruitment, and it was
(06:14):
completely wrong on motivation-based interviewing, how it works, and a lot of that. More and more, people are finding that when you go into AI and use it for these things, you're getting information that might not be correct, just like you pointed out with some of the theme park stuff.
Yeah. And it's
interesting because I've been working with someone on a grant,
(06:36):
and as we've been working on the grant, we were trying to see if there were ways we could incorporate different examples of past grant opportunities or other pieces. So we thought, well, let's compare what we get from a Google search versus an AI search. And to your point, half the articles the AI first populated
(06:58):
were not even real research studies. They were completely made up. And when we would push it, give it a little nudge, like, are these real or did you just come up with them? It might say, well, these are real researchers, but the study is not real; it wanted to make the study align with what you were looking for. It was garbage. Same thing with numbers. If you ever
(07:19):
expect simple math to be done, it will give you some really convoluted, oddball correlations with numbers. It might say, we're going to try this for 15 different locations, which would reach a total of 'blank' number of people. And the number it might give you is
(07:41):
very arbitrary. It's not based on the size of the businesses. That was a big red flag in this process: don't rely on it. There are certainly ways I have seen AI add value, whether it's for summaries or things like that. But, yeah, it's a very cautionary tale.
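To make that concrete: the antidote to an arbitrary AI total is a one-line recomputation of your own. A minimal Python sketch of that check; every figure below is made up for illustration and is not from the grant discussed.

# All numbers are hypothetical; the point is the independent recomputation.
location_reach = [120, 85, 240, 60, 150, 95, 310, 70, 180, 45,
                  200, 130, 88, 160, 110]  # 15 locations

ai_reported_total = 2500  # the arbitrary figure the AI asserted

actual_total = sum(location_reach)  # the independent check is simple arithmetic
print(f"AI claimed {ai_reported_total}; recomputed total is {actual_total}")
if actual_total != ai_reported_total:
    print("Mismatch: don't trust the AI's figure without checking its work.")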
(08:04):
Now, from my understanding, if somebody is looking for a job and uploads their information to the many different platforms, don't those platforms typically use an AI-type filter on that information? Or is that another rumor that's out there?
No, you're right, and that is nothing new. I
(08:25):
mean, that has been around for a while; it's not even really an AI function. A lot of times when you apply online for a job, it goes through an applicant tracking system, which is basically a tool. How do I put it? If you were to think back to when people would apply for
(08:46):
jobs, we would get a stack of resumes for the jobs people were applying for. We'd have to go through the stack and decide who we wanted to interview. Instead of giving you a physical stack of resumes to go through, the applicant tracking system automates that process. It's like a virtual stack of resumes.
(09:09):
Where I'm going with this is that, to your point, a lot of times, when a resume comes in with the right keywords, the ones matching the role you're hiring for, these applicant tracking systems will put it to the top of the stack and flag it for the recruiter. I've seen other systems take
(09:33):
a different approach: they might not put resumes at the top of a stack, but when an applicant applies, they'll tell you how good of a fit that person is based on keywords. And I'm actually not a fan of that. Here's the reason. Don't get me wrong, it's useful; I will use that tool, and I will look at resumes that
(09:53):
it flags as strong, sure. But the reality is that you can't just look at those resumes; you have to look at all of them. Sometimes somebody just doesn't use the right keyword, and you might miss a great candidate because they forgot to use one single word. You need to make sure you're not selling
(10:16):
yourself short: if you only look at the flagged resumes, you might miss good candidates.
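To make the keyword mechanics concrete, here is a minimal Python sketch of that kind of flagging. The keyword list, scoring rule, and resumes are hypothetical; no particular applicant tracking system is being described.

import re

# Hypothetical keywords for the role being hired for.
ROLE_KEYWORDS = {"recruiting", "interviewing", "onboarding"}

def keyword_score(resume_text: str) -> float:
    """Fraction of the role keywords that appear in the resume."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(ROLE_KEYWORDS & words) / len(ROLE_KEYWORDS)

resumes = {
    "candidate_a": "Experienced in recruiting, interviewing, and onboarding new hires.",
    "candidate_b": "Ten years leading talent acquisition and new-hire integration.",
}

# The "virtual stack": resumes with more matching keywords float to the top.
for name in sorted(resumes, key=lambda n: keyword_score(resumes[n]), reverse=True):
    print(name, round(keyword_score(resumes[name]), 2))

# candidate_b scores 0.0 despite describing the same work in different words,
# which is exactly why every resume still needs a human read, not just the
# flagged ones.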
But there are other things about AI. You mentioned using it for calculations and things like that. I'm not saying AI should never be used in the workplace, but you have to monitor it closely and challenge it. We have used it before for things like
(10:36):
macro-level workplace studies, trying to figure out how many people are in a certain role and things like that. Those are really tough studies to do and to get an exact number from. We've been able to use it to calculate some of those things, but you have to watch very closely and very carefully and challenge it over and
(10:57):
over again to make sure it's getting you accurate numbers and accurate data. It can be used, but it's still going to take a lot of time, especially for overly complicated things. You need to dig deeply into how it arrives at calculations, or how it sources its information,
(11:19):
and challenge it to go deeper if you need to, to make sure that it's correct. That is one thing we're seeing with AI. Another thing we're seeing in the workplace, and this is something I find a lot of people don't consider when they use AI, is what happens when you put information into an open
(11:41):
AI system. What I mean by that is that a tool like ChatGPT is open in the sense that it can take everything every user across the world puts into it and use it to iterate on its knowledge. It's using all the information you put into it to improve itself. And so
(12:05):
because of that, whenever you put data in there and the tool uses it to improve itself, that data effectively leaves your control. Where it gets a little bit tricky, and where you have to be very careful, is when you're using the intellectual property of a company you're working for, or you're using
(12:27):
statistics that belong to that company and aren't public, or people's personal information. Once you enter it into AI, it now exists out in the universe. The information no longer just belongs to you. And so what we
(12:49):
are seeing is a lot of organizations saying: do not put information that is not meant to be in the public sphere into an open AI platform. That means confidential information, intellectual property, internal data, things like that, because it gets used, and
(13:12):
people are learning more and more that when you use AI that way, you're actually putting that information out there for everybody, or for the system to use publicly. And that's not good.
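One concrete shape that rule can take is a pre-flight check that scans a prompt for obviously sensitive content before it ever reaches an external tool. A minimal Python sketch; the patterns and the blocking policy are illustrative assumptions, and a real policy would need to cover far more.

import re

# Illustrative patterns only; a real policy would be much broader.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "confidential marker": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarize this CONFIDENTIAL report for jane.doe@example.com"
hits = check_prompt(prompt)
print("Blocked: " + ", ".join(hits) if hits else "OK to send.")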
Now, I love that you mentioned that, because it sounds like common sense when you say it, but it's so easy to just go through the motions and get
(13:35):
comfortable with this tool. That's a hot conversation in healthcare: looking at ways to support patients and patient care without sharing very personal information. Education, same thing. There are a lot of great resources for lesson design or for
(13:58):
different student interventions. But when you use them, know that you might want to give a vague scenario rather than uploading very explicit information about the people you're wanting to provide these resources for. Because once it's out there, it's out there, and it can be a very slippery slope.
(14:19):
And I have definitely heard the conversation amping up around workplace policies: looking at what needs to be considered in a policy, versus essentially saying, no, we're not using AI, which, quite frankly, I feel is just not realistic.
No, it's happening. It is.
And
(14:42):
are you involved at all with companies as they're working on that? Have you been part of those conversations?
Yeah, 100%, we have been. There are a few things we think about now with AI and how to use it. I think the first is that we challenge its sources and we include its sources.
I'm going to ask you a question
(15:03):
about when you say challenge it. I'm so sorry, I don't usually like to interrupt, but I want to make sure, because I feel like I have an idea of what you're saying since I've played with AI. For somebody who may not have been involved in much AI use, what does it mean to challenge it? You mentioned that before with the numbers, and I personally get a little bit of pleasure
(15:24):
out of this back and forth. But when you say
challenge it, can you explain more of what that looks like? Sure,
yeah. That's a good question. We talked earlier about how sometimes AI gives us information that isn't accurate, or that makes something sound like what it isn't, and we used the specific example of Disney theme park news. But the
(15:46):
reality is that it happens everywhere. We started with the Disney example, and I moved into the example around motivation-based interviewing, where it gave me bad information. It was just wrong. But if I hadn't already known that, I would not have known. What I'm saying is that sometimes you might ask it for information about something you're
(16:08):
not an expert on, something you do not know about, and you might accept what it gives you as fact. Instead, you need to tell it to tell you where it got its sources: where did you get this information? Provide me a link; provide me a source for what you have just told me. Then you need to investigate those sources and dig in
(16:30):
deeper to find out: where did this information come from? Is it reliable? What do you need to know about it to make sure it's accurate, or as accurate as it can be, or to mitigate the risk? So before you use the information it gives you, find out where it came from and validate that it's accurate. Then either
(16:52):
throw it out because you can't trust it, or, if it's not a perfect source or not perfect information, just know that. Know where it came from and understand how to mitigate what that means.
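In practice, the challenge can be as simple as a fixed set of follow-up prompts. Here is a minimal Python sketch of that loop, built around a hypothetical ask_model() stub; the follow-up prompts are the point, and the stub would be wired to whatever AI tool you actually use.

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to your AI tool of choice."""
    print(f"[prompt] {prompt}")
    return "(model response goes here)"

claim = ask_model("Summarize motivation-based interviewing in two sentences.")

# Challenge 1: demand sources before using the answer.
ask_model(f"You said: {claim} Where did that come from? "
          "Provide a link or citation for each claim.")

# Challenge 2: push back, since these tools tend to be overly agreeable.
ask_model(f"You said: {claim} Argue the opposite. "
          "If your original answer was wrong, say so plainly.")

# The model's replies are not verification: a human still opens the links,
# confirms they exist, and checks that they say what the model claims.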
And that leads me to my next note. You asked whether we're helping
(17:14):
companies and corporate entities with designing how they use AI, and we are. The first thing we say is that you need to challenge the sources to make sure they're accurate and trustworthy. The second thing we always say is that anytime you generate something with help from
(17:36):
AI, be upfront and include a disclaimer that says so. There are a few reasons for that. The first is that it makes people aware of how you got the information, so if they need to challenge it on their end, they can; there's nothing wrong with using it. But the other side of it is that, I don't know about you, but anytime somebody gives
(17:56):
me something from AI, I can spot it a mile away.
As a writer, it pains me.
Yeah. It doesn't mean it's bad. One thing I should point out: we're talking about AI for use in the workplace, and I probably should have set the table better and said that a lot of people are using it for research purposes. That's the way I'm approaching this conversation. The reason I haven't thought about it from, say, a written
(18:19):
perspective is that the other thing I always say is that I would never use AI to write content for me, like emails or blog posts. I would use it mostly for research, never to write for me, because it's so obvious. But my point is that when you do use it, whether for content, research,
(18:40):
or what have you, be upfront with people about the fact that you're using AI: 'AI was used to help support this,' 'AI was used to help come up with this,' 'AI was used for research.' Just make sure you're telling people that you're using AI.
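One way to make that disclosure habitual is to bake it into the workflow so it cannot be forgotten. A minimal Python sketch; the function and the wording of the notice are hypothetical examples, not a prescribed formula.

def with_ai_disclosure(draft: str, use: str = "research") -> str:
    """Append a plain-language AI-use notice to a finished draft."""
    notice = f"[AI was used to help with {use}; a human reviewed and verified the content.]"
    return f"{draft}\n\n{notice}"

print(with_ai_disclosure("Q3 hiring summary goes here.", use="background research"))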
So, I mean, those are two big things, right? The first is to
(19:02):
challenge the sources. When you get information, challenge it and validate that it's correct. If it's research on a certain topic, validate the sources it gave you. If it's calculations, double-check them to make sure they're correct. Just challenge it every single time. The second thing is to be upfront with people: when you use it, tell them that you've used AI for help. The third thing is what I
(19:24):
mentioned earlier about putting information into AI that goes into the public sphere. There are tools that are more appropriate for that, and here's what I mean. Microsoft now has an AI tool called Copilot, and I'm sure a lot of people listening have heard of it; it won't be news to them by the
(19:45):
time this comes out. But Copilot is a closed system. It is still AI, and it is used to do things and calculate things. However, Copilot keeps the information internal. It's basically an internal tool that companies use on their own.
(20:08):
When you put information into it, it stays in that environment. It doesn't go out to the public; it stays within your organization only, and it only iterates on things from within your organization. In other words, you can use it for things that are intellectual property, or for things that involve your internal data and such,
(20:31):
and it doesn't get out. So those are the three big things I would say. First, challenge AI every time, because sometimes it gives you bad sources, and one thing it often does is simply confirm whatever you tell it; it's overly agreeable. So challenge it every time. Second, be upfront with people that you are using it. Third, don't use it with your internal intellectual property or information unless
(20:54):
you're using a closed tool that keeps it internal, such as Copilot.
Those are really helpful. I feel like as much as we want to be curious and lean into it more, it is important to have some guardrails around it, because it can easily be seen as a catch-all for ideas, and it's really not. It's only as good as what's put into it.
(21:17):
And there is something to be said for keeping a little bit of personal edge. I just like the idea of having some guardrails, and I think it's very important to be transparent about it. Now, have I used AI to kickstart ideas? A hundred percent. It sometimes gives me that little nudge when I
(21:40):
feel stuck in a moment: how would I best utilize this idea I had? Then it might help me realize, oh, I could do this type of post, and I go from there. But I can definitely see why some companies want to put more policy in place. It's not going away. It's something you don't have to use, but
(22:03):
I would bet that you have in some way encountered AI, even if you don't directly realize it. So it's good to have those protections even if you aren't aware you're utilizing AI resources.
A hundred percent. And that's the thing: is it going away? No, it's not going away. And just because some of the things we've talked about here
(22:25):
today are cautionary tales, make no mistake: it's still a powerful tool that can help you do great things if used the right way. We use it, and we use it for a reason: it's not all bad. It really isn't. It can be very helpful. But you must use it the right way, and you need to put the right guardrails up in front of it. And like you say, it's not going away, so we may as well find a way to
(22:48):
use it right, and well. And that really is what it's about. It's like any tool: tools that are imperfect aren't bad, and it doesn't mean they shouldn't or can't be used just because they have imperfections to be careful of. It just means you need to know those imperfections, be very real about them, and
(23:10):
use the tool responsibly and understand what it's doing, so that you can make sure you're using it properly.
Yeah, thank you. Those are really helpful ways to consider the use of AI, and they may help people be more aware of its cautionary sides. Don't always trust the AI overviews; do a little research
(23:32):
before you default to them. That's a whole other slippery slope we could get into. So thank you, Tim. It sounds like for everybody who is interested in pursuing this for their workplace, you and your team would be a great resource, even just to kickstart the conversation.
Yeah, 100%. We can
(23:54):
definitely help people make sure they have AI usage policies and such that properly encourage its use, and make sure they're using the tool right, and well, while putting the right set of parameters around it, for sure.
Perfect. Well, we'll make sure that we've got the team at Best Culture Solutions linked in those show notes, and
(24:15):
yeah, they can certainly reach out to you via email as well.
There we go. Thank you very much.
Thank you, Tim. Take it easy. Talk soon.