Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Mitch Mitcham joins me in studio.
Speaker 2 (00:01):
He's the CEO of Hive Interactive, and he'll tell us
in a moment exactly what Hive does.
Speaker 1 (00:07):
I've learned a little bit.
Speaker 2 (00:08):
And it's pretty fascinating. In fact, why don't we start
with that? What does Hive do?
Speaker 3 (00:12):
Yeah, I think of Hive as the human side of
the AI movement, and what I mean
by that is we do a lot with AI enablement.
So we'll go into a company and we will teach
them everything they need to know about actually rolling out
these products in a way that is useful, that is productive,
and that is practical.
Speaker 1 (00:30):
So a lot of the AI models.
Speaker 3 (00:32):
Anyone who's ever tried to use ChatGPT for the
first time knows there are a lot of models that
are not very intuitive, so you're kind of stumbling through it.
And so we change all that. We put
a learning element to it. We actually teach you how
to make it useful and how to keep the human
in the mix. So, like you just mentioned with that
movie premise, that's where the human has a false sense
of being in control and the things can go awry, as I'm
(00:54):
sure they do in the movie, very quickly.
Speaker 1 (00:57):
Yeah, and that is I think what plays to what
a lot of people are afraid of.
Speaker 3 (01:01):
We're always front and center human centric, so we want
people in the middle.
Speaker 2 (01:05):
Okay, so when you talk about teaching people, do you
mean ChatGPT?
Speaker 1 (01:10):
Or famous stuff like that. Also, I guess some.
Speaker 2 (01:13):
Businesses have proprietary versions of AIs that they brought in house,
and you're training people how to use these as tools
and how to think of them as not replacement for humans,
but augmentation.
Speaker 1 (01:26):
Yeah.
Speaker 3 (01:26):
I think part of what frustrated me the most —
you know, what I built Hive on four
years ago was about human development. Then when AI came out,
we got an early view of it and we said,
this is going to change everything, but only if people
know how to use it properly. So take,
for instance, we're working with the Colorado Rockies right now.
So what we're doing there is we're rolling out not
(01:47):
only ChatGPT, which is the model they've adopted, we're
rolling out different AI models that they're putting into their workflow,
everything from analytics all the way up to the front office.
Speaker 1 (01:56):
And it's just making them more efficient.
Speaker 3 (01:58):
But the reason it's working is because they're an amazing group
as an organization, and it's the way they're approaching it.
Speaker 1 (02:05):
They want their people.
Speaker 3 (02:06):
To use the tools to be effective, not to be replaced,
and so that's always our focus. And I think again
a lot of CEOs in the space of AI, they're
very flippant about sort of replacing human beings and almost
insulting all the humans, and you think about how
people will function if their livelihood is taken. We're more like,
(02:28):
how can you use this to make your livelihood better?
Speaker 1 (02:30):
And faster?
Speaker 2 (02:31):
Okay, and folks, if you want to learn more about Hive,
the website is a Humanhive dot com.
Speaker 1 (02:37):
And at the risk of free publicity here, I.
Speaker 2 (02:40):
See you got something going on in the middle of
May at Coors Field.
Speaker 1 (02:44):
We do.
Speaker 3 (02:44):
Yeah, we're holding our first Hive Live conference, so people
can actually come and the whole conference is about workshops, application,
practical use cases.
Speaker 1 (02:53):
The CEO of Beautiful.ai will be there.
Speaker 3 (02:55):
That's a big one. It's kind of a PowerPoint killer, so
to speak, super incredible at slide decks.
Speaker 1 (03:01):
But anyway, he's going to be there.
Speaker 3 (03:03):
He's going to be talking about human centric intuitive AI
to help humans and so we're really attracting those kind
of people.
Speaker 1 (03:09):
Okay.
Speaker 2 (03:10):
So I'm a nerd who likes playing with electronics. I
love it, and I'm rebuilding a Dynaco ST seventy amplifier, okay.
And I asked ChatGPT last Thursday which of the
wires coming out of the transformers need to be twisted
and which don't, and it gave me some answers, and
(03:33):
then it said, would you like to see a wiring diagram?
And I said yes, and it showed me this thing
that was actually a very beautiful kind of nineteen-thirties-style,
muted-colors drawing of something that was not anything
like an ST seventy amplifier.
Speaker 1 (03:48):
It was like an.
Speaker 2 (03:49):
Insane creation that a tube audio designer on acid would
have made. And I said to it, that's not an
ST seventy, and it said back to me, Oh, you're right,
it's not.
Speaker 1 (04:00):
Do you want me to do an actual.
Speaker 2 (04:02):
ST seventy? And I said yes, and then it did
another one that wasn't.
Speaker 1 (04:06):
So it's easy to look at.
Speaker 2 (04:09):
These things in the early days and look at the
flaws and giggle about it. But in my mind, the
fact that it can even come close to even trying
to draw a diagram.
Speaker 1 (04:20):
A wiring diagram.
Speaker 2 (04:22):
Means that pretty soon it's going to be able to.
So where I'm going with this: why is it not inevitable
that AI replaces people in quite a lot of functions?
Speaker 3 (04:34):
Well, I think there's lots to unpack, several things in what
you did. First of all, I'd love to spend time
with you to see what that prompt was like, because
it's probably in the prompting. This is what we've learned: twenty-
seven and a half thousand times we've done this with
people over the last eighteen months. What we learn is
it's all about what we're putting into it and the
direction we're giving, so that can spin out of
(04:55):
control really quick. So if you're prompting ChatGPT, as a
great example, you've got to be about the who, what, where, why,
and how. So all that context matters, and
a lot of people will basically treat it like
a Google search: help me figure out how to cross these wires.
I'm not saying yours was like that — yours was probably brilliant. I'm saying other people.
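To make that who-what-where-why-how point concrete, here is a minimal, hypothetical Python sketch contrasting a Google-search-style prompt with a context-rich one. The ask() helper, the model call it stands in for, and the wiring details are illustrative assumptions, not anything Hive or the guest describes using; the only point is how much more direction the second prompt gives the model.

```python
# Hypothetical helper: sends a prompt to whichever chat model you use
# (ChatGPT, an in-house model, etc.) and returns the reply as text.
def ask(prompt: str) -> str:
    ...  # plug in your model call of choice here

# Google-search-style prompt: almost no context, so the model has to guess
# whether you want artwork, a summary, or a usable technical answer.
vague_answer = ask("Which transformer wires should be twisted?")

# Context-rich prompt: spells out the who, what, where, why, and how.
detailed_answer = ask(
    "You are helping an experienced hobbyist (who) rebuild a Dynaco ST-70 "
    "tube amplifier (what) on a home workbench (where). The goal is to "
    "reduce hum and match the original factory layout (why). List which "
    "leads from the power and output transformers are normally twisted "
    "together and which are routed separately, as a plain checklist I can "
    "print and follow while wiring (how)."
)
```

The same habit carries over to requests for diagrams: saying whether you want a decorative illustration or a schematic you can actually build from removes exactly the guesswork the guest describes.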
Speaker 2 (05:13):
Of course, I don't know if it was brilliant, but
it also wasn't complicated. It asked me,
do you want to see a wiring diagram?
Speaker 1 (05:19):
And I said yes. Okay, good. So what you
bring up when you talk
Speaker 3 (05:23):
about replacing humans in the future — what you bring up
is what it lacks, and this is kind of
one of its biggest struggles for AI engines,
Speaker 1 (05:31):
Regardless of the model.
Speaker 3 (05:32):
It's lacking the clarity of understanding what you mean when
you say things.
Speaker 1 (05:36):
So when it says do you want a diagram, it.
Speaker 3 (05:39):
Doesn't know if you mean a creative piece of artwork
or if you need something actually useful that will go
in an SOP that you're going to put into your
business or that you're going to use effectively. So it
is just guessing and trying to make you feel good
about the answer.
Speaker 1 (05:52):
This is also part of the double edge.
Speaker 3 (05:54):
Of AI, where you're going to until it gets to
a level where and it has to be at in
my opinion, one hundred percent accuracy. It can't be an
automatic sort of autonomous face because if it is, then
you're not gonna be able to trust it, So that
doesn't work.
Speaker 1 (06:10):
So when you talk.
Speaker 3 (06:11):
About not replacing humans, I think maybe someday it gets
to a place where it's close, but we're also talking
about the human element involved in it, which is the
emotional side, the practical side, the experience side, the things
AI can't harness. It doesn't know your background, it doesn't
know your knowledge base.
Speaker 1 (06:28):
It has to rely on you to give it that.
And to play devil's advocate for a minute. Yeah, so.
Speaker 2 (06:36):
Why couldn't an AI learn everything you just said? And
then the second question related to what you said.
Speaker 1 (06:45):
A moment earlier.
Speaker 2 (06:48):
Let's say I can't one hundred percent trust it, but
I can ninety five percent trust it, and I'm willing
to take that risk in order to not have to
pay that sixty thousand dollar salary to an actual human.
Isn't that going to happen?
Speaker 3 (07:03):
It can in some fields, for sure. But let's talk about it
again on a larger scale, with humanity in general. So
let's say you're a CEO of a major large language model company,
which there were lots of them at Davos.
Speaker 1 (07:14):
They were all talking about how.
Speaker 3 (07:16):
Inevitably AI will be this super intelligent thing that takes
it all.
Speaker 1 (07:19):
Takes all the jobs.
Speaker 3 (07:21):
Well, they're negating the fact that part of the joy
of human life is doing the jobs.
Speaker 1 (07:25):
So can it possibly take all the sixty thousand dollar jobs?
Speaker 3 (07:30):
Sure, possibly in the future that's a possibility. Should it
be because of the impact and the fallout of what
happens after that when humans on a mass scale lose a purpose,
a drive, and a vision. Most likely not. So somewhere
in the middle has to be someone with reasonability that says, look,
at the end of the day, what matters here is
(07:50):
human development. If it's AI for AI's sake, then we're
in the matrix and it doesn't matter. But if it's
AI for bettering humanity, then there has.
Speaker 1 (07:58):
To be a balanced equation.
Speaker 3 (07:59):
Just because we can do it doesn't mean we should.
But could it make the electrician
who's really doing a great job out in the field
day in and day out, that blue collar tradesman, Could
it make his life easier, faster and.
Speaker 1 (08:12):
better so he can serve his clients better? Absolutely. Could he
Speaker 3 (08:16):
Then instinctively know, even if it says it's right, he's
still not going to reach into a power box and
do something crazy because he knows it's wrong.
Speaker 1 (08:23):
He can feel that it's wrong.
Speaker 3 (08:25):
So you see what I'm saying? (I do.) You're in
this trap of can it? I mean, sure, I guess
inevitably it could. Will we let it, and will we
not seek human balance?
Speaker 1 (08:35):
I don't think that's going to happen.
Speaker 2 (08:37):
We're talking with Mitch Mitcham. He's the founder of Hive,
the website a human hive dot com. And just taking
the other side of my own argument now for a second,
because I'm like the three-handed economist, right?
Speaker 1 (08:50):
On the one hand, on the other hand, on the
other other
Speaker 2 (08:51):
Hand. One thing that many, many people are guilty of is not
Speaker 1 (08:58):
Thinking more than one step ahead.
Speaker 2 (09:00):
It happens in economics, particularly like with the trade war stuff.
But just as an example, if you had posited the
invention of the thresher or something that meant that for
a large farm, instead of fifty employees, you needed two,
because in the early days of this country the population
(09:20):
was ninety five percent farmers, right, A lot of people
would think, well, you're gonna have mass unemployment and people
will be killing themselves and everybody will be lost and
have no human fulfillment. And instead people went and did
other things. And so I
do think it's inevitable that AI will kill a lot
of jobs. But I also think it's inevitable that AI
(09:41):
will create the opportunities for new kinds of jobs that
we haven't even thought of yet.
Speaker 1 (09:46):
That's yeah, completely right.
Speaker 3 (09:47):
I mean, I'll go back
to the first thing you said in a second, because
I do think there's a reactive nature to everything we do.
Economics is a great example — we're too reactive.
We're not thoughtful about what we're doing in these spaces. Okay,
I'm going to come back to that.
But if you think about what you just said, what
we end up with is a culture that will inevitably
(10:10):
start to seek better ways of making a living. And
so yeah, maybe maybe some of these jobs do go away,
but I believe there's way more potential in the augmentation aspect,
which makes the people who are doing their jobs better, smarter,
faster at doing them.
Speaker 1 (10:25):
So it will change the job narrative.
Speaker 3 (10:27):
But also you've got to factor in — and then I
am going to get back to the other piece — you've
got to factor in that this is the first time
in human history that there's ever been a technology that
thinks and mimics and responds in similar ways to us.
Speaker 1 (10:41):
Right, That's never happened, Right, So this is.
Speaker 3 (10:44):
The first time I would argue in all of history
where we're actually asking ourselves rather than just saying, oh,
you know, when cars replace horses, what are all the
guys going to do that shoe horses for a living.
That's a pretty trite narrative. But when you're saying, hey,
we've got technology that could — even like that movie
you mentioned before — conceivably be in robotics and mimic
(11:04):
human behavior, we have a far bigger issue going on,
something we've never faced, which is actual human like intelligence
minus emotion that could step in. Now we're having to
ask ourselves much bigger questions, and that reactive piece.
Speaker 1 (11:18):
I said I would get back to that.
Speaker 3 (11:19):
That reactive piece is so dangerous because if you look
at what just happened at Davos around DeepSeek, which
is just another large language model out of China. They
released this model supposed to be open source. Now we're
getting really nerdy about it. Supposed to be open source,
which means all the world gets really excited about it.
Investors and news shows and all the major news outlets
(11:43):
fell on it like it was the most
groundbreaking thing on earth, without ever looking at the code,
without ever looking at any of its privacy statements,
which are all linked right back to China. They didn't
look at anything that could be a trapdoor of collecting
all your data and your keystrokes.
Speaker 1 (12:01):
Not to be too nerdy with people, yeah, but they
didn't look at any of that. And now since that
was released.
Speaker 3 (12:07):
And wrecked our economy for a few days. Now what's
happening is you realize, well, that company — it's proven
that they stole chips out of Singapore, they stole some
of the technology. They're not being clear about where the
money came from. They're not being clear about how your
data gets captured. And so now we're realizing, oh, the
veil is coming back.
Speaker 1 (12:25):
Is that that Singapore thing?
Speaker 3 (12:26):
Is that — that was, yeah. They took Nvidia chips
out of Singapore and illegally transported them into China.
Speaker 1 (12:32):
Okay, so that's how they got their chips. But that's a
whole other level of that story.
But what it proves is.
Speaker 3 (12:37):
If we as human beings, I'm not talking about the
people that are out.
Speaker 1 (12:40):
In front, I'm saying all the rest.
Speaker 3 (12:42):
Of us as human beings, if we keep letting the
world get reactive around this, they'll jump to the economic side,
they'll jump to the money making side, they'll jump to
the job replacement for efficiency side. We as human beings,
which is our mantra, we need to always be there — beginning, middle,
and end. We need to be judging all the content
as it comes out. We need to feel empowered to
(13:03):
be in control of the technology. And I don't think
that the tech firms — I think maybe OpenAI is the
closest to it — are giving people a feeling of empowerment
about it versus replacement.
Speaker 2 (13:13):
All right. So — and I really nerded out on you there.
(It's all good.) No, that's just very basic nerdiness.
Speaker 1 (13:18):
For this show — this is a super nerdy show, be a shocking nerd.
Speaker 2 (13:23):
So I want you to just give me a brief
flavor of what you might teach somebody. Someone comes
to Hive, maybe a CEO comes to Hive. We want
to integrate AI into our business. We're not looking to
eliminate people, but we're looking to work smarter and more efficiently.
Speaker 1 (13:44):
And I don't know how much you're allowed to talk
about the Rockies.
Speaker 2 (13:46):
You could use a generic example, you could use a
real whatever, But give me an example of the kinds
of things that you're teaching people.
Speaker 3 (13:54):
Yeah, I think, well, the first thing you've got to think
about — and the Rockies is a good example —
Speaker 1 (13:58):
I can't get at all the.
Speaker 3 (13:59):
Details, but their leadership at the top cares immensely about
the people that work there, the team, the fans. I mean,
they're super dedicated. This is why the Rockies, you know,
even if they're struggling record-wise, they sell out the
ballpark because they do care about those elements. So they
have a people first mentality, and I think that's the
(14:20):
single most important factor for us. So if a CEO
comes to me and says, look, I've got five hundred
people that have to learn how to use enterprise wide
licenses of ChatGPT, for example, and I need your help — well,
the first thing we do is we come in and
we say, who's comfortable with this?
Speaker 1 (14:36):
Who's used it before?
Speaker 3 (14:37):
It turns out and I'm telling you, we've been doing
this for a couple of years now, it turns out
it's only about two to four percent who are actually using
it every day right now at the start. So we have
to teach them what it is, how it functions, how
they apply it, how it can make them smarter.
And some of the fundamental things are: they have to
talk to it like a person, like it's a colleague.
(14:59):
Number two is they have to realize their unique nature.
This is what we teach them, their unique stories, their background,
their history, their intelligence.
Speaker 1 (15:08):
AI doesn't know any of that.
Speaker 3 (15:09):
So by them working with it, they're leveraging their uniqueness
to make their human experience better.
Speaker 1 (15:17):
And so that's what we are showing them how to do.
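As one way to picture the "talk to it like a colleague, give it your background" habit, here is a small, hypothetical sketch of seeding a chat session with a user's own context before asking a work question. The send() helper, the message format, and the example profile are assumptions for illustration, not Hive's actual curriculum or the Rockies' workflow.

```python
# Hypothetical helper: sends a list of chat messages to whichever model
# your organization has adopted and returns the assistant's reply.
def send(messages: list[dict]) -> str:
    ...  # plug in your model call of choice here

# The user's unique background: the model doesn't know any of this
# unless the person supplies it.
background = (
    "I'm a ticket-operations analyst with ten years of spreadsheet "
    "experience and no programming background. I write a weekly "
    "attendance summary for my manager."
)

messages = [
    # Tell it who you are first, like you would with a new colleague.
    {"role": "system", "content": background},
    # Then ask the actual question in plain, conversational language.
    {
        "role": "user",
        "content": "Help me outline this week's attendance summary and "
                   "suggest two charts my manager would find useful.",
    },
]
reply = send(messages)
```

The point, as the guest puts it, is that the person's story and experience stay in the loop: the model augments what they already know rather than guessing at it.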
Speaker 2 (15:19):
And I imagine some of these folks begin the conversation
a little bit worried that the boss is trying to
replace them with AI.
Speaker 3 (15:25):
And yeah, it's always the first conversation. Look, this
is an augmentation strategy. Matter of fact, I won't take
a client unless we know that's the client's focus. So
if the client says to us, hey, we want you
to come train our salespeople because they're going to replace
them next year anyway — we just won't do it, because
that's not our focus. Our focus is on the human
development side. That's the way I am with my kids.
(15:46):
You talk about your kids. All of my kids use
ChatGPT, they all use AI. I demand that they
do it. It sharpens them. I want them to use it
ethically, as a tutor, as a guide, and we can
talk about that forever, but the point is you want
humans to feel empowered and in control, and that's what
we do.
Speaker 1 (16:03):
We give them that enablement.
Speaker 2 (16:04):
I have — my kid uses the twenty-dollar-a-month
version of ChatGPT as a tutor, and
I told him, I'd better never catch you
having that write a paper for you. And I think
he's aware that the teachers have some tools that attempt
to sniff that out, and you get in a lot
of trouble if you do that. So I think he's
(16:24):
I think he's doing it.
Speaker 1 (16:25):
The right way. We've got a couple of
minutes left.
Speaker 2 (16:27):
I want to go back to something you mentioned earlier,
and that ties into the movie that I saw that
I think you would enjoy called Companion.
Speaker 1 (16:34):
I've got to look at that now. I'm definitely gonna
look at it. So my question is about control. Right.
Speaker 2 (16:40):
So we've all seen The Terminator, right, and we
all worry about this kind of thing where AI kind
of becomes self aware, thinks that humans are an impediment
to its own existence and so on.
Speaker 1 (16:54):
What do you think the big-picture risk is?
Speaker 2 (16:57):
Do you think AI poses any kind of existential risk?
A lot of smart people do, and a lot of
smart people don't.
Speaker 1 (17:03):
I think it does.
Speaker 3 (17:04):
I think where the risk comes in is when we
trust but don't verify, when we let it
into systems and we say we just totally trust it,
let it roll.
Speaker 1 (17:13):
I mean you brought up AI detection. AI detectors don't work.
Speaker 3 (17:17):
Researchers put the US Constitution in and it came back
ninety two percent AI. So either that kind of stuff
doesn't work or we're in a simulation, and I don't
want to talk about that. So if you really think
about it, what's happening is if we let it go
out of control and we let it take over a
system before it's ready — like everybody right now is fighting
to put AI agents into a business. Those aren't proven yet,
(17:39):
they're not at one hundred percent accuracy.
Speaker 1 (17:40):
That's where risk happens.
Speaker 3 (17:42):
So yeah, I mean, those dangers are real, which is
why we have to be prudent. But number one, it's
why all humans need to understand the tools so that
they know how far they can go or not go.
Rather than live in fear, you got to get your
hands on it and learn about it so that you
can say, Okay, yeah, that's actually too far.
Speaker 1 (17:58):
I don't like that.
Speaker 3 (17:59):
I don't want it to run our nuclear devices. I don't
want it to be in control of the energy infrastructure.
Speaker 2 (18:05):
Not on day one. I got thirty seconds left.
What's this four hundred dollar course on your website?
Speaker 1 (18:11):
Oh?
Speaker 3 (18:11):
Yeah, that's actually — we're pivoting over
to an on-demand service that's going to be able
to teach all humans how to use these tools
responsibly and ethically and properly. Right now, we have a
course that does that, and then it's going to pivot
to on-demand in the next month.
Speaker 1 (18:25):
Wow.
Speaker 2 (18:26):
Mitch Mitcham is CEO and founder of Hive Interactive. The
website is a Humanhive dot com. If you're an individual,
or especially if you're a business that wants to learn
how to integrate AI into your business processes in ways
that make your employees more effective, more efficient, more productive,
(18:50):
but are not looking to kick them out of their jobs,
you might want to check out a Humanhive dot com.
Speaker 1 (18:55):
Great conversation. Yeah, thanks so much for spending some time
with us.
Speaker 3 (18:59):
I really appreciate it. If there's something breaking you want to talk about,
Speaker 1 (19:01):
Let me know. Yeah, come back. I definitely,
I definitely will. And go watch that movie, Companion — it will feel
like out