Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Get ready for a captivating episode of Schmidt List. Our
guest BJ is a trailblazer in AI and battery manufacturing,
working with giants like Amazon and Google. He shares groundbreaking
insights and practical tips on leveraging AI to transform industries.
This is a must-listen for anyone excited about the
(00:21):
future of technology.
Speaker 2 (00:23):
BJ, how are you today?
Speaker 3 (00:25):
Hey, Kurt, great to be back. I always enjoy this. Good to see you.
Speaker 2 (00:28):
I'm so glad you're here. All right, can you tell
the audience about the work you're doing and who you're working
with these days?
Speaker 3 (00:34):
Yeah, so I do a lot of different things, anything
from battery manufacturing all the way to healthcare
and all that kind of cool, future
fifty-year, one-hundred-year type of stuff, all the
way to, hey, let's make a battery system and deploy it.
And the underpinning of all of that is Technicity, where
we're doing significant work in AI, which I think we
(00:57):
want to talk about today.
Speaker 2 (00:58):
We do. So, just for
the audience's sake, can you talk about the batteries real quick?
What batteries do you manufacture? Who uses them?
Speaker 3 (01:06):
Yeah. So one of our biggest investors is Terex, Genie Terex.
If you see the big blue Genie scissor lifts, boom
lifts and things on construction sites, that's who uses our
batteries, because they're our biggest investor. But we also do
a lot of commercial, industrial, military and other things. We're
one of the few US, America-first oriented battery companies.
That's what we do: commercial, industrial, no EV, no passenger
(01:28):
EV stuff.
Speaker 2 (01:29):
Sure. And the reason I ask: in
the news the other day they mentioned they found
a large lithium deposit under Arkansas or something.
Oh yeah, this huge thing. They were saying
it could make batteries for a million years or something.
But there you go, geopolitics, right? Getting it out of the ground
is a whole other story. Finding it is one thing, right? It's
Speaker 3 (01:53):
A nasty process too. We always say oil is a
bad thing. Well, have you seen lithium mining?
It is a nasty process.
Speaker 2 (02:00):
All right, All right, let's jump in. Tell me about
the work you've been doing in AI lately, and I'd
love to hear your perspectives because we've done a few
shows on AI here and I'm always interested to hear
how you're using it and what your perspective is on
the value of it.
Speaker 3 (02:16):
Yeah, no, that's great. So that's what takes up
some of my time, the AI thing, just
because in everything I do, even in battery manufacturing,
we use a ton of AI now. And if you're not,
you should be. You've got to figure it out. And
that's one thing that Technicity obviously helps with. So let
me start with what Technicity is. We've covered it many
times before, but at Technicity we like to bill ourselves as
(02:37):
an enterprise innovation firm. That's a big buzzword. What does
it mean? What it means is we like to solve problems.
And before AI, let's say before twenty twenty two,
solving problems was an expensive thing, right? Because you probably
need an app with a digital experience and your voice of the customer.
You need to do all these things, you have to
build up the product teams. I mean, yeah, you know,
I've done that before, all of those things. And then
(02:59):
when AI hit, really, in twenty twenty two, that kind of
empowered us to do all those same things for a
fraction of the cost, because we had this amazing, magical
thing, AI, that we could use, and specifically generative AI.
And that's the thing: we've had AI since, what, the eighties?
Neural networks came into popularity in the nineteen eighties; they
(03:21):
didn't quite take hold.
Speaker 2 (03:22):
No, I mean, people talk about bubbles sometimes,
and remember the big data push? Big data
was everything not that long ago.
And I think that's where people got kind of hung
up when AI showed up, because they're like, isn't that
just big data repackaged? Because big data was going to solve
(03:45):
all these things, and we created a lot of value
in a lot of companies for a while.
Speaker 3 (03:50):
Oh yeah, well, it created a lot of value. I'm not
sure the people were actually able to really take
advantage of it, but that's a whole other conversation, or another show probably.
But I think that's actually why generative AI is so exciting:
all that big data stuff. We made our
data lakes, right? We got our Snowflakes, we got our
Redshifts, we got all this data into this one place,
(04:10):
which is, you know, great, right? And then
you actually spent a bunch of money on
a bunch of data scientists or data analysts or whatever.
Maybe some of them just have business degrees, some of
them actual engineering degrees, all this kind of stuff. Then
you realize, holy crap, we're like five years away from
actually making sense of all this stuff I've got in one
place. What do I do now? As it turns out,
generative AI actually changed that for a lot
(04:31):
of people. And the enterprises that actually really try to
harness that and understand it and do it pragmatically are
the ones that have been and will be successful with it.
And that's really what we try to do at Technicity:
a pragmatic approach to solving problems. We just solve
it with AI now, and we make it a fraction of the
price, because we can be ultra efficient with it and
(04:52):
keep it cheap. But you have to use it, and you have to use
certain things like agentic architectures and those types of things
that you hear buzzwords about. Then you can actually do it.
And it helps that we've actually helped some of the
biggest organizations, like Amazon and Google, develop some of
those things. If you actually end up doing a lot
of this kind of stuff, it's really hard. It's easy
to sign up for an API key with Open
(05:13):
AI, use ChatGPT and say, I'm an AI developer.
Then all you do is what
we've always done in IT: we integrate with APIs, right,
application programming interfaces. What you find very quickly is that
unless you understand how the models are trained, and
how the models are put together, and why they were
put together, and the cultures of the organizations that put
(05:35):
them together, you'll end up spending a significant amount of
frustrating time trying to get these stochastic, statistical machines to
do your bidding. And I think that's what
a lot of people are running into. They might hire
a lot of people who are doing the AI, but
unless you get teams that have actually trained these things,
the way some of us do, it's really hard to harness them.
(05:55):
And that's one of the things, interestingly enough. I get
a lot of CEOs now, talking to a lot of
executives, about how do we get AI in our business
and all those different things. That's the first question: how
do I get AI? And you explain
all the different things: where you use agents, guardrails,
and you use these techniques, look at your data,
and all those different things. Everyone tells you, oh, we'll
(06:17):
be able to figure it out. And then of course the immediate
next question is: how do I trust it? How do I do all
those things?
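The "agentic architecture" BJ names can be sketched as a simple loop: a model decides on an action, the runtime executes a tool and feeds the result back, until the model answers. The sketch below is a toy under stated assumptions: the model is stubbed with a rule-based function, and the tool, SKU, and price are all made up for illustration.

```python
# Toy sketch of an agentic loop. A real system would call an LLM where
# stub_model is; here it is a hard-coded stand-in so the loop is runnable.
def stub_model(question, observations):
    """Stand-in for an LLM call: decide the next action from context."""
    if not observations:
        # No facts gathered yet: ask for a tool call (hypothetical SKU).
        return {"action": "lookup_price", "args": {"sku": "BATT-48V"}}
    # A fact is available: produce the final answer.
    return {"action": "answer", "args": {"text": f"Price is {observations[-1]}"}}

# Registry of allowed tools: a simple guardrail, since the model can
# only invoke actions listed here (fake pricing data for illustration).
TOOLS = {
    "lookup_price": lambda sku: {"BATT-48V": "$4,200"}.get(sku, "unknown"),
}

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        decision = stub_model(question, observations)
        if decision["action"] == "answer":
            return decision["args"]["text"]
        tool = TOOLS[decision["action"]]       # unknown actions raise KeyError
        observations.append(tool(**decision["args"]))
    return "gave up"                           # step cap: another guardrail

print(run_agent("What does the 48V battery cost?"))  # → Price is $4,200
```

The step cap and the fixed tool registry are the "guardrails" in miniature: the model proposes, but the runtime decides what is actually allowed to execute.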
Speaker 2 (06:25):
That's the issue I've been running into with a lot
of, maybe not the giants, the Fortune Fifty, but
I would say people in the Fortune One Thousand, maybe
a little further back. One of the biggest
concerns is security around it, just in general, because if
I'm in a Fortune One Hundred company,
(06:45):
I can't just plug ChatGPT into my company and
start using stuff.
Speaker 3 (06:49):
No, there's no freaking way you can do that. Actually,
you don't know where your data goes. And I work
with some of the largest enterprises in the world, the ones who developed
these models, and I would say that even
inside some of these big, big organizations,
we have specialized models that we've deployed for ourselves internally,
just so that we're not using the public-facing ones, because of
(07:10):
information control and all those things. And so
trust and security and everything are a very interesting thing
that you have to manage with these models.
But really, what it comes down to is the core
thing that I always answer with whenever someone says, can
you trust it? And it's somewhat of a flippant response,
but I always say, okay, so you have me. I'm
(07:31):
sitting here telling you about all this kind of stuff.
Why do you trust me? And it might give you
pause, or it's reputation, or "I don't trust you" is
a common response, or "I'm just listening to you talk,"
or whatever. There's really no difference with these models.
And so one of the things that I've really,
(07:51):
really learned over the last couple of years of working
with these systems, having developed them, and then obviously using
them in a very pragmatic way, is the concept of no longer
treating these models like we've always treated our computing systems.
Up until basically the beginning of our computational history,
(08:12):
whether you start with Alan Turing, or wherever you want, Charles
Babbage, or back to the seventeen hundreds with calculating machines, right? Yeah,
we've got our tubes going, right? Yeah, you basically had determinism.
If I tell the machine to do something, I can
guarantee that the response out is going to be deterministic,
so you know it's going to crunch some numbers or whatever.
(08:32):
Now you might say, what about bugs and everything? Yeah,
but you programmed the bugs in, maybe not intentionally, but
they occurred, right? I can guarantee the answer. As
of twenty twenty two, we're using systems at scale that
are statistical. They're stochastic processes, and what that means
is that they're non-deterministic. I don't know what I'm going
(08:54):
to get out when I put something into it. I mean,
I can be pretty confident I'm going to get something.
But statistically it's correlated, just like us having this conversation, right?
Unless I go completely out of left field, you're
going to get some consistency in our conversation and topic.
Same thing here. And my answer to the people
that ask me, how do I trust the models? I say, why do
(09:14):
you trust me? You actually have to approach these models
now as more of a human resources problem than
a technology problem. Before, technology selection was easy, right? I
mean, it was always hard, because you get politics and all
kinds of stuff going into it, of course,
but it's not as cut and dried as an engineering problem anymore.
Now it's: how well do I trust
the model? Do I know what its background is? And
(09:37):
so now think about it. When we're approaching these models,
the way we typically will approach maybe a new model,
or even help people and enterprises engage, I actually
compare it to a human resources strategy. And the HR strategy
is, okay, what if I were able to grab the
Facebook profile, like I'm looking at a new employee or a
new hire? Can we get their Facebook profile? Can
(09:58):
we get their social media? Can we look at their
previous publications? Can we look at the previous work they've
done? Obviously, look at their resume.
What if you basically break all the rules and you say,
can we look at any traumatic childhood stuff? Right? That
would affect that?
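The determinism contrast BJ draws, classical programs returning the same output for the same input versus generative models sampling from a distribution, can be sketched with a toy softmax sampler. The logits and vocabulary here are made up for illustration; real models sample over tens of thousands of tokens, but the mechanics are the same.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token from a softmax over logits.
    temperature > 0 makes the choice stochastic (non-deterministic);
    as temperature -> 0 it approaches greedy, deterministic decoding."""
    tokens = list(logits)
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for token, p in zip(tokens, probs):
        cum += p
        if r <= cum:
            return token
    return tokens[-1]

# Hypothetical next-token scores for one prompt:
logits = {"battery": 2.0, "model": 1.5, "lithium": 0.5}

# Same input, two runs: at temperature 1.0 the results need not agree.
a = sample_token(logits, temperature=1.0)
b = sample_token(logits, temperature=1.0)
```

Run this a few hundred times at temperature 1.0 and you see a spread of answers; drop the temperature toward zero and it collapses onto "battery" every time, which is the "deterministic machine" behavior the rest of computing history trained us to expect.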
Speaker 2 (10:13):
I mean, what kind of baggage are you
signing up for here?
Speaker 3 (10:16):
Right? What kind of baggage, right? Or what kinds of things?
Not that it would stop me from hiring them, but
just: what do I need to be prepared for? Right?
How do we put them in the right position, one that
enables their success and aligns with ours organizationally? And it's
kind of interesting. These models are somewhat similar in that
way, right, where you actually now have to start thinking about, okay,
(10:36):
what was the training data? How are they performing? And
that takes a depth of knowledge and a depth of experience
which is unlike really anything we've done from a
strategy standpoint, in selection, in being able to do any
of these things. We've never really had to do it.
And so in many ways it's now not just
(10:57):
a technology decision, which is hard enough with all the
RFPs, the business people or finance people all in the room
just deciding which platform to select technologically. Now you've got
this whole thing where it's, oh crap, now we have
to do this HR thing. It's hard enough to bring
together the right people, culture, right fit. That's really interesting.
The culture fits are really interesting with these models too.
(11:18):
If you look at some of the major models, Anthropic's,
and not just Anthropic's, but you have also Google's,
you've got ChatGPT. You can actually see how they answer.
How these models answer is actually very culturally aligned,
and you can look at papers on this, culturally aligned with
the culture of the company, right? Oh, for sure. Anthropic:
extremely conservative. ChatGPT: very verbose, and takes a lot
(11:40):
of risks in the hallucination of the stuff
that they deliver. Google: kind of middle of the road but
somewhat factual. A lot of these models really are very
reflective of the company's culture. That's the really
fascinating thing. And so you have to take
all of these things into account as you go and as
you're selecting one. Really, what it comes
(12:02):
down to is you have to understand the technology almost as
deeply as, I would say, the disposition of these systems.
Speaker 2 (12:09):
I can see that, because, I mean, I didn't get
much out of AI until I was able to really
put my personal profile in there for my personal use.
So once I was able to tell it what
my values were, what my tone was, a bit more
of my bio and my background, and
plug all that stuff in, I was able
(12:31):
to turn up my trust factor a little bit more
versus before. But I can see these companies having
this idea of, I'll put in all of our pricing
data and that'll be good. But what about
putting in the employee handbook and the company values and
the mission, vision, goals, stuff around that? Because those
(12:54):
things cannot exist in silos. The pricing data and the
sales data are all part of the same ecosystem, right?
Speaker 3 (13:02):
Oh, without a doubt. But then you have a concern.
I mean, a lot of people then ask me:
if I put in all of this prompt engineering, right, we
call it prompt engineering, but if we put the
preamble of the prompting and all the context and
everything into the context windows of these large language models,
which is really the basic operation of these big
token machines in a sense, right, the issue that you
(13:24):
have is how much of that information is being used to
retrain? Because the one thing that is guaranteed
in all this is that it's data-hungry stuff. Well, we were talking about
big data ten years ago, full circle, right? And
what we called big data then is like tiny data now.
These things have billions, sometimes trillions of parameters, and,
(13:44):
I mean, we've run out of Internet. That's how
much data it takes.
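The context-window stuffing Kurt and BJ are describing, handbook, values, and pricing data all sharing one prompt, can be sketched with a small helper. This is a hypothetical helper, not any vendor's API, and the rough four-characters-per-token budget is a crude assumption just to make the trimming concrete.

```python
def build_prompt(question, documents, max_tokens=4096):
    """Assemble a system prompt from company documents, trimming to a
    rough token budget (~4 characters per token is a crude heuristic)."""
    budget_chars = max_tokens * 4
    preamble = "You are an assistant for our company. Use only the context below.\n\n"
    context_parts = []
    used = len(preamble)
    for name, text in documents.items():
        section = f"## {name}\n{text}\n\n"
        if used + len(section) > budget_chars:
            break  # context window full: later documents are silently dropped
        context_parts.append(section)
        used += len(section)
    return preamble + "".join(context_parts) + f"Question: {question}"

# Example: handbook and pricing data share one context window
# (both documents are made up for illustration).
docs = {
    "Employee handbook": "We value candor and customer obsession.",
    "Pricing data": "Model X: $120/unit at volume.",
}
prompt = build_prompt("How should sales discuss Model X pricing?", docs)
```

Whatever lands in that prompt is exactly the information BJ warns about: unless your agreement says otherwise, it may be retained or used to improve the service, so the trimming decision doubles as a disclosure decision.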
Speaker 2 (13:49):
We have to restart nuclear reactors now, I mean.
Speaker 3 (13:54):
It's pretty big-time stuff in terms of
the amount of data that we need. And so where
are they getting this data? I mean,
where do we get the data to make these things better?
You get it from people's input to these models,
and it's not malicious. I mean, we
work with all these teams. No one's maliciously trying to
(14:14):
steal your data.
Speaker 2 (14:15):
No. But to your point, there's an end of the road.
I mean, you can read the entire Internet in a
certain amount of time and take everything there, and then
it's done, right? It's finite.
Speaker 3 (14:25):
Yeah. And so if you're putting your own data in,
of course that's going to be used, right? Unless
you have a very strict agreement and you understand the
terms and conditions. But maybe that's no different than any SaaS-based
platform.
Speaker 2 (14:36):
Yeah, did you read what you were signing up for with Instagram? I mean,
they can use your photos. They can use your photos
in ads, by the way. So I find that
I've always run into two types of people in the world when
it comes to AI. There are the people that
are like, the computer is smarter than me, or
the computer is malicious, or maybe malicious is a strong word:
(14:58):
maybe it doesn't have my best interest at top of mind.
So I feel like there are people that will just take
it for what it gives them, because they're like, the computer's
smarter than I am, so that's what I've learned. And
then there are other people who maybe have a better understanding
of how computing works and are like, this is just
(15:18):
another sort of search type of thing, and so it's
not going to do anything unless I fully
customize it and set it up to be mine. When
those executives are showing up to
talk to you, which camp are they kind of in?
It sounds like they're more in the "the computer is
really smart, can't we just plug it in and go"
(15:39):
camp, BJ.
Speaker 3 (15:41):
Yeah, actually, that's a deep question, because
obviously everyone has a different approach. It really depends on
your demographic or age group. I'm not going to call
anyone out, but you know, if you're toward
the end of your career, you have a much
different perspective than if you're in the middle of your career. I mean,
that's never changed, right? The millennials will always say, ah,
(16:01):
those Gen Zs, right? And I remember when the boomers were
saying that about the millennials, right? So, of course. Anyway,
I think it's a mix. But even inside those generations,
there's no one hard and fast rule, right? It's really
a level of comfort, I think, with the technology. I
think a lot of people are uncomfortable, and executives
specifically have this need; part of their job, in
(16:22):
fact, it's in their job description: they need to be
strategic thinkers, right? I mean, they got to where they
are by being strategic and thinking. So any new tool
is going to be looked on, or should be looked
on at least, as a tool for use. And obviously the vendors
tend to, I would say, promote that tool in
a positive light, and so: oh, this can do everything.
And it's true, AI definitely opens up a significant number
(16:45):
of opportunities. I would say that it's
not going to solve everything, though, and that's my big
thing. It's not going to invent the cure for cancer.
It's not going to open up the
next industry that you're going to make the next
billion on. It's just not going to do that. What
it will do, however, is make everyone around you more efficient,
(17:08):
if they use it. Yeah, so you'll get there faster. Productivity.
Speaker 2 (17:12):
Right, yeah. Because that's the way I've always
looked at generative AI: as a productivity tool, more so
than a new form of intelligence or something. Obviously,
experts debate this. I feel like we're a long way
off from a real, AGI type of intelligence. Yeah. But
because of the labeling, because of the branding that's come
(17:33):
along with this, I think people that are less technically
savvy might think it is magic, might think it
works magic. It can look it.
Speaker 3 (17:41):
I am constantly amazed. I mean, the last time
I was this excited about something in computing, okay, so
I hadn't really thought about this, but I remember back
when I was, I don't know, ten. My parents brought home
this computer, MS-DOS, right? Yeah. And I used to love to kind
(18:03):
of get that prompt, and I figured out that
if I typed certain things, I could explore things
around the file system. For whatever reason, that
just intrigued me. It's a weird thing for an eight-to-ten-year-old,
I don't remember exactly my age, but it's like, weird.
It was like, can I find a new game,
or can I find something in there, like a new
file? And thinking back, it's like, wow, that was
such a finite search space. I'm surprised that it engaged
(18:25):
me for that long. The next time I had that
kind of excitement, almost like boyish excitement, childish
excitement, was when I discovered REST APIs, RESTful interfaces.
I mean, it was just amazing to me, these system interfaces,
that you could standardize on all of these, and you can
pass information back and forth. And yeah, you could
say SOAP and XML and all that kind of crap,
(18:45):
but that was kind of nasty, right? Yeah. REST
just seemed so elegant to me. And that was, what,
twelve, fifteen years ago. I would say that
this AI just knocked me completely flat
on my back. Yeah, these things
are just so freakishly cool, and they do seem magical.
I mean, even the inventors,
(19:08):
and I've had the privilege of talking to a
number of them, of the transformer, the "Attention
Is All You Need" seminal paper back in
twenty seventeen. It's just truly amazing that these things,
given enough data, can actually predict and generate human-like responses,
so much so that they're useful. It's not just limericks and poetry;
(19:29):
they're actually gainfully useful. We have
little chatbots on our website that will act as
enterprise innovation consultants and will actually tell you all about Technicity.
It's pretty amazing, not just how useful they are, but
how magical. Oh, we know how they work,
but can you guarantee they're correct every time? No.
Speaker 2 (19:51):
That's true. I remember when I first got a
Mac and Photoshop, and it was a very similar kind of
feeling of, oh my gosh, masking, and I can change
the colors, and I could make images that look like
crap look awesome, and all these things. It was amazing
at the time, the power I had at my fingertips. And I feel like chat
(20:14):
GPT and Claude and even ElevenLabs, which I use
for some audio stuff for the show and things, like, yeah, yeah,
it's just fantastic. Because not only is there the writing
side of things, but the audio side of things
for me has been just fantastic: being able to replicate
my voice and being able to fill in words where
(20:35):
I mixed up things, and to be able to just
do that for me.
Speaker 3 (20:40):
Yeah, you don't need a full-time production studio to do
that anymore, right? It's personal productivity and efficiency.
Speaker 2 (20:46):
We've had people asking all the time, you produce your
own show? And I'm like, yeah, because I have Descript,
which does all the inline editing and stuff for me,
link in the description, and I use ElevenLabs for
filling in any gaps, things that I want to
say or redo, because it can mimic my
voice almost perfectly, and it's just fantastic. I use all
(21:09):
these things all the time, and it's a win. It
used to take me four hours a week; I can
produce a show in an hour now thanks to these tools.
But back to your earlier point about approaching these things.
I think what you're saying becomes way more important as
time goes on: approaching these things from a "what
is their worldview?" standpoint versus "what data set do they have?"
Speaker 3 (21:34):
Yeah, no, I would say you approach them in a
far more human way. That's the key: approaching these machines,
these computers, in a more human way than we have
in the past. And that is a really critical thing,
the opportunity for us, because if we approach these
things in a more human way, then we probably have a better chance
(21:55):
of making this very successful for us. If we
continue to approach them like machines, we're just going to
end up with yet another tool that doesn't really affect us. Now,
there are a lot of dangers in all of that, and
we can talk about AI ethics, which I like to talk
about too. And it's interesting. I'm not even
an optimist about most things. I'm a pessimist,
(22:16):
an engineer by nature. But on this
specific thing, I'm actually very much an optimist. I
am excited about the future with AI. I don't
see a Terminator-like future. I don't see enough to
get my stomach tied in knots because, oh man, we're
going to have a revolution, and there's this K-shaped
recovery, and economically we're going to have the blue collar versus
(22:36):
the extreme wealthy with no one in the middle, and all
this kind of stuff where
everything goes to World War Three and Armageddon. No,
I mean, I don't see that extreme.
Speaker 2 (22:47):
Back to your point about being excited: you're an optimist,
but you're also a realist about what the limitations and
capabilities of the tool are. And I think that's what's
key for people: you've got to get educated on how
these things work, how they create, what they're
doing. Because, again, it's the same thing when the
iPhone came out, and then other smartphones,
(23:11):
because people were like, wait a second, I'm giving this
all my information, it's tracking me everywhere I'm going. And
then some companies were like, whoops, sorry, do
you want to turn that off? And they're like, okay, yeah.
And then people were like, wait a second, the phone's
listening to me all the time. It's
really interesting.
Speaker 3 (23:31):
Actually, yeah, it's really interesting. I think it's coming at
the right time. Privacy: we've become very aware
of it, and it's coming at the right time, because
if we had gotten this AI thing at the beginning of
social media, holy crap, imagine. That's when I
probably would have been more pessimistic about where we're headed societally.
So there's that. But at the same time,
(23:51):
unlike social media, which we didn't handle very well,
we used it because it was so useful. I don't
think people truly realize just how useful this AI stuff is.
And to your point, I mean, you
use all these little AI things all over. I use them too:
my assistant is an AI. If you
email and want to schedule a call with me, you may
(24:12):
not realize it, but she will email you and she'll
set it up. You can joke with her; she'll joke with
you back. Same with our website: you won't get a
generic chatbot, you'll get someone with a name,
you know.
Speaker 2 (24:25):
So it has a lot of personality. Total personality.
Speaker 3 (24:28):
I'm pretty proud of her myself. She's a closet Swiftie. You'll
go talk to her, go talk about Taylor Swift,
it'll be great. You can do all of these
things, right? And I think it's really interesting now to
see, and I don't think it's even a
demographic thing, I think people need to realize just how
efficient they can be, and they need to be shown, right?
And you have to do it. You can't just passively
(24:49):
sit by and watch others. Sometimes people, particularly executives
and senior leaders in organizations, might just wait
and see what their employees do, because historically that's been
somewhat of a successful approach.
Speaker 2 (25:02):
Yeah, it's been a common practice.
Speaker 3 (25:04):
Right, let it loose, yeah, kind of let it evolve
and everything. In this case, I don't think we have that.
We don't have that, I would say, privilege, or whatever
the word is I'm looking for. Instead, because the world's
moving so fast now with AI, and will continue to move
so fast, it'll just accelerate, right? The pace of development
is just accelerating, and AI is no exception to that rule.
(25:27):
It really does require everybody. Because it's not that,
as everyone says, oh, they're going to take our jobs.
I don't think so. Not in the way we think.
Whoever has a job right now, if you're
a knowledge worker, you're going to stay one. Obviously,
learn how to use AI, please, so that you don't
come up against something like that. But it's the people
that are coming into the workforce in
(25:49):
a few years. It's not that the jobs will never exist, right?
It's just that we're going to see
a squashing, a crunch, right, in availability.
And in order to really get in, you're going
to have to show ways that you're going to be more efficient.
So instead of now hiring a bunch of receptionists, AI
can do that job, right? Instead of hiring a
(26:09):
bunch of data analysts, kids right out of school
with business degrees to crunch spreadsheets, I don't need that anymore, right? So.
Speaker 2 (26:15):
That's what I was going to say. That's the one thing that
I've mentioned: I think it's the entry-level jobs,
the ones that start you on your path, that are
in the most danger right now, because, to
your exact point, AI is covering a lot of
those things. So if you're like me
and you've got twenty-some years of experience in
(26:37):
this thing, there's a ton of value there that you
could never pull or put into an LLM. But
to your point, if it's, I've graduated from this
big school and I'm going to get my
first job at a consulting firm, and I'm going to
crunch spreadsheets for the next couple of years to become
a partner: they've got stuff doing that, don't they?
Speaker 3 (26:58):
Yeah, I mean, that's the thing. You don't need a
whole team of interns anymore if you're a partner. There are
lots of things, if you enable and harness this, that
you can now do yourself, and sometimes even better, because
you can cut out the middleman on all those things.
You can take your vision and implement it consistently, faster.
I mean, we're limited beings; we can't do everything.
That's why we build these
(27:19):
social structures and business structures, right, in order to scale things.
So that's going to be a very interesting outcome of all
of this from a societal standpoint. Not to go
too far down the rabbit hole on the societal implications, I know, please,
but I think it's important. You know, I see it as
these intern positions, or these positions where you're an analyst,
right, for a couple of years right out of school,
(27:40):
you work for a Deloitte or a KPMG or...
Speaker 2 (27:42):
That's exactly what I'm thinking about.
Speaker 3 (27:45):
Right, right. You're training, you're paying your dues. You're
going to work eighty hours a week, and you're going to get
treated like crap, and you're going to be traveling. But what
you're doing by paying your dues is learning, so
you can go on to the next big thing, right?
And for the last twenty-five or thirty years,
that's worked. But now, to your point,
those positions are going to be few and far between.
You're not going to need as many interns. Instead of
a team of five interns, you might only need two, you
(28:06):
might only need one, you may not need any,
if a partner is really effective. And then of course
there's going to be all that push from the top for
margin and all those different things to really create efficiency.
So partners may try to limit entire teams and everything, right;
managing directors will eliminate them. So from that perspective, it's going
to be big. And from a societal standpoint,
I think the development of new talent is at risk
(28:29):
because of that. You don't put the younger generations through,
basically, that trial by fire that
creates the next version of us from when we were younger.
So that'll be an interesting thing. But even then,
I mean, we'll figure it out. Humans are amazingly resilient,
particularly in today's economies and everything. And yeah, the biggest
risk, and we're on this because you said please, let's talk
about the implications.
Speaker 2 (28:50):
Yes, the implications.
Speaker 3 (28:52):
I think the biggest risk that we run... I was
just talking about this last week. I was at an executive committee
meeting for a major university in the US where we're
looking at curriculum, and there was a side conversation
about the effects of AI. The biggest worry
that I have, and I talked a little bit about this,
is a concept I like to think of as convergence. There's a great statistic
(29:13):
that holds true: fifty percent of all people are
below average, right? Guaranteed. That's just statistics. But
what it means is there's an average right in the middle, and
if you took everyone's intelligence and knowledge
and everything and averaged it, you'd get exactly average. That's just simple statistics.
The problem is, if we're training on all this
(29:35):
data, and we consume the world's data and knowledge and
information and everything, what do you get? You get a
big averaging machine. Right. And now imagine... okay,
so that's okay. I mean, we understand that, we can
approach it. But here's the problem. If everyone's using these
giant averaging machines, or maybe just one, and you use
this for a decade or two decades or three decades,
(29:57):
what ends up happening? What ends up happening is the
elimination of diversity of thought, because everyone's just using these
things. They can be so ultra-efficient that we
all end up going in one direction, which
makes us really efficient, and we develop. But the problem is,
who ends up doing the most creative and amazing things
(30:17):
that change society? It's not just more of the same thinking, right?
It's the change makers, the ones that tend to
be really impactful: the crazy people, the ones who were,
let's say, neurodivergent or whatever it is, who just made a massive impact. Right?
I can think of one crazy guy, right, who
happens to like space and cars and electric vehicles and
(30:37):
all that kind of stuff, really someone who had
a very unique childhood and everything leading up to it.
Actually, taking him as an example, Elon Musk is
a great example, right? He had a very fascinating
childhood in South Africa, where hard times oppressed him,
and then he was pulled out, put into the United States, and was given
(31:01):
the oyster of the economy, where he was in a powerful
position and could just go without any kind of
oppression, and he just made the most of it. Also,
being somewhat neurodivergent basically gave him a leg up,
because he didn't react emotionally the
same way. Now, with all those things, here's the thing:
he didn't do that himself. He did it with
a team of people. Yeah, tons of different people. It
(31:21):
took a village for him to do it. Now, absolutely, yeah,
if that village wasn't diverse, which it was for him...
if they were all using the same tools, doing the
same things, and being made efficient in the same exact
ways because they were all using a tool called generative
AI underneath, then you might still get the neurodivergence,
you might still get the amazing, impactful leaders,
(31:42):
but they won't actually be able to make a major impact,
because they will lack the team of people to make
it a reality for them. And that's what I call
convergence: eventually you just eliminate possible divergence, right, and
diversity in everything, which means that we will probably
top out, right, we will top out at a very specific
level, because you won't be able to do anything new. So
that's really something to be aware of. I don't know
(32:03):
that that's actually how it works, but doing things like open-sourcing
AI is a really effective way to combat that.
I think, and for once I have to be careful here,
I think Meta did something good,
and that was open-sourcing AI. Google
even open-sourced Gemma, right, a version of Gemini,
and ChatGPT hasn't quite done that yet, but
(32:24):
we live in hope, right, we live in hope that
they do. But those are really good things, because it
helps the open-source community. I think open
source software is another kind of impetus that has helped
these things evolve over the last twenty years. And so just
making sure that we foster all these various different types
of tools, that we foster diversity, and not in the way that
we typically talk about it, equality and inclusion and all that.
(32:46):
I'm just talking about diversity of thought, diversity
of tools, the things that make, well, the things
that make our country great, right? All those things,
I think, are the key, and that to me
is what we have to be careful about moving forward
with AI. And we really only know how to do
that if everyone really starts to understand it. And it's
such a low barrier to get into these things, right?
(33:07):
Play with it beyond just using it as a search engine.
Do something with it. Don't just ask it to
write a poem because your kid told you to, right?
Pick a problem. I have a
good personal example, outside of the AI day job.
I had a furnace problem a couple
of weeks ago. With the furnace, I'd actually had a
guy come look at it before. At first,
(33:28):
I didn't know, it was flashing a light or whatever, so I was like, okay.
I pulled the panel off, took a picture with Gemini, and said, hey,
what's this light doing? And I used the live mode,
so I was able to talk it through, and it said, hey,
I think there's something wrong with the exhaust fan
and all this kind of stuff. And it was actually able
to talk me all the way through it.
I mean, that's a pretty amazing thing.
(33:48):
So I guess that story is to say: look, if
you have a problem and you're thinking, man, I really
wish I could talk to someone who knew something, instead
of just picking up the phone and calling
a service tech, which you really should do, don't do anything
unsafe. Use it first just to understand the problem;
don't actually try to fix it yourself. Still call the experts,
so you can talk to the expert better.
Speaker 2 (34:08):
I love it.
Speaker 3 (34:08):
So that's what you can do, and then you'll see
the true power of generative AI. I think, yeah, you will.
You won't see it until you actually have
it help you. And it may not work the
very first time, because you have to prompt it right
and all those things. Yeah, try it a few times,
give it a chance. It helps you. I guarantee it. I guarantee you
will find that it helps you.
Speaker 2 (34:27):
And I think that's a fantastic piece
of advice, because not only will you be amazed by
what it can do, you'll also get a real sense
of the limitations and where the guardrails are, where it'll
push back and say, I'm sorry, I
don't know, or its response will be so obviously
wrong that you'll think, okay, I've gotten in over your
head here. But to your point, be one of the people
(34:49):
who look at that as an opportunity to leap from
that point, rather than recoil from it.
That's what I've always found: AI
is a jumping-off point for me. That's how
I've used it, and that's how I've always approached
this new type of technology, not as the
alpha and omega, but as the jumping-off point for
(35:11):
the next thing, exactly. And if you treat it that way,
I think you're going to get a ton of value out
of it, to your point. But yeah, if we don't
have the Elon Musks, the Steve Jobses, those types of
people in the world, all this technology
is for nothing in the end, actually,
because those people are trying to solve the real problems, right? Musk
(35:32):
wants us to be a multiplanetary species. For Steve Jobs, the computer
was a tool to change the world. That's it.
He didn't care about microprocessors and all those things.
He was like, we're going to change the world. We're going to
make kids smarter than they've ever been. We're going to give
people access to better information than they've ever had. He
(35:52):
wasn't thinking, ooh, can we get more processing power out
of these GPUs? He was just making it work.
Speaker 3 (36:00):
Yeah, it takes a village, right? And it does. Otherwise,
we would have just been stuck with vacuum tubes,
right? If the AI just said,
we've always used vacuum tubes, statistically that's all I've ever seen,
so therefore that's all I'm going to say, then we'd always
be stuck with vacuum tubes. It wouldn't necessarily say, oh, I
need that whole team to think of this microprocessor thing.
So yeah, that's exactly it. So, a few
(36:21):
tools and tips and tricks, because so far, you know,
I've basically just said, oh, you should try
it on a problem. Here are a few prompt techniques that I've
found useful for people.
Speaker 2 (36:30):
So, say it's your first time.
Speaker 3 (36:32):
I'm sure everyone's tried ChatGPT. I also recommend Gemini.
I might be a little biased there, but I also
recommend trying some things with Gemini. Anthropic is good too; Claude
is a relatively good one to try out. In your
prompt, always append something like... so if it's, hey,
I have this kind of thing, a Linux whatever, add: explain
it to me like I'm six. It's that preamble, right?
(36:55):
Explain it to me like I'm this. Now, what
I sometimes will do, if I know
exactly what response I'm going to get, because I train these things,
so I kind of know why it's going to respond
that way, is frame it:
write a preamble, some context. It's, hey,
treat me like I'm an engineer. I do a
lot of biology kind of stuff, right, and I always
add a preface if I'm asking about a molecule and an interaction. Oh,
(37:16):
how do ketones interact?
The liver produces a ketone, and the ketone
is a signal for fat burning, all
those different things. The way I learned that was through
a large language model. But what I said was, explain
it to me like I'm an engineer, and it has
an amazing way of relating it to me. So say
you're, let's make you an accountant, right, and you want to understand something
about your car because you're having trouble. You hear something.
(37:39):
It's, hey, my car is doing this;
explain it to me like I'm an accountant, and it will
actually relate it to your domain and your terms. And
that's a big thing that I've been finding: the
ability to interact with these systems.
Generative AI is an amazing translator of domains and domain
knowledge, and it will actually
do it really well. So that's one trick for a
(38:00):
prompt: just say, explain it to me like... And if you're thinking, look,
I'm no smarter than a five-year-old, say,
explain it to me like I'm five.
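The persona preamble described above can be sketched as a tiny helper. To be clear, this is an illustrative sketch of my own: the `frame_prompt` function and its wording are hypothetical, not part of any specific product's API, and the resulting string would simply be pasted (or sent) to whichever chat model you use.

```python
def frame_prompt(question: str, persona: str) -> str:
    """Prepend a persona preamble so the model relates its answer
    to the asker's own domain, per the trick described above."""
    return (
        f"Treat me like I'm {persona}. Explain the following using "
        f"terms and analogies from my field:\n\n{question}"
    )

# The accountant-with-a-car-problem example from the conversation:
prompt = frame_prompt(
    "My car makes a grinding noise when I brake. What could be wrong?",
    "an accountant with no mechanical background",
)
print(prompt)
```

The design point is simply that the framing lives at the start of the prompt, before the question itself, so the model commits to the persona before it begins answering.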
Speaker 2 (38:06):
I do it all the time, especially for things like my
LinkedIn bio. I'll write out the material, and then I'll
go to ChatGPT and say, rewrite this for
me like I have an eighth-grade reading level,
and it's amazing. It works so well,
not because people are dumb and they
only read at an eighth grade level, but because when people are scanning,
they're usually scanning at an eighth-grade
(38:28):
reading level. And it depends on the venue where you're
presenting the information, like a presentation deck. You don't
want it to be something people are reading; they should
be able to scan it quickly. So if I'm
doing a presentation deck, I'll do all
the work and then I'll say, tell me this
at an eighth-grade reading level. It's been amazing.
Speaker 3 (38:48):
Yeah, that's true. Second thing I would do.
Speaker 2 (38:51):
A lot of times.
Speaker 3 (38:52):
If you look at Google search, right, this is actually the
opposite of Google search. If you got really good at
Google searching, you found ways of fragmenting sentences
and phrasing things, right? But as it turns out, you need
to reverse your thinking on that. If you've
gotten really good at Google searching, instead of writing out
a whole sentence in the Google search bar, which you
don't do anymore, right, you just type
the specific keywords for what you're searching for,
(39:14):
well, don't do that
with a large language model. Talk to it like it's
a human. Talk to it
like you're talking to, basically, a high school senior
or a first-year undergraduate. That's typically how I
think of it when I'm relating to a large language
model. With generative AI, if you
actually communicate with it like it's a human, it's
(39:38):
going to behave better, and it's going to give you
the response that you want. Why? Because it was
trained on human language, right? It wasn't trained on the
fragmented search keywords that we use in Google.
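A concrete before-and-after may make the contrast sharper. Both query strings below are hypothetical examples of mine, not anything from the conversation's tools; the little `looks_conversational` check is just a rough heuristic for the difference being described.

```python
# Keyword style: great for a search index, poor for a language model.
search_query = "furnace status light flashing meaning"

# Conversational style: full sentences, context, and a clear request,
# closer to the human text the model was actually trained on.
chat_prompt = (
    "My furnace stopped heating and its status light is flashing. "
    "Can you walk me through what that usually indicates and what "
    "I can safely check before calling a service tech?"
)

def looks_conversational(text: str) -> bool:
    """Rough heuristic: conversational prompts read as full sentences,
    while keyword queries are short, unpunctuated fragments."""
    return len(text.split()) > 8 and text.rstrip().endswith((".", "?"))

print(looks_conversational(search_query))  # False
print(looks_conversational(chat_prompt))   # True
```

The takeaway is the reverse of a search-engine habit: add words and context rather than stripping them away.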
Speaker 2 (39:50):
I'm so glad you brought this up, because this actually
changed my experience working with ChatGPT and Claude and
some other ones: giving it feedback, like, hey,
that was a really good response, but I'd really like
it a little bit more like this. I'm telling you,
every time I work with it like I'm working
with another person, I get way better results.
Speaker 3 (40:09):
It's true. Yeah. And don't expect it to be
right the first time, just like you wouldn't expect a
high school senior to be right the first time when
you're asking someone to help you solve a problem. It's just
not going to work that way. Commit a little bit. Say,
hey, I'm not sure I understand, or, hey, can you
explain it a different way? Or, hey, you were wrong about this.
(40:29):
Maybe its perspective needs to change a little bit,
so can I explain it a little better this way?
So treat it like it's a human. And if it's
more natural for you to use speech-to-text, do
that. Sometimes I find myself not even wanting to
type anymore. With Gemini today, I
talk to it a lot. I just talk to it
now, because I actually get a better response, since it's
been trained on these human-like interactions.
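The iterate-and-give-feedback habit maps naturally onto the multi-turn, role/content message format that most chat APIs share. A minimal sketch follows, with hypothetical message text of my own; the model call itself is omitted, since the point is only how each piece of feedback becomes a new turn in the history you send back.

```python
# Conversation history in the common role/content shape. Each turn of
# feedback is appended, and the whole list is resent to the model, so
# it can revise with full context, just as a person would.
messages = [
    {"role": "user",
     "content": "My furnace status light is flashing. What should I check?"},
    {"role": "assistant",
     "content": "A flashing status light usually encodes an error..."},
]

def add_feedback(history: list[dict], feedback: str) -> list[dict]:
    """Return a new history with the correction appended as a user turn."""
    return history + [{"role": "user", "content": feedback}]

# Don't expect a perfect answer on the first try: push back and re-ask.
messages = add_feedback(
    messages, "I'm not sure I understand. Can you explain it a different way?"
)
messages = add_feedback(
    messages, "You were wrong about the exhaust fan; it spins freely."
)
print(len(messages))  # the growing history goes out with each new call
```

Sending the full history each turn is what lets the model behave like the patient conversation partner being described, rather than answering every question cold.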
Speaker 2 (40:50):
I think it's great. I think those are excellent tips.
All right, BJ, I could talk to you all day,
but I've got to go. So if I want to
get in touch with you, or I want to learn
more about Technicity, where do I go to find
out more?
Speaker 3 (41:01):
I'm bj dot Yrkovich at Technicity dot io. We've got a
new website and everything, so check it out. You can
talk to others there, or just reach out directly to me.
You can find me on LinkedIn, and I have to
say this every time: I do not accept connection requests.
I'm still not into social media. I'm still one
of those kids from a long time ago. I'm totally
into the AI, though, and it's okay. Message me on LinkedIn.
I will get back to you, I promise. You can
also find me at aq on dot com, do on
(41:22):
energy dot com. You can also find me at final
health dot org. And you can find me at some
of my investing, problem line capitals, and all
of those different places. And obviously, just anywhere on the
Internet you find AI, you'll probably run into something
that I've had an impact on.
Speaker 2 (41:38):
Something that's got your fingerprints on it.
Speaker 3 (41:39):
Yep. Yeah, for better or worse. Probably worse.
Speaker 2 (41:42):
Again, thank you so much for taking the time. You're a
busy guy, and I really appreciate your flexibility in being able
to come on the show. There are
links below for everybody if you want to get in
touch or if you want to view some of the
projects he's working on. So thanks again, BJ. I really appreciate it.
Speaker 3 (41:57):
I really appreciate it, Kurt. Always a good time.
Speaker 1 (42:02):
We appreciate you taking the time to listen to this
episode of Schmidt List. The stories shared by our guests
are genuinely inspiring and offer insightful knowledge. It's important to
remember that success doesn't happen overnight and requires collaboration, learning,
and perseverance. If you want to broaden your professional connections,
(42:22):
check out Kurt's book The Little Book of Networking, How
to Build your Career one Conversation at a Time on Amazon.
Please stay connected with all things schmidt List on social media,
leave a review for the podcast, and join our community
of like minded entrepreneurs. Thank you for being part of Schmidtlist.
Keep innovating, collaborating, and chasing your entrepreneurial dreams.