Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to another episode of the Chicks on the Right
podcast where we talk to our friend and sponsor of
our show, Zach Abraham from Bulwark Capital Management. Today, Zach,
we are going to talk about AI, because AI just
got named Person of the Year by a magazine, which I'm like,
do you not know what a person is?
Speaker 2 (00:18):
Hello?
Speaker 1 (00:18):
This is the whole problem with AI. So anyway, OpenAI
was just named Company of the Year, and of
course all the chatbot tools, ChatGPT, Gemini, Grok,
all of these are becoming kind of part of our
everyday life, and so we wanted to find out
from you if you're using AI to any extent in
(00:40):
your financial services world and should you be, should other
people be, or should we be afraid?
Speaker 3 (00:47):
Yeah, should we be? Yeah.
Speaker 4 (00:51):
First of all, to address the titling issue, how we
refer to AI, I think it's important to remember that
these are the same people... I was watching an interview
this weekend where a medical specialist was explaining to
the audience how a trans woman's milk is just as
beneficial for a baby as.
Speaker 3 (01:12):
A woman's. God.
Speaker 4 (01:15):
So the fact that they're referring to AI as a
person? Not even close to the craziest designation I've heard
just in the last thirty-six hours. So yeah, it
doesn't really.
Speaker 3 (01:25):
It didn't really surprise me too much.
Speaker 4 (01:27):
I actually kind of saw the headline and thought, boy,
that's kind of sober, you know, like compared to the
stuff that I'm used to getting fed from these guys.
Speaker 3 (01:34):
Right. So, first of all, I am using it, and
I'm using it more and more.
Speaker 4 (01:44):
I continue to think that AI is going to take over
things more slowly than people think. I think that people are
over-extrapolating what it is that we have right now,
and I think the reason that they're doing that is
(02:06):
in large part because of the benefit it's doing to
the stocks of the companies involved in these things.
Speaker 3 (02:11):
So here's what I mean by that. It's a big thing.
Speaker 4 (02:14):
But a large language model, an LLM, is what
we're using right now, and that's what
we're referring to as AI. That is not dynamic artificial intelligence. Okay,
it really isn't. And it is limited by the
data it has, and there are a lot of issues
(02:35):
still remaining with it not being able to weight the
places it pulls data from.
Speaker 3 (02:41):
Right. So, for instance, you're going.
Speaker 4 (02:42):
To get like crazy off the wall wrong answers sometimes,
and part of it's because it can't distinguish between, for instance,
a parity Twitter account and a non parity Twitter account, right,
And I think there's some that can start to identify
those differences. But I'm using that as a you know,
as a as an example. And so what I think
(03:04):
is that I think humans are going to realize how
much harder and how incomplete current technology is to get
it to the point where it truly is.
Speaker 3 (03:18):
Artificial intelligence, because.
Speaker 4 (03:22):
This is a really exciting, really cool, really amazing first step.
But basically, what we've seen so far, and I'm probably
oversimplifying it, but I think the easiest way to think
of it is that what we've seen so far is like
an AI version of search, right? You've seen, like, AI-enabled
search engines.
Speaker 3 (03:44):
That's kind of what we've got right now. So
getting to true.
Speaker 4 (03:49):
You know, artificial intelligence, you're a long ways away from that, Yeah,
I hear.
Speaker 5 (03:55):
You. However, yeah, but like with search, though, I
use it a lot for medical stuff, which is awesome. So that's,
I mean, I think that's honestly the one thing
I really do use it for, because I don't really
like it.
Speaker 2 (04:08):
I get kind of wigged out by.
Speaker 5 (04:10):
It, but I do use it a lot
for medical stuff. And that makes me think, are there
going to be doctors.
Speaker 3 (04:15):
In ten years?
Speaker 5 (04:16):
Because I really don't need one, unless they're going to
do surgery on me, which, I don't know, can robots do that?
Speaker 1 (04:21):
Even robots can do that. Yeah.
Speaker 2 (04:23):
That's the thing. So I start thinking about the medical field.
Speaker 5 (04:26):
Now, granted, financial stuff, I don't know what they're doing.
Like, what are they doing for you guys?
Speaker 2 (04:30):
Are they doing computations? Are they doing I mean.
Speaker 4 (04:33):
So, two really important things there. I think your
point about the medical field is spot on. But look
at what the average doctor has turned into. The average
pediatrician is an LLM.
Speaker 3 (04:49):
Right.
Speaker 4 (04:50):
You tell them the symptoms, they look at their sheet
and they go, I'm.
Speaker 3 (04:53):
Giving you a prescription.
Speaker 4 (04:54):
And I always feel like they talk in Ben Stein's
voice from Ferris Bueller, "I give you," right.
Speaker 3 (05:00):
I don't know why, that's just what I hear there,
you know.
Speaker 4 (05:04):
And so really, when you look at the average, now,
I'm not talking heart surgeons, brain surgeons, I'm not talking surgeons,
but when you look at family practice.
Speaker 3 (05:12):
That's basically what they are. And they're not
using imagination anyway.
Speaker 4 (05:17):
You tell them the symptoms, they go down the list: these symptoms,
we're supposed to give you that drug, right? So in
that way, it's huge. On the financial side of it,
here's some stuff that's really useful. I have found
it to be really good at preliminary research, meaning I'm not
going to ask it to make decisions, but, like, if I
(05:39):
want a breakdown of a company, and I want
to look at whether this company... There's a company we
were looking at recently, and depending on how companies book things,
they can make themselves look much more expensive or much
cheaper than they are in reality. And I was
sitting there, I didn't want to go through all these computations myself.
I just wanted to know if this company warranted
(06:01):
me even looking at it. And so I asked AI, hey,
is this as expensive as it looks, or is there
some hidden value on that income statement, in that balance sheet?
And then it did some preliminary filtering work, and it
was really good. It saved me a bunch of different steps.
And then one of the other places, like, on the
research side of it, is asking it to get me different
(06:23):
research about this topic, explain this to me.
Speaker 3 (06:25):
And there was one other place that I was.
Speaker 4 (06:28):
Recently, I was extraordinarily impressed, because I was wondering how our, as
active managers.
Speaker 3 (06:36):
I was wondering how our fund performance.
Speaker 4 (06:37):
For the last five years matched up to other value managers,
other macro managers, meaning guys managing funds according to macroeconomic
trends and things of that nature.
Speaker 3 (06:47):
And then I wanted to.
Speaker 4 (06:48):
See how we added up to the universe a hedge funds.
And it was awesome the breakdown it gave us. It
asked me for more information. I was giving it all
the portfolio stats and all this kind of stuff and
showing where we ranked, and it was it was really
really cool, and it was information that would have probably
taken us.
Speaker 3 (07:08):
Hours and hours to do ourselves.
Speaker 4 (07:11):
Yeah, but you know, it's the kind of work you're never
going to do, right? I'm never going to spend hours
of time trying to figure out how we stack up
against hedge funds.
Speaker 3 (07:17):
So some really cool stuff there.
Speaker 4 (07:21):
Like I said, I think it's going to remove a
lot of the need for junior analyst positions in stocks.
Speaker 3 (07:27):
Awesome, how come like?
Speaker 1 (07:28):
Because I always ask this, and I don't think I've
ever gotten an answer from anybody I've ever asked. But like,
what's to stop some nefarious actors from using
AI to, like, go raid our 401(k)s or
our bank accounts or whatever? It seems like it would
be so easy to do that, because we're so online.
(07:49):
All of our information is online.
Speaker 2 (07:51):
So how does it not do that? How do people
not use it that way?
Speaker 4 (07:56):
So I'm not a big enough expert to answer that question,
but to be honest with you, I'll be shocked if
it's not happening already.
Speaker 3 (08:03):
Oh yeah, I'm sure it is.
Speaker 2 (08:06):
Okay, that's terrifying.
Speaker 4 (08:08):
But remember, they're developing AI security. So, a good
buddy of mine, actually, I did an interview with
him on our show, he is the head of AI
security at Microsoft. So it's just kind of
another arm of the web security game that's
been going on, which is, you know, can the developers
(08:30):
of the security stay one step ahead of the guys
who are trying to break into things?
Speaker 3 (08:34):
Right? That's the game. And so I think it's.
Speaker 1 (08:36):
A foreign actor, though. You know, China is not going
to put those safeguards around their AI, you know.
Speaker 4 (08:41):
What I mean, Well, they'll put them around their AI
as it relates to their economy.
Speaker 3 (08:46):
Right, I don't. I don't think.
Speaker 4 (08:50):
I don't think the CCP is going to spend an
inordinate amount of time making sure that their AI does
not have nefarious intentions when it comes to us, right.
Speaker 2 (08:58):
That is my point.
Speaker 4 (08:59):
Yeah, so I think what we've got
to do is attack it from the
AI security side and kind of fight fire with fire, right?
So that's what they're doing. But we had an experience
this weekend where my son went
to go buy something online that we gave him permission
to buy, and he was going to use his card.
We do the Greenlight thing with the
(09:19):
kids and their cards, and we put money on the cards.
Speaker 3 (09:23):
Somebody had hacked into his Greenlight account and taken his
birthday money.
Speaker 4 (09:27):
Oh my. Yeah, yes. And so I was sitting there
going, a Greenlight card? You guys are, I mean,
think about, wow, how crazy that is.
Speaker 3 (09:37):
Wow. And a week.
Speaker 4 (09:41):
Before that, somebody had hacked into my wife's uh via
well into our bank account, but they only got us
for like three hundred bucks. But they went through my wife's.
Speaker 2 (09:48):
Credit more times than I can count.
Speaker 4 (09:52):
Yeah. And I was watching that
happen in the same week, and I was
thinking to myself, you know what, this is probably AI,
because one of the things about AI, right, is you
put it on a problem and you just let it
go and let it spin.
Speaker 3 (10:06):
It keeps working on it, working on it, working on it.
Speaker 4 (10:08):
Well, I mean, I would assume that that's not different.
Like, I am not some big hacker extraordinaire,
so I'm talking a little bit out of school here,
but I've got to believe that that's not that different
from how it would work trying to get it to
hack into a network, right? Just let it sit there
and spin through enough combinations and try enough things, and eventually
it gets in.
Speaker 3 (10:26):
But it is a nightmare for security.
Speaker 4 (10:28):
And there are a lot of things about this where, you know,
for lack of a better term, it really is
Pandora's box, in the sense that it makes me want to take.
Speaker 1 (10:35):
Out all of my money from all of my places
and just put it under my mattress. Yeah.
Speaker 3 (10:41):
Yeah.
Speaker 4 (10:41):
Well, the one thing I'll say is that when you
look at existing banking laws and stuff, right? Like, first
of all, the banks have gotten pretty good at detecting
a lot of this stuff.
Speaker 3 (10:50):
And then also, they back you up, so I mean they can.
Speaker 5 (10:54):
And also, for all the jobs lost, or all the
jobs that we think are going to be lost because
of AI, there's a whole new market, AI security, and
a whole new job market. At least
I would hope there would be, because I'm hearing
that that's obviously going to be a huge problem. So
I would hope maybe there's, like, a major in that
in college that kids just start majoring in, because I feel
(11:14):
like that's going to be a huge issue moving forward.
Speaker 4 (11:16):
Well, and this gets to my next point about, look,
there are going to be a lot of jobs replaced by AI.
If history is any indicator, it will happen more slowly
than I think people are currently predicting.
Speaker 3 (11:32):
Right, and you know, these things always do.
Speaker 4 (11:36):
There are snags. When
you understand how this technology works, there are going to be
fits and starts. It's not going to be as seamless
as everybody thinks, in my opinion. I could be wrong,
it just isn't that way. The other thing is
that there are almost always use cases that
jump up, you know, if you look at historical examples
(11:57):
of really disruptive technology. Now, granted, the one thing different
about this is that this technology, when you think about
what it's designed to do, is designed to take
the place of human beings.
Speaker 3 (12:11):
So that is different.
Speaker 4 (12:12):
But what I will tell people is, rather than
pushing back against it, the people who are going to
make it are the ones who keep their jobs.
Speaker 3 (12:20):
You're going to have to use AI to your advantage, right?
You can't just be like, I don't like it, I
don't want to. You have to use it. It's like
the Internet twenty-five years ago, you know, and you.
Speaker 4 (12:29):
Got to harness that power, and if you harness it correctly,
it's unbelievably helpful.
Speaker 1 (12:33):
Yeah, and there are, you know, certain things, certain
aspects about dealing with people in the service industry that
nobody wants to go away. Like, I
would not want to have my financial planner be an
AI bot. I would want it to be a person
like you that I can just call, and, like, maybe
go to a webinar, or have a second-opinion
(12:56):
phone call with you to learn about my retirement.
Speaker 4 (12:58):
There it is. Yeah, you could, yes. And you could also,
I'm just gonna say this right now, you could also
moonlight as an advisor for Bulwark Capital Management, because you
already pitch it so effectively.
I thought we were still talking about AI.
Speaker 2 (13:15):
I know, right, She's still She's that good, Zach.
Speaker 4 (13:18):
She's like the podcast version of Ron Popeil. Remember?
All this.
Speaker 1 (13:24):
Stuff, the guy with the hair, like, remember, he sprayed
his bald spot.
Speaker 4 (13:28):
Yeah, he had whatever you needed.
Speaker 5 (13:33):
She actually, she's a transition ninja, is what she is. You
don't even know what's coming.
Speaker 2 (13:39):
She just and then she kicks you right in the face,
and you're like, I hear.
Speaker 3 (13:43):
We are the Dark Arts.
Speaker 1 (13:45):
The dark arts. You still need to tell people how they
can reach you.
Speaker 3 (13:49):
Yeah, well, I mean I'd rather have you do it, but.
Speaker 4 (13:54):
You can always go to Bulwark Capital Management dot com, get
our video feeds, our podcast, our daily stuff. Search
Know Your Risk Podcast dot com. Know Your Risk Podcast
dot com, Know Your Risk Podcast dot com, Bulwark Capital Management dot com.
Speaker 3 (14:07):
Not hard to find.
Speaker 5 (14:08):
Yeah, thank you, Zach, the dark arts. Thank.
Speaker 4 (14:12):
You. Yeah, I'm telling you, she is a salesperson extraordinaire.
Speaker 2 (14:15):
Right, It's insane.
Speaker 3 (14:18):
Investment advisory services offered through Trek Financial LLC, an SEC
Registered Investment Advisor.
Speaker 4 (14:22):
The opinions expressed in this program are for general informational purposes
only and are not intended to provide specific advice or
recommendations for any individual or on any specific security. Any
references to performance of securities are thought to be
materially accurate, and actual performance may differ.
Speaker 3 (14:33):
Investments involve risk and are not guaranteed. Past performance doesn't
guarantee future results. TREK twenty four three zero eight