Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Bloomberg Audio Studios, Podcasts, Radio News.
Speaker 2 (00:10):
Hello and welcome to the Money Stuff Podcast, your weekly
podcast where we talk about stuff related to money. I'm
Matt Levine and I write the Money Stuff column for
Bloomberg Opinion.
Speaker 1 (00:20):
And I'm Katie Greifeld, a reporter for Bloomberg News and
an anchor for Bloomberg Television.
Speaker 2 (00:26):
What are you talking about today, Katie?
Speaker 1 (00:28):
We're going to talk about OpenAI, all the drama.
We're going to talk about chatbots and stock picking, and
then we're going to talk about bibles.
Speaker 2 (00:37):
I'm gonna bail out right now.
Speaker 1 (00:44):
Sam Altman, OpenAI. You know how OpenAI is
a nonprofit?
Speaker 2 (00:49):
I don't. Well, explain it to me.
Speaker 1 (00:52):
No, we don't need to, because it's not anymore. It won't be,
according to people familiar with the matter, of course.
They're just discussing how to restructure to become a
for-profit business officially and give Sam Altman a seven
percent equity stake.
Speaker 2 (01:09):
It's so good.
Speaker 1 (01:10):
Yeah.
Speaker 2 (01:10):
I wrote about this, that if you just, like, didn't
know that it was a nonprofit and you just ignored
all of that stuff, you would sort of have a
better understanding of OpenAI than if you did think
about its nonprofit governance structure. Because, you know, they had
a nonprofit board whose job was to develop AI for
(01:31):
the benefit of humanity, and the nonprofit board decided that
the CEO, Sam Altman, was apparently developing it not
for the benefit of humanity, and so they fired him,
and then everyone was like, never mind, there's no nonprofit board.
Speaker 1 (01:44):
Well, that's the thing. I feel like a lot of
people didn't know that OpenAI was a nonprofit until
they fired and then rehired Sam Altman.
Speaker 3 (01:51):
Yeah.
Speaker 2 (01:52):
Yeah, there was, like, a five-day period where you
could be like, wow, OpenAI is really a weird
company and not just a hot tech startup. And then
they're like, nah, we're just kidding, we're a hot tech startup.
And so now they're going to be a for-profit
for real.
Speaker 1 (02:07):
Yeah, taking the last step. They're considering, according to people
familiar with the matter, becoming a public benefit corporation, tasked
with turning a profit but also helping society. But Matt,
are those two things at odds? Can you help society
and still make money? I don't know.
Speaker 2 (02:24):
This podcast will try to find out. Look, you know,
you can be a public benefit corporation. You know, you're
trying to make a profit. But, like, the distinction
between a public benefit corporation and a regular corporation is
pretty vague, right? Because you can be just a regular
old for-profit corporation and, like, also try to be good,
and the constraints on you as a public benefit corporation
(02:44):
are not that constraining. So you can be a public
benefit corporation and be kind of bad. But, you know,
if you're going from being a nonprofit company to being
a for-profit company, it probably does make sense to
stop along the way at being a public benefit company, because
it, like, sounds better, sounds like less of a stark transition.
Speaker 1 (03:05):
It still has benefit.
Speaker 2 (03:07):
Yeah, it's still for the benefit of the public. You ask,
somewhat sarcastically, can you do good while also making money?
And I think it's really interesting that, like, the Silicon
Valley ethos is so strongly of the view that you
can. You always read VCs saying Elon Musk has done more for
(03:27):
humanity with his for-profit companies than, you know, any philanthropist.
And, you know, I think there's some truth to the
idea that, like, for-profit development of technology can be
really good for the world and also make the people
who do it really rich. And it was always at odds
with that ethos that OpenAI was a nonprofit,
(03:51):
and always very awkward, because, you know, it was, like,
founded by Silicon Valley, you know, founded by literally Elon
Musk and, like, some VC types. You know, Sam
Altman is, you know, kind of, he's like a,
you know, Y Combinator, like, a VC guy. And, you know,
it's staffed by tech people, and all of those people,
I think, to some degree in their previous lives bought
(04:13):
into, like, the Silicon Valley ethos of, like, the way
to improve society is to build scalable, low-marginal-cost
technology products that make a lot of money for their developers.
And there's this little blip of, like, at OpenAI,
it was like, no, we're going to do it a
different way, and then they changed their minds. So clearly
(04:35):
a lot of people at OpenAI think that they
can do good for humanity while also making a lot
of money. So the mission of OpenAI being, this
technology in particular cannot be commercialized, this technology in particular
has to be done by a nonprofit, which was apparently
the theory for a while, it's always a little weird, right?
(04:56):
Because, like, the product they've rolled out is sort of
this, like, consumer chatbot thing. It's on a continuum with
a lot of, like, you know, for-profit technology products.
I think there's, like, the overlay of OpenAI believing
that AI could enslave humanity, or render work obsolete, or,
like, usher in a new age of abundance where money
(05:16):
has no meaning. Yeah, if you really believe that your
product is going to create an age where money has
no meaning, then I suppose it makes sense to operate
it as a nonprofit. But, you know, that market...
Speaker 1 (05:32):
Well, to the point before, right, where, I mean, Altman saying
publicly, like, I'm so scared of what we could be
creating, is a great pitch for why you should give
me a lot of money. Like, what we're working on
is so incredible that I am scared, right.
Speaker 2 (05:47):
It makes it seem very important, and so it helps
them raise a lot of money. But eventually, you know,
those people who give them all the money want their
money back. Yeah.
Speaker 1 (05:55):
I don't think, like, all the investors pouring so much
money into OpenAI, which is raising right now at
a one hundred and fifty billion dollar valuation, are necessarily
investing to help humanity. They're investing with the goal of
making a lot of money, I would imagine.
Speaker 2 (06:12):
Yeah. And by the way, like, even before this, like,
supposed change, OpenAI has this for-profit subsidiary that
was already able to promise investors a big
return on their money. It was, like, a capped-profit
subsidiary, so you could only make one hundred times your...
Speaker 1 (06:25):
Money, up until the change.
Speaker 2 (06:28):
So what is changing is, I think, two things. One
is, like, right now the sort of notional corporate
structure is that on top is a nonprofit, and then
there's, like, the capped-profit subsidiary under it, and you
can invest in the capped-profit subsidiary and get profits.
But ultimately everything answers to the nonprofit board. The nonprofit
(06:48):
board has no fiduciary duties to investors. Their duty is
to humanity, to develop AI in the best way. And,
you know, what happened is the old nonprofit board thought
that Sam Altman wasn't doing that in some unspecified way,
and so they fired him, and then they all got
fired themselves. And so now there's a new nonprofit board
that is, like, a little bit more profit oriented. Yeah,
(07:10):
but it still technically doesn't answer to the investors.
And I think the restructuring would mean that the board
is now more directly responsible to investors. And then the
other thing that's happening in this change, besides, like, the
for-profit being the top company, is Sam Altman is
supposedly getting seven percent of the company.
Speaker 1 (07:30):
That's a lot of money, theoretically.
Speaker 2 (07:33):
I was thinking, if, like, Harvard University became a for-profit
company tomorrow, would they give the president ten billion dollars?
It's, like, a really interesting, you know... Like, typically founders
of hot tech companies get a lot of equity. Yeah,
because they retain a lot of equity, right? Like, they
(07:55):
start with one hundred percent of the company and then
they raise money by selling portions of the company. But,
like, they started with one hundred percent and they end
up with, you know, seven or ten or twenty or
fifty percent. Sam Altman started with zero percent, because it
was a nonprofit, and now they're retroactively being like, okay, fine,
it was a regular startup, so you can't have zero
percent equity, so we're going to give you seven percent.
And so because he built this company from zero to,
(08:17):
let's say, a one hundred and fifty billion dollar valuation, even
though he did that without any equity stake, they're retroactively saying, well,
you did that, so you're a founder, so you get
ten billion of that value you created.
Speaker 1 (08:29):
I'd like to see the equation that went into it. How did they land
at seven percent?
Speaker 2 (08:34):
Like?
Speaker 1 (08:34):
Did they?
Speaker 2 (08:34):
I'm very curious about the capital structure of this company generally, right?
Because, like, this company is, like, you know... If you,
if you just think of the for-profit subsidiary, right,
like, Microsoft owns quite a chunk of it, because they've
put a lot of money in at, like, lower valuations
than the current one. And, you know, there are some VC investors.
And then there are a lot of employees. Not Sam Altman,
(08:56):
but a lot of employees have gotten, you know, chunky
stock grants over the years. They're not called stock grants,
they're called something like performance something units, but they're...
Speaker 1 (09:04):
Stock grants, right, in everything but name.
Speaker 2 (09:06):
Yeah, they're stocks. And then the rest of the company
is owned by humanity, right? The rest of the company is
owned by the nonprofit, right? And so the nonprofit
now is basically allocating money away from humanity to
Sam Altman. And how did they come up with that number?
I don't know. I think that, like, it has to
(09:28):
be a lot, because, first of all, he's a billionaire
from all his other ventures. But it has to be
a lot, because if you're just recasting OpenAI as
a regular startup, then the founder CEO has to own
a lot of it, because he has to be incentivized
in the right way, right? I mean, one thing that's
happened in the OpenAI story in the last year
or two is that you're constantly reading about Sam Altman
(09:50):
doing, like, other weird stuff. You know, he's, like, getting
involved in, like, data center deals and, like, power deals
and, like, other startups that use OpenAI technology, and
they all feel like conflicts of interest. And one explanation
of that is, okay, he owns zero percent of OpenAI,
so if he can, like, make a little money on the
side by doing some sort of nuclear power deal, then
he'll do that. And so if you're an investor looking
(10:12):
to put money into OpenAI at a one hundred and
fifty billion dollar valuation, one thing you want is for
the CEO to be mostly working for OpenAI. Yeah,
and you can imagine a lot of ways to try
to do that. But, like, the standard Silicon Valley way
is, like, he has most of his net worth in
OpenAI stock, so you have to give him enough
stock that most of his net worth is in OpenAI stock.
And that's, like, you know, the common way of doing it.
Speaker 1 (10:33):
Sticking with the Elon Musk parallels, because you want the founder
CEO to be engaged and passionate and working on the company.
Elon Musk, when it comes to Tesla, I mean, that
was basically his argument: I want to focus on
the thing where I have the most at stake here,
and that's why I should own more of Tesla.
Speaker 2 (10:51):
Yeah, absolutely. I don't know who is asking for what here.
Like, you could imagine, I'm not saying this is true,
I don't think this is true, but you could imagine
Sam Altman being like, no, I'm good, I have two
billion dollars, I don't need more money from this company,
I'm in it for the benefit of humanity. You could
imagine the board or the investors saying, no, you need
to have aligned incentives. You need to own seven
(11:11):
percent of the company. Here, take ten billion dollars so
that we know you're working for us. You can imagine that.
Speaker 1 (11:16):
Fine, twist my arm, all right.
Speaker 2 (11:19):
Right, right, all right. And, like, by the way, Elon
Musk will ask for the money, but he will also say,
I don't want it for the money, I just want it,
you know, to make sure that I'm building it in
the right place. So, yeah, it's totally like Elon. It's
a little different, because Elon has a lot of very
different ideas, and, like, they have a lot of conflicts, but
you understand why they're different companies. Like, Sam Altman,
(11:41):
when he's, like, raising money for ventures that build on
top of OpenAI, it's like, well, why not do that
at OpenAI? It's a little bit more like it
looked like he was using his position at OpenAI
to make money on the side because he couldn't make
money at OpenAI. And now it's like, oh, I
can make a lot of money at OpenAI, so do
that instead. But yes, the parallels with Elon are interesting,
right? Because, as I said, you know, the notion that Elon
(12:02):
Musk has done a lot for humanity by running for-profit
companies, it's like, it validates this pivot to being a for-profit
company. On the other hand, Elon Musk is also
suing OpenAI for becoming a for-profit company.
Speaker 1 (12:14):
That's so true.
Speaker 2 (12:15):
So in this one circumstance, he does not believe
you should be benefiting humanity and also making a profit.
Speaker 1 (12:21):
It feels like a while since we've talked about Elon.
Speaker 2 (12:23):
I wonder, like, if the resolution to all of this
is, OpenAI is now a for-profit company, Microsoft
will own x percent of it, Sam Altman seven percent of
it, and humanity will own, you know, some single percentage...
Speaker 1 (12:35):
Of it. Well, give Elon a piece.
Speaker 2 (12:37):
We just give Elon, you know, an equity stake
for his, his contributions. Why not? Get Elon on side, then
you get Elon aligned. Yeah, that's good. Although
he's got his own AI company, so I guess he
can't do that.
Speaker 1 (12:47):
True. Was that xAI's chatbot? Shall we, shall we
talk about Bridget?
Speaker 2 (13:04):
Another AI transition.
Speaker 1 (13:05):
Bridget. Bridget, Bridgewise.
Speaker 2 (13:08):
So Bridgewise is this Israeli tech company, fintech company that
is launching a chatbot that will be your broker.
Speaker 3 (13:15):
Yeah.
Speaker 2 (13:19):
So it's this fascinating story, because you read about it
and you're like, okay, what does this chatbot do? Like,
gives you stock recommendations? Yeah? So, I mean, I think
it's like, you type, oh, can you tell me,
like, the top five stocks in, like, the industrial sector
that I should buy today, and it's like, yeah, well,
looking at the research... But, like, you can imagine just
being like, what stocks should I buy? And then Bridget
(13:40):
says, you should buy these five stocks, and you go
buy those five stocks, and then the next day you
come back and you're like, Bridget, they went down. What's
going on?
Speaker 1 (13:46):
Yeah. Like, what information do you...? What is the prompt?
Prompts matter so much with these chatbots. Like, if I
said, my name is Katie, I'm thirty-one, I love horses,
what would you tell me to buy? Versus if I
said, I'm Katie, I grew up in New Jersey. That's it.
Speaker 2 (14:02):
But those are bad prompts. The prompt that you want
is what stocks will go up the most?
Speaker 1 (14:07):
Well, yeah, because you know you don't care.
Speaker 2 (14:11):
But you're on the right track, because, like, this thing
is clearly a substitute for a broker, right? This is,
like, you call your broker, and your broker gets to
know you and learns about your risk tolerances and your
preferences and the themes that you're interested in. And then
your broker is like, well, given your risk tolerance and
your interest in the theme of horses, you should buy
the horse ETF or whatever, right? And that's true because
(14:32):
brokerage is a client service business, where the product is
making the client happy with the service. But you can
imagine a different model where the product is, like, buying
stocks that go up. Yeah, right. I think that we
all understand that when you call your broker and you
say, what stocks should I buy, they'll either be like, what are
your risk tolerances, you should be in this ETF because
there's a good balance of... Like, they'll say something
(14:53):
other than, like, I know which stocks will go up
and these are them and you should buy them. Because,
like, if they knew which stocks would go up, yeah, they
would be doing something else.
Speaker 1 (15:04):
Why can't I just ask ChatGPT what stocks to buy?
Speaker 2 (15:08):
You can. Apparently sometimes it'll tell you, I can't give you
stock recommendations.
Speaker 1 (15:11):
Oh, you can get around it.
Speaker 2 (15:13):
So ChatGPT is, like, a general-purpose chatbot, and
it can, like, read the internet and sort of say
what people will say. Like, it can make, like, the
best guess at, like, what people would tell you if
you asked what stocks will go up. Bridget is trained
a little bit more on, like, financial information, and so
it might have a better idea of what stocks will go up. Yeah,
maybe, like, a less good general-purpose chatbot, but a
(15:36):
good picker of stocks. But I think the point is,
like, it's so interesting reading... Like, you know, there's
a Bloomberg News story about it, and, like, you read
the story and it's like, it's been tested a lot
to make sure it gives accurate recommendations.
Speaker 1 (15:46):
Well, so it's been tested.
Speaker 2 (15:49):
I don't think it means it's been back tested. I
think it means it's, like, if it says to you, oh,
JP Morgan research says this stock has, like, x price target,
those quotes are accurate, right? It's quoting information to you.
But it's not, I don't think, back tested to pick stocks
that will go up. Because if it were back tested
to pick stocks that, like, go up, and it was successful,
(16:10):
it would not be a chatbot.
Speaker 1 (16:11):
For a brokerage firm. That's true, it...
Speaker 2 (16:13):
Would be Bridgewater.
Speaker 1 (16:14):
It would be Renaissance, yeah. Or, you know, it could
just be the Bridget hedge fund. Yeah, and they can
make a lot more money. They should back test it, though.
I want to back test Bridget. I'll ask ChatGPT
to back test it.
Speaker 2 (16:28):
It's like in beta, but like we'll certainly be testing Bridget.
Speaker 1 (16:31):
Yeah, this did remind me of a thing that you
wrote about already. But I was on vacation last week,
so we didn't talk about it on the pod. But
there is that chatbot ETF that just launched that's supposed
to mimic Buffett, Stanley Druckenmiller, David Tepper. And that's interesting.
I mean, it's, so it's basically...
Speaker 2 (16:53):
Yeah, by the way, I haven't seen their back tests.
Speaker 1 (16:55):
I would love to see their back tests as well.
But basically they have this investment committee that is built
on training the chatbot around investment ideas. It basically wants
them to mimic these guys. It wants them to
learn their personalities and then emulate them. And that's crazy.
But it's kind of similar to what we're talking about
with Bridget. I mean, Bridget is just, like, general financial information,
(17:18):
whereas this is like, this is what David Tepper would do.
Speaker 2 (17:20):
It's, you know, like, when you, like, want to,
like, change your airline flight, you, like, go to a chatbot,
and, like, the chatbot is, like, trying to replace the
human customer service agent. Like, someone must have trained it, and,
like, it's supposed to be mimicking the human customer
service agent, right? And, like, this, I think, is sort
of the same thing, right? It's, like, mimicking the human
(17:42):
broker who, like, has the relationship with you and can,
like, give you stock tips, but, like, not with, like,
one hundred percent reliability that it will pick stocks that
go up. Just, like, you know, serving the purpose of
a broker. The chatbot, the, like, LLM ETF, is
a little different. Like, they're trying to pick stocks that
go up. Like, you're not chatting with the chatbot, right?
Like, the person who runs that ETF is chatting with
(18:04):
the chatbot and asking what stocks will go up and
hoping that the chatbot gives them the right answer.
Speaker 1 (18:10):
That's so funny.
Speaker 3 (18:11):
I don't know.
Speaker 2 (18:11):
I'm just saying, the product with Bridget is the relationship with
the customer, right? Like, you want the customer to feel better,
to feel like they're getting the information they ask for,
and that includes, like, stock recommendations. The product for the
ETF is just the return stream of the ETF, right?
Like, the chatbot is all behind the scenes, and that
is putting a lot more emphasis on the ability of
(18:31):
the chatbot to actually pick stocks.
Speaker 1 (18:33):
Also, the goal of the ETF isn't necessarily to pick stocks that
will go up, because there's no guarantee that, like, the AI
Warren Buffett or Stanley Druckenmiller will actually correctly identify
those stocks.
Speaker 2 (18:45):
Okay, but two things. One, I think the ETF would
much rather the chatbot pick stocks that go up that
Warren Buffett would not have picked, than that it accurately
reflects Warren Buffett and picks stocks that go down. Right?
Like, they don't actually care about Warren...
Speaker 1 (18:59):
Buffett. You don't care about the tracking error.
Speaker 2 (19:01):
The tracking error is entirely theoretical, right? Because you can't,
like, find the portfolio. I mean, you can literally with
Berkshire Hathaway. But this is, like, the stocks that
Warren Buffett would pick if you asked him; there's no,
like, visible tracking error.
Speaker 3 (19:14):
Yeah.
Speaker 2 (19:14):
But then the other thing is, you're right that also
their goal is not really to pick stocks that, like, go up.
Their goal is really to attract investors, right? And, like,
it's a good shtick, right? It's a good pitch. Like,
we'll train the chatbot to be Warren Buffett and then
ask it what stocks to buy. Like, that's clever. I like that.
I'd kick in money to that whether or not it picks the stocks.
Speaker 1 (19:30):
I can't wait to... I mean, it already launched. We're
talking about the Intelligent Livermore ETF, but it's only been
a couple of days. The ticker, I believe, is
LIVR. I'm so excited to see the uptake
of this. I want to call David Tepper and ask
how he feels about it, you know. Like, again, reading
the description of this, the firm is going to instruct
(19:51):
the LLMs to emulate the investors' personalities. That makes my skin crawl.
Speaker 2 (19:56):
Tepper is not in the, like, prospectus. Like, the Bloomberg
article is like, yeah, David Tepper is on the list,
but, like, I don't think you can really advertise formally
in writing, we're gonna, like, train an AI to be
David Tepper. Like, David Tepper is out there, like, running
his own fund.
Speaker 1 (20:13):
Well, that's another, I mean, good point against investing in the ETF.
You could just invest with David Tepper if you, you know,
meet the criteria.
Speaker 2 (20:21):
Yeah, you can just buy Berkshire Hathaway.
Speaker 1 (20:23):
Yeah exactly.
Speaker 2 (20:24):
It's like, this is a clever shtick. Yeah, they're not
really meant to be David Tepper. By the way, the
ticker is LIVR or whatever. Like, they've filed
for several of them, and one of them, the ticker
is AIWB. And again, they can't put Warren
Buffett's name, I know, in the prospectus, but they
can say AIWB.
Speaker 3 (20:46):
I love it.
Speaker 1 (20:58):
Let's get to the good stuff about Bible washing.
Speaker 2 (21:01):
So I feel for the US Securities and Exchange Commission.
Speaker 1 (21:06):
You know, you're a sympathizer, I know.
Speaker 2 (21:09):
And they have a hard job. And, like, one reason
they have a hard job is, like, they're there to
protect investors from, like, bad ideas, but only kind of, right?
Like, you know, that's what they want to do, but
that's not actually their mandate, right? Their mandate is essentially
to enforce good disclosure. So, like, if you disclose a
really bad idea, then you can do that, even if
(21:30):
it's a bad idea, even if the SEC doesn't...
Speaker 1 (21:32):
Like it, as long as you're super upfront.
Speaker 2 (21:34):
Yeah, and, like, there are limits on that. Many of
them discovered in crypto, where, like, a lot of
people in crypto are like, this is a Ponzi scheme,
and it's just like, come on now. But so, like, one
thing that happens is that the SEC is very interested
in climate change, right? In environmental investing. There's a lot
of focus on ESG investing, environmental, social, and governance investing,
and there are a lot of people who think that
(21:56):
the firms that do that kind of investing are somehow
not doing it well. There's concerns about greenwashing, where you
say you are an environmental investing firm, but then you
own some coal stocks, and, like, people get mad about that,
and the SEC gets mad about it too. The SEC
can't say things like, if you say you're an ESG firm,
(22:19):
you can't own coal stocks, because, like, who's to say
what is ESG? Not the SEC.
Speaker 1 (22:23):
Also, maybe that coal company has really great governance.
Speaker 2 (22:26):
No, but it's true. Like, these are fuzzy criteria, right?
So you could truly be, like, the best ESG investor
in the world and own a coal stock. There's, like,
examples everywhere. Like, you know, oil companies with good governance,
oil companies that are improving their emissions so they're better
than, you know... Like, there's all sorts of ways to
be an ESG investor, and a lot of people have,
like, genuine disagreements about what to prioritize and how to
(22:47):
choose between, like, owning the best oil company to encourage
oil companies to be better environmental citizens, or owning no
oil companies because they're all too, you know... There's all
sorts of ways to do it, and the SEC doesn't
really substantively get to pick between them. But what it
does get to do is read your disclosures and make
sure they're accurate. And so it has brought cases against,
like, I think BNY Mellon had an ESG fund where
(23:11):
they're like, you know, we have these ESG screening criteria
and we apply these criteria to each company before we
make an investment. And they went through their, you know,
internal records and found that sometimes they made an investment
without getting a memo about the criteria, and they said, aha,
you're not actually doing your ESG investing. And so they
fined them, and they got in trouble. And, like, that's
the level of enforcement that the SEC can do, and
(23:34):
it's, I think, frustrating for them, because they probably have
substantive ideas about who's doing a good job or not. Anyway,
this week, a company called Inspire Investing paid a fine
to the SEC, because they advertised that they were applying
biblical principles to their investing, and they were, like, doing
a rigorous screening to root out companies that had practices
(23:56):
they didn't like. And the SEC discovered that they weren't
doing as rigorous a screening as they advertised, and so they...
Speaker 1 (24:01):
Fined them some money. Three hundred thousand dollars, to be exact.
I love this story. You called it Bible washing,
and I think that's the...
Speaker 2 (24:08):
It's, like, greenwashing, but for biblical investing.
Speaker 1 (24:11):
Yeah, I like this story, and I like all the,
I don't know, the ESG parallels.
Speaker 2 (24:16):
It's completely the same thing, right, I mean, it's this
sort of broad category of like people who are investing
for things other than financial returns. Right. They're telling investors,
we're doing your investing in some social way that you like.
A big chunk of that is ESG investing where people
care about climate change or whatever. But another chunk of
it is you know, conservative biblical principles investing where people
(24:36):
care about other things. And the SEC says, no matter
what you're doing, you have to disclose it accurately and
follow the procedures you say you're following.
Speaker 1 (24:45):
Yeah, let's just read from this excerpt from Money Stuff.
So Inspire had the Inspire Impact Score, which, quote, reflects
a rules-based, scientifically rigorous methodology of faith-based ESG analysis.
Which is interesting; there's some tension between science and religion.
But in any case, this creates a level of consistency
and reliability of results necessary for making well-informed, quantitatively sound,
(25:08):
biblically responsible investment decisions. And then the SEC checked the
donor lists and caught a bit of a contradiction. As you
lay out, certain companies that were excluded from Inspire's
investment universe for donating to certain advocacy organizations or sponsoring
certain events that Inspire considered to be prohibited activities... At
the same time, multiple companies held within the Inspire ETF
(25:29):
portfolios donated to organizations or sponsored events that were the
same or similar. So, a three hundred thousand dollar fine.
These ETFs are still out there. We could buy and
sell them today.
Speaker 2 (25:41):
And in fact, Inspire put out a press release about
this saying, we are grateful to receive guidance from the
SEC on what it considers important regarding modern faith-based
investment screening, and the SEC order takes no issue with
the conservative biblical values Inspire applies to screening categories. Like,
they're going to keep doing this. They're just gonna, yeah,
(26:01):
they are, you know... Like, what they advertise is, like,
all of our investments have no involvement in, you know,
gay rights organizations or abortion rights, and the SEC found that
some of them did. And so they're gonna, like, clean
up their act and be more accurate about screening out
companies that don't meet their criteria. But the SEC, as
they say, took no issue with their criteria. Right? If
you say you're going to exclude companies that contribute to
(26:24):
gay rights charities, then you just have to do that, and
then the SEC will bless it.
Speaker 1 (26:28):
I almost said, hell, yeah, they're going to keep doing it.
Speaker 3 (26:31):
Do you get it?
Speaker 1 (26:33):
So one of these ETFs, BIBL is the ticker, it's
this Inspire 100 ETF. The ETF still exists. I
took a look at the performance. It's doing okay; it's
up sixteen percent year to date on a total return basis.
SPY, for comparison purposes, is up twenty-one percent. Its
top holdings are Caterpillar, Intuitive Surgical, Progressive, et cetera. Its
(26:57):
top industry groups are REITs, software, and healthcare. All, you know,
holy stuff.
Speaker 2 (27:04):
My point here is that neither the SEC nor I
can make any comment on whether this is holy stuff
or not.
Speaker 1 (27:13):
So I do think the idea of principles-based investing
is really interesting, and that's what this is, that's what
ESG is. But why people do it is really interesting
to me. Like, are you investing along your values because
you don't want to invest in stuff that goes against
your values, sure, or are you investing because you think
(27:34):
that your values will produce better returns? That's a really
interesting school of thought. Like, if you believe that a
company is environmentally conscious and sound and will weather climate
change better than other companies, that could be a reason
that you invest. Do you invest in this fund because
you believe that these biblically responsible companies are going to
(27:58):
do better than the ones that aren't?
Speaker 2 (28:01):
I haven't read enough of the marketing literature to know,
but I think, I think in ESG, I think asset managers
benefit a lot by kind of blurring that and sort
of hinting that you're getting both, right? Like, you're getting
to live your values and you're also getting a higher return.
And I think, like, there's not really a contradiction between them,
(28:21):
because, like, you could easily believe that, like, you have
these values, your values are right, over time other people
will come around to your values, and when other people
come around to your values, that will increase the stock
prices of companies that share those values, right? Like, that's
a fairly consistent thing to believe, and you could believe
that about, you know, environmental or biblical things. I don't
(28:41):
think you have to analyze it rigorously to say, like,
the companies that I believe in are also the companies
that'll go up, because you could believe that, you know,
without either ESG or biblical principles. Yeah, you could be
like, I really like Taco Bell, I think Taco Bell
stock will go up.
Speaker 1 (28:57):
So we're talking about BIBL. They have amazing other tickers
as well. WWJD, what would Jesus do. They also
have the ticker G-L-R...
Speaker 2 (29:10):
Y. Walking away from...
Speaker 1 (29:11):
The mic. So, so, but yeah, no, the ticker game
is strong.
Speaker 2 (29:19):
And that was the Money Stuff Podcast.
Speaker 1 (29:20):
I'm Matt Levine, and I'm Katie Greifeld.
Speaker 2 (29:23):
You can find my work by subscribing to the Money
Stuff newsletter on Bloomberg dot com.
Speaker 1 (29:27):
And you can find me on Bloomberg TV every day
on Open Interest between nine and eleven a.m. Eastern.
Speaker 2 (29:33):
We'd love to hear from you. You can send an
email to Moneypod at Bloomberg dot net, ask us a
question and we might answer it on air.
Speaker 1 (29:40):
You can also Subscribe to our show wherever you're listening
right now, and leave us a review. It helps more
people find the show.
Speaker 2 (29:45):
The Money Stuff Podcast is produced by Anna Mazarakis and
Moses Onam. Our
Speaker 1 (29:49):
theme music was composed by Blake
Speaker 2 (29:51):
Maples. Brendan Francis Newnam is our
Speaker 1 (29:53):
executive producer, and Sage Bauman is Bloomberg's head of podcasts.
Speaker 2 (29:57):
Thanks for listening to The Money Stuff Podcast. We'll be
back next week with more stuff.