
April 4, 2025 41 mins

How do LLMs solve math problems? This week in the News Roundup, Oz and Karah explore what AI models could mean for the fashion industry, the humble-but-mighty device our modern world depends on, and what Anthropic’s researchers learned about the inner workings of their LLM. On TechSupport, The Washington Post’s technology reporter Gerrit De Vynck explains the state of the AI race and how some of tech’s biggest companies are vying for position.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to Tech Stuff, a production of iHeart Podcasts and Kaleidoscope.
I'm Oz Woloshyn, and today Karah Preiss and I will
bring you the headlines this week, including the fashion models
getting AI twins. Then on Tech Support, we'll talk to
Gerrit De Vynck, a reporter at The Washington Post, about
all the news in AI, from OpenAI's huge
new fundraise to xAI's acquisition of X, formerly Twitter, all

(00:25):
of that on the Week in

Speaker 2 (00:27):
Tech. It's Friday, April fourth, Karah Preiss.

Speaker 3 (00:37):
Oz Woloshyn, you're looking like me today.

Speaker 1 (00:40):
Yeah, we are matching.

Speaker 3 (00:43):
We are very matchie matchie. I was at.

Speaker 1 (00:46):
McLaren f one's headquarters as you know, earlier this week
and I sent you a picture that I was very
proud of me and Zach Brown, the CEO of McLaren Racing,
And your only comment was, oh, black trainers.

Speaker 3 (00:58):
Always black ASICS sneakers. You can sponsor us, but yes,
always black ASICS sneakers, even if he's wearing a tie.
But it's, you know, for a tech podcast, it's very
funny how much we talk about fashion. And it's not
because I'm a woman.

Speaker 1 (01:10):
I mean, it is also ironic that we both have
our own uniforms. We're wearing blue shirts, dark blue pants,
and a cool hat.

Speaker 3 (01:20):
Though I am wearing a cool hat, I'm always wearing well,
I wouldn't call it a cool hat, but if you
want to call it a cool hat, it's an appropriated
Yankees city by hat that is completely illegal and was
taken off the internet the minute it went up, So
I'm very proud to be wearing it now.

Speaker 1 (01:35):
The reason we're talking about this is not just because
we're totally self indulgent, but because the topic of tech
folks and their uniforms is pretty fascinating. Earlier this year,
we talked about Mark Zuckerberg, who had this famous, weirdly
shaped T shirt with the Latin phrase aut Zuck aut
nihil printed on it, which means either Zuck or
nothing, and is a reference to Julius Caesar.

(01:58):
Of course I can't.

Speaker 3 (02:00):
It's just insane to me, the level of it. It's not narcissism.
It's this sense of, like, an obsession with being cool that,
when chased, is the antithesis of anything cool and fashionable.

Speaker 1 (02:14):
I thought that one of the genuinely coolest tech swipes
of the year was when Jay Graber, who's the CEO
of Bluesky, the kind of left leaning social media platform,
wore a T shirt at south By Southwest that said
in Latin, no more Caesars, and it was a real
sort of shots fired at Zuck, I think, and.

Speaker 3 (02:35):
Also just again in Latin. I mean, these are the
nerdiest people in human history.

Speaker 1 (02:39):
This was a huge best seller for Bluesky. I
saw that almost immediately.

Speaker 3 (02:43):
I bet it was, you know, it's tech Titans, as
I was saying, just seemed to want to have a
signature look. And I don't know if part of it
is pathology, psychological pathology. I mean, certainly mine is psychological pathology.
I like to wear the same thing every day. I
don't have to think about it. Like our friend Elizabeth Holmes,
who also rocked the black turtleneck.

Speaker 1 (03:04):
She flexed hard on the Steve Jobs look.

Speaker 3 (03:06):
She flexed and copped. She copped something from Steve Jobs.
And I guess it's sort of worked for her because
we're still talking about it.

Speaker 1 (03:12):
But I think the most sort of talked about tech
CEO of the moment is none other than Jensen Huang
of Nvidia.

Speaker 3 (03:18):
The man who made a lot of people a lot
of money and now is making a lot of people
a little less money.

Speaker 1 (03:22):
And still still a lot, but little less exactly. He
has a signature. Look, can you describe it.

Speaker 3 (03:29):
It's sort of a Marlon Brando, sort of on the
waterfront black leather jacket that is I think Tom Ford.

Speaker 1 (03:36):
I think it is Tom Ford, which is like
ten thousand dollars.

Speaker 3 (03:39):
That's an extremely expensive jacket.

Speaker 1 (03:41):
But there are a lot of Nvidia stans, it turns out,
and if you build it, they will come to buy
a fake black leather jacket.

Speaker 3 (03:48):
If you build it, they will come and buy it
a lot cheaper exactly.

Speaker 1 (03:51):
So our friends at 404 Media reported that
there are all these websites popping up with knockoff Jensen
Huang Nvidia jackets, which have creative titles like
Jensen Huang Nvidia Jacket, totally SEO optimized.

Speaker 3 (04:06):
People, They're going to find my phone and be like
that was the last thing that she was looking at.

Speaker 1 (04:10):
But I'm not sure if you saw it,
but this week Jensen Huang went to visit a company
called X1 Robots, and they had one of their
humanoid robots come up to Jensen Huang and present
him with a new black leather jacket, not a Tom
Ford one. This one was bedazzled with the Nvidia stock
ticker on the front left pocket and the logo in

(04:32):
kind of Swarovski style crystals on.

Speaker 3 (04:33):
The back to imagine if the humanoid robot is also sentient,
if the robot was like, I don't want to do this,
like the people programming mean, please give me something better
to do.

Speaker 1 (04:44):
But there's a more serious story about fashion AI here,
and it comes from The Guardian. So according to their story,
the clothing retailer H&M announced that they're going
to create so called AI twins of thirty models, which
they're going to use for social media marketing posts.

Speaker 3 (04:57):
By the way, I wish I had this for the podcast.
As much as I enjoy doing it with you, I wish.

Speaker 1 (05:02):
You could just have an AI replica. Would
you rather use your replica or my replica?

Speaker 3 (05:09):
Well, if we wanted me to be on time, I
would use your replica. But sorry, keep going, because I
do love the story.

Speaker 1 (05:14):
No. So these models, the real models, have given their
permission to H&M to use their likeness with AI.
You know, again, as you said, sounds good, you know,
why schlep yourself when you can send your digital twin?

Speaker 3 (05:26):
Well that and also I think there's something that's very
interesting here that reminds me of the Atlantic sort of
selling its data, which is these are people that, like
in the film industry, are cannibalizing their own job.

Speaker 1 (05:39):
Well, of course, just like with the Atlantic and OpenAI,
where OpenAI actually compensates the Atlantic for using its archive,
these models are also being compensated for their image. They
own the rights to their twins, and their twins can
work for other brands, not just H&M, and
they'll get paid. One of the thirty models said, quote,

(05:59):
she's like me without the jet lag.

Speaker 3 (06:04):
So the other thing is, and it's worth saying, the
images in the Business of Fashion write-up look amazingly similar.

Speaker 1 (06:10):
Like it's not distinguishable.

Speaker 3 (06:11):
Yeah, it's not like, who was that person that we
used to cover on Sleepwalkers? She was a model, Lil Miquela.
Oh yeah, like Lil Miquela looked like AI. I think,
you know, after four years, we have now gotten to
a place where it's just an AI replica, which is incredible.

Speaker 1 (06:27):
Yeah, and there's no sixth finger. As you pointed out,
it does raise concerns about the future of modeling in
the fashion industry, because although these models who are participating
in the partnership will get paid every time their digital
twin is used, you know, just like in the film industry, it
also raises questions about what happens to all the people
who work on sets. I mean the hairstylists, the
makeup artists, the lighting designers.

Speaker 3 (06:47):
Yeah, you know, I was actually talking to our producer Eliza.
Shout out to Eliza, I like to keep her in
the mix. And she actually knows a hairstylist who works
with a company that's testing out replacing many of their
e commerce models with AI, and you know, work has
slowed way down for her.

Speaker 1 (07:04):
So this is happening.

Speaker 3 (07:05):
I mean, think about it. You don't need to be
doing a blowout on a digital avatar.

Speaker 1 (07:09):
Yeah, and I mean even for models themselves, I
guess if you're, you know, one of these thirty who have
gotten first through the door, it's one thing. But this
may affect the kind of future job landscape for models
as well as people who work around modeling.

Speaker 3 (07:22):
But this is like, this is Zoolander 3, where
Derek realizes we're in jeopardy. But in all seriousness, you know,
what what does it look like for models who are
no longer in demand? And it might not seem important
to like a lay person, but I do think it's
a harbinger of things to come, which is like, if
you can replace something that is so sort of ubiquitous

(07:45):
for you know, a century, what does that look like?

Speaker 1 (07:49):
Yeah, I mean, what I find most interesting about these
like AI digital twins or AI actors or whatever, or
you know, chatbots of famous people from history is their
effect is kind of to lock in the very few
most famous people in the world as the only characters
who are worth interacting with. I mean, you know, if
you think about an action movie, why would you not

(08:10):
make it with Tom Cruise? Or talking about doing a
fashion shoot, why would you not make it with Elle Macpherson?
So kind of, I think there's a longer term kind
of chilling effect on the pipeline of new talent in
creative industries, which, you know, will be pretty interesting, you know,
and somewhat disturbing. To be fair to H&M,
they're being very upfront about their use of AI. They're
going to watermark the images of the AI twins in
their ads so people will know they're the AI versions,

(08:33):
And by doing this, the company will also be abiding
by the EU's AI Act, which is coming into effect
in twenty twenty six, and it will require all AI
images to be labeled as AI images.

Speaker 3 (08:44):
To which I say, who's looking for that? Like, you
know what I mean? Like the role of AI and
creative industries, which you know, is something I'm obsessed with
and sort of how to regulate it is something that's
obviously going to keep coming up, and I'm curious to
see where it goes next. I wonder to what extent
the lay person cares whether, where they're buying a shirt,
it's either modeling that shirt with a fake twin or

(09:06):
the real person.

Speaker 1 (09:07):
Probably not, probably not. The next headline comes from the
stuff of nightmares for anyone who's a frequent traveler,
and it has to do
with an airport being shut down for twenty four hours.

Speaker 3 (09:18):
I heard it was Heathrow.

Speaker 1 (09:19):
It was London Heathrow Airport, where I flew out
of just this week. It shut down, leading to over
a thousand cancelled flights, after a fire caused a power outage,
and Bloomberg reported that the outage could be traced back
to a single point of failure: a burned transformer and
twenty five thousand liters of transformer cooling oil that was

(09:39):
ablaze for several hours. It's a fascinating story, and
Bloomberg had such a great headline, which was the device
throttling the world's electrified future. But Karah, do you know
what a transformer is?

Speaker 3 (09:52):
I know what the movie Transformers is. I know it's
a car that turns into a man.

Speaker 1 (09:56):
So most simply put, a transformer is something that changes
voltage. So when you create electricity in a power plant,
you actually want to increase the voltage as much as
you can because that allows it to travel far and
fast with less loss of electricity along the way. So
you use a transformer on the way out of the power plant.
But then when it gets to your home or the

(10:17):
local electric grid or whatever it may be, you actually
want to use a transformer to turn the voltage back
down because otherwise it blows up your stuff.
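To put rough numbers on that: for a fixed power delivered over a line of fixed resistance, the current is power divided by voltage, and the resistive loss is current squared times resistance, so stepping the voltage up tenfold cuts the loss a hundredfold. A back-of-the-envelope sketch with illustrative numbers (not Heathrow's actual figures):

```python
# Why grids step voltage up for transmission: for a fixed power P
# delivered over a line of resistance R, the current is I = P / V,
# and the resistive loss is I^2 * R -- so raising V slashes the loss.
# All numbers below are illustrative, not real grid figures.

def line_loss_watts(power_w, volts, resistance_ohms):
    """Resistive loss on a transmission line (simplified DC model)."""
    current = power_w / volts               # I = P / V
    return current ** 2 * resistance_ohms   # P_loss = I^2 * R

P = 1_000_000.0   # deliver 1 MW
R = 10.0          # a 10-ohm line

for v in (10_000.0, 100_000.0, 400_000.0):
    loss = line_loss_watts(P, v, R)
    print(f"{v / 1000:>5.0f} kV: loss = {loss / 1000:.3f} kW ({100 * loss / P:.4f}% of P)")
```

At 10 kV this toy line wastes 100 kW of the megawatt; at 100 kV, just 1 kW, which is why the voltage goes up at the plant and back down near your home.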

Speaker 3 (10:24):
So ostensibly, it's sort of like when I'm staying at
not the greatest hotel and I turn a blow dryer
on and all of a sudden, the entire room short
circuits and all of the electricity goes off because the
voltage is too high.

Speaker 1 (10:39):
Yeah, exactly. I mean, that is a short circuit, where there's
likely been a problem with the transformer being bypassed or
not functioning correctly.

Speaker 3 (10:46):
Right. And I think that in a storm or a
natural disaster, they can sometimes explode and it's very loud.

Speaker 1 (10:51):
Yeah, it sounds like fireworks or a bomb going off.
And I mean, during hurricanes and other kinds of natural disasters,
these things come under a lot of pressure, and sometimes they
do go out. And so what happened at Heathrow
Airport was this fire broke out in a substation, which
houses transformers, and it took the firefighters seven hours to
get it under control. The airport, in order to come
back online, was able to accept power from other substations,

(11:13):
but even in the time it took them to do that,
many, many flights were canceled. It was chaos.

Speaker 3 (11:19):
You made it, but it was not a pretty scene.

Speaker 1 (11:22):
Yeah. Back in twenty thirteen, there was actually a
sniper attack on a substation in California which caused a
fire that burned seventeen transformers and almost knocked out all
of the power to Silicon Valley. This actually led
to a lot more security around substations and even stockpiling
of transformers, which of course sparked another problem, a shortage

(11:43):
of transformers. And obviously, with the supply chain
issues recently, the lead time for delivering a new large
transformer is now about three to five years for a
single transformer, and bear in mind these can be huge. At
the scale of the transformer that was at Heathrow, it takes a
long time to replace, and they've gotten much more expensive.

(12:03):
So you know, now we're living in the era of
EVs and the AI boom powered by data centers, and
transformers are also required to bring renewable power onto the grid.
And so, you know, one of the interesting implications of
this story, as Imaza, who's been on the show, recently told
me, is that actually the US's struggles to onboard new power
to the grid may be the reason why the US ultimately

(12:24):
falls behind China in AI.

Speaker 3 (12:26):
So it's ironic actually that the increase in transformer prices
could be further increased by President Trump's on again, off
again relationship with imposing tariffs on Canada and Mexico, which
is where we import a lot of our large transformers from.

Speaker 1 (12:39):
And that's why I love this Bloomberg story, because it's
fun to think about all the sexy stuff like new chips
and data center construction, you know, but there's still this
one hundred year old technology that hasn't changed and that
has to be imported and which is absolutely critical for infrastructure,
digital and otherwise, you know.

Speaker 3 (12:55):
Speaking of the AI boom and things that use a lot,
a lot, a lot of energy. This next story that
I want to tell you is for those people who've
been sitting here thinking about LLMs and not really understanding
how they work. I have news for you. Much like
Trump's decisions on tariffs, nobody knows how they work. And Anthropic,

(13:17):
which is the same company that makes the AI model Claude,
has been trying to figure out how the hell these things work.

Speaker 1 (13:22):
This is to solve the so called black box.

Speaker 3 (13:24):
The black box problem. So Anthropic, which is the same
company that makes the AI model Claude, has been trying
to figure out sort of what's under the hood, and
they recently released two reports on how LLMs do things
like complete sentences, solve math problems, and suppress hallucinations. And
they used a technique called circuit tracing, which let them

(13:45):
track an LLM's decision making process for ten different tasks
by working back from the solution to the query.

Speaker 1 (13:53):
Huh okay, that makes sense. But I just think more broadly,
as more and more decisions are taken for us by
AI or outcomes kind of determined by AI, it's kind
of remarkable that this huge elephant is still in the
corner of the room, which is we can't understand how
they make their decisions.

Speaker 3 (14:10):
But it also makes you think of, like, I don't
really know. I mean, I know how a car works,
but I'm not an engineer, and yet increasingly,
I mean, you know, since whenever, the nineteen twenties, people
have used cars more and more to get around, and
we're just kind of like, well, this thing's gonna work
until it blows up, you know. That's how I feel, whatever.
But it's important to note that LLMs, like Claude, which

(14:33):
was the focus of this study, are trained, which I
thought was really interesting. They're trained, not programmed, on
a bunch of data. They create their own rules based
on the data they ingest. But up until now we
haven't been able to see into the models to know
what those rules actually are, let alone how the models
generate them.

Speaker 1 (14:53):
Yeah, and I think the work that Anthropic is
doing is all about basically understanding decision making. They're
not yet at the stage of being able to understand
how the models generate their own rules. But I think
this story is all about how it's starting to become
easier, or possible, to basically reverse-engineer how a model
has made a decision.

Speaker 3 (15:13):
Yes, and it's not that simple. I think the researchers
in this particular case were inspired by brain scan techniques
that are used in neuroscience. They found that LLMs, which,
again, I'm so interested in how we anthropomorphize LLMs, but
they found that LLMs store different constellations of knowledge in
different parts of their model. So, for example, the concept

(15:35):
of smallness, or the idea of a rabbit. Anthropic was
actually able to identify and then turn off certain parts
of the model, so like the idea of rabbit, and tune
it down so that it couldn't be part of a
query result.

Speaker 1 (15:50):
And for example, like, what eats carrots? It would be like.

Speaker 3 (15:54):
People, right, a dog, right, exactly. And so the same
query would have a different answer if the rabbit part
of the model was dialed up or down.

Speaker 1 (16:03):
By contrast, you might say, you know,
what's a mammal? It always answers rabbit if you dial
it up, and if you dial it down,
it would never answer rabbit.

Speaker 4 (16:11):
Correct.

Speaker 3 (16:11):
So I mean, and it's similar. I mean it's similar
in human beings, which is like I don't drink anymore,
so I go to a bar and I'm like water.
It's a similar kind of thing.
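That "dial it up, dial it down" idea can be sketched in a few lines. This is a toy, loosely in the spirit of feature steering; the hidden vector, the "rabbit" axis, and the gain values are all invented for illustration and are not Anthropic's actual circuit-tracing code:

```python
import math

# Toy sketch of dialing a concept up or down. Pretend a model's hidden
# state is a plain vector and the concept "rabbit" lives along one
# direction of that space. Rescaling the state's component along that
# direction suppresses or amplifies the concept's influence downstream.
# Everything here is invented for illustration.

def steer(state, direction, gain):
    """Rescale the component of `state` along `direction` by `gain`.
    gain=0 ablates the concept entirely; gain>1 amplifies it."""
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    # Scalar projection of the state onto the concept direction.
    comp = sum(s * u for s, u in zip(state, unit))
    # Remove the old component and add it back scaled by `gain`.
    return [s + (gain - 1.0) * comp * u for s, u in zip(state, unit)]

hidden = [0.2, -1.1, 0.5, 0.9, -0.3]   # stand-in hidden activation
rabbit = [0.0, 0.0, 0.0, 1.0, 0.0]     # pretend axis 3 encodes "rabbit"

print(steer(hidden, rabbit, 0.0))  # axis 3 zeroed: "rabbit" dialed off
print(steer(hidden, rabbit, 3.0))  # axis 3 tripled: "rabbit" dialed up
```

With the gain at zero the "rabbit" component vanishes from the state; with a large gain it dominates, which is the vector-space version of "it would never answer rabbit" versus "it always answers rabbit."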

Speaker 1 (16:21):
I mean, that actually clarifies it for me. We've
done a bunch of coverage of AI
and spoken to Geoffrey Hinton and others, but like that
idea of a neural network is to me really clarified
by this study.

Speaker 3 (16:32):
Well, and how could it not be. It's something that
we've created, you know what I mean, it's going to
reflect a sort of human centric way of working. I
think this story actually came from the MIT Technology Review
with the headline, Anthropic can now track the bizarre inner
workings of a large language model. To which I said, wow. Now,

(16:52):
what really blew my mind, though, is the way that
this thing solves math problems, because it's not the way
that humans do math.

Speaker 1 (16:58):
Okay, tell me more.

Speaker 3 (16:59):
So, researchers asked it to solve the equation thirty six
plus fifty nine, and while Claude came up with the
right answer, Claude took a very
circuitous route to get that answer. Anthropic found that Claude
used multiple computation paths in parallel to get its final answer,

(17:22):
unlike you, who just used your brain. So one path
added a bunch of numbers. I love this, this is
like nerd alert, close to thirty six and fifty nine
to approximate the total, like thirty five and sixty, while
another path focused on determining the last digit of the sum,
so it actually added the last digits of thirty

(17:43):
six and fifty nine, six and nine, to know that
the answer had to end in five. So
Claude used these two paths to come up with the
correct answer, which is, as you said.

Speaker 1 (17:54):
Ninety five. You know what I find particularly fascinating about this:
I always struggled with math. My dad,
on the other hand, is a crazy math nerd. He
was like the under thirteen chess champion of Britain, etc.
You know, he said to me that the most important
thing about doing math well is to approximate
the answer before you work it out. He basically told

(18:16):
me to do exactly what this model does: basically break
it down to a much simpler calculation, and then when you
do the actual work, you will know whether or not
you're in the range.
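The two parallel paths described above can be sketched in ordinary Python. This is a toy tailored to the 36 + 59 example, not Anthropic's actual mechanism: one function plays the coarse magnitude path, another the last-digit path, and intersecting them pins down the exact answer:

```python
# Toy illustration of the two parallel paths described for 36 + 59:
# one path only estimates the rough magnitude, the other only computes
# the last digit, and intersecting the two yields the exact sum.
# Invented for illustration; not Anthropic's actual circuit.

def rough_path(a, b):
    """Coarse magnitude: round one operand to the nearest ten,
    giving a band of ten candidate sums (here 91..100)."""
    estimate = a + round(b / 10) * 10   # 36 + 60 = 96
    return range(estimate - 5, estimate + 5)

def last_digit_path(a, b):
    """Precise ones digit: (6 + 9) % 10 = 5."""
    return (a % 10 + b % 10) % 10

def combine(a, b):
    digit = last_digit_path(a, b)
    # Ten consecutive integers contain exactly one number
    # ending in any given digit.
    return next(n for n in rough_path(a, b) if n % 10 == digit)

print(combine(36, 59))  # -> 95
```

The rough band (91 through 100) contains exactly one number ending in 5, so two individually imprecise paths together force the exact answer, 95.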

Speaker 4 (18:25):
Right.

Speaker 3 (18:25):
Well, Claude does math like your dad. When asked by
a user how it got the answer ninety five, it
claims to do it by the book, for example, simple addition,
carrying the one. And one of the explanations Anthropic posited.

Speaker 1 (18:38):
Posited for the fact that, when asked how it got
to the answer, it shared a response that was not,
in fact, how it got to the answer.

Speaker 3 (18:45):
Right, because Claude's written answer, and I quote, may reflect
the fact that the model learns to explain math by
simulating explanations written by people. So when asked to do
math without being taught how to do it, it may
develop its own internal strategies to do so. Remember, it's
trained, not programmed.

Speaker 1 (19:12):
We're going to take a quick break, but stick around
and we'll be back with this week's tech support, all
about AI competition among tech giants. For our next segment,

(19:34):
we're going to be talking about all things AI. Surprise,
surprise, on Tech Stuff. But in all seriousness,
there's just so much happening in AI all the time,
I find it hard to keep up with, even though it's
our job.

Speaker 3 (19:46):
I know. I literally work on a podcast about technology,
and half the time other people are telling me what's
going on in the world of technology, because
I've become kind of like a sieve for technology news
from my friends.

Speaker 1 (19:58):
Absolutely, So this week we're going to talk to somebody
who can help navigate a whirlwind of headlines. And you've
got the big OpenAI fundraise, you've got Elon integrating
X into xAI, and then you've got the struggles at
Apple with.

Speaker 2 (20:13):
Their AI products or lack thereof.

Speaker 1 (20:17):
Here to help us decode the current AI landscape is
Gerrit De Vynck, who's a technology reporter for The Washington Post. Gerrit,
welcome to Tech Stuff.

Speaker 4 (20:24):
Happy to be here.

Speaker 1 (20:25):
Let's start with a big one. The volume of news
around AI today feels quite overwhelming. I mean, also, it
comes from so many different companies: OpenAI, Google, xAI, Microsoft, Amazon, Apple.
The list goes on and on. How do you keep
up with all of this and also figure out how
to sort the signal from the noise.

Speaker 5 (20:44):
Yeah, I mean, I think this is kind of a
key question. I'm sure a lot of people are asking themselves,
and I think first thing is, just take a deep breath,
calm down. AI is not coming for your job. AI
is not going to take over the world tomorrow, even
if really smart people or powerful people or rich companies
are saying that. Essentially, what we're seeing here is the

(21:05):
tech industry is this huge conglomeration of a bunch of
powerful people and billions and billions of dollars that are
always looking for the next thing, always looking for the
next way to make money. And they look back at
tech trends over the years, the Internet, cloud computing, moving
to mobile phones, and the tech industry has sort of
collectively decided that AI is the next one of those stages, right,

(21:30):
And so when the mobile phone came out, a bunch
of new people were able to make money.

Speaker 1 (21:34):
Right.

Speaker 4 (21:34):
We didn't have Uber, we didn't have.

Speaker 5 (21:36):
Those kinds of mobile first, DoorDash, those kinds of
companies before the mobile phone came out, and people made
huge amounts of money during that tech transition. And so
what's happening now is the tech industry believes and is
convincing themselves and trying to convince all of us that
AI is that next step, and so they're pouring money
into it, they're pouring marketing dollars into it, but at

(21:58):
the same time, they're still trying to build the plane
as it's taking off. And so that's why you might
see a lot of products that maybe.

Speaker 4 (22:06):
Don't work very well.

Speaker 5 (22:07):
You don't know really how they fit into your life,
and you're not sure whether you should be paying for
them yet. And so I think the first thing is
to say, you know, don't worry. You're not missing the
boat if you're not using a million AI apps right now.
And so, yes, this is happening. There's a lot of
hype and interest here, but it's not as if AI
is just sort of gonna change everything immediately.

Speaker 1 (22:26):
And one of the things that Karah and I sometimes
talk about is, like, is it speeding up or is
it slowing down? Because, like, November, December, all the headlines
were it's slowing down, ChatGPT-5 is not coming, and now
it feels like we're in a big speeding up moment. Again,
is it even a relevant question? And where do you
fall on it?

Speaker 4 (22:42):
Yeah?

Speaker 5 (22:42):
I mean I think it's a great question, and I
think that analysis is correct, right. I mean, everyone is
trying to either build something up or tear it down.

Speaker 4 (22:49):
That's how these things work.

Speaker 5 (22:50):
And so the reason we had this huge boost in
interest was because ChatGPT came out. It was definitely
better than what most people had used before and
been able to experience directly, and they were able to
put it in a format that regular people could actually
understand and have a conversation with it. And so that's
sort of what fired the starting gun. And then they said, okay,

(23:12):
how do we make that better? And then the techniques
they were using, which was essentially to use way more data,
to just shove more data into these AI models and
hope that they get smarter. That had been working up
to a certain point, and then they kind of ran
out of data and that method slowed down, and so
they've now pivoted to using different techniques where they're actually

(23:32):
spending a lot more time training the model to.

Speaker 4 (23:35):
Do different things.

Speaker 5 (23:36):
They're sort of doing more coding so that it can
be a bit more efficient and strategic, and now they're
seeing a boost in capability.

Speaker 4 (23:44):
From that technique.

Speaker 5 (23:45):
And if people remember the Chinese AI model called DeepSeek.
They really had a huge breakthrough where they were able
to use a little bit less data and less computing
power to come up with a model that was really
quite capable. And so now everyone is saying, oh, we're
speeding up again because we found new ways of increasing
the capabilities.

Speaker 3 (24:03):
Which is I mean, ultimately a more long term effective
way to get these things to work than having to
rely on humongous data sets that might not be replenishable.

Speaker 4 (24:14):
Yeah, yeah, absolutely.

Speaker 5 (24:16):
And I mean the other aspect here is that AI
is very compute intensive, which is essentially just a way
of saying they need a lot of computers and a
lot of computer chips in order to do AI in
the first place. Takes a lot of energy, and so
there's a lot of potential environmental concerns. Even here in
the United States, coal power plants that were slated to
be shut down have actually been ramped back up in

(24:39):
order to serve all those AI data centers. So there
is a lot of interest and pressure in making AI
more efficient so that it's cheaper and more environmentally friendly.

Speaker 3 (24:48):
One of the biggest names in the industry is OpenAI,
and for anyone who is living under a rock and
wasn't on social media this week. Why is everybody talking
so much about Studio Ghibli.

Speaker 5 (25:01):
Yeah, so, OpenAI, they released a new image generator, right.

Speaker 4 (25:06):
So one of the things that people have been able.

Speaker 5 (25:08):
To use AI for over the last couple of years
is you make a short description and it spits out
an image, and you know, it's been getting better and
better over the months, and essentially they released a big
update to theirs, and people realized that they could use photos,
upload photos, and get the model to recreate that image
in sort of the same design styles as iconic animators

(25:31):
like you mentioned, Studio Ghibli, or even like the movie
Wallace and Gromit or the Muppets, and.

Speaker 3 (25:37):
I like the Lego Family, the Lego Family or exactly.

Speaker 5 (25:41):
It was this moment where the technology allowed people to
apply their creativity.

Speaker 4 (25:45):
In a new way, and that's why it went viral.

Speaker 5 (25:48):
And OpenAI, I think they did hope that this
would happen. They themselves were using some of these Studio
Ghibli examples early on. But also they can't really predict
or control how this is going to go, and some
of their releases have just kind of fallen flat. Everyone's
been like boring we don't care, but.

Speaker 4 (26:06):
This one a lot of people.

Speaker 5 (26:07):
They found it really fun, they found it really interesting,
and it definitely went viral, and it actually brought a
lot more people who.

Speaker 4 (26:13):
Hadn't been using ChatGPT before.

Speaker 5 (26:16):
But now, of course this raises all sorts of questions
about art and copyright and big problems like that.

Speaker 3 (26:23):
What's interesting, though, is while they're trying to grow very quickly,
we have someone like Sam Altman come out and say,
please slow down with this image generation.

Speaker 1 (26:35):
He said that the GPUs are being melted by the demand, right.

Speaker 5 (26:38):
Yeah, I mean it's possible some GPUs actually did melt
a little bit. I mean, you know, when you're using
your laptop, you've got two hundred Chrome tabs open and
you're trying to listen to YouTube, it gets hot, right,
and so that's exactly what happens.

Speaker 3 (26:50):
My phone does get quite hot.

Speaker 5 (26:52):
Yeah, you know that same thing happens with AI, right.
I mean, so this is still a very physical thing.
Every single time everyone in the world says, hey, make
an image of this. Hey write me a resume for this,
Hey answer my test questions for me. That needs to
go to a data center, It needs to be computed
on these computer chips on GPUs, which is the technical

(27:12):
term for the computer chips, and that heats them up
and so I don't know if they were actually melting,
But essentially what Sam Altman was referring to there is
just that so many people wanted to use this that
it was becoming very expensive for open ai to run it.
And this kind of gets to a central problem for
them because the more people use it, the more it
costs them in computer chip costs, and so they want

(27:34):
people to use it, but at the same time they
need to figure out, you know, how can we convince
people force people to pay for these things so that
we can actually grow as a business. And this is
a huge question mark around open ai and other AI companies.

Speaker 1 (27:48):
As fun as it is making a Studio Ghibli portrait of yourself,
it's hard to see it being a huge, huge business
driver, people buying tokens to do that.

Speaker 5 (27:56):
That's sort of the question for OpenAI, right? I mean,
they've been able to have these viral moments. ChatGPT
itself was a viral moment, and I do think people
are using these technologies. I mean, in some ways OpenAI's
ChatGPT is one of the fastest-growing, if
not the fastest-growing, consumer Internet products ever. But we're
not quite at that point yet where I think regular

(28:18):
people are saying, oh, like, this is so important to
my life, I need it so badly to write emails
or to have fun like generating these images that I'm
willing to pay two hundred dollars a month for it.
I mean, I do think that the company is still
in this world where they're trying to figure out how
do we convince people that they need this so badly

(28:39):
that they're willing to spend hundreds of dollars a year
on it.

Speaker 1 (28:42):
So you may be somewhat skeptical, Gerrit, but the market
is not, or at least SoftBank is not. Can you
talk a bit about that?

Speaker 5 (28:49):
Yeah, I mean OpenAI raised forty billion dollars
at a three hundred billion dollar valuation, and I mean it's.

Speaker 1 (28:57):
The largest ever private financing of a US company that.

Speaker 3 (29:01):
A company that wasn't even a private company until a little while ago.

Speaker 5 (29:03):
Yeah. And the only private company that is actually worth
more than three hundred billion dollars is SpaceX, Elon
Musk's space company. But they build big physical rockets
that cost hundreds of millions of dollars, so that company's
worth three hundred fifty billion. OpenAI is now, according
to its investors, worth three hundred billion dollars, right. And
so I think this mostly goes to that

(29:26):
question about how expensive it is to do AI, right,
and so they need that money in order to buy
data centers, buy computer chips, to keep helping people make
these Studio Ghibli images and all the other things
that OpenAI is working on.

Speaker 4 (29:41):
And so it's.

Speaker 1 (29:41):
Just the uber model where you basically subsidized users and
then once you get them hooked, you start to charge them,
but at huge scale.

Speaker 4 (29:48):
Yeah.

Speaker 5 (29:48):
I mean, that's a playbook that tech companies have used
for years now, right: get people hooked on something
that is fun, cheap, easy, free, work it into their lives
so that they feel like they need it every single day,
and then start to increase the costs. I mean, Google
has done this. I'm paying for my Gmail storage. I
don't know if you guys are.

Speaker 3 (30:05):
I mean, yes, it's a question of when something
goes from the gimmick phase to the business phase,
at least in terms of ChatGPT.

Speaker 5 (30:16):
Yeah. And I think the other thing to point out
here is that OpenAI does have a business where
they sell access to their AI to other businesses, right.
And so there's the consumer question, which is exactly what
we're talking about. Then they are actually selling to businesses
who want to put AI technology into their own apps
and into their own technology, and that's also a big

(30:38):
part of what OpenAI is trying to do here.
But that's also, you know, a big question mark, because
we have these open source AI models as well. DeepSeek,
the Chinese one we mentioned earlier, is an example of that.
Facebook also provides these tools where they just essentially put
the AI out there for free for other businesses to
take and use in their own ways. And so OpenAI
is a very strong business. They have incredible technology,
they have some of the smartest people in the world
on AI, and they have huge funding and backing.

Speaker 4 (31:07):
From their investors.

Speaker 5 (31:08):
But at the same time, that doesn't guarantee that they're
going to continue to grow or even be around in
five years.

Speaker 1 (31:21):
When we come back, we'll hear about how other big
tech companies like Amazon and Apple are faring in the
scramble to develop AI products. Yeah, we want to ask

(31:44):
you a little bit about what Amazon and Apple are
doing in the realm of AI. But just before we
get there, there's another huge deal this week, also with
a potentially dubious price tag.

Speaker 5 (31:54):
Yes, so I think you're referring to Elon Musk's merger
of two of his companies. A little confusing.

Speaker 4 (32:00):
They're both kind of called X.

Speaker 1 (32:02):
I couldn't even get through the read at the beginning
of the show. That's a tongue twister. Yeah, exactly.

Speaker 4 (32:06):
I mean I think that sort of speaks to Elon Musk.

Speaker 5 (32:11):
He has many companies at this point, and he's been
known to sort of move assets around a little bit.
And so when it comes to X, which is formerly Twitter,
so that's the social media platform that he bought a
couple of years ago for forty four billion dollars, he
has now sold that company to his AI company, which
is called Xai. And sort of formally, the social media

(32:34):
company is now owned by the AI company. And so
I say formally, because these companies had kind of been
working together in a lot of ways. User data from
the social media company was already being used to train
the AI on the AI company. The AI company's main product,
which is called Grok, which is a ChatGPT competitor,
was available through the social media company, and so in

(32:57):
a lot of ways, these companies were already the same thing.
And what he did here is he said, look, the
AI company is able to raise a lot of money
because everyone loves AI, everyone wants to boost AI. So
I'm going to use that money that the AI company
is able to raise to bail out and kind of
give me more time when it comes to the social
media company, which is very influential and maybe Elon Musk's

(33:19):
most important company right now because of the political influence
it gives him. But from a business perspective, the social
media company has struggled and sort of been, you know,
going through the wilderness a little bit since Elon Musk
bought it because most of its users left, a whole
bunch of new users came in, advertisers left.

Speaker 4 (33:36):
Maybe the advertisers are going.

Speaker 5 (33:37):
To come back, and so it's a way for Elon
Musk to sort of use the AI hype to kind
of help shore up the finances of his social media company.

Speaker 3 (33:47):
You know, another major tech company that isn't mentioned as
much in the AI race, and that is Amazon. They
recently unveiled Amazon Nova Act, their AI agent. So what
does the layperson need to know about this?

Speaker 4 (34:01):
So okay, a couple of things. I think AI agent
is a term.

Speaker 5 (34:05):
That people are probably already hearing, and I guarantee you
they're going to be hearing more about it in the
coming months and years. An AI agent all that is
is just you know, you can say, okay, well, if
I can have a conversation with chat GPT, I can
ask it things. Can I ask it to then go
and read the internet for me? Can I ask it
to go do things on the internet for me?

Speaker 1 (34:24):
Right?

Speaker 5 (34:24):
If ChatGPT is able to read an e-commerce website,
can't I just tell ChatGPT, hey, go buy me
the cheapest sofa you can find for my new apartment,
one that's green and seats three people.

Speaker 4 (34:38):
Right.

Speaker 5 (34:38):
And so that's what an AI agent is, is essentially using
AI to go and help people to do things on
the internet for them. And so this is something a
lot of AI companies are talking about. Obviously, there's a
lot of problems. The technology is not quite there. The
last thing you want is to say to your AI agent,
go buy me a sofa for under one thousand dollars,
and then suddenly eight sofas that cost ten thousand dollars

(35:01):
each show up at your door a week later, right?
I mean, we need to be really careful.

Speaker 3 (35:04):
Is that the phase we're in right now?

Speaker 5 (35:06):
I mean a colleague of mine ran an experiment where
he asked some of these AI agents to go and
find him the cheapest eggs.

Speaker 4 (35:13):
This was sort of at the height of the egg panic,
and eggs.

Speaker 5 (35:16):
Were selling for twenty dollars a dozen and the agent
actually went and bought I think it was like thirty
dollars eggs and had them delivered to his home before
he even had the chance to say, like, yes, make
that purchase.

Speaker 4 (35:30):
And so it's definitely in the experimental phase.

Speaker 5 (35:32):
But Amazon they sort of see that this is maybe
the next frontier of the technology.

Speaker 1 (35:38):
Is that what the key interest is to basically do
your shopping for you?

Speaker 4 (35:42):
I mean, I think that would make sense.

Speaker 5 (35:44):
I mean, they are the shopping company, and we all
know Amazon Alexa.

Speaker 4 (35:49):
Was an early version of this.

Speaker 5 (35:50):
I mean you could ask Amazon Alexa to buy things
for you on Amazon. You could ask it obviously to
remind you about the weather, and people never really use
it for more than that. And so people have been
saying why was Amazon not ahead of this trend? Why
is Amazon Alexa not smarter? Why can chat GPT do
things that Amazon Alexa can't. So Amazon has been under

(36:11):
a huge amount of pressure from its investors, from its
own employees, from other people in the tech industry to
show that they are riding this AI wave just like
these other companies. And I think this Nova Act product,
which is very new, it's still in the experimental phase,
is a sign that they're trying to do that.

Speaker 1 (36:26):
All of this, of course brings us to Siri. I mean,
the irony being that Amazon Alexa and Apple Siri were
kind of ahead of the curve in terms of voice-driven
assistants, essentially agentic-type features, and now Amazon feels
like it's behind the curve a little bit, and certainly
Apple does too when it comes to AI. What's going

(36:46):
on there?

Speaker 5 (36:47):
Yeah, I mean, I think there's this dynamic that you're
putting your finger on, where these companies are really, really
invested in their own way of doing things, and then
there's a new way that kind of comes out of
left field, and they have to adjust, right. This is the classic
innovator's dilemma. And so I would say for both companies,
don't count them out. Amazon, Apple, they are both massive companies.

(37:10):
They are bigger than pretty much anything we've seen in
the history of business. They are in people's lives every minute,
every day, and so I think, first of all, we
should be careful not to just write them off and say, oh,
they're behind, therefore they will fail. But I do think
that this is really a five-alarm-fire moment for them.

Speaker 1 (37:28):
So, I mean, they've been having these internal sort of
all-hands meetings. What's the kind of internal drumbeat at
Apple on this?

Speaker 5 (37:34):
They are saying, Wow, we need to get on board
with this. And Google had the same thing when chat
GPT eventually came out.

Speaker 4 (37:41):
Right, Google is the AI company.

Speaker 5 (37:43):
A lot of this technology we're talking about was actually
developed by Google and then just shared with the world
because they didn't quite know what to do with it.
And so for Apple, people have said, okay, well, Siri's
been around, are you going to make Siri smarter? This
seems like an obvious application. They stayed quiet for about
a year after chat GPT came out, and then they said.

Speaker 4 (38:03):
Yes, we're doing it. We're going all in.

Speaker 5 (38:05):
We're an AI company like everyone else. And now they've
delayed some of those releases, right, they've said, oh, actually,
we might need a bit more time to get it
to the level where we want it. And we can
see some of Apple's AI experiments have already failed pretty spectacularly.
They had a bot that summarized some of the messages
that were coming into your phone. Well I remember that, yeah,

(38:25):
last year, Apple Intelligence. And what they would do is
they would see news alerts and say, oh, let
me be helpful, let me summarize the three or
four news alerts you have. And the summaries were incorrect, right.
And so that is upsetting as journalists. It's upsetting as
a user, because you want accurate information, and Apple is
muddying the waters.

Speaker 4 (38:43):
And so they had to pull that back.

Speaker 5 (38:44):
And people are starting to seriously ask the question, is
this the moment where a company like Apple kind of
falls from grace and loses its steam, loses its power.

Speaker 1 (38:54):
But here's the thing. Fifty percent of Apple's revenues come
from selling the iPhone. And I can't imagine why
they need to be a market leader in AI. Why
can't they license other people's AI products?

Speaker 5 (39:04):
Yeah, I mean Apple doesn't really like doing that. They
like to do everything on their own. They now build
their own computer chips, they build their own software.

Speaker 4 (39:13):
And the whole point of Apple was.

Speaker 5 (39:15):
That it wasn't a PC, right, they had their own
operating system. And so that is a huge part of
Apple's value where they say, come into our walld garden.
You're going to pay a lot of money, but you're
going to be more secure, it's going to be cool,
it's going to be more intuitive. We're doing things our way,
and so that is their whole pitch to the consumer.

(39:36):
And if they suddenly have to start saying, come into
our world garden and use Google's AI, come into our
world garden and use opening.

Speaker 3 (39:43):
Which I do now, by the way. Yeah, I mean,
I use my Apple products to use ChatGPT. But it's
also like, can you get everything from one guy? I
don't know.

Speaker 5 (39:51):
Yeah, And I mean this is another big question because
that's been the struggle over the last ten years between
these giant tech companies to sort of box each other
out and try to corral people within their own ecosystems.
And maybe AI is the technology that kind of blows
that up in a way.

Speaker 1 (40:07):
Gerrit, what a great place to end.

Speaker 4 (40:09):
Thank you so much. Thank you so much, of course,
it is my pleasure.

Speaker 2 (40:17):
That's it for this week for Tech Stuff. I'm Oz Woloshyn.

Speaker 3 (40:20):
And I'm Kara Price. This episode was produced by Eliza Dennis,
Victoria Dominguez, and Adriana Tapia. It was executive produced by
me, Oz Woloshyn, and Kate Osbourne for Kaleidoscope and Katrina
Norvel for iHeart Podcasts. Jess Crinchich is the engineer and
Jack Insley mixed the episode. Kyle Murdoch wrote our theme song.

Speaker 1 (40:39):
Join us next Wednesday for Tech Stuff: The Story, when
we'll share an in-depth conversation with Reid Hoffman, legendary
founder of LinkedIn, venture capitalist at Greylock, and author
of the new book Super Agency: What Could Possibly Go
Right with Our AI Future?

Speaker 3 (40:55):
Please rate, review, and reach out to us at tech
Stuff podcast at gmail dot com. We want to hear
from you.
