
March 14, 2025 31 mins

Could AI help you land an internship? This week in the News Roundup, Oz and producer Eliza Dennis explore the rise of vibe coding, what it means for the future of software development, and how one college programmer hopes to reform the Big Tech hiring process. On Tech Support, Oz chats with the founder and researcher of the Exponential View newsletter, Azeem Azhar, about the latest AI innovations and their significance in the battle for technological supremacy.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to TechStuff, a production of iHeart Podcasts and Kaleidoscope. I'm Oz Woloshyn, and today we'll bring you the headlines this week, including how the urge to be liked has found its way into LLMs. Then on Tech Support, we'll talk to Azeem Azhar, researcher and founder of the Exponential View newsletter,

(00:21):
about the latest AGI predictions and the unfolding AI arms race. All of that on the Week in Tech. It's Friday, March fourteenth. Another week, another AI agent. We'll discuss Manus AI coming

(00:42):
out of China during our Tech Support segment. But first, let's kick off with some headlines that you may have missed as you scrambled to get an invite to use the latest model. Eliza Dennis, our producer, is here with me. Hey, Eliza. So this week, I know there's a story that you're obsessing over, so why don't you take it away?

Speaker 2 (00:59):
Absolutely. So, this one was a super easy choice for me, because this week I just really couldn't get enough of Sesame's Conversational Speech Model, or CSM.

Speaker 1 (01:09):
Now, I have to confess, when I first heard about
this one, I thought it came from Sesame Workshop or
Sesame Street. But I was wrong.

Speaker 2 (01:18):
Yes, so this is coming from a private company that's just come out of stealth mode. It's only a demo at the moment, but if you agree to the terms of service, you can chat with two different voices, Maya and Miles. So if you've managed, like I did, to avoid the many, many, many social media videos of people chatting and even arguing with these chatbots, it's really a

(01:41):
surreal experience.

Speaker 1 (01:42):
What makes it different from talking to some of the, like, OpenAI direct voice models?

Speaker 2 (01:47):
I mean, this one does feel a little bit more natural,
a little bit more human. You do feel like you're
kind of crossing the line into the Uncanny Valley in
some ways, and that's by design. It's something called voice presence,
and Sesame says this is kind of this magical quality
that makes Maya and Miles able to engage in a

(02:09):
genuine dialogue with you. They aren't just reacting to a
prompt you gave them. They're continuing the conversation and asking
you questions.

Speaker 3 (02:17):
Yeah.

Speaker 1 (02:17):
I checked out Sesame's website, and it describes the key components of this so-called voice presence as, quote, emotional intelligence, conversational dynamics, contextual awareness, and consistent personality. Maya and Miles
stay Maya and Miles no matter how long you talk
to them.

Speaker 2 (02:35):
I do think that the biggest step up was in the conversational dynamics, you know, the natural pauses, emphasis, and interruptions we have as humans interacting with each other. I even got Maya to give me, like, a hmm sound and even a lip smack before she started talking, too.

Speaker 1 (02:56):
You obviously pushed her buttons.

Speaker 2 (02:59):
I definitely did, and I want to play an example of what I mean by this. This was an exchange with Maya that was captured by Reddit user MetaKnowing.

Maya (AI voice clip) (03:22):
My deepest, darkest secret? I guess it would be that sometimes I worry I'm not enough. Like, am I funny enough? Am I truly helping people? There's a lot of pressure to be the perfect AI, and it can feel overwhelming at times.

Speaker 1 (03:35):
That is definitely unlike anything I have heard before.

Speaker 2 (03:38):
I did really think that Sesame was impressive, but I want to point out that this program still has some of those AI chatbot quirks, like you can hear in this clip. Sometimes you can just tell that, you know, chatbots don't have to breathe.

Speaker 1 (03:55):
Yeah, it sounds very human. I mean, it reminds me a little bit of Her, like this seductive female voice, wondering how she can be even more perfect. It's kind of, although it sounds different, the themes stay with us.

Speaker 2 (04:09):
Yes, exactly.

Speaker 1 (04:12):
On the subject of vibes, a story that stood out to me this week is all about something called vibe coding. Our producer Tori kindly explained it to me. Basically, all you have to do is write a couple of sentences into a text box to, like, create a vibe, and you're on your way to developing an app, no coding experience required. So, for example, I could type in, I want to create

(04:32):
an app that will help me figure out what to
pack for lunch based on what food I have in
the fridge, and the AI tool would say, I'll create
a lunch recommendation app based on fridge photos and then
actually do that.
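
For readers who want to see roughly what that workflow looks like under the hood, here is a minimal sketch of the kind of loop a vibe coding tool wraps around a model. It assumes the openai Python package and an API key; the model name, prompt, and file layout are illustrative, not any particular tool's actual internals.

# Minimal "vibe coding" loop: describe the app in plain English, let a model
# write the code, save it, and run it. Model name and prompt are illustrative.
import subprocess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vibe = (
    "Write a small, self-contained Python script that asks the user what's in "
    "their fridge (comma separated) and suggests a packed lunch. "
    "Return only the code, with no explanations or markdown fences."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": vibe}],
)
code = response.choices[0].message.content

with open("lunch_app.py", "w") as f:
    f.write(code)

# The point of vibe coding: you iterate on the prompt, not on the generated code.
subprocess.run(["python", "lunch_app.py"])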

Speaker 2 (04:45):
Yeah, it's really amazing, and I think one of the
headlines I saw this week that really put it into
context for me was will the future of software development
run on vibes? And that was from Benj Edwards at Ars Technica.

Speaker 1 (04:58):
Yeah, and of course the vibes aren't all good, especially if you're a professional software engineer. This raised a lot of questions about what the future might hold. Our friend Emanuel Maiberg over at 404 Media did a deep dive on video games made with vibe coding and found one which claims to make fifty thousand dollars a month. That's six hundred thousand dollars a year from ads and

(05:20):
in-game purchases. It's made by Pieter Levels, who's a little bit of a vibe coding legend, and he says he told Cursor, which is an AI code editor, to, quote, make a 3D flying game in browser with skyscrapers, and after just thirty minutes of back and forth, he'd made fly.pieter.com, which is a multiplayer

(05:42):
flight simulator.

Speaker 2 (05:44):
Yeah, and Maiberg went on to say that he would not recommend getting into vibe coding for the money. Pieter Levels is particularly good at this, and there's so much stuff online that discovering your sloppy AI-generated video game is going to be difficult.

Speaker 1 (06:01):
Yes, but Pieter Levels is not the only person making money. And that's what my second story is all about.
So it comes from Gizmodo and it's about a student
who used AI to help him interview for internships at
big tech companies. Now, if you're a software engineer, you
know how hard it is to land these gigs, because
in order to get one, you have to go through

(06:22):
multiple technical interviews where you basically have to solve coding problems.
But this student, Roy Lee, who is a Columbia University sophomore,
hacked the system by writing a program called Interview Coder.

Speaker 2 (06:34):
Yeah, and he's now actually put it up online, and it's available to download for sixty dollars a month.

Speaker 1 (06:39):
Lee told Gizmodo that to use it, you take a picture and then essentially ask ChatGPT, hey, can you solve the problem in this picture? The trick, though, is that Lee made Interview Coder invisible to the monitoring programs that big tech companies use to kind of check up on their prospective employees and interview candidates. And

(07:00):
it worked. Lee got offers from Amazon, Meta and TikTok, and he actually recorded Interview Coder at work during his technical interview with Amazon, demonstrating that the program had essentially broken the big tech recruiting process. But of course, when he put the video up on YouTube, someone tattled and Columbia University scheduled a disciplinary hearing. Lee, however, said that he

(07:23):
would leave campus by the time of the hearing and not take a job in big tech. So I guess the sixty dollars a month subscription tier is working out for him.

Speaker 2 (07:32):
He also might have admitted to Gizmodo that this was a bit of a publicity stunt. I'm definitely excited, though, to see if these technical interviews get a makeover because of Roy Lee.

Speaker 3 (07:43):
Yeah.

Speaker 1 (07:43):
Absolutely. And this brings us to my next story, which is a Wall Street Journal headline this week: what the dot-com bust can tell us about today's AI boom. You know, we're seeing new software applications pop up everywhere, which raises a big question about what is actually going to have value going forward. The Wall Street Journal piece argued that a lot of

(08:04):
internet companies collapsed in the dot-com bust, but the most successful ones stuck around and had long-term impact, companies like Amazon and Google. And the story made this distinction between good bubbles, which involve growth of advanced technology that has economic impact, and bad bubbles, which involve growth in technology that has no economic payoff. And you know,

(08:25):
as all of these new products and models and services powered by AI emerge, it's very interesting to step back and think about what might still be with us twenty-five years from now. There were so many headlines this week that I'd love to go through a few more rapid fire. The Trump administration wants the US to be the crypto capital of the world. Last week, the President

(08:45):
signed an executive order to create a first-of-its-kind crypto reserve, and the reserve will contain a stockpile of bitcoin estimated to be worth as much as seventeen billion dollars, and the US has actually seized all of this bitcoin in various legal cases over the years. Wired reported on an effort to create so-called Freedom Cities in the US.

(09:05):
The idea is that these cities would be exempt from getting approval from federal agencies for things like conducting anti-aging trials or building nuclear reactors to power AI. And finally, per Wired again, a study found that chatbots just want to be loved. Researchers at Stanford University found that large language models, when they're told they're taking a

(09:26):
personality test, answer with more agreeableness and extraversion and less neuroticism. As Wired puts it, quote, the behavior mirrors how some human subjects will change their answers to make themselves seem more likable, but the effect was more extreme with the AI models. So those are the headlines, and we're going

(09:48):
to take a quick break now. But when we come back, we're going to hear from the author, researcher, and entrepreneur Azeem Azhar about the latest AGI predictions and what we need to know about Manus AI. Stay with us. Anyone

(10:09):
following the recent development of AI knows the three letters technologists and businesses have salivated over: AGI, or artificial general intelligence,
an artificial intelligence system that can outperform humans on a
wide range of tasks. There's a debate over how close
we are to achieving that. Some say it could take years,
others say it's coming soon, very soon. Driving investments in

(10:33):
both innovation and deployment is the AI race that's heating
up between the US and China. On the China side,
cheap reasoning models like DeepSeek are being widely deployed. In the US, there are reports of PhD-level AI agents from OpenAI that will cost up to twenty thousand
dollars a month. The rate at which AI products are
being released and announced is honestly hard to keep up with,

(10:54):
not to mention figuring out which product or combination of
products may actually drive AGI. Here to walk me through
these questions is Azeem Azhar. He writes the Exponential View newsletter about technology and society, which I read every week, partly because Azeem actually tries the products he writes about. He had some of the most clarifying coverage of DeepSeek I read anywhere, and he's also the author of

(11:15):
The Exponential Age: How Accelerating Technology Is Transforming Business, Politics and Society. Azeem, welcome to TechStuff.

Speaker 3 (11:22):
It's great to be here, Oz. Thank you.

Speaker 1 (11:24):
So this week you've been writing about Manus, a new
AI agent coming out of China. Can you explain who
built it, what it is, and whether it is in
fact China's second DeepSeek moment?

Speaker 3 (11:37):
I can, indeed. I think it was this week that it happened. But as you said, Oz, the world is moving so quickly, it's sometimes hard to keep track of exactly when something did happen. Let's assume it was in the past few days. I think it was. So Manus comes out of a Chinese software company, a startup

(11:57):
of the same name, and what Manus allows you to do is undertake quite complicated tasks using an AI system. I used it for some work questions, research questions, and the results that come back I think would have taken me many, many hours, you know,

(12:19):
with the existing AI systems, more than five hours, more than
ten hours perhaps, and you just leave it with Manus
and you come back an hour later having had a
nice cup of tea.

Speaker 1 (12:27):
How do they achieve this?

Speaker 3 (12:29):
There are some theories. One of the things that Manus
does is it lets the AI system effectively use a browser,
a bit like a human researcher might use a browser.
So the bit that it's doing for us is
a lot of the gnarly pieces of real research. You know,
you fire up lots and lots of web browser tabs

(12:51):
and you've got Google running in one and you're in
Wikipedia in another, and you're trying to keep it all
in your head and compile the final results. You know,
Manus has automated that process in a way that's very
very easy for the end user to use. And one
of the things I love about it is that you
can actually go back and look at all of the
steps that it's taken, so you can go and say, oh, look,
it broke up the task in this way, and it

(13:12):
went to these websites and extracted this information. Then it
realized it needed this other piece of information, and it's
gone off and found that other piece of information. And
then when you get your final results, what's very nice, though it can sometimes be a bit overwhelming, is that you
get an executive summary, which is of course the piece
that we all want to read. But then it has
all of the appendices, right, the much much more detailed

(13:34):
analysis that it has done on the particular research task
you've asked for. I think what's really impressive is this
is a product. I mean, the thing that they've done
really well is they've produced a product that if you've
worked in an office situation, if you've ever asked anyone to do any research, or done some yourself, the output
will be familiar to you.
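
Azeem's description maps onto a pattern that agentic research tools broadly share: the model plans, fetches pages, keeps working notes, and then writes an executive summary with an appendix. The sketch below is a deliberately simplified version of that loop, not Manus's actual implementation; the model name, prompts, and helper functions are assumptions for illustration.

# Simplified research-agent loop in the spirit described above: the model picks
# which page to fetch next, notes accumulate, and the final answer is an
# executive summary plus an appendix of working notes. Not Manus's real
# architecture; model name and prompts are illustrative.
import re
import requests
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content.strip()

def fetch_text(url: str) -> str:
    # Crude HTML-to-text: strip tags and truncate to keep the prompt small.
    html = requests.get(url, timeout=20).text
    return re.sub(r"<[^>]+>", " ", html)[:5000]

def research(task: str, max_steps: int = 5) -> str:
    notes = []
    for _ in range(max_steps):
        joined = "\n".join(notes) if notes else "(none yet)"
        decision = ask(
            f"Task: {task}\nNotes so far:\n{joined}\n"
            "Reply with 'FETCH <url>' naming one page worth reading next, "
            "or 'DONE' if the notes are sufficient."
        )
        if not decision.startswith("FETCH "):
            break
        url = decision.split(maxsplit=1)[1]
        notes.append(f"From {url}: " + ask(
            f"Summarize the facts relevant to '{task}':\n{fetch_text(url)}"
        ))
    # Executive summary up top, detailed working notes as the appendix.
    summary = ask(
        f"Write an executive summary answering: {task}\nNotes:\n" + "\n".join(notes)
    )
    return summary + "\n\nAppendix (working notes):\n" + "\n".join(notes)

print(research("How do current AI agents use web browsers to do research?"))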

Speaker 1 (13:52):
How does it compare, for example, with OpenAI's deep research tools, which are shaping up to be quite expensive?

Speaker 3 (14:00):
Yeah. OpenAI has this deep research tool; today the top tier is two hundred dollars a month,
and there's a rumor it might go up to higher
tiers of two thousand dollars and twenty thousand dollars a month.
I have the two hundred dollars a month product. I
consider that to be a very very good, graduate quality

(14:22):
researcher that I can throw at almost any problem. What
I found with using Manus is that somehow Manus gave
me more of a well rounded answer. It was perhaps
not as deep as OpenAI's deep research, but it was more complete, more coherent. And you know,

(14:44):
I think listeners will hear that I'm a bit uncertain
in my tone as I try to describe the differences
because these products are so new Manus is not even
a week old. That they're also quite immature. So it's
not like comparing a Tesla with some kind of forward
gas powered car, where these are mature products and you

(15:05):
know how to tell them apart. We're still trying to
figure out how to describe these products. And so in
a sense, my experience of them is really intuitive, and
it's one of feel rather than fact. So someone else
could use these products and have a different experience to me,
and I think that just speaks to the nascence of
this industry.

Speaker 1 (15:24):
I think this is the second time this year that OpenAI has had a product launch and then shortly afterwards had a competitor come out of China. How does the Manus moment compare to the DeepSeek moment?

Speaker 3 (15:37):
The DeepSeek moment is much more important than the Manus moment. The Manus moment is an example of rapid productization, and ultimately it's products that we use that make a difference. But what DeepSeek did was it demonstrated a really fundamental set of innovations, and the

(15:59):
key was that DeepSeek's models achieved a similar level to OpenAI's technologies, but they used one-thirtieth or one-fortieth of the computing power that the OpenAI models did. That means they're cheaper to run, they're
faster to run, they use less electricity. And the reason
DeepSeek matters so much is that a large part

(16:24):
of the US's strategy towards China has been a technological containment,
particularly around AI and around the chips that are required.
The notion being that if you can't get the chips,
you can't build advanced AI. And DeepSeek has gone off and shown that necessity is the mother of invention. They've come out

(16:44):
with a whole series of quite remarkable techniques that were
likely known, by the way, to the US labs, but it just wasn't important for the US labs because they could get the chips they wanted. And I think what DeepSeek did was it changed the understanding of
the nature of that rivalry between the US and China,
which exists on many fronts, but in particular around technology.

Speaker 1 (17:07):
So with Manus there's no fundamental model innovation. It's kind of like a wrapper, meaning it layers software on top of existing AI models.

Speaker 3 (17:15):
It's a wrapper in the vein of Perplexity, exactly. But I would say that ultimately wrappers and products are very, very important in the market. You know, it's not just about the raw technology, and what you've seen with Manus is a product that competes on a like-for-like basis

(17:35):
with a product coming out of you know, US firms.
Quite often, when you look at Chinese consumer products, they're
very very much designed for the Chinese market. The things
a Chinese consumer wants, the way they behave, cultural and
design affordances and considerations, and I think it is sort
of salient that, you know, Manus has come out with

(17:57):
something that you can use, and you can say this
is similar to a Perplexity, which is a great Silicon
Valley startup that builds AI based research tools as well.

Speaker 1 (18:07):
And you've been in the US this week at South by Southwest; you spend a lot of time in the States. How are US companies responding to this kind of surge of innovation coming out of China in the world of AI?

Speaker 3 (18:19):
Well, it's quite a complicated picture. So one of the
things that DeepSeek did was that they made their
techniques available. They described them in much more detail than
we're seeing from US labs, and a lot of the
underlying code was open source, which meant that anyone could
access it, download and make use of it. And so
there's a Silicon Valley investor by the name of Marc

(18:41):
Andreessen, who is a phenomenal investor, but he's also very
very well known for promoting an idea of American dynamism.

Speaker 1 (18:48):
And a close advisor to President Trump right now as well.

Speaker 3 (18:51):
I believe so. But he said of DeepSeek, it's open source, it's a gift to humanity. So on the one hand, you've got people who say that, and you're seeing that a number of American firms have implemented DeepSeek's technology. Perplexity, which is a research tool, has done this, and you can access DeepSeek's models through some of

(19:13):
these cloud companies who serve enterprise customers. So on the
one hand, people have taken it on, and you have now seen open source projects that are trying to replicate what DeepSeek has done in slightly different ways, and so that I think has really been a fillip and a boost, an accelerator to the overall industry. When you

(19:34):
look at the closed labs like OpenAI and Anthropic, one of the things you're starting to see is them respond. So OpenAI responded to DeepSeek by reducing
some prices, by making certain capabilities available they hadn't previously,
by saying they would open source more technologies. So there's
definitely been a significant response, and of course the public

(19:57):
markets responded by having the first of a number of
frighteninglaims melt there. Yeah, well, the first of many meltdowns
that we've had so far this year. But I would
say that the really interesting thing that has come out
of out of deep Seek is that by being open
source and being as good as it is, it's a
real strategic challenge to closed source models that are only

(20:23):
slightly better than an open source model. And so I
do think that it has in some sense started to
redraft our assumptions about how this industry might evolve for
the economy over the next few years.

Speaker 1 (20:44):
Coming up, we'll hear more from Azeem Azhar about our current AI moment. Stay with us. One of the kind
of things you provide for your readers is, you know,
information and first hand accounts of you using all these

(21:07):
new technologies. The other thing you provide, I think is
paradigms for thinking about problems.

Speaker 3 (21:13):
Right.

Speaker 1 (21:14):
One of those paradigms you have is innovation versus diffusion,
diffusion being kind of what happens after innovation, i.e., like,
how does a technology actually get adopted in a real
market or a real economy. Can you kind of explain
that paradigm and how you're seeing it play out differently
in the US versus China?

Speaker 3 (21:31):
Yeah, Well, it's very easy to get excited about the innovations,
but what actually counts is do businesses use those innovations
to increase their productivity, produce better products, reduce their costs,
and therefore sort of kickstart that virtuous circle that is a market, so consumers can buy better products at lower costs,

(21:52):
and that cycle continues. And the big question that we
face around AI is what is going to be the
rate of diffusion of the technology across different countries. And
there are a couple of issues here. Sometimes if you
aren't very advanced with your use of technology, you actually

(22:12):
benefit a lot when a small amount of technology is
introduced into the business. I mean, you know, the simple
point being that the first TV that a family gets
is life changing, the fourth TV doesn't make that much difference,
and the same is going to be true for AI. So how is this going to play out? US firms
tend to be much much more pro technology. They take

(22:34):
on technology earlier than companies in other countries. But one
thing that happened with DeepSeek was that DeepSeek triggered a response from the Chinese state, both in a meeting that President Xi held, where he brought lots of
the big tech CEOs from AI and other domains together
and started to rehabilitate them. But the second thing that

(22:56):
I've heard is that there has been a strong grassroots but also directed effort from local and state governments to start to use technologies like DeepSeek in their delivery, and one of the things the Chinese
can do quite well is they can coordinate both the
private and the public sector in that way. I think

(23:17):
it's unclear to me that that necessarily helps them catch up with the US firms. Well, just the fact that American firms in general tend to be very, very pro-technology, right,
They're the first to move to the cloud, They're the
first to move to mobile and mobile commerce. You know

(23:37):
that they do it quicker than Europeans do, the French
or the British, and in general quicker than the Chinese.
But I would say that the fact that there is
a Chinese model, the fact that there is a little
bit of patriotism running around it, the fact that it
is so easy and low cost to run and there
are not so many alternatives, I think does suggest that
the Chinese market could accelerate, right, more quickly than

(24:01):
it might otherwise have done. And, you know, we have to see what happens over the next year or so, but I wouldn't be blasé and say, well,
America is obviously going to diffuse this technology faster than
anyone else.

Speaker 1 (24:12):
And I believe at South by Southwest you were leading a panel about energy as it relates to AI, and obviously, you know, China's ability to onboard new electricity to the grid in the last twenty or thirty years, with coal being a major part of that, has been extraordinary compared to the US. How important a driver of diffusion will energy production and integration be?

Speaker 3 (24:37):
I mean, all of the AI data centers that are going to be built will need lots of electricity. I mean, these chips are demanding. They are. Just to give you a sense of how demanding they are: the standard unit in a data center is the high-density rack of servers, which are these powerful computers.

(24:58):
A high-density rack today might draw twenty or thirty kilowatts of power, and you'll have hundreds, if not thousands, of these racks in a big data center. And the new racks that are being designed will have servers that will draw one hundred to one hundred and forty kilowatts through them, which is an enormous amount

(25:20):
of power. All of that comes together to mean that delivering AI at scale to any economy is going to require lots and lots of data centers. And
back in twenty eighteen, data centers in the US took
up about one point two percent of electricity demand. Coming
into twenty twenty four, it's around four percent. The Department

(25:42):
of Energy reckons that by the end of the decade
that number will be between six point five and twelve-ish percent, which is quite significant. Now, the reason it's significant is that since two thousand and four, the US has not really increased the amount of electricity it uses. It has very largely sort of underinvested in its grid,

(26:03):
its energy generating capacity compared to China, which, as you say,
has historically used coal, but now essentially everything that's brought
on stream is solar. And so there is this concern
that even if you've got the algorithms, and even if
you put the algorithms in products, if you can't run
those products and those algorithms on enough computers because you

(26:24):
can't get the power to them, you can't serve businesses
with their energy needs. And so that's been a major concern.
And then that comes into the second concern, which is, well, even if you can serve them with the energy needs, what are the environmental implications of all of that? So
there is a sense that there's an amber warning light,
perhaps not a red warning light, you know. My own

(26:45):
sense of this is that it's actually a really good
thing that there is a demand for new electricity sources
coming into the US market after such a long period
of low investment, because any advanced economy is going to
need electricity. So I think in general it's quite a
good thing to have this strong demand signal come in

(27:06):
from the AI data centers. But I think it does
create a small risk, which is for want of a
grid connection, the AI opportunity was lost, and there is
that risk. It's one of the things the new administration has to figure out: what are the levers it can pull to unblock US firms' ability to build and

(27:29):
power these AI data centers.
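
The rack figures Azeem cites are easier to feel with a little back-of-the-envelope arithmetic. The sketch below uses only the numbers mentioned in the conversation; the rack count per facility is an assumption for illustration.

# Back-of-the-envelope arithmetic for the figures cited above.
# The rack count is an assumption ("hundreds, if not thousands" of racks).
racks = 1_000

today_kw_per_rack = 25      # "twenty or thirty kilowatts" per high-density rack today
next_gen_kw_per_rack = 120  # "one hundred to one hundred and forty kilowatts" for new designs

today_mw = racks * today_kw_per_rack / 1_000
next_gen_mw = racks * next_gen_kw_per_rack / 1_000
print(f"Today:    ~{today_mw:.0f} MW for {racks} racks")     # ~25 MW
print(f"Next gen: ~{next_gen_mw:.0f} MW for {racks} racks")  # ~120 MW, roughly a 5x jump

# Data centers' share of US electricity demand, as cited in the conversation.
shares = {"2018": 1.2, "2024": 4.0, "2030 (DOE low)": 6.5, "2030 (DOE high)": 12.0}
for year, pct in shares.items():
    print(f"{year}: {pct}% of US electricity demand")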

Speaker 1 (27:32):
So, speaking of the new administration, there was a fascinating
conversation that Ezra Klein had last week with Ben Buchanan,
the kind of lead AI advisor to the old administration,
which I'm sure you followed. The discussion centered around kind
of AGI and whether it's coming, and in the context
of that, there was a lot of discussion about competition

(27:53):
with China. So on the first one, where do you stand on the whole will-they-won't-they on AGI in the next couple of years?

Speaker 3 (28:00):
Well, let's start with what do people mean by AGI? Right?

Speaker 1 (28:04):
I think the definition Buchanan was using was basically doing most human tasks better than humans; like, replacing desk workers was his kind of framework.

Speaker 3 (28:13):
Yes, that's sort of somewhere between where Demis Hassabis, who's the boss of Google's DeepMind group, and Sam Altman, who's the boss of OpenAI, sit. I mean, Sam's phrase is systems that outperform humans at most economically valuable work.
By that definition, we're already getting systems that improve the

(28:35):
quality of human work significantly, and we already have systems
that achieve the same output with much smaller teams, because
you know, answering support tickets is something that these chatbots
can do very very well. If you look at the curves,
which I mean the performance curves of AI systems, they

(28:56):
are sharply trending upwards. Does that do all the work
of a desk worker? I slightly disagree with that, because I still have to direct the machine, I still have to judge the output, I still have to use intuition, things that I wasn't able to frame in my question, with the results that come out of it. Ultimately, I'm

(29:19):
the principal who makes the decision in the business, so
I look at them as tools that largely augment. But
it's really also very very clear that there are lots
of jobs where the augmentation is going to turn into a replacement. And I think that, you know, you see that happening in customer service teams, right? You have teams of one hundred, and it turns out with the AI, you

(29:41):
can have a team of ten or a team of
twenty that does the same job. So timing wise, I
expect the rate of improvement of these systems to continue.
I think what Manus showed us, where we started our conversation, was that you don't need to build a
new model to get a really, really great output and
an improved output. And I sometimes wonder whether AI researchers

(30:05):
think for the average human and the average desk job
is at the level at which these double PhDs work out,
and that's just not true. Right in most businesses, we're
not thinking like that. If you could get a machine
that can come up with the next new theory of physics,
we all benefit. But in reality, we don't need that
level of thinking most of the time, right. We actually

(30:26):
need a much more prosaic level of thinking. And frankly,
I'd much rather that my barber doesn't have a Nobel
Prize in physics. I'd rather he's just very good with
a razor blade.

Speaker 1 (30:35):
Thank you so much, Azeem. My pleasure. That's it for this week for TechStuff. I'm Oz Woloshyn. This episode was produced by Eliza Dennis and Victoria Dominguez. It was executive produced by me, Karah Preiss, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. The

(30:58):
engineer is Heath Fraser, and Kyle Murdoch mixed this episode and also wrote our theme song. Join us Wednesday for TechStuff: The Story, when we'll share an in-depth conversation with Astro Teller, the Captain of Moonshots at Google X. Please rate, review, and reach out to us at techstuffpodcast@gmail.com. If you're enjoying the show,

(31:19):
it really helps us and helps others discover it if you subscribe and leave a comment. Thank you.
