
February 21, 2024 • 57 mins

Journalist Jacob Goldstein joins the show to talk about the challenges of communicating technology topics to the public, how to sort through the promise and threat of AI, and the way that engineers approach problems.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts, and how the
tech are you? Folks, we have a very, very special
episode of TechStuff because I have a very, very

(00:25):
special guest with me today, Jacob Goldstein. Now, if I
were to go and run down his entire resume, it
would be an entire episode just by itself. He's got
a long and distinguished career in journalism, and among his
many accomplishments happens to be the fact that he's the
host of a podcast called What's Your Problem, which is

(00:48):
a show that really dives into things like engineering and
how engineers tackle problems, how do they define them, and
then how do they create solutions. So Jacob, welcome to
TechStuff. Thank you for being here.

Speaker 2 (01:02):
Hi, Jonathan, thanks so much for having me. I'm delighted
to be here.

Speaker 1 (01:05):
Yeah, I'm delighted you're here too. And before we really
dive into a full discussion, we're going to talk a
lot about engineering and a lot about AI in particular,
because many of your episodes in this last season of
your show have been on AI. I want to learn
more about you, So tell us a bit about your
background and how you came to become a podcaster on
What's Your Problem.

Speaker 2 (01:26):
So, before I started What's Your Problem, I was one
of the co-hosts of a podcast called Planet Money,
which is a show about economics. And before I had
that job, I didn't know that much about economics. You know,
I was an English major in college. I'd covered healthcare
for the Wall Street Journal, and so getting there and
covering economics for a while, to me, the big exciting

(01:49):
idea at the heart of economics is the pie can
get bigger, right, everybody can be better off. The world
is not a zero-sum game, and I think that
is a very non-intuitive, big, exciting idea. And basically
the way we all get better off in the long

(02:11):
run is through technology. Right, It's through people figuring out
more efficient ways to do things, ways so that you know,
we do the same amount of work basically, but we
get more stuff. You get more output for every hour
of labor. And that is fundamentally you know, engineering and technology,
as you said, And so I wanted to go deeper

(02:33):
on like how that actually works, Like, you know, there
are people whose job is I'm going to go to
work and like figure out a better way to do something,
and so that is what I'm trying to do on
What's Your Problem? Those are the kind of people I
talk to on the show.

Speaker 1 (02:45):
That's awesome. I hear what you're saying because it resonates
a lot with a lot of the stuff we talk
about here on TechStuff. To me, one
of the funny things about technology is how everyone
anticipates that the next big technology development means their jobs
are going to be way less labor-intensive, and then

(03:05):
often it turns out like, well, sure each individual task
is easier, but now you're doing way more tasks because
everyone's more efficient. So if you remember, because I'm not
going to put an age on you, Jacob, but I
will say that I am of a certain age where
I remember the concept of the paperless office, yes, and
how we were going to get to this incredibly efficient thing,

(03:26):
and that maybe, like maybe your workday would be reduced
to maybe three hours a day. Turns out that that
was perhaps a bit idealistic on the part of the
workers and not the way it worked out.

Speaker 2 (03:37):
Yes, although I mean there is a lot less paper,
to start with, right. Like, I am old enough
to remember, like when I started working, everybody had like
a file drawer by their desk and like hanging files
with papers in them, so there is less paper. I mean,
the sort of less-work thing is interesting, right, because
on the one hand, people are like, oh, I hope
technology will mean I have less work. By the same token,

(03:59):
people are like, oh I hope technology doesn't take my job, right, Yes,
In fact, the basic mechanism, the happy
story anyway, the story that we hope happens, is technology
makes us more productive, not so that we can work less,
but so that our output can be greater, right? Like,
you know, I did not start working on podcasts and

(04:23):
radio in the reel-to-reel tape era, but I
know people who did, and like they talk about how
long it took to literally cut tape by hand, and
you can cut a lot more tape now that it's
not actual tape, right? That's a productivity gain.

Speaker 1 (04:37):
Yes, totally. It's so funny to kind of see these
changes over time and the different perceptions that go into
what we thought it was going to be like versus what
it actually turns out to be. We're gonna, as I said,
talk about AI a lot, and to your point, one
of the things that I hear repeated about AI in
general and specifically within the realm of robotics and AI

(05:01):
is that its ideal role is to tackle tasks that
fall into the three Ds, which are dirty, dangerous, and dull.
That these are technologies that are best suited to take
on jobs that are perhaps less desirable for humans for
various reasons, whether they could potentially cause injury, or they're
not very rewarding, that sort of thing, And I really

(05:23):
like that concept. The fear everyone has, obviously, is that
it's tackling everything. It's indiscriminate, it's not looking at
just the three Ds, it's looking at every possible option.
And you're seeing a lot of discourse, at least I
am online, where I see a lot of people saying,
why aren't we looking at how to automate C-suite jobs?

(05:47):
Because it seems to me like a lot of the
duties that C-suite executives have are ones that would
be best suited for some of the AI tools we're
talking about. Why are we talking about eliminating these lower
level jobs when some of these upper level ones, particularly
when you have stories about C-suite executives who perhaps
had a less than stellar run at the top of

(06:09):
the company ladder, like maybe the company didn't perform as
well as it should have, and yet they still retire
with these massive packages. So it's funny to see how
the perceptions of things like automation and AI are shaping
social discussions in that way.

Speaker 2 (06:26):
Well, I think certainly. I mean, the rise of large
language models, you know, LLMs like ChatGPT, have shifted
that conversation some, right. I think if you go back
five years, people thought about, you know, automating warehouse jobs.
But what I've seen in the last, say, year or
so since ChatGPT, you know, stormed our discourse is

(06:46):
people are talking a lot more about journalists and lawyers
being automated away, right, and plausibly to some extent, plausibly
to some margin. I mean. The other thing is like,
do you want a robot boss? When people talk about like,
I haven't heard about people talking about automating the C-suite,
but like, sure, bad bosses are bad, and overpaying bad
bosses is bad. I actually think the role of a

(07:10):
good boss, of a good CEO, is largely to be
a human being, right. It's not fundamentally about, you know,
assessing the data and making the best decision, although obviously
that's important. It's to be there to you know, talk
to people, essentially, to be in the room to tell
people that things are going to be okay. And that
seems like the set of domains that are less likely

(07:33):
to be automated, certainly in the short run.

Speaker 1 (07:35):
I think you're absolutely right. I think it's based on again, perception, right.
A lot of people don't have, like, face time with CEOs,
and so their perception is from a distance, and they're
looking at the effects that the CEO decisions are having
on a broad level, especially in the wake of something
like a round of layoffs, for example, Whereas you and

(07:57):
I have had the opportunity to speak face to face
and you find out very quickly these CEOs are human beings,
some more so than others. I mean, there are some.
There are some CEOs out there who I suspect may be
at least part cyborg, but.

Speaker 2 (08:12):
Could be, could happen.

Speaker 1 (08:14):
Yeah, yeah, most of them. I mean, I'm not sure
if Elon Musk is even robotic, he might be alien.
I'm not entirely certain. But there are a lot of
them out there. When you just have a short conversation
and you realize these aren't just talking points. For a
lot of these leaders, they sincerely believe their mission statement,
or they sincerely believe in the strategies that they're following,

(08:37):
and they sincerely feel bad when they have to make
decisions that lead to things like layoffs. But when you
are at more of a distance, I think it's easier
to kind of dehumanize the person. And it's understandable, right?
You see those big effects and you just think of that. Meanwhile,
the CEO is potentially making enormous amounts of money. I'm

(09:00):
reminded of a former CEO I worked for. Not a
former CEO, he's still a CEO, but he is
my former boss, David Zaslav. And whenever I see any
stories about him, I sit there and think, well, I've
met the man, I've had conversations with him. I know
a little bit more. I feel like I could give
a bit more perspective to this. But at the same time,
you're not entirely wrong with some of the conclusions you've drawn.

Speaker 2 (09:24):
Yeah, and I mean, you know, to some extent,
I feel like a lot of
the subtext of what you're talking about is inequality.

Speaker 1 (09:31):
Right.

Speaker 2 (09:32):
It's the gap between what CEOs make and whatever the
median worker at their company makes, and that indeed ballooned
out a lot at the end of the twentieth century
and has stayed quite wide obviously, and to some extent,
that is an effect of technology, right, although it's complicated;
a lot of it is norms, right. Yeah, that's
a pretty subtle one.

Speaker 1 (09:53):
Yeah, that's another thing that as communicators about things that
are in the technological space, often we do need to
take a step back away from just the technology and
acknowledge these other components that impact the entire direction of tech.
I mean, I'm sure as someone who has looked into
Silicon Valley you have seen how things like social norms

(10:14):
and politics and even things like living expenses in San
Francisco have a big impact on these sorts of things,
and people can get frustrated when I step back and
talk about these elements. But my argument is that you
can't really have a full understanding of technology unless you
also take into account these other things that do have
an impact. But one of the things I wanted to

(10:35):
ask you about: on What's Your Problem, you are talking
with a lot of problem solvers, obviously, and I was curious,
now that you've spoken with quite a few people who
either come directly from engineering or have kind of an
engineering perspective, what's your take on engineers? Because I know
that's a general question and not everyone falls into the
same bucket, But I have a love for engineers in

(10:59):
the way that they approach things.

Speaker 2 (11:01):
Yes, same, And you know, I also love engineers. And
as I mentioned, I was an English major. I am
not an engineer at all. I'm not even good at
fixing things around the house, although I try. But in college,
I took one computer science class my last term of college.
It was just the intro class. And on the first

(11:22):
day, it was, you know, a big lecture class. The professor,
a computer scientist, was talking about his grading system, and it
was this weird thing, like a check and a
check plus. And he said the top grade is a
check plus plus, and that is for code that makes me weep,
because it's so beautiful, you know. And
that was like a revelation to me because you know,

(11:42):
as an English major, as a non engineer, I always
thought of engineering as like, oh, a thing works or
it doesn't work. The building falls down or it doesn't
fall down. But this idea that there is elegance and
beauty in the construction of the thing itself was really
exciting to me and remains really exciting to me. And
engineers really are like that, you know, like they love

(12:04):
building things and they find beauty in an elegant solution,
the way other people find beauty in a song or
a poem or a painting.

Speaker 1 (12:13):
I love that. I would say there's like a spectrum
in engineering as well, where you have sort of the artists,
who are the ones who are very carefully creating and
refining their code, and then maybe you have on the
punk rock side the hackers who didn't necessarily build the thing,
but they really want to know how the thing works,

(12:34):
and they will take the thing down to the very
base level of the structure, and then they'll say, what
if I rebuild it so it does something else? Like
I just think that entire culture, from the artists to
the punk rockers, who, hey, they're artists. I'm a punk
rock kind of fan myself, but I always find that
to be a wonderful way to have a conversation is

(12:57):
to talk about people and about their approach to
this sort of stuff. And I also always say whenever I
talk with engineers, I come away with the feeling that
they view the world, as you can think of it,
as either a set of problems or a set of challenges,
and they're constantly thinking about solutions, which is nice because
I am unfortunately one of those people who's far more

(13:17):
likely to point out a problem but not have a
solution for you, right. So to talk to someone who's
already thinking ahead about how to solve the problem, not
just that there is a problem, I always find that
very inspiring.

Speaker 2 (13:30):
Yeah, it's nice. I guess I also have tended to
be problem focused, you know. I sort of came up
in my career as a journalist, which is essentially all
about pointing out problems, right, to a significant degree. If
you read the paper, basically what's going on in many,
many stories is: here is a thing that is bad,
and so talking to people who are trying to make

(13:53):
things better or fix things is great. And you know,
to be clear, I don't want to be too Pollyannaish
here. Like there are plenty of engineers who build
things that on net don't help the world right, Building
new things is not always helpful to the world, and
there are certainly engineers who become enamored of just building
the thing and don't think about what it might mean.

(14:15):
And frankly, you know, in choosing who to talk to
for the show, I do try and talk to people
who I think are some combination of, well, cognizant of
what they're building and actually trying to do a good thing,
and you know, aware of the fact that there might
be unintended bad consequences of the thing that they're building.
Maybe I don't always succeed, but it is a useful

(14:38):
frame totally.

Speaker 1 (14:39):
I also tend to think about how when engineers build
things that make sense to them, assuming they're building
something that ultimately is supposed to be used by the
general public, the great ones will take into account how
a quote-unquote normal person would approach whatever it is.

Speaker 2 (14:57):
Like.

Speaker 1 (14:57):
I'm thinking of user interfaces in particular, and making
sure that the user interface is going to make sense
to a normie as opposed to an engineer. And then
there are other engineers, or in fact entire companies, where
they will build things that work great if you're an
engineer. They're fantastic if you're an engineer. If you're not
an engineer, it may require a bit more work on

(15:21):
your part. I'm looking specifically at Google and the Android
operating system, because I'm an Android user. But at the
same time, I fully recognize that iOS is an operating
system that is so intuitive. You can literally hand an
iOS device to a child and they will have it
figured out in no time. You can hand an Android

(15:41):
device to someone and they will spend a lot of
time asking questions about how to do things and how
to access things. And it's not that the Android operating
system is bad, it's not that it's worse than iOS.
If you happen to be an engineer, Android is awesome,
and if you're not, it's still awesome. But you have
to put in work to get to realize that. And
it really differentiates the two, right because Apple has always

(16:04):
had a focus on how can we make this into
a product that people realize they need, even if they
never had that need before. And Google's like, how can
we make this so that it's really powerful and that
it does what we wanted to do? But it may
require a little bit of work on the user's part.
In order to have it work out. So that's sort

(16:25):
of also a fascinating thing about engineering that I've really
loved to look at and to talk about. I don't
necessarily think one is better than the other, apart from
the fact that one is just much easier for the
general public to kind of glom onto. As much as
I love Android, I would never say that it's more
user friendly than iOS.

Speaker 2 (16:46):
Yeah, I mean there's a few ways of thinking about that, right?
Like somebody a long time ago told me there's a
phrase people use sometimes: the user is never wrong,
which is an interesting framework. And I think
they told it to me to be nice, because I
was, like, trying to do something in a radio studio
and I couldn't figure it out. And it was an
engineer who was like a thoughtful guy. He said, no, no,
you're not bad at this, it's just not set up well.

(17:08):
I mean, the other way of thinking about it, in
terms of the Android versus iOS question, is as
an optimization problem. Right, if you're an engineer, then the
question is, well, are we optimizing for sort of the
mobile operating system that can do the most things or
be the most flexible, or the mobile operating system that
is just like bulletproof. You can give it to anybody

(17:28):
in any language and they will immediately understand what it
is and you get different sort of solutions depending on
what you're optimizing for.

Speaker 1 (17:36):
Yeah, that's true. Like you've identified whatever your goal is,
so obviously the execution is going to be different. Well,
I'm glad that you weren't told that the problem was
between keyboard and chair, which is the other classic answer.

Speaker 2 (17:49):
Wait a minute, that's me.

Speaker 1 (17:51):
Yeah, I've received that particular one more than
once in my lifetime. Well, now you have a reply.
Yes, yeah, yes, that's true.
The user is never wrong. We've got a lot more
to talk about, Jacob and I, but before we get
to the rest of our conversation, we need to take
a quick break to thank our sponsors. Your show covers

(18:24):
all realms of technology, not just AI, but because this
past year has been undeniably AI centric when it comes
to tech news, clearly a lot of your episodes do
tackle AI, and as someone else who tries to communicate
technology in a way that's really accessible and understandable, one
of the things I frequently run into is that talking

(18:46):
about AI in a responsible way is in itself challenging.
But I'm curious to hear what your take is when
it comes time for you to communicate about AI. How
do you perceive that and how do you approach it?

Speaker 2 (19:00):
A thing in general that I try and do on
my show and certainly with respect to AI, is to
go narrow essentially right, Like I'm not going on my
show and saying here is what AI is and here's
what's going to happen. I'm talking to people who are
typically building a company to do a specific thing with AI.

(19:21):
Not quite always, but usually right. And so that to
me is a helpful way to, well, (a) say something
new, because so many people are making so many broad
statements about AI, and (b) steer clear of, you know,
overgeneralization. How about you? I mean, what's your take?

Speaker 1 (19:41):
So my concern is I always want to avoid being
reductive, because AI is such a huge discipline and it
involves so many different aspects that it is very easy to
fall into that trap. Because, I mean, clearly we see
this in mainstream media all the time. Not that I
blame them, but they're just taking some shortcuts where
they'll use the term artificial intelligence. It almost implies that

(20:04):
what they're talking about is the end all be all
of artificial intelligence. And usually they're talking about generative AI.
In the last year, I would say, like, that's been
the biggest topic in artificial intelligence, but it's one topic,
and that AI actually covers a lot more than that,
and it falls into so many other buckets too. Like,
robotics obviously has a lot to do with AI. That's

(20:26):
not always the case, you can have a fully remote-controlled robot,
but often there are some AI components there, things like assisted vision,
brain-computer interfaces. I mean, there's so many different things
that don't have anything to do with generative AI that
still at least touch on AI.

Speaker 2 (20:43):
Doing a Google search, yeah, like exactly.

Speaker 1 (20:46):
Yeah, anything that you're getting into, like automation, I mean
you can get a.

Speaker 2 (20:51):
Lot. Recommending a show to you. Yeah, blind-spot monitoring
systems saying there's a car next to you. Like, these are all AI.

Speaker 1 (20:59):
Yeah, they're all different aspects of AI.
And like, you could argue, well, sometimes it gets a
little fuzzy. I'm like, well, so is the word intelligence.
Like, intelligence itself is a fuzzy term. I've thought that we

Speaker 2 (21:08):
would be better off if the term AI did not exist.
Me too. Like, I think it's an unfortunate choice of
words that is unhelpful.

Speaker 1 (21:17):
Ultimately, it's, I think, largely because everyone starts to jump.
They jump to science fiction, they jump to Skynet,
they jump to Terminator, they jump to this idea of
something that appears to think the way humans do. And
of course we don't even know if we'll ever reach
strong AI or general AI, however you want to define it.
We don't know if it's going to quote unquote think

(21:39):
like a human, or even think at all. It may
just be indistinguishable to us from the way humans think.
And I think Turing would argue, well, that's good enough.
It doesn't matter. If it's indistinguishable, then it might as
well be.

Speaker 2 (21:54):
You quoted somebody on your show a while back as
saying something like intelligence is whatever machines can't do yet,
which I thought was pretty good.

Speaker 1 (22:06):
Yeah. I think that it's very similar to how a
lot of philosophers define consciousness, right? They say, like, ugh,
we don't.

Speaker 2 (22:12):
Know what consciousness is.

Speaker 1 (22:14):
All we do is we define what
consciousness isn't. We haven't gotten to a point where we
can say what consciousness is. We chip away.
So what we're doing is, we've got the marble slab, yeah,
and we're chipping at it, but we haven't yet seen
the statue that's living under the slab yet. We're still
just chipping.

Speaker 2 (22:32):
I was already worried about intelligence. When you say consciousness,
like, I don't even know what to do with
that one.

Speaker 1 (22:38):
Oh no, well, I mean it often goes hand in
hand with AI, right, because people immediately assume that intelligence
and self-awareness go hand in hand with one another,
and maybe it will. We don't know. That's the point.
We don't know. But long story short, to answer the
question of how I approach this, I usually
start from the broad foundation and then I go narrow.

(22:59):
So I start with saying, first we need to acknowledge
that artificial intelligence is a very very big field, and
that this is one aspect of AI, and that we're
not going to talk about the other aspects of AI,
but we need to remember they exist, and that while
the thing we're talking about is important, and while it
has its own set of challenges and potential, you know,
rewards and risks and all the things that go with it,

(23:22):
it's one part. It's like,
you wouldn't hold up a remote control and say this
is all of technology, right? This is one thing that's
a technological gadget, but it doesn't represent all of technology.

Speaker 2 (23:35):
First of all, you'd have to find it, right, that's.

Speaker 1 (23:37):
True, which, you know, you got to have
some sort of method to figure out where it is.
This, by the way, is why 3D television
never became a thing. Who wants to look for glasses
so that they can watch True Detective season four? Not me.
So yeah, that's kind of my approach. And so I
don't think that our approaches are that different. I think
that we're pretty similar. And it's that I think we

(24:00):
both feel there's a responsibility to make certain that we
never overgeneralize or be reductive, because that feeds into a
narrative that I think actually contributes to the old FUD,
the fear, uncertainty, and doubt. And while there are things
to certainly be concerned about and to be aware of,

(24:21):
we don't want to rush into anything, you know, with
a poor understanding of the situation. I think that's true
whether you're you know, really enthusiastic and excited about AI,
I think it's true if you are really concerned about AI.
I think, you know, taking critical thinking and a really
methodical approach is absolutely key if you want to avoid pitfalls.

Speaker 2 (24:46):
Sure, it seems hard to argue against critical thinking and
a methodical approach, right, Who's going to take the other
side of that one?

Speaker 1 (24:53):
I mean, Mark.

Speaker 2 (24:55):
Fair enough, everybody, yeah, fair enough.

Speaker 1 (24:57):
Like Sam Altman maybe?

Speaker 2 (25:00):
I mean, so it is interesting to think about
Sam Altman, the head of OpenAI. I mean, one
of the really interesting things to me about AI. And
that seems different in particular when you know, we're talking
about engineers, Like, I feel like the extent to which
the engineers working on AI are worried about AI is
really interesting and different, right, I feel like the traditional

(25:24):
kind of engineer stance is like, this thing is cool,
let's build it, right? Again, that's obviously reductive and somewhat unfair,
but whatever. Whereas with AI, to some significant degree, many
of the people who are most worried about it are
the people who understand it the best. And you know,
I've heard people are you like, oh, that's just marketing,

(25:44):
and that doesn't seem true to me. First of all,
why would you market a thing by saying we should
be worried about it. And second of all, if you
just look, like, OpenAI was started as a nonprofit,
and then they, you know, needed more money, but they
became this weird capped-profit model. And then people left
OpenAI to start Anthropic, which is another one of
the big ones, because they thought OpenAI wasn't worried enough.
And then you had, you know, people calling like Elon

(26:06):
Musk calling for a six month pause on AI development.
And so I do think that people who know a
lot about AI are in fact really worried
about it, which is just interesting on its face, and different
than the way technology often works.

Speaker 1 (26:20):
Yeah, there are a lot of conspiracy theories that pop
up or fringe theories that pop up around this. By
the way, like you have the ones who say, well,
they say they're worried about AI because what they're trying
to do is shape the discussions around regulations so that
their own personal organization ends up benefiting from those regulations
while those same regulations slow down smaller companies that are

(26:41):
in the space. You had people saying, well, Elon Musk, yes,
he was arguing that there needs to be a pause,
but it's because he was launching his own AI company,
and he wanted a chance to be able to catch up.
Like You've got a lot of other fringe theories out there,
and I understand that there may be, you know, some
credibility to some of those who knows. But I think
when it gets to a point where the board of
directors of OpenAI gets together and decides, seemingly spontaneously,

(27:06):
that they're going to get rid of the CEO and
co founder of the company and then do so, I
think that speaks to a genuine and sincere concern that
perhaps the organization is moving in a direction that they
feel is fundamentally counteractive to what they had intended. And

(27:26):
of course we know they subsequently had to reverse that
decision and step down from the board of directors, because
the overwhelming support within the organization was for that co-founder
and CEO, seemingly embracing this new approach toward developing
AI that was a departure from the original organization's intent.

(27:48):
But to your point, the fact that the board of
directors was willing to do such an extreme move, even
though it was on a Friday at the end of
a news cycle, even that they were willing to do
that knowing that it would lead to them having to
leave the organization, I think that speaks, to me, of a
genuine concern. You don't go and remove a co-founder

(28:09):
and CEO for a small reason.

Speaker 2 (28:14):
Yeah, I mean sure, I'm sure that some amount of
the public worrying over AI by people in the field
is some kind of self interested behavior. But I think overall,
there are clearly a lot of people who know a
lot who are even building these things, who are genuinely worried.
Like that seems obviously true.

Speaker 1 (28:35):
Yeah. And honestly, like, anytime someone's building a technology and
they're bringing concerns up, to me that's a good thing.
And it doesn't necessarily mean that the technology is ultimately
harmful or not beneficial. But you know, I think it's
a responsible person who does ask those questions. For one thing,
it really can save you a lot of time and

(28:57):
heartache further down the line, if you're tackling these kinds
of things before they've escalated to a point where they're
actively causing catastrophe. So I like seeing that. Whether it
ends up being merited or not, well, that's just sort
of a curse we have to bear, right? If
it turns out that it was never merited, we won't know.

(29:17):
And if it turns out it was merited, we
still don't know, because they asked the questions ahead of
time and fixed the problems before they became problems. And
it's only if we take the other path that we
find out for sure, like WHOA, we should have thought
of this before we did it.

Speaker 2 (29:32):
I mean, you know, tools are complicated, right. People come
up with new tools, and then other people use those
tools in various ways, some of which enhance human well
being and some of which cause new miseries. And plainly
AI will do.

Speaker 1 (29:46):
Both, yes, and so it really becomes important that we
are really good stewards of the technology and we're paying attention,
that we're calling things out and we're addressing them as
they come up. I don't think that we're that close
yet to the doomsday problem of the superhumanly intelligent
AI that's stuck in a box and then is convincing
people to let it out of the box. I don't
think we're close to that yet. I mean, even quote

(30:09):
unquote dumb AI can do terrible things if it's poorly implemented, right?
We've seen that. We've seen accidents with autonomous cars, which
show that AI can make bad choices sometimes because perhaps
it encounters a scenario that no one anticipated, because as

(30:29):
it turns out, reality has far more variables than we
can account for when we're designing things and then something
terrible happens. That doesn't mean that the technology itself is
deeply flawed or bad, but it does highlight that we
constantly have to be asking how can we make it better,

(30:49):
and how can we make it safer, and how can
we make it so that it's actually benefiting us and
not just causing you know, maybe a little bit of
benefit but a larger amount of harm. Yeah.

Speaker 2 (31:02):
I mean autonomous cars you mentioned are an interesting one,
right because plainly there have been, you know, tragic crashes
by autonomous cars. One question there is what are we
benchmarking them against? Right, Like, there are tragic crashes with
non autonomous cars every hour of every day, and so

(31:24):
in a sense, if people were just mathematically rational optimizers,
we would all say, okay, well, let's see if you know,
over a million hours of driving, autonomous cars are safer
or less safe than human drivers. That's clearly not what's happening.
People clearly favor human drivers for some complicated set of

(31:45):
human reasons. And we're obviously not benchmarking autonomous cars against humans, who,
by the way, are terrible drivers. Like one thing about
human beings. We're really bad at driving.

Speaker 1 (31:56):
Yeah, if you look at the stats in the United
States for the number of fatalities and injuries that result
from car accidents that are just human error caused car accidents,
it's a staggering number. And when you think how much
that could be reduced through autonomous cars, and you imagine, well,

(32:17):
think of the ripple effect. It's not just the idea
that those people who died would still be alive, which
on its own is already a phenomenal thing to talk about.
That means that the impact on those people's friends and
families that would not have happened. It means that the
impact on whatever their place of employment was that would
not have happened. They would be contributing members of society.

(32:40):
That would be a phenomenal change there. So when you
start thinking about that, you realize the overall benefit is
so huge that it only makes sense to really pursue
autonomous vehicles. And as long as the data does show
that in fact, they are better drivers per million miles

(33:00):
than humans are. And that to me is something I
try and keep in mind. You have to balance it out.
I think it's the same thing as people who really
flip out when they go on a flight. They are
not directly in control of the plane, typically. I mean,
if they're flipping out while they're the pilot, that's a
whole different issue. But if you're going on a flight
and you flip out because you lack a sense

(33:21):
of control, I feel like that's a very similar thing
to how people feel when they're thinking about autonomous cars.
It's that somehow the fact that someone's not in control
brings up something very scary to a lot of people.
It also raises other questions obviously, like accountability. Who do
you hold accountable in these cases? I mean, there are
a lot of questions that as a society we have

(33:43):
to solve. It's not just the technology. But yeah, I
agree with you that that gets complicated because it involves
a lot of human feelings. And once you get to
human feelings, the whole data and stats and everything kind
of falls away. It's hard to convince someone who has
a deep-seated distrust to change their mind just by

(34:06):
showing them data, because they're always going to think of
the things that fall outside the norm as being more
important than the norm. Right, So if the accident rate
per million miles is let's say, one tenth of what
it would be for humans, they would still be looking
at that tenth and not the nine tenths. Right.
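
To make that comparison concrete, here is a minimal sketch of the per-million-miles benchmarking arithmetic the two are describing. Every number is hypothetical, chosen only to illustrate the shape of the argument (the real US figure is on the order of one fatality per hundred million vehicle miles):

```python
# Hypothetical benchmarking sketch: compare expected annual road fatalities
# under a human-driver rate versus an autonomous rate one tenth as large.
# All rates below are invented for illustration, not measured data.
human_rate = 0.012                    # fatalities per million miles (hypothetical)
autonomous_rate = human_rate / 10     # the "one tenth" scenario from the conversation

miles_per_year_millions = 3_000_000   # roughly 3 trillion US vehicle miles, in millions

human_deaths = human_rate * miles_per_year_millions
autonomous_deaths = autonomous_rate * miles_per_year_millions

print(f"Human drivers:     ~{human_deaths:,.0f} fatalities/year")
print(f"Autonomous (1/10): ~{autonomous_deaths:,.0f} fatalities/year")
print(f"Difference:        ~{human_deaths - autonomous_deaths:,.0f} lives/year")
```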

Speaker 2 (34:25):
People don't think statistically, right? Yeah, clearly. Like, in general,
statistics don't convince people of the way the world works.
I do think, I mean I would have thought autonomous
vehicles would have developed faster, right. They are the classic
thing that's been five years away for fifteen years. Yeah,
and they still feel five years away. Maybe they feel

(34:47):
a little farther right, like five years ago they really
felt five years away. It's like, okay, but this time
we mean it. Look, we got these, you know, things
driving around San Francisco. But I do think
that one, which is AI, by the way, right, it's
basically computer vision that is essential to autonomous cars. Uh,
I feel like that one's gonna happen, don't you?

(35:08):
And yes, people will be worried about it, but it's the
kind of thing, like, you know, you ride
in, like, a driverless train when you go to the
airport and you take the train from Terminal A to
Terminal C or whatever. And yes, obviously driving is more complicated,
and obviously we're used to driving the car and not
driving the train. It's not exactly the same, but like,
you get used to it. People just get used to things, right,

(35:30):
Like people didn't used to all walk around looking at
their phones all the time, and now they do. And
I'm old enough that it still seems a little weird
to me. But I'm the weirdo for thinking it's weird, right,
And I think that's gonna happen with driverless cars.

Speaker 1 (35:45):
Jacob, you just called me a weirdo, because I
do that too. Once in a while, I'll take my
smartphone out and I'll just stop for a second and
think: I have a computer in my pocket. When I
was a kid, a computer, or my Apple IIe,
was a fraction of what I'm holding in my

(36:06):
hand right now. I also have a device where I
could contact pretty much anyone I know. Like,
it'll hit me for a second.

Speaker 2 (36:18):
And no, maybe I should just check Twitter real quick. Like, yeah,
I mean, it's another tool that has
like a complicated set of effects, positive and negative.

Speaker 1 (36:28):
But you know, you had said, like, do you
think that autonomous cars are still going to be a thing?
I absolutely do think they're going to be a thing.
I think there there are companies out there that are
so invested in it that it's going to happen. The
timeline is really interesting. I would actually argue that some
other issues in AI that are not related to autonomous

(36:52):
cars could potentially keep that "five years out" going for
a while, because people have this concern about AI. I
think they port that concern over to pretty much every
kind of AI, whether it's warranted or not. Because the
scary risks of generative AI, this idea that it's going

(37:13):
to displace people out of their jobs and such, which
it very well may do, they then kind of say, like, well,
that application of AI really seems very harmful
to me. Then I think there's a tendency to kind
of apply that, even if it's not the same sort
of artificial intelligence, to other implementations. And maybe I'm being

(37:34):
a little too cynical with that, but because I've seen
so much reporting go on where there isn't any effort
made to distinguish between different types of artificial intelligence and
what their purposes are and what their limitations are. It
feels like we are conditioning the public, and by we,
I mean like mass media conditioning the public to think

(37:56):
of AI as all existing in this one single bucket.

Speaker 2 (38:00):
I feel like you consume a lot of really bad
media based on what you've been saying.

Speaker 1 (38:05):
I mean, I'm reading articles all the time, and it's
not that they're written poorly or that the people who
write them are bad writers, but they are taking shortcuts,
there's no getting around it, and those shortcuts I think
are ultimately harmful. But then I also understand, especially if
you're assigned to write a certain number of articles per week,

(38:26):
you're probably not going to take the time to sit
there and explain the intricacies of how this is different
from every other implementation of artificial intelligence. But I certainly
can take the time on my show, so I do.
Jacob Goldstein of What's Your Problem has a lot more
to say about tech and engineering and AI, But before

(38:46):
we jump into that, let's take another quick break. Let's
talk a little bit about some of the episodes
you've done on What's Your Problem. Are there
any that kind of stand out as like a particularly

(39:07):
fun or informative conversation, perhaps opening your eyes to something
that you hadn't considered before.

Speaker 2 (39:15):
Yeah, yeah, a lot, actually. You know, when you told
me you wanted to talk about the AI shows that
I've done, the interviews that I've done, I actually went
back through the back catalog and you know, we listened
to some shows and looked, and there really are a
lot of them, as you said, in sort of some
different domains, Like I've done a lot on AI and health,
which is really interesting to me. I mean, one of

(39:37):
the things that I try and find are places where
it's like, oh, there's actually real stakes here, right?
It's not just like, oh, making some kind of company
I don't care about ten percent more profitable or whatever,
which fine, like it's fine for people to do that,
it's just not that interesting to me. Whereas with health,
it's like, oh, if you can make it less likely
for me and the people I love to die, I'm interested.

(40:01):
Just recently, I interviewed this woman, Suchi Saria.
She's a professor at Johns Hopkins, and she also
has this company called Bayesian Health, and her story is
really interesting. She started out as a grad student. She
was interested in AI and robots, and as she told
me about it, she's like, you know, I was like
trying to figure out how to make a robot juggle
or whatever, just because it was fun. And she had

(40:21):
this friend who was a doctor. She was a grad
student at Stanford and her friend was a doctor at
Stanford Hospital who was taking care of premature babies
in the neonatal intensive care unit. And this was about
twelve years ago, and at this time hospitals were just
starting to use electronic health records, which is kind of

(40:42):
amazing that it was that late. Like we're talking like,
you know, I don't know, twenty twelve or something. And
it is one of the really interesting things to me
about healthcare that in some ways it's super high-tech,
you know, like these crazy CT scanners and like, you know,
everybody's got like bionic knees and amazing stuff. But when
you get to actual like care at the bedside, like
doctors treating patients in the hospital, it has remained rather

(41:03):
old-fashioned in many ways. Right? You know, twelve years
ago it was still paper charts. Today, it's still doctors relying,
you know, to a significant degree on evidence, but also
to a significant degree on essentially intuition. And so this computer scientist,
Suchi Saria, basically decides, oh, I'm going to
try and figure out how to use AI to make

(41:24):
patient care in hospitals better. Like that's basically her big project.
And she starts doing it with these premature babies twelve
years ago and in fact figures out that by using
this data that's now being captured in the electronic health record,
she can build an AI model that can essentially better
predict outcomes for these premature babies than the standard of care.

(41:48):
But it's so early that it doesn't really go anywhere, right, Like,
hospitals are just starting to use electronic health records, and
a lot of doctors don't want to hear from some
random computer scientist. They've studied medicine for a long time
and they've treated a lot of patients and they know
what they're doing, and so it takes a long time,

(42:09):
but she eventually starts this company, and more recently she
decided to go after sepsis, which is this really common
complication in hospitalized patients. It's basically a terrible
infection, or your body's response to infection, and you can
die from it. Lots of people die from it. It's complicated,
it's somewhat hard to diagnose. If you can diagnose it sooner,

(42:33):
the patient has a much better chance of surviving, right, So,
very high stakes and fundamentally, you know, if you think
about what AI is today, people generally mean machine learning
when they say that, right, as you know, and what
machine learning is really good at doing is taking a
lot of data and matching it to patterns, right? Saying, oh,

(42:55):
when you have all of this set of data like this,
you tend to get this kind of outcome, which really
is what a medical diagnosis is, right, Like, that's what
a doctor is doing when they look at a patient
who has some set of symptoms, age, everything, and they say, oh,
this person might have sepsis. Let's do a test to see.

(43:16):
And so she built this system and it basically works.
They did some trials. But a really interesting thing she said,
and it goes back, Jonathan, to something you were talking about
earlier in the conversation, is, like, she realized getting the
AI to work. You know, it's not one hundred percent,
but to usefully flag that a patient might have sepsis
is essentially, like, maybe half the problem,

(43:38):
maybe not even. What's really hard is getting super busy
doctors who are getting a million alerts all the time
to believe that this alert is worth paying attention to.
And like you were talking about UI, she was like,
it's totally a UI problem. Like the math was the
easy part, Like you know, getting it so that instead
of doctors having to spend one minute when this alert
comes up, they can spend three seconds. Like that was

(44:00):
actually a huge breakthrough, like, more than the AI model.
So that's an example of an episode where there's
like a cool AI piece, big stakes, but also this
interesting human UI, kind of messy humanity, piece.
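
As a rough illustration of the pattern-matching idea Goldstein describes, here is a minimal sketch of a sepsis-style early-warning classifier. The features, thresholds, and data are entirely invented; this shows only the shape of such a system, not Bayesian Health's actual model:

```python
# Toy early-warning sketch: fit a classifier on synthetic patient records,
# then flag a new patient whose predicted risk crosses a threshold so a
# human clinician can order a confirmatory test. Everything here is
# invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic features: [heart rate (bpm), temperature (C), WBC count, age]
X = np.column_stack([
    rng.normal(85, 15, n),
    rng.normal(37.2, 0.8, n),
    rng.normal(9, 3, n),
    rng.normal(55, 18, n),
])
# Toy labeling rule: elevated heart rate, fever, and high WBC raise risk.
risk = 0.04 * (X[:, 0] - 85) + 1.2 * (X[:, 1] - 37.2) + 0.3 * (X[:, 2] - 9)
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new patient; the alert prompts a human, it does not diagnose.
new_patient = np.array([[118, 38.9, 16.0, 67]])
p = model.predict_proba(new_patient)[0, 1]
print(f"Estimated sepsis risk: {p:.2f}")
if p > 0.5:
    print("ALERT: elevated risk -- consider confirmatory testing")
```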

Speaker 1 (44:14):
Yeah. I love talking with folks who really tackle
those sorts of challenges. I remember chatting with some roboticists
who were focused not on the robotics side necessarily, not
on how the robot actually functioned, but rather how to
design the robots so that they could interact within a

(44:35):
human environment in a way that did not disrupt that
environment at all. And it turns out that's a really
tough challenge, right? Like creating a robot that can navigate
through a human environment and still do useful things. So you
have to design the robot so it can go through
an environment that we have designed to make sense to us,

(44:56):
which doesn't necessarily make sense to a robot, but then
to also do it in a way where people
aren't just stopping everything they're doing to watch the robot
bump into a wall fourteen times before it finds the doorway.
So yeah, I think that those conversations can be really
fascinating because it does open up your eyes to other

(45:16):
issues within technology that don't necessarily relate directly to how
the tech functions, but rather how do we interact with that,
what happens when you have the intersection of human experience
and technology. Those are really really great. We need to
take one more break to thank our sponsors, but we'll

(45:37):
be back with more conversation about communicating technology to the
general public. So another episode, I just wanted to call out.
We don't have to talk about it, really, but I
wanted to call it out because you spoke with someone I

(45:57):
had spoken with as well on a different show: Sonia Kastner,
the founder of Pano AI, which is a company that
uses cameras that are co-located, typically on cellular towers,
to monitor for forest fires in remote places. It
uses AI to look for signs of forest fires,
and it flags anything that it suspects is a

(46:20):
forest fire. A human reviews the footage, so it's not
just relying upon AI, and if the human determines, oh
my gosh, yes, this does look like the beginnings of
a forest fire, they can then send an alert to
the authorities that would be responsible for responding to that
and potentially cut off disasters before they could happen. When

(46:41):
I spoke with her, it was at a time when
the infamous Canadian forest fires were really
ravaging Canada, and so it was very clear that this
sort of application of artificial intelligence had a potentially like
a really beneficial implementation where it could save property and

(47:03):
people and all sorts of benefits beyond that. You think
about even just cutting back the amount of air
pollution that affected all of the Northeast. You know, all
those folks who had to breathe smoky air for months
because of this, Like you start again, I always talk
about the ripple effect. You always want to look at
how this is rippling outward because you start to realize, oh,

(47:26):
this has even greater benefit than just the ground
zero point, right, It has all these other things that
will end up benefiting people, most of which you won't
even realize because you have prevented the bad thing so
you don't experience the bad thing. And so I just
wanted to call that out for listeners who might be
looking to see where to start off, because you've got

(47:47):
quite a few episodes.
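
The workflow described above is a classic human-in-the-loop design: a model screens a huge stream of camera frames, and only detections a human confirms trigger an alert. Here is a minimal sketch of that shape; every name (Frame, detect_smoke, and so on) is hypothetical, not Pano AI's actual API:

```python
# A sketch of a human-in-the-loop detection pipeline: a model screens
# camera frames for possible smoke, and only detections confirmed by a
# human reviewer trigger an alert. All names here are invented for
# illustration.
from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: str   # e.g., which cell tower the camera sits on
    image: bytes     # raw frame data

def detect_smoke(frame: Frame) -> float:
    """Stand-in for a computer-vision model; returns a smoke-likelihood
    score in [0, 1]. A real system would run a trained detector here."""
    return 0.91  # hypothetical high-confidence detection

def human_confirms(frame: Frame, score: float) -> bool:
    """Stand-in for the human review step before anything escalates."""
    print(f"Review queue: camera {frame.camera_id}, model score {score:.2f}")
    return True  # the reviewer agrees it looks like a fire start

def alert_authorities(frame: Frame) -> None:
    print(f"ALERT sent: possible ignition near camera {frame.camera_id}")

FLAG_THRESHOLD = 0.8  # only high-scoring frames reach a human

def process(frame: Frame) -> None:
    score = detect_smoke(frame)
    if score >= FLAG_THRESHOLD and human_confirms(frame, score):
        alert_authorities(frame)

process(Frame(camera_id="tower-142", image=b""))
```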

Speaker 2 (47:48):
She's really interesting. And you know, one of the things
that was interesting to me about Sonia, the person who
started this company, was she has this big idea that
actually goes beyond wildfires. That's what they're doing now,
that's their business now. But her big dream is
about data and adapting to climate change, basically. Right, there

(48:10):
are more wildfires because of climate change. But she's like, look,
we're going to be spending trillions of dollars over the
next decades to mitigate the effects of climate change, to,
you know, deal with sea level rise and flooding. And, like,
you know, flood maps are one hundred years old,
and so if we can, in different domains, bring data
to bear on, like, where should we prioritize the money.

(48:30):
Where are there going to be more floods if we
can bring technology to bear? And in her case, what
that really means is data. Right, Like the sort of
substrate of AI. The thing AI needs to be clever
is a lot of data. If we can bring data
to that, it'll just work better. For every million dollars
we spend, for every billion dollars we spend, if we
can be more smart about it, we will get better results.

Speaker 1 (48:53):
Right. It's like the difference between being proactive and reactive, right,
being able to plan for something and
minimize its impact, as opposed to, Oh, now we have
to clean up because this catastrophic event has happened, and
how do we deal with that? And I think when
we look back at some of those catastrophic events that
have happened in our lifetimes, you can really see the

(49:16):
benefit of mitigation versus reaction and cleaning up, you know,
the ability to save lives and prevent damage. It's tremendous.
So certainly there are plenty of artificial intelligence applications that
would be incredibly helpful when put to the proper use.
So again, I think if there's any lesson to take

(49:39):
home from this conversation, it's to use that critical thinking, try
not to be reductive. I know that I can get
really cynical about artificial intelligence, but again it's mostly because
of the marketing language around it rather than the technology itself.

Speaker 2 (49:53):
Yeah.

Speaker 1 (49:54):
I think it's also because, like, I see a lot
of similarities in the AI evangelists to what I saw with
NFT evangelists. And, well, we all know how that went.

Speaker 2 (50:05):
Yeah, you know, I think AI has more legs than NFTs.
I feel like I'm not going out on a limb
to say that.

Speaker 1 (50:12):
I certainly think it has more potential beneficial uses than
NFTs. I think NFTs probably have some benefits too,
but the problem is that no one was focusing on
those when they were going crazy about them.

Speaker 2 (50:25):
I mean, one of the interesting things to me, when
you know, when you think about what should we worry
about with AI, there's sort
of like a barbell, where, like, the thing you hear
about most is it's going to take all our jobs,
or a robot's going to kill us all, right? That's
the, like, amazing end. There's an interesting other end of the
spectrum that, you know, some of the people I've talked
to on the show have talked about, which is the

(50:46):
risk of people over-relying on AI, right? People worry that,
you know, critical decisions are going to be made based
on AI outputs that are not that good, that are
not that robust, that are not that reliable. And you know,
one of the people I talked to, like, runs a
company basically to stress-test AI, to catch AI's mistakes,

(51:07):
and he talked about just, like, really dumb mistakes that
he sees all the time, you know. He gave
the example of, like, on a life insurance application, if
someone puts their year of birth instead of their age, right?
So you put, whatever, nineteen eighty-four instead of forty,
and the AI will actually think the person is one thousand nine
hundred and eighty-four years old and will want to
charge them a lot for their life insurance, because boy,

(51:29):
if you're that old, you're gonna have a lot of
health risks. And I said to him, like, is it
really that dumb? Like, is it really, or are you being,
you know, is this hyperbole? He said, no, it's really
that dumb. And so that is an interesting side of
it to me, right? Like, oh, there's
a risk from AI being too smart, and there's also
a risk from AI being not smart enough if people

(51:50):
are over-reliant on it.
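
The failure mode he describes is exactly the kind of thing a simple input sanity check catches before data ever reaches a model. A minimal sketch, with invented field names and ranges rather than any real insurance system's logic:

```python
# Guard against an 'age' field filled in with a birth year (1984 vs. 40).
# The ranges below are illustrative assumptions, not a real system's rules.
from datetime import date

def normalize_age(value: int) -> int:
    """Interpret a value that might be a birth year, reject impossible ages."""
    current_year = date.today().year
    if 1900 <= value <= current_year:   # looks like a birth year
        return current_year - value
    if 0 < value <= 120:                # plausible human age
        return value
    raise ValueError(f"Implausible age input: {value}")

# Without a check like this, a naive model just sees age = 1984 and prices
# the policy as if the applicant were nearly two millennia old.
print(normalize_age(1984))  # birth year -> an age around 40, depending on today
print(normalize_age(40))    # already a plausible age -> 40
```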

Speaker 1 (51:51):
Which you would hope that people wouldn't fall
into that trap. But at the same time, you just
look back over the history of technology: when
we've had technology that helps remove certain tasks, we
just let them go. So, for example, I can probably
rattle off maybe half a dozen phone numbers of people

(52:13):
that I know and love, but all the rest are
just buried in my phone contacts because I don't need
to have them stored in my own brain. I have
offloaded that to technology.

Speaker 2 (52:24):
Half a dozen is a lot. Are those from twenty
years ago? I don't... I haven't memorized anybody's
phone number in a long time.

Speaker 1 (52:30):
My parents haven't changed their phone numbers since I was
a child, so that one I remember. Honestly, if I'm
being really honest, it's probably more like three.

Speaker 2 (52:39):
Might be down to two at this point when my
mom got rid of her landline a few years ago.
I definitely don't know my mom's cell number.

Speaker 1 (52:46):
Yeah, I know my parents' landline number because they still
have it. I couldn't tell you their cell numbers either,
but that's kind of a simple example, and you know,
obviously it's going to be a lot more complicated when
you're talking about offloading, you know, potentially decisions to AI.
But I think the argument I can make is that
there's precedent. So I think that concern is well warranted, right.

(53:08):
I think it also gets back to that concern about
autonomous cars. Everyone worries that the autonomous car they're getting
into is the one that's being driven by a crazy robot.
So it's odd also to think of a world where
people might be nervous to get into an autonomous car,
but they might be willing to have an AI complete
their taxes for them, for example. It's a weird world

(53:31):
we live in.

Speaker 2 (53:32):
Hey, taxes, that is interesting. Are you, do
you have an AI accountant? I mean, I like my accountant, but.

Speaker 1 (53:37):
My accountant's pretty good and I'm ninety-seven percent sure
she's human, So I think I'm in the clear on
that one. But I could easily see that being a thing,
especially for something like the United States, where the tax
code gets complicated enough where people like you and I
we feel the need to go out and reach out

(53:58):
to a professional because handling it yourself is daunting.

Speaker 2 (54:01):
You could imagine, like, a happy story is: the
AI does a lot of the work. An accountant can
have more clients and charge each of them less, and,
like, go over the work of the AI. Right?
And this is sort of mundane, right. The
reason people don't talk about outcomes like that is because
it's boring. But there are a lot of boring incremental gains.
If I could pay my accountant half as much and

(54:22):
my accountant could have twice as many clients and do work
that's maybe a little better or at least as good.
Everybody wins. I mean, I suppose at the margin there's
need for fewer accountants in that world, but like that's
okay with me, right, Like those people who would have
been accountants can go and like, you know, work on
AI healthcare or something.

Speaker 1 (54:40):
Yeah. I like the people who argue that instead of
calling it artificial intelligence, maybe call it augmented intelligence, where
the goal is to augment our abilities to get things done.
And I think it would be a lot easier to
do that if we heard fewer stories like a CEO
suggesting that eight thousand unfilled jobs will ultimately be filled

(55:00):
by AI and not humans. If we heard fewer stories
like that and more stories about, no, we implemented
this so that people could respond to customer concerns at
a rate that's five times faster than before, which means
they can resolve your issue and you're spending less

(55:21):
time frustrated and sitting on hold. Like I think that's
the direction that everyone wants it to go, and they're
just worried it's going to go in the direction of, hey,
those coworkers you used to like, they're all replaced by
algorithms now. Like, that's where we need to really go.

Speaker 2 (55:38):
Yes, I mean, technological unemployment is complicated, right? Like, people
have certainly been afraid of it for hundreds of years
now. Today, let's talk about the
Dutch weavers, ha, yeah, right. I mean, you know, unemployment
is below four percent today, wages are going up. People
get angry when you point that out. But it's true,

(55:59):
and it's possible that AI will be bad for workers,
but we don't know yet. Like, that's one where I
just don't know, and I don't think anybody knows the
answer to that one.

Speaker 1 (56:10):
Yeah, yeah. And therein lies the scariness. Well, Jacob,
thank you so much for joining the show. This has
been a really fun conversation. I've really enjoyed it. I'm
sure my listeners have too. And just to remind everybody,
your podcast is What's Your Problem. You have these kinds
of conversations with decision makers and the people who are

(56:32):
actually creating the systems we've been talking about, and who
are actively tackling these questions and determining how to address them.
So I highly recommend to my listeners you check out
What's Your Problem. You've got so many different episodes, I'm
sure like there's going to be one on there that's
going to speak to every single person who listens to

(56:53):
my show.

Speaker 2 (56:54):
Thank you so much. That's such a kind generous thing
to say, and thanks for having me. It was great.

Speaker 1 (57:00):
I hope you all enjoyed this conversation I had with
Jacob Goldstein. It was a pleasure having him on the show.
I know this was a long one. We literally could
have gone another hour easy, so I had to use
some restraint there. I hope all of you out there
are well. I'm looking forward to having a lot more
interviews in the future. In fact, I've got a couple
that I'm working on right now to kind of line up,

(57:23):
So that's really exciting for me. I love having another
point of view come into the conversation. I hope you
do too, and I will talk to you again really soon.
TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio,

(57:43):
visit the iHeartRadio app, Apple Podcasts, or wherever you listen
to your favorite shows.
