July 3, 2024 73 mins

Warren Hearnes, former Chief Data Scientist of Best Buy, sits down with Ryan to discuss the past and future of Machine Learning and AI, his career, and his perspective on building a career around the work you love.

Connect with Warren on LinkedIn: https://www.linkedin.com/in/warrenhearnes/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to the Good Fit Careers podcast where we explore perspectives on work that fits.
I'm Ryan Dickerson, your host.
Today's guest is Warren Hearnes.
Warren was the chief data scientist of Best Buy.
Best Buy, ranked 94 on the Fortune 500 list, is the largest consumer electronics retailer

(00:20):
in the United States, generating $43 billion in annual revenue with 85,000 employees across
1100 locations.
Warren started his career as an officer in the US Army.
Warren's career in machine learning and data science started at Georgia Tech in the 90s.
Since then, he's worked with groups like Lucent Technologies, UPS, Home Depot, and Cardlytics

(00:43):
before becoming the chief data scientist with Best Buy in 2022.
Warren, thank you for being here.
Thanks, Ryan.
I'm enjoying being here.
Glad to have you on the show.
So to help us get started and build our frame of reference, would you tell us a little bit
about the work that you do today?
Sure.
So I most recently left as chief data scientist at Best Buy.

(01:08):
So for about two years, from 2022 to 2024, I ran a team of almost 50 data scientists
for Best Buy.
I think most people know what Best Buy does.
It's the largest consumer electronics retailer in the US, with over 900 stores.
So our team worked on a number of areas where we used optimization, machine learning, and

(01:33):
artificial intelligence to help supply chain, to help labor, to work through marketing campaigns,
to work on the new generative AI, and any number of things across the enterprise.
In terms of your foundations and where you come from, would you tell us a little bit
about what you were like as a kid?
Yeah.
So I was raised in a small farming community in southeast Missouri.

(01:57):
It was a small town, 5,000 people, and that was the largest town in the county.
So it's a little different than some of the areas like Atlanta that I've lived in for
a long time.
And I've always loved math and loved solving problems.
And I've been good at it from a young age.
I also loved the idea of serving my country in the US Army.

(02:18):
So as a kid, I grew up, went through high school, was good at math, and decided to go to West Point for my undergraduate degree. I started in 1985 and graduated in 1989.
So basically, we can talk about the stuff at West Point and math later. But early on in my high school career, I would compete in different math contests around

(02:44):
the area and things like that.
And found out that I was very good at it.
So it's always fun to find something that you love to do and that you're good at.
You found that you loved math.
It was one of those early passions, along with serving your country.
Tell us a little bit about how your education really played out.
I wanted to go to West Point.

(03:06):
There were a few reasons why. I wanted to serve.
It was also a good way to pay for your own education here in the United States: you serve in the Army and the government will pay for your college education.
So I entered West Point when I was 17 years old.
It was very important for my long-term success to learn the discipline and push myself

(03:32):
beyond the limits that I learned at West Point.
So starting there at 17 and going through that progressive leadership development,
I learned that the minimum was not what you strive for.
I'll give an example.
I knew that 40-something push-ups was the minimum that you needed to do on the first

(03:54):
day.
I could do about 50 and I got there and I did my 50 push-ups in two minutes and I thought
that was great and I looked around and I saw other people that were doing 100 or 110
or 120.
So it was very important for my development at that young age.
So I also learned that there were areas of mathematics that I didn't know about and specifically

(04:20):
it was operations research.
So the idea of using simulation or mathematical modeling or linear or integer programming
to do optimization, that's where I first learned all about those areas in math.
So those four years were super important for me.
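For readers new to the operations research ideas mentioned here, a toy example of the kind of problem integer programming addresses: choosing 0/1 decision variables to maximize an objective under a constraint. The values, weights, and capacity below are invented for illustration, and a brute-force search stands in for a real solver such as branch-and-bound:

```python
from itertools import product

# Toy 0/1 integer program (numbers invented for illustration):
# pick x_i in {0, 1} to maximize total value subject to a weight
# capacity -- the classic knapsack formulation.
values = [10, 13, 8, 7]
weights = [5, 6, 4, 3]
capacity = 10

best_value, best_choice = 0, (0, 0, 0, 0)
for choice in product([0, 1], repeat=len(values)):
    weight = sum(w * x for w, x in zip(weights, choice))
    value = sum(v * x for v, x in zip(values, choice))
    if weight <= capacity and value > best_value:
        best_value, best_choice = value, choice

print(best_value, best_choice)
```

A real integer-programming solver prunes this exponential search with bounds instead of enumerating every combination, which is what makes problems with thousands of variables tractable.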
Then getting out, I served for some time in the Army.

(04:41):
I did the standard officer basic course.
I also did Airborne School, I did Ranger School and a couple of other things.
So all of those things were integral to developing me as a leader and as somebody that pushed themselves beyond. Then, fast forward to when I got out of the Army in 1992, I started

(05:05):
my masters and PhD at Georgia Tech in an area of machine learning called reinforcement
learning.
We were teaching computer-simulated robots how to improve and how to do things
through positive and negative reinforcement, just like you would with a pet or a person.

(05:28):
That's like, you give it a plus one if it did something good and you give it a negative one if it did something bad, and eventually it tries things until it learns to succeed.
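That plus-one/minus-one loop can be sketched as a tiny tabular Q-learning agent. Everything here is invented for illustration (a five-state corridor with a reward at one end), not a system from the episode:

```python
import random

# States 0..4 in a corridor; start at 0, goal at 4.
N_STATES = 5
ACTIONS = [-1, +1]  # step left or right

def step(state, action):
    nxt = state + action
    if nxt < 0:
        return 0, -1.0, True       # "minus one": fell off the left edge
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # "plus one": reached the goal
    return nxt, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Mostly exploit what is known, but keep trying things.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned policy: the highest-value action from each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After enough failures and successes, the value table should steer the agent to the right, toward the goal, from every state: exactly the trial-and-error improvement described above.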
How amazing.
And in the early 90s, how did you become aware of reinforcement learning?
How did you decide that was going to be your path before this was all the rage that it
is now?

(05:49):
Yeah, that's a great question.
So I didn't know about reinforcement learning but even in the early 90s, I was reading about
neural networks.
So there were books at the time, this was before the internet so you couldn't go out
and just Google things.
But what I had learned was that we could teach and use math to create systems that would

(06:17):
act similar to sort of an animal or a young person. That was the early days of artificial intelligence
in that way.
I will say that neural networks were first proposed, I think with the perceptron, back around the 1940s, and in the 1970s there were backpropagation algorithms

(06:38):
that were proposed, and it was the 1980s when they started to really take off with the computing
power and some of the data.
So another interesting thing is that a lot of what we talk about in terms of machine
learning is not necessarily a new idea, we're just going faster and faster and faster.
One other point is, I typically will talk about how in January there was a paper called Steps

(07:03):
Toward Artificial Intelligence, and that listed out five different things including search,
pattern recognition, reinforcement learning and things like that.
What's important about that is the next step, the next slide that I show is that that wasn't
in January of this year.
That was January of 1961.
That was Marvin Minsky in 1961 talking about the same things that we're talking about today.

(07:29):
So I do think that it's important for people that are just getting into this field to go
back and read and understand the history because there are quite a few things that were proposed
before we had the data and the compute power to do these things.
So I think that's an important point for young people getting into this field.

(07:53):
Oh, I love that.
We will dig into the history of reinforcement learning, machine learning, the foundations
of generative AI here in a minute.
Before we get there, in terms of that first full-time job, from a career coaching standpoint,
one of the things that I find fascinating is people transitioning from military service
into the corporate world.

(08:14):
And it looks like you move from military service into academia and then from academia into
the private sector.
What was that process like getting your first proper full-time corporate job?
Yeah, so the first proper corporate job was at Lucent Technologies.
And you're right, it is tough to find that first job because you don't have the experience

(08:37):
that they're looking for, but you have the skills.
And so you need that break.
And I will say networking is how I got a number of my first roles.
Most of my roles have been either networking or recruiters.
But that first role came through a network that I created at Georgia Tech, those of us that were working

(08:58):
on our PhDs in that area.
There were some really interesting problems at Lucent Technologies.
At the time here in Atlanta, that plant was the world's largest fiber optic cable plant.
And so we had 4,000 employees.
They were doing quite a bit of work both in the manufacturing of the fiber and the manufacturing
of the cable.

(09:18):
And there was a really interesting integer programming problem that they asked Georgia
Tech to work on.
The people that were working on that, when they were done, they all wanted to be professors.
I knew I wanted to be in industry.
So I raised my hand and said, I can go up there and I can work on that and I can be

(09:40):
a full-time employee.
But it was really about the network.
It was being able to go in and talk to them about the problems that were important to
them and also to make sure that they understood that the math and algorithms were the right
way to go.
What great foresight for you to be able to see while you were doing the applied research

(10:04):
that you needed someone to take a chance on you.
You had the skills, you had the capabilities, but somebody needed to see it.
And somebody needed to be able to say, okay Warren, we want to employ you.
We want you to be that full-time employee.
I think most folks today seem to really struggle with getting that break, finding someone to
take a chance.
Good for you for figuring it out back then.
Thank you.

(10:24):
In terms of the things that you learned along the way, your military education and experience
within your service, the first research work that you were doing, now you're in a full-time
job, what did you learn about yourself in that first full-time industry position?
So what I learned is that you need to let people know what you're doing and why it's

(10:48):
important.
So I grew up in the military from 17 up through, I guess, 23 or 24, you can do the math.
In the military, your commanders, the people that were in charge of you, had done your role five or ten years previously.
And so they knew what you were doing and what was expected.

(11:12):
Now getting out into an area, either the reinforcement learning, the machine learning,
which we weren't using a whole lot at Lucent Technologies in the 90s at the time, but the
integer programming, most of the people that were either your direct manager or working
with you, they were not experts at that.

(11:33):
And what I learned early on after one or two of the performance reviews actually had a
boss say to me, I'm not really sure what you do.
So it's interesting when your boss says, I'm not really sure what you do.
They knew what the results were, but they didn't know what I did.

(11:54):
And I just had assumed that people knew exactly what we were doing and how we were doing it.
You know, that was one of the things that I learned is that you need to manage up a
lot more than I had been doing in my military career, because in the military, your results

(12:16):
speak very, very loudly.
But in a corporate job, you've got to talk about your results, but also the resources
that you need, what it is that you plan to do in the future and how you're doing it and
how it's better than other ways of doing it.
Sure.
That makes a lot of sense.

(12:36):
Well, you've had such a prolific career, and you've had the, I don't know, perhaps good fortune
or the skill to figure out that this is what you wanted to do from a very early age.
Was there a specific moment for you when, whether it's reinforcement learning, machine
learning, analytics, data science, however you want to describe it, that was what was

(12:58):
going to be your path?
I will say that it sounds like I knew that things like reinforcement learning were going
to be big and I'll be honest, I didn't.
When I was at Georgia Tech in the early to mid 90s, there were only a few professors
that were doing any funded machine learning research.

(13:19):
When I started working on that, there was some pushback from a lot of the other academic
professors and students that said, "Well, if you can't prove it's the right, if you
can't prove it's the optimal answer, then it's not worthy of research."
There were a few of us that were saying, "Well, we can't prove it's the best answer,

(13:43):
but it is getting better all the time."
There's something really interesting and intriguing about a system that improves over
time, which was the definition of machine learning at the time.
There was so much pushback that I really thought I was getting into a career where I would
not have financial success.

(14:04):
I would do what I love to do. Because the point we're talking about is when I solve
a problem and it succeeds and either makes money or reduces costs or makes the customer
happier.
That is exactly what I'm looking for.

(14:26):
I think there's a Mark Twain quote that says, "I can live two months on a good compliment."
It's similar to that.
If somebody tells me, "Hey, what you did was awesome, this math algorithm really saved
the day."
Everybody wants to make more money, but there's a lot to be said about that positive reinforcement.

(14:48):
We're getting back into reinforcement learning here. That positive reinforcement is when
I knew, when I had either instructors at West Point or other professors say, "That was
a really interesting solution you came up with to that problem."
That was when I knew I was in the right area, but I will not say that I knew that machine
learning was going to be big.

(15:09):
I really thought that I was moving into an area that I loved but wasn't going
to be lucrative.
It turns out I'm fortunate that it did work out.
What a beautiful way to follow that early love of math and solving puzzles, bringing
that on to the research side and exploring this and being like, "Even if it isn't lucrative,

(15:32):
this is going to be what I love to do."
Now, today, it's probably the hottest topic of the decade, where everyone
wants to be, and you were there quite a few years ago.
I've enjoyed it.
It's been a great ride.
In terms of what you needed to learn to be proficient and to get up to an expert level

(15:54):
to become a C-suite executive in this kind of category, what did you have to master to
become proficient in this field?
That's a great question. There's a difference between how I got there and how you might get there now, in the sense that, as we've already talked about, when I started doing this, machine learning wasn't a big deal.

(16:17):
The way I got there was by moving up in the ranks and solving progressively harder and
harder problems.
At Lucent, I solved some pretty hard problems.
At UPS, I solved some really hard problems.
Then I started moving back into leadership.
I had been in leadership in the Army and I started moving back into leadership.

(16:42):
Then it becomes like a trade-off.
When you're the person solving the problem, you are the expert in the room.
There's nobody in that room that knows more than you about that problem.
As you start to move up into manager, into director, and then into C-suite, then you really
have to let go of being the expert in the room and you have to be the one that is asking

(17:06):
the right questions.
You've got to know enough to ask the right questions.
You've got to be able to get the resources that your team needs, whether that's training
or compute or get the data into the right place.
Then at the very highest level, as a C-suite executive, you're basically setting the strategy
for the team.

(17:28):
I'd say in my career, it was proving yourself each step along the way.
I will say that nowadays, you can get into being a C-suite executive in AI and machine
learning, not necessarily coming up through the ranks.
You can get there.
At a lot of companies, you report into the chief technology officer.

(17:49):
Now, that person came up through software engineering and technology, and sometimes analytics and AI comes up under data.
I see these days in this particular area, you've got to have the leadership skills and
you've got to have the technology skills and you've got to know in a broad sense everything

(18:14):
that AI and machine learning and optimization can do and then go out and find those people
that can do it for you.
Okay.
If I'm hearing you right, your path was in an era where machine learning and generative AI
were not necessarily a proven, this-is-where-it's-going-to-be kind of thing.

(18:34):
Your military experience gave you the foundation of leadership.
Your academic and research experience gave you the ability to begin solving harder and
harder problems along the way.
Then as you grew and as the field matured, you had the opportunity to mature with it and
be able to step into progressively larger and more senior roles.
Then there was eventually an inflection point where you went from, "I'm the one solving

(18:58):
the problem," to, "I'm the one garnering the resources," or paving the road for all
the experts in the room to do the work.
If I'm hearing you right, it sounds like today, if someone wants your job, if they want to
grow into a chief data scientist or a chief analytics officer, it might be through software
development.
It might also be through math.

(19:19):
It might be through data science.
There are a whole range of other ways to get there, but to be successful in the C-suite position,
you're going to have the combination of the expertise as well as the leadership responsibilities.
Right.
That's exactly right.
I think these days, I still am of the personal belief that the best chief data scientist or
chief AI officer or chief analytics officer is somebody that did come up through the ranks.

(19:45):
That's my bias.
I think that because if you aren't asking the right questions and making the right estimates
to your fellow executives, let's say, for instance, you didn't come up through the ranks
and you tell the CEO, "We'll get that done in two months."
Whereas if you'd ever done it, you'd know it's at least a six-month project.

(20:08):
Now your team is caught up between a rock and a hard place of, "How do we get this done?"
I do think that the best chief data scientist and chief AI officers and chief analytics
officers will come up through the ranks.
As things are getting more and more integrated, the AI is so much part of technology and the machine

(20:29):
learning is so much part of technology, that in the future, maybe all of these things
will be in one single group rather than a separate group.
I'm not sure.
The great thing about this field is it changes all the time.
If you love the idea of learning new things, then this is exactly the field that you want

(20:51):
to be in.
Warren, would you tell us a little bit about your most recent work at Best Buy?
As the head of data science and the chief data scientist.
Sure.
At Best Buy for that two-year period, we had a separate data science team as part of a
larger analytics team.
There was a decision science team and a data science team.

(21:14):
Then in technology, there was an applied machine learning team.
Where we fit into all of that is that there were about 45 to 50 data scientists on my
team.
If you think about an analytics product lifecycle, all the way from idea through deployment,
our team really worked on taking things from the idea up through the pilot phase.

(21:42):
Then we turned it over to the applied machine learning team for deployment.
That's what the team did.
My personal role as chief data scientist was mainly working with other senior leaders
and other executives to make sure that we had the right resources, that we had planned
to do the right things.

(22:03):
Most importantly, one of my roles was to talk about what is possible because not every executive
around the company knows. They might have a cursory idea of what machine learning or
optimization or AI or now generative AI is, but they don't know exactly what might be

(22:25):
done in a cost-effective manner.
That makes sense.
Would you tell us a little bit about the team that you built and the team that you led?
At the time, the team, before I got there, had come together in a center of excellence
model.
You know, some companies will have data scientists and analysts in functional silos.

(22:48):
Prior to me getting to Best Buy in 2022, they had brought the analytics team together, decision
scientists and data scientists from across the company to do things that were helping
in either supply chain or customer service or store operations or marketing or
just anything about the customer or the way that Best Buy operated.

(23:10):
My team was functionally two different teams, one supporting mainly store operations and
supply chain and another supporting everything that was known about the customer.
They worked with decision scientists who also worked with the business to make sure that
we were solving the problems that the business needed us to work on and that we were doing

(23:34):
things in a manner that was also compliant with all of our risk and technology needs.
Basically, we were a center of excellence but still functional to the business to work
on supply chain, customer and store operations.
How interesting.
What would a good year have looked like for you and your team?

(23:56):
A good year for our team would be that we exceeded all of the OKRs, the objectives and key results, that we set during the planning in the prior year.
There's a whole lot of planning, especially at the executive level.
I was surprised, coming from being an executive at a smaller company to being an executive

(24:16):
at a Fortune 100 company, a big retailer.
It's amazing how much planning goes on prior to the next fiscal year.
It's months and months of elaborate planning of, we're going to do X, Y, and Z.
Here are the OKRs.
We're going to set these to be something that we can attain but it's still a stretch for

(24:36):
us.
Exceeding those would be a good year.
So I think more importantly for my data science team was that we would use what we knew about
the business and what we knew about machine learning and AI and optimization that had
been developed over the last 12 or 24 months, all the new things.

(25:01):
We would figure out how we can use those new things to make a difference, either reduce
cost, improve service, increase revenue because those are the things that the executives weren't
thinking about last year.
And so what we would find is some of these new ideas that we would allow the team to try
and fail on some and try and succeed on others turned out to be some of our most important

(25:27):
objectives for the coming year.
Generative AI is a great example of that.
Generative AI was not in any of the planning for the fiscal year in which generative AI exploded
onto the scene.
But the data science team went out and created proof of concepts and pilots.

(25:49):
And the way we defined the difference between a proof of concept and a pilot is: a proof of concept
shows that it works.
A pilot shows the people in the business, for themselves, that it works.
It's actually making the decision.
And so we did proof of concepts early on for virtual assistants, for chatbots, for discovering
intent in what customers were saying in their feedback and things like that.

(26:12):
So all of these things still will take a long time to get from pilot into deployment
and into mass production.
Those are the types of things that you've got to give your team, especially when they're
in the business.
And they love to do some interesting work.
You've got to give them some time to try something new.

(26:34):
And also, you know, don't hold them accountable if a POC fails.
That's fine.
But give them time to work on something new.
And POC is proof of concept.
Yes, I'm sorry, proof of concept.
Yes.
Very few people ever have the opportunity to become a chief data scientist, much less
become a chief data scientist at a Fortune 100 company.

(26:55):
What did you expect it to be like and what was it actually?
That's great.
So I expected, you know, when you have over 900 stores, I think at
the time I started, we had 950 stores.
And you are serving such a large swath of the US population that there's going to be

(27:19):
some really, really interesting problems to work on.
So that's what really attracted me to Best Buy. Number one, you know, working at big companies, working at UPS, working at Lucent.
When you have the scale of the problems, then you can put an algorithm out there that

(27:41):
will make a change.
When I was at UPS, there was a change in an algorithm that we made that in the first year
increased yield by about $50 million.
But it's because you're doing so many packages.
But that's what I was really interested in at Best Buy is the breadth of problems.

(28:02):
Supply chain, labor, it was marketing.
You're just spending so much money on certain things.
Any little bit will help.
And that was what attracted me.
How interesting.
Just to indulge my curiosity, were you at UPS in the era of the, we're not going to
do left turns anymore and it's going to save us a ton of money?

(28:22):
Yeah.
So that was a different team.
But you know, that's a neat thing.
We can kind of go off on the side on this.
So what that team did was something called ORION, it's On-Road something; I should know what ORION stands for.
That's an amazing operations research project that if your listeners haven't read what Jack

(28:44):
Levis and his team did on ORION, go out and see that, because it's such an
important thing for UPS.
But there were some times where they were within hours of canceling the whole project.
So you know, that's some of the things of just knowing how to create and implement

(29:08):
through the change management for something as large as that project.
But I'll say, with things like the no left turns, you know, it's not that they're making no
left turns at all. We can talk about how we create mental models in our minds
about what is the best route for the packages.

(29:28):
Our minds may not be able to come up with the best route because there's trillions of
combinations.
But when we see a map, we can come up with something that's probably within two or three or four percent of the
best route, because we can look at it and we can say, hey, we're
going to do these things.
And our minds will come up with something that says, yeah, no left turns, you know, a

(29:53):
left turn, I have to sit there a long time, so that's probably not my best route.
But when you get to the scale of something like ORION, well, they used to let the drivers
come up with their own routes.
And it might be within two or three percent, and if you've been doing it for 20 years, it might
be within one percent.
But when you have, at that time, 75,000 package cars on the road every day,

(30:18):
one percent is still a lot of miles.
The no left turns is kind of like, it's just a mental model that says, I want
to do better.
And here's a rule, here's a policy, in an area called dynamic programming, which
was started by Richard Bellman. That policy of no left turns is something that gets you

(30:39):
pretty close.
If you want to save gas, get your best gas mileage on your car, a good policy might be, let's use the brake as little as possible.
And then instead of speeding up to the stoplight and then stopping, it
gets you to coast up. So certain policies can get you a pretty good answer.

(31:02):
It may not be the optimal answer, which brings us back to what they said at Georgia
Tech: if you can't prove it's the optimal answer, it's no good.
But I'm like, hey, it's pretty good that a machine can learn some of these things.
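That "pretty good without a proof" idea can be sketched on a tiny routing instance: a nearest-neighbor rule of thumb versus the brute-force optimum. The stop coordinates are invented, and nearest-neighbor stands in here for a simple policy like "no left turns":

```python
from itertools import permutations
from math import dist

# Invented delivery stops; stop 0 is the depot.
stops = [(0, 0), (2, 1), (5, 2), (6, 6), (1, 5), (3, 3)]

def tour_length(order):
    path = [0] + list(order) + [0]  # leave the depot and return to it
    return sum(dist(stops[a], stops[b]) for a, b in zip(path, path[1:]))

def nearest_neighbor():
    """Simple policy: always drive to the closest unvisited stop."""
    unvisited, order, current = set(range(1, len(stops))), [], 0
    while unvisited:
        current = min(unvisited, key=lambda s: dist(stops[current], stops[s]))
        order.append(current)
        unvisited.remove(current)
    return order

# Brute force over every ordering -- only feasible for tiny instances.
best_order = min(permutations(range(1, len(stops))), key=tour_length)

greedy_len = tour_length(nearest_neighbor())
best_len = tour_length(best_order)
print(greedy_len, best_len)  # the heuristic is never shorter, and here it is close
```

On five stops you can enumerate all 120 orderings, but the factorial blowup is why real systems like ORION lean on good heuristics and policies rather than proofs of optimality.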
Sure.
And I think also so interesting to grow up in the era of reinforcement learning and machine
learning where our human intuition, where that experienced driver who's been doing it

(31:26):
for 20 years can get within a percent or two of the optimal route on what I would imagine
as the traveling salesman problem.
So now we're getting to the point where the machines, we can implement policies and rules
that will help the machines establish their own intuition in some sense.
And I imagine from here, it's going to get even more exciting.

(31:47):
I think so.
I mean, some of the things in terms of the traveling salesman problem, it's
proving to the driver that yes, this is as good or better than what you came up with.
But then it's also, when the drivers start to just go to the next stop that
the system tells them to go to, rather than where they want to go, it allows you

(32:11):
to do more things.
So if there's a change in a pickup, then you just insert it into the route
and the driver doesn't have to think, okay, when do I do this?
So I think the technology, in terms of relying on a system in that case,
allows you to have more options in the future.

(32:34):
But you're also talking about some of the newer AI that comes up.
You know, generative AI can write a traveling salesman problem model in
a modeling language almost instantly.
So yeah, the idea, I think Google talked about it yesterday
at Google I/O, these agent-based AIs, where they're solving smaller and smaller problems

(32:59):
for you throughout the day, is going to be a really interesting next
few years.
This is explosive, and I'd love to see all the things that are coming
out.
Oh yeah.
And just to put a little timestamp on history here, it's May 15th, 2024, when we're recording this.
Yesterday was Google I/O, and the day before that, OpenAI released

(33:22):
GPT-4o, their omni model.
So we'll check back in in a couple of years and see where we've come to take a step back
and think perhaps in a little bit more of a philosophical sense.
Can you share with us what your work means to you?
That is the work of my entire career.

(33:49):
And for me, you know, the impact that I'm looking at is mainly:
can a company do better?
So you know, at Cardlytics, it was coming up with ways that we could measure
the advertising that each one of our business clients approved of and believed in, and therefore

(34:10):
would invest more money.
That is what is important to me.
One of the things that I want to get across a little bit to the listeners
is, as I've said in the later stages of my career: you know, we have done machine
learning.
We have studied and created these algorithms based on how humans or animals learn, adapt,

(34:39):
and succeed.
And a lot of times we will get very focused on how do I do reinforcement learning algorithm?
How do I do a neural network?
Or how do I apply this particular technique?
And I think that's awesome, because some of the things that we've done
in this field have been amazing.

(34:59):
We're seeing the fruits of it right now.
But I also think, and especially the past few weeks, it's also interesting
to take what we know about those, because right now we're focusing on those algorithms, and
apply that back to life.
And what I mean by that is we know that a reinforcement learning algorithm has to fail

(35:22):
many, many times before it succeeds.
So why do we hold ourselves as people to a different standard?
You know, we've heard "fail fast," but a lot of times in our careers, we don't
want to try something and fail. But failure is an integral part of success.
So I think it's important for us to learn that failure is OK.

(35:45):
You've heard "failure is not an option," but failure is not only an option, it's a
requirement for success.
And as an executive, you need to give your team the psychological safety to try some
things and fail, because if you're not failing, then you're not really trying
something that's that hard.
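Warren's point about reinforcement learning needing many failures can be made concrete with a toy sketch. This example is purely illustrative, not from Best Buy's work: an epsilon-greedy bandit that loses most of its early pulls while exploring, then wins far more often once its trial-and-error estimates have converged on the best arm.

```python
import random

def run_bandit(arm_probs, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learns each arm's payoff purely by trial and error."""
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)
    values = [0.0] * len(arm_probs)   # running estimate of each arm's win rate
    early_wins = late_wins = 0
    window = steps // 10
    for t in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(arm_probs))   # explore: often a "failure"
        else:
            arm = max(range(len(arm_probs)), key=lambda a: values[a])  # exploit
        reward = 1 if rng.random() < arm_probs[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        if t < window:
            early_wins += reward
        elif t >= steps - window:
            late_wins += reward
    return early_wins, late_wins

early, late = run_bandit([0.1, 0.2, 0.8])
print(early, late)
```

The early window is mostly failure; the late window is mostly success. The failures are exactly what produce the value estimates the agent exploits later.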

(36:06):
So I'd say that's one thing.
Another thing that you can learn from machine learning and apply back to your life:
early on, neural networks had these small numbers of neurons and connections between
them, and then, as we got more compute and more data, you heard about
deep learning and all of these connections.
Well, you know, the more connections that you have in your neural network, the more

(36:32):
likely you are to succeed in whatever you're trying to predict or do.
And I'd say, you know, there's a way you can look at that in your
own life: the more connections you have in your network, at work or
on LinkedIn or through professional societies, the better.
I'm on the board of a professional society called INFORMS, which is the Institute for

(36:55):
Operations Research and the Management Sciences.
I'm also a member of IEEE and, you know, you create that network.
Those connections are only going to help you make a better fit for your career, make a
better fit for everything that you're doing.
So, you know, those are a couple of the things that I would

(37:18):
talk about.
That makes a lot of sense.
And if you'll indulge me and let us drill a little further into the leadership side of
things and embracing how essential failure is, let's say there's someone out there who's
a manager, maybe a new manager, and they've got a team member who's very capable, but
they're working on something that, you know, is experimental, and they're trying something

(37:40):
new.
What advice would you have for them in terms of dealing with that failure, and not necessarily
going down the road of saying, oh, my employee's just not good at what they do?
What's your mindset around the benefit of learning through that failure?
Yeah, that's a great question.
So I want to make a distinction between failing because you're not competent and you

(38:07):
haven't prepared the right way, versus failing because you made
good decisions, but it just didn't work.
So in the first sense, you know, you've got to make sure that you as a manager have planned
for and given your team the resources they need and planned for contingencies.

(38:29):
I think it was Eisenhower who said plans are worthless, but planning is essential.
I mean, when you go through the planning process, then you're thinking about contingencies.
You're thinking about what could go wrong.
So I think having some of that already thought about in your mind is essential to adapting

(38:50):
as the plans go awry.
So that's one sense.
But I think when you've done something and it hasn't worked, you should already be able to
tell your colleagues and your manager why it didn't work and why you made the decisions

(39:14):
that you made.
I'd say this: a quality decision, a good decision, doesn't always
result in a good outcome, because of uncertainty.
Likewise, a bad decision doesn't always result in a bad outcome.
A bad decision for me might be to go place my entire retirement savings on, you know,

(39:39):
one number on the roulette wheel.
And that's a bad decision.
But if it hits, it doesn't make it a good decision.
It just makes it a bad decision with a good outcome.
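The roulette example can be checked with a quick expected-value calculation. This assumes an American wheel (38 pockets) and the standard 35-to-1 straight-up payout, details Warren didn't specify:

```python
# Expected value of betting $1 on a single number (American roulette assumed).
# 38 pockets; a straight-up win pays 35:1 profit, otherwise the $1 stake is lost.
p_win = 1 / 38
payout = 35          # profit on a win
loss = -1            # stake lost otherwise
ev = p_win * payout + (1 - p_win) * loss
print(f"Expected value per $1 bet: ${ev:.4f}")
```

The expectation is about minus five cents per dollar, which is why the bet is a bad decision no matter how one particular spin turns out.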
So being able to say, I think this is what we learned from this experiment.
This is where we're going to go.
If we want to continue this, then we want to make these changes.

(40:03):
And this is what we're going to learn, have a learning plan.
I guess the last thing is to not be so invested in that particular project that you as a manager
push it further than it needs to go.
Just be able to say, yep, that doesn't work.
You know, my idea didn't work.
The team did great.
The idea doesn't work.

(40:24):
And then be able to separate yourself.
Your success as a manager and executive is not based on one single good idea.
If you only ever have one single good idea in your whole career, it's not going to go very far.
So just know that if you had one good idea, you're going to have more than one good idea.
Yeah.

(40:44):
It also sounds like, in part, the resilience to accept that, yeah, this one didn't work,
and that's okay.
Being able to pick up and then make your plans for whatever is next sounds like another essential
component of being able to actually grow and benefit from those failures.
Exactly right.
We'll get back to the conversation shortly, but I wanted to tell you about how I can help

(41:05):
you find your fit.
I offer one on one career coaching services for experienced professionals who are preparing
to find and land their next role.
If you're a director, vice president, or C-suite executive and you're ready to explore new
opportunities, please go to GoodFitCareers.com to apply for a free consultation.
I also occasionally send a newsletter which includes stories from professionals who have

(41:27):
found their fit, strategies and insights that might be helpful in your job search and content
that I found particularly useful or interesting.
If you'd like to learn more, check out GoodFitCareers.com and follow me on LinkedIn.
Now back to the conversation.
Could we take this to more of an applied sense and apply our machine learning

(41:48):
and data science knowledge to the business world?
Would you walk us through how you solved a real business problem?
Sure.
I'll talk about one that my team solved at Best Buy.
We talked about how the business and the decision scientists and data scientists all
worked together to identify a problem and then come up with a way to potentially solve that

(42:15):
problem or alleviate some of it.
In this case, one of the problems was with how Best Buy repairs appliances.
There was a significant percentage of cases where our repair technicians would go out to someone's house.
That person had taken that time off work.
They were going to be at home.
The repair technician comes out there.
Then the technician doesn't have the right part on the vehicle.

(42:40):
Now, they can't repair it.
The customer experience is bad.
They have to make another appointment and another call.
Bad customer experience and extra cost.
We were looking for ways to predict what parts need to be on that vehicle.

(43:02):
We started taking all this data from the previous years.
We found that we could do a pretty good prediction, but it wasn't as good as we thought it should be.
We made some improvements by building what's called a parts predictor.
We looked at what were the successful repairs and what weren't.
We looked at that.

(43:24):
There was just something that was in the back of our mind saying this should be an easier
problem to solve than it is.
What is the next layer down?
We started peeling back the layers of the onion.
Yes, the five whys.
It turns out that the parts that you get on the vehicle are defined by a problem

(43:47):
code: when the customer calls in, the call center agent assigns a problem code, and
that defines what parts are needed or might be needed.
It turns out that there's about 150 or so problem codes that the call center agent can
pick from.

(44:09):
Well, your expertise at picking the right problem code depends on how long you've
worked there and what feedback you've gotten from repair technicians and other people.
We thought this was a problem that could be helped by machine learning.
What we did is the team took some text strings from that call center transcript and they

(44:35):
were able to train a natural language processing model that identified, based on successful
repairs in the past, what problem code this should be.
It wasn't exact.
What they did is, after seeing about five or six words,
they re-ranked those 150 codes, and we found that the right problem code was in

(45:03):
the top five over 99% of the time.
Now, that got you better data.
A significant portion of those problem codes had been inadvertently classified as the first
problem code in the list, because some of the agents would just say, "Hey, it's that problem
code."
And before we re-ranked them, a significant portion was "other."

(45:25):
That was the reason why we couldn't get the right parts on the vehicle: the call center
agents were not always hitting the right code.
If we could use natural language processing, some BERT models or some
large language models, to look at what the customer and the agent were talking about

(45:45):
and sort those things in a certain order, then you've got better problem codes.
If you had better problem codes, you got more successful repairs and then your machine learning
model for the parts predictor now had much better data.
Going from garbage in, garbage out to much better data coming in.

(46:09):
Data is such a key part of anything that's machine learning or AI or optimization;
it's an integral part of finding the right answer.
Sometimes when you don't get the right answer, it could be that you're missing some key data
or some of that data is wrong.
That was the case in this problem code predictor.
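Best Buy's production system used BERT-style models on real transcripts; as a much simpler illustration of the re-ranking idea Warren describes, here is a keyword-overlap sketch in which the problem codes, keywords, and transcript text are all invented for illustration:

```python
from collections import Counter

# Hypothetical problem codes, each with keywords an agent might mention.
CODE_KEYWORDS = {
    "WASHER_NO_DRAIN": {"washer", "drain", "water", "standing"},
    "WASHER_NO_SPIN": {"washer", "spin", "drum", "stuck"},
    "FRIDGE_NOT_COOLING": {"fridge", "warm", "cooling", "compressor"},
    "OVEN_NO_HEAT": {"oven", "heat", "igniter", "bake"},
    "OTHER": set(),  # the catch-all code over-used in the anecdote
}

def rank_codes(transcript, k=5):
    """Re-rank problem codes by keyword overlap with the call transcript."""
    words = Counter(transcript.lower().split())
    scores = {
        code: sum(words[w] for w in kws) for code, kws in CODE_KEYWORDS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

top5 = rank_codes("my washer has standing water and will not drain")
print(top5)
```

A real system would learn these associations from historical successful repairs rather than hand-written keyword sets, and would surface the top-five list to the agent instead of forcing a single guess, which is the re-ranking idea in the conversation.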

(46:33):
How fascinating.
It also sounds to me like there isn't just one type of AI, so not just a large language
model.
You mentioned something called a BERT model, is that right?
Yeah.
If I get this right, it's bidirectional encoding, whereas GPT is a generative pre-trained transformer.
Yes.
A BERT model goes both ways, I believe.
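The difference the two are reaching for can be sketched with attention masks. This is a simplified illustration, not either model's actual implementation: a GPT-style causal model lets each token attend only to itself and earlier tokens, while a BERT-style encoder lets every token attend to the whole sequence ("both ways").

```python
# Toy attention masks over a 4-token sequence.
# A 1 at [i][j] means "position i may attend to position j."
n = 4

# GPT-style (causal): left-to-right only.
causal = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

# BERT-style (bidirectional): every token sees the full sequence.
bidirectional = [[1] * n for _ in range(n)]

for row in causal:
    print(row)
```

That masking difference is why GPT-style models are natural text generators and BERT-style models are natural text understanders, such as the problem-code classifier described above.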

(46:55):
You're getting me into some areas where my team would catch me on some things.
And this gets back to: as an executive, you're no longer the expert.
I would spend most of my days in meetings, and I could only spend some nights looking at some
of these things.

(47:16):
I shouldn't be embarrassed that I don't know how some of these things are different,
but that's what happens when you get to a certain level.
For the people listening to this: you've got to figure out what point you
want to get to in your career.

(47:36):
Because being chief data scientist sounds like you are solving some really interesting
problems, but at a large company, you're not solving the interesting problems anymore.
Your team is solving the interesting problems.
That's where a lot of people will try to figure out how high do I want to go?

(47:59):
Or do I want to be an individual contributor?
Because some companies will allow individual contributors to go all the way up, but in
some companies, you can't get to that level without going into some type of technical
management.
I love solving problems.
That is one thing that I missed when I was in more of the strategy meetings all

(48:23):
the time: just sitting down with a problem.
I addressed that by having one-on-ones; I'd have quarterly one-on-ones with everyone
on my team.
So 50 half-hour one-on-ones every quarter, just talking about the problems that they're
working on.
I could reach back into my experience and give them some advice and sometimes it worked

(48:44):
and sometimes it didn't.
But those were some of the best parts of the day for me.
Yeah, and a beautiful lesson on leadership.
I think two of the bigger takeaways that I've got from the lesson that you shared here: first,
it sounds like today, just saying, oh, ChatGPT or a large language model can
solve everything isn't necessarily the case; there are other flavors of model that

(49:06):
might be more appropriate for the given circumstance.
And second, your description of leadership reminds me of one of our very first guests.
He worked his way all the way up to being the chief executive officer of a large advertising
agency, and he found that he was essentially in general management and what he loved was
being close to the problems.
And so his next move after being CEO was to become a CMO, chief marketing officer, which

(49:31):
I think from an org chart standpoint might look like a step back, but for him, it was a delightful
move towards being closer to the problem and doing what he really loved.
Yeah, I think that's a great point.
For me, as I've come out of this role at Best Buy, I've founded a small company, haven't done
much with it yet, Optimal AI; it's the combination of optimization, machine learning, and artificial

(49:57):
intelligence.
And I put them in that order because I think everybody loves AI right now.
And machine learning is also something that's really important, but I don't want us to forget
about optimization because there's a key difference between machine learning and AI versus optimization.

(50:20):
In optimization, you have a model of the process.
The traveling salesman problem is a very hard problem to solve.
And there are some great algorithms out there that do that; it's a hard problem to
solve, but you have a model of the process.
And you don't solve it by just throwing a bunch of data at a neural network and letting it figure it out.

(50:45):
You know something about that system.
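That distinction shows up clearly in the traveling salesman problem: the tour length comes straight from a distance model of the system, with no training data involved. A brute-force sketch for a handful of made-up cities:

```python
from itertools import permutations
from math import dist

# City coordinates: a known model of the system, not something learned from data.
cities = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]

def tour_length(order):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(
        dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

# Exhaustive search is fine for 5 cities; real instances need smarter algorithms.
best = min(permutations(range(len(cities))), key=tour_length)
print(best, round(tour_length(best), 3))
```

Exhaustive search only works for tiny instances; the point is that the objective is computed from a known model of the process, which is exactly what distinguishes optimization from the data-driven ML approaches discussed above.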
So I think that's really interesting.
ChatGPT and all of the things that are generating a lot of hype right now, it's amazing.
But I also think that there's something more fundamental, and this is my personal opinion,

(51:06):
something more fundamental about human intelligence than the way that we trained ChatGPT.
For instance, if you have children, you don't teach your children
how to read and how to think by giving them every document that was ever published in
the world and having them read them all and connect those words. You give them some fundamental

(51:33):
building blocks to build their world on.
You start out with some simple books, then more complex books, and you start to learn
about science and things like that.
So I love the large language models right now.
I use them daily for coding.
I use them for translating my technical speak into something else.

(51:57):
But I still think, and hopefully somebody is already working on
this, that we're fundamentally different in how we're teaching these systems.
And somebody recently said that if you take the same data, it doesn't matter what algorithm
you're using on these large language models.
The data set is the key.

(52:20):
So we can relate that back to ourselves as well.
If data is the key, if what you're putting into the system is the key for these large
language models and not necessarily the algorithm itself, then maybe the same is true of what we're putting into our
professional lives.
What data are we learning each day?

(52:42):
What are we putting physically into our bodies?
Everything; the input is the most important thing.
It's not the algorithm itself. So surround yourself with good people,
listen to this podcast, read good books; that input data is what you need.
Through the reading that I did in terms of sci-fi, I thought robotics was really

(53:10):
going to be the first frontier of intelligence, and it turns out that language models really
got good first.
And I'm wondering about that same path of development. For us humans, you know, reading is a higher
level function.
And that's something that we develop over time.
Sensory and physical inputs are something that we get very early on.

(53:31):
I think it'll be very fascinating to see how the robotic side of things, which has a physical
embodiment, pairs with that, you know, omnipresent "let's read every
token on earth" approach.
I'm very curious how that layering of development will change as these systems become a little
bit more physically involved in the world around them.
Yeah, it's going to be interesting.

(53:51):
So as a chief analytics officer, as a chief data scientist, how do you believe the world
perceives your job and sees people in positions like yours?
It's a great question.
So I think the world perceives it as maybe there's just some magic that
we do, that we can take some data and do things pretty quickly and predict

(54:17):
stuff. Or, you know, now that they see it, there's basically two camps on
ChatGPT.
It's like, oh, I love this.
This is great.
Or, oh no, we're heading towards, you know, Judgment Day with the Terminator.
I think that is one of the things: it's almost this
magical thing where we conjure solutions out of thin air.

(54:41):
But you know, it's really more about the power of innovation and driving the future
of either industry or education, basically transforming the way that we live
and work.
And you know, I think that if they knew how much creativity went into it, how much collaboration,

(55:01):
not just about crunching numbers; it's also about weaving together different
perspectives.
And it's really still a blend of art and science that takes a deep connection
to the business.
I think that is how people see it externally. Internally,

(55:23):
at each company, even the other executives have a better understanding,
they know it's not magic, but they don't know exactly how it works.
And I think it's natural.
I'll say this: it's natural for any one of us to underestimate the complexity
of someone else's work. I will say that early on, when data science was becoming

(55:50):
a field, you could come at it from what is now three, but at that time, two main ways.
You could come at it from the math side like I did.
And you could come at it from the computer science side.
So I had to learn some programming in order to do my research, because you couldn't
just go and download a package or anything like that.

(56:14):
You had to actually type it in; there's a book on my bookshelf that has the C++ code
for the neural networks.
That was back in, I think, '90 or '91.
Those of us from the math side, and this is a stereotype, said: yeah, the models,
that's the hard part, the assumptions and the models are the hard part; the compute,
how the algorithm works on the actual system, how

(56:38):
you do it in parallel, and how you get the data, that's all just, you know, "easy."
I'm using that in quotes.
Now, the computer science side would say, well, no, the hard part is the hardware and
the data and everything.
The easy part is the algorithm; I can just make

(57:02):
my computer program try a hundred different algorithms.
And so that's the easy part.
When in reality, they both have their nuances.
And I think it's like that a lot when it comes to AI or analytics:
other executives say, yeah, I can outsource that.

(57:27):
I can get either a big tech company or I can get a consulting company to do that.
But if you ask them, can we outsource the main parts of your particular function?
They'd say, no, we have to know all about the business.
We have all these connections and everything.
So I think that is one thing that I do wish the other executives would understand

(57:53):
is that an internal data science team, an internal analytics team, can do so much better,
typically faster and better, and possibly even cheaper, than outsourcing that to
someone else. Because it's a misconception to say, hey, you know, this consulting

(58:15):
firm has done this before.
So we're going to do that.
Well, if that's true, then all you're doing is following the rest of the industry.
You're not leading, you're not innovating.
So that's a very long answer to what you were asking.
But I do think that the misconception is that it's more plug and

(58:37):
play than it really is.
That makes a lot of sense.
Let's shift our sights to the future.
And I think part of what I'm so curious about is you've been able to watch the field of machine
learning, artificial intelligence, analytics, just really flourish.
And earlier in the discussion, we were talking about how some of the core principles of AI

(58:59):
were discussed in the 60s long ago.
And I would love to hear where you think we're headed.
And perhaps a little bit of what your field might look like in the coming years.
Yeah.
So this is going to be interesting to have recorded,
so we can see what it looks like even a year from now. But the compute power and the

(59:21):
amount of data are significant right now.
And it's just exploding.
So there are some things that we can do in AI and machine learning that just weren't
possible even, you know, five, 10 years ago.
So I think we've got the capability of making some significant advances.

(59:45):
And then it's really hard to keep up with all of the things that people are doing
out there.
So I'm very optimistic that we in the very near future will have AI and machine
learning that are helping us.
And we'll trust it implicitly in the same way that we trust Google Maps

(01:00:07):
or Apple Maps.
You know, those are mathematical algorithms that every single one of us use every single
day and don't even think twice about it.
And we trust it so much. Maybe early on, and I was one of these people early on,
with Google Maps I'd say, well, that's not the best route.
I'm going to do something else.

(01:00:28):
And I learned through negative reinforcement that it was almost always right.
There are a few cases where you might go a little bit different way and do better,
but it almost always had a great reason for taking you the route that it did, even though
it wasn't the normal route.
So I think there's going to be this transition period where we as the general

(01:00:52):
population get more and more comfortable with embedded machine learning and AI products.
And then there's going to be things that we didn't even think of.
You know, it's like once the systems mature, and maybe GPT-4o is the first start to this, but I mean,
I've seen some videos where people are talking to it and it's responding back in a much more

(01:01:19):
helpful and detailed manner than what we're normally used to with our Google
Assistant or Siri, or the one whose name I'm not going to say here, because
it'll wake it up.
So I think over the next couple of years, we're
going to be much more comfortable with small parts of AI in our lives.

(01:01:46):
And then for people in the data science field, there's the ability for it to do exploratory data analysis
and some of these things. It doesn't replace people, but it takes those menial
tasks away; something that might have taken an hour might not take a

(01:02:11):
long time to do that.
And so I think that's the great thing about that. And then look at it from
an optimization or scenario-planning viewpoint: there's a lot of uncertainty that we have in
the future.
A lot of the times we say, hey, that might be

(01:02:33):
a possible outcome, but I don't know, it's too much effort to plan
and do these scenario plans ourselves. And then, and I'm still a firm believer
in a human in the loop in the near term, it presents me as an executive with,
well, here's 50 different scenarios.

(01:02:55):
And then I can look at that and I can say, based on my
level of risk versus opportunity,
these are the actions that I want to take.
So, you know, I think that in the near term, it's really going to help in planning for
companies, and I think it's going to help in the way people interact with data on
(01:03:20):
the internet. Ten, 15 years from now, I don't know.
I mean, I really do think about, you know, some of the projects
like Google Glass and things like that. It would be interesting:
for somebody who's getting older like me, when I can't remember exactly what somebody's
name was and when I met him, I would love to be able to have something that just says,

(01:03:45):
this is Joe Smith, and you had lunch with him, you know, a year and a half ago,
and you talked about these things. I would love that.
Wouldn't that be nice? That is, you know, one of the things that I would personally like.
But, you know, it's going to be pretty amazing.
If you were to help us extrapolate the line, in my lifetime, it seems as though machine learning

(01:04:13):
and artificial intelligence and kind of the concept of computational intelligence has been
on what feels like a pretty linear but fast path and then in the last two years or less,
it seems like we've kind of hit that part of the hockey stick graph where we're hitting
that exponential curve. Can you help me think through how you have seen the evolution
of machine intelligence and then perhaps where we might have that inflection point where the

(01:04:39):
machines are on par with us or generally smarter than we are?
Yeah, so that gets into a couple of different things there.
There's that hockey stick for what we're calling AI.
So if you're talking about AI in the strict sense, where it acts like a human,
it understands, it can see, then we've hit that hockey stick, you know, recently.

(01:05:02):
Traditionally, computers have been better than us at calculating and things like that.
So, you know, we use computers and spreadsheets, and we've used calculators since,
I don't know when the first digital calculator was. So we've been
using tools, machines that can do things faster than us, but they couldn't

(01:05:28):
recognize things. They couldn't, you know, recognize cats and dogs until, you know,
in the last couple of decades. I would say that there are several different hockey
sticks. In computer vision, we're able to recognize things almost instantly
with machine learning. And that's been around for a while, but it hasn't really

(01:05:50):
caught on as much as these large language models. But you can start combining this
computer vision with the large language models, and you start getting
this ability to really act like a human and understand things. Then combine
that with the fact that it has read every document in known existence, you know,

(01:06:14):
then I think, you know, we're pretty close to having something that can
do test-taking and some of those types of things as well as a human.
Because I think some of these newer models can pass, you know,
certain exams; they can pass the LSAT and they can do all these things because

(01:06:35):
they know it. But then, on the other hand, do these systems actually
understand the world, or is it just understanding all of the input data? And by
that, I mean, and this is a little bit of a side example, but at Best Buy, we did,

(01:06:57):
we did Amazon AWS DeepRacer. So AWS has this virtual simulation of a car,
and you can also buy the physical car, and you use deep learning for it to
learn to move around a track as fast as possible. And Amazon does this
every year; it's pretty cool. Yeah, you can just search AWS DeepRacer and

(01:07:23):
you can try it out. It's a really interesting way of learning some
reinforcement learning and some of the algorithms that they have on there.
But you and I, when we were learning to drive, learned that if we were
going down the road straight and we turned the steering wheel this way,

(01:07:44):
the car went that way. Then in our mind, we implicitly said, you know what,
there's a symmetry to this. I don't even have to try it; I already know, I can
generalize this: if I turn it the other way, it's going to do the same thing, mirrored. But when you're
talking about the way that we're doing most of the reinforcement learning these
days, it has to learn to turn left. And then it has to learn to turn right, or you

(01:08:11):
have to tell it preemptively that there is this symmetry. It just doesn't
know. And maybe there's some research out there that is getting into this, or
has already done it, I don't know. But I think, you know, we're
still a good ways away from the systems that I've seen understanding

(01:08:35):
a basic operating model of how the world works.
I think it'll be so fascinating to see that moment when a machine from a
pre-training standpoint can learn to drive as effectively, as quickly as we
humans do, right? In a couple hundred hours worth of practice, as opposed to

(01:08:56):
simulating tens of millions of hours. The question that I've got for you before
we bring this to a close is as we get to an era where machines and all these
different hockey stick graphs are really starting to pick up, as we get to the
era where the machines are more capable in more and more areas, what advice would you
have to the general working public about how to continue to stay relevant and

(01:09:20):
continue to have a useful and perhaps enjoyable, some sort of good fit
professionally for them, as some of the functions that they used to be responsible
for are now, I wouldn't necessarily say obsolete, but could be handled more
effectively by, you know, other systems?
That's a great question because I do think that, you know, there will be some

(01:09:43):
jobs that are completely replaced by this, but not many. I don't know, I
could be wrong, but I think this is going to be a tool, like a calculator
or a computer. I remember, even before I started
studying neural networks, in the mid-80s, I think maybe '86 or '87, I was

(01:10:05):
reading about expert systems and Prolog. Prolog is a programming
language, but with these expert systems, the thought in the mid-to-late 80s was
that we wouldn't need doctors in the future to tell you
what sickness you had, because an expert system could just ask you a bunch

(01:10:28):
of questions and it would tell you. But that didn't work out. I mean, these
expert systems help, but we didn't replace doctors. We didn't replace
accountants with computers; the computers help the accountants. So I think,
if you're in the general public, don't be afraid to try out some of the

(01:10:52):
things. You know, make sure you're not putting proprietary
information into ChatGPT and things like that; don't do anything that
you're not supposed to do with your company's data. But also try it out and see;
there are some things that will really
amaze you at what it can do. If you were spending 10% of your week doing
something that AI can do for you now, then you should be

(01:11:20):
able to take that time and do something that's even more valuable. You know,
you've got to have ideas on how you might be able to do something else. And I'd say,
look at it not as replacing you, but as freeing you up
to do something that is even more valuable and makes you

(01:11:41):
more valuable to the company. That would be my advice.
Yeah. Well said. As we bring this to a close, is there anything that we skimmed
over that you'd like to revisit?
I would just say that this is an exciting field. If you are in data science,
find something that you love.

(01:12:06):
We talked about it earlier: I didn't get into machine learning
because I thought it was going to be lucrative. It was because I loved it. This
turned out to be a great career. So if there's something that you really
like to do, then the chances are you're going to find a
job that pays you to do that. That would be my advice.

(01:12:31):
Don't necessarily chase the latest thing. Find what you
love and do that. And hopefully you're going where the puck is headed
and not where it has been.
Well said, Warren. Thank you so much for joining me and sharing your perspective.
Well, thanks Ryan. I've enjoyed this and look forward to seeing it.
Our next episode is with Paula Cole, head of

(01:12:56):
pension risk transfer with Nationwide Financial. The more that you know about
the individual and how they work at their best, the more that you'll know about
the individual when they're working through opportunities to improve.
If you enjoyed this episode, make sure to subscribe for new episodes,
leave a review and tell a friend. Good Fit Careers is hosted by me, Ryan

(01:13:19):
Dickerson, and is produced and edited by Melo-Vox Productions. Marketing is by
StoryAngled and our theme music is by Surftronica with additional music from
Andrew Espronceda. I'd like to express my gratitude to all of our guests for
sharing their time, stories and perspectives with us. And finally, thank you to all
of our listeners. If you have any recommendations on future guests, questions

(01:13:41):
or comments, please send us an email at hello@goodfitcareers.com.
[Music]