August 29, 2025 · 28 mins

We’ve all read the headlines that AI will take over white-collar jobs. But this week, Oz and Karah spoke with journalist Robert Capps about the 22 new roles that might exist in an AI-partnered workplace. Plus, Robert tests AI’s journalism skills. There’s not much to worry about.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
Welcome to Tech Stuff. I'm Oz Woloshyn, here with Karah Preiss.

Speaker 2 (00:16):
Hello, hello. So I just read this piece about how
AI has already started displacing jobs, but it's actually not
the jobs you would think.

Speaker 1 (00:24):
So do we have something to celebrate this upcoming Labor Day?
AI not stealing our jobs?

Speaker 2 (00:29):
Well, NotebookLM, as you know, is trying to make
podcast hosts obsolete. But no. According to MIT's State of
AI in Business report, AI is replacing jobs that are
usually outsourced to other countries. That's the short term. Long term,
around twenty-seven percent of white-collar jobs will be eliminated. Yeah.

Speaker 1 (00:47):
I know a number of people who are worried about
the security and longevity of their jobs. But I also
know people who are thinking about the kind of glass
half full perspective about new jobs that AI might create.
And we're sharing a conversation with one of those people today.

Speaker 2 (01:01):
Yeah. You and I recently spoke with journalist Robert Capps,
who has been reporting on what jobs could exist in
an AI driven future, and he actually came up with
a list of twenty two jobs that don't necessarily exist
yet but are likely to exist when humans and AI
work sort of in a hybrid capacity.

Speaker 3 (01:17):
Yeah.

Speaker 1 (01:18):
He lists a few jobs that sounded pretty interesting, and
then wrote up this article for the New York Times magazine.
I think my personal favorite was probably AI plumber. That's
the person who will figure out why an AI system
might not be working the way it should be, and,
as Rob puts it, will snake the pipes.

Speaker 3 (01:35):
Yeah.

Speaker 2 (01:35):
And I was happy to hear him say that tastemaker
jobs will last a while.

Speaker 1 (01:39):
Is that because you consider yourself a tastemaker?

Speaker 2 (01:41):
Yes, a nondescript creative professional. And he says that those
will continue to exist, So I'm very excited. He mentioned
one job called a world designer, where a person fabricates
an entire universe, complete with fictional characters and locations, which
could apply to everything from marketing campaigns to video games.

Speaker 1 (02:02):
Yeah, I remember him talking about that. He also said
that he wrote the first draft of the piece using
AI to see what would happen. Here's Robert describing
how that went down with his editor.

Speaker 3 (02:12):
I thought it would be a funny joke to play on
my editor, Bill Wasik, that you know, he assigned me
the article and then like two hours later, I'm like,
here you go, here's my invoice. I love this new future. No,
it was a little bit of a lark, but it
was also, you know, you're thinking about AI and jobs,
and you're thinking about what are the future jobs?
So I just thought it was a logical place to

(02:33):
start with. Okay, well what will my role be as a writer?
And you know, freelance journalism is a hard place to
be in, and we're not extremely well-paid people anymore.
So if I can write more, I can make
more money, you know, just as a purely capitalistic play,
like that's the dream, right, But of course that wasn't possible.
It wasn't nearly good enough. And I would suggest this

(02:54):
like to anybody out there who's very worried about it.
It's like, just go have it, do your job. Just
try it, because like it'll teach you a lot about
how far it still has to go, which can be
surprising in our current hype cycle.

Speaker 2 (03:04):
Right. To absolutely no one's surprise, the AI version of
Robert's story was not published because he couldn't risk his
reputation on an article where AI may or may not
be hallucinating, or may or may not be exaggerating things
or understating things.

Speaker 1 (03:18):
You know, We've talked about cognitive offloading a bunch on
this show, basically atrophying your own skills by using AI
too much, and that's something that Robert said he's very
conscious of.

Speaker 3 (03:29):
When you start to really work with the AI, at
some point you start to hand over your sense of
taste and your sense of uh, like is this good.
You sort of hand over the authority to the AI,
and the AI doesn't have any capability to accept that authority.
It will tell you that like, oh, yeah, this is great,
this is true. But like, it's
just a machine, right, so it doesn't really

(03:52):
have the ability to accept that moral responsibility. Like at
some level there are just these elements that like have
to come from the human because the AI is just
not capable of providing them.

Speaker 2 (04:02):
You know what I love the most is that after
using AI, his next step was to reach out to
a bunch of people, which I know is basic reporting,
but I think it really speaks to the core of
his argument that AI will need the human touch for
quite some time to come, right?

Speaker 1 (04:16):
Human work will continue, but the nature of the work
may adapt to the AI revolution, and Robert has some
interesting speculations about what the jobs of the future might
look like and how they fit into three distinct buckets: trust, integration,
and taste.

Speaker 2 (04:32):
So we started out by asking how he came up
with these three buckets. And here's the rest of our conversation.

Speaker 3 (04:38):
The first person I called for the article was
Ethan Mollick, who wrote the book Co-Intelligence, who's a professor at Wharton
and who thinks and writes a ton about AI, and
I just wanted somebody to talk with about like, Hey,
how should I even be approaching this? And like right away,
even from that first conversation, I knew that this was
going to be way more philosophical than anticipated, because he wasn't.

(05:01):
He was basically like, there's no way I can tell
you specific jobs.

Speaker 1 (05:05):
Why did he say that he couldn't tell you specific jobs?
What was the kind of intellectual exercise or missing step
from where we are today to knowing what those jobs
will be?

Speaker 3 (05:14):
It is just too fast moving, you know. One of
his big phrases, he told me a couple of times,
was like it depends on how good the AI gets
and how fast. Right. But then there are some things,
and you know, he talks a lot about the jagged
frontier of AI, which is sort of like the AI
can be really good at some things and then just
like really horrible at some things that you would expect

(05:35):
even the basics, like estimating the word count of an article, right,
like you expect any human to nail that pretty easily.
But when you think about all the different jobs, or
even your job, or any job, what level are you
bringing some sort of moral or technical or whatever responsibility
to that job, Like you are signing off on something,
you are the person saying like, yes, this is good

(05:58):
and right and best and, you know, whatever it may be.
And that can exist in a lot of different things.
In writing this article, I'm accepting responsibility for the
truthfulness and accuracy of this article, and the AI can't.
It's just not part of our moral world. Right. You
can't turn to the AI and blame it when something
goes wrong. I mean you can, but it doesn't care.
And so you know, you can think about the like

(06:18):
extreme far end of that, in like autonomous warfare or
something where like AI robots kill somebody, they can't accept
moral responsibility. So the first category was trust, and where
are humans gonna still be very necessary? Where does AI need
them to authenticate and to provide that sort of trust?

Speaker 1 (06:38):
I mean, to your point. This piece appeared in the
New York Times magazine. And when we think about like
the history of newspapers, or the history of publishing, or
the history of publishing houses, like this concept of imprimatur,
like I know if this article or this book or
this movie comes from this studio or this producer or whatever,
that I can trust it to a certain extent. The

(06:59):
output of AI is often entirely divorced from its creator.
But you know, you mentioned in the piece some of
the roles that may emerge in this trust bucket, like
AI auditors, trust authenticators, AI ethicists. And I wanted to
ask you, obviously, one of the things you have to
think about is how much cultural demand will there be
for these types of things. I mean, we're living in

(07:20):
a world now of alternative facts and conspiracies, and I'm
just wondering, do you think the demand signal will be
there from the world, even if morally it should be?

Speaker 3 (07:31):
I think that there are a
whole lot of tasks and a whole lot of things
that we really would love AI to do, right like
that we would be just fine with AI doing,
and in fact, we probably already do a lot of them, right,
Like I have AI transcribe my interviews, and I don't
think twice about the moral implications of doing that. And

(07:53):
I think that like, as humans, we sometimes jump to
these very extreme cases, right, like I just used warfare,
but like you know, replacing human creativity and things like that,
and yeah, we're gonna have to think very carefully about
the lines there. But I think that there's a whole
bunch of tasks that you're just fine with.
And so you know, one example that I kind of
reference in the piece in Trust is that, like, oh,

(08:16):
I thought that like HVAC repairmen might be some of
the last people to be affected by AI, But in fact,
HVAC repairmen have to do a lot of paperwork, right,
and a lot of things that aren't really core to
what they enjoy about the job. But if they're using
AI to do their contracts and to do their paperwork,
at some point someone needs to validate that, like those

(08:39):
contracts are accurate, that they're legally fair, because you can't
trust the AI; the AI is not a trustworthy entity. So
like even trust comes in there, like somebody needs to
validate that, and it becomes a little bit different when
you haven't created the thing, the contract or the whatever yourself,
like the same problem I was having with the article.
You have to have a

(09:01):
sort of slightly different set of skills to be able
to be like, I'm very familiar with where the jagged
edge of AI is and what kind of mistakes it makes,
and there really can be very weird mistakes, right, like unexpected ones.
It's not like proofreading something a human wrote,
you know, and you can scale it up where it's
more complicated than HVAC contracts, and it's something in an
organization where they have a whole chain of

(09:22):
systems and somebody has to really know the AI well
enough to be able to go through that chain and
give it the like human approval of like, yes, this
is this is trustworthy and accurate. And I think when
I talk about ethicists, you know, a big organization might
be integrating AI enough into the organization that they sort
of have to have some level of explainability, right and

(09:44):
some level of justification of like we let the AI
make these decisions and not these decisions, and here's why
we do it, and those things need to be explainable
in a way that is satisfying to all kinds of constituents, right,
so investors, customers, you know. But it can be like
if the organization ends up in court, right, they have
to explain it to a judge and a jury in
a way that they're like, okay, that's rational and ethically sustainable. So,

(10:07):
you know, one of the things I like to say
is that like the AI boom might actually be a
big boost for philosophy majors who can think through the
sort of philosophical implications of how the AI is used
in an organization and create a rational chain of ethical
explainability for why it's done that way. Because so much

(10:28):
of large corporations comes down to as they get bigger,
there's just more and more liability everywhere.

Speaker 2 (10:34):
Maybe one day there will be an AI hallucination interpreter,
which would be very interesting, I think. I don't know.
The second bucket is integration, and these jobs seem to
be more technical in nature. Can you talk a little
bit more about the integration category and what those jobs
will be like?

Speaker 3 (10:52):
Sure. So another expert I talked to was a fellow
named Robert Seamans, who's a professor at New York University
who studies a lot of this, and he's like, well,
certainly there's going to be some technical roles, right, like
people need to understand the AI, and not just like
we know how to build models. We know how to
build AI. But these are sort of the most immediate
jobs that you can see coming up that'll be very

(11:14):
big for the next some amount of years, you know,
and they may shrink as the AI changes, right, we
may need fewer integrators over time, but like right from
the jump, you need someone at your organization who really
understands the models, can really dig down, can really understand
what the AI is doing and why, and can help
map it to you know, the specific company's peculiarities and

(11:35):
needs and wants and desires, right, And so one of
the first ones he talked about was just the AI auditor,
someone who can go and just really create some sort
of understanding of the AI for people in the business.
So you know, you can almost think of these as
like a sort of twist on or an addition to

(11:56):
your typical IT manager, but isn't quite so technically focused, right.
They're just like, Okay, our sales teams and models are
not showing this, We're not hitting the optimization right, So
just keeping up with the models and like which
one is actually best at algebra right now? Right? Like
which one is actually best at writing right now? Right?
It changes every few months. Just even keeping up with

(12:17):
that if it changes at this pace, can be a
pretty substantial job. So you can see AI being a
great optimizer and a great tool, but someone needs to
understand the system enough to work with the AI to
make sure that like, oh, we're having this problem, like
people aren't washing their hands enough. How can AI help
us in a hospital setting? Someone needs to be thinking about,
like how to make the AI get to the outcomes

(12:40):
that they want to see. Because it's a very powerful
tool for some of these things, but you know, it
needs someone kind of prompting it and helping and integrating
it to these very very complex, big organizations.

Speaker 2 (13:00):
After the break, why taste making will be the industry
of the future, stay with us.

Speaker 1 (13:20):
So the first bucket was trust essentially human in the loop,
a category of jobs around ultimately like taking responsibility right
or being the final arbiter of like what is just
what is legal, what is correct. The second bucket is integration,
which is essentially like, how do we know enough about

(13:43):
these tools to harness them effectively.

Speaker 3 (13:46):
Optimize them and harness.

Speaker 1 (13:48):
The third bucket is taste, and this one I think
relates most directly to what me, you, and Karah do
for a living. And I was curious, how did you
choose that word, and what does it mean in the
context of your piece and the jobs that may emerge?

Speaker 3 (14:02):
Yeah, I mean, I think taste is going to be
very core to a lot of things, and not just
creative jobs. It's also the one that I feel like
people kind of perk up, because it's also sort
of humanly appealing. But you know, as I was just
thinking about it and talking to people, and I use
this in the piece, I just had this viral
clip of Rick Rubin in my head, right, of him

(14:24):
on 60 Minutes, with Anderson Cooper talking to him and
saying like, do you know how to work a soundboard?
And he's like, no. You know how to play instruments?

Speaker 2 (14:31):
No?

Speaker 3 (14:32):
Do you know anything about music?

Speaker 2 (14:33):
No?

Speaker 3 (14:33):
And he's like, well, what do you do? And
his answer, I'll paraphrase, I don't have it in front of me,
is, you know, I am very confident in my taste, right,
and so like I had this in my head because
you know, just thinking about what do we still need
humans for? And at a base level, like the
AI doesn't want anything on its own, right, Like, unprompted,

(14:55):
it will just sit there idle. Right. So at some point,
the basic human AI interaction is the human asking the
AI for something. And to me, one logical route you
go down if you explore that enough, is that the
human is providing the taste for what it wants created.
And by created, I mean it could be a creative

(15:15):
thing like a piece of music, or it could be
a non creative thing like a business process. Right, But
at some point somebody is looking at it and saying like, hey,
I have the vision for what I want. But what
they are really doing is they're making creative choices. And
they're not wrong or right choices. They're choices of aesthetic
or choices of function, right. Like, so there's multiple

(15:37):
ways to find a solution, but they're sort of using
their taste and their judgment to make these creative decisions
to get to their outcome.

Speaker 1 (15:44):
Karah, I'm curious for your take here, because you know,
wearing your other hat, you're a TV producer, successful TV producer.
How do you think about this idea of taste being
a key place of human irreplaceability.

Speaker 2 (15:59):
Taste is certainly I think the final frontier to be
messed with. I try to think of it. Does AI
have taste? AI has taste insofar as what is fed
to a large language model. If you feed a lot
of Beethoven to a large language model and it's spinning

(16:20):
out music that is supposed to be modeled after Beethoven
or Bach or whoever, you still need a human being
who's going to decide is this music good. I think
things that are synthetically made by large language models can
be good. But I still think there's someone who's deciding

(16:42):
ultimately if that thing is good or not. And that's
a person.

Speaker 3 (16:46):
And that's what I mean, at some level someone's
making those decisions. And I think, like, again, there's these
far out examples of like, oh, I use the AI
to do everything, and that's going to have a certain quality.
But there's like, oh, I use the AI to do something,
and I do some things myself, right, I do some things
analog, and that's going to have a different quality, right,
Like those can both exist?

Speaker 2 (17:07):
Do you find that quality control, that QCing, has become a
harder job? Like I do notice that just even in
LinkedIn posts or in Instagram posts, people are relying more
on ChatGPT to create the language that they're using.
And there is a sort of enshittification of things

(17:27):
because we have become used to just accepting that things
aren't as good anymore because people are using ChatGPT
to produce content. I know, is there a job that
exists to push back on enshittification?

Speaker 3 (17:40):
I guess. I mean, like I've done enough writing experiments
now that I can tell, especially like LinkedIn is just
rife with it. There are just certain constructions.
The em dash is one, but the one that really gets me
is the "it's not X, it's Y" construction. Once I tell you this, you'll

(18:01):
see this on LinkedIn like every single post. It's like
the future is not you know, apples and oranges, it's bananas.
It's like that construction just standing on its own
is like when the AI is trying to be like
a muscular copywriter, especially Claude. It just
loves that construction, right. And yeah, I think that

(18:21):
it's funny because when I think about AI in creative fields,
and I try to think about it sort of more
optimistically. In writing, like, there's this extreme example of like
I just use it to just write the whole thing,
or I use it to do whatever. Like there's a
lot of room between where we are and that being
the outcome, and there's a lot of like sort of

(18:42):
positive room between there. And so like one of the
things that I also do is I work on documentary
films and I'm making a documentary film right now about
AI weapons. And it's going to cost a little over
a million dollars to make that film, right, And that's
that's relatively inexpensive for a documentary film. And would I
love it if AI could help me make that film

(19:03):
for five hundred thousand dollars. Like there's a lot it
can't do. It's not gonna like DP my shoots and stuff.
But like, you know, I have an editor, and you
know it's gonna take months and months to get the
edit together, and I wouldn't want to replace that editor
because that editor is a valuable story collaborator whose taste

(19:23):
I love. But I would love the tools to help
him be able to do in ten weeks what takes
him twenty because it can like do sort of first
cuts quickly for him, and it can sort of like
let him try things much faster, like so therefore we
can make that documentary faster and we can move on
to another one. I would absolutely love that, right, like,
and I think that he would too, but like to

(19:44):
go right to like, well, just let's just have the
AI edit the whole thing and it'll be super cheap. Yeah,
but it'll suck.

Speaker 1 (19:51):
Now, but what about with this piece, though?
Because V1, the one you knocked out in two hours using
AI, maybe came in as a little bit
of a straw man in the piece, right. But on
V two that you wrote yourself, like, were there places
where you helpfully leveraged AI to make it better? And
if not, like, when do you imagine starting to do
that and in what way?

Speaker 3 (20:12):
So I didn't use AI in this piece for that,
Like I didn't, And I would say one of the
reasons is I have editors at the New York Times
magazine who are very good.

Speaker 1 (20:23):
And even better than GPT five.

Speaker 3 (20:25):
But what I will say is I've done a lot
of experiments with writing in both fiction and nonfiction, of
trying to figure out where to use the AI, and
what I find is that for me it becomes unglued
very quickly. Even Mollick talks about this in
the piece, which I very much agree with. He says
that he never lets the AI create a first draft,
and that's something that people have talked about, like, oh,

(20:46):
use it to get over your writer's block, have it
created a first draft. But like his point, which I
find for myself too, is that the AI starts to
dominate your thinking. It puts you in the AI box
and then you're like, well is this where I would
have gone without the AI? And it's sort of like
you end up in your own existential crisis. But then
I find even as I use it as an editor, I'm
sort of saying like, hey, is this good? Is this?
And it'll start to give some suggestions like oh, that's great,

(21:08):
like you could probably lay on this point a little harder,
or like maybe this is a good opportunity to introduce
some moral ambiguity or whatever. But as you as you
start using it to do that. You're handing that taste
to the AI, right in the same way that like
when I write a piece for the New York Times, I
give it to my editor because I'm so close to
it that I'm like, is this is this even any good?

(21:31):
Am I any good? Have I ever been any good
at anything in my life? That's sort of the writer's
descent into madness. But you're trusting your editor now
to tell you, like, yes, this is very good, right
that this part's working. This part isn't working. I find
it really fraught to hand that to the AI because
it really doesn't know, but it will tell you that
it knows.

Speaker 1 (21:51):
When we look at the three buckets in this article
which are trust, integration, and taste, one of the things
that strikes me is that these have a somewhat pyramid-like
structure, right. So like, ultimately, when it comes to trust,
somebody has to take responsibility legally, morally or otherwise for outputs.

(22:13):
When it comes to integration, like somebody has to diligence
the tools and decide which ones are helpful and how
to integrate them. And when it comes to taste, somebody
has to be a tastemaker. Somebody has to be a
talented creative or an experienced creative, whether it's in music
or organizational design or writing or whatever it is. So

(22:36):
I guess my question is how many
of the jobs, because you also came up with a
concept of the AI plumber, right, how many of the
jobs that will emerge in the AI revolution are kind
of very much top-of-the-pyramid types
of jobs. And was that a consideration in the piece, like,
even if there are these new jobs created, will they

(22:58):
make up for the jobs lost?

Speaker 3 (23:00):
You know, I would say that I'm going to be
kind of woeful at answering that, because I think while
I did it under the structure of like listing some
new jobs, like really my hope was to sort of
like help people philosophically think about like where new jobs
will be and maybe even what their own job will be,
and like what interacting with AI will be like. As

(23:23):
I say, I think that part of it is like
thinking about where the AI needs humans. It's not so
much, like, I try to resist the idea of
humans serving the AI, because again, the AI doesn't want
anything if you follow the chain far enough, eventually you
get to a human that wants the thing. Right, So
it's like, where does the AI need the human? It
needs some trust, and it doesn't really understand your business.
It needs some integration, and it doesn't really understand what

(23:45):
you want, so it needs your taste. So there, it's
hard to say, like how much will be sort of
AI plumber versus like sort of setting the taste. You know,
I think that probably ultimately, like the taste maker part
of it is so fundamental to human creation of any kind,
that like that will be the longest lasting, right, Like

(24:05):
I think integration might be the shortest lasting, but like
could still be decades. It's interesting because every single person
I talked to expressed serious trepidation about where we are
headed in terms of jobs. All of them would say
AI is going to result in more prosperity, is going
to increase wealth. But will that prosperity and wealth accrue

(24:27):
to capital, or will it accrue to labor? Right?
And that's the big unknown that I think people really
kind of wring their hands about. And I think that
like we are definitely at the place where it could
go either way. I am a little bit of a
firm believer in you know, sort of the phrase the
car goes where the eyes go, we will go where

(24:47):
we sort of collectively point ourselves to go. Right, none
of this is foreordained. The technology does
not make it guaranteed to go one way or the other.
I think that in theory, right you have a bunch
of companies working on this, A bunch of them are
playing with open source. In theory, the tools will be commodities,

(25:08):
which is to say, unless OpenAI and
Anthropic and everybody decide that they get to a model
that's so good that they just close it off for themselves.
Like we'll all have access to these tools. That should
in theory, and this might not be very comforting, but
in theory, that should be democratizing, right, like the Internet was.
That should enable more of us to be able to

(25:29):
do more things, and that should be actually threatening to
bigger organizations. So like, my hope is that we should
be entering a very entrepreneurial age where like small teams
of people can take on big incumbents. But
we're in this weird quirky
area right now where it doesn't look like that, right, like,

(25:51):
what it looks like is we have these massive monoliths
that are just getting stronger and more impenetrable, which I
think is part of why it seems so scary. Right,
They'll just accumulate cash till the cows come home, and
the rest of us are going to be poor. And
I do think that, like in this sort of one
sense of like being a corporate cog in one of
those massive enterprises, Like there's never been a worse future

(26:12):
for that. But like if you can start to think entrepreneurial,
if you can start to think about, like, hey, how
can I use these tools to do something that excites me? Right,
that I'm really thrilled about, Like I want to make
documentary films, right, Like, and it's so expensive, but like, hey,
maybe they can come down and I can start to
tell the stories I want to tell in the ways
I want to tell them, right, Like that's just for me. Like,

(26:34):
you know, my hope is that like there's all sorts
of stuff that we don't see coming that is going
to be really empowering to people. I don't know that
that's true, right. There's certainly enough worry about it
just being capital building on itself, right, which will require some
other kind of intervention, which is fairly bleak. But
again I think the tools will be commodities, not the people.

Speaker 1 (27:02):
Robert Capps, thank you for joining us on tech Stuff.

Speaker 3 (27:04):
Thank you, thanks for having me.

Speaker 2 (27:28):
That's it for this week for tech Stuff.

Speaker 1 (27:30):
I'm Karah Preiss, and I'm Oz Woloshyn. This episode
was produced by Eliza Dennis, Tyler Hill and Melissa Slaughter.
It was executive produced by me, Karah Preiss, and Kate
Osborne for Kaleidoscope and Katrina Norvell for iHeart Podcasts. The
engineer is Beheid Fraser and Jack Insley mixed this episode.
Kyle Murdoch wrote our theme song. Please do rate, review

(27:51):
and reach out to us at tech Stuff podcast at
gmail dot com. We love hearing from you.
