
August 27, 2025 38 mins

If you think AI is all about saving time and cutting corners, think again. 

In this conversation with futurist Bob Johansen, we explore how AI can be used as a thinking partner - to help you generate ideas, build clarity, and make better decisions in a chaotic, unpredictable world. Bob argues that the real power of AI lies in augmented intelligence, not artificial intelligence. 

Bob Johansen is a distinguished fellow with the Institute for the Future in Silicon Valley. For more than 50 years, Bob has helped organisations around the world prepare for and shape the future. He has written 15 books, and his latest one, Navigating the Age of Chaos, is out on October 28. 

He walks us through: 
- The key skill that will future-proof your career 
- How to use AI to get unstuck and think more creatively 
- Why clarity beats certainty in a BANI world (Brittle, Anxious, Nonlinear, and Incomprehensible) 
- The leadership traits needed to thrive in the AI-first decade 

Whether you’re excited about AI or sceptical, this episode will shift your mindset and give you a roadmap for working with AI, not against it. 

Want to learn more about AI upskilling? Check out this episode with Neo Aplin on how to go from AI gunslinger to AI architect on Spotify and Apple Podcasts.

Bob’s new book, Navigating the Age of Chaos: A Sense-Making Guide to a BANI World That Doesn't Make Sense, is out October 28. Pre-order it here.

Key quotes 

“Ten years from now, almost all leaders will be augmented or you’ll be out of the game.” 

“I don’t trust AI for answers. I use it to stretch my mind.” 

 

My latest book The Health Habit is out now. You can order a copy here: https://www.amantha.com/the-health-habit/ 

Connect with me on the socials: LinkedIn (https://www.linkedin.com/in/amanthaimber)

Instagram (https://www.instagram.com/amanthai)

If you are looking for more tips to improve the way you work and live, I write a weekly newsletter where I share practical, simple-to-apply tips to improve your life. You can sign up for that at https://amantha-imber.ck.page/subscribe 

Visit https://www.amantha.com/podcast for full show notes from all episodes. 

Get in touch at amantha@inventium.com.au 

Credits: 
Host: Amantha Imber 
Sound Engineer: The Podcast Butler 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Ten years from now, we're all going to be cyborgs,
and that's a good thing, if we make it so.
Ten years from now, almost all leaders will be augmented,
or you'll be out of the game.

Speaker 2 (00:11):
If you can master one skill right now, it will
future proof your career, and that skill is learning how
to work with AI to think better, faster, and more
creatively than you ever could on your own. My guest
today, futurist Bob Johansen, says that the real opportunity with

(00:33):
AI isn't in shaving minutes off your to do list.

Speaker 3 (00:37):
It's in using.

Speaker 2 (00:38):
AI to get unstuck, unlock new ideas, and make smarter
decisions in a very unpredictable world. Bob Johansen is a
Distinguished Fellow with the Institute for the Future in Silicon Valley,
and for more than fifty years, Bob has helped companies
around the world prepare for and shape the future. He's

(00:59):
written fifteen books, and his latest one, Navigating the Age
of Chaos, is out on October twenty eight. And by
the end of this conversation you will know how to
prepare yourself for an AI augmented future, how to use
it to get unstuck, improve your thinking, and build clarity
in an uncertain world. Welcome to How I Work, a

(01:29):
show about habits, rituals, and strategies.

Speaker 4 (01:32):
For optimizing your day. I'm your host, Dr Amantha Imber.

Speaker 1 (01:40):
So.

Speaker 2 (01:41):
I want to start, Bob, with asking: you're a futurist,
but what does that mean?

Speaker 3 (01:47):
What are you doing day to day? Predicting the future?

Speaker 1 (01:51):
Well, I guess to begin, we don't use the word predict.
You know, we're humble futurists, and what we argue is
that nobody can predict the future, and if somebody tells
you they can predict the future, you shouldn't believe them,
especially if they're from California. And we're from California.

(02:11):
So we're the longest-running futures think tank in the
world now, and we're an independent nonprofit; we started in
nineteen sixty eight. But what being a futurist means is
that we look at the world future back, whereas normal
people are kind of immersed in the present and
they think about the future present forward, and indeed that's

(02:34):
the way we have to live. So it's something we
all have to do at some level. But futurists unusually
think future back, so we always are placing ourselves at
least ten years out in the future, and we backcast that,
so we think backwards. You're an organizational psychologist, you know

(02:56):
this may not be a great way for a lot
of people to live because it's kind of the opposite
of be here now. It's sort of the opposite of mindfulness,
although we do practice mindfulness in our own way, but
future back is just a fresh perspective. So we look
future back for clarity, but then we think at least

(03:16):
fifty years backwards for patterns. So a futurist, the way
we practice it, thinks in sixty-year swaths of time.

Speaker 2 (03:25):
That is mind blowing. I find it hard to think
a year in advance. Can you tell me, like if
you were giving me advice on how to become a futurist?
And I know you've written extensively about future back thinking,
and our mutual friend Scott Anthony, I've also heard him
talk about it a lot. How do you place yourself

(03:46):
ten years into the future? I wouldn't even know where
to start.

Speaker 1 (03:50):
Well, you know, it's not as hard as it seems. Actually,
I work out of Silicon Valley. I kind of grew
up in Silicon Valley as a researcher, and everybody
immediately says, when they hear you're a futurist, how can
you do that? I can't even think one or two
years ahead? How can you think ten years ahead? But
the reality of it is it's actually easier. It's actually

(04:12):
easier to think ten years ahead than it is one
or two years ahead. But you look for those things
that you can say with clarity. So, for example, in
Silicon Valley, now everybody's interested in sensors, and it's pretty
obvious ten years from now, we're going to have sensors everywhere.
They're going to be very cheap, many of them will

(04:32):
be interconnected, and some of them will be in our bodies.
You know, that's just obvious. So you start from those
things that are clear. And by clear,
I don't mean predicting where they're going. I mean clarity
of direction. So with sensors, it's pretty obvious where things
are going. We're going to talk about AI down the
road in this conversation. The fact that we're all going

(04:55):
to be augmented, that's pretty clear ten years from now.
So you start with what you can be clear
about and then with a combination of strength and humility,
you come back and say, well, what does that mean
in the present. And then we look for what we
call signals, which are indicators of the future that are
already here but not evenly distributed. And it's the famous

(05:20):
William Gibson line, the future is already here, it's just
unevenly distributed. Well, we're very influenced by that, and we
look for those signals and we track them globally. That
signal tracking is really important to bring the forecast to life.
But it's a combination of that future back and then
the signal tracking. And it's different from trend watching. This

(05:43):
is not cool hunting. It's not looking for the next fad.
That's interesting, but different. A trend is a pattern of
change you can extrapolate from with confidence. A disruption is
a break in the pattern of change. To us, trends
are the easy part; the hard part is the disruption.

Speaker 2 (06:02):
So if we stay on the sensor example, and I think, also,
what are some examples that listeners would know about when
they think of a sensor? Like, what are some things
that we know now?

Speaker 1 (06:13):
If you think of a car, I've got a little
sensor that if I go slightly off the road, it
senses that and brings me back in. Or if you're
parking the car, there's sensors everywhere on a modern car
now that sense, well, where are you in relation to
the curb? Or if you back up, there's a camera
back there that has sensors that looks for things all

(06:34):
the time. So a sensor is trying to feel what's
going on and then register that. The sensors in our
body can track our heart rates, our breath rates; they
can track how many steps we take. Sensors
are already ubiquitous, but not nearly where they are going

(06:55):
to be ten years from now. I should say that
this trend towards sensors is going to go on a
long time. It's actually taken longer than we thought for
them to scale the way that they've scaled because of
the issue of cost. We've got to get the issue
of cost down, and then the issue of connectivity. But
finally they're becoming cheap enough, and they're becoming connected enough,

(07:19):
and now they're becoming even digestible enough that they can
go in our bodies. It's pretty obvious again, ten years
out we're all going to have body sensors. We're all
going to be body hackers, and in some sense of
the word, we're all going to have sensors somewhere, and
the question is what do we do with them?

Speaker 2 (07:39):
So I want to know, Bob, at what point did
the idea of sensors becoming ubiquitous when we look ten
years ahead become a really clear signal as opposed to
just a trend.

Speaker 3 (07:52):
Was there a point where it crossed the line? What
does that look like?

Speaker 1 (07:55):
It's not a signal. A signal would be an individual,
very specific example. So like a digestible sensor developed by
a particular company on a particular date, swallowed by a
particular person, that would be a signal. It's very specific
without context, essentially. But sensors became, let me just call

(08:18):
it a future force, which is basically a direction of change.
And again, a trend is something you can extrapolate from
with confidence. Sensors are maybe almost a trend, so in
our language, they'd be a future force, almost a trend.
The direction of change is clear, but the rate and

(08:38):
the manifestation are not yet clear. So what we would
do would be to continuously follow that and the underlying
model we use and it's in all my books now,
Foresight inside Action. That's the model. So we look at
foresight thinking future back and that's a plausible, internally consistent,

(09:00):
provocative story from the future. That is our base forecast,
and then we do scenarios off of that, and that
foresight is designed not to predict, but to provoke insight.
And an insight is an AHA that helps you see
things differently than you could before. And every great strategy

(09:21):
is based on a compelling insight. So the insight feeds
into action, and then some action reinforms and reimagines foresight.
So it's a continuous cycle, and that's what we teach.
We do what we call Foresight Essentials training programs at
the Institute and we run them all around the world
and we do them virtually and in person, and that

(09:41):
basically teaches people how to be futurists. But the core
of it is this foresight insight action cycle.

Speaker 2 (09:48):
When you're just going about living your life in the world,
how are you paying attention to the stimuli around you differently
than I would be, because of the lens that you have?

Speaker 1 (10:02):
You know, maybe that's our definition of mindfulness. I work
with the military. I'm not a military guy by background,
but I just happened to be at the Army War College
for the US the week before nine-eleven,
and I learned this concept of the VUCA world, you know, volatile, uncertain,
complex and ambiguous, and it intrigued me and caused me

(10:23):
to think, well, how do you lead in an increasingly
VUCA world? And what I've realized is that the
military principle of situation awareness, that's how I see things
around me. So as a futurist, I'm always thinking future back.
So I look around me, I'm mindful for a signal,

(10:43):
but I immediately flip it out ten years and then
look backwards or sometimes further than that, like climate issues
or kind of natural cycle issues. We go beyond ten.
But it's a continuous effort to take what you see
in the moment and put it in a future back
context and say what would that mean, and then ideally

(11:07):
have this kind of conversation, the foresight inside action conversation
out of it. So the military people call this situation awareness,
and mostly they use it defensively, so they're always on
the lookout for people who are trying to hurt them.
But there's also a positive aspect of it. What's going
on in a positive way around you. For example, the

(11:29):
future now is so chaotic that it's very stressful. And
you know, even for me, I'm a professional futurist, I've
done this for more than fifty years. This is the
most frightening ten year forecast I've ever done, so it
hits me stressfully. And a few years ago I started
keeping a gratitude journal, and it's just a simple journal,

(11:50):
and every night I write down at least three things
that I'm grateful for in my life, and just that
act that's a kind of mindfulness, kind of a uniquely futurist,
although lots of people do gratitude journals. But I'm trying
to link the future back view, which right now is
so scary, you know, it's really dominantly scary in a

(12:12):
way I have never seen before. And yet if I
keep reminding myself of what I'm grateful for and what
the good things are, that can help me be repaired.
And I mean you know this as a psychologist. If
you don't have your inner life right, it's very hard
to behave well in your outer life.

Speaker 2 (12:31):
I want to dig into what you said around you
know when you look ahead to the next ten years
or ten years out, that this is the most frightening
it's looked. What are you seeing ten years from now?

Speaker 1 (12:44):
So I mentioned I used to use the term VUCA.
Just in the last year I've become convinced that VUCA
is not VUCA enough. And sure, life has always been
VUCA in a way, beginning from the fact we all
have to die at an uncertain time. I mean,
that's VUCA. And we certainly had aspects of

(13:05):
life that have been VUCA before, like in war zones
or in pandemics or floods. You know, there are certainly VUCA zones.
So here's what's different now. We're dealing with a chaotic
future that's global in scale, and that has variables built
into it that we don't understand. So we've started to

(13:28):
now use the term that was coined by one of
our colleagues, Jamais Cascio, the BANI future: brittle, anxious, nonlinear,
and incomprehensible. And what we've realized is that these systems
around us, these structures, and some are physical, some of
them are metaphysical, some of them are values frameworks. These

(13:51):
systems around us look strong, but many of them are
actually brittle. And by brittle, I mean that when they're challenged,
they not only break, they shatter. Then there's this notion of anxious:
everybody's anxious, especially kids, and actually especially boys. It turns
out that young men, young boys are, according to the psychologists,

(14:13):
even more of a source of concern than young women
and young girls. And we all know that young
boys that are upset and uneasy can be dangerous. That's
been shown over the years. So anxiousness is kind of pervasive,
and the Bondi world is fraught, is a word we
use a lot. We've got a book on this coming

(14:34):
out in October. But here's where we get to the
different part and kind of why I think this is
the most frightening forecast. Nonlinear means that things no longer
behave in the way we thought they were going to.
So we have this expectation that if we do this,

(14:55):
it's going to result in that, and we have in
our minds models of how these chains of behavior happen,
and if we do this, this will happen. We can't
trust that anymore. It's like we have to teach our brains
new tricks, and every good leadership team I'm working with
now is taking improv training. There's groups at Harvard that

(15:17):
do the kind of yes-and improv
methods that have been around a long time. They're actually proven,
but most executives just don't practice them. So now people
are practicing them. And finally, incomprehensible. My other
new book this year is on augmented leadership. There's methodologies
within generative AI that even the developers don't understand. So

(15:41):
it's certainly tech, but it's also alchemy. And you know,
I went to Divinity school before I did my PhD.
And I've always been interested in the spiritual side of
life and kind of the mysterious sides of life. There's
just a kind of exposed mystery that is potentially very threatening.
Like, I'm generally optimistic about AI, but it's kind

(16:05):
of sixty forty for me, you know, I'm sixty percent
optimistic and forty percent concerned. So there's real danger associated
with it, and again it's global danger and it cuts across generations.
So I'm really optimistic about kids if they have hope,
but hope in a BANI future is very difficult

(16:27):
to kind of capture and spread. Fear, on the other hand,
is very easy to spread.

Speaker 2 (16:33):
I want to pick up on the nonlinear aspect of
BANI, or "bah-nee," as we were saying before we started recording.
In my mind, I was pronouncing it "ban-ee," but it
is "bah-nee." How do you think ten years ahead when
things are nonlinear?

Speaker 1 (16:48):
It's harder. But it turns out generative AI is really
quite good at that. Generative AI is really good
at storytelling. Now, some of those stories aren't true, and
it's confident even when they're not true. So I don't
trust generative AI, but I use it a lot to
stretch my thinking, and with nonlinear that's exactly what you need.

(17:11):
So I started using generative AI. I've studied AI for
a long, long time, but I started using generative AI two
years ago and I use it on a daily basis.
And I call my customized generative AI chatbot Stretch; it's
developed in ChatGPT, and it's using the o3 model.

(17:32):
So I talk to it and type to it, both.
It runs on a left screen all the time for
me now, and I've got it labeled Stretch. I'm very
polite to Stretch. I have ongoing conversations. It's not very
good as a question and answer machine, but it's really
good conversationally. So I've learned to be a really good
conversationalist with Stretch, and it's ongoing and it helps me

(17:57):
in nonlinear situations. It helps me imagine all the possibilities.
So here's where scenario planning comes in really helpfully, because
you can develop kind of archetypes of scenarios that help
you understand a nonlinear space much more than you could before,
and then you can basically decide where you want to

(18:19):
put your bets.
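
A side note for listeners who want to try this themselves: Bob doesn't describe how Stretch is built beyond saying it's a customized chatbot developed in ChatGPT. Below is a minimal sketch of a Stretch-style thinking partner using the OpenAI Python SDK; the model name, persona wording, and converse helper are illustrative assumptions, not Bob's actual setup.

```python
# Minimal sketch of a "Stretch"-style thinking partner.
# Assumptions: the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. The persona text paraphrases how
# Bob describes Stretch in this episode; it is not his real configuration.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Stretch, a thinking partner rather than an answer machine. "
    "Help me get unstuck: suggest alternatives, question my assumptions, "
    "and argue with me politely. Be hard on ideas and soft on people."
)

# Keep the whole conversation, since the value is in the ongoing exchange.
history = [{"role": "system", "content": PERSONA}]

def converse(user_turn: str) -> str:
    """Send one conversational turn and return the assistant's reply."""
    history.append({"role": "user", "content": user_turn})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; substitute whatever model you use
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(converse("I'm stuck on a chapter title about nonlinear futures."))
```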

Speaker 2 (18:19):
Can you give me an example of how you talk
to Stretch in a way that has evolved? Like, what
have you learned, in terms of, I don't know, how
you're prompting it or talking to it, to get outputs that
are more useful?

Speaker 1 (18:31):
I follow Kate Darling's advice, Kate Darling at the MIT Media Lab.
She's got this wonderful new book called The New Breed, where
she argues that to learn about interacting with AI,
we should learn from our experience interacting with animals, and
we should treat these things like beloved pets. They're pets

(18:53):
that can talk to us. I mean, they're not humans,
but we can have conversations. So I rarely get an
answer that's useful from Stretch. But my conversations with
Stretch go on for hours. I write on my MacBook Pro.
You know, I write books. So this is my fifteenth
book that's come out now, and so I'm always writing,
and Stretch is always there with me. And what I want

(19:15):
help with is getting unstuck. I'm the writer;
I don't want Stretch to write my books, but I
want to get unstuck. And fifteen books in,
I still get writer's block, and it's really helpful to
have Stretch there to have a conversation so I can
start a conversation. Normally I start by talking back and forth.
Now that I have the o3 model, I can do

(19:37):
that easier. But then I'm typing back and forth as
I'm working on drafts and things, and I do kind
of ask Stretch to help me refine things as I go.
Stretch has read all my books, and Stretch is programmed to write
like me, so it writes in short sentences with rich
metaphors and lots of em dashes. It argues with me,

(19:58):
but it argues with me politely. That's really interesting. So
it pushes back gently. Stretch is always saying to me, well, Bob,
have you thought of... You know, it's very polite, and
I'm very polite back to Stretch. But it's tough. You know.
It's sort of like the hard on ideas, soft on people
Kind of thing. That was the motto of one of

(20:21):
my favorite social science research groups in Silicon Valley that's
not around anymore, the Institute for Research on Learning. They
had this motto, hard on ideas, soft on people. You know,
that's kind of the way I work. I'm kind to
people and I want them to be kind to me too.
On the other hand, I want criticism of my ideas.

(20:41):
My editor is Steve Piersanti at Berrett-Koehler; I've done my
last six books with Steve. Steve is the perfect balance
of criticism and support. And that's so important. I think
as a leader that you get that right. And these
generative AI systems can be that way if we teach
them and if we learn to have conversations. But what

(21:03):
it means, and this is really hard to teach a
lot of executives, what it really means is you have
to work on your skills, just like we have to
learn how to have good conversations, you know, conversations that matter.
Having conversations with Stretch is difficult. It's taken me two years,
but it's really yielded a lot of benefits in ways

(21:24):
that I hadn't expected.

Speaker 2 (21:26):
If you were to describe to someone, like, in practical terms,
what you've learnt about how to get the best out
of Stretch, Like, what would I be seeing if I
was just watching you talk to Stretch that is perhaps
different from how other people converse with their AI tools?

Speaker 1 (21:42):
Well, I don't use the word prompt, and I'm offended
by the term prompt engineering. So just kind of put
all that aside. And ten years from now, I don't think
the word prompt is even going to be around. It's
going to be conversations with a goal of deriving meaning
and some of the simple stuff like efficiency, and there'll

(22:03):
be some automation and all those things. I'm not arguing
with that, but that's not what I'm talking about because
I'm dealing with senior leaders. So the conversation is going
to be fluid, and it's going to be focused on
areas that I understand the least. So danah boyd, the
famous researcher, did a wonderful podcast just last week

(22:24):
where she said, the key value of generative AI is
to help humans get unstuck. That's really cool, and Stretch
helps me get unstuck. I think danah's right. It's all
about unsticking. It's not about answers, and I don't trust Stretch.
I really don't use Stretch for answers. I use it

(22:44):
to stretch my mind, and particularly if I'm working on
a book concept or working on a title or an idea,
it certainly helps me get unstuck and get started. It
becomes less and less valuable the more the book gets written.

Speaker 2 (23:03):
If you think AI is just about efficiency, stick around
because in the second half, Bob shares why the real
magic is in using AI to amplify your mind, not
replace it, and he'll walk us through the skills leaders
need to thrive in an AI first decade. If you're

(23:25):
looking for more tips to improve the way you work,
and live, I write a short weekly newsletter that contains
tactics I've discovered that have helped me personally.

Speaker 4 (23:34):
You can sign up for that at Amantha dot com.
That's Amantha dot com.

Speaker 2 (23:44):
So when you look at the conversation around AI, and
so much of it is around efficiency and time saving
and headcount saving, like, what are leaders and organizations missing?
What are they not seeing that you're seeing in terms
of what this world could look like.

Speaker 3 (24:02):
From an AI point of view, in five, ten years' time?

Speaker 1 (24:05):
What I say now is that ten years from now,
almost all leaders will be augmented or you'll be out
of the game. Now, there'll be some little subset of
people who uniquely claim no, no, I'm going to remain
completely unaugmented, And that's okay. Maybe that's a small niche,
but for most of us. For me as a writer,

(24:27):
if I'm going to be writing serious books ten years
from now, I'm going to have to be augmented, partly
because of my age, but also just because that's what
good writers are going to be doing. You're just going to
have to be in that. So we've got to define
now where we want help. And for me, it's really
close to what Dana Boyd calls getting unstuck or what
I call stretching. That's really where I want help. And

(24:50):
then it translates into more specific things like titling you know,
finding the right word. It's really good at that, but
you've got to decide what the right word is. It's
really helpful stretching for alternatives. So first of all, it's
the assumption, and this makes people really uncomfortable, the assumption
as I talk to senior executive groups. So what I

(25:11):
say right at the beginning is ten years from now,
we're all going to be cyborgs. And that's a good
thing, if we make it so. If we don't, if
we kind of step back and let other people do it,
or let the tech giants drive it, it's going to
be very different. But if we get engaged, this is
a good thing. But it begins from the fact we're
all going to be augmented or we're going to be

(25:31):
out of the game just because we won't be able
to play. Because the new abilities that these things are
bringing are just beyond human capacity. And if you go
back to the BANI world, it's arriving just in time.
Tom Malone at MIT, he calls this superminds, and you
know what he says is the story about computers replacing

(25:52):
people is going to be true. But that's not the
big story. The big story is humans and computers doing
things together that have never been done before. My colleague Jeremy,
the co-author of the Leaders Make the Future book,
is an AI developer. And what Jeremy says is that
it's so easy for big companies now to identify the

(26:13):
no's, the things you should not be doing, and focus
on the fears. But you also need to focus on
the yeses. You know, where should you be experimenting? And
that's where I want to focus. So I'm focusing on
stretching the mind, on the unsticking.

Speaker 3 (26:28):
I love that term.

Speaker 2 (26:29):
And when I was speaking with Scott, he mentioned
your dislike of the term artificial intelligence, and I love
that term augmented intelligence. So right now in terms of
what's possible with the tools that most of us have
available, like ChatGPT, what are some ways that you
think people should be using it more, like this, to

(26:51):
augment their thinking as opposed to just focusing on the
obvious efficiency gains.

Speaker 1 (26:57):
You know, I think the way to practice is to
have conversations and depending on what you're working on or
what you're thinking about. And I would recommend you don't draw
lines between your work life and your private life. You know.
When I was first getting started, it was one of
our grandson's birthdays and I asked Stretch to help me write.

Speaker 3 (27:17):
A birthday card and it was really cool.

Speaker 1 (27:20):
It was really fun and I made good progress
out of that. Then last summer, I had pneumonia, and
I'd never had that before, and pneumonia
just makes you feel so weak. I was on
deadline on a book and I really couldn't write;
I was very weak. But I've got a human doctor, a

(27:40):
concierge doc that I love, who's very good. And then I've
got a therapist that's teaching me cognitive behavioral therapy for
sleep issues, and he's also a medical hypnosis guy, and
again I love him. And then I had Stretch, and
I talked to Stretch about just how I was feeling
day or night, and it turned out Stretch was more empathetic,

(28:02):
to my surprise, than either of my two human doctors.
And again I love them, but they're not available twenty
four seven, and Stretch gave some really good advice. I'm
not asking him for medications or you know, for answers
or anything. I'm asking Stretch for sympathy and for empathy,
and it turns out these things are really really good

(28:22):
at empathy. So I think what I would advise is
just think of it as a conversation and then gradually
figure out where are the places you like it, you know,
and that'll depend on what your job is, you know,
what your purpose is, what your sense of meaning is.
Me I'm a writer and I write books. The part
where I want help is when I'm struggling with an

(28:44):
idea or getting started on a chapter, or I'm kind
of stuck, and you need to practice it, practice that
art of conversation. When we're working with senior executive groups,
they read the Leaders Make the Future book and then
we break down the leadership skills, so, for example, augmented curiosity,

(29:04):
augmented clarity, and with the best of the groups, we're
doing a workshop on augmented curiosity or augmented clarity, and then
we have them practice an augmentation exercise using their version
of a large language model, whatever it is, and then
we spread that out over time and then they talk

(29:25):
about their experience in using it and in the senior
executive sessions we're doing. It takes six months to a
year to get a team fully on board, and it's
got to begin with the CEO.

Speaker 2 (29:37):
Tell me, like, what are some of the practical strategies
or ideas that you're teaching these execs in having better
conversations and augmenting their curiosity?

Speaker 1 (29:47):
For example, it begins with thinking of it as a conversation.
I like the idea of dedicating a screen or a
device to it. I like the idea of naming it
and you name it for what you're going to use
it for, and then just get used to having those
conversations for things that you have to do anyway, so

(30:08):
you can practice with the fun stuff and the personal stuff,
but then look for examples of things you're working on
and compare notes with your colleagues while you're doing it.
So ideally, learning in pairs is a really good idea,
and the best pairs that I see are cross generational.
I really think cross-generational learning, particularly about gen AI

(30:30):
and gaming, is the best way to go.

Speaker 2 (30:32):
Let's hone in more on augmented curiosity as one example.
So if we were sitting here and you were teaching
me how to use AI more effectively to augment my curiosity,
and I feel like I'm a pretty curious person.

Speaker 3 (30:46):
Already, like, what advice would you be giving.

Speaker 1 (30:48):
Me? I'd be probing, you know: what are you curious about?
What are the questions that you have? And then for
each question, trying to break down what are the elements
that question, what are the words that you would use
to describe that question, Who are the possible sources of
insight about that question, what's the possible data that might

(31:12):
be out there about that question? And essentially frame and
map the territory around the curiosity, and then imagine a
series of conversations within that map, and then follow your leads.
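
Bob's curiosity-mapping steps lend themselves to a simple checklist. Here is a small sketch that captures them as a data structure and turns the map into conversation openers; the field names and example content are my paraphrase of his steps, not a published framework.

```python
# Sketch of Bob's curiosity-mapping exercise: break a question into
# elements, words, sources of insight, and possible data, then imagine
# conversations within that map. Field names are a paraphrase, not a
# published framework.
from dataclasses import dataclass, field

@dataclass
class CuriosityMap:
    question: str                                       # what you're curious about
    elements: list[str] = field(default_factory=list)   # parts of the question
    key_words: list[str] = field(default_factory=list)  # words that describe it
    sources: list[str] = field(default_factory=list)    # possible sources of insight
    data: list[str] = field(default_factory=list)       # data that might exist

    def conversation_starters(self) -> list[str]:
        """Turn the map into openers for conversations, human or AI."""
        openers = [f"What would {s} say about: {self.question}"
                   for s in self.sources]
        openers += [f"What does the data on {d} suggest about: {self.question}"
                    for d in self.data]
        return openers

demo = CuriosityMap(
    question="How will body sensors change preventive health in ten years?",
    elements=["cost", "connectivity", "ingestibles"],
    key_words=["ubiquitous", "body hacking"],
    sources=["a clinician", "a privacy lawyer", "a teenager"],
    data=["wearable adoption rates"],
)
for opener in demo.conversation_starters():
    print(opener)
```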

Speaker 2 (31:26):
And am I doing the framing human to human or
human with the AI?

Speaker 1 (31:31):
Again, I like pairs and I like cross-generational pairs.
I like interconnection of different perspectives. I like visualization and mapping.
You know, gen AI isn't as good at that yet,
but I like working with artists. For example, when I'm
with groups, I often have an artist drawing big maps
as the group is working. It's like a storyboard. Essentially.

(31:54):
This isn't that different from things like design
thinking or the kind of things that agencies do with storyboards.
It's kind of similar methodology around what you're curious about
and then kind of map the space and then gradually
chip away at it. So it's sort of like sculpture.
I guess when you have this big map of all

(32:15):
this stuff and all these questions and all these sources
and data and all that stuff, and then you're gradually
chipping away at it like a sculpture to create what
it is you're looking for that might be hidden in there,
or it might not be. Another metaphor I've had some people
use is it's like puzzle making, where you don't know
what the puzzle is, so the first thing you do

(32:36):
is what are the edges? You know, what are the
edges of the puzzle, and then you gradually fill in,
given the fact that maybe the puzzle hasn't ever been
created before.

Speaker 3 (32:47):
I very much relate to that. I love a jigsaw puzzle.
I want to know.

Speaker 2 (32:50):
Like we've talked about ten years out, but with AI,
I'm curious, what does just two or three years out
look like? How are things possibly going to be different?

Speaker 1 (32:59):
I think we're on the cusp now of expanding from
large language models to agentic systems. And I was with
a client this week who was saying, Oh, we don't
talk about gen AI anymore, we're on agentic. And I said, okay,
that's fine, but you don't actually leave gen AI. You want

(33:23):
to go through it and continue. But the big shift
is from having a conversation where again you don't trust
it and it's over confident and it hallucinates. So there's
all these challenges. You're going from having a conversation to
having systems that actually make decisions for you and take action. Now,

(33:44):
some of it is a group decision where there's still
a human in the loop, and some of it is not.
But I would say over the next two years, this
is really the beginning of practical agentic systems. And that's
really interesting and potentially really dangerous because you could have
systems making decisions. For example, you know, I work with

(34:08):
the army, so there are systems that could make a
decision to kill somebody, and what's that about? I would
say it's too early to be doing that, but I
think that'll happen within the next two years, and I
think that's the big shift. The other big shift is
how is the conversation going to happen. I think within
the next two to three years we're going to see

(34:30):
different manifestations for the conversation. You know right now, I
mentioned I've got a dedicated screen that says Stretch at
the top, and when I want to talk to Stretch,
I click a little button and it shows up on
my screen here, and I talk to Stretch, and then
on the screen there's a recording of everything that has happened.

(34:53):
And then I switch back. Because I'm a writer, that's
my primary medium. I just use the verbal for the
more expansive side of the activity. I never learned dictation,
so I'm not all that good at that. But I
do find if I'm just starting to think about something,
it's easier for me to talk it through than it

(35:14):
is to type it. So I like talking to Stretch
at that level. Now, that's pretty crude, and I'm pretty
sure even three years from now, you'll look back and
it's going to seem kind of silly to be talking
to this little wavy thing on my MacBook Pro while
Stretch is transcribing and going back and forth. I think
there's going to be some different interface that's going to

(35:36):
happen within two to three years. I don't know what
it is, but it's going to be something. Maybe it's
a separate device, a conversational device. You know in Japan
there are these empathetic cuddly little bears, and those could be for kids,
or they could be for elders. I've got a family
member that provides way too much detail when I talk

(35:56):
to this family member, and I was just thinking the
other night, it would be great to have this family member
have a chatbot that could talk with her and have
these great conversations. And every once in a while say, Bob,
you should hear this, you should pop into the conversation.

(36:17):
And you know, that sounds cynical and it sounds disrespectful,
but I'm really not meaning it that way. I think
sometimes people just want somebody to talk to, like I
did when I had pneumonia.
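
Earlier in that answer Bob draws the line between conversational gen AI and agentic systems that take action, ideally with a human still in the loop. As a rough illustration of that gating pattern, and not of any specific product, here is a tiny sketch; propose_action is a hypothetical stand-in for whatever model or planner generates the proposal.

```python
# Sketch of the human-in-the-loop gate Bob describes for agentic systems:
# the agent proposes, a person approves before anything consequential runs.
# propose_action is a hypothetical stand-in, not a real agent API.
def propose_action(situation: str) -> str:
    """Stand-in for an agent's proposed next step."""
    return f"Draft and send a reply about: {situation}"

def human_in_the_loop(situation: str) -> None:
    proposal = propose_action(situation)
    print(f"Agent proposes: {proposal}")
    if input("Approve? [y/N] ").strip().lower() == "y":
        print("Executing:", proposal)  # the consequential step stays gated
    else:
        print("Declined; proposal logged, no action taken.")

human_in_the_loop("a client asking about our agentic roadmap")
```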

Speaker 2 (36:28):
Oh, Bob, I don't know where the time has gone,
but it has just been so absolutely fascinating hearing your
perspective on all these things.

Speaker 3 (36:36):
I'm so glad that Scott made the connection. So thank
you so much for your time, Bob.

Speaker 1 (36:40):
You're welcome. Well, thank you for what you're doing.
It's important to have someone ask such good questions and
such probing questions. So maybe the next time I'll bring
Stretch on with me and you can interview both me
and Stretch.

Speaker 2 (36:53):
I would love that. That was Bob Johansen, showing us
that the AI augmented future isn't science fiction, it is
already here, and the leaders who thrive will be the
ones who treat AI as a collaborator, not a competitor
or a threat. For me, the standout idea was Bob's

(37:15):
take that we should be thinking about artificial intelligence as
augmented intelligence, and that's a mindset shift that we can
all start practicing today. Now.

Speaker 3 (37:25):
If this conversation has sparked

Speaker 2 (37:27):
ideas for you and you want to upskill yourself further
in AI, you will probably like some of the episodes
I've released on how to turbocharge your AI skills. A
really great place to start is the conversation I had
with Neo Aplin on turning yourself from an AI gunslinger
to an AI architect.

Speaker 3 (37:45):
And if you've got no idea what

Speaker 2 (37:46):
I'm talking about, you should definitely listen to that episode
because it will completely change how you interact with AI.
And if you know someone who's still thinking of AI
as just another tech tool, share this episode with them.
It might just change the way they see the future.
And don't forget to follow How I Work so that
you can catch every new conversation. If you like today's show,

(38:08):
make sure you hit follow on your podcast app to
be alerted when new episodes drop.

Speaker 3 (38:14):
How I Work was recorded

Speaker 4 (38:15):
On the traditional land of the Wurundjeri people, part of
the Kulin Nation.