
October 10, 2025 · 30 mins

Welcome to another episode of The Inside Learning Podcast brought to you by the Learnovate Centre, where we explore the impact of AI on the future of work. In this episode, we are joined by Sangeet Paul Choudary, the author of 'Reshuffle,' to debunk common fallacies about AI, such as 'AI won't take your job, but somebody using AI will.' 

Sangeet dives deep into how AI is restructuring workflows, altering organizational logic, and redistributing power. Using real-life examples from various industries, he explains how we should refocus from task-level to system-level thinking to truly understand the changes AI brings. Sangeet discusses the importance of skills like curiosity and curation and provides actionable insights on how to stay valuable in an AI-driven world. Tune in to learn more about the fallacies of automation and augmentation, productivity gains, and the new value industry coordination layers bring.

 

00:00 Introduction to the Inside Learning Podcast

00:15 Debunking the AI Job Replacement Myth

01:02 Understanding the Systemic Impact of AI

02:06 Introducing the Guest: Sangeet Paul Choudary

02:23 Unpacking the AI Job Replacement Fallacy

02:53 The Automation vs. Augmentation Debate

04:25 The Typist Example: A Case Study

06:42 Skills for the Future: Curiosity and Curation

10:58 The Productivity Gains Fallacy

11:13 Coordination vs. Creation: Capturing Value

15:54 The Workflow Continuity Fallacy

17:18 The Neutral Tools Fallacy

21:30 The Stable Salary Fallacy

24:16 The Stable Firm Fallacy

25:22 Practical Takeaways for the Future of Work

29:49 Conclusion and Resources

 

Find Sangeet on Substack:

https://open.substack.com/pub/platforms/p/the-many-fallacies-of-ai-wont-take?r=jzz68&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Find Reshuffle:

https://amzn.to/46NGtvv

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
The Inside Learning Podcast is brought to you by the Learnovate Centre.

(00:04):
Learnovate research explores the power of learning to unlock human potential.
Find out more about Learnovate research on the science of learning and the future of work at learnovatecenter.org.
AI won't take your job, but somebody using AI will.
It's the kind of line you drop in a LinkedIn post or, worse still, you will definitely hear this line on a conference panel and get

(00:26):
immediate zombie nods of agreement.
It's technically true, but it's utterly useless.
It doesn't clarify anything.
Which job does it apply to?
All jobs?
And what type of AI? What will someone using AI do differently, apart from just using AI, and what form of usage will matter versus not?

(00:48):
It makes you feel like you've figured something out.
You conclude that if you just use AI, you'll be safe.
In fact, it gives you just enough conceptual clarity to stop asking the harder questions that really matter.
How does AI change the structure of work?
How does it restructure workflows?
How does it alter the very logic by which organizations function, and,

(01:11):
eventually, what do future jobs look like in that new, reconfigured system?
The problem with "AI won't take your job, but somebody using AI will" isn't that it's a harmless simplification.
The real issue is that it's a framing error.
It directs your attention to the wrong level of the problem while creating consensus theater.

(01:33):
It directs your attention to the individual task level, automation versus augmentation of the tasks you perform, when the real shift is happening at the level of the entire system of work.
The problem with consensus theater is that the topic ends right there.
Everyone leaves the room feeling smart.
Yet not a single person has a clue on how to apply this newly acquired insight

(01:57):
the right way. True, but utterly useless.
To shed some light on the future of work, and to avoid such consensus theater, we are joined by the author of those words and a fantastic new book that questions just about everything in the age of AI.
We welcome to the show the author of Reshuffle, Sangeet.

(02:22):
Aiden, such a pleasure to be here.
You hold nothing back with that introduction that I pulled. I actually pulled that not from your book Reshuffle, which is an Amazon bestseller, but from one of the articles that you wrote about the future of work, and I'd love you to unpack it in more detail, including some of the fallacies that you call out that are inherent in that statement:

(02:44):
AI won't replace you, but somebody using AI will.
So let's get stuck in with perhaps an overview, and then we'll pull out one fallacy at a time.
The key point is that very often we think of AI's impact on our jobs as simply its impact on the tasks that we perform.
And so we assume that it'll either help us speed up our tasks and do more.

(03:08):
And so in a way, it's what we call augmentation.
It'll augment our work and our ability to perform work, or it'll take over some of those tasks and automate the work that is performed, and hence replace us.
So we think of things in these binary terms, and that's where this idea of "AI won't take your job, but someone using AI will" comes in, because it's a way of saying

(03:28):
it's not automation but augmentation that's going to happen.
So augment yourself, because AI won't automate your job away, but somebody augmented with AI will come and take your job instead.
Now, all of that sounds very neat and binary, but the challenge is that it assumes that jobs will continue to be the way they have always been.

(03:49):
They'll continue to be that way going forward, organizations will continue to be the same, and the only thing that's going to change is whether a certain set of tasks is being performed by a human or by AI.
And I challenge that in all of my work and in my book Reshuffle, because I believe that that's not really true.
Jobs do not exist in order to perform tasks; jobs exist in order

(04:12):
to solve a problem in a system.
And when that system changes, the constraints that the jobs were supposed to address go away, and the job's logic may no longer make sense.
So I'll give an example which will help bring this to light, because it's one of the fallacies that I point out. The example I'd like to take is

(04:33):
what happened with typists when the word processor was first invented.
Word processors did not automate typing.
So if you were to think about it in terms of the binary of does it automate or does it augment you, word processors definitely augmented humans in typing, and yet the job of the typist went away. That is because the job of the typist does not

(04:55):
exist because of the task of typing, even though the name typist suggests that it does. The job of the typist existed because of a constraint in the system: the constraint was that editing a document was very expensive, and hence you needed highly trained typists so that the cost of editing could be kept as low as possible, because the error rate would be low.

(05:17):
Editing documents became cheap with the word processor: just hit the delete key and start editing documents on the fly.
Anybody could be a typist.
And so the need for specialized typists went away, not because the task was automated, but because the problem that the job was addressing no longer existed.
And I think that's the key thing that we miss today.

(05:38):
When we start thinking about jobs as stable constructs, and AI either, you know, taking your job or someone using AI taking it, the bigger question is: what will jobs look like?
What are the constraints on the basis of which jobs are structured today? Once AI comes in, which of those constraints still stay, and hence, accordingly, which new jobs emerge that do not exist today?

(05:59):
That's what we should be really looking at, rather than looking at AI's impact on individual tasks.
Let's make it applicable for people, because there is a lot of fear.
There is a lot of fear, particularly for people thinking about their children: what will happen to my children, who may be studying today?
Maybe there are no onboarding jobs, no kind of intern jobs now available, because people are outsourcing to AI.

(06:21):
So I felt the fallacies you address help us understand that.
Maybe I'll call out each of the fallacies and you'll unpack it then, Sangeet.
The very first one is the automation versus augmentation fallacy, and the call out here at a high level is: how do we shift from task-level thinking to system-level thinking?

(06:42):
I think that's the real thinking we need today, because the idea of focusing on what skills will make you future-ready is also based on this task-level thinking, which assumes that certain forms of skills will be valuable and certain forms of skills will not.
A key point that I'd like to make is that when you are thinking about what

(07:03):
kinds of skills to invest in, or what kinds of future job scenarios to work towards, there are two things that are important.
One is that you need to really think about which skills remain unchanged no matter how powerful AI becomes or how the job landscape changes.

(07:23):
So, which kinds of skills remain completely unaffected?
The effect that AI has on a lot of knowledge work today, whether it's consulting, whether it's legal work, whether it's research, is that it dramatically collapses the cost of getting to an answer.
So a lot of our work, whether you're a lawyer or a consultant, especially

(07:47):
in professional services, has been built on the idea of billing by the hour, and billing by the hour essentially assumes that getting to the answer takes a lot of time.
What happens with AI is that certain forms of performance and answer generation become much less expensive.
It does not require the same amount of time to get the actual work done of getting to an answer.

(08:08):
But the challenge that then happens is that when you use these AI models and you can just throw a prompt and it generates answers on the fly, the constraint shifts away from actually generating answers to asking the right questions and, once you're generating answers, determining which answers to

(08:28):
elevate and which ones to reject.
And so those are the two things that I think of as evergreen skills: curiosity and curation.
If you don't ask the right questions, you can go down the wrong rabbit hole, because if generating answers with an LLM is as easy as just throwing prompts out, you can keep asking questions, and if they're not the right questions, you're just going down the wrong rabbit hole.

(08:49):
So having the ability to ask really good questions, which often comes from a good understanding of a domain, and being constantly curious about whether you're asking the right questions or not, is going to be very valuable.
And the second thing that's very valuable is having the sense over time to know what to elevate and what to reject.

(09:09):
And that's the idea of curation that I talk about.
So the first point that I'm trying to make over here is that the skills that will really matter will shift away from just generating answers and getting work done, to really having the ability to ask good questions, and having the taste and the risk-taking ability to choose the right answers and then stick by them.

(09:29):
And the second point that I want to make is that in order for you to determine which skills are necessary in the future, you can't just say skills A, B, C are skills of the past and skills E, F, G are going to be skills of the future.
Skills are only valuable in relation to a system.
And so you need to ask yourself, if you're training to get to a certain

(09:52):
kind of work, if for example you're learning and training to get into the legal industry, what is going to be valuable in the future?
What's going to command a skill premium?
It's going to be determined by how the legal industry will work in the future.
What will be the new revenue models that come out?
How will companies differentiate themselves? If the pay by the

(10:13):
hour logic goes away, what's the new logic that comes in its place?
So if you're a student, you need to start developing that muscle to look at the larger system, because it's not just about getting your first job.
All through your career going forward, because we are in a period of such rapid change, you have to constantly have this ability to keep sensing the system and determining on an ongoing basis how things are changing and which skills

(10:38):
are going to be valuable as things change.
So, in summary: look for what doesn't change, and in order to look for what does change, don't stick to static lists of skills, but really develop a sense of how your target industry is changing, how new forms of company are coming, and how the way companies differentiate themselves is changing.
That's what will help you determine how to be valuable.

(10:58):
So the next fallacy is the productivity gains fallacy.
And the big call out here is: why do productivity gains so often benefit coordinators, not creators?
I'd love you to share what you mean by that, coordinator, not creator.
This idea of coordination versus creation is actually the central thesis of Reshuffle.
So in general, when we think about a new tool, if a tool helps you do

(11:21):
more work in less time, we think that that sounds like progress, right?
But the challenge is that it is actually progress only if the overall system remains stable.
That is, your previous workflows still work the same way, and the organization and the business model still remain stable.
So if you do things faster, you're winning.
But if the whole system changes, then

(11:41):
the way people used to succeed in their jobs in the past, and the skills for which they were paid, may no longer apply.
And this point actually applies at the level of individuals and at the level of companies.
So I'll take the example of how this played out in competition between companies, for instance.
In the apparel industry, factory productivity has increased dramatically over the past two decades.
They've always applied tools to improve productivity, and they've applied

(12:06):
automation to compress cycle times and push more units through the system.
But faster productivity has not necessarily led to higher salaries at that level of the value chain.
So the workers in the factories are not getting the benefit of the faster productivity.
In fact, there are other kinds of players that are actually getting the benefits of that faster productivity.

(12:26):
And that's the point I try to make over here.
Even though workers move faster and are able to produce more output in the apparel industry, it's companies like Shein, who actually coordinate these workers across disparate factories, that decide what gets made, when, and in what volume, and they capture most of the value.

(12:47):
So even as labor productivity rises, as workers actually produce more, they're unable to capture that as salaries for themselves.
So even when companies adopt AI to accelerate tasks, they will soon start realizing that when everybody's using the same tools to do the same tasks, faster productivity becomes a commodity.

(13:09):
And this is true not just at the company level, but also at the individual level.
So even at the individual level, just augmenting yourself is not sufficient.
You need to start thinking about where the new value will be captured, whether it'll come in your direction or it'll come in another direction.
And that's the point about creation versus coordination.
The factories create.

(13:29):
But a company like Shein coordinates, and captures most of the value.
You can even tell now when people use AI to write; you can see the patterns of the writing, and it's become vanilla.
And in a way you get angry. I get angry, and you're like, the difference is, okay, you might use it to simplify,

(13:50):
but you're not using it to come up with the ideas in the first place.
And certainly the way it writes... I have a huge fear now with students.
So many of the audience of this show will be teachers or college professors or into the science of learning or the neuroscience of learning, and this is something we need to be very, very careful of.
That's absolutely right. Just to make it very practical and link it back

(14:12):
to the typist example that I gave: I use AI to help me with my writing, but I never use AI to actually write.
And the reason for that is that I try to look at my workflow, and as a writer, the biggest constraints in my workflow were not the actual task of writing; once I start writing, it comes out. The challenges are elsewhere.
The first challenge is something that every writer associates with.

(14:34):
You open up your laptop in the morning and you stare at this blank page, and you don't have anything to get started.
And what AI helps me do now is this: I can open up my laptop in the morning and start just debating a topic with AI, and within a few back-and-forth questions and answers my mind starts working, and then I can start writing.
The other thing that I used to struggle with very often was that I would

(14:57):
write in a serial sequence and then complete the whole thing, or, if I'm writing a book, complete the whole thing over weeks, and then get it edited. But now I can write in a sitting and then quickly get it edited in real time.
Those are constraints that used to hold me back.
Even if you just address the constraints, that magically unlocks your ability to write. It's not about creating more output; it's about expressing

(15:19):
yourself while removing the constraints that had held you back in the past.
It removes bottlenecks and obstacles in so many ways.
I also use it for when text is written in a very academic way.
And you know, there are times you read it, and then you read the same page again, and you go, I think I've read this page already.
I ask ChatGPT, I'm like,

(15:40):
please make this readable for a 12-year-old, or explain it to me like I'm a 10-year-old, and it makes just that workflow much, much easier.
One of the other fallacies I thought we'd talk about was the workflow continuity fallacy.
We see this with so many organizations: they optimize and optimize and optimize, but they don't actually question what they are

(16:00):
optimizing in the first place.
Yeah, I think this is a fallacy that keeps coming out repeatedly, especially when we talk about what is called agentic AI.
Today we look at today's workflows, then we try to speed up the tasks within those workflows.
But we don't necessarily ask: should the workflow exist in the first place?
Because every workflow has existed for a certain reason, and with new technologies

(16:24):
coming in, we should ask ourselves: with the capabilities of this new technology, should we even have this workflow, or should we do it a fundamentally different way?
This failure to step away and actually question why the workflow exists in the first place keeps us stuck, spinning the wheels faster in the wrong system.

(16:45):
I'm not saying that every workflow we had in the past has to change, but before we start speeding up workflows, the first thing we should do is ask ourselves: should this even exist in that particular way, or should this be done a different way?
The term we often use is the faster mouse. We don't have mice anymore, you know?

(17:07):
You have to think ahead about whether that job will still exist in the future anyway.
And to do that, you need to understand the system in the first place.
One of the other ones you talked about is the neutral tools fallacy.
The question here you ask is: how does AI redistribute organizational power without anyone noticing?

(17:27):
This piece is important because we often think about tools as being neutral, but they're actually not, because tools in general contain default settings and preferences.
And I'll just give a simple example.
You and I get onto meetings all the time, and we have our AI note takers getting onto the meeting with us. While in itself it's a productivity tool for us,

(17:49):
when you apply the same thing in an organization, you realize that the organizational priorities are now being determined by the LLM, because it's determining which notes to highlight versus which ones to deprioritize.
And this is something that we've seen across the board with previous technology as well. When spreadsheets first came out, with Microsoft

(18:10):
Excel, the people who mastered Excel ended up having a lot of influence through much of the 1990s, when process optimization was a very big deal, because decisions that were previously made with gut instinct could now be modeled and simulated.
So anybody who was making those decisions with gut instinct no longer had that power, because they didn't have the data to show it. Whereas the

(18:32):
Excel jocks now had a lot of power.
And so the point I'm trying to make, with both the note-taking example, where the technology itself starts determining organizational priorities, and the Excel example, where by equipping certain kinds of people it shifts decision power in their direction, is this: adoption of technology does not have a neutral effect in terms of

(18:55):
just speeding up what you are doing.
It fundamentally changes where power sits in the organization, and that again changes which kinds of jobs will hold power and the ability to command a higher salary in the future.
Sangeet, for people who are wondering how that actually affects them, could you give us some examples that might come to mind for you?

(19:18):
Yeah, you know, I'll give a couple of examples that have happened with previous technologies.
And when I say previous, they are not that old; we are seeing them happen right now.
A simple example is that before the rise of mobile phone usage in the enterprise, field sales forces had a lot of power because they were, you know, close to the customer.

(19:40):
They had to make decisions on the fly, and they understood the customer sentiment deeply, and head office did not have any of that information.
But once field forces were equipped with mobile-based applications, where they could capture data right on the field, keep getting information from the customer, and keep entering it, all of that information was going back all the way to head office instantly.

(20:04):
And so all the decision power moved to the head office.
Even as the number of jobs in field forces has increased over the past decade, salaries have progressively decreased. This job increase and salary decrease have happened on account of the same technology: the mobile connectivity that allows an organization to now

(20:26):
manage larger field forces is also responsible for shifting power away from the field forces and back to the head office, which is why far more decisions are made at the head office than in the past.
Because decisions can be made centrally, you can now have people with less training doing much of the field activity.
And so the

(20:47):
potential worker base that can fill those jobs increases, and that again pushes salaries down.
So that's an example where this is actually playing out in real time, where the shift in decision power because of a new set of tools changed the salary structure in a whole industry.
That's one that's so obvious when you see it, but when you see the

(21:07):
system behind it, you can actually design for it and understand it.
And to your point, if you're worried about the future of work, decide where to play in the future, because where you play now might not be as valuable as it was, maybe from a parent's perspective looking at the future for their children.
The next one was the stable salary fallacy, which is very linked to what we just talked about.

(21:29):
Yeah, absolutely.
The stable salary fallacy essentially is the point that just because you have a job does not mean that you will still have the same salary as you did today.
And I'll give a couple of examples to illustrate why this happens.
One example is, you know, back in the days of logging wood, when lumberjacks used to cut trees using axes that they would wield with their hands.

(21:53):
So there was a certain skill, a certain level of training, a certain level of instinct involved in using it well, and then there was obviously the physical power that was involved.
So not everybody could be a good lumberjack. But with the invention of the chainsaw, the barrier to becoming a lumberjack went down, so you could easily use a chainsaw to start felling trees without necessarily having the same level

(22:15):
of skill and, you know, brawn.
The key point that I'm trying to make is that when a new technology comes in, your job may well continue; lumberjack jobs, or any jobs in the entire wood-cutting industry, actually increased after the chainsaws came in.
But salaries declined and were depressed.
And the reason that happens is because new technologies, very often by augmenting

(22:41):
you, by making you better, tend to lift the less trained and less skilled people higher in terms of their output relative to the more skilled people.
And so there's a flattening of skill, and so the premium that was associated with high skill actually goes down.
Another way we see this, actually today, is what happened to London cab drivers

(23:03):
when GPS-based driving came in.
Before that, having that knowledge in your head was a huge competitive advantage.
If you were a London cabbie, you knew the streets inside out, and it was impossible for somebody completely new to come and start driving around London.
But with a GPS, they could.
And then Uber overlaid a market-making mechanism on top of

(23:24):
the GPS, where now you had to just log into an app in order to get jobs, and the entire skill associated with knowing how to navigate London completely went away.
You were not paid for that any longer.
You could no longer gain a higher fee because there were few cabbies; as the number of cabbies increased, the market figured out the minimum price at which a ride would be accepted.

(23:47):
And that's what Uber does.
So it's a combination of both these things today.
A lot of people who are in knowledge work may feel that the Uber thing does not really apply to them, but we need to keep in mind that the more AI improves, the more it has the effect on all knowledge work that maps had on the act of driving.
It flattens the skill needed to perform that knowledge work.

(24:10):
And the more that happens, the more you see this compression of salaries, even though the job continues to exist.
Which is tightly coupled to the final one, the stable firm fallacy.
Yeah, exactly.
I mean, all of these things bring it back to this idea that AI is not just making

(24:30):
today's firm and today's business model better and faster.
It's changing the nature of the firm completely.
And this works at many different levels.
So I'll give a couple of different examples to illustrate this; one we've already talked about, when we think about the impact that faster automation of factories had on business models like Shein's.

(24:52):
The nature of the firm completely changed, because in the past you did not have this kind of a central coordinator.
You would have relatively siloed supply chains, where a manufacturer would work with the buyer, and any productivity gains would be captured by the manufacturer.
But the nature of the firm changed when the buyer was no longer a single buyer

(25:13):
working with another set of manufacturers, but a global buyer like Shein coordinating a huge number of manufacturers.
So that's one way the nature of the firm itself changes.
Let's leave people with a positive on this, a takeaway that they can actually do something with. Many people are consultants, and we have shifted so much towards a knowledge economy, and in a knowledge economy, when the knowledge

(25:35):
is easily accessible, it changes the game and it shifts the value.
So you talked about a framework for people to be able to compete and actually capture some of that value in the future amidst the LLMs.
Absolutely.
As you rightly pointed out, a lot of our knowledge-based industries have been built on the assumption that knowledge is the bottleneck and knowledge is constrained,

(25:58):
and that scarcity of knowledge allows us to capture a premium for it.
The more difficult it is to acquire a certain kind of knowledge, very often the higher the salary associated with it.
What happens, you know, as AI improves is that many forms of knowledge work become increasingly more accessible, or increasingly more commoditized.
Not because AI replaces humans, but because, again, more people can

(26:21):
perform that work when augmented with AI.
So there is a compression on salaries because of that.
But I believe that there are two other forms of constraints that have always been there, and that will reassert value even when, you know, AI's task performance in itself improves.
One thing that always has value in any organizational system

(26:44):
is the ability to assume risk.
There are many examples of jobs where almost every feasible aspect of the job is automated, and yet the people who perform the job are paid really well because they assume the risk associated with the decision.
This obviously shows in CEO pay today, but even if you don't agree with that and you say that's inflated, let's take another example.

(27:05):
Think of the anesthesiologist who administers anesthesia in a hospital room. Every single task that the anesthesiologist performs is today performed by a machine.
So every part of his job is automated.
The ability to assume the risk of managing the patient's condition in real time, in the middle of an operation, is what he gets paid for,

(27:28):
and he gets paid a premium for that.
So, you know, going back to this point: AI won't take your job, somebody using AI will; automation, augmentation. Those things don't matter.
What really matters is: what are the most risky decisions that need to be taken in your organization, and who is taking those? Who is sitting at those decision points?
So if you are seeing AI coming in and impacting your job, you can always

(27:49):
use this as a rule of thumb: if I assume the risk, I can capture the value associated with it.
But that obviously comes with judgment, which is in its own right a uniquely human capability that comes with experience and with taste as well.
Every time technology is rolled out and adopted organization-wide, what happens is that you start seeing new cracks, because every part of the organization adopts

(28:14):
different technology in different ways, speeding up processes at varying speeds.
So coordination breaks down.
And again, I'll take an example from healthcare, because over the past 10 years health systems have adopted electronic health records, they've adopted digital billing systems, et cetera, and more recently they're adopting AI.

(28:35):
And in all of this, one of the really good examples of where things break down is with people with chronic health conditions who need to be managed over an extended period of time. The fact that every part of the hospital and every person has a different set of tools managing it ends up creating a fragmented patient journey.
And the patient is left confused, because they're receiving reports from

(28:59):
different types of tools and there's nobody guiding them through the whole patient journey, especially if you are somebody with a chronic condition.
And so an entirely new job came up over the last 10 to 12 years: the job of the nurse navigator, whose job is entirely to manage the patient journey end to end, and to ensure a seamless experience while coordinating across all the different tools and roles

(29:21):
that the patient is interacting with.
And so new coordination layers are always very valuable when technology is rolled out.
So keep those two things in mind: where can I assume new risk, and where do things break down that I can solve as a coordination layer?
And then bundle your valuable skills around those points.
Your skills are valuable, but they have to be bundled close

(29:44):
to these points in order for you to actually capture that value.
For that type of thinking and way more, I highly recommend Sangeet's Substack, which is just a magnificent read every week, where he tackles different aspects of how AI reshuffles everything, from your job to an organization to an entire industry, with

(30:06):
loads of examples and brilliant analogies and metaphors as well, but also his book Reshuffle, which goes deeper into a lot of these concepts.
Sangeet, for people who want to find you, where is the best place?
I think Substack is great. It's called platforms.substack.com.
I write every week over there, and you can look up the book Reshuffle on Amazon. My website is platformthinkinglabs.com.

(30:29):
Sangeet Paul Choudary, thank you for joining us.
Thank you so much, Aiden.
Thanks for joining us on Inside Learning.
Inside Learning is brought to you by the Learnovate Centre in Trinity College Dublin. Learnovate is funded by Enterprise Ireland and IDA Ireland.
Visit learnovatecenter.org to find out more about our research on the science of learning and the future of work.