
September 30, 2025 59 mins

The AI hype in project management software is real—but is everyone ready for it? In this episode, Galen sits down with returning guest Olivia Montgomery, Associate Principal Analyst at Capterra, to explore the findings from her 2025 Project Management Software Trends Survey. They unpack the real reasons behind the surge in demand for AI-enhanced PM tools and the foundational work teams need to do before expecting AI to deliver real ROI.

Together, Galen and Olivia dig into what "AI readiness" actually looks like—technically and culturally. They discuss how competitive FOMO, billion-dollar marketing campaigns, and shifting economic investments are driving decision-making at the executive level, while the realities of adoption, data governance, and employee empowerment are playing out on the ground. They also take a thoughtful look at how PMs can avoid common pitfalls (like AI hallucinations) and begin to build workflows that align with both human and machine strengths.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Galen Low (00:00):
When organizations are seeking out PM software
with AI features, do they actually have an idea of what
they want to get from them, or are a majority of these
buyers just kind of following a mandate to do AI stuff?

Olivia Montgomery (00:12):
We're definitely leading with a bit
more of a competitive FOMO that executives are having.
There's a lot of marketing; vendors are definitely
rushing to catch the AI wave.

Galen Low (00:24):
What needs to be in place before a team or
an organization can actually benefit from AI features in
project management software?
What in your mind are like the foundational
pieces that help teams and organizations accelerate
towards the ROI of AI?

Olivia Montgomery (00:38):
There's two pretty big considerations that
I think companies of all types and all sizes, all industries,
all levels of maturity should be taking into consideration.
Your technical readiness and your cultural readiness.

Galen Low (00:57):
Welcome to The Digital Project Manager
podcast, the show that helps delivery leaders work smarter,
deliver faster, and lead better in the age of AI.
I'm Galen, and every week we dive into real world strategies,
new tools, proven frameworks, and the occasional war story
from the project front lines.
Whether you're steering massive transformation
projects, wrangling AI workflows, or just trying to
keep the chaos under control, you're in the right place.

(01:19):
Let's get into it.
Today we are talking about the increasing demand for
AI-enhanced features within project management software, what folks
want from those features, and what businesses and teams of
various sizes need to have in place to achieve their goals.
Back in the virtual studio with me today is Olivia Montgomery,
Associate Principal Analyst for Project Management at Capterra.

(01:41):
Olivia is a former PMO lead, a project management professional,
a prolific speaker, and just the best kind of nerd when
it comes to research-backed insights on project management,
technology strategies, and the human side of leadership.
She's also just published her latest research paper,
Capterra's 2025 Project Management Software Trends
Survey, which reveals a shift in how companies are choosing and

(02:03):
using project management tools.
That's exactly what we're gonna be diving into today.
Olivia, thank you for coming back in the studio
and joining me today.

Olivia Montgomery (02:12):
Thank you for having me.
I'm super excited to be here.
We got some hot topics to hit.

Galen Low (02:17):
Yeah, we're gonna get spicy today.
For folks who have listened to the podcast for a while now,
Olivia is just a fan favorite.
We were just in the green room jamming about linguistics.
Then we realized, we're like, actually linguistics and the
liberal arts and LLMs and AI, like it all fits together.
So we're gonna go a lot of different spots.

(02:37):
We do tend to go down some rabbit holes, and I do hope
that we do that, but just in case, here is my roadmap that
I've sketched out for us today.
To start us off, I wanted to just get one big burning
question out of the way.
Just that like uncomfortable, hyper-relevant question
that everyone wants to know the answer to.
And then I'd like to zoom out from that and
talk about three things.
First, I wanted to talk about what is needed in order for

(02:58):
organizations to truly take advantage of AI features within
project management software.
And then I'd like to deep dive into like what AI features
actually look like in practice and how to measure whether or
not they're actually working or how they're being used.
And then lastly, I thought maybe we could just talk about
the impact near term and long term of some of the AI features
in project management software and elsewhere in the way that

(03:18):
we collaborate and how that's gonna impact project-based work.

Olivia Montgomery (03:22):
I am ready to go.
Let's do it.

Galen Low (03:24):
Let's start with the one hot question then.
In your recent research paper, you've noted that 55%
of all project management software buyers are now
prioritizing AI features.
That's probably not surprising to most of our listeners, given
the pressure for businesses to do more using AI, but when
organizations are seeking out PM software with AI features,
do they actually have an idea of what they want to get from

(03:47):
them, or are a majority of these buyers just kind of following
a mandate to do AI stuff?

Olivia Montgomery (03:53):
It's a mixed bag out there.
We're definitely leading with a bit more of a competitive FOMO
that executives are having.
There's a lot of marketing; vendors are definitely rushing
to catch the AI wave because there is this hype and demand
that is going around, but we are slowly moving toward
a more strategic intent.

(04:15):
With AI, the kind of like initial sparkle of
it has kind of worn off.
We are starting to see a bit more of what these tools actually
can do, what they can't do, how teams use them, when,
why, where, and we're getting to the tougher questions.
They're still pretty heavily driven by this like competitive
FOMO, and that is definitely driven by, I don't know if

(04:38):
you've heard the news, but the spending for AI infrastructure,
hardware, software, data centers for the US has
surpassed consumer spending.
So for the first time in history, the US GDP is being
grown by AI investments, by businesses, not consumer spending.
So that should tell you a lot about where the

(05:00):
conversation is going.
These investments are also getting billion-dollar
marketing campaigns behind them.
And so that's what we're seeing, and it's a
big driver of this competitive FOMO, and there are a lot
of executives and business leaders that are excited.
They hear about this new tech that works really fantastically
and they want it, of course.
Now our hands are getting into it and we're kind of

(05:21):
starting to see like, oh, okay, so marketing doesn't always
line up with the technical capabilities that we see.
And then the technical capabilities don't always
line up with what your people and your team are skilled and
trained and willing to do.
So there's definitely a disconnect that, you know,
we're right in the middle of, and I'm sure probably everybody
listening is feeling some kind of pressure some way about this.

Galen Low (05:45):
I thought it was interesting that you brought
up the economics of it.
'cause I hadn't really thought about it.
In some ways, when you're talking about that level of
investment, like surpassing consumer spending in terms
of how economies are making money, imagine the pressure
that these companies are under, they've invested
in infrastructure, they're spending, I remember, you know,
we were all taught that like, oh, cloud storage is cheap now.

(06:06):
Yeah.
Not when it's petabytes of data.
You know what I mean?
Like there's this pressure that filters down, and if you're a
marketer in the software space,
I salute you actually, because there's a lot of pressure to
just sell as many licenses as you can because there's
this immense investment that organizations have made and
are not earning back right now.
Like they're not profitable businesses right now.

(06:27):
They're in an investment stage.
There is this pressure to go out there and maybe even
find the use cases, because we're in this like spot
where we haven't really started with the use cases.
We were like, this has potential to revolutionize
and change everything.
Great.
Let's go find some nails for a hammer.
And I think that's like, it's an interesting lens to put on it.
That's actually not what I thought you were gonna say, but

(06:47):
it's this top down pressure that trickles down into, okay, yes.
I also now feel competitive pressure to be buying
AI-enhanced project management software.
Because that's like buying a phone that doesn't have
a fax machine in 1992.
You know what I mean?
It's like, no, of course, you know, I need my
multifunction state-of-the-art thing, whether or not I
send faxes, because that's what everyone else has got.

(07:11):
But I do like that idea that now we're in that spot
where it's like, this is when we're figuring it out.
And it's okay if you've gone out and bought AI-enhanced,
you know, software of any kind, because that'll help us figure
out what we can do with it.
But it does take that mindset, and not that like,
oh, magic button, we hit install and we're good now.
Yeah.

Olivia Montgomery (07:30):
Yeah.

Galen Low (07:31):
We figured it out.

Olivia Montgomery (07:32):
Exactly.
We're starting to see that too.
I was very proud in the survey we had the results that security
came as like the top priority for buyers for PM software, and
that's super exciting to me.
It usually is functionality, which of course makes sense.
So to see security bump up for the first time
that we've seen as the top priority is fantastic.

(07:54):
That's telling me that teams are realizing like, okay, this
AI of any type that you're using is increasing the attack
surface that you have, especially depending on if
you're using external LLMs.
There's a big difference between AI features that are,
you know, in your current system that they just rolled
out and they're like, Hey, you know, we're like helping

(08:15):
you with helpful suggestions or some extra autocomplete,
or will help you build a workflow a bit better.
That's a little less risky than your employees
using external LLMs to synthesize project meeting
notes and all of that.
So I'm definitely excited and proud of our community
for putting security at the top of the list.

Galen Low (08:36):
That's actually a really interesting one.
It also speaks a bit, I think, at least from my perspective,
I think it speaks to like the speed to maturity, right?
Like I think there's a lot of things that have disrupted
us in the digital space and in technology over the
past couple of decades.
We've always been really slow to kind of like figure
out the important stuff, and to like hear that,
I'm like, okay, well yeah, privacy, data security, that's

(08:57):
getting taken seriously now, and it's still pretty early days
in the grand scheme of things.
Like that's important.
It also speaks to how they're using it because, to your
point, I was never really that fussed about data privacy
around like autocorrect.
I was like, that's probably fine.
You now know that whatever I cannot spell or type on a

(09:17):
touchscreen for the life of me.
Right?
And then that's fine.
But when I'm punching in, you know, all of my project
data, my client information, like business reports, it
becomes a lot more important.
And what I like about it is it's almost like a, I wanna say
speed dampener, but like, it's almost like now's the right time,
before we get too far too fast,
shooting from the hip and like throwing data everywhere

(09:39):
and then going, whoops.
Whereas now it's, yeah, it's important.
Also, maybe it speaks to that in business competition, right?
It's like, okay, I don't wanna like share all my information.
Like, you know, we hear about building
proprietary LLMs and stuff.
I'm sure we'll get into it, but it's really interesting
that privacy showed up as that much of a priority.

Olivia Montgomery (09:56):
I know it's great.
So proud of us.

Galen Low (09:58):
For me, it speaks to like this foundational stuff.
And I wondered if we can zoom out a bit, because
not every organization is ready to just wake up one
day and go, Hey, let's buy some project management
software that has like all the AI features we want.
Sometimes things need to be like in place first.
And I thought maybe I'd ask you what needs to be in
place before a team or an organization can actually

(10:19):
benefit from AI features in project management software.
And what in your mind are like the foundational
pieces that help teams and organizations accelerate
towards the ROI of AI?

Olivia Montgomery (10:30):
Absolutely.
There's taking a few moments, any moments, to pause and be
like, okay, are we ready?
Are we ready to do this?
Yes, I'm ready to buy.
Yes, I'm ready for the business to inject AI and see all
the gains that we can get.
But when are we actually ready?
And there's two pretty big considerations that I think

(10:51):
companies of all types and all sizes, all industries,
all levels of maturity should be taking into consideration.
And that's your technical readiness and your
cultural readiness.
So diving in first, like the technical readiness, do you have
enough clean, structured data?
The AI, whether you're using machine learning for

(11:11):
predictive analytics or even your LLMs to help you
generate, you know, reports and synthesize data,
those need vast amounts of really high-quality data to be
able to give you an output that you're gonna be happy with.
So you need to double-check: do we have this data?
Where is this data?
What's the quality of it?
Are we ready to share that data?

(11:33):
Is it good enough to get us insights?
You wanna audit your existing tools before you unleash
an AI, because it could be that you are unleashing
redundant capabilities.
Another tool could already have it.
And you also, when you do a tools audit, you could also
flush out any shadow IT that might be happening, if you
have employees that have

(11:54):
rushed ahead, and maybe they're using tools that
you're unaware of while they're at work for your business.
You wanna know about that.
And so doing a full audit sweep of your entire, I advise your
entire IT infrastructure, your entire environment, first to
make sure like, okay, these are all the apps that are in use,
this is where they're in use,
and make sure that

(12:14):
you have clarity of what your ecosystem looks like.
You also wanna have your governance in place
before you give your people tools. You wanna
make sure that they know what they're getting, why they're
getting it, and what to do with it and what not to do with it.
You really need to have clear policies on the types of
information that can be shared, how to share information

(12:37):
safely with the tools.
Or ideally you can set that all up in the backend and you're
like, okay, if you're in these two, you know, this is our
knowledge management, this is our document management,
this is our task management.
They're all safe and secure.
Have at it, guys.
Go for it.
That's the ideal case, but a lot of companies aren't quite there,
so at least empower and give guidance and direction for

(12:58):
your employees with policies.
So the cultural readiness, so again, your IT team
and everybody can be ready, everything's safe,
it's secure, ready to go,
UAT tested, passed successfully.
Now you have to be ready, like can people actually use
it when they show up to work?
Do they know what to do?
Are they comfortable using it?
Do they want to use it?

(13:19):
So that cultural readiness is the other branch that really
needs to be evaluated and taken into consideration.
So you want to be clear with the employees, your teams,
about exactly what you're giving them and why and how
they're expected to use it.
There is room to be like, Hey, we're gonna open up
some new toys for you guys.
Maybe we'll run a pilot project and we can test these out,

(13:41):
but maybe we're not rolling it out fully 'cause, we don't
know, all of this is ambiguous.
We're moving very quickly.
So you don't want to have the expectation either that
you're gonna have everything perfectly mapped out,
that you can give these workflows and policies and teams just execute.
There's gonna be discussion, there's gonna be like stops
and goes and reworking.
and goes and reworking.

(14:03):
As long as you know that's gonna happen and that you can kind
of isolate it within a testing period,
a testing project, you're gonna be much more successful than
if you just unleash tools.
Your teams are probably gonna react with surprise and
frustration, and you're not gonna see the productivity
gains that you're probably hoping for with these tools.

(14:23):
So you definitely want to avoid having any kind of mandates,
no like AI-or-bust mandates.
And I think all of that can be achieved if you're just
thoughtful, intentional, and, like I said, roll
it out with a testing:
we're gonna try it, you know, in this
isolated, lower-risk environment or project, however it is
for your business, first.

(14:43):
So I think it's really important that companies
not only take the technical readiness into consideration,
but the cultural readiness too.

Galen Low (14:51):
I really like that you included shadow IT in your
software assessment, which, when we were prepping for this, we
were talking about, you know, the differences between sort of
a smaller, medium-sized business versus like an enterprise.
And we know that, you know, there are many large enterprises
that should have really good governance, but they're unaware
that people are just, you know, they're accidentally using
their personal ChatGPT account.

(15:12):
They are just playing with tools that are accessible on the web,
haven't been locked down yet by IT, but I like that approach
of like, Hey, listen, we're gonna do an evaluation, like
you're not gonna get punished.
We just need to know what's going on.
We know it's happening and we want to build a plan
based on the fact that we know it's happening.
And actually that's great.
That's great that everyone's sort of experimenting, that

(15:33):
they're running ahead.
But yes, let's like put some guardrails on it.
Let's not stifle that, but let's say, yeah, we might need
to have some parameters, might need to be a limited pilot.
Maybe we just don't try and like make organization-wide
change for, you know,
thousands and thousands of employees, but at the same time,
let's not have like these little grains of sand of everyone
doing a different thing.

(15:54):
And then we'll never sort of benefit from it as a whole.
And I think it ties right into that cultural readiness, right?
Which is like, listen, this is about setting up some
expectations both ways.
The thing that you said that resonated with me is
like, do people know what to do with it when they
come to work in the morning?
And I've been seeing a lot of organizations,
they're making assumptions.
You know, there's a lot of judgment around folks who

(16:15):
aren't picking up the tools, and, you know, I see a lot
of adoption charts that people frown at, right?
They're like, we announced a thing, but
adoption's still flat.
Like, why don't these people get it?
But I think that cultural readiness and the change
management aspect, the education, and just the
conversation go hand in hand with that technical readiness
of, is this actually going to, are people gonna use it?

Olivia Montgomery (16:38):
Because that's exactly it, you don't wanna
stifle your employees, you want them to feel empowered,
you want them problem solving.
And so yeah, your business culture should kind of already
be established of whether you are, let's turn things on and we
maybe, hopefully, identify some power users, some people who,
you know, you think will be very successful and are willing

(16:59):
and excited to try new tools.
And you let them play with them and kind of figure some
stuff out while you formalize your policies and your plans,
and then you roll it out to the other teams that maybe
move a little slower, maybe a little more resistant,
and you kind of try and bring everybody together through an
excited curiosity and trying to move the business forward.

(17:20):
I think your point of not taking a punitive approach or stance
of any type is really key, because as soon as employees
feel that there's gonna be
something like negative stigma to them,
maybe like, oh, they're not open to new ideas,
any whiff of punitive culture is going to make
employees shut down.

(17:41):
That's where employees just hide stuff.
They just don't bring up issues, and you're really
gonna suffer and stall out.
And we are seeing, the survey does show that like adoption is
the top struggle that we're seeing, and it is often because
of a lack of clarity and a lack of a two-way conversation,
or a five-way conversation.
Everybody should be sharing ideas, sharing experiences.

(18:05):
This stuff is very new, and how every company is going
to use it is different.
One especially different and unique thing for AI is that it
is impacting all departments,
all industries, kind of at the same time.
So it doesn't really, you know, sometimes you're like, oh,
well if you were an executive assistant, maybe you're gonna

(18:25):
get really hit with an LLM.
You know, the capabilities that, you know, it's gonna make
your meeting notes way better.
Like great.
But it's not just them, it's everybody.
It's also your project managers, it's your engineers.
Everybody now has these capabilities that they can
use to augment their work and improve their work.
And that's something we haven't seen quite before, other than

(18:48):
just like moving everything digital onto a computer.
And that kind of happened to everybody too.
So here again, your entire company is getting exposed and
seeing AI in very different ways, and any kind of clarity
that you can give people of, here's how to safely and
effectively use these tools.
No punitive approachis gonna come to you.

(19:10):
Really key.

Galen Low (19:11):
I like that.
The fact that it's so new that it has to be a
dialogue, and that's kind of the whole point of that
sort of cultural readiness.
And also because I opened this whole podcast up with
like the idea that maybe these people are buying software
and don't know what they want from it, and maybe it's
because they are either,
yeah, like falling into the hype of it all and trying to,
(19:32):
you know, stay competitive.
And sometimes it's because they feel like they have
to tell people exactly what to do with it.
Whereas I think what you're saying is,
give them some guardrails, but tell them that it's okay
to like experiment and share.
We're kind of figuring this out, and it's not necessarily like
a mandate with no strategy.
The strategy is, let's, you know, step by step,

(19:53):
let's go through it.
Like every department, everyone in the organization,
it would be unrealistic to just overnight transform your entire
business, every single aspect of it, just because of AI.
It's really that, you know, having that dialogue
with everybody and setting expectations.

Olivia Montgomery (20:08):
Exactly, yeah.
And the more that your technical readiness is
tightened up, the more that you can, you have a really great
relationship, specifically for project management software,
whether you have an existing tool that is rolling out AI
features or you're looking for a new tool because they
offer more AI features.
If you have your technical readiness and you're like, okay,

(20:30):
we've worked with the vendor, we've worked with our IT team,
we know the data management is where we want it to be.
We understand, you know, this is gonna be safe,
it's not really a sandbox, but it is kind of a sandbox.
It's your production environment, but you know
that it's safe and protected.
And so you can then turn it on and let teams be like,
okay, maybe by department, by project manager, by portfolio,

(20:54):
depending on how big or small your company is, you
can be like, Hey, you know, we've turned everything on.
It's all safe.
You can't break anything.
Roll it out with your teams as your teams can.
You know, you know your teams, you know your projects.
Everything on the technical side is safe.
Now you guys kind of help your team with the cultural
readiness, and that can help a lot with adoption.

Galen Low (21:17):
I really like that.
I wondered if I could pick out something you said in there,
'cause we've been talking about data security and governance.
We've been talking about sort of like different sizes of
businesses and different needs.
How does the data security and governance piece vary for
different sizes of organizations, and do smaller and more
nimble organizations like have a leg up over cautious

(21:40):
enterprises or maybe not?

Olivia Montgomery (21:43):
So my favorite response is, it's a mixed bag.
There's pluses and minuses and strengths and weaknesses for
both sides of the spectrum.
So your larger organizations are gonna have strengths in that
there's a lot more data and a lot more historical data.
The data is probably maintained, hopefully, a bit more.
Your data hygiene policies are probably better, and

(22:06):
you probably have a much better idea of what your
ecosystem looks like and how often your teams use tools
and how they use tools.
So you probably have a lot more of that foundational basis that
can set up AI to be a bit more effective and successful for
you when you're ready to adopt.
The other side of that is that smaller companies,

(22:27):
while they might not have as much data, and their data
management policies aren't as mature, they can usually
switch tools a lot faster.
And so they can switch their project management tool much
faster than, you know, say an enterprise bank can.
So you can try out new stuff, you can see what you want.
You can quickly leverage the latest technologies while, you

(22:48):
know, bringing up the other, the technical readiness and
the data quality behind it.
So there's definitely pluses and minuses to both, and
I think having a realistic understanding of those pros
and cons and how to maneuver those can be the successful
path forward for both sides.
So it's not that AI is only great for enterprises or

(23:10):
only for small businesses.
It's definitely very helpful and useful across the board.
But what it looks like and the challenges that you're going to
experience are gonna be pretty different because of the size
and maturity that's out there.

Galen Low (23:23):
Do you have any stories or examples from
folks that you've talkedto while putting together
your report or elsewheresort of in the industry?

Olivia Montgomery (23:31):
In the industry.
You want my industry secrets?

Galen Low (23:34):
Can you dish? Spill the tea.
Come on.

Olivia Montgomery (23:38):
Yeah, I will definitely share that.
The companies that, whether they're big, whether they're
small, whatever size, whatever industry, wherever they are
culturally, it's going back to making sure that you're
kind of ready and that you understand that your teams
want to do well for you, and so the more clear you can be
with your expectations and providing them the guardrails,

(24:00):
The more successful you're going to be.
The companies that are just saying, do AI, we're checking
daily that you logged in, we're checking usage, where your
managers are getting reports.
That is absolutely happening.
I have friends, I have family.
I hear like they're getting these mandates.
These mandates are out there.
There are leaders that are unfortunately

(24:22):
taking a more punitive, like, Hey, we just bought this tool
for a lot of money and every day we need 95% of everybody
logged in for at least, whatever, 10 minutes a day.
They all have their own individual quotas and
they're really sticking to those, like kind of trying
to measure ROI in these
KPIs like usage, and that's just, we're not

(24:44):
ready for that yet.
I don't know any company that's ready for that yet.
We're still at the like, Hey, let's make it safe
and help you figure out how to be effective with it.
Then we'll go into the like auditing and making sure
that you're logging in.
I think it's a good idea to do audits.
I'm not saying don't audit usage, because that's one
way that you can kind of find out like, oh, this

(25:06):
whole team hasn't logged in.
Let me check in with them and see maybe why.
Or this team all logged in and they were using it every
single day for a month, and now we're definitely,
the usage is dropping off.
Let's check in and see why.
Whatever it is, you need to check in and see why. So
I wouldn't use those audits for usage as you're like,
this is how we're gauging ROI.

(25:28):
I'd use it as, how effective is our adoption?

Galen Low (25:32):
That's a good, clear distinction, right?
Between using data to inform decisions and to guide
folks, versus using it to kind of punish and police.
What comes to mind is like, time tracking data comes up
all the time in my community, especially for the folks working
in agencies and consulting firms and professional services,
where part of it is like, didn't log enough hours and, you know,

(25:53):
utilization isn't high enough.
Are you even valuable as a human, versus going, like,
how are we spending our time and is that the right
place to spend our time?
And if we're not spending our time doing the important
things, then what can be done?
What is the reason and what can be done?
It's just so funny, and I'm thinking of like return
to office mandates and things like that too.
Like in some ways humans and organizations are very bad at

(26:15):
like sort of measuring impact and bringing people along.
There's always that sort of like quota, you know,
like you get the stick if you don't do this or that,
these behaviors.
Even when the goal is like actually quite noble, right?
It's like, actually we want people to be able to like
explore and use these tools, and yet it comes across as,
yeah, we're gonna punish you if you didn't log in today.

Olivia Montgomery (26:36):
Absolutely.

Galen Low (26:37):
What you said about that cultural readiness is
like, you know, you can see how important it is and you
can see how important the data, you know, and the governance
side of things is, and how they play together, because these
things need to be in place
before things can actually move forward.

Olivia Montgomery (26:52):
Yeah, and I think to your point about
time tracking, that is, we could have an entire episode
on that, especially being a knowledge worker myself.
How do you possibly track, I don't know, not to jump into
like a neuroscience branch, but so much work, valuable
work is done on walks.

(27:14):
You have the type of work where, yes, I'm sitting and
I'm typing and this is work.
But the more valuable work, and for me especially
as a knowledge worker, and project managers everywhere,
anybody who solves problems at work, the more that I can
do the like, oh, I'm on a walk and I'm like kind of gelling
the information that I have.
And then that's when I have my aha moment, like,
oh, okay, that's great.

(27:36):
And then when I do sit down and I am typing and
doing more traditional work, it's much faster and
it's much higher quality.
We tend to only capture, or want to capture, the time that is in
front of the computer typing.
And then how do we capture that really, truly useful
time that was spent elsewhere.
And I think that, yeah, RTO mandates, and there's

(27:58):
a lot of things that sometimes companies can
be a little, maybe, old school or rigid in their thinking
and can limit that, and then they're not capturing it.
Then they're like, oh, you're not working enough.
And you're like, what?
I work all the time in my head.
I never stop working.
Right?
Like that should count for something.
But the work you see that people like me produce is because

(28:19):
of all this other time that's, like, this mushy time.
And I really hope that's something that AI could
eventually help us with in the project management tools.
That reminds me of one of my more favorite or
exciting aspects that I'm seeing AI impact in project
management tools specifically: resource allocation.

(28:41):
So there's been AI automation for a while. That's another thing: a lot of AI-powered tools are really just more advanced automation, a lot of if-then statements like we've had in CRMs since, like, the nineties. So a lot of that's not necessarily new, but it's getting a lot more, like, turbocharged lately,

(29:02):
which is fantastic.
And so, things like: if you're making your project plan and you have your bucket of resources. It's one of our main jobs as a project manager to figure out the right recipe for the project to work. Who's got PTO coming up? Who's got the skillset that I need? Who do I want to upskill? And I don't mind them maybe taking a little longer with the

(29:24):
tasks because they're gonna be new, but I wanna upskill them. Or, I need stuff done really fast, so who do I trust who's very experienced and can get it done fast? There's all that. You gotta put your recipe together. And we're seeing AI and machine learning and predictive capabilities get much better with that. We're also seeing some tools out there that are taking in kind of that mushy time.

(29:45):
So, taking into account the week before somebody goes on vacation, maybe give them tasks that aren't so cognitively demanding. Maybe load them up with tasks that are much more daily-execution focused, and then they can go on their vacation, and when they come back they're going to be pretty fresh, so you can give them the higher-risk stuff that takes

(30:08):
a little more, like, deep work. It takes all of that into account, and some of them are getting really advanced, where it's like, all right, this person works really well for, like, three weeks. They're gonna have deep individual work on my project, and then we're gonna give them a break. We're gonna give them a week of higher-level tasks

(30:28):
that aren't quite the same. Maybe they're more collaborative, whatever the case may be with the nature of your project. But it's taking that human side into account, which is exceptionally complicated and difficult for a project manager, to make that perfect recipe with all those pieces.
And the AI is definitelyhelping and we're seeing
more and more of that.
And I'm really excited for all of us, because so many project

(30:52):
tasks and deadlines struggle because we don't really know how to take that stuff into account. And sometimes there's not always the safety to acknowledge that we're humans, we're not machines. A machine can do the work that you tell it to, and it might fry the motherboard, but you're fine. You just buy another motherboard. It's fine.

(31:13):
We don't wanna do that with people. And so, to give the most realistic expectations to your stakeholders, to your business owner, you need the most realistic timeline and work breakdown structure, and all of that needs to be as reflective of capabilities and reality as possible. And I am definitely excited that AI could help us, and is

(31:33):
hopefully helping us, get there.
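To make the resource-allocation "recipe" concrete, here's a minimal Python sketch of the kind of scoring such a tool might do. Everything here (the `Person` fields, the weights, the penalty rules) is my own illustration, not how any specific PM tool works.

```python
# Hypothetical sketch: score candidates for a task by skill fit,
# then penalize deep work for people about to go on PTO or people
# who have been heads-down for weeks without a break.

from dataclasses import dataclass

@dataclass
class Person:
    name: str
    skill_fit: float        # 0..1, how well their skills match the task
    days_until_pto: int     # people leaving soon get lighter work
    deep_work_weeks: int    # consecutive weeks of heads-down work

def score(person, task_is_deep):
    s = person.skill_fit
    if task_is_deep:
        # Don't hand deep work to someone about to leave on vacation.
        if person.days_until_pto < 5:
            s -= 0.5
        # Or to someone who needs a break from deep individual work.
        if person.deep_work_weeks >= 3:
            s -= 0.3
    return s

def assign(people, task_is_deep=True):
    """Pick the highest-scoring person for the task."""
    return max(people, key=lambda p: score(p, task_is_deep)).name

team = [
    Person("Ana", skill_fit=0.9, days_until_pto=2, deep_work_weeks=1),
    Person("Ben", skill_fit=0.7, days_until_pto=30, deep_work_weeks=0),
]
print(assign(team))  # Ben: Ana fits better but leaves on PTO in 2 days
```

A real allocator would learn these weights rather than hard-code them, but the trade-off it balances is the one described above: skill fit versus the human factors.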

Galen Low (31:34):
I love that. I love resource allocation as an example because it is very squarely human, and it shines a light on exactly what you're saying, right? Like, humans are not machines. We were almost, hopefully, at this apex of more traditional industrialization where we kind of did make humans into machines, and treated humans like machines, because we needed that uniformity, right?

(31:56):
Did that person punch in at 9:00 AM, and did they punch out at 5:00 PM? That's how I can measure, because that data I can understand. Everyone's working style and their mushy time and all that stuff, like, I don't have enough capacity to deal with that. Let's not even think about that. So, I was gonna say that AI has, there's an opportunity to challenge our assumptions about work, but even more than that, I think it gives us an opportunity to challenge

(32:20):
what fallacies we've actually created around work. Not just assumptions, but we've been like, that's too hard. It's too difficult. Let's just make everyone show up at the office at a certain time and clock their hours. We'll do annual performance reviews and we'll ask them once a year how they're doing and, you know, what could be better. That's all we have capacity to do.
That resource allocation use case is just such

(32:41):
a good example of how machines, frankly, can help us recognize our own humanity as we do our work, especially knowledge workers. I'm so excited about that because it plugs all of this together, right? The data piece, the cultural awareness, the use cases, and, you know, what

(33:04):
exactly does this look like? What am I supposed to do with these tools day in and day out when I show up at work? But also, how is it actually transforming the work for us? Not just the task, but the notion of work and how we attack it. That's a really good one. Yeah.
I wonder if it leads us directly into the future, because as we prepared for this,

(33:25):
we were talking a little bit about the leveling of the playing field, and you mentioned it earlier, right, that it's changing and impacting everyone in every department within an organization. And one of the things that you and I talked about is that AI becomes a bit of an equalizer, something that makes technology, and/or business, a little bit more inclusive and a little bit more accessible.

(33:48):
What are maybe some of the near-term and longer-term impacts that AI features, like, let's say, integrated vibe coding or automation or agentic workflows, will have on the way work gets done?

Olivia Montgomery (34:04):
Super, super exciting. Let me go into the near term first. So, near term, we've been seeing, with ChatGPT and these LLMs coming out, the immediate thing is that project managers and non-techies can talk to computers in ways that computers understand better now.

(34:25):
And it's so much more frictionless than it used to be. And that is so exciting. Like, to me, I'm blown away that my parents now can talk to an LLM and get some, you know, work done that they've never been able to do with a computer before, because they've fumbled around with their smartphone

(34:46):
and all kinds of things. But now, these LLMs can even tolerate incomplete sentences as prompts. They can tolerate misspellings in your prompt. They can tolerate a lot of incorrectness.
Where before, if you were having to write lines of code or database queries, you had to be precise.

(35:08):
Exactly precise. Now there's a lot more leeway, with natural language processing (NLP) capabilities coming together with these large language models that are trained on vast amounts of data of how normal people speak, and we've not really had that before. So this is the first time we have computers that

(35:30):
have built and developed a lot of capability through machine learning on their own, and now a human can talk to them much more easily.
So yeah, a non-technical PM can go into their project management tool today, and there's a lot of tools that can do this today, and type in, like, hey, take project A and give me a work

(35:51):
breakdown structure that runs through the end of the year. Thanks, appreciate it. And it responds like a teammate: okay, here you go, here's your information. You probably have to check it. You know, we've all used these LLMs and they make a mistake, and then you're like, hey, you made a mistake there. And it's like, oh, you're so right.

(36:12):
Thanks for pointing that out. Like, we've all been there. They have their own issues, but the fact that we can talk to a computer and get really effective outputs, we can build workflows. It's taking that no-code, low-code trend that we've been seeing for a very long time and injecting it with an ease that we've not ever seen before.

(36:32):
And that is definitely exciting.
And I would say that's the near term. But I definitely wanna bring up the red flags that I'm seeing, with this colloquial talking to computers like a teammate. There's a lot of issues I'm seeing, where we might be

(36:53):
relying on emergent capabilities of these tools, and the marketing isn't clear that these are emergent capabilities. There's definitely a bit of muddling of what these tools specifically do. I'm gonna narrow in on LLMs because they are the biggest near-term impact. We'll get to the other ones for the longer term in a minute,

(37:14):
but the near term is these LLMs. We're all using them. Every project manager I know is using them, especially to summarize information and gather data points, analyze data points, and all of that is emergent capability. And that is not clear right now.
The LLMs are intended to generate text or images

(37:34):
in a human-like form. They're statistically predicting word order based on the vast amounts of information they have. An LLM doesn't know what summarize means. It doesn't even know what the word summarize means. You can really break it down. I'm no data scientist, so don't come at me, anybody out there building their own LLMs.

(37:56):
But depending on the tokenizer, it can break your words down. Like, we'll take disconnect. Disconnect. You're like, oh, I know what that means. I got it. That's an obvious one. There's a disconnect. There isn't a disconnect. I got it. An LLM doesn't know what disconnect is. It's very likely broken that word down into three tokens, dis, con, and nect, and it's using those three tokens.

(38:21):
So taking the first one, dis, it's also using that token for discount. It doesn't know what dis means. And con, it's using that same token in any other words that have con, if that makes sense. There's a huge disconnect between how even an LLM understands the word disconnect.
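Here's a toy sketch of that subword-tokenization idea. Real tokenizers (for example, byte-pair encoding) learn their splits from data, so the exact pieces differ by model; the hand-picked vocabulary below is purely illustrative.

```python
# Toy subword tokenizer: greedily split a word into the longest
# matching pieces from a hand-picked vocabulary. This is NOT how any
# real model tokenizes; it just shows how one word's pieces get
# reused across unrelated words.

TOY_VOCAB = ["dis", "con", "nect", "count", "tract", "ed"]

def toy_tokenize(word):
    """Split a word into the longest matching subword pieces."""
    tokens = []
    rest = word
    while rest:
        for piece in sorted(TOY_VOCAB, key=len, reverse=True):
            if rest.startswith(piece):
                tokens.append(piece)
                rest = rest[len(piece):]
                break
        else:
            # Fall back to a single character when no piece matches.
            tokens.append(rest[0])
            rest = rest[1:]
    return tokens

# "disconnect" becomes pieces the model also reuses elsewhere:
print(toy_tokenize("disconnect"))   # ['dis', 'con', 'nect']
print(toy_tokenize("discount"))     # ['dis', 'count'] -- same 'dis' token
print(toy_tokenize("contracted"))   # ['con', 'tract', 'ed'] -- same 'con' token
```

The point of the sketch: the model never sees "disconnect" as one unit of meaning, only as pieces it also uses in discount, contract, and so on.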
That's where I see my background in

(38:43):
English literature and language study come in. I see where it gets that all mixed up, and that has compounding effects. Okay, I've brought this up very theoretically; let me ground it in an example. So you're a project manager and you're getting ready to do a big status update with the business owner.

(39:06):
So you need to gather all of the statuses from everywhere they come at you from. So you're gonna pull them from your PM tool. And, I don't know if you're a fan of voice notes; I send a lot of voice notes. I like them. There are companies out there giving people tools to dictate updates, emails, whatnot, as voice-to-text. So let's say you've got a couple voice memos and the updates

(39:29):
from your tool, and you're gonna put these all together into an LLM and be like, help me understand what's going on so that I can give the full status to the business owner that I'm meeting with. That all sounds great, and we're being told by other people that it's fantastic at that. And I am telling you it is not fantastic; there are a lot of really big issues that can happen.

(39:52):
In general, the LLMs are transforming the information that you're giving them, and they tend to take out a lot of emotional words. They tend to take out a lot of the sense of urgency. They take out a lot of nuance. Sometimes they just take out entire sections of what you've asked them to look at. And so it will almost always soften the language,

(40:16):
remove those emotional cues. So, an example. Let's say your voice memo is your boss saying that the new vendor contract you're waiting on is stuck in legal. He's had trouble getting a hold of them. You can't move forward until legal responds, and he's not feeling good about it.

(40:36):
That's the voice note, and you'll feed that into the LLM for your AI summary, but then in the status report it gives you, it'll say: vendor contract is in final stages. And that's not incorrect. It is in the final stages, but it's a tension-filled, frustrating final stage. And you're gonna miss that.

(40:56):
You might not even actually listen to the voice memo, because you're giving the transcript to the LLM and expecting it to manage that. So you might miss all of that frustration. Then you're not gonna share that with the business owner, and you're not gonna follow up with your boss, and you're not gonna try to unblock. You're just gonna be like, oh, it's in the final stages. Cool.

(41:17):
And that happens a lot. So you really gotta watch out for that. It's going to always soften your language. It's going to hallucinate. Hallucination rates are still very high, even in proprietary systems where companies are doing a fantastic job monitoring and adjusting

(41:37):
for model drift and all the technical capabilities, right? But PMs need to know that the odds of that happening are very high, and it's only gonna come back on them. So, these tools don't actually know what summarize means. They're just trying to give you what they think you want. So, yeah, we need to know that. Again, I can't stress this enough:

(41:58):
it's softening language, it's misrepresenting a lot of stuff, and that could bite you in the butt. I know it's biting some of us in the butt. I hear about it.
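One low-tech guardrail against that softening: before trusting an AI summary, diff it against the raw updates for urgency words that got dropped. This is my own sketch, not a feature of any PM tool, and the keyword list is a hypothetical starting point you'd tune for your team.

```python
# Hypothetical guardrail: flag urgency/emotion words that appear in
# the raw status updates but are missing from the AI summary, so a
# human knows to go back and re-check the source material.

URGENCY_WORDS = {"stuck", "blocked", "urgent", "frustrated", "risk",
                 "delayed", "escalate", "worried"}

def dropped_urgency(raw_updates, summary):
    """Return urgency words present in the raw updates but absent
    from the summary."""
    raw_hits = {w for w in URGENCY_WORDS if w in raw_updates.lower()}
    return sorted(w for w in raw_hits if w not in summary.lower())

raw = ("Vendor contract is stuck in legal, I'm frustrated, "
       "we can't move forward until they respond.")
ai_summary = "Vendor contract is in final stages."

print(dropped_urgency(raw, ai_summary))  # ['frustrated', 'stuck']
```

A keyword check like this obviously can't catch every lost nuance, but a non-empty result is a cheap signal that the summary softened something you should hear in the original.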

Galen Low (42:09):
I hadn't really thought about it too hard in terms of removing the emotional content and what impact that might have. And, you know, just to nerd out on language and linguistics a bit: there's a difference between being taught language and training on language data, and these LLMs are so convincing, right? We were talking about leveling the playing field, right?

(42:32):
Everyone's kind of like, oh, okay, so it's as smart as a human, 'cause it responds like a human would. And yeah, it makes mistakes like a human would. But basically it's a human. Except it's not. And you can see where the hallucinations come from, token by token, right? Like, process by process, easily sent down the wrong path based on maybe limited information, limited training, and context. But then also that emotional content, right?

(42:55):
We haven't cracked the nut on that. And what you're saying is, a lot of a project manager's role is picking up on nuance or funneling emotion, you know, redirecting emotion. And sure, we do summarize, as in maybe shorten some things down, but we also transform some of that emotion along the way, to say, okay, well listen, I need to shield my team from

(43:18):
a bit of this person's angst, but they also need to know that it's urgent and that there's tension, and that's gonna help us get to the goal. And it's just so interesting that, yeah, you know, you're like, okay, project management software that I bought that has all the AI features in it, I guess I'll just dump all these files in and you tell me what the status is. And that whole emotional bit could be completely missing.

(43:41):
That is terrifying. Fascinating. And my gosh, what an interesting challenge ahead of us, I guess. I like your point that this is emergent, right? It's not perfect yet. This isn't the end, it's the beginning. And we do need to factor that in to the way we work.

Olivia Montgomery (43:58):
Absolutely.
I think one way you can maybe account for or limit that, specifically for project managers, because like you said, we're held accountable. If you miss the nuance, if you miss the frustration, when you don't address it, you're held accountable for that. And so we all know: make sure you're actually checking all the inputs that you're feeding it, all of that.

(44:20):
We know it. I don't think it's happening quite enough, because there's so much marketing messaging coming at us that these tools are good at this, and there's not clarity that these are emergent capabilities. If you stick to the generating aspect, you're probably gonna be a bit safer. Or if you're in a company that has a very advanced, dedicated

(44:41):
team, with not only the LLM trained on a huge database and trained very effectively, but also a lot of if-then statements put in, there are definitely ways you can improve the reliability. But there's also the fact that these are still black-box systems, and even the people designing

(45:01):
them have come out saying, yeah, we're not sure why it actually can summarize text. We're not sure why it can do that. And you're like, oh gosh, that's terrifying to me.
I know it's not terrifying to everybody, and that's great, but that's scary to me. And so I think when you see features in your PM tool like smart suggestions, those aren't so scary.

(45:22):
If you're gathering user requirements and you're like, okay, yeah, I'm in my work ticket, and it has a little prompt, kind of maybe getting rid of that blank-page anxiety that we often have, I think it's fantastic for that. Generate the first draft. That's kind of my thinking with the LLMs: I try to remember that they're generative AI, so I'm just gonna have

(45:44):
it generate a first draft of whatever I'm working on, and they're very effective and pretty good for that. For anything beyond that capability, do remember that those are emergent capabilities. They're not tested and proven. We don't really even know why they're working like that. And so, yeah, be a little careful, but definitely don't be scared. Blank-page anxiety is way worse than a hallucination from an AI.

(46:08):
At least that gets the ball going. At least that helps you move forward. So definitely don't be scared of that, but I think people do need to be aware. Like I said, I think I have a kind of unique perspective,

(46:29):
with my language and linguistics background, and I use AI tools all day, every day, and we study and research this, and I've worked on a lot of IT teams, so I know the technical side of it. So it's definitely kind of a unique perspective, and I hope people aren't scared, but know that we don't fully know how these things are working. I would not rely on it for your full status report. I really liked your example: you can write out your status report and be

(46:50):
like, all right, help me tailor this to the business owner, and then tailor this information to my team. Because yeah, both those groups need different information, different types of information, different aspects of the information, and it can help you with that. But don't rely on it fully, and don't trust it fully. Absolutely do not trust it.
The other thing to know is they're so convincing because they

(47:12):
were designed to be convincing. If you showed up and you used an LLM and it, you know, had an attitude, or you knew that it was wrong, or it admitted, oh, I don't know, you're probably not gonna use it again. And that's the opposite of what is desired. So it sounds convincing on purpose, but know that's just a get-you-to-use-it tactic, not

(47:34):
because it actually knows and can do what you're asking it to do.

Galen Low (47:39):
You know, as we go through this, it's becoming obvious why adoption was one of the top challenges in your report. Because it comes down to these mandates of just do AI. There's pressure everywhere. People aren't sure what they're expected to do with it when they're at work. That little voice in their head is going, well, you probably shouldn't have to do that. Just dump all those files into the LLM, like

(48:00):
that's probably what people expect of you. We're not necessarily having that dialogue all the time: the education, training ourselves about how the technology works and what we do and don't know about it, and how we guide how we use it today.
I wondered if we could look a bit deeper into the future. We're talking about some near-term stuff, we're talking about generative AI, but, you know, I think the looming elephant in the room over the past

(48:21):
few months has been agentic AI. Agentic features are making their way into project management software. Where does that take us, and what does it mean for us in the future? Future us.

Olivia Montgomery (48:31):
Future us. Yeah, so I am definitely very excited about future us. Current us, it's a little chaotic. Future us, I'm really excited for. I think agentic AI, to kind of level set what that means, means a lot of things to a lot of different people. So within your organization, your IT team hopefully is defining that for you, and the vendor, if they're

(48:53):
selling agentic AI, is defining what that actually means for you. In general, we're not quite there. So if you are getting marketed, hey, we've got agentic AI fully ready to go, I would be very critical of that offering and really dive into the technical aspects of it. We're not there yet, so agentic AI fully sits in

(49:18):
the longer term. It's not here yet. Anyway, agentic AI is going to be where the system can make decisions and execute tasks in a pretty complicated flow.
Ideally, we would like to be able to say, hey, book me a family vacation to Greece in October, and then it can go and

(49:42):
check your calendars and the calendars of everybody you wanna invite, negotiate the best rates for a car rental, find the best flights, go and do all of that. Act as your personal agent, go and make all those decisions for you, and then come back and be like, all right, here's your trip. It's done. And that is exciting, and I hope we get there someday.

(50:04):
And that is kind of the goal we're looking at. I think things like NLP, natural language processing, this ability of non-techies to talk to computers and do no-code or low-code automations, even now we can see that happening. We can build out our workflows now, and that's hopefully going to cross systems.

(50:26):
So right now, in your PM tool you can maybe build your workflow, but it likely doesn't carry across your CRM and across your email, your calendar, et cetera, et cetera. But these capabilities will continue to increase, and that's where we're gonna get closer to agentic AI.
So you're going to be able to say, hey, like I said in

(50:47):
that example, book my family vacation to Greece in October. Now, that prompt I just said is also kind of what vibe coding is, and vibe coding is this new-ish thing. If you're not a developer, you might not have heard of this. If you are on a new team using these tools, you might be vibe coding

(51:07):
and not know it. But anyway, vibe coding, just to level set, is using plain language as your prompt, and then the computer is the one doing the actual building of what the logic needs to be. It's not vibe coding like, oh, I'm gonna make a cool vibe, or, oh, I'm in a good mood,

(51:29):
this is gonna be so fun, or, I want the output to be really hipster cool. It's not that kind of vibe. Vibe coding is definitely more just using your plain language: hey, book me a trip. But then the amount of logic behind something as simple as that is insanely complex. Most everybody here has booked a family vacation.

(51:51):
We know it is insane. And so, to think that we can just pass that off to somebody else, to a computer, is exciting, but also, you know, it's gonna be fraught with issues, and it's really complicated stuff. It's gonna have the same risks, because even now we're seeing this issue with vibe coding:

(52:11):
the same words don't always mean the same thing to everybody. So you can be like, oh, book me a family vacation, and even if you do specify the four people you want invited, the AI can't go and actually confirm that those four people want to go, that they want to do that. There's so much connective tissue that gets lost, and nuance that gets lost, and emotions that get lost.

(52:33):
It's gonna feel good that it performed the task and gave you the output with that immediacy. But when you go on that family vacation that the AI booked for you, whew. You might have an interesting time. You might not know what's coming up.
You might not knowwhat's coming up.

Galen Low (52:47):
It's like getting in a Waymo today, right?
It's like a bit of a gamble.
It's like kind of proven, butit's like a bit of a gamble.

Olivia Montgomery (52:53):
Yep.
Yeah, absolutely.
And there's a lot of bias. You know, there's things that can be unexpected. Let's say I use my AI that kind of knows me, knows my preferences, knows who I am, and I'm the one that does the family vacation. And I'm like, all right, book it. It's probably gonna book a lot of history museums.

(53:14):
It's gonna book modern art museums. It's going to be attuned to me. Like, hey guys, we're gonna go to the train museum now. I'm the only one that wants to do those things. The family doesn't wanna do that, but the AI wants to make me happy. So there's so much that goes into this, and we're just gonna have to keep a communication going.

(53:37):
And again, it all comes back to the questioning, the challenging, knowing these things aren't perfect, knowing that all the marketing messaging you're hearing, take it with a grain of salt, ask deeper questions. And really know that, with these tools, like I said earlier, the shine is kind of starting to come off, and everybody's like,

(53:58):
oh, it's great at telling my kid bedtime stories, but yeah, it kind of messed up my status report, and it did make me look like I was out of touch with my project, because I over-relied on it and outsourced some of my thinking, some of my problem solving, to it, and now I'm being held accountable for that.

(54:19):
So yeah, it definitely needs to be a continuous, challenging dialogue.

Galen Low (54:24):
It's a really interesting point that, with the agentic stuff, you know, we're not there yet. And I think a lot of folks might say actually we are, but I think everyone would agree it's early days. And I like that it's not necessarily just the progression of the technology. I think the other thing you're saying is, it's also how we train it and how we give it context to begin with,

(54:45):
because that in itself is the art. And, you know, right now it's hard for us to trust it to make decisions about the meaning of a word. Well, I mean, yes, it doesn't understand, but we also haven't necessarily trained it with all of the nuance that comes along with human communication. And to go from that decision to deciding how to negotiate

(55:07):
rates on Trivago, like, you know, there are leaps, but the potential is there. I think it's doing a great job of painting a picture of how work will be, you know, in five, ten years. It's messy right now, but it is making space for some of the mushy ways that we work. And at the end of the day, to bring it all back, right, there's pressure. There's pressure because there's economic investment
There's pressure becausethere's economic investment

(55:29):
happening, you know, at theinfrastructure level, at the
government level, and that'strickling down to pretty intense
investments at a corporatelevel, at a team level.
And that is, you know, thepressure is trickling down
for the users to use it,to figure out how to use
it to, you know, log inevery day and try something
and share your knowledge.

(55:50):
But in order for all of that towork, it's gotta have the right.
Guardrails.
It's gotta have the dataand privacy and security in
place and it also needs, youknow, the human parameters.
Like, what is expected of us,what are we supposed to do with
this, and how can we supportone another as a dialogue to
kind of come out the other side.
That's probably bigger thanproject management software.

(56:11):
We started project managementsoftware, but I actually
loved where we went.
It is a little bit of amicrocosm for work, right?
Projects and thesoftware that we use.
This has been incredible.

Olivia Montgomery (56:22):
I think project managers are in one of those roles that attracts people who tend to think very dynamically. They like to think big picture and small picture, and they often have a lot of that connective tissue of, like, yeah, I problem-solved this personal issue this way, and I'm gonna take this into my work.

(56:42):
PMs are exceptionally good at that, and not a lot of roles in business work that well. So hopefully they also appreciate that, yeah, we can talk about the linguistics and the macroeconomics, but that does all impact how you use these tools. Hopefully it deepens your understanding of what you're seeing in these tools,

(57:04):
like your day-to-day, where you show up and you're like, why did it mess that up? Why did it totally skip what I asked it to do? Why did it do that? Hopefully conversations like this and information like this help everybody understand a little bit why they're seeing that, why they're right to question it, and what to do about it.

Galen Low (57:21):
Great call-out that project managers are good at that. Amazing.
Olivia, thanks so much forspending the time with me today.
I had a blast.
Always great havingyou on the show.
I mentioned at the top that you've just published your report, Capterra's 2025 Project Management Software Trends Survey.
Where can people goto find out about it?

Olivia Montgomery (57:40):
Absolutely.
It is posted on capterra.com.
You can check it out there.
Also, you can follow me on LinkedIn, Olivia Montgomery. I post my research, thoughts, ideas, and insights there regularly.
So yeah, check it out.

Galen Low (57:54):
Awesome.
Love that.
I'll include the links in the show notes as well.
And Olivia, thank you again.
Always a pleasure.

Olivia Montgomery (58:00):
Thank you so much.

Galen Low (58:02):
That's it for today's episode of The Digital
Project Manager podcast.
If you enjoyed this conversation, make sure to subscribe wherever you're listening. And if you want even more tactical insights, case studies, and playbooks, head on over to thedigitalprojectmanager.com. Until next time, thanks for listening.