
May 13, 2025 27 mins

As we navigate the AI revolution, we’re collectively creating a new reality that’s far from utopian, and both leaders and individual contributors are struggling to find their way through the uncertainty. Questions around workplace ethics, output quality, and long-term impacts are top of mind for many, but this presents an opportunity to learn from each other’s breakthroughs and shape the future we want to see.

Andrew Saxe, VP of Product at Smartling, shares his experience leading a company that has evolved from manual translation services to integrating AI-driven workflows. He discusses how to distinguish real AI value from hype, the ethical questions leaders must weigh, and the rapid shift in which skills are in highest demand in today's AI-driven landscape.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Hannah Clark (00:01):
One of the many disorienting things about this AI revolution is that we are collectively inventing a new reality, which I probably don't have to tell you is much less utopian than it sounds. But what's also interesting is how we're watching in real time as both leaders and ICs are, I dare say, struggling to navigate the messiness that comes along with forging a new path.

(00:23):
The questions around things like workplace ethics, output quality, and impact on future outcomes are on the minds of everyone in the org chart. But that also means we have a great opportunity to learn from each other's breakthroughs and design our own decisions around the reality we want to end up with. My guest today is Andrew Saxe, VP of Product at Smartling. Smartling falls into the translation and language services industry, which is practically a poster child

(00:44):
for industries impacted by AI. When Andrew started at Smartling, their team was performing translation services manually, and AI was just an old sci-fi movie featuring Haley Joel Osment as robot Pinocchio. Needless to say, things have changed. In this episode, you'll hear Andrew's take on separating genuine AI value from inflated hype, ethical considerations leaders need to be thinking about when deploying AI-powered

(01:07):
workflows, and why the skills in highest demand have shifted faster than anyone expected. Let's jump in. Oh, by the way, we hold conversations like this every week, so if this sounds interesting to you, why not subscribe? Okay, now let's jump in. Welcome back to The Product Manager podcast. I'm here today with Andrew Saxe. He's the VP of Product at Smartling.

(01:29):
Andrew, thank you so much for making time in your busy schedule to be with us today.

Andrew Saxe (01:32):
Absolutely.
Great to be here.

Hannah Clark (01:33):
Can you share a little bit about your background and how you arrived at where you are today?

Andrew Saxe (01:37):
Yeah. You know, I've always had an interest in technology, probably since the mid-nineties. I had an interest in the internet when it was new, and I've always been in the world of technology, studied a lot of technology, and all of my jobs have been in development and product. I was a fake web developer for a while, making not the best websites or code, and eventually moved into product.

(01:57):
And I've been at Smartling about 14 years, I think 15 years in July, so quite a while. But yeah, I've always been in product at Smartling.

Hannah Clark (02:07):
Yeah.
Cool.
Yeah, 15 years is nothing to sneeze at.

Andrew Saxe (02:10):
It's not a small amount of time, that is for sure.

Hannah Clark (02:13):
Especially in the tech world.
It's dog years.

Andrew Saxe (02:15):
Yeah.
No, it's my longest relationship, I think.

Hannah Clark (02:20):
So today we're gonna be focusing on the evolving ethics around AI from a leadership lens. To kick us off, did you wanna frame the conversation through your observations at Smartling over the past 15 years of your tenure? During your time, obviously, Smartling is in the translation industry, so there have been a lot of big technological transitions since you started at the company.

(02:42):
So there's cloud adoption, transparency, and now we're in the age of AI. So what is different about this shift?

Andrew Saxe (02:48):
Yeah, absolutely.
So yeah, we're in the translation business, so we can turn English into French and Spanish and that type of thing. And when you think about that process, it's really been a lot of people, and translating things is centuries old. It's always been there, turning languages into different languages, and it's always been a really people-focused endeavor.

(03:09):
And thinking about the other technological advances over the last 15 years, cloud was obviously probably the biggest one. Those were really business and organizational impacts, whereas this shift to AI has already had a profound impact on the translation world, and probably elsewhere, and is really going to start impacting humans more and more.

(03:32):
Although cloud was a huge impact and allowed people to scale businesses and not have to set up server farms and all this type of stuff, the end users maybe didn't feel that, except that they got more apps and businesses and things like that. But it's the day-to-day workers that I think are most impacted by today's AI shift, and we certainly see that

(03:53):
in the translation world.

Hannah Clark (03:54):
Yeah, I can imagine it must be really amplified in that specific space. So when we talk about implementing AI in translation services, like you mentioned, there's a big impact on the workforce of translators and end users. How do you balance the efficiency gains with the impact this has on your workforce, and how has their role changed?

Andrew Saxe (04:13):
Yeah, so Smartling is both a SaaS software company, where we provide tools to manage translations and the whole process, and a translation services provider. If you can imagine those thousands or hundreds of thousands of people typing in translations, we provide tools to manage that. And then there are, of course, all the people who are actually doing the translations, and we offer those translation services.

(04:34):
And largely, people have been typing in those translations, or reviewing translations for quality, catching mistakes, handling brand voice for an organization, all these types of things. And a few years ago, maybe five or six years ago, I can't remember the timeframe, neural machine translation came out, which was a pretty big step.

(04:55):
But Google Translate has been around a long time, and they kind of switched to neural machine translation, which had a pretty big impact on quality, but it wasn't the kind of impact that allowed you to remove humans from that process. So if you think about the process: someone types in the translation, someone reviews it for quality, and then it goes back to the content owner. It's always been that way, and even with machine

(05:16):
translation, it was still that way.
So with AI, we're really looking at taking the humans out of that workflow, out of that process, or at least changing their role. As the AI gets better and the tooling around it gets better, the humans come out of maybe typing in the translation, and we're able to generate

(05:37):
the best first translation.
We're able to use AI and LLMs to assess the quality of it and then decide if it should go to a human or not. So the human role moves more into a validation type of role. Say we've assessed that the quality needs to be checked: the human comes in and they're like, oh yeah, this is maybe not great. And maybe they find all those edge cases that the LLM

(06:00):
isn't able to check or find correctly, and then you can feed it back into the machine and hopefully it improves overall. But the role definitely shifts from being at the beginning and the middle of the workflow to being at the end, doing random validation or sampling of content for quality. So it's definitely a pretty big shift in how people are working.

(06:21):
Who knows, eventually it might get to a point where you don't even need some of the validation components, but certainly that's the mode we're in right now.
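The workflow Andrew describes, generate a first translation with AI, score it with an LLM-based quality estimate, and route only low-confidence output to a human validator, can be sketched roughly like this. This is a hypothetical illustration: the function names, the scoring heuristic, and the 0.9 threshold are assumptions for the sketch, not Smartling's actual implementation.

```python
def machine_translate(text, target_lang):
    # Stand-in for an MT/LLM call; a real system would call a model here.
    return f"[{target_lang}] {text}"

def estimate_quality(source, translation):
    # Stand-in for an LLM quality-estimation score in [0, 1].
    # A trivial heuristic so the sketch is runnable end to end.
    return 0.95 if translation.endswith(source) else 0.5

def translate_with_review(text, target_lang, threshold=0.9):
    """Generate a first translation, score it, and route low-confidence
    output to a human validator instead of publishing it directly."""
    draft = machine_translate(text, target_lang)
    score = estimate_quality(text, draft)
    route = "publish" if score >= threshold else "human_review"
    return {"translation": draft, "score": score, "route": route}

result = translate_with_review("Hello, world", "fr")
print(result["route"])  # high-confidence drafts skip the human queue
```

The key design point is the routing step: humans move from typing translations to sampling and validating only the drafts the scorer is unsure about.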

Hannah Clark (06:29):
Yeah, I find that so interesting, especially when you bring up something like brand voice, something that's so nuanced that it can be really difficult to pick up on and really master, even if it's in your own language. I can see that being a lot more complex for something like AI to adequately gauge and translate. So I can see how that layer is still very much
So I can see how thatlayer is still very much

(06:50):
relevant, and it'd be very difficult to replace.

Andrew Saxe (06:52):
Yeah, it definitely is.
Definitely is.
And yeah, brand voice is a huge component, or special terminology. People have brand words and glossary terms they want to use, or words they want to stay away from, and you make sure that the translations are respecting all of that. AI does a pretty good job of it, but it can also have some missteps. You have to build the guardrails, and sometimes the human aspect of that is the guardrail.
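A minimal sketch of the kind of guardrail he mentions: a deterministic check that a finished translation uses required glossary terms and avoids forbidden words before it ships. The term lists and the function are made-up examples, not Smartling's tooling.

```python
REQUIRED_TERMS = {"Smartling"}          # brand words that must survive translation
FORBIDDEN_TERMS = {"cheap", "basic"}    # words the brand wants to stay away from

def guardrail_check(translation):
    """Return a list of violations; an empty list means the text passes."""
    lowered = translation.lower()
    violations = []
    for term in REQUIRED_TERMS:
        if term.lower() not in lowered:
            violations.append(f"missing required term: {term}")
    for term in FORBIDDEN_TERMS:
        if term in lowered:
            violations.append(f"contains forbidden term: {term}")
    return violations

print(guardrail_check("Smartling offers a basic plan"))
```

Checks like this run before (or instead of) a human pass; anything that fails gets routed to a person, which is the "human as guardrail" role he describes.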

Hannah Clark (07:15):
Oh yeah, absolutely. Even just having used AI for content ideation and that kind of thing, there's a whole lot there. I think it's interesting to think about what the technology could be capable of now, but there are different layers of intervention. It's not just a cut-and-dry, direct translation gig a lot of the time. So as far as how it's transforming from a human
(07:37):
typing task, if we're getting a little bit further into this validation role, what does the workflow look like now versus how it looked five years ago, before we saw this whole age of AI transformation?

Andrew Saxe (07:47):
Yeah, honestly, even from the Smartling landscape, translation has used technology for decades, desktop tools and server tools and all these types of things to manage it. But there's always been the human typing in and reviewing and assessing quality and things like that. And when we think about the cost that goes into translation, and

(08:09):
organizations spend hundreds of millions and billions of dollars on translation, and it can take time to turn around translations when humans are doing it. So the workflow really changes in that it's instant; cost goes down rapidly if you were paying people and you're no longer paying those people. The workflow is incredibly more efficient, and it really

(08:32):
is just dealing with all of the edge cases and things like that. And there are cases still, even though the AI and things are improving, where you may still want humans to look at all of it, like legal documents, or information that really needs to be pretty exact. So there are still some roles for certain types of content that need humans, but I would expect that to change over time

(08:53):
over the next several years.

Hannah Clark (08:54):
Yeah. I guess as the trust in the technology grows and it gets to be more effective. It's interesting, when we talk about ethics, that's a space where there is a little bit of internal debate, you know, ethically, where do we stand on getting to AGI and those kinds of advancements? So what ethical considerations have come up in your

(09:15):
leadership discussions around AI implementation, internally or within the translation service industry at large?

Andrew Saxe (09:21):
Yeah, so I would say that for us, we think about the linguists that we work with all the time, and we've been able to, I think, pretty successfully shift people into different roles and make sure they're still making the same amount of money that they were before. Just maybe they're able to review more content, because it requires less editing and less review or

(09:44):
things like that, so their velocity has improved. They're just at a different end of the workflow. And we're at the beginning of really using these tools, so that's something we're going to have to consider a lot more. There's certainly thinking in the industry about where all of these people are going to end up. And translation is a very nuanced business,

(10:07):
and as you were saying, just because you speak Spanish or maybe French or something, it doesn't mean you can translate documents and specialty content and things like that. It just may not come out right. So it is a very thoughtful exercise. I think there's still space for that, and it may even go more into creating content in-language rather than editing existing content for translation.

(10:29):
So there'll certainly be roles for people still in the translation space.

Hannah Clark (10:32):
This echoes the content space in many ways, where, now that the tools exist to make the process more efficient, the skillset that's in demand in the space has also shifted. It used to be that output, and attentiveness to some of the technical requirements, especially SEO and that kind of thing, were very high in demand. And now I imagine it's similar in the translation space.

(10:53):
It sounds like the ability to be perceptive to specific linguistic nuances or style, or being very detail-oriented around what a client's requirements are, rather than just being able to translate word for word.
It's interesting how these kinds of technology shifts have a domino effect on the whole industry and

(11:13):
kind of what is in demand.

Andrew Saxe (11:15):
They definitely do. And even as we look at some of the research and things that we're doing, which we'll probably deploy this year, there's a lot of trying to capture some of that nuance: being able to say, how does your organization perceive itself, or how do you want customers to talk about you? Coming up with some of those types of descriptions can really influence what the AI or LLM,

(11:37):
or whatever is providing.
And we've seen changes in quality based on that. So as we're looking at how humans, even in some of the validation and review areas, are still involved in some of that nuance, that will probably also shift over time.

Hannah Clark (11:53):
The way that I like to think about it is, wherever the market creates an influx in one area, say volume of output, there's going to be a sort of pendulum swing into demand for higher quality, more authentic insights, more of what's gonna make this stand out versus a competitor piece when they can have the same tools as you.

(12:13):
Such an interesting time to be alive.

Andrew Saxe (12:15):
Yeah.
Yeah.
It's super interesting, and seeing the dramatic shift in such a short amount of time is good.

Hannah Clark (12:23):
Yeah.
Yeah, it really feels sometimes as if we're witnessing a shift that would normally be observed over multiple decades, and it's happened over the course of a couple of months. My head is still spinning, and it's so interesting every time I have conversations like this. I feel like we have an AI conversation on this show about once a month, and every time it's crazy. Let's get back to workflows, because I'm curious, when

(12:43):
you're making decisions around deployment and how you change your workflow. As you said, you wanna really enable velocity, and I think it's really cool, because it sounds to me like a lot of folks in this workforce are now being empowered to do some of the more rewarding work rather than the more menial tasks that can be outsourced. So how do you decide what workflows or what tasks should

(13:04):
be primarily human-driven, and which should be relegated to: this is an AI task, and there's an expectation that you're using AI for this workflow?

Andrew Saxe (13:11):
It really depends. In the translation world, there's this cost, quality, speed matrix that people always try to navigate, to get the best quality at the lowest price, as fast as possible. We really do a lot of research and studying and testing to make sure that we're offering the right tool for

(13:32):
the right type of content.
So we really have these confidence levels that we look at, especially in our AI tools, where we're confident that a certain set of technology or tools or a workflow process will produce a set of content that we can guarantee the quality around.
So if we can guarantee the quality around a certain set of

(13:53):
processes, or a certain workflow, then we use that with the customer to pick the workflow that they should use.
And some customers really do have a requirement: no, I want a human, or I want two humans involved in this, and maybe they have certain types of content that require that. But really, we're seeing that we're able to more broadly apply technology-enabled workflows to really

(14:17):
all types of content.
With one caveat: sometimes there are long-tail languages that maybe don't have enough information in the LLM, or whatever it might be, to really have an understanding of what the translation should be. So maybe there are some long-tail languages that still require humans. But really, we've seen that we're able to broadly apply these tools to really any type of content and still be

(14:40):
able to guarantee the quality.

Hannah Clark (14:41):
Yeah. I'd like to zoom out a little bit in the organization, because we're focusing on a specific team, really. When we zoom out to the organization at Smartling at large, how are you guys approaching adoption? Because at this point everybody has a use case for AI. Are you finding that the organization is adopting AI in a fairly uniform way?

(15:04):
Or are there some differences in terms of where it's more applicable, or where there's more pushback on doing things the old-fashioned way?

Andrew Saxe (15:10):
I guess there are maybe two sides to that. One, as a software provider, our customers often have mandates internally to ask for AI. They're not even sure how they might use it, but they have organizational mandates to make sure that they're getting efficiency, and that their vendors and whoever they're using are using AI and being able to
(15:31):
report on that efficiency.
So customers definitely want that, and they're asking for it, and we're trying to figure out what those solutions should be that get some of the efficiency or cost savings or whatever they're requiring for their organization. And then even internally, in how we use AI and AI tools, I know every team is using it,
(15:52):
is using AI tools, certainly our engineering team is, and our product team is, maybe to a lesser extent, but certainly still using AI. And I know that our customer support tools are all AI-enabled and our help desk is AI-enabled. Really, everything is moving in that direction.

Hannah Clark (16:08):
I'm seeing more and more of that, the AI mandates, and it's interesting. I've had a few conversations about this so far, and there are a few different approaches that organizations are using. I've heard it described as a carrot-or-stick approach, where you can either incentivize AI, and if that doesn't work and the adoption rate is still not quite where you want it to be, then it becomes a mandate. I think the purpose is to try and get

(16:30):
people to be explorative, to try and use the tools and get comfortable with how they can support their workflows. But I could see that also having a bit of a...

Andrew Saxe (16:38):
Yeah.
Yeah.
And internally, we definitely put in some policies and allowed tools, and usually whatever vendors we're using have already gone through security audits and all those types of things, so that we can have some safety around that. But there's a new tool every day that everyone could use, and I'm sure is trying out.

Hannah Clark (16:58):
Yeah.
So if we talk about the hype cycle: this is the new technology, there are mandates, there's excitement around it, there's a lot of hype. And I think right now we're coming off of what I guess I'll call the great skepticism: what's hype, and what tools actually need AI features. This is something that happens every time

(17:19):
there's a giant advancement. We saw it with social media, Web3. So how are you helping the organization separate hype from real value in these tools and ground it? Because I think there's the other side of the coin, where there are folks who are reluctant to adopt, and then there might be folks who are maybe a little too eager.
So how do you rein that in to, where's the use

(17:41):
case for AI that is valuable and not detrimental to quality?

Andrew Saxe (17:45):
One of our AI researchers and engineers always says: not everything has to be AI. If you can do it with a rule or an if-then, do that; don't spend so much time thinking that everything has to be AI-enabled. And when you use tools like ChatGPT or something, you're just amazed that it just wrote this huge paper for you, or you
(18:06):
typed in something and it came up with all this information. That's one aspect that creates some of the frenzy, because it looks like it's just doing all this amazing stuff.
But when you actually start to apply AI to business solutions and things for customers, you need to have a different level of confidence in what it is saying.
(18:27):
I think that's where you start to see maybe the skepticism a little bit, and you're like, oh, this isn't just typing something into ChatGPT and getting a great answer out of it. It's figuring out how to apply the technology to something that improves your business or your customer's outcome.
So I think there's certainly the frenzy, and certainly everyone wants it, but once you get into building
(18:51):
some of the tools, there's a bit more of a measured response. You want to make sure that you're building something that is actually beneficial for people and that will be useful. And sometimes that takes more time than just typing something into ChatGPT.
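The researcher's rule of thumb, use a rule or an if-then where one suffices and save the model for the genuinely ambiguous cases, might look like this in practice. The triage rules and the `call_model` placeholder are illustrative assumptions, not any particular product's logic.

```python
import re

def is_translatable(segment):
    """Rule-based triage: empty strings, pure numbers, and URLs need no model."""
    if not segment.strip():
        return False
    if re.fullmatch(r"[\d\s.,%-]+", segment):   # digits and punctuation only
        return False
    if segment.startswith(("http://", "https://")):
        return False
    return True

def call_model(segment):
    # Placeholder for the expensive AI path.
    return f"<translated {segment}>"

def translate(segment):
    if not is_translatable(segment):
        return segment          # a cheap rule handled it; no AI needed
    return call_model(segment)  # only ambiguous text reaches the model

print(translate("42%"))    # passes through untouched by the rule
print(translate("Hello"))  # falls through to the model
```

The deterministic checks are free, debuggable, and never hallucinate, which is exactly why they should run before anything AI-enabled.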

Hannah Clark (19:02):
Yeah, absolutely. I feel like there's a parallel here with social media, when that came out en masse, where there was this social norm: every business needs to be on social media. Some people were not taking it seriously at all, or not enough, and others went the other way, over-indexing on social media to the point where it's like, where's the ROI on this?

(19:22):
So yeah, I think there's fine-tuning the balance: having a playground and encouragement to explore, getting people to zero in on where the good use cases are, but then only holding onto the ones that really have a proven benefit to the business.

Andrew Saxe (19:34):
Yeah.
And when we build, we do lots of experiments and things, and we've been testing one that we think will be really great for customers. We always try to get to an 80%, 85% confidence level, and for a few months it's been like a coin toss of whether it works or not. AI is great, but it's still like 50/50 on whether it produces the result that we want. So yeah, there's definitely that trust layer and confidence

(19:56):
in the right use cases for the different technologies.

Hannah Clark (19:59):
So looking at industries outside of translation, if you care to comment, what parallels do you see in how AI is being integrated, just in the product space in general? Are there lessons from your own experience in your career that you think are more broadly applicable, that are informing how you're seeing things as we move forward with this technology?

Andrew Saxe (20:17):
A little bit. I try to keep up with the latest tools for product and product managers and things. There are a lot of areas where I haven't seen it be super helpful yet in some of the product development. And I say that because, at least where we are, we're supposedly inventing things that maybe don't

(20:40):
exist yet and new tools andinterfaces and things that
hopefully people haven'tseen, or new technologies
or whatever applications.
I think it can be useful for things like writing: give me the outline of a spec, and it's gonna be this thing. Or it would be great if it could turn that into Jira tickets or something like that, to remove some of that complication from what PMs end up having to do.

(21:01):
I think there are some cool tools that I've seen for just drawing something on a piece of paper, and it turns it into a nice mockup and can point out different areas that you can focus on. So I think there's a lot of application that we'll start to see that'll improve some of the, I don't know, day-to-day, in
(21:21):
the weeds type of things that maybe don't require as much thought as inventing something new.

Hannah Clark (21:27):
Yeah. And to the point of inventing something new, something that I'm very excited about is this vibe coding trend. There's so much potential that I feel is being unlocked now for collaboration within teams: to be able to have an idea and generate a prototype that can be iterated on by a team that really has the skillset to take it to the next level.

Andrew Saxe (21:45):
Have you seen some of those tools in progress?

Hannah Clark (21:48):
Yeah, our internal team has been playing with it a lot, and it's really cool to see what people are inventing. But I think what is really neat is that I feel like we've all had that moment as smartphone users, where you're like, ah, there should be an app to do this thing that I specifically want, and probably no one else does. And now you have the technology to try it out and see, oh, this is actually a stupid idea.

(22:09):
Or maybe there's something there. I have one; I can't reveal what it is.

Andrew Saxe (22:14):
Yeah, that's a great way to test and see if your idea actually does pan out or not.

Hannah Clark (22:19):
Yeah, exactly. I think it's cool to be able to instantly validate something, because that's where ideas come from. Often someone's probably already doing it, but maybe you could be doing it better.

Andrew Saxe (22:30):
Exactly.

Hannah Clark (22:31):
As far as the relationship between AI and human workers goes, as a leader, we're talking about some of the impacts that you're seeing on the organization right now, and I think we can both see where that's going for end users and for folks at large. But what would you like to see? What excites you about how this is enabling your workforce, and what would you like to see in the next three to five years?

Andrew Saxe (22:51):
I think from a platform perspective, at least in the translation world, there's a lot of just management of things. A lot of people are in the weeds all the time, or there are a lot of platforms people spend their entire day in, and that is their job. And I think it's turning into more of a managed-by-exception
(23:14):
approach, in that if you can trust that the AI or LLMs or whatever you're using is creating the outcome that you want, then really you should just be managing the cases where it didn't create the outcome that you wanted. So we're repositioning and reimagining a lot of tools, including platforms like Smartling, to be more of a managed-by-exception
(23:36):
type of thing. And also, if you don't need people to do all of these things, then the technology and the tools can be in more places than just maybe inside the platform, or whatever.

Hannah Clark (23:51):
What do you think of this hot take?
So I'm, because one of thethings that I've long been
critical of just in general,is I feel like a lot of the
time, the tradition, when itcomes to your career trajectory
is that people who are reallygood at a specific task within
their, individual contributorstend to rise through the ranks
based on their skillset, withintheir core function, and then
become people leaders and lackthe people leadership skillset.

(24:14):
I feel like this is the first time that, because we are now in a position where ICs are overseeing technology that is executing base-level tasks, we are actually starting to train management at an IC level. This is a skillset shift: people are starting to develop an actual ability to manage, or an understanding of what it takes to oversee a team, before they reach

(24:36):
the people management layer.

Andrew Saxe (24:37):
That makes total sense. Actually, I haven't thought about that, but it's something that I will certainly think about, even looking at the Smartling team.

Hannah Clark (24:45):
Yeah, I think it's interesting, because there are the nuances of managing human-being issues, but I think it's cool to see how people are starting to think like managers before they become managers. So I'm excited for that shift as well.

Andrew Saxe (24:56):
Totally.
That makes total sense.
Yeah.
Yeah.
It does create a lot of new opportunities for humans to do things that they wouldn't have done before.

Hannah Clark (25:06):
Yeah.
It's so interesting to see these unexpected outcomes that happen when a technology has completely created a paradigm shift.

Andrew Saxe (25:16):
Yeah.
Sure.

Hannah Clark (25:17):
This has been really fun.
I really appreciate you coming on the show. I love an animated AI conversation.

Andrew Saxe (25:22):
I'm sure you'll have many more.

Hannah Clark (25:24):
Yeah, probably next month too.
Where can people follow you online, Andrew?

Andrew Saxe (25:27):
They can find me on LinkedIn.
I think it's just ASaxe.

Hannah Clark (25:31):
Wonderful.
Thank you for coming, and appreciate the time.

Andrew Saxe (25:34):
Thanks so much.

Hannah Clark (25:37):
Thanks for listening in.
For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager wherever you get your podcasts.