All Episodes

July 18, 2024 70 mins

Generative AI: step change or snake oil du jour? We enter this mire with Anmol Anubhai as our guide & insight into the future of GenAI x product creation as our destination. Join us for a refreshingly grounded conversation on a topic that’s typically full of hype.

FOLLOW-UPS – 01:42
The Hyundai Ioniq 5 N Manual Mode Changes Performance EVs Forever
WTF is a Product Manager?
Dare Obasanjo on Product Management at Figma Config 2024
Dare Obasanjo on Threads
Shreyas Doshi on Threads
Shreyas on why product management is hard

GenAI x PRODUCT WITH ANMOL ANUBHAI – 07:45
Anmol on LinkedIn
Meet Figma AI
Shane Allen on Figma AI
Mira Murati: Conversation with Dartmouth Engineering
Why Apple’s iPad Ad Fell Flat
Figma’s AI app creator accused of ripping off Apple weather app
Adobe’s new terms of service say it won’t use your work to train AI
Instagram is training AI on your data
CHI 2024: Evaluating Human-AI Partnership for LLM-based Code Migration
Why companies are turning to ‘citizen developers’

ANMOL’S ADVICE TO COMPANIES – 32:08

AI INTERFACES BEYOND THE BOT – 34:45
Google: People + AI Guidebook

CULTIVATING MENTAL MODELS – 42:37

WHO’S DOING THIS WELL & WHERE’S IT HEADED? – 48:12
AI brings soaring emissions for Google and Microsoft

WEEKLY RECS – 56:44
Anmol: Andrew Ng’s Machine Learning Collection & Amazon PartyRock
Joachim: BOOX Palma
Ernest: Earthworks Audio ETHOS microphone

CLOSING & PREVIEW – 01:08:48

****

Rant, rave or otherwise via email at LearnMakeLearn@gmail.com or on Threads @LearnMakeLearnShow.

CREDITS
Theme: Vendla / Today Is a Good Day / courtesy of www.epidemicsound.com
Drum hit: PREL / Musical Element 85 / courtesy of www.epidemicsound.com

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ernest (00:04):
Hello and welcome to Learn Make Learn where we share
qualitative and quantitativeperspectives on products to help
you make better.
My name is Ernest Kim and I'mjoined by my friend and co host
Joachim Groeger.
Hey Joachim, how's it going?

Joachim (00:18):
I'm good.
I'm hot.
It's too hot.
This is, I'm, should we justpretend that the heat is the
reason why we can't have moreregular episode uploads?
We're just pretending that ouraudio equipment is failing.
Yeah, no, it was a very hotweek.
It was lovely, but quite hot.
But, um, yeah, we're all doingvery well, and enjoying the

(00:38):
summer overall.
What about you, Ernest?

Ernest (00:40):
Yeah, I mean, I, I have to acknowledge that I have AC,
um, our central AC in our place,so I'm a little spoiled.

Joachim (00:48):
Yeah,

Ernest (00:49):
Did it hit, uh, did it get over 100 degrees Fahrenheit
in Seattle as well?

Joachim (00:54):
I'm not sure if we crossed the a hundred threshold.
Maybe we did actually.
Certainly felt like we did.

Ernest (00:58):
Yeah.
Yeah, I think we got to 105 onTuesday here in Portland, so it
was pretty bad.

Joachim (01:05):
Yeah.
Must've sucked.
Must've sucked really bad, huh,Ernest, with all that AC pumping
out.

Ernest (01:09):
Yeah, watching those news reports about how hot it
was.
All right.
Well, this is episode 19.
And today we're going to discussthe intersection of generative
AI and product creation.
We're With our first ever guestAnmol Anubhai.
Now we've touched on the topicof generative AI in previous
episodes, but really only inpassing.

(01:31):
So we're very excited to havethe opportunity to discuss it in
depth with Anmol, who actuallyworks in the field, unlike us,
she's an expert.
Uh, and we'll dive into thatconversation in just a minute,
but let's start with some followups.
Joachim, do you have any followups to our previous episode on
the weight of history or anyepisodes prior?

Joachim (01:51):
Yeah, I have one very, very tiny follow up.
Um, and I think it was in theepisode, um, The Weight of
History.
We were discussing, um,Hyundai's IONIQ 5, the N
version.
And, um, we had, we had gottenvery fanboy about the whole
concept and the people involvedin that project.

(02:11):
And the fact that.
They had created a fake manualgearbox for an electric car,
which doesn't really make sense,but added to the adventure and
enjoyment of the vehicle.
Now, a friend of mine who isalso a listener, uh, Jay
Conheim, was very, very upsetwith our treatment of the Ioniq
5's manual gearbox.
He was deeply offended becausehe pointed out, and rightly so,

(02:34):
that the EV6 GT, Kia's, uh, car,which is built on the same
platform.
And is there a high performanceelectric vehicle has a similar
feature and had it well beforethe Hyundai Ioniq 5.
And, yes, he was so upset.
He immediately texted me.
Um, so I have to apologize tohim.

(02:54):
I have to apologize to all otherKia drivers.
Yes, there is a version of thattechnology in the Kia EV6., If
anyone has access to both ofthose cars, please let us know,
um, if they are comparablemanual gearbox experience is an
electric cars, but just, yeah, acorrection on.
Hyundai is not the first.
Kia have a version of this aswell in their car and, uh,

(03:16):
apologies to everyone out therefor that misrepresentation.

Ernest (03:20):
I think the thing I'm most excited about is that one
of our episodes evoked anemotional response in a
listener.
That's awesome.

Joachim (03:29):
Yeah, I mean, we, better than no response, right?
It's like, can be supernegative, super positive,
neutral.
We do not want, we need, we needpeople to be angry.
That's, that's better than

Ernest (03:39):
Yeah, that's a absolutely for those folks
listening.
Please do send us your feedbackat learn, make, learn at gmail.
com.
We want to hear from you, uh,including corrections.
Um,

Joachim (03:50):
Yes.

Ernest (03:51):
I have two followups today and they both connect back
to our WTF is a product managerepisode from way back in
February.
Uh, the first is a presentationby Dare Obasanjo, who is the
lead product manager at Metawhere he leads the team
responsible for the in appbrowser and key ad platform
technologies for Facebook.

(04:13):
Messenger and Instagram.
So vitally important productswith lots and lots of users.
Uh, just a few weeks ago in lateJune, Dare presented at the
Figma config conference, andwe'll actually reference the
Figma config conference again.
When we get to our main topic,but coming back to Dare, the
title of this presentation wasproduct management, half art,

(04:33):
half science.
All passion, but really what hetalked about was what makes a
good product manager.
So for the folks in the audiencewho are interested in product
management, I think his talk iswell worth a watch.
Thankfully it's available onYouTube and we'll include a link
to it in our show notes.
I'll also include a link toDare's profile on threads where
he's a prolific poster with themajority of his posts focused on

(04:57):
product management.
Now speaking of threads, mysecond followup is also based on
that platform and it's theaccount of Shreyas Doshi, who's
been a product manager and ledproduct management at pretty
much every high profile techcompany you can think of,
including Yahoo, Google, Amazon.
and Stripe.
Shreyas now advises founders andexecutives and also coaches

(05:19):
product managers.
And he very generously shares alot of great insight around
product management via histhreads account.
And again, we'll provide linksto all of this in the show
notes.
In one recent example, heencapsulated the product
management role in the space ofjust one post.
Of course it was very highlevel, uh, but I think he
managed to capture the essenceof the role remarkably well, and

(05:40):
in well under 500 characters.
My favorite of Shreyas's posts,uh, threads, sorry, is, uh, one
in which over the course of 15posts, he explains why product
management is hard.
I'll quote just four of those15.
Shreyas writes, quote, sometimesyou should build what users say
they want.
Other times you shouldn't.

(06:01):
Sometimes you should aim forpixel perfect product.
Other times good enough is goodenough.
And sometimes launch a flawedproduct.
Sometimes you should persistuntil your product is
successful.
Other times you should pivot.
And sometimes the right call isto sunset it.
This is why product managementis hard.
This is why it's so fun.

(06:22):
For some people at least.
It is also why good productmanagement matters.
And as a product managementgeek, I just love that, you
know, I've practiced productmanagement for over a decade and
I've not seen anyone capture theessence of the role more
cogently than Shreyas.
So we'll include a link to hisaccount along with the two
specific threads I'vereferenced.

Joachim (06:44):
I love that.
That's such a great, it actuallyis a great summary also of our
conversations.
I feel that we're alwaysflitting between these two poles
on every conversation, becauseof this is clear cut.
It's such a complex mess of allkinds of forces pulling you in
different directions, and Iguess we always try and

(07:07):
enumerate as many of them aspossible and sit in that
tension, right?
I guess that's exactly where youare.
That's everything's pulling atthe same time and it's all
happening at the same time.
That's such a great summary.
Perfect.
Perfectly summarizes thecomplexity of this.

Ernest (07:21):
Exactly.
That the answer is that it'scomplicated and you know, people
might not want to hear that, butthat's the truth of it.
And that's it.
Just like you said, that's thefun of it as well.
Um, all right.
So we're going to dive into ourmain topic, our interview with,
uh, our conversation really withAnmol Anubhai.
And with that, I'm going to passit off now to Past Joachim to

(07:43):
introduce the segment.

Joachim (07:45):
All right, well, let's move on to our main topic and
we're excited about this becausewe have our first guest on the
podcast.
And so let's just jump rightinto it.
we're going to have aconversation today with Anmol
Anubhai on the intersection ofgenerative AI and product
creation.
And to be very, very clear, andthis applies to all of us on
this podcast, the viewsexpressed here are ours and

(08:07):
hours alone and do not reflectthe views of our employers.
And of course, this extends toAnmol as well.
We're here just as a bunch ofprofessionals chitchatting
around the campfire anddiscussing our thoughts on
things in general.
So we don't represent any majorcorporations here.
Anmol, welcome to the podcast.
Welcome to Learn, Make, Learn.
To be our first guest.
How would you like to introduceyourself to the listeners?

Anmol (08:29):
It's such an honor to be here and talking about, product
design with the both of you.
so to tell you in brief aboutmyself and what I've been up to
in these last couple of years,right?
So, um, I would call myself auser or human advocate in the
domain of product shaping, likeAI product shaping.

(08:51):
Um, I've had the good fortuneof, working with teams at Google
AI and then Uber for a littlebit and now Amazon Web Services
for the last four years, uh, allincredible teams trying to, uh,
You know, shape human first,human centered products, and my
job has been to aid these teamswith qualitative and

(09:12):
quantitative data, that comesfrom in field research around
what are the concerns, fears,myths, some of them are not
completely invalid.
So how can we truly.
Build systems that users cantrust.
That has been my key focus inthese last years.

Joachim (09:30):
That's super interesting.
It's such a different angle fromwhat we generally hear about the
AI domain right now, especiallyin this, let's be honest, it's
kind of a hype time right nowwith AI.
And so having these words liketrust and concern coming
through, and what you'redescribing is pretty, um, Well,
it's heartening to hear thatthis is actually ongoing work
and this is happening right now.
Something that you touched on, Ijust wanted to bounce off very

(09:52):
briefly and then we'll go deeperinto the larger conversation.
The legibility of these AIsystems.
These large language models are,as everyone likes to throw
around, black boxes.
They're very complicated.
how do you feel about that?
Does your work touch on tryingto make these systems legible so
that we can actually trust themand we're able to audit exactly
why it is they do something theway they do?

Anmol (10:13):
I think that's a great question, right?
across all our differentstudies, um, with many of these
companies, even what, one thingthat, you know, has been true is
the fact that all our customers,they want back end visibility
into how are these models, even,you Uh, landing on the
conclusions, making thedecisions that they're making.

(10:35):
Um, nobody here is signing upfor magic.
That's the gist in a nutshell,right?
Like, uh, people might be wowedby it initially for a few
minutes here and there, but thenwhen it comes to actually
adopting these systems, uh,making it a part of their life
or even work, uh, they want tounderstand.
What is it doing on the backend, even if there are no

(10:57):
experts, say, in the field ofAI/ML.
So that is definitely there.
And that has been our push asthe design team to which is how
do we provide that back endvisibility without making it
overwhelming as an experiencefor our end users?

Joachim (11:14):
Awesome.
before we dive into the deeperconversation with all of us
here, Ernest, you had a coupleof thoughts that you wanted to
use to set up the conversation.

Ernest (11:22):
Oh yeah, no, I appreciate it.
Thanks Joachim.
And you know, we do want this tobe a largely free form
conversation, but I just thoughtit might help to put a few
stakes in the ground to startAnd I just don't think we can
have a meaningful conversationabout generative AI and product
creation without highlightingsome of the key concerns voiced
by particularly people in themaking community.

(11:43):
And for starters, people whowork in digital product
creation, especially designers,are almost certainly aware of
the concerns that came out ofthe aforementioned Figma config
conference this past June forthe benefit of anyone who's not
familiar with Figma.
It's an online collaborativeinterface design platform that's
become very popular as aprototyping tool for folks

(12:03):
designing websites and mobileapps.
Um, imagine a version of, uh,Google Slides created
specifically for people makingwebsites and apps and you'll get
a pretty good sense for whatFigma does.
The source of these concerns wasa new suite of features launched
by Figma called Figma AI,quoting their announcement.

(12:23):
We're excited to introduce FigmaAI.
A collection of featuresdesigned to help you work more
efficiently and creatively.
We often talk about the blankcanvas problem, when you're
faced with a new Figma file anddon't know where to start.
The new Make Designs featurewill generate UI layouts and
component options from your textprompts.
Just describe what you need andthe feature will provide you

(12:44):
with a first draft.
By helping you get ideas downquickly, this feature enables
you to explore various designdirections and arrive at a
solution faster.
Unquote.
Figma AI was introduced at thecompany's config conference in a
session led by a product managernamed Cemre Gungor.
And just a quick side note, thevideo of that session has since
been removed from Figma'sYouTube channel, but we'll share

(13:07):
a link to an in depth post byFigma documenting the new
features.
Now this sparked a couple ofconcerns that I think will help
set up our conversation withAnmol.
The first, which is a broadlyheld concern associated with AI
is job loss.
In this case amongst interfaceand UX designers.
Speaking to this, an interfacedesigner named Shane Allen

(13:28):
posted to threads with thefollowing in response to
GemRay's presentation.
Quote, a product manager showingdesigners how to use AI and
Figma.
While the irony isn't lost onme, the reality is this will
change the designer to productmanager dynamic forever.
Unquote.
Asked to elaborate on what hemeant, Shane wrote, quote, Just

(13:51):
make it like this, says theproduct manager, as they show a
designer and AI mock up in Figmathat they've already asked
engineering to build, unquote.
So clearly the elimination ofentire classes of jobs,
including white collar jobs thathave thus far been insulated
from the steady march ofautomation, is a big and growing
concern.
And then gasoline was poured onthis fire when footage of a talk

(14:14):
by OpenAI CTO Mira Muratisurfaced that same week, in
which she said, quote, Somecreative jobs maybe will go
away, but maybe they shouldn'thave been there in the first
place, unquote.
Now, I should note that thecoverage of this has been hugely
unfair in that Murati's quotewas very much taken out of
context, and we'll provide alink to the full video of her

(14:34):
comment so that you can judgefor yourself.
But the response to Murati'swords, this new Figma AI
feature, and Apple's Crush adfrom a little while back show
that the broader fears of jobloss stemming from generative AI
have very much extended into theworld of creatives and makers.
Now, the second concern, whichis closely related, is rooted in
intellectual property.

(14:55):
And again, Figma's new AI toolwas at the center of the storm.
Quoting from a Guardian article,Quote, the new AI tool launched
by Figma allows users to enter aplain English description of an
app they want to create andwatch as the user interface is
generated out of thin air.
But shortly after it launched,Andy Allen, founder of iOS app
developer, not boring software,discovered that multiple

(15:17):
requests to design a weather apprepeatedly resulted in a program
that was almost identical to thebuilt in iOS weather app that
appears on Apple devices.
This is a weather app using thenew make designs feature.
And the results are basicallyApple's weather app.
Alan said in a post on X thisweek, try it three times, same
results.
Unquote.
We'll include a link to theGuardian article and show notes.

(15:39):
It includes Alan's tweets, whichlay bare just how strikingly
similar Figma, uh, Figma's AIgenerated weather app prototypes
are to Apple's built in weatherapp.
Uh, and there are two big issueshere.
First, as Alan points out.
Use of this feature could putapp designers in legal jeopardy
as he wrote on X quote, just aheads up to any designers using
the new Figma make designsfeature that you may want to

(16:01):
thoroughly check existing appsor modify the results heavily so
that you don't unknowingly landyourself in legal trouble
unquote, because if you were tounwittingly use Figma AI to
create an app to happen tomirror the appearance of an
existing app, the developer ofthat existing app could take you
to court for copyrightinfringement.
Now, as troubling as that couldbe for an indie designer, the
even bigger concern is aroundownership.

(16:23):
This is a massiveoversimplification and Anmol,
please don't hesitate to jump inif I'm getting any of this
wrong, but generative AIplatforms work by analyzing huge
pools of data, text in the caseof text generators, images in
the case of image generators,um, to uncover the underlying
patterns in that data, and thisis called training, then
building from the patternsidentified through that training

(16:45):
process, the platform is able togenerate new content based on a
prompt from a user, for example,make a weather app.
Now, As we discussed in ourepisode titled the perils of
crossing over from niche tomainstream, a big issue here is
that with very few exceptions,the people who created the
content used to train thesegenerative AI platforms, content

(17:05):
without which these platformscould not exist, are not
compensated.
And this becomes doublyproblematic when a creation
platform uses its own users datato train its generative AI
models, in effect, chargingcreators for their own demise.
Now, to be fair, Figma deniesthat they've done this.
Returning to that Guardianarticle I referenced earlier,

(17:26):
quote, Figma's chief executive,Dylan Field, posted a defense of
the company's feature.
Despite appearances, Field said,the tool was not created by
training an AI system on workdone using the Figma app by
other customers.
Instead, the service used offthe shelf large language models
to instruct a more hand codeddesign system, unquote.

(17:46):
End quote.
But at least in my opinion, thisis really a difference of degree
and not of kind.
Fundamentally, any LLM basedgenerative AI system is using
patterns underlying existingworks to generate, quote
unquote, new works based onthose patterns.
And Figma is hardly the onlycreative company to come under
fire for this.
As noted in an article from theverge, uh, Adobe faced intense

(18:09):
backlash over its terms ofservice agreement and had to
announce a tweak version thatmakes it clear that Adobe will
not train AI on user contentstored locally or in the cloud.
And many longtime Instagramusers were enraged to learn that
meta has started to mine users,Instagram images and videos.
To train its AI models as a afilmmaker and screenwriter wrote

(18:29):
in a piece for fast company,just when you think that meta
had already committed everyimaginable wrongdoing, the
company has pulled more garbageout of its clown cars, trunk by
mining user creations for itsown AI.
Meta is effectively killingInstagram's spirit while
flipping the ultimate finger toall Instagrammers, especially

(18:49):
those who joined the socialnetwork back when it was an
independent playground forcreativity and self expression.
Alright, so clearly people inthe business of making feel very
strongly about generative AI.
They're worried aboutappropriation of their work,
devaluation of their work, andthe seemingly existential threat
posed to their livelihoods bygenerative AI.

(19:11):
Now, Anmol, I know this is quitea lot to lay on you, but do you
think these concerns arewarranted?
And you know, as clear as theserisks are, do you see potential
upsides of generative AI forpeople in the business of making
products?

Anmol (19:24):
thanks for this question on this.
I think this is such a such animportant discussion to be had,
right?
Um, I think where I would liketo start is by asking ourselves
the question, uh, what is evencreativity?
Right?
What are these concepts,creativity, productivity?

(19:45):
I feel like, and this is mypersonal take, but I've also
seen a lot of creatives in thepast couple of years voice their
opinion on this and have thesame take, which is creativity.
Creativity, you fundamentallyneed to be human.
And need to have had certainlived experiences to be able to,

(20:06):
bring that story, thatindividuality to whatever it is
that you create or make, right?
That is what, uh, resonates withyour audience or with someone
who is, uh, a part of it.
Say if it's like a co creationexercise.
So to hope and pray that like anAI model.

(20:26):
It's going to start making artby simply, say, supervised
learning, just like how you weresaying, right, if the model has
access to a lot of data, theinternet, and then after simply,
say, going through all of it,using permutation, combination,
or some logic, it's makingsomething.
The question to ask ourselves asa society is, would we even

(20:49):
categorize that or call that artin the first place or any kind
of a creative output?
from research what we found, sowe also wrote this paper, um, On
human AI partnerships.
Um, and again, this was with twoof my wonderful authors.
So Ishani and Behrooz and whatwe learned, we did a lot of

(21:10):
these interviews withdevelopers, even besides this
study, we have done so manyextensive interviews with the
developer community, because inthis case, we are making tools
for AI led and one thing that welearned is that Um, users want
actually more time and energyback to be able to be creative

(21:33):
themselves and they don't wantto do certain really boring,
repetitive tasks.
And I think that is whereproduct shapers can play a role.
if you understand what it meansto create value versus just
focusing on quote unquoteproductivity.

(21:53):
You know, what does it mean toempower your employees so that
they get that time energy backto focus on value creation, on
true creativity.
If you come from that route,then I think you're actually
going to end up making use of AIin really meaningful and
powerful ways.
Because let's face it, you know,there are so many tasks, like

(22:16):
for example, in a developer'slife, this entire paper is on
code migration, which is simplytaking like a legacy code base,
looking at the language andconverting it to a different
language.
Now that is a very mechanical,arduous task.
Uh, developers are alsocreators.
in some sense and they want toactually solve real customer

(22:37):
problems is what they weretelling us throughout, right?
Nobody wants to do this.
So if we see AI as a tool thatis here to sort of partner with
humans and take away the boringrepetitive work from their lives
so that they get to do the morejoyful creative work, then I

(23:01):
think we are on the right route.
But if we see AI, just like howyou're saying, as this director,
and as humans simply being therein the system, then I think
we're approaching it in anentirely incorrect manner,
because I don't think our modelsare even there yet to be able to
make those types of nuanceddecisions.

Joachim (23:24):
That's a super interesting starting point for
this whole conversationespecially when you started
talking about creativity, theprocess of creating is really
the magical part.
It's not really the outputs.
The outputs are kind of nice anda good thing to have.
And usually if you're working ina company, that's how you get
judged is what the output is.
But the process really changesthe maker.

(23:45):
and so when you talk abouthaving generative AI handle some
of those menial tasks do youthink if we keep getting rid of
those menial tasks, we maybe arelosing some of that process that
is changing the individual andmaybe it is the fuel that
actually feeds the creativity?
Um, Are we going to lose alittle bit of that with the
generative AI thing or is therea way to keep that going?

Anmol (24:07):
I think.
I haven't, to be very honest, Ididn't think of it that way.
That's interesting, what youpoint out, right?
Which is when you are also doingsome of those menial tasks, you
are Some part of your brain isthinking about the system,
maybe, or the othercomplexities, and then that also

(24:28):
becomes a part of your processwhen you're actually, say,
solving the final problem.
Um, that's a really good way oflooking at it.
Sure, we are losing that, to behonest, if we approach it the
way I was describing earlier,right?
But at the same time, Um, thepro that I see there is the fact

(24:48):
that we are also inviting, uh,many types of creators and
leveling the playground by thatwhat I mean is right.
Like there are so many folks outthere who are now calling
themselves citizen developers.
Because they don't need to learnprogramming.
They don't need to be computerscience experts, uh, small

(25:08):
businesses who had a lot ofdifferent ideas, but so far they
did not know how to maybe say,hire the right developer to make
it happen for them.
Now, these folks, uh, theyalways had a knack for problem
solving, but they have these,skill sets at their fingertips.
And they themselves are nowactually also building solutions

(25:30):
on their own and the excitementof it, right?
To be able to make something onyour own without having to spend
time learning things like a Cprogramming language or Java or
whatnot, right?
Um, so that is, maybe I'm beingan optimist here, but the way I
see it is that we're invitingsome of these business users.

(25:53):
to also start making use ofthese technical products.
And very soon we might askourselves the question, what it
even means to be quote unquotetechnical, right?
Like everyone is a creator insome way, I feel strongly
believe in that.
So I feel like in some sense, ifwe use these tools, if we use AI

(26:13):
correctly, we will, we mightjust end up empowering, uh,
creators coming from all kindsof diverse backgrounds.
To jump in and start making, tostart problem solving.

Joachim (26:25):
Yeah, I love that.
The optimism.
I think this is where I like tosee AI optimism.
I think this is exactly thedomain that makes the most sense
because it is, as you said,removing barriers to entry means
that now more ideas can flourishand you can have meaningful
conversation.
And I think usually, I thinkErnest, in your setup, you

(26:46):
mentioned kind of the, the, theproduct manager who just says,
Hey, make it look like this.
And it becomes this directiveordering approach.
so all of that to say is like, Idon't think it's the technology
itself.
That's to It's theorganizational structure.
Right.

Anmol (27:01):
No, I just wanted to add one quick thing that Ernest also
pointed out.
And I thought that was such afantastic example, right?
That the PM said, Hey, you gomake it this way.
And then finally the model endedup spitting, um, almost like
another version of the AI, um,Like of the Apple weather app,
right?
It ended up looking exactlysimilar.

(27:23):
Um, I feel this is exactly whyyou cannot make products that
are end to end AI led.
You need human AI partnerships.
You know, the folks who are sayexperts in the domain, say
designers in this case, right?
They're still going to be thefolks who are going to lead the
effort.
And AI is probably going to be ahelping hand in the process.

(27:47):
Uh, that's one thing that we seeagain and again in our studies
too, which is these humans, theyknow things just based on
institutional knowledge thatsometimes you cannot find for
the life of you on internet orsomewhere else.
It's because of theirexperience.
Um, and as a product shaper, youhave to respect that.

(28:08):
You have to make systems that,uh, I'm going to partner with
these humans instead of just,you know, go off and do
something on its own becausethat's never going to give you
what you exactly want, um, asthe end user or the stakeholder.
Yeah.
Yeah.

Ernest (28:23):
I I love that just the fundamental approach and also
we'll include a link to Anmol'spaper in the show notes as well.
But I just love the approach youtook to that work where, you
know, the assumption you wantedto test was this idea of
partnership versus replacement.
And I was just curious what,what led you to that, uh,
looking at that question versus,you know, most people I think

(28:46):
are looking at replacement.

Anmol (28:48):
A lot of folks played a crucial role, but I have to say
that, uh, my manager back then,so she is this amazing product
director.
And she really encouraged me toalso study concepts like
productivity and impact onproductivity of Gen AI tooling,
et cetera.
And, um, when I started evenlooking at papers from a bunch

(29:11):
of different companies,organizations across the board
on this, um, we realized thatNumber one, we are approaching
this incorrectly, which is, youknow, a lot of companies today,
they feel like, okay, if you'regoing to say, adopt an AI led
something, we are going to beable to get rid of X number of
people or employees.

(29:32):
Um, but.
that will never happen for now.
And I think even like mypersonal take, maybe it's
controversial, but I don't seeit happening in the future too.
I'll tell you why, becausenumber one, exactly like what we
were describing, right?
Like you need those experts tobe leading the effort.
But I think These organizationswithout someone to tell them if

(29:55):
the output is even okay or not,or if it will meet a certain bar
or standard or not, they're notgoing to want to adopt it.
So sure, they might want to tryit in a sandbox kind of
environment.
But then when it comes toactually incorporating AI into
existing workflows, right, thatis a very high stakes decision.

(30:16):
So you really want to be sureabout certain things like
accuracy, precision, what wewere describing earlier, which
is the back end visibility.
What is the logic here?
Uh, they don't want a black box.
That's one thing that we werehearing across the board, which
is why we were like, okay, ifyou don't want a black box, what
do you want?
And then just like how you cansee in the paper, a lot of

(30:39):
developers are talking about,uh, we want, um, an AI tool that
acts like a peer, you know, thatI'm going to work with together.
And that way, the wholeexperience, I think it's very
clever to do this because as anAI product shaper.
You also automatically meetsystems that will be more easily

(31:00):
forgiven by the end user.
You know, they are not expectingit to come out with the hundred
on hundred output or response.
They're expecting it to partnerwith them, you know, throw a
bunch of different ideas and forthem as the humans in the
system.
In the loop, as they call it,human in the loop to discuss and

(31:22):
decide whether something evenmakes sense or not, and then to
build on it together.
So it's a win win kind of, youknow, you don't have to work
towards that perfect hundred onhundred end to end AI system.
Nobody's asking for it.

Ernest (31:37):
I think that's great.
I also loved what you saidabout, you know, this focus on
value over productivity.
Cause I, I do feel like that'swhat's driving so much of the
excitement right now around AIat a C suite level is this,
visions of incredibleproductivity without really
thinking about what's the actualvalue we're getting out of here.

(31:58):
Um, How would you try topersuade someone to, you know,
shift their thinking away fromjust being so productivity
focused and, you know, thinkingmore broadly about value?

Anmol (32:08):
yeah, absolutely.
So I think, this again is sortof tied to what we were
discussing, uh, a little bitearlier, about creativity and,
uh, design, for example, artalso in many ways, right?
Like as a company, my personaladvice is to a lot of these
companies, small, big, large.

(32:29):
Um, ultimately.
You want, folks or solutionsthat are out of the box, that
are cutting edge, novel, right?
almost sort of like art, likeextremely creative solutions to
difficult problems.
Uh, you need humans to be ableto do that.

(32:49):
Simply put, I don't think any AImodel is going to do that for
you without any guidance.
And as a responsible employer,you want to empower.
these folks, your humans to beable to be their best creative
versions.
So, um, instead of productivity,if you focus on value creation,

(33:11):
then you can do really smartthings like maybe end to end
study your existing workflows,processes, identify the
bottlenecks, identify themenial, not so fun work, which
might also be leading to churn.
you losing some of those gems inyour organization, and then

(33:33):
shaping AI systems or bringing aflavor of AI to focus on those
bottlenecks.
Because actually you know, a lotof experts will also tell you
this, but.
Um, AI systems do well on smallscoped out tasks, right?
They're still not able to handlevery fuzzy big problems, but if
you have a small scoped outtask, it's probably going to do

(33:56):
its job then.
So I think it's both again,clever and responsible to get AI
to handle some of thosebottlenecks while you free up.
The bandwidth, uh, give backthose cycles to your employees
to focus on real problems on.
Um, be more creative, maybe takea few more risks because they

(34:19):
have the time and the energy towork through it.

Joachim (34:22):
All of that is, I think resonates with all of us here in
this conversation, because itreally does put that.
The focus on kind of the, thespecial magic that humans still
have and you use the technologyto augment that and expand what
it is that's possible.
I think that's where we've seentechnology have the most impact
in human societies is that itexpands our ability to do

(34:44):
everything.
So what is the best way to getthe AI to interface with us?
Cause right now it's very much.
Chat GPT is, the one in thepublic's consciousness.
and that is as the namesuggests, a chat bot, but if I'm
writing code or I'm trying toproblem solve, it doesn't feel

(35:05):
great to just see code appearout of nowhere.
Have you put any thought.
Not necessarily even where youare right now, but just
privately, what does the greatinterface look like for
something like a system likethis?
Let's say I'm a developer and doyou have any like best practices
of ways of thinking about whatwould make a good interface for
that type of person, forexample?

Anmol (35:27):
Yeah, I think that's, again, such a fantastic question
because I've, also had thesediscussions and debates with so
many other designers,throughout, like across, uh, in
Seattle, uh, in the Bay Area,um, even outside of Amazon, but
then they are all having thisconversation exactly right,
which is not everyone wants tobe talking to a bot.

(35:50):
Right, like there are folks whoare introverted often,
ambiverted too, talking seemsstrange as an experience.
If I want something, why would Iwant to talk to, to a bot,
however accurate or great?
So the way I see it is that, uh,it will not remain just a, you
know, chat or a natural languageprocessing NLP kind of

(36:14):
experience.
The way I see it is that maybeour UIs will become more
adaptive.
By that, what I mean is,something like organic user
interface, right?
So I remember reading about thisconcept many years ago, which
is, um, OUI.
which is when a user interactswith an interface, the interface

(36:36):
learns about the user and shapesitself accordingly.
And I think that is thedirection which is going to be
really exciting to explore,right?
Because these models, they canretain the history, they can
understand a user so much sothat it sort of tailors itself
accordingly.

(36:56):
So, you don't have one, uh,experience or one solution for
everyone across the board, butmore tailored, meaningful
experiences.

Joachim (37:07):
That's super interesting.
Yeah.
That immediately got my brainthinking about a very different
Like the simplest version in mymind is like auto complete, but
a natural auto complete that'snot irritating, but feels like
an extension where I am and whatI'm thinking.
Not clippy.
Not that, not Microsoft'spaperclip, but um,

Anmol (37:25):
one quick thing to add to it.
It's, a very interestingdiscussion, right?
Because there are also cons.
I'll give you an example.
suppose you use an app, any app,maybe it's your Microsoft Word,
and then there are just certainfeatures that you use all the
time, and a lot of the otherfeatures that you never use, for
whatever odd reason, maybethey're not as relevant for your

(37:45):
work, the OUI or the intelligentUI will actually Figure out a
way to be able to make thosefeatures that you use often more
discoverable, easy, accessiblewhile you're working, right?
And maybe hide the rest, makefor cleaner, less overwhelming
experiences.
However, there is a slightcaveat to this.

(38:08):
Um, so I was working at theSeattle Times, uh, while I was
in grad school here at UW and Wemade like a similar intelligent
reading experience for youngreaders using fuzzy clustering,
which is a kind of an AIalgorithm.
However, um, a lot of youngreaders, they were super

(38:28):
skeptical about this becausethey were like, wait, I'm only
getting a lot of what I amalready into.
It's not like there's no way toswitch out of this, similar to
what we often also hear aboutsocial media apps and whatnot.
Right.
So what if you want to try beinga different person?
What if you have grown?
What, like, how will the model,the experience keep up?

(38:51):
It's also a very interestingquestion that I think designers,
researchers will have to answerat some point.
Yeah, absolutely.
Yeah.
And

Joachim (39:00):
Oh, that's so interesting, because that kind
of opens up, in my mind, whenyou said the way you phrased it
as they want to try on being adifferent person, it feels like
It would almost be like, well,you've now formed these clusters
around the types of behaviorsthat, create this cohort and
that cohort and that cohort.
But if you're very transparentabout that and say, by the way,

(39:21):
you're in cohort A, there's alsoall these other ones out there.
Do you want to try one of thoseon for size that it, it, there's
a, again, we've become soaccustomed to obscuring those
things.
Maybe if there's agency in howyou then are able to break free
from the persona that you'vebeen given and you can try
something else out, that wouldgive you enough of the sense of

(39:42):
freedom that you can stillexplore other things, but really
interesting example.

Anmol (39:46):
Um, you know, our fantastic technical writers, uh,
because we need them to actuallyhelp us break down these
concepts even and make them moredigestible for our end users.
Um, not every time is user atall different level or whatever,
actually be able to understandsomething like cluster or what

(40:09):
exactly do you mean?
Right?
So, um, I think that's anotherbig question right now, which is
how do we make these conceptssimple to understand without
losing the nuances?

Ernest (40:22):
I think this is such an interesting question and it, it
to me makes clear that interfacedesign is going to be so
important to addressing thesesorts of questions in the years
ahead.
So the field you're in, Anmol, Ithink is going to be really
exciting for, uh, quite a whileto come.
But, um, One very old schoolexample that comes to mind for

(40:45):
me is, you know, the traditionalpaper and newspaper that, uh, by
way of the judgment of aneditorial staff exposed you as a
reader to a range of topics thatyou You know, may not have
sought out yourself.
Um, and I, I recall I'm oldenough to have grown up reading
paper newspapers.

(41:07):
I really valued that.
That was one of my favoritethings about reading the New
York Times.
I grew up with the New YorkTimes.
I've been growing up in NewYork.
That, Tuesday was science day,and I would get exposed to this
content that I just wouldn'thave been exposed to otherwise.
And, to the point that you andJoachim were talking about, it
does feel like we're missingthat right now, where, you know,
like you were saying, just.

(41:28):
by virtue of the algorithm as itis today, just kind of kept in
our own little bubbles.
Um, do you, is, are you aware ofanyone doing work in this space
to, you know, because I guessthat is the challenge of how do
you set a KPI against that,right?
Like, how do you say, um, we'redoing this well or not in terms
of exposing people to thingsthat they're actually not

(41:50):
seeking out?

Anmol (41:52):
Absolutely.
I think, the one name and teamthat comes to mind is, um,
actually Google AI UX.
I also used to be a part of thatteam.
Um, and Jess Holbrook with someof his, uh, amazing researcher
colleagues wrote this guidebookcalled the People Plus AI
Guidebook, uh, wherein he talksabout the importance of

(42:15):
providing end users with thisvisibility, transparency, how
best maybe to go about it.
Um, so I think that guidebook,um, in fact, it also got me
thinking about so many of thesequestions that we are discussing
today.
I highly recommend, uh, thatdocument.
It's also available online asfar as I remember, you can share

(42:36):
a link.

Ernest (42:36):
Oh,

Joachim (42:37):
I was just going to reach to a different, related to
this stuff, which is the otherside of the feedback loop, and
bringing it back to your paper,there was this, fundamental
distrust that the developers hadof the AI.
I found that really interestingbecause some of them said, unit
tests are no good.
It's not enough to be able totrust what this thing has done.

(42:59):
Um, I need to see a moreholistic test.
And I, I, that one struck me asreally interesting because
having worked with softwareengineers and also as myself
coded and been in the system ina production system, no one does
that.
I found it interesting thatthese developers, when taken out
of their standard developingenvironment, You give them a
different interface, They startevaluating work differently.

(43:23):
And I wondered, do you guys alsofollow that feedback of because
I've interacted with an AIsystem, this now affects the way
I work actually without the AIsystem, does that kind of
feature somewhere where thatfunky weird feedback loop that,
uh, pops up?

Anmol (43:39):
I think that's, uh, again, such a great question.
I wish my colleague Ishani washere to answer it.
I really miss having her here.
Um, she also was like, we bothpartnered on this and she led
several of these sessions and wehad some great discussions about
exactly this, right?
Which is, are we seeing also achange in the mental model?

(44:01):
I think one thing that we haveto also acknowledge and maybe
show compassion for our ownselves as AI product shapers is
the fact to just embrace thefact that it is genuinely hard
to also shape AI experiences andproducts right now.
And one key reason is that thesecustomers, to your point, even

(44:25):
in the beginning, they haveabsolutely no mental model.
for what to expect or where tostart, right?
So, uh, it's a very differentballgame.
It's not like designing asolution for something that you
have seen firsthand in real lifeor you have some reference point
for.
Um, so it's more like shapingsomething for folks who are

(44:50):
still themselves forming theiropinion and shaping their mental
model about the product orservice in question.
So in this case about thetesting thing, um, I think one
thing we realized is that it'slike they started seeing AI as a
teammate.
They talked about how duringcode review, if they were, say,

(45:13):
reviewing their team memberscode, right, then they would, of
course, offer like very criticalreal feedback, but then also be
forgiving when they need to and,uh, wait for them or, in this
case, the model to get betterover time with their feedback.
So I think, um, more than a verydifferent way of working, the

(45:35):
flavor they're bringing is thatyou're seeing AI as a peer or a
teammate.
I don't think they are changingthe way they are working, but
that is how they're approachingit.
Does that answer your question?
Mm

Joachim (45:49):
Yeah, it does.
And it just, It makes me just,my mind just go off into all
kinds of crazy directions.
Because you've put this human inthe loop, you have to first give
them adjustment time.
They need a mental model, as yousaid, of the thing that they're
interacting with in order to getthe benefit of that thing.
which is so interesting becausethat's very anthropomorphizing,

(46:10):
right?
We're turning them into a peer.
I think back to Ernest's pointat the beginning of the C suite,
just seeing productivitynumbers, what they might
actually be in more interestedin and should be focusing on,
which is hard to measure,doesn't have an obvious KPI is
really how is this affectingour.
People and what are they doingdifferently?
How are they responding to thesethings?

(46:32):
And is there some other magicthere that we haven't quite been
able to understand?

Anmol (46:37):
No, I was just going to say that I really like your
point on anthropomorphism,right?
Like, are we trying to.
Do that here.
And where are we going?
Essentially, I think that's abig question that is left to be
answered by researchers, evenproduct shapers.
Um, and I don't think I myselfhave an answer for that yet.

(46:58):
Right?
Because, uh, there is so muchdebate around that too, that we
must acknowledge the fact thatso many folks, so many
researchers even talk about how,uh, human beings, um, Having
that kind of a human likerelationship or that expectation
with a machine, an AI model, isthat even healthy for the user?

(47:21):
Is that responsible?
So that is also another questionthat many researchers, product
shapers are asking.
But then when we do these typesof studies, it's really
interesting to see that the endcustomer almost starts thinking
of it as a peer.
Right.
So where do we draw the line?
This is just my hypothesis, butI feel like, that NLP kind of

(47:43):
experience, you know, it'stalking to you.
Are we already sort of setting acertain expectation and a stage
for that to happen is thequestion I am asking myself,
what if it were not?
a conversational experience,will that maybe not lead to them
having these kind ofexpectations that they generally

(48:05):
have of a peer?
Um, that's, I think somethingthat I am actively thinking
about now these days.
Yeah,

Ernest (48:12):
was curious, actually, you know, you've referenced a
couple of examples, um, a coupleof resources that have been
useful for you.
Is anyone doing it well rightnow?
You know, I know everyone's onthe AI, Gen AI bandwagon.
Would you kind of point toanyone and say, okay, yeah,
they're actually doing someinteresting things and maybe

(48:33):
heading in a more interestingdirection than the majority of
folks?
Okay.

Anmol (48:38):
that's a good question.
I would say.
Especially going back to yourpoint on productivity and KPIs,
right?
One framework that I personallyfound really interesting was
from Slack.
They came up with the SPACEframework.
The customer research team cameup with that and I don't think

(48:58):
they touch upon AI specifically,but Um, it was a very holistic
framework overall that alsotouches upon some of these
points that we talked.
Not to value generationspecifically, but it was more
holistic and better than justsay measuring velocity or amount

(49:20):
of work done.
It was not dehumanizing, Ithought.

Ernest (49:25):
That's a great example.
Um, and I, you know, I know thisis always a tough one, but
there's so much froth aroundthis right now.
People think it's going tochange the world, destroy
humanity, or, you know, donothing at all.
Where do you think this will bein, say five years from now?

(49:45):
Uh, you know, do you think it'sgoing to actually be a part of
all of our lives or is it alloverhyped?

Anmol (49:51):
I think, um, I continue to maintain, right?
Like, I would say I am, um, if Iwere to describe, like, myself
and my opinion after especiallyalso doing this research, I
think it's also wrong to be acomplete pessimist and think
that, okay, maybe it's going to,you know, be dangerous and be

(50:14):
fearful.
I don't think one should befearful.
This technology, it's like,sure, there are a lot of
unanswered questions.
Uh, certain aspects of it.
Um, So I think it's veryimportant for us as product
shapers, right?
Um, whatever role you might bein.

(50:35):
I think you can actually, thisis the time to, uh, be vocal.
And to join the conversation andto help, uh, your team, your
leadership make responsibledecisions and really small ways,
even right.
So instead of being fearful, Ithink.

(50:56):
The way we should approach it isthat there is a lot of
potential.
And now the question that isleft to be answered is how best
to tap into it, right?
For work that just like how wediscussed is not joy inducing,
is not, uh, something thatpeople look forward to so that
we can free up their time to domore joyful work, or maybe even

(51:20):
just go back home and spend timewith family, right?
Like make us more human, uh, ifpossible.
If you will.
So that's how I see it.
Yeah.

Joachim (51:29):
Okay, let's go a little bit spicy then.
Asking the bigger question of,where is this heading and is
this heading in the rightdirection?
You've addressed that question.
I wanted to ask the samequestion in a different angle,
which is there are knownphysical consequences to these
systems, right?
I'm thinking about water usage,energy usage, all of the crazy
environmental stuff that we'redealing with right now.

(51:53):
we're seeing Microsoft blastingthrough its carbon quotas and
all of these things.
And,, as I'm using Copilot orsomething like that, that's
never really part of myexperience.
Right.
Do you think there's a role inthat domain as well?
Because we've talked abouttransparency of the actual AI

(52:15):
system, the way it's generatingits answers.
But do you believe that there'sa space for also People in the
HCI community to be thinkingabout showing the consequences
of those types of interactionswith AI on the real world in
terms of, all of theenvironmental impacts I was
talking about, and how can youdo that without distracting and

(52:36):
just making someone feel sad asopposed to actually making them
feel empowered to do somethingdifferently or think about what
they're doing?
Where do you land on that one?

Anmol (52:45):
I think, um, I myself, I'll be completely honest here,
right?
Like I've been so neck deep intech and user research, what
not.
I always heard a lot aboutsustainability.
For example, I, um, saw thatconcept being thrown around a
couple of times in meetings, etcetera.

(53:06):
Um, however, it was only when Iwas talking to, you know, my
best friend who is an advisor inthe sustainability domain, that
I was able to also ask somereally 101 kind of questions,
right?
So I think it is so important.
I even tell this to myself to becurious and it's okay if you

(53:28):
have really basic 101 questions,but to actually start asking
them.
I think so many of us, Andincluding myself, I still don't
fully know a lot of theseconcepts, to be honest.
So it's so important to continueto be curious in this domain,
right?
Just like how I was telling youa little while ago about how our

(53:48):
customers don't seem to have amental model yet, and we are
sort of shaping on the fly withthem while their expectations,
mental models get formed.
Similarly, this is also true forus as shapers, as stakeholders
in this domain.
It's like, If you're notcurious, if you're not asking
the right questions, if we saydo not even give the

(54:10):
sustainability experts a seat atthe table, uh, it's going to be
hard to actually get to thebottom of it.
Um, so my mantra is always,leave it to the experts and be
curious yourself and get anexpert involved, right?
Earlier on, uh, seek theiropinion, ask those questions.

Joachim (54:31):
I love Because we brought in an expert to talk
about exactly this thing.
So we're, we're living, we'reliving the act of doing this.
Um, yeah, thank you for that.

Ernest (54:40):
I'm just echoing Joachim's point.
This is an awesome conversation.
I was just curious before wewrap up, did you have any kind
of closing thoughts you want toshare about the intersection of
generative AI and productcreation?
Any kind of last things maybethat we didn't touch on that you
want to highlight?

Anmol (54:55):
I think, um, it's so interesting where we are at
right now and of course, I don'twant to, I want to be sure that
I make everyone feel heard andseen.
There are certain aspects thatcan be worrying and folks are
fearful.
I completely understand.
But at the same time, You know,I think going back to what I

(55:16):
was, maybe I'm just repeatingmyself at this point, but I feel
like there are so many really,um, exciting opportunities too,
right?
So just like everything else inlife, I feel like it all boils
down to how we choose toapproach it and how thoughtful
we choose to be at every point.

(55:37):
And to just be sure we don't doone thing, which is dehumanize
our fellow peers or ouremployees.
It's very important to, um, takepride in how human we, we all
are and how that is just so veryspecial.

(55:57):
And I don't think any model,however accurate, precise, or
amazing is going to be able toreplace that, right?
There is something special herethat we bring to the table.
So it's very important to justremember that and to focus on
building the best tool for us sothat we can be our best versions

(56:19):
is the way I see it.
Yeah,

Ernest (56:22):
Oh, that's fantastic.
Those are great words to end on.
Um, all right.
Well, now that you've heard ourperspectives, we want to hear
from you.
If you work in product creation,are you excited about generative
AI or does it keep you up atnight?
Where do you think it couldhelp?
And you know, what do you thinkare the biggest pitfalls to its
adoption?
Please share your thoughts withus at learn, make, learn at

(56:43):
gmail.
com.
Now, let's move on to ourrecommendations of the week, and
we want to include Anmol on thisas well.
Anmol, is there a product orservice or article that you'd
like to praise or pan for ourlisteners?

Anmol (56:59):
there are many, um, I think to name a few, I would say
going back to what I was saying,right?
Um, one of my, uh, favoritedesign managers, Dave Brown, he
also talks AWS.
And I think it's important,which is, uh, the fact that.
Um, as product shapers, it'ssometimes.

(57:20):
It's really easy to like, it's aslippery slope and you sometimes
if you don't know the nuances oryou don't understand the
concepts, you might end upmaking certain assumptions that
are false.
Right?
So it's very important toeducate yourself about certain
AI really simple basic ones tostart out with fundamentals.

(57:41):
There is this amazing Coursera course series by the very knowledgeable Andrew Ng on the fundamentals of machine learning. So for someone who is just looking to get started, I highly recommend that course. That's a great one. Um, there's another platform that AWS actually put out.

(58:01):
It's called PartyRock, and it's another fun platform. You don't really need to be a developer or in a technical role at all; you can just get started, have fun with all types of models, and see what works for you, what kind of solutions you might like to shape with AI models.

(58:23):
So that's another platform that I recommend. There are a couple of other articles that we also talked about. The People + AI Guidebook from Google, that's another great resource. So yeah, those are the ones that I wanted to share.

Ernest (58:39):
Those are awesome, and we'll include links to all these
in the show notes as well.
Thanks for that, Anmol.
How about you, Joachim?
Anything you want to share this week?

Joachim (58:47):
Yeah, I actually, for once, have a real product to recommend. I don't think I've ever had a product to recommend before. So anyway, my recommendation is the BOOX Palma mobile e-paper reader, which feels like such a ridiculous luxury item, to have another e-book reader.

(59:08):
And, you know, it costs $280, so it's not cheap. I did try to give it a proper, rigorous testing. And the main thing that always bothers me about technologies is if they require the cloud and accounts and all kinds of extra hoops that you have to jump through to make things work.

(59:30):
So my test for this device was: how quickly can I take a book, an EPUB or a PDF file, from a device that I already have and get it onto that e-reader? What does that look like? Once you get it, you have to connect it to your Wi-Fi to get any of the benefits. Of course, it also has an internal drive,

(59:51):
so you can put a micro SD card in it. So if you want to be really, really old school with no network, you can do that: keep that thing off every network, just load everything onto an SD card, and then pop it in. That was a little bit too inconvenient for me, but it does have a transfer program that basically opens it up to all other devices on your Wi-Fi network, and then you can just

(01:00:13):
drag and drop in a browser onto it. So I was surprised; I was really blown away by how easy it was. I didn't really dig into the hardware aspects that much, but it basically looks like a cell phone. It looks like an Android cell phone, but with a really high-resolution e-ink display. It has the same form factor as a phone.

(01:00:34):
It's pretty slim. Maybe to some people, the point of reading is to have a device that feels different from a phone, and you want to project that you're intelligent and read things. And so that's why you'd like to have a Kindle, which is a slightly different format. I didn't really mind having something that looks like I'm on my phone, because the screen is really great. It's big enough and you can upload stuff so

(01:00:56):
quickly. Yeah. It's been pretty easy to set up, and I really like having this and my library with me. It is annoyingly good, because it looks exactly like a phone. There's a reason why the phone form factor is so compelling: it is great for holding. They just nailed it. They understood that that is probably the sweet spot for a lot of people, as opposed to the Kindle, which is slightly wider, takes up a little bit more space, is a little bit bulkier,

(01:01:19):
this can slip in your pocket just like your cell phone, and then you have an e-ink reader, and it's pretty damn good. And the beauty as well is it runs Android. You don't have to log into your Google account to use this device. I am only using the Wi-Fi; I'm sure there's data that's being transferred in the background, but other than that, it can be a full device with all of the typical stuff. But

(01:01:42):
because it's an E Ink display, it's not fast enough at rendering the image, so browsing is incredibly frustrating: when you scroll it (you know, if you've used an E Ink reader), it flashes a little bit and then the next image is there. And you can change the settings to get it to go faster, and it degrades the quality a little bit, but

(01:02:03):
it's just enough friction where you go, no, this will always be a reading device. So despite it having a browser and voice recording and all of the things that you'd expect from an Android device, those things don't really come into play. Well done, they got me. I'm not returning it; I'm holding on to it a little bit longer. And yeah, if you want a device that is pretty hassle-free, and,

(01:02:25):
depending on where you get your books from, if you're not necessarily always getting them from legitimate sources, this is a very good reader for that purpose as well. No questions asked; they let you upload anything. That's my recommendation for this week.

Ernest (01:02:38):
That's so great.
I'm so glad to hear your take on it. I've been really intrigued by the BOOX devices as well, so it's really great to hear your firsthand account. Actually, my recommendation this week is also a physical product, which is pretty rare, for both of us to be sharing physical products. And it's also directly related to the podcast. You may have heard the unkind expression that so-and-so

(01:03:02):
has a face for radio. Well, I've come to learn that, unfortunately, I have a voice for text: my voice is low and breathy and kind of difficult to capture in a way that's easy for listeners to make out. So in hopes of improving on this in the time Joachim and I have been podcasting, I've tried several microphones, none of which really did much to improve the situation.

(01:03:23):
But after reading many, many reviews, I decided to give one last microphone a try. It's from a company called Earthworks Audio, and it's their Ethos Broadcast Condenser Microphone. It originally retailed for $699, and that was well outside of my budget. But it recently came down to $399, so I decided to give it a

(01:03:44):
go, and I was very happily surprised by the results. There's, you know, really only so much that any microphone can do, but at least to my ears, the Ethos was able to capture my voice with a clarity that none of the other mics I tried could match. So I was very excited. But then, while editing the first episode where I used the Ethos, I noticed some weird issues in the recording.

(01:04:06):
And you may have heard this too, if you listened to those episodes: there were some periodic dropouts and brief moments of static, you know, only a handful over the course of an hour-plus recording. And I was able to edit around most of them, so hopefully you won't hear them. But no matter how good it sounded, it just wasn't tenable to use a microphone that might drop out at a key moment in a recording.

(01:04:26):
I spent a lot of time troubleshooting to see if the problem might be somewhere else in the chain of gear that I used to record, but I was able to isolate the Ethos as the source of the issue. And so I was pretty distraught, you know, because I had purchased the Ethos through a third-party retailer on Amazon. It was new in box, but I worried that between Amazon, the retailer, and Earthworks, everybody would just kind of
(01:04:48):
point the finger at each otherwhen it came to a service claim,
kind of like that Spider Mananimation of the Spider Man
pointing to each other.
But, um, thankfully it turnedout that I didn't have anything
to worry about.
Um, I started by contactingearthworks, which is, uh,
they're based in New Hampshireand they were remarkably
responsive.
They really just asked me for afew bits of info.

(01:05:08):
And then, based on that info, they asked me to send them the microphone for closer analysis. I think within a day or two of them receiving the mic, they told me that, based on their review, it was faulty, and they replaced it with a new one. And that's what I'm using right now to record this episode. There's actually a fair bit of research

(01:05:31):
showing that customers who experience a problem with a product that's effectively resolved come away with greater affinity for that product and its parent brand than customers who never had a problem at all. We spent a lot of time focused on this in my old 37signals days, and we came up with a concept that we called contingency design, or design for when things go wrong. Joachim and I have talked about this a
bit in past episodes.
The fact that the design of so many modern systems is so brittle causes them to fail spectacularly. Well, I'm glad to say that Earthworks Audio isn't brittle, and the support experience they provided has really made me a big fan of the brand. Now, hopefully I haven't just jinxed myself, but as of now, I'd highly recommend the Earthworks Ethos

(01:06:16):
microphone and, honestly, any of their products, because they've demonstrated that they really do stand behind them. So the Earthworks Audio Ethos microphone is my recommendation of the week.

Joachim (01:06:27):
I was going to add, I had one tidbit that I found really interesting about Earthworks. Earthworks are really innovative, because they keep thinking about microphone technology in ways that are surprising. They do these drum microphones that are designed to only capture overhead sounds, and they have a setup that is basically a little bit behind the drummer, a little bit above the drummer, and a kick drum mic.

(01:06:50):
So you have three signals going in, and the recording setup that you get from that is really, really awesome. You get a lot of air from the drum kit. Now, most drum kits, when they get recorded, have mics on each drum, each individual drum close-miced, and then each signal gets fed into the recording. It makes everything sound really closed and a little bit compact,

(01:07:13):
but with this setup, the kit is allowed to move all of the air and they're capturing all of that. And it's kind of incredible. I mean, back in the day, that's how it was done; a lot of the coolest Zeppelin drum sounds come from just capturing the air moving above the drum kit. But I just thought it was so interesting that a company that is thoroughly modern decided to go in a very different direction

(01:07:34):
with their design, to go back to the earlier days of recording, where drums were allowed to reverberate in a room and they would capture that. So it's nice to hear that they also have a great customer experience, but I always loved this idea that they were trying to do something that was a little bit, I don't know, old school, you know, and capture the drum sound in a different way. So I always liked that idea from them.

Ernest (01:07:54):
That's such a cool example.
It kind of, to me at least, connects back to what Anmol was talking about in terms of humans focusing on creativity. You know, I don't know that an algorithm would have gone to that sort of solution. But yeah, that's such a cool example. Alright, well, I think that does

Anmol (01:08:12):
just

Ernest (01:08:12):
it. Yeah, yeah.

Anmol (01:08:14):
I was just going to say one thing, which is, you were
saying your voice is perfect for

Ernest (01:08:20):
Text.

Anmol (01:08:21):
to text.
I think your voice is perfect for ASMR. Like, oh my god, it is so calming. You immediately feel relaxed. Yeah, so I have to say, it is very ASMR.

Ernest (01:08:34):
You're too kind. The problem is it puts people to sleep.

Anmol (01:08:40):
No,

Ernest (01:08:41):
Oh, well, I appreciate it, and thank you for that. But also, thank you once again for joining us as our first-ever guest. Where can listeners follow you or find more of your work?

Anmol (01:08:56):
They can connect with me on LinkedIn. I did have an Instagram, but I've been off it because I was very much into scrolling endlessly, so I just got myself off it.

Ernest (01:09:07):
Oh, perfect. You're definitely available on LinkedIn, then. And to those listening as well, thank you for joining us here at Learn Make Learn.
As we mentioned, we want to hear from you, so please send any questions or feedback to LearnMakeLearn@gmail.com, and tell your friends about us. Now, we usually like to preview the topic for our next episode, but in all honesty, with summer in full swing and life offering

(01:09:30):
up some curveballs, we're not entirely sure what we're going to address in our next episode. You may have also noticed that the publication cadence for episodes has become rather, how should I say, irregular. And that's probably going to continue through the summer, but I want to reassure you that Joachim and I are going to continue to bring you new episodes and new guests.

(01:09:50):
So we thank you for your patience and hope you'll continue to join us at Learn Make Learn.