
September 17, 2025 · 36 mins

What if the biggest value of AI isn’t answers, but better questions? For Morgan Brown, Vice President of Product and Growth at Dropbox, that realisation has transformed everything from family dinners to global product strategy. 

In this episode, I chat with Morgan about how he uses AI as both a problem-solver and a sparring partner. Morgan leads Dropbox’s AI products, but he’s also built his own app from scratch - with no coding background - to help his six-year-old son manage type 1 diabetes. 

We dive into how Morgan uses AI to eliminate grunt work, create powerful prompts, and even stress test his own ideas. This conversation will show you practical ways to turn AI into a genuine partner for your work and life. 

We discuss: 

  • The story behind CARB Scan, the AI-powered app Morgan built to help his son manage diabetes 
  • How he uses AI to eliminate shallow work, like meeting recaps and email sorting 
  • Why designing prompts is a superpower—and Morgan’s framework for writing great ones 
  • The automation Morgan built to scan and summarise the entire AI/ML landscape every morning 
  • How to use AI as a true thought partner for brainstorming, strategy, and decision-making 
  • The risks of skipping human feedback and why real-world validation still matters 
  • Morgan’s advice for spotting your own hidden time sinks and turning them into AI experiments 

Key Quotes 

“LLMs aren’t the best search engine. They’re much better as a thought partner.”  

“The real leverage of AI isn’t the answers. It’s the better questions.” 

Connect with Morgan Brown on X (Twitter), LinkedIn, and his website https://www.morganbrown.co/ 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
What do you do when something as ordinary as dinner
turns into chaos? For Morgan Brown, Vice President of Product
Management for AI at Dropbox, it was trying to work
out the carbs in his six year old son's meals.
Endless Google searches, stress at the dinner table. So he

(00:22):
built an app with no coding background, using AI to
solve it. That same problem solving mindset runs through his
work at Dropbox, where Morgan leads the company's AI products.
He's using tools like ChatGPT and Claude as sparring partners,
not just search engines, automating the grunt work, refining strategy documents,

(00:46):
even filtering AI research papers while he has his morning coffee.
In this conversation, you'll learn how to spot your hidden
time sinks, how to design prompts that actually work, and
how to use AI as a genuine thought partner. And
the result? More bandwidth for the work and the life

(01:08):
that really matters. Welcome to How I Work, a show
about habits, rituals, and strategies for optimizing your day. I'm
your host, doctor Amantha Imber. I want to start by

(01:31):
talking about Carb Scan, which I read about in your post on
LinkedIn as something that you're doing on the side. Can
you tell me how this came about? Because you're busy,
you've got like an important job at Dropbox to do.

Speaker 2 (01:44):
I'm actually a type one diabetic, so it's really important to manage
my blood sugar. I have, you know, an insulin pump
and blood glucose monitor and have been dealing with that
most of my adult life. But my son, who is six,
was also recently diagnosed with type one diabetes. And really
for little kids, it adds a ton of complexity to

(02:08):
your day, because their blood sugar has to be monitored very closely.
And what I found was pretty much every meal turned
into frantically searching on Google for how many carbs are
in chicken nuggets or mac and cheese and trying to
add it up, and it really interferes with, hey, everyone's
trying to sit down and eat. And so I built
a little app that allows me and my family or

(02:29):
anyone that wants to try it to take a picture
of the food. I take that picture, I send it
to OpenAI with a really detailed prompt, and then
reference CalorieKing, which is a kind of canonical source
of carbohydrate and other nutritional data on the web, to
give me carbohydrate estimates back for his meals, rather than
three or four different Google searches across a bunch of sites.

(02:52):
I get a very quick answer back that's really just
taken a ton of friction out. Three times a
day, snacks included, I'm just kind of snapping photos. And
I built it using vibe coding AI tools. You know,
I don't know a lot of code myself, and so
basically used AI to build new AI to solve the problem.
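
To make that pipeline concrete, here is a minimal sketch of the kind of call Morgan describes, assuming the OpenAI Python SDK and a vision-capable model. The prompt wording, model name, and function are illustrative, not the actual Carb Scan code:

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def estimate_carbs(image_path: str) -> str:
    """Send a meal photo to a vision model and ask for a carb estimate."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Estimate the total carbohydrates in this meal. "
                    "Cross-check against a canonical nutrition source such "
                    "as CalorieKing, return a per-item breakdown and a "
                    "total in grams, and flag any low-confidence items."
                )},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(estimate_carbs("dinner.jpg"))
```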

Speaker 3 (03:13):
I love that.

Speaker 1 (03:13):
I want to talk about the tools, and I love
that you've mentioned that you're not a coding guy, because
people might be listening going, well, that's easy for a
tech guy to say. So, vibe coding, and it might
be worth just defining what vibe coding is for those
that don't know. And then I want to ask
which are the tools you used.

Speaker 2 (03:30):
The idea behind vibe coding is to use an LLM
or an AI agent to describe the type of software
you want to build, and then have it build it
for you. And that can be how it looks, how
it works, and really you just use natural language to
describe it. And I think the reason they call it
vibe coding is that you don't have to be so

(03:51):
strict with kind of precise requirements and precise coding and
you know, making sure there's no typos, and so it's
really a very fluid creative process. And I used a
tool called Replit, which is a great tool for building
web applications, and built everything from the front end design
where I just explained what I wanted it to look like,
to the prompts that get sent to open AI, to

(04:14):
the way that it does the carb estimates and all
of that. So yeah, I've been working on it for
about a month and a half here and there, and
it's been fun to see it kind of come together.

Speaker 3 (04:23):
That's amazing.

Speaker 1 (04:24):
Did you use any other tools? Like, were you using ChatGPT
to help with how you prompted Replit? Because I must admit,
I've played around with Replit a little bit. I built
an app that had so many bugs in it and
then I just got sick of it and abandoned that
little project. But was it just Replit, or were you
using other tools to help build that?

Speaker 2 (04:44):
You know, with Carb Scan I definitely used ChatGPT as
kind of my software thought partner. You know, so I
would go to ChatGPT and say, hey, I want
to write a prompt for an LLM to give me
a really accurate carbohydrate estimate from a picture of food.
Could you give me some suggestions on how to write
a really great prompt? And it gave me a bunch

(05:06):
of detailed instructions. I kind of took that, worked on it.
Then I gave that to Replit. And actually, now
my current workflow is, I have an idea for a
feature for Carb Scan. I go to ChatGPT,
and I now have a project within ChatGPT
that is just Carb Scan, so it has a whole
bunch of historical knowledge there. And I say, I have

(05:27):
this new feature, I'd like it to do X, Y
and Z. Can you help me write a really clear
product requirement and specification doc to give to Replit to
help me get this feature built and it will give
me that prompt and I take that and then I
paste that into Replit and I ask Replit to review
the requirements and then come back with a plan to
implement it, and then I approve the plan and it

(05:49):
starts to build.
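
As a rough illustration of that ChatGPT-to-Replit loop, the meta-prompt step might look like the template below. The wording is invented; Morgan describes the workflow, not the exact text:

```python
# Hypothetical template for the meta-prompting step: ask one LLM to write
# the spec that a second AI tool (Replit's agent, here) will build from.
SPEC_REQUEST = """\
I have a new feature for Carb Scan: {feature}.
Can you help me write a really clear product requirement and
specification doc to give to Replit to help me get this feature built?
Include the user story, UI changes, data handled, edge cases, and
acceptance criteria, and keep it unambiguous enough for an AI agent
to implement without follow-up questions."""

# The filled-in request goes to ChatGPT; its answer gets pasted into
# Replit with an ask to review the requirements and propose a plan.
print(SPEC_REQUEST.format(feature="time-of-day greetings on the home screen"))
```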

Speaker 3 (05:50):
So good.

Speaker 1 (05:51):
Now, of course, building carbohydrate scanning apps is not your
day job; working at Dropbox is. Can you tell me,
in a nutshell, what do you do at Dropbox?

Speaker 2 (06:03):
My day job is I'm the Vice President of Product
Management for AI at Dropbox, where I'm responsible for the
AI products that we build at Dropbox to help people
be more effective at their job and with their work,
and so that means I'm primarily responsible for Dropbox Dash.
Dropbox Dash is an AI-powered search, knowledge management, and work

(06:24):
assistant that helps teams be more effective, get busy work
out of the way, and hopefully allow them to do
more of the empowering, meaningful work that we
all set out to do every day, work that can
often get buried under a bunch of project management.

Speaker 1 (06:39):
I would love to know about some of your AI workflows,
I guess, both in terms of how you use AI
to help with some of the grunt work, but also
how you use it to help augment your thinking as well.
So maybe let's start by talking about the grunt work,
the shallow work. Take me through some of the ways

(07:00):
that you're using AI. I'd love to get into real
specifics as to what you've designed for you.

Speaker 2 (07:06):
One of the biggest challenges with my work is that
I'm constantly switching between all sorts of different contexts and
problems to solve. So it could be a product feature
over here, thinking about how to launch a new feature
over here, or communicating how a feature works over here, looking
at the roadmap, reviewing something to ship out to customers,
working with our marketing team, and so constantly switching contexts,

(07:29):
and over the course of a week, I'll be in
dozens of meetings, and so one of the things I
do is I use AI to recap all of the
things that I've promised to follow up with for everyone,
all of the things that other people have promised to
get back to me on and big decisions or milestones
or risks to watch out for. So Dropbox uses Zoom,

(07:49):
which is kind of the video meeting software. It records
the meetings pretty much by default for the most part,
so I am able to take those transcripts and turn
those into summaries, you know, at the start of every week.
So it helps me reset context and stay kind of
on top of that. So that's one big use case,
and that's really helped with you know, I used to

(08:10):
write it all down in chicken scratch. My handwriting is terrible,
and try to go back through and remember who I
promised what to and so now that's very seamless and
just happens, you know, pretty much automatically for me.
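
A minimal sketch of that recap step, assuming exported Zoom transcripts sit in a folder as plain text and the OpenAI Python SDK is available. The prompt mirrors the three buckets Morgan lists, but the wording is illustrative:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()

RECAP_PROMPT = """You are recapping my week of meetings. From the
transcripts below, list:
1. Everything I promised to follow up on (with whom, and by when).
2. Everything others promised to get back to me on.
3. Big decisions, milestones, and risks to watch out for.
Be terse, and group the output by meeting."""

def weekly_recap(transcript_dir: str) -> str:
    """Concatenate the week's transcripts and ask one model for a recap."""
    transcripts = "\n\n---\n\n".join(
        p.read_text() for p in sorted(Path(transcript_dir).glob("*.txt")))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"{RECAP_PROMPT}\n\n{transcripts}"}],
    )
    return response.choices[0].message.content

print(weekly_recap("zoom_transcripts/"))
```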

Speaker 4 (08:22):
So that's one big one.

Speaker 3 (08:23):
Yeah, cool. What are a couple of others?

Speaker 2 (08:25):
So another one of the things that I do all
the time is I'm reading documents constantly. So, there are these
product requirement documents or specifications or strategy briefs or memos
about our go-to-market approach. And I found that
over time, I have the same questions over and over.
Who are we building this for? What's the job to
be done? How do we know that this is the

(08:46):
right thing to build? Why does what we're proposing make
sense relative to everything else we know? And so I
basically built a prompt for DASH, which is our internal
AI tool, to help me pre read those documents and
evaluate those documents along the same sets of questions that
I ask all the time. So I've got, you know,
twelve questions that I ask all the time. I run

(09:08):
it through the prompt and it helps me surface areas
to look out for in the document, areas that I
might want to probe a bit deeper. And I found
it so effective that I actually just emailed, or
messaged, the prompt to my entire team and said, hey,
I'm using this myself. If you want to use it
before you send something to me, you know, it might
save us a few cycles.

Speaker 4 (09:28):
So I just sent it to them. Now they use
it before they send it to me.

Speaker 2 (09:31):
And so now I'm kind of taking that to the next
step and really trying to think about, hey, based on
this strategy or this approach, you know, what are some
second order effects that I may not be aware of.
What are some ways to potentially double down on this
or differentiate and make this an even stronger proposal. What
are some risks that may be hidden here that I'm

(09:52):
not considering? How might my CTO react to this? How
might our legal team react to this? How might our
go to market team react to it? So really trying
to basically roleplay and game theory out all of these
different inputs that would typically take weeks to get.
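
A sketch of that pre-read prompt is below. Morgan runs his through Dash internally; any chat model works for illustration. Only a few of his twelve standard questions are quoted in the interview, so the list here is partly placeholder:

```python
from openai import OpenAI

client = OpenAI()

# The questions Morgan quotes, plus room for the rest of his standard set.
QUESTIONS = [
    "Who are we building this for?",
    "What's the job to be done?",
    "How do we know this is the right thing to build?",
    "Why does this make sense relative to everything else we know?",
    # ...the remaining standard questions would go here...
]

STAKEHOLDERS = ["CTO", "legal team", "go-to-market team"]

def pre_read(document: str) -> str:
    """Evaluate a doc against a fixed question set, then role-play reviewers."""
    prompt = (
        "Pre-read the document below. For each question, note where the "
        "document is weak or silent, so I know where to probe:\n"
        + "\n".join(f"- {q}" for q in QUESTIONS)
        + "\nThen role-play how each of these stakeholders might react: "
        + ", ".join(STAKEHOLDERS)
        + ". Flag hidden risks and second-order effects. Be terse.\n\n"
        + document
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```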

Speaker 1 (10:07):
And when you're constructing a prompt like that, are you
typically using what is called meta-prompting, where,
you know, like you described earlier, you're working with
the AI to help improve the prompt or even write
the prompt? Like, do you have a process when you're,
I guess, creating one of those fundamental prompts in your

(10:28):
prompt library, and you want to make it really good.

Speaker 3 (10:30):
How do you, I guess, build a great prompt?

Speaker 2 (10:33):
Yeah, absolutely, And I think that is really a skill
that knowledge workers can really turn into a superpower now.
And so I think it's thinking about what is the
optimal amount of context you can give the LLM to
give you the most relevant and best answer possible. And
so in my prompt that's basically what I'm trying to do.

Speaker 4 (10:53):
Now.

Speaker 2 (10:54):
You can't give it too much information or it kind
of gets overwhelmed. But the more that you can give
it that it is highly relevant, the better the answers
you get. And one of the main principles that I
use is that LLMs read things in context of everything
they've already read. So the initial pieces of information you
put into it are going to influence how it evaluates

(11:16):
the rest of it. And so, for example, one of
the tips that you often hear and I actually have
found to work really effectively is that when you're talking
about a domain specifically, it's important to state up front
that this is the domain you're talking about. So, for example,
if I'm working on positioning, I'm interested in how we
might position a Dash feature, I'll say, you are April Dunford,

(11:38):
author of this positioning book, or you are Al Ries
and Jack Trout, authors of Positioning. Help me position
this product feature. These are its capabilities, this is how
I think it could work. I think these are the challenges,
this is what's differentiated about it.

Speaker 4 (11:56):
These are who it might compete.

Speaker 2 (11:57):
with, and these are the ways where we would like
it to connect to the rest of our product suite.
And then, you know, I'm kind of structuring it that
way where it's kind of a, here's some context. Then
there's a very specific task or set of tasks that
you want to give it, and then you can give
it detailed directions around how you want it to accomplish
those tasks. So, for example, sometimes the LLMs can be

(12:20):
very verbose, but I like very terse, very pragmatic answers.
So I always tell it, no fluff, no hyperbole, and,
you know, you're not going to
hurt my feelings if you tell me it's a bad idea.
Just really trying to kind of get it
honed in with the language, or the thought
process, that I kind of want it to lay out.
And so context, task, output, and then detailed instructions are

(12:45):
usually the format that I use to write these.
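
That context, task, output, detailed-instructions structure is easy to capture as a reusable skeleton. A sketch, with all wording illustrative:

```python
def build_prompt(context: str, task: str, output_format: str,
                 instructions: list[str]) -> str:
    """Assemble a prompt in Morgan's context -> task -> output -> rules order,
    putting domain context first since it shapes how the rest is read."""
    rules = "\n".join(f"- {r}" for r in instructions)
    return (f"CONTEXT:\n{context}\n\n"
            f"TASK:\n{task}\n\n"
            f"OUTPUT FORMAT:\n{output_format}\n\n"
            f"DETAILED INSTRUCTIONS:\n{rules}")

prompt = build_prompt(
    context=("You are April Dunford, a positioning expert. We are positioning "
             "a new Dash feature. Its capabilities are... Its challenges "
             "are... It might compete with..."),
    task="Help me position this product feature.",
    output_format="A one-page positioning brief.",
    instructions=["No fluff, no hyperbole.",
                  "Be terse and pragmatic.",
                  "You won't hurt my feelings if you say it's a bad idea."],
)
```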

Speaker 1 (12:47):
Still staying on the topic of, I guess, reducing or
eliminating some of the grunt work through AI, I'd love
to know, more on the automation or
agentic AI side, what some of the, I
guess, other tasks that you've been able to automate in
interesting ways would be.

Speaker 2 (13:06):
I've really done a lot of automation around trying to
understand what's changing around me. So I have a daily
workflow that reviews all the new kind of Substack newsletters, important
tweets across a large domain of AI and ML X accounts.
It reviews arXiv, which is the scientific paper repository for

(13:29):
new ML and artificial intelligence papers. It scans the whole
landscape, YouTube, Spotify podcasts, and basically takes all of the
day's information about AI and ML, reviews it all, categorizes
it into world changing, interesting, incremental, and noise, and then

(13:49):
summarizes the top three quote unquote world changing potentials and
then summarizes it for me in the context of my
role at Dash where it kind of talks about hey,
things you might consider as the head of product for Dash. So,
for example, Anthropic the other day just announced that Claud
can handle now a million token context window, which effectively
means that for many reasonable size software codebases, it can

(14:13):
reason about the entire codebase, which is a pretty big
and important milestone for software development. And the other one
is I automate my personal email. I can't imagine your
inbox, Amantha, but mine gets hammered all the time.

Speaker 4 (14:24):
And so I built a

Speaker 2 (14:26):
little agent using Claude Code to read my personal Gmail
every day, categorize it into newsletters, spam, priority emails,
summarize the newsletters for me, flag the messages that I want
to respond to, and draft responses, not for the newsletters,
for those emails. So really trying to kind of

(14:49):
like manage my inbound email automatically as well.
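
Morgan built his email agent with Claude Code; as a stand-in, here is a minimal sketch of the same triage idea using Python's standard IMAP library plus an LLM call. Credentials and prompt wording are illustrative, and a real Gmail agent would want OAuth rather than an app password:

```python
import email
import imaplib
from email.header import decode_header
from openai import OpenAI

client = OpenAI()

def triage_inbox(user: str, app_password: str) -> None:
    """Bucket unread mail and draft replies for priority messages."""
    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(user, app_password)
    imap.select("INBOX")
    _, ids = imap.search(None, "UNSEEN")
    for msg_id in ids[0].split():
        _, data = imap.fetch(msg_id, "(RFC822)")
        msg = email.message_from_bytes(data[0][1])
        raw, enc = decode_header(msg["Subject"] or "")[0]
        subject = raw.decode(enc or "utf-8") if isinstance(raw, bytes) else raw
        result = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": (
                "Categorize this email as newsletter, spam, or priority. "
                "If priority, draft a short reply.\n\n"
                f"From: {msg['From']}\nSubject: {subject}")}],
        )
        print(subject, "->", result.choices[0].message.content)
```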

Speaker 3 (14:53):
Wow, okay, I love both of those examples.

Speaker 1 (14:55):
I want to dig a bit deeper into how you
went about creating this automation that summarizes all the AI
news of the day.

Speaker 3 (15:05):
I mean that sounds so powerful. Can I buy that
from you?

Speaker 1 (15:09):
Morgan?

Speaker 3 (15:09):
Are you selling it?

Speaker 4 (15:11):
You can link it in the show notes.

Speaker 3 (15:13):
Wow, amazing. Hand it over. Yeah, wow. Talk me through the
process of how you put that together.

Speaker 2 (15:19):
First of all, I realized that there were a few
key sources of information that I was constantly referring to personally.
So, you know, whenever I had a free moment, I
would go to arXiv and kind of browse the actual
published papers because that's the real source of truth on
the cutting edge. I would see a lot of popular
posts on X about kind of new research breakthroughs. I

(15:42):
follow some of the key people, like the Sam Altmans
of the world and the frontier labs, and then there's
also several great Substack newsletters around AI that I read
as much as I can. But I just realized there
was no way for me to stay on top of
that volume consistently, and I just started with a prompt.
I told ChatGPT, I'd like to create a prompt to

(16:02):
scan the entire AI/ML space for important signals around research
and development and industry news that pertains to my role
at Dropbox. And I said, for X, I want
to look across these accounts and accounts like them. So
I kind of gave it a seed list of accounts,
but asked it to expand where it could. I gave

(16:23):
it specific newsletters that I read on Substack, but also
again asked it to consider adjacent ones. And then I gave
it specific categories on arXiv where those papers are housed and
asked it to consider relevant ones there. And then
I said, I'd like to really filter high signal, like
I don't want this to be a laundry list of

(16:44):
everything that happened, that wouldn't actually help me kind of
consume it any better. So I said, help me come
up with a scale to rank the innovations and the
news that's coming out in the day and kind of
categorize things as can't miss, these are, like, foundational, important
shifts, incremental and important, and then give me some synthesis
around, hey, when you put these signals together, what does it

Speaker 4 (17:06):
mean for my role?

Speaker 2 (17:07):
So I fed that all in, just basically talking to
ChatGPT. I like to use the audio input to
kind of just stream of consciousness my thoughts into it.
Once I did that, I asked it to kind of
structure it into a prompt. I reviewed the prompt. Some
of the things I didn't like. I always try to
go back and forth with it; I think you should
kind of refine it, much like you would with the copy

(17:29):
for a website or the script for something. And then
once I got the prompt to a pretty good spot,
I said, okay, let's run that for today and give
me the output.

Speaker 4 (17:38):
And I got the

Speaker 2 (17:39):
output, and I was like, oh, this is a little
too long. It could be a little shorter. It looks
like we're underrepresenting, we're missing YouTube. Let's try to add
YouTube in so on and so forth. And then I
finally got it dialed in and then I said, great,
set this as a daily task, every day, five a.m. Pacific.
It runs while I'm having coffee, and I just kind
of read through it.
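
One slice of that daily scan, the arXiv piece, might look like this. It assumes the feedparser package and arXiv's public Atom API; the rubric mirrors the categories Morgan describes but is otherwise invented, and the X, Substack, and YouTube sources are omitted:

```python
import feedparser  # pip install feedparser
from openai import OpenAI

client = OpenAI()

# arXiv's public API, newest cs.AI / cs.LG submissions first.
ARXIV = ("http://export.arxiv.org/api/query?"
         "search_query=cat:cs.AI+OR+cat:cs.LG"
         "&sortBy=submittedDate&sortOrder=descending&max_results=25")

RUBRIC = ("Categorize each paper as world changing, interesting, "
          "incremental, or noise. Then summarize the top three world "
          "changing items in the context of my role as head of product "
          "for an AI-powered work assistant.")

feed = feedparser.parse(ARXIV)
papers = "\n".join(f"- {e.title}: {e.summary[:200]}" for e in feed.entries)

result = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"{RUBRIC}\n\n{papers}"}],
)
print(result.choices[0].message.content)
```

In Morgan's setup the five a.m. scheduling is handled by ChatGPT's daily tasks; a cron job would do the same for a script like this.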

Speaker 1 (17:57):
Wow. Yes, I would love it if you could share that, and
I will pop that in the show notes. Let's talk
about how you use AI to augment your thinking. I recently,
I don't know if you know Bob Johansen, he's a
futurist over in Silicon Valley and I interviewed him a
few weeks ago and he was talking about how, you know,
as many people have said, artificial intelligence is such a

(18:20):
bad brand name, and he said, if he was going
to name it, he would call it augmented intelligence. And
it's stuck with me, and I feel like I use
it a lot to augment my own thinking. But I
feel like most people talk about, you know, the efficiency
gains and the productivity gains. But I'd love to know,
for you, Morgan, what are the different ways or use
cases that you think about and use AI for to

(18:42):
improve your thinking.

Speaker 2 (18:43):
I think a lot of people look to AI for answers,
but it's not really the best search engine. You know,
it can make things up, it can have outdated information.
I think it's much better to use it as kind
of a thought partner or sparring partner. And so I
think a lot of the techniques that you kind of
learn in business and brainstorming and communication and strategy

(19:04):
work really well with LLMs. So the first thing I
try to do is again go back to, like, what
kind of context does the LLM need to know in
order to be a good thought partner for me? You know,
like you wouldn't go ask someone who works in sporting
goods about a medical condition, you know, and you're not
going to ask your doctor about how to score a

(19:25):
goal in soccer unless they used to play in a
university or something. So really, like, what does the
LLM need to know to be successful? And so with Dash,
we have this concept called stacks, which are collections of
documents that you can create around a given topic, and
then Dash chat uses that collection of documents to augment
its context and understanding of the problem space. But I

(19:48):
generalize this to ChatGPT or Anthropic or any of
the other models where I try to say, Okay, what
does it need to know about me and the problem
I'm trying to solve first to be highly relevant, and
then go create that information. So, for example, one of
the things that I started early in my
career was Morgan's Operating Manual. So anyone that was

(20:10):
kind of going to work with me or join my team.
It became really important when I became a manager to
help people understand, you know, here's what Morgan prioritizes, here's
how he likes to communicate, here's the things that add friction.
It's like my operating manual, a quick start guide
for working with Morgan. Working with an LLM,
that's one of the first documents I

(20:31):
give it, my operating principles, and it has things
like no fluff, what you don't know, say you don't
know, no politics, you know, that type of thing.

Speaker 4 (20:40):
And then I've

Speaker 2 (20:40):
started to build out those contextual documents. So, for example,
I've really been working lately on what are my core
principles as a person, as a father, as a husband,
what are the things that I care about in life generally,
and using an LLM to codify those, and then I
can give it even domain specific things. So for example,

(21:00):
with Carb Scan, my priorities with Carb Scan are, like,
accuracy is the most important thing, no false confidence, delight
and speed are, you know, essential. And so now it
has a working set of information to be a really
good thought partner. Once I do that, then I try
to use the techniques that we use to kind of
come up with good ideas all the time. So I'll say,

(21:21):
instead of asking AI for an answer about something, I'll say, Hey,
I have this problem I'm trying to solve. Let's do
some divergent thinking together. I want to be generative. Help
me be generative about the ideas here. So lead me
through a series of questions to help diverge and push
my thinking. So you know, in brainstorming, I use divergent
thinking to get as many options on the table. And
then once I feel pretty good about that, I'll start

(21:44):
to say, okay, now let's go through some convergent
thinking cycles.

Speaker 4 (21:48):
Okay, now let's

Speaker 2 (21:49):
drill down here, and then kind of through those questions,
I'll say, like, hey, is there a framework that could
help articulate how we're making these decisions? Like what are
some of the implied principles about some of the decisions we're
making in this session? You know, is there a two
by two grid of you know, feature importance versus priority,
or user engagement versus monetization, or some other type of

(22:13):
way to kind of like help frame up the decisions.
And then look for like I mentioned earlier, like Okay,
now that we have this set of ideas, how might
we push them further? Are there power laws inherent here?

Speaker 2 (22:25):
Are there limiting steps, you know, things that must be
true for this to work? Like what are those? And
so that's really how I go back and forth with
them as a thinking partner. And then I try to
take the output of one. So if I do this
with ChatGPT, I'll take its output and I will
give it to Claude, and I will say, I went
through this exercise with ChatGPT, here's its recommendations.

Speaker 4 (22:45):
What do you think.
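
That cross-model handoff is simple to reproduce. A sketch using the OpenAI and Anthropic Python SDKs; the question, prompts, and model choices are illustrative:

```python
from anthropic import Anthropic
from openai import OpenAI

openai_client = OpenAI()
claude = Anthropic()

question = "How should we prioritize the next three Carb Scan features?"

# Round one: divergent-then-convergent thinking with one model.
first = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        "Lead me through divergent thinking, then convergent thinking, on "
        "this, and end with a recommendation:\n" + question)}],
).choices[0].message.content

# Round two: hand the output to a second model to critique.
second = claude.messages.create(
    model="claude-sonnet-4-20250514",  # any current Claude model works
    max_tokens=1024,
    messages=[{"role": "user", "content": (
        "I went through this exercise with ChatGPT. Here are its "
        "recommendations. What do you think? Where is it weak?\n\n" + first)}],
)
print(second.content[0].text)
```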

Speaker 1 (22:46):
So it's tempting to think that AI's biggest value is efficiency,
but Morgan

Speaker 3 (22:51):
shows us that that is just the surface.

Speaker 1 (22:55):
Because coming up we dig into experimentation, how tools like
ChatGPT can simulate real-world audiences, test hundreds of
ideas in minutes, and even reshape how you think about
what makes a strong hypothesis. We also get into the
risks of skipping humans altogether, why speed can sometimes backfire,

(23:16):
and the surprising ways that Morgan stress tests his own ideas.
Stay tuned because this half gets very practical and very
eye opening. If you're looking for more tips to improve
the way you work and live, I write a short
weekly newsletter that contains tactics I've discovered that have helped

(23:38):
me personally. You can sign up for that at Amantha
dot com. That's Amantha dot com. I want to shift
into talking about experimentation. But before we started recording,
I was showing you my very scribbled-on, dog-eared copy

(23:58):
of Hacking Growth, which I love. It came out
quite a few years ago now, but it's such a
great book, and I think about experimentation a lot. My
consultancy, Inventium, part of what we do is we build
innovation capability for our clients. And you know, something I
have been thinking about in the last couple of years
is just how some of those fundamental principles of experimentation

(24:20):
and even some of the principles that like you know,
Eric Ries and Steve Blank gave us with the lean
startup methodology, and the great work and principles that you
wrote about in Hacking Growth.

Speaker 3 (24:32):
How has your thinking evolved now with all these

Speaker 1 (24:35):
different AI tools that we've got access to, when it
comes to testing an idea?

Speaker 4 (24:40):
Yeah, that's a great question, and you're right.

Speaker 2 (24:42):
Yeah, Hacking Growth, I think it's
eight years old at this point, and so I was
so excited to see.

Speaker 4 (24:47):
That you had a copy.

Speaker 2 (24:48):
When I reflect on it, I think the principles of
it still stand up very well. Like the first half
of the book kind of talking about, hey, you really
need to understand the problem deeply, using that understanding to
generate new potential ideas to test. And then the main
things that have changed are one, you can now go
through that loop much faster, and you can kind of

(25:09):
work offline before you kind of move online. So, for example,
if you're going to test a bunch of headlines for
an advertisement or any email subject line. In the old days,
you would sit down, you would brainstorm with a team.
You'd come up with four or five. You kind of
debate which two you wanted to test. You would take
those two, you'd put them into your email software. You'd

(25:30):
send it to like ten percent of the list. You
would see which one was winning, and then you would
send that to the rest of the list, and then
you'd say, okay, for next time, this one won, and
so we'll use that as a kind of a starting
point to kind of move onto the next one. And
that was kind of the speed of the loop, which
was really gated by how much time it took to
do all that. Now today I can say I need

(25:51):
an email subject line, ChatGPT, give me five. Immediately,
you get it instantly, and then you can start to
I think what's really interesting is you can start to simulate, hey,
how might this perform generally? You know, and the LLMs
have so much context around the baseline industry open rates
by email type, and, you know, to continue the example,

(26:13):
and so you can go through that learning loop much faster,
so that hopefully by the time you actually get down
to testing with that email, you've gone from maybe five
initial ideas that were kind of like generally concocted to
maybe you started now with twenty or one hundred ideas
which have gone through a series of you know, feedback
loops down to now two or three or five that

(26:36):
you feel really good about. And that takes you know,
thirty minutes versus you know, days and a couple of
meetings with your team and all of that. And so
I think that applies everywhere now. It applies to you know,
the subject lines of your email, your ad copy, the
landing page copy, what shows up in your app.
To give you an example, with Carb Scan, one of

(26:57):
the things that I found coming back to it multiple
times a day was it was pretty sterile.

Speaker 1 (27:01):
You know.

Speaker 4 (27:01):
Every time I came back,

Speaker 2 (27:02):
it said the same exact thing, just, you know,
know your carbs in a snap. And I was like, well,
this is my fifth time here today, you should know
a little bit something about me. So I just said,
you know, hey, I want to create it so that
when I come back, based on the time of day,
respond to that. So now when you come back at
dinner time, it's like dinner time, we've got you covered,
you know, get going with a snap, or like if

(27:24):
you go late at night, it's like late night snacking.

Speaker 4 (27:27):
We've got your carbs counted. And that just took minutes.

Speaker 1 (27:29):
I'd love to get into a bit more detail around
just your process for testing within an LLM without even
having to find a customer to open an email, and
maybe if you could walk me through an example, maybe
even with Carb Scan. Like, let's just say you were testing
two subject lines for an email where the purpose of

(27:51):
the email

Speaker 3 (27:51):
was to get people to trial the app.

Speaker 1 (27:54):
Can you talk me through like what that flow would
look like to test it, but just within an LLM?

Speaker 2 (27:59):
Well, I think you can get really wonky here,
so I'll try to kind of like break it down
a little bit. So one of the things that you
can do is you can ask the LLM

Speaker 4 (28:09):
to simulate a persona.

Speaker 2 (28:11):
So, for example, one of the things you can say is, hey,
you are a busy working mom with three kids. So,
to use the Carb Scan example, working mom, three kids, one
of them is a type one diabetic. You typically eat
out a couple of times a week, but you know,
you're very on top of their diabetes regimen. You have
a bunch of mobile apps around kind of like managing

(28:33):
their blood glucose, and you know you're trying to make
the best informed decisions you can, but like you have
a very modern and hectic life, like we all do,
kind of scheduled to the nines, and then you can
give it some specific prompts around, like from that persona,
which of these headlines might stand out in their inbox?
Which of these are the kind of more likely potentially
to click on? What might it be competing with around

(28:55):
that time of day? And then one of the things
I always like to do is ask the LLM to
cite why it's giving me this information. So point me
to the blog posts that you're kind of referencing here,
point me to the study that you're using, or kind
of the UX principle. So trying to separate, as much
as possible, what it's hallucinating versus what I can go actually

(29:18):
kind of validate. So that's kind of the lightest layer
at which you could do it. You can get deeper by
having it create full audiences of people like that. You
can say, imagine a representative pool of parents of type
one diabetics in this country, make up demographic, psychographic, behavioral information,

(29:41):
come up with a bunch of different personas, come up
with a distribution of those people, and now run this
subject line against that imagined population of people.
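
A minimal sketch of that synthetic-audience test, with invented personas, subject lines, and prompt wording:

```python
import json
from openai import OpenAI

client = OpenAI()

subject_lines = [
    "Know your carbs in a snap",
    "Dinner chaos? Count carbs from a photo",
]

prompt = f"""Imagine a representative pool of parents of type one diabetic
children. Make up five personas with demographic, psychographic, and
behavioral detail, plus a rough distribution across them. For each
persona, say which of these subject lines they would be more likely to
open and why, citing the studies or UX principles you are drawing on so
I can validate them:
{json.dumps(subject_lines, indent=2)}
Finish with an overall winner and your confidence in the simulation."""

result = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(result.choices[0].message.content)
```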

Speaker 3 (29:51):
How accurate have you found it to be?

Speaker 1 (29:54):
Like, have you ever done a split test where you've
recruited real humans and compared that to a simulated audience?

Speaker 2 (30:02):
I think it's all in the quality of the setup, right?
So the most important thing is your assumptions about
the online audience. So, for example, if you're going to
test an X post and you're like, oh, I'm going
to build a synthetic audience with an LLM, if it
doesn't really map to your actual audience, then your results

(30:22):
are going to be, you know, just noise. But I
think there's some pretty interesting, you know, ways to kind
of like get that signal and get it pretty close.
So I would say I've had some where it's like
fully wrong and just doesn't resonate at all, and then
there are some where you're like, oh, that's actually
pretty good. So it's still more art than science.
I wouldn't put my job up against it quite yet,

(30:45):
but I do think it points to like an interesting
way to kind of accelerate ideation and kind of like
honing your thinking a bit further than kind of, you know,
just kind of scribbling down a couple of
ideas on a whiteboard.

Speaker 1 (30:58):
So yeah, with that in mind, when, in an
experimentation or idea-testing process, would you bring in real humans?

Speaker 2 (31:07):
As soon as possible, starting with, hey, does the team
think this is a good idea? With Dropbox and Dash,
we have some of our customer advisory boards and design
partners, who we give early access to our products,
and ask for their feedback, or we show them mockups
and prototypes and ask them is this clear? How can
we make it better? And then from there you can
move up into bigger, scaled tests with larger groups of people,

(31:32):
quantitative surveys, all the way up to A/B tests and
beyond with personalization and so on. And so I don't
think it's LLMs instead of humans. It's, what is the
pre-work that you can do to make the very
limited, and very expensive, time with people,
you know, as impactful as possible?

Speaker 1 (31:52):
What else has changed when it comes to experimentation and
some of the concepts you were writing about years ago,
when it comes to testing ideas in the age of AI?

Speaker 2 (32:02):
The obvious answer is, hey, AI has changed a bunch of
the tooling and how we do it. But I actually
think it maybe goes back to what you and I were
talking about earlier with the how to write like a
good prompt, which I think can be generalized into how
do I think about what's the right question to ask,
what's the right problem to solve? How good is our

(32:22):
thinking upfront in terms of the question we're trying to answer,
the problem we're trying to solve, and then the context
around that to do that really effectively, because the output
now is limitless, and so I think if I was
going to write Hacking Growth again today, I would have
spent more time on how you think about framing up
and creating hypotheses. Without that, the output is maybe just

(32:45):
more noise.

Speaker 1 (32:46):
I'd love to finish on a piece of advice for listeners,
and, you know, I feel like How I Work listeners are
fairly AI savvy. Like, if you were to set them
one piece of homework, something to do that would really
improve how they're working with AI right now, seeing as they're

(33:06):
kind of, you know, more than dabbling. What would
be one of the most powerful things that they could
just do this week?

Speaker 2 (33:14):
Take thirty minutes and think about what's something that you
do all the time that you do kind of constantly
without even thinking about it. It could be checking email,
it could be reading documents, it could be approving expense reports,
it could be, you know, there's numerous things that
we all do constantly, and you know, kind of the
idea with Carb Scan is I didn't even realize I

(33:36):
was doing that many Google searches around dinner until I
sat down and kind of just thought about it, like
oh wow, it just kind of like hit me. We see
that at work all the time. You know, when
we're working on Dash, we'll talk to someone, like, walk
us through your day or walk us through your week,
and someone will say, oh, every Wednesday, I get seven
emails from all of these different field agents about the

(33:58):
status of projects that are happening around the country. And
then I take all those I read all of them,
and then I have to put together a report to
send to our management team about what's on track, what's behind,
what's happening out in the field. I was like, great,
how do you do that? They're like, well, I get
the emails, I wait for them to come in, I
read them, I copy and paste stuff into a new
document, and then I write it up. And I was like, okay,

(34:18):
this is a great use case for like summarization and
that type of thing. And just kind of the
light bulb goes off when you can get down to
that very specific thing. So yeah, start with, I do
this a lot, it's an important part of my job. And
then step back and say, how can I even if
you can't go end to end with the workflow, like
you might not be able to you know, connect to

(34:40):
your email inbox to read it all, but you know
what is a piece in there that you can let
AI kind of take a little bit off your plate
for you and then kind of use that as a
springboard to kind of work through the rest of your week.

Speaker 1 (34:53):
Basically. Morgan, it has been such a joy chatting to
you and getting to pick your brain.

Speaker 3 (34:58):
Thank you so much for spending some time with me.

Speaker 4 (35:02):
Yeah, Amantha, thank you so much. I really enjoyed
the conversation. Thanks for having me.

Speaker 3 (35:07):
Here's what's stuck with me from Morgan.

Speaker 1 (35:10):
The real leverage of AI isn't necessarily the answers, it's
better questions.

Speaker 3 (35:16):
So this week, try some homework.

Speaker 1 (35:19):
Take thirty minutes to notice something you do constantly, almost
without thinking, and then ask what part of this could
AI take off my plate?

Speaker 3 (35:29):
You might be surprised at how

Speaker 1 (35:31):
small experiments can lead to very big shifts. And if
you enjoyed this episode, you might want to go back
to my episode with Bob Johansen, the amazing Silicon Valley futurist.
We talked about reframing AI as augmented intelligence.

Speaker 3 (35:47):
And it's a beautiful pairing.

Speaker 1 (35:50):
To this interview with Morgan. Thank you so much for
listening to this episode. Hit follow on How I Work
so that you don't

Speaker 3 (36:00):
miss what's next.

Speaker 1 (36:01):
If you like today's show, make sure you hit follow
on your podcast app to be alerted when new episodes drop.
How I Work was recorded on the traditional land of
the Wurundjeri people, part of the Kulin Nation.