Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love the Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for the Changelog
(00:24):
wherever you get your podcasts. Thanks to our partners at fly.io. Launch your AI apps in five minutes or less. Learn how at fly.io.
Daniel (00:44):
Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined, as always, by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?
Chris (01:01):
Doing very well. Looking forward to talking some fun stuff on this beautiful spring day.
Daniel (01:08):
Yes. Well, I've always hoped that AI could make me a superhuman. So I'm really excited to hear about maybe something in that realm today from Loïc Houssier, who is head of engineering at Superhuman. How are you doing, Loïc?
Loïc (01:27):
I'm doing great. I'm super excited to chat with you guys, who have had a pretty humbling set of guests in the past, I would say. So I'm super happy to have this opportunity and to discuss at length.
Daniel (01:40):
Yeah, that's awesome. Well, you know, this is kind of interesting, because I know Superhuman, I think, was one of maybe the sort of first really integrated, AI-first kind of tools that I remember seeing. And of course, the AI space has advanced a lot in that time. Maybe could you
(02:02):
give us a little bit of a state of AI in email, or productivity more broadly, if you want to think about it that way? But really, obviously, we're going to talk a lot about email and messaging. So could you give us a little bit of a sense of what that landscape looks like right now and kind of how Superhuman fits into that?
Loïc (02:21):
Yeah, totally. It's incredible, the time that we're living in right now. Of course, everyone was shocked when we had the first version of those LLMs doing some crazy, crazy stuff: analyzing text, summarizing, doing all sorts of magic. And of course, email is all
(02:45):
text based, for the most part. So it was a really nice test bed to try out all the cool stuff that you can do.

And interestingly, that also helped this category, the email client, thrive for quite some time. Superhuman was almost the only one supercharging Gmail and
(03:09):
Outlook. We were the only one in the space making people faster at going through their emails, and all of that. And with the rise of LLMs and agents and everything, now there's a bunch of people who are like, oh, damn, this is a great environment to play around in and to make things better. And right now, this is what we see.

We see a bunch of, I would say, other tools trying to
(03:33):
do stuff with LLMs, and to create a better experience for email. This is indeed an interesting time for us, because it's proof that the category needs to exist. It existed before, but we were the only ones there, and more and more people getting in shows that
(03:54):
there's deep interest in it. It's challenging, and it's super interesting, and we'll probably talk about it, but it will also help us understand what makes a good product. Is the LLM and AI alone sufficient, or do you need some sort of secret sauce on top of it? Happy to discuss it.
Chris (04:15):
I'm curious, following up on what Daniel was saying, with you guys being so early into the space. Obviously, not only LLMs, but AI in general has been going at light speed, increasing steadily over that time period. How has
(04:36):
the space changed for you guys, from being kind of the early, only player to a space where there are others, where it's becoming somewhat congested, not just in the space you're in, but everywhere? How has that changed the world for you guys in terms of staying differentiated and all that?
Loïc (04:56):
Yeah. So it's very interesting, because there are multiple dimensions that we can talk about. The first one is that the rise of those AI features and capabilities is bringing a new set of features that you can implement that you couldn't do in the past. In the past, AI was mostly classification, adding labels and stuff like
(05:20):
this, and that was kind of the limit of what AI could really do for everything that is text based: typical classifiers, typical models like that. And more and more now, you can do some intelligent stuff.

So we moved from a place where we were making things faster for our users, compared to Outlook, compared to Gmail. But now
(05:45):
there's more that we can do. We can make things smarter, which is probably a paradigm shift in terms of the value that we're creating for our users. The other dimension is that this is raising expectations for, I would say, the different users. For a long time, they were like, damn, this is so fast, and I'm going to save, like, four
(06:07):
hours a week going through my emails. But now everyone is used to chatting with ChatGPT, everyone is used to the complexity, everyone is crafting images, or even movies with Sora, and all of that. So the level of awareness and the level of understanding of what the technology can do has risen dramatically.

(06:27):
So for our users, the level of expectations is like, hey, Superhuman, I expect this now. I expect this now. The other dimension is that, from an engineering standpoint and a building standpoint, our tool set is totally different. The tools have changed, and engineers were working one way three years ago, even two years ago, even six
(06:49):
months ago. Right now, the tool set, your flow, and your whole setup for work have dramatically changed.

And maybe the last dimension, which I think is really tricky to apprehend, is perceived quality. Superhuman was seen and built on kind of one
(07:12):
single dimension: it's highly qualitative. We were in charge of the quality because we mastered everything. So you can have a zero-bug policy. You can take the time to deliver the value, but it needs to be perfect.

And now, with LLMs, a bunch of the perceived quality depends on your prompt. So you have users that are prompting with
(07:36):
different skills, or different levels of skill, and the outcome of that prompt may be perceived as low quality. But that's something that is really hard to control. And it's creating something that is sort of mind blowing from an engineering standpoint. I mean, we've all been working in tech, with the craft, the bugs, and everything.

(07:57):
There are some processes to limit the number of bugs. But now, quality is not only bugs. It's also this perceived quality based on the user, and that's an interesting thing to tackle.
Chris (08:11):
And I'm curious, as you kind of mentioned, with some of the prompts, you have different users' skill levels and stuff like that. Could you talk a little bit about how you tackle that? This is one of those interesting things, from my standpoint, to hear about, where there are all these little gotchas
(08:32):
in this world that a typical person isn't going to have thought about ahead of time. And this is one of those things where prompting itself is fairly diverse in terms of skill set. Can you talk a little bit about how you deal with that when you're trying to put together a product and focusing on the quality issues and stuff like that? Because
(08:52):
I'll be honest with you, it would not have occurred to me to have to think about addressing that kind of issue. Can you talk a little bit about that?
Loïc (08:59):
No, no. I will tell you about one specific feature that we released in Q1. We have these auto labels: automatic labels that will basically flag your emails, and based on the label, you can decide to skip your inbox altogether.

Typical stuff, like random pitches from a company that wants
(09:22):
to get in touch with you to sell their product. I receive probably 30 of them every day. Do I want to take a look at those 30 and answer all of that? Probably not. Probably not.

So I'd love for them to basically be skipped altogether. For those, we built classifiers that do not
(09:43):
rely on user prompts, so that we control the quality: precision, recall, the typical stuff. But we also allow our users to create and craft their own labels. Let's say you want all your podcast invitations to have the same label. You cannot just have a
(10:06):
deterministic rule to catch them, because I don't know all the podcasts, the people, and everything.

So you cannot just do a filter, like Gmail would, where you say if this, then that. You have to prompt it; you have to basically allow the user to craft a prompt that will surface all of those. But then, that prompt is tricky,
(10:27):
because if you have someone that is just putting in a one-liner, you start having some issues, because the precision and recall based on that one-line prompt are not great. And as I guess your audience knows from working with ChatGPT, or with prompts in general, the more structured and extensive they are, the
(10:50):
better the result. And there's a bunch of hallucination that can happen if you have just a one-liner, because of the lack of context and all of that.

So of course, you have a system prompt to basically surround this user prompt, to try to avoid too many issues. But there's also a part of
(11:13):
education that you need to have, and we are working on this now, which is like: your prompt seems interesting, but you probably want to structure it this way. So there's some stuff like this that we will be working on. Also sharing prompts, libraries of prompts, is something that we're thinking about more and more, because not everyone is able to craft a nice
(11:35):
prompt, and maybe someone on your team will have made a prompt that you would happily use if you got access to it. So it's very product-centric, not AI-centric, and you need to work around this new problem. I wish we had a silver bullet and the answer to that problem, but I think we are learning as we walk, and
(11:59):
it's super interesting.
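For readers who want a concrete picture, here is a minimal sketch of the pattern Loïc describes: surrounding a user's one-liner label prompt with a structured system prompt before classifying each email. The client, model name, and prompt wording are illustrative assumptions, not Superhuman's actual implementation.

```python
# Hypothetical sketch: wrap a user's one-liner label prompt in a structured
# system prompt so the classifier has enough context. Model choice and
# prompt wording are assumptions, not Superhuman's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_TEMPLATE = """You label emails for an inbox client.
The user defined this label in their own words:
---
{label_prompt}
---
Answer with exactly MATCH or NO_MATCH for the email below.
If the email is ambiguous, answer NO_MATCH (favor precision over recall,
since wrongly hiding a real email is worse than missing one)."""

def matches_label(label_prompt: str, email_text: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_TEMPLATE.format(label_prompt=label_prompt)},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content.strip() == "MATCH"

# A bare one-liner user prompt becomes workable once the system prompt
# supplies the structure and the output contract.
print(matches_label("all my podcast invitations", "Hi! We'd love to have you on our show next month..."))
```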
Daniel (12:00):
I'm wondering... I'm always intrigued by this. I read a book by Richard Hamming, and one of the things that he talks about is how, if you rethink a process that was very human and manual before, often the way that you would make it an augmented or machine-driven process is very different from what the original
(12:20):
human process would look like. I think with the email client, we all sort of expect a certain process, a look and feel to the email client, that's developed over time. What have you found in terms of presenting an email client to a user that is drastically different? What sort of
(12:42):
needs to be preserved, and what's kind of up for grabs in that experience?

What should stretch the user? How do you think about that?
Loïc (12:51):
That's really interesting. That's a really interesting point, because we are at that moment where the user's interaction with the computer, with the system, is dramatically changing. People don't expect to click in different windows anymore. The expectation is different. With ChatGPT, or, I would say, the other
(13:14):
clones from different providers, you basically have a chat box, and you ask everything there. Even if you're working on a document, you ask the chatbot: modify my document and rewrite my exec summary.

Oh, make my tone a bit more like X and Y and Z. You don't expect to have a button, like Word would have, like Microsoft Word back in the day. And we are only at the beginning of this
(13:39):
shift. So, coming back to competition and all of that, I think the barrier to entry to pretty much any SaaS application, or even a consumer application, is very low now, because it's very easy at least to build a POC. At least a POC; I wouldn't go
(13:59):
further than that.

And what will make the difference is the product taste: how you understand your users, and how you understand their interaction. This is where I feel pretty proud to work at Superhuman, because our CEO is a freak in terms of user interaction and vision, and he's already thinking about
(14:20):
how the future of interaction will be. And it will change. It will be different. So what will stay, and what will be slightly different? I'm pretty sure that the conversational aspect will be a strong paradigm.

(14:41):
Right now, whether it is through your keyboard or through a mic, you don't really talk to your system. You don't talk to the application. Maybe you start talking with ChatGPT, because they have this nice voice interaction. Maybe you use Wispr Flow, or these types of tools, to basically write your email, or to write your messages in Slack.
(15:02):
But you're not exactly commanding the device to do things as you talk just yet.

But more and more people are doing so. I probably talk to my computer now more than I type, interestingly. So there's a change. And everything that we've done in the past was
(15:23):
mostly click and click and click. Superhuman started with the Command-K and keyboard-centric access to things, for people that really wanted productivity, because switching with a mouse is pretty slow.

And now more and more people are starting to engage with their voice. So all of that will change the way you think, the way
(15:44):
you face the data, the way you interact with the data, the way you bring the focus. So this is an interesting area, I would say. One thing that I do believe will stay, though, to your point, Daniel, and I'll talk about email especially: the concept of the inbox, the concept of having some sort of timeline of things that you need to go through, to
(16:07):
get rid of the stuff that is top of mind, some sort of task list to some extent, will stay. Now, how it will be surfaced, how you will go through it, will dramatically change over time, and we are already seeing this.
Sponsor (16:38):
Okay, friends. Build the future of multi-agent software with Agency (AGNTCY). The Agency is an open source collective building the Internet of Agents. It is a collaboration layer where AI agents can discover, connect, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for
(17:01):
inter-agent communication, and modular components to compose and scale multi-agent workflows.

Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. The Agency is dropping code, specs, and services. No strings attached. You can now build with other
(17:24):
engineers who care about high-quality multi-agent software. Visit agntcy.org and add your support.

That's agntcy.org.
Chris (17:40):
So as we were going into the break, we were talking about the notion of rethinks. And I'm curious: you're not only thinking about rethinks, you're also having to respond to the evolution of
(18:00):
the technology itself that's available for your teams to implement stuff. And one of the things that we've seen over time is that it's not a smooth increase. You may have evolutionary increases in model capabilities for a bit, but you also have these jumps that occur along the way. And
(18:21):
with your product teams, as you're looking at what the future of your products is going to be, you hit these moments where it goes from predictable improvement in the models to these jumps.

How does that affect the product development cycle that you have internally? In those moments, you know,
(18:43):
we're talking about rethinks, do you have moments where you kind of go, maybe it's time for a deliberate rethink, because something just happened in terms of technology capability that we weren't expecting last week, and we're going to do that? How do you guys handle being in this kind of an industry?
Loïc (18:59):
No, it's very interesting. Daniel, you were mentioning a book, but one book that comes to mind as you're asking this question is Zone to Win by Geoffrey Moore. He talks about continuous innovation and disruptive innovation, and this is probably what we're talking about.

We continuously innovate, and we continuously add more
(19:20):
features and new stuff into the product, and sometimes you have this opportunity to provide something that is disruptive, whether it's the underlying technology that is disruptive, or because you have some sort of a wow moment, and there's someone, I would say, with a vision who says, this is the direction to take, and we need to either pivot or do something drastically
(19:42):
different. What we've seen, especially with AI, is that the rate of those disruptive innovations is mind blowing. I would say before AI, to some extent, the technical innovations were maybe once a year, once every two years; you'd have something that is brand new and like, holy shit, I
(20:02):
need to use this, pardon my French. But what is interesting with LLMs is that every two or three weeks, if you're not on Twitter, if you're not on Hacker News, you can miss the next big thing. LLMs, multimodal models, reasoning, MCPs: all of that came in six months.
(20:23):
And all of that comes with a new set of capabilities that you can decide to implement in your product. So to come back to your earlier question, what is the impact on product development? How do you handle this? One, you'd better be agile, meaning the true agile. You'd better be able to stop what
(20:43):
you do and say, wow, focus, we need to sit down for a moment, because this is coming.

What do we do about it? And that's why I love small companies, to some extent, because it's very easy to get everyone together: listen, there's this new thing. We need to do something about it. Let's change the roadmap right now. When you're in a bigger company, it's way harder
(21:06):
to do, because you have your yearly planning that feeds into quarterly planning, and you have all those OKRs that you need to report on, and everything.

So you basically need almost a six-month business plan to explain why you want to pivot into something else, which is obviously not the case when you're a company of a small size. Superhuman engineering, product, and
(21:30):
design is probably, I don't know, I don't have the strict number, but like forty, forty-five, maybe 50, and that's about it. That's the size where you can be super agile. You can stop everyone doing something because something is coming up, and we need to focus on it. Of course, we can do better.

If my engineers are listening to this podcast, they would say,
(21:52):
maybe you're caricaturing a bit. So probably I'm caricaturing a bit.
Chris (21:56):
And of course they are, right? I mean, of course they're right. Of course they're listening to you.
Loïc (22:01):
And of course they are listening to it. No, so it's having this understanding that everything is changing right now. You need to reassess your priorities almost every two weeks. Almost every two weeks. MCP is coming, people are standardizing on it right now. What do we do with it?

What do we do with it? Should we invest like crazy? Should
(22:23):
we stop everything that we are doing? Or do we say, we still believe in the vision, and this is providing more value? You need to make those decisions every two weeks, or almost every week.

So being close to, I would say, a close-knit team that is talking basically on a daily basis, to make sure that you're making the right decision, is key.
Chris (22:44):
And by the way, just for listeners, you may have heard MCP in there. We did an episode explaining what MCP is, so anyone who's not familiar with it should jump back a few episodes and hear that out. It'll give you some context around it.
Loïc (23:00):
Yeah, thanks. Thanks, Chris. And I'm sorry if I use, I would say, some jargon, but...
Chris (23:04):
Jargon is fine. But we always try to jump in and point people to it.
Loïc (23:09):
So this is perfect. This
is perfect.
Chris (23:11):
And thinking of looking forward, one of the things that I'm really curious about: we've tackled some of the bigger issues of AI and email. But I'm curious, if we dive down into specific functionality at Superhuman, what do you see as
(23:32):
maybe the most useful AI email functions that you're currently either releasing or thinking about going forward? When you get granular on the product, how are you starting to think about that now?
Loïc (23:45):
I would just point to the feature that all our users are basically talking about, because they just love it: a feature called Auto Draft. You receive an email as part of a thread, and someone is asking you some questions; or you send an email basically saying, hey, can we meet next week or whatever, and after two days you don't have an answer. You
(24:08):
usually want to bump that back into their inbox and everything. We built this feature where we create those drafts for you, ready to be sent. It's not mind blowing in terms of usage of LLMs: you provide the context, you use the tone you have with that person and everything, and you craft a draft that could sound like a good way to reply to it, and the
(24:32):
results are just mind blowing.

The users find it so addictive, because it's relatively accurate, and they save a lot of time. It's just about saving time. Our users are mostly CEOs, CXOs, people on the sales side, as well as some consultancy firms. They
(24:53):
live basically day in and day out in their emails. So every ten seconds that you can save them in their day is a huge win, given the amount of email that they have.

So this is one of those features that is super effective, even if it sounds simple.
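As a rough illustration of the shape of such a feature, here is a minimal sketch: pass the thread plus examples of the user's tone with that contact into a single LLM call and get back an editable draft. The function name, model, and prompts are hypothetical, not Superhuman's implementation.

```python
# Hypothetical sketch of an auto-draft call: thread context plus tone
# examples in, a ready-to-edit reply draft out. Names and prompts are
# illustrative only.
from openai import OpenAI

client = OpenAI()

def auto_draft(thread: list[str], tone_examples: list[str]) -> str:
    """Draft a reply to the last message in `thread`, imitating the user's tone."""
    system = (
        "You draft email replies on behalf of the user.\n"
        "Match the tone of these past emails they wrote:\n\n"
        + "\n---\n".join(tone_examples)
        + "\n\nReply only with the draft body, ready to send."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; a cheaper model may do once proven
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": "Thread so far:\n" + "\n\n".join(thread)},
        ],
    )
    return response.choices[0].message.content

draft = auto_draft(
    thread=["From: Ana\nCan we meet next week to discuss the contract?"],
    tone_examples=["Hi Ana, happy to! Does Tuesday at 2pm work? Best, L."],
)
print(draft)  # surfaced to the user as an editable draft, never auto-sent
```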
Daniel (25:12):
So, Loïc, even with what you're just describing there, creating an auto draft per email, that's maybe an LLM call; doing classifications and auto labeling, maybe other calls. I don't know how many calls, or chains of LLM calls, are happening per email,
(25:33):
but that could potentially be a lot. And if you do that for one email, that's fine. If you do that for all my emails, that's more. If you do that for all the emails of thousands or hundreds of thousands of people, that's a lot of GenAI workload.

How does Superhuman, as more of an AI application company, think
(25:54):
about that in terms of optimizing infrastructure, GenAI use and consumption, hosting your own models, fine-tuning your own models, using smaller models? How do you think through some of that?
Loïc (26:08):
So this is a great question, and this is a real challenge to some extent, if not a problem sometimes, indeed. My engineers are very much into the finances. They understand the cost of inference, the cost of the input, the cost of the output. They understand the
(26:29):
differences between the different models. So we have to put in place some sort of high-level principles to keep moving fast, so that they know how to default, and only escalate if they have questions.

I will give you an example. If it's a new feature, we don't know if it will work or whatever; we're still testing, and we want it to be great. So we take the most expensive
(26:53):
model. It's working, and we have traction. Great, good problem to have.

And now this is the moment where you start thinking about optimizing the cost. Maybe you will switch to a cheaper model, maybe a more fine-tuned one, maybe you would switch to a different type of model altogether. For example, for the classification that we discussed, LLMs are okay at
(27:17):
classification, but you can get way cheaper for the same quality with a BERT type of model. And the inference cost is a fraction of it, a fraction. So long story short, this is the way we provide value to our end users: we try with the best, and we optimize
(27:37):
after the fact.

Does that answer your question? But more generally, I think this is always the right approach: don't worry about the cost right now if it's not becoming a problem, because you always want to provide the best experience, and if you don't have traction, too bad. The risk, if you start small because you're
(27:59):
afraid of the cost, is that you use a cheaper model, the feedback from the users is meh, and they won't use your feature. And then you don't know if it's because the feature is, I would say, not well targeted, or if it's because of the model. Starting with the best, you get better answers and better insights.
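A small sketch of the swap Loïc describes: once a labeling feature has traction, replace the per-email LLM call with a fine-tuned BERT-style classifier whose inference cost is a fraction of the LLM's. The checkpoint path and labels are placeholders, not Superhuman's stack.

```python
# Hypothetical sketch: swap the expensive LLM labeler for a small fine-tuned
# encoder once the feature is proven. The checkpoint path is a placeholder;
# in practice you would fine-tune e.g. DistilBERT on your own labeled emails.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="path/to/finetuned-email-labeler",  # placeholder checkpoint
)

def label_email(email_text: str, threshold: float = 0.9) -> str | None:
    result = classifier(email_text[:2000])[0]  # truncate long emails
    # A high threshold favors precision over recall: wrongly hiding a real
    # email from the inbox is worse than missing one pitch.
    return result["label"] if result["score"] >= threshold else None

print(label_email("Hi! Quick question: who handles procurement on your team?"))
```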
Chris (28:19):
That was a really interesting answer from my standpoint. You explicitly called out starting the feature with the best, the most expensive thing, and then pulling it back toward efficiency. And once again, one of the things that we
(28:40):
often call out on the show is software engineering practice being applied, and the analogies to it on the AI side. So I really wanted to call that out, because I thought that was a great insight.
Loïc (28:54):
And it does have an impact. I'm sorry to cut you off, Chris, but it has a significant impact on the way you build your application, because you want to be able to switch models, to switch the heuristics associated with the output that you want. So you have to invest some time in having a way to make that switch relatively easily, and potentially do A/B
(29:17):
testing with different populations to measure the difference in perception, because again, not everything is black or white; there are nuances of gray now in terms of perceived quality. So you need more of a statistical approach to understanding the impact of one model versus the others. And
(29:38):
of course, we have internal evals and all of that to do our own testing with our golden dataset, but the reality is we have a diverse set of customers, and everyone is different, so we need a broader perspective than just relying on our own dataset.
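A minimal sketch of the plumbing that makes such swaps cheap: assign each user to a model variant by a stable hash, so perceived quality can be compared across populations alongside internal evals. The variant names, weights, and assignment scheme are assumptions for illustration.

```python
# Hypothetical sketch: stable A/B assignment of users to model variants so
# perceived-quality signals can be compared per population. Variant names
# and weights are illustrative.
import hashlib

VARIANTS = {
    "control": {"model": "gpt-4o", "weight": 0.8},
    "cheaper": {"model": "gpt-4o-mini", "weight": 0.2},
}

def assign_variant(user_id: str) -> str:
    # Hash the user id into [0, 1) so assignment is stable across sessions.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    cumulative = 0.0
    for name, cfg in VARIANTS.items():
        cumulative += cfg["weight"]
        if bucket < cumulative:
            return name
    return "control"

def model_for(user_id: str) -> str:
    # Log the variant next to every quality signal (thumbs up/down, edits to
    # a draft, feature retention) to compare perception across populations.
    return VARIANTS[assign_variant(user_id)]["model"]

print(model_for("user-1234"))
```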
Daniel (29:56):
Yeah, Loïc, I appreciate you getting into the technical side of things a bit and talking through some of those optimizations and how you think about them. Obviously, you're leading the technical efforts with Superhuman, and I'm wondering if you have any sort of hard lessons learned from doing AI engineering over time. We have a lot of practitioners
(30:19):
that listen in. Any kind of general principles or lessons learned that you'd want to impart?
Loïc (30:25):
That's a good question. Maybe one thing that I've learned: as a CTO, I need to discuss with the rest of my leadership team, and we talk about the success of features and everything. The typical way to talk about quality is in terms of the number of bugs and everything. I touched on it earlier,
(30:46):
but perceived quality is different now. We are in a world with way more subtleties, with LLMs.

So it's about setting the right expectations, basically explaining that a feature can be built, and sometimes fail because the feedback is not great, and it might not be because it's not well implemented. Maybe there's, I would say, more to it. Maybe there's a part of
(31:07):
perception, maybe there's too much latitude offered to the end user, maybe there's some work to do on the prompt side. That's something that hit me in the beginning, where the perception of a feature was: this is terrible work. It's not working, people are complaining.

Guys, what have you done? And the work was done properly. It
(31:28):
was well implemented and everything, but the perceived quality of some of those features can be completely different, based on those new aspects. So maybe my lesson learned is to be very explicit, when you launch a new feature, about the risk around that perceived quality, and about the source of
(31:52):
mistakes being a bit less on the engineering side and maybe a bit more on the user side. And there's a lot of work to be done to control that, in terms of user education and in-product education. So putting a bit more effort into product-led growth, those typical aspects of the business, will have a
(32:15):
tremendous impact on the success of the feature.
So that's probably one. The second one is, and it's interesting because I see it every day: we are moving upmarket, right? Like a lot of startups, we are moving upmarket. So you start having companies that are part of the Fortune 500 wanting to use your
(32:37):
product, and I come from a world where moving to enterprise is pretty heavy. You need a lot of features, you need a lot of compliance, you need basically a lot of things that are not directly improving your product, but are improving the confidence of those companies that you are the right partner to work with.

(32:57):
There's a shift now. There's clearly a shift in those Fortune 500 companies, and by extension the whole enterprise market, where, especially with AI, the risk associated with lesser compliance, or with being a small company, the should-we-trust-you question, is completely counterbalanced by the risk of missing out. The
(33:21):
opportunity cost is too big, and now we definitely see a push from CXOs on their security teams for those AI tools and productivity tools, basically saying: hey guys, you need to make it work. Because it's improving the efficiency of the C-level so much, and by extension the rest of the company, you know
(33:41):
what, we're probably ready to take the risk, even if it's a Series A, Series B, Series C company that's maybe not fully established. Yes, they are processing our emails, which is a core dataset of our business, and we need to be straight about it, but maybe they are more okay with it. Of course, we need to do the work.
(34:02):
You need to prove that you're the right partner. But the first approach is changing, and the dynamic is changing.

So it's basically a bias toward let's make it work, compared to two years before, where it was prove to us that you are a
(34:23):
reliable partner, and then we'll see if we do this POC. It's completely the reverse right now. So yeah, that's an interesting dynamic that is useful in how you build a product right now.
Chris (34:34):
I'm curious. We get to talk about all these really cool things happening in the AI space and how they're affecting products and services. LLMs can do so much now, and we're kind of moving into the agentic age of AI, and that's increasing. But there's still a human being in the workflow.
(34:58):
What are the critical factors that the human is still bringing to the workflow, as opposed to all this amazing technology that we're able to utilize?

How do you see the human in the workflow going forward, given that you have so much capability from the technology playing all around them?
Loïc (35:18):
That's an interesting question. And I guess the answer is almost in the question: it's the human part that is hard to replicate. I mean, creativity, the ability to detect patterns, and stuff like that. I think that the rise of LLMs is helping us get rid of everything
(35:40):
that is mundane. I will give you one example.

I do a lot of interviews, because I hire engineers, and as part of every interview process, you used to write up a debrief for the team to consume. And writing a thoughtful debrief takes time. It
(36:01):
takes time. I was probably spending between twenty and thirty minutes after each interview to basically put down the pros, the cons, the question marks, the areas to dive into. Now we are pretty much all using meeting minutes that use the transcript, formatted the way you want, and you just have to add your quick thoughts here and there, lightning
(36:25):
fast. So from twenty to thirty minutes, this is now taking me three minutes, and boom, it's uploaded into the ATS or whatever HR tooling.

That's one example. Meeting minutes with my people: I do one-on-one meetings with my people, and I want to keep track of everything that we said. I used to take notes.
(36:45):
I'm still taking some notes to some extent, but the transcript itself is so good now that I don't have to take notes on everything. I just put notes on the two key highlights that I want to keep somewhat private. The rest is already shared. And now it's building a database of information for me, on my desktop, that I can query anytime
(37:07):
to find information. So this is replacing all the mundane work that I was doing, and I can just focus my brainpower, to some extent.

And that's definitely changing. The same for my engineers: they've lived, I would say, paradigm shift after paradigm shift, changing the way they build
(37:29):
software over time. They keep increasing their velocity because of those new tools. They also have to think differently. But it's still stupid to some extent, all this tooling; it's basically an intern.

It's an intern. So you need to review, you need to spend the time reviewing the
(37:52):
output of your new IDE, be it Cursor, be it Cline, be it whatever, those tools. You need to review everything, because sometimes it will make some crazy mistakes that a regular engineer won't make. But I think that it's saving a ton of time for our engineers, and they can just focus on the core of their job, which is understanding
(38:14):
the user, understanding what needs to happen, and what is the smartest way to make it happen. LLMs are just a nice helper to go faster, but so far, that's about it. But it's changing every day.

It's changing every day.
Daniel (38:29):
Yeah, Loïc, you mentioned coding, and vibe coding comes to mind. And I almost wonder if there's going to be a new reality for email with all of these AI features coming in. I know when I'm using vibe coding tools, I have to learn a new way of working. There are different types
(38:51):
of mental load that I have to manage, like a lot of context switching, guiding the model in different ways. It's a different kind of mental load, a different kind of skill.

Do you see a similar thing developing in terms of my interaction with email, learning a different way of
(39:13):
working through those things, in good ways, but also in challenging ways, having to retool my mind or retrain my mind for how to work in this kind of vibe emailing way?
Loïc (39:28):
No, no, this is a good question. We were talking about user interaction and how it is evolving, and our work is to make that transition, if there's any transition, the smoothest possible. We need to take the users where they are, to bring them where they will eventually be with this vibe emailing, if that even means a
(39:48):
thing. I'm not sure what would be behind it, but clearly there's a change that we are facing.

And interestingly, I was talking about this lately: right now, startups typically over-index on seniority for engineers, because you need people able to manage the
(40:09):
noise, manage the shit; it's always changing. You need people with a tough skin to be able to manage that. That said, and we see it, it's harder right now for new grads to get into this market. But they have one asset that probably makes them different: brain plasticity.

The new grads of this year, for the last three years,
(40:31):
they've seen so many different technologies coming. Every six months, they had to readapt, they had to relearn. So their brain is used to this mental shift every six months: oh, damn, this is the new way to code. Oh, damn, this is the new, new way to code. In my day, the biggest shift was moving from SVN to Git, and that was about it. Or you'd have a new framework, or a new language, but
(40:54):
it's same old, same old, a different flavor of the same thing. So I do think about the people that are just born with it: we were born with the internet, they are born with LLMs and AI, and they have this brain plasticity.
(41:14):
And I think this will probably be the challenge for practitioners, for engineers globally: how to adapt to that. Because I'm 45, and I'm not sure that my brain plasticity is still there. So I need to keep up, I need to keep trying new stuff and everything, and challenge myself every day, compared to even five years ago, where I was just
(41:35):
tuning my own ways and making them slightly better over time. This is part of the shift. And if I don't get on the wagon, I'm probably lost. And the same applies to engineers. So it's definitely an interesting time.
Chris (41:49):
Definitely an interesting time. I gotta say, if you hadn't dated yourself intentionally by revealing your age, the SVN to Git switch would have done that for you. I don't think anyone out there under 30 is going to know what SVN is anyway.
Loïc (42:07):
I'm sorry. I'm sorry. It's kind of my gray hair that we're talking about.
Chris (42:11):
Brain plasticity is definitely on my mind as well; I'm older than you are, even. I'm curious, as we wind up here. There's so much ground being covered right now, and you've talked about the evolution of the product, and new technologies slamming into your current plans and having to
(42:35):
adjust and stuff. If you take a step back when you're done for the day, and you're thinking about the future on a little bit longer timeframe than what we've been talking about: where can email and messaging go with these technologies on that longer timeframe, when you let
(42:59):
your mind wander and dream about what could be? What are your thoughts around the future, in the large? What should we be thinking about that's not necessarily going to be science fiction, but day-to-day life, given where things are generally headed?
Loïc (43:17):
No, this is... I wish I knew. I wish I knew. But if I have to do a bit of science fiction: clearly, I see that communication globally, communication between people, is so fragmented. So fragmented. With my family, I use WhatsApp.

At work, with my partners and all of that, we use email. Internally, we use Slack, but we also discuss
(43:43):
in Google Docs threads, in comments, and all of that. So communication is so spread out, and in so many different places, that it's really hard to make sure you have everything that belongs to the same topic in some sort of unified inbox. So if I have to guess where we will
(44:07):
be in, I don't know, ten years, but with AI it might be six months: I would say that there's probably a need for a unified and central way to communicate, for you, in your preferred interface, regardless of where the message lands.

And doing so in a way that brings focus. When I want to
(44:31):
work on a specific partnership, in AI, with all those big providers and everything, I want to focus only on this, and I don't care if the information is in my email, in a Google Doc, in Notion, in WhatsApp, or wherever. I want it consolidated, so that I know everything that is happening in one place. So I think there
(44:53):
will be a lot of work around this. The other aspect that is really interesting is where the LLM sits. What is the entry point?

We see ChatGPT being one entry point, but all the tools have an embedded ChatGPT equivalent. Whether
(45:13):
you use Confluence or Notion, whether you use Salesforce, whether you use any kind of B2B application, they have their own specific chatbot. And then you have actors like Glean, for example, and some others, that try to unify everything. Where is this going? That's something
(45:37):
that I'm really curious about.

Do we want to be where people work, or do you want some sort of unified experience, regardless of the vertical people are working in? I'm curious. I have more questions than answers. What's for sure is that it will evolve, and I do believe that Superhuman is doing this in a
(46:00):
nice way, and people tend to love it. So building on that experience and that empathy with users, I believe we'll be well placed for that race, basically.

But it's an interesting race.
Chris (46:12):
I appreciate the insights. And thank you so much for coming on the show today and sharing not only where Superhuman is at, but how you're tackling the challenges and thinking about the future. A lot of insight there. I really appreciate it.
Loïc (46:29):
Thanks, Chris. I
appreciate the time with you and
Daniel.
Jerod (46:39):
All right, that is our show for this week. If you haven't checked out our Changelog newsletter, head to changelog.com/news. There you'll find 29 reasons, yes, 29 reasons, why you should subscribe. I'll tell you reason number 17. You might actually start looking forward to Mondays. Sounds like
(47:00):
somebody's got a case of the Mondays. 28 more reasons are waiting for you at changelog.com/news. Thanks again to our partners at Fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.