Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin.
Speaker 2 (00:22):
After every interview we do for the show, we upload
the audio to a piece of software called Descript. Descript
turns the audio into a transcript, and then I can
edit that transcript, cut out the boring parts, move sections around,
and when I do that, Descript edits the underlying audio
to match. As software, Descript is pretty janky, buggy. It's
(00:45):
constantly changing in ways that can make it hard to use,
and sometimes it just blows stuff up. But we use
it anyway because Descript is an incredible
Speaker 1 (00:54):
advance over what came before.
Speaker 2 (00:57):
Before Descript, audio software represented audio files not as words
that you can read and edit, but as waveforms, as
squiggly lines presented
Speaker 1 (01:07):
on a timeline. So when Descript came along,
Speaker 2 (01:09):
being able to edit audio by editing words on a
screen was this huge advance, and it was an advance
made possible by artificial intelligence. Eventually, Descript expanded to allow
people to edit not just audio but also video, and
last fall, OpenAI, the company that makes ChatGPT,
led a fifty million dollar
Speaker 1 (01:30):
investment round in Descript.
Speaker 2 (01:32):
It's a sign that Descript is moving out to the
new AI frontier, the frontier of generative AI. AI that
creates words and pictures. This is of immediate interest to me,
as in is AI gonna help me do my job?
Is AI gonna do my job?
Speaker 1 (01:51):
But there is also a bigger question here what is
AI going to mean?
Speaker 2 (01:55):
More broadly for people whose jobs involve writing things and
creating visuals, which is to say, what is AI going
to mean for almost all white collar workers. I'm Jacob
Goldstein and this is What's Your Problem, a show about
people trying to make technological progress. My guest today is
(02:17):
Andrew Mason, founder and CEO of DEscript, or maybe it's
DeSCRIPT. By the way, I've always said DEscript, and I'm
pretty sure that's wrong, right? It's DeSCRIPT, like Detour.
Speaker 3 (02:28):
We're noncommittal on the issue.
Speaker 2 (02:30):
Let's do the subjective version.
Speaker 1 (02:31):
You're just one man. How do you say the name
of your company?
Speaker 3 (02:34):
Yeah, I've kind of cultivated the ability to flip between
them as I speak.
Speaker 1 (02:39):
You're killing me.
Speaker 3 (02:40):
The world still needs a little mystery.
Speaker 1 (02:42):
Okay, how about this? Say your name and your job.
Speaker 3 (02:45):
My name is Andrew Mason. I work at Descript. That's
DEscript. DeSCRIPT.
Speaker 1 (02:53):
Well played.
Speaker 2 (02:55):
Earlier in his career, Andrew Mason was the co-founder
of Groupon. He took the company public and then got
fired after its stock fell by something like seventy five percent.
After that, he started a company called Detour, or maybe
it's I don't know. The company made these highly produced
audio walking tours that you could listen to on your phone.
In that job, Andrew saw the challenges of working with
(03:18):
the old waveform
Speaker 1 (03:19):
based audio editing software.
Speaker 2 (03:21):
At the same time, AI generated transcripts were getting better
and cheaper, and new technology was making it possible to
automatically match a transcript to an audio file. Andrew looked
at those two developments and thought, we should make an
audio editor that works like a word processor, which he
admits was a distraction from what he was supposed to
be doing, which was making walking tours.
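[Editor's note: The core trick described here, treating transcript edits as audio edits, can be sketched roughly as follows. This is an illustrative toy, not Descript's actual code; the data layout and function name are hypothetical, and it assumes a speech-to-text model has already emitted word-level timestamps.]

```python
# Each transcript word carries the audio span it was aligned to.
# Deleting words from the transcript yields a list of audio spans
# to keep, which an editor could then render back into a new file.

def spans_to_keep(words, deleted_indices, total_duration):
    """Return (start, end) audio spans that survive the transcript edit."""
    deleted = set(deleted_indices)
    keep = []
    cursor = 0.0
    for i, w in enumerate(words):
        if i in deleted:
            # Close the current kept span at the start of the cut word.
            if w["start"] > cursor:
                keep.append((cursor, w["start"]))
            cursor = w["end"]  # resume after the cut word
    if cursor < total_duration:
        keep.append((cursor, total_duration))
    return keep

words = [
    {"text": "so",    "start": 0.0, "end": 0.3},
    {"text": "um",    "start": 0.3, "end": 0.6},
    {"text": "hello", "start": 0.6, "end": 1.1},
]
print(spans_to_keep(words, deleted_indices=[1], total_duration=1.1))
# → [(0.0, 0.3), (0.6, 1.1)]
```

[Deleting the word "um" from the transcript translates into cutting 0.3–0.6 seconds from the audio; the surviving spans would then be concatenated.]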
Speaker 3 (03:44):
If I'm being honest, it was a bit of an indulgence.
It just felt like an incredibly cool problem to work on.
I went to school for music technology and worked in
a recording studio after I graduated, and just always loved tools,
and audio visual tools in particular. It was just so
fun to start thinking about this puzzle.
Speaker 1 (04:05):
Uh huh.
Speaker 3 (04:06):
So we told ourselves it was kind of a way of diversifying,
but that's just like a ridiculous way for
a startup that hasn't even found
product-market fit in its core product to be thinking
about the world. You know, all of the advice textbooks
will tell you not to do that, and it's probably
generally good advice, but it was just it was just irresistible.
Speaker 1 (04:28):
You know.
Speaker 2 (04:28):
So I am a fan of descript. I started using
it around when it came out several years ago. Certainly
I think it's great. It is kind of janky, and
it's always kind of janky, right? And by janky I mean
like a little bit unstable, things don't quite work.
It's always telling you to restart.
Speaker 3 (04:48):
By the way, I'm not sure if I have
your side, so I may ask you to send me
this entire portion of the interview so I can share
it with the team.
Speaker 2 (04:55):
So the thing is, like, I wonder, why is it
always kind of janky? Why is it never just like
stable and it works? And my guess is it's because
you're pushing forward really fast, right, You're trying to make
it better and better and better, and presumably there is
some trade-off, right, like the faster you try and
push it forward, the more janky it's gonna be. You could,
I'm sure, just perfect the way it was four years ago,
(05:17):
but then it would never get better, but it would
be stable, right, And so this is like a big
whatever startup founder type question, like, is that some balance
you're always trying to figure out how fast do we
iterate versus how much do we try and make it
just stable and work.
Speaker 3 (05:31):
Yeah, that's an astute observation, not the fact that it's janky.
That doesn't take a genius.
Speaker 2 (05:38):
Respectfully, but respectfully, as a fan.
Speaker 1 (05:41):
I'm telling you it's.
Speaker 3 (05:43):
But I think, like, your attempt to make sense
out of it, I think like a good story to
tell here is maybe going back to
the very beginning of Descript. So when we became Descript,
we sold off Detour to Bose and we decided to
just focus on building out this media word processor thing.
(06:06):
And some of the public radio producers who had worked
at Detour went on back into public radio and they
became some of the earliest customers of Descript. And what
we found was that they pushed it so much farther
(06:27):
than we were ready for.
Speaker 1 (06:30):
Ah, so quickly, what do you mean by that? Like,
what is an example of that.
Speaker 3 (06:34):
Yeah, I mean specifically in the case of like some
of these shows, it means putting together three to five
hour cuts of tape from many different files, with
like tons and tons of edits and notes mixed into
the edits, and just like stuff that we hadn't pressure
(06:54):
tested from a performance perspective. Just giant.
Speaker 2 (06:58):
The files are really big, right, Like a three hour
audio file is actually a giant file, right, And if
you're stacking up a bunch of those, so you have
all these giant files and you're making tons of cuts,
that's just like computationally intensive, storage intensive, that kind of thing.
Speaker 3 (07:15):
Yeah, it was just something that we hadn't
optimized for. It's an eminently solvable problem, but
it was something that in the earliest versions we hadn't done.
And so that has kind of in many ways been
the story of Descript up to this point, where there's
been that element of it, and there were kind
(07:35):
of realities of needing to make quick progress that we
had to balance against stability. And what we had for
our customers, in terms of the core product idea
of being able to edit by text, was still for
them so much better than the alternative that there was
(07:59):
just a tolerance of the stability issues that honestly made
us sick to our stomachs, that we had to put
people through. And it's not like we wanted that, but it was
like we had to make trade-offs there. So all
of this pushing kind of culminated with this release of
a pretty major overhaul that we did at the end
(08:20):
of last year and since then, since last November and
really like through the first half of this year, is
when we think we start to get to a good place.
Our goal is that if we're having this conversation, like
we're not going to be having the same conversation in
say July, for sure at the very latest, like the
(08:41):
conversation we'll be having with someone like you will be, Wow,
it's gotten so much better. It's not an issue anymore.
Speaker 2 (08:46):
So you say all that, but also, you just got
this big investment from OpenAI. You got a thing
on Descript that says sign up to try GPT-4
with Descript, which I just signed up for and I'm
very curious about That doesn't sound like, oh, we've arrived
and now we've got our product and we've just got
to hone it. That sounds like there's this whole giant
(09:08):
new universe of things you were about to try and
figure out.
Speaker 3 (09:11):
That's true. And that's the funny thing about all of
this is that at the same time that we're
turning to focus on quality, it's a moment where generative
AI has arrived at a scale and with a force
that no one really saw coming this quickly.
Speaker 2 (09:28):
So okay, I know from the beginning Descript was
built on top of AI, you know, the technology for
transcription, for matching audio to text, but was Descript itself
an AI company.
Speaker 3 (09:41):
So we had some really smart people on the team
with machine learning experience, but I wouldn't say
in the early days we were like a company
that had anybody doing like original AI research
or anything like that. We saw that as a gap
that we wanted to solve. And so I forget exactly
(10:05):
what year it was, it was maybe about four years
ago we saw this company called Lyrebird. It was
a company out of Y Combinator with some really smart
PhD candidates. They had built a model that would build a
clone of your voice based on I think about three
minutes or five minutes of training data of just talking
(10:27):
to it.
Speaker 2 (10:27):
Let me just say, I know Lyrebird is spelled l-y-r-e,
but I assume they're aware
Speaker 1 (10:34):
of the homonym.
Speaker 2 (10:35):
Right, this is a thing that is cloning your voice
so that you can make it sound like you're talking
even if you're not talking. And the company is called Lyrebird,
and this is a somewhat fraught thing, right? Like, I
feel like they're throwing it in my face that this
is a sketchy product that they're developing.
Speaker 1 (10:54):
Did it cross your mind?
Speaker 3 (10:56):
Did it cross my mind, as in the ethical quandary
that we were getting into, or like the branding implications
of the name.
Speaker 1 (11:03):
More the ethical quandary.
Speaker 3 (11:05):
Yeah, the ethical quandary absolutely entered our mind. And our
point of view on that, and has been our point
of view on these things in general, has been that
we don't want to be like out there paving the
way for any new paths to the apocalypse, so to speak.
(11:27):
We actually, like, have always felt not really
sure how society was going to put the brakes on
this sort of thing. We just knew that we didn't
want to be part of it, and we tried to
put guardrails in place on our product. That would make
it easy to stay off the slippery slope. So in
the case of Lyrebird, once we bought them, we
(11:51):
integrated their technology and released it as something that we
call Overdub. It's a way that you can clone your voice.
We require you to authenticate that it's actually you, and
we only let you clone your own voice, and that's
worked really well. We're now in a world where there's
other people that have similar models and they're not putting
those protections in place. And the use case that we've
(12:11):
always been the most excited about is making it possible
to edit your natural recordings, so going in and changing
an individual word, and we've built some special stuff that
will kind of listen to the audio on either side
and make sure that it blends in from an intonation perspective.
We started with the ability to delete stuff and move
stuff around. Now you can just type and really make
(12:32):
it feel like it's a word processor.
Speaker 2 (12:34):
Presumably the better you get, the better the technology you
use to clone a voice gets, the more words it can do. Right,
I mean, every week for What's Your Problem, I write
a little introduction and then I read it. But presumably
at some point Overdub will be good enough that no
one will know whether it's me reading it or
(12:55):
I'm just typing it right.
Speaker 3 (12:57):
We have a new version of Overdub that we'll release
in the next couple of months, and it's the first
time that I've heard my own voice doing a narration
of something that made me say, like, this sounds so
much like me in a way that it's not distracting
or the AI does not get in the way.
Speaker 2 (13:18):
Can I try that new version now, like, not this minute,
but like for the show?
Speaker 1 (13:24):
Yeah, for the show.
Speaker 3 (13:26):
I bet we could find a way to do it.
It's just so you could hear it and stuff.
Speaker 2 (13:30):
There's a universe where I say, at this moment in
the show, guess what? Today, that voice, me reading the
intro at the top of the show, that was Overdub.
Speaker 1 (13:39):
It wasn't really me.
Speaker 3 (13:40):
Yeah, we tried Overdub for the voice doing the intro
at the top of the show.
Speaker 1 (13:47):
And we decided it wasn't quite
Speaker 2 (13:49):
good enough, but we decided it would work for this
part of the show.
Speaker 1 (13:53):
What you're hearing right now, it's not really me. It's overdubbed.
Speaker 2 (13:58):
In a minute, what Overdub and ChatGPT and generative
AI will mean for Descript and for the
Speaker 1 (14:04):
World and also for me.
Speaker 2 (14:12):
Now back to the show. Descript is expanding from podcasts
to video, and it just took a big investment from
OpenAI, the company that makes ChatGPT, and also
this system called DALL-E that uses AI to generate images.
So Descript is clearly pointing toward a future where it's
going to be software for creating AI generated or at
(14:33):
least AI
Speaker 1 (14:34):
enhanced audio and video.
Speaker 2 (14:36):
And I asked Andrew, what does that future look like?
How is generative AI going to work in Descript?
Speaker 3 (14:43):
I don't think we know entirely yet. In a lot
of ways, it feels to me like you're letting this
alien into your app. You're just giving it the
keys, and then the interface is, how do you find
a way to kind of give the
alien some buttons in your UI, give it the ability
(15:04):
to press the buttons, and then how do you talk
to the alien?
Speaker 1 (15:07):
What do you mean? Like?
Speaker 2 (15:08):
That is a striking metaphor, a little scary, right? It
suggests a certain level of uncertainty and potential downside. It's
not like, oh, this is great, this is going to
solve a problem. Like, why do you say it's like
letting an alien
Speaker 3 (15:21):
in, as opposed to letting a human in.
Speaker 1 (15:25):
It's a really interesting choice of words. Tell me more
about it.
Speaker 3 (15:30):
So let's start by just saying, like, very specifically, what
I mean. I think, when implemented well, what this will
feel like is as if you had a co-editor
in a document with you, in our case, in a
video or a podcast that you're working on that is smart,
(15:52):
knows how to do everything, definitely knows how to do
the tedious busy work, and you can kind of
guide or direct it by giving it these tasks. You know,
it's almost like it's the production assistant or something like that,
and you're the director and you're able to just guide
it and give it feedback on how it's doing and
(16:13):
what it's doing well and what it's not doing well.
Speaker 2 (16:15):
There's a version of it where it's like we've gotten
used to the graphical user interface, right, We've been trained
since the Macintosh computer in the mid nineteen eighties that
the way you interact with a computer is like there's
little pictures and little folders and you point.
Speaker 1 (16:28):
and click one way or another, right? And
Speaker 2 (16:31):
one possibility here is the new standard interface is chat.
You just type in like whatever, please trim all the
ums from this file, or even please turn this thirty
minute interview into a twenty minute interview in the way
that makes it most interesting, right? And you just type
that in and it happens.
Speaker 1 (16:48):
I mean that's a version of what I hear you
saying there.
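[Editor's note: A chat command like "trim all the ums" could bottom out in a mechanical pass over a word-level transcript. A toy standalone sketch, with a hypothetical data layout that is not Descript's actual interface:]

```python
# Toy filler-word pass: given word-level transcript entries, return the
# transcript with fillers removed. A chat command like "please trim all
# the ums" could resolve to a pass like this; the caller would then cut
# the audio spans of the dropped words.

FILLERS = {"um", "uh", "er"}

def trim_fillers(words):
    """Drop filler words, ignoring case and trailing punctuation."""
    return [w for w in words if w["text"].lower().strip(".,") not in FILLERS]

words = [
    {"text": "So",     "start": 0.0, "end": 0.2},
    {"text": "um,",    "start": 0.2, "end": 0.5},
    {"text": "hello.", "start": 0.5, "end": 1.0},
]
print([w["text"] for w in trim_fillers(words)])
# → ['So', 'hello.']
```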
Speaker 3 (16:50):
I think some people believe that chat or a
text field will become the primary interface for making things.
I think of it more as like it's the primary
interface for interacting with the alien, and then you and
the alien are still going to be working together, like, have
other buttons that you can press. Still, sometimes you
(17:11):
just want to take the thing in your hands and
do it yourself.
Speaker 2 (17:14):
The alien metaphor, I mean, there's a real, like, do
we welcome our alien overlords question. When you choose that metaphor,
it makes
Speaker 3 (17:21):
me... I mean, maybe it feels that way.
Speaker 2 (17:24):
It doesn't. It doesn't make me feel better.
Speaker 3 (17:26):
I'll say that I think it feels the way that
an alien arrival would probably feel, where you know, maybe
you shake its hand and immediately it has something in
its skin that cures your cancer, and you feel hopeful,
but you also want to know what they're up
Speaker 2 (17:45):
to. And yeah, and cure your cancer is definitely the
happy version.
Speaker 1 (17:49):
Not usually what happens in the alien movie,
Speaker 2 (17:51):
but I guess that could happen.
Speaker 3 (17:53):
Well, there's a good part, right? But
you never really know, I think is the point. And
I think we're all living in this kind of like
pushing forward in this mystery, kind of stuck
between awe and terror.
Speaker 2 (18:06):
You sound more ambivalent than I might have thought. Why
is that? Because you just took a giant investment from
OpenAI.
Speaker 3 (18:14):
I think, like, at moments like this, you have a
choice between either renunciation, just like stopping,
out of a place of fear. Which maybe that's right,
you know, maybe fulfillment and happiness, everything we have for
that, is already here, and we should focus our
(18:37):
energies on making peace with our inevitable death.
Speaker 1 (18:41):
In any case, we should do that. But go on.
Speaker 3 (18:42):
The other way to think of it
is to just forge ahead and realize that the potential
of what's on the other end of this might make
us feel in retrospect like we were just in the
earliest possible innings of the human experiment. So
(19:06):
you know, I feel like we're all going to die
one way or another, might as well forge ahead. It's
not ambivalence, but it's more just being clear-eyed,
not trying to pretend that there aren't parts
of it that seem scary.
Speaker 2 (19:21):
I mean, one of the things that's really striking to
me with AI, and that seems quite different from other
technologies in the past, is the people who are working
on it, the people who really understand it, seem more
scared than everybody else.
Speaker 3 (19:41):
I'm not a first time founder. I went through the
experience of being a young person building Groupon,
telling myself a story about how it was going to
revolutionize local commerce and all the good stuff, and it
just didn't turn out that way. And I think we've
seen a generation of tech companies that just like didn't
(20:06):
turn out the way that the super rose-colored-glasses
mission statement would have suggested. And I think we're
just trying to, we just have that experience, that
recent experience, at top of mind, and are trying to
think about it in a way that has guardrails
(20:27):
around repeating that history and just make sure we're really
proud of what we build. Does that make sense?
Speaker 1 (20:33):
It makes sense.
Speaker 3 (20:34):
Am I going to regret saying all this?
Speaker 1 (20:37):
I don't think so.
Speaker 2 (20:37):
You haven't said anything like incriminating as far as I
can tell. You know, I heard somebody saying the other day, like,
it's an interesting question to ask somebody: what was
the first thing you asked ChatGPT to do?
And the first thing I asked ChatGPT to do
was write an episode of Planet Money, the podcast I used
to host, of which there are, you know, a thousand
(20:59):
transcripts on the internet. Write an episode of Planet Money
about whether the FED is going to raise interest rates
by twenty five basis points or leave them unchanged, right,
And it wrote something that was pretty good, like not
a whole show, it's not there yet, but at the
rate of current improvement, you could definitely imagine it writing
that episode pretty well in whatever a year or two
(21:20):
years or some amount of time when I will still
want to be gainfully employed. And like I do wonder
on this one, is there a day, slash, how far
are we from
Speaker 1 (21:30):
the day when generative AI can just make a
podcast without me?
Speaker 3 (21:36):
How does that make you feel?
Speaker 2 (21:39):
I mean somewhat afraid, also like interested in figuring out
how to use it, right, Like it feels like a steamroller.
It's like, oh, maybe I should go get in that steamroller.
If my choices are get in the steamroller or get
run over by it.
Speaker 3 (21:53):
Yeah, I think, like before I comment on it, I
think it's important that people understand, Like it's very true that,
like it's easy to think that I'll have a bullshitty
answer to a question like this because I work at
a tech company that's working on a lot of this stuff.
But you have to remember that, like, if that's true,
(22:16):
we're out of jobs as soon as like a human
is no longer in the loop. That's really bad for us.
Like, does that make sense? Do you buy that?
Speaker 1 (22:26):
At some margin?
Speaker 2 (22:28):
Right, there's a long way between all the people who
are doing it now and zero people. There's a lot
of intermediate cases between the way it is now and
like a fully AI generated podcast, right, and like we're
already starting down the road, right? Getting AI to write
show notes or something, that's basically happened now. And
(22:48):
you know, like I know the history of technology and
the labor market pretty well, you know, from the Industrial
Revolution on.
Speaker 1 (22:55):
I'm pro
Speaker 2 (22:58):
technological innovation. I believe in productivity gains and efficiency gains.
I'm also aware that there are instances when highly skilled
craftspeople are displaced by technology. Right, that is definitely
a thing that happens. And I recognize that the pie
gets bigger and everybody's better off in the long run,
But like, I just want to not get pinched, right,
I just want to be you know, you don't want
(23:18):
to be the one.
Speaker 1 (23:19):
I don't want to be the one. And you know.
Speaker 2 (23:21):
I'm not out on using it. It's getting really good,
really fast. It's doing a lot of the things that
I can do.
Speaker 3 (23:29):
There's one other thing I wanted to say, just about
the fear for your job thing, which is something we
say around here a lot, is that you should struggle
with your story and not your tools. That's almost like
a guiding light for us, is we want to take
all of the cognitive friction away from using the tools.
The funny thing about all of these things is like
(23:51):
there's a brief moment in time where you feel like
you have superpowers, but then everybody has them, and humans
once again become the differentiator. And we really think
making great stuff is always going to be
a thing, and great is always going to be determined
by the human that's in the loop.
Speaker 2 (24:11):
I mean, you know, there's this story about chess, right,
a computer chess program beat a person a long time ago,
decades ago now. But then after that, people pointed out,
optimistically from my point of view, that a
computer plus a person could still beat any computer. Right,
a person working with a computer was better than the
(24:32):
best computer in the world. And that was like the
metaphor for, like, yes, if we work with machines, we
can be better. That is no longer true. The computers kept
getting better, and now people can't beat them.
Even a person plus a computer cannot beat a computer.
And I know that chess is less complex than the
real world, so perhaps there's still a reason for optimism.
(24:52):
I certainly think I'm clever and good at making podcasts
and hope that I can keep doing that. I hope that
I can work with AI to make something better than
any AI, or more like me, or something.
Speaker 3 (25:05):
It might not be true, though. But here's the
amazing thing. People are still playing chess. Right? It's like,
some separation happens where the machines become
so good and we just say, okay, you machines,
you go off and do your thing, and we're going
to be here kind of reveling in our humanity with
(25:28):
each other. I think what we'll see is there's
going to be a certain category of content that's really
just about like the transmission of bits of information from
your brain to my brain, and that's all that it's about.
Speaker 3 (25:42):
Maybe we do one day see humans taken out of
the loop, but I really do believe there will always
be space for like at the core great content, storytelling,
whatever you call it, it's it's about feeling connected to
the humans and other people. And as soon as machines
play to have too heavy a hand, it's just not
(26:04):
interesting anymore.
Speaker 2 (26:08):
We'll be back in a minute with the Lightning Round,
which includes a message from Andrew to.
Speaker 1 (26:13):
his future self. That's the end of the ads. Now
we're going back to the show.
Speaker 2 (26:25):
Okay, so this is the Lightning Round, now you ready.
It's just a bunch of questions. Do you use generative
AI in your life outside of work?
Speaker 1 (26:34):
Now?
Speaker 3 (26:35):
You know what's interesting. I did something this morning where
I was actually like, I don't even care
if it's wrong. I don't even care if it's.
Speaker 2 (26:44):
Like the test of a theory is not is it correct?
Speaker 1 (26:46):
But is it interesting?
Speaker 3 (26:48):
Yeah? Exactly. I was asking it about, I think, like,
my son got hit in the head with a baseball,
and I was trying to... I really should care about this, actually.
Speaker 1 (26:58):
You should not ask ChatGPT anything significant
Speaker 2 (27:00):
about that, not to give you parenting advice.
Speaker 3 (27:04):
It's stuff like that, like, I've pretty quickly been
Speaker 2 (27:08):
able to... like, you should not be asking it for
medical advice about your child.
Speaker 3 (27:15):
I know. But when I say stuff like that, like
I would have googled it and probably just done what
I was going to do anyway. So it was almost
just a curiosity. He was fine. He didn't
Speaker 2 (27:25):
Need to go see a doctor, not according to JGBT. No,
I'm curious about your time working in a recording studio, right,
You worked in a recording studio where musicians came in
and recorded. Did you see there any like moments of
musical genius?
Speaker 1 (27:44):
Is there one?
Speaker 2 (27:44):
In particular?
Speaker 3 (27:46):
I worked for this guy named Steve Albini, who is
a pretty well-known engineer and producer that was in some
popular kind of punk rock bands in the
eighties and currently, and I definitely saw some cool bands. But
I think also I really feel like I learned a
(28:08):
ton from watching him work. He's so talented, so articulate,
so smart in many ways, like an example of what
I aspired to be at the time, and so seeing
that output, but then also seeing him every day and
how hard he worked, it was a real like, oh,
(28:30):
this is how it happens kind of moment for me,
and it kind of inspired me. It inspired within me
a kind of work ethic that I'm not sure I
would have gotten to otherwise.
Speaker 2 (28:44):
What's the best deal you ever got from Groupon?
Speaker 3 (28:53):
Man, you know, it's so funny because, like, obviously, I
used to be asked that question all the time.
I think it was a sensory deprivation tank.
There was a sensory deprivation tank center somewhere in Chicago.
I had never tried it. It was really cool.
Speaker 2 (29:11):
This is a descript question. Now, how will you know
when it's time to do something else?
Speaker 1 (29:15):
You mean leave? Dude.
Speaker 3 (29:18):
I don't know if I want to say this on
a podcast, because if I do decide to take the
company public, it'll come back to haunt me. But I
almost want to say it specifically for that reason. Andrew,
I'm talking to future Andrew right now. You do not
want to be a public company CEO again, Okay, hire
someone else to do that. I know you're talking yourself
(29:38):
into it and saying it's going to be different this time.
It's okay, but you hate it. The things that
those people are good at and are interested
in are different from you. Go do something else.
Speaker 1 (29:53):
Amazing.
Speaker 2 (29:54):
I've never had someone leave themselves a time capsule. I've
done a lot of podcasts before.
Speaker 1 (30:02):
I'm going to send that to you. If you go public,
I'm going to have you back on the show and I'm
going to play it for you.
Speaker 2 (30:09):
Thank you, Thank you for being so generous with your time.
I appreciate your candor and I'm grateful for that.
Speaker 3 (30:14):
I appreciate that. I had fun too. You're
good at your job in the sense that, like,
you bring it out in me.
Speaker 1 (30:22):
I'm better than a machine, for now. That's
my motto: better than a machine, for now.
Speaker 2 (30:34):
Andrew Mason is the founder and CEO of Descript. Today's
show was edited by Sarah Nix, produced by Edith Russolo, and.
Speaker 1 (30:42):
Engineered by Amanda k Wong. I'm Jacob Goldstein.
Speaker 2 (30:45):
We'll be back next week with another episode of What's
Your Problem? And here, finally, is the top of today's show,
the intro to the show as read, if that's what
you'd call it, as generated by Overdub, Descript's AI-powered
voice, whatever, emulator. After every interview we do for the show,
(31:06):
we upload the audio to a piece of software
called Descript. Descript turns the audio into a transcript,
and then I can edit the transcript, cut out the
boring parts, move sections around, and when I do that,
Descript edits the underlying audio to match. As software, Descript
is pretty janky, it's buggy, it's constantly changing in ways
(31:27):
that can make it hard to use, and sometimes it
just blows stuff up. But we use it anyway because
Descript is an incredible advance over what came before. Before
Descript, audio software represented audio files not as words, but
as waveforms, squiggly lines presented on a timeline. So when
Descript came along, being able to edit audio by editing
(31:50):
words on a screen was a huge advance, and it
was an advance made possible by artificial intelligence.