
February 5, 2025 42 mins

Meredith Whittaker is the president of the Signal Foundation and serves on its board of directors. She is also the co-founder of NYU’s AI Now Institute. Whittaker got her start at Google, where she worked for 13 years until resigning in 2019 after she helped organize the Google Walkouts. She speaks with Oz about learning on the job, championing data privacy and being awarded the Helmut Schmidt Future Prize for “her commitment to the development of AI technology oriented towards the common good.”



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thanks for tuning in to TechStuff. If you don't recognize
my voice, my name is Oz Woloshyn, and I'm here
because the inimitable Jonathan Strickland has passed the baton to
Karah Preiss and myself to host TechStuff. The show
will remain your home for all things tech, and all
the old episodes will remain available in this feed. Thanks
for listening. Welcome to TechStuff. I'm Oz Woloshyn, and

(00:22):
I'm Karah Preiss.

Speaker 2 (00:23):
And this is The Story, our weekly episode dropping each Wednesday,
sharing an in depth conversation with a fascinating person in
and around tech. This week, Oz tells me he's bringing
me a conversation with Meredith Whittaker.

Speaker 1 (00:37):
That's right. Meredith leads the Signal Foundation, which is a
nonprofit that oversees the encrypted messaging app Signal, beloved by journalists,
deal makers, drug dealers, and privacy buffs alike.

Speaker 2 (00:50):
It takes wanting to keep a really, really, really big
secret to get on Signal, but we should also think
about everything we say as a really big secret, if
you consider who has access to all of our information.

Speaker 1 (01:02):
One hundred percent. Well, sometimes you don't know what's a
really, really big secret until it's too late. Meredith has
a fascinating background. She started at Google back in the
day when it famously boasted that Don't Be Evil slogan,
and she actually started as a temp in a kind
of customer service adjacent gig, and that was despite being a
Berkeley grad in literature and rhetoric. Eventually she worked her

(01:24):
way up into leadership, founding Google's Open Research group and
then separately founding something called the AI Now Institute at NYU.
And after she left Google, she became head of Signal
in twenty twenty two. But the circumstances of her departure
from Google were memorable, to say the least.

Speaker 2 (01:41):
I think it's fair to say that she got on
everyone's radar back in twenty eighteen when she led the
walkouts at Google over the experience of women in tech
and at Google specifically over things like pay inequality and
the handling of sexual harassment at the company.

Speaker 1 (01:55):
And then there's also Project Maven, which was Google's decision
to take part in a Department of Defense program to
use machine learning and data for battlefield target identification, which
was very very controversial internally and which in part is
what led her to her role today at Signal. Meredith
won a prize in Germany last year called the Helmut

(02:16):
Schmidt Future Prize, where she was honored for, quote, her
commitment to the development of AI technology oriented towards the
common good. So we talk about that too, and why
Meredith fundamentally describes herself as an optimist. But we start
by geeking out over literature.

Speaker 2 (02:31):
Let's hear it.

Speaker 1 (02:33):
So I was reading your bio and one of the
things that stuck out to me was your background as
an English literature and rhetoric major in college. I was
also an English major. As I looked at your kind
of story, there's this arc from arriving at Google as
a graduate in two thousand and six in the Don't
Be Evil era, seeing all of these technologies developed, many

(02:55):
of which went on to be kind of part of
the foundation of what we now call AI, leading a
protest movement and ultimately leaving and it made me think
of Pilgrim's Progress or a Bildungsroman.

Speaker 3 (03:08):
Then yeah, I mean, maybe a jeremiad is more accurate.

Speaker 4 (03:14):
So that was a joke for the four English literature
majors who are listening. I think for me, I was
very sincere. I still am very sincere, and I came
in and took a lot of things at face value,
in part because I had no idea about tech. I
was stepping in and I was asking very basic questions.

Speaker 1 (03:34):
You did in two thousand and six.

Speaker 4 (03:36):
I mean, well, you know, it wasn't the money occupation
when I got into Google. It was actually
a fairly lovely environment in some ways because
a lot of the people there were people who had
a special interest, like nerdy people like myself, but I
was a book nerd, and they were sort of math
and science nerds. And suddenly their special interest, their sweet

(04:01):
passion that they spent so much time thinking about, was
also profitable. It was also important for the world. It
put them in a position of being a meaningful contributor.
And there was a genuineness to that. And we're in
an era now, of course, where you know, no one's
mom is saying be a doctor or a lawyer. They're
saying be an engineer. And so it was, you know,

(04:24):
it was simply a very different world.

Speaker 1 (04:26):
And what about Don't Be Evil? When was that generated?

Speaker 3 (04:29):
Then?

Speaker 1 (04:29):
When was that abandoned? Was that part of the kind
of discourse when you went to Google?

Speaker 4 (04:33):
That was generated before my time, and it was core to
the discourse at Google. I think it was removed by
lawyers in like twenty fifteen, twenty sixteen, I don't actually
remember the date, but look, it's a slogan.

Speaker 3 (04:47):
You know.

Speaker 4 (04:47):
The structure of Google was still up and to the right
forever, so profits and growth were the objective functions. That's the
sort of perpetual motion machine that Google adhered to. However, I
do think don't Be Evil had some power in that
it was a touchstone that we all shared where socially

(05:10):
awkward people, or people who might not quite understand how
to formulate a discomfort, could kind of point to it
and say like, hey, this is making me think that
we should examine this because we said don't be evil.
And most of this, you know, throughout Google's history was
at a time where let's say, the horizons that would
challenge that were very far in the distance. So you

(05:31):
could still make a lot of quarterly returns doing a
lot of things that were understood, and we can debate
that, as good, right? And things like building a
search engine for the Chinese market that would include surveillance
functions for the government was not something you even had
to consider because there was so much low hanging fruit

(05:51):
in terms of continuing to grow, continuing to profit. And
so I think that combination of like it was, it
was sincerely held, and that the actual temptation to do
the things that really got into that sort of hot
water, potential evil, were not on the table at that
point because there was so much else that could be done.
And you know, of course, the ratchet turns, the ratchet turns,

(06:12):
the ratchet turns, and then suddenly those horizons are much closer.
The company is swollen by orders of magnitude, and that
little slogan is quietly removed by a team of lawyers
who probably bought some summer houses with the fees.

Speaker 1 (06:34):
Did you believe that slogan? And was there a moment
along the way where you definitively started not to believe it.

Speaker 4 (06:42):
I liked having that slogan as a tool. Let me
say that, right.

Speaker 3 (06:46):
It was useful. It did work in rooms.

Speaker 4 (06:49):
It you know, allowed us to frame debates in a
way that had, at least as their reference point,
a core ethical consideration. Like, I was young, there
was about a million different learning curves I was scaling
at once. Yes, what does it mean to have a

(07:10):
real job.

Speaker 2 (07:10):
Right.

Speaker 3 (07:11):
I didn't come from this class.

Speaker 4 (07:13):
I didn't have familiarity with these cultures or environments because
I'd literally never been in an environment that looked anything
like you know what my family still jokingly refers to
as a business job.

Speaker 3 (07:24):
Right.

Speaker 4 (07:25):
I didn't know what was normal and what was not normal.
So for me, Google was normal, and so I just
want to be clear. Like, I don't know if I
believed it or not. I found it useful and I
saw it do work in the world, and I liked
the idea, Like, it sounds great that we're giving people
ads that match their interests. What could be wrong with
that? We're organizing the world's information. Wow, all of this

(07:48):
is cool.

Speaker 3 (07:49):
Like I was like, Okay, what is tech? What is Google? Hey,
sit me down and tell me what is information? I
was a master at asking the dumbest questions, but sometimes
they would unlock whole worlds for me, and then I
would realize, like, oh, information is just websites. Okay, but
you can't make a website if you don't have a computer,
and you have to be able to code HTML markup

(08:12):
to do that. Okay, So it's more limited than I thought. Okay,
well what is an index? Oh, it's ranking it. But
isn't that a value judgment?

Speaker 2 (08:20):
Well?

Speaker 4 (08:20):
I guess it's not if it's an algorithm, but like
who makes the algorithm? And so it was butting my
head up against what are kind of dumb, very basic
questions and sort of sensing an insufficiency of the responses,
and like, you know, over almost twenty years now, continuing
that process of trying to, you know, just figure out

(08:41):
like what are we doing here?

Speaker 3 (08:44):
And how do I understand it? And that was basically
my education.

Speaker 1 (08:49):
So fast forward, you know, thirteen years from two thousand
and six to twenty nineteen. How did you get to
that point where it became impossible for you to stay?

Speaker 3 (09:01):
I don't have a pat answer, But how did I
get to that point? Well?

Speaker 4 (09:06):
Again, I was a sincere person and I had done
a lot in my career. I had a lot of
opportunities and then I took them, and you know, I
built a research group inside of Google.

Speaker 3 (09:17):
I was involved in.

Speaker 4 (09:18):
Efforts to spread privacy and security technologies across the tech ecosystem.
I was a champion for privacy within Google. I had
built you know, open source measurement infrastructures to help inform
the net neutrality debate. I had always tried to be
on the side of doing social good, to.

Speaker 3 (09:40):
Be very bland about it.

Speaker 4 (09:42):
Yeah, And I had been able to do very very
well doing that. And at the point where I became frustrated,
I had already established the AI Now Institute, co founding
that at NYU, looking at some of the present day

(10:02):
real-world harms and social implications of AI technologies. This
was twenty sixteen. I had become known through the academic
world and the technical world, and particularly within Google itself
as an authority and a speaker and an expert on
those issues. So talking about some of the harms that

(10:24):
these technologies could cause, criticizing within, essentially criticizing within. I
saw myself as a resource, and actually many many people did.
I would be brought in when teams were struggling with things,
I would advise them. I would often give academic lectures
that went against the party line at Google, but were
very well cited, were very empirically documented, and I had

(10:48):
felt that I was making some headway right.

Speaker 3 (10:51):
People were listening to me. I was at the table,
so to speak.

Speaker 4 (10:55):
And then in twenty seventeen, fall twenty seventeen, I
learned about a military contract that Google had secretly signed
to build for the US drone program. And that was
at a time and still today, when those systems were
not purpose built right, they were not safe.

Speaker 3 (11:14):
There were ethical concerns we had. Sergey Brin, the co
founder of Google, had made.

Speaker 4 (11:20):
Remarks that were very clear in the past about keeping
away from military contracting, given Google's business model, its public interest.
It serves the world, not just the US.

Speaker 1 (11:28):
Business model, meaning that it knows a lot
about people, essentially?

Speaker 4 (11:33):
Ultimately it's a surveillance company. It collects huge amounts of
data about people, and then it creates models from that
data that it sells to advertisers, and advertisers can leverage
Google's models and ultimately the surveillance that they're built on
in order to precision target their message and thus reach

(11:53):
more customers. And that remains the core of the business model,
that remains the core of the Internet economy.

Speaker 3 (11:59):
And putting that type.

Speaker 4 (12:01):
Of information at the service of one or another government
is, in my view, dangerous. And the first
thing I did, I think, was start a Signal thread
with some people and say, what should we do?

Speaker 1 (12:12):
This was Project Maven. It was to do with helping
the US government target enemy combatants on the battlefield.

Speaker 4 (12:18):
Well, it was building surveillance and targeting AI for
the drone program.

Speaker 1 (12:24):
And what was the result? You started talking
to people on Signal. What happened next?

Speaker 4 (12:30):
You know, it grew and it grew, and then I
wrote an open letter. That letter got signed by thousands
of people. Eventually someone leaked it to the New York Times.
It snowballed, it grew into a movement, and ultimately Google
dropped that contract. Now you know, these efforts are ongoing.
I want to be clear that this was a fairly

(12:53):
different time. I think now there's a lot more comfort
with military technology, but I think those issues have
yet to be addressed. The issues of the danger of
yoking the interests of any one country irrespective to a
massive surveillance apparatus, the types of which we haven't seen

(13:13):
in human history. That's a real concern, and I think
that is a concern across the political spectrum. We should all
be concerned about that, and frankly, I think it speaks
to the need to question that form altogether. The form
of sort of the surveillance business model and the collection
of massive amounts of information that can be used to harm, control, oppress,

(13:36):
et cetera.

Speaker 1 (13:37):
How did you end up finally deciding it was time to leave?

Speaker 4 (13:41):
There's not one answer to that, you know, I realized
pretty quickly that I had poked a bear. I had
a huge amount of support inside the company.

Speaker 3 (13:50):
I still do. I don't think anyone's saying I'm wrong.

Speaker 1 (13:53):
I mean, there were twenty thousand employees at one point,
right, involved in it. Yeah, and it's a huge, no, I mean,
how many employees are there?

Speaker 4 (14:01):
At that point, I think it was like two hundred thousand. Yeah,
so it's significant.

Speaker 3 (14:07):
It was the largest labor action in tech history, which
speaks not to the militancy of people who work in tech,
but I think speaks to the fact that this was
really a rupture of a lot of discomfort that had
been building for a long time. Right, Like, people.

Speaker 4 (14:24):
Who often got into this industry because of their ethical compass,
because they really, you know, wanted to quote unquote change
the world, were feeling betrayed and feeling like their hands
were no longer on the rudder in a way that
they felt comfortable with.

Speaker 2 (14:37):
Coming up, we discuss why Signal broke through the noise. Stay
with us.

Speaker 1 (14:44):
So, you poked the bear and it became time
to leave. I mean, not for nothing,
one of the wedge issues was this use of Google's
technology for military purposes. And I think twenty twenty four
was the year. I mean, of course we will remember,

(15:05):
you know, the strikes, the drone strikes during the Obama years.
And this is not per se a new phenomenon, but
something about Ukraine and Gaza has really elevated into mainstream
consciousness what it means to have autonomous weapon systems.

Speaker 4 (15:20):
Yeah, I think the environment has changed a lot. Look
on one side, there's a very real argument that I'm
deeply sympathetic to, like military infrastructure and technology is often outdated.
They're amortizing janky old systems that are not up to snuff.
Logistics and processes are threadbare, all of that being true,

(15:45):
and some of the new technologies would make that a
lot easier. On the other hand, I think we're often
not looking at these dangers. What does it mean to
automate this process? What does it mean to predicate
many of these technologies on surveillance data, on infrastructure that

(16:06):
is ultimately controlled by a handful of companies based in
the US. How do we make sure the very real
dangers of such reliance are actually surfaced and quick fixes
to bad practices accrued over many, many years of grifty
military contracting don't come at the expense of, you know,

(16:29):
civilian life. Don't come at the expense of handing the
keys to military operations to for profit companies based in
the US.

Speaker 1 (16:38):
And you talked about Project Lavender in your Helmut Schmidt speech,
which I found fascinating. Why do you think it was
important to start there?

Speaker 4 (16:47):
Well, Lavender, Gospel and Where's Daddy are three systems that
were revealed to be used in Gaza by the Israeli
military to identify targets and then, you know, basically kill them.

(17:07):
I looked at those because they are happening now, and
because they were the type of danger that the thousands
of people who were pushing back on Google's role in
military contracting were raising as a hypothetical, and because I
think it shows the way that the logics of the

(17:29):
Obama era drone war have been supercharged and exacerbated by
the introduction of these automated AI systems. So for folks
who might not remember, the drone war was the theater
to use the military term in which the signature strike
was introduced, and the signature strike was it's killing people

(17:53):
based on their data fingerprint, not based on any real
knowledge about who they are, what they've done, or anything
that would resemble what we think of as evidence. So,
because someone visited five locations that are flagged as terrorist locations,
because they have family here, because they are in these

(18:16):
four group chats, whatever it is, I'm making up possible data points.
But you could model what a terrorist looks like in
any way you want. And then if my data
patterns look similar enough to that model, you deem me
to be a terrorist and you kill me. And you
know it is exactly the logic of ad targeting.

Speaker 3 (18:39):
Right.

Speaker 4 (18:39):
I scroll through Instagram. I see an ad for Athletic Greens,
because, you know, they have my credit card data: you
buy healthy stuff, you're this age group, you live in
New York, whatever it is, we assume you will buy
Athletic Greens or be likely to click on this ad,
and we'll get paid. It's that, but for death. And

(19:00):
the Lavender, Where's Daddy, and Gospel systems are basically that
for death, supercharged.
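To make the ad-targeting half of that analogy concrete, here is a minimal, hypothetical sketch of the scoring logic being described: a handful of behavioral traits compared against a modeled profile, with an action triggered once the match crosses a threshold. The trait names, weights, and cutoff are invented for illustration; the point of the passage is that the same pattern-matching machinery, pointed at different data and different actions, carries very different stakes.

# Hypothetical illustration of profile-based targeting: score how closely a
# person's data fingerprint matches a modeled profile, then act on a threshold.
# Trait names, weights, and the 0.7 cutoff are invented for this sketch.

PROFILE_WEIGHTS = {  # what a "likely customer" looks like to the model
    "buys_health_products": 0.4,
    "age_25_to_40": 0.2,
    "lives_in_big_city": 0.2,
    "follows_fitness_accounts": 0.2,
}

def match_score(fingerprint: dict) -> float:
    """Weighted overlap between observed traits and the modeled profile."""
    return sum(weight for trait, weight in PROFILE_WEIGHTS.items()
               if fingerprint.get(trait, False))

def decide(fingerprint: dict, threshold: float = 0.7) -> str:
    return "serve the ad" if match_score(fingerprint) >= threshold else "skip"

person = {"buys_health_products": True, "age_25_to_40": True,
          "lives_in_big_city": True, "follows_fitness_accounts": False}
print(decide(person))  # scores 0.8 -> "serve the ad"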

Speaker 1 (19:12):
And is it a coincidence, the resemblance between how ad
targeting works and how these signature strikes on the battlefield
or in war work? Was the military inspired in some
way by the ad industry, or what's the kind of
source of the connection?

Speaker 4 (19:31):
I wouldn't be surprised. But then again, you know, it
wasn't just the ad industry. You know, there was an
Obama era and kind of a neoliberal faith in data.
Now there wasn't much of a definition of what data is,
but the idea that data was more objective, less fallible,
more scientific. These are almost like mythologized versions of data,

(19:53):
and that if we relied on the data instead of
subjective decision making, we would reach determinations that were better,
We would make the right choices. Now, part of my
background is I came up doing large scale measurement systems,
so I was in the business of making data.

Speaker 1 (20:12):
That's interesting. I mean, I just want to pause on that,
because that's an interesting way. I've never heard it expressed before.
But essentially making data equals measuring stuff.

Speaker 4 (20:21):
Yeah, data is a very rough proxy for a complex reality.

Speaker 3 (20:26):
Right, So you.

Speaker 4 (20:27):
Figure out, like, what's the way we're gonna create those proxies.
How are we going to measure it? Right, whether we
measure human emotion, or measure the timing of street lights,
or measure the temperature in the morning based on a
thermometer that's calibrated in a certain way. We log all
of those, and they become data, right. And of course those
methodologies are generally created by people with an interest in

(20:50):
answering a certain set of questions, maybe not another set
of questions. In the case of Google or another company,
they were interested in measuring consumer behavior in service of
giving advertisers access to a market. This is, in effect,
like a rough proxy created to answer very particular questions
by a very particular industry at a very particular historical conjuncture,

(21:12):
and it is then leveraged for all these other things.
And I think shrouding that data construction process in sort
of scientific language, you know, assuming that like data is
simply a synonym for objective fact, that's a very convenient
way of shrouding some of those intentions, right? Shrouding the

(21:33):
distinction between those who get to measure and thus get
to decide, and those who are measured and are thus
subject to those decisions.

Speaker 1 (21:42):
I mean, twenty twenty four was the year of this
hypothetical of what was raised at Google becoming real in Gaza.
It was also the year that two Nobel Prizes, one
in chemistry and one in physics, went to one former
and one current Google employee, and both in the realm
of pushing fundamental science forward. So how do you see

(22:05):
the moment in AI, with both this kind of
enormous hope in something like drug discovery and the obvious peril
in targeting people based on their metadata?

Speaker 4 (22:15):
Well, I think there's a lot of things AI could
do very well right. Drug discovery is one, it's very interesting,
and the idea of being able to create drugs for
things that affect smaller numbers of people and thus aren't generally incentivized.

Speaker 3 (22:29):
All of that, like, yes, let's leverage what we.

Speaker 4 (22:32):
Have to make life as beautiful as possible for as
many people as possible. Amen, The issue comes down to
the political economy of AI and what we're actually talking about.
And I think when we look at the AI industry
right now, and when we look at the technologies that
the Nobel Prize was awarded for either creating or leveraging,

(22:56):
we begin to recognize that AI is an incredibly centralized
set of technologies. It relies on huge amounts of computational resources,
and then there is data that's required to train these models.
And so the paradigm we're in right now is the
bigger is better. The more chips, the more compute, the

(23:17):
more data, and you combine those in whatever architecture and
you create a model, and then that model is purportedly
better performing and therefore better, and therefore we're advancing science.

Speaker 1 (23:28):
And that's a useful argument, I guess for very very
large companies whose inherent logic is to grow larger. Right,
I mean, if you're Amazon or Google or Microsoft,
what is there to do for growth other than popularizing
this model of how the progress of AI works?

Speaker 4 (23:45):
Yeah, I mean it would be very bad for these companies'
bottom lines if it turned out that much smaller models
using much more bespoke data were in fact much better, right,
Because they've thrown a lot of money behind this. The narratives
we hear about tech, and the narratives we hear about
progress, are not always born of science. And I think

(24:07):
when you look at that, then you have to realize,
like AI is not a it's not a product of
scientific innovation that everyone could leverage and these guys just
figured it out first, right, It's a product of this
concentration of resources, and in fact, in the early twenty
tens when the sort of current AI moment took off,

(24:28):
the algorithms that were animating that boom were created in
the late nineteen eighties. But what was new were the
computational resources, the chips and the ability to use them
in certain ways, and the data and the data is
that product of this surveillance business model that Google, Meta

(24:49):
et alia sort of participated in and had built their
infrastructures and businesses around. So of course, who has the
access to those infrastructures, who can meaningfully apply these old
algorithms and begin to fund research sort of optimizing them,
building new approaches to this big data, big compute AI paradigm.

(25:10):
It's the same surveillance platform companies, And so all of
this comes back to the question of like, well, the
Nobel is awarded for AI this year, don't we want
better drug discoveries? And it's great, yes, we absolutely do,
but if you look at the market conditions, it's unclear
how that better drug discovery is actually going to lead

(25:32):
to better health outcomes, and so who will the customers
of that AI be?

Speaker 2 (25:38):
Right?

Speaker 4 (25:39):
Most likely it will be pharmaceutical companies, not altruistic organizations.
It'll be insurance companies who may actually want to limit care. Right,
So you have a diagnostic algorithm, but it's not used
to make sure that I'm given the care I need,
even if it's expensive early so I live a long life.
It's used to make sure I'm excluded from the pool

(26:00):
so that I am.

Speaker 3 (26:00):
Not a cost going forward.

Speaker 4 (26:02):
So I think that's the lens that we need to
put on some of this, because the excitement is very warranted.
But in the world we have, it's pretty clear without
other changes to our systems, that AI, given the cost,
given the concentrated power, given the fact that these entities
need to recoup massive investment, is not going to be

(26:25):
an altruistic resource, but will likely be ramifying the power
of actors who have already proven themselves pretty problematic in
terms of fomenting public good so to speak.

Speaker 2 (26:40):
Coming up, we'll hear about Meredith Whittaker's speech upon winning
the Helmut Schmidt Future Prize and why we should rethink
our preconceptions about data. Stay with us.

Speaker 1 (26:57):
Talk about Signal and not for nothing. Was a was
the first place you went when you learned about Project Maven,
and is now where you spend a lot more time.
So for those who don't know what it is and
how it works and why it's important, I mean, just
tell us a bit about it.

Speaker 3 (27:11):
Yeah, Well, you know, Signal is incredible.

Speaker 4 (27:14):
I'm really honored to work here, and I've been along
on the journey in various capacities for a very long time.

Speaker 3 (27:22):
It is the.

Speaker 4 (27:22):
World's largest actually private communications network in an environment as
we just discussed, where almost every one of our activities,
digital and increasingly analog are surveilled in one way or
another by a large tech company, government, or some admixture.

(27:42):
So it's incredibly important infrastructure. We hear human rights workers
rely on Signal. Journalists rely on Signal to protect their sources.
We have militaries relying on Signal to get communications out
when they're necessary. We have boardrooms, you know, most
boardroom chatter, most CEOs I meet use Signal religiously.

Speaker 3 (28:05):
Governments use Signal. So anywhere where the confidentiality of communications
is necessary, certainly anywhere where the stakes of privacy are
life or death. We know that people rely on Signal
and our core encryption protocol was released in twenty thirteen
and it has stood the test of time. It is
now the gold standard across the industry.

Speaker 1 (28:26):
And it's used by WhatsApp as well.

Speaker 3 (28:27):
Right, yeah, it's used by WhatsApp.

Speaker 4 (28:29):
It's used across the tech industry because one of the
things about encryption is it's very hard to get right.
It's very easy to get wrong, and so if someone
gets it right, you want to reuse that. You want
to use that formula. You don't want to DIY because
it's almost certain you'll have some small error in there,
and when we're talking about life or death consequences, you

(28:52):
can't afford that.
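To make the point about not rolling your own encryption concrete, here is a minimal sketch, assuming Python and its widely used cryptography package. It is emphatically not the Signal protocol, which layers X3DH key agreement, the Double Ratchet, and metadata protections on top of primitives like these; it only illustrates the principle of reusing vetted building blocks (X25519 key agreement, HKDF, an AEAD cipher) rather than inventing your own. The key names and the info label are illustrative.

# Hedged sketch: reuse vetted primitives instead of DIY crypto.
# NOT the Signal protocol; no identity keys, no Double Ratchet,
# no forward secrecy across messages, no metadata protection.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Diffie-Hellman exchange: both sides compute the same shared secret.
shared_alice = alice_private.exchange(bob_private.public_key())
shared_bob = bob_private.exchange(alice_private.public_key())
assert shared_alice == shared_bob

# Derive a symmetric key from the raw shared secret (never use it directly).
key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"demo-handshake-v1",  # illustrative label, not a real protocol ID
).derive(shared_alice)

# Authenticated encryption: confidentiality plus tamper detection.
aead = ChaCha20Poly1305(key)
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"this stays between us", None)
print(aead.decrypt(nonce, ciphertext, None))

Even this toy handshake leans entirely on audited primitives; everything the conversation calls hard, who talked to whom, contact lists, long-lived sessions, is what real protocols and Signal's infrastructure add on top.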

Speaker 1 (28:54):
But why, if WhatsApp and Signal use the same open
source code, am I so much safer using Signal
than WhatsApp?

Speaker 4 (29:00):
Because the Signal protocol is one slice of a very
large stack, and WhatsApp uses that slice to protect what
you say. But Signal protects way more than what you say.
We protect who you talk to. We don't have any
idea who's talking to whom. We don't know your profile name,

(29:21):
we don't know your profile photo. We don't know your
contact list, and it goes on and on and on.
So everything we do we sweat the details in order
to get as close to collecting as little data as possible.

Speaker 1 (29:36):
Those things that you're describing that Signal doesn't collect are
the inputs to the signature strikes we were discussing earlier.

Speaker 3 (29:43):
Could be? Could be.

Speaker 4 (29:44):
I mean one of the issues with proprietary technology or
classified information is we don't totally know, but it's that
type of information that has been reported and mentioned as
inputs to those strikes.

Speaker 1 (29:59):
Yeah, and you're not the CEO of Signal, you're the
president of the Signal Foundation. Can you kind of explain
why that is and what it means?

Speaker 4 (30:09):
Well, I think in this moment in the tech industry,
that would be dangerous. Although it's not profits that we
are opposed to, it is the particular incentive structure of
the tech industry which ultimately you either collect data and

(30:34):
monetize that you're creating models to sell ads as we discussed,
or training AI models or selling it to data brokers or
what have you, so surveillance, or you provide sort of
goods and services to those who do the picks and
shovels of the surveillance industry. So you can imagine a
board with a fiduciary duty governing a for profit signal.

(30:55):
At some point that fiduciary duty is going to take
precedence over the obsessive focus on privacy.

Speaker 3 (31:01):
So that's really the crux of it.

Speaker 4 (31:04):
And we're in an industry where consumer apps are quote
unquote free, right? There is not a norm of paying
for these things upfront by the consumer. It's not for
lack of innovation that we don't have many many more
signals or signal like products that are focused on democracy,
focused on privacy, focused on ensuring fundamental rights, focused on

(31:25):
the public good. It's really because many of those things
cannot be subject to the surveillance business model and thus
don't really exist because capitalizing them is such an issue.

Speaker 1 (31:38):
You mentioned that you know there's a consumer expectation that
tech is free, or at least free at the point
of delivery. You also authored a piece in Wired
recently kind of arguing twenty twenty five might be
the beginning of the end of big tech. But I
guess my question is big tech of so successfully kind
of created this expectation on the consumer side that tech

(32:01):
products are free and that essentially I pay for the
product by allowing myself to be surveilled. Can there be
a fundamental shift away from the big tech hegemony unless
and until consumers are willing to pay with money rather
than data for infrastructure services?

Speaker 3 (32:22):
I mean, I absolutely believe there can be. This is
what we have.

Speaker 4 (32:27):
Thirty years of this, twenty years of this. This is
hyper novel in terms of human history. This is in
no way natural. There's you know, there's a longer history
of the sort of particular regulatory and political environment this
came out of. We now have the FBI advising people

(32:47):
not to send SMS messages following a massive hack that
enabled China and I don't know who else to access
telephony networks and surveil US citizens, including many high ranking citizens.
And so we're in a geopolitically volatile moment where the

(33:07):
lines between nations and jurisdictions and interests are getting a
bit more brittle, a bit more crisp, and I think
the imperative of moving away from this centralized surveillance tech
business model is really becoming clear to almost everyone. And
that was what animated the piece in Wired, you know.

(33:28):
I would say it's more of a wish casting into
the future than necessarily a prediction of what will happen.

Speaker 3 (33:35):
But insofar as like hope.

Speaker 4 (33:37):
Is an invitation for action, I really think there's a
lot of creative work that could be done to undo
that business model. Right, the question is really where do
the resources come from? And that's a question we can
begin to answer. Okay, are there endowments, Are there certain
technologies like signal, like some of the cloud and core
infrastructure that should be more openly governed that really, you know,

(33:58):
the more dependent we are on these infrastructures, the more
these infrastructures are controlled by you know, a small number
of actors, the more perilous those single points of failure become.
And I think the more attentive people are becoming to
that peril, and the more appetite there is for solutions
that may not have been on the table even a

(34:19):
couple of years ago.

Speaker 1 (34:21):
Yeah, it is interesting. I mean, I was at a talk
the other day and somebody was talking about cloud computing and
how the metaphor kind of suggests this kind of
decentralized model that's kind of around all of us.
But what cloud computing has actually meant is this
incredible kind of centralization of computational power in the hands
of basically two nations and ten companies.

Speaker 4 (34:42):
Yep, yeah, exactly, and you know, two nations, but the
US being dominant there. Amazon, Microsoft, and Google have
seventy percent of the global market, seven zero, insane. So
that means other nation states are running on Amazon, running
on Google, right, And there is no way to simply
create a competitor because you are talking about a form

(35:05):
that was understood in the utilities context, particularly in telephony,
as a natural monopoly for a very long time, and
that has you know, ultimately been kind of entered into
the gravitational pull of these handful of companies. So it's a.

Speaker 3 (35:20):
Very big problem.

Speaker 4 (35:21):
But again, it doesn't mean that things like signal don't exist.

Speaker 3 (35:25):
Signal exists. Swimming upstream, Signal is proving that, one, extremely
innovative technologies can and actually do exist, that even swimming
upstream, we can create something that has set
the gold standard and proven that yes, there's a ton
of innovation left to do in tech, in privacy, in

(35:48):
rights preservation, in creating.

Speaker 4 (35:49):
Tech that actually serves a social need. You know, it
is popular, and Okay, now let's solve the issue of
this toxic business model to actually foment innovation, to actually
foment progress, whatever the claims of the marketing on the
tin of big tech may be.

Speaker 1 (36:07):
Yeah, and just to close, I really was taken by
your Helmut Schmidt speech and you had a line, make
no mistake, I'm optimistic, but my optimism is an invitation
to analysis and action, not a ticket to complacency. Yes,
talk about.

Speaker 4 (36:22):
That, Well, that is true, and I think that was
me also pushing back a little bit on the idea
that a grim diagnosis is somehow pessimism, right, uh huh, Like,
we know it's not great, we know it's not great.
Along a lot of axes, tech is not the only
one where we know things aren't working. But the most dangerous,

(36:43):
and frankly, the most pessimistic place to be is pushing
that aside, disavowing that, diluting that analysis in service of
immediate comfort.

Speaker 1 (36:53):
Ceasing to bother to critique because you think there's no point, essentially.
That's the ultimate pessimism.

Speaker 4 (36:57):
Well, I would say that's almost nihilism. I would say
the pessimism is where we paint a rosy picture so
we don't have to feel uncomfortable and then base our
strategy on that rosy picture in a way that is
wholly insufficient to actually tackle the problem, because we never
scope the problem, because we were unwilling to carry the
intellectual and emotional responsibility of recognizing what we're actually up against.

(37:24):
So I think the most optimistic thing we can be
doing right now is really understanding where we stand, really
understanding what is the tech industry, how does it function,
how does money move? And okay, what are the alternatives
that exist, how do we support them, and how do
we recognize the power that we have to shift things right?

(37:46):
But that has to be based on a realistic picture.
It can't be based on delusion.

Speaker 1 (37:51):
And part of that, as you know firsthand, coming back
to the beginning of the conversation, is acting as a collective.

Speaker 3 (37:58):
Yeah.

Speaker 4 (37:58):
Absolutely. I think no one person is going to do
this alone, ever, ever has, ever will. You know, one person
may have taken credit at some point or another, but ultimately,
like, the Internet was a collective effort, the open source
libraries are. You know, every single big tech company that
has one CEO has millions and millions of contributors.

(38:21):
And so we change this together. The issue is will
and the issue is resources. The issue is not ideas right.
We're not waiting for one genius to figure it out.
We're waiting for a clear map and some space to
examine it together and share insights and then figure out

(38:43):
how to push forward to a world where it's you know,
thousands of interesting projects that are all thinking together about
creating much better tech and reshaping the industry and its
incentives in order to nurture that.

Speaker 1 (39:01):
Us.

Speaker 2 (39:02):
First of all, Meredith said about fifteen things that I
would put on a small office couch pillow if I
knew how to needlepoint.

Speaker 1 (39:07):
Well, start with the thing you would love most to
put on a needlepoint.

Speaker 2 (39:11):
A grim diagnosis is not an invitation to pessimism.

Speaker 1 (39:14):
I love that too.

Speaker 2 (39:15):
I just think that, especially right now, it's easy and
right to say that there's a lot to be wary of. Yeah,
but it also is not an invitation to be weary,
ad infinitum.

Speaker 1 (39:28):
No. No, I totally agree with you. I mean, I
think often if you're critical, you'll be called out for being pessimistic.
But I thought the way that Meredith reframed that and
was basically no, no hold on a second it's optimistic
to be critical was really fascinating.

Speaker 2 (39:42):
Yeah. I think my favorite line in the whole interview,
and maybe one of my just favorite lines I've heard
in reporting on technology ever, is that data is a
rough proxy for complex reality.

Speaker 1 (39:53):
Sounds like you're going to need two pillows or just a.

Speaker 2 (39:56):
Very big one. I actually want to talk about something
that we've talked about in our production meetings for this
show, which is that we don't want this show to be
all doom and gloom. And you know, when I found
out that you were interviewing Meredith, I was interested to
see what she would say outside of a sort of
technopessimist viewpoint, given that she has been quite critical of
Google and also of surveillance capitalism. And yet when you

(40:19):
hear her talk about Signal, I was sort of optimistic
in terms of, oh, here's a tool that we actually
have been encouraged by the FBI to use, yes, yet
still don't use. But here's a tool that is actually
answering a real fear that a lot of people have

(40:40):
that Meredith has, of privacy and doing some actionable stuff
to make us feel more secure.

Speaker 1 (40:50):
Well, with Meredith, ultimately the buck stops with her at Signal,
and like, if you're a product lead or if you're
running a company whose main thing is a product, you
sure as hell better be optimistic because you're not going
to get through your day.

Speaker 3 (41:03):
That's right, that's right.

Speaker 1 (41:04):
But but I agree with you. The thing that's really
stuck with me was this concept of technology being dual use.
And Meredith, you know, used this phrase signature strike, which
it turns out can apply both to serving you with
an ad to make you want to buy something and
using your metadata to make a calculation that you're most

(41:26):
probably an enemy combatant or a terrorist and kill you.
I mean, this is the idea that it's the same
basically set of analysis that can apply to shopping or
life and death. I don't know, I just I have
no sort thinking about that.

Speaker 2 (41:40):
Well, it certainly makes me want to protect that data set.
That's it for TechStuff today. This episode was produced by
Victoria Dominguez, Lizzie Jacobs, and Eliza Dennis. It was executive
produced by me, Karah Preiss, Oz Woloshyn, and Kate Osborne for
Kaleidoscope, and Katrina Norvell for iHeart Podcasts. Our engineer is

(42:02):
Bihit Frasier. Al Murdoch wrote our theme song.

Speaker 1 (42:05):
Join us on Friday for TechStuff's The Week in Tech.
We'll run through our favorite headlines, talk with our friends
at 404 Media, and try to tackle a question:
when did this become a thing? Please rate and review
on Apple Podcasts, Spotify, or wherever you get your podcasts,
and reach out to us at tech stuff podcast at
gmail dot com with your thoughts and feedback. Thank you,
