Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome back to another episode of Taking Inventory.
This week we welcome Alex Kantrowitz, the journalist
behind Big Technology Newsletter, the host of the Big
Technology Podcast, somebody who's been a respected and
award-winning journalist for a number of years.
His podcast is a top ten in tech.
He's a regular contributor on CNBC, and he's interviewed
(00:21):
people like Kara Swisher, Marissa Mayer, the CEO of
Perplexity, and the head of Meta's AI division.
So talking to pretty much everyone in this space who knows
what's going on. Alex, super excited to have you
here. Welcome.
Thank you. Great to be here.
Yeah, man, we appreciate it. So, you know, I think, you know,
one thing that we wanted to kind of pick your brain on is, you
know, we have a ton of primarily product managers that listen to
(00:42):
this podcast that are sort of stepping back and they're at
like the big walled gardens. And they're like, how should I
adapt my products to, like, this AI landscape?
And given that you've interviewed every person in this
space, I kind of thought it's almost like we're talking to a
GPT that's been trained on how AI leaders think.
To some extent, that's right.
Eventually you won't even have to interview me, you'll just be
(01:03):
able to interview my AI clone or something else that approximates
what I would be in some other universe.
So we're going to take advantage of this opportunity while we
can. Yeah, I'm thankful we have the
real one here. But yeah, I mean, I thought it'd
be really cool if you could, just for our listeners, could
you kind of just give us a state of play of, like, what's going on
in AI? Like how should people think
(01:23):
about Claude versus OpenAI versus open source.
You know, just like, can you maybe just give us a little
bit of a view into the landscape and kind of where
things are today? Yeah, things are moving fast.
So as you mentioned, I just published an interview, I don't
know, a half hour ago with Ahmad Al-Dahle, who is the head of
generative AI at Meta, talking about their new Llama 3 model.
(01:46):
So we've gone from Llama to Llama 2 to Llama 3, effectively
within under a year. And just to give you a sense as
to the magnitude here, they took 100 times more compute and 10
times more data to train Llama 3.
And we're talking about a matter of months going between Llama
2 and Llama 3. So this field is moving
(02:10):
exceptionally fast, and to try to handicap, like, who's the leader
is difficult, because you have one company go into production
with a model that seems to best the others.
And then the other companies are working on their own
models and they're probably ahead of whatever is out there
right now. So of course you have the major
players like Meta, Anthropic, OpenAI, Google, all vying to
(02:35):
try to make these large language models as good as they possibly
can. And I think that's like
effectively step one in this moment that we're having is get
these models to be conversant, hallucinate less, be able to
handle lots and lots of data, understand more relationships
between things. And that's why you see that
these companies are really effectively maxing out the
(02:55):
amount of compute and data that they're using to try to build
these things. Now we're starting to head a
little bit closer towards Step 2, and Step 2 is going to try to
combine these large language models with other machine
learning practices to try to make them even smarter and even
more capable. And so one way to do that is
(03:17):
build reasoning in. So I was just speaking with the
guy from Meta, Ahmad, about this.
Effectively what you do right now with a large language model is
you sort of ask it to tell you an answer about something and it
uses its, you know, effectively all of its training data to get
you that answer. And what they're going to do is
create reasoning within it. So basically allowing it to go
(03:39):
step by step, OK, here's the question.
How do I go about getting the answer? Breaking it down to the
component parts, maybe searching the Internet for it, you know,
effectively evaluating whether the answer was good, and then
spitting it back to you.
So many more steps effectively in the answering of the
questions for you and then eventually the ability to take
action. And that's where we get into
(04:01):
this new agent paradigm, which seems like we're going to get
close to, where these models will not only, like, you know, you
give them the input, like a question, and
then they will spit something out to you, the answer.
Now they might go back to you and ask for clarification or
(04:22):
they might go out on the Internet and take action for
you. So that is the state of play.
It's wild. And I heard, you know, you
did that interview with Jack Clark and he was sort of
talking about how Claude is, like, now trying to, like, you know,
basically explore, like, is there creativity in the thinking?
And like, can you actually understand why they're making
(04:43):
those sort of choices? Are we now getting,
is that like a reality? Like are these
things going to be not only agents, but like creative
partners for us? Or is that farther off than
we think? Yes, so Jack Clark is the co-
founder of Anthropic and he came on for such an interesting
conversation on the Big Technology Podcast.
(05:05):
We talked about, you know, not only where Anthropic is going,
Anthropic, by the way, you know, billions of dollars raised
across Google and Amazon. So they have the resources to
compete here. And also the scientists: they
have a good amount of people that have come over from OpenAI
to start this splinter company and they're obviously
showing tremendous results. So there are like two questions
(05:26):
here. So the first is like, are the
models going to be creative themselves?
And that's what I was sort of getting at with Jack basically
asking, are they able to generalize outside of their
training set? Because we still don't fully know
whether these models are just spitting back what we trained
them with or are able to make connections on their own and
then sharing those new connections.
So actually one of the most fun parts of the conversation with
(05:49):
Jack was when we were talking about how you would determine
whether these bots can actually come to new conclusions on their
own and not just spit back their training data.
So I'm like, well, why don't you just, like, train the bot on 75%
of a field's knowledge and see if it can come to the next 25%
on its own? And he said, yes, they're
actually trying to evaluate that right now.
(06:10):
And one of the ways that they're doing it is within, I think
maybe the field of biology, is they are giving it all the
non-classified information and seeing if it can sort of come to
the classified stuff on its own. And if it can, then, you know,
it's really able to generalize. And then you have some serious,
serious questions about where this technology is going.
(06:32):
So that was fascinating. We don't know the answer yet.
I suspect it won't be as exciting when we get those
answers. I think that we're still
probably limited to the training set, but my mind is open.
Like I've decided not to, you know, sort of write these things
off before we have the answers. But to go to your other question
about whether they're going to be creative partners.
(06:52):
I mean, yes, definitely, already. I'll tell you something.
So as I was putting together this
interview with Ahmad Al-Dahle, the Meta head of
generative AI, I did what I do now for every interview, which
is I take the transcript and I upload it to Claude,
the full, you know, hour-long transcript.
And I say things like, where can I cut? What was most interesting?
(07:15):
What comments should I highlight for the
promotional post that I'm going to use to sort of promote
this episode? And this thing is exceptionally
good at it, and it's able to call it out. It's like saying, hey, you
might want to cut this. And it gives me the timestamps,
because, you know, the timestamps are on the transcript.
Cut this out, this was boring.
Or, you rambled a little bit here. And it's spot
(07:37):
on. And so like, we don't need these
things to be much more intelligent than they are right
now to turn them into creative partners.
And I think that if you're in a content business, if you're in
marketing or advertising, you're already using this stuff.
If you're, you know, if you're ahead of the curve.
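(A quick aside for anyone who wants to try the workflow Alex just described: a minimal sketch of it using the Anthropic Python SDK might look like the code below. The model name, file name, and prompt wording are illustrative assumptions, not his actual setup.)

import anthropic

# Read the full, timestamped interview transcript from disk.
# ("interview_transcript.txt" is a placeholder file name.)
with open("interview_transcript.txt", "r", encoding="utf-8") as f:
    transcript = f.read()

# The client picks up ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

prompt = (
    "Here is a timestamped podcast interview transcript:\n\n"
    f"{transcript}\n\n"
    "1. Suggest sections to cut, with timestamps, where the conversation rambles or drags.\n"
    "2. Pull out the most interesting quotes to highlight in a promotional post."
)

message = client.messages.create(
    model="claude-3-opus-20240229",  # assumed model; any Claude model that fits the transcript works
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# Print Claude's suggested cuts and promotional highlights.
print(message.content[0].text)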
Do you see, though, like with your conversations with
(07:58):
someone from Meta versus someone from Anthropic?
Is there a difference in philosophical point of view for
the companies developing this stuff and how they want
things to progress? Given that Google and Meta have
existing businesses, and Anthropic and OpenAI have approached this as,
this is their business. Are you seeing a difference
in how leaders approach it? Yeah.
(08:20):
So first of all, the start-ups have less to lose.
That's why you saw OpenAI, for instance, put out ChatGPT
before everyone else, even though Google had a working
version of it called LaMDA within the company that even
convinced one of its engineers that it
was sentient, and Google didn't release it, and ChatGPT was
(08:40):
released first. And in fact, in Big Technology I wrote a story.
So I interviewed Blake Lemoine, who believed that LaMDA was
sentient, and I wrote a story in Big Technology after that saying, you
know, sentient or not, this is some extremely powerful
technology. And then ChatGPT came out a
couple weeks later and everyone's like, oh yeah, it is.
So that was, you know, an interesting evolution.
(09:01):
So I would say, yes, these start-ups have less to lose.
They also, and you can hear the difference, the distinction, in
the conversations between Jack and Ahmad, is that the
start-ups don't have any legacy products to worry about,
which means that they can be focused more on the research,
more on the model training and less on how to implement it into
(09:22):
products. And so like with Meta, like a
big question is like, OK, they developed this Llama 3 model,
which came out today, the day that we're talking, you know, a
few days in the past by the time this episode publishes.
And like a big part of that push is, after that model was ready,
they worked to integrate it into Messenger and WhatsApp and
started to, like, really give Meta's products
(09:43):
the heft of Llama 3. Well, Anthropic doesn't really
have to worry about those things.
Like it doesn't have to worry about a pre-existing product
that has a billion users. Like it's just not part of the
equation or billions of users, you know?
So it's both a blessing and a curse for them, because they
don't have to worry about the flagship, but on the other hand
the distribution that the people working on Llama 3 have
(10:05):
is insane. Like today, starting today, the
day that this model comes out, it's going to go to billions of
people. So that's kind of the
differences in philosophy. Yeah, it's crazy, you
know, as both of us are former Snap employees, I read about how
they, you know, I believe Meta put a cap, though, on
like how big of an app can actually use the models, right?
(10:27):
So like, Google.
Google can't use it. Everybody else can.
Yeah. Yeah, it's just, but it is, to your
point, it's like a different hand to
play. It's like, companies
based in Mountain View with seven billion-user products cannot use
everything everyone else can use.
And I guess given that, like, do you see it going as, like, maybe
an analogy, just like kind of the way the cloud providers have
(10:49):
gone, where like maybe Anthropic and OpenAI are the cloud
providers. And then you have like Google
and Meta and you know, some of these other guys that are
building their own models kind of stay kind of these walled gardens.
Or like do you think over time the OpenAIs and the
Anthropics become like actually Google competitors and Meta
(11:09):
competitors? Yes, my view on this is that all
the foundational models will eventually converge, like the
quality is going to commoditize, and you know, you
will be able to tell slight differences between ChatGPT,
or the underlying model, right, between GPT-4, Llama 3 and
Claude. But ultimately it's not going to
be a differentiator. So the differentiators in this
(11:31):
space are going to be, like, how the foundational model
providers are able to work
with companies building on top of them.
So that will be crucial. And then their ability to
productize on top of this stuff themselves.
So like, this is why the questions around Google are
fascinating because Google obviously will have a model.
(11:52):
Their Gemini model already is on par with GPT-4.
But what does that mean? It means nothing unless it knows
how to turn it into its products. And Meta's Llama 3, you
could say, is, you know, so close to on par with these.
And it's like, well, what does that mean?
Well, yeah, it's open source. So they're basically giving it
away and maybe they're going to get more people, you know, eager
to build with their technology. But ultimately the thing that's
(12:14):
going to matter for them is how well they bake that into
their products and it will help them insulate from, you know,
being reliant on, let's say, a Microsoft.
And then there's Microsoft, right?
They're working with OpenAI.
And the question for them is going to be, how do you turn
this technology into products with an Azure that people will
(12:34):
build on top of and with an Office that will make them more
productive. And then you're, you know,
again, competing against Google. So in short, it's all the big
questions that we've had about technology just remixed and spun
in a very different way. But yeah, building these models
will have some value, but ultimately the end products and
the silicon, right, NVIDIA, are going to be where most of the
(12:57):
value is generated. And I guess, you know, you
did, I think, like a mega episode like a month or two ago on Apple
and like you kind of dove into kind of their hand in this.
You know, that's the one that I think, you know, they just don't
get a ton of attention around it.
Obviously they like have the products.
I think they have, you know, at least on
(13:18):
the chip, the silicon side,
they're doing parts of it themselves.
And I heard they're doing some big new plant in California.
Like where, where do we see them in all
of this? Yeah, so they are fascinating.
First of all, like, thank you for listening to the show.
Like, already these questions really
show that you listen, which is awesome.
Oh yeah, we're like fanboys, man.
(13:39):
We love it. It's awesome.
Thank you, I appreciate it. Well, look, I mean, I put a lot
of work into it, so it's good to know that it's getting, you
know, it's useful to people in the industry.
All right, so, yes, Apple's going to have some innovator's
dilemma questions, right? Because there's like going to be
two ways that they're... so they have this big announcement
coming up at WWDC, their developer conference in June
(13:59):
where they're going to talk about AI and you're effectively
going to see this played out in two ways.
And, by the way, we have an episode coming out on this next
week with MG Siegler where we really go deep into it.
One way is the product itself, where they're going to
have a choice to make. You know, they have this very
popular operating system in iOS. So how much do you actually want
(14:21):
to change that in, you know, in service of making it an AI
phone? And if you don't change it
enough, you then open the door to somebody else, let's say a
Samsung or an Android even, to be able to make the changes that
will sort of overtake you. It's possible.
Like if an AI phone is that much better than the phone we have
today, how long is Apple going to want to hold on to standard
(14:43):
iOS and hope that the future won't come?
So, and what is it, what's it going to do with, you
know, with Siri, right? Those are some interesting
questions.
The key question for Apple with Siri is really, it's been
the AI, but it's also been the control.
How much will Apple want to cede control of Siri, right?
Let people build applications into it that aren't Apple.
(15:04):
That's been the question. And so far they haven't really
shown a willingness or an ability to do that.
And the other thing is going to be the developer side, right?
What technology can Apple offer to developers of apps that will
enable them to run AI applications on the phone in a
way that they can't elsewhere? And if they're able to
(15:26):
make a meaningful improvement here, then you start to ask,
well, you know, can they then make an operating
system where the apps that run AI are that much better than
anywhere else because they're running on these custom Apple
chips? That then solidifies their lead and doesn't require a huge
operating system change.
(15:47):
So these are all the things that are in play with Apple.
And you know, when Apple started talking
about AI, people said, OK, Apple said the magical two letters, AI,
now, you know, give them another trillion in market cap.
And it's simply not going to work that way.
It's a much more complicated question.
That brings up something I've been reading a lot about,
because I think it's been in the news, and a YouTuber kind of
(16:10):
went viral for his review of the Humane AI Pin.
But you know, you, you're talking about Apple and how they
need to approach their operating system and start to work with
developers. But then there is this open
question of, there are other new devices that are AI-enabled,
and they're very simplistic, you know, in quotes, but
(16:32):
simplistic from a consumer experience perspective.
But they are new devices. And so do you see that
there's maybe a new device format that's coming into play, or
is this something that will likely converge into the
Androids, the iOSes, the existing devices we're used to?
I mean, not anytime soon. I mean even before this Humane
(16:54):
Pin came out you had the Meta Ray-Bans that have an AI
assistant baked in and it's effectively doing a lot of the
same stuff and is better. But is that taking off?
Like are we all walking around with
Meta Ray-Bans? And I don't think so.
So I mean, you guys, I mean maybe you guys have some
interesting input on this because you were at a company
(17:14):
that was like sort of first to the, you know, smart wearable.
Yeah, I think it's funny that we had on the podcast the head
of marketing from Meta Reality Labs, Shahar
Scott. She formerly was at Snap as
well. And she was really honest about
it. It was like, Meta Ray-Bans is basically a better
implementation of Snap Spectacles, like, at the end of
(17:36):
the day. And she basically was like,
Meta just executed better and that's why people like it
more, and it will get better over time.
But you know, Snap was sort of, you know, it's kind of a classic
Snap thing: right idea, maybe too early, maybe execution wasn't
sort of perfect. And in some ways, I've
thought about it and I don't know enough about it.
(17:59):
I'd love to get your point of view on this.
So like I've thought about it kind of similar to OpenAI, where
it's like OpenAI kind of reminds me of Snap in a weird
way, where it's like they nailed the first implementation of it.
But then you sort of look at their hand and you're like, I
don't know, like, is this going to end up being a glorified
Microsoft app? Like at the end of the day, like,
what is this thing compared to
its competitors? Is that a fair
(18:20):
distinction? A fair way to think about OpenAI,
like, given the space? Yeah, I think that is. I mean,
I've always said that OpenAI is in the hits business, right?
So is Snap. And you know, the thing is you
can, you know, make a great innovation like Spectacles
or Stories, but then everybody's going to try to do it
themselves. You've seen that happen with GPT
(18:41):
4, and that's why it's no surprise that you've seen
OpenAI push on a GPT store where you can build
your own little bots. And that seems to have kind of
had mixed or poor results. And that's why you see them
pushing into new models like, was it Sora, the video model,
(19:02):
right. So they're just going to have to
keep pushing the status quo forward.
And by the way, now it's not just the big ones, but there are
so many companies that are trying to build new different
applications. And sometimes it's on top of
OpenAI, by the way. Like we talked about, you talked about
Perplexity in the intro, right? So Perplexity is effectively
built on top of OpenAI. But it's a search engine.
So who's going to get the value there?
(19:23):
It's generally the application, not necessarily the foundational
model provider. So they have to keep,
you know, pumping out hits. And that's why, by the way, I
think Sam Altman sees this and that's why he's, you know, he
has a startup fund. That's why he's out trying to
raise, you know, $7 trillion for a chip company.
That's why he's trying to build a device with Jony Ive, right?
(19:43):
Do you think any of this is an accident?
I don't think so. Yeah, yeah, I guess, you
know, just on the OpenAI side, you know, I
guess the foil to all that, it seems like it's Elon
Musk, right, where Elon's basically, like, called OpenAI
on BS. What's your take, kind of,
(20:05):
on, you know, I guess maybe like full self-driving.
You know, no one really seems to be talking about Grok.
Like, how's the Tesla sort of X hand kind of playing
out, you know, based on what you're hearing so far?
Not well. So first of all, I think Elon is
making a terrible mistake not making Grok available at least
(20:26):
to all verified subscribers on X, right?
So I'm, I'm, I like, pay for thesubscription.
Not ashamed to admit it. I think it's useful for me,
especially when it comes to reporting.
I get into the DMS and there's like a, you know, you get sort
of added priority there. It's worth it, you know, just
for that. Although I feel like I could
take it away now. And they're not giving it back,
(20:47):
so there's a chance we'll come back anyway.
That's a side point. The bottom line is that they
should be making this available because with more user
interaction, they can train better and make that model
better. But they're not.
And I think there's two reasons why they're not.
One is it's super expensive to run these things and I think
they're a little bit nervous about spending that money, which
is interesting because you have to spend money to win this war.
(21:09):
And two is I don't think its personality is quite dialed in
yet. I mean, Grok right now, just
from all the screenshots that I've seen is incredibly cringe.
And it's not, you know, just think about how embarrassing it
would be if they released that out to the world as this
antidote to like woke ChatGPT. And it just sucks and it's
cringe. So they need to do a better job
there. There's no doubt about that.
(21:31):
Full self-driving. Look, it's a real
question of whether Tesla will be able to create these
autonomous vehicles with just the data that they have.
And you know, they try to run autonomy with much less
technology than let's say a Cruise or a Waymo.
And it's no surprise now that they're behind
Cruise and Waymo. So they could get there. If they
(21:55):
get there, it's a game changer for Tesla.
But, and look, Musk just announced there's going to be
this robotaxi event in August. So they must feel like they're
getting close. And it's possible they do
it, but from all accounts, it seems like it's still a little
bit further away than they would like it to be.
But the rockets are cool and Neuralink is cool.
(22:16):
The rockets are very cool.
It's a wash, Elon, it's a wash.
On the self-driving side, you know, just to give you
some color, so many of these big tech companies, you know, we
have friends at, like, Uber, for example.
And like we've been sort of kind of just trying to understand
like what's going to happen with, you know, Waymo just
rolled out. We're in LA. Waymo's in
(22:37):
Santa Monica now. From all accounts,
It's awesome. Have you guys been in the
Waymos yet? No, it doesn't come to us yet.
Yeah. How is it?
Get out to Santa Monica and give it a shot. It is incredible.
So like what's going to happen? Like what happens to Uber and
all this? Do they become distribution for them?
Like how does it all play? Yeah, they could be.
(22:57):
I mean, Uber has a partnership with Waymo.
So I know that the technology providers like Waymo and
Cruise are going to be open to inking partnerships with these
other companies. And by the way, there'll still
be moments for Uber to jump in.
So if you think about it, the nice thing about Uber is it has like
this really variable fleet of cars that it can call online if
(23:20):
there's a really busy time with surge pricing, and can have them sit
at home when it's not busy. And with these robotaxis,
they're kind of online all the time.
So they will not fully replace human drivers because you're
just not going to build, you know, enough taxis to meet peak
demand because that would be silly because they'd be sitting
around all the time otherwise. So I think there will
(23:42):
always be, at least in the near future, near to mid
future, this balance between human drivers and robot drivers.
That being said, them coming to Santa Monica is really
showing that they can generalize this technology, right?
It's not that they just have to, like, build custom
for each city that they're working in.
(24:03):
'Cause like now they're in Phoenix, they're in a large part
of San Francisco now, and they're coming to Santa Monica.
So they're expanding. The cars are incredible, better than any
human driver you'll ever sit with, by the way.
Because think about just, like, the decision to brake, right?
We like kind of know when to brake and
we kind of hit the brake, and you know, if you're a decent driver,
(24:25):
you know to slow down, like, you know, as you ease into a stop.
But sometimes you sort of, you know, see something in the last
second and slam on the pedal. Waymo never does that.
You know, Waymo, it will just, like, ease you into
your stop. And I can say that, you know, I
was in San Francisco with my wife for three weeks in July
last year and we got like this one week pass to Waymo and we
(24:51):
took it everywhere. And we would be on our phones in
an Uber or a Lyft with human drivers and like be ready to
puke by the time we got out of those cars with all those hills
and stop signs. But with the Waymo, you could
like, effectively, like be on your phone, your laptop, and
the ride is so smooth that you'll never feel that way.
And it's also kind of fun that there's like, no driver there,
not only because, like, you skip a lot of the awkward small
(25:14):
talk and the car will always smell good.
But you can also just, like, turn the music up, have a dance
party, you know, just, like, do whatever you want and it's your
space. And now there are
people that are watching you on camera, which is a little weird.
Like you will see that if you don't put your seat belt on
like, someone's going to call in and
(25:35):
be like, hey, put your seat belt on, and they're like, thank you.
And I'm like, wait, are you watching me right now?
They're like, yes, we are. I'm like, how many fingers am I
holding up? They're like, four fingers.
I'm like, oh shit, those cameras are high quality.
But this really is the future I'm very excited about.
I think that, you know, Cruise has had some problems, but Waymo
continues to push on in a much more cautious and,
(25:57):
I don't know, technologically sophisticated way.
And you know, I think our public has been trained to ignore this
stuff because there's been a lot of promise and not much
delivery. But this is real, it's
coming fast and it's very exciting to me.
It's interesting because it's kind of one of the first
times that I've like heard someone articulate that like the
(26:19):
it's a really good example of like human beings are going to
be the AI's assistant in that scenario.
Like what you just described as like humans being the overflow
to the robots is, you know, it may not be a great future, I
guess, but that is, I guess, you could start
seeing that in a bunch of different industries over time
where like surge capacity is filled by humans versus vice
(26:43):
versa. Right.
And the humans want to take those surge rides, by the way,
like an Uber driver doesn't make a ton of money sitting around
waiting for a ride in a slow time.
They want to be working in the busy times.
So I mean, ultimately they'd probably rather not have any
robots on the road, but I mean, it's sort of cliche at this
(27:04):
point, but the horse and buggy drivers didn't want any cars,
so. It doesn't seem, yeah, it
doesn't seem like an option at this point.
You've done a lot of recent writing specifically
about Google and kind of the changing culture and, and what
that's doing to kind of their product development and what
it's just doing to the ethos of Google.
But I think we've touched so much today on just the speed of
(27:26):
everything, right? Big companies, startups and
sure, including Google, but Meta and Uber, a lot of these
other organizations that are going through kind of
fundamental shifts in their business.
What is happening to the culture at all these companies and tech
in general? Like what has that evolution
(27:46):
been and what do you think it means as we move forward?
Yeah, a few big things. It's a great question.
So first of all, these cultures are definitely like the big tech
cultures sort of became summer camp ahead of the pandemic.
And you understand why, it was zero interest rate times and
they were printing money and there was no pressure.
And so they became kind of bloated.
(28:07):
They became kind of this place where employees relaxed and made
these day-in-the-life videos where it's like, so working
at Facebook means, like, getting nine smoothies in the day and lunch
in the middle. And the
higher interest rates and this less forgiving economic
climate have really clamped down on that, right?
There's been tremendous amount of layoffs.
(28:30):
The companies have cut down on perks.
They don't feel like summer camps anymore, and they've been
able to ship faster. So it's, you know, it would be
great if you could be an employee and sort of like hang
out in the dining room all day and, you know, get a six figure
paycheck. But, you know, ultimately, that
was unsustainable. And we've seen the end of that.
But for some employees, that'll be better, right?
Just to be in a culture that is more dedicated to shipping and
(28:53):
moves faster, works with urgency, and your colleagues are
motivated to crush it. Like there are
people that will be more interested in that than
just, you know, being in a senior citizen center for
millennials. Now the other thing is that
there are pluses and minuses to this next
thing that I'm going to talk about, which is there have been
(29:15):
open cultures almost to a fault within these tech companies.
So at Meta you know they would encourage you to bring your full
self to work and talk about politics and your issues.
And now they prohibit any political discussion on
these, you know, internal corporate
systems, messaging systems. So they've become less open, and
a lot of things are much more restricted now after lots of
(29:37):
leaking. And that's across the tech
giants, right? And we even saw an example this
week where Google fired 28 employees for protesting,
you know, in the company's offices.
And it's like, that's a huge thing.
They never do that, but they're sick of it.
And, you know, I think the message they're sending to their
employee base is, if you want to be activists,
(29:58):
work through the mainstream political system or like go to
the ballot box, organize, you know, for candidates you want
and for causes you want, but just sitting in, like, an
executive's office isn't going to get you anywhere and it's going
to disrupt the business. And so that's sort of another
thing that they've done. And that's, you know, I think
there's something to that, right?
Like I found it a little weird that like people
(30:20):
advocating for their company to post a message on Instagram was
like the be-all end-all of their political advocacy.
Like there's actually an actual political system that
you can get involved in and influence, and it might even do
more for you than these posts. So that's, you know,
another thing. But yeah, it's going to clamp
down a little bit on their ability to collaborate across
groups and, you know, may encourage people to, you know,
(30:44):
the layoffs may encourage great performers to leave and do their
own thing. So there's downsides here, but
it's definitely a shift from where it was pre-COVID.
And you know, just before we wrap, you
mentioned sort of like the political climate and obviously
it's an election year and you've covered this on your
podcast, but you know, love to just get maybe just your kind
of, you know, what's your take on what's going to
(31:06):
happen to TikTok? You know, we have a bunch of
friends that used to work there. We have friends that work there.
We kind of have our own point of view on kind of if it's a good
or bad thing, to be honest. But, you know, what's
your bet? What happens to this thing?
So I'm trying to decide... OK, I'll tell this story.
So before I went into sales and marketing, where I
(31:31):
started my career, which preceded my move into
journalism. I did a tiny bit of work in
politics, a tiny bit, but it was eventful.
It included running an assembly campaign in Queens, a state
assembly campaign in Queens. And also, perhaps
(31:51):
embarrassingly, I was an intern in Anthony Weiner's
congressional office. And it was like right around the
time where he was picking up Twitter and the joke that I
always make, which is not true, but I tell people, is that I
taught him how to use Twitter and he paid a lot of attention
to the tweets and the notifications, but did not pay
attention to direct messages. And that's
where he went wrong. But anyway, one of the cool
(32:16):
things I got to do there was I went down to DC for a week just
to like, you know, see what it was like in the congressional
office. I was working in the District
office in Queens. And we saw, so Mitch McConnell was
speaking to the congressional interns
that week and he talked a little bit.
He goes, I'm going to tell you about process.
And I know you're going to think it's boring, but I'm
(32:38):
going to talk about it. And he talked about how
basically, like, the way that government works is, the
Congress is like a steaming hot cup of tea where lots of stuff
spills out. And the Senate is like the
saucer where it cools down. You know, you sort of like get
a level head on it. And, you know, I think that's
really, you know, appropriate to think about with this TikTok thing,
(32:59):
where this TikTok ban screamed out of Congress, and what's
happened? It's in the Senate and it's in
that saucer, and it's really cooling off.
And I think the fact that we haven't seen the Senate quickly
move ahead with this decision to ban TikTok,
or not even ban it, but force a
sale or face a ban, means, again, that we're going to
probably see the status quo continue and TikTok's not going
(33:21):
to go anywhere in the US. I could be wrong.
That's my take on it. What do you guys think?
I think it should get banned.
I think it should be forced to be sold, to be honest.
I think just seeing, you know, again, like, you know, just from
them not allowing these other services into that market.
(33:43):
And then we do have friends that used to work there that have
told us stories. We're like, it is a
Chinese-run company, like, involved in everything.
And they, you know, it's like the company is ByteDance,
it's not TikTok, you know.
And, you know, anecdotally we had friends who said
like they wouldn't use TikTok, you know, like
(34:05):
who worked there. Yeah.
So. Oh my God introduced me to these
friends. At.
What time they're scared, they're like scared to talk to
people, which just makes me think that there probably is,
you know, when there's smoke, there's fire.
But I think, I don't think it should be banned.
I think a forced sale, I think, seems like an appropriate,
you know, outcome. And, you know, I've
(34:26):
tweeted about this, like if Walmart could get its hands on
it, you know, like I'm moving to.
Oh yeah. You know, like, if you could have TikTok, Walmart
and Vizio all be owned by one thing, it would be incredible.
Right. But we'll see if they actually
do sell it, right? Even if they're forced to, they
might just decide to shut it down.
And if they do sell it, do they also sell the algorithm?
Because if they don't sell the algorithm, then what do you?
(34:49):
have? And that's part of the thing is, like, the algorithm, you
know, like I think what people do forget is, because it's
allowed in a market that, you know, Meta's not allowed in,
there's just a lot more training data that they get access to.
Like, so that is, so I think, but I agree.
I mean, it seems like at this point, like when Trump switched
gears on it, it's like, you know, there's like no one in
(35:10):
favor of pushing this forward.
So it seems like it won't really happen.
Yeah, most likely. Yeah, that sounds right.
And I mean that brings up an interesting kind of idea.
We talk so much about AI, but like, how do you decouple the
body and the brain? Like you can't, right?
Like if you know the product, like what good is that without
the other and vice versa? So it is a really interesting
(35:31):
debate, I guess. But Alex, as we kind of wrap
up, one thing we didn't touch on is the book you wrote a few
years ago, Always Day One: How the Tech Titans Plan to Stay on
Top Forever. I think anyone listening
should go check it out. But I think as a last question,
like given the changes in the last four years since you
(35:52):
wrote that book, like if you're writing a second book, or
if you're thinking about writing a second book, like
what would you look at now?
And like, what has changed since you first wrote that?
Like, do you think the title still stands?
I think the book holds up great and the title would be Still Day
One. And if anything, the book was looking at, you know, day one.
(36:13):
What does day one mean? So like initially I watched a
clip of Jeff Bezos, like, addressing Amazon employees,
and, you know, basically
someone's like, what does Day 2 look like?
And he's like, day two is like stasis followed by long painful
decline, you know, followed by irrelevance, followed by death.
And that's why it's always day one.
And I was like, oh, Jeff Bezos is just telling these Amazon
(36:35):
employees that they have to work nights and weekends and
Thanksgiving, which was mostly true, but it's not what day one
means. Like the best thing about being
a start up on your first day is you don't have to care at all
about a legacy product. And we've talked about this
actually with Anthropic and OpenAI.
You just build what the market needs and you don't worry about
sustaining what got you to that point, 'cause it
doesn't matter. That's your advantage as a start
(36:56):
up. And what Bezos is telling Amazon
employees in this day one speech is basically like, who cares
what got us here today? We're only as good as
what we can provide to the market, you know, and
not regard our legacy products as the most important thing.
It's about the day one thinking that's going to get us to the
next stage. And that's how you saw Amazon go
(37:16):
from like a bookseller to something that sold everything
in a first-party marketplace and then a third-party marketplace,
transforming to hardware with the Kindle and voice computing
with Alexa. And then finally it becomes a
web services company. And then who knows what's
going to happen? And the reason why they have to
adjust is because, you know, computing adjusts so often.
(37:37):
Like the nature of the world right now is that we're moving
faster. Like we talked earlier about how
quickly we went from Llama 2 to Llama 3, adding in 10
times more data and 100 times more compute.
That's the world we live in. And that's why last century a
company would spend 67 years on the S&P 500, one idea for a lifetime
on the S&P 500, and now it's 15 years.
(37:58):
So to basically make the equivalent, you need 3 1/2
ideas. And so that's why I'm saying
still day one, because if anything, AI has brought us to
the point where these companies need to transform even faster.
And this idea of let go of your legacy and think about what's
coming next and build that. And, you know, Wall Street be
(38:18):
damned, because you can, you know, churn out profits for a
decade like Microsoft did under Steve Ballmer.
It's going to leave you nowhere if you're not prepared for the
next shift. And that's where they eventually
needed to bring in Satya Nadella, who brought in this day
one mindset and said, actually, we're going to do cloud now and
look at where they are right now.
They're the most valuable company in the world.
So that's basically my take.
(38:38):
Still Day One would be the pitch, but I'm not going
to write the next book. I feel like I'll keep writing
what would be the next book in Big Technology and talking about
these ideas on the Big Technology Podcast, and that works well for
me. And so then where can
listeners follow you, with the podcast, with your writing
and anything else? Yeah, thanks for the opportunity
(38:59):
to shout it out. So the podcast is called Big
Technology Podcast. It's on, you know, your podcast
app of choice. And the newsletter
you can find at bigtechnology.com.
Awesome, man. Well, thank you for doing this.
The podcast is awesome, everyone should listen to it, and thank you
for, you know, taking the time to do this.
And I know you've got a ton going on, and everyone should also
(39:20):
tune in whenever Alex is on CNBC, he's all over the place
doing some awesome stuff. So thanks again, man.
We appreciate it. Thanks so much.
Appreciate being here with you guys.