Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio, and how the tech
are you? It is time for the tech news for Tuesday,
(00:24):
May sixteenth, twenty twenty three. And as the band Staind
would say, it's been a while since we've had an
Elon Musk-heavy episode of tech news, but today we're
going to make up for that. So let's strap in.
Last week on Thursday afternoon, after I had already published
(00:47):
my news episode for the day, which can I just
say rude, Elon Musk tweeted out that he had selected
a new CEO for Twitter. This had been something Musk
had been promising us for a while. He said
that he would get a CEO to come on board
and replace him, adding that he would do so once
Twitter was kind of in a stable place. Now, I'm
(01:09):
not sure that Twitter is actually anywhere close to being stable,
considering the chaos that seems to unfold every week. But
forget that. We now know that Linda Yaccarino has been
named the next CEO of Twitter. She formerly headed advertising
over at NBC Universal, which, I mean, that's a heck
(01:30):
of a job. I mean, NBC Universal is a truly
enormous company, and of course it's part of an even
bigger company called Comcast, and so she definitely knows how
advertising works. Like, she is an expert in that field.
So could her leadership help repair the relationships between Twitter
and advertisers? I think she's got a decent chance now.
(01:53):
Whether she can convince the people who have already jumped
ship off of Twitter to come back remains to be seen.
So in other words, there may not be as many
folks to advertise to, even if she's able to convince
them to return to Twitter. Musk is also not leaving
Twitter entirely. Of course, he will stay on both as
the chief technology officer and as the executive chairman. Now,
(02:18):
I'll say this, I hope she can lead the company
toward true stability and success and create a place where
people aren't constantly bombarded by hate speech and misinformation and scams.
That would be nice. It's a long shot because, I
mean, I honestly don't know what
you do to fix Twitter. At this point, it was
already not in the best of shape before Elon Musk
(02:41):
took over, and I don't see how you could argue
it's gotten any better since then. So yeah, it's a
huge knot to untangle. But maybe she can do it.
We'll have to find out. Meanwhile, even though Elon Musk
is the current CEO and executive chairman of Twitter, he
still has to submit tweets about his other company, Tesla,
(03:04):
to a lawyer before he posts them. Now, does he
do this? I don't know. Tesla has never identified this
lawyer that he's supposed to do this with, but you
know he's supposed to. The Verge reported on this because
Musk challenged this consent decree he had agreed to back
(03:24):
in twenty eighteen. He had challenged it in a recent
appeals case, and the federal court rejected his appeal. So
it all stems from a settlement Musk agreed to way
back in twenty eighteen. That's when the US Securities and
Exchange Commission, or SEC, accused Musk of making quote a
series of false and misleading statements regarding taking Tesla, a
(03:47):
publicly traded company, private, end quote. Essentially, the SEC was
saying that Musk wasn't following proper procedure if in fact
it was his intention to take Tesla private, and further,
whether he wanted to do it or not, it amounted
to market manipulation. Anyway, Musk settled with the SEC, and
(04:08):
as a concession, he agreed to this sort of babysitter
clause where he has to show his tweets to a
lawyer before he actually posts them. Musk wanted that thrown out.
I'm sure it's humiliating to be the person who bought
Twitter and still have a legal requirement to submit tweets
for review. Musk's argument was that the SEC was essentially
(04:29):
using this consent decree to infringe upon Musk's free speech,
and we all know what a believer Musk is in
free speech, at least for himself. Anyway, the court said no
dice, Elon, and pointed out that the SEC has investigated
a grand total of three of Elon Musk's tweets over
(04:51):
the years, and that includes the tweet that actually started
off the whole mess in the first place, so only
two since then, and of course Elon has posted many,
many more times than just three over the last five years. Personally,
I think Musk should just consider it a fair trade
because, obviously, I mean, an earlier investor lawsuit sought
(05:12):
billions of dollars in damages against Elon Musk. They were
arguing that Musk's tweets ended up costing investors enormous amounts
of money, but a jury found that Musk was not
liable for those losses, so he didn't have to pay
out the billions. He's just gonna have to mind this babysitter.
And to be honest, I'm very skeptical that he actually
(05:34):
goes through with it. He just, I don't know, I guess
he just wants the whole thing lifted. And you know,
I think it's pretty fair to say that Elon Musk's
M.O. for a long time has been wishing to avoid
the consequences of his own actions. I don't think that's unfair
to say. Over at Tesla, Elon Musk has shown that
he wants to be more involved with the company, which
(05:56):
is something that shareholders have been asking for for ages,
ever since he first announced that he wanted to buy Twitter. Now,
more specifically, Elon Musk wants to be more involved if
anyone in the company wants to hire anyone else. Musk
sent out a memo throughout Tesla saying that all new
hire requests have to come to him for his personal approval.
(06:18):
Doesn't matter who the person is or what position is
being filled, it has to go to Musk first. Even
in the case of contractors, it has to come to
him for approval. He instructed managers to send him hiring
requests on a weekly basis. However, he also followed this
up by saying that people should quote unquote think carefully
before submitting a request. Now, to me, and this is me
(06:41):
inferring from that statement, that seems to be kind of
an intimidation tactic. Like, it's meant to discourage people from
making requests in the first place, because they need to
think carefully before they ask, And it just seems like
it's Musk trying to head that off and avoid having
to hire more people. Later today, Musk is actually going
(07:04):
to hold an earnings call with shareholders, so I imagine
we'll have a lot more to say about his increased
involvement with the company later this week when we do
another news episode on Thursday. One thing that could be
discussed on that call, if shareholders get their wish, is
a discussion about succession planning. So if you're not familiar
with succession, and I'm not talking about the television series,
(07:26):
it's when a current leader outlines their plan regarding who
should take the top spot after they leave the company.
Investors will soon vote on whether or not to compel
Tesla to publish a key person risk report. The concern
is that Tesla may be too tightly bound to the
personality of Elon Musk, which means if something were to
(07:47):
happen to Musk, whether it's something catastrophic or maybe Musk
ends up getting distracted wanting to build his AI company
and he runs off from Tesla, that the company could
end up being directionless and investors would lose out on
a lot of money, and that the company could be
(08:07):
in danger in the wake of Musk's absence. I think
the closest similar example I can think of right now
is how investors were thinking about Steve Jobs and Apple
like the two were synonymous. Now, Jobs actually took the
proper steps to plan for his successor, but it didn't
stop folks from worrying that the company Apple could fall
(08:29):
apart after Steve Jobs's death. Obviously that didn't happen, So
Tesla's shareholders may force the company to do a thorough
report that not only examines how important Musk is to
the company but also identifies key personnel who could potentially
take on the role of CEO in the future. The
shareholders will also vote on whether or not to approve
(08:49):
certain board member nominees, some of whom are contentious. And
like I said, we'll circle back to this on Thursday
to talk about what unfolded during the actual earnings call
if there are any significant updates. Now it's time to
shift to AI. And I know I cover AI a lot,
but it keeps creeping into the tech news and so
(09:11):
we're gonna talk a bit about AI today. Open AI's
CEO Sam Altman is set to appear before a congressional
panel here in the United States. Altman has previously appeared
to be fairly straightforward in his assessment of AI. He's
even suggested that folks were overplaying the capabilities of his
own company's chatbot, ChatGPT. Prior to his meeting today,
(09:35):
Altman submitted written testimony to the panel and suggested a
framework for a licensing procedure for AI companies. So essentially,
Altman's proposal is to create a system where companies that
want to develop certain types of AI tools will have
to procure a license and follow established safety standards. Of course,
Congress would have to establish those standards first and determine
(09:58):
what sort of parameters AI should fall into and
what would constitute an unsafe version of AI. Altman also
reportedly used the phrase quote regulation of AI is essential
end quote. This is according to Reuters, which, you know, again, might
come as a surprise considering he's the CEO of arguably
the most famous AI company right now. Typically, business owners
(10:21):
aren't really gung ho in calling out for regulation of
their own industry, and when they are, sometimes it turns
out that they were being, you know, perhaps less than
forthright about it. See also Sam Bankman-Fried of FTX fame.
Some critics worry that regulation could discourage startups and potentially
(10:43):
cause smaller AI companies to sort of fade away and
just leave AI to the larger established companies. That is possible,
but then you also have to admit these larger companies
are progressing at such a rapid pace, they're investing billions
of dollars in research and development, that there does
seem to be a need for some sort of checks
(11:03):
and balances to be put in place in order to
head off problems before they get too severe. Okay, We've
got more stories, including more AI stories to go over,
but first let's take a quick break. Okay, we got
(11:25):
some more AI stories to cover. The World Health Organization,
or WHO. Who? By the way, a side note: if
you've ever seen the film Clue, it's a comedy mystery
film that I absolutely adore. It's very, very silly and
I love it. One of the characters there mentions that
(11:46):
he works for the United Nations Organization, which would be UNO,
and then, asked about him being a politician, he says no,
he works for the World Health Organization. There's a WHO joke there
that never actually gets used in the film,
but it means he works for, you know, WHO. Okay,
(12:07):
I'm sorry, I got distracted. Anyway. The World Health Organization,
which is a real thing, issued a statement cautioning the
medical field about the use of AI and cited concerns
that AI can potentially misinform patients and or healthcare providers. Also,
as we have seen in other areas, AI can contain bias,
(12:30):
and bias can ultimately cause harm to people, particularly people
in specific populations. Right like with facial recognition technologies. We've
seen AI cause disproportionate harm to people of color. Overall,
the WHO said that AI stands to provide great benefit
(12:52):
in the field of healthcare, like there are obvious applications
where it could be of huge help. However, we have
to address issues like bias and misinformation, as well as
the potential for bad actors to use AI to create
outright disinformation in an attempt to harm. I think the
(13:12):
takeaway here is that the WHO is suggesting it might be
a little too early for the healthcare sector to just
fully embrace AI. I think that is fair to say. Now,
we also have to admit AI already plays a huge
part in the world of healthcare, because AI is more
than just chatbots and large language models. AI includes lots
(13:35):
of stuff, and a lot of that is already being
used regularly in healthcare. That doesn't appear to be what
the WHO is specifically referencing here. My interpretation is that
they're talking more about the generative chatbot-style AIs that
have been taking over the news. The New York Times
reported that some AI researchers at Microsoft think that the
(13:58):
AI system they're working on could be showing the
faintest signs of approaching artificial general intelligence or AGI. They
called it sparks of AGI. That would mean that we're
talking about machines that appear at least to be able
to reason in a way that is similar to how
we humans reason. The science fiction definition of AI has
(14:22):
long been one of general intelligence, often passing into the
category of superhuman intelligence. The article, which is titled quote Microsoft
Says New AI Shows Signs of Human Reasoning end quote,
describes an experiment in which researchers asked this sort of
AI chatbot to solve a bit of a puzzle. They said,
(14:45):
how can you create a stack out of this weird
collection of objects that would result in a stable structure,
And the objects included nine eggs, a laptop, a book,
a bottle, and a nail. So the AI chatbot suggested using
the book as the base and then set the eggs
(15:06):
on top of the book in a three-by-three grid,
and then gently laying the laptop on this layer of eggs,
and then putting the remaining items on top of the
laptop's surface. The researchers concluded that the system was at
least appearing to use some real world knowledge, such as
that eggs are delicate, and therefore they would need to
(15:27):
be in an arrangement like a grid in order to
have enough support to avoid cracking, and that the laptop's
upper surface would be flat so it would be able
to support the bottle and then the nail. Earlier versions
of the AI system gave more nonsensical answers, so it
was an argument that this newest version of the AI
(15:49):
model was sophisticated enough that it could actually reason out
an answer that would potentially work. So could that mean
we're now approaching general intelligence? Maybe? Maybe not. While the
researchers appear convinced that these are some early signs of
limited but still general intelligence, other experts argue that the
(16:12):
results just give the appearance of intelligence, and that this
is another case where because of the perspective someone takes,
you see a particular outcome. I talked about this recently
in another episode. If you pull an Obi-Wan Kenobi
and you look at things from, quote unquote, a certain
point of view, then maybe you'll see signs of general intelligence.
(16:36):
But if you look at it from a different point
of view, maybe those signs of general intelligence just vanish,
and it turns out that the thing you thought was
real was just an illusion. So is that what's going on?
I honestly don't know. I will say that there are
some critics who have a pretty strong point to make,
(16:56):
which is that Microsoft is a company that has made
massive investments in AI to the tune of more than
ten billion dollars, so it has a vested interest in
AI becoming a huge success, right? Like, they've poured a
lot of money into this, so they have a desire
for this to come out the other side as a
(17:18):
huge revenue generator. So it's possible that such a company could,
as Professor Maarten Sap of Carnegie Mellon University has said,
be quote co-opting the research paper format into
PR pitches end quote. In other words, trying to
(17:38):
manufacture support to make a tool seem more sophisticated than
it potentially is. I don't know the truth of the matter,
if I'm being honest. I'm still skeptical about machines capable
of making the leap to general intelligence. However, that's based
(17:58):
largely upon the fact that we don't fully understand general
intelligence within humans, let alone in machines. But maybe that
isn't necessary. Maybe we will achieve general intelligence with machines
without having a full appreciation of how it works in humans.
That's possible, I guess. So I just, I mean, I
(18:19):
keep coming back to I don't know, but I do
remain somewhat skeptical. Apple has announced a host of new
accessibility features coming to various Apple devices, both on iOS
and macOS, and that's great. So accessibility is getting
more attention and support these days, and it's been long overdue.
(18:42):
I follow a lot of people who work in improving
accessibility in technology, particularly in things like video games, where
there are now settings in a lot of video games
that are meant to allow people who might have limitations
in some way or another to still be able to enjoy them.
I think that's great. I think being able to increase
(19:04):
the spectrum of folks who get to experience stuff is fantastic.
I think everyone's a winner when that happens, and seeing
companies put attention toward accessibility is one of the big
steps toward addressing gaps that can otherwise exist between different
populations when it comes to their ability to use tech.
(19:27):
So I love accessibility technology in general, but the specific
feature I wanted to talk about that Apple is introducing
is called Personal Voice. So with Personal Voice, users can
train their device to sound just like they sound. Then
the Apple device can speak in a synthesized version of
(19:48):
the user's own voice. For people who have conditions that
affect their ability to communicate, potentially it might mean that
they're facing a future where they will no longer be
able to speak at some point. Well, this kind of
feature is huge for them. Rather than them having to
rely upon an impersonal, generic synthetic voice to speak when
(20:12):
they cannot, the voice that will come out of their
devices will be their own. I think that's incredible. I
think it's a really great use of the synthetic voice technology.
We have talked about how synthesized voices can cause disruption
in bad ways, right? How they can impersonate people in
the arts, where those people suddenly feel like their identity
(20:37):
has been stripped from them and put to use in
something that they had no involvement with. That's bad, right,
Or people like me, I would be out of a
job if iHeart decided, you know what, we're just going
to train a synthesizer on Jonathan's voice because he's got
thousands of hours of content out there. Well, we're going
(21:01):
to train it on his voice. We're gonna get a
ChatGPT-style bot to write episodes of TechStuff
as if it were Jonathan, have the voice that sounds
like Jonathan deliver it, and then we don't have to hire,
you know, we don't have to pay Jonathan anymore. He
could just he could just be cut free. It's a
scary thought, like, I get it, and it's potentially
(21:24):
a possible thing. But you know, I would still argue
that humans have their own actual, legitimate contributions to various activities,
including things like creating episodes. So yeah, it's nice to
see a version of voice synthesis that isn't potentially scary
(21:45):
or bad, but rather an application that can protect a
person's agency and personality and independence. I think that is fantastic.
So good on you, Apple, for developing those particular accessibility
features. There are other ones that Apple also announced
which are equally great. I was calling this one out
(22:07):
specifically because it kind of has that AI connection with
the voice synthesis model that is worked into this particular tool. Okay,
we're going to take another quick break. When we come back,
we will wrap up with a few more big stories.
(22:33):
So we're back, and I saw on CNN that a
former ByteDance employee has filed a wrongful termination lawsuit
against ByteDance. That employee, Yintao Yu, says that
he formerly served as head of engineering for US operations.
Now you might recall that ByteDance is the parent
(22:53):
company of the popular video social platform TikTok, and you
probably also know that in the United States as well
as many other parts of the world, there's a growing
concern that TikTok might serve as a kind of data
siphon and shoot that data to the Chinese Communist Party.
(23:14):
Because, if you aren't aware, ByteDance being a company
that's centered in China means that, by law,
the company is supposed to aid the Chinese Communist Party
when it comes to things like gathering information about, you know,
enemies of China and that sort of thing. They are
(23:36):
supposed to be obligated to share that kind of information,
and a lot of companies in China have to reserve
a spot for an official from the Chinese Communist Party
to essentially kind of sit on the board of the company.
So Yu addresses the fears of all these different nations,
(23:56):
that perhaps ByteDance is gathering information using its platforms
and then sending it on to the Chinese Communist Party.
He says that those fears are totally justified. He says
that Chinese officials have full access to get data that
was gathered through byte Dance applications, including presumably TikTok, and
that they had full backdoor access to data even that
(24:19):
data saved on US servers. Now ByteDance disputes this. They
say that this is just not true. They point out
that Yu worked for the company for less than a
year before his employment was terminated, and that it ended in
twenty eighteen. Yu, by the way, says that ByteDance's
description of his time of employment is not true. I'm
(24:43):
sure that Yu's accusations are confirming a lot of fears
that are held in various political circles. But security experts
say that there really hasn't been any evidence that Chinese
officials were actually accessing TikTok data in the United States.
So this very well could be a situation where a
(25:05):
former employee is leveraging growing suspicion in order to support
their own claims, or it's possible that his accusations are
all true. I honestly don't know the answer. I will, however,
point out again that even if TikTok was actively being
used by China to spy on Americans, the fact is
(25:28):
you can buy and sell data from pretty much every
online platform, which means you don't have to rely on
a single app to be like your way to gaze
into an enemy's territory. That's not necessary. You don't need
to do that because you can just buy the data
online from all sorts of different data brokers. Unless there
(25:50):
are really strict controls about that sort of thing, the
information is out there. So that's something we should really
think about. You know, if you were to
even shut down TikTok, that's one potential stream of
information that could potentially go to an adversary, but there's
(26:12):
still all the other ones. It's like putting your finger
in a hole in a dam and then like fifteen
feet down from you, there's a massive breach that's allowing
millions of gallons of water to pass through. You're
not really doing anything at that point. But we've gone
over this before, so we'll move on. Now. Here's a
quick update on the Microsoft Slash Activision Blizzard deal. If
(26:37):
you recall, Microsoft has been trying to purchase Activision Blizzard
and has met with some resistance around the world because
we're talking about a global acquisition here. So as expected,
as we talked about I think last week, the EU
has now approved this merger. Now, you might remember this
(26:57):
was never a guarantee. Earlier reports suggested
that perhaps even Sony was campaigning hard to have this
merger blocked out of concern that it would constitute an
unfair advantage for Microsoft in the video game market, particularly
for really popular titles like Call of Duty. But Microsoft
then made several promises to regulators to keep things fair
(27:21):
and you know, not just absolutely lay waste to the
home video game industry, and also to make sure they
took steps so that they're not becoming the de facto
cloud gaming service. Microsoft agreed to a ten-year license
deal that said the company would keep Activision Blizzard titles
available through all cloud streaming services, all cloud game streaming services,
(27:43):
I should say, as long as those services sign a
license agreement with Microsoft. This addresses one of the main
concerns that has held this deal up in the UK,
where regulators have voted to actually block the deal. Microsoft
is now appealing that decision. And then meanwhile here in
the United States, we still have to wait for regulators
(28:04):
to actually weigh in. They haven't done that yet, so
this acquisition still is not necessarily going to happen. But
arguably the EU's clearance for this deal gives the move
some support and some momentum, so its chances have improved slightly.
So maybe we'll see opinions reverse further downstream. We'll have
(28:25):
to wait and see. You know. The EU was actually
pretty busy this week because on top of approving Microsoft's
acquisition bid for Activision Blizzard, the EU also approved cryptocurrency
regulation rules this week, being the first region in the
world to adopt formal cryptocurrency regulations. These will not take
(28:46):
effect until next year, but the regulations are meant to
protect EU citizens from losing their shirts in the crypto market,
and also to create a framework to hold scam artists
and bad actors accountable when it turns out that their
amazing crypto investment opportunity is little more than
a Ponzi scheme. Ultimately, the goal here is to weed
(29:06):
out the bad crypto entities from the good ones. So
it's not saying all crypto is bad, but rather there
needs to be this framework of regulations in order to
make sure that it's not just running rampant and causing harm.
And also this protects the EU in the process. So
it also means that these regulations create the framework for
(29:28):
pursuing cryptocurrency companies that are engaged in stuff like
money laundering or financing terrorists. These regulations may end up
serving as a foundation for other parts of the world
to adopt similar approaches and thus rein in the Wild
West nature of the crypto community. While that might chafe
(29:48):
a bit for the folks who saw cryptocurrency as a
way of working outside the system, it can also potentially
mitigate disasters such as the aforementioned collapse of FTX and how
that particular event had a domino effect across the crypto market.
I think it's hard to argue against creating rules that
(30:10):
minimize the chances of those big disasters, or,
well, maybe minimize is the wrong word, but at least
reduce the chance that that happens. Because investors don't want
to see their money go away, right, You don't want
to have to depend upon some government agency to retrieve
some, or maybe, if you're lucky, all of your investment,
(30:33):
because that's never going to be something that you can
depend upon. So yeah, I think regulations are the right idea.
They are antithetical to kind of the spirit of cryptocurrency,
but as we've seen, without those regulations, there's a lot
of opportunity for people to take advantage of folks at
(30:55):
a grand scale that is incredibly harmful. And finally, while
I usually like to end a tech news episode
with a silly or a lighthearted story, I do
not have such a story for this particular episode. Instead,
we get to say that India is the first country
with a democratically elected government to ban messaging services that
(31:17):
allow end-to-end encryption. Now we've seen these sorts
of bans in authoritarian countries, where you've got
essentially a dictator or a military organization in charge of
the country, but we've never seen it in a democracy.
And the justification for this move for banning encrypted messaging
(31:37):
services is pretty much what you would suspect. The government
says that you've got to get rid of them because
terrorists are using these apps to communicate with each other,
and they use encryption to hide their plans from authorities,
and their plans constitute a threat to the state of
India and its citizens. So you've got to get rid
of encryption. That means that nobody would get to have
(32:00):
access to encrypted messaging systems within India. And you know,
you'll hear arguments like, why are you worried unless
you have something to hide? But consider how officials in
India have gone after people who have criticized the government.
They have gone so far as to petition platforms like
Twitter to remove posts that put the government in an
(32:22):
unfavorable light. So when you see a government use
its power to suppress speech, you can start to make
a solid argument that removing access to encrypted messaging services
is another step toward authoritarianism. It's just authoritarianism that
is dressed up like democracy. So, bad story there. I
(32:48):
hate to end it like that, but that was the
last one that I came across before I started
working on this episode. In the meantime, I hope all
of you out there are well, and I will
talk to you again really soon. TechStuff is an
(33:10):
iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple podcasts, or wherever you listen to your favorite shows.