
March 2, 2023 38 mins

There seems to be a lack of consensus regarding whether AI is about to change everything or whether it's more hype than substance. We explore several news items that look into this. Plus, the ACLU doesn't think the US should ban TikTok. Airbnb might ban you based on who you hang out with. And DARPA is looking for some new aircraft designs.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio. And how the tech
are you. It's time for the tech news for Thursday,
March second in twenty twenty three. And AI is on

(00:28):
the brink of changing the world for the better, with
the potential to boost the world economy by nearly sixteen
trillion dollars in just a few years, or it's all
overblown and far less impressive than you think. My first
two stories take these very different perspectives. So first up

(00:50):
from Markets Insider is a piece that's titled Artificial intelligence
is on the brink of an iPhone moment and can
boost the world economy by fifteen point seven trillion dollars
in seven years, Bank of America says. Now, come on, Markets Insider,
you can at least save some of the content for

(01:11):
the actual article. It doesn't all have to go into
the headline, but yeah. Bank of America sent out a
message to clients outlining why the financial institution believes that
AI is poised to change things forever, similar to how
the iPhone helped transform the web from something most folks
access through computers into a mobile experience, and honestly, that

(01:34):
transformation was huge. Anyone who was working in web-based
content at the time can tell you that, and
about how incredibly disruptive it was. We still see
that today, with companies offering up web page services designed
to scale properly no matter what kind of device connects
to the page. That doesn't even touch the rise of

(01:55):
the app developer market, which really didn't exist in any
significant way before the iPhone. Anyway, Bank of America says
AI is about to do something on the same scale,
and sure, lots of companies are pushing AI prominently. That's
something that we're going to be revisiting throughout the early

(02:16):
part of this episode. But I take some issue with
the conclusions. For example, the Bank of America note includes
this bit, quote, it took ChatGPT just five days
to reach one million users, one billion cumulative visits in
three months, and an adoption rate which is three times

(02:39):
TikTok's and ten times Instagram's. The technology is developing exponentially,
end quote. Okay, that conclusion does not follow from the premise.
You know, I think you could say that the appeal
of AI, the curiosity people have, the eagerness they have
to try it out, the enthusiasm around it. All of

(03:01):
that developed very quickly. Perhaps you could even argue it
developed exponentially, but that is not the same thing as
saying the technology itself is developing exponentially. I think conflating
these two things is dangerous because it creates this heightened

(03:22):
expectation for a technology that, depending upon its use, has
often proven to be far from perfect or reliable. I
do think it's undeniable that companies are going to continue
to invest huge amounts of money in developing AI. That
is unavoidable. It is happening and will continue to happen.
But I would caution against making this assumption that it

(03:46):
means we're going to see incredibly rapid development in the space.
That might happen, but we might also just see more
iterative improvements rather than these big revolutionary leaps. A huge
increase in attention is not the same as an increase
in a technology's capabilities. Meanwhile, Alex Shephard at New Republic

(04:10):
has a different take from the Bank of America approach.
It is far less bullish. Alex wrote an article titled
Artificial Intelligence Is Dumb. Okay, so this is a much
more straightforward, if simple, title, and Shephard's argument is that
AI is in the early stage of the Gartner hype

(04:31):
cycle that we've talked about in recent episodes. That enthusiasm, excitement,
and expectations are on the rise, but if you spend
any meaningful amount of time with tools like ChatGPT,
you come to the realization that the actual experience doesn't
quite live up to the hype surrounding it. Shephard spends
much of the article using ChatGPT as the primary example,

(04:54):
and argues that while the chat bot is remarkably more
advanced than ones that preceded it, conclusions like what Bank
of America has come to are largely unfounded. That the
belief that we're on the precipice of disruptive transformation is
really based on nothing more than conjecture and hypotheses, and
that we lack any actual evidence to say that we

(05:16):
are in fact on that precipice. Now, you could argue
that these are the sort of things that we really
can't assess until they've actually happened, but really it's only
with hindsight that we can say that moment was when
everything changed, because as you're living through the moment, you
don't have enough perspective to judge whether or not it's

(05:39):
that pivotal. It's only after the fact that you can
make that assessment. I do think Shephard makes some very
good points, but I also find the arguments of the
article to be a little too narrow and reductive. Shephard
says that those who claim AI is going to have
transformational impact on absolutely everything are making quote unquote insane claims.

(06:00):
But since the article almost exclusively focuses on ChatGPT,
I feel that leaving out all the other manifestations of
AI undermines this argument, because we're already seeing how AI
is transforming the world, both in good and bad ways.
It can help optimize processes which might not be as

(06:22):
world-shattering as, I don't know, facilitating meaningful conversations between
different countries, but it does have an impact. We've seen
it in stock trades, right? We've seen microtrading and
ultra-fast stock trades that are making economic impacts that are
honestly still kind of difficult for us to get our

(06:42):
minds wrapped around. And then we've also seen how AI
can exacerbate social problems like the use of facial recognition
technology among law enforcement. That kind of AI can really
crank the knob on already difficult problems, like the fact
that people of color are disproportionately targeted by law enforcement

(07:03):
here in the United States. So while I think some
versions of AI are undeniably dumber than what the hype suggests,
we also have to remember AI does not manifest in
just one way. AI is not just ChatGPT. It's
not just the idea of a seemingly sentient computer like

(07:23):
HAL in two thousand and one. It's in all sorts
of stuff, from robotics to stock trading to assisting surgeons
with medical procedures. So I think we have to avoid
using a specific product as the gateway to criticizing the
general field. It's too reductive and it doesn't really help

(07:44):
us get a deeper understanding of what's actually happening. Moving
on to another version of AI. Earlier this week, Microsoft
researchers unveiled Kosmos-1. That's Kosmos with a K. This is a
form of multimodal AI that, according to the researchers, can

(08:04):
solve visual puzzles. It can recognize text visually, so it's
not, you know, reading machine text, it's reading the text like a
person would. So you could have, like, a picture that
has text in it, and this would be able to
distinguish what that text is. It can analyze pictures and
be able to tell what's in those pictures and describe them.

(08:26):
It can have natural language interactions, and that could mean
we're about to have another change in how CAPTCHAs work.
You know, CAPTCHAs are those tools that websites and other
services use to determine whether or not you're human. And
you know, you sometimes will encounter a CAPTCHA that will
ask you to do something like select all the images

(08:47):
that have a fire hydrant in them, or a crosswalk
or a motorcycle or whatever. Well that's because that's a
task that most humans can do fairly easily, but bots
traditionally have a hard time doing. Kosmos-1, it seems,
could potentially complete those kinds of CAPTCHAs. It could analyze

(09:09):
images and determine which of them, if any, have the
important feature in them. The whole history of CAPTCHA, actually,
is a swinging pendulum between foiling AI and then creating
AI that's capable of foiling the CAPTCHA. So this is
nothing new. Anyway, the Kosmos-1 system was given tasks

(09:30):
that included writing captions for images, like trying to describe
what the image showed, and it even took a visual
IQ test. Essentially, what the researchers did was they fed
answers to a visual IQ test to the Kosmos-1
model and asked Kosmos whether or not the answer was correct. Now,

(09:54):
according to the researchers, the AI scored below thirty percent
on that visual IQ test. That's a pretty dang
low score. Technically, I think it was between twenty two
and twenty six percent. That's not good, but it is
better than chance, so it's better than just answering yes

(10:15):
or no randomly. So that suggests that this could be
a starting point from which this model will improve over time.
Microsoft has indicated that the company plans to make Kosmos-1
available to developers in the future, though at the
time I'm recording this, there's not a timetable mentioned about
when that might happen. This is a different approach than

(10:36):
the generative pre-trained transformer of GPT. So again, we're
looking at different ways that AI manifests. It's not always
in just one single direction. There's so many different disciplines
that are involved in AI, and many of them are

(10:56):
approaching AI from a very different angle, and there's no
telling which versions are going to end up being the
most dominant further on, or if it will truly be
a convergence of all these different disciplines that ultimately produces
the AI that we're thinking of that will be truly transformational.

(11:17):
The US Federal Trade Commission, or FTC, has its own
concerns about AI, and in this case, it has more
to do with the way companies are marketing their services
by including mentions of AI. The FTC is concerned the
companies could be overpromising or misleading people by leaning on
a trendy, buzzworthy term and concept. If you need to

(11:40):
get investors to pour money into your startup, well, you know,
just start using the term AI in there. Even if
AI doesn't really make sense or you're not really using it,
you're bound to snag a few fish with that approach
because AI is such a crazy popular concept right now.
That's the kind of thing that the FTC is concerned about.

(12:01):
Folks who are trying to cash in on a popular
but largely misunderstood technology. And as I've said many times,
when you have excitement mixed with a lack of information
or knowledge or understanding, what you have is the perfect
condition for scam artists, or if not outright scams, at
least unethical folks who don't mind leaning a little heavily

(12:23):
on ignorance in order to make some money. So, if
there's something that sounds really exciting, like a huge investment opportunity,
but you don't actually understand the underlying approach, whether it's
a technology or otherwise, huge red flag, y'all, Huge red flag.
I don't care if it's an NFT or if it's AI.

(12:47):
It is something you need to take a step back
from and start asking critical questions to get a better understanding.
And it might turn out to be totally legit, which
is awesome. But if it's not totally legit, you will
benefit from taking that step back. So the FTC
is essentially sending a message out there, and it is saying, hey,

(13:07):
be sure any claims y'all are making about AI in
your products and services are substantive or else we're going
to ask you to prove it, and if you can't
prove it, you're gonna be in trouble. Mashable reports that
Google layoffs have affected all sorts of employees that you
wouldn't expect, like robots. I mean, I'm talking about actual robots.

(13:29):
We usually worry about robots taking our jobs. Rarely do
we think about them losing their jobs. All right, So
the robots in question are one armed robots from the
Everyday Robots team that was within Google. This team had
been working on robot systems that could operate in consumer applications,
and Google was actually making use of them in the

(13:51):
Google HQ to do odd jobs like cleaning surfaces like
counters and stuff and that kind of thing, or taking
stuff to recycling bins. But it now sounds like this
project has been dissolved, and in addition, the robots themselves
have been shut down and packed away. So times are
tough even for the machines out there, I guess. Okay,

(14:12):
we've got some more news stories we're going to be covering,
but first let's take a quick break. All right, we're back.
We still have another AI story, because, like I said,
it has just become the big tech topic for twenty

(14:34):
twenty three, unless something massive changes later in this year,
which is entirely possible. I suspect that end of the
year roundups in various tech podcasts are going to be
talking about how this was the year of AI hype,
but switching over to Apple. The company has a very
well earned reputation for having an obtuse process for approving

(14:57):
apps on its iOS platform. You can read countless descriptions
among app developers of encountering frustration as they have submitted
apps to Apple, only to find them rejected, often
without enough direction for them to be able to
make informed changes to the app so that it could

(15:18):
actually pass. But recently, Apple sent a communication to one
app developer called BlueMail that was planning on pushing
out an update to its existing email application, and this
update would have incorporated an AI powered feature that could
assist with language tools. Think of something kind of similar

(15:39):
to ChatGPT that could help you construct an email message.
Apple has delayed this upgrade rollout, citing concerns that
the AI could end up generating inappropriate content and that
the app developer needs to take that into consideration since
children could be using the app. So Apple is telling

(15:59):
BlueMail that if it wishes to incorporate this AI feature,
that it also has to change the app so that
the app is now going to be restricted to users
who are seventeen years or older, just in case the
AI starts to generate offensive messages or material that could
be considered harmful for kids. Blue Mail has protested this decision.

(16:23):
The company has argued that there are already apps on
the iOS platform that are not held under the same
sort of restriction, but that have some similarity in capability,
and the company says that if it is forced to
offer this email app with that restriction, the age restriction,
that this harms visibility and discoverability, and it hurts the

(16:47):
app's performance in the marketplace. Now, I do not doubt
that there is an uneven landscape among apps on iOS.
I don't think it's fair at all. I think there
are far too many inconsistencies with apps that get the
green light and apps that are prevented. I do not
think it's a very transparent process at all. But I

(17:10):
also think concerns about generative AI have some validity to them.
If you just take the Nothing Forever show on Twitch,
that's the AI-generated show that creates an endless Seinfeld episode.
That's proof that without careful guidelines and controls, you can
run into problems. So for those who don't remember, Twitch

(17:31):
actually temporarily banned the Nothing Forever channel because
they had temporarily reverted to an earlier version of GPT
when they encountered some technical issues, and the earlier version
of GPT did not have the content restrictions that the
more recent version had, and the show began to generate

(17:53):
content that was homophobic in nature. That violated Twitch's policies.
They got a ban. Well, that shows that these AI
tools can end up being problematic. I know that's a
word we use to the point where people complain about
its use, but it's an appropriate word in this case.

(18:14):
So I do get the concern, but I also can
sympathize with BlueMail's argument that it's already an unfair
playing ground on iOS. I don't think anyone comes out
a winner in this one. Now, to talk about TikTok
a bit, the American Civil Liberties Union or ACLU, has

(18:35):
issued a statement protesting US House Bill one one five three. Now,
this proposed bill would, according to the ACLU quote, effectively
ban TikTok in the US, end quote. This isn't about
removing TikTok from government devices, but, according to the ACLU,

(18:57):
about banning TikTok and similar platforms outright. So the official description
of the bill is quote to provide a clarification of
non-applicability for regulation and prohibition relating to sensitive personal
data under the International Emergency Economic Powers Act and for other purposes, end quote.

(19:20):
I wish I could tell you more about the text,
but when I went online to read it, it had
not yet been uploaded to the database, so I haven't
been able to actually read the bill. The ACLU says
that the US Congress quote must not censor entire platforms
and strip Americans of their constitutional right to freedom of
speech and expression, end quote. And yeah, the right to free

(19:44):
speech is one of the fundamental core values of the
United States, enshrined in the First Amendment to the Constitution. But
this is a complicated issue because TikTok critics worry that
the app is on the back end, essentially performing as
a data siphon and pulling in information that the Chinese
government can then use as intelligence. And this information includes

(20:06):
personal information about users, things like employer information, government information,
and more. Like, people are using TikTok all over the place,
so potentially, if you are gathering intelligence, you could comb
through TikTok and look for stuff that could give you
an advantage in that arena. So generally speaking, I tend

(20:27):
to side with the ACLU on most topics, but this
one is a little tricky, and I'm not sure where
I land on this. I do have concerns about data
security with TikTok, but then again, as a lot of
people have pointed out, TikTok's practices are really not all
that different from other platforms like Meta, YouTube, etc. It's

(20:51):
just that those companies aren't owned by a Chinese company, right,
But they are gathering the same kinds of information and more,
and they're definitely exploiting it. So you can make a
strong argument that we've already decided that handing over information
to platforms is fine, and therefore it would be unfair
to single out TikTok just because its parent company happens

(21:13):
to be based in China. Plus, obviously, freedom of speech
is critically important. I'm not really sure if banning a
platform falls into the bucket of restricting free speech, but
then I'm no constitutional expert either. Also, there's nothing stopping
someone else from making a similar app. In fact, we've

(21:35):
seen that, right? Instagram, YouTube, Snapchat, and others have all
introduced features that are extremely similar to what TikTok does.
I think it's fair to say some of these have
outright tried to copy what TikTok does, to varying degrees
of success. So I don't know that eliminating a platform

(21:56):
amounts to the same thing as eliminating Americans' free speech.
But again, I am not an expert on the subject matter,
so I don't know. I am genuinely conflicted. I do not
know what to think about this particular topic. TikTok itself
is introducing features that are meant to limit screen time
for younger users. So all TikTok users who are under

(22:19):
eighteen will get a message when they hit sixty minutes
of screen time in a day once this rolls out,
and at that point, the user will see a prompt
asking for a passcode before they can continue watching content
on TikTok. However, they can also disable the feature entirely,
but after one hundred minutes of screen time in a day,

(22:40):
they will receive a prompt that requires them to create
a new daily limit. Now I'm not sure how effective
this will actually be at limiting screen time, because, to me,
it sounds mostly like something that the average user would
just kind of roll their eyes at and then disable
and then continue on unless parents set the passcode and
they don't tell their kids what the passcode is.

(23:01):
But then if the user can actually just disable the feature,
I find it hard to believe that most folks will say, ah,
thank you, TikTok, where did the time go? I shall
now go outside to take in the fresh air and
play at sport or something. I guess you could say.
I'm skeptical that this is going to make much of
a difference. Some of the other features potentially could help

(23:21):
parents keep an eye on how much time their kids
are spending on the app that at least allows for
intervention if usage spirals out of control. I'm glad I
don't have kids, because I don't know how I would
approach this one either. I'm also glad that TikTok was
not a thing when I was a kid, because I
have a feeling I would have been a hardcore addict

(23:45):
of TikTok if I had had the opportunity to access
it back when I was a kid. In the UK,
a man named Duncan McCann has lodged a formal complaint
with the country's Information Commissioner's Office or ICO, accusing YouTube
of collecting information about the videos that children are watching

(24:05):
within the UK, and this is against the ICO's Children's Code.
YouTube responded by saying that the platform has never been
intended for children under the age of thirteen, that accounts
that are registered to young users follow protocol. They don't
collect data on young users if that's the account that's

(24:26):
connected to YouTube, and that, for the younger kids, there's
also the YouTube Kids platform, which also does not track
activity and collect personal information. But McCann's argument is that
a lot of kids are accessing YouTube on family accounts
or on family devices that are under a parent's account,
and that these kids, as they use the app, have

(24:47):
their data and activity tracked. And you might be thinking, well, yeah,
if YouTube is being told an adult is in charge
of the account, then YouTube is going to treat the
activity on that account as if it were any adult
using it. So obviously it's going to track all the information.
That's the YouTube business model. And you might wonder what

(25:08):
McCann's solution to this problem is, And essentially he says
the ideal solution would be to create an opt-in
system in which only accounts that are registered to adults
would have the option to agree to having their activity tracked,
kind of similar to how Apple approaches app tracking. So

(25:28):
it becomes an opt-in system, and McCann believes
that only a minority of users would ever opt into
such a system, and I'm pretty sure he's right. But
I also bet that if you forced that change on YouTube,
it would result in such a drastic impact to the
company's revenue that they would have to make drastic changes
to operations or else it would become too expensive to

(25:51):
run the business. Keep in mind, they are hosting hundreds
of hours of new content every single minute. So as
it stands, this matter is going to test the ICO
Children's Code. The UK only put that code into operation
in twenty twenty, so it's a pretty young set of rules.

(26:12):
And this, to me starts to raise larger questions because
if you start with the premise that a child could
possibly access a particular device or account that belongs to
an adult, does that mean that all online services from

(26:32):
here on out have to be designed in such a
way to assume by default that a child could be
accessing it, Like, do you have to start thinking, well,
a child might have gotten hold of their parents' iPhone,
and because of that, we need to design this so
it's child friendly, because obviously that would end up impacting everything.

(26:55):
There are tons of apps that are not appropriate for children,
whether because of their content or their use. I mean,
like banking apps would not be appropriate for children. Right,
So if you start from that premise that you have
to assume that a child could be using this, therefore
you can't be tracking data or usage. It would mean

(27:16):
that tons of things would have to change. So I'm
very curious to see how this develops, because I don't
see it as being sustainable. All right, I've got four
more stories to cover, so we're going to take another
quick break. When we come back, we will wrap up
tech news for this week. Okay, we're back, and here

(27:46):
is a quick Airbnb story. I sometimes resist including Airbnb
stories on TechStuff because the company kind of is
a tech company, and kind of isn't. I mean, ultimately
it is a tech company. It's just that our experience
with Airbnb is more on the actual like either hosting

(28:07):
a property or staying at a property, and not so
much thinking about the back end that's making all this happen. However,
in the case of this particular story, I think it
really qualifies as a tech company. So sometimes Airbnb will
issue a ban on a user and prevent that person
from ever being able to make a reservation at an

(28:29):
Airbnb property. There are a few reasons why Airbnb would
do this. They might do it if someone has been
reported as violating the rules, like if a host says, hey,
you know, I opened up my home to this renter
and they ended up causing an enormous amount of damage
that I'm now going to have to address, that might
be a reason. In some cases, it might be a

(28:52):
background check. Airbnb does partner with a company that does
rapid background checks. If that background check reveals that person
has a criminal history, that could be a reason to
get a ban. In fact, even in a few cases,
the quote unquote criminal history has been one where someone
was found guilty of a misdemeanor charge that

(29:12):
wasn't remotely related to rental of property. There was one
story about how someone had a misdemeanor of having their
dog off a leash in an area that required dogs
to be leashed, and that alone prevented them from being
able to stay at Airbnb, and that does seem like
that might be an overreach. And on the one hand,

(29:34):
you can understand how a company like Airbnb would err
on the side of draconian caution because Airbnb ultimately is
matching prospective customers with hosts, and Airbnb does not own
this property. In most cases, a host owns the property.
So if Airbnb allows some I don't know, TV tossing

(29:56):
rock star to totally trash a host's home, there would
be some pretty major problems. And Vice reports that Airbnb's
policies now extend to folks who haven't broken any rules
and don't have a criminal past, but who have been found
to associate with someone who has already received a ban
on Airbnb. So let's just say, for example, that you

(30:19):
happen to be friends with somebody who occasionally makes bad
life choices, this person goes and does something that gets
them banned by Airbnb. Well, then you might find the
next time you try to book a place that you've
been banned by association because Airbnb did a little quick
background check and saw through Instagram that you and this

(30:40):
friend of yours had been on multiple trips together. And
they're like, oh, well, they travel with this person who
we've already banned, so now we're going to ban them too,
even though they haven't been found to have done anything
wrong themselves. Now there is an appeals process, but it's
not very transparent. If you would like to learn more
about this, I recommend the Vice/Motherboard article. It's

(31:02):
called Airbnb is banning people who are closely associated with
already banned users. Over in Texas, Tesla has announced during
an investor call that it will offer Texas Tesla owners
an overnight home charging package that costs thirty dollars a
month for unlimited overnight charging of their Tesla. This is

(31:25):
to encourage Tesla owners to recharge their vehicles at night,
and, further, it will depend heavily on electricity generated by wind farms,
so it comes from a sustainable source. Tesla executive Drew
Baglino pointed out that quote Texas has a ton of wind,
and in Texas, the wind blows at night end quote.

(31:45):
According to Insider, the average monthly cost to charge a
Tesla would typically amount to around fifty six dollars a month,
so thirty dollars a month would be a bargain. Now
there are restrictions. Only people who happen to live in
an area of Texas that allows homeowners retail choice in
electricity providers would be able to qualify, and they will

(32:07):
already have to have a Tesla Powerwall battery installed in
their home. So they have to meet these qualifiers
first before they can be part of this particular incentive package. Now,
on that same investor day call where we got this
incentive announced, Elon Musk himself said that Tesla's humanoid robot

(32:30):
program called Optimus, is one that he believes will lead
to a future in which humanoid robots could potentially outnumber
humans in a greater than one to one ratio, he said.
He also said, quote, you could sort of see a
home use for robots, certainly industrial uses for robots, humanoid
robots, end quote. I respectfully disagree. I think we've seen tons

(32:57):
of examples of how humanoid robots are not always the
best approach. In fact, they rarely are the best approach. Now,
hear me out. The reason industrial processes are the way
they are is in large part because we humans have
certain abilities and certain limitations. For example, before we got

(33:18):
to industrial robots, the way we built a car was
not necessarily the best way to build it, full stop.
It was the best way to build it based upon what
we humans can do. But then we could also design
robots to do stuff that humans can't do, which means
we can actually make those processes better and more efficient
and safer and less expensive. Because we can start from

(33:41):
scratch and design an idealized industrial process that isn't limited
by the capabilities or lack thereof, that human beings possess.
Robots don't have to be humanoid at all, And in fact,
making robots humanoid means the machines end up having similar, but
not identical, limitations to human beings. So why would we

(34:05):
limit ourselves to this? Why would we choose the humanoid
robot approach if it means that we have to make
all these other considerations just to make them work. Plus
it turns out creating a really good humanoid robot is
exceedingly difficult. Then you have to take into account how
humans and robots will interact in social settings. You might

(34:27):
spend a ton of time making a robot that works
great in a laboratory setting and then find that once
you put it into the same environment with humans, there
are tons of problems that crop up that you didn't
anticipate because you didn't take into account how humans would
react to this machine. I guess what I'm saying is
that I'm far more skeptical about humanoid robots being super useful,

(34:52):
at least in the near term, because I'm not convinced
they fix many problems and in fact might make some
stuff a whole lot harder. Finally, DARPA, which is the
US Department of Defense's agency that funds technology intended to
advance the US's defense capabilities, has announced an initiative called

(35:12):
the Speed and Runway Independent Technologies Program or SPRINT. According
to the agency's director, Stephanie Tompkins, the goal is to
develop aircraft that can take off and land without a runway,
but also still have excellent speed and mobility. How the

(35:32):
aircraft achieves these goals is not part of the brief,
and that makes sense. DARPA's method is to propose an
engineering challenge, like this is the goal we want to achieve.
It comes down to various companies and research institutions to
attempt to meet that goal, often taking very different pathways
to try and achieve it. DARPA is really more about

(35:55):
awarding contracts for these jobs. The agency itself is not
some sort of skunkworks laboratory. Instead, it's
more of an administrator that evaluates proposals from various sources
and then chooses which ones to fund. As for why
the Department of Defense would want runway independent aircraft, it's
likely to make certain that the US would be capable

(36:15):
of fielding aircraft even if an enemy were to target, say,
military runways, because as it stands now, satellite imagery has
pretty much blown the cover off of military runways and
air fields. Once upon a time, there were secret air
fields and secret runways on military installations that people just

(36:35):
weren't aware of, at least not widely aware of. But
satellite imagery has really changed that pretty dramatically, and even
the fabled Area fifty one was not immune to this.
You can easily imagine scenarios in which you might want to, say,
evacuate people from a region. Maybe there's a natural disaster,

(36:56):
maybe there's a military threat, and you want to send
rescue operations to help evacuate the area, but you might
not have access to a runway to land and then
take off with your evacuation aircraft. So having a way
to land in those kinds of conditions would be absolutely critical.
It will be interesting to see how respondents will propose

(37:17):
different solutions to this problem, because again DARPA did not
specify anything. There was no mention of vertical takeoff and
landing or any related technology. So we might end up
seeing some really innovative solutions to this issue, and that's fascinating.
In fact, I would argue that a lot of the

(37:38):
technological advances we've seen from DARPA projects came as a
result of DARPA defining the problem but giving all the
different parties involved the freedom to craft their own solution
to that problem. Really interesting stuff. All right, that's it
for the news. If you have suggestions for topics I
should cover in future episodes of tech Stuff, feel free

(38:01):
to reach out to me. One way to do that
is to go over onto Twitter and tweet to the
handle tech stuff HSW. Another way is to download the
iHeartRadio app. It's free to download, free to use. Type
tech Stuff in the little search field. It'll take you
over to the tech Stuff page. You'll see a little
microphone icon there. If you click on that, you can
leave a voice message up to thirty seconds in length.

(38:22):
I look forward to hearing from you, and I'll talk
to you again really soon. Tech Stuff is an iHeartRadio production.
For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
or wherever you listen to your favorite shows.