
July 8, 2024 41 mins

The AI startup scene is bonkers. Investors are pouring so much money into AI startup companies that some of those businesses are making unsubstantiated AI claims. We explore stories of a few companies that weren't as AI-focused as they initially claimed to be.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts and how the
tech are you? So, in the late eighteenth century, there

(00:24):
was a man named Wolfgang von Kempelen, and he had
a clever idea. He really wanted to knock the proverbial
socks off of Maria Theresa, the Empress of Austria. Moreover,
he wanted to make a more spectacular display than an
illusionist named François Pelletier, who had performed for the Empress

(00:49):
to great renown, and Kempelen was not impressed. He was like, huh,
I'm gonna show Frank up. I'm gonna make something that's
really gonna rub his face in it, and the Empress
is gonna think I'm her favorite. So, fueled by a
competitive and perhaps petty spirit, Kempelen came up with an
invention that some would call the Mechanical Turk. Now the

(01:13):
machine, I hesitate to call it that, but the machine consisted
of a large, table-like cabinet, and on the top
of this cabinet was a chessboard and standing behind this
cabinet was a mechanical man dressed in the Western European
concept of traditional Turkish attire. If you were to open

(01:35):
the cabinet doors, you would reveal a mass of gears
and cogs and such, so it looked as though everything
was mechanical. Kempelen claimed that this machine could play an
expert game of chess against any opponent, and as it
turned out, the machine performed very well and won more
games than it lost. But it was all a trick.

(01:56):
The machine wasn't really a machine, or at least it
wasn't a machine that did any work. Instead, hidden inside
the cabinet, concealed from exposure, hidden by these gears and cogs,
was a cramped human chess player, and the player could
manipulate the Turkish figure and was able to play chess

(02:18):
from below the chess board. So it was an actual
human being who was actually playing these games against people.
It wasn't some mechanical construct. The Turk only seemed to
be a chess playing machine. Now, this was way back
in seventeen seventy. Today, in twenty twenty four, we still
have to deal with companies and entrepreneurs peddling artificial intelligence

(02:43):
that when you look at it more closely, is really
relying on plain, old, reliable human intelligence. Why, Well, the
short answer for that is money. It seems like, you know,
not a day goes by in twenty twenty four that doesn't
include at least one news story about how artificial intelligence

(03:04):
is going to completely change our lives. And the stories
run the gamut of hyperbole, from doomsday prophecies about weaponized
AI making battlefield decisions, to company executives saying that
AI programs are a viable alternative to hiring actual human beings,
to optimists who describe a Star Trek-like utopia in

(03:27):
which AI handles all the dull stuff and it leaves
us to experience the world as a never ending series
of adventures. I'm not sure if any of those scenarios
are what's actually in store for us, but I do
know things are going to be messy for a good
long while. But AI is such a buzzy term, and
with big companies like Google, Microsoft, Amazon, Apple, Meta and

(03:53):
more all stomping relentlessly forward to make AI the next
big thing, there are literally billions of dollars pouring into
various AI pursuits. Now, with that much money and enthusiasm
at play, it's no wonder that dozens of startups attempting

(04:13):
to cash in on the gold rush have cropped up
in recent years. And some of those companies might actually
be making genuine strides toward advancing AI or implementing it
in a useful way. Some might just be jumping on
the opportunity to get some of that sweet, sweet VC cash.
Since AI is the new metaverse slash NFT slash virtual

(04:38):
reality slash three D technology thing, what I'm saying is
that we've been through this hype cycle many many times before.
The term AI itself is incredibly useful if you want to,
you know, sell some snake oil, because AI as a
term is still a bit vague. Like the term AI

(05:03):
is seventy years old at this point, and yet we
don't have an easy definition for what really is artificial intelligence.
It's kind of like our definition for actual intelligence. We
don't have a super great explanation for that either. We
have ways of describing parts of it, but we don't

(05:24):
really have a holistic, perfect encapsulation of what intelligence is.
So how could we do that for artificial intelligence? You
don't even have to make an AI application or implementation
to take advantage of the opportunities that this vague state
of affairs creates, you just call whatever thing you're trying

(05:45):
to sell AI, and you let the hype do the
work for you. Because people don't understand it fully, you
probably aren't going to get called out on it unless
you're really sloppy, which means you can make hay while
the sun shines and then get out of town
when the clouds roll in. So today I thought we
would talk a bit about fake artificial intelligence, or perhaps

(06:07):
we should call it artificial artificial intelligence, which in a
way comes back around to just plain old intelligence, because
we're going to chat about some cases in which a
person or group of people passed off stuff that isn't
really AI, but was rather powered by human intelligence onto
unsuspecting targets. First, however, let's do a quick refresher on AI,

(06:29):
because I find that the term is so broad and
it is so overused that it's really starting to lose
its meaning. These days, as consumers, you and I
are most likely to encounter AI in the form of generative AI.
That's the hotness right now. And I know I'm old,
I use phrases like the hotness. Sorry. But this is

(06:54):
artificial intelligence that's capable of generating something. Thus you have
generative AI. Now the something might be written text, it
might be spoken words, it might be music, it might
be a sketch or a painting. And there's no denying
that generative AI can be really impressive when it works well.

(07:15):
It seems to be able to do the same sort
of things we humans can, though there remain questions regarding
how much of that is thanks to the various AI
programs cribbing from actual human work. Because you'll often hear
human artists argue that generative AI borrows liberally or outright
steals if we're being more forthright about all this, and

(07:37):
does so from human artists. You know, AI doesn't magically
know how to paint something in a specific style or
maybe even more specifically, to mimic a particular artist's style
and technique. The AI quote unquote knows how to do
this because it has been trained to do it on
countless examples of actual human generated art. That's a real

(08:00):
problem because it could mean that the AI is lifting
from real artists, and thus potentially putting real artists' livelihoods
at stake. But there are tons of other implementations that
have nothing or at least little to do with generative AI. So,
for example, facial recognition technology is a discipline under artificial intelligence.

(08:25):
The basic task is to compare an incoming signal with
a database of image records. This is relatively trivial if
the incoming signal is one that matches the database record precisely.
In other words, let's say that you've got an incoming
signal where the camera angle, the lighting, the distance from

(08:45):
the subject, all of that stuff is the same as
the reference image that you have in your database. Then
the computer can very quickly say, yes, these two are
a match. Typically, it does get trickier if you are
moving away from whatever types of faces the AI had
been trained upon. But it gets even trickier if conditions

(09:08):
are different between the incoming signal and the reference. So
an example I often give, and this isn't facial recognition,
this is image recognition. So imagine you have a coffee mug,
and let's say that first we have a picture
of a coffee mug. It's sitting on a table. The
mug's handle is pointing to the right, you know, to

(09:29):
our right. As we look at the picture. The mug
is dark red in color. The body of the mug
is essentially just, you know, a simple cylinder. There's
no writing on it or anything. This is the reference
image that we are using. It's the one that's in
our database. Now imagine that you've pointed a camera at
a coffee mug, and this mug is an oversized coffee mug,

(09:53):
and it's off white in color, and it has the
words World's Greatest Podcaster on the mug, and the
handle's pointed to the left, not the right. And this
mug actually isn't a perfect cylinder. Let's say that it
kind of curves outward from the base, you know,
sort of like a bowl more than a cylinder. And

(10:13):
if I ask you what is this thing, you would
quickly say, oh, that's a mug, or maybe that's a
coffee mug. You'd say that right away. You would recognize this.
But it doesn't match the reference picture in our database perfectly, right, Like,
it doesn't look exactly like or even close to the
red reference mug we have in our database. It's got

(10:34):
features that make it a mug, and you, as a
human being, can naturally apply your knowledge of those features
to identify a coffee mug. Even if you've never seen
that specific mug before, you immediately know, oh, that's a
coffee mug. Even if the coffee mug's form deviates from
others you've encountered in the past, you're able to apply
your intelligence to say that's a coffee mug. But a

(10:56):
computer cannot do that, not on its own. It has
to be fed hundreds of thousands, or even millions of
images of coffee mugs, all in various shapes and sizes
and colors and orientations to the camera and more. Even then,
there's no guarantee that the computer will be able to
identify a new image of a coffee mug that deviates

(11:17):
from this collection of reference material.
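To make that brittleness concrete, here's a minimal sketch in Python. Treat it as an illustration only: the feature vectors are made-up numbers standing in for whatever a real recognizer would extract from the pixels, and the threshold is arbitrary.

```python
import math

# Toy image matcher: each "image" is reduced to a feature vector.
# Real systems learn these features from training data; the numbers
# below are invented purely for illustration.
REFERENCE_DB = {
    "red_mug_reference": [0.9, 0.1, 0.3, 0.8],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, threshold=0.95):
    # Compare the incoming signal against every record in the database.
    name, score = max(
        ((name, cosine_similarity(query, ref)) for name, ref in REFERENCE_DB.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else (None, score)

# Same mug, same angle, same lighting: the vectors line up exactly.
print(best_match([0.9, 0.1, 0.3, 0.8]))  # ('red_mug_reference', 1.0)

# Different mug, different orientation: similarity drops to about 0.41,
# below the threshold, so the naive matcher fails, even though a human
# would say "coffee mug" instantly.
print(best_match([0.2, 0.7, 0.9, 0.1]))
```

The gap between those two results is exactly the gap that training on huge piles of example images is meant to close.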
So we can help computers by applying metadata to information. We might take a
photo of a new coffee mug and apply metadata labels
to this image so that a computer can quickly reference
the metadata and then pull up our new photo of
a coffee mug when we ask for it. But this

(11:38):
is not the same thing as quote unquote knowing that
it's a coffee mug. That would be more like using
a reference index in order to pull up the matching image.
It doesn't involve the image itself, it just involves the
metadata about the image.
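Here's an equally minimal sketch of that reference-index idea, again in Python. The filenames and tags are hypothetical; the thing to notice is that the lookup never examines a single pixel, it only consults labels that a human wrote down.

```python
# A metadata index maps human-supplied labels to files. The "knowledge"
# lives entirely in the tags; the image data itself is never examined.
# These filenames and tags are hypothetical.
photo_index = {
    "mug_photo_042.jpg": {"object": "coffee mug", "color": "off-white"},
    "cat_photo_007.jpg": {"object": "cat", "color": "tabby"},
}

def find_photos(**wanted):
    # Return every photo whose metadata matches all the requested tags.
    return [
        path
        for path, tags in photo_index.items()
        if all(tags.get(key) == value for key, value in wanted.items())
    ]

print(find_photos(object="coffee mug"))  # ['mug_photo_042.jpg']
```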
So facial and image recognition are just two of thousands of different AI implementations that have

(12:01):
nothing to do or very little to do with generative AI.
They might ultimately have stuff to do with generative AI,
but that's because of convergence. It's not that they're the
same thing. It's that these different disciplines are converging
into new implementations. Alan Turing, the great computer scientist, theorized
that machines might one day be able to take all

(12:22):
available information in a given situation and apply reasoning to
that situation in order to reach conclusions similar to how
we humans operate, and he wrote about it in a
paper titled Computing Machinery and Intelligence. It was all hypothetical
at the time, since computers were still quite primitive back then.

(12:42):
For one thing, they lacked the capability of persistent memory.
I'll explain more in just a moment, but first let's
take a quick break to thank our sponsors. Okay, we're back.

(13:03):
What was I talking about? All right, persistence of memory.
And I should get myself some of that. Anyway, what
I meant by that with Alan Turing and the lack
of persistent memory, is that the computers of Turing's day
could execute a command, but they couldn't quote unquote remember
what it was they just did. They would just perform

(13:24):
an operation, and they would continue to perform that operation
on new incoming data until you changed all the factors
of the computer, which often involved physical switches and cables
and plugs and stuff. Like, it was a big deal
to set a computer up to run calculations, so you
couldn't naturally build upon an outcome and then do a

(13:49):
new operation. You had to do a lot of work
in order to make this happen. But Turing thought, there
will come a day where computers will be able to
do this. They'll be able to complete the task, create
an outcome, and then take that outcome and then perform
new tasks upon that outcome, all with the goal of

(14:10):
some specific outcome further down the line, like ten or
twenty steps further along. So it took a whole
bunch of different smarty-pantses from different disciplines, who
were able to advance the technology of computing, before machines
could actually have something that resembled memory, let alone this

(14:31):
capability of taking in information and then being able to reason.
So from transistors to integrated circuits to computer languages, et cetera,
a lot of different pieces had to come together in
order to even make this a possibility. So in nineteen
fifty six, the Dartmouth Summer Research Project on Artificial Intelligence

(14:52):
saw boffins from across the young discipline of computer science
gather to talk about researching concepts relating to machine intelligence.
This was the conference that serves as the official birthplace
for the term artificial intelligence. While there were a lot
of people really excited about the idea and many people
attending the conference felt pretty sure that machines would one

(15:14):
day reach a point where they could simulate human intelligence,
there was no agreement on exactly how this would happen.
No one proposed any standards or anything like that, and
because of that, the following decades would see various researchers
pursue their own pathways toward a common goal. So everyone

(15:35):
kind of knew where they wanted to get, but they
weren't in agreement as to how they were going to
get there, and so there was a lot of different
work being done in different approaches toward artificial intelligence. In
the nineteen sixties, a computer scientist and programmer named Joseph
Weizenbaum created an early chatbot called Eliza. So this chatbot

(15:57):
is exceedingly primitive by today's standards, and it gave the
illusion of understanding communications from a human being. But in fact,
Eliza was really just spouting off responses using some rudimentary
pattern recognition and substitution strategies. So in a very superficial
and not particularly useful way, Eliza could chat with humans.
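If you've never seen how little machinery that illusion takes, here's a minimal Eliza-style exchange sketched in Python. The rules below are my own invented stand-ins, not Weizenbaum's actual script, but the mechanism, match a pattern and substitute the user's own words into a canned template, is the one described above.

```python
import re

# Eliza-style rules: a pattern to recognize, and a template that
# substitutes a captured fragment of the user's input back at them.
# These example rules are invented; Weizenbaum's real script was larger.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
FALLBACK = "Please, go on."

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Substitution: echo the captured words into the template.
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about the future"))
# Why do you say you are worried about the future?
print(respond("Nothing interesting happened today"))
# Please, go on.
```

Modern chatbots swap those hand-written rules for statistics computed over enormous amounts of text, but the basic shape, rules in, plausible words out, is the point being made here.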

(16:19):
I think Eliza is a really important early step in
artificial intelligence. I would also say it's not very intelligent.
It's following a pretty simple set of rules in an
effort to simulate conversation, and limited conversation at that. And
while we have much more sophisticated chatbots now, ones that
can draw on immense libraries of information and use complicated

(16:42):
statistics to select words and word order, ultimately they're kind
of doing the same thing. They are using rules to
create an illusion of intelligence. But the projects I really
want to talk about don't necessarily even do that much.
They are creating the illusion of artificial intelligence because
that's a field that's getting crazy amounts of investment. Sometimes

(17:04):
these companies are doing it because they don't yet have
the money to really dive into AI. So it's not
that they want to deceive, it's that until they get
the investment to do the thing they want to do,
they can't do it. AI is expensive. The processing you
need in order to run complicated AI implementations is considerable,

(17:28):
and most people don't have access to that, especially if
you're just starting up a company. So sometimes an AI
startup is not using AI, not in an effort to
deceive investors, but rather as a placeholder with the intent
of using AI later on when it becomes feasible to

(17:48):
do so. Sometimes the company just needs to do a
lot of early work before it can launch whatever AI
tool it has in mind, and this early work
needs to be done by humans. There might be a
lot of generating training data or that kind of thing,
and for that you might employ a bunch of people
to do it, and eventually you will develop your AI tool.

(18:12):
But again, it's not like AI tools just spring
up fully formed and you can make use of them. So
again, there are cases where a quote unquote AI startup
isn't using AI, and it's not necessarily an attempt to
mislead people. But sometimes it might just be a scam.
It might just be an effort to tap into people's

(18:35):
enthusiasm and excitement around a buzzy term, but with no
intent of ever doing any significant work within the AI field.
And this has been going on for quite a few
years now. Back in twenty eighteen, a writer named Qian
Zhecheng, and I apologize for my pronunciation, I am

(18:57):
notoriously terrible about this. Qian Zhecheng wrote a piece for
SixthTone.com, and it's called AI Company Accused
of Using Humans to Fake Its AI. So the company
in question was iFlytek, that's little i, big

(19:17):
F L Y T E K, and among other things, iFlytek
was offering AI powered interpretation services, or at least
that seems to be what the claim was. So the
product was supposed to provide real time interpretation and translation
services and was demonstrated at you know, things like international events.

(19:40):
But then a man named Bell Wang came forward and
claimed that he was part of a group of interpreters
who did the actual work and essentially they were posing
as the interpretation software. So Wang's accusations centered around a
symposium called the twenty eighteen International Forum on Innovation and

(20:02):
Emerging Industries Development. Catchy. At this forum, there was a
professor from Japan who gave a presentation, and his presentation
was in English, and his words were being transcribed in
real time by a speech to text program from iFlytek
and displayed on a screen behind him. So as

(20:23):
he spoke, his words in English were showing up behind him,
but next to the English transcription, his words appeared as
a Chinese transcription written in Chinese characters, also supposedly handled
by iFlytek's incredible technology. Now this is amazing, not

(20:43):
just because you're talking about translation. I mean we have
translation apps out there right, We've got translation tools where
you can speak into an app and have it generate
an actual response in another language. This was incredible because
it's not just translation but interpretation, meaning that the
turns of phrase that the speaker used were being interpreted

(21:08):
and then translated into Chinese so that the Chinese translation
would make sense. Because obviously, like the sayings, the idioms
that we use in one language do not necessarily translate
to another. If I say it's raining cats and dogs,
English speakers know what I mean is that it's raining
really hard. Non English speakers, if they saw that translation,

(21:31):
would wonder why are animals falling from the sky when
that's not what I literally mean when I say it's
raining cats and dogs. So interpretation requires an extra step.
It's not just translating word for word, and in fact,
what was appearing behind the speaker was an interpretation of
the speaker's words. The problem was, Bell Wang says, it

(21:53):
was really him and his colleagues who were doing the
work of that interpretation and translation. So Wang pointed to
the fact that the professor's accent was fairly strong. He
had a Japanese accent as he was giving his English presentation,
and the real time English speech to text program from
iFlytek ran into some issues with this. The program

(22:17):
would sometimes misinterpret what the professor was saying, and so
the transcript had errors in it. If you were reading
along while the professor was speaking, you would see, oh,
the program thought he said this one thing, but in
fact he was saying this other thing. But the Chinese
interpretation of the professor's words did not include these mistakes.

(22:38):
The Chinese translation was accurate, and that's because Wang and
his colleagues were translating accurately. They were listening to what
he was saying, interpreting it, translating it, and then putting
it up in Chinese text, so they had the appropriate
Chinese interpretations displaying, not the mistaken speech to text stuff.

(23:01):
Wang said that iFlytek never really acknowledged the
use of human interpreters at that event, and that the
implication was the technology was doing all the heavy lifting.
So Wang said this made him feel very uncomfortable to
be part of what he felt was a deceptive presentation.
And it's interesting because in that same piece in Sixth Tone,

(23:24):
the piece quotes iFlytek executives who have essentially
said that machines are not a suitable replacement for human
interpreters, and that it's far more likely that the future
of interpreting will involve humans and machines working together, rather
than machines replacing humans outright. Now, perhaps the iFlytek
representatives at the International Forum were a bit over

(23:46):
zealous in promoting the work of their company. But it
feels a lot like the mechanical turk. You know, at
a casual glance, you have a machine that's doing this
incredibly complex action but if you take a closer look,
you see that humans are powering the real process behind
the scenes. Then there's Olivia Solon's twenty eighteen piece in

(24:09):
The Guardian titled The Rise of Pseudo-AI: How
Tech Firms Quietly Use Humans to Do Bots' Work. Now,
I love how Solon frames her piece by saying, quote,
some startups have worked out it's cheaper and easier to
get humans to behave like robots than it is to
get machines to behave like humans. End quote. That, I

(24:32):
feel, is bang on the money. She did a great
job with this article. We humans are really versatile. We
have evolved to be that way. Like, it's not that
we're special. We have millions of years of evolution behind
us that have shaped us to be like this. But
we have to put that same work into machines in

(24:53):
order to make machines perform in versatile ways, and that
is a considerable amount of work. We haven't been working
with computers for millions of years. We've only been doing
it for a few decades. So companies like OpenAI
and Google and such are spending billions of dollars to
achieve that goal. It is not at all easy and
it doesn't always go smoothly. So some startups use humans

(25:16):
in the early days almost as a way to show
the proof of concept for their end product. So sure,
right now, humans are the ones doing whatever it is,
like the coding or the translating or whatever the startup's
AI is focused on. But further down the line, well,
that's going to be bots. Maybe in fact, it'll have
to be bots because if the startup were to take

(25:38):
off and become a big company, then it could become
too expensive to rely on humans to do all the
work that needs to be done when you're operating at scale.
So there's a danger of doing this as a startup,
right? Like, if you're doing it early on, you're saying,
I'm going to be transparent with you. Right now, we
have human beings doing this work, but what we're working

(25:59):
on is developing AI to do the work instead. And
this is how we're presenting it to you, and we
want you to be aware of our goals and our strategy.
If it turns out that whatever they want to do
is too hard to do by AI, like it's just
too hard to develop the AI to accomplish this goal,

(26:21):
and the company is getting big because people value whatever
the process is, you've kind of shot yourself in the foot. Like, yeah,
you might become successful, but you might not be profitable
because you can't switch to AI. You never figured that
part out, and scaling up means that you're employing so
many humans to do the work that you're not being

(26:44):
efficient and you're not really making a profit. That's a real issue,
especially if people are still associating your company with
AI and you're still not doing AI stuff. So it's
a dangerous path to go down, even if you're being
sincere at the beginning. Now, Solon also mentions
a piece in the Wall Street Journal that uncovered how

(27:04):
Google would work with third party companies and allow them
access to user email inboxes. So the identities of the
people who own those emails would be masked, but it
would mean that these third party companies could essentially read
emails and stuff, which seems like a bad idea, right,
Why would Google let that happen? Well, the research was

(27:27):
largely focused on the field of AI generated responses, you know,
like using AI to fire off a quick reply to
someone rather than having to compose a message yourself. But
in order to train AI to be able to do this,
humans have to do it first, and that meant other
human beings were reading, like, Gmail users' emails, so maybe
they would read the email to make sure that the

(27:49):
AI generated response was appropriate based upon the email it
was responding to. Even if you mask the identity of
the people who are sending and receiving these emails, that
still seems a bit sketchy, doesn't it. Because I don't
know about you, but I typically assume other folks aren't
allowed to read messages that were sent to me. I mean,
we have rules about that with physical mail. You would

(28:10):
imagine the same thing applies to electronic mail. Those messages
being sent might include really sensitive information. So let me
give you a personal example. This year, as I'm sure
many of you know, I've been dealing with a lot
of medical issues and I don't mind sharing that if
I do so on my own terms, but I don't
want people to be able to read the messages that

(28:31):
are coming to me from my various doctors. And sure,
the actual identities of users were redacted, so my identity
would be masked in such an email. But I'm sure
you're all aware it does not take that many points
of data to be able to identify someone. It's pretty
easy to do. Actually, there was a famous case, this
was like more than a decade ago now, where a

(28:53):
researcher showed that she could use three points of data
and identify like eighty percent of the people in the
United States based upon those three data points. Now, those
were specific, it was like zip code and things like that.
But my point stands: it does not take a lot
of information for you to be able to identify a specific person.
So having your ID masked is not that big of

(29:14):
a comfort to me.
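Here's the back-of-envelope version of why so few data points go so far. The population figures below are my own rough assumptions for illustration, not numbers from that study.

```python
# Rough re-identification arithmetic. The data points often cited are
# ZIP code, birth date, and sex; the figures below are loose
# assumptions, not the study's actual data.
us_population = 330_000_000
zip_codes = 42_000        # roughly the number of US ZIP codes
birth_dates = 365 * 80    # about 80 years of possible birth dates
sexes = 2

combinations = zip_codes * birth_dates * sexes
print(f"{combinations:,} possible combinations")                     # ~2.5 billion
print(f"{us_population / combinations:.2f} people per combination")  # ~0.13

# With far more combinations than people, most combinations point to
# at most one person, so three mundane data points can single you out.
```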
Solon also cites an older example in her article, one from two thousand and eight, and
this was of a company in the UK called SpinVox
that claimed to use technology to convert speech so that
customers could have their voicemails converted into text messages. But
a BBC reporter named Rory Cellan-Jones said SpinVox actually

(29:37):
was sending these voicemail recordings to call centers in Africa,
which was already questionable under UK and EU law, at
the time the UK was still in the EU,
and that humans were actually transcribing the voicemails into text.
She also cited Bloomberg reports made in twenty sixteen of
companies like x dot Ai using humans posing as chatbots

(30:00):
for the purposes of calendar scheduling services. And she mentioned
a company called Expensify, which made a business expense management
tool reportedly using AI scanning technology to handle receipts. But
it turned out that at least some of those receipts
were transcribed not by a machine but by humans working
for Amazon's crowdsourced labor business. That's a business which has

(30:23):
the appropriate name, Amazon Mechanical Turk. I kid you not.
All right, we've got more to talk about, but I'm
running a bit long, so let's take another quick break
and we'll be back to chat about some more fake AI. Okay,

(30:46):
we're back. Next up, I want to talk about an
article written by James Vincent for The Verge. This was
in twenty nineteen and the article is titled forty percent
of AI Startups in Europe Don't Actually Use AI, Claims Report.
So the report that is mentioned in that headline came

(31:07):
from a venture capital firm in London called MMC, and
MMC looked into nearly three thousand AI startups across thirteen
different EU member states and found that forty percent of
them weren't actually using AI in a way that was
quote unquote material to their business. In fact, the guy
who wrote the report, a man named David Kelnar, went

(31:29):
even further. He said that in those cases, quote, we
could find no mention of evidence of AI, end quote. Yowza.
Not just no evidence of AI, no
mention of evidence of AI. That's, that's not good. So
the piece does go on to give at least some

(31:50):
slack to some of the companies that were included in
this study because they point out that, you know, the
AI designation didn't necessarily come from the startups themselves. Rather,
independent industry analysts may have categorized some of these startups
as falling into the AI bucket, but it wasn't coming
from the company. It was coming from these independent analysts. So,

(32:12):
in other words, it wouldn't be fair to like walk
up to an executive from one of those startups and say, hey,
your company doesn't even use AI. The executive might just
look a little confused and then say, uh, we never
claimed it did. So I don't want to paint with
too broad a brush here. I don't want to suggest
that forty percent of these twenty eight hundred and some

(32:33):
odd companies are purposefully trying to trick people. Some of
them are, I'm sure, but not all of them. Sometimes
it's literally because some other yahoo said, oh, that startup,
that belongs in AI. So this same venture capital firm
MMC gets a shout out in another article I read
while researching this episode. This article is by Lauren Hamer

(32:56):
in twenty twenty one and she wrote it for Chip.
The article is titled how to spot when a company
is trying to peddle you fake AI, and Hamer cites
MMC Ventures just like the Verge piece did, And in
this article, MMC Ventures says that startups that are in
the AI space tend to attract up to fifty percent

(33:19):
more investment dollars than startups that are not in the
AI space. So again we see this is where the
money is. Like, if you know that AI companies are,
at least sometimes, getting fifty
percent more investment than non AI companies, you're probably gonna
start scrambling to figure out how can I shove the

(33:40):
AI into my business idea? Because I want to be
able to get it funded, and there's only so much
funding money that's out there. You're fighting for a pool.
It's a big pool, but it's a pool of investment dollars.
And if you know that people are more likely to
invest in companies that are related to AI, then you

(34:01):
are incentivized to make sure your company is positioned to
at least appear to be AI related. And I imagine
that this number has actually grown since twenty twenty one.
I don't think that this has diminished at all, as
we saw other hype trains derail over the last few years.
Like I mentioned at the top of the show, NFTs

(34:23):
that was a big thing briefly, but it totally and
spectacularly failed. And then the Metaverse that was a really
big thing for like a few months, and lots of
investors got really excited about that. Not to say that
metaverse development has stopped. It's still going on, but it's
nowhere close to the level of hype that it was

(34:44):
a couple of years ago. That means that since then,
a lot of people have shifted over to AI as
the next money ticket. And I'm curious what, if any,
gap exists between startups that claim to be in the
AI space and those that don't. As far as funding goes.
I would imagine that it's more dramatic than it was
in twenty twenty one. Well, in twenty twenty three, the

(35:05):
US government began to weigh in on startups making AI claims,
not just startups, but companies in general, making AI claims. Specifically,
the Federal Trade Commission or FTC posted a blog post
titled Keep Your AI Claims in Check. This is,
again, in twenty twenty three, and the blog post is
a warning to companies that are attempting to fake it

(35:26):
until they make it in the AI space. The FTC
post reads quote, when you talk about AI in your advertising,
the FTC may be wondering, among other things, are you
exaggerating what your AI product can do? And then also
it says, are you promising that your AI product does
something better than a non AI product? And then on

(35:49):
top of that, it says are you aware of the risks?
And finally it says does the product actually use AI
at all? So the implication here is that the FTC
might call on an AI startup or other company to
prove its claims, and if the company is unable to do this,
the FTC might impose penalties on that company. The FTC

(36:12):
is also not the only government agency in the United
States getting involved. The Securities and Exchange Commission, or SEC,
brought charges against two different investment firms, one called Delphia
Incorporated and another called Global Predictions Incorporated. This was a
matter that was just settled earlier this year in March
twenty twenty four. So the charges stated that both of

(36:35):
these companies had made quote false and misleading statements about
their purported use of artificial intelligence end quote. So as
I said, the two companies each settled with the SEC
just this past March, and in total they paid four
hundred thousand dollars in civil penalties. So obviously, with the
recent explosion of AI startups, there are lots of similar

(36:58):
articles that are coming out about being wary of
AI claims. One is by Pauline Tomaer. I believe that's
how you say Pauline's name. It's a twenty twenty three piece.
It's in a blog called BQool.
It's spelled B, q, o, o, l. The post is titled How to

(37:20):
Spot the Fake AI Claims. That's a good one. And
then Shekar Quatra has an article titled The AI Hype
Machine: When Companies Fake It Till They Make It. I found
that one on LinkedIn. Actually it's also where I first
saw the term AI washing, which once I saw that term,
I thought, oh, well, of course that's a perfect phrase,

(37:40):
because we're already familiar with stuff like greenwashing. That's when
a company claims to follow eco friendly processes, but in
fact it fails to live up to those promises. AI
washing is similar. A company uses AI to drive interest
in and support for the business, even if the company itself
has little, if anything to do with AI. Now, Quatra's
piece is largely a warning to potential investors that it

(38:02):
behooves you to examine a company's claims closely and to
employ critical thinking before handing over a sizable chunk of change.
Of course, that's true no matter what business a startup
might be in. But the frenzy around AI creates the
sense that if you do not act now, you're going
to be left behind, and you'll sit there while your
neighbors and co workers all make millions of dollars and

(38:25):
they move out to live on solid gold yachts or something,
and you're stuck at home doom scrolling through your various
social media accounts. So don't give in to the fomo, y'all.
But my warning goes beyond investors. My warning is for
all of us out there. We always need to remember
to use critical thinking. I say that as someone who

(38:46):
often will forget to use critical thinking. It's terrible.
I say it all the time. When I do use
critical thinking, I'm always thankful for it. But the point is, like,
this is a skill you exercise. It's not something that
just passively happens in the background. You've got to employ it,
and we have to remember to do that. We need
to remember to ask questions, and we have to examine

(39:06):
the answers that we receive. And we need to do
this for a lot of reasons. So top reason is
probably just you don't want to get tricked, you know,
unless you're at a magic show, in which case that's
exactly what you want. But typically getting tricked means someone
is taking advantage of you, and that's not cool. But
another good reason is that we need to look into
how a company is actually doing its business. For example,

(39:29):
if that business involves relying on call centers or data
centers located in developing countries, and it all depends upon
severely underpaid staff working insane hours to do the things
that a company claims AI is doing, well, that comes
across as mightily unethical to me. I've seen far too
many stories about people enduring terrible working conditions while the

(39:53):
companies that are exploiting those people are posting record profits
and shareholder returns, all while claiming that AI is the
cornerstone of their business. That just strikes me as inherently
unethical and really downright evil if we're getting honest about it. So,
I feel like critical thinking is important, not just for
our own welfare, but those of people who live in

(40:13):
other countries. Like I do want them to find gainful employment,
but I want them to find employment that's
not, you know, exploiting them to the point where they're
falling apart, and meanwhile these companies are posting record profits.
It's good to remember that AI can be dangerous, not
just through misuse or weaponization, or through having AI replace

(40:35):
real folks at their jobs, though those are real dangers.
I mean, my old colleagues at the editorial department of
HowStuffWorks dot com found themselves out of a job when
the site shifted to AI generated articles. If I had
still been there, I would have been one of them.
But yeah, AI can be dangerous even when the AI
itself isn't real. Or I guess that really just points

(40:56):
out that humans can be dangerous and deceptive. But we
kind of knew that already, didn't we. Anyway, I hope
you learned something in this episode. I hope you go
and read some of those articles I mentioned, because they
are really well done and they illustrate some specific examples
of what I'm talking about and might help you spot

(41:17):
when it happens again, so that you ask the tough
questions and you do examine those answers. In the meantime,
I hope all of you out there are doing well,
and I'll talk to you again really soon. TechStuff
is an iHeartRadio production. For more podcasts from iHeartRadio, visit

(41:40):
the iHeartRadio app, Apple Podcasts, or wherever you listen to
your favorite shows.
