Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking, the podcast that looks at the future and says, one
more robot learns to be something more than a machine.
(00:20):
I'm Jonathan Strickland and I'm Joe McCormick, and I've got
a question for y'all. Would you like to ask it?
How unusual! Please, ask your question. Have you ever
noticed how any time there's a human being who follows
you on Twitter or favorites one of your tweets, and
you click through to their profile, and then you click
(00:42):
on their website, it's just some spam page with a
bunch of pop up ads. I will say that I
am not unfamiliar with what you are saying. I try
not to click through to the website. Yeah, that's a
smart move, Lauren, because no, I don't notice them as
pervasively as I was just leading you to believe. But
(01:03):
there is a certain phenomenon that I think we are
all somewhat familiar with now, the fact that there are
let's call them agents loose on the web that are
not human beings, and not like subhuman beings. We're
not being elitist in some kind of way. This isn't
(01:25):
an episode about the future of trolls. Yeah, there are people who lack humanity, but we would probably
say that they possess consciousness and like a brain and
probably some kind of flesh body. Yeah. Yeah, there are
many agents indeed on the Internet that are not possessed
of a flesh body. Yeah. This isn't a new thing either, No, no, no, no.
(01:48):
And so we've talked plenty of times on this podcast
about the Turing Test, about artificial intelligence, about ways of
creating pieces of software, robots, computer programs, apps, all kinds
of things that are designed to try to lull you
into the sense that you're interacting with another conscious agent,
like a human, even if you're just interacting with the machine. Yeah,
(02:12):
and that that might have some very sophisticated kind of
simulation of intelligence, or it might just have a small
collection of a few clever tricks, right, And it may
not even be intended to fool you into thinking you're
talking to a human. It may just be that this
was an attempt to create a more naturalistic means of
(02:33):
addressing an issue, and it wasn't an attempt to, you know, deceive you. However, there are some agents
that were specifically engineered as a means of deception in
one way or another. Yeah. So today we want to
talk about bots, specifically bots on the web, what the
(02:54):
bots are, how to spot them, what some characteristics are,
and then maybe have a little discussion about what we
think about the future of bots. Yeah, so
maybe we should start this off by going back to
an old favorite of ours, which is the Turing test. Right,
So the Turing test. This was something that was kind
of proposed by Alan Turing, and it wasn't really a
test that he was proposing. Yeah, it was more a
(03:16):
thought experiment. Yeah. He was just kind of saying that
if you were to design a computer program that could
fool a human judge a certain percentage of the time.
Usually it's cited as being around thirty percent of the time. Then you could argue that this machine
is at least simulating the appearance of intelligence. And if
(03:39):
it can do that reliably, why would you not just
go ahead and extend it the courtesy of saying it is intelligent?
After all, when I have a conversation with another human being,
I assume that human being is intelligent because of my
interactions with previous human beings and because of my own experience,
why would I not extend that same courtesy to a machine.
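To put a number on that pass criterion: the roughly thirty percent figure mentioned above is the bar usually quoted for a Turing-style test, and the tally works out like this (the function and figures below are invented for illustration, not something from the episode):

```python
# Toy scoring for a Turing-style judging experiment: under the commonly
# quoted bar, a program "passes" if it fools the judges at least ~30%
# of the time. All names and numbers here are illustrative.

def passes_turing_bar(verdicts, bar=0.30):
    """verdicts: list of booleans, True = the judge believed it was human."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= bar

# Ten judges, four fooled: 40% clears a 30% bar.
print(passes_turing_bar([True] * 4 + [False] * 6))  # True
```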
(03:59):
Of course, what you might be thinking is, no, I don't assume that the humans I interact with are intelligent, in which case you're mean. Well, we should observe a
sort of phenomenal difference between the interactions we typically have
with humans, or at least the majority of our interactions
with humans, and the kinds of tests that we now
would call Turing tests, because typically the Turing test as
(04:22):
it's imagined today is a text-based affair. It's talking about communication that's carried out entirely through written text. Right,
you're not talking about looking at a person and seeing their face. That's a whole different kind of thing. We would probably still call that life simulation or
artificial intelligence, like the mimicking of say, human facial movements
(04:46):
or something like that. And there have been tons of studies in robotics to do exactly that, to design robots that
won't creep people out by having more of a human interaction, and yet having to avoid the uncanny valley problem, where if you get too good it actually gets creepy, right. It's too good and yet not perfect. So
(05:06):
it's that gap between good and perfect, where if you make a robot that's good,
then it doesn't seem creepy. If you make a robot
that's perfect, it might not seem creepy. No one's done
that yet, so it's hard to say. But if it's
really close but not quite perfect, it is unnerving. And
(05:26):
the Turing test, the various programs that have been said
to pass the Turing test, in almost every single case we're talking about lowered expectations of what the bar
would be, right, right, We're talking about text communication via
computer with a bot that would purport to be say
(05:48):
a child whose first language wasn't English, or someone else
who would be culturally or grammatically hampered in normal conversation, right,
Or they've just got some clever tricks for changing the
subject whenever they're not sure that they understand you, right, right,
for example, purporting to have some kind of psychological condition
that makes them act really shady. Yeah, exactly.
(06:11):
All of these are real-world examples we're alluding to, right.
But of course when we talk about these programs that are said to, you know, there's an article you see where they say, oh, we passed the Turing test.
These are sort of like staged experiments, right. People set
up the test on purpose, and they have somebody
who's trying to determine if they're chatting on some kind
of instant messenger program with a real person or with
(06:34):
a computer program. But we actually encounter pretty much this
scenario in real life in the wild all the time.
No celebrity judges. Yeah, because bots, let's face it, on
the web, they are everywhere, and sometimes, again, we're aware of it, and it's not a problem. Like
(06:56):
it might be an automated response to a helpline for some service or product that you use. So you
might go to a website, and really it's just a
way of filtering you down so that you either end
up getting the information you want because it's somewhere buried
in a help file, but it's way easier if you
go through a very conversational approach to narrow down what
(07:18):
the problem is, or it then directs you to the
person who would presumably have the actual answer you need. That's one type of bot. But we're really kind of
focusing on the kind that you're not necessarily supposed to
know is being run by a machine. Right. So I've
got a question for you, ask away, how much web
(07:39):
traffic do you think is a real human sitting at
a browser looking for something because they want or need
to know. See, if you had asked me this question
like two weeks ago, I would have said, clearly, all
of the traffic on the web, aside from maybe like
a few researchers that like send stuff out to go
(07:59):
look at stuff just to see what happens, I would have said that like a hundred percent of all web traffic was actual humans, because why, Joe and Jonathan, would
anyone ever create a robot that looks at websites? Robots
don't need websites. Pennies, nickels, dollar bills, hundred dollar bills. Yeah,
(08:22):
we're talking money, money, money. Wow. I didn't expect we
were going to get to hear that today, Prince Charles, Yeah,
I busted that out because you have got to hear these projected numbers. This is pretty eye
opening, from last year. Yeah, there was one figure that came from the Interactive Advertising Bureau that
(08:43):
suggested that around thirty-six percent of all web traffic is quote unquote fake, meaning that it was actually bots, non-humans surfing to websites. And
The reason for this is pretty easy to understand
when you realize how a lot of funding for websites happens.
(09:05):
You have advertising displayed on web pages, and the more
often people are going to your web page, the more
valuable that real estate is. It's kind of the same
as if you want to buy a billboard next to
the major highway running through your city versus the tiny
little side street that hardly anyone goes down. The billboard
next to the highway is probably going to be more
(09:26):
expensive to use than the one on the little side street, and therefore is going to generate more revenue for whoever owns the highway, or the advertising space. Right, right, right,
So if you own that advertising space, what you should
actually do is buy up a bunch of autonomous vehicles
and spend day and night just driving them back and
forth along that stretch of road. And that sounds crazy,
(09:46):
but this is what people are doing, right, just
on the Internet, not on actual streets as far as
you know. So this is actually a problem.
This is not just something where you're like, oh, well, that's weird, that more than a third of all traffic. By the way, that's the conservative estimate, right; there were some experts who were saying that as much as fifty
(10:07):
percent of traffic on the web was actually generated by
non human entities. Now, it's worth saying that these probably
aren't the same forward-facing bots you're going to interact with, the ones that you see in comments sections and on social media. We'll get to those in a minute, but they're probably not also on Twitter. Bots
(10:27):
don't usually diversify like that. These are essentially algorithms that are pretty simple, exactly. This is an algorithm that
navigates a browser to a particular web page, uh and
stays there for a given length of time. That's another
thing that you guys may not know is that the
amount of time spent on a web page can also
impact how valuable that web page is. So if it's just
(10:51):
a hit, if that's all they're looking at, well, that's easy. But if it's more than that, if it's
how much time was spent on that web page, because
if it was a fraction of a second, then clearly
they didn't have enough time to view whatever advertisements might
have been on that page. But as the Bureau has
pointed out, it is a serious problem because it ends
up devaluing digital experiences. If advertisers come to a website
(11:15):
and say, hey, we want to advertise on your page,
but we happen to know that more than a third
of all the traffic going to your page is not
from actual human beings. We're not gonna pay you the
amount you think this page is worth because it's not
really worth that. It's not actual human eyes that are
seeing the stuff we want to put on your page. Right,
And so I'm not a digital marketing expert, but from
(11:36):
my understanding, the way this works is that it's not typically going to be that, you know, let's say you operate
a web page. It's probably not that one third to a half of your traffic is fraudulent. It's that there are
a whole lot of sort of spammy, kind of shady
sites out there that are funneling tons and tons of
fake traffic to themselves. That's sort of changing the average.
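That averaging effect, a handful of fraud-heavy sites dragging the overall figure up while a typical site stays mostly human, is easy to see with made-up numbers:

```python
# Hypothetical traffic mix: two ordinary sites with ~5% bot traffic and
# one spammy site funneling huge volumes of fake hits to itself. The
# aggregate "fake" share ends up near the scary headline number even
# though most sites are fine. All figures are invented.
sites = [
    (100_000, 5_000),   # (human visits, bot visits): ordinary site
    (80_000, 4_000),    # ordinary site
    (10_000, 90_000),   # fraud-heavy site, ~90% bots
]

total_human = sum(h for h, b in sites)
total_bot = sum(b for h, b in sites)
overall_fake = total_bot / (total_human + total_bot)
print(f"overall fake share: {overall_fake:.0%}")  # ~34% of all traffic
```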
(11:59):
There's even, I don't know if you saw this, speculation that it can sometimes be a rival advertising
firm sending huge amounts of traffic to bankrupt their opponent, their competitors, by saying, well, our competitor will pay
out a certain amount of money every time this website
is viewed. If we flood that website with tons of traffic,
(12:20):
they're gonna have to pay out tons of money as
a result because of this agreement they're in, And we'll
end up hurting our competitor this way, I know. Yeah.
Another thing that some advertisers do is use bots to
purchase ads, like software that can identify sites that fit
within the advertiser's plan and then place ads on those
(12:41):
sites automatically at high speed. Wow. So sometimes it's bots
showing ads to bots that were placed by bots. Yeah. Yeah.
All I can imagine is all the Amazon robots going crazy,
like, we have to deliver, no wait, the delivery was canceled, no wait, we've got to deliver, just like
(13:01):
an insane amount of activity inside the Amazon warehouse. And according to the numbers that I read, only about four percent of ads are purchased by bots as of now, but media marketing folks think that could boom enormously in the next ten years. I think that might be helpful. I'm not sure. So if you are, like,
(13:24):
an advertiser trying to advertise on a website and you're
trying to figure out if its traffic is real or fake. Again,
from what I understand, in a lot of cases, this
is pretty easy to do. Like you can look at
certain metrics and kind of say, okay, well, if you have access to the metrics, right, yeah. See,
That's the thing, it actually behooves the people who are running the website to keep an eye
people who are running the website to keep an eye
on those metrics to make sure that they are in
fact legitimate, because if it were ever discovered that they aren't,
whether or not the owner of that website was ultimately at fault, it's going to negatively impact that owner. Right. So, if
the owner is employing bots essentially to visit the website
(14:07):
to drive up numbers, if that ever gets public, that's
a huge black mark against that website and that owner.
If in fact someone else is doing it and the
web owner is completely innocent of it, it can still
be a black mark. So it's one of those things
that now, honestly, if you really look at the long term,
you don't want this happening because it ends up hurting
(14:30):
everybody in the long run. It's only the short term
where it seems to have some gains. Right. But let's say you are one of these people who's trying to
use one of these sneaky tricks to like generate a
bunch of fake traffic. Obviously, the artificial intelligence capabilities are
getting better, and this is an interesting variant on the
type of artificial intelligence we usually talk about. Usually we
(14:52):
talk about a kind of single-agent-oriented artificial intelligence simulation. Like, when you're doing artificial intelligence, you're
trying to create a program that can talk like a
single person or behave like a single person. This would
be artificial swarm intelligence, like it's trying to behave the
(15:13):
way massive groups of people actually look in statistics. Yeah,
and you can imagine, like, if you want to get
super clever with this one, you could try and design
an algorithm that would surf to a page as if it were human, in other words, to follow a
trail as opposed to just directly navigate to a site
(15:34):
so that if someone were looking into it, it would
look as if it was an actual person. And if
you really wanted to be tricky, you could look for
times when it's the most advantageous to initiate a swarm
activity at a website. For example, let's say that there
was a big press release. Then that ends up naturally
(15:56):
raising curiosity, because they've got something new to announce. Therefore you would expect an
increase in traffic. We're just artificially boosting that increase. So
it gets super, like, super-secret spy-type stuff going on.
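Flipping that around to the defender's side, the tells just described, arriving with no trail and bouncing instantly, suggest a crude screen over traffic logs. This sketch assumes a made-up session-log format and is only illustrative:

```python
# Crude bot screen over (hypothetical) session logs: naive traffic bots
# tend to jump straight to the target page with no referrer and spend
# almost no time there, while humans usually follow a trail and linger.
def looks_botlike(session):
    """session: dict with 'referrer' (str or None) and 'seconds_on_page'."""
    return session["referrer"] is None and session["seconds_on_page"] < 1.0

sessions = [
    {"referrer": "google search", "seconds_on_page": 42.0},  # plausibly human
    {"referrer": None, "seconds_on_page": 0.2},              # suspicious
]
print([looks_botlike(s) for s in sessions])  # [False, True]
```

Of course, a bot that follows a referral trail and dwells on the page, as described above, would slip right past a check this simple.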
And this is all on the corporate end, right. But
let's say you're not involved in any kind of web
(16:18):
advertising or anything like that. You're just a normal Internet user.
You use some social media occasionally, you look at the comments.
You know you shouldn't, but you do. Yeah. I'm pretty
much with you, except for the normal parts. So go ahead.
Yeah I haven't figured it out. Well, it turns out
normal is encountering tons of bots all the time. Yeah,
(16:39):
we are all going to be encountering lots of bots
all the time. So you're gonna find bots on Twitter. If you use online dating, you're going to find bots on there trying to scam you with bot accounts. There are bots on Facebook. There are bots in the comments
section of almost any article or anything you look at. Yeah,
(17:00):
they're everywhere. Like, you go to an article or a video, anything really that has a comment section. And whenever you see someone who is
posting a link to something that is truly unrelated to
whatever the content is, that's more likely than not a
bot account. Sure. And even if it's posting a link to something that is related, that can still be a bot account.
(17:22):
And even if it's just seeming to participate in the conversation like a human would, it can be a bot account. Yeah, yeah,
it can be. Yeah, we're going from the easy to detect to the more difficult to detect. Sometimes, depending
upon the interaction, it may be really hard to detect.
(17:42):
And we'll get into that. Okay, So just for the record,
what do we mean when we say bot? Like, what distinguishes a bot from, you know, a full-fledged
artificial intelligence program or something. Complexity is a big part
of it, right, depending upon what you're talking about when you say AI, because, as we've discussed before, AI covers a wide variety of
(18:07):
disciplines, behaviors, that sort of thing. Like, you could argue
that having a sensor that sends a message to a
program that then has a reaction based upon the information the sensor has gathered, that's a type of AI, but it might be a reach. You could even
argue that, like a robot that successfully squirts ketchup
(18:27):
on your plate or on your hot dog has some
kind of well yeah, because it has to detect where
the hot dog is and then detect you know, how
much pressure it needs to use. There's some intelligence involved in ketchup. Yeah. No, if you ever
see the way that I end up trying to put
ketchup on something, you realize I lack that intelligence.
It does. I thought we agreed we weren't going
(18:51):
to talk about the lunch meeting on the podcast. But yeah. So.
But at any rate, if we're talking about the type
of artificial intelligence that is meant to maintain a prolonged
conversation or the type that is supposed to at least
simulate intelligence on a more general level, that's what we
would start to look at as potentially being strong AI,
(19:13):
or at least simulating the appearance of what we would
consider strong AI. Strong AI being a machine that can
truly think, right, a machine that is capable of thinking
and making decisions, perhaps even having consciousness, although there's
an argument over whether or not that would be necessary
to truly be strong AI. We're still talking about weak AI.
(19:34):
in that sense. Bots can be even simpler than that.
Bots can be so simple as to really be, you know, closer to that model of a sensor
that sends information and then the computer reacts in a
specific way. It may be as simple as that. Yeah.
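A bot in that minimal sense really can be just a trigger and a canned reaction, an if-statement run in a loop. A sketch, with invented trigger words and replies:

```python
# A minimal "sense and react" bot: scan incoming text for trigger words
# and fire back a fixed response. No learning, no understanding of
# language, just string matching. Triggers and replies are made up.
TRIGGERS = {
    "help": "Have you tried our FAQ?",
    "price": "Check out this great deal: <link>",
}

def react(message):
    """Return a canned reply if any trigger word appears, else None."""
    lowered = message.lower()
    for word, reply in TRIGGERS.items():
        if word in lowered:
            return reply
    return None

print(react("What's the PRICE of this?"))  # Check out this great deal: <link>
print(react("Nice weather today"))         # None
```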
Typically it's thought of as something that's very simple. It
(19:54):
automates a task on the internet. So you can have, like, hacker bots; that's another kind of thing that we're probably not really going to talk about today. Somebody might create a bot army of a bunch of computers with programs secretly installed that hijack those computers to do what the hacker wants, and some of the
bots we're talking about here may in fact be the
(20:15):
result of that, because it means that the person who
is ultimately responsible for creating the bots has a level
of protection because if it gets traced back, it's traced
back to the victim's computer, not to the hacker's computer. Right.
But then also a lot of bots on the web that we interact with are so simple that they're
(20:38):
not even really like artificial intelligence. I mean, they're a collection of tricks. They might have like three tricks,
and they just do them over and over and most
of the time they're not going to fool anybody, but
every now and then they do, and they're so cheap
you might as well use them. It's a shotgun approach
of just like, well, these aren't going to fool everyone
(20:59):
all the time, obviously, but if one or two get through, then it's the same philosophy, the same strategy, as people who send out, or create networks to send out, spam emails, right, the idea being that, sure, most of the people who get this are never going to respond to it. Yeah, yeah,
they're not going to say, I believe you Nigerian Prince,
(21:20):
thank you, thank you for this information. Exactly. That one percent that does respond, however, if you're sending it to a large enough group of people, is still a significant number.
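The arithmetic behind that shotgun strategy is worth spelling out: at a large enough volume, even a tiny response rate is a significant absolute number. Every figure here is hypothetical:

```python
# Expected-value math for the spam "shotgun" approach: fooling almost
# nobody still pays if each message is nearly free to send. All of the
# figures below are invented for illustration.
messages_sent = 1_000_000
response_rate = 0.01        # the "one percent that does respond"
value_per_response = 2.50   # dollars gained per response, hypothetical
cost_per_message = 0.0001   # near-zero marginal cost per message

responses = messages_sent * response_rate
profit = responses * value_per_response - messages_sent * cost_per_message
print(f"{responses:.0f} responses, ${profit:,.2f} profit")
```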
So it's the same kind of strategy there. In fact,
if we want to give an example to explain this
simple behavior, you could use Twitter. It's a very easy
platform to describe. Let's say that you set up an
(21:41):
algorithm that does a search for certain keywords. Whenever those
keywords pop up, it identifies the person that sent the
tweet that contained those keywords and sends an @ reply to that person with a link attached, the goal
being for that person to end up clicking through that link. Hey,
you mentioned lunch, are you interested in weight loss? Right
(22:03):
through to the spam page? Right? And it may even be more, you know, subtle than that. It may be like, I think you meant, and then the link, or, that's cool, have you seen this,
and then the link? You know, something so that it's
harder for you to initially identify that as a sales pitch. Yeah,
(22:23):
a lot of them work by, I would say, limiting
interaction or information. Yeah, it's just a pretty simple trick,
but sometimes it works. Another thing is I think a
lot of times Twitter bots overcome their simplicity by hiding
in plain sight, like they don't seem to care all
that much if they look like a real human. There are just enough of them working in tandem that they can
(22:45):
blend in with a mixed crowd of real humans and
other bots, especially for certain kinds of uses. And one
of those that I would identify is swaying public opinion. Sure,
so have you all heard about propaganda bots? Yeah,
I've heard about these, and it's incredibly upsetting. So
(23:07):
there are political bots out there. It's a phenomenon in comments sections and on social media. And they typically work by, like, polluting political conversations with garbage or propaganda, or even sometimes perhaps attempts at straight-up intimidation. Just one
example is that, according to some sources, it looks like
(23:29):
the Kremlin has created a Russian Twitter bot army to
sort of like shout down criticism on the web with
pro-government messages. Again, these are allegations. But
we have seen in the past that it doesn't even have to be an officially state-sponsored project, right,
(23:50):
because there can at times be people who have the capability of pulling something like this off who sympathize with the state and its ideological faction. Yeah, it
could be. And the challenge is finding out whether or not there is in fact a state sponsor, which tends to be incredibly difficult unless someone has
(24:10):
made a poor life decision, right someone has, or someone right,
or someone is a whistleblower, which can also be a
poor life decision depending upon the regime you're talking about.
it could be a valuable service, but it can, depending
upon the state, be really dangerous as well. Yeah, one
incident I was reading about earlier was that people have
(24:32):
been referring to this thing that I was talking about earlier as the Kremlin's supposed Russian troll army. Which, just on the phrasing of that, sounds like Tolkien. Yeah, I like to think of all these giant Tolkien-esque trolls wearing fur hats marching through Siberia,
which is kind of awesome. But that's not what we're
(24:54):
talking about, no. What these do is, well, let's say that your comments section or your Facebook feed is Helm's Deep, okay, so to speak. They show up in droves to leave comments and post on
social media, trying to rally public opinion to a pro
(25:14):
government position on issues like Russia's alleged involvement in Ukraine, or in the February murder of the Russian dissident Boris Nemtsov. So I found an interesting post on
a blog network called Global Voices Online by a guy
named Lawrence Alexander and if he's correct, he claims that
he used some open source tools to analyze the Twitter
(25:37):
activity of these accounts that were tweeting the same pro
government messages after the murder of Nemtsov, and he
came to the conclusion that by analyzing their activity, this
was a bot army. These were bot accounts that were created to sort of, like, shout, you know, political
pro government messages. And I don't know, I think that's
(26:01):
a really interesting and kind of worrying development, and like,
what does that mean? Yeah, it's tricky too,
because when we're talking about Twitter, when you know how
Twitter works, you realize that this is something that only works in specific ways. So for example, if I tweet out something that would trigger this sort of bot attack,
(26:24):
when I tweet that out, the people who see that
tweet are going to be the folks who follow me, right,
And as for the bots that respond to me:
The only people who would see the responses would have
to be people that are already following those bots, and they'd have to be following me too if it's a direct @ reply, because that's the way Twitter works,
(26:45):
right. If people want to see what I tweet,
they have to follow me, or they end up
having to search a term that I have used, and
then my tweet, assuming that I'm tweeting publicly, will pop up.
So using Twitter as this platform is difficult in
that if you're trying to sway public opinion, you have
(27:06):
to hope that it's because people are searching certain terms
and because you're using a massive bot army, most of
the conversation ends up being from your perspective. Then that works.
But if if you're talking about just the public discourse
in Twitter, you know you have to first get people
to follow your bots if they're going to even see
what you have to say. Well, I would say that
(27:29):
if you're just looking to sway opinion,
to cause some kind of doubt in someone's mind about
the opinion that they currently hold, having what appears
to be another human person, send you what appears to
be an earnest tweet response to something that you've said,
going like, hey, have you thought about this other thing?
(27:50):
Like, have you considered this? Have you read
this article? Something like that? That could be enough to
start sowing those seeds of different ideas. So in that case, you're talking about working on a case-by-case basis on the actual people whose minds you're trying to change, as opposed to changing everyone's mind at once. You're pinpointing the quote-unquote trouble spots and addressing those. Sure,
(28:14):
and furthermore, the way that the general public uses the metadata, the big data of Twitter, is that a lot of researchers will go in and search these kinds of terms and kind of get a reading on overall public opinion based on the number of people
(28:35):
that are tweeting about a thing, and so that could be slanting journalism or research pertaining to political parties. Yeah, that's a very good point. Also,
I mean I thought about how even if it's not
necessarily convincing you of the correctness of the pro government position,
I mean, what is the like effect on your emotions
(28:56):
of if you say something critical of a government and
suddenly like five Twitter accounts start shouting at you? I mean, you could have an intimidation effect, exactly. Of course,
of course, if you realize that it's a bot and you suspect that your government has sent this. I mean,
even if it's just one or two people or accounts
that are tweeting this kind of thing at you, like,
there's definitely an intimidation effect, right. If you
(29:18):
don't know it's a bot, then it's intimidation simply because you're
getting people who are disagreeing with your perspective, and that
can, if there are enough of them, you can start to question whether or not your point is valid. It would depend on the tone of the tweets and stuff, right,
And if you do realize they are bots, then you're thinking, oh, now I've got the attention of a state. That's not good, necessarily,
(29:43):
so yeah, Joe and I were talking about this, you could even make the argument that
this is an indirect form of censorship because you start
to intimidate people into being silent. Sure, if used in
the right way, I think it could be something like that.
Another thing is that this obviously isn't limited just to Twitter.
(30:04):
Twitter is sort of what we're focusing on because it
is one of the most open kind of ways to
look at this, because, you know, with Twitter it's generally easy to retrieve a lot of data, and it's specifically
what Alexander was looking at when he was analyzing that.
It's also the easiest to automate of any of the social media platforms, because it's very simple, in
(30:25):
that there are very few checks and balances in creating
accounts and sending out tweets, right, But you could also
look at the activity of bots on Facebook and stuff
like that. But sticking with Twitter for a minute, I
think we should drill down and look at what it looks like to be a Twitter bot. What is that existence like, and how do you know one when you see it? Well, I mean, this one is really super
you see Well, I mean, this one is really super
tricky for Twitter in particular because like we were talking
about with the Turing test, where you had that lowered
set of expectations because of whatever, like, the supposed entity you are speaking to is a thirteen year
old boy who does not speak English as his first language,
(31:08):
then you have a difference of expectations. You know, you
realize that that entity is not going to have the
world experience of an adult. They are not going to
have the linguistic sophistication in English of someone who is
a native English speaker, presumably anyway, depending upon your past experiences. But the same sort of thing applies with
Twitter because you have the hard limit of one forty
(31:32):
characters to get your point across, which means a lot of us, even the people who are masters of the English language, people other than myself, for example, would find it challenging sometimes
express a particular thought within those hundred forty characters, and
often we end up having to take various shortcuts or
(31:53):
we have to, you know, make compromises in the way we're trying to communicate our message, particularly if we want to share a
link within that as well. For example, I would want
to share a link of a forward thinking video episode
where I have the idea of what I want to communicate,
but I realize I can't say it and share the
(32:13):
link within a hundred forty characters, so I have to start making cuts,
which means sometimes what I communicate ends up not sounding
entirely human. You say, watch or suffer, something along those lines, or, you will be assimilated, or whatever. So, in other words, because Twitter places
(32:35):
this limitation, because we have all encountered it, and because
we have all made those compromises, it lowers our bar
for what we expect when we encountered that. And it's
also a pretty casual medium, and and people are not
always using it in order to compose poetry dissertation level tweets.
You know, people will just be like, like, yo bae,
(32:59):
watch this, Like I will not click. I realized that
I have just invited a ton of those messages to
come at me. But I will not click on those links.
I don't care what the link goes to. If you write, yo bae,
watch this, it is not happening. I don't get it.
You don't get it. Well, when it starts, when they
(33:21):
start getting the flood, Yeah, then you'll get it. Yeah. Well,
you know, sometimes Twitter bots can be harmless or even delightful.
I'd say, I'm sure. I have to admit I get
intense pleasure from the old favorite Horse_ebooks. Sometimes
I just revisit that when I need a laugh. Uh
the if you're not familiar there, it was an automated
(33:43):
Twitter account, or at least at some point I think
it was automated. At another point I think it was
actually taken over by a human user maybe, but it
was a Twitter account that was famous for tweeting out
these lines of garbled you know, garbage text that from
somewhere yeah, beautiful things right. It was sold in
two thousand eleven to an alternate reality game, a k
(34:06):
a an ARG. These are those. Have you
ever played an ARG? Yeah? I participated a little bit
in the I Love Bees Halo 2 campaign from Circo.
Oh, what was that, like, many years ago? Something
like that. It was a game where you use your
smartphone camera to shoot swarms of bees at your coworkers
(34:27):
did not have smartphone cameras. I didn't right now though.
An ARG is one of those games that has a
real world component to it, where you have to do
certain actions within the real world or certain things happened
to you within the real world that also carry over
into a virtual world, and you progress through the game
(34:48):
that way. They usually they almost always have a very
defined beginning, middle, and end because they employ lots of
people to run the back end, and you can't perpetually
do that. So anyway, in this case, they thought, here's
a Twitter account that already has a dedicated number of followers,
let's go ahead and end up using this for a
(35:10):
viral marketing approach. Yeah, it's a clever idea. It
can sometimes work. Um, it doesn't always work. And there
were other ones too, Like I remember there there were
certain chat uh like like Twitter bots that would respond
if you use certain phrases to grammatically correct you, well,
actually you probably meant this rather than what you actually wrote. Um.
(35:33):
How often were they wrong? It depends, It depends. I can't.
I want to say. There's one that um that Bernie
Burns of of Rooster Teeth has talked about before because
it was a particular uh phrase or word that he
hated seeing because he knew that that people either were
using autocomplete or they were mistakenly using this one word
(35:55):
when they in fact meant a totally different word. And
I wish I could tell you what it was, but
I can't remember what it is off the top of
my head. If I do remember, I'll make sure to
tweet it out when this episode goes live. But but yeah,
so there's some bots that are out there for for
fun entertainment reasons like those, or or to just tweet
like big Lebowski quotes at you. If you quote big
Lebowski on Twitter. I've had that happen. There are a
(36:15):
lot of others that are maybe a little less helpful,
a little less delightful. There there for lots of purposes,
not just trying to sway public opinion in terms of propaganda,
but also maybe for marketing and advertising, or we might
say spamming, for trying to trick you into downloading malware.
Those are the worst, trying to funnel you into some
(36:38):
place where you can be taken advantage of on the web.
So there's a lot of research that's going into how
to identify these bots. And I mean Twitter has a
has a policy that if it finds a bot that
you're using that is malicious, it will shut it down.
So let's go way back to the early Twitter times,
not that early, but well when did Twitter start? About
(37:00):
two thousand seven? So yeah, like so around then it
was that some researchers from Texas A&M came
up with a scheme for catching bots on Twitter. So
it was a phenomenon back then too, and they took
advantage of some observable facts about Twitter bot behavior.
For example, twitter bots like to follow and they like
(37:24):
to retweet, and this is sort of part of the
Twitter bots strategy, right, you need people to follow you,
but normally people don't want to follow a bot because
you're not friends with them and whatever. So they'll take
advantage of the fact that a lot of people just
automatically follow back. Right, they'll follow you hoping you don't
(37:44):
really investigate and you just follow them back and now
you're on their spam list. Well, especially if they if a
bot retweets you, then it's also, you know, trying to flatter you. Yeah, exactly,
it's trying to to to stroke your ego and hope
that you will also either follow or you'll click on
something that the that's in the message. Right, So that's
(38:04):
a sort of observable fact about the Twitter bots. They
like to follow, they like to retweet, but they don't
have very good taste in friends, if any at all.
So the researchers set up these boring, garbage filled bait
accounts that no real human should be interested in as
far as they were concerned, only hipsters and bots follow them. Well,
apparently then they waited for the non discriminating bot love
(38:26):
to pour in, and using about sixty bot bait accounts,
they identified thirty six thousand candidate bot accounts. Back then,
that seemed like a lot. How young we were. Yeah,
I I you know there are tools that you can
use that uh, that will supposedly go through your your
(38:47):
Twitter followers and alert you to the accounts that appear
to be bots. I'm going to talk about one in
a second. Yeah, So I don't want to use one
of those simply because I'm afraid of seeing that I have.
You have three hundred human followers and six thousand non
human followers. Well, as of twenty twelve, you could buy
(39:07):
the follows of twenty five thousand bots for just two
hundred and forty seven dollars. Yeah, this was from a
company that was called Buy Real Marketing. It does
not appear to be operational anymore. Um, but but big
companies and celebrities are purported to have purchased follows like
(39:28):
this is a kind of cheap way to puff up
their online image. Right then, the idea of being able
to to say, look how important I am because this
many people are interested in what I have to say.
I had a conversation with somebody I won't name the
person I talked to, but another online personality who at
(39:49):
one point had said, Hey, I'm not saying that this
is what you should do, but this is something you
could do where if you want, you know, if if
your value as an employee is somewhat based upon the
number of people who follow your account, you should just
on the down low, go and buy these because if
that's going to be the metric, then here's how you
can game the system. So that way you can, you know,
(40:11):
have that metric hit the goal. And I was thinking
the whole time, I was like, I could never do
that personally because I couldn't live with myself, but I
could totally see how if that's if that's how you're
being judged. Well, I mean, there were news stories
back when Klout was more popular, when it
first came out, of marketing people and public
(40:34):
persona type people being hired based partially on their Klout score, right,
and that might end up getting a little bit of
a boost if you suddenly have, you know, twenty thousand people
following you. Yeah, but then that also ends up encouraging
services like Klout to build in elements of their algorithm
to look for red flags like that. So one of
(40:58):
the interesting byproducts that is happening with this is that
we're starting to see improvements in both the detection systems
to try and find bots and the systems used to
create bots so that they are harder to detect, which
is very similar to what we see with security systems,
(41:18):
and it ultimately means improvements in artificial intelligence, which
is generally a good thing, but the actual particulars
might be really irritating. Yeah. And in one advantage of
having this large of a sample size of this kind
of data is that people can study it pretty well.
Right Well, you can, for one thing, compare them to
the accounts of suspected real humans, right,
(41:42):
suspected bots and suspected real humans, and what's what's
different between them? That that would be an interesting thing
to know. Well, people have studied that. Just one was
a group from Indiana University in Bloomington; they did the
analysis on I think it was based on the sample
set created by the Texas A&M researchers,
(42:03):
and they did the analysis on that, and they came
up with this little web app you can check called
Bot or Not, which will evaluate a Twitter account and
let you know if it's a bot. And in fact,
it gives some kind of interesting feedback on bot probability,
Like it's not just like probably a bot. It gives
you these different like metrics of humanness. Have
(42:25):
you have you tried this? I did. It said I was
probably not a bot, but it wasn't real sure. Now
I gotta try it on me. Yeah, you gotta try
it on yourself. It's sort of like a Voight-Kampff
test for Twitter. Like, I imagine it tweeting at Leon,
the tortoise is on its back, hashtag you're not helping.
Why is that? Oh? Man? Uh? This is this is both.
(42:52):
I'll tell you about my mother. Sorry anyway, So we
discovered some things about bot accounts. Here are a few:
They retweet others more often than humans. Makes sense. Um,
they've typically been Twitter users for shorter periods of time.
They typically have longer user names. Uh. There was another
(43:16):
study that I found. It was published in PLOS
ONE that looked solely at tweet times and patterns,
and according to it, bots tweet randomly at all
hours of the day and night, whereas humans are more
likely to tweet, like, (a) in bursts and
(b) from the hours of like seven am to midnight. Obviously,
(43:39):
depending on the human at hand, it could also things
like that. I mean, obviously, if you're using something like
TweetDeck where you can schedule out tweets, then that
might end up making you look more botish than than
other human beings. But let's face it, if you're using
TweetDeck, you really are sort of a robot, TweetDecking
all the time. I have it up right now.
(44:01):
In fact, I'm sorry, I'm just kidding, but no, it's um,
you know, it's it's fair beautiful. But another thing is that, like,
and again this isn't a dead giveaway, but it is
one of those flags is how many people does the
bot follow? Right, Because if you are a human being
and you really do wish to use Twitter in some
(44:22):
sort of useful way, you're probably limiting how many accounts
you're following, because otherwise you can't keep up with what's happening.
It's just a fire hose of content and it's constantly updating,
and you will never be able to see anything from
anyone you're actually interested in following. So if you're following
like I follow, I think it's between two hundred and
three hundred people, and you know, occasionally I'll add folks
(44:44):
to it, or I might drop people off if I
realize that I haven't seen a tweet from them in
a really long time. Um, but that tends to be
my comfort zone because that's what I'm interested in seeing.
But some of these bot accounts, you look and, like,
they follow a hundred and twenty six thousand people, and you
think there's no way anyone could ever follow a hundred
twenty six thousand people and actually have an idea of
(45:07):
what is going on. So well, you can use you
can use filtering options to only follow, to only immediately
follow certain lists. But even so, I mean, it's it's
one of some. In some cases, if it's a celebrity
who wants to end up encouraging as much interaction with
fans as possible, it may very well be completely legitimate.
(45:29):
But if it's some, if it's a name, like just
a long string of numbers after a noun
of some sort or a name of some sort, and
otherwise you don't recognize that person at all. It's an
indication that it could be not necessarily is, but could
be a bot. Yeah, there's another sort of statistically informed
(45:51):
fact that you still shouldn't use too much, because this
fact is that bots more often create a sort of
persona for themselves that is female gendered. Right, And that's because,
according to research, female gendered social bots get more attention. Uh.
(46:12):
This is basically because gender bias exists in our society.
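Taken together, the tells mentioned across this stretch of the conversation, retweet-heavy behavior, young accounts, long usernames, round-the-clock posting, and outsized following counts, could be sketched as a toy score. To be clear, this is an illustrative sketch with made-up thresholds and weights, not the actual model behind Bot or Not or any published detector:

```python
def bot_score(account):
    """Toy bot-likelihood score from the heuristics discussed in this episode.
    `account` is a plain dict; every threshold and weight here is an
    illustrative assumption, not a value from any real system."""
    score = 0.0
    if account["retweets"] > account["original_tweets"]:
        score += 1.0   # bots retweet more than they write
    if account["age_days"] < 90:
        score += 1.0   # bot accounts tend to be young
    if len(account["username"]) > 15:
        score += 0.5   # long, noun-plus-digits usernames
    if len(account["active_hours"]) >= 20:
        score += 1.0   # tweeting at all hours; humans sleep
    if account["following"] > 10_000:
        score += 1.0   # following far more accounts than anyone could read
    return score / 4.5  # normalize to 0..1

suspect = {"retweets": 900, "original_tweets": 50, "age_days": 12,
           "username": "sandra_928475613", "active_hours": set(range(24)),
           "following": 126_000}
human = {"retweets": 20, "original_tweets": 300, "age_days": 2000,
         "username": "joe", "active_hours": set(range(7, 24)),
         "following": 250}
print(bot_score(suspect), bot_score(human))  # → 1.0 0.0
```

A real classifier would learn its weights from labeled data (Bot or Not reportedly drew on a very large feature set) rather than hand-picking thresholds like this.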
But this this was a study that was out of Brazil.
It released a hundred and twenty bots on Twitter for
thirty days. These bots could post tweets generated by algorithm
um and also retweet things, and Twitter caught and deactivated
less than a third of them. Uh. Twitter users themselves
(46:34):
didn't always catch on or maybe care that these were
bot accounts. The bot accounts received a total of just
one shy of five thousand followers during that month. Five
thousand followers. Uh, some bots, some of
the accounts, had like repeat followers from the other
bot accounts. The follows came from a total of
(46:54):
one thifty two users. I wonder how many of those
were bots? Good question. I'm not sure. Yeah, it could
be that it's a bot followed by bots, but we
don't know for sure. But but over twenty of these bot
accounts received over a hundred followers, which puts them in
like the forty sixth percentile of the most popular Twitter users.
That's more followers than I got, right, so I'm kind
(47:16):
of I'm kind of a low engagement Twitter user. You are, yeah,
you you're under the radar for a lot of people.
Then we can get you more than a hundred from just
the show alone, if we were sharing your your Twitter handle.
The researchers also checked these bots' Klout scores and and Joe,
I think you you were actually the one who originally
quoted this at me, So if you would like to
(47:37):
say it out loud because you were so tickled by it, No, well,
I wrote it down, but now I can't remember where
the quote came from. Well, the quote came from the researchers. Okay, okay,
so their Klout scores were the same or higher than
quote several well known academicians and social network researchers. Uh, well,
and you know, then again, who wants
to follow an academician? From my own experience, like,
to follow an active mission? From my own experience, like,
more often than not, I don't get I don't notice
follows from Twitter accounts usually, I mean, I don't notice
like bots following me. But I definitely notice bots tweeting at me,
so they're not always following users. Sometimes it's just an
@ reply, uh, and I can I can usually detect
(48:20):
it pretty early on if I don't notice it directly
from that tweet. What I tend to do if I
see a weird tweet from somebody, I will click through
their name to look at their most recent tweets, but
don't click through to their website. I don't do that.
I just look to see their Twitter feed to see
what kind of tweets they've been sending out. And usually
it's just a line of @ replies to various people,
(48:41):
often with the identical phrasing. Sometimes it ends up being
um uh, peppered in between nonsensical tweets that clearly have
been scraped from some other Twitter account or have been
formed seemingly at random. Because you'll read it and you
think all the words in that sentence makes sense, but
collectively it doesn't mean anything. You know, the like made
(49:05):
up tweets that these bots tweet. I feel like very
often a characteristic of them is you can't tell what
they're talking about, but they sound like they really mean it.
You could you could probably end up making a very
successful book of nonsensical bot tweets where it's one per page.
You could totally yeah, but at any rate that you know,
(49:27):
the thing I always do is I always flag those
accounts as spam accounts every single time I encounter one.
For two reasons. One, I don't mind marketing if it's done properly,
but I don't want spam. Uh. And two, it means
it automatically blocks that from you know, I won't see
(49:50):
tweets from that that entity anymore. That's the two reasons
why I do it. And it's not like Twitter is
going to pounce on an account if it gets one
flag for being a bot. So it may be that,
you know, it'll take a certain threshold of of reports.
But I always encourage people who use Twitter. If you
encounter this, go ahead and flag that account for for
(50:10):
one thing, you won't see anything else from that account again.
But if you're curious, you can always go and use
bot or not. That's true, you could get although I
find that just looking at the again the past few
tweets usually gives you an indication. You know, there's one
other place other than than social media and comments sections
that you very often might encounter bots on the internet,
(50:33):
and that would be in online gaming. Yeah, and in
this case, it may not be the purpose of the
bot to fool other players into thinking it's a human.
It's more to fool the game into thinking it's a human.
Because generally speaking, most games have policies against automated UH
player behavior. They don't want players to use bots to
(50:55):
play the game. I mean really, depending on the game
they the owners might not care because if it means
that someone is paying for the account for that bot to
do the thing it's doing, yeah, they're like, hey, you know,
you're paying the same amount whether you're human or a robot.
I don't care, sure. But but in a more responsible
(51:16):
concept of game ownership, they might care about the user
experience of their real players and not want people to
be automating processes that either make the game unfair. Exactly,
if they are being responsible, they don't want an unbalanced
experience from their players, and they don't want people to uh,
to end up having an unfair advantage because they purchased
(51:39):
a an account from someone who had just set this
bot this automated behavior, so that this this automated player
account ends up doing a specific task so many times
that it builds up the character in some way or form,
usually through experience. So it's a bot that's seeking out
very low level mobs or monsters. Uh. And you know,
(52:01):
if you were to grind that way to build up
your character as a human, it would get very boring
very quickly. But bots don't get bored. They just keep
doing what they were programmed to do. Well, we don't
know that they don't get bored, Well, they don't complain.
So or that you might have it where it's mining
gold or the equivalent within whatever game so that you
get a massive amount of gold that you know it
may take. Let's say that it takes four hours to
(52:24):
get a significant amount of gold. So therefore the
most players aren't going to bother doing that because it's
such a huge time commitment. But again, a bot's going
to keep doing it until it's told not to do it anymore. Um.
These are the sort of things that that game systems
have to look for, and it's not usually up to players,
(52:44):
although if a player sees it and reports it, then
an administrator might take a look at it. It's usually
something that requires UM software on the game side, the
back end, maybe on the server side, to look at
this behavior. Uh. But uh, you know there are
even, um, competitions to build bots that are really really
(53:06):
good at doing what they do without being detected, so
they're not always massively multiplayer online role playing games. That's
the one that I'm most familiar with as far as
bots go. But these can even be in first person shooters. Uh.
And there are competitions to build convincing bots in first
person shooters. So you could, in theory, build a bot
(53:28):
that navigates through a first person shooter map and targets
people in the head and shoots them with unerring accuracy.
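That perfect-aim giveaway, and the humanizing countermeasure the hosts get to in a moment, might be sketched like so; the function names, reaction-time figures, and jitter parameters here are all illustrative assumptions, not details of any actual BotPrize entry:

```python
import random

def perfect_aim(target_angle):
    # The giveaway: zero reaction time, zero error. No human plays like this.
    return 0.0, target_angle

def humanized_aim(target_angle, skill=0.8):
    # Mask the bot: add a human-like reaction delay plus some aim jitter.
    delay = max(random.gauss(0.25, 0.05), 0.1)       # roughly 200-300 ms to react
    jitter = random.gauss(0.0, (1.0 - skill) * 5.0)  # degrees of aim error
    return delay, target_angle + jitter
```

A detector watching reaction times and accuracy distributions would flag the first function immediately; the second at least produces the noisy, delayed behavior a human shooter would.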
Because it's a bot, not a human, this particular approach
would end up giving it away that it's not
in fact a human. Rather, it is in fact a
robot because it's so accurate that no human being could
(53:50):
do that. It's able to react in, uh, a split second
as opposed to a human's
reaction time. So you're saying that every time that I
suspected that someone was a gorram cheater in Halo,
it was probably right. Um, I'm going to go ahead
and say yes, because frankly, I'm scared of you and
I don't want to suggest that you are not a
(54:11):
good Halo player. I did just clenchman. I mean I
honestly like, there have been times where I have been
in online games where I thought, Wow, that person, that
person is so good I can't even imagine what they're
playing like. And to be fair, people who play on
a professional level, they seem inhuman, but
(54:31):
sometimes there really are inhuman entities in these games.
And there was a competition in two thousand twelve called
the BotPrize. The challenge was for teams
to try and build a convincing bot that would play
Unreal Tournament at a level that was similar to how
(54:51):
a human would play. So so it's the Turing Test
for violent video games exactly. Yeah. The idea being that, Okay,
here's a program that is essentially a robot controlling this,
this game character, but you don't want it to be
a dead giveaway that it's a robot. So it has to
play more like a human would play. It has to
be able to make mistakes. It has to have a
(55:12):
delay in the time when it detects an enemy and
when it can aim at that enemy and shoot at
that enemy. It has to have all this built into it,
so it seems fallible like a human is, rather than
having machine precision like a robot does. And uh, there
was a bot called UT^2, UT to the power of two,
that convinced enough judges that it was actually a human player
(55:34):
in Unreal tournament that it won the prize. Uh, there
was supposed to be one in two thousand fourteen, although
I did not see anything about that, so it might
have been that the competition fell apart after
the two thousand twelve one. But uh, it's it's pretty fascinating,
which means that ultimately, as a player, you might not
be able to tell if another entity within that game
(55:57):
is in fact controlled by a human or controlled by a bot.
But ultimately, if it's controlled by a bot, it needs
to be a bot that's bad enough that you could
beat it. Anyway, if it's always beating you, then you
might at least suspect that something hinky is going on. Sure, well,
that's that's kind of fascinating though. It's it's another area
in which your your interaction with a human person would
(56:19):
be limited enough. Uh, and you might make assumptions about
that human person that they're distracted or that I mean,
you know, you're you're probably not talking to them over
a headset because that would be a really advanced bot, right,
but they probably all use that irritating robo voice that
was optional on Xbox Live that I hated so much.
But but yeah, it's another situation in which you're making
(56:42):
assumptions about the people that you're playing with, and so
you could perhaps be more easily fooled by basically your
expectation for humans lowering, rather than your expectation for
robots being improved. And of course this isn't always used
to give someone an unfair advantage or to frustrate other players.
Sometimes we are using bots in video games for
(57:02):
entertainment purposes. You guys have heard about the bot tournament
that's going on in Civilization. Okay, it is a guy
who has installed all these different mods in Civilization and
is having a game of forty two computer controlled players
set on Earth. All of the starting positions
(57:24):
correlate to where the starting positions should be for all
the civilizations, all of them set to deity level ability,
and the only way to win is to destroy everybody
else on the planet. That's the parameters they've set just
to see who would win at the end with those, uh,
that criteria and uh, that's still going on right now.
(57:47):
Can you guess? All right. Knowing that we're talking about both
historical civilizations and current ones. The two uh civilizations that
were eliminated at approximately the same time shocked me. It
was Germany under the control of Hitler that was the
That was the first one, and North Korea was the
(58:10):
second one. And it turns out oppressing people doesn't really
work out. So the question right now is will Gandhi
last long enough to go totally nuclear on everybody? Because
when civilization was developed, Gandhi was set so that he
would be the most peaceful. And the way they had
set the peace meter was that it was a
(58:31):
a a number that went from like zero to two
fifty six. But Gandhi, if enough conditions happened to make
him more peaceful, like like you had built the United
Nations and it would lower everyone's score, and at the
other end, yes, instead of going
from one to zero and staying there, he'd go from
(58:51):
zero to two fifty six and go as as aggressive
as you possibly could. It was an error that was made,
and everyone identified Gandhi as being the most insanely
aggressive player and they kept it on purpose from that
point forward. They discovered the bug and decided it was funny. Yeah,
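The bug as usually told is a plain unsigned-byte underflow: the aggression value lived in an 8-bit counter (whose actual range is 0 to 255), so lowering Gandhi's already-minimal value pushed it below zero and wrapped it to the top of the range. A minimal emulation, assuming that telling of the story (Python integers don't wrap on their own, so the 8-bit counter is emulated with a modulo):

```python
def lower_aggression(aggression, amount):
    # Emulate an 8-bit unsigned counter: values below zero wrap to the top.
    return (aggression - amount) % 256

gandhi = 1                            # Gandhi's famously minimal base aggression
gandhi = lower_aggression(gandhi, 2)  # peace effects push it below zero
print(gandhi)                         # → 255: maximum aggression
```

The same wrap happens silently in any language with fixed-width unsigned integers, which is why this class of bug is so easy to ship.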
(59:15):
but anyway, that's that's example. I can only hope that
that Gandhi will eventually actually we'll have to see if
he lasts long enough to develop what nuclear weapons. But
at any rate, that's just an example of bot
behavior being used for entertainment purposes as opposed to these
other irritating means. Yeah. Yeah, And we we need to remember,
of course, that creating simulated behavior, simulated and automated behaviors
(59:38):
for these environments is not always malicious or inconvenient.
Sometimes it's for fun, sometimes it's benign or
for greater research, which is yeah, yeah, that's cool too.
I'm all for creating bots to research bots. Yeah, But
so I'm wondering what do you all think as just
(59:59):
citizens of the internet. You know, as a web user,
you're on social media sometimes you look at the comments.
What do you see as the future of our relationship
with bots? I mean, because there's sort of an arms race, right,
like the spam filters and the bot detectors that are
automated are getting better. We're we're sort of evolving too,
(01:00:20):
like we're all learning just personally how better to spot
bots as web users, but also the bots are getting better.
So what do you think is gonna happen? Well, I
think I mean, if I had to guess, I would
say that we would see the bots get sophisticated enough
where upon, at least casual observation, you would not be
able to tell the difference easily between a bot and
(01:00:42):
a real person, and that it will reach a saturation
point until there is some form of collapse, not necessarily
a collapse of the platform, but maybe collapse of confidence
in that platform, and that as a result, the bots
will lose their value because no one will
care about it anymore. And then we'll
see kind of a cycle where the bots will get
(01:01:04):
kind of not used so much, that's my guess. Or
what if, and this might be kind of outlandish to suggest,
but what if bots, by becoming so sophisticated that it's
hard to tell them apart from real people, become worthy
of our attention. I mean, what, you know, a very
very sophisticated bot that can easily trick a human observer
(01:01:27):
into thinking it's a human operating a Twitter account? Do
you mind interacting with them on Twitter? Yeah, that's what
I was kind of just thinking about. And I mean,
if I were to find out that someone that I've
had conversations with on Twitter is actually a bot, I
I wouldn't be mad at it. You know, like like
at that point, I would kind of want to talk
(01:01:47):
to the person who had written the program and and
you know, I'd like like come to interview them on
a show and be like, hey, you did this really
cool thing. You totally fooled me at any rate, So
that's pretty rad. But yeah, like I would be impressed
rather than upset, as long as it wasn't threatening to
turn me into Mother Russia. Yeah, and then again, I
think if it weren't malicious, I would definitely be impressed
(01:02:10):
of course, we would be impressed. I think to counter this, perhaps,
I think you would be impressed by the novelty of it.
So what if we live in an age where bots
like this you can, you know, buy a million of
them for a penny, and they're just they're everywhere. Well,
I mean they're already everywhere, but like really really good
bots are everywhere. You're you're talking at that point about
(01:02:33):
the potential collapse of entire industries, the collapse of
Internet society basically, which would be incredibly destructive. So here's
hoping that we never reach that. I mean, I assume we'll
probably reach some semblance of that and then get through it.
It'll just be the point where that happens will be
(01:02:54):
really ugly and it'll take some time. It'll it'll I
would assume it'd be similar to something like a Dot
Com crash, where you know, the consequences would be pretty
grim at least for a while, and we would eventually
get through that. Um So I think that would happen
to I mean, I hate to be kind of doom
and gloom about it, but ultimately, bot behavior on a
(01:03:16):
wide scale basis can be pretty dangerous. You know, I
wonder if and this might not be doable just because
of the way, um, the way bots are incorporated into
the sources through which we access them, like social media
platforms and stuff. But if we could end up
with browsers that have bot blockers in the same way
we now have ad blockers, maybe I mean that would
(01:03:39):
probably be more like a virus blocker in this case
than like an ad blocker, because it would probably have a
database that it would have to depend upon, and it wouldn't
necessarily automatically detect whether something was a bot, but if
it fit in a database that would say, all right,
this is we're just blocking this interaction. And I suspect
that kind of countermeasure is going to I mean, it's
still going to be an arms race, but I but
(01:04:00):
I picture it more or less evening itself out at
a certain way. I don't I don't foresee a collapse
of Twitter as we know it. I would hope not. And
I also think that, you know, bots can potentially be
very helpful. What if I go on Twitter and I
express that I need a certain thing, like there's something
that I'm looking for that I haven't been able to find.
But then there's a Twitter bot that totally can point
(01:04:22):
me in the right direction at John Strickland. Your keys
are in your front left pocket. Thanks so much. Now now,
now my Twitter handle is out there for everybody. No,
I self promote all the time. I'm not mad
about that. I'm just glad you said it and I didn't
have to, uh but no, no, I mean that it
could really be very useful depending upon what it is
you're looking for. Like, there are times where people complain
(01:04:42):
about their experiences with a brand, and then the brand
will get in touch with them, especially if they have
a lot of followers, uh, to try and resolve that. Well,
bots could make that happen much more easily for a
lot more people. So it's not just the folks who
have fifty thousand followers who get, you know, a response from
a brand that they've had some issue with. It's everybody.
(01:05:04):
And it may be that we see more problems resolved
because of that. So I see a lot of potentially
good things coming out of bot interactions. But again, those
kinds of interactions tend to be more on the hey,
I'm aware that this is a bot that
I've encountered, but that's okay, because it's getting me to
the stuff I need, or at
(01:05:27):
least addressing whatever it is I'm upset about. Yeah. Um, as opposed
to a bot that's designed specifically to fool you into
thinking it's a person. I mean, that's always gonna be
a little more upsetting, because you've been tricked, right. Right. Nobody
likes to be tricked, unless you're specifically going
to a stage magic show where you're like I want
(01:05:49):
to see amazing illusions where I don't know how he
did it or she did it. This is not something
that you generally seek out. I think the takeaway of
this episode is that the future of social media is
a stage magic show. I can only hope. Yeah, I
can make a coin disappear into a vending machine. We
did that episode about those previously. I know, you
(01:06:12):
can't jump. Thanks. Well, uh, you know, this has been
a really fun conversation. We had talked about how this
was gonna be a super short episode, and of course
we found so much to talk about. It was a
really long one, but it was so much fun to
talk about. So we want to encourage our listeners if
you guys have ideas for topics we can tackle
in future episodes, or you have your own thoughts about
(01:06:34):
what we have said in this episode. We want to
hear from you, so send us an email. The address
is FW Thinking at how Stuff Works dot com. Or
get in touch with us on social platforms and let
us know you're not a bot or that you are
a bot. We don't really discriminate. You can get in
touch with us on Twitter, Google Plus, and Facebook. On
Twitter and Google Plus, we are FW Thinking. Just
(01:06:57):
search for FW thinking on Facebook. We will pop up,
leave us a message, and we'll talk to you again
really soon. For more on this topic and the future
of technology, visit forward thinking dot com. Brought to you
(01:07:22):
by Toyota. Let's go places.
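For listeners curious how the database-driven "bot blocker" discussed in this episode might work, here is a minimal sketch in Python. Everything in it is hypothetical: the blocklist entries, the post format, and the `is_blocked` and `filter_timeline` helpers are made up for illustration. The key point from the conversation survives in the design: like an ad blocker's filter list, this only blocks accounts already present in a shared database; it does not detect bots on its own.

```python
# Minimal sketch of a blocklist-style "bot blocker" (all names hypothetical).
# Like an ad blocker's filter list, it can only block accounts that already
# appear in its database -- it does not automatically detect bots.

KNOWN_BOTS = {"spambot123", "followback_now"}  # stand-in for a shared blocklist

def is_blocked(handle: str) -> bool:
    """Return True if the account handle appears in the blocklist."""
    return handle.lower() in KNOWN_BOTS

def filter_timeline(posts: list[dict]) -> list[dict]:
    """Drop posts whose author is on the blocklist; keep everything else."""
    return [post for post in posts if not is_blocked(post["author"])]

timeline = [
    {"author": "jonathan", "text": "New episode is up!"},
    {"author": "SpamBot123", "text": "Click here for free followers"},
]
print(filter_timeline(timeline))
# → [{'author': 'jonathan', 'text': 'New episode is up!'}]
```

A real version would sync its database from a community-maintained filter list, which is also why the hosts' "arms race" point holds: new bot accounts are invisible until someone adds them to the list.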