
February 28, 2024 · 29 mins

Discover the critical lessons from Google's AI misadventures with their Gemini service. In a week that shook the tech giant to its core, we unravel the PR nightmares and technical missteps that led to a hasty retreat and a hit on their stock value. We're pulling back the curtain on the governance and oversight—or lack thereof—that businesses must implement when venturing into the frontier of AI development. Get ready to find out how Managed Service Providers can apply these insights to avoid similar pitfalls.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
So if you have been paying attention to the news,
you probably know it's been a pretty rough week for Google.
On this episode, we're going to talk about what happened, and
I'm going to explain how something back from the 1960s
might be the underlying problem Google is facing.
We'll also get into why it's important to understand these
issues and what MSPs can learn from them.
My name is David Tan and you're listening to the Crush Bank AI

(00:24):
for MSPs podcast.
So, yeah, if you haven't been paying attention, this week is

(00:51):
what I would consider a bad week for Google and, quite frankly,
it all stems from their Gemini AI service.
So let's just take a step back for a minute and talk a little
bit about what Gemini is and the problems they had this week,
and then we'll dive into, I think, some more interesting
conversation around it.
So, like I said, Gemini is Google's AI service.

(01:13):
When it was initially launched back in November/December of
2022, it was known as Google Bard.
It was released almost immediately, literally within a
week or so after OpenAI announced their ChatGPT service,
and that's important, and we're going to
talk about that as part of this discussion.

(01:34):
But that's what Gemini is.
It has evolved over the years.
A couple of months ago, Google released a video showcasing the
capabilities of Gemini, which were really cool and interesting.
The problem was the video was all staged.
It wasn't actual output that they were getting from the
Gemini service, and they were only somewhat forthright about this.

(01:55):
They kind of made it known, but not really.
It wasn't that obvious; if you looked, if you dug, you
could figure it out. But it was just another form of the bad
publicity Google's gotten around their AI service in the last 12
to 18 months or so. But anyway, last week they made it live so
that people could start playing around with it in two areas in

(02:17):
particular: the chatbot, a competitor to ChatGPT, and
image generation, also a function that OpenAI
offers, and there are a bunch of other companies that provide
generative image capabilities.
But Google, obviously one of the biggest players in the tech
space, wanted to have their model.

(02:38):
They wanted to release their service, rather, and show how it
was among the best and most powerful and capable models. And
what happened almost immediately is what tends to happen with these
things: people try to break them. Or even if they don't try
to break them, they use them, they pound away at them and try
to figure them out. And, I'll be honest, I'm a little
reluctant to be too detailed

(03:00):
about some of the issues they had, but I think it's important
for this conversation.
So I apologize in advance if anything that comes up in the
next minute or so is offensive; I am merely repeating what
Google's Gemini AI model and AI service was doing.
So first was sort of the text piece of it.
So the chatbot: people were asking questions around

(03:25):
depictions of historical characters, and the most widely
publicized mistake was the chatbot refusing to determine
who had a more negative impact on history between historical
figures Adolf Hitler and Elon Musk.
Now, I don't care what you think about Elon Musk.
Obviously that's a really tough comparison.

(03:46):
It's also not necessarily a place where an AI model or an AI
service should opine on the degree of evil of a person,
but certainly when you put Adolf Hitler in a conversation,
it makes it difficult not to have a fairly clear and
concise answer.
That's more of an example of people trying to break it, like

(04:08):
I said, but it still was inadequate.
The answers it came back with were woefully lacking and
certainly underwhelming.
The other piece was even more highly publicized and,
depending on how you think about it, a little bit more
egregious to an extent.

(04:30):
And that was that the model flat-out refused to create images with
white people in them. So, in other words, users were asking for
images of our founding fathers, and they were all different
ethnicities and different minorities. Even the founders of
the company: Sergey

(04:53):
Brin was being portrayed as Asian, and Larry Page, same thing.
So people were calling it woke, they were calling it the
liberal image bot and the liberal chatbot.
But it was, again, egregious: completely hallucinating who
these people were, what they looked like, what the images would
look like.

(05:13):
Now, in fairness, and again this is a sensitive topic, so I'm going
to tread lightly as best I can: Google is hyper-sensitive to
this because of another issue they had years ago, where their
visual recognition model was tagging black people in images
as gorillas.
Obviously, that is horrifying on so many different levels, but

(05:37):
I am sure Google overcorrected, steered into the skid so to
speak, and probably put too much training in the other direction.
It goes to show just sort of the dangers of these models,
what can happen and how sideways things can go, quite frankly.
Clearly Google had no oversight, no governance.

(05:57):
I don't know how things like this make it out of testing,
make it out of the lab.
That's a whole different story, a different conversation.
But just to bring the week full circle and talk
a little bit about this: obviously this is bad PR.
This doesn't look good for Google, for their capabilities,
for their future product offerings. And, as you would

(06:19):
expect for a large public company, they felt it in the stock
price, as is wont to happen in cases like this.
For the week, I believe they lost something like $90 billion
of market cap.
The stock was down close to 5% and hit the low for the year.
As I record this, it's only the end of February, so being the
low for the year probably isn't the most telling

(06:43):
statistic, but it is when you consider the tear the market's
been on through 2024 so far.
So really, just again, a bad week.
What Google ended up doing, to their credit, was they almost
immediately pulled the Gemini services offline, made them
unavailable, and the CEO of the company came

(07:04):
out in the last few days and basically said that they are
working around the clock.
Sundar Pichai, rather, sorry, came out and said
they are working around the clock to fix the problems and
they will keep updating.
He's been transparent, they have been communicating, which is
the best you can ask for. But again, it's still not a
great look, not a great situation for Google, even just
from a PR standpoint.

(07:27):
I'm actually going to read a quote here from an
analyst at Loop Capital, who wrote: "This is a meaningful
blunder in the PR battle surrounding generative AI and
further suggests that Google is trailing and mis-executing in a
fast-moving and high-stakes space."
The reason I read that quote is because it feeds into the other
thing I want to talk about,

(07:48):
what I really want to talk about today. As I sort of teased at
the top, a lot of what we are seeing from not just Google,
right, I'm not just trying to pick on them, but from a lot of
these software companies, particularly tech companies, around

(08:11):
AI dates back to something from the late 1960s. And it's not
necessarily a technology that it dates back to.
Obviously, this is all relatively new technology,
despite the fact that AI has been around since the 50s and
60s. That's not what we're talking about here.
The way I want to start talking about this is by telling
you about an engineer, a British civil engineer, who
lived in the second half of the 20th century.

(08:34):
He was born in 1939, and he actually just recently died,
in February of 2022.
His name was Dr Martin Barnes, and Dr Barnes is credited with
creating what we would almost think of as modern
project management.
He was one of the fathers of the science and study of project

(08:56):
management, and really, if you're in software, his work and
his theories and philosophies almost dictate the role and
function of a job such as project management in,
particularly, software development.
So what

(09:17):
Dr Barnes worked on was this concept known as the
iron triangle.
So, without the ability to be graphical here, I will try to
explain what that is, and again, I'll explain why it's
interesting and relevant.
So if you picture a triangle for a moment, with its three
corners, what Dr Barnes said was that project management is

(09:37):
essentially made up of three different components: resources,
time and scope.
And I'm gonna focus specifically on software
development, because he said it in the greater
scheme of project management in general, but I'm
gonna talk about it in software development.
What he said was that if you want to be able to deliver great

(10:00):
software, you have to be able to move one of those three
corners of the triangle.
So again: time, scope and resources.
If you can't move one of the three, you can't deliver great
software, you can't deliver great outcomes.
I'm gonna explain why this is a problem specifically for a

(10:21):
company like Google in the AI space playing catch-up.
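Since the triangle keeps coming up, here's a toy sketch of the trade-off in code. Everything in it, the class name, the methods, the numbers, is made up purely for illustration; it's just one way to see that when scope, resources and time are all fixed, the only thing left to give is quality.

```python
# A toy illustration of the iron triangle: scope, resources, time.
# If all three corners are locked, the remaining variable is quality.
# All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Project:
    scope: float      # units of work to deliver (locked by competitors' features)
    resources: float  # units of work the team can finish per week (locked by GPU/talent supply)
    deadline: float   # weeks until the announced release (locked by rivals' launches)

    def weeks_needed(self) -> float:
        """Weeks required to deliver the full scope at full quality."""
        return self.scope / self.resources

    def quality_at_deadline(self) -> float:
        """Fraction of the work that can be done well by the deadline (capped at 1.0)."""
        return min(1.0, self.deadline / self.weeks_needed())

# All three corners locked: 120 units of scope, 10 units/week, 8 weeks to ship.
gemini_like = Project(scope=120, resources=10, deadline=8)
print(gemini_like.weeks_needed())         # 12.0 weeks actually needed
print(gemini_like.quality_at_deadline())  # about 0.67: ship at two-thirds quality
```

In other words, if nothing can move, the gap between the 12 weeks the work needs and the 8 weeks the market allows comes straight out of the quality of what ships.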
So first let's talk about scope.
So when Bard was first released, back in, I think, late
November or early December of 2022, it was a direct reaction to
what OpenAI had released with their ChatGPT, and if you don't
remember, I'll tell you the story really briefly: they went

(10:41):
and did a live demonstration of it, you know, again, two to three
days after it was released, and the third or fourth question
that someone asked, it hallucinated an answer.
Now, we've come to know that hallucinations are a real
problem with these generative AI models, specifically things like
OpenAI and ChatGPT, and there are ways around that.
There are companies doing some work around governance and

(11:04):
ensuring that the answers are accurate, and things like that,
and IBM is really a leader in that space.
I'm sure you've heard me talk about that. But if you just have
a large language model that is meant to sound like a person,
which is what these chatbots do, even though they're trained on
information, they are not meant to be accurate.
They're not required to be accurate, I should say; they are
required to sound like a person, and that

(11:26):
was something Google was sort of the first to demonstrate
publicly.
It was certainly embarrassing for them at the time and they
were playing catch-up, and in the time since, so in the 14
to 16 months since then, they continue to play catch-up.
So if you missed the announcement, about two weeks ago OpenAI
announced Sora, their generative video service.

(11:49):
They didn't release it for people to play with; they
released examples of it.
What it does, if you're not familiar, briefly: it's
another large language model, but this time it takes a one-sentence
description and creates a video of it.
So the examples they showed were things like dogs

(12:11):
playing in the snow, woolly mammoths roaming the
countryside, things like that.
So really cool, high-res videos that were created from just a
one-line sentence.
And the possibilities of this are interesting, and we'll
probably, in the future, come back and talk a lot about
what this means, what the implications are, you know,
around the entertainment space and business in
general, and the positives and negatives of it.

(12:32):
I have my thoughts on it, but that's not what this is about.
This is about the iron triangle specifically.
So what happened was OpenAI released that, and Google again
continues to try and play catch-up.
So they potentially rushed out their release of Gemini, or the
latest release of Gemini.
So again, I mentioned scope.
So if you think about a company like Google that's playing

(12:53):
catch up to OpenAI: in essence, their scope is locked. And what I
mean by that is the features and functions and requirements
of their product are being driven by their competitors.
So they're constantly one to two steps behind, and they're
trying to come out with software that does what the competitors

(13:13):
out there do.
So they can't go and say, "I'm going to build 30% of this, I'm
going to build 40% of this."
They have to match feature for feature as best they can, so they
can't move that corner of the triangle.
Scope is locked for them.
Again, based on the situation that they find themselves in,
that can certainly change, but that's the case for right now.

(13:35):
So the second one is resources.
When you think about resources, you can think of it a couple of
different ways.
Obviously, the first thing you think about is money, right?
So Google's got more money than they know what to do with,
probably. They could throw hundreds of millions, even
billions of dollars at this problem, and they probably do.
I am sure they spend a tremendous amount of money

(13:55):
building and developing these platforms.
The problem isn't with the money.
The problem is the resource constraints, in two different
ways.
One is a physical hardware constraint.
I've mentioned it here in the past, and we'll probably talk
more about it at some point in the future.
But all of this generative AI requires what we call GPUs,
graphics processing units, which are the chips

(14:17):
that go on video cards, basically.
That's where they got their start, and that's why
NVIDIA is the leader in this space.
NVIDIA originally pioneered this to make gaming more
efficient on PCs, because gaming requires heavy mathematical
calculations.
It then grew into crypto mining: when you're mining for Bitcoin
specifically, or any cryptocurrency,

(14:39):
quite frankly, again, it requires complex mathematical
calculations, and GPUs are just better at that than CPUs are.
So you really need GPUs.
And then AI, the same thing.
All of this stuff, underlying it, is just complex mathematical
calculations, so it requires GPUs. And the simple fact is there
are just not enough GPUs in the world. And NVIDIA, as you've seen

(15:02):
the stock price, you know what's happening to them as a company.
They are doing the best they can to pump them out.
Other people are trying to create their own chips.
If you missed it, Sam Altman, the CEO of OpenAI, announced,
probably two to three weeks ago, that he wanted to raise $7
trillion to build his own GPU chips.
I have thoughts about that, but we'll save that for another day.

(15:24):
But that is a constraining factor on the growth and
development of AI, to the extent that there is just not enough
capacity for everything we want to do.
So that's one piece of the resource constraint.
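To make the hardware point a bit more concrete: nearly everything these models do boils down to matrix multiplication, huge numbers of independent multiply-and-add operations. The toy function below is plain Python, nothing like a production implementation, which would run on GPU libraries, but it shows the shape of the work: every output cell is its own dot product, which is exactly the kind of massively parallel arithmetic GPUs are built for.

```python
# The workhorse of AI: multiplying matrices. Each output cell is an
# independent dot product, so thousands of them can be computed at once,
# which is why GPUs (many simple cores doing math in parallel)
# outperform CPUs on this workload.

def matmul(a, b):
    """Naive matrix multiply: a is m x n, b is n x p, result is m x p."""
    n = len(b)        # shared inner dimension
    p = len(b[0])     # columns of the result
    return [
        [sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
        for i in range(len(a))
    ]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul(a, b))  # [[19, 22], [43, 50]]
```

A real model does this with matrices whose dimensions run into the thousands, billions of times over, which is where the GPU shortage bites.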
The other piece of the resource constraint is just the people
involved.
These are not just typical software developers that you can

(15:44):
just get to write code.
You can't post a wanted ad on Indeed for someone that
understands large language models around graphic generation.
These are incredibly rare, incredibly highly educated,
brilliant mathematicians and data scientists, and just the
level of people that you need to work on this is almost
inconceivable, and there's just not enough of

(16:08):
them, again, quite frankly.
So one first lesson learned: if you have a child or someone
young in your life that's looking to determine what they
want to do with their life, I highly recommend studying
this type of stuff: a science background, computer
programming, data science. It is the wave of the future.
Obviously, that kind of goes along with what I was saying at
this point.

(16:29):
But there's just not enough people.
So Google can try to do whatever they want.
They can try and steal engineers from other AI
companies, they can try to train more of them up, they can try
to make them more proficient and efficient.
I'm not really sure, but the simple fact of the matter is

(16:49):
that there's just not enough people to do all this work that
needs to be done.
So, in essence, the resource corner of the triangle is locked.
So now we've got scope locked and we've got resources locked.
So the other variable is time, and time certainly isn't
"locked" for Google, and I put that in air quotes, but as we've seen

(17:11):
from their pattern of behavior, it kind of is. And what I mean
by that is: they released the Bard public demo, again, a week
after OpenAI announced ChatGPT.
They released Gemini into the world a week or so after OpenAI
announced Sora.
So they are very much making these releases and these

(17:34):
announcements based on factors outside of their own control.
What other companies are doing, again, whether it's OpenAI or
Microsoft, who is obviously kind of one and the same, although
Microsoft made some interesting investments this week in a
company called Mistral.
Again, there are all the tech giants: Amazon, and IBM, which I
mentioned, and Anthropic, and there's just a lot of very big,

(17:57):
very wealthy tech companies making major strides and major
announcements in this space. And Google's terrified, quite
frankly, to fall behind, because they know that, in their mind,
this threatens their business model.
Let's put it that way.
People do believe that an intelligent, large-language-model-based
chatbot will potentially replace things like

(18:19):
search, and search obviously drives their business to a very
large extent.
Obviously, they have other revenue streams, but
Google's feeling the pressure, there's no doubt about it. As
much pressure as one of the Magnificent Seven tech companies
can feel, they are feeling the pressure to keep up, catch up,
and potentially even move ahead of OpenAI.

(18:40):
So essentially, what we have now is we have the three corners of
the triangle locked, again mostly outside of Google's
control, so they are just plain releasing bad software.
There's no better way to put it; it's just a bad
look for Google in a lot of ways.
They probably have to take a step back and decide how they

(19:04):
wanna deal with this.
Are they willing to fall behind?
Are they willing to diverge with what they're developing?
Are they gonna go crazy and acquire somebody?
They made an announcement this past week that
they're acquiring, or I should say licensing, Reddit's
data for something like $60 million, which kind of goes hand
in hand with Reddit announcing an IPO, which is another

(19:26):
interesting conversation.
But my point is, it's an aggressive move, because they
wanna license it for generative AI models.
So those are the types of things they're gonna need to do in
areas where they just can't keep up, because the simple fact of
the matter is that Google can't continue to create this flawed
software and release it to the world, because people will...

(19:48):
they've already lost faith.
I don't know when Gemini gets re-released, whenever that is.
Like I said, they're working around the clock.
Let's say, two weeks from now, they announce that there's a
new Gemini model out.
How many people are going to allow their businesses to start
leveraging it immediately, without testing it, without
QA'ing it, without beating it up and trying to break it?
Quite frankly, I certainly wouldn't, and I know most people

(20:12):
I know wouldn't.
So you know, Google's put themselves in a pretty
precarious situation, where they have to start to regain the
trust of the public if they wanna keep doing what they're
doing, if they wanna compete in this space.
And that's where it gets kind of interesting to circle back a
little bit and bring it home to talk a little bit about MSPs,
right?
So I said a couple of things, like they have fallen behind,
and this is the type of thing

(20:32):
that happens.
Now, this is unique for a couple of reasons. So it's not uncommon,
in this type of scenario in the software development world, to
have a competitor that's a leader.
I'm just gonna make one up.
I don't necessarily have details around this, but let's
talk about the ERP space, right?
So in ERP, it's changed over time who the

(20:57):
leader is, but obviously for a long time, we'll just say,
PeopleSoft was the leader, right? And I am sure that there was a
lot of this type of thing, where JD Edwards and Oracle and the
other competitors in the ERP space were trying to create
software that just kept up with PeopleSoft when they were the
leader, right? Kept up with their features and functionality. And

(21:19):
then people started doing it differently.
Salesforce came out, Workday came out, so there was a bit of
a change in that space.
But the difference there, and the reason this one is unique,
is that the resource constraint was probably still there from a
money standpoint, right? Not that Oracle doesn't have plenty
of money, but Oracle is not gonna throw $5 billion into
revamping their ERP platform just to keep up with PeopleSoft.

(21:41):
Or 10 or 15 years ago they wouldn't have, but they
could have. And the constraints around qualified engineers and,
in this case, around GPUs, were not there.
So this is unique in that way, but it's not a terribly uncommon
story.
It is the type of thing that happens.
And again, as a "consumer", and I put that in air quotes, which is

(22:02):
a consumer of this technology, or any technology quite frankly,
it's important to understand, when you're working with someone
that's potentially not a leader in a space, how are they making
up for the fact that they have less flexibility in what they're
designing and what they're developing?
So it's just kind of a really important lesson to learn, and
I think it also speaks volumes to the fact that you

(22:26):
really need to understand who and what your vendors are using
when they build AI solutions.
So what do I mean by that?
Most people are not going to go to Gemini.
Most businesses, I should say, are not going to go to Gemini and
get the APIs and start writing code to put this into an

(22:46):
application.
Most of us are going to rely on a vendor that is using this
underlying technology in a smart and intelligent way, right?
So let's say, for example, we'll stick with my ERP
example. Let's say, for example, that you have an ERP that does some
generative AI around marketing.

(23:07):
Well, actually, we'll say more of a CRM.
I know that can be a little bit interchangeable.
But let's say you have a CRM that does some generative AI
around marketing, where you go and you click a bunch of
buttons in your CRM and you say, hey, I want to put together a
product announcement for this product and I want to send it
out to all these clients.
Well, that is a really good use case for generative AI, because

(23:28):
what'll happen, ideally, is it will generate the letter. It
will take some of your specs, it will pull information about
your clients, again assuming the software is built properly, and
it will put together a fairly personalized letter that will
get sent out.
Or let's say, in the same token, you want to put together a
presentation around that, and it

(23:49):
can create the content of it, but it can also create the
images and things like that.
Well, you could manually go and do all of those steps, right?
You could go to, we'll just use Gemini for now, you could go to
Gemini and you could say, "I need this letter written," and you
could prompt it with the information you need, and it would
put that together and spit it out for you.
You could also go and say, "I need these images," and it would

(24:12):
create those images for you.
But most likely you're going to do that as part of another
software system or platform that you already have in place, and
you may not know what that underlying technology is.
That's really my point.
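To sketch what a vendor-embedded version of that letter button might look like under the hood, here's a deliberately simplified, entirely hypothetical example. The `draft_announcement` helper just merges CRM fields into a template; in a real product, that function would be a call out to some underlying model (Gemini, OpenAI, or something else entirely), and which model that is, is precisely the detail the buyer often never sees, and should be asking about.

```python
# Hypothetical sketch of a CRM's "generate product announcement" feature.
# draft_announcement() stands in for the vendor's underlying generative
# model call. The product and client names below are made up.

def draft_announcement(product: str, client: dict) -> str:
    """Stand-in for the vendor's model: merges CRM data into a letter.
    A real implementation would send a prompt to an LLM here."""
    return (
        f"Dear {client['name']},\n\n"
        f"As one of our {client['segment']} customers, we wanted you to be "
        f"among the first to hear about {product}.\n"
    )

# CRM records the feature pulls from (hypothetical data).
clients = [
    {"name": "Acme Corp", "segment": "enterprise"},
    {"name": "Bob's Bikes", "segment": "small business"},
]

letters = [draft_announcement("Insights 2.0", c) for c in clients]
print(letters[0])
```

The point of the sketch is the opacity: from the outside, the button behaves the same whether `draft_announcement` is a template, a well-governed model, or an ungoverned one.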
So it's critical to ask these questions.
I kind of liken it to, I used this example recently and I
think it's kind of

(24:32):
interesting, and it should strike a chord.
So back in the day, when I was still the CTO at an MSP, I
would work with clients and I would help them evaluate
software products for their use. Anytime something was built on
a database, my very first question would be: what's the
database it's using, right?
Is it using SQL Server?
Is it using MySQL?
Is it using, you know, I was doing this a long time ago, is it

(24:55):
using FoxPro or Microsoft Access?
Because that information became really important in my
decision-making, right?
If someone was using a Microsoft Access database to
build an enterprise application, first, God help them.
But even if they were doing that, I was gonna look down on
that significantly, and I certainly would not have

(25:15):
recommended it to my client.
Many times over the years I talked a client out of
buying what would have been enterprise software for them.
Again, we're talking small businesses mostly, but it would
have been, you know, a line-of-business enterprise software
application for them, and the underlying technology just was
not any good.
We're at a point now, quite frankly, where I generally don't

(25:36):
ask those questions anymore, because database technology has become
fairly mainstream and there's not as much of a disparity
between different ones.
Like, certainly I would like to know if a database application
I was using ran on Microsoft SQL Server, because this way, you know,
if I could get access to it, I understand SQL really well, I
could tune it, I can query it directly.

(25:57):
I can maybe do some funny things because of my background
and understanding of databases and software development.
But beyond that, really, I don't care what they're using, you know.
I care if it's maybe SQL or NoSQL, right?
Are you using SQL Server, or Oracle, or MongoDB, right?
Those are interesting conversations, but I don't care about
the speeds and feeds of that.

(26:18):
We're not there.
With generative AI and the underlying large language models,
you very much need to understand what your vendors are
using and why, and you need to ask questions, and you need to be
paying attention to this stuff. Because, again, I've seen way
too many examples of vendors in all industries, but particularly
in the managed services space. Quite frankly, let's just be
honest about it, I've seen way

(26:40):
too many examples of software vendors shoving this generative
AI functionality into a product without understanding it and
knowing how it works, and without putting controls in place that
protect their clients.
This could just as easily, again, be OpenAI, and OpenAI
is embedded into so many systems nowadays. And if it

(27:01):
starts hallucinating, or if they do an update on the backend that
changes the model and something bad spits out, and you don't
have an expert there chaperoning it, you can see how this could go
off the rails very quickly.
So, again, as a managed service provider, you not only have to
worry about yourself, but obviously your customers as well.
So I implore you to understand why things like this happen.

(27:24):
Hopefully, this podcast went a little bit of the way to help
you understand that. Just ask the right questions and, again,
push back if a vendor doesn't want to be upfront and disclose
what they're using.
What do they have to hide? You need to know what's training
the model; you need to know if your data is going back
into the model. Because this space is evolving so incredibly
fast, it's unlike anything

(27:47):
I've ever seen, unlike anything any of us have ever seen, just
the amount of advancement.
Just think about what these companies, these large language
models, have done since the day that OpenAI announced ChatGPT
in November of 2022.
I know everyone thinks that's day zero for AI, artificial
intelligence.
It's not, but we'll use that as a kind of transition into the

(28:07):
modern era, so to speak, because that's when it really
seeped into the public consciousness, and people saw the
power and capabilities and, quite frankly, people started
leveraging it and developers started putting it into their
products.
But it has developed so quickly that we are very much at risk
of everything that we've worked for going off the rails, just
from a security and a reliability and a compliance and

(28:31):
a governance standpoint.
If we don't act deliberately and ask a lot of questions and stay
on top of what we're doing and how this stuff works, we're
opening ourselves up for a world of pain going forward.
And again, Google is going to be just fine.
I am sure they will figure this out.
Like I said, I know they're throwing tons of resources at it,

(28:54):
and I'm confident they'll get something figured out. But
either way, they continue to play catch-up.
So I guess if I could give you one lesson, one takeaway from
this: anytime something new like this is released, always go into
it with a healthy amount of skepticism, and always assume
there are gonna be problems at the beginning and that they will be
worked out. AI will continue to evolve and things will only

(29:18):
get better.
But for now, ask questions before it's too late.
I hope you enjoyed this.
I hope you learned something. Once again,
my name is David Tan.
I am the CTO here at Crush Bank, and this has been the Crush
Bank AI for MSPs podcast.
Until next time.
See you soon.