
May 19, 2023 26 mins

In this episode, Brian Evergreen (Founder & CEO, The Profitable Good Company & Author, “Autonomous Transformation”) and Andreas Welsch discuss creating a human future with AI. Brian shares examples of human-machine collaboration and provides valuable tips for listeners looking to drive AI initiatives while keeping the impact on people in mind.

Key topics:
- Understand how AI drives autonomous transformation
- Assess your options from reformation to transformation
- Prepare for collaboration between humans and machines

Listen to the full episode to hear how you can:
- Focus on strategy & culture before technology
- Classify your project from digital reformation to autonomous transformation
- Manage organizations as social systems

Watch this episode on YouTube:
https://youtu.be/3ArPJEGyyfg


***********
Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


Level up your AI Leadership game with the AI Leadership Handbook:
https://www.aileadershiphandbook.com

More details:
https://www.intelligence-briefing.com
All episodes:
https://www.intelligence-briefing.com/podcast
Get a weekly thought-provoking post in your inbox:
https://www.intelligence-briefing.com/newsletter


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today, we'll talk about creating a human future with AI, and who better to talk to about it than someone who's focusing on doing just that: Brian Evergreen.
Hey, Brian.
Thanks for joining.

Brian Evergreen (00:10):
Hi Andreas.
Thank you for having me.
I'm happy to be here.

Andreas Welsch (00:13):
Awesome.
Hey, why don't you tell our audience a little bit about yourself, who you are and what you do.

Brian Evergreen (00:19):
Thank you.
Yes.
So my name's Brian Evergreen.
A little bit about me: I've had three careers so far.
I was an internationally competitive chess player, and the highlight of that was playing Josh Waitzkin.
He did beat me.
I retired from that at 12, and then my next career for a while was in music.
I studied music theory and composition, sang in Carnegie Hall, and made music with people all around the world.

(00:42):
After that I transitioned into working in AI, in corporate America and globally.
And the last few years I've been working with Microsoft, and before that Accenture.
And really, AI strategy is probably the easiest way to summarize that.

Andreas Welsch (00:56):
Awesome.
And I saw that you also have a book coming out, right?

Brian Evergreen (01:00):
I do have a book coming out.
About a year ago, I signed a book deal with Wiley around autonomous transformation, but starting with a question of how we can create a more human future with these technologies.
And so the book comes out in just a couple of months, and I'm really looking forward to sharing it with the world.

Andreas Welsch (01:17):
That's awesome.
So I'm sure you've really seen a lot, also in the research that you've done for the book.
So I'm really excited to have you on today to share more with us about how we can actually create that more human future with AI.
Alright, Brian, before we get started, should we play a little game to kick things off?
What do you say?

Brian Evergreen (01:36):
Let's do it.

Andreas Welsch (01:37):
Fantastic.
Alright, so this game is called In Your Own Words.
And when I hit the buzzer, the wheels will start spinning.
When they stop, you'll see a sentence, and I'd like you to answer with the first thing that comes to mind and why, in your own words.
Okay.
And to make it a little more interesting, you'll only have 60

(01:59):
seconds for your answer.

Brian Evergreen (02:02):
Okay.

Andreas Welsch (02:04):
Yep.
So for those of you watching us live, also drop your answer in the chat and why, and let's see what we can come up with here.
Brian, are you ready for What's the BUZZ?
I'm ready.
Let's go.
Okay, excellent.
Then let's do this.
If AI were a bird, what would it be?

(02:24):
60 seconds on the clock.
Go.

Brian Evergreen (02:26):
If AI were a bird, I would say it would be like a baby eagle, maybe.
In the sense that it still needs nurturing before it's gonna really be able to fly.
And also the altitudes that it can reach are higher than most other birds.
And it is capable of many things.
And so I'd say an eagle is the first thing that comes to mind for me on that.

(02:46):
So I'm gonna cut your time in half.

Andreas Welsch (02:49):
Fantastic.
Awesome.
I live in the Philadelphia area and the football team is the Philadelphia Eagles, so that resonates on so many levels.
Great analogy.
I'm always impressed by what my guests come up with for these questions.
Alright, so let's maybe jump into our first question.

(03:10):
And we've been talking in the industry a lot about digital transformation for the last 10 years now.
And some are doing it better than others.
I'm also wondering: what comes next after digital transformation?

Brian Evergreen (03:23):
Great question.
So the first thing I'd say is that digital transformation is absolutely still a thing.
When I talk about autonomous transformation, it's not necessarily a linear path where you move through digital transformation and then autonomous transformation.
Autonomous transformation introduces a new phrase into the business lexicon.
It's a new era of transformation for organizations and for

(03:44):
society more broadly.
When I started researching the book, with the goal of defining this era, I realized that even digital transformation itself actually needs a little bit more context.
Because the true definition of transformation is to change the nature and the structure of something, improving it while you're changing it.

(04:05):
But there are a lot of things we do in this digital transformation era that aren't transforming the nature of the process, or the structure by which we deliver value to clients or the world.
And so I searched for the right word, and what I came up with was reformation.
Digital reformation is when you're vastly improving, because the definition of reformation is to improve something without changing its structure.

(04:28):
So if we're going from an analog to a digital paradigm and vastly improving it, but we're not really changing the structure or the process by which it's delivering value to clients or consumers, that's a digital reformation initiative.
And then digital transformation is when we're not only moving from analog to digital, but actually transforming.
An example of that is what we saw with streaming and

(04:50):
Netflix.
We're still going through a transformation of what that means in terms of how we interact with entertainment.
And so autonomous transformation: there's autonomous reformation, which is what we're seeing right now in robots being leveraged in warehouses, where machines are coming in and things are moving from a digital or analog paradigm to an autonomous paradigm to vastly

(05:10):
improve the nature of the work that's being done and the efficiencies.
Autonomous transformation would be that we're actually transforming the way that value is delivered to the world, or to clients and consumers around the world, with autonomous capabilities, moving from digital or analog to autonomous.
And I don't think we've yet seen an actual full-blown autonomous transformation initiative hit the market.

Andreas Welsch (05:34):
Awesome.
Thanks for sharing that and for setting that up.
I'm keeping an eye on the chat, and I see Michael is asking: is AI a reformation or a transformation?

Brian Evergreen (05:45):
AI is neither a reformation nor a transformation.
It can be leveraged for either.
So you can leverage AI to improve the nature of a process that you're running and make it much more efficient without changing the actual process.
That would be a reformational project.
If you used AI to completely change the structure or the process by which you're delivering value to the world,

(06:07):
that would be a transformational application of AI.
So AI in this case is a tool, as opposed to being inherently tied to either reformation or transformation.

Andreas Welsch (06:17):
Awesome.
Thanks for sharing that.

Brian Evergreen (06:19):
I realize the other thing I should mention is that both reformation and transformation assume that you're starting with something.
You're reforming something, you're transforming something.
Another important piece of this puzzle that I think is important to share is the act of creation.
Sometimes you're creating something that doesn't yet exist because you want to create that value.
So you're not necessarily starting with something and then transforming

(06:39):
or reforming it.
You're just creating something net new.
And again, in that case, you could also leverage AI, and there are many that are doing that to create new ways of delivering new types of products and applications that we haven't seen before.

Andreas Welsch (06:51):
That's awesome.
And you're giving me a great segue to our next question, right?
Because with creation, with tools like generative AI at our disposal, we're able for the first time to also use AI to create information, whether it's new information or information composed in a new way.
But I'm also wondering: in this overall autonomous

(07:13):
transformation, with people at the center of it, what role does AI play in that kind of transformation, from your perspective?

Brian Evergreen (07:20):
Great question.
So in autonomous transformation, human autonomy and machine autonomy are actually two sides of the same coin.
That's the first thing that I think is important to clarify.
Because I think when people hear the phrase autonomous transformation, they think of, okay, a process by which you're gonna replace all human jobs with machine workers, which is absolutely not the case.

(07:42):
It's more about the work hierarchy from repetitive to creative work, with machines being able to take on more of the repetitive work, which frees humans up to move from operations to something more like stewardship.
And so in terms of your question around generative AI, I'd say that generative AI plays an important role in a couple of ways.
One is in the actual teaching of machines.

(08:05):
So there's obviously machine learning, which many or most are familiar with: the process by which machines learn based on patterns in data.
And then there's machine teaching, a newer paradigm that, instead of focusing on how we teach machines to learn based on patterns, focuses on how we teach machines based on human expertise.
That

(08:27):
fundamentally involves, in a lot of cases, reinforcement learning, with humans creating the boundaries and the guidelines.
The way I often talk about that is chess.
I used to teach kids chess after I'd retired from chess.
There were some other teachers who were teaching children: if they do this, you do this; if they do this, you do this.
And the kids would be memorizing these different patterns of openings.
My focus instead was on the principles: you

(08:51):
wanna control the center.
You want to get your pieces developed from the back rank.
You want to think about your pawn structure and the way it affects your overall positioning on the board.
The way that would play out in tournaments is that the ones who'd memorized those patterns, if they ever ran into anything they weren't expecting, wouldn't know what to do, because they were outside of the pattern they'd memorized.
Whereas a student who had learned the principles and

(09:12):
fundamentals of chess could walk up to any position they hadn't even been playing, think it through, and say this would be the next best move, based on controlling the center of the board or whatever the principle might be.
And so I think, in the same way, machine teaching shifts toward that paradigm: instead of trying to teach machines through patterns of data, because there are a lot of things for which we just don't have data,

(09:32):
being able to instill principles, create a simulated environment, and leverage reinforcement learning for the machine to practice and learn the same way humans do enables a whole new swath of use cases that just weren't possible before.
And so, all the way back to your question about generative AI and what role it plays in that process of machine

(09:52):
teaching: up until now, it's been a little bit more explicit for those machine teachers to create those boundaries.
And I think we're gonna see that imbued into the machine teaching process, allowing humans to distill their expertise in a more human-like way, with even less machine teaching expertise required once the systems have been

(10:13):
designed.
And then on the latter end of it, I think that generative AI will be very valuable for enabling human-machine interaction.
So if you have machines working together with humans in a factory, or machines navigating somewhere in a public space, those humans can come up and talk to it, leveraging generative AI to answer questions and

(10:33):
interact.
Or just large language models in general, but to be able to answer those questions and understand the intent of what those humans are trying to communicate.
So those are the two main ways that I see that within the context of autonomous transformation.

Andreas Welsch (10:46):
That sounds really intriguing, especially that part about teaching machines, when for the last six, seven years we've been talking all about having machines learn from these patterns in data.
What does it look like on a more tactical or task-oriented basis?
And is that what you would see as making sure that we have a more human future, where we work with AI, alongside AI, where we

(11:10):
teach AI?
And what does that symbiosis maybe even look like?

Brian Evergreen (11:14):
Yeah, the short answer is yes.
And so it works from a tactical perspective.
If I give an example of a machine, let's say in a factory, that's being leveraged to make something: the machine learning paradigm would say, okay, is there a pattern of data by which an algorithm could learn how to control that machine?
Let's say the algorithm could learn how to control an extruder

(11:36):
in a manufacturing context.
And so that's the machine learning paradigm that most of us are familiar with.
A lot of times there's no data for machine learning, or you have to instrument and then collect many months or years of data before you can even start.
Machine teaching would say: we're gonna create a simulation, either data-driven or physics-based, of this

(11:58):
machine.
And then we're going to have a human who operates that machine say: when I see this output, I know, okay, this is too viscous, or the color is wrong, or whatever the parameters and goals are.
And they say, when I see that, this is the correction that I make.
And that's not something you can necessarily capture in historical data the same way.

(12:20):
And so when they create those boundaries and combine that with a simulation, a machine can effectively run many hundreds of thousands or millions of simulations to see the correlation between the principles that the human has designed and the pure physics of it.
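The loop Brian describes, a human-stated principle plus a simulator the machine practices against, can be sketched in miniature. Everything below is hypothetical: a toy one-knob extruder model with made-up numbers, standing in for the much richer simulators and learning algorithms real machine-teaching platforms use.

```python
import random

# Toy, physics-inspired simulator of a hypothetical extruder:
# viscosity falls linearly with temperature, plus sensor noise.
# (Illustrative numbers only; not a real process model.)
def simulate_viscosity(temp, rng):
    return 100.0 - 0.4 * temp + rng.gauss(0.0, 1.0)

TARGET_VISCOSITY = 40.0  # the human-specified goal

# The operator's principle, expressed as a reward:
# penalize any deviation of the output from the target viscosity.
def reward(viscosity):
    return -abs(viscosity - TARGET_VISCOSITY)

def practice_in_simulation(candidates=1000, rollouts=100, seed=0):
    """Let the 'machine' practice: evaluate candidate temperature
    setpoints against the simulator and keep the best-scoring one."""
    rng = random.Random(seed)
    best_temp, best_score = None, float("-inf")
    for _ in range(candidates):
        temp = rng.uniform(100.0, 200.0)  # candidate setpoint
        score = sum(
            reward(simulate_viscosity(temp, rng)) for _ in range(rollouts)
        ) / rollouts
        if score > best_score:
            best_temp, best_score = temp, score
    return best_temp

if __name__ == "__main__":
    learned = practice_in_simulation()
    # Noise-free optimum of this toy model: (100 - 40) / 0.4 = 150 degrees.
    print(f"learned setpoint: {learned:.1f}")
```

The shape is the point, not the search method: the human never supplies historical data, only a principle (the reward) and a simulator, and the machine earns competence by practicing against them, which is what distinguishes machine teaching from the pattern-learning paradigm.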

(12:41):
And then that can be leveraged.
An example is that Pepsi did this with their Cheetos extruder.
They leveraged machine teaching for the extruder to be able to run autonomously.
The last I heard, they're still working on getting that into production.
But the plan is that the workers can then move into more of a stewardship.
It's not gonna replace the work that they do.
It will enable them to be freed up from the many things that

(13:03):
they're trying to manage and monitor at all times.
And so I think we're gonna see a lot of that.
I do think, just speaking of humans and machines, it's an important conversation right now.
There are three ways I think people are looking at things today.
There's job protectionism, which is saying the current class of jobs we have, as they exist today, must be

(13:25):
protected at all costs.
That introduces an economic and a long-term risk: if the nature of the way work is being done is transforming elsewhere, the whole organization may go down, and the market could take away all jobs at that company, as opposed to the one class of jobs, or those classes of jobs, that were under threat.

(13:45):
Then on the other end, there's job fatalism, which is saying the machines are coming, get ready to bow to our machine overlords.
And I don't know that I even need to speak to why that's not the right way to think about things, or at least not the way I would recommend.
Then in the middle, there's job pragmatism, which is saying, okay, the market is shifting and evolving.
Jobs are going to change.

(14:05):
There are some tasks, and collections of tasks, that currently make up a job that will be made autonomous or automated by machines.
And from the time a leader has signed off on an AI initiative, there's a length of time, months or years, before it actually gets developed to the point where it could make the

(14:28):
human who was running those tasks redundant.
And so in that time, if a leader hasn't created new opportunities for those workers to retain their tribal knowledge and to maintain the culture of the organization, that's a leadership failure.
That's actually not to do with the technology.
It's more to do with the leader who's running that

(14:50):
organization.

Andreas Welsch (14:51):
That's an interesting perspective that you're sharing, and I think it also puts into perspective what the opportunities are, while at the same time giving a more realistic and objective view: there are those three categories.
I like how you've described them.
If I look at the chat real quick for a second,

(15:12):
I see there's a question from Ahmad that I would like to ask you.
Yeah.
Are we in the in-between times for AI, moving from conception to globalization?
Or have we moved into globalization already?

Brian Evergreen (15:27):
I think that it varies significantly.
Because there's one thing going on right now, which is obviously generative AI and the fact that anyone anywhere can access the power of these systems.
Many of us who work in the field have been interacting with systems just as powerful, but that couldn't be made accessible to anybody in their living room, or even on their cell phone,

(15:50):
and that have nothing to do with generative AI.
And so I think that in terms of AI as a broader topic, as Bill Gates and others have said, we're moving into an era, an age of AI, so to speak.
And not just AI; there are a lot of adjacent technologies and enablers.
But the blanket of AI has become, I think,

(16:10):
the reigning term for what this era of transformation is based on.
And so in terms of globalization or conception, I'd say that it varies significantly by the specific application and the set of use cases.
But I don't think we're at a point where everyone is there.

(16:31):
Only 13% of AI initiatives actually make it into production.
And so I think that as a society we're still figuring it out.
That's actually why I started the book: trying to solve and answer that question.
And so I have a strong notion of a few of the reasons why AI initiatives don't make it into production.
I think it's less about a geographic difference at this

(16:52):
point, and more about society, the cultures of our organizations, the strategies that we've set, and this difference between technology leaders versus business leaders versus industry leaders, who are often divided by their expertise.
I think those are the main reasons we're struggling to see global adoption and to harness the

(17:14):
economic potential that these technologies provide.

Andreas Welsch (17:19):
Now, you've mentioned earlier that it's also a leadership task to prepare your teams for that future, for that autonomous transformation, for more of that AI-driven transformation.
What would be your advice to AI leaders?
How can they prepare for that transformation?
How can they bring their teams along and bring their

(17:39):
transformation to life?

Brian Evergreen (17:41):
So the first thing I'd say: I've started a company focused on connecting with leaders and sharing the things that I've learned, to help them drive this kind of transformation across the organization.
And so, trying to condense it down to at least a couple of quick bullets, I'd say one is that the way we've been solving problems, the way we've been taught to solve problems as leaders and as

(18:03):
a society, is based on industrial-revolution-era thinking, fundamentally based in a lot of cases on Taylorism, which I won't get into right now.
But there are a lot of problems with that tool set when we come to 21st-century complexities, and we're starting to see the cracks break between organizations that are evolving

(18:24):
beyond that era, that way of thinking, versus those that are still mired in trying to solve problems.
And if you start with bottom-up planning, where you say, what are all the problems that we need to solve?
Each team comes up with a list of problems, and those problems are shared up with their leadership.
And ultimately it bubbles up to the C-level, or whatever the highest level is, which then approves them:

(18:45):
these are the problems we're gonna solve.
Solving problems, getting rid of what you don't want, doesn't necessarily move you toward what you do want.
So if you're looking and saying, which problems should we solve with AI, there's actually an earlier problem.
Instead of focusing on which problems you need to solve, because you know the famous quotations: if we were solving a

(19:07):
problem in the times of horse carriages, we wouldn't have come up with a car.
Instead of solving problems, instead of looking at what trends are happening in the market that you can react to, be the thing in the market, be the leader in the market, that others have to react to.
Decide: what future do I want to be in, do I want to build?
And then work backwards, instead of solving problems.

(19:29):
Solve for that future.
Say, okay, based on that future that I, and we as an organization, wanna move into, what would have to be true for us to get into that future?
And you can create an entire map of different initiatives and hypotheses that you could prove or disprove that would advance you toward that future, moving boldly forward instead of backing into the future

(19:50):
and solving problems as you see them come up.
The famous rearranging of deck chairs on the Titanic, right?
So that's the first thing I'd say.
I think a second one is that a fundamental issue we're seeing is cultural.
There are business leaders, technology leaders, and industry leaders.
A hundred years ago, industry leaders had all of the purchasing power and were making the decisions.

(20:12):
Then 50 years ago, business leaders, in the wake of the world wars and the rise of shareholder primacy, rose to the fore as key leaders in business decisions and purchasing power.
And in the last 30 years, we've seen IT or technology leaders go from the backroom to the boardroom.
And now you have three extremely intelligent groups of people with significant expertise and training,

(20:35):
all believing that they know the answer, that they know how the money needs to be spent and what the plan needs to be.
And most organizations that I've worked with are divided by that expertise instead of multiplied by it.
And I think that's the reason that seven of the 10 most valuable companies in the world, public companies, I should say, were technology companies: because in those companies, the industry leaders and the

(20:56):
technology leaders are the same.
So they're not divided, because they speak the same language.
So for leaders to prepare for this AI future and this era of transformation, I would say it's actually a strategy question and a culture question before it ever gets to which technologies you should use to advance into that future.
And it may not be generative AI, or it might be.
But you

(21:18):
shouldn't start with the technology to figure out which direction you want to go with that technology.
You don't plan a trip based off of what you're gonna bring.
You decide where you're gonna go, and then you pack what you need in order to enjoy your time there.

Andreas Welsch (21:31):
That's a great analogy.
That resonates deeply.
And also what you shared about it being a technology, a tool: it's a means to an end, but it's not the end itself.
It's a theme that we see throughout, and that others have shared as well.
So it's great to hear you reinforce it.
Now I wanted to ask you if you can summarize the three

(21:53):
takeaways of our show for our audience today, because we're getting close to the end of the show.

Brian Evergreen (21:58):
Absolutely.
So I'd say the first is that when it comes to autonomous transformation and AI and generative AI and all these technologies, the first thing leaders need to focus on is actually strategy and culture, not the technology.
It's tempting to start with the technology, because the technology is so interesting and so exciting and so capable.

(22:19):
We need to start with strategy and culture because, if you've completely digitally transformed, or autonomously transformed to the umpteenth degree, where you can say, we're the most transformed company in the world, but no one's buying your products, you're gonna be out of business shortly, and the fact that you've transformed doesn't matter.
And so the first thing I would say is: start with strategy and culture.

(22:40):
The second is the introduction of the words, the paradigm, of digital reformation, digital transformation, autonomous reformation, autonomous transformation, and then acts of creation.
Because at any given point, when you've charted the future point that you wanna advance into, there might be a combination of different types of initiatives that would align to those different

(23:01):
subjects and help you move into that future.
So if you're advancing toward a future, then depending on the life cycle of your organization, you might just need a digital reformation initiative to solve a problem, because you have a profitability issue that's posing a risk to your organization's survival.
It's not to say that one is better than the

(23:21):
other.
It depends on where you are as an organization.
So that's the second thing I'd say.
And the third major point is that the things I've shared so far are one of maybe 15 to 20 that I've found in the process of writing the book, ways that we need to break away from managing our organizations as machines, for that is a relic

(23:43):
of our industrial-era thinking.
A lot of people have said to me over the years of writing this book: yeah, that AI stuff, or that human stuff, is fluff.
That's nice, and I would love to be able to do that, but I really need to get the value first, and then we can think about that later.
And what I'm positing is that, as I said, strategy and culture are some of the main reasons we're not seeing value for organizations in implementing or

(24:06):
getting these types of technologies into production.
It's not that creating a more human future, creating a better world for the human experience, is a nice-to-have outcome.
That's actually a fundamental ingredient, because right now talent can leave.
Data scientists, the types of technologists that you need, they have options.

(24:26):
So if they're working on an initiative that's boring, or exploitative, or just even neutral, they have the option to go get paid as much or more to work on something that they're passionate about.
That's the shift we're seeing in organizations, and consumers react to people doing good in the world and creating a better human experience with the new products they're introducing.

(24:47):
And so the third thing I'd say is not something that I first came up with; I should call out Dr. Russell.
We need to move from managing our organizations as mechanistic systems to managing them as social systems, from machines to a network of people.
All the things that we're seeing in the business lexicon and coming out as HBR articles, about

(25:08):
empathy and feelings and all these other things, they don't fit in a machine paradigm.
But a lot of the most successful companies have already started to make that shift.
And I'm just codifying that, and I propose practical applications for how you actually make that shift if you're starting at this point.

Andreas Welsch (25:24):
Awesome.
Thanks for the summary.
And also thank you so much for joining us today and for sharing your expertise with us.
It's really great to have you on and to learn from you about how we can create a more human future with AI.

Brian Evergreen (25:37):
Thank you.
Thank you for having me.
It's been a pleasure.