May 6, 2025 31 mins


Ken Kahn takes us on a fascinating journey through the evolution of artificial intelligence in education—from his early days at MIT's AI Laboratory in the 1970s to today's revolutionary chatbot capabilities. As the author of "The Learner's Apprentice: AI and the Amplification of Human Creativity," Ken shares a vision where AI serves not as a replacement for teachers but as a collaborative partner in learning.

The conversation reveals how surprisingly accessible AI has become for creative educational projects. Ken demonstrates how anyone—without coding skills—can build web applications, interactive stories, and augmented reality games through simple, conversational interactions with AI tools. His examples range from playful nonsense word generators to complex "Emoji Adventures" where students can command digital objects through voice control.

What makes Ken's approach revolutionary is the fundamental shift in how we think about human-AI collaboration. Rather than focusing on complex "prompt engineering," Ken advocates for an iterative, conversational approach where we start simple and build through feedback. This mirrors authentic creative processes and helps students develop critical thinking and communication skills along the way. The AI becomes what Ken calls an "apprentice colleague"—a co-thinker, proofreader, pair programmer, and brainstorming buddy that amplifies human potential rather than replacing it.

For educators concerned about implementation, Ken offers practical strategies for classroom integration. Even with age restrictions on direct student access, teachers can facilitate whole-class AI interactions on smartboards, with students contributing ideas while the teacher manages the interface. His concept of "guidance prompting" allows teachers to customize AI experiences for specific educational contexts, ensuring age-appropriate support for project-based learning.

Looking toward the future, Ken is exploring multi-agent systems where multiple chatbots with different roles—like programmer and digital artist—collaborate while students observe or participate. These developments promise even more sophisticated educational applications that enhance human creativity rather than diminish it.

Ready to transform your approach to technology in the classroom? Explore "The Learner's Apprentice" and discover how AI can become a powerful intellectual ally for both teachers and students in developing the creative potential of the next generation.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Chris Colley (00:15):
Welcome back everyone to another episode of
Shift Ed podcast.
Today I have a real treat for you guys out there.
I have Ken Kahn coming in, who holds a doctorate from MIT's
Artificial Intelligence Laboratory, and he just put out this
great book called The Learner's Apprentice: AI and the

(00:35):
Amplification of Human Creativity.
This came out on Constructing Modern Knowledge Press.
I recommend this book highly.
Chock-full of great examples of how AI can be integrated
almost seamlessly into our teaching practices and our
schools.
So, Ken, thanks so much for hopping on here and joining us

(00:56):
today.
Yeah, I'm very glad to do it.

(01:23):
So, Ken, just to kind of build a foundation on which our
conversation will go: what were the key points in AI's
history that you could tell us about, key important
developments or aha moments that AI had in the course of its
history so far?

Ken Kahn (01:27):
Yeah, so I was interested in AI even as an
undergraduate, and then I was lucky enough to get into the MIT
AI Lab in the 70s.
My intent was to just do AI, but then I met Seymour Papert
and I was sort of part-time with the Logo Group, interested in
enabling children to create interesting computer programs as
well as doing AI.

(01:48):
As a matter of fact, I started merging the ideas.
I probably wrote the first paper on the topic, in 1977,
which was called "Three Interactions Between AI and
Education."
Even way back then I was interested in generative AI; my
doctoral thesis from MIT was titled "The Creation of Computer Animation

(02:11):
from Story Description."
So it's very much like today's text-to-video models.
But the AI was so different back then that you had to hand-code
all of the knowledge and all the heuristics and processes,
and you sort of reflected on how you might do it and
then tried to write a program to emulate that. And that was my

(02:38):
view of AI until about 10 years ago, when I started to pay
attention to neural nets.
My first thought was that neural nets are much too low-level;
it's like the equivalent of the transistors in a computer,
rather than a high-level programming language like
JavaScript or Python or something.
But I started to see how it was starting to do interesting

(03:03):
things, and the first thing I thought of was to add to a
programming language, to give non-expert programmers, children
especially, the ability to take advantage of all these
advances in AI.
So I started with the Snap programming language, which

(03:25):
probably should have been called Scratch Senior, because it's
very much like Scratch but more advanced, and I added over a
hundred blocks, some of which would do speech recognition or
image recognition, or you could define a neural net and train it
and evaluate it, and you could also load in lots of pre-trained

(03:46):
models, so that any of the models that are out there you
could bring into the browser and use.
So that was my focus up until two or three years ago, when I
started playing with GPT.
And then, when ChatGPT came out, I started to ask myself the

(04:07):
question: if I went over the many, many sample programs that I
had built in Snap using my blocks, what would the experience be
like if I role-played a student that didn't have my expertise?
How well could they recreate these apps?
And it went very well, and that led me to two years of

(04:29):
exploring this and then writing the book about it.

Chris Colley (04:33):
So when you're saying that too, because I saw
an example that you gave and I found it really interesting:
you made this nonsense generator with Claude, I think
it was, and then you added sound to it.
And all of this was just you typing in, right? Like, create a
nonsense machine, boom, it spits it out, creates the code for it

(04:56):
all.
You can go look at the code to see what it looks like.
And then you modified it to add sound to it.
Something so simple like that, back then, would have taken
some time.

Ken Kahn (05:08):
Yes, that's very true.
That's actually the first example in the book, and one
important point is that it was so easy to communicate to Claude,
and I tried it with ChatGPT too, to just say a simple sentence
like "make a web app that will generate nonsense words."

(05:28):
And the word "web app" is kind of important here, because the
idea is that you want a web page, an HTML page, that has some
JavaScript interactivity, but that you could run right there
in the browser.
You could share it with people.
Also, it's perfectly safe, because it's just like any other
web page.
The browser kind of protects you from malware or something.

(05:53):
And then what makes it even more appropriate to do web apps is,
in the last several months, ChatGPT and Claude and Gemini
from Google all enable you, if you ask for web apps, to split the screen so you could

(06:14):
just see the web app right there on the right half of your
screen.
Try it out, and if it's working well then, like in this case, I
said, well, could you have the web app speak the word?
And it knows all about these APIs and how to connect to
speech utterances.
And then, just to be playful, I said, well, could you

(06:40):
have it speak the Gettysburg Address, but every third word
replaced with a nonsense word? And it knew the Gettysburg
Address, knew how to pick the third word, and, you know,
you can go on from there.
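To make the idea concrete, here is a minimal sketch of the kind of JavaScript a chatbot might produce for the nonsense-word examples above. The function names, letter sets, and word shape are illustrative assumptions, not Ken's actual generated code.

```javascript
// Illustrative sketch of a nonsense-word web app's core logic.

const CONSONANTS = "bcdfghjklmnpqrstvwz";
const VOWELS = "aeiou";

// Build a pronounceable nonsense word by alternating consonants and vowels.
function makeNonsenseWord(syllables = 3) {
  let word = "";
  for (let i = 0; i < syllables; i++) {
    word += CONSONANTS[Math.floor(Math.random() * CONSONANTS.length)];
    word += VOWELS[Math.floor(Math.random() * VOWELS.length)];
  }
  return word;
}

// Replace every third word of a text with a nonsense word.
function replaceEveryThird(text) {
  return text
    .split(/\s+/)
    .map((word, i) => ((i + 1) % 3 === 0 ? makeNonsenseWord() : word))
    .join(" ");
}

// In a browser, the Web Speech API could then speak the result, e.g.:
//   speechSynthesis.speak(new SpeechSynthesisUtterance(replaceEveryThird(text)));
```

The speaking step is one line in the browser, which is why the chatbot can bolt it on so easily when asked.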
You know, you might even imagine moving to something that isn't
just frivolous like this.
So maybe you want an app that would help you come up with

(07:02):
names for pets, and then, if you ask for that, it'll, you know,
modify the generator to be more appropriate for pet names or
something.
And that's what's nice about that example: it's, you know,
five or ten minutes of playing around and it doesn't require
any expertise.

(07:29):
And the book is full of examples that go from that
simple to ones that took a whole week to build, because the
one that was the most ambitious was called Emoji Adventures,
and actually that name was a suggestion from a chatbot;
after we built it I said, what can we call this?
But you start off with the mouse or your finger painting

(07:51):
random emojis on the screen, but then you could speak things
like "spell AI," and then all the emojis will move either to an A
or an I, and you see AI made out of emojis.
Or you could click a button and say "dance," and it by default
will load a song.

(08:11):
It was actually a song generated by a different
generative AI, and then it knew how, and I didn't know this,
it was kind of an interesting back and forth with the chatbot to
get it so that it picks up the beats from the music, and then
the emojis rotate to kind of fit the beat.
And then I said, well, could we add a black hole, and all the

(08:33):
emojis sort of fall into the black hole, and some of them will
get into an orbit around it.
And some of these were my ideas.
I think that actually the spelling one was its idea,
because I was asking for suggestions.
Another idea I had was chasing:
every emoji picks a random other emoji and tries to chase

(08:54):
it, and all the emojis are running.
And, you know, with modern computers you could have, you
know, a thousand emojis all flying around on the screen like
this.
And one last thing that it did was it loaded a database of
descriptions of 2,000 emojis, and then you could just

(09:18):
speak, and you could say "a sad face," and it'll find one that
matches a sad face, even if it's not an exact match.
So it knows how to load in a pre-trained model that could be
used to see how similar two bits of text are, and it
finds the one that's

(09:39):
most similar and, you know, makes that the current
emoji that you create.
So it was quite a fun project, to sort of push how far you
could go.
You know, it did take a week, but it's a thousand-line
program that's really quite functional.
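The "find the most similar description" step Ken describes is typically done with text embeddings compared by cosine similarity. A minimal sketch, assuming the embedding vectors have already been produced by some pre-trained sentence-embedding model (the function and field names here are illustrative):

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Pick the emoji whose description embedding is closest to the query's,
// so "a sad face" matches even without an exact text match.
function closestEmoji(queryVec, entries) {
  let best = null, bestScore = -Infinity;
  for (const { emoji, vec } of entries) {
    const score = cosineSimilarity(queryVec, vec);
    if (score > bestScore) { bestScore = score; best = emoji; }
  }
  return best;
}
```

Because it ranks by similarity rather than testing equality, the nearest description wins even when the spoken phrase never appears verbatim in the database.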

Chris Colley (09:58):
And I guess that's kind of the beauty of it
as well, because you can really go down that rabbit hole: once
you start to see something generate, how much more can
I just keep twisting and adding?
And it brings me to this quote that comes, I think, from the
book or from somebody that reviewed it, but I want to read it to you
and then get your response to it.

(10:19):
It says: "This is not a book about fantasies of replacing
teachers with machines.
Rather, the learner's apprentice model treats AI as
an apprentice colleague, co-thinker, proofreader, pair
coder, brainstorming buddy and illustrator, an intellectual

(10:39):
ally that amplifies human potential."
I mean, is that something amazing or what? Can you put some
context to that quote?

Ken Kahn (10:50):
Yeah, yeah.
So in the process of exploring this and then writing the book,
the first big discovery was that, unlike all these people that
talk about prompt engineering, where you have to carefully design
maybe a paragraph-long description of exactly what you
want and then hope it makes it,

(11:11):
and that doesn't often work well, because the description is
fairly complex,
instead, the approach that works throughout is to think of
the simplest version of what you want, ask for that, and
then, when you see it, give feedback.
If it's fine, think of some improvement and just keep

(11:33):
iterating.
And sometimes you could think of improvements, sometimes you
could ask for suggestions, and you're still being creative:
when you ask for suggestions, often you get ten different ones,
and only one or two do I feel like I like and want to pursue.
And so that's the style of creating apps.

(11:54):
But the book also talks about how you could create text-based
adventures, or have very different kinds of
conversations than normal.
So the text-based adventures are like the old adventures where
you would say, you know, "open the door," you know, "use the

(12:14):
key" or whatever, but now the chatbot's making it up on the fly,
and it can be anything that you ask for.
You know, I've asked for, you know, "I want to witness the
assassination of Julius Caesar," or "I want to visit ancient
Athens and talk to Socrates and visit the Theater of Dionysus" or
something.

(12:34):
Or another example that I did was I said, I'm a high school
student who is just learning French, and I really
like science fiction;
I want to have an adventure where I have to practice some
French.
And it created a nice thing where I'm the captain of the
ship and suddenly there's some object coming up in the viewer,

(12:58):
and at first the French was a little too hard.
So I just simply said, this is too hard, you know, make it
simpler French.
And then, usually with these adventures, it follows the old
thing of giving you like four choices and you could pick them,
but you could say anything.
And I even said, in this case, I don't want that;

(13:18):
I want to force myself to express myself in French.
So I said, you know, I want to talk to the science officer, but
my French was so rusty that there were like three mistakes in that.
And instead of correcting my mistakes, it said, oh, you want
to talk to the science officer, and it wrote that in perfect
French, with the grammar fixed and the spelling fixed, so I

(13:41):
could see, you know, what I did wrong.
And then I would ask every so often for it to make illustrations, and
it made some really nice science fiction illustrations
where you see the deck of the ship and what's in front and all
this.
So that's a whole class of kinds of creative things you

(14:01):
could do. Unlike some of the other examples, like creating an
app or creating an illustrated story, it's the conversation
itself that's the point, not the product; you're just learning by
exploring.
I often get asked, how are you supposed to assess things like
this?
And if it's a historical visit to ancient Athens, I think if I

(14:29):
were a teacher, I would ask the students not just to hand in a
log of the conversation but to reflect on what was going on, and
to also be responsible for catching mistakes.
So, in viewing Julius Caesar's

(14:51):
assassination, it does allow me as a high school student to
sneak into the Senate, hide behind a pillar and watch it.
But then if you take the entire conversation and give it to a
different chatbot and say, is this conversation plausible?
It said everything was historically right, but the

(15:14):
security was so tight that, you know, you couldn't have
snuck in to see this.

Chris Colley (15:18):
but it even said, you know, "but that's dramatic
license, that's okay," or something. It's amazing. Do you
find that different chatbots react differently to
the same prompt? Like, you'd give it the same kind of information,

(15:38):
but it might react differently or produce different results
for you?

Ken Kahn (15:40):
Yeah, they always do.
Even the same one will react differently.
I illustrated this in the book by simply, you know,
asking the same chatbot to create a
haiku about creative uses of AI.
And it made a nice little haiku, and then I did it again, and it
was, you know, only one or two words similar or something.

(16:03):
It's very hard to give any recommendations about these
chatbots, because a few months ago I would have said, oh,
Claude is so much better than the others.
And, you know, now in the last month or so, ChatGPT and Gemini
have kind of caught up in a lot of ways, but I still think

(16:25):
Claude tends to be better for creative writing, and there's a
strange sense in which its personality seems a little bit
more attractive.
Which reminds me, one piece of advice I give students
is: you don't need any special skill to use a chatbot.
You should just interact with it the same way you would
interact with a person, a fellow student or an assistant or

(16:50):
something, because that's the most effective way of
interacting with them.
But, you know, keep in mind that it's not a person;
it's got a very alien intelligence.
Keep that in mind.

Chris Colley (17:06):
Somebody told me that if you are very polite with
the chatbot, you know, "thank you," "please," "I'd love to know,"
and so on, that it has, I don't know if it's better results or what happens, but
that being polite is a good thing to be with the chatbots,

(17:27):
just for digital citizenship as well.
At the same time, yes, being careful what you put online. Very cool.
What do you see as the untapped potential that we have with
chatbots, that we might not be there yet with in education?
I know that I talk a lot with educators about, you know, using

(17:47):
chatbots to help them and simplify their teaching, the,
you know, protocols they have to go through.
It allows you to have more time with your students as well,
where that human connection happens.
They're getting there, but they still have a hard time
understanding what it can do, other than, you know, "make me a lesson plan on

(18:10):
electricity."
What do you think is the untapped potential in chat
that we still need to get closer to?
Ken Kahn (18:22):
Yeah, yeah.
So, well, first off, I think the best match is with
project-based kinds of learning, because you can be as creative
as you need to be or want to be, and you also learn a lot of
critical thinking.
You have to give good feedback to the chatbot.

(18:43):
You're learning communication skills, because the better you
communicate, the more effective the conversation will be.
So I think that's the most important thing: to just treat
it as a colleague that's helping you make

(19:05):
projects. And there's a lot of
flexibility as to what exactly its role is and what your role
is, and you could start off saying, you know, oh, I don't
want you to do too much;
you know, I want to do some of the writing, but you should do
some of the illustrations, or whatever; however
you want to split up the work. You could also, like,

(19:30):
with writing,
I've asked it to generate a story, and sometimes these
stories I try to connect up with mathematics or science.
One example that I probably overdid in the book is stories

(19:51):
and poems and musicals all about Euclid's proof that
there's an infinite number of prime numbers, and you can do it
so many different ways.
But you could also, once you get it, if you see anything
wrong, or have ideas for improvement, give it
critical feedback and improve things, or you could ask it, or
a different chatbot, you know, how could this be improved,
and then you decide which improvements you might want to

(20:11):
do.
So you're acting very much like the editor in that case, rather than
the writer; but of course, you could be the writer and let the
chatbot be the editor.
You've got all these flexible, different ways of interacting.

Chris Colley (20:31):
Right, you can kind of wear whatever hat you
want and assign that other oneto the bot itself.
So cool.
I have a question that teachersask me and I'm wondering maybe
you might have an answer to it.
So all these chatbots are builton input, right, stuff, that we
, that we fed it.

(20:52):
What, where does the copyrightcome into play?
Like, because I know that ittakes it, indexes, it also
basically splices everything.
All these little pieces likewhat are you allowed?
Like, if I'm making a chatbot,what, what am I ethically
allowed to put into it?
Like, I can't put your bookinto it, but right, like, like,

(21:16):
where do, where does that line?
Um, you know, blur a bit yeah,yeah, well, there's.

Ken Kahn (21:24):
There's one issue that has to do with what the training
data was that the developers of the chatbot used, and whether they
were really respecting the creators' copyrights or whatever.
When it comes to, say, image generation:

(21:49):
some of them will create, you know, an image for maybe a
game you're creating, and you could ask it, you know, to have
it in the style of Picasso, say, and it'll do it, but that's
still in copyright.
But if you ask for Van Gogh or Rembrandt, I don't think there are

(22:10):
any ethical or moral issues there;
they've been out of copyright for a century or more.
And when it comes to making web apps, I don't think there's much
of an ethical issue or copyright issue, because, you know, it's
been trained on millions of open-source programs and got the

(22:35):
idea of how to write code.
But that does lead to a point I made in the book, which is: if
you ask it for a tic-tac-toe program or a snake game or
something, it'll do it, but you're not going to learn much,
and it's probably just based on the hundreds of examples it's
seen on the web or something.

(22:57):
But if you ask for a game that you make up, it can do that too.
A very simple game that I made up was one where there were flowers on the
bottom, and the flowers start shrinking and losing their color,
turning gray, but you could drop colorful water
balloons on them, and if you hit one then they get bigger and

(23:17):
more colorful again; it's just dropping water balloons on
flowers.
You know, I couldn't find a game like that anywhere,
but it was able to make it, you know.

Chris Colley (23:28):
Right.
So does that become your game, like, or is that just again
freely available to anybody online?

Ken Kahn (23:35):
Yes, yes. As a matter of fact, what we did in the book
was, you know, for all the conversations, you know, often we
had to abridge them or summarize them, because, you know,
many of them are pretty long.
But there's an online appendix that has the entire
conversations, and it also links to any app or story or images

(23:56):
made.
So everything that we talk about in the book, you could go to the
online appendix and get the full version, or play with my
water balloon game. Or another game that I think was a bit more
impressive, that still wasn't that hard to make, is one where I
asked it to display the video coming

(24:21):
from the camera, so you see yourself just like you do in
Zoom, but then to have cartoon balloons falling from
the sky, and if you touch them with your finger,
they pop, and then there's a little scorekeeping:
if you miss five, you lose the game, and however many you
pop, you get more points, and it has a little popping sound

(24:43):
when you hit them.
But what's interesting about that is that, in order to build
that, the chatbot had to think: well, how am I going to figure
out if somebody's finger is touching a virtual balloon?
And it knew about a pre-trained AI

(25:04):
model called HandPose that is able to pick out every joint in
a hand, and then it was able to figure out where
the tip of the finger would be, and if it's close enough to the
balloon, it pops, or something.
So augmented reality, you know, just by asking for it.
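Once a hand-tracking model such as HandPose has reported the fingertip position, the core of that hit test is just a point-in-circle check. A minimal sketch, where the balloon-as-circle representation is an assumption for illustration:

```javascript
// Decide whether a tracked fingertip has popped a balloon.
// A balloon is modeled as a circle: centre (x, y) and radius r.
// In a real app, `fingertip` would come from a hand-tracking model's
// keypoints for the current video frame.
function isPopped(fingertip, balloon) {
  const dx = fingertip.x - balloon.x;
  const dy = fingertip.y - balloon.y;
  // Pop when the fingertip lies inside the balloon's circle
  // (compare squared distances to avoid a square root).
  return dx * dx + dy * dy <= balloon.r * balloon.r;
}
```

The game loop would run this check for every balloon on every frame, removing popped balloons and updating the score.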

Chris Colley (25:25):
Yeah, sounds like a Scratch game that I've made in
the past.
That's so cool.
Well, Ken, I want to just thank you again.
I mean, I could just keep talking and talking with you,
but I want to respect our time as well.
This book is really, really insightful, and I think it's the
kind of book that will open doors for teachers to start

(25:47):
understanding a little bit of how that integration can happen.
We're still kind of in that no-fly zone a lot with our
students, because of the age restrictions that some of
these, well, most of these, bots have right now.
Hopefully that will change in the future.
I imagine it will as we get more of a handle on it.
But I just found that your book was just full of great

(26:11):
ideas, and it was really cool to talk to you, and for you to
kind of share the origin and the development.
And maybe just in closing, Ken, what are you
looking for in the future? Like, are you continuing to test these
out and share your project ideas and information like that
with the world?

Ken Kahn (26:33):
Yeah, I am.
But let me comment first on what you said.
First off, I think one of the first things teachers should
pick up from the book is that it's very easy to experiment:
try things yourself and see what kinds of things you could do.
And if there are these kinds of constraints, because of age or

(26:59):
whatever, I think there are a lot of things you could do with the
whole class, where the teacher is controlling the chatbot and
students are giving suggestions and everybody's watching what's
happening, or something.
So there are a lot of possibilities there.
So, in terms of the future: just every day there seem to be new
advances, and I'm always trying them out.

(27:25):
One that I particularly find interesting is having multiple
chatbots communicate with each other, where some of them have
different roles. And I did an experiment, just playing around,
where one of them is a really good programmer but another one
is a very creative digital artist who thinks out of the box
and wants to do some computer art and music, and the two of

(27:47):
them just go back and forth, back and forth, and I'm watching,
but I could type in as a third person in this conversation.
And the times I've tried this, they actually produced, you know,
a pretty interesting little digital art piece with some odd
music and some, you know, animation and stuff.
So that's the thing I'm most interested in exploring right

(28:09):
now: what you could do with multiple chatbots.
And the simplest way of experimenting with this is to
copy and paste between two different tabs.
One tab has one chatbot and another tab has a different one,
and they could be the same chatbot or different ones.
But the important thing is the initial prompt kind of sets up

(28:30):
the context for each one, right.
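The two-tab experiment boils down to a turn-taking loop between two role-primed agents. Here is a toy sketch of that structure, not the actual setup: respond() stands in for pasting a message into a real chatbot whose role was set by its initial prompt.

```javascript
// Toy model of two role-primed chatbots taking turns.
function makeAgent(name, rolePrompt) {
  return {
    name,
    rolePrompt, // a real agent would send this as its initial/system prompt
    // Stub: a real agent would call a chatbot API; this just tags the reply.
    respond(message) {
      return `[${name}] replying to: "${message}"`;
    },
  };
}

// Alternate turns between two agents, feeding each reply to the other,
// and return the transcript of the exchange.
function converse(agentA, agentB, opening, turns) {
  const log = [];
  let message = opening;
  let speaker = agentA, listener = agentB;
  for (let i = 0; i < turns; i++) {
    message = speaker.respond(message);
    log.push(message);
    [speaker, listener] = [listener, speaker]; // swap who talks next
  }
  return log;
}
```

A third participant (the human watching) corresponds to occasionally replacing `message` with your own text between turns.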

Chris Colley (28:34):
And do you offer suggestions for prompting
chatbots in the book as well?
Like, effective prompts and stuff like that?
Because that's a skill in itself, right?

Ken Kahn (28:43):
Well, it is and it isn't, in some ways, because you
can, you know, carefully craft a prompt, and it could be useful.
But so often I find it equally good to just get started:
just say something simple, and then it maybe half understands,
and then you say, no, I meant this.

(29:04):
And then you add some more constraints or more details.
It comes out in a conversation, rather than as an initial
careful prompt.
But there is another kind of prompting, which we call
guidance prompting, where you could say to the chatbot, just
at the very beginning: you're going to be helping some middle
school students that are, say, making some web games around

(29:29):
biology.
You know, ask them if they have any ideas.
If they don't, ask them what their interests are; make a few
suggestions; if they have too complicated an idea,
you know, encourage them to come up with a simpler version.
And, you know, don't do too much work for them,
but be as helpful as you can, and explain things,
you know, in age-appropriate language.
Just two paragraphs like that, to kind of set the context and

(29:53):
the constraints, and then you could have a kind of
customized version of that chatbot
that's ideal for that classroom experience.
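Pieced together from that description, a guidance prompt of this kind might read something like the following; this is an illustrative example, not a prompt taken from the book.

```text
You are going to be helping some middle school students who are making
simple web games about biology. Ask them if they have any ideas; if they
don't, ask about their interests and make a few suggestions. If their idea
is too complicated, encourage them to start with a simpler version. Don't
do too much of the work for them, but be as helpful as you can, and explain
things in age-appropriate language.
```

Placed at the start of the conversation, a couple of paragraphs like this set the context and constraints for everything the chatbot does afterward.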

Chris Colley (30:05):
I love that idea too, of popping it up on the
smart board and having the kids, so they're not touching anything
but they're able to participate in the prompting and the
developing and seeing that change. I think that they would
be like, oh my God, wow, this is amazing. Not to say they
wouldn't want to go home and try it out. What can you do?

Ken Kahn (30:27):
They will be able to, with their parents' permission.

Chris Colley (30:30):
Absolutely, absolutely. Well, Ken, again,
thanks so much.
People, go out and get this book, The Learner's Apprentice:
AI and the Amplification of Human Creativity.
It's a wonderful book, tons of great ideas in there, and, Ken, I
thank you for putting this out there.
It's really something else, this book.
So thank you.
Yeah, thanks for doing this.

(30:50):
I enjoyed it.
Good, yeah, it was really great.
Thanks so much, and have a great day.
All right, thanks. Bye.