Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Laszlo (00:01):
Welcome back to the
Ramblings of the Designer.
We are a bi-weekly podcast where we talk about the latest in user experience from around the web.
Terri (00:08):
Hey, my name's Terri.
I'm a designer in the Bay Area.
Laszlo (00:11):
And my name is Laszlo.
I am a front-end engineer in the Austin area.
Terri (00:16):
Laszlo and I believe products should be accessible to all.
Each episode we like to have somebody come in and share their own take on user experience. So let's see what we have in store for this week.
Hey, hey, everybody, welcome back to the Ramblings of the
(00:37):
Designer. We have a person here.
I'm just going to do the bio.
He is the founder and CEO of Five by Five, a design and technology leader and expert in inclusion and innovation, with over two decades of experience spanning education and technology.
This person's career includes executive and leadership roles with Microsoft,
(00:59):
Audible, Amazon, Adobe, and multiple startups.
Not only that, but this person is a former educator turned tech executive with a unique perspective from growing up with dyslexia and ADHD.
Not only did that shape their career, but they also pioneered approaches to inclusive design and technology.
(01:21):
But there's more.
Their biggest commitment, to democratizing the power of technology and making it accessible to everyone, particularly in education, stems from their personal experience of being marginalized and underestimated growing up in the 1980s.
Yes, the 1980s.
Good years. With dyslexia and ADHD, and with their
(01:42):
latest project and innovation, an AI-powered platform that transforms how education content is delivered and consumed. Five by Five Learning's mission is to make education more accessible, equitable, and effective, with AI tailored to how each student actually learns. It's named after the communication term for loud
(02:03):
and clear, and Five by Five represents a significant advancement in education technology, serving diverse learning needs across K through 12 education.
(02:25):
All of that encompassed, I want to welcome Tripp O'Dell
Trip (02:30):
back.
Hey guys, good to be back.
Terri (02:31):
It's been a while and stuff.
So it has.
Trip (02:34):
And I have to apologize: when you asked me for that bio, Terri, I probably should have run it through AI and said, make it a little shorter. If it's the Ramblings of the Designer, I'm in the right place, because there you go, chief.
Terri (02:48):
It's all good.
It's all good.
Yeah.
Yeah.
So definitely welcome back.
I know you've been off on new adventures here and there, just like everybody else: trying new things, doing new things, learning new things and everything like that.
I really like the Five by Five. So basically, Trip,
(03:09):
if you could tell us just a little bit more.
So what was it?
Trip (03:12):
I think getting really frustrated was the backstory.
You touched on it a little bit in the bio: my background.
I have a really weird sort of backstory of what ultimately led me to what I do.
And I feel in some ways I've been building this startup for 40 years, right?
(03:32):
Because a lot of what I've picked up, and why I got into technology, is that I had to use it in really unusual ways in my own learning.
I learned how to read using books on tape while reading along in the paperback book.
I was hacking all sorts of little devices this way and that to make it easier to write, or take notes, or do anything.
(03:56):
So I'm having the time of my life now with generative AI, because everyone else is finally discovering what I figured out decades ago, which is that a lot of this stuff is unnecessarily hard.
And really, what is it we're actually trying to accomplish?
And I think the real pain for me is that a lot of these
(04:17):
differences, and I subscribe to the belief that these are not disabilities.
These are naturally evolved cognitive traits, in the same way that you've got tall people and short people, and some people go to the NBA and some people don't, right?
And they serve a purpose.
There are outlier strengths that go with them, but many systems
(04:39):
in society, education being the first one we really encounter, are designed around a one-size-fits-all assumption.
They're designed for economies of scale.
There are close to 70 million K-12 students in the United States when you look at public and private schools.
(04:59):
But in the public schools, it's like 48 to 50 million kids.
So you get 50 million learners all over the map, and they come from all sorts of different backgrounds.
Some of them don't speak English at home.
Some of them come to school hungry.
They have learning differences or circumstances that are challenging.
And they bring all of that with them into the classroom.
(05:21):
And the teacher, and I'm a former teacher, the teacher has to deal with the kid that shows up.
And these are differences that are genetically inherited.
My youngest has an even more impressive resume-slash-IEP than I did, and school's been a real struggle for him.
And we were applying to a middle school for him.
(05:42):
He had to write an essay, and on paper, he's not that strong.
He's incredibly articulate when he speaks, but his handwriting, and he doesn't type that well, that sort of thing, it's very challenging for him.
It can take him 40 minutes to write a paragraph.
And the prompt for the essay was: in a perfect world, what would school be like?
(06:05):
And I said, well, okay, let's show him what it could be like.
And I used a couple of different AI products.
I sat him down in front of my podcast mic, gave him the question, and he, like me, riffed for 12 and a half minutes, not really repeating himself or flubbing his ideas.
He was very clear, and I recorded that.
(06:27):
We ran it through Descript, which transcribed it.
And then we took that transcript, put it into a Google Doc, printed it out, and read through it together.
He circled the parts that he liked; we took those bits, put them into a Google Doc, and ran ChatGPT on it.
And I prompted it to say, hey, for a child with an IQ of X and a vocabulary of
(06:50):
Y and a Lexile score of Z, and all these other things, reframe this. And it wrote a beautiful essay, and we printed that out, and I showed it to him. It's amazing.
And I said, I know, but we can't turn this in, because that would be cheating, right?
But now you know what your ideas look like when they're all put together in a straight line and nobody is in between.
(07:10):
This was all you.
So all we have to do is just do another pass at it.
So we took that document and created an outline from it.
Then we printed that out, went to Microsoft Word, hit the record button for speech-to-text, and he dictated it, and then we edited it as normal.
So the only thing we cheated on is we didn't use a keyboard.
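For readers who want to try a similar pass themselves, the reframing step Trip describes could be sketched roughly like this. The helper below just assembles the kind of prompt he mentions; the function, parameter names, and reading-level values are illustrative assumptions, not part of any Five by Five or ChatGPT product.

```python
# Illustrative sketch of the reframing step: take the "keeper" ideas
# circled on the printed transcript and assemble a prompt asking an LLM
# to reframe them at the child's reading level. All names and levels
# here are made up for the example.

def build_reframe_prompt(circled_ideas: str, vocabulary: str, lexile: int) -> str:
    """Assemble a reframing prompt from transcript excerpts and reading-level hints."""
    return (
        "Reframe the following spoken ideas as a clear, organized essay.\n"
        f"Target a student with a {vocabulary} vocabulary and a Lexile score "
        f"of about {lexile}.\n"
        "Keep every idea the speaker expressed; do not invent new ones.\n\n"
        f"Ideas:\n{circled_ideas}"
    )

# Sample text standing in for the circled transcript passages.
ideas = "In a perfect world, school would let me show what I know by talking."
prompt = build_reframe_prompt(ideas, vocabulary="grade-6", lexile=900)
# This string would then be pasted into (or sent to) an LLM such as ChatGPT.
print(prompt)
```

The point of the sketch is the design choice Trip describes: the child's own words are the only content source, and the reading-level hints only shape how the model reorganizes them.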
(07:33):
The ideas were all his; the model was something that we used as a reference for the outline.
But that's actually how education is supposed to work when they talk about individualization or accommodations: you try to meet the kid where they're at.
But schools really have a hard time doing that, because imagine
(07:56):
you're in a factory, right?
An auto factory, a plant, one of Ford's plants.
And you tell the CEO of Ford: you know what?
We've made it a federal law thatevery fifth car that comes off
your assembly line is a completelycustom design from the ground up.
You can't even use off the shelf parts.
It all has to be completely custom.
And that's oftentimes the position a kid like my son is in. What they're asking the
(08:20):
teacher to do, in addition to serving the other 20 or 25 kids in that class who also may have differences, and it's about one in five who have a substantial difference. That kind of personalization seems like a reasonable expectation in the age of Amazon and Netflix and all you can do with AI. So why aren't we doing it in
(08:41):
the most important parts of our life?
The parts like school and work. Couldn't that be better?
And that was the inspiration for Five by Five: you think about things like, for example, the state of California spending 12 billion a year on dyslexia alone, because of what they went and calculated.
(09:01):
This was done by Boston Consulting Group about two years ago.
They looked at the cost not just of educating those kids, because it is about twice as expensive as for a typical general education student, but they also looked at what's called the school-to-prison pipeline.
Because when you get a gap in reading, if a kid's not reading fluently by fifth grade, their chances of going to prison are off the charts.
(09:23):
Higher than even the groups you would think would be much higher.
A child with dyslexia is 36 percent more likely to end up incarcerated than somebody without dyslexia, regardless of their background.
That is an amazing illustration of how profound the lost
(09:44):
opportunities are with how we're educating kids, because we have a factory model in terms of how things get taught or produced. And I think we now have the technology, the state-of-the-art tech, and they're giving it away.
There's so much you could do with it.
It just requires rethinking some things, and then it becomes much easier and much more functional, and it takes a ton of work off of the teacher
(10:08):
if you roll it out in the right way, and that's what we're working to do.
It's pretty amazing. I'm excited.
And if you can't tell, it's a very exciting product to work on.
Terri (10:21):
Yeah.
So if you don't mind, can we go back to the factory model example?
Do you think our education system, either private or public, will actually move past that model?
Or would they change it,
(10:41):
or diverge from it, do you think, in the long term?
Trip (10:47):
I think before you can change the model, you have to change mindsets, right?
Whenever you have a big disruptive innovation... let's take AI out of the picture, and we'll go back to something that all designers love to talk about.
Let's talk about the iPhone, and how much the iPhone challenged our thinking around what
(11:09):
personal computing looks like or how it works. And there were a couple of really rough years where people were trying to essentially implement desktop-style controls in mobile apps, and they wouldn't work, or mobile apps that needed to scale up and work on a website, but then it just stretches really wide and it's weird.
(11:30):
So we had to really think about form factor, and we had to think about the differences between what mobile is good at versus what desktop is good at.
That's a mindset shift around what computer software looks like.
SaaS is a similar thing.
The idea that whatever device I'm on, I can just pick it up and start using something, without having to go to another computer
(11:51):
where the software is installed, log into it, post something, and then upload it to the internet.
It just works now, right?
SaaS changed a lot of those things.
If you change the affordances, if youchange what something can do or what
it's capable of doing, you fundamentallychange the assumptions around how that
(12:12):
can work, but it's still new enough.
And the people in education today, and I have many friends that are still teachers, I was a teacher 20 years ago, it's way worse than it ever was when I was teaching.
It was hard when I was doing it.
And you get teachers leaving in droves, because you've taken work that they
(12:33):
love and you've made it all about compliance and SLAs and checkpoints.
It's like being a short-order cook.
And they do less and less of the work that they love, because they have so many requirements around individualizing something for this special case or that special case.
Why don't we actually have systems that take care of the special cases, so the
(12:54):
teachers can just be amazing teachers?
Our goal is never to replace a teacher, because teachers are the X factor.
They're the ones that keep kids off the school-to-prison pipeline.
They're the ones that... you don't think back 30 years and say, you know what, I wonder what happened to Oregon Trail.
That was just amazing.
(13:15):
But you think about that teacher.
You think about that English teacher that got through to you, or that coach, or whomever, that human element.
It's a paradox, but AI affords us the opportunity to be more human than we have been, because it takes on a lot of the hard work.
It can be labor-enhancing,
(13:36):
not labor-replacing. It's amazing.
You don't need to replace a human worker.
You can actually have that human worker working on the stuff that only humans can do well.
And that's the essence, the magic, of what a teaching job is: you're in a unique position, and you have the insight and the skill set and the calling and the craft to reach a kid
(13:59):
who desperately needs it and needs to see what they're capable of doing.
Terri (14:08):
Yeah, that's a lot.
It's so interesting, because as you kept talking about AI and the whole profile idea, now I'm going to flip it.
Meaning, I have aging parents; they're 87 years old.
So when they're at their passing, could they have a profile of
(14:33):
their thoughts and maybe their voice?
And could AI be them, but in a can, or in something?
I know of some kids or adults now who would keep their parents' phone number.
So when they call, they can hear their voice and just leave a message.
(14:57):
I've heard families have done that in the past.
I'm just thinking of what you're talking about: could that be something, of course, at a later date?
Sorry, I don't want to be morbid, but it's about the life after; we're doing the present now.
Trip (15:13):
Yeah.
Terri (15:14):
I don't know.
It just made me think.
Trip (15:16):
I lost my mom in 2008, about 16 years ago, right at Thanksgiving.
She was only 58, and it took me years.
I think I'm still connected with her on LinkedIn.
I can't bring myself to remove her
(15:37):
address from my address book.
And I think about that with my dad. He won't hear this, but I duck his call regularly on Sundays, because he always calls at the least opportune time.
It's like he's got radar.
But I'm very careful about always making sure I've got one in there, just because it does bring that echo from the past.
(16:02):
And I went through and organized a bunch of old family photos this summer with my daughter, and looking back at my parents, you see them in a completely different light, and you're like, oh my God, they were so young and stupid.
Like, now I understand all the things they did wrong, right?
Because they didn't know any better.
But I think there's an element of that where I'd never want to see that be tainted,
(16:23):
where it's faked.
There's something essentially sacred about that humanness, and turning it into a commodity that can do stupid human tricks on command, and be my Alexa voice and a bunch of other stuff, denies our humanness
(16:48):
and the fact that time is precious.
So that's a personal take on it, but I think it extends largely to how I think about the proper place of AI.
People ascribe way more power and agency to AI than actually exists.
They're statistical models at the end of the day, with fast computing. And they can improve those statistical models,
(17:10):
and they have methods to do that.
But AI does not have serotonin.
AI does not have progesterone. It does not have feelings.
It does not have the full range of human experience.
It doesn't even have the full range of human intelligence.
It's a tool.
And it is a technology, and putting it on the same standing as a human
(17:33):
being loses the point.
The purpose of technology is to improve life.
And I think part of that is respecting and maintaining a barrier between what is authentically and genuinely human and what is fake, or,
(17:55):
to use a big word, a simulacrum, right?
Like, it's an illusion.
And it's an illusion that's based on our own cognitive biases: we see something that seems intelligent, so we treat it as such. The more human something seems, the more we treat it like a human.
That can be used to manipulate us.
That can be used to gaslight us.
And by blaming the AI, you actually
(18:17):
let the people that are actually making the decisions and programming that AI off the hook, without giving consideration to the damage they can do, or the fraud, or all the other stuff.
We're not holding those people accountable.
It's like blaming the devil.
The devil made me do it.
Or the Twinkie defense.
Laszlo (18:35):
Yeah,
Trip (18:36):
it's just like, all those chemicals just made me crazy, and that's not my fault. It's the Twinkie's fault, right?
It's stupid; it's intellectually dishonest.
But with products like Alexa, which I worked on, we had a principle for Alexa: it's a computer, it's a machine, not a person, and it should only speak when spoken to.
(18:57):
And they stuck to that principle for a while.
And I think it was a good principle to stick to, because the purpose of it sounding like a human or responding like a human is to make it easier to use.
But it should not be used to manipulate the emotions or the decision-making of the human.
Laszlo (19:15):
Yeah, I do want to chime in here since you brought that up, Terri.
I remember a few years ago, when LLMs were starting to take form in a more user-friendly way, people were training them on their lost parents' entire text history and phone call history, and then they were having conversations with this AI. But then people started looking into the
(19:38):
psychological impacts of not letting go.
And then, like you said, it diminishes the value of the individual, because now they're just kind of a safety blanket, a way to not come to terms with reality.
Human beings are so complex that messing with those root emotions is unknown territory.
(20:02):
You don't know what you're going to get.
And it devalues the moment, right?
Because then you can start thinking, I'll catch up with them after they're gone by texting with their ghost, and then you don't live in the moment.
What I do is, I bought a camera, and every year I interview my mom. We'll do a full hour-long interview, and I want to have those recordings, because she does say she wishes she had done that with her
(20:24):
parents, because she doesn't have any recordings of how they sounded.
But having how they actually spoke and sounded is different than simulating it.
Trip (20:33):
Yeah.
And it's often the little passing things, the things you don't remember until you see them, the little quirks they had, and you're like, oh, I remember that, right?
It's never going to have that serendipitous moment of discovery.
And I think that's part of a bigger problem around our relationship with grief and death in this culture.
When I started teaching, I worked
(20:54):
on a Lakota reservation in South Dakota, where the traditions are different. Life expectancy is a lot lower there.
It's one of the poorest places in the United States; life expectancy was in the 50s for both men and women, and about 50 percent of the population is under the age of 18.
So we had a whole period around this time of year, it was called funeral
(21:16):
season, because people would die of exposure and a bunch of other things.
But the tradition there is that when a family experienced a loss, they would immediately collect all the belongings of the person and hold a first giveaway, and they would give away all of these things to the people that were supporting the family.
(21:36):
And the family would go into a year-long period of grieving, where they would shave off their hair, or cut their hair if it was long.
And the family, with the extended family, would save up for a great big feast and giveaway on the one-year anniversary, when they would bless the headstone.
And the idea was that in the Lakota worldview, the only
(21:58):
thing you actually own is your body.
And that possessions can end up possessing you, quite literally.
The reason to give away those possessions and spread them through the community is that it benefits the community.
It's a generosity thing.
And your wealth is actually measured in how much you give away, not how much you have.
And so the family will save up for an entire year, and they will give away
(22:26):
tens of thousands of dollars of things like blankets and laundry baskets, and they would feed the community.
It's open to all comers, and that sort of thing.
They would buy new stuff just to give it away, to celebrate that person's life.
But they gave away the personal possessions because they didn't want those possessions to anchor this person here.
(22:47):
That person's gone.
They need to move on to the next life.
We don't want to keep them here.
They deserve to move on to the next step.
It's a beautiful tradition. But we live in a very privileged culture where we don't think we're ever going to die.
It's never going to happen to us.
That's a natural thing, right?
But everything in our culture kind of reinforces that, and we hold
(23:07):
on to things to the point where the grief becomes complicated, and it just compounds the trauma.
So I think we weretalking about AI though.
It got a little deep.
I'm sorry.
Laszlo (23:19):
No, it's all related, right?
AI is a mirror on human knowledge, so it's all interconnected.
Terri (23:27):
Yeah.
And the fascination with AI is that you can do a lot of different things with it, for the good, for the bad, for the ugly.
So I just find it all interesting: what can make your life better, or what can help this person achieve X, or do what they would like to do.
But, yeah.
Trip (23:47):
To say it in a nerdy way, its natural alignment is chaotic neutral.
Right?
It's not good or bad, but it is chaotic neutral.
Very disruptive.
Terri (23:59):
Yeah.
Yeah.
It definitely can be.
It's just... what was it?
I think I was scrolling on LinkedIn or on Instagram; they have so many AI companies out there for design, where you can do this or you can do that with it.
And there's a little YouTube thing that came out, this one little
(24:21):
advertisement: oh, you can be an AI professional.
And they have all these apps going.
Trip (24:30):
I think LinkedIn's secret calling seems to be ruining good jobs by overselling people on how easy they are, kind of like what happened to UX a bit with the whole General Assembly phase.
Hey, it's a gold rush, it's the UX gold rush, and General Assembly is selling shovels.
There's a lot of snake oil out there with AI, with people saying,
(24:54):
it's just like the old thing, but better, because now it has AI, right?
A lot of times the AI gets implemented and people don't think about: what are the challenges with AI?
Not in terms of good or bad, but is it even useful?
Is it helpful?
Is it too hard?
Is it easy enough to use?
Because one of my ways of telling people the problem with AI is that
(25:16):
you really have to know what you're doing. You have to know it pretty well to be able to be any good with it.
It's not as turnkey as people like to believe, and it's like a genie in a bottle.
You've got to know how to rub that bottle just the right way, and say the right magic words in the right way, to get the outcome you want.
And you have to know enough to know if it's fibbing, if it's cheating on you.
(25:38):
And that's something where people think that it's superintelligent, or that it's infallible.
And that's a really dangerous assumption to make with this technology in its current state.
Laszlo (25:52):
I did want to mention, on the chaos side, something not really harmful, but just a funny thing that happened.
I'm sure you've heard of AI news generators, fake Facebook pages and all that, just for the ad revenue.
And Ireland, I believe, had a Halloween parade last year, and there was this huge thing.
So an AI flubbed the data around that and then said that there was
(26:16):
going to be this massive second year for the Halloween parade, and showed all these fake advertisers and the address.
It hallucinated the entire thing, but it seemed so real that a bunch of people showed up for this parade that never happened.
And then the police had to go clear them out: no, nothing's happening.
And it's just funny how it can have real-world consequences.
This one was fairly benign, but you can just imagine how this can go wrong.
Trip (26:39):
I wish I had been there, a spontaneous Irish street party.
Yeah.
They talked about how good it was for local business.
Make it up as they go: hey, we'll make our own parade.
Laszlo (26:49):
The demand's there.
Yeah.
Terri (26:51):
That's funny.
Yeah, there's just so much other stuff. But going back to AI and everything, Trip: what do you see as the difference between 2024, as we're ending it, and going into 2025, with AI?
So what do you think designers or developers or even executives
(27:14):
are looking for?
Just as a trend.
Trip (27:19):
I don't know... I'm going to take a risk here.
I think all of us would settle for a week without something just being batshit crazy, without something weird happening, come on.
Like, I would take just a normal week in 2025 to kick things off.
But I think if we're looking at tech and we're looking at the
(27:40):
industry, it'll be interesting to see what happens after the inauguration and the change in power, right?
And the tone there.
I think that's going to help set the tone.
I think there's a lot ofuncertainty in the world.
And I think there'll be acontinued sort of acceleration
on the hype cycle around AI.
I think it's starting to get
(28:03):
traction with average users.
But I think we're going to start lookingat, well, is what we've done so far good?
Right?
I've seen statistics that the cost of the power to generate a typical GPT response is basically the equivalent of a plastic bottle, in electricity, right?
(28:25):
And until we bring on some of these nuclear power plants that they keep talking about, and we have unlimited access to electricity, that has an impact, right?
And if you're trying to perfect something, you don't think about that when you're writing intents. And with one intent, one request to an LLM,
(28:47):
you're tailoring it and you're shaping it and reworking it.
Like, I've done a bunch of stuff with Stable Diffusion for image generation.
After you've trained the model, it probably takes 10 or 15 attempts to get exactly what you're going for.
And it generates a bunch of images, and most of that stuff never gets used.
If you're looking at it as an industrial process, that's incredibly inefficient.
(29:10):
So I think the companies that can take the technology, the LLMs, and work around some of their limitations, to scale them efficiently and to have things work in a more automatic fashion, I think there's going to be more appetite for that. Because right now,
(29:32):
just going to ChatGPT and putting in an intent doesn't actually help you with the complex service design around how information gets around, or how anything that you would generate with AI gets done.
Like, how do you actually get to quality faster, versus skipping a lot of the in-between work and getting right to a result that's mediocre?
So you keep taking big swings.
(29:54):
And it's not very efficient as a way of working.
So I think looking at efficiencies, how to make AI easier to use and more efficient for more people, with results that are more predictable, is going to be a big area of investment.
Laszlo (30:10):
Yeah.
And you touched on a point I've heard:
The biggest hurdle to AI development isn'ttalent, isn't what you want to do with it.
It's energy.
Energy is the biggest blocker right now.
Trip (30:24):
Yep.
And yeah, the amount of spending that's going into things like getting new chips: every generation of chip, you're getting entire data centers turning over their hardware on a less-than-12-month cycle, because the new chip architecture is so much faster than
(30:47):
the old one that they can't justify making do with what they've got, because they're going to fall behind on speed, and the energy consumption is higher.
Laszlo (30:56):
I see it as a potential risk to the overall success of AI. There's heavy investment, and investment isn't just charity, right?
There's an expected ROI there.
And if the burn rate continues to just blow out of proportion, is the return, the inflection point,
(31:17):
going to happen soon enough to where it makes sense?
Trip (31:20):
Yeah, hold on.
Laszlo (31:22):
Good.
Terri (31:23):
Yeah.
No, definitely, Laszlo.
I think that's a really interesting perspective on the burn side and what's really going to happen.
Trip (31:32):
Yeah, it's a good question.
I agree with you.
So there's a book that really influenced my thinking, that I read back in 2022.
This is what I do for fun: it was a book on economics, a combination of classic economic theory and historical economics.
(31:55):
And it examined AI through the lens of the social impacts and trends that happened during the first and second industrial revolutions.
And I think it's helpful to look at things like the development of the steam engine, or electrification, or the development of modern railroads,
(32:16):
and the rapid build-out of rail.
And there were things that unlocked that, steam power and using turbines to generate electricity, and those technologies got adopted so incredibly quickly.
And they weren't well prepared for it.
And there was a lot of civil unrest, and you hear about the Luddites and all that kind of stuff that went with it.
But I think it's instructive, because it's a similar pattern.
AI isn't a unique disruption; it is a big one.
But I think you can learn a lot. I learn way more, and I'm usually more right, by looking at history than by
(32:58):
speculating.
Laszlo (32:58):
Yeah.
Trip (32:58):
I think we need more technology historians than we do futurists, because most of the problems that you're going to encounter, and the big opportunities and challenges, I think are found more by examining patterns in the past than by speculating that we'll all be riding in flying cars.
Right, and we'll just plug our brains into the computer, and
(33:22):
that's how we'll commute to work.
Nobody wants that.
Who wants that?
And so I think that if we actually started reading more history books and fewer science fiction books, Elon Musk, it might result in some better, more humanistic technology decisions.
Terri (33:42):
Yeah, absolutely.
But anyway, we're going to stop it here.
We could probably talk about AI for the next five hours, because there's just so much.
It's so cool.
But then again, it can be very frustrating at the same time, because, as you said, Tripp, there's so much gobbledygook, I would say, out there.
Trip (34:05):
I will, yeah.
I'll double down on what I said earlier about being the king of rambling: I can talk about any topic for about five hours straight, but AI is one where there's a lot of ground to cover.
Thank you so much for having me.
This was a lot of fun.
It's great to catch up with you all again.
It's always a pleasure.
Terri (34:23):
Yeah, absolutely.
So anyway, Laszlo, where can we find you?
Still on
Laszlo (34:28):
LinkedIn.
You can find me on LinkedIn.
By my name.
How about you, Terri?
Terri (34:32):
Yep.
Same here, on LinkedIn, and then on Instagram under Fluxing Designs, and then also with Happy Scribbles.
And Tripp, so where can they find you?
Trip (34:42):
You can generally find me tied to my desk, in Slack or Figma.
No, you can reach me over LinkedIn.
I'm not really on social as much, just because I'm trying to build a global education platform, but yeah, LinkedIn's good.
Terri (34:59):
LinkedIn.
Okay, everybody, you can find Tripp O'Dell on LinkedIn.
But until then, everybody have a great rest of 2024, and Laszlo and I will see you next year.
So aloha.