
September 13, 2023 • 58 mins

When you work and live deeply within the realm of new technologies, it can be easy to forget that most of the world doesn't share that same perspective. Many of us have been spending the last 9 months freaking out about the generative AI arms race and the end of white-collar work, but my conversation this week with Mark Johnson steered my thinking in a very different direction.

Mark is the co-founder and CTO of an online learning platform called Pathwright, and the majority of our conversation was spent talking about AI from the perspective of the end user. The origin of Mark's 11-year-old tech company looks a lot different than we're accustomed to seeing. There was no venture capital or seed round. Instead, Pathwright was funded and built by aligning the development roadmap directly with the specific needs of early adopters.

In the decade since, Pathwright has continued to develop its platform with the mindset of serving actual needs and staying true to its core values. So, I was curious how a founder and CTO like Mark is thinking about the rush in the tech world to implement AI as quickly as possible. Is this something that his customers are demanding, or are technologists putting this pressure on themselves?

Besides this topic, we also talked about the implications of AI in education, the likelihood of an AI bubble, and how to foster an optimistic view of the future in the way you think about and interact with new technology.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sam Gerdt (00:07):
Welcome everybody to Road Work Ahead, a podcast that explores the unmapped future of business and technology. My name is Sam Gerdt and I am your host. When you work and live deeply within the realm of new technologies, it can be easy to forget that the majority of the world does not share that same perspective. Many of us have been spending the last nine months freaking

(00:29):
out about the generative AI arms race and the end of white-collar work, but my conversation this week with Mark Johnson steered my thinking in a very different direction. Mark is the co-founder and CTO of an online learning platform called Pathwright, and the majority of our conversation was spent talking about AI from the perspective of the end user,

(00:52):
the learner. The origin of Mark's 11-year-old tech company looks a lot different than we're accustomed to seeing. There was no venture capital or seed round. Instead, Pathwright was funded and built through a process of aligning the development roadmap directly with the specific needs of early adopters. In the decade since, Pathwright has continued to

(01:13):
develop its platform with the mindset of serving actual needs and staying true to its core values. So I was curious how a founder and CTO like Mark is thinking about the rush in the tech world to implement AI tools as quickly as possible. Is this something that consumers are demanding, or are

(01:33):
the technologists putting this pressure on themselves? Besides this topic, we also talked about the implications of AI in education, the likelihood of an AI bubble, and how to foster an optimistic view of the future in the way you think about and interact with new technology. Mark is a thoughtful business leader and I hope this

(01:54):
conversation encourages you like it encouraged me.

Mark, Pathwright's been around for 11 years now, which is a really long time. It's such a cool platform. I think I remember when you guys launched it and how unique it was then, and I think it continues to be unique now. But I'm curious: in 11 years, what are some of the things that

(02:17):
you've seen change, maybe on the technological side, but also just philosophically, with online education?

Mark Johnson (02:24):
When we first started, the landscape of online education tools was not great. You had tools like Moodle and Blackboard, and user experience was not even a thing in the educational space. It was more of an administrative experience, I guess you could say, where you just try to check all the feature boxes that the school had and don't worry about the students. They're not really

(02:45):
the user because they're not paying you money, and so that was a particular niche that we tried to fill early on and still are filling. Actually, I guess one thing that's changed is that the baseline of the software has gotten better. It's gotten a bit simpler, but even so, the most popular learning tools, like Canvas and some of the other big ones,

(03:05):
while they are more user friendly than Blackboard or Moodle, they're still, I would say, admin first, not student first, and they're still designing for the people who buy, and so that part hasn't changed. Obviously, with technology and mobile phones becoming a huge thing for students and for teachers, that's obviously caused the simple design we chose, with a single path, one

(03:28):
screen, to kind of stand that test of time in a sense, where we haven't really had to change things too much and make a completely separate app, like some of our competitors have done.

Sam Gerdt (03:40):
So in 11 years, then, what are the challenges that have most impacted your business?

Mark Johnson (03:48):
Well, I mean, the growth part has always been kind of our Achilles heel. Not that we don't grow, it's just been kind of slow and steady, which is not a bad thing. We kind of prefer it that way. We're probably mostly an introverted company, so we're not out there banging down doors. So that's been a challenge overall.

(04:09):
But we've had kind of slow growth. We just never have really focused on that area of the business as much as we have just the product side. But again, the issue is that if you build it, it doesn't always work out. So you do have to have some kind of outbound stuff, which has been continually a challenge for us.

Sam Gerdt (04:31):
When I think about e-learning, online education, LMS, whatever it is that you want to call it, it seems to me like your platform is built for a very specific set of people with a very specific need. It's not necessarily something that somebody goes out and impulse buys, and so marketing, that's always going to be a challenge. I guess I'm a little curious too: are there barriers to people

(04:57):
adopting e-learning platforms? Because I know you guys are looking at businesses and saying you can use this for training. You're looking at educators and saying you can use this for your students. Are there specific barriers that you've seen, and have those barriers shifted over the course of time?

Mark Johnson (05:16):
Yeah, I mean, the barriers mostly are just the amount of prep it takes to create a course or a path. So it's not the kind of thing where you can just sign up and then instantly pay us, usually, because there's a lot of work in between that kind of trial period and the actual point where you'd want to launch. So a lot of the online learning space is in what you might call

(05:37):
the content marketing side, where the sales pitch is basically: if you create your own online course, you can make a business working from home. That kind of stuff usually doesn't work out so well because, again, those people have the same problem of how do you get your course in front of people? And then they buy courses on how to get their courses in front of people, and it creates this whole recursive loop.

(05:57):
So on that marketing side there's a little less friction, in that you can sell your product more easily, but then your customers have that same issue. So we don't tend to focus solely on that. Like you said, we are kind of a more broad, horizontal platform, which makes it more challenging. In the academic space the challenge is always going to be, like, the IT department, whatever kind of arbitrary requirements

(06:20):
that you have to meet, and so that's another game that, in order to play successfully, you have to have a sales team and account managers, and that's also a route we don't really want to go, and so we've kind of stayed in this middle space where we're not really, like, a content marketing platform, we're not really an academic LMS. We're trying to stay in the middle, which does make it a bit

(06:44):
more challenging to find the right people.

Sam Gerdt (06:49):
Yeah, there are two challenges that you mentioned, and I'm thinking it seems like you guys maybe are working on solutions to them. You mentioned the time it takes to build a course, and I've seen a demo of some new stuff you guys are working on with using AI, like the LLM, to help assist in that process of

(07:13):
building courses. And then the second thing is it seems like there's a greater demand more recently for shorter courses, online educational content, especially as companies are trying to upskill and as people are recognizing that maybe the information that they learned in school is not necessarily going to stay

(07:36):
relevant throughout their careers. So are you seeing, are you hopeful, that those two changes, those two shifts, are going to be beneficial for a platform like Pathwright?

Mark Johnson (07:49):
Yeah, we think so, and we're working on a new version right now that kind of doubles down on that direction. So not necessarily only for shorter form, but making shorter form a more natural part of the product, so that it doesn't feel like it's heavy to get started. The AI I think you mentioned is still something that is in our labs. We're experimenting with it, trying to find the right UX

(08:11):
patterns. I think a lot of people are still trying to figure that out, but we have been thinking about it as almost like a templating system on steroids. So you could have a teacher, or whoever is creating this path, outline their objectives. The AI could ask some clarifying questions until it's got a good idea of what kind of content they're trying to draft,

(08:31):
and then it could generate a nice template for you to get started, and we think that will help with that kind of initial blank page problem that we have. But yeah, it definitely is showing some potential.

Sam Gerdt (08:45):
Yeah, and I imagine too there's a temptation to go too far with it, to say, oh, AI can create my course, when maybe that's not the wisest path.

Mark Johnson (08:55):
Yeah, yeah, I don't think that would be beneficial to most people if you're just being a content middleman with the AI. But I think where we're trying to frame it is more on the: this will give you a good draft to start with, or maybe give you some ideas for the kind of topics you might want to cover. So it's more of like a brainstorm or a template, but

(09:15):
more tailored to, you know, your specific scenario.

Sam Gerdt (09:19):
Yeah. Do you see Pathwright as being something that can augment higher education or even secondary education, or do you see it as being a potential replacement for higher education, secondary education? And I think what I mean there is: do you feel that these established means of

(09:41):
educating people will stand the test of time, or will new platforms like Pathwright come in and disrupt to the point where those older institutions are forced to either change or go away?

Mark Johnson (09:57):
Yeah, I think most likely some of the institutions will go away, but some will change. You know, hopefully the ones that have been able to, like, keep up with things, which unfortunately is not all of them. But yeah, I do think in the long run we're going to see a pretty seismic shift in the way the next generation, even

(10:18):
the current generation, who's active in this learning space, thinks about what learning looks like. The shape of learning is changing quite quickly. You know, I think AI, like what we mentioned before, is going to be a big part of that. I'm sure you've probably heard of the two sigma problem. Basically, the idea was, just summarizing it,

(10:41):
that if every student had a personal tutor, then their score results would be in, like, the 80th percentile or higher, just by having the attention that they need. The problem, the part of the two sigma that's the problem, is that that doesn't scale, because you can't give a human tutor to every student, you know, in the classroom, and so that's always been the problem. People are like: we know this

(11:01):
would work if people could get personal tutoring and mentoring along with what they're learning, but we can't provide the manpower for it. So now we're in a phase, though, where AI looks like it could at least do a pretty adequate job at that tutoring part. In some ways it's even better than a human tutor, not in every way, but in some ways it is, because it doesn't get tired, it's always available, it responds immediately, doesn't

(11:23):
have its own personal, like, emotions around what you're asking it to do. So that part, I think, will be a huge shift once people start adapting to: hey, this is a thing. Now I have my own personal learning assistant. It's always available.

Sam Gerdt (11:41):
Is that something that you foresee all education platforms and institutions wanting to implement sooner rather than later?

Mark Johnson (11:51):
I would guess most institutions will be a bit wary of it at first. They already are. So a lot of schools are banning ChatGPT, for instance. Some are. There are a few that are trying to incorporate it too. But I think with any kind of new wave of technology, and we're definitely in one of those spaces, there's gonna be a fear reaction to try to hold it back.

(12:14):
But then there will be people that embrace it too much. But eventually it'll even itself out where, yes, I do think that at some point it'll just be assumed. One analogy I've found helpful, and I think it will be a likely outcome for AI, is going back to when calculators became a problem for math classes. I think this was in the early 80s.

(12:35):
A lot of teachers were freaking out about that, saying we can't have calculators in our classroom, because what's the point of them working through the problems if they can just type it into their calculator, and this is the end of math as we know it, kind of stuff. And the reaction was difficult for math teachers and students who wanted to use these calculators.

(12:57):
But now we all know it's, like, part of the classroom. We all use calculators. I think LLMs in particular will fill a similar space. They'll be kind of the calculator for language that we use in school, and it'll just be assumed. Eventually it'll be integrated into all your stuff already, though.

Sam Gerdt (13:16):
Yeah, I imagine you do have to have certain rules around it. They don't seem like complicated rules, though, to guide students in the usage of this technology. To me it seems like it could be very straightforward and would absolutely be beneficial. I know I've talked with a couple other people already

(13:37):
who've talked about this use of things like ChatGPT for tutoring, instructing students in the classroom on how best to augment their learning with an LLM, versus banning it altogether, especially if you consider, in higher education, you're gonna jump out into the workforce and you're gonna have

(13:57):
to use the technology. It's gonna be right there waiting for you. When you talk about, in your labs, preparing for the future of Pathwright, when you look at new technologies like GPT or any other LLM and you say, okay, how do we incorporate that into

(14:20):
what we're doing? What's the thought process there? What are the most important things, and what are the things that maybe can be distractions?

Mark Johnson (14:29):
Yeah, I mean, the lab project itself, speaking personally, can be a distraction in the sense that it sometimes is more exciting than the kind of stuff you're doing day to day. True, but it's kind of a, I don't know. I personally think you just kind of have to try a bunch of stuff before you find the right solution, and so we use our lab. We have an official lab project status at work.

(14:51):
We have this kind of three-quadrant model we work in, but labs are on the very far right of it, and then the middle is where our actual project work is, and on the left is where we're trying to, like, maintain and support things that we've already built. So we kind of divide it in that way, and I love working in the lab, but I need to be sometimes focusing on that middle space

(15:12):
too. But as far as how we evaluate it, we usually will make prototypes. So, like you would in a lab, like a product lab, we make little one-off demos and tests. Sometimes we show them to the team or run them by a few customers. Like on the Path AI, we actually put that out there on Twitter and had a newsletter, like a news list, sign-up to

(15:32):
see if people were interested. So we'll do that kind of stuff with this kind of lightweight prototype, and then eventually those things will make their way into, like, the real project, and by the time they get in there, after having been in the lab a bit, they usually look quite different to what we started out with. But having that kind of separate, intentional space, though, where we do that kind of R&D, has been really helpful,

(15:52):
because you can kind of silo it off as, like, okay, this is the lab stuff, this is the project stuff we know we're moving on. Having those segments makes it easier to manage and even to think about.

Sam Gerdt (16:02):
Really? Yeah, there's a pretty popular video out there where Elon Musk is talking about engineering problems, and he makes the comment that one of the biggest traps for engineers, the biggest waste of time, is trying to optimize a process

(16:26):
that should be eliminated. How do you approach that when you're looking at developing a product, and when you're looking at, over time, maintaining and perfecting and adapting? How do you measure what should and shouldn't belong?

Mark Johnson (16:45):
Well, I mean, one way to do it is to make sure you have a balanced team, even on just the pure engineering development side, especially when you're trying to build a platform. So if you're trying to build something that has to last 10 years, you have to kind of have some people that are more on the edge of the technology landscape, so like what's coming,

(17:09):
what's down the wire, so that we're making decisions that can, you know, have a good longevity. But then you also need people who are going to focus on stability and maintenance and quality assurance, and those are different personalities, really. Like, you could say they're different skills, but I think they're actually more aligned with personality than they are with skill.

(17:29):
Personality will tend to shape the skill, but there are some people that really like a checklist. They really like that, you know, let's check everything off and make sure we're ready for deploy. Personally, I hate that. That's not how I like to work. I like to work in a wide, open space where there's lots of possibilities and I can kind of spiral around and find stuff. Of course, you want people to be able to do both things, but I

(17:52):
think if you can balance your team kind of across that spectrum, of people working in the lab, people making stuff happen and getting things done, and then people who are maintaining and doing support, that can really help a lot in trying to balance out that tendency for things to either decay and become stagnant or get so pie in the sky that you never actually do anything practical.

Sam Gerdt (18:15):
As you're then developing, obviously you're interacting with the public on this, obviously you're getting feedback. What are clients demanding of you in terms of your platform, in terms of future roadmap? Is AI something that's in high demand from the average person, or is this something that's just hype?

Mark Johnson (18:38):
You know, I think that from our customers' perspective, what I've seen anyway, people are very curious about it, but they don't really have, like, an opinion as far as "we need AI now," necessarily. It's kind of more like: I don't know what's gonna happen with this. Is this AI gonna replace me or is it gonna make me more powerful? That's kind of the general question, you know, I think, that everybody's asking. So I don't

(19:02):
know if I would say that any of them are demanding it. Our customer needs are more generally on the practical side, like: I need more versatile ways to sell my course, or I need this one feature for peer review, or whatever.

Sam Gerdt (19:18):
Well, I was gonna ask you: is there a time that you can think of where maybe a feature was developed because of feedback from clients, like overwhelming requests for it? And how do you respond when something like that happens? Like, do you prioritize it? Do you put it on a roadmap to appease and then develop it in a normal timeframe?

Mark Johnson (19:41):
Yeah, that's a great question. We have a fairly unique way of dealing with that, especially early on in the company. So we didn't raise any seed money or venture capital, so we were completely bootstrapped. But that does mean that you have to come up with some model to keep the bills paid while you're developing out this platform that has no customers. Yeah, and so our way of doing that was

(20:02):
what we called a partner model. So we would work with specific customers who had, like, online learning goals, basically, and we would give them a proposal. Basically, we'd give them a pitch that said: hey, we're developing this platform. If you will fund X percentage of this feature, then we'll give it to you for free, or we'll give you usage,

(20:23):
whatever, but we'll also work with you to make sure it's meeting exactly what you need. So a level of service that you wouldn't usually get from, like, a software-as-a-service type of product. But we also got the benefit of raising a good amount of funding, probably more than you would get in a regular, like, seed round, using that method, without giving away any equity at all. But we also have customers that are real customers.

(20:47):
We're not guessing what they want, because they're actually paying us to do it. So we found that model works quite well. There are some gotchas with it, but overall we've been pretty happy with it.

Sam Gerdt (20:58):
Yeah. So give me an example of one of those features that was developed just by popular demand.

Mark Johnson (21:04):
I don't know if it was so much popular demand as, like, opportunistic demand, in that there were several features we built around competency-based education models, where we had several different partners who were wanting to fund the same thing at the same time, so, you know, three or four trying to fund the same basic venture. And so they were trying to do learning models that center around

(21:26):
mentorship and the learner, not so much in a program but as, like, an adaptive path kind of structure, where they have different mentors giving them feedback on different areas. And so we built some features specifically for that use case, because it aligned with where we wanted to head as a platform anyway, but also these partners were willing to fund it as well,

(21:46):
and so that is one example, but we've done, you know, quite a few different features that way.

Sam Gerdt (21:51):
In terms of your user base, do you feel like it's weighted more on the large scale, like projects and courses that are meant for a lot of people more broadly, or are you currently seeing more, like, one-on-one, one-on-two,

(22:12):
like, mentorship-type programs?

Mark Johnson (22:16):
I would say that, I don't... really, I should probably know that statistic, like what the average cohort size is for a course. If I had to guess, just based on what I've seen, it would usually be in, like, the 30 to 40 range maybe. So maybe medium size. But there are cases where we have, like, single students. Like, you know, there are some homeschoolers that use the

(22:38):
platform that just have one or two kids on there, and they're, you know, adding stuff to their path. But then we also have some courses that are outliers, that have 10,000 students in them, so it's kind of all over the map.

Sam Gerdt (22:49):
Do you use the product in your own marketing and attempts to grow the company? Do you guys make your own courses?

Mark Johnson (22:58):
Yeah, yeah, and actually that's our kind of double-down strategy for marketing. As I said before, we've always had a little bit of a harder time with outbound marketing, but one thing we can do really well is make paths. So we make a tool to make paths, and we can make paths for our customers, and we do it all the time.

(23:18):
But we've started to, actually, just a few weeks ago, we started a new 10-week cycle where we're trying to make more outbound paths, you might say, so public paths that will help other people know how to use certain technologies. We're working on one right now for, like, ChatGPT for learning. So how do you use that to actually learn, not just teach,

(23:39):
but actually if you're trying to research something. We've got one coming up on using Butter, which is like a better alternative to Zoom for live sessions. We're gonna be doing a bunch of those types of things, using Zapier to automate workflows, and so that'll be one strategy: trying to make public paths that people can use.

(24:00):
But the benefit of that is we get to help people learn whatever it is that we're trying to help them with, but also they get to use the platform, you know, while they're learning that.

Sam Gerdt (24:12):
One of the things that I've come to appreciate, and I think it's still early to tell where we're going with this, but there's a trend in business to use education as a marketing tool. There's a trend to try to offer value to the people that you're

(24:33):
trying to reach by educating them, and, no strings attached, we're just gonna give you a skill. And that's something that we've actually tried. I work for an agency. We've tried that here. We have had success with that here. You guys have tried that.

(24:54):
Is that a model that you have picked up on? Is that something that you would consider in the future in terms of outbound marketing for your own platform?

(25:05):
Would you consider approaching businesses and saying, hey, let's... almost turning your platform into a marketing platform?

Mark Johnson (25:13):
I mean, I'm not sure if you're familiar with Basecamp or 37signals, but Jason Fried and those guys, I think they had a phrase that we thought about a lot early on, which is, like, out-teach, don't outsell. I really do prefer that approach. I think it's more natural and humanistic to take that approach

(25:33):
than trying to get people to click an ad, for instance. So, like, just genuinely sharing what you know. Hopefully that leads to people learning and using your product in a way that will make them curious and want to build their own stuff on it. But yeah, we definitely are leaning toward that direction. For sure, I wouldn't say we're, like, super proficient at it yet,

(25:56):
but we're definitely... that's where we want to go, is that out-teach, not outsell, approach.

Sam Gerdt (26:03):
I want to switch gears real quick, because I've been following you for a while and you've been fairly vocal on social media about ChatGPT and LLMs, and you've been particularly vocal against some of the doom and gloom, against

(26:23):
some of the more fantastic claims. I want to first ask: how did you become aware of what was happening, and how was your thinking shaped with regards to the current hype?

Mark Johnson (26:40):
I mean, I've been using GPT since it was 3, so not 3.5. When it first came out I was fortunate enough to get off the wait list pretty quickly and start playing with it, and that was kind of like the Wild West era, because no one had access yet. So everybody was trying to figure it out, and you could find stuff on Reddit or whatever.

(27:01):
But I guess from the beginning I always just saw it as a tool, and a very cool one, extremely powerful, like what it could do and the potential it unlocks. The fact that we basically taught silicon to speak, we taught our computers to talk English to us and we can talk back to them. I mean, it's just insane, that level. That is a huge, huge leap.

(27:23):
And even though GPT-3 wasn't nearly as good as GPT-4, it was still very impressive, and so I guess I've always been more on the optimistic utility side of it. But also, it's not really a mystery how it works. It's just statistical models that are trying to predict the next token, and that sounds really trivial, and it's very

(27:47):
profound that it's as powerful as it is. But that's why I think a lot of the doomerism that assumes some sort of level of consciousness in the AI, and agency that just isn't there... I don't completely understand where they're coming from, but also I think it adds a lot of noise and kind of click-baity

(28:08):
distractions to what could be more healthy conversations to be had around AI and safety, because there are real problems with AI. It's just not the ones where it's going to go rogue and become Skynet and kill us all.

Sam Gerdt (28:19):
Yeah, well, it's interesting, you know. On the one hand, I think you have people who do know better and who are taking advantage. There's money to be made. People are paying attention, so they're using the attention. There's certainly some of that. There's also, like you said, that fundamental misunderstanding. I think the problem is that it is language.

(28:44):
Think about the Turing test. Obviously, if you're looking at a machine that's spitting out code or that's talking to you in a computer voice, it's not going to pass the Turing test. But we've become accustomed to these chatbots and receiving emails, and all of a sudden you've got a computer that talks to you in much the

(29:06):
same way, and it sounds like any other customer service agent you've dealt with, or, you know, stuff like that. I think it's too easy to game. I think it's too easy to game language. The idea that it's talking to you and it's faking this humanness, that,

(29:31):
I think, has tripped up a lot of people.

Mark Johnson (29:33):
Yeah, that's a great point. I think we're all very susceptible to language, especially very fluent language, and you can see this in politicians, where there's, like, a shell of a person, but it's very linguistically fluent, and it can go a long way just being able to deliver speeches in the right tone or whatever. And these models are only going to become more fluent.

(29:55):
So it's not something that's, you know, going to go away. But I agree 100%. I think we're very easily fooled by fluency, and the fact that there's nothing but, like, a frontal lobe there with the AI doesn't really occur to people, because it's so fluent that it appears to have some sort of conscious layer. And I mean, people love to anthropomorphize anyway. Like,

(30:18):
you know, kids anthropomorphize everything. If you have kids, you know that, and I think adults are the same way. We tend to anthropomorphize things way more than we think we would, and I think when you have something that's fluent, it's almost impossible not to. So, yeah, I would guess a lot of the more extreme views of what AI is capable of come from that type of bias, getting

(30:42):
tricked by the language, basically.

Sam Gerdt (30:44):
Fluency is going to become an interesting thing, I think, coming up, because the more fluent it gets, the more nuanced the input and output of the math engine behind the scenes. You're gonna have these situations where the hacks that people use to essentially jailbreak these things and dump

(31:06):
memory that they're not supposed to dump are going to be less and less perceptible. They're not gonna look like a bunch of hashes. It's not gonna look like gobbledygook, it's gonna look like natural language. And then I think the flip side of that is it's just going to

(31:27):
give this much greater appearance of intelligence without anything there. Like you said, it's just a frontal lobe, which is probably gonna exacerbate the problem.

Mark Johnson (31:37):
Oh, absolutely, yeah. I like that idea, that a more natural language way of... it's almost like we need a new Turing test. It reminds me of Blade Runner, the Voight-Kampff test, as I recall it, the Voight-Kampff. It's this interview that Harrison Ford gives to the replicants to see if they're human or not.

(31:58):
It's like we kind of need one version of that. In the future, I think we'll need something to be able to detect whether we're talking to a real human agent or not. I mean, you can kind of tell if you interact with AI enough. Now I can spot GPT-4 writing if you're not, you know, clever about the prompting or whatever. But eventually, though, that's just gonna be much more

(32:19):
difficult to spot, and there won't be, like, a reliable way of knowing, you know, if you're talking to a real human being or some sort of AI agent.

Sam Gerdt (32:28):
Well, the one thing, I think, that gets me the most excited about a natural language processor like GPT is, all of a sudden, we have a much better UI than, you know, the keyboard, mouse, monitor setup that we're so accustomed to. It makes me really happy when I see people recognizing the LLM

(32:50):
as being a UI and not necessarily, like, an AI.

Mark Johnson (32:55):
Yeah, yeah, that's a great way to think about it, and I think most of the more profound leaps that happen because of AI will come from thinking about it more that way. So, like, what affordances does this open up that weren't there before? Like the fact that we can speak to our computers and they can speak back to us. What does that enable that was not previously possible?

(33:15):
And we know some of those answers, but we're still in, like, the late-90s era, I feel like. If you compare this to the development of the web, we just got AOL. Like, ChatGPT is basically AOL for AI, but that's a very simple, basic version of what these AIs could be, and what they could be we can't quite imagine yet, because we haven't had enough time. But I agree. I think thinking of them as UIs, like,

(33:39):
what do these enable that was not previously possible? That's the direction that I think people need to go in, and I would guess that there are some major leaps that are going to be made just solely through that category. Not necessarily through better AIs, but just thinking of what things that enables us to do that we couldn't do before.

Sam Gerdt (33:57):
Yeah, have you used GPT or any others to code?

Mark Johnson (34:04):
Yeah, yeah, I use GitHub Copilot, which is based on, I'm not sure which model, GPT-3.5 maybe, and it's fine-tuned for coding, but it does a great job. It's like autocomplete on steroids. It's kind of dumbfounding sometimes when I'm writing code and then all of a sudden it's suggesting, like, what I would

(34:25):
have written anyway, yeah, which is kind of a cool experience. I don't even really think about it much now, I'm so used to it, but yeah, it's pretty amazing.

Sam Gerdt (34:35):
So a long time ago, we went from everything just needing to be coded to, all of a sudden, this prevalence of libraries. You think of all the libraries that are out there that you can just load, and all of a sudden you have these massive code bases at your disposal. And now you have tools like Copilot. In terms of leveling up

(34:58):
efficiencies, what has the current state of generative AI done for coding? What have we seen in terms of percentage, do you think?

Mark Johnson (35:10):
I mean, I think GitHub did a study on that, or a survey. I can't remember what the percentages were, but I think it was like 40% or something of lines of code were written with Copilot in these code bases they analyzed. I don't remember the exact numbers, I could be misquoting it, but it was a lot higher than you might expect. And I think you could also look at statistics with Stack Overflow, which used to be, like, just the de facto

(35:32):
source where programmers would go to look for answers. And it's not that it's not used anymore, it's just that a lot of programmers are now just asking ChatGPT instead of going to Stack Overflow, so they're not getting nearly as much traffic, and I find myself doing that all the time. The benefit you have, from, again, the affordance that a

(35:52):
chat-based interface opens up there, is that I can ask my question, it can give an answer, but I can ask again and again and again. I can refine, I can tell it to ask me follow-up questions if I'm being unclear about something, or I can ask it the definition of some word it used, in real time. So that's not really something that you could do before. That's a new affordance, but it's such a better experience

(36:13):
that sites like Stack Overflow and stuff like that are gonna become less and less relevant.

Sam Gerdt (36:18):
So where's the limit then, maybe, in terms of, just generally, where's the limit of the LLM as an AI? And then also, does an LLM ever truly learn to code?

Mark Johnson (36:31):
Yeah, those are both good questions. As far as where the ceiling is, that's hard to predict. I mean, I don't know if you follow the image AI communities at all, like Midjourney, Stable Diffusion, but when those first came out, like, DALL-E was the first one, that was from OpenAI, and it was coherent enough that it's like, oh, these look like legit images.

(36:52):
And when it first came out, people were, like, blown away: this is amazing. Just a few months later, you know, you got Midjourney, which is significantly better, and now you've got Midjourney 5, or whatever it is they're on, and Stable Diffusion XL, that make that DALL-E stuff look like just trash. You look back on it, like, why were we impressed by that?

(37:12):
It reminds me a lot of, like, early 3D games, like Mario 64. It's like, amazing, but now you look at it and it's super blocky. I mean, still really well-designed games, but the tech itself hasn't really aged well. And so you have these kind of blinders when it comes to very new technology, where you don't know how to evaluate it because there's nothing to compare it with, and I still feel like we're in that space.

(37:34):
So, as far as where the ceiling is, we can't... I don't know. I guess we'll know it when we see it, and it doesn't seem like it so far, anyway. Based on capabilities, the test scores that GPT-4 can get now, with, like, the SAT and even the bar exam, just keep getting better, and, like, GPT-5 will probably ace

(37:55):
it.

Sam Gerdt (37:55):
Who knows. There does seem to be a diminishing return, though, as they train. You know, they talk about training larger and larger models, and there do seem to be diminishing returns in performance.

Mark Johnson (38:04):
Yeah, you can only get so large, really, once you've crawled, like, most of the corpus that's available. You know, I think there are techniques that people are working on that are more interesting, like some of the papers coming out about training and fine-tuning, even on very small data sets, that show very good results. Those have, like, high potential. There's also the multi-modal stuff, which I think will be a

(38:25):
game changer.

Sam Gerdt (38:26):
Yeah, and now we've left the LLM and we're talking more broadly about introducing other models. I mean, even when you talk about images, I think we've left the LLM at that point, because those are more of a diffusion model. What intrigues me about those is you have far fewer parameters, I mean, by a factor of maybe a hundred.

(38:48):
You know, a couple billion versus a couple hundred billion parameters to produce these incredible images. So yeah, I was just curious your thoughts on, you know, where the ceiling is, because I see people talking about, you know, we're gonna turn on another thousand

(39:10):
GPUs and train, you know, this massive model. And my thinking is, my understanding is, what kind of a performance increase is that gonna get you, and is that even gonna unlock any new capability, or is it just gonna get you a higher score on some, you know, reading comprehension test or standardized test?

Mark Johnson (39:30):
Yeah, that's a great question. I mean, the first leap in LLMs, like, they were kind of boring before GPT-3, was created by having a much larger training data set, but specifically trained on, like, the same gradient descent, backpropagation stuff. It was trained in the same style but with a much larger

(39:51):
corpus than had ever been done before. The amount of fluency gained from doing that was substantial. So there definitely was a huge leap there from, like, GPT-1 to GPT-2 to 3, 3 being the biggest by far. There was a substantial leap in capability, and so it could be that, you know, throwing more training data at it does make marginal

(40:12):
improvements, but again, you're gonna run out of data. They've done tests already with, like, the obvious thing to think about: well, why don't we use AI to generate a bunch of quality data and then we'll ingest that, and you kind of feed it back into the machine? They've done some tests on that, and that just leads to gibberish, which is very interesting, you know. I think the same thing is true with image AIs, where you produce a bunch of... you can even see this

(40:33):
with Midjourney. It's very impressive, but they fine-tune their algorithms based on people in their Discord that are ranking up images or downloading them or whatever, and then eventually it kind of normalizes to this thing that is a very Midjourney image, and you recognize it right out of the gate if you're familiar with that style. I think the same would be true, you know, when you're training on an LLM's own data:

(40:56):
you're just gonna get... eventually you're gonna get to very recognizable AI text.

Sam Gerdt (41:01):
Well, and you even said you can recognize GPT-4 when you see it. Anybody who uses these tools with any amount of regularity, you tend to notice. It's almost like it has its own subtle voice.

Mark Johnson (41:15):
It does. Yep, yeah, you develop kind of an intuition for it, where you can go, okay, this is AI, you know, whether it's an image or text. You can kind of see it eventually. You know, I would guess, I mean, I don't know, I would guess that the more people get exposed to AI, the more a general intuition about it will develop, and maybe people

(41:35):
will be able to see it where they couldn't before. I mean, maybe it's similar to how CGI developed in movies. When it first started becoming a thing, it was amazing, and, like, you know, even the Star Wars Episode I stuff was blowing everybody out of the water, and massive budgets. But if you look at it now, it's like, you know, I don't know,

(41:56):
it kind of detracts from the experience, because it's not really that impressive to us and it's very easily recognizable. But even in modern Marvel movies or whatever, that have, like, the huge budgets and everything, top-of-the-line CGI, you can still recognize it. Like, you kind of develop an eye for what's real and what's not. I think we'll probably do the same thing with AI.

Sam Gerdt (42:15):
Yeah, I have this unfounded theory, as I, you know, as I learn more about this, as I work with the tools, as I work with people who talk about the tools. I have this developing theory that the ceiling with this current round of AI, and I can't

(42:36):
see too far into the future, obviously, but the ceiling has very little to do with the hardware or the software and more to do with the people.

Mark Johnson (42:45):
Yeah, I would agree with that.

Sam Gerdt (42:46):
We just, we just are gonna stop wanting it at a certain point. We're gonna say, you know, the AI is good up to this point, and now we want people, now we want to deal with humans. And so you talk about job replacement, you talk about, you know, agents. At the end of the day, people are going to

(43:09):
want to talk with people, they're going to want to do business with people, they're going to... yeah, they're going to want to make that connection.

Mark Johnson (43:16):
Yeah, I mean, generally people are not really interested in technology. I mean, some people are, if they're a tech enthusiast like I am, and you probably are, but the average person maybe thinks about technology a few times a week, and then it's probably a negative thing because it's not working. But yeah, I mean, people are much more interested in human

(43:37):
beings than anything else in the world, and I think you can see this clearly with, like, AI art. No matter how impressive it gets, I doubt there are many people hanging that on their wall. Like, yeah, the connection to art is gonna be the human who painted it. You know, so if you bought something, even from someone you have a parasocial relationship with on Instagram, it still is

(43:58):
more meaningful to you than, like, something you generated with Midjourney. Even though that Midjourney image may be technically more impressive, the fact that it didn't come from another human being is, like, a significant factor.

Sam Gerdt (44:10):
Yeah, we're familiar with that concept of hotel art, the idea that you have art that simply is designed to fill a space, right, versus art that we have some kind of connection with... good analogy, yeah. I think AI does a great job with hotel art.

Mark Johnson (44:24):
Yeah, and hotel writing, I guess you could say, if you want to stretch that. Yeah, yeah, yeah. And there are some artists that will lose out because of that.

Sam Gerdt (44:31):
I think, you know, I think the print market, the idea that you've painted something and now you make endless prints of it and you sell the prints, the print market's gonna see a decline because of this, because the audience for a print is gonna be more akin to the audience who's trying to fill a

(44:52):
space, less akin to an audience who is appreciating the art for the artist or for some deeper meaning.

Mark Johnson (45:00):
Yeah, especially, like, a digital print. I mean, those are difficult enough to sell already. They're definitely not gonna get any easier with this kind of thing.

Sam Gerdt (45:09):
That's a really interesting point, and that's really why I wanted to hear about what your clients are talking to you about. As business people, are we imagining that there's some burden on us to introduce these AI tools into our products, or is that just us

(45:33):
reading the news and fearing? What do our clients actually want? Is there this great demand for AI agents, for AI tools?

Mark Johnson (45:44):
Yeah, there is definitely the hype cycle, which you can take advantage of by even using the word AI in your marketing right now, but that's just temporary. That won't always be something you can take advantage of, and I think long-term, I think it's just very practical: use AI in ways that help you.

(46:05):
So if you don't wanna spend X amount of time reviewing this privacy policy to see if there's anything you need to be concerned about, let AI do that for you. If I don't wanna draft this legal document myself, let AI do it. There are fine-tuned AIs now being made for law that can

(46:26):
draft those kinds of documents. So I think an optimistic view would be that AI could replace what we might call dehumanizing work. So work that is drudgery, work that is monotony, and we have seen this happen already in industry. So, like, industrial manufacturing and robotics taking some people's jobs. Economically that is

(46:49):
difficult, but in the end it is work that humans ideally, in an ideal world, again, being idealistic, shouldn't need to spend 40, 50 hours a week turning a screw on something. And the same is true in the digital space, where we've basically got our equivalents of just doing the same job

(47:09):
over and over and over. It's meaningless, it's not creative, it's detached from the impact that that thing has, because you're some part of the paper chain far down a bureaucracy. I would love it if AI could replace all of that stuff. Of course there's gonna be economic disruption because of that, but I think we'll be in a better place, like, from a humanity-wide perspective, if we can automate just the tedium of

(47:33):
the work and then instead really put more value and emphasis on the things that humans truly enjoy, not the pushing-paper part of our jobs.

Sam Gerdt (47:43):
Right, and if what humans crave is human connection, then removing the drudgery shouldn't be something that we fear, because what it's going to do is increase what we want and, in the process, probably increase our wealth.

Mark Johnson (47:58):
Yeah, I would think so. I think that's the optimistic view. Obviously there's a more pessimistic view: that there's a huge amount of what would have been considered white-collar jobs that are disappearing, and then that has a trickle-down effect on education, which is already having a hard time. A lot of the more popular degrees are some of the ones

(48:19):
that would be most targeted for replacement, like, for instance, entry-level law positions, paralegals, stuff like that. Even accounting, where you have to put in your churn, which, again, is dehumanizing work, but people will spend five years doing auditing to be able to rise through the ranks of whatever accounting

(48:41):
firm they're working at, and it's part of, like, a rite of passage. But where does that rite of passage go when that stuff is all automated completely? So it's going to take some time for those industries, I think, to adjust, and then obviously it's going to be very disruptive to a lot of the job market. And so it's not all positive in the short term, but I think in

(49:05):
the long term, I think it could be quite positive.

Sam Gerdt (49:09):
Absolutely, and for those to whom it will be negative in the short term, there are still ample opportunities being created. It's not that there's this vacuum that happens in the short term, but it does require you to change, and that can be uncomfortable, I think.

Mark Johnson (49:27):
Yeah, definitely.

Sam Gerdt (49:29):
There's not going to be no jobs. There's going to be different jobs, there's going to be changing jobs, but I don't think that there's a scenario where there's no jobs.

Mark Johnson (49:38):
Yeah, I agree. And just like the early internet, when it first was emerging onto the scene and people didn't know what to do with it, venture capitalists were throwing way too much money at it. And there was a school of thought, I remember reading articles even way back in magazines, because we were still reading printed magazines, that the internet was going to replace education, and

(50:00):
it was going to replace... you don't need to go to school if you can look anything up online. And that didn't happen. It definitely changed those things, we can say that for sure, but it doesn't normally replace them, and I think AI will probably be similar, where it definitely will change them, but how it will do that is, again, hard to predict. But I think a lot of those jobs

(50:23):
will just be reimagined, assuming that we now have AI as an augmentation tool with us.

Sam Gerdt (50:31):
Yeah, the idea that we get it right the first time, obviously that's not going to happen. Even with the internet, there was no doubt that the internet was a good thing from the very beginning, but you still had the dot-com bubble, you still had all kinds of discomfort and bad stuff that we had to get through to get to where we are now.

(50:55):
And AI will be no different, for sure.

Mark Johnson (50:57):
Oh, absolutely, yeah. I think we're still in that kind of late-90s phase. We may have not even hit the bust phase of it yet, but yeah. I mean, we are seeing, from an economic standpoint, there are similar dynamics at play, with lots of cash being thrown into these AI companies, like really big moves, like Microsoft buying half of OpenAI for, I can't remember how many

(51:18):
billion, but that's not going to slow down anytime soon. But at some point, I think, probably when people are kind of saturated and AI is not as sexy as it is right now, some of the bottom is going to fall out of that market and we might see a similar bust to the dot-com crash. I don't know, but it could be.

Sam Gerdt (51:40):
Yeah. So as a product developer, then, how do you position yourself going into all of this? Obviously, you don't want to put all your eggs in one basket, but what are you doing to ensure that you are taking the tempered, human approach to innovation, so

(52:02):
that, whenever there is discomfort, disruption, bust, whatever it looks like, you're there on the other side?

Mark Johnson (52:11):
Yeah, I think there are probably two answers. One is more on the product side. In a sense, our strategy is just kind of: use AI where it helps, and then we'll double down on the things that we know work already, just based on having thousands of customers. Making those things better is not necessarily going to be influenced by AI or not, whereas there may be parts of it, though,

(52:34):
that could be automated. For instance, scheduling, or scheduling due dates for things, is a pain. Scheduling in general is a pain. If we can use AI to help people describe their class schedule, like "we meet every week on Monday at 6:30," and then we can automatically set the dates, that's a painkiller type of feature, we think, but

(52:56):
it doesn't really matter that that's AI or not. That's just a capability we have now. It could be just some other algorithm that's not an LLM; it doesn't really matter. It's just a way of making the product more seamless. And then, on the other side, we have been trying to intentionally speak about some of the change that's happening in public, and so we had a Path Camp recently, I think maybe a

(53:18):
month ago, where we had several topics, one of which was AI, where we invited people in, you know, experts and stuff, to talk about different things, and we invite all our customers to come to that and we just have kind of an open-floor discussion, and those are always really educational for us and for the customers, we hope. But that's one way we do it too, is just by talking

(53:39):
about it with our partners, you know, with our customers, staying attached to, like, the real human need and not just, you know, the online noise of what's happening.

Sam Gerdt (53:50):
So for the product developer, the approach should be: stay connected with the people that you're serving, and don't develop out of a sense of urgency; develop out of a sense of thoughtfulness, rationality.

Mark Johnson (54:08):
Yeah, I mean, nothing good ever happens out of fear or anxiety, right? And so... yeah, I'll be honest, when I first saw some of the stuff that AI was capable of, I did think, are we done? Like, you know, could education shift enough under our feet, just because of these developments, that people wouldn't even need a product like Pathwright? But when you come back down to it and you start thinking about,

(54:33):
again, the real human needs that people have, and which parts AI fills and which parts it doesn't, and just human connection in general, which is an extremely important part of education, then a lot of those fears go away. So I think you just kind of have to redouble your focus on who your customers are, what they are needing,

(54:53):
talk to them, and obviously, you know, think about this AI stuff. But if you put the AI first, I feel like you get the cart before the horse and, like, I don't know, you can lose sight of the forest for the trees.

Sam Gerdt (55:05):
Yeah, and maybe it's a product-before-people problem. I think there's something analogous, maybe, when you look at Kodak. So, film. There were companies, Kodak, I think, was one of them, who saw themselves as, well, we're a film company, right, and it was about

(55:29):
the tech. And then you saw the uprising of digital and all of this other stuff, and at no point did someone take a step back and say, no, we're not a film company, we are a company that captures people's memories. We're a company that preserves, you know, something

(55:49):
special. And as soon as you abstract it, you know, even just that one step back, you realize, okay, well then it's not about the tech. We can still be us, without losing our identity, and advance into this, into this new...

Mark Johnson (56:07):
Era. Absolutely, yeah. And I think that gets especially difficult if you've been in business for a long time and you kind of feel like, this is the way it works, we know this works, this is what our customers are paying us for. Trying to find that original reason why you're doing this in the first place can get even harder, kind of strangely,

(56:27):
than it was at the beginning, to be able to see that very clearly.

Sam Gerdt (56:32):
Yeah, that's such a necessary exercise, though, for any business: to have that intrinsic connection with the people that you're serving and the need that you're meeting, and be able to separate out either your actual product or your actual services from it and say, these things can all change. What can't change is this identity: who we serve, what

(56:56):
need we meet.

Mark Johnson (56:57):
Yeah, that kind of core value that you've got at the very bottom. But you're right that that can be easy to forget about, or to have never identified in the first place. So it's a pretty important space to explore.

Sam Gerdt (57:11):
Yeah, this has been an incredible talk.
I really appreciate your time.

Mark Johnson (57:16):
Yeah.

Sam Gerdt (57:17):
I will... I think we'll end it there. That's such a good note to end on, just remembering that your product is not your identity. So is there anything else that you wanted to add? Is there anything going on with Pathwright upcoming that you want to talk about?

Mark Johnson (57:35):
Oh sure, I mean, we've got a new product in the works, actually. So it won't be launching this year, but sometime early next year we're planning on launching a new product that, you know, will have some of this AI we've talked about, but I think it'll do what we said there at the end: try to double down on the core value of paths. So if you would like to follow us on that, you could, you know, look at

(57:56):
our website, pathwright.com, and that's "wright" like shipwright or playwright. Yes, and we'll definitely post announcements there. We also do host labs every now and then, you know, sometimes on the topic of AI, and we also launch courses. So if you're interested in learning about that kind of stuff, we'll be posting those as well.

Sam Gerdt (58:17):
Excellent. Thank you, Mark. I really appreciate the time.

Mark Johnson (58:20):
Thank you.

Sam Gerdt (58:21):
Thanks for talking with me.

Mark Johnson (58:22):
Yeah, it was fun, thanks.