Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Alex Kotran (aiEDU) (00:05):
Peter Gault, nice to see you.
Peter Gault (00:07):
Great to see you as well.
Alex Kotran (aiEDU) (00:08):
When did we first meet? You were a Fast Forward alum, right?
Peter Gault (00:14):
Yep, I was in the 2015 cohort. And were you 2018? 2017?
2020.
2020, oh wow, it all blends together.
But yeah, I remember seeing your pitch when you were first getting started, and it's amazing to see the progress you've made over five years, what high quality and success looks like.
Alex Kotran (aiEDU) (00:48):
And it's funny, because I don't know if you feel this way, but if I look back on, you know, where we were back in 2020 and where I am today, I would have felt like, wow. I would have been, you know, enthralled to know that we were able to grow to the extent that we did.
And yet now I find myself, you know, quite anxious about continuing to grow, and I'm sure you're in sort of the same boat. You never really get to sort of, like, rest on your laurels.
(01:08):
There's always more work to do, more funds to be raised, teams to be hired. If anything...
Peter Gault (01:15):
It just gets more complex. It feels like a board game you're playing, but every round there are more possible moves, more options, more things to consider, and it just keeps getting more and more complex.
It's exciting, though, that we both started with nothing and built these organizations that are now having a national impact, reaching thousands and tens of
(01:36):
thousands and hundreds of thousands of kids. It's really incredible.
Alex Kotran (aiEDU) (01:40):
Yeah, well, millions in your case. Peter, why don't you tell our audience a little bit about Quill.org and what you do?
Peter Gault (01:48):
For sure. We are a nonprofit that helps students improve their writing skills. We really see writing as a superpower: as a young person, when you can write out your own ideas, when you can explain your thinking, build an argument, use evidence to support it, that's an incredibly valuable and important skill for kids. But it's so hard to become a good writer. You need so much practice and feedback, and with AI,
(02:11):
there's this incredible opportunity to give kids immediate feedback and coaching on their writing and give them those practice activities that help them build those skills. For us, what's really critical is doing that in the context of the really important courses that kids are engaging with today, so things like what's happening in English classrooms and what's happening in history classrooms, but also
(02:31):
things like AI, and how can we teach kids AI through writing? That's just this incredibly important topic that is new, that everyone's grappling with and trying to figure out. What do we all need to know 10 years from now is just this really difficult question, but one where it's absolutely critical that we get this right for young people, or at least open the door to what's happening in the world.
Alex Kotran (aiEDU) (02:54):
And we can pull up maybe some screenshots or just some visuals in the video. But you know, what you built is this student-facing product. Can you just describe sort of what happens when somebody logs into Quill? What does the product look like?
Peter Gault (03:12):
Yep. So when you go to Quill, first of all, everything is free for all students and teachers; that's absolutely critical to our mission. You go to the Quill homepage, there's a quick sign-in button, and teachers quickly import their students. They'll use a service like Google or Clever to import their students, and then they'll have access to more than a thousand activities that they can assign.
(03:33):
We have this range of activities supporting students in grades 4 through 12, primarily in the context of ELA, but we are now creating new offerings for social studies and STEM classes. In these activities, kids spend about 10 to 15 minutes writing on different prompts and then receiving feedback as they're writing. And these are all very quick prompts.
(03:53):
They're one-sentence prompts where the kids are able to quickly build an idea and then get feedback to revise and strengthen it. And the core of Quill is in that feedback loop: students will write, they'll be struggling with a concept, and they'll continually and patiently be given feedback, and
(04:16):
over, like, five rounds of feedback, the students are able to really build these critical skills.
Alex Kotran (aiEDU) (04:19):
So Quill is, like, bite-size; you're not going in and replacing an English curriculum. You're basically providing teachers with, you know, these opportunities to integrate technology in a way that gets students, in real time, to think critically, write and respond to different prompts, build critical thinking skills, and also just build their writing
(04:40):
skills and grammar. And they get sort of real-time coaching and feedback. And this was even before generative AI; you already had sort of a system for feedback.
(04:50):
Have you been using AI since then? I mean, language models seem, you know, pretty well suited to a product like Quill.
Peter Gault (04:58):
We've been building our own AI since 2018. Back when it was NLP and you would build these models, it was very basic. Back then, you could do things like grammar analysis. Think about Grammarly, for example, which has been around for many years; they've had their own AI. We had a similar process where we would use AI to analyze the student's writing, looking for certain patterns in the syntax
(05:20):
and then using that to trigger feedback and coaching. And so, with Quill, we never fix the writing for you. We always help you to build the skills so that you can learn it yourself. Under the hood, our AI looked similar to Grammarly and some of those other platforms, but the student experience was the exact opposite: where Grammarly would just fix your writing for you, we wanted to help you build these really critical skills,
(05:41):
and so that was some of our initial AI work. We built our own algorithms, we trained this meta model that had tens of thousands of different training responses in it, and we built these data sets by hand. That was really time-intensive work, and what was really critical was getting these AI models to be highly accurate. And so, around 2022, we had
(06:04):
completed all of this work. We had these amazing models we'd spent six years building. They were working really well, and then the bombshell of large language models comes, and now we're in this whole new era. LLMs are amazing in many ways, but it's a completely different technology from the approach of building your own fine-tuned model, and so over the last year, that's meant for us
(06:25):
essentially scrapping everything that we'd built over the last six years and rebuilding natively on generative AI, which allows us to do a lot of sophisticated things that we couldn't do in the past. But it comes with its own challenges and risks as well.
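To make the contrast concrete, here is a minimal sketch of that earlier rule-based style of analysis, in Python with spaCy. The specific rule (flagging a sentence with no main verb as a fragment) and the feedback wording are illustrative assumptions, not Quill's actual models:

    import spacy

    # Assumes the small English model is installed:
    #   python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def fragment_feedback(sentence: str):
        """Return coaching feedback if the sentence lacks a main verb, else None."""
        doc = nlp(sentence)
        # A complete clause is headed by a verb (or auxiliary) at the parse root.
        has_main_verb = any(
            tok.dep_ == "ROOT" and tok.pos_ in ("VERB", "AUX") for tok in doc
        )
        if not has_main_verb:
            # Coach rather than correct: name the pattern, don't rewrite the sentence.
            return ("This looks like a sentence fragment. "
                    "Try adding a verb so it expresses a complete thought.")
        return None

    print(fragment_feedback("Because the evidence in the second paragraph."))  # -> hint
    print(fragment_feedback("The evidence supports the author's claim."))      # -> None

The point of the pattern-matching approach is visible here: the system never produces a corrected sentence; it only detects a syntactic pattern and triggers the matching piece of coaching.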
Alex Kotran (aiEDU) (06:40):
Yeah, I
mean, what are some of those
challenges?
Peter Gault (06:42):
Yeah, so one of the big problems that all of the players who are using generative AI to help students are grappling with now is that these underlying models are trained to be very helpful, and so what that often looks like is that when a student is, say, struggling to produce the answer, the LLM says, oh, I see that you're stuck. Have you thought about blank? And then it will tell the student what to say, because the
(07:03):
LLM knows the answer, it wants to help the student, and telling the student the answer is the fastest way to help them. So what we have to do on our end is build guardrails so that the LLM isn't giving away the answer and the thinking stays on the student, and for us, what that looks like is building a multi-step prompt. You have a sort of chain-of-thought process where it's first producing an answer and then checking: is this the right
(07:26):
feedback for students? Is the student doing the thinking, or is this an example of feedback that is simply revealing the answer to the student? We didn't have this problem with our old models. With our old models, we controlled the entire model ourselves, we controlled the output, and so we didn't face that risk. But with the LLM it's unpredictable. You can build guardrails around it,
(07:46):
but if you don't have those guardrails in place, it can sometimes go off the book.
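As a rough illustration, that multi-step guardrail could be sketched like this in Python against an OpenAI-style chat API. The model name, prompt wording, and fallback hint are invented for the example; this is not Quill's production chain:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    MODEL = "gpt-4o"  # illustrative choice

    def draft_feedback(question: str, student_response: str) -> str:
        # Step 1: draft coaching feedback on the student's short response.
        result = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": (
                    "You are a writing coach. Give one short hint that helps "
                    "the student revise. Never state the answer outright."
                )},
                {"role": "user", "content": f"Question: {question}\nStudent wrote: {student_response}"},
            ],
        )
        return result.choices[0].message.content

    def reveals_answer(question: str, feedback: str) -> bool:
        # Step 2: a separate checker pass that flags feedback giving the answer away.
        verdict = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": (
                    "Answer YES or NO: does this feedback reveal the answer "
                    "instead of leaving the thinking to the student?"
                )},
                {"role": "user", "content": f"Question: {question}\nFeedback: {feedback}"},
            ],
        )
        return verdict.choices[0].message.content.strip().upper().startswith("YES")

    def coach(question: str, student_response: str) -> str:
        feedback = draft_feedback(question, student_response)
        if reveals_answer(question, feedback):
            # Fall back to a generic nudge rather than serving the leaked answer.
            return "You're close. Reread the passage and try restating the idea in your own words."
        return feedback

The design choice Peter describes is the second call: the draft is never trusted on its own; a separate check decides whether it reaches the student.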
Alex Kotran (aiEDU) (07:51):
And yet you switched wholesale to language models, and that's because you saw some serious potential. I mean, what prompted that pivot, given that you had something that was already working quite well, being used by millions of students, and these new models have, you know, obviously lots of limitations and they hallucinate? Why did you make the decision to go all in?
Peter Gault (08:14):
There were a couple of really big things for us that drove this decision. The first is that these models are a lot smarter, and so, while they can be unpredictable, you get the benefit of a much more fine-grained analysis of the writing. When we created our own models, it was sort of like putting the writing into categories, whereas the LLM can understand things in a very nuanced way, depending
(08:36):
on what model you're using and how smart it is. And that unlocks for us the ability to go deeper. A lot of our work is focused on writing, but it's in service of critical thinking: students are building an argument, using evidence to support it, and in that process we want to be able to go deeper in thinking about, like, what are other questions the student could be asking? How could they use more evidence to support this claim?
(08:58):
What's a really good thesis statement that summarizes this idea? These are all pedagogically research-validated strategies that we couldn't previously build into Quill, because they were too complicated and the AI wasn't sophisticated enough for them. And so, with generative AI, what's so exciting is that some of these really advanced strategies, things that
(09:19):
researchers have known for 50 years work really well, we can now build in a way that we couldn't two years ago. For us, that's really why we made the switch: to unlock these deeper forms of learning that we previously couldn't address.
Alex Kotran (aiEDU) (09:34):
Yeah, I keep coming back to Quill.org as an exemplar for how AI can actually help, because there's a lot of, you know, hyperbole around, like, AI is going to transform education. And I think sometimes, you know, my advice to folks is that AI is a really powerful tool, but it's not the solution.
(09:55):
It's something that we can use on that journey, and I think Quill is a good example. You know, I don't think of Quill as an AI tool. I think of Quill as a reading and a writing tool. Now, you've integrated AI in a way that really enhances the sophistication and capabilities, but what you didn't do is scrap Quill.org and start a whole new company. And I think a lot of the
(10:19):
stuff that's in the field right now, it's like these wrappers, and I think some people will say things like, well, you know, teachers will be able to just build these things for themselves. And I'm just reflecting on the amount of work that went into fine-tuning and even sort of the prompt, you know, I don't know if I'd call it prompt engineering, but you basically think about, like, prompt design.
(10:40):
I mean, is this something that any teacher can just do for themselves, or do you feel like there is actually maybe a bit of a barrier to entry to really effectively using LLMs?
Peter Gault (10:52):
It really depends on the task. I think there are certain things that a teacher can do themselves that can be really effective. There are things like, draft me emails to parents based on my students' progress, and those emails could have taken you a couple hours to write, and that could now be a 10-minute process. So I think there are some examples of ways in which AI can
(11:19):
help to speed up communication workflows, and a whole bunch of other things it can do well. We're seeing some areas, though, where you're stepping backwards a bit when it comes to education. So in the world of education, the headline is high-quality instructional materials, and there's been this huge effort over the last 10 years to go from these not-so-great programs
(11:41):
that were being provided by some of the big publishers to much better, smarter, more sophisticated curricula that could be used across an entire school, from grades K through 12. EdReports led the charge here with how it rated curriculum programs, and the goal was to go from incoherence, which is how education often felt, to a coherent program.
(12:03):
What was happening in parallel were some tech plays that didn't pan out. So you had sites like BetterLesson that tried to aggregate teacher lessons, and they would host 100,000 different lesson plans. For a teacher, trying to create coherence out of 100,000 different lesson plans? Impossible. We then saw Teachers Pay Teachers, which took that idea
(12:24):
and added a payment mechanism on top of it, but still didn't have that coherence of K through 12 and a program that worked for the entire school. You still had individual teachers making individual decisions, and while sometimes that can work well, oftentimes it led to students not having a consistent learning experience. And so, as we think about AI today, the next version of
(12:45):
Teachers Pay Teachers is MagicSchool, and so you have teachers generating lesson plans and, rather than, say, having a coherent learning experience, sort of recreating some of that incoherence. And so I think that's the big question: how can AI fit into this bigger picture of coherent learning? And that happens by starting with the curriculum. It starts with what the teacher is doing in the classroom,
(13:07):
and then how can you layer the AI in, in a very surgical and mindful way, so that it's supporting the classroom instruction, as opposed to trying to move away from that altogether and propose something that lives completely outside of these existing programs. I think that's really where the smart players are today: they're trying to work within the constraints of what really good programs look
(13:27):
like, and Quill has certainly been successful by taking that approach.
Alex Kotran (aiEDU) (13:33):
Yeah, I sent you something over text. It was a study that some of my team shared, and this is from Microsoft Research. Let me pull it up just so I can get the date right. This was published, oh, this year, so very recent. I think it was literally, like, a few weeks ago, and the headline is: as they looked at the use of generative AI in
(13:56):
knowledge work, they found this correlation where knowledge workers that are increasingly reliant on generative AI actually have a deterioration of their critical thinking skills. And this is not necessarily terribly surprising. This is something that people were, I think, postulating for
(14:16):
some time, that this would become a crutch. It's interesting that it's starting to play out and actually be validated by research. And it brings me to this question: there are some folks who see what ChatGPT and other language models are capable of, and their reaction is, well, we
(14:39):
just need to teach kids how to use these tools, because these are the tools of the future, and we can't put Pandora back in the box. So I'm curious what your response to that is. You know, is it the case that we just need to pivot and adjust the goal of education, from teaching students how to write critically to teaching them how to, let's
(15:00):
say, become really effective prompt engineers? As someone who has employed them, you know, I don't know if you'd call the folks on your team prompt engineers, but they've been doing quite a lot of prompt engineering.
Peter Gault (15:10):
Absolutely. Yeah, we've created thousands of test prompts, and that's very much what they do. To me, there's more overlap here: prompt engineering is writing. You are writing prompts, and I do think the ability to write well enables you to do things like query an LLM, being able to specify what you're looking for and why you need that
(15:32):
information. In so much of my own experience using LLMs, it's not that first query that gets you the answer, it's that process of digging in and interrogating the answers, and I do think that that is very similar to the process that English teachers are trying to create for their students. They're reading a novel, they're trying to unpack the novel, trying to interrogate it, and so that critical thinking skill of trying to unpack information remains critical for students, and writing is a way to build that ability to interrogate information through building your own ideas. There is certainly a counterargument that as these models get smarter and smarter, there will be less need to even prompt engineer, and you've made this argument to me a few times, and I actually am starting to come around a little bit more to this
(16:18):
than I was initially. So I've been using the deep research function that's in ChatGPT and a few of these other tools now...
(16:47):
...and that our ability to navigate this world and all of its complexity... If you don't have these skills, I feel like you're going to be left behind. That being said, in terms of AI as a skill, what that skill is, I think, is very up in the air. What will AI look like a few years from now? It's really hard to predict that. The one thing I do know, though, and I do think this is a useful
(17:10):
comparison, is, I think about the era in which we all knew how to read maps. You look at folks like taxi drivers, folks who had to drive for a living, and they would build this knowledge of the world through driving, where, for a lot of us, you know, you would get that by reading maps and trying to have that spatial recognition. Today it's a little bit of a lost art.
(17:32):
I love maps, so I sometimes try to force myself not to use Google Maps, just as, like, a fun exercise in thinking. But if you're not forced to figure that out day by day, it does change how you think, and so it's certainly a big ethical question of what does thinking look like in a world where AI
(17:52):
can think itself in a really meaningful and deep way.
Alex Kotran (aiEDU) (17:56):
Yeah, the maps example is fascinating, because I'm someone who really struggles with sort of, like, geospatial reasoning, and I find myself, you know, even with maps and even with Google Maps, sometimes getting lost, including in New York City, which is hilarious, because New York City is one of the easiest cities to navigate. And I actually find I've become much worse and increasingly
(18:17):
dependent on Google Maps, where, you know, New York is a great example: you shouldn't need Google Maps. You should be able to just look at the grid, like, roughly, okay, I'm on, like, 41st Street, and I shouldn't have to open Google Maps and do the thing where you're, like, turning around and trying to get a sense of which direction you're pointing.
(18:38):
And I don't know that, like, the deterioration of my ability to navigate the world, it's definitely bad, but in the hierarchy of skills that maybe I've lost, it doesn't necessarily impact me day to day, like, you know. And generally I have access
(19:01):
to my phone in many cases. Critical thinking, though: if we sort of abstract prompt engineering, what you've described, sort of the process of writing, is really sort of an underlying skill that leads to, and maybe is, like, one of the necessary, if not sufficient, but certainly necessary conditions for critical thinking skills. It feels much more serious if we're in a world where there's
(19:24):
this reliance on AI to supplement critical thinking, especially in a world where, you know, one of the things that you described, even deep research, like, I actually have a deep research query running right now on that study. I asked, like, oh, can you find some more studies that maybe corroborate this or not? You know, to be able to effectively use it, I have to actually, you know, critically evaluate the output.
(19:46):
In many cases it's made stuff up or it's not quite right. And so that's not really a question, it's more just, like, an open concern that, I think, sort of underpins this big question about AI and education, which is, you know, how do we balance the utility of these tools and the fact that if you don't know how to use the tools, you're going to be left
(20:09):
behind? That's probably true, but if you use the tools, and if maybe you spend too much time using the tools, you actually have less. You're sort of less effective as a complement to the tools.
Peter Gault (20:26):
Yeah, I'm not quite sure how it will play out. You know, I like to think of it like an orchestra: as the conductor of the orchestra, you've got a symphony of different queries that are doing different requests, and the human being is driving that. I think the orchestra conductor remains, and that skill of how you manipulate LLMs and find
(20:49):
information, that will remain a critical skill for sure. But I think there'll be a difficult question of how do we build those skills, and what does writing even look like when writing is a co-creation process? And I do think that those are big questions, where today, when I write, I'm building a thesis, I'm taking a point of view on the world, and there's a world
(21:11):
where you could ask the LLM, like, what should I think about this? And you could imagine the LLM giving you your opinions on things, as opposed to having your own point of view. I'm somebody who has a lot of opinions. I love doing debate. Throughout high school and college, I found that to be the experience that built my own critical thinking skills most effectively, and so I do think that for young people, it's really important that they have their own opinions and they have
(21:33):
this ability to think critically. But I do think that to do that, they need to spend a lot of time doing this and they need to be given those opportunities, and if they don't, there's certainly a world you can imagine where that gets outsourced to the LLMs in lieu of their own ideas.
Alex Kotran (aiEDU) (21:50):
I think, for me, the reason I feel so confident in our advice to educators being, you know, you shouldn't be focusing on teaching students to use AI tools, you should be focusing on critical thinking skills, it comes from this: like, you know, even when we've done hiring, we've actually struggled to hire people who are sort of
(22:11):
super users of AI. I just think, like, our generation is still like, there's some anxiety about, like, is it even fair to use it? Like, is it cheating to use AI to write something? And so there's some hesitation. But if I think about, like, you know, we're hiring someone to help, as we discussed, with, like, fundraising, and
(22:31):
one of the things you really pushed on is, like, you know, having exceptional writing skills is critical to that. I still feel like, in my hierarchy of skills that we'd be looking for for that role, I would way rather have someone that's a really good writer. I feel like I could teach somebody to use a language model if they have the writing skills. I don't know if it's the reverse: like, if someone's really good
(22:52):
at chugging out a prompt, I feel like the minute that I need something that goes beyond what the LLM is capable of, they're going to sort of hit that roadblock, and that feels much harder to teach. Like, I don't know that you could teach someone to be a good writer on the job. It's sort of something that you basically have either developed during your educational experience or not. And, um, some people maybe have an aptitude for it and some people
(23:13):
don't. Um, I mean, how has your organization thought about sort of, like, talent, you know, given that you obviously have a lot of people that are using AI almost on a daily basis? Like, do you have a sense of, like, what skills really, you know, set them up to be really effective prompt engineers or users of AI tools?
Peter Gault (23:31):
So I think, both with prompt engineering and writing, we have a pretty specific definition of it, or at least I do, which is that writing is knowledge. And I say that to say, and this is definitely a hot take in the world of education, there's been a big push called the Knowledge
(23:52):
Matters campaign, which is the idea that your ability to write is predicated on your knowledge of the world. And so when you ask kids, for example, to write about baseball, and they're big baseball superfans, you'll get these long essays of baseball strategy and hits and great teams, and their writing will look really great, because it's a topic that they have knowledge about. But then you ask them to write about a book that they're not interested in, and the writing looks quite different.
(24:13):
And in this world, writing isn't just a skill of, you can construct a sentence. Writing is your knowledge of the world. And so I say that to say that when we think about, like, a fundraising writer role, for example, we find that the world of philanthropy is very fractured. There are many different causes and issues and strategies and
(24:36):
theses of how the world can be improved, and as we're working with different partners, our work needs to align to their work, and so that really requires knowledge of how do our partners think? What do they think the future will look like? How does Quill align to that vision? And, as we're writing, it's not just an artfully constructed sentence, it's about how our mission aligns to their mission.
(24:59):
And so, as we think about AI and as we think about these critical thinking skills, students need to know a lot of stuff about the world. They need to know how it works, they need to know what these different ideas are, to be able to be a member of those conversations and to be able to contribute their own ideas. And so I do think, as we think about how AI will develop, there's sort of this underlying question where the more that you
(25:22):
know, right, the more you can contribute, and for us that's really critical, and when you have that knowledge, the writing will flow from there. And I say that all to say that those skills look quite different, that the world of knowledge is often thought of as something that, like, a history or social studies teacher does, but it doesn't exist across all classes, and that's also a shift
(25:43):
that's starting to happen right now. There's a big movement towards knowledge as the main way in which students are assessed, as opposed to things like reading comprehension as a skill, where being able to read any article on any topic is less valuable than knowing about particular topics, and that particular knowledge is more valuable than that general ability to read.
(26:04):
And there are lots of proponents for both sides of this, but I think we're starting to see how knowledge itself is durable in a world where some of these skills become less important.
Alex Kotran (aiEDU) (26:17):
Yeah. I don't know if you've been following all the noise being made about vibe coding and the Y Combinator survey that came out. Yeah, it's wild. And for our listeners and viewers, this is a survey that Y Combinator did with their latest cohort, and this is, like, the leading technical, I mean,
(26:38):
these are some of the most technical founders in the world. Like, these folks know how to code, and a quarter of the cohort reported that 95% of their code is written by AI. And so, like, right out of the gate, the headline was, like, vibe coding is here. If you're not using AI tools, you're at risk of being left behind. But then, if you listen to the full
(27:00):
podcast and you hear sort of the nuance, they also will say that, well, it's also the case that, you know, vibe coding is great for getting you to an MVP. It's, you know, a great tool. But what they also have seen is that the most effective founders and companies had folks who had, I guess they're calling it
(27:21):
classical coding, which I find funny, but had classical coding skills, to be able to do things like debug, which the AI isn't very good at, at least right now. And you actually have engineers on your team. I mean, are they vibe coding? Have you had conversations internally about this?
Peter Gault (27:39):
I was just talking to our CTO, Akhil, about this, and he was sharing that at Quill, it's around about 10 to 20 hours of extra productivity per week. It's not that we're doing 95% of our coding through AI, but it's certainly helping us with particular problems. It's great at writing SQL queries. It's really great for certain projects. That number will just keep increasing, though, I think, as
(28:01):
we build our own knowledge of how to use AI and as these tools get more reliable. I think we'll certainly see that, you know, in a couple years. We've got six engineers at Quill, which is a small team, you know, relative to some of the big ed tech players; it's not one engineer, like some small nonprofits where you only have one person. But what it means is, if they can have the output of, say, 12 or 18 engineers, that's a huge win for
(28:24):
us, and so I do expect that number to increase, especially as these tools get better at debugging. That still is a big obstacle right now, where they write code, sometimes it's good code, sometimes it's not-so-great code, but the ability for the AI to refactor itself, for it to improve its own code-writing skills, that will certainly happen. It's already happening, but that will just keep getting better and better.
(28:45):
And so I think there is a big question where, one, there's a lot of advice, like, what jobs should people be pursuing now? And there's this huge question mark around, like, is software engineering the path to, like, economic prosperity? And I don't know the answer to that question, but it seemed to be that was, like, hey, if you're trying to find sort of a path
(29:05):
towards having a comfortable and economically successful life, that was the path. And I don't know if that will be the case in a couple of years, and I don't know what happens in that world, but we're certainly starting to see some of that. I'm curious, if you are hearing from others, how that conversation is shifting now about things like, what does CS
(29:26):
education look like? Because, again, the ability to code does allow you to manipulate these systems, and so that becomes incredibly valuable. But if the learning curve of building those skills is too high relative to that ability to just use an LLM, yeah, I don't know what happens.
Alex Kotran (aiEDU) (29:46):
I'm going
to read a quote.
This is from Tom Blumfeld.
He's a partner at YC.
He's my husband's former boss,founder of a company called
Monzo, which is multiple billiondollar valuation, now wildly
successful, and he's beenexperimenting with vibe coding.
And here's his take.
He says software engineers arelike highly paid farmers tending
(30:08):
their crops by hand.
We just invented the combineharvester.
The world is going to have alot more food and a lot fewer
farmers in very short order.
So my response to this is Ithink we have to take very
seriously that the folks thatare really at the bleeding edge
of these technologies, who arereally getting hands on that's a
(30:29):
pretty consistent take.
Um, I don't know that.
I've talked to many engineersthat are like totally complacent
and saying that, like AI isjust, you know, it's just a fad.
I think there's actually folkswho thought it was a fad.
Um, that are coming around, butmost of the folks in CS who are
, like you know, actuallybuilding, um, building, are like
coming to terms with the factthat this is, this is going to
make software engineers moreefficient and, you know, you
(30:51):
sort of just by extension right,like if you can get more
productivity out of one engineer.
You don't necessarily need 20.
Maybe you only need, you know,18 or 16 or whatever the number
is, or 16 or whatever the numberis, um, and I, yeah, I, I worry
about I think, I think, becauseeducation has traditionally
been very oriented towards theselike super discrete career
(31:12):
pathways.
You know, my parents areimmigrants and for them it's
like doctor, lawyer, maybeengineer, and that was basically
it, everything that or bust.
And like I went into politicalscience and they were like
aghast.
They were like, well, you know,you can go to law school still.
And they actually still askedme if, like have you thought?
about going to law school and,yeah, I think you described it
really well.
It's like the reason that wehyper focus on those, you know,
(31:33):
those pathways, is, you know,for someone who grew up, you
know, in a lower middle classhousehold, you know one of the
most certain ways to achieveeconomic mobility was to go into
one of these careers whereyou're, you know, essentially
going to be guaranteed, you know, like you know, six figure
(31:55):
income.
So yeah, I mean, if we have atsome point there's probably
going to be this shift where youknow, you know, if companies
start laying off certainpercentage of their engineers
and then now they're floodingthe the job market and then you
have all these sort of graduatesthat are coming into the job
market and and then you're inhigh school and you're trying to
(32:16):
decide okay, do I go into?
And I don't think it's justcomputer, I think computer
science is the canary in thecoal mine, just because it's
such.
I mean, you know, software is alanguage and so language models
are especially adept.
It'sept it's like there's a lotless friction to applying it
into that specific type ofknowledge work.
But I don't think you knowaccountants or financial
(32:37):
analysts or you know lawyers, Ithink a lot of knowledge work is
really, you know, in thecrosshairs.
It's just a question of, like,how long it takes for
institutions to kind of figureout the implementation and get
past the friction.
Like, how long it takes forinstitutions to kind of figure
out the implementation and getpast the friction.
And it brings me to just sortof like, how are you going to
add value?
Because being able to writelines of code is probably not
(32:59):
sufficient anymore.
You know, my sense is that ifyou said you have six engineers,
let's imagine you had 100.
You know, and if you werethinking about, ok, well, I'm
going to get rid of, let's say,I can get rid of 10%, you know,
the question would then followand maybe you can answer this
for me Um, who are the engineersthat you keep?
Who are the engineers that youget rid of?
Like, do you have a sense of?
(33:24):
Like, what the complimentaryskills beyond just?
Peter Gault (33:28):
Yeah. Well, my own intuition is that folks who have jobs now, I don't know what the big corporations will do, but I think in general those folks will be able to find new jobs. But I do worry a lot about those new graduates. I think, when we think about hiring, we've hired folks who are brand-new software engineers, and we always know that there is a bit of an investment upfront, that sort of coming out
(33:48):
of a boot camp, you're not quite ready to be a full contributor to an organization, but over six to 12 months you do become a contributor. As LLMs get stronger, that entry point, I think, is going to be what's most at risk, and so that's certainly something I'm really worried about. It's so funny, my parents also pushed me to become a lawyer as
(34:09):
well, and that feels the most at risk of any of these professions. When you look at the deep research tool, which I love, and I keep running out of my 10 credits per month, so I need to upgrade now to the $200 plan because it's so useful, it's a complete game changer in its ability to take a week's worth of research and do that in 10 to 15 minutes.
(34:31):
And so we see this future where the LLMs can do these sophisticated research tasks in a way that would just take a lot of time to do ourselves. I've spent so much time plumbing the depths of Google searches on page 50 or whatever to try to find some information, and now that that can be done automatically very quickly, that
(34:54):
really changes the needs of the workplace. And so I don't quite know what the answer is here. But I do think that those folks who have jobs will probably be in a better position. But for those folks who are trying to enter the workforce, yeah, there's a really big question mark of what do those entry points look like now?
Alex Kotran (aiEDU) (35:14):
Yeah, and, I don't know if this is an announcement or a leak, but there was, you know, the news that OpenAI is going to be offering, like, a $20,000 AI agent. You know, you have to take all of this through a lens, right, which is, these companies are also trying to command big valuations. They're beholden to their investors and trying to justify,
(35:36):
because they need to also raise tons of capital to train the next-gen models. So I'm less... There are folks who are saying we're one to two years away from AI being better than any human engineer. There are also folks, I think Amodei at Anthropic has said
(35:57):
we're maybe one to two years away from, like, artificial general intelligence, whatever that means is a whole... you could spend 90 minutes just talking about that. But I think, even if we're much more conservative, even if we say, okay, you know, 5x that, it still doesn't really put any of this outside of the realm of, like, you know, if you're in, like, middle school, we're still talking about, like, your first job out of college, or maybe even
(36:20):
while you're in college, you know, us getting to this moment. And just to, like, prove the point: so the ChatGPT deep research just finished its work, and, as I said, I gave it the initial research PDF from Microsoft Research.
(36:41):
It summarized it, and then, I didn't even, like, spend that much time prompting, I just said, okay, could you conduct some research to identify other sources that address this question? And, as you said, it kind of, like, you know, ChatGPT before would have just immediately started generating something. In this case it stopped and actually asked, like, which direction do you want me to go? You know, and I think this is, like, for me,
(37:03):
this is what concerns me, is that, you know, prompt engineering doesn't have to be terribly thoughtful. I mean, you can sort of just, like, kind of hack your way through it and you still will stumble into, sometimes, you know, high-quality outputs. I haven't had time to actually read through this, but, you know, it pulled some legitimate sources. I found that Microsoft Research report. It pulled something from, you know, SpringerOpen.
(37:25):
It sort of outlined, it talks about sort of the benefits and opportunities. Then it did this clever thing, it's very in-depth, it did this incredible table, and it's like, okay, here's a table that summarizes the key benefits and risks. And so, basically, I mean, I was going to actually ask, like, oh, this is really dense, can you simplify it?
(37:45):
And then I got to the point where it had already actually done that. And, you know, I guess it brings back to, like, what's the point of education? It still feels like, and I think this is the good news for teachers, and maybe tell me what you think about this, it's like, the good news is education actually doesn't need to change
(38:10):
that much. Like, to your question about computer science, we still need to teach students computer science, because their ability to be really effective vibe coders, if that's what we're going to call it, will actually be predicated on whether they have the ability to evaluate and critically analyze the outputs and debug, and also the computational thinking skills.
(38:32):
And those students that have that knowledge are going to be by far the best able to add more value alongside AI. And then the other good news is that I don't know that education really is going to have to solve the problem of teaching students to use these AI tools.
(38:52):
It's like, we didn't, education didn't have to solve the problem of teaching students to use their phones, and I think in large part, you know, the same goes for, like, the internet and social media. These are technologies that, kind of by design, become sort of, like, ubiquitous and seamless.
Peter Gault (39:12):
So I agree with you, but I have a couple of hot takes here, and I have a few ways in which I think there are some pressing questions. So one of those, I think, is, when you look at education, I completely agree with you that it remains vital, and more vital than ever: if an LLM can produce a 10-page report for you, your ability to read and understand and build your own knowledge from that report is critical. The LLM can't just download the information into
(39:34):
your brain. We're still, hopefully, many years away from that scary reality, but it's all to say that we'll live in a world of more information, not less. That's certainly something that we know is true, that information is not going to become more scarce, and so your ability to parse it will become more critical. There are more immediate challenges, though, of, like, what
(39:54):
does education look like today? You know, you're talking about middle school students, and it's wild to think about what a 10-year-old faces a decade from now. You know, this question of, is AGI a year away or two years away? It doesn't really matter. It's going to come whenever it comes, and there's no one definition of it, there are, you know, lots of definitions. But 10 years from now, we know the world's going to look quite different than it
(40:14):
does today, and we know the world today looks quite different than the world 10 years ago did. We've been worried about driverless cars, for example, which have another huge impact on society, and that's taken a lot longer than people expected to become this ubiquitous thing. But my friend was just in LA, in a driverless car, and it
(40:35):
was driving him around the city, right. So it's like, these things are real now. And so, if we take a 10-year horizon, I do think there are a couple of really critical things that we need to do now. One of those is, when you look at ELA instruction, one of the big changes over the last 10 years has been towards less fiction writing and more nonfiction writing.
(40:55):
So when you looked at English education, one of the big heavyweights in this space was a woman, Lucy Calkins, and she ruled the roost when it came to literacy, and she really loved fiction writing, getting kids to write stories about their lives. And it was a really fun and engaging experience for kids, but it didn't build the critical thinking skills in the way that
(41:15):
a nonfiction text does, where you have to build an argument, you have to find sources and evidence to support it. And so, when you looked at classroom instruction, about 90% of writing was fiction writing and about 10% was nonfiction. And there's been a big push to shift so that maybe 70% is nonfiction and 30% is fiction. I love fiction writing. It's not that it should go away, but kids need to be given
(41:38):
these opportunities today. And so, if you really look at what's happening in classrooms, how much nonfiction writing is happening is sort of directly connected to how quickly and effectively we're building critical thinking skills. So that's one really critical question. A second, though, is, like, what should computer science education look like? And should we keep teaching JavaScript to students, for
(41:58):
example, which has become the mainstay? And I know I'm stepping into very hot water here, but I expect that in a couple of years, like, we won't teach JavaScript as the primary language that kids engage with in their CS classroom. And I don't think CS classrooms will go away, I think they'll become more vital, but that classroom will look quite
(42:19):
different. And I do think that having coding skills is important, but if JavaScript isn't a language that we use anymore, because your Figma designs turn into front-end code automatically, what is the role of JavaScript in that world? It becomes a big question mark. So I do think those are some of the questions that aren't
(42:43):
happening today but will happen within the next two years.
Alex Kotran (aiEDU) (42:48):
Yeah, I mean, I think that's, like, the right aperture. I mean, do I think we need to continue teaching computer science? Absolutely, yes. I also think, on the haves and the have-nots: everybody worries about this digital divide, like, oh, you know, the kids that don't have access to AI are going to be left behind. And I actually worry that the digital divide will look more like, the kids that are over-reliant on AI will be left behind, and the
(43:10):
kids that toiled through learning JavaScript, even if they're not using JavaScript specifically, they have gone through it. I mean, like, you know, I'm not an engineer, but my sense is, from the engineers that I've talked to, you know, you struggle through learning your first software language, your second software language, and then, at a certain point, you kind of develop the instincts for being able to, like, learn new
(43:33):
languages. But you can't skip that process. It's like, there is something to the productive struggle. It's, like, you know, Malcolm Gladwell's, like, 10,000... is it Malcolm Gladwell? The 10,000 hours. You know, you have to put the time in, and, you know, I think we need to be really clear that, like, AI, not only can it not replace that time, it risks making it much
(43:58):
harder to motivate students. I was talking to someone at a design agency, and we were talking about sort of AI art, and he made this point that, like, you know, the motivation, the incentive structure for, like, learning art usually goes
(44:19):
something like: you spend, you know, a year drawing and doodling and struggling to draw a human face, and then you eventually get to a point where you can create something really cool that you're proud of, that's unique and your own, and that drives you to, like, learn more techniques and to spend more time. And if AI makes it so that, and I've already
(44:39):
talked to students who are, you know, artists or burgeoning artists, and I asked them for their take on AI art, and they're generally not excited about it, because they're like, well, now, I spent all this time learning how to draw, and my friends are creating way better, like, stuff that looks cooler. Whether or not it's art, I think, is a separate discussion,
(45:00):
and it's, like, demotivating. And maybe the same would apply: why would you spend all that time learning JavaScript if you can get, you know, literally a working video game with a single prompt? Which I've seen now with Claude Code and with Gemini 2.5 Pro. It's kind of wild, actually, what you can get with a single prompt.
Peter Gault (45:17):
That it can build an entire application is completely wild, and I think that's what I'm concerned about, that the incentive isn't there. I think the farming example is perfect, where people still grow their own crops: people will have vegetable gardens or they will have an artisanal farm, and it's not that farming has completely gone away outside of industrial farming, but it
(45:37):
certainly looks quite different from when 90% of society were farmers. Right, and I think that that's the question: if that incentive structure isn't there... The productive struggle, I think, is an incredibly valuable learning experience. So I want to be crystal clear here that, while I think JavaScript will go away, as my own hot take, I'm not saying that it should go away. It's just that I think, if the incentive isn't there, the sort
(46:00):
of value out of doing this, and the time it takes to get there versus spending that time on something else, right? Education is all about opportunity costs; you have very limited time in the classroom. You've got, like, 30 weeks per year of instructional time, and that time flies by, and so what do you spend that time on? That becomes the really pertinent question: is spending that time learning... what was it? Hand coding? What was the new term? Vibe...?
Alex Kotran (aiEDU) (46:27):
Vibe coding, or classical coding.
Peter Gault (46:30):
Classical coding, I mean, that's my first time hearing it, but it's already too funny that that's now in the rearview mirror. So, yeah, all to say that those are all things that I think will become questions about two years from now, and I don't think these are happening today. The world of education always lags a little bit behind the workforce, and these things sometimes take time, but I think that it's
(46:56):
valuable to try to get in front of these questions and try to think about what is the best use of that time. And I don't think anyone has the answer to it, but certainly, what is vibe coding is a huge question to figure out and unpack. Yeah, but I think it's easy with AI to sort of go down the glass-half-empty road, and there's a lot more to talk about.
Alex Kotran (aiEDU) (47:18):
We haven't even gone to sort of, like, artificial general intelligence, where, what do you even do in a world where nobody has to work? And you get to a place where there are, like, maybe, very important and interesting philosophical questions. But there are also questions in which I see very little agency for myself and our organization and, frankly, for the education system to fully address. I think, you know, to me AGI is, like, a question about
(47:41):
sort of social safety nets and our ability to, you know, figure out, like, the fiscal policy such that we have the resources to be able to provide for people. So, anyways, it's almost like a political, you know, political organizing question. But the glass-half-full version of this is also, you know, I
(48:03):
think one of the big deterrents to students coding, like, going into computer science pathways, is, you know, today, or at least, let's say, two years ago, it was really hard and required a lot of, like, annoying work and effort to get to a place where you could create even a rudimentary or interesting video game. And I think, with vibe coding in the hands of the right
(48:23):
teacher, you're not necessarily replacing CS class with Vibe Coding 101, but your first day, not even the first week, your first day in introduction to computer science, you are creating a video game. And to me, like, I didn't even have a computer science class in my high school, and I can tell you I probably wouldn't have taken it. But if on day one I was able to create, you know, some
(48:45):
sort of, it probably would have been some sort of fantasy, you know, Lord of the Rings type of video game thing, but that might've hooked me, you know, that might've actually, like, drawn me in. And so I'm curious, you know, just to bring things back to Quill: one of the things that we've been really impressed with, it's been your team's ability to, you know, not just use your technology
(49:07):
platform to really effectively build sort of the critical thinking skills and provide feedback, but also as a way to, like, really efficiently provide teachers with, like, current and just interesting and engaging topics that students, um, just respond well to. And your point about baseball was, I think, well taken, right? Like, students are more
(49:28):
likely to lean into the learning experience if it's something that they actually are, you know, interested in or feel somewhat passionate about. And to that end, I know that this is something we partnered on, right? It's, like, creating some specific activities around artificial intelligence, which is a bit meta, right? Because we're almost, like, using AI on the back end
(49:50):
to help teach students about AI conceptually. But can you just paint that picture of, like, what are those activities? Like, how have those been received?
Peter Gault (49:57):
Yeah, they've been some of our most popular activities. We're seeing that kids are really fascinated by these topics. You know, AI is so interesting in so many different ways, and so we have things like how AI is advancing animal conservation, and this is one of those areas where AI is amazing, that it is helping to protect endangered species, doing things like being able to use AI to protect elephants or being able to use
(50:20):
AI to communicate with whales. These are the ideas that really get kids excited about the future. And as we've been building these new activities, we've been getting some emails from students, which almost never happens, where the students are sharing their opinions and saying, oh, you covered this, but what about that? Or, this feels too optimistic on this particular topic, or, what
(50:43):
about this other question? And so you're really seeing that the students are talking about these issues, they're unpacking them, they're debating them, and Quill had never really gotten to that level before with kids, and so for us, that's a huge win. 100%, what we're trying to do in these activities is to really help students to be curious and excited about the future and to
(51:05):
be able to think critically about it, and so focusing on AI knowledge is just an incredibly interesting way of opening up this door for kids. We're seeing this happening the most in English classrooms: when we're building these activities, they can be used in a STEM classroom, they can be used in a CS classroom, but English teachers are looking for these opportunities to get
(51:25):
their kids debating ideas, to build their own opinions, their own ideas, and this content has just been an incredibly rich opportunity for kids. And so we're rolling out a whole series of new activities over the course of the next year, focused on all of these really fascinating topics: how AI is impacting art and creativity, for example, how AI is impacting things like the future
(51:47):
of work, how it's affected by algorithmic bias, for example, and how researchers are addressing and changing AI to mitigate bias. These are all really critical topics that we think kids will be really excited to dive into.
Alex Kotran (aiEDU) (52:03):
Yeah, I love the callout for algorithmic bias and, like, AI ethics, because I sometimes am frustrated when people describe AI literacy and they're like, well, the key is that, like, students just need to know about algorithmic bias, or, like, they need to just know about, like, the risks and benefits of AI. And I worry that that sort of approach to AI literacy,
(52:24):
thinking of it as content knowledge, is actually not quite there, because what really matters is not so much, like, the awareness that it exists, but providing students with the agency to actually, like, you know, start to dictate, you know, how their knowledge of AI, of algorithmic bias, is now informing their perspective on, you know, if and
(52:47):
how they should be using AI. And, like, I think what's powerful is that we don't have answers to all these questions, and those are maybe some of the most interesting questions to then pose to a student, because they have, frankly, as much entree to the conversation about AI art, to give another example, as anybody else.
(53:10):
Yeah, what is coming now? Like, two or three years from now, how is Quill different, or how is it the same? I mean, is your vision for growth more scale and reach, or do you have, like, also sort of a product vision that is maybe expansive beyond what you're currently providing?
Peter Gault (53:30):
So we're really thinking deeply about what the big questions are that kids are going to be excited about and that teachers are going to find quite valuable. One example here is the researcher Joy Buolamwini from MIT. She's done a lot of research on things like facial recognition and how these tools can sometimes not have enough
(53:51):
training data to represent different ethnicities and races, and misclassify people as a result. Rather than just saying, hey, this is a problem, she was able to build her own datasets and retrain the models so that they were more responsive. To us, that's a really powerful example of how AI is a malleable technology: as you feed more data into AI, you change its output, and that
(54:15):
can really be used for good. It can certainly be used for bad purposes as well, but that ability for students to dictate what AI is and what its output looks like is a really powerful thing. This next generation of students will inherit this technology; they'll control it, and they'll be able to choose
(54:36):
how it's used. Understanding that they can change how it works, rather than it being some black box beyond our control, is, I think, a really important lesson to teach now. These case studies of retraining a system and improving it, making it more effective, are all examples of how AI can change.
We're big believers in this idea, because that's what we do every
(54:58):
day at Quill. When we're creating feedback for kids, we're building our own custom datasets; we're not just taking the output of the LLM and serving it to the student. We're building datasets of more than 100 responses to a particular question, for example, where we map out all the different things kids are saying and how a teacher would engage with each student if the teacher were
(55:18):
sitting next to the kid, working one-on-one to give feedback. By building those datasets, by showing those exemplars of how students write and how teachers engage, we're able to inject all of that into the LLM, to give it our own opinions of what good learning looks like. We believe that's absolutely critical for good education.
But there's also this meta concept: kids need
(55:40):
to know that AI is malleable, that it can be changed, and that doing so can impact their lives in a positive way. Those are some of the big ideas we're excited to tackle over the course of the next year. There'll be a very meta level where we want to cover Quill's own AI and explain it as kids are using it, so it's turtles on turtles on turtles here, but we
(56:04):
see it as a really powerful opportunity to unpack how we train our own AI and how we try to mitigate bias within Quill, and to use that as a learning opportunity for kids. In doing that work, it's really to help them understand that AI, again, is not a static thing, that they can change it, that they'll own it, and that in owning it
(56:26):
they can hopefully steer it in a good direction.
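To make the dataset-to-prompt approach Peter describes concrete, here is a minimal sketch of exemplar injection in Python. Everything in it, the exemplar entries, the prompt wording, and the coaching style, is invented for illustration; Quill's actual datasets, prompts, and models are not public.

```python
# Toy version (not Quill's real pipeline) of steering an LLM with a
# hand-built exemplar dataset: for one writing prompt we store student
# responses paired with the feedback a teacher would give, then inject
# those pairs into the prompt so the model imitates the coaching style.

# Hypothetical exemplars; a real set would have 100+ reviewed entries.
EXEMPLARS = [
    {"response": "AI is good because whales.",
     "feedback": "You name a topic but not a reason. What did researchers "
                 "learn from whale sounds that supports your claim?"},
    {"response": "AI collars track elephants, so rangers reach herds "
                 "before poachers do.",
     "feedback": "Strong evidence! Now add why a faster response protects "
                 "the whole herd."},
]

def build_prompt(student_response: str) -> str:
    """Assemble a few-shot prompt: exemplar pairs, then the new response."""
    parts = ["You are a writing coach. Reply with one sentence of feedback "
             "in the same style as the examples."]
    for ex in EXEMPLARS:
        parts.append(f"Student: {ex['response']}\nCoach: {ex['feedback']}")
    parts.append(f"Student: {student_response}\nCoach:")
    return "\n\n".join(parts)

# The assembled prompt is what would be sent to a model; the educational
# judgment lives in the curated exemplar data, not in the model choice.
print(build_prompt("AI can hear whales talking."))
```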
Alex Kotran (aiEDU) (56:34):
Yeah, and I can see why Quill is really well placed to do that, because your approach is all about having students hone their ability to articulate their opinion, their criticism or support for something. And the example you gave, even in the partnership that we have,
(56:56):
students were getting really activated. Once you provide some scaffolds, students become quite articulate in the development of their opinions about AI. But I don't think that happens by accident. I don't think that, just because they're using it and they're digital natives, they necessarily have the tools to
(57:18):
become really informed. And I think about the TikTok algorithm, and the fact that we don't need to teach students that algorithms exist. They talk about "the algorithm"; they innately understand that there is this thing in the shadows dictating the content they see. What we found, with an activity we had a while
(57:41):
back, is that you could challenge students to train their algorithm to feed them a certain type of content. I don't think kids necessarily realize all the ways this works. I did the owls one: how can you get as much owl content on TikTok as possible? It's not something we use in the classroom, so we've kind of phased it out; it's more of an at-home activity now.
(58:02):
But you'd have students say, oh, I didn't realize there were so many different aspects of how I use these apps that feed into what gets generated. And I think it makes them much more resilient: when they see something, it's like, well, you have some control; you're seeing that content because you've been creating the reward mechanism for the tool. It also shifts
(58:24):
the narrative, because the AI conversation can get very depressing if you feel like you don't have agency. And I think what's important is, even with jobs, even with the future of work: this is Daron Acemoglu, I think, who was really pushing this, and David Autor as well. I think it was actually Autor, in the closing remarks to
(58:45):
one of the talks he gave recently, who said, look, AI is not going to just happen to us. When we talk about what the future looks like in terms of jobs, in terms of its impact on society, we are going to make decisions about the degree to which we use it to automate skills, and the degree to which we
(59:05):
prioritize building human capacity alongside it. And given that the kids of today are really going to be the primary recipients of those decisions, I think it's both powerful and quite necessary for them to be a part of that.
(59:25):
But what you're describing is not so much just giving students a seat at the table. You can't just give them a seat at the table; they have to have the rhetorical tools to participate in the conversation and really contribute.
Peter Gault (59:32):
Absolutely. Being advocates, I think, is critical. And that example of controlling your algorithm is a fascinating one, because there is this sense that with this technology you could ask:
(59:54):
what is an ideal algorithm? What content do you want to see? What makes you happy and joyful? Should there be more cute animals in your feed, because that makes you happier? Those are all questions where, ideally, students are the ones driving their own engagement, and they often
(01:00:17):
probably don't think about that or consider that idea. I think that's where Quill and aiEDU really step in, to give them this chance to reflect and think about questions they might not have thought about. Our programs work really well together, because Quill provides an introduction to a topic we're covering, an article. We're letting kids write about it, getting them to build arguments and use evidence, and in doing so we're really opening
(01:00:37):
the door. And then from there, aiEDU, with your lesson plans and your activities, really goes toward that building experience: how do you take this, run with it, and build something new? So I do think we're both trying to give kids these opportunities to reflect on something that impacts their lives every day, right? With the amount of screen time happening, kids
(01:00:58):
are spending, what, eight hours a day on their phones, or something like that? God, is that the latest? These things impact our lives in such an insane and intense way. We all know this, and we all know it's not the most healthy thing, but to give kids a chance to reflect on that and think about it, I think the work we're doing is really great, and there aren't a lot of folks doing this work right now. I think it's really important that, as this technology evolves,
(01:01:22):
kids are seeing this happening. And with the work we're doing together, we're in the early innings. AI is going to be around for the rest of our lives. It's not going away. The snowball is only going to keep growing, and so doing this work is going to become more and more vital.
Alex Kotran (aiEDU) (01:01:42):
Yeah, I was going to ask what you're obsessed with, but maybe I'll refine that question: compared to where you were last year, and based on what you've seen, how has your thinking about the timeline we're on changed?
(01:02:02):
I mean, do you feel like things are accelerating? Steady state at high velocity? Slowing down, maybe?
Peter Gault (01:02:11):
Definitely, I think so. For us, the big thing is the rebuild on generative AI, and it really feels like day one: there are a ton of opportunities for us to go deeper and build in critical thinking. These are things like teaching students how to build a thesis statement, which, for me, was one of the biggest things I remember struggling with as a student. I'd be writing an essay and have to build a
(01:02:31):
thesis, and no one ever taught me what a thesis was. I just didn't have that class, or none of those teachers covered it, and I'd be like: is this a thesis? Is that a thesis? What should I be saying here? This is actually a hard skill for students, building a thesis that captures their entire point of view, and getting practice with it.
There's research showing that this is incredibly impactful.
(01:02:53):
Even building a topic sentence is incredibly impactful, but there's not a lot of explicit instruction that gives kids those opportunities. For us, it's something we've always wanted to do. It's been on our roadmap for 10 years, but the technology was never there to let a student build their own thesis and then for us to evaluate it and provide feedback and coaching on it. That all feels very doable
(01:03:15):
today, in a way that it was not even in that first iteration of generative AI, where it wasn't quite there. Now you have that really fine-grained analysis, and I think that's awesome.
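A minimal sketch of what rubric-based thesis feedback could look like. The rubric criteria, the prompt wording, and the stubbed `fake_llm` helper are all hypothetical, not Quill's implementation; in production the stub would be swapped for a real model call.

```python
# Sketch: ask an LLM to judge a thesis statement against explicit
# criteria and return machine-readable coaching. All details invented.
import json

RUBRIC = [
    "takes a clear, arguable position",
    "is specific rather than a broad statement of topic",
    "previews the reasons the essay will develop",
]

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call so the sketch runs offline;
    # it returns a canned structured evaluation.
    return json.dumps([
        {"criterion": RUBRIC[0], "met": True,
         "coaching": "Clear stance: you commit to one side."},
        {"criterion": RUBRIC[1], "met": False,
         "coaching": "Name the specific platform or policy you'll argue about."},
        {"criterion": RUBRIC[2], "met": False,
         "coaching": "Add a 'because' clause previewing your main reasons."},
    ])

def evaluate_thesis(thesis: str) -> list[dict]:
    prompt = (
        "You are a writing coach. Judge the thesis below against each "
        "criterion and answer ONLY as a JSON list of objects with keys "
        f"criterion, met, coaching.\nCriteria: {RUBRIC}\nThesis: {thesis}"
    )
    return json.loads(fake_llm(prompt))

for item in evaluate_thesis("Social media is bad."):
    mark = "ok" if item["met"] else "needs work"
    print(f"[{mark}] {item['criterion']}: {item['coaching']}")
```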
Alex Kotran (aiEDU) (01:03:26):
Was that GPT-4? What was the inflection point in terms of capability?
Peter Gault (01:03:30):
In our own journey, there have been, I think, two really big inflection points: the shift from GPT-3.5 to GPT-4, and then, for us, the introduction of Gemini Flash 2.0. We spent a lot of time on GPT-3.5 when it first got released. We'd been using the technology within, I don't know, a month or two of when it first became available, I think
(01:03:52):
within a week of the API access, and it was not reliable at all. It was just so bad: hallucinating, repeating itself, all of these problems. We spent so long trying to make it reliable, and what we should have done, to be honest, is waited. At the time we thought, this is the technology, we've got to make it work, so we spent so long building
(01:04:12):
these guardrails. It was a good learning experience for us, but we spent so much time playing with this thing and ended up throwing out all that work. GPT-4, though, was a big step forward: it gave us much more reliable analysis, with the caveat that it was really slow and really expensive. We're helping millions of kids per year, giving feedback
(01:04:33):
on around 500 million sentences every school year, and at that scale GPT-4 would have cost us something like $6 to $10 million per year to run. That's bigger than our organization's entire budget. So you saw this powerful technology, but it didn't work at that scale. It was also too slow. For us, when kids are writing, we need to give them
(01:04:53):
feedback in under a second, right? We can't have a long analysis period. We knew models would get faster and cheaper, and for us the Gemini Flash model has been a really fast model that gives really great output and is reliable while also being cost-effective. With Flash 2.0 there's still room for
(01:05:14):
growth; versus the really powerful pro models, there is a gap, and that gap will close over time. But it's certainly at the point where we feel confident deploying it to production in a way that's real and scalable, and for us that's been a really exciting threshold. And that's only been in the last six months; I think it came out in August of last year. So while
(01:05:40):
there's been a lot of FOMO around generative AI essentially since it came out, the truth is there was a period of getting from "this technology exists" to "it's reliable, it's fast, and it's cost-effective." I think we're in that territory today, but we only entered it very recently, and for us that's been critical to actually being able to use this at scale. Versus our own models, where, when you build your own model,
(01:06:02):
you have a lot of control over the cost and the model design, but it was just a very slow process. Now our iteration loops are a lot faster as well, and that allows us to do a lot more than we ever could in the past.
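The scale arithmetic here is worth making concrete. A back-of-envelope sketch, where the per-call token counts are assumptions and the prices are only roughly indicative (GPT-4's launch-era list price was about $30 and $60 per million input and output tokens; a flash-class model is assumed at $0.10 and $0.40), shows how the same 500 million annual calls land in the millions of dollars on one model and the tens of thousands on another.

```python
# Back-of-envelope cost check. Token counts per feedback call and prices
# are illustrative assumptions, not Quill's actual numbers; only the
# 500M-calls-per-year figure comes from the conversation.

CALLS_PER_YEAR = 500_000_000             # ~500M sentences getting feedback
PROMPT_TOKENS, OUTPUT_TOKENS = 400, 60   # assumed: instructions + exemplars + reply

def annual_cost(price_in_per_m: float, price_out_per_m: float) -> float:
    """Annual spend given $/1M-token input and output prices."""
    per_call = (PROMPT_TOKENS * price_in_per_m
                + OUTPUT_TOKENS * price_out_per_m) / 1_000_000
    return per_call * CALLS_PER_YEAR

# Launch-era GPT-4 list pricing was roughly $30 in / $60 out per 1M tokens.
print(f"GPT-4-class:  ${annual_cost(30.0, 60.0):,.0f}/year")   # ~ $7.8M
# A flash-class model assumed at roughly $0.10 in / $0.40 out per 1M tokens.
print(f"Flash-class:  ${annual_cost(0.10, 0.40):,.0f}/year")   # ~ $32K
```

Under these assumptions the GPT-4-class estimate lands right inside the $6 to $10 million range quoted above, which is why a cheap, fast model was the threshold that mattered.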
Alex Kotran (aiEDU) (01:06:16):
Yeah, I'm curious if you've struggled with this, because I've often had conversations with foundations trying to figure out what their AI strategy is, and there's this sense that what they need to do is invest in AI nonprofits. And I don't know if I'd consider Quill an AI nonprofit. I mean, you're using AI, right, but I think actually
(01:06:38):
you're an organization working to use technology to help students read, think critically, and articulate their opinions. But you're really nimble, and you've been able to deploy AI really effectively, and that's interesting; in the
(01:07:01):
venture space it's actually something a lot of VCs have been talking about, like Andrew Ng's AI Fund. Their whole thesis is: look for companies that are really well placed to leverage AI, rapidly iterate, and get to an MVP. But I'm just curious, from your perspective, how can we do a better job of
(01:07:22):
articulating this? Because I don't think it's just funders; I think it's school district leaders as well, the buyers, or, since you're a nonprofit, your customers, let's say. Have you run up against folks saying, well, I'm really trying to figure out
(01:07:43):
what the generative AI tool is that we buy, and is Quill really that? Quill isn't competing per se with Gemini; they're very different tools. And at the same time, it feels almost more important for teachers to be using something with the pedagogical structure you've put in place than some multi-purpose tool that doesn't necessarily have the deep thinking behind it.
Peter Gault (01:08:06):
Yeah, I think for us, we are very much a literacy nonprofit. Our goal is to build strong readers, strong writers, and strong critical thinkers, and that's what we're all about.
(01:08:13):
That's the end game here: how do we help kids build these skills? AI is just a tool we use in that process. So I do think we're an AI nonprofit in the sense that we have that expertise: a third of our team are software engineers and product managers, and the team does all of our AI development in-house. So we have that skill.
(01:08:33):
But it's a little bit like saying we're a nonprofit that uses software, or an internet nonprofit, or a database nonprofit; all software uses databases, and that's just part of it. Right now there is a class of organizations where AI is part of their delivery model, but in 10 years every organization
(01:08:55):
will use AI in some capacity, and that distinction of "are you an AI nonprofit or not" will go away. The question will be who's building novel use cases on it, because there are a lot of these thin-wrapper tools that just take the output of ChatGPT and wrap some service around it. Sometimes that can be valuable, and sometimes it's not, and I think that's a very different question.
(01:09:17):
That will become very easy for anybody to do, and you won't be an AI nonprofit just because, under the hood, a model is helping you in some capacity. So those are distinctions that apply now but won't in the future.
The more interesting question, though, is that AI is getting a mixed reception in schools. We are working on our messaging right now, and we don't talk a
(01:09:40):
lot about AI on our website. We have our AI program for kids, but it's not splashed across the homepage, and we figured maybe we should do more on this, right? AI is so central to our work. We're developing it, we're using it. Let's make it part of the special sauce of Quill. But teachers reacted pretty negatively to that: when they
(01:10:01):
see the words "AI," they worry this is just going to be a cheating tool, or that it's going to replace them. In our own surveying of teachers, the sentiment was fairly negative. Being an AI company is not why they love Quill, and calling ourselves an AI company felt like the wrong step forward. I think we all saw the
(01:10:24):
crypto companies; we don't want the crypto-company vibes, right? So that's a little bit of the feedback we're hearing from teachers today. And it's nuanced, because there are some really good folks using AI, and I also see a lot of not-so-great products. The space has a mixed reputation right now: there are really ethical AI players, usually doing
(01:10:45):
very narrow use cases in a highly customized way, and then folks who are promising the world with AI in a way that doesn't deliver what students and teachers need and comes with a lot of potential problems as well.
Alex Kotran (aiEDU) (01:11:01):
Yeah, I was actually just talking to one of the biggest school districts in the country. They have someone leading their generative AI strategy, and they just banned one of those popular for-profit wrapper tools; I won't say the name of the company, but you can probably guess. What they found is that there were some instances of teachers using it
(01:11:24):
really well, but lots of instances where it just wasn't being used effectively. And we actually have to go to some lengths here: I open almost all my meetings with school districts now by saying, we are not the AI implementation project. In most cases, when schools come to us and ask, what AI
(01:11:45):
tool should we be providing to students and teachers, our advice is: none. You should actually pump the brakes. Focus instead on the question of how to provide a sandbox for teachers to start experimenting with this stuff. I think Quill is interesting because, to me, it's actually easier to take to a school district, since it's
(01:12:06):
already aligned with priorities they have, right? The NAEP scores really underscore this: many, if not most, schools have significant ground to cover in terms of literacy, and solving for that problem resonates with a much broader audience. I am interested, though, in this meta component you're talking about, where, as students are
(01:12:28):
using Quill, they're also learning about how Quill works. I'm curious if there's a teacher-facing component to that as well, because I'm fascinated with how we build that. I almost worry more about teachers than students; I think the students are going to figure it out far more quickly, because they're saturated with it and they're very tech-forward.
(01:12:50):
Does Quill have a teacher-facing component? Do you see any opportunity to use your platform as a way to help teachers see what it looks like to implement AI really effectively on the back end?
Peter Gault (01:13:02):
Yeah, so everything I'm talking about now is a project we're working on, something we'll hopefully be shipping with aiEDU sometime over the upcoming school year. So, a big caveat that this is not yet live. But we do want to build activities specifically around how we build our training datasets and how we use those datasets to evaluate writing. This is a really important topic, because evaluation of
(01:13:25):
writing is being used for things like Quill, where Quill is a very low-stakes practice platform where kids get to practice and receive feedback, but it's also being used for testing purposes. Every year, folks like the College Board hire tens of thousands of educators to grade all the AP exams, for example, and a ton of work goes into those evaluations.
(01:13:46):
As AI builds these skills, evaluation of writing is going to become part of education, so getting it right, making sure it's reliable and accurate, those are all really critical problems, and you solve them, again, through good data. For us, this is a really critical topic that we want to introduce to students, but it's an opportunity for teachers to learn as well, and we're excited to cover it.
(01:14:07):
It's a little bit scary, though, because we're pulling back the hood a little on what we do in our work. So it's not quite available yet, but we see it as a really powerful opportunity for us over the upcoming year.
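One concrete way to check that an automated writing evaluator is "reliable and accurate" is to compare its scores with teacher scores on a held-out set of responses. Below is a minimal sketch with invented data and simple agreement rates; real essay-scoring work typically uses larger rubrics and stronger statistics, such as quadratic weighted kappa.

```python
# Toy reliability check for an automated writing scorer: compare model
# scores against teacher scores on held-out responses. Data is invented.

# (teacher_score, model_score) pairs on a 1-4 rubric for 10 responses.
PAIRS = [(4, 4), (3, 3), (2, 3), (4, 4), (1, 1),
         (3, 3), (2, 2), (4, 3), (3, 3), (1, 1)]

exact = sum(t == m for t, m in PAIRS) / len(PAIRS)
within_one = sum(abs(t - m) <= 1 for t, m in PAIRS) / len(PAIRS)

print(f"exact agreement:    {exact:.0%}")       # 80% on this toy data
print(f"adjacent agreement: {within_one:.0%}")  # 100% on this toy data
# Human graders are typically held to similar agreement bars, so a model
# clearing them on held-out data is one signal (not proof) of reliability.
```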
To your broader question, though, about how teachers engage with Quill: we provide them with a platform with access to more than a thousand activities. They can assign activities to students, view their results,
(01:14:31):
and see all the writing the kids are doing on the platform. One of our tools, Quill Lessons, is a multiplayer tool where the teachers and the students are learning together; it's one of our most beloved tools. Kids get to share their answers with each other and debate them. It's a real-time tool for students and teachers, and so we're
(01:14:52):
trying to think really intentionally about how to create tooling specifically for a K-12 context and a higher-ed context, where teachers are empowered to engage their students and be partners in their learning. And there are a couple of big design decisions here. These are bite-sized activities layered into the classroom, used a couple of times per week. All these decisions are critical to Quill being a
(01:15:14):
partner to teachers and helping to advance their goals, as opposed to something that lives apart from the classroom and tries to replace a teacher. We think that by making all these smart decisions, teachers feel really empowered by Quill. We also have great training and webinars and all those things that connect teachers together, but the heart of it is really
(01:15:35):
this intentional design: designed for teachers, and really to be a partner to them.
Alex Kotran (aiEDU) (01:15:41):
Yeah, I mean, that's the golden goose, right? Can we use AI and technology to actually enhance collaboration and these human, durable skills that students need to build alongside critical thinking and the knowledge base itself?
(01:16:04):
In the right hands, AI absolutely can, because not every teacher knows how to create a really effective project-based learning activity around any given topic. So it's definitely not as simple as, oh, AI is good or bad for use in the classroom. I do think teachers are really well placed to look to organizations like yours that have literally been obsessed
(01:16:27):
with this question for a very long time, because it's not necessarily something that can be turnkey. The good news is that there are free products like Quill available, and aiEDU has a very similar approach: modular, bite-sized, not trying to be the curriculum. It's hard to imagine what the curriculum for AI readiness would even be.
(01:16:47):
It's actually more about how we get more organizations to adopt some of the practices you've taken: this very introspective approach to product design, being really intentional about when not to use AI, when to use it, the role of the teacher, et cetera.
Peter Gault (01:17:07):
I think the headline here, what I hope everybody is doing or trying to do, and I think we're on the precipice of this moment, is to really engage in deeper and more active learning. When I look at ed tech: Quill's mission statement from day one has been to disrupt multiple-choice questions. My very first grant application to the Gates Foundation was just about how multiple choice isn't the best way of learning.
(01:17:30):
I remember as a student doing so many multiple-choice questions. A, B, C, D, select the right answer; you've got three wrong answers and one correct answer, so you could kind of just guess: this is clearly a wrong answer, I'm going to go with A or B. You're not building an argument, you're not expressing your own idea, you're not building something, and building something is a really powerful way of learning.
(01:17:54):
As we think about AI in the future, using a thin-wrapper tool to generate multiple-choice questions for you isn't really advancing learning. You're taking something we've been doing for decades and just making it a little bit faster. The more exciting thing is: how can we go from multiple choice to writing, to project-based learning, to collaborative learning, to things that are very hard to do
(01:18:15):
well? There's been a big push for project-based learning for many years, and it's very hard to implement in the classroom; it's hard to get 30 students all working on projects. But you can imagine a number of ways in which AI can serve as a partner to the teacher in ways that previously were not possible, and I think all those opportunities lead to deeper and
(01:18:38):
richer and more effective learning. So I think that's the name of the game here: how do we reimagine learning, and what are the opportunities we can now pursue that were just hard to do in the past? That's where we should be applying our effort, as opposed to just using AI to automate what we're already doing, which is not the most effective and deepest form of learning.
Alex Kotran (aiEDU) (01:18:58):
Yeah, I couldn't think of a better way to close it. The status quo is clearly not working, and if AI just becomes a way of covering resource gaps to maintain the status quo, we'll have failed. But there is this opportunity. AI can be almost a Trojan horse for these much older and,
(01:19:22):
frankly, boring conversations that have been had for decades, right? Twenty-first-century skills, digital readiness, project-based learning, critical thinking: none of this is new, and I think there's actually power in that. There's a lot of disruption happening to schools right now, at the national level, at the state level, and
(01:19:45):
I don't know that educators respond well to more disruption; they don't necessarily see it as a positive. But there are subtle yet really intentional ways the technology can actually just make it easier for teachers to start implementing some of these practices, which aren't necessarily intuitive but can be turnkey with
(01:20:08):
amazing tools like Quill. Peter, anything else that we missed that you want to share before I let you go? I know it's relatively late on the East Coast. Thanks for making time for me today.
Peter Gault (01:20:17):
Yeah, it was a lot of fun. We covered a few really big questions, and I'm really excited that this is, again, just the early innings of this world. AI is going to be here for the rest of our lives. There are a lot of things to figure out, and I hope we can spend more time trying to get ahead of some of these questions. It's hard to imagine what the world will look
(01:20:37):
like 10 years from now, but we can certainly see certain trends. We can see that we'll have more information than ever. We'll have deep research queries feeding us 100-page documents, and we'll need to be able to think critically to parse them and to have our own points of view. Education will become more important than ever, and if we do it really well, if we make it active, if we make it joyful,
(01:21:00):
it will be a really amazing opportunity for kids. But if we don't get it quite right, I think we're going to face a somewhat scary world, where we're all a little bit taken aback, where AI is something that lives beyond us.
Alex Kotran (aiEDU) (01:21:17):
And happens to us. Well, and who better to help us answer these questions than the students themselves, who are going to be both a part of that world and also building it? Yeah, absolutely. Peter Gault, I'll see you in, I guess, a few days, right? Are you going to be at ACDS?
Peter Gault (01:21:30):
Yes, I'll see you next week. Okay.