All Episodes

April 22, 2025 • 44 mins

Master AI education and boost AI literacy with insights from Dr. Stefania Druga, Google Gemini research scientist and creator of Cognimates. Explore how to use AI to learn about AI, effective strategies for teaching AI concepts to kids and adults, and the crucial role of hands-on tinkering. We discuss overcoming learning barriers, designing AI tools that augment creativity without stifling imagination (like smart toys vs. real-world learning), the nuances of anthropomorphizing AI, and the future of multimodal AI applications in education and beyond. Learn why user observation and custom eval sets are vital for building truly useful AI tools.


Learn more about Cognimates: http://cognimates.me

Dr. Stefania Druga's Publications: https://stefania11.github.io/publications/

Watch Stefania's AI Engineer Talk: https://www.youtube.com/watch?v=ySYLsoAhXmg


Get your free curated AI report from Anetic Daily Intelligence

Sign up at: https://www.anetic.co


Get FREE AI tools

pip install tool-use-ai


Connect with us

https://x.com/ToolUseAI

https://x.com/MikeBirdTech

https://x.com/Stefania_druga


00:00:00 - intro

00:02:40 - Barriers and Perception in AI Learning

00:06:00 - AI Learning Differences: Adults vs. Kids

00:10:37 - AI Literacy for Non-Tinkerers

00:14:31 - The Impact of Anthropomorphizing AI

00:23:37 - Principles for Building Creative AI Tools

00:33:05 - Common Misconceptions in Building AI Tools

00:42:41 - Core Principle for AI Entrepreneurs: Observe Your Users


Subscribe for more insights on AI tools, productivity, and AI education.


Tool Use is a weekly conversation with AI experts brought to you by Anetic.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Learning AI is hard. Ask anyone that's been in the AI
space for a while and they'll probably just tell you attention
is all you need. But it's definitely more than
that. There's unnecessarily complex
terminology. Looking at you, RAG.
And don't you need to have all that college maths you forgot
about? AI is poised to be the greatest
tutor humans have ever had. With infinite patience, the
ability to teach in a multitude of styles, and with the
proliferation of multimodal models coming out, learners

(00:22):
are going to be able to learn in whatever method is best for
them. Truly the golden age of
education. But how to use AI to learn about
AI? Welcome to episode 36 of Tool
Use, the weekly conversation about AI tools and strategies to
empower forward-thinking minds, brought to you by Anetic.
I'm Mike Bird, and this week we're joined by Dr. Stefania
Druga, a research scientist at Google Gemini and the creator of

(00:42):
the open source learning platform Cognimates, which is
teaching the future generation of AI engineers and scientists.
Steph, welcome to Tool Use. Thank you so much.
Thanks for having me. Absolutely.
So you've done a lot with learning and helping spread the
good word of AI, but can you give a little bit of your
background, how you got to where you are today?
Sure. Yeah, so I started working in
the space of teaching kids how AI works and using AI as part of

(01:04):
like their education process since 2015 when I was at MIT
Media Lab and I was actually part of the Scratch
team in the Lifelong Kindergarten lab. And before doing that, in
2012, I started my own NGO called HacKIDemia, teaching STEAM
to kids around the world through hands-on workshops

(01:27):
where they would build robots and they could also program.
But it was all focused on allowing young people starting
from 7 and up to really learn through hands-on projects and be
able to work on addressing challenges in their local
communities. Like anything for measuring like
air quality, water quality, like have alternative solutions for

(01:50):
electricity, but also like fun stuff like build their own video
games or like make a banana piano.
So that's that's sort of how I got started.
And there there's a personal motivation.
I come from a small village in Transylvania and it took a lot
for me to, you know, get to work at DeepMind or study at MIT.

(02:11):
And I'm the first person in my family, first
generation, to go to college. So I'm realizing from my own
experience how much like education has this equalizing
force in society. That's what motivated me to
invest a lot of time and effort into improving education.
Love that education is incredibly important.
I did a little stint as a teacher, taught abroad back in

(02:34):
the day. Wasn't a career path but being
able to just impart wisdom and and generate excitement about
things is is wonderful. What type of barriers did you
come across through some of your studies as to where people are
and where they need to be in order to learn about this type
of thing? Yeah.
So in 2015, the voice assistants were just becoming part of the
home. So I, I basically was studying

(02:56):
how the perception kids have of things like Alexa and Google
Home and Siri influences the way they ask questions to these
devices and also influences how they perceive AI more broadly
and other types of like AI technologies like smart toys and
smart robots. And one of the barriers is that

(03:17):
the initial perception really shapes how kids interact and
learn with these technologies. So to give you an example, like
if the discussion in the family or at school, or if the
kid's attitude towards AI is like, it's really intelligent, it's
much smarter than me. Like I can ask it anything and
it's going to give me the answer.

(03:40):
They're not going to have enough of a critical understanding of
when to trust or not to trust this technology.
And also they might not be encouraged to really try
to understand how it works under the hood and open the black box
or how they could customize it and teach it and make it work
for them, right? So that's, that's really what

(04:01):
motivated me to create Cognimates to allow kids to open the
black box. And instead of saying this is
magic, you know, it's so intelligent to really understand
where the intelligence comes from, that it comes from people
and that people train these models using data, that the data
matters a lot. So they actually got to do this

(04:23):
right. So as part of Cognimates,
there's a training section where they can train their own
custom models with images and text and sensor data.
And we did this back in 2015 when we didn't have large
language models. And so they were using transfer
learning under the hood and building classification models
with more traditional machine learning, but they could

(04:46):
definitely see an effect. So just by adding 10 examples of
cats and 10 examples of dogs, like they could have a
classifier that would distinguish between these two
labels they created. But then also see when they
added examples outside of the distribution, like a cat that
was drawn by hand, that the model would struggle with it and
kind of understand like, oh, I need to add that to my training

(05:09):
data and that's why it doesn't work.
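The training workflow Stefania describes here can be sketched in miniature. This is an illustrative toy, not Cognimates' actual implementation: assume a frozen pretrained backbone has already turned each image into a feature vector, and a tiny nearest-centroid classifier is then trained from roughly ten labeled examples per class, with a distance threshold standing in for the out-of-distribution case (the hand-drawn cat). All names, vectors, and thresholds are invented for illustration.

```python
# Toy sketch of few-shot transfer learning: a frozen pretrained model
# maps images to embeddings (simulated here as small vectors); a
# nearest-centroid classifier is "trained" from a handful of examples.
import math

def centroid(vectors):
    """Average a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class FewShotClassifier:
    def __init__(self, ood_threshold=0.5):
        self.centroids = {}               # label -> centroid of its examples
        self.ood_threshold = ood_threshold

    def train(self, label, embeddings):
        self.centroids[label] = centroid(embeddings)

    def predict(self, embedding):
        label, dist = min(
            ((lab, distance(embedding, c)) for lab, c in self.centroids.items()),
            key=lambda t: t[1],
        )
        # Far from every centroid -> likely outside the training
        # distribution, e.g. a hand-drawn cat when all examples were photos.
        if dist > self.ood_threshold:
            return "unsure -- add examples like this to your training data"
        return label

# Pretend embeddings from a frozen backbone (2-D for readability).
cats = [[0.9, 0.1], [1.0, 0.2], [0.8, 0.0]]
dogs = [[0.1, 0.9], [0.2, 1.0], [0.0, 0.8]]

clf = FewShotClassifier()
clf.train("cat", cats)
clf.train("dog", dogs)
print(clf.predict([0.85, 0.15]))   # near the cat examples
print(clf.predict([5.0, 5.0]))     # nothing like the training data
```

The "fix" a kid discovers (add the hand-drawn cat to the training set and retrain) is just another call to `train` with the expanded example list.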
And also have more nuanced conversations around bias, right?
Like, OK, I trained a model so I can play rock, paper, scissors,
but it only works for me, it doesn't work for my friend
because my friend has a different skin color.
I know how to fix it. I'm going to go back to my
training and add pictures of my hand and my friend's hand, because

(05:32):
kids really want to be able
to play with their friends and care about these things. But the approach of AI
ethics and critical AI literacy needs to be playful to engage
young people instead of saying, you know, these models are biased.
There are a lot of problems with them; that's very paralyzing for
people. Like it doesn't give them agency

(05:53):
to go and fix them, right? Or to to really understand like,
why are they biased? Where is the bias coming from?
And how can we address that? Do adults have the same
willingness to kind of dig in, understand how things work in
order to gain that exposure to a deeper understanding of it?
Or with kids with neuroplasticity, are they more
prone to just be able to grasp anything quickly?

(06:14):
Yeah, I think with kids it's super fun because like whatever
you put in front of them. Like we did all of these workshops
where they even had to draw what's inside a voice assistant,
draw what's inside Alexa, or make a kit where they would
design their own assistant. Or, like, naturally they love to
break things, and then teaching them how to fix them.

(06:35):
It's like a very powerful skill. But they're very good at
breaking even Gemini chat or ChatGPT or any of the current models.
Like they would think outside of the box and ask questions you
don't expect or come up with examples that you don't expect.
So they're very good at breaking technology, but then it's like
teaching them how to engage in this cycle of break it, fix it
and kind of formulate hypotheses, test those

(06:57):
hypotheses, and then kind of refine their understanding.
So for that, like having a platform that provides the right
level of abstraction, like what do you tinker with is key,
right? And this doesn't need to
necessarily be like block programming language, like
Scratch. That's a good entry point
because they all know it. So they don't feel intimidated.

(07:18):
They're like, oh, I've seen this before and now I can use these
blocks to also program cameras and program robots and teach,
you know, voice assistants. So it's not intimidating.
It's a good entry point, but that's not always the right
level of abstraction or heuristics.
And I'll tell you about what are some things that I'm
building now with multimodal AI and what level of abstraction

(07:39):
I'm choosing for that. But I think for adults, like it
really depends. So the pandemic was a good
eye-opener for me because I had to go from working in person with
kids around the world to actually doing everything
online. And as you can imagine, it's
pretty hard to do online, like hands on workshops.
So I really started working with families, right, because kids

(08:02):
were stuck at home. And throughout my PhD for three
years, I worked with the same community of 20 families from 10
different states in the United States, very different
backgrounds, very different levels of exposure to
technology, also different ethnicities and, like I said,
different regions of the country.
And for those three years, it was eye-opening how

(08:26):
important it is like to engage the parents and engage the
families. And I think it was the right
timing too, because like the parents were disappointed with
what's happening in school because they were exposed to
how the kids were learning and what the kids were
learning for maybe the first time to this extent.
And then also they didn't know what to do with their kids at

(08:47):
home. So they were eager to kind of
like engage in learning activities and workshops.
And so that was great. And I think that in the
beginning, the parents were more skeptical and less
into tinkering, but then they got inspired by the kids, right?
So it was one of these really cool partnerships where
the parents are there to kind of mentor and guide and make sure,

(09:10):
you know, like they help. Like this is how you open a file
and save a file. More like the IT support.
But the kids definitely knew much more about AI and were more
eager to kind of experiment and try.
And the parents were taking a backseat and learning from their
kids. So I actually wrote a paper
analyzing all the roles that the kids and the parents take and

(09:31):
how that changes depending on what they learn.
But yeah, I think kids are an inspiration for adults, even for
their parents or for their teachers, because I'm talking to
a lot of teachers and they're seeing their students using all
these tools. So kind of saying like, we're
going to ban using Gemini, using ChatGPT, is like the early days
when they were banning Wikipedia.

(09:52):
It's not going to work. So it's really about like, how
do we teach young people to use it mindfully, right?
Like what's, what's the transition from digital
literacy, media literacy to generative AI literacy and when
to trust that, how to look at the sources, how to use it, what
to delegate, what not to delegate.
So we don't have de-skilling, right?

(10:13):
But yeah, I think adults are being inspired by how much
and to what extent young people are using these technologies.
And they're starting to tinker abit more, too.
Good. Yeah, it's if you can have the
the viewpoint of lifelong learning, it's it's a wonderful
thing. I just did a little stint at
this company called Thunkable, which is a no-code mobile app platform.
It also uses Scratch-style block-based coding.

(10:34):
And it's really cool seeing people who aren't really technical
being able to get into it. But what I'm curious about is if
you've done any research into the children who just aren't
interested in tinkering and building, that sort of thing, and
developing their AI literacy. Are there any avenues that help
them be sure that they're fully aware of what's going on without
that desire to tinker and get to the middle of it?
Yeah, it's a great question. So a lot of the work that comes

(10:56):
from Lifelong Kindergarten, and this goes back to the
tradition of Seymour Papert and constructionism.
And it takes a more Socratic approach to learning.
And that doesn't necessarily mean that you need to build
things or tinker, but more that it builds on this philosophy
that people will figure it out as long as they are supported to

(11:21):
ask questions and kind of explore.
Like we have a lot of wisdom in our priors and our intuitions
and in our approaches to learning.
There's this folk philosophy that says if you put a person
in front of a river with a pile of rocks, eventually they'll
figure out how to build a dam, right?

(11:42):
Like that's the Socratic approach.
And yeah, I think it all stems from what you are into.
Like, what are your interests, right?
So the reason Scratch became so popular is because it allowed
people, like if you're into music,
you're going to create songs.
If you're into animations, you're going to make animations.
If you're into fashion, you could do like whatever you're

(12:04):
into. It was like allowing you to express
yourself through creative coding based on your interests.
But it doesn't need to be coding, right?
Like if you don't want to build anything, if you're
just into literature or into sports, you
like to go and run outside, right?
Like you could still, in that context, use tools to

(12:28):
help you keep track of your throws, right?
Or like, I don't know, like coach you like when you're
struggling at school because someone is bullying you or right
now, like this technology is embedded in so many different
aspects of young people's life, like their college application

(12:49):
and like the way they're being graded.
You know, I don't know if you've seen this, but last
year the teens took to the streets in the UK to protest because
an algorithm was being used for grading them.
And the algorithm graded more harshly students coming from
high schools in low SES neighborhoods based on the zip

(13:12):
code. They, they were like, this is
unfair. Like we trust our teachers.
You should, you know, like respect our teachers.
And they did this massive protest in London.
So, yeah, I think it's not only for for people who want to
build, but I do think that young people are using applications

(13:33):
that have like AI filters for TikTok, right?
Or, well, they are using AI in so many shapes and forms.
So having support for inquiring like what data is this
collecting? Am I getting the right answers?
Like, do I understand what happens with that data?

(13:55):
Like what recommendations am I going to get based on what I
click on Instagram, right? So it's affecting their
lives in a very, very tangible way.
So I think this kind of AI literacy is for everyone, not
only for people who like to build.
Staying on top of AI news shouldn't mean endless
scrolling. Anetic Daily Intelligence is a

(14:16):
free daily report uniquely curated just for you, like a
personal assistant updating you on exactly the AI news you need.
Every report is unique to each reader and continually adapts
based on your feedback, keeping you ahead of the curve on the
topics that matter to you. Sign up now at anetic.co.
One thing you kind of touched on is just how people are
directly influenced by it. And I'm curious about the degree

(14:37):
of anthropomorphization around the AI and if that has an impact
because in my mind I see, you know, OpenAI's
Advanced Voice Mode. It's a blue dot that's kind of
amorphous. It sounds real, but it's very
clearly a robotic entity. But as we try to make it more
like a he, she, it, they, does that influence the way people either
learn from AI or interact with AI.

(14:59):
Yes, and I've done a lot of research on, for example, when
the voice assistants came out, there were lots of
different voices. There was an assistant called Q
that had a gender-neutral voice. And we asked kids to actually
discuss this, right? Like how does the voice change
how they trust it, how they interact?
How do we design a voice for this type of assistant, right?

(15:21):
And that's even more important for the powerful LLM assistants
that we see now, because they're much more believable
than the voice assistants were in 2016, 2017.
And I feel like there's two aspects to this that I want to
talk about. One is that anthropomorphization is not

(15:44):
always bad per se. And the fact that we do that is
evolutionary speaking very logical because we had to make
sense of the world based on references to us, right and what
we understood best. So it's human nature, like even
when you look at a plug and see a face. And of course,
young people and the elderly do it more and are kind of more

(16:07):
prone to being deceived. It's a problem if technology design is
really honing in on manipulating the human
likeness. But anthropomorphizing as a human process of
sense-making is not negative. Like we actually do it in how we

(16:29):
talk about things. Like, we wrote a paper
creating a taxonomy and showing what are all the
different ways in which people anthropomorphize when they
talk about different AI technologies.
And this was with adults; it's with Emily Bender and Nanna Inie.
It was published at FAccT last year, and it showed that,
from the verbs we use, like if we attribute cognition,

(16:53):
right, like to a self-driving car, like the car
drives, right? Like that's already
anthropomorphizing, right? Even when you say artificial
intelligence, that's an anthropomorphic term because we
don't say probabilistic automation, right?
Or like we don't. So we, we assign intelligence to

(17:14):
a machine, right, or to a system.
So it's so embedded now in so many of the terms and
expressions and ways we talk about these technologies.
And yes, I think sometimes that sets the wrong expectation, but
we need to realize that that's human natural tendency to do

(17:36):
that. So I think that's
kind of my comment on anthropomorphization, which
tends to always get a bad rap. And I don't think it should
always get a bad rap. Now when it comes to technology
design, in my experience. And you know, I got invited to
kind of advise and evaluate on the Khanmigo tutor design

(17:56):
and a lot of these technologies for kids.
And I've been studying like smart toys and when they were
trying to put like a voice on everything, and Barbie had a voice,
and things like that. I feel like a lot of times when
we over anthropomorphize, it's very often used as a way of
disguising limitations in the technology because it's kind of

(18:18):
like a quick trick to say, look how appealing this
is. And look, we talk like a human
and look like what a cool personality and it's fun.
And instead of actually like having that technology be
reliable or like always provide the right answer or do what you
expect it to do. So it's like, I don't want

(18:40):
to call it maybe a cheap
trick. That's too harsh. But it's definitely a trick that
technology builders and designers use.
And I don't think it always pays off, right? Because even if
these voices are really cool and, like, I like the animation and I
like talking to it, at the end of the day, if it's not going to

(19:00):
have good answers, or if it doesn't do what I expect it to
do, I'm not going to use it.
And the thing is, if you have very high
anthropomorphization, it sets very high expectations.
So it's easier to be disappointed when the technology
does not meet those expectations.
Yep. One note on smart toys that you

(19:21):
mentioned, it's just something I've heard from a couple of
people, but I've been hearing that kids aren't as
interested in the smart toys as just the more traditional dolls
or action figures or whatever, because it almost takes away their
imagination, their imparted personality on it, and
forces one on them. Have you noticed anything like
that? Yes.
And I actually taught two classes on smart toy design.

(19:44):
One was at RISD in industrial design and the other one was at
ITP at NYU. It's kind of like an interactive
technologies degree at NYU. And it was amazing because we
did play-a-thons. So the students, these were like
design students or interaction technology students, master's
students and they would design smart toys that made sense for

(20:06):
kids. And we actually went to schools
and tested them with kids and had kids come over and did like
lots and lots of iterations. And the idea was really to only
add AI or technology if it made sense to the play experience and
provide so many other examples that are not voice, right?

(20:27):
And that didn't take away from the play
experience and that didn't take away from kids' imagination.
It was kind of cool because the students had this idea, as you
know, of like they did like inflatable Legos, for example,
with special molding and casting.
Or there was one musical box where you would hum a song and
then it would, on device, recognize the song.

(20:50):
And then you would put different cards that can add different
sound effects and you would crank it and you would play back
that song, but with the effects of the card instruments.
So these are the type of things like I think we really need to
stimulate people's creativity and imagination.
And I do think technology can do that if used properly, right?

(21:13):
And for that we need to do iterative design, design with
the kids, listen to what the kids like and see how they play.
My students, I remember with the first quick and dirty
prototypes, when the kids came in and started playing,
they were mind-blown, because the kids would always do
things they didn't expect. The thing that they thought, like,

(21:33):
oh, that's the coolest feature, would be
ignored, right? So they quickly learned, I'm
not designing for myself, right? Like I think a lot of the people
in the AI space today make this mistake where they're designing
for themselves and they think they know their user and
actually, you know, the way things were when I was a kid are

(21:55):
very, very different than the way things are for kids today,
right? So I cannot, I cannot use my
experience as a reference. So I really need to understand
how this generation of young people like to play and like to
interact with toys and then design for that.
But yes, I think in general, one trend we've been seeing, which

(22:17):
is worrisome, is that they do spend much more time on tablets
and YouTube and media consumption.
And if you think about it, goingback to like to tinker or not to
tinker, right? Like a tablet doesn't have a
keyboard, it doesn't allow you to input things easily.
It's more like a design that was made for consumption versus a
laptop, which is more a device where you could actually create.

(22:39):
But yeah, the trends we were seeing, and this was in 2018,
right? Like when I taught this class
and the students built these toys, many of them
actually ended up being published.
And I think one team went and built a company.
But yeah, from 2018 until now, we were already

(23:01):
seeing like trends that they prefer to spend time on tablets
and YouTube or just consume media and play less with toys.
My prediction is that it's only gotten worse and that I haven't
checked the stats of toy
But I do think that unfortunately children don't
play as much anymore, which is which is a shame.

(23:25):
Yeah, I mean, I remember growing up with Lego and playing road hockey
and all these things, and you just don't see that as much anymore.
And I mean, things come out like Minecraft can give you that
builder sensation, but you're still interacting just with
the screen. And not just for kids, in general,
Do you have any advice or principles for building tools
that generally augment creativity rather than just like

(23:47):
force you into a certain path? Yeah, I thought about this a lot
recently because I'll give you an example that directly
impacted me. I'm learning Japanese.
I'm in Tokyo right now, behind me.
It's a hard language, right?
And I've been trying very hard. And of course I'm using AI and I

(24:07):
use it, like, I use Suno to generate rap songs
with references to Romanian and Japanese.
And I listen to those in the subway.
And that's kind of how I hype myself to stay motivated
learning the language. And of course, I'm taking
classes with people in person and that's amazing.
But when I write, right, like, I think a lot by writing.

(24:30):
And I noticed that when I write on paper versus when I write on
a tablet or any digital device, I don't retain information the
same way, which is kind of crazy, right?
Like I retain information much better when I write on
paper. It's just like even that
pressure of how much effort I need to put on the pencil,
like to, to write on paper versus a digital device, because

(24:52):
that's again, evolutionary, right?
Like, and even the fact of writing per se, let's set aside
for now digital or non-digital, just
writing. So for me, I'm building these
new tools and interactions to support kids' learning, and I'm
constantly thinking, what is the natural workflow where

(25:16):
they would learn the most, and how can I bring the technology
there rather than taking them away from that workflow and
bringing everything onto a device, right?
So I'll give you two concrete examples. One I call
Mathmind. It's a tool I recently built; it
allows kids to just solve math on paper.

(25:38):
And you can put a webcam anywhere, or you could use your
phone, just point it at your paper, and in real time, as I'm
solving there, it's going to ask me questions.
And it uses a model that actually has a reference to a
benchmark of the 55 most common misconceptions that kids have in

(25:59):
algebra in middle school. So if it sees that I
have one of those misconceptions, like I don't
know how to work with fractions yet, or the order of
operations, you know, there are lots of these kinds of
common mistakes, it's going to ask me questions to highlight
that it's not just the mistake
I'm making in this example, but that I don't understand the

(26:21):
principle. Maybe give me suggest an
exercise where I can practice until I get the principal.
So that's like kind of like working on paper in real time.
So it's streaming and just asking me questions to debug my
thinking about math. But like all I'm doing, I'm not
changing anything, right? Like I'm just solving math on

(26:41):
paper and I'm kind of working through and I have like this
interactive debugging dot that can ask me questions about
my math, right? That's one example.
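The misconception-matching idea behind a tool like this can be sketched as a lookup from (problem, wrong answer) to a Socratic question. Everything below is hypothetical: the catalog entries, function names, and fallback prompt are invented for illustration, not the actual Mathmind benchmark of 55 misconceptions.

```python
# Toy sketch of misconception-guided Socratic tutoring: instead of
# grading right/wrong, match the student's wrong answer against a
# small catalog of known misconceptions and respond with a question
# that targets the underlying principle, never the solution.
MISCONCEPTIONS = {
    # (problem, wrong_answer) -> (misconception_tag, question_to_ask)
    ("2 + 3 * 4", "20"): (
        "order_of_operations",
        "Which operation should happen first here, and why?",
    ),
    ("1/2 + 1/3", "2/5"): (
        "adds_numerators_and_denominators",
        "If you had half a pizza and a third of a pizza, is that 2/5 of one?",
    ),
}

def check_work(problem, student_answer, correct_answer):
    """Return feedback: praise, a targeted Socratic question, or a generic prompt."""
    if student_answer == correct_answer:
        return "Nice -- can you explain how you got there?"
    hit = MISCONCEPTIONS.get((problem, student_answer))
    if hit:
        _tag, question = hit
        return question   # probe the principle, not just this one slip
    # Unknown mistake: fall back to asking the student to narrate their steps.
    return "Walk me through your steps -- where does each number come from?"

print(check_work("2 + 3 * 4", "20", "14"))
```

A fuller version would also count repeated misses and escalate to a concrete hint, matching the "ask the same question three times and it helps you" behavior described later in the conversation.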
The other one is in chemistry, right?
So I'm realizing, with multimodal, there's all of this
world of opportunities that go away from the devices, right?

(27:02):
So we can act like in the real world.
So if I want to do like experiments, all of a sudden now
like I can have my experiment bench and have sensors that are
connected. I'm using Jacdac, which is
building on the micro:bit, the cheapest board that most of the
schools have, most of the kids have.

(27:22):
It's like, I don't know, maybe $15 now, and then you can plug
and play any type of sensor. So in real time it's going to
measure like temperature or light or CO2.
And then I can kind of put on the table like things I have at
home, right? And be like, oh, what can I do
with this? Like what experiment can I do
with this? Or like, if I, I mix this and

(27:43):
this and in real time, it's going to read the sensor data
and be like, oh, what do you expect, right?
Like, or do you need more heat? Or what is your
solution going to look like if you add this, right?
And it integrates with the webcam.
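The real-time sensor loop she describes might look like this in miniature. The readings are simulated (a real bench would poll a Jacdac/micro:bit module over its own API), and the threshold and prompt wording are invented; the point is just that a notable change in the data triggers a question, not an answer.

```python
# Toy sketch of the experiment-bench loop: watch a stream of sensor
# readings (simulated here as a list of temperatures) and emit a
# Socratic prompt whenever consecutive readings jump sharply.
def watch_experiment(readings, jump_threshold=5.0):
    """Return a prompt for every sharp jump between consecutive readings."""
    prompts = []
    for prev, curr in zip(readings, readings[1:]):
        if abs(curr - prev) >= jump_threshold:
            prompts.append(
                f"The reading moved from {prev} to {curr} -- "
                "what do you think caused that? What happens if you keep going?"
            )
    return prompts

# Simulated temperature trace: stable, then a spike when heat is added.
trace = [21.0, 21.2, 21.1, 28.5, 29.0]
for prompt in watch_experiment(trace):
    print(prompt)
```

In a live setup the list would be replaced by a polling loop over the sensor, and the prompt could be handed to a multimodal model along with the webcam frame for context.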
And the cool thing is that the webcam can become a microscope,
right? So like any webcam, if you
flip the lens, it magnifies the image much more.

(28:05):
And then all of a sudden I can look at, like, oh, you know,
my food, or water from the tap.
So these are, I hope these
two examples highlight how I'm thinking about like meeting
people where they're at. And this is not just for kids,
right? Like it's like, what's your
workflow? Where do you work?

(28:26):
What are the things that you manipulate and kind of naturally
interact with? And how can you bring the
technology support in situ in real time and make it seamless
like, so you don't have to go and type somewhere or take
pictures and upload, so it doesn't break your flow

(28:47):
and it doesn't take you out of the zone, right?
And I think that's a really good principle in general.
Like if I like to write, I don't know, locally in Word
or Docs or whatever it is, I want everything to happen there,
right? Or if I work with Photoshop and
I invested years to learn how Photoshop works, like that's

(29:07):
where I want my support, right? So yeah, I think I think this is
a good principle of like supporting real time interaction
in situ and allowing people to focus on their workflow without
having to worry about going and communicating with a side AI
assistant. Brilliant.
Yeah, reducing the friction. Bring it to where the user's at.

(29:28):
Super important principles. It also, though dangerously,
approaches the point of abstracting away the decision
making from the user and just having something be like a set
of rails that the user just gets on and is pushed through.
Do you have any principles or guidelines for when you build
the applications to ensure that the user is still in control and
it's just having the AI poke and prod them in the right direction

(29:49):
rather than like steer them completely there?
Yeah, 100%. So these two examples, like both
the chemistry one, it's called Chembody,
and Mathmind, they never give you solutions.
They ask you questions, right? So they're extremely Socratic.
So it's just like, OK, let's try to take another

(30:10):
picture. Like, oh, what do you think
would happen here? Let's write down your
prediction, write down your observation.
Let's look at it. What if you did this and that?
What do you think will happen next?
Right. So I think the art of teaching
is actually asking good questions.
And this is how I teach, by the way. I also
taught in high school, and I ask a ton

(30:34):
of questions. And yes, of course, if you're
really stuck, and it's actually programmed this way,
if you ask the same question three times and you
really don't know, it's going to help
you. But the goal, and what
I do at least, is to make these technologies be kind of
supporting your thinking, like tools for

(30:54):
thought rather than giving you solutions.
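The escalation rule described here, guiding questions first and a direct hint only after the learner has asked the same thing three times, can be sketched roughly like this. The class and the canned strings are hypothetical stand-ins; a real tutor would call an LLM for each response:

```python
# Minimal sketch of the "Socratic first, help after three asks" rule.
# All names here are hypothetical; a real tutor would generate both the
# guiding questions and the hint with an LLM instead of canned strings.

class SocraticTutor:
    def __init__(self, max_retries=3):
        self.max_retries = max_retries
        self.ask_counts = {}  # question -> how many times the learner asked it

    def respond(self, question, socratic_prompt, direct_hint):
        """Return a guiding question until the learner has asked the same
        question max_retries times, then fall back to the direct hint."""
        count = self.ask_counts.get(question, 0) + 1
        self.ask_counts[question] = count
        if count >= self.max_retries:
            return direct_hint
        return socratic_prompt

tutor = SocraticTutor()
q = "Why did the solution turn blue?"
print(tutor.respond(q, "What do you predict happens as the copper dissolves?", "Copper ions absorb red light."))
print(tutor.respond(q, "What did you observe just before the color changed?", "Copper ions absorb red light."))
print(tutor.respond(q, "Compare that with your prediction.", "Copper ions absorb red light."))
# Only the third identical question receives the direct hint.
```

The counter is keyed per question, so a new question resets the Socratic cycle rather than inheriting the old retry count.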
And I think a way of focusing and honing in on the user
and their agency is to have more efforts like this
misconception example, right? Like if we really understand

(31:14):
what are all the possible mistakes that a human can make
in a particular task or in a particular domain, then the LLM,
or the mixture, whatever architecture we want to use,
the AI is going to be able to support the human better,
because it's going to understand: this is the type of mistake

(31:35):
that the person is making, or this is the potential avenue
they're going down. And this is the type of question
I can ask them. So it constrains the
support space to known learning scenarios,
which is what a good teacher does, right?
Like if you taught the subject for a while,

(31:56):
you sort of know, oh, at this point most of the
people are going to make this mistake, right?
Or this is a common misconception.
And you're kind of designing your teaching strategy
for that. So I think we can do
the same with AI, but this requires us to really understand
the design space, the learning space, to really understand kind

(32:17):
of, in a task, in a domain. And it's very domain specific:
what are all the ways in which things could go wrong, and
what are all the common mistakes that people make.
The other thing is to allow the users to customize some of
this, like allow them to enter a system prompt, or allow them

(32:38):
to kind of control. Like a lot of the kids in the
Scratch copilot I built wanted things like, I always want it to
give me three ideas, not one idea, if it generates characters or
names for things, or, I always want it to give me like five images
or so. So people could actually
customize, right? Like what type of support and

(33:00):
how much they want the AI to give them or not.
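Letting users dial in how much support they get, like the kids asking for three ideas instead of one, could look roughly like this. The field names and prompt wording are invented for illustration, not taken from any real product:

```python
# Sketch of user-customizable support preferences folded into a system
# prompt. SupportPrefs and the prompt text are hypothetical examples.

from dataclasses import dataclass

@dataclass
class SupportPrefs:
    num_ideas: int = 1            # how many suggestions per request
    num_images: int = 0           # how many images to generate, if any
    give_solutions: bool = False  # hints only, or full answers

def build_system_prompt(prefs: SupportPrefs) -> str:
    """Turn the user's preferences into system-prompt instructions."""
    lines = [f"Always offer {prefs.num_ideas} distinct ideas per request."]
    if prefs.num_images:
        lines.append(f"Include {prefs.num_images} images when relevant.")
    if not prefs.give_solutions:
        lines.append("Never give full solutions; respond with guiding questions.")
    return "\n".join(lines)

# A kid who wants three ideas and five images each time:
print(build_system_prompt(SupportPrefs(num_ideas=3, num_images=5)))
```

Because the preferences live in a small structured object rather than free text, the application can expose them as simple UI controls and regenerate the prompt on every change.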
Very cool. Based on the research and
what you've seen, compared to the tools that exist in industry
today, what is your assumption as to some misconceptions
people have in building AI tools, versus what would lead to a
better outcome? I see a lot of people spending a

(33:21):
tremendous amount of time on benchmarks, like eval scores and
leaderboards, and picking the best model, when I think the
real difference is in the most lovable product.
And we keep talking about MVP, minimum viable product, instead
of minimum lovable product, right?

(33:42):
Like, what's the core interaction, or the core
value proposition, that you want to have for your users?
And very often you're going to get there through UI/UX and
really understanding the interaction, right?
And oftentimes it's not so much about using a more performant

(34:05):
model by, I don't know, 20% better.
It's really the amount of time you spend on designing
the UI/UX and the interaction, and how you collect the feedback
from the users and kind of plug that feedback back into your
application design. And I would love

(34:26):
to see more people really honing in on how to collect feedback
from people, and not in an annoying way.
Maybe it's just based on what they click, right?
Or based on how much time they spend on a task. In human
computer interaction, these are techniques that are
well understood and have been studied for so long. But kind of

(34:46):
use those techniques in these new generative AI tools and
applications, and really improve the interaction and the
user experience, versus focusing on kind of vanilla
industry benchmarks, and maybe

(35:06):
generate task-specific benchmarks based on what people
ask, right? Like if you see, OK, in my
Scratch copilot, a lot of the kids are asking this
question, and they're asking it in this weird way, like
make the dude walk, right? Like who's the dude?
How do we make it walk? And then I need to create a
benchmark that is specific for my use case, right, for my

(35:28):
application. And when I make decisions about
both the model, but also the prompts, or
whatever database searches it does, or the kind of
aesthetics of the artifacts it generates,
I need to run those benchmarks against what the
users in real time on my platform do and prefer.
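A rough sketch of promoting real user queries into a custom benchmark, in the spirit of the "make the dude walk" example. The log format, the expected-behavior strings, and the stubbed `run_model` are all hypothetical:

```python
# Sketch: mine logged user queries into task-specific benchmark cases.
# The logs, expectations, and run_model stub are invented for illustration;
# run_model would call the real application in practice.

from collections import Counter

logs = [
    "make the dude walk",
    "make the cat walk",
    "make the dude jump",
    "give me 3 character names",
    "make the dude walk",
]

# 1. Find the phrasings users actually repeat.
common = Counter(logs).most_common(3)

# 2. Promote them to benchmark cases with the behavior we expect.
benchmark = [
    {"query": q, "expect": "asks which sprite 'the dude' refers to"}
    for q, _count in common if "dude" in q
]

def run_model(query):
    # Stand-in for the real application under test.
    return "asks which sprite 'the dude' refers to"

# 3. Score every model/prompt change against the same cases.
passed = sum(run_model(c["query"]) == c["expect"] for c in benchmark)
print(f"{passed}/{len(benchmark)} benchmark cases passed")
```

Rerunning this same set after every model, prompt, or retrieval change is what closes the interaction-to-benchmark loop described above.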

(35:50):
So I think we're going to see a lot more effort in this
pipeline from interaction to custom benchmark, and iterating
on that cycle much faster, because that's going to lead to a
better product experience. And I think the real
differentiator is the product experience.
Yep, I fully agree. And I've heard from some

(36:13):
people that, of the startups that exist today in the AI space,
about 10 to 15% have their own eval set.
Most don't. Do you have any advice,
whether it's for a personal project or academia or
industry, on setting up an eval set to help with that iterative
loop? Any tips or tricks?
Yeah, I know there are a lot of frameworks that are

(36:33):
currently being developed. And my preference is, as much
as possible, to use real data instead of synthetic data.
And I know that people use LLM-as-a-judge because it's
easy, and you can still have that in the loop, and it can
be helpful. It would help you maybe

(36:54):
find certain bugs or certain issues.
But there's no way of getting around having a spreadsheet
with 30 golden examples that you really understand.
And that's your real baseline, right?
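The 30-golden-examples baseline can be as simple as a CSV and a loop. This is a minimal sketch with a stubbed `answer()` standing in for a real model call; the column names are hypothetical:

```python
# Sketch of the "spreadsheet with 30 golden examples" eval loop.
# The CSV columns and the answer() stub are hypothetical; answer()
# would call your actual model or application in practice.

import csv
import io

# Stand-in for a golden_examples.csv with input/expected columns.
GOLDEN_CSV = """input,expected
What is 2+2?,4
Capital of France?,Paris
"""

def answer(prompt):
    # Replace with a real model call.
    return {"What is 2+2?": "4", "Capital of France?": "Paris"}[prompt]

def run_golden_set(csv_text):
    """Run every golden example and collect failures. These should
    always pass before any model or prompt change ships."""
    failures = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        got = answer(row["input"])
        if got != row["expected"]:
            failures.append((row["input"], row["expected"], got))
    return failures

fails = run_golden_set(GOLDEN_CSV)
print(f"{len(fails)} failures")  # the golden baseline should show zero
```

Keeping the examples in a plain spreadsheet means non-engineers can review and extend the baseline, which is part of the point.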
So my advice for everyone: it's not so much about what
technology or framework you're using, but take the time

(37:15):
to create a spreadsheet with 30 examples that should always
work, and make sure you always evaluate your system against
those. Yep, look at your data, as they
say. I'd be keen on getting some of
your insights on just what the
state of the art is. Do you have any examples of
using Gen. AI for physical creative

(37:36):
endeavors? The example you gave with
chemistry was wonderful, being able to take in the sensor data
and whatnot. But anything else in the
physical world that you think is cutting edge and really
innovative? Yeah.
So there are so many. One that I love is in smell.
So there's this company, I forgot what it's called right
now, but I can look it up. But I remember they allow you to

(37:59):
customize perfumes. So you sort of describe what you
like, or you take pictures. Like, you know, I have these
flowers next to me, and these are my favorite colors,
and I take a picture of that, or a landscape I like, or maybe
how I'm dressed. And then I can even add a song,
like, this is my favorite song. And based on all of that and a

(38:19):
prompt, it would generate a fragrance that is customized to
me, and then they can ship it to me.
So I love that now, you know, with synthetic biology and
all of this work, the research done in the synbio space,
you could customize fragrances and get your own
customized perfume. So that's one example that comes

(38:41):
to mind. Trying to think in the physical
space. Of course, for running and sports,
I really like to have custom notifications based on my
exercise goals or my heart rate, or any kind of fitness
tracking and support with fitness tracking. Trying to

(39:03):
think. Robotics is huge. So I guess that qualifies for
the physical space. Hugging Face just had their
first AI and robotics hackathon in Paris, and it was
huge. And I love that they're using
this open source 3D printed robot.
And when I started playing with robots, I had a 3D printed open

(39:23):
source robot on top of a Raspberry Pi, which is also from a French
lab. It's called Ergo Robot.
And it looks like the Pixar lamp.
And it was like a joy to play with.
So, yeah, I think anything in the robotics space right now
is really being disrupted, because again, we have better
multimodal models, better vision models, combined with

(39:44):
code generation, combined with text.
So you could close the loop for supervised learning for
robotics much faster. And, I don't know if it counts,
but at the AI Engineering Summit,
the last one in SF, there was a robot barista, and it
was pretty awesome. Yeah, I think these are the
top of mind examples. I'm sure there are others, but.

(40:07):
That's the question. The whole space is exploding.
There's some all over. Yeah.
But along the same vein, how about multimodal?
I know that you're involved with that, and the apps are just
starting to pop up, and it's really cool.
Are there any that you think people should check out?
So I was actually looking at examples for video
editing, and I think there are a few that are popping up.

(40:29):
What's this one? Wondershare Filmora is one that
I've been playing with. And let me look it up; I was actually just
talking to these people on Twitter.
There's one that is kind of like Cursor for video editing.
Yeah, I think in the space of just co-creation for video,

(40:50):
for images, for PowerPoints: Gamma just released a new version.
This is kind of for PowerPoint-like presentations,
slick presentation design, which I think is pretty cool.
Cursor for video editing, it's called Ponder, Ponder
Studio AI. And yeah, I think in general, any sort of

(41:11):
suite for creator tools. Of course, podcast support,
and I really like NotebookLM. I don't know if you've played with
it; this is from Google, and that's not why I mention it, but I
really like the fact that, you know, there are so many papers
that I need to keep up with. And I love that I can just copy
paste papers and generate a podcast that I can listen to while I go

(41:34):
to the gym, or go and do a run, or while I'm commuting to work.
And now they have a feature that allows you to generate mind
maps. And I really like that
feature. I love mind maps.
So that's another one that comes to mind.
A lot of things in music. I talked about Suno earlier;
I've seen a plethora of co-creation with AI for

(41:55):
music. It's an exciting time.
I feel like a kid again myself, because every time I think
of something, I'm like, oh, I'm sure someone has built this.
And you go in, and it's so much easier now to kind of prototype
and play and really go from idea to prototype in a couple of

(42:18):
hours, right? So we live in exciting times.
Yeah, there's never been a better time to be a curious person
who has access to a computer. I live about an hour away from
everything, so NotebookLM changed my life, because instead
of just having something narrated or transcribed, I can
have it in a conversational tone.
Wonderful. I did introduce my dad to Suno,
so every special event now comes with a song, which is a lot of

(42:41):
fun. But last one for me:
if you could implant one core principle about human-AI
interaction into the mind of every AI entrepreneur, what would it
be? Observe how your users play with
your tool. There's no replacement for that.
When I used to do a lot of research, or even when I worked

(43:03):
with Fixie, I remember I sat down our engineering team
and I showed them videos of people using the tool, and
there's no replacement for that. I mean, it doesn't need to take
a long time. And these should not be
your colleagues or friends, but your target users, or people
you never talked to before, who are not part of the tech
bubble. If you're designing for

(43:24):
them, just, early on, send them a prototype and observe,
without interfering, without explaining how to use your tool.
There's no replacement for that. And you're going to learn so
much. And I think iterating fast
on this principle goes a long way.
Absolutely. Alright, Stef, this was a blast.

(43:46):
Learned a ton. Really appreciate you coming on.
Before I let you go, is there anything you want the audience
to know? I'm curious what you're
building. Feel free to share and send it
to me on Twitter or LinkedIn. And yeah, thanks so much
for inviting me and for all the thoughtful questions.
Absolutely, yeah.
We'll link everything down below. Looking forward to talking
to you soon. Take care.