Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Mmmm.
Speaker 2 (00:08):
Welcome to the Angular Plus Show, where app developers of all kinds share their insights and experiences. Let's get started.
Speaker 3 (00:21):
Hello everyone, and welcome to another episode of the Angular Plus Show. We have a very exciting talk. Well, for some it might be exciting, for some they will be like, oh my god, really, but I'm excited to talk about it. And with me, I have one of my favorite co-hosts. What is your actual name, Q?
Your name is not.
Speaker 4 (00:42):
Quantus, very very urban.
Speaker 3 (00:46):
I feel like you're messing with me again, like, you know, Brandon Roberts is your brother and Brendan Roberts too.
Speaker 4 (00:52):
I was like, Brendan Roberts is my brother with an
a brother, He's my brother. Fame Mark Texan.
Speaker 3 (01:01):
Oh, okay, okay. We also have a lovely guest here, Michelle,
to talk about responsible AI. Michelle, do you want to
introduce yourself?
Speaker 1 (01:13):
Yes, thank you, Jan. My name is Michelle Frost and I am an AI Advocate at JetBrains as well.
Speaker 3 (01:20):
Hah, JetBrains. Why does it ring a bell?
Speaker 1 (01:25):
I don't know.
Speaker 3 (01:26):
I feel like usually we have like all the Cisco people here from Lara, so I feel like this is kind of like keeping the balance, you know, like not that this is a scorecard, but it should be. Michelle, you are, I know for a fact that you're like touring around all the conferences right now and talking about responsible.
Speaker 4 (01:44):
AI. Well, hang on, hang on, hang on, hang on, hang on. You, you know her, Jan. So that's, no, we don't know her. So let her tell us a little bit about herself, what she does at JetBrains, and how she got into responsible AI, developing, advocating.
Speaker 1 (02:02):
Yeah, it's a, it's a long story. I'll give you the brief version. So I'm actually a baby at JetBrains. I just joined in January, so I think I just hit month four, and this is my first time in an advocacy role. Prior to that, I've been in engineering for over a decade. But I started machine learning in
(02:24):
twenty sixteen. And for context, I actually just recently, last year, finished a Master of Science in AI at Hopkins. And one of the reasons I went was because I had uncovered a few different cases of fairness issues
(02:45):
in different contexts where, you know, bias was relevant in a, you know, machine learning model, and I started asking myself, okay, well, to me, that seemed like an obvious maybe don't do that, but are other people doing this? And the answer was yes. And so I kind of spent a few years thinking
(03:05):
about this, and then over the pandemic, I thought, what else to do with my time than to go back to school to a grad program, which is not for the faint of heart. And ultimately I kind of felt that I could do, you know, quite a bit as an engineer, as a solo person, but I could
(03:26):
do more if I taught other people the things that
I've learned and what I'm looking for and try and
you know, help people navigate uncertain waters. Honestly, with the
change of AI and machine learning over the past couple
of years, it's it's been a lot and rapidly.
Speaker 3 (03:45):
So Q, do you now have the feeling that you know Michelle? Was that helpful for you?
Speaker 4 (03:50):
I know that she's super smart, so, well, she's smarter than me in the AI department for sure. So I mean, I'm gonna learn a lot today. I'm happy for that. We've been on a run. This is like a legendary run on the Angular Plus Show where we're just learning all these things. We've had a lot of AI stuff. So before I was kind of eh
(04:11):
about AI. But I've been vibe coding the last like two weeks now because of all the AI talks we've had, so.
Speaker 1 (04:17):
Hey, hey, yeah, it's interesting too to see how much, like, the landscape has changed, you know, because a couple of years ago when I started this program, people were telling me that another AI winter was coming,
which I disagreed with at the time, and I do so,
you know, even more now. But it's been a wild,
wild couple of years in the space.
Speaker 4 (04:39):
Yeah, I think I took a little offense when they're like, oh, AI's going to take over, and it's like, I heard that in twenty fifteen. And we did, like, I was working for MoneyGram at the time, and we made a MoneyGram chatbot, which was really just, you know, a Facebook app, that Facebook Messenger thing, where you could talk to MoneyGram and schedule a payment and get your money. Pizza Hut did one. We could order
(04:59):
a pizza through the app. I mean, it was this AI, air quotes, and then it was nothing for years, and then I heard about ChatGPT, and now it's part of everyday life for us. We are all using it in our day to day pretty heavily.
Speaker 1 (05:17):
Yeah, I mean, it was around prior to that too, just like in more secretive ways, I would say. You know,
it wasn't as big of a marketing campaign for companies
to say that they had data science, you know, models
running algorithms, and now everyone's like, you know, we're AI
powered and we've been AI powered.
Speaker 3 (05:37):
When do you, where do you see, like, this uptick trend? Or like, when did you notice it the first time? Because I'm completely with Q. Like, once ChatGPT three-something-something came out, that was like the first time it was usable and not, like, within hours putting out some questionable comments. Mm hmm. So I
(06:01):
would probably like say, like twenty twenty three, maybe it
popped on my radar the first time, like to be
like meaningful.
Speaker 1 (06:08):
And like, yeah, I think, I mean, so for context too, for anyone who might not know this, like the model, the Transformer model, the paper originally came out in twenty seventeen, Attention Is All You Need, Google DeepMind's paper. And so I think prior to GPT, I believe BERT came out,
(06:30):
which was the encoder side, whereas GPT is the decoder side. But you know, GPT had been cycling in various models that were open source, and then it wasn't until, what was it, November, I think, of twenty twenty-two that ChatGPT, they basically put a UI interface on it, and I don't think anyone was expecting that
(06:52):
public response to it, and it just completely changed, you know, a lot of enterprises' roadmaps, even companies that already had, you know, AI systems in place or had budgeted for what they anticipated doing in twenty twenty-three. It kind of threw everybody for a loop. Not necessarily that the technology had changed overnight, but the public response to
(07:15):
it had.
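A quick illustration for readers: here is a minimal sketch, not from the episode, of the encoder versus decoder split Michelle describes, using the Hugging Face transformers library. The checkpoints bert-base-uncased and gpt2 are assumed stand-ins for the BERT and GPT families mentioned above.

```python
# Illustrative sketch (not from the episode) of encoder vs. decoder models.
# bert-base-uncased and gpt2 are common open checkpoints used as stand-ins.
from transformers import pipeline

# Encoder-style (BERT): fill in a masked token using context from both sides.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The Transformer paper came out in [MASK].")[0]["token_str"])

# Decoder-style (GPT): continue text left to right, one token at a time.
generator = pipeline("text-generation", model="gpt2")
print(generator("Attention is all you", max_new_tokens=10)[0]["generated_text"])
```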
Speaker 4 (07:17):
The public the public like us, the consumers who are
actively using it, but not necessarily businesses though. I think
there was a weird turn where some businesses were like,
let's go all in on AI, and then at some
point it switched where it was like, don't use AI.
Speaker 3 (07:35):
I don't know.
Speaker 4 (07:36):
I guess it's gotten a little more comfortable now that you can have like these private instances, I guess. But there was a time at my last employer where, like, if we got caught using ChatGPT to write code, we could get fired. That was a thing.
Speaker 1 (07:51):
Yeah, I think there were, like, there were definitely a lot of, and there still should be, like, you know, CYA, see, I said I wasn't going to cuss, cover-your-ass policies, you know, shaping what we're going to do and how companies were responding. But there's still, like, what
(08:12):
I saw too in the consulting space was a pretty
fast response of noticing what the consumers wanted and saying, Okay,
what do we what do we need to do on
the enterprise side.
Speaker 3 (08:24):
Okay. That's one thing that I'm curious about from your perspective. How popular do you think it is on the public side right now? Because I feel like, if I look, like, if I speak broadly about like my peer group and like my age group, I think most people use it to some extent. If I look at
(08:45):
my in-laws or my parents... So what is, what do you consider public in that sense?
Speaker 1 (08:53):
I'd say generational? Like there's, there's obviously quite a bit of difference. Like my parents, for example, aren't using ChatGPT. They know what it is, but they're not like active.
Speaker 3 (09:04):
Users. Better in that sense than mine. Well, you.
Speaker 1 (09:08):
Know, my dad cuts me out any AI newspaper article stuff.
It's it's really sweet, but they're not active users of it.
I think over night. I mean they had the one
of the quickest sign ups you know for a product
in you know, unprecedented history. But the AI, like the
(09:32):
public AI discourse has not, is not new to twenty
twenty two. This has been around since, you know, before
we have the term AI. We had this, you know,
futuristic robots. That term was coined in the nineteen twenties
from a play, uh so that existed before artificial intelligence existed, right,
(09:53):
and, you know, public perception at the inception of AI was kind of doomsday. Okay, we're immediately going to get this thing that takes over, and, you know, then we had the, you know, HAL 9000 kind of science fiction attitude. And I think that there's been
(10:14):
a lot of back and forth for you know what
sixty plus years of what does the public think AI is?
How does that influence what we're building? You know, and
back and forth when expectations aren't met or you know,
when they think of like where we need to go,
I would align like the AGI artificial general intelligence hype
(10:40):
train along the same kind of path too with what
does the public expect and what are researchers like trying
to achieve? What problems are they trying to solve? Or
you know, what is that limit that line that we're
trying to push, you know, beyond current capabilities.
Speaker 3 (10:57):
Maybe real quick, just to provide like a level playing field for everyone here. I obviously know all those terms that you just throw around very casually. But let's assume I wouldn't know. So I think within like the last five sentences, we threw around AI, LLM, and AGI,
(11:18):
and then there's technically also agentic stuff, but let's ignore that for a minute. How do you differentiate those, just for, like, everyone to be on the same page?
Speaker 1 (11:29):
Yeah, thank you for calling me out on that, because
I am I definitely do that often. So I mean,
you know, we'll start with AI. Maybe that's the easiest
thing to start with artificial intelligence. So this was a
term that was coined in nineteen fifty six at a
summer workshop at Dartmouth, and it was initiated actually from
(11:51):
a nineteen fifty paper by Alan Turing, Can Machines Think? Turing died in, I think, fifty-four, if my memory is correct, and a group of researchers said let's continue the work, locked themselves in at Dartmouth for a summer, and they came out with the term artificial intelligence and
(12:11):
a few kind of goals of the field. One was
that it was supposed to be interdisciplinary. Two was that
they knew it was going to be expensive, so they
were like, how do we fund this? Probably the military,
you know. And three it was like we need to
have you know, a network of academia, of industry, of
(12:35):
you know, private government all working together, you know. And
then they tried to solve chess and expectations were not
quite met. But so that's kind of AI. Now if
we look at it in a modern, you know, viewpoint. If you picture a Venn diagram, okay, so you have
(12:56):
a circle and that is AI next to the circle
with some overlap.
Speaker 4 (13:01):
Is data science makes sense?
Speaker 1 (13:05):
Now, on the perimeter of AI that does not intersect data science, you kind of have these old school methods, so your classic computer vision, which is still arguably partially in data science, but you know, some classic, like, state space search, old school, like, game algorithms, right? And then
(13:29):
in data science you have machine learning. Okay, machine learning started in like the eighties. Within machine learning, picture another bubble of your neural networks, right, and inside neural networks we have deep learning, and then inside deep learning we have, you know, all these kinds of modern, the generative, right,
(13:52):
your generative natural language processing, so siblings like your large language model siblings, ChatGPT and Claude and, you know, XYZ, and then you have your deep vision models, your generative deep vision as well as your, you know, discriminative. So, so you kind of start, you start getting
(14:16):
smaller and smaller. And what a lot of people don't
realize is like the space that we're operating in right now,
we're in this like the bubble inside the bubble inside
the bubble inside the bubble. So we're in this small
little space compared to the you know, nineteen fifty six
to two thousand and twenty five, Like a lot of
stuff has happened in that time. So that's kind of
(14:39):
the quick and short and dirty history of AI to today.
Large language models, LLMs, being again like your ChatGPT and siblings that most consumers are now familiar with. And what was the other term? Keep me honest here. AGI,
that I yes, Okay, So there's sort of like currently
(15:03):
hypothesized three kind of tracks of AI. So what we've
been operating in is what we're calling artificial narrow intelligence
ANI and ANI are models that you know, they can
do something very well. Okay, one thing, they're doing it
pretty well. It's it's a focus that is classic. You know,
(15:27):
we've had machine learning models for literal decades that have, you know, done this or done that, a discriminative thing, a prediction, whatever it is. And the next, like, layer up would be artificial general intelligence, meaning you could have one model that is smarter or, general, more generalized
(15:50):
than most humans, okay, or, or an average human. So, like, most tasks, it will be as good as, if not better than, your average human. And then ASI, which is your artificial superintelligence, is that like very science fiction one that is going to be smarter than every human
(16:10):
on the planet kind of thing. Yeah, And my thing
is like, so if we talk about AI ethics and
responsible AI, I think a lot of times this AGI
artificial general intelligence conversation takes over and you know, people
are talking about it like it's today or tomorrow's problem,
(16:33):
like you know, right around the corner, and we need
to be thinking of you know, is is AI conscious?
Does it need rights? You know, all these kind of
far away things. And I'm not saying that we shouldn't
eventually get there, Probably not on the rights of an
AI model, but you know, the forecasting of how many
(16:53):
jobs is it going to replace? Some of those things
are much more immediate. But what that conversation does is
we haven't proven that we can get there, right We
don't know for certain that we can have AGI or
accomplish AGI and truly what that looks like, right, Like,
we can't even measure intelligence in a human across the board,
(17:15):
So how do we say, across the board you know
that an AI model has reached general intelligence or is
better than you know, all these humans. How do you
measure creativity?
Speaker 3 (17:26):
Right?
Speaker 1 (17:26):
How do you measure novel thought? How do you measure empathy?
How do you know that that's like a true understanding
versus like a mimicry of what it thinks is empathy
or emotional intelligence. My other side caveat is I'll be
like a little bit more convinced on AGI when a
(17:49):
like, multimodal robot could teach a toddler how to tie shoelaces. Because there's almost.
Speaker 3 (17:59):
No need to call me out like that right now? Not nice.
Speaker 1 (18:04):
Are you still using Velcro?
Speaker 3 (18:08):
Rain boots? That is where it's at. Rain boots.
Speaker 1 (18:11):
All right, I won't knock you on that too hard. But the problem with the AGI conversation is that we distract from the AI issues that we've had for the entire length of the field, issues like, you know, fairness, which is the field that studies bias in machine learning models,
(18:32):
in, you know, privacy, in IP, and deciding what models can be trained on, who owns both the input and the output, like, what's the line? The generative field has opened up a whole new can of worms in terms of, like, you know, even what happens to your likeness
(18:54):
when you pass away? Can someone use that to train a model and make, you know, a digital version of yourself, and what can it do, and who has rights to it? So there's, there's a lot of questions like that around these that we still haven't solved, yet we're moving on to this, this next thing that's, it's, you know, it's still science fiction.
Speaker 4 (19:16):
Yeah, that's interesting. I think you kind of opened the door though, like talking about like who owns the output and who owns the input or whatever. How does, how does your job at JetBrains?
Speaker 1 (19:32):
Like?
Speaker 4 (19:32):
What are you doing at JetBrains specifically with AI? Are you, are you?
Speaker 3 (19:37):
Like?
Speaker 4 (19:38):
What is it, like auditing, like, what AI is doing inside of the IDEs, or what?
Speaker 1 (19:47):
For my capacity? We're still trying to figure that out. We've got a lot of work to do in the general AI space. But I will say that there's some amazing teams at JetBrains that are working on, you know, human alignment in AI systems. There's a lot of researchers that are looking into privacy, like federated learning. So there's,
(20:09):
there's a lot of cool work that's being done behind
the scenes. I'm a little bit more on the frontward
side of of you know, trying to talk about things
and how we should be thinking of, you know, how
we move forward in this space.
Speaker 4 (20:26):
JetBrains. I use, I use WebStorm all the time, and I, as you should. Well, I get a lot of pushback and I get called an old man because I use, well, no, I mean, I love, that got really loud. I'm sorry, I love, I did
(20:46):
hear that? So I guess the question I have is, when you have an AI that's helping you write code in your IDE, do you see, or do you have to advocate for or against, like, using how much? Try
(21:06):
to raise the question, like, sometimes the AI can give you bad code, and it becomes, I don't want to say for juniors, but maybe some people will, like, for me, okay, I got a good example. I've never really written Lua at a professional level, but I write Lua code for my kids for their game Roblox often, like I'll help
(21:26):
design games. I've gotten to the vibe coding of asking ChatGPT to write me some Lua code for specific things, and I don't know if that code is good or bad. How do you train, or is there a way to differentiate between, good patterns and bad patterns? And like, is the AI going to eventually, as more people
(21:49):
use it, it's going to become like the status quo of, like, what is a good pattern? And we'll have people like you, who's on X saying you shouldn't use this pattern, like, like, not like, like enums or something, which I disagree with.
Speaker 3 (22:06):
Oh whoa whoa whoa whoa whoa whoa.
Speaker 4 (22:08):
But, but if the AI is giving me a bunch of enums, is that, is that cool? Like, if someone says, like, yo, like ChatGPT told me to use these enums, so I'm just gonna go with it. That becomes, that becomes law essentially, right? Like then they're all just spitting out that type of code. That's what everyone's using. Like where do you, like, how do you teach it not to do that? Or is
(22:30):
it is that gonna just end up being the thing
that people do?
Speaker 1 (22:34):
Yeah, I think So there's a mix of questions in there. Right.
So there's like something that I often tell people is
that all responsible AI, all ethical AI conversations are a sociotechnical problem, meaning the solution will never just exist in technology. Right? It's a social problem too.
Speaker 3 (22:55):
Yeah, you see, you.
Speaker 4 (22:56):
See it often an X where people are grock is
this true? And whatever Grock says, they assume that that
is what is factual.
Speaker 1 (23:05):
Well, and there's misinformation, right. So you've got, like, you have, as an organization, right, you have a trajectory or values, a belief set of how you think things should be. As a community of engineers, like, we have belief systems, right, and so we have this, like, alignment that we're trying
(23:26):
to achieve with what's possible within a technology and you know,
what we have to remember is that, you know, large
language models are going to produce what they have been
trained to produce, and a lot of that is going
to be historical right when we're looking at code like
architecture changes. But if we're using old legacy artifacts as
(23:50):
current training data for models, that's going to propagate through to the end result. You know, there's, there's going to be a mix too, and I don't, I don't, we don't want to steer anyone towards, like, oh, you must vibe code, or, you know, you have to use AI. Like, there's going to be several different profiles, from, you know,
(24:12):
your engineer that does not want AI anywhere near their code base, to your vibe-coder engineer, and that's all fine. Like, every path is okay as long as you understand what you're accepting and does it align with your organizational values and what your organization expects. I think,
like I would be, I feel a little bit for
(24:35):
like junior devs, you know, because, like, when I was, you know, younger, I had amazing, amazing, you know, engineers that I looked up to and architects that taught me, you know, most of what I learned, right? And
you had that like that senior mentorship from people that
(24:56):
had been around for several decades already, who had seen, you know, the changes of architecture and patterns, and then you kind of learn to query that. And so I
think too, we're in this interesting time, you know, post
twenty twenty, where there's still a lot of remote work,
and like, I'm a big fan of remote work, but like,
(25:16):
how does then that impact a junior dev who maybe
just got out of school or even like a boot
camp and they're at home and they don't have that
that mentor that sits at a desk next to them, and they're, you know, querying ChatGPT or, you know, whatever, Claude, or, you know, maybe, maybe a JetBrains
(25:37):
AI assistant. But how do we you know, keep each
other honest and saying like, oh, hey, I see you
know this is your code, but today we actually prefer it, you know, to do this. And, like, teaching that, that lesson to the junior devs. Like, I feel like there's
calls for both seniors, you know, and juniors to kind
of meet that middle ground too.
Speaker 3 (25:58):
I think that junior-senior conversation is super interesting. Well, I would like to circle back to Q's question, because while he was talking, like, my first suggestion or instinct was like, well, my opinion on TypeScript enums is kind of better than Q's, so my opinion should be weighted higher. So, and because my opinion is just factually correct and
(26:20):
yours is based on emotions, but, different conversation. If we, I mean, this brings us to the question of, like, weighted bias for training models. How, how do you stand on that in regards to ethical AI? Because at the end of the day, it's difficult. If I look at it, like, if I look at, like, all people
(26:42):
out there, and everyone has opinions on everything. A lot of those opinions are factually wrong, but that's a different problem. But some of those opinions are just, like, bad, where some opinions might be good. So how do you strike that balance of, like, well, maybe a scientist in physics has more correct answers about, like, gravity and stuff than someone who knows jack shit about gravity other than it goes down.
goes down.
Speaker 1 (27:07):
So there's actually a couple of conversations in your prompt. First,
there is like a field in terms of like ethical
or like values right there is an entire conversation called
the value alignment problem. And basically what this is is
that you know, we've all got opinions. I'm not going
(27:29):
to, you know, say what the rest of that sentence is, we all know it. But, like, everybody has an opinion. You look at, you know, like, from a political perspective too, there are different opinions, okay, and people usually reach their
beliefs by their values, by perhaps what they've been raised in,
(27:50):
you know whatever. On the AI value side, there are
objectives that are contrasting. So as an example, security versus privacy.
Looking at facial recognition, for example, there are those that
argue that we should have facial recognition in all CCTV,
(28:13):
you know, et cetera, because of security, because of national interest,
because we need to know blah blah blah. You know.
On the other side of that, there's a privacy issue
because like I don't want, you know, my my face.
I don't want to walk into a store and every
you know, I can be easily identified. Furthermore, there's a
(28:35):
technology problem in facial recognition of bias, where, you know, light and dark skin and males and females are recognized
at different error rates. So you start getting into all
these different levers, right, and on both like the privacy
and the security side, there's there's valid arguments on both points, right,
(28:58):
So then how do you align, Like how do you
decide which direction that you're going to go in or
you know a balance between regulation and innovation. Right, some
companies that are like, oh, well we have to you know, regulate,
but if we could fudge these couple of things, then
we see this as innovation. Or you know, there's open
(29:19):
source versus you know, proprietary.
Speaker 3 (29:23):
Uh.
Speaker 1 (29:23):
In the field of fairness, a lot of our algorithms, and, you know, the things that we're trying to predict, we need data, right? Well, what happens when your data is historically biased? So, you start, and the problem is, is that then if you want to, you know,
problem is is that then if you want to you know,
(29:45):
increase fairness, oftentimes you take a hit on accuracy and
on performance. So then how do you walk into a
room of you know, C suite executives and say, hey,
we're going to take a five percent hit on accuracy
for two percent increase on fairness, Like how do you
justify that, you know, to achieve that that outcome? So
(30:08):
there's there's a lot of conflicting conversations, and it's it's difficult.
Then you have the technology side, right, So it's a
it's a very like fine line balance and juggle of
trying to do the most in all the categories and
then deciding where your trade offs are.
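To make the accuracy-versus-fairness trade-off described above a bit more concrete, here is a small, purely illustrative sketch, not from the episode: it scores two hypothetical models on made-up data, reporting plain accuracy next to a demographic parity gap (the difference in positive-prediction rates between two groups, one of many possible fairness definitions).

```python
# Illustrative sketch (not from the episode) of the fairness vs. accuracy
# trade-off: two hypothetical models scored on toy, made-up data.
import numpy as np

def accuracy(y_true, y_pred):
    # fraction of predictions that match the true labels
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred, group):
    # absolute difference in positive-prediction rates between groups A and B
    return float(abs(y_pred[group == "A"].mean() - y_pred[group == "B"].mean()))

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group   = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
model_x = np.array([1, 0, 1, 1, 0, 0, 0, 0])  # more accurate, larger gap
model_y = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # less accurate, smaller gap

for name, pred in [("model_x", model_x), ("model_y", model_y)]:
    print(name, "accuracy:", accuracy(y_true, pred),
          "parity gap:", demographic_parity_gap(pred, group))
```

On this toy data, model_x is more accurate but treats the two groups very differently, while model_y gives up some accuracy for a smaller gap, which is exactly the kind of trade-off a team would have to justify.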
Speaker 3 (30:32):
Good morning. You know that moment when your coffee hasn't kicked in yet, but your Slack is already blowing up with, hey, did you hear about that new framework that just dropped? Yeah? Me too. That's why I created the Weekly Dev's Brew, the newsletter that catches you up on all the web dev chaos while you're still on your first cup. Oh look, another Angular feature was just released,
(30:54):
and what's this, TypeScript's doing something again? I look also through the pull requests and changelog drama, so you don't have to. Five minutes with my newsletter on Wednesday morning, and you'll be the most informed person in your standup. That's right, the Weekly Dev's Brew, because your brain deserves a
(31:16):
gentle onboarding to the week's tech madness. Sign up at Weekly Brew dot dev and get your dose of dev news with your morning caffeine. No hype, no clickbait, just the updates that actually matter. Your Wednesday morning self will thank you. How fine-tuned are those models in that area? Like, is this really like a cognizant decision, like I
(31:37):
use this data and not that one because it might swing in this way, or more like, well, like, I got a bunch of data, hope for the best.
Speaker 1 (31:46):
Uh, that's, well, it depends on what we're doing, right? Like, it depends on, are we in, like, let's say a more classic machine learning, you know, model. And, you know, with ML models, I give quite a few talks on this for fairness. For example, your, your bias
(32:07):
doesn't just come from your data. It's how you sample, it's how you derive proxies for constructs. Right? Like, for reference, for anyone who can't see me, I'm holding up my cup of tea, and this is like a pretty easy, like, we can all agree this is a
(32:27):
cup of tea or a glass of tea or you
know whatever.
Speaker 4 (32:31):
Let's say that is matcha, that is not tea. Is it tea? No, it's just matcha. Okay, so the AI that, like, no, that's not green tea. That's matcha. So, what have you brought on?
Speaker 1 (32:48):
This cup I am holding up is water, and it is definitely H2O inside this, even though it's opaque, I promise you it is water. My point being is that we can agree on, on an output, on a label. But what about when we're trying to talk about, like, constructs, right? Like human constructs, like creditworthiness,
(33:11):
like things like recidivism. These are human conceptual models that we've created, but we don't always know what those features are that result in that output. Like, we can guess and we think we know, but we get that wrong.
(33:32):
So there's bias just in that too, Like forget about
your historical data. There's bias that results from assumptions that
we make in the modeling process or the objective. There's
a bias in that. There's a bias in how we
think people are going to use our model in our product,
how we think we anticipate our users are going to
(33:54):
like what thing it's going to do, Like they could
use it in a different ways. It's a framing trap sometimes,
so like we're we're juggling all these different different pieces
of the puzzle and trying to analyze like what could
go wrong at each point, and it's it's a lot
now from like your large language models, like they do
(34:15):
need a ton of data, and like where do you
think that data comes from? It comes from, you know, oftentimes, text. We've seen obviously, like, a lot of bias in ChatGPT, for example. But for ChatGPT to get trained, it was trained on tons of historical text, on books, on websites,
(34:35):
you know, on anything it could get its hands on.
So what did it you know, what did it learn?
It learned like what we taught it?
Speaker 4 (34:44):
So does it not is it not smart enough? Smart
enough to like differentiate? Like if it gets if it
reads fifteen books on the same subject, and let's say seven are skewed one way and eight skew the other, does it not learn to make its own, or formulate its own, opinion, or does it just regurgitate all fifteen texts at any random time?
Speaker 1 (35:05):
It doesn't have opinions. Hmm, okay. You know, like, it's, it's a product of its training, of its, of its engineering. And that's, too, where, like, I find that the term hallucination is very problematic, because it anthropomorphizes the thing, like, it's, it's giving this, you know, out, oh well, it
(35:27):
just hallucinates, like that's that's just what it does, you know,
But it's actually an error. So if we frame it
as misinformation because it has produced an error like that
gives us a different feeling, right of what just happened,
Versus if we say it hallucinated, like it's you know,
a human and like just had a rough day.
Speaker 4 (35:48):
My brain is like hurting trying to follow all this stuff.
Speaker 3 (35:52):
I don't, I don't want this to go like super into a political conversation, but a little bit. Uh, have you all seen the conversation that Marjorie Taylor Greene had with Grok?
Speaker 1 (36:10):
I haven't, and I don't think I want to just
based off the description, Well.
Speaker 4 (36:16):
It's, it's really just her arguing with Grok, and Grok fact-checks people, or, you know, people say, hey, Grok, is this true? And it gives an answer, and Marjorie Taylor Greene was like, Grok, you're an idiot. It's like, well, you know, this, this, this is what's factual. I'm
(36:38):
just repeating what I've read online. And it got into this whole big thing of, like, you know, Marjorie Taylor Greene is arguing with the bot, with the AI, on, on Twitter.
Speaker 5 (36:49):
So, I do have to say, I think it's hilarious that Grok leans more on the left, slightly more than what I anticipated, to put it politically correct.
Speaker 4 (37:05):
Well, when, whenever, whenever Elon bought the company, everyone, not everyone, a lot of people assumed that the AI that he was going to release was going to skew to the right, and, well, well, a lot of X, like, X has become a very right-leaning platform. So you would assume that it would gather, or at least me
(37:26):
thinking about how AI works and all the documentation that's
constantly put on there, since it's leaning more right politically,
I assumed that it would be a more right leaning AI.
But I guess that's not how it works.
Speaker 1 (37:42):
I mean, there are checks and balances that you can put, you know, into play. There's, you know, adversarial approaches to try and correct some of those things. Like, but that's up to the engineering team to make sure that, like, those gates and guardrails are up there. I tend to
(38:02):
take my political news information between the hours of noon and two. I have found that it is better than doom scrolling, you know, first thing in the morning or last thing at night. Otherwise I'll start my day, you know, pretty poorly, or I won't sleep at night. So I've, I've isolated these, but whenever, like, AI comes
(38:26):
into the conversation, it's just, it's usually entertaining. Like, what was it, the Department of Education, like, a month ago, kept calling AI, like, A1 on stage, like, repeatedly, and you're like, ah shit, like, you didn't practice this out loud with anyone, did you? A1.
Speaker 4 (38:47):
But I guess that also kind of leads back to skewing one way, right or left. I've seen a lot of discourse now on Angular and how AI sucks at writing Angular code. That's like a big trend right now on X. How do you, how do you fix that? It's
(39:07):
a lot of the time it's mostly just old code, you know, like, it's, it's still pulling in, like, modules, and it's still pulling in ngIf and stuff instead of @if, like, it's a lot of old templating. You would think that, like, even Gemini would be good at it since it's at Google, the Google AI,
(39:28):
but it's also bad at writing it. Like, how does one fix that issue? Like, the way, it's constantly learning supposedly, but apparently it's not, even in context. Like, if I have it in my IDE or my repo, my, my files, and it knows my data, it still tries to give me bad code, or, not code that, I'm not saying my bad code is bad if it's
(39:50):
the answer.
Speaker 1 (39:51):
But, well, there's, I mean, okay. So there's a range of things that go into this, right? Like, first, you are going to have training data, that makes a difference, right, of how often things are kept up to date, how, how much data, right, how, the quality of the data still matters, right, how much, like, its relevance, and
(40:15):
you know, like, it's an iterative process. So, like, if you look at different models and how they perform differently at coding, there's, OpenAI came out with, the, you know, SWE-bench, which is like a benchmark for software-related tasks, and a lot of different models are using this, and
(40:35):
they perform differently across different languages, including, you know, JetBrains' Mellum. And there's, there is the training piece, so that
might be part of it with Angular, but then there's
also like the language itself. Now, if there's a difference
between like React and Angular like that, we can probably
(40:56):
take a bet and say maybe it comes down from
the training data and the relevance of the training data
and the quality of the training data and how often
is that, you know, being kept up to date. And
preference comes into play here too, like what what are
the behind the scenes what are the engineers prioritizing? Is
(41:17):
Angular top of their prioritization list? I'm not gonna piss anybody off by saying maybe not, but these are the questions that we have to ask as consumers.
Speaker 3 (41:29):
So okay, that is one thing that I discussed with ha, who you haven't met, but so he brought up this blog post a couple, I don't know, probably years ago, Jesus, where it's like that newer programming languages are going to have it harder in this new time with
(41:50):
AI just because the training data is not there, the quality of the results is not up to par. Whereas I talked with someone else from an AI startup, and their thesis was more, okay, at the current age, the training data has more of a weight to it. In the future, we're more looking at, like, okay, how can we enrich
(42:13):
context by looking at documentation and those kind of things?
So how do you see this on like a scale perspective?
Are we, as things are right now, basically stuck in this, this is how code looks in the twenty-twentyish, and this is what we're going to write till AI is getting better, or do you
(42:37):
see room for innovation and like all new patterns emerge,
new languages emerge, those kind of things.
Speaker 1 (42:43):
Yeah, I mean, that's going to have to be an ongoing conversation, like, as new features are added to various languages,
like how do we keep up how do we make
sure the models keep up with that? Like there's going
to have to be a collaboration depending on like, you know,
depending on how accurate or how much people rely on
(43:07):
AI models. If there's some new feature that's dropping, or
you know, something that a new library that's coming out, Like,
there's going to have to be collaboration between whoever's in
charge of that library or that feature change, and like
who are the makers of the models to make sure
that those things go out together, that they get enough
(43:28):
training data and enough examples that they can do something
that is actually beneficial and like keeps the train on
its tracks and that it's represented properly too, like to
veer out of you know, development for a moment. If
we go back to, like, your natural language processing LLMs,
(43:52):
right, it comes off of data and what it's been trained on. But what happens when voices aren't represented? So, like, if you're querying, you know, like, there's a lot
of bias that comes in there if the stories are
being told about a person or about a group and
(44:14):
not by the group. And there's certain cultures that don't
want to share their folklore, they don't want to share
their story. And that doesn't mean that an AI model
is not going to try and represent them, but that
means that they're going to be represented by someone else
and what that assumption is and so it's it's really
(44:37):
a a social impact too and a consumer impact of
deciding like do we understand this? Do we understand if
I'm asking you know something that is going to be misrepresented,
do I understand why it's being misrepresented? Or that like
maybe I shouldn't take this as you know, ground truth.
(44:58):
I should take it with a grain of salt. I
should take it as an assumption and then translate that
back into into software development. And you know, we've all
seen the last few years, like the made up libraries
right of oh, just import x y z and you're like,
what's x y z And they're like, oh, well it
probably would look like this, and it's just some made
(45:21):
up random script. And you know, some I don't want
to say, junior, but some dev out there will copy
and paste that shit and they're like, oh yeah, it's fine.
Speaker 4 (45:34):
I would never absolutely not.
Speaker 3 (45:39):
So on that note, do you think using AI is going to become a skill like googling used to be, like, ten years ago? And with that also, like, as you mentioned, like, that built-in quality check, as you have to do with the news here in this lovely country.
Speaker 1 (45:59):
Yeah, oh great, keep keep adding the politics. I'm trying
to not cuss today. Okay, that's my goal.
Speaker 3 (46:07):
You're you're doing great.
Speaker 1 (46:09):
I think I've messed up once or twice, but did not.
I think it's two or even, like, threefold. Architecture should still remain, like, top of mind, right, and I think a really strong learning goal. I think that, like,
(46:32):
in years past, it's been okay for devs to, like, not care about architecture or, like, the why, as long
as they know the how. You know, like, well, I
can I can put the box on the screen in
the spot that it's supposed to be in, Okay, and
it's fine. Maybe they don't know how to assemble the box.
(46:59):
Maybe they don't know how to take the box apart
and then reassemble the box. Maybe they don't know how
the box works or what pieces come together in order
to you know, I'm going to keep killing myself with
this analogy, like build the cardboard that can you know,
constructs the box that then puts it on the screen,
(47:21):
and then if something new comes along, then they wouldn't
know is this new thing better than you know, the
cardboard that we're using to make boxes with. So, if AI can, like, AI can easily place the box in the exact spot, maybe even more accurately, you know, than your developer. But does AI have that architectural knowledge
(47:48):
of not just assembling the box, but like why did
we assemble the box and with what materials? And why
did we use those materials? Like those are going to
be I think more important things for devs to to
learn in the business context too, Like AI just is
not going to understand the nuances of trying to create
(48:09):
software around changing business logic, Like it's just not going
to So I think for me, it's like architecture has
become more important than ever from a myriad of things
from both like a technology standpoint, yes, but also from
a business translation context, to.
Speaker 3 (48:30):
To put it in my own words for my understanding, that means that you think in an AI age, for developers, the necessity for soft skills becomes much higher than it is right now. Whereas now you get nicely paid to put a box on the screen, which I used to do for the last ten years, this becomes more of, like, okay, architect
(48:52):
the system, discuss with the business team what they need, and make sure that that's really what they want. Is that a fair statement? Yeah?
Speaker 1 (49:00):
And I think that that has always been true too.
I think it's just, like, it's exacerbated, you know, like
you can always teach the hard skills right like you
can teach and sometimes it's even actually more fun to
have someone that hasn't built those opinions yet, you know,
and help kind of understand like drive down that lane
(49:22):
of like well why are we doing this? But the
soft skills are things that are like never going to
be replaceable.
Speaker 4 (49:29):
That's why prompting is so strong.
Speaker 1 (49:30):
I guess, sure, yeah. Like, how do, how do you use AI, how to prompt effectively? To your earlier point
about like how to google? Like yes, not just from
a success standpoint of like achieving your result but also
getting there faster. And if you get there faster, then
there's the sustainability conversation. You know, there's less of a
(49:51):
footprint if you're if you're more successful in your prompting
and your querying and getting to that final destination and
a shorter path, that's going to be better, you know,
for the environment, for XYZ, for your tokens, Like you
can go list a bunch of different things, than someone who takes two, three times the amount of prompting that,
(50:13):
you know, person A does.
Speaker 3 (50:16):
I would like to hook into the environmental aspect that you just brought up, because I think the last time, Jody actually told me that it basically, she nicely put it, that it kills the planet, training models, and I think that was twenty twenty-three, which feels like ten years ago in the age of AI. So
is that still the case? Is it, from that aspect, like, okay,
(50:41):
to balance using AI, I should probably become vegan to
like kind of not feel horrible about myself, Like where
are we on that scale of like killing the planet
versus actually using AI for the sake of productivity?
Speaker 1 (50:55):
It's not helping. Okay, it's not helping. Like, there's actually, there's a lot of conversations that are, are going on right now about sustainability of AI models, about sustainability of data centers, right. Like, there are data centers
that are being built all over the world to try
(51:17):
and and house you know, these things, and it's it's
not just about the environmental impact then, it's about the
communities in which the data center gets built in how
it impacts them. You know, like if you have a
community that gets a data center built instead of like
a you know, a place where the children can go
(51:39):
learn or you know, have after school activities, like that's
there's a there's a difference there, right data center, you
know what, there's there's a choice, there's a choice being made,
and like so there's a social impact as well as
an environmental impact, a sustainability impact, and there's like Hugging
(51:59):
Face has done a lot of I would say, in
twenty twenty five, a lot of like discussions around this
from both papers as well as the chat UI, like,
energy score that came out last month by one of
their engineers.
Speaker 3 (52:14):
So it's basically like an energy efficiency rating on models.
Speaker 1 (52:18):
It's like a it's like a UI to help engineers
kind of see and understand like what their their footprint is.
And like, I think it would be really wonderful if
that became a norm for like even providers to show
a user like, hey, like this is how much you know,
this is how much of our model you used, not
(52:40):
just from tokens, but like this translates to you know,
x amount of water and trying to like give someone
the ability to understand like what their usage is, because
then there should be motivation to use it for the
right things, you know, to make sure that it's not
something that you could like think about it a little bit.
(53:00):
You know, you could write the email yourself there. You
can't really google it anymore because now by default, like
AI search is baked into Google. It's right, it's the
most like obnoxious thing ever in my mind, and it's
making me really reconsider using Google in general. But it's
(53:23):
it's becoming a default, right, it just automatically includes those things.
And if you look at like some of the generalist models,
like you don't necessarily need a generalist model to answer
some of the questions, Like it's the like you know
that that video that came out a few years ago
(53:43):
that I still think about, the test engineer that's, like, dropping the shapes into their respective holes, and it's like, it goes here, it goes, right.
Speaker 4 (53:52):
It can go into the same one every single time.
Speaker 3 (53:54):
Yeah, and like.
Speaker 1 (53:56):
No, it goes, it goes there, and, you know, there's, there's part of that, like, do we need all that overhead for? At JetBrains last month, uh, Mellum was open-sourced. Mellum is the model that powers our code completion; that was open-sourced on Hugging Face.
(54:17):
And, you know, one of the things that we talked about with Mellum is that we're calling it a focal model,
meaning it has a very specific focus. It's very laser
focused on one thing that is code completion. And one
of the benefits of like kind of going back to
what we've learned in AI is that you know, smaller
models that serve a purpose are are sometimes better for
(54:39):
things like sustainability, Like why do you need all that
overhead If we're just trying to reach a specific outcome,
we don't need all that other stuff we have we
have a targeted task at hand.
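As a rough sketch of what using a small, code-focused "focal" model could look like, the snippet below loads a causal language model checkpoint with the standard transformers API. The model id "JetBrains/Mellum-4b-base" and the decoding settings are assumptions for illustration, not details confirmed in the episode; check the actual Hugging Face listing before relying on them.

```python
# Rough sketch of code completion with a small, code-focused model.
# The model id below is an assumption for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JetBrains/Mellum-4b-base"  # assumed id of the open-sourced checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prefix = "def fibonacci(n):\n    "  # code we want the model to complete
inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```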
Speaker 3 (54:54):
How, I mean, in the context of Mellum it's pretty easy, because you want code completion in the editor. Otherwise you probably want something broader. But how would you, is there a way that you could meaningfully distinguish, this is the task at hand so we could utilize model XY? Like, looking at it from an engineering perspective, I'm like, well,
(55:16):
but that's kind of like the purpose of, that's how I use AI. It's like, it is uncontrolled in that sense. I can just throw any random task at it and it yields a result.
Speaker 1 (55:29):
Yeah, I feel like there's going to be maybe a
little bit of a shift, or at least I'm kind
of hoping, towards, like, more of, like, a microservices-of-AI-tasks approach. I'll piss some people off with that, but, you know what, like, yeah, why do we need the big thing when we're doing a small thing? Right? When ChatGPT was, like, really booming, I got in
(55:53):
quite a few conversations in twenty twenty three with people
that were like, Oh, I want GPT or something like
it to do you know X Y Z, And I
was like, you know, like we could just do like
a linear regression model on that, Like you're dealing with numbers,
it's going to be more accurate, it's going to cost less,
like we've we've solved this problem before. But you kind
(56:15):
of get that business hype train where they're like they
want to say that there's an AI powered something, and
you have to kind of steer that conversation back of
like I get that, but this is going to be
cheaper and more efficient and it does the exact thing
that you're asking for, like in a small way. So
(56:36):
I think there's a little bit of reframing that has
to happen too. From like a mindset perspective, it's going to.
Speaker 4 (56:42):
Be a long, hard journey.
Speaker 1 (56:46):
Uh yeah, yeah, And the and the governance like comes
into play too. I gave a talk a couple of
weeks ago in London on you know, AI governance for
product teams. Yes, And when I first wrote that talk,
the political landscape in the US looked a little bit different,
(57:08):
and I was like going through it and I'm looking
at it, and I'm like, shit, I have, like, all these mentions of policies and regulation in the US that are, like, RIP, like, they're dead. Wow. So how do
you how do you teach something that's like evolving and changing?
(57:28):
And I ended up basically throwing that entire talk out
the window. And rewrote it as a framework for well,
let's look at the laws as a constraint. Let's look
at you know, what, what should you do? What are
your ethics, what are your values? What must you do
(57:49):
or what must you not do? What are the laws right?
What can you do? What's the technology constraints? What's your budget,
what's your time? What are your resources? And then how
can you arrive at an outcome that checks all these
different boxes across the board? And the way that that
(58:09):
conversation is going to continuously shift is I think going
to be pretty wild. Like we're not going to see
a ton of regulation in the states right now, not
for at.
Speaker 4 (58:19):
Least the next three or four years or so.
Speaker 1 (58:22):
Yeah, try, try, you say. And I know, it's not, like, take the politics out of it. It's, like, take the who's-in-office out of it. Like, there's a geopolitical climate.
Speaker 4 (58:42):
Yeah, the current landscape right now, it's Yes, I don't
I don't think it would have mattered what happened a
few months ago. I think you very much disagree with that,
But I don't think so because because of who makes
who makes the policies still skews a particular way.
Speaker 3 (59:04):
I am kind of curious about the regulation versus innovation space, because that's particularly often a very common discussion between the European Union and the US, where, like, the US is more liberal and, like, has a culture of VC money and innovation, and the EU is more like, does not have that.
Speaker 4 (59:24):
China too though, right Like China even has like done
stuff too in that department.
Speaker 3 (59:31):
I feel like we're going from no, I'm not trying
to go political.
Speaker 4 (59:34):
I'm just saying they have two that's true.
Speaker 3 (59:40):
Where do you strike the balance from your perspective? Because the first time that it popped up for me that AI is used unethically was with the OpenAI case and the Scarlett Johansson case, where they emulated her voice. Whereas, like, from a legal perspective, it's difficult to, like, gauge,
(01:00:01):
where it's like, ethically you look at it and it's like, yeah, that's wrong. Legally, you're like, well, is it? So where, where do you, where are you at?
Speaker 1 (01:00:12):
Well that's a slippery slope too, because there that goes
back to the like who owns your face conversation and
people you know are like, what what is in the
public domain. There's, there's a myriad of examples, like, OpenAI has been, you know, brought up with Esther Perel.
(01:00:35):
She's like a if you're not familiar, she's like a
pretty world renowned therapist. Like mostly she has a podcast
that's like pretty entertaining. Honestly, that's like a couple's like
relationship advice and there's some wild stories in there. So
if you're ever on a really long road trip, it's
(01:00:56):
you know, worth the time. But she's written a few
books and she's world renowned in this space. And someone
put up an example early on in the chatbot days of, like, describing their relationship problem and said, now give me advice like you're Esther Perel, and GPT was immediately like, oh yeah, well, you know, if I was Esther Perel,
(01:01:16):
like she says this and this and this in her book,
therefore applying it to your situation, like here's kind of
what her guidance would be. And it's like, well, wait
a minute, So that means you absolutely have Esther Perel's
books as part of your training data enough to like
be able to break this down. Esther Perel is not in the public domain, like, doesn't, isn't that, should that be a violation of her intellectual property? She's
be a violation for for her intellectual property? Who she's
still alive and profiting off of these books? Like should
that like be thrown away? You know? Like that's that's
one example from the innovation and regulation standpoint. There's a
book, Olivia Gambelin is the author, and she's a really
(01:02:00):
amazing AI ethicist that's been in the space for a
number of years, and she wrote a book it's like
the Responsible AI Framework. I can't remember the long name,
but the short name of the book is Responsible AI.
And she gives a really great example of choosing a
value and then deciding do you do like the regulation
(01:02:24):
or do you innovate on it? And so her example
was the difference between WhatsApp and Signal. So they're both
messaging apps, right, and they do the same thing. WhatsApp
pretty much hits the mark for what it has to
do in terms of security, right, it's better more secure
than like you know, certain other like platform messaging apps.
(01:02:50):
But Signal was like, we're actually going to innovate on
the regulation. We're going to make this thing as secure
as possible. And so you have two apps, but over time,
like Signal, I'm you know, not bringing up the recent
you know, happenings of Signalgate. But let it be known
(01:03:12):
that was not Signal's fault that somebody added the incorrect person. Who even does that? That had nothing to do with Signal security. That was a user error.
Speaker 4 (01:03:22):
Yeah, we're not taking it there.
Speaker 1 (01:03:27):
Took it there just a little bit. But my point
is, is that WhatsApp regulated, you know, they, they developed for the regulation, and Signal innovated on that regulation. And so Olivia brings this example up in her book, and it's, it's just such a clear example, especially if you, I use both of those messaging apps and I
(01:03:51):
prefer Signal. Now, I've used WhatsApp for a long time, but there's been, you know, changes in that, and it's become a more trusted name. And then it goes back to consumers, right, because consumers were the ones that decided, like, oh,
security is important to me, and that's how Signal you know,
got its traction.
Speaker 4 (01:04:14):
And you always get a sell with, with security. Yeah, I mean, this, this was an excellent episode. I think it's because you and I were in it, Jan, but Michelle definitely helped with all that knowledge. Do you have anything you want to, like, promote while you're here, while you
(01:04:34):
have an audience of about ten thousand people listening close.
Speaker 1 (01:04:43):
You know, I think we've we've talked about, you know,
a lot of different things, I would say, like a
very wide range of topics, and there's even more to
add on for like consumers. I think maybe I'll emphasize that,
like there is a consumer responsibility in here to decide
what you align with and what's acceptable to you, because
(01:05:06):
that's going to drive what is seen on the industry side,
on the regulation side, or even lack thereof. So like
we remain an active voice in what happens, and we're
not going to like get to that amazing place by accident,
like whatever future that you're dreaming of with AI, Like
(01:05:28):
it's just not going to happen because it happened. Like
it's going to take people making choices.
Speaker 4 (01:05:34):
And we got to get you on for a part
two because I think I only asked like two of
my fifteen questions, so.
Speaker 3 (01:05:41):
You kind of lost me once you started talking about enums.
Speaker 4 (01:05:43):
Like, oh my gosh, save it for Twitter, or, sorry, X. Save it.
Speaker 1 (01:05:48):
For X? Bluesky, what are you doing?
Speaker 4 (01:05:51):
I need to get on there, but I'm not. I mean, I'm on there, but it doesn't have the same, like, I really like to get into the NFL talk and the Drake versus Kendrick talk and a little bit of the board game talk. I don't want to get into it on here. But I'm a, I'm a Drake,
(01:06:12):
I'm a Drake stan. No, I'm an absolute Drake stan. I've.
Speaker 1 (01:06:19):
Been I gotta go now.
Speaker 3 (01:06:21):
I've listened to.
Speaker 4 (01:06:23):
And eight and ten and seven, so I'm like, I'm
a hardcore into it. I like Kendrick a lot, but because the stan wars are going on, I have to, I have to rep team.
Speaker 1 (01:06:38):
Drake. Opposite sides here.
Speaker 4 (01:06:41):
I don't even, I don't even listen to GNX.
I've listened to a few of the songs, but I
can't do it. I can't give him streaming numbers.
Speaker 1 (01:06:48):
Well, it's a consumer choice too, right, Like that's that's
a perfect example of a consumer based like choice as
well as X versus Blue Sky. Right, these are consumer
choice is that we're making that are influencing like broader things.
It becomes not just about like Drake and Kendrick. It
becomes about like their teams. Right, the team representing.
Speaker 4 (01:07:10):
Them absolutely absolutely.
Speaker 1 (01:07:15):
What's your AI? What's your responsible AI team? I guess
that's the question for everyone.
Speaker 4 (01:07:22):
All right, everybody, We'll see you next time. Hopefully Michelle
comes back in the future for a part two. Jan,
keep it, keep it, keep it classy up there in
Kansas City always. We'll see you guys.
Speaker 3 (01:07:37):
Yeah, thank you.
Speaker 6 (01:07:40):
Hey, this is Preston Lamb, one of the ng Champions writers. In our daily battle to crush out code, we run into problems, and sometimes those problems aren't easily solved. ng-conf broadcasts articles and tutorials from ng Champions like myself that help make other developers' lives just a little bit easier. To access these articles, visit medium dot com forward slash ngconf.
Speaker 2 (01:08:02):
Thank you for listening to the Angular Plus Show, an ng-conf podcast. We would like to thank our sponsors, the ng-conf organizers Joe Eames and Aaron Frost, our producer Gene Bourne, and our podcast editor and engineer Patrick Kys. You can find him at spoonfulofmedia dot com.