Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
How'd you like to listen to dot net rocks with
no ads? Easy: become a patron for just five dollars
a month. You get access to a private RSS feed
where all the shows have no ads. Twenty dollars a month
will get you that and a special dot net Rocks
patron mug. Sign up now at patreon dot dotnetrocks
(00:21):
dot com. Hey, welcome back to dot net rocks. Carl
Franklin and I'm Richard Campbell. Yeah, so I said
(00:41):
it for me because I just like messing with the pronouns.
Speaker 2 (00:44):
I don't know why.
Speaker 1 (00:46):
Yeah. So we're talking about AI. Mark Seemann is here.
We'll get to him in a minute, but first I
have a related better know a framework.
Speaker 2 (00:55):
Awesome, It's all right, what do you got?
Speaker 1 (01:05):
Ezra Klein is a New York Times columnist. He does
a great podcast called The Ezra Klein Show. Yeah, I highly recommend it.
Speaker 2 (01:14):
I don't know about his naming strategy, but I'll tell you
what, his theme song is on point.
Speaker 1 (01:19):
Yeah. Well, anyway, the article, or the podcast, that
I listened to this week was "How the Attention Economy
Is Devouring Gen Z and the Rest of Us." Right,
and, uh, we're going to get into the weeds
here with Mark in a bit. But I just
(01:39):
wanted to point this out as absolutely
necessary, required listening slash reading. Even if you don't have
gen Z sons and daughters, or you know someone or
you are gen Z, this is a really good perspective
piece about you know, well in general, he's saying that
(02:02):
gen Z came of age during COVID, Right, Yeah, I
have a gen Z daughter. Yeah, she was graduating
high school during COVID. She was robbed of her high
school senior year. She did not have any social activity
that whole year. Then she went off to college. Everything
(02:23):
is on Zoom. All the information that she learned
school-wise is on Zoom, like anything of importance. And so
that kind of shapes the way the gen z Ers
think about things, and in particular the nihilism attitude about
(02:43):
why should I, why should I try? Why should I
go to college? Why should I better myself? Because the
AI is going to take my lunch. There's no entry
level jobs anymore. Like it's a very kind of dark
place that the gen Zers are in because of
their experience and because of what has happened in the
last few years, and I think, yeah, Mark is nodding
(03:06):
his head, we're going to get into this. This is
a good point for me to mention that I have
a TikTok carl at atfenex dot com, and one of
the first things that I've done is a video that
basically says, AI is no excuse. It's no excuse to
give up, to stop learning, to stop trying to be
(03:31):
the person that you wanted to be when you were
a kid, whatever it is. You know your hopes and dreams.
Don't give up. You know, we don't know what's going
to happen in the AI future. We don't know
what jobs are going to be available,
but we do know this, if you just try to
be the best whatever that you can be, you have
a better chance of surviving no matter what the AI
(03:52):
landscape is. Here's an example. You want to go into
a trade, let's say carpentry, because you think that's an
AI-proof profession. But then you think, oh, well,
you know the robots are going to just start building
houses and all that stuff, so why bother. Well, here's
(04:13):
why you should be the best carpenter that you can
possibly be, so that when the robots do come and
they're affordable, and construction people, general contractors, are hiring them,
you're the boss because you're awesome, and that's how you
got to be the boss, and then you can hire
the robots to work for you. Do you know what
I'm saying. I mean, it's a weird example and it
(04:35):
probably isn't going to be true anytime soon, but it's
no excuse to stop trying and to stop learning and
to just put the brakes on your life and resign yourself to,
you know, flipping burgers for the rest of your life,
you know, unless that's what you want to do.
Speaker 2 (04:49):
So anyway, although let's face it, flipping burgers is far
more automatable.
Speaker 1 (04:52):
Yes, yes, yes, so so anyway, I highly recommend this,
and by the way, check out my TikTok because I
have a bit to say about this that will continue
this conversation, I'm sure. So that's when I got Richard
who's talking to us today?
Speaker 2 (05:07):
Well, knowing we were going to get philosophical today, when
I was looking around for a comment, I grabbed one
off of the artificial intelligence geek out that we did
back in twenty fifteen. Whoa, ten years ago. Well, now,
you know, the other thing is to realize, why did
we do that geek out then? Yeah, that was the
(05:27):
time when Bill Gates and Elon Musk and Stephen Hawking
were all going on about AI emerging and, like, we
have to be careful now. The subtext of
this is that Google had successfully hired many of the
best minds, including guys like Geoff Hinton and so forth,
and they were doing some extraordinary things in a group
(05:50):
called Google Brain even then, and so really this was
a push for we've got to get those scientists out
of Google. What you were seeing was the setup that
would become OpenAI, but we didn't know that at
the time. It would emerge another year or two later,
and with all of the problems that that had attached
and continues to have attached to it. Again, nobody expected
(06:11):
any of the things that have happened here.
Speaker 1 (06:13):
Well, and Hinton famously started warning the public against general
AI and the things that I'm sure Mark is going
to be talking about too.
Speaker 2 (06:22):
But yeah, once his shares in Google were fully vested, right, Yeah,
so you know it should be clear.
Speaker 1 (06:29):
Not that you're cynical about that or anything.
Speaker 2 (06:31):
Just watching for people's motivations here. That's right, follow the money.
So there were a lot of comments on that show,
because we were pretty far ahead of our time at
that particular point. You know, we were reading
a lot of the tea leaves, much more science
fiction based, so it makes ten years all the more extraordinary, right?
And I grabbed this comment, one of literally dozens, and
one of them was Mark Seemann's comment too. And Mark
(06:53):
probably doesn't remember either because he writes lots of comments
on lots of shows. But you referenced a book called
Blindsight, which is a very interesting study in consciousness,
because we did go down that path in twenty fifteen about
what's consciousness, what is sentience, and what is intelligence, that
kind of thing. So Tom Kirkhoff's comment, another past guest
(07:15):
of the show, he says, as you mentioned, it depends.
So what is artificial intelligence? People such as Bill Gates
are cautious with AI and tell us we should not
do it. But we have, and we've entered the era
where AI is here: Apple has Siri and
Microsoft has Cortana as personal assistants who are more
(07:35):
and more integrated in all our toys. So where do
we draw the line? Isn't it cool to think that
in twenty fifteen we had Cortana? Yeah, and when I Robot
came out, and that's the Will Smith version of the Isaac
Asimov story, we saw robots helping us humans in
our day to day work, which, you know, the funny
(07:56):
part is, here we are with some interesting software, but
still no robot that humans can be around. Is
that artificial intelligence? Can we get this today in
some sort? I don't think a Roomba qualifies. We have
robots that are capable of walking like animals. We have
sensors such as Kinect, I remember Kinect, that can
detect walls, open doors, and, well, plus we have Cortana
(08:20):
knowing our schedule and helping us to
remember stuff and look stuff up. Combine this together,
are we getting close to those robots? Also, where are
we with laws supporting this, like self driving cars
and personal assistants and stuff? How will we protect ourselves
from human hackers or AI going wrong? These are all
interesting talking points.
Speaker 1 (08:38):
Well, you'd be happy to know nothing has happened in
law or government to protect us from anything, because they don't
even know what the heck is going on.
Speaker 2 (08:47):
Well, that's not true. The EU has passed an interesting self.
Speaker 1 (08:51):
I was talking about our government, Richard, Well, my government
they have no clue about AI or what to do
about it. So the other, civilized, civilizations have a
little bit more done.
Speaker 2 (09:01):
It is. I mean, we were thinking about these same
problems ten years ago, but with obviously some gaps, right, Like, Yeah,
the voice assistants of ten years ago actually worked better
than they did in just the past couple of years
before the LLMs showed up, because they never made money. Yeah,
And as they didn't make money, their budgets got squeezed
tighter and tighter, and less compute resources were used on them,
(09:21):
and they degraded. And eventually, just before the LLM breakout,
before ChatGPT, both Google and Amazon came out and said hey,
we're cutting these groups back because they're just not doing
what they were intended to do, with the intent not being helping people,
but making the company money. And then of course
ChatGPT lands and the whole thing's up in the
(09:42):
air and they're all scrambling. So Tom, thank you so
much for your comment. Great to hear from you, friend,
nine years ago, and a copy of Music to Code By is
on its way to you. And if you'd like a
copy of Music to Code By, write a comment on the website
at dotnetrocks dot com or on the facebooks.
We publish every show there, and if you comment there
and I read it on the show, we'll send you a copy
of Music to Code By.
Speaker 1 (10:00):
Music to Code By, still going strong. Thank you, Mark Seemann,
for that idea that you gave me lo those many
years ago. That turned into Music to Code By,
twenty two tracks now, and you can get them in
MP3, WAV, or FLAC, twenty five minute compositions,
at music to code by dot net.
Speaker 3 (10:18):
So that's the end of analysis of music.
Speaker 1 (10:20):
Yeah, pretty much. Wow. Yeah, yeah. And it's designed to
be in that beats per minute range that was cited
in the study with the baroque music and the children that
were doing math problems. It's between sixty
five and seventy two beats per minute, I think it is.
And it's neither too distracting nor is it too boring,
(10:42):
Like you're not going to lose your mind listening to it.
There is some variation in there, but nothing's going to
jump out and scream at you. So it works, and
it works for a ton of happy customers, including me.
All right, well, let's formally introduce Mark. Mark Seemann.
Speaker 2 (11:01):
Hmmm, we got to do nineteen sixty.
Speaker 1 (11:03):
Oh yeah, we do. Why do I always forget that, Richard?
I don't know.
Speaker 2 (11:08):
Maybe we should let this go at some point, but
I kind of want to run until we get to
two thousand and two and we have inception.
Speaker 1 (11:12):
I do too, Yeah, yeah, yeah. So significant events in
nineteen sixty included the independence of seventeen African nations, the
Greensboro sit ins for civil rights in the US, and
the first televised presidential debate between John F. Kennedy and
Richard Nixon.
Speaker 2 (11:28):
MM, which went well.
Speaker 1 (11:30):
Kennedy was very telegenic. He wore a dark suit,
and Richard Nixon blended in with the background. I remember that.
Speaker 2 (11:37):
All these things you didn't need to think about until
television came along.
Speaker 1 (11:40):
It was also marked by the U-2 incident, where
an American spy plane flown by Gary Powers was shot
down over Soviet airspace, escalating Cold War tensions.
Speaker 2 (11:51):
They believed that plane flew too high to be shot down,
and they were wrong. So what's on your list, Richard?
The first laser is rendered operational. A guy named Theodore
Maiman at Hughes Research used a synthetic
ruby with flash lamps, based on a bunch of science from
a group of other smart folks over the past few years.
But he's the guy who actually implemented coherent light. Wow
(12:13):
Wow.
Speaker 1 (12:15):
And shortly after that, Star Trek came online, and suddenly
they're using phasers because they couldn't say lasers.
Speaker 2 (12:22):
And the very first weather satellite ever, TIROS one, the
US satellite TIROS, short for Television Infrared Observation Satellite, launched
by a Thor-Able rocket. It had solar panels on it
in nineteen sixty. Solar panels! Wow. Wide and narrow
angle infrared cameras, and it took about twenty three thousand
pictures before an electrical failure after ten weeks knocked it out,
(12:45):
beginning this idea of being able to look at weather
at a macroscale from orbit, which is incredibly important. It's
amazing to think that it's only been sixty
years of being able to do that. Yeah, it's still
actually in orbit. There was this electrical failure in the
battery system that knocked it out early. It could have lasted longer,
but it was followed up by many more, and that
began in nineteen sixty.
Speaker 1 (13:06):
I shall also mention the pill was approved in nineteen sixty,
so that ushered in a whole era of women's reproductive
rights and all of that.
Speaker 2 (13:14):
Next, we're going to talk about the age of Aquarius.
Speaker 1 (13:16):
No, no, no, come on, man, come on.
Speaker 2 (13:20):
All works together.
Speaker 1 (13:21):
Where would we be without the pill?
Speaker 3 (13:23):
There?
Speaker 1 (13:23):
Seriously, I wouldn't have gotten laid in high school.
I don't know about you guys. But all right, so
are there any other computer oriented events or computers that
were breakthroughs in nineteen sixty that you can.
Speaker 2 (13:36):
No, but nineteen sixty one is a big one. So
hang in there. We'll talk a lot
about the integrated circuit next week.
Speaker 3 (13:44):
Did they start with Fortran back then? Or is
that around that time, maybe a little bit earlier?
Speaker 2 (13:50):
Yeah, yeah, yeah, no, Fortran's already around by then.
Oh yeah, okay. You know,
we're talking kind of pre, this is before we actually
have digital computers per se. Right, they're largely electromechanical.
We have transistors, but we haven't
(14:10):
really got an integrated circuit, so the compute power is
not the same at all.
Speaker 1 (14:13):
All right, So the bio that I'm going to read
was not written by me. It was written by Mark himself.
Mark Seemann is a bad economist who's found a second
career as a programmer. He has worked as a web
and enterprise developer since the late nineteen nineties, and he
blogs regularly at blog dot ploeh dot dk. That's p
(14:35):
l o e h. Did I pronounce that right? Is
it more like "pluh"? That's right.
Speaker 3 (14:40):
Okay, you got that the second time around. Yeah, that's
pretty good. That's pretty good.
Speaker 1 (14:44):
Well, welcome back, and uh, thank you. I just had
to formally introduce you there, even though we've been
talking to you for ten, fourteen minutes.
Speaker 3 (14:53):
Yes we have.
Speaker 1 (14:54):
Yeah, all right. So what are your thoughts? Are you a
fan of Ezra Klein?
Speaker 3 (14:57):
First of all, I used to
listen to a podcast by Sam Harris, and these two are
sort of enemies, if you will. So I haven't
really listened to Ezra Klein. But on the
other hand, I think it was F. Scott Fitzgerald who
said something like, you know, the sign of intelligence
(15:18):
is being able to hold two opposing thoughts in your
mind at the same time and not go insane. So
maybe I should. I mean, it also sounds like
Ezra Klein
has been on some sort of journey where he's starting
to realize that, you know, some of the problems that
you just talked about here are actually really important. So yeah,
(15:39):
so maybe I should. I haven't really been a
fan there, but you know, I have no beef with
him personally, so maybe I should give it a listen.
Speaker 1 (15:47):
Well, what about the idea of gen Z being sort
of caught in this vortex of impossibility?
Speaker 3 (15:53):
That absolutely rings true. I have two gen
Z kids, and, well, the older one is old
enough that she's almost not a gen Z, so she's
sort of got, you know, through most of this
stuff without too much impact.
But the other one, he's eighteen now, and he's really
(16:13):
he's really, you know, grabbed by TikTok and phones and
so on. So that's, yeah, that's a bit of
a problem.
Speaker 2 (16:20):
Yeah, yeah. I mean, my kids are just that bit
much older that maybe it slipped past them to some degree.
But the debate here, and you brought it up
right at the top there, Carl, is how much of
this is just the attention economy in general? And how
much of it is the impacts of the pandemic, of
that two years of psychosis, just this crazy time?
Speaker 1 (16:43):
It was really psychosis, absolutely crazy time.
Speaker 3 (16:47):
I think we were seeing signs of this already before.
I mean, was it Shoshana Zuboff who wrote
this book about the attention economy? I think that predates
the pandemic, as I remember. So there were definitely people
talking about this, you know, even in the
twenty tens. But that's not really what we're here to
(17:09):
talk about.
Speaker 1 (17:10):
No, no, no, is it? We're just getting started here.
Speaker 3 (17:14):
Yeah, I know, but maybe I should start with
another experience I had with a young person. So
I was following a university course on something computer science,
I don't exactly remember. And because I was doing that,
they you know, they have us do some group exercises
as well. So I was doing a little paper with
some young people and we were having a discussion about
(17:38):
how to interpret a certain algorithm, and you know, whether
we were in one regime or another and we couldn't
really agree. And then the other one, he was just
writing on, what, some DM, what's it called? I
can't remember. Anyway, so we were DMing back and forth and
he writes to me, well, but I just asked chat
(17:58):
GPT and it says blah blah blah, so I'm right. Ah,
and I'm sort of like, I don't care. I don't
care what ChatGPT says, I wrote back. And he
was like, oh my god, you don't care what chat
GPT says. How can you? I mean, that
was very much a generational divide there, and every time
we came back to that, he's sort of like, oh, yeah, Mark,
(18:20):
is this weird person who doesn't believe in everything that
ChatGPT says.
Speaker 1 (18:25):
But you're a Luddite, you don't know anything about technology?
Speaker 3 (18:31):
Yeah. And I should probably preface: I'm going to say
a lot of critical things about AI, but it's not
that I'm a complete Luddite. I actually do see
that there are some, you know, benefits to be gained as well,
but that's not what we're here to talk about. So
if the listener gets the impression that I'm just a
grumpy old man shouting at the cloud, it's not the
entire picture, but let's just pretend.
Speaker 1 (18:51):
Well, that's besides the point.
Speaker 3 (18:53):
Let's just pretend that that's the case.
Speaker 2 (18:55):
Anyway, Yeah, that cloud didn't need shouting.
Speaker 3 (19:01):
Indeed, indeed. But I keep running into this thing where
people are backing up their claims by saying, well, I
just asked, you know, ChatGPT or some other
online AI system, large language model, whatever you want
to call it, and then they're using that as their
appeal to authority and saying, well, it's true because
(19:23):
it says so. And it's really hard to argue against that,
because if people are actually in that mindset where they
think that it's an authority that they can trust, it's
hard to get them out of that mindset.
Speaker 2 (19:36):
But it is actually not new. This is not new to LLMs,
not ChatGPT. People have been saying "the computer says,"
uh huh, yeah, since we put computers in front of people.
Speaker 3 (19:46):
Right. Well, that's a fair argument, but I think
we've sort of reached a new level there,
because usually, you know, in the old days, when the
computer said something, it was usually correct in the context,
you know, in which it would say something.
Speaker 1 (20:04):
Right, data came out of a database somewhere.
Speaker 3 (20:07):
Yeah, you would ask it about something in the database,
And of course you could have wrong data
inside that database, or you could have a bug in
the program and so on. But in general, if you
understood the context in which the you know, the computer
and the program and the software would actually be giving
you answers, there would be some sort of knowledge to
(20:28):
be gained. And that's not really where we are with
those, you know, new systems. And it's not, you know,
one thing is the system itself, but it's how people
are interacting with these systems that concerns me a bit. Yeah, yeah.
But also the thing is that they tend to
see them as oracles, that you can go and
ask them about anything, and then a lot of
(20:48):
people seem to just blindly trust them. That's
really what concerns me here, because...
Speaker 1 (20:54):
Yeah, I did an experiment Mark, I asked for a
recommendation of a product on Amazon based on my parameters,
and it recommended something. It was
a piece of electronics. Then I went on Amazon,
I looked at all the reviews, and there were very
many one star reviews saying this thing overheats and then
(21:17):
goes to crap, don't buy it. So then I brought
that up to ChatGPT. It said, you're right, let me
look for another one. Sound familiar? Let me look for
another one that doesn't overheat. Here's the one
Speaker 2 (21:30):
You want.
Speaker 1 (21:31):
This is because it's going to satisfy this condition, that condition,
And I said, okay, and you know it's fairly well reviewed.
So I bought it and it didn't work. It didn't
do some of the things that I asked for with
ChatGPT, and it was vague, because when I went
back and looked at the description, it didn't explicitly say this
(21:52):
thing that I needed. I just assumed that it would
do it because most things like this did it. So
I ended up returning it and getting something else. But it's a
cautionary tale. So yeah, it was an experiment. I wanted to
see if I could rather than going through the tedious
task of searching on Amazon and then sorting by most
(22:14):
favorable reviews and reading them, rather than doing that, I
just asked ChatGPT to do my bidding. And it
didn't work.
Speaker 3 (22:21):
It lies. But indeed, in this case, you
were still in a scenario where you were able,
you're still able, to verify, or in this
case actually falsify, the claim that was made
by the LLM. So of course, because you were ordering,
I assume, a physical product, it took some time to
(22:41):
actually get that verification or falsification in place, but you
could still do that. And that's not even, you know,
I'm not too concerned about people using LLMs in that
way because I actually use them like that as well.
You know, if I have a problem where,
you know, I don't know exactly
what the answer is going to be, but if I
(23:01):
get the answer, I can do a verification check and
then I can see if that solves my problem or not.
I've had very nice, you know, experiences with
LLMs that do that for me and save me
a ton of time. So I don't really have a
problem with that because you know, if you can get
an answer and then you can verify whether or not
it works, you're still on solid ground in terms of epistemology.
(23:26):
So okay, so now we said the big word here,
but it basically means the theory of knowledge. So how
do we know that we know things, why do we
think that we know some things?
Speaker 1 (23:35):
I know that in the term of epistemological studies, they
are basically just tabulating answers from people, but you don't
know whether or not they lied, right? The entomological studies,
there's another... All right, I'm mixing up my words.
Speaker 3 (23:54):
Here, go ahead. Yeah, so where were we? Yeah?
Speaker 2 (23:57):
So?
Speaker 3 (23:57):
But that's the one thing. So if you can ask,
you know, a system and then get it to
give you an answer that you can then later verify,
I think that's sound. I don't really have a
problem with that. My problem is really when you are
asking a system to do something and you have no
way of verifying whether or not it actually, you know,
does what it is that you wanted it to do. Then
I think now I'm getting concerned. And since we are
(24:21):
on a podcast where we usually talk about software development,
you know, one of the things that really concerns me
is when people ask, you know, these systems to write
code for them. But then again, you know, if
you do that, well, if you can actually
read through the code and you have an
idea of what you're looking at, well, that might actually work.
But often you hear people... so yeah, I know you
(24:43):
talked about vibe coding already, and for me it
is a pejorative. I think it sounds like a
really, really bad idea, because if you don't know, if
you don't know how to code, or if
you ask this system to write code in a language
that you don't really understand, then how do you know it works? Right?
Speaker 4 (25:02):
Well, the compiler has a say, if you are writing
in a language that actually does compile. And a lot
of people, often they get it to write
JavaScript or Python or something like that for them, and
those languages don't even compile.
Speaker 1 (25:16):
The point is it's something that they don't know. Yeah, right.
Why would I ask an LLM or whatever,
an agent, to write me an assembly program because I
think it's going to be faster, when I don't read
assembly and I can't verify it, and I can't step
through the code, and I don't know what the heck
that thing does. It might look like it works, but
I ain't going to run that thing, right? I'm gonna...
(25:39):
If you ask an agent to write code in a
language that you don't know how to verify, you get,
you know, you get what you get, right? You get
what you pay for. You basically deserve it.
Speaker 2 (25:54):
That being said, I'm now having experiences with very experienced
software developers where we spend an entire day working through
a sprint of code that we estimated would have
been six weeks' worth of work, and then knocked it
out in a weekend using these tools. Yep, yeah, right.
Like but in the hands of skilled people who understand
(26:16):
what they're doing and are working hard with these tools,
you can get extraordinary results. Not incremental results,
but literally weeks of work in days.
Speaker 1 (26:27):
We just heard, and I don't remember if this was
talking to you, Richard, or somebody else, might have been
Brian McKay, that he had a guy that was in
a meeting, a two hour meeting about a spec and
about building a prototype, and by the end of the
meeting he had it done.
Speaker 2 (26:44):
Right. That seems to be becoming more common. Again, known problem space.
You know, these were forms over data problems, so
they were pretty automatable anyway, and with someone who knew
the tools and the language well, and they have a
put-together assembly, their productivity is astonishing.
Speaker 3 (27:03):
Yeah, but again, how do you measure productivity in software development?
Because it seems to me that we are forgetting that
lines of code is not a measurement of productivity, you know.
Speaker 2 (27:14):
Know, Yeah, this is delivering features to customers.
Speaker 3 (27:17):
Yeah, and that makes a lot of sense, of course,
if you can measure that. But that's a whole different discussion.
Whether that's... because one feature is not necessarily equivalent to
another feature. You know, some features are big and some are small.
But that's probably a different discussion.
Speaker 2 (27:33):
No, but I think it's a really valid one, that
there's a bar here. These tools seem to be
able to handle things at a bar, and above that bar
they cannot.
Speaker 1 (27:41):
Yeah, above that bar you have to sort of break
it down into, you know, bite sized pieces for them.
But that's how I like to work anyway, you know. Yeah, yeah, yeah.
Speaker 3 (27:51):
Of course. But I'm still wondering whether we
can trust these things even if we look at them.
Because, with the story that you told, I
don't know exactly the details of it and so on.
But one of the questions I would like to
ask when people do something like that is,
how do you actually know that the software works?
How did you decide that that software
(28:13):
worked in that particular case? What were the decision criteria there?
Speaker 2 (28:18):
Oh, I mean, again, they'd also built a set of
test suites. Yeah, you know, I saw that these features
need to be tested this way, and measured. You know,
he did the complete coding solution, including the security evaluation,
like all of the different pieces. Like, you didn't just
spit it out. This was not vibe coding. No, this was
a thoroughly thought out architectural solution.
Speaker 3 (28:38):
But who wrote the tests?
Speaker 2 (28:40):
With the tools.
Speaker 3 (28:41):
How do you, why do you, trust those, then?
Speaker 2 (28:44):
Well, because you could see the code, right, there's no
secrets here. Tests are pretty straightforward to understand.
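An illustrative aside on this exchange: part of why generated tests are reviewable is that a small unit test reads almost like a specification, so checking it is much cheaper than writing the code it covers. This C# xUnit sketch, including the hypothetical RefundCalculator class it exercises, is invented for illustration and was not part of the show.

```csharp
using Xunit;

public class RefundCalculatorTests
{
    // A generated test is easy to audit: arrange, act, assert,
    // with the expected behavior spelled out in the method name.
    [Fact]
    public void FullRefund_WithinThirtyDays()
    {
        var calculator = new RefundCalculator();

        decimal refund = calculator.Calculate(price: 100m, daysSincePurchase: 10);

        Assert.Equal(100m, refund);
    }

    [Fact]
    public void NoRefund_AfterThirtyDays()
    {
        var calculator = new RefundCalculator();

        decimal refund = calculator.Calculate(price: 100m, daysSincePurchase: 45);

        Assert.Equal(0m, refund);
    }
}

// Hypothetical subject under test, included so the sketch is self-contained.
public class RefundCalculator
{
    public decimal Calculate(decimal price, int daysSincePurchase) =>
        daysSincePurchase <= 30 ? price : 0m;
}
```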
Speaker 1 (28:50):
Yeah, I guess the thing that we can agree on
is if you let it get away from you, right,
and you don't follow up on every change your AI
is making for you and test it and own it
and, you know, observe it, and you just let it
go wild, you're going to lose control. And so
staying in control, I think this is the key.
Speaker 2 (29:10):
The question you keep asking is why do you trust it?
It's like, don't, don't trust it. Yeah, exactly. Yeah. Look,
I already do distributed development. I have people contributing to
my projects that I never meet, that I only interact with,
you know, through issues on GitHub. You don't trust them either, No,
but you evaluate the code.
Speaker 3 (29:28):
You review the code.
Speaker 2 (29:29):
Yeah, yeah, that's the job. But the reality is it's
still a force multiplier to have multiple people contributing to
a project. It takes less time to review code than
it takes to write it.
Speaker 3 (29:39):
And I do not disagree with that. That's reasonable enough,
but my concern is still that, you know,
if we have an output of code that is multiplied,
you know, tenfold, one hundredfold in comparison to what we
had a couple of years ago, then we should also
(30:00):
spend that much
more energy on actually reviewing the things that are being produced.
And I'm not really getting the impression that that's the case.
Speaker 1 (30:12):
So it's the case at my house, I can tell
you that.
Speaker 3 (30:16):
Yeah.
Speaker 2 (30:18):
Well, but again, you know, this is
also somewhat self fulfilling. Those who trust
these tools, right, will get burned. Absolutely.
Speaker 3 (30:26):
Yeah, that's also what I'm concerned about.
And we can just hope that it's just some simple
forms over data, and they're probably only
hurting the company that actually owns that software. But what if
we're actually beginning to see people, you know,
writing, you know, operating systems or systems for
(30:49):
controlling hardware or elevators and medical systems and so on.
And then I'm getting a little bit concerned. That's
probably not going to happen this year. But well, in
a couple of years, we'll see. Those people
who do use those systems at the moment, some
of them will graduate to writing those kinds of systems,
and I'm just concerned that they're probably going to take
(31:11):
some of their bad habits with them.
Speaker 2 (31:12):
Without a doubt, I think, yeah, let's do the break
and then I want to dig into the next tier
of this problem, which I think is the junior developer.
Speaker 1 (31:21):
Yeah, okay, and we'll be right back after these very
important messages. Did you know that you can work with
AWS directly from your IDE? AWS provides toolkits for Visual Studio,
Visual Studio Code, and JetBrains Rider. Learn more at
AWS dot Amazon dot com, slash net, slash tools. And
(31:47):
we're back. It's dot net rocks. I'm Carl Franklin and
I'm Richard Campbell, and that is Mark Seemann, and we're
talking about AI concerns. And just as a reminder, if
you don't want to hear these ads, you can pay
five bucks a month and become a patron at patreon dot dotnetrocks
dot com. You'll get an ad-free feed.
Take it away, Richard.
Speaker 2 (32:05):
The folks that I'm seeing that will be successful with these
tools are very experienced developers. Yeah, you know, really,
these days they don't even write a lot
of code, and maybe they do some spikes and things,
but they're mostly supervising a group of developers. They are
the architects, you know, they run at a high level
of responsibility, and they're starting to see these tools act
as inexperienced developers under fairly strict guidance with constant code reviews,
(32:31):
but ultimately productive. And it begs a question like where
does the junior developer go now?
Speaker 1 (32:38):
Right? Are we the last generation of people who came
up as junior developers?
Speaker 2 (32:43):
Yeah?
Speaker 3 (32:44):
Yeah, that's my concern too, because well, I think you
said it pretty well, Richard. I'm not sure that I
have a lot of stuff to add to that.
Speaker 2 (32:53):
Actually, I mean, I am meeting young developers right now
that are pretty freaked out, and I wonder if it's
because we trained them poorly. Like, here
we are at this inflection point where things are changing. And
the funny part is when I have a conversation with
them about solutions, and I'm thinking back to the show
that we did together, Carl, with the Imagine Cup folks.
Speaker 1 (33:13):
Wow, what an inspirational group.
Speaker 2 (33:15):
Phenomenal. But you know what, they didn't care about tool stacks. Yeah,
do you remember, one of the ladies asked us,
like, you make a podcast about dot net? Like, why
would you do that?
Speaker 1 (33:25):
Right?
Speaker 2 (33:27):
Right? And I realized, like we've got old thinking. You know,
when it was a nine to twelve month commit to
get to an MVP of a piece of software, you
could spend a couple of weeks arguing over what stack
to use. Right, but with the productivity level that we're
talking about right now, who cares? Just take the tool
out for a spin. You know, there's so many different...
Even before the LLMs showed up, it was so
(33:50):
much easier to learn a new programming environment. It was
so much easier to experiment that those times are getting
shorter and shorter, and the stacks are just not that
different from each other. You know, fundamentally, they still draw
on screens and they still communicate over the Internet. Like
a lot of this stuff is the same, and if
you focus on the solution, you're fine.
Speaker 1 (34:09):
Like I don't.
Speaker 2 (34:10):
I wonder if we're not actually growing the right generation,
next generation of software developers, because they are not hung
up on the stuff that we're hung up on. Well,
at the same time, maybe they should be. I mean,
so here's here's a scenario, and this came from a
real story that I heard from somebody. Somebody is a
(34:31):
back end dot net developer, and a full stack dot
net developer does the front end, does Blazer and all
that stuff. Somebody comes to them and says, hey, we
want to use React for the front end, but still
keep the as peanut core back end.
Speaker 1 (34:45):
Can you do that? And they think, hey, I've got
ChatGPT or I've got the agent, and they say, yes,
yes I can. They know nothing about React, right, but
they generate all this code and it works. Would you
say yes? Would you say yes, I can do that,
(35:06):
or would I say no? I think you better get
a React programmer to run the tool.
Speaker 3 (35:11):
Yeah, I wouldn't.
Speaker 2 (35:11):
What would you do? Yeah? Yeah?
Speaker 1 (35:13):
And if I did it that way, would you accept
the code if I didn't know anything about React?
Speaker 3 (35:20):
Yeah, that's the other problem. So this reminds me
of an experience I had many years ago,
obviously, because it was long before the LLMs.
I was working with a customer of mine,
trying to teach them to move in small increments and
do test driven development and all these things that I
usually do, and it's actually working pretty well, you know,
(35:41):
trying to also give them an idea about how to
do pull requests and work in this sort of like
quasi open source way of working, with the small,
small iterations and all of that. And they hadn't told me
that they actually had an offsite group sitting in another country.
And you know, three weeks into my engagement with this customer,
(36:02):
I get this pull request from hell from this you know,
team sitting in another country because no one had told
them that I was actually now trying to you know,
change the things around, and they hadn't told me about
that system as well. So I get this pull request
and it's just like a you know, fifty thousand lines
of code or something like that, and I'm trying to
(36:23):
get the people that I was working with, you know,
going through and saying well, okay, if you write the code,
we need someone else to review it, and well you
can do it with pair programming, or we can do
it with pull requests.
Speaker 2 (36:35):
I don't really care.
Speaker 3 (36:35):
I just want to have more than one person actually
looking at this code. And then I get this thing
in from the outside, and I'm sort of like, okay,
what do I do with this now, because you know,
the usual reaction to something like that is to say, well,
looks good to me, because that's what you always
do with those big, big pull requests.
Speaker 2 (36:55):
The classic sign of I have not read.
Speaker 3 (36:57):
This exactly, exactly. And now, fortunately, I was actually, you know,
engaged by the CEO of the company, so I know
that I had pretty free, you know, range
of deciding what to do. So I wrote back and said, well, okay,
so I'm really sorry that you weren't in on
what it is that we're doing at the moment,
but I'm actually going to politely decline this pull request
(37:19):
because it's just too much. And I don't know whether
it works. And it's not that I don't trust you
in the sense that I think you have you know,
ill intent, but I don't even trust myself to write
you know, flawless code. So that's why we need someone
else to actually review it, because it's part of
this whole you know, process of figuring out does the code,
(37:39):
does the software actually work as intended? Does the code
do what it is that we wanted it to do? And
we can't do that if you just give me, you know,
all of that in one go. So I said, well,
I'm not going to take this one. But on the
other hand, you still have all the code, so I'll
work with you and try to break it down into
smaller pieces and we can get.
Speaker 2 (37:55):
It in that way.
Speaker 3 (37:56):
So we sort of made that work. But my point
here is that we're sort of in that situation now,
where we do get, you know, something that reminds us
of, you know, the pull request from hell. But
it's not written by a person anymore.
Now it's written by some statistical system. And
(38:17):
then if you just get all of that code in
one big, you know, chunk, you can't
really fit it in your head, and then you...
Speaker 2 (38:26):
Just supplies submit a series of smaller requests exactly.
Speaker 3 (38:31):
And if you can get these systems to
work in that way, which I suppose you could, then yeah,
that would probably help. So that's actually,
that's maybe
showing us a way out: trying to work
with LLMs as though they were, you know, contributors on
(38:53):
an open source project and then try to tease them
to do small increments that you can review.
Speaker 2 (38:57):
Well, that's certainly the way I look at it, because
it's yeah, you know, more and more we're in a
situation where all you can do is see the code.
You don't really see the person, and you certainly have
no way to measure the qualifications. Let's face it, we've
almost never had a way to measure qualifications in software
that was meaningful in any way. In the end, the
code had to speak, and then, if it could, you
might engage with the person. Yeah, you know, there's
(39:21):
an argument here that looks just like, oh, you don't
have a PhD in comp sci, you can't contribute to our
Speaker 1 (39:26):
Project, right, which is silly.
Speaker 2 (39:28):
In the end, let the code speak, and if you
can insist that the tool delivers the code in a
form that is viable for you to validate, because in
the end, it's your butt on the line, right, you
are the professional engineer. You're going to sign off on this,
then you have a chance of being able to use
these tools. And, you know, this seems like the
(39:49):
most solvable problem in the LLM space compared to what
people are talking about or playing with outside
of software. Like, at least software has pretty good tools
and governance. We already have a method of doing distributed programming.
Oh yeah, oh yeah, oh yeah, name other industries that
are even close to this ability.
Speaker 1 (40:08):
Yeah, before we leave software, just there's another gotcha for
junior programmers that more experienced programmers won't necessarily have. And
I've talked about this on dot net rocks before, which
is a junior developer will ask a question that they
think is the right question to ask when it might
not be. So they'll ask a question, you know, they'll
say something like, please make me a, you know,
(40:31):
a thread safe list component or list control, a
thread safe list class, right, that I can use, that's
completely thread safe, with locking and all that stuff.
And that's the wrong question to ask, because in dot net anyway,
there is one in the framework. So the first question
that should be asked is, hey, is there a way
(40:53):
that I can use a list in a thread safe manner?
And then, you know, if the thing is worth anything,
it will say, yeah, well, there's a thread safe collection, right.
But instead a junior programmer might go down the very
difficult path of doing one themselves because they don't know
what else is available. So whereas an experienced developer would
(41:15):
know and not ask that question, and a junior
developer would not, most likely. Isn't this
very teachable?
Speaker 3 (41:23):
Yeah?
Speaker 1 (41:23):
Sure it is, but how many hours is the
junior developer going to waste working on something, when
they, you know, check it in and you say, hey, you
know that there is something like this already?
Speaker 2 (41:32):
Yeah, again, there's a teachable moment around: make sure you
ask the question, what already exists? And, you know, yeah.
Speaker 3 (41:39):
We can still imagine a future version of large language models
that will probably be able to, you know, loop in that kind
of questioning, saying, oh, you're asking about this, but
do you really want, you know, a ground-up implementation
here, or can you use the one that already exists?
Have you looked into the framework to see whether there's
(42:00):
a reusable component? I mean, I could probably imagine
that even if they don't do that right now, they
could. They actually probably do. Yeah, yeah, yeah,
because it's a fairly common question to ask
if you're a senior developer anyway. So, anybody, yeah.
Speaker 1 (42:15):
You may need your own implementation because it may need
features that the base class doesn't have or isn't extendable to.
Speaker 3 (42:22):
So but usually not.
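For readers following along in code: Carl's thread-safe list example is the kind of thing the dot net framework already covers. A minimal C# sketch, assuming only the System.Collections.Concurrent namespace that ships with dot net; the ConcurrentQueue usage is illustrative, not code discussed on the show.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ThreadSafeListDemo
{
    static void Main()
    {
        // Instead of hand-rolling a List<T> wrapped in lock statements,
        // reach for the framework's built-in concurrent collections.
        var queue = new ConcurrentQueue<int>();

        // Many threads enqueue concurrently; no explicit locking needed.
        Parallel.For(0, 1000, i => queue.Enqueue(i));

        Console.WriteLine(queue.Count); // 1000

        // TryDequeue is the thread-safe way to remove an item.
        if (queue.TryDequeue(out var first))
        {
            Console.WriteLine($"Dequeued: {first}");
        }
    }
}
```

The point of the sketch is the senior developer's first question: prefer an existing, battle-tested collection over a hand-rolled locked list, unless, as noted above, you genuinely need features the framework types don't offer.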
Speaker 2 (42:23):
But maybe now, you're already seeing in
these tools that you can put in pre-prompts,
like, this should be included with every prompt, right? So
you could see the idea of an enterprise group
setting a set of rules, where any prompted code generation
has to follow these rules. But they don't always follow
(42:44):
the rules. That's the problem.
Speaker 1 (42:45):
I don't know if you've noticed this, but in my
system prompts and in my user prompts, if I say
don't do this, sometimes it will anyway. And that's
just the way it goes.
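A concrete sketch of the pre-prompt idea: GitHub Copilot, for example, supports repository-wide custom instructions in a .github/copilot-instructions.md file that is included with chat prompts. The rules below are invented for illustration, and, as Carl notes, models treat such rules as guidance rather than guarantees.

```markdown
# .github/copilot-instructions.md -- hypothetical enterprise rule set

- Target .NET 8; enable nullable reference types in all new projects.
- Prefer the framework's System.Collections.Concurrent types over
  hand-rolled locking.
- Every public method must ship with a unit test.
- Never hard-code secrets or connection strings; read them from configuration.
```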
Speaker 2 (42:57):
Yeah, and again, it'll often skip things as well if
it gets too complex, so you still see some development
needs to be done, more work on validating
the output.
Speaker 1 (43:07):
Right, yeah. All right, well, I'm ready for hara-kiri,
are you guys?
Speaker 2 (43:13):
I'm actually really excited about all this because it does
seem to empower more people to build software. As these
tools mature and they get more reliable. They say they're
as bad as they're going to be right now. I
don't see an exponential growth here. You know, we've basically
indexed all the Internet into these models. As it is,
(43:34):
there is no more data to consume, and so far,
training against stuff generated by these tools is degenerative. It
makes it worse, not better.
Speaker 1 (43:44):
Yeah, and most of the time that's not going to happen.
Like I've got confirmation that if I have a private
repo and I use the GitHub Copilot agent to generate code,
it's not going to train their models. They're not going
to train their models on the code that it generates.
In other words, my code isn't going to leak out
into the ether where, you know, somebody else is using it.
(44:09):
That's just a promise by GitHub. I don't know if it's true,
but that's a promise.
Speaker 3 (44:13):
But even if it's true now, you don't know what's
going to happen in the future.
Speaker 2 (44:17):
Yeah, well but.
Speaker 1 (44:19):
You mean when Oracle buys GitHub? Uh huh.
Speaker 3 (44:23):
Pretty sure it's not for sale. You know, on a completely
unrelated, you know, note: there were people who
were submitting their DNA samples to, you know,
23andMe. And I'm so happy I never
did that, because even back, you know, ten years ago,
I thought, that's not data that I want, you know,
sitting in someone else's, you know, repository that I can't control.
Speaker 1 (44:44):
And yeah, too late, I've already cloned you.
Speaker 2 (44:48):
Indeed.
Speaker 3 (44:49):
Yeah, so I think we should be, you know, a
little bit careful with trusting these things, you know, because
things change, you know. So even if you do
trust an entity like Microsoft or GitHub, you might
not want to trust it forever.
Speaker 1 (45:03):
But then again, Mark, you know, how long will it
be before your iPhone twenty four will be able to
sequence your genome just by taking a picture of a
hair follicle? Yeah?
Speaker 3 (45:15):
Yeah, maybe you should go back. I probably have a
Nokia here somewhere lying around that still works. You should
go back to those. That's where we'll all kind of end up.
Speaker 2 (45:25):
Yeah, I saw a modern flip phone the other day.
It was still an LCD, right? Like, it wasn't
the... I was very tempted.
Speaker 1 (45:32):
Yeah, the Samsung Z Flip. Well, there's an Android phone
that flips up and down, and I have one of those.
Speaker 2 (45:39):
Yeah yeah, but those are all those are smartphones. I'm
talking about a full retro flip phone. Oh wow, yeah, wow,
it was exciting. You do have those retro urges without
a doubt. Sure. Again, I feel like
the programming situation is the best case scenario. I think
we're more cautious, or more familiar with these models
needing validation and so forth. The concern space is
(46:01):
pretty much everything else happening in LLMs. Yeah, like you
get back to the computer says stuff. Yeah yeah, But
I also think we've gone through that. We used to
believe everything Google said too, Like this is just yet
another learning pattern that you have to go through the
experience of realizing these tools are based on knowledge we
(46:24):
have, and a lot of that knowledge is inaccurate, and
so when you, you know, quote it verbatim, you are
often wrong.
Speaker 1 (46:32):
So maybe, Mark, your critique is more of the population
than of the AI tools.
Speaker 3 (46:39):
Right, Oh, absolutely, you know.
Speaker 1 (46:41):
Can we trust people to do the right thing with these?
And yeah, my answer is people are driven by incentives,
and if the economic incentives outweigh the moral incentives or
the ethical incentives, guess which one wins. It's just that simple.
Speaker 3 (46:58):
Yeah, yeah, that's bleak. But I think
I'm probably agreeing with you there.
Speaker 1 (47:04):
Unfortunately, Well, it comes down to the developer who says
yes to developing something with an AI in a language
they don't understand.
Speaker 3 (47:13):
Oh yeah. And it's not that it's a moral
excuse or anything, but usually what actually does happen is,
if you say no, someone else will say yes. So
it's going to happen anyway. And again, it's not
an excuse for doing something that's unethical. But
still, from a, you know, high level, you know,
(47:33):
just looking at society overall, that's still the mechanism that's
going to play out. You know, someone will do it.
Speaker 1 (47:38):
Yeah, I wouldn't say yes, just because I'd code myself
into a corner, you know.
Speaker 3 (47:43):
Yeah, but you're also senior, just
like, you know, Richard is. And, as Richard talked
about, you know, you can actually have success with these
things if you, you know, know
enough about programming. And even if you've never seen
a language before, you've seen other languages that are similar
enough that you can probably still get a good sense
(48:04):
of it. If you ask it to write, you know, if
you don't know Go, I don't know Go, if I
ask, you know, an LLM to write me something in Go,
I would probably have a fairly good idea about what
it is that it produced, and I'd have to look
up a few things and so on. But again the
problem is if you're not even a programmer from the beginning,
or if you're a junior, as we talked about, then
(48:25):
that gets a lot harder.
Speaker 1 (48:27):
But still, if it was, you know, a language like React,
you know, JavaScript. You know, if React is going
to do something, how do I know that it did
it the right way, or the most efficient way, or,
you know that there isn't a better way?
Speaker 2 (48:41):
I don't Yeah.
Speaker 1 (48:42):
Yeah, So what I would do is I would hire
a subcontractor that does React, and I would encourage them
to use the LLM and the agents, because they'll
be more productive, sure, especially if they're charging me by
the hour. That's another question. How ethical is it to
charge by the project versus by the hour?
Speaker 3 (49:01):
Now, fair enough, but still, it comes down to accountability,
because if you hire a subcontractor, you need to
trust that subcontractor to actually do the right thing.
Absolutely. And that's another
problem that we tend to have with AIs in general:
they're not really accountable. Right now, we don't have
(49:23):
any laws that govern how you know, who has responsibility
for the output of them. And as Richard said, we at
least know how to deal with software development,
but if we're looking at the broader picture of just
you know, asking it to do all sorts of other
tasks for us in the in the rest of the world,
you know, outside of software, you know who's accountable then,
(49:43):
and we don't know. They're not, so we're
sort of stuck with, you know whatever.
Speaker 1 (49:50):
We saw companies using their AI bots as an excuse
when the bot gave them bad advice. Remember that one, Richard?
I think it was an airline thing, with a refund or something.
Speaker 2 (50:02):
Yeah, that was the Air Canada incident. Yeah, where
a bot told a customer, you'll be able to
get a refund on that. So they went ahead and
did the thing. When they went to get the refund,
they were refused and ultimately ended up in front of
a judge, and the judge said, you used the bot as
if it was an employee. If an employee said that,
you'd have to make it true. So the bot qualifies. Yeah,
(50:23):
and they had to issue the refund. And, you know,
the interesting part was considering that
Air Canada had made a publicly accessible tool that early on. Yeah,
and it, at least in Canada, now has set a piece
of case law in place. Not a bad thing. I'm
not unhappy with that outcome. No. It's a
(50:43):
cautionary tale to other companies, right: test, and be prepared
to pay for the consequences. Exactly.
Speaker 3 (50:49):
So that also means that if you can sort of
sense as a customer that you have
an LLM at the other end, you just keep asking it,
you know, variations of the same question until you get
something you like and then you pounce on that. Oh yeah,
I'll take that deal, thank you.
Speaker 1 (51:07):
Yeah. I remember asking, are you a bot? And
it said no, my name is whatever from blah blah blah.
Speaker 2 (51:13):
Yeah, sure, yeah, I.
Speaker 3 (51:15):
Think we should. I think that should should be a
law that says, well, they are not allowed to impersonate humans, but.
Speaker 1 (51:22):
You should know.
Speaker 2 (51:22):
I have to wonder if we had done a better
job on privacy in the first place, if we wouldn't
be dealing with quite as many issues as we've got. Yeah,
that doesn't mean we shouldn't continue to try. Like,
we talk about the challenges of the gen Z and
younger generations. It's this using despair as an excuse not to try. Yeah, hey,
it's not an excuse. You have to continue to be
(51:44):
the best person you can be. Be the best programmer, dancer, electrician,
whatever it is you are. Be the best you can
possibly be, and use the tools to your advantage. Don't
become a tool yourself. All right. All in all,
I'm pretty optimistic.
Speaker 1 (52:01):
Yeah, I actually am too.
Speaker 3 (52:03):
Well, I'm not, but that's just my disposition.
Speaker 2 (52:08):
I see some patterns happening again. It's like, oh boy,
we get to learn this problem again. Oh, you've been
trusting software? Huh. All right, here we go.
Speaker 1 (52:15):
We're gonna have to trust, but you know what, the
chickens will come home to roost, like we've said, Richard,
and you know the rest of the people will wake
up and say, oh we need to do this, we
need to do that. We can't just rely on these things.
So yeah, there will be some pain for sure. Yeah,
but ultimately it'll come down to people and the decisions they make. Yeah, yeah. Okay,
is that a show?
Speaker 2 (52:36):
I think it's a show?
Speaker 1 (52:37):
All right, Mark, thank you. It's always awesome talking
to you.
Speaker 3 (52:41):
It's a pleasure, all right. Thank you for having me.
Speaker 1 (52:44):
You bet, and we'll see you next time on dot
net rocks. Dot net Rocks is brought to you by
(53:09):
Franklin's Net and produced by PWOP Studios, a full service audio,
video and post production facility located physically in New London, Connecticut,
and of course in the cloud online at pwop dot com.
Speaker 5 (53:23):
Visit our website at d O T N E T
R O c k S dot com for RSS feeds, downloads,
mobile apps, comments, and access to the full archives going
back to show number one, recorded in September two thousand
and two.
Speaker 1 (53:38):
And make sure you check out our sponsors. They keep
us in business. Now, go write some code, see you
next time.