Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
How'd you like to listen to dot net rocks with
no ads? Easy? Become a patron for just five dollars
a month. You get access to a private RSS feed
where all the shows have no ads. Twenty dollars a month.
We'll get you that and a special dot net rocks
patron mug. Sign up now at patreon dot dotnetrocks
(00:21):
dot com. Hey, guess what, mate? It's dot net rocks
episode nineteen seventy nine. And I said that with an
Australian accent because I'm Carl Franklin here in Connecticut.
Speaker 2 (00:44):
And I'm Richard Campbell, and I'm down in Queensland, Australia.
Speaker 1 (00:48):
So and our guest is in Australia as well, So
it's gonna be a down Under show today.
Speaker 2 (00:53):
Say we got a Southern Hemisphere bias?
Speaker 1 (00:55):
Yeah, all right. Uh, so let's talk a bit about the show number. It's nineteen seventy nine. So what happened in that year?
Speaker 2 (01:05):
Well, let me just, oh, a fair bit. Yeah, a lot of things. Perusing the list, I've got a lot of space, well, a little bit of space, a lot of science, and a huge amount of compute. Well.
Speaker 1 (01:16):
A lot happened in Iran, China, and the Soviet Union
in Afghanistan. Three Mile Island happened, the Iran hostage crisis,
the Iranian Revolution, Soviet invasion of Afghanistan.
Speaker 2 (01:32):
That didn't end well, Nope for them, for anyone.
Speaker 1 (01:36):
US China relations: the United States severed diplomatic ties with Taiwan, established full diplomatic relations with the People's Republic of China. Oh, let's see, the Music for UNICEF Concert was kind of important.
Speaker 2 (01:52):
We're heading into the Live Aid era, right. Yeah, that's coming.
Speaker 1 (01:56):
The Dukes of Hazzard premiered on January twenty sixth. I remember my brother and I used to drive my mother crazy, all the car chases and stuff. She would like come in and turn the television off. It just drove her up the wall. I think that was the point of The Dukes of Hazzard, wasn't it?
Speaker 2 (02:12):
Driver.
Speaker 1 (02:12):
Yeah, we'll talk about space. So an anti nuclear demonstration happened on September twenty third. Nearly two hundred thousand people protested nuclear power in New York City, or just nuclear in general. But tell us about space and compute. What happened?
Speaker 2 (02:35):
Nineteen seventy nine is the year that the Columbia shuttle is delivered to Kennedy Space Center. It's still two years away from flying, but it has now been built. Enterprise is being retired as a museum piece after finishing its testing. It was too expensive to refit it into being a spacecraft, although that had been the original intention. Instead they'll use a different test article, which will become Challenger. Yeah,
(02:57):
but Columbia's now assembled, bunch of changes. It still has problems. It's the heaviest of the shuttles that will fly, and so it's limited in certain respects. But still, you know, we're making progress, although a couple of years away from flying.
Voyager one and two make their fly bys of Jupiter
in nineteen seventy nine, and Pioneer eleven does the very
first flyby of Saturn. Wow. So those are all new things.
(03:19):
And Skylab, in July nineteen seventy nine. Now, it had originally been launched in nineteen seventy three as part of the Apollo program. It was only intended for one hundred and forty days of use, okay, and so this is already six years later. They actually got one hundred and seventy two days of use over three missions; they extended it.
(03:42):
It had plenty of problems, and it had been left empty, although prepped for someone else to visit it. They had a welcome kit, the door had been left unlocked, all that sort of thing. And they had thought that they would be able to get the shuttle ready in time to keep it in orbit. In fact, they figured it would stay in orbit till at least the early eighties, but there was a solar maximum going on
(04:02):
then, and the additional solar radiation expands the atmosphere, so the drag on Skylab was greater than expected. Also, there was
a lot broken. You know, they used control moment gyros
for directional stabilization on Skylab, and at that point one
had already failed and another one was failing, and there
had been no plan to make them serviceable, so there
was really no way to fix them. They would literally
(04:23):
have to deploy a new set of gyros on some kind of attachment to the space station. So the thinking at this point was, you could just build another space station.
Speaker 1 (04:30):
So Skylab was manned at one point.
Speaker 2 (04:32):
Skylab had three missions sent to it, yes, okay: one, two, and three, and we talked about those, you know, in the past few shows when those happened. But now it had been empty for several years, the atmosphere had expanded, so it reentered sooner than expected, and it came down over the Indian Ocean and Australia, and.
Speaker 1 (04:50):
It lasted a lot longer than my brother's and my snow sculpture, referring to the blizzard of seventy eight that we talked about in last week's.
Speaker 2 (05:00):
In the previous episode. One other science one before we
get into the computing stuff, because the computing stuff is so extensive, is nineteen seventy nine is the year that we first found hydrothermal vents. So this was the Rivera Submersible Experiments off the southern tip of Baja California, about eighty five hundred feet of water with, you remember, the Alvin
(05:22):
submersible from Woods Hole. So they were looking around for underwater
volcanic activity and came across these things we now know
as black smokers. So these were jets of black material coming out of the ocean floor, extremely hot, three hundred eighty degrees Celsius, like seven hundred degrees Fahrenheit, in the deep, dark abyssal parts of the ocean. And
(05:46):
they're surrounded by life. There are tube worms and kinds of crabs and all sorts of things that are feeding off of the heat and the minerals that are pouring out of these black smokers. And it was a revolution in thinking around where life could emerge. There was this belief that you needed to be a certain distance away from the sun and have a planet with water and all
(06:06):
these things. And here was this completely dark place that
had life emerging around it. And it speaks to the
idea that we talk about these days, that perhaps Europa, in orbit around Jupiter, with enough heat, could possibly
have life under that ocean, underneath the ice cap that
is Europa. So that all begins in seventy nine with
(06:28):
its discovery. Nobody thought they were there. This was a find. So it's an important moment.
Speaker 1 (06:32):
You know what I remember fondly about this period? National Geographic magazine. There were some great photos, especially from those space missions, just amazing, amazing stuff. And as a kid, I was just rapt by National Geographic.
My mother recently passed and we're cleaning out her attic,
(06:53):
and it turns out they saved all their National Geographic magazines going back to the thirties.
Speaker 2 (07:01):
Wow, that's crazy. Yes. So here's a weird one about those black smokers. You'll see this in National Geographic magazine: those tube worms are brightly colored. They're white and red. It's like, it's dark down there. Why do these things have colors? Yeah, why do they? All right. Let's talk about computers. Nineteen
seventy nine: the release of the Atari four hundred
(07:21):
and eight hundred, also the TI ninety nine. Isn't that a scientific calculator? No, no, the TI ninety nine was a computer made by Texas Instruments. Yeah, I had games for it back in the day. The Motorola sixty eight thousand processor is released, and a couple of online things: the origin of CompuServe. Oh yeah, I loved CompuServe.
(07:45):
So originally a subsidiary of Golden United Life Insurance out of Columbus, Ohio, which actually started in nineteen sixty nine renting time on PDP tens. But in seventy nine they added dial up so you could dial in. It was actually internal customers primarily, but they started opening up to consumers, and it didn't go particularly well. The
(08:06):
company gets acquired in nineteen eighty by H and R Block for twenty five million dollars. They have less than a thousand users at that time, but by nineteen eighty four it'll be one hundred and ten thousand.
Speaker 1 (08:15):
Weren't they owned by Sears at one point?
Speaker 2 (08:17):
Yeah, it went through many hands before it ended up at AOL.
Speaker 1 (08:20):
But I just remember the snap packs, which had, you know, a code, and it had the password, explore plus world. Right? You remember that?
Speaker 2 (08:29):
Yeah? Yeah, and so much fun. This is also just the beginning of modems in general. So this is when I first got a modem as well. And for better or worse, the Vancouver area had a ton of BBSs very early on. It was just ways for geeks to connect to each other. It's also when the first computer worm was built, by John Shoch and Jon Hupp at Xerox PARC,
(08:50):
largely by accident, a propagating piece of software across networks. And so there was your first computer worm. But by far, without a doubt, the most important thing for computing in nineteen seventy nine: VisiCalc. Dan Bricklin. The precursor to all the spreadsheets. Yeah, and the first real, you know, at one point personal computers were described as
(09:11):
an accessory for VisiCalc. Right, this was the point of the personal computer. First released in seventy nine on the Apple two, and IBM cites in their history of the IBM PC that that product shipping made them accelerate the development of the IBM PC to get a version of VisiCalc running on it. Now, the IBM PC made sense and they
(09:32):
got it out in nineteen eighty one. The story of VisiCalc is also one of much drama too, because a former VisiCorp employee, Mitch Kapor, leaves and forms the Lotus Corporation and creates Lotus one two three, and then the Electronic Frontier Foundation. Yeah, he was part of that, and that's nineteen eighty three. And what was
(09:52):
the claim to fame for Lotus one two three? Totally optimized for the IBM PC. And that of course will later lead to Multiplan by Microsoft, which will eventually become Excel. Yeah.
Speaker 1 (10:03):
I remember my father bought a TRS-80 Model four, and that's kind of my first foray into computers, and he bought it for VisiCalc.
Speaker 2 (10:10):
Yep. VisiCalc was available for it, because.
Speaker 1 (10:11):
He did his taxes and bills and everything on VisiCalc. And I remember, once a month, him getting that thing out and putting it on the dining room table, and we didn't see him for a few hours.
Speaker 2 (10:21):
You know, really, up until now personal computers have been toys. Yeah, right. This is the product that made it actually a business machine, and important, and changed everything. Yeah.
Speaker 1 (10:34):
Still a critical tool in the arsenal today.
Speaker 2 (10:36):
Oh yeah, spreadsheets, spreadsheets. Whole companies built around spreadsheets without
a doubt.
Speaker 1 (10:40):
You have a great joke, don't you, Richard? About you and your wife and spreadsheets?
Speaker 2 (10:44):
Oh, yeah. She's the industrial engineer, I'm a programmer. When we argue, it involves a spreadsheet. Yeah, and it's only because that happened, right. We were arguing about how to redo a deck, and in the end resolved the argument with a spreadsheet. I'm like, this is who we are. It's great, this is reality for us. Awesome. So is that it? That's it. All right.
Speaker 1 (11:05):
Well, with that, let's roll the crazy music for Better Know a Framework.
Speaker 2 (11:15):
Man, what do you got?
Speaker 3 (11:16):
Well?
Speaker 1 (11:16):
I realized when we were recording last week's show that this might have been a better Better Know a Framework for that. But Technitium Software has a DNS server written in C sharp that's open source, and their tagline is: self host a DNS server for privacy and security, block ads
(11:40):
and malware at the DNS level for your entire network. And Technitium, I don't know how you say it, tech-nit-i-um, Technitium probably, technician, who knows. Technitium DNS Server is an open source authoritative as well as recursive DNS server that can be used for self hosting a DNS server for privacy and security. It works out of
(12:02):
the box with no or minimal configuration and provides a user friendly web console accessible using any modern web browser. Cool, how about that?
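If you want to sanity check a self hosted resolver like that from C sharp, here's a minimal sketch that hand builds a DNS question and sends it over UDP. The 127.0.0.1 port 53 endpoint and the example.com name are assumptions for illustration; this isn't from the Technitium docs, it's just the standard DNS wire format.

```csharp
// A minimal sketch: hand-build a DNS "A record" question and send it over UDP
// to a self-hosted resolver. The endpoint (127.0.0.1:53) and the name
// (example.com) are assumptions for illustration.
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Text;

class DnsQuery
{
    static void Main()
    {
        var name = "example.com";                            // hypothetical name to resolve
        var server = new IPEndPoint(IPAddress.Loopback, 53); // assumed local DNS server

        // 12-byte DNS header: ID=0x1234, flags=0x0100 (recursion desired),
        // QDCOUNT=1, all other counts zero.
        var packet = new List<byte> { 0x12, 0x34, 0x01, 0x00, 0x00, 0x01, 0, 0, 0, 0, 0, 0 };

        // Question name as length-prefixed labels, then the root terminator.
        foreach (var label in name.Split('.'))
        {
            packet.Add((byte)label.Length);
            packet.AddRange(Encoding.ASCII.GetBytes(label));
        }
        packet.Add(0);
        packet.AddRange(new byte[] { 0, 1, 0, 1 });          // QTYPE=A, QCLASS=IN

        using var udp = new UdpClient();
        udp.Send(packet.ToArray(), packet.Count, server);
        var response = udp.Receive(ref server);

        // ANCOUNT (number of answer records) lives at bytes 6-7 of the header.
        int answers = (response[6] << 8) | response[7];
        Console.WriteLine($"Got {response.Length} bytes back, {answers} answer record(s).");
    }
}
```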
Speaker 2 (12:11):
Yeah? Yeah, interesting. And we always have the argument of, like, what DNS server should you run, because you know, there's plenty of free ones out there. But this is a free one too, guys. It's just open source, so good stuff.
Speaker 1 (12:21):
But if you think about it, like, your basic Windows LAN has everything except the DNS server. And back in the day we were editing hosts files and putting IP addresses and names in there, and then you had some other protocols on top of that that worked with anything but TCP/IP, right. So you know, this is kind
(12:42):
of an important thing if you don't want to go outside of your network for DNS. Yeah, there you go. You're gonna know if you need it. Yeah, we're not going to try to sell you on it.
Speaker 2 (12:52):
No. And without a doubt. It's like, yeah, it's one of those things. Just learn to configure it correctly. It's important, right. You know that whole running joke of: it could be DNS. It's always DNS. Definitely, definitely DNS. It's impossible for it to be DNS. It was DNS. Okay, well that's what I got today. Richard,
(13:14):
who's talking to us? Awesome. I grabbed a comment off show nineteen seventy four, the one we did with Den Delimarsky when we talked about GitHub Spec Kit. Yeah, and that set off a huge conversation, lots going on down there. And I've read comments from this show before, and I'm gonna read another one. This one's from Richard Cox, who said: I just finished listening to this. I think there's one part that your guest got wrong. If LLM inference work
(13:35):
stops, thanks to the bubble bursting (talking about the AI bubble), the models will start to degrade, as new input, like code using new versions of tools, will not be included. Then the output will not use those new capabilities. I.e., over time, as the rest of the world moves forward, the LLMs will increasingly be stuck in the past. Yeah,
I mean, I get the sentiment that, you know, when
(13:57):
the bubble bursts, a certain amount of work's going to go away. I would argue there's too many models now. I'm sure there'll be a few models that'll continue to go forward. But we're seeing such incredible overspending at the moment, right. It'll be interesting to see what happens. And model building is getting easier, and I would also argue less important,
(14:17):
in the sense that it's just going to be part of the flow. It'll be interesting to see what models emerge post bubble burst, right. And boy, there's lots of noise about the bubble bursting these days. It'll be interesting to see what happens. Oh, I know it, but fair point.
You know, the interesting thing about this is this software in some ways does degrade, because the
(14:39):
world keeps moving on and these things need to keep being regenerated and optimized. So Richard, thank you so much for your comment, and a copy of Music to Code By is on its way to you. And if you'd like a copy of Music to Code By, write a comment on the website at dotnetrocks dot com or on the Facebooks. We publish every show there, and if you comment there and we read it on the show, we'll send you a copy of Music to Code By.
Speaker 1 (14:55):
And of course, Music to Code By I developed a while ago to provide music that's neither too boring nor too distracting: twenty five minute tracks, and there's now twenty three tracks. And you can get it at musictocodeby dot net, the entire collection in WAV, FLAC, and MP3 formats. Okay, let's introduce Callum Simpson. He is
(15:20):
a solution architect with fourteen years of software development experience as of this recording, and recently promoted VP of AI at SSW, an enterprise consultancy based in Australia. Our friend Adam Cogan runs that. He spends most of his time using AI to help deliver projects faster or building projects that use AI,
(15:43):
and usually both. Despite being more productive than ever, he claims to have not written a complete line of code in over a year. He's also the product owner of SSW YakShaver, an AI product he'll tell us about today.
Speaker 3 (16:00):
Callum, thank you very much, Carl, and great to be here.
Speaker 2 (16:03):
Hey Richard. Hey man, great to have you. I was just down at SSW as part of their brainstorming day, so we had a chance to hang out.
Speaker 3 (16:10):
Okay, what did you think about it?
Speaker 2 (16:11):
Yeah, that's great. You know, it's not my first brainstorm. It's always fun. I think you saw my close, where I commented on, you know, SSW is clearly all in on the AI space, and more than anything, what I saw was a bunch of different teams trying to find a way to rein the LLMs in, to put parameters around them so that they focus on the things
(16:33):
that are important productivity wise, whether that's around controlling architecture, deployment strategies. There was a group that was talking about UX frameworks that the LLMs would also be pressed against. It's like, this is how you build UI when you're building code. And I just thought it was really clear thinking from a group of really smart developers trying to get the
(16:54):
most value from these tools.
Speaker 3 (16:56):
That's awesome, that's right. Yeah, we're trying to use it, I guess, in every way we possibly can to get as much value out of it. So anything that can be solved by AI, we are trying to do it.
Speaker 2 (17:08):
Cool.
Speaker 1 (17:08):
I am so curious about YakShaver. First of all, what a funny name. Wow. And I think.
Speaker 2 (17:16):
Adam Cogan was involved. What do you think was going
to happen?
Speaker 1 (17:19):
I'm sure, yeah. But there's probably just a very
small handful of people in this world who have ever
attempted to shave a yak.
Speaker 2 (17:28):
Well, first you have to own a yak, don't you.
Speaker 1 (17:31):
Well, not necessarily. You could be a professional yak shaver and go around from yak to yak to yak, you know. I mean, they obviously have yak milkers who make butter from yak cream or whatever.
Speaker 2 (17:46):
I've had yak milk in my coffee, in my tea.
You know, it's the thing when you're in Nepal.
Speaker 3 (17:51):
I was gonna say it's potentially an alternative name. But the name itself came from, I think, a guy called Carlin Vieri in the nineties, who was a PhD student at MIT, and he got the name from an episode of The Ren and Stimpy Show. In that show,
(18:13):
there was a Yak Shaving Day, and basically Yak Shaving Day was sort of like this thing where these people do this series of ridiculous tasks.
concept of yak shaving is sort of like when you
start doing your main mission and then you realize, all right,
in order to achieve this mission, I need to go
(18:35):
down a side quest, and then in order to do
this side quest, I need to go and do another
side quest first before I can come back and finish
my first side quest to finish the main goal. And
then you end up going down this, like, ten layers of causality, and whatever you're doing has absolutely nothing, apparently, to do with the original goal, but you have
(18:55):
to do it to unblock all your other things to get back up to the original task.
Speaker 1 (18:59):
And then every new layer you say to yourself, should
I really be doing this exactly?
Speaker 3 (19:04):
What am I doing with my life?
Speaker 2 (19:09):
Better uses of my time?
Speaker 3 (19:12):
Exactly. And so that's sort of what we're trying to do with YakShaver: cut out as much of that sort of busy work as possible. Now,
obviously we are software developers, so most of the busy
work we do has to do with or at least
a large part of it has to do with putting
(19:32):
items in backlogs. So that was the sort of original concept.
It's, you know, when you've got an issue, when you see a bug on a website, what do you have to do to actually report that bug to the right team? You know, you need to figure out which backlog does this PBI belong in, who are the stakeholders of this project,
(19:57):
all that sort of stuff, And if you're on a
call with a bunch of important people, you can either
sort of skip over the problem because you know you've
got better things to talk about, or you can tell everyone,
all right, wait for five minutes, I need to go
figure out all these details, put in the right backlog
all that stuff first. So it is a tough problem.
(20:17):
And obviously we don't want to skip over issues when
we see them, but we also don't want to waste
everyone's time. So that's that's sort of the whole idea
of what we're trying to achieve or the problem we're
trying to solve.
Speaker 1 (20:29):
So, YakShaver. Is it sort of an agent kind of thing where you give it permission to do stuff, you know, like an MCP would? And why wouldn't you just use an MCP?
Speaker 3 (20:44):
Yeah, good question. So when YakShaver was first conceptualized, it was, I think, almost three years ago now. So back then we didn't have MCPs. We had only just gotten, you know, custom GPTs that can do tool calls and stuff like that. But yeah,
(21:05):
the idea of an MCP was still very far off.
So back then the idea was, instead of just, you know, letting the agent do whatever it wants, because obviously models weren't as reliable then, we would force it through a pipeline where there's a couple of branching points where it
(21:25):
can either do this or do that. Like, are you trying to report a PBI or send an email, for example, as one of those branches. And we make each.
Speaker 1 (21:34):
Of the all right, So it's not you're not giving
a total agency, You're you're guiding it.
Speaker 3 (21:39):
As you say, exactly. That's sort of the YakShaver V one. You have mentioned MCP. We are currently developing a V two that will indeed use MCP servers.
Speaker 1 (21:55):
Okay, video context, what's that?
Speaker 2 (21:58):
Yeah, exactly.
Speaker 3 (21:59):
So the main, I think, distinguishing factor between YakShaver and maybe firing up, you know, an MCP host like Claude Desktop and just saying, hey, go create an item in the backlog, is with YakShaver, we use a
video as the input. So what you'll do is, you know,
you'll share your screen and then you'll speak into your
(22:22):
microphone and you'll say, hey, I'm just on this website.
There's a URL, and here's the problem that I've got
on the website, and then stop recording. And then basically
the AI will obviously analyze the transcript, will analyze what
it can from the screen that you shared, and then, interesting, go ahead, it'll figure out where to put
(22:44):
it and how to format it, all that stuff.
Speaker 1 (22:46):
So it's a little more powerful than something like play rate,
which can go and navigate a site and all that stuff.
But what if you're not using a website, What if
you're using a piece of software. Right, yeah, exactly, use
a video screen. I'm sure it's brilliant.
Speaker 3 (23:01):
That's right. And some creative people have used it even completely unrelated to software, which was an unexpected use case. You know, people like the building maintenance team, when they find, you know, an issue with the coffee machine, for example. Yeah, record a video of it and it gets filed off
(23:23):
in the appropriate place. Obviously we're using backlogs for the office maintenance, because Adam's the boss.
Speaker 2 (23:32):
But in the same way that people use GitHub for recipes and things like that, it's useful to have a coherent documentation chain for any of these things.
Speaker 1 (23:39):
Well, ChatGPT is good for that, and I find that ChatGPT is more consumer oriented that way. Like, I can take a video, and I could say, hey, what is this thing? You know, here's a couple of pictures I snapped of my laptop, and it figures out what it is from that. Here's the problem I'm having. I take a, you know,
(24:00):
video of the screen, and it can, you know, diagnose problems that way. But what I don't like about ChatGPT is it doesn't understand the context of code, right, for example. And I don't want it to, right? Yeah.
Speaker 2 (24:14):
Yeah, yeah.
Speaker 3 (24:15):
So one of the, I guess, main things that we're doing with YakShaver, apart from the video input, is also the sort of organizational consistency. So what you'll do with YakShaver is set up, you know, these are all my projects, these are all my people and who's associated with which project, and these are the formats that I like to have
(24:36):
my PBIs created in, all that sort of stuff. You'll sort of define it, and that way everyone who uses it ends up with a consistent result. Because obviously, if everyone just used ChatGPT and recorded a video of whatever, with none of that consistency, then you're going to end up with a completely different thing
Speaker 2 (24:57):
every time. The world is full of abandoned video. That's just sort of a normal thing, right. Like, this is I think the important part in all of these things, and I'm not going to point at YakShaver per se, but it's like, what do you do with it after? Like, where does it go? How does anybody ever look at it? Although you've always got the quality problem, right. Like, you see this recurring theme with AI
(25:19):
generated text in general. People are writing their corporate emails using these tools and they're overlong and overcomplicated, and in some ways you're pushing the problem down the line, right. Yeah, well, I guess this is the challenge with anything related to these AI tools: how do you make sure the thing you're making is concise,
(25:40):
that the next person down the line isn't being dumped with a lot of unnecessary work?
Speaker 3 (25:44):
Yeah, well, I'm glad you mentioned that actually, because one of the other really cool things about YakShaver is not only does it generate the PBI, but it also puts the link to the video that you recorded on the PBI. So what it means is that the developer who picks up that issue and works on it, they, if they want,
(26:05):
can just ignore all the AI generated PBI text and just watch the video of the user explaining the problem. And what do you mean when you say PBI? Product backlog item. So that's basically the report of the bug. So if we've got a bug on a website, we've recorded a video showing that bug, and then the developer who picks up
(26:27):
that issue to fix it can watch the video.
Speaker 1 (26:32):
That's very Adam Cogan. He used to and maybe he
still does. But when we had issues with our website,
we would get emails from Adam that had screenshots that
were annotated with you know, things circled and whatever, and
you know this should be that, and that should be this,
and yeah he's always been that way.
Speaker 3 (26:54):
Yeah, yeah, yeah. But often there's heaps of detail in the video that hasn't actually been transcribed in the text, right, and just being able to watch the video makes it so much easier, because you don't have to go back to the original reporter and say, hey, what are your reproduction steps, and all that sort of stuff. You can just watch the video.
Speaker 2 (27:12):
Yeah, anytime you can avoid having to go struggle with a reproduction is good. It's all about capturing that. But it's interesting to think about this from a workflow perspective of what's the next thing. We're trying to avoid shaving the yak here, right? So are these just distraction items from a main thing? Because we also talked about the fact that there's many little things you need to
(27:33):
do before you can get on with the main thing, right. Right. So I guess some of this is just harnessing more people into the workflow, so when they get those things done, you can move forward.
Speaker 1 (27:42):
I'm curious about video transcription. I know that there are tools out there that do it, but the transcription is usually integrated into those things, right. Like, we use Camtasia a little bit here, and I have maybe an older version.
Speaker 2 (27:58):
I don't know.
Speaker 1 (27:58):
If the newer versions do transcription. But that is a huge thing for me. So if I have meetings, I know that Zoom can generate transcriptions, but it would be really easy to have a desktop application where you could just drop a video into it and it could transcribe the video. Yeah, for sure,
(28:22):
it seems like a simple thing. But then I could look it up. I could look up what we were talking about.
Speaker 3 (28:26):
You know, that's true. And I mean, that sounds like an interesting future feature that we can backlog. Maybe we could YakShave it right now, and your details about it will form the video, and then the developer who picks it up will have all the context of why you wanted it.
Speaker 2 (28:45):
Yeah, I like it. All right, take a break?
Speaker 1 (28:47):
Yeah, all right, Well this seems like a good place
to take a break, so we'll be right back after
these very important messages. Did you know you can easily migrate ASP.NET web apps to Windows containers on AWS? Use the App2Container tool to containerize your IIS websites and deploy to AWS managed container services with or
(29:10):
without Kubernetes. Find out more about App2Container at aws dot amazon dot com slash dotnet slash modernize.
And we're back. It's dot net rocks. I'm Carl Franklin. That's my friend Richard Campbell. Hey. And this is Callum Simpson from SSW, and he's down in Australia with Richard
(29:32):
right now. We're talking about YakShaver. And is this going to be an open source product? I know it's a desktop application.
Speaker 3 (29:39):
Yeah. So I mentioned earlier we've got sort of like a V one YakShaver and a V two YakShaver. So the V one YakShaver, as it currently is, is not open source. It's basically a cloud based pipeline, as I described before, where we sort of force everything through a pipeline with a few branching options. But what we really wanted to
(30:03):
do with V two is to make it an open source desktop application. The reason why we wanted to make it open source is because it's a desktop application that we expect users to install on their machine. So obviously it's not doing anything crazy. All it's doing is orchestrating MCP
(30:27):
servers from the local machine. So why not make it open source, so anyone can open the lid and see what's happening inside.
Speaker 1 (30:35):
It's really, really cool.
Speaker 2 (30:36):
Now, what are the key AI parts here? Is it just a transcriber? Like, what do you use an LLM for? Are there other generative AI parts? What's going on inside this thing?
Speaker 3 (30:45):
Are we talking about the V two, with the MCP servers?
Speaker 2 (30:48):
Yeah? Yeah, So basically.
Speaker 3 (30:52):
You'll record your video. We'll, as you said, transcribe the text of the audio of that video, and then we throw it over to the MCP orchestrator. And basically what that is doing is saying, all right, here's the transcript that the user has submitted. Here's the system prompt. So
(31:14):
inside YakShaver you can sort of define how you want it to work. We leave that sort of up to the user, but you would have, I guess, an organizational default that you can then customize if you so wish. And then YakShaver will basically take that and then start using the MCP servers that you made available to it
(31:36):
to get through whatever it is you're trying to do with that transcript. So, for example, you could give YakShaver the GitHub MCP server, and you could give it instructions to say, you know, figure out which backlog or project I'm talking about in my transcript, put this issue
(31:58):
in that backlog, make sure you include the link to the YouTube video in that issue, and use whatever PBI template is available in that repo as well. And when it approaches it, it will then use, you know, the GitHub MCP to search all the projects you've got available,
(32:18):
figure out, all right, based on that transcription, I think he's talking about that one, and then, you know, search that repo for the templates, find the right template, maybe have one template for reporting a bug, one template for creating a feature request, that sort of thing, and then fill out that template. And then we also pass
(32:38):
it. Originally, when you've recorded your video, we upload it to YouTube, and we pass in the link to the YouTube video, so it can then embed that YouTube video in the issue as well, as
Speaker 2 (32:50):
Part of it.
Speaker 3 (32:50):
So you start getting issues with videos, which is pretty cool. That's right. And we also give it a
couple of built in MCP tools as well. So we
have one tool that basically says, grab a screenshot at the specified timestamp. Because when we give it the transcript, it's, you know, got all the timestamps of
(33:13):
when each bit of text was said, so it can sort of figure out what the key moment or moments are in the transcript. You've got a thumbnail maker based on context. That's right. And then we've got another tool that can take that screenshot and then use a multimodal LLM to analyze
(33:34):
it, so it can sort of grab some context out of that screenshot as well. So in that way it's sort of able to analyze the video as well as what you've said.
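As an illustration of that screenshot-at-timestamp tool, here's a hedged sketch in C sharp that shells out to ffmpeg to grab one frame from the recording. The ffmpeg dependency (assumed to be on the PATH) and the method shape are assumptions; this is not YakShaver's actual implementation.

```csharp
// Hypothetical sketch of a "screenshot at timestamp" tool: extract a single
// frame from the recorded video at the moment the transcript flags as key.
// Assumes ffmpeg is installed and on the PATH.
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class ScreenshotTool
{
    public static async Task<string> CaptureAsync(string videoPath, TimeSpan timestamp)
    {
        var outputPath = $"frame-{timestamp.TotalSeconds:F0}.png";
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // -ss seeks to the timestamp, -frames:v 1 grabs exactly one frame.
            Arguments = $"-ss {timestamp} -i \"{videoPath}\" -frames:v 1 -y \"{outputPath}\"",
        };
        using var proc = Process.Start(psi)!;
        await proc.WaitForExitAsync();
        // The resulting PNG would then be handed to a multimodal model for analysis.
        return outputPath;
    }
}
```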
Speaker 2 (33:43):
It's very cool. I'm also thinking about deduplication. Often, you know, you build these tools so it's easy for people to report problems. They're going to report a lot of problems, and they're often going to report the same problems.
Oh yeah, for sure.
Speaker 3 (33:55):
So yeah, that's, by the way, one of the reasons why we really wanted to transition to this V two MCP approach, because a lot of people have a lot of requests like that, you know. And all you have to do to do it in the new world is you just say, in your prompt:
(34:16):
before you create a GitHub issue, make sure you search for any duplicate issues that already exist, and if they do, add this as a comment instead of creating a new one. Right. And yeah, that way it's just a prompt update, instead of having to go back to our pipeline and adding all these extra branches to handle different cases.
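As a concrete illustration, that kind of dedup rule is just a few lines appended to the system prompt. The wording below is ours, not YakShaver's shipped prompt.

```csharp
// Illustrative wording only, not YakShaver's actual prompt. The dedup rule is
// simply extra text appended to the system prompt the orchestrator sends.
static class Prompts
{
    public const string DedupRule = """
        Before you create a GitHub issue, search the backlog for existing
        issues that describe the same problem. If one exists, add this report
        as a comment on that issue instead of creating a new one, and include
        the link to the video.
        """;
}
```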
Speaker 2 (34:35):
Well, and it's tougher to get people to do that, because it's way easier to just create a new issue than to search the existing ones. So yeah, you can get the tool to do that for us. Thank goodness for that.
Speaker 1 (34:42):
Well, but you have to put that in the system prompt, you know. That has to be a rule, right? That's right. Don't create new issues unless there's nothing there already.
Speaker 2 (34:52):
Yeah, unless you think it's actually original. Yeah, it's really interesting. You know, we're talking about how generative AI is coming into the different aspects of software development. This is an aspect of generating issues intelligently and trying to get as much information to the developer as possible, whether it's for a bugfix or a feature. Of course.
Speaker 3 (35:13):
Of course. Yeah, that's right. Yeah, so I was going to say, originally, with the V one solution, this was another pain of the V one solution: we integrate with GitHub and Azure DevOps, but the majority of people we spoke to about it didn't actually use those tools. They use something like Jira or whatever, some other tool,
(35:35):
and it was just too hard to keep adding all these. Because, you know, every time we create one of these branching bits of logic in the V one, we had to also reproduce that same, you know, the same action or set of actions in each of the backlogs that we're integrating with, so GitHub and Azure DevOps.
(35:56):
We didn't want to have to keep doing that with Jira, Zendesk, whatever other tool people are using. So that was yet another argument why we moved to MCP. Because with MCP you can even have exactly the same prompt, but if you've got a different system that you're using for your backlog, you just swap the MCP server, and
(36:18):
as long as it has a vaguely similar action available, it'll work just fine.
Speaker 2 (36:23):
Yeah, way more scalable. That's classic V two stuff: creating the right set of interfaces for the next iterations, where people want to use it in more places.
Speaker 1 (36:31):
I know you guys had some discussions on the back end about whether or not it's a good idea to give people all this freedom and flexibility. So what was that discussion like, and what did you come out of it with?
Speaker 3 (36:45):
Yeah, I think it's still sort of an open question that we haven't got an answer to. Because I think it could potentially be detrimental: if you give people full freedom to do anything, suddenly they're sort of swamped by choices and things they can customize, and they end up
(37:08):
not knowing how to best use it. Whereas if you sort of force them down a path, it may not be the optimal path, but at least they're forced down the path and they'll figure out how to use it.
Speaker 1 (37:20):
Yeah, like some workflow templates or something like that you can choose from, exactly, instead of just giving people a blank slate. Also, if you think about it, you know, it's kind of irresponsible to just slap some MCP servers together and give them full agency to go do whatever you would normally do. I mean, that can open up
(37:40):
a huge can of worms. Oh yeah, cans and cans and cans of worms.
Speaker 3 (37:45):
Absolutely, you really need to be careful. So, I mean, there is a lot of power, and with power obviously comes responsibility.
Speaker 2 (37:52):
I've heard that.
Speaker 3 (37:55):
You need to be careful about, you know, surfacing untrusted third party data to MCP servers that can perform, you know, potentially destructive actions. There's actually sort of a running joke in the dev team: when someone's recording a video to be processed, someone in the background will
(38:17):
shout out, and delete the whole repo while you're there. Yeah, we need to build in some safeguards to prevent.
Speaker 2 (38:25):
That from happening.
Speaker 1 (38:27):
Used to be the running joke on dot net rocks: Alexa, delete my, no, not you, don't listen to me. Exactly. She's not quite sure how to help me with that. But you know, delete my account, you know, that kind of thing. Or send five hundred pounds of concrete to Richard Campbell.
Speaker 2 (38:50):
Yeah, still trying to clean up that concrete. Thanks for that. Very good. But you know, it's even that ongoing conversation with more and more AI: our role as shepherds of AI, goodness knows, is to constrain it, to put parameters around each of these things. And so same thing here. You want it to focus on particular issues, particular capabilities, and keep
(39:14):
limits on all the things that it can do, so that it does focus on the direction you wanted to go in.
Speaker 3 (39:19):
The other good thing about using MCP rather than a pipeline, I think, is that every action that the MCP server takes on your behalf is sort of done in your name, because you're connecting directly with your own credentials or whatever, if you use the GitHub MCP.
Speaker 2 (39:38):
So yeah, you need to.
Speaker 3 (39:39):
Take responsibility for everything that it does. You can't
just give it a random video and then cross your
fingers and hope it works.
Speaker 2 (39:47):
So that. Yeah. Well, you bring up a great point, because there's lots of conversation about, I think this was just at Ignite, where they are setting identities for agents, and that's almost like giving you an excuse, a lack of culpability. Oh, the software did it. It wasn't me. It's like, dude, it was your prompt to the software. We may call them agents,
(40:07):
but how much agency do we want to give them? They're working on our behalf.
Speaker 1 (40:10):
Yeah, it's a constant theme.
Speaker 2 (40:12):
Well, I just feel like these are unsolved problems. Like, I'm appreciating the work you guys are doing, Callum, just because you are making these experiments and using them yourselves and finding out, like, what works, what doesn't, what the limits on all this are. Because I think we're a few years away from really nailing down what these new workflows look.
Speaker 3 (40:31):
Like. Yeah, that's right, an experiment.
Speaker 1 (40:33):
At this point. You have some other AI products that you're working on at SSW?
Speaker 3 (40:38):
Oh yeah, we've got SSW Eagle I, okay, which is sort of a dystopian email analysis tool that basically checks all of your emails and makes sure that you are sort of, you know, adhering to all the SSW rules, and sort of gamifies that a
(41:00):
little bit as well. So you have like a leaderboard of who sends the most, you know, checked emails. And you
Speaker 1 (41:07):
Know, so it checks outgoing emails, not incoming, because that
would be a huge security risk.
Speaker 3 (41:14):
Yeah. Well, obviously you opt into it, right, So it's
not just checking any random email.
Speaker 1 (41:21):
You have it checking outgoing emails. Yeah, correct, that's what
it sounds like, but not incoming.
Speaker 3 (41:26):
Well, it's checking emails that you send to other people inside the company, right? So it's not just reading any random email.
Speaker 2 (41:32):
It's internal email. That's good. We did a Cogan Rules show in two thousand and six. That was fun. I remember that. Yeah, well, and you know, he's dead serious about that. The rules continue to this day. I think they're in the middle of a migration of them right now, if I recall from the brainstorming session. Right, yeah, that's right. Yeah. So this
(41:53):
is the idea of an LLM being able to parse stuff and saying, is this compliant? Are you following the rule set? Maybe making suggestions for what's incorrect? Yeah, that's very cool.
Speaker 3 (42:02):
Interesting. Yeah, that could be coming soon. The other brainstorming idea, the one that I was working on, was sort of an AI that constantly scans your site and looks for problems and sort of reports them before someone needs to YakShave them.
Speaker 2 (42:19):
Right.
Speaker 3 (42:20):
I thought that was a cool idea.
Speaker 2 (42:21):
Smart site testing, wow. Yeah, could we build out a tool that was really good at putting wrong things into text boxes?
Speaker 3 (42:33):
You know?
Speaker 2 (42:35):
You know those test guys, the ones that, you know, this is where I entered negative forty three and the whole thing blew up?
Speaker 3 (42:41):
That's right. That kind of tester is the one that just ignores the instructions and does whatever they want. Yeah, I'm sure if you get a Playwright MCP, you could just prompt it: you're a crazy person who ignores instructions and does whatever they want.
Speaker 2 (43:00):
Yeah, the phrase I don't want to feed an LLM is: ignore instructions. That just seems disturbing to me. Yeah, that's risky. Yeah, absolute trouble, without a doubt. How many folks worked on YakShaver? What did it take to get it to this point?
Speaker 3 (43:13):
We've got a team of about ten now. They're not all working constantly, obviously, because, you know, when we've got client engagements, people go off and do that. So we've got sort of ten coming and going. The core team, you could say, is maybe five people working on it. But
(43:33):
one of the very interesting things we've noticed recently is, well, I'm sort of operating as the product owner of YakShaver, and so as the product owner, I've been trying out AI, like completely AI driven development, with not even looking at
(43:54):
the code, just to see, is that a viable approach?
Speaker 2 (43:59):
So, vibe coding. I hate the phrase, but this.
Speaker 3 (44:02):
Is exactly vibe coding. Yeah, I mean, yeah, the concept
of vibe coding is interesting. I mean, people have different
definitions of it, right. Some people say, anytime you use
AI to write code, that's vibe coding, whereas others would say, well,
it's only if you you only talk to the agent
(44:23):
and you don't look at the output.
Speaker 2 (44:24):
Sure. But that's what Karpathy said at the time, right, when he came up with the.
Speaker 3 (44:29):
Phrase, exactly, exactly. But the term's being corrupted, I think. So a lot of people just say anytime you use AI to write code, you're vibe coding. I don't think that's good at all. Yeah. And I think it's actually important to draw that distinction, because, like we just said, you know, you need to take responsibility for the code that you generate if it's generated under your name. So.
(44:51):
But yeah. So the concept is, as product owner of YakShaver, I'm just vibe coding the features that I want to see in the tool. And then, rather than giving the team an issue that says, hey, I want this feature in YakShaver, I will basically vibe code the
(45:11):
feature itself and then give them the pull request. And then their job, instead of implementing the feature, is just to review the pull request.
Speaker 2 (45:19):
Figure out how badly you've gotten the LLM to mess things up, exactly.
Speaker 3 (45:26):
And I mean, obviously sometimes it does a terrible job, but I think more and more it does a good job, particularly at tasks that are sort of well constrained, or an implementation that is along the lines of some code that's already written, but just doing a slightly different thing, you know what I mean. So there's a
(45:48):
big difference between creating a completely new piece of code that may have some new architectural components and that sort of thing, which to me sounds rather risky to just let AI do whatever it wants with. But if you've already got all that set up and now you're just saying, add another, you know, vertical slice on this project, and just copy the one that's already there, I think it
(46:10):
does a much better job of that.
Speaker 2 (46:11):
It does speak to this idea that more mature software will be easier to maintain with these tools than very new software. With the one caveat: as long as it's good, you know, as long as it's written well, right. Right. So it's got to be written well in the first place.
Speaker 3 (46:28):
Because it's going to give you more crap, I imagine.
Speaker 2 (46:32):
I also wonder if the success of this has more to do with the scope of the feature or the quality of the prompt.
Speaker 3 (46:38):
Yeah, good, good question. And I think, in my experience anyway, a lot of people, when they try to do something with AI, you know, the people who I think get bad results, some people seem to get bad results even though they're fantastic developers. You know, they'll try to use AI and they'll get a
(47:00):
bad result, and then they say, well, you know, this AI sucks, I'd rather do it myself.
Speaker 2 (47:04):
Yeah, exactly.
Speaker 3 (47:05):
But from my perspective, I always try to think, if I've used AI and the result was substandard, rather than blaming the AI, I want to blame myself. And I want to think, how could I prompt it better? How could I give the right context, or a better prompt, or whatever, to get a better output next time?
Speaker 2 (47:27):
Well, as they say, it's the old adage: it is a poor craftsman that blames his tools.
Speaker 3 (47:31):
That's right, good point. And yeah, so that's something that I'm heavily leaning into: figuring out how we can, you know, make this process of AI generated code, not vibe coding, to be clear, because we're drawing a distinction there, but generating code with AI that we actually care about
(47:52):
and that's going to form a part of our long term code base. How do we actually go about doing that in the best way? And obviously, to start with, it takes longer, probably, than writing the code yourself. But I think in the long term, once you've got your systems in place, you know, you've sort of set
(48:13):
that up once and then you can use it infinitely many times. But yeah, so it's for that reason that I've also been doing this vibe code experiment, just to see what is the difference between these two different approaches, and what goes wrong when you're vibe coding. What are the problems that AI has, and then how do we try to address them so that we can do a
(48:33):
better job.
Speaker 2 (48:34):
So I see three categories of code, then: you have handwritten code, you have AI assisted code, and then you have AI generated code. And you know, how different are those things? How can they support each other?
Speaker 3 (48:47):
You know?
Speaker 2 (48:48):
Where is it? I think we're still trying to figure out where the human needs to step in more and where automation can work fairly responsibly on its own.
Speaker 3 (48:56):
That's right. I think, really, the human needs to, if you're delivering a piece of software for a client, the human obviously is the person who the client is engaging to deliver that software. You need to take responsibility for, you know, understanding the problem and ensuring that the
(49:17):
software that is delivered meets all the requirements and does everything the client needs, but also does all the technical things that the client doesn't even know that they need.
Speaker 2 (49:25):
But they do. The client presumes security, probably without even articulating it. That's right. You know, the client presumes reliability, also without articulating it. Like, we've got to make sure those things exist. That's right.
Speaker 3 (49:38):
And if we just mindlessly generate something with AI and don't even look at it, and then, yeah, okay, it does most of the functionality you wanted, but it's got all these gaping security flaws, all these sorts of things, that's, you know, a problem: you haven't taken responsibility for what you have delivered.
Speaker 2 (49:56):
Yeah, that's what responsible development looks like. That's right.
Speaker 3 (50:00):
But you can totally be responsible with your delivery and also use AI to do everything. It's just that you need to be taking responsibility for it, ultimately. So do the right thing.
Speaker 2 (50:11):
So as a product owner, you see the shape of a V two. Is your stuff going to a V three yet? Like, what's the future of YakShaver look like?
Speaker 3 (50:18):
Good question. I mean, after V two, obviously, V two we should be rolling out fairly soon, hopefully. V two is all about MCP servers. I see potentially V three is going to be YakShaver actually writing code, sort of like disposable code, in order to
(50:43):
achieve tasks, rather than just using MCP servers.
Speaker 1 (50:47):
Disposable code like writing PowerShell scripts and then.
Speaker 3 (50:51):
Executing that kind of thing.
Speaker 2 (50:52):
Yeah.
Speaker 3 (50:53):
Yeah. So, like, in integrating with whatever it's trying to integrate with, it could also be using the MCP servers, but, you know, writing code to perform the tools in MCP rather than just using MCP. Because one of the issues with MCP is, you know, if you give it a whole heap of tools,
(51:16):
that's just eating up a bunch of context that probably doesn't need to be eaten up. Although I do suspect in the future we'll get better MCP management. Like, you might have some sort of middle layer that says, here's the prompt, here's all my tools, just surface a couple of tools that seem like they need to be used, that sort of thing.
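As a sketch of what such a middle layer might look like in C sharp: the McpTool type and the naive keyword-overlap scoring here are invented purely for illustration, a stand-in for whatever smarter relevance ranking a real tool-management layer would use.

```csharp
// Hypothetical sketch of an MCP "middle layer": given the user's prompt and
// the full tool list, surface only the few tools that look relevant, so the
// model's context isn't flooded with tool schemas. Naive scoring, for show.
using System;
using System.Collections.Generic;
using System.Linq;

record McpTool(string Name, string Description);

static class ToolFilter
{
    public static IReadOnlyList<McpTool> SelectRelevant(
        string prompt, IEnumerable<McpTool> allTools, int maxTools = 5)
    {
        var promptWords = prompt
            .Split(' ', StringSplitOptions.RemoveEmptyEntries)
            .Select(w => w.ToLowerInvariant())
            .ToHashSet();

        return allTools
            .Select(t => (Tool: t,
                Score: t.Description.ToLowerInvariant().Split(' ')
                        .Count(promptWords.Contains)))
            .OrderByDescending(x => x.Score)
            .Take(maxTools)          // only these schemas get sent to the model
            .Select(x => x.Tool)
            .ToList();
    }
}
```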
Speaker 2 (51:36):
Yeah, it's a pretty raw design at this point too, right? Like, all of this stuff is so crazy new. Yeah.
Speaker 3 (51:44):
Yeah, it's an exciting space to be in.
Speaker 1 (51:46):
So what's next? What's next for you? Personally?
Speaker 3 (51:49):
For me? Well, I think I continue to use AI to deliver projects. You know, I always felt that my favorite part of my job was the part where I get to talk to the client, understand the problem, and design a solution, and understand all the different trade offs
(52:12):
in all the decisions I'm making in that solution, and then, you know, talk to the client and figure out what is the best solution for them. And then the actual part where I write the code is sort of the necessary evil. You know, it's not the fun part.
The fun part is designing the solution, right. So I
think that's one of the main reasons why I love
(52:32):
using AI for this, because I can sort of still be fully involved in all that decision making process and solution design, but then I'm basically just delegating all the actual code writing to a system of parallel cloud based agents who, while I'm asleep, can just make a
(52:54):
dozen pull requests, and the next morning I'll wake up and just review them and, you know, provide feedback, and.
Speaker 2 (53:01):
So on and so forth.
Speaker 1 (53:04):
It's a brave new world, isn't it?
Speaker 3 (53:05):
Absolutely it's very exciting, Yeah it is.
Speaker 1 (53:08):
And it's also, I think, freeing our imaginations to come up with solutions. Because, and this has been a theme we've talked to many people about, starting with that Scott Hunter interview, right: imagination is going to be a very important commodity now, the creativity to think about what
(53:29):
you can do. And if you can think it, you can probably get it done. And that's just an amazing thing.
Speaker 3 (53:36):
That's right, because I guess there's sort of no downside to just trying something, right? If you can articulate your idea, then you can have an agent go off and have a crack at it and just see what happens. And I mean, the only thing you've lost is maybe a few cents of token usage, and that's really it.
Speaker 2 (53:59):
So why not?
Speaker 1 (54:00):
So it pays to stay in school, kids, and learn as much as you can about the English language, and take writing classes, and be clear in your thoughts and all of that. And don't use like too much. And, you know, treat each other well.
Speaker 3 (54:20):
That's right, and the good news.
Speaker 2 (54:23):
Words of wisdom.
Speaker 3 (54:26):
You can also use AI to teach you things as well, which is cool. So if you want to learn something, you can just ask AI to create a personalized tutorial.
Speaker 1 (54:37):
I think that's right. Or even little things. Like, I do this all the time
Speaker 2 (54:41):
Now.
Speaker 1 (54:42):
I was watching a TV show, and this young woman had what looked like a tube, a white tube, coming from her ear into her nose, with some Scotch tape on her cheek. And I thought, that looks very strange. Is it oxygen?
Speaker 2 (55:01):
What is it?
Speaker 1 (55:02):
So I took a couple of pictures of the TV screen to send to ChatGPT, and it quickly figured out that it was a feeding tube. And then, turns out, later on somebody asked her, because it was a cooking show, somebody asked her, did you taste your food? And she says, no, I can't taste it, because I have this gastric, just whatever, situation where
(55:27):
it takes two hours to half digest food, and so she has to eat through a feeding tube. And she was twenty one; she'd had to do it since she was thirteen. Wow. But ChatGPT
figured it all out. And then I was asking it questions about, does that go all the way down the esophagus, isn't that uncomfortable? And it's, like, telling me all this stuff. It's just like having an expert.
(55:49):
It's like having Richard in your house.
Speaker 3 (55:52):
You know.
Speaker 1 (55:53):
I can say, Richard, what is that? And Richard knows everything, so he would tell me. It's
Speaker 2 (55:58):
Really cool, right, which is it's interesting.
Speaker 1 (56:01):
It's better, way better than just Google.
Speaker 2 (56:03):
Yeah, for sure.
Speaker 1 (56:04):
Before you leave, I have an Adam story for you. It was, I can't remember when, but he was doing some videos with me for DNR TV, so that tells you it was a while ago, early two thousands. Yeah, and he was actually in Boston for TechEd. So that was, what, two thousand and six maybe, something like that. Yeah, something like that. So he came down and we were
(56:27):
up all night, and I was just basically waiting for him to get his demo together, because he was, you know, working on it. So seven o'clock in the morning rolls around, time for brekkie. So we go to a diner. So, two thousand and six, right, we barely have internet on phones, but we do have it. And we go to the diner and I'm putting pepper on my eggs.
(56:48):
He goes, you gotta watch that stuff. I said, what? He says black pepper's bad for blokes.
Speaker 3 (56:55):
Like what?
Speaker 1 (56:56):
He says, yeah, it'll give you prostate cancer. Come on. Well, I didn't have ChatGPT obviously, but I did have Google on my phone. I looked it up, and turns out, it turns out, not only is black pepper good for your immune system and therefore not causing cancer, but capsaicin,
(57:19):
which is not black pepper at all, but the stuff that makes chili peppers hot, when you apply capsaicin directly to prostate cancer cells,
Speaker 2 (57:28):
It kills them.
Speaker 1 (57:30):
So I was like, dude, where did you learn this?
Speaker 3 (57:33):
He goes, a friend. Yeah, well, look, it's all about strong opinions, weakly held.
Speaker 2 (57:38):
So yeah, yeah, a change. But I told my kids.
Speaker 1 (57:42):
Because they grew up in the Internet age, and you know, when you have a phone, when you get older, you can use it as your portable BS detector. And you know, when somebody tells you something, don't take it at face value. Go and look it up, and look it up at a reputable fact checking site, not just, like, you know, TikTok. Anyway, Callum,
(58:03):
thank you very much. It's been so great talking to you, and this is great stuff, and I can't wait for V two. I'm gonna run it myself.
Speaker 2 (58:10):
I can't wait.
Speaker 3 (58:10):
Yeah, I can't wait to roll it out to you. Thanks for having me.
Thanks having me.
Speaker 1 (58:13):
You let us know when it's available. And thanks, Callum, and we'll talk to you next time on dot net rocks.
(58:41):
Dot net rocks is brought to you by Franklins dot Net and produced by PWOP Studios, a full service audio, video, and post production facility located physically in New London, Connecticut, and of course in the cloud online at pwop dot com.
Visit our website at d O T N E t
R O c k S dot com for RSS feeds, downloads,
(59:04):
mobile apps, comments, and access to the full archives going
back to show number one, recorded in September two thousand
and two. And make sure you check out our sponsors.
They keep us in business. Now go write some code,
see you next time.