Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Peter (00:01):
What's up, everybody? Welcome to another episode of the CompileSwift Podcast. We have a guest this week, and I'm very excited to talk about this. I have Gorkem Ercan with me today. We're gonna be talking about some CI/CD, some open source. We have some machine learning in there as well. But Gorkem, please introduce yourself.
Gorkem (00:18):
Thanks, Peter, for having me. My name is Gorkem, and I work as the CTO of Jozu. Jozu is a company that tries to help enterprises adopt AI and ML. What we believe at Jozu is that AI and ML adoption needs to be very close to your current
(00:39):
CI/CD practices and standards. So we try to help establish standards, open standards, and we try to reuse existing standards for AI/ML adoption in enterprises. Before Jozu, I was a distinguished engineer with Red Hat. I was running a big portion of Red Hat's developer
(01:03):
tooling efforts. And before that, I worked for Nokia. The other thing is, I have been an open source contributor for pretty much all my career, over 20 years. I worked on open source projects as part of my daily job, and I worked on projects that were not part of my daily job as well. I did all of those kinds of contributions. And until a year ago, I was a board member at the Eclipse Foundation.
Peter (01:36):
Very interesting.
Gorkem (01:37):
I guess.
Peter (01:38):
Yeah. A lot of topics. You have had
Gorkem (01:40):
A lot of topics. Where do you wanna start?
Peter (01:42):
Yeah. We got a lot of topics to dive into there. So we're gonna try and cover as many as we can. First of all, I'd like to start, and we're not gonna do this in order, but I would like to start here, because I think you are the first person I've ever directly spoken with that has had a connection with Eclipse and all of the Eclipse technologies. Back in the day, I was an Eclipse user, like so many folks.
(02:06):
And I feel like Eclipse was the first IDE that I truly got to grips with. Everything else was just played with up until then. And because of the nature of Eclipse, with the plugins and so many different distributions, it really felt like that was a tool. Everyone was using it for so many different interesting projects and builds and everything else. What was it
(02:28):
like to spend time working there? Because, even today, it was such a hot property. I would imagine there was a lot of demand and a lot of pressure.
Gorkem (02:37):
I think when we talk about Eclipse as a brand, there is a curse and a luxury with that brand. Because, like you, many people have used the Eclipse IDE for many years, and there are still a lot of people using the Eclipse IDE for different purposes. But the Eclipse Foundation, which is the
(02:59):
foundation that hosts the Eclipse IDE development, is actually much, much more than that. Today, if you look at the whole portfolio of the Eclipse Foundation's open source projects, you have everything from automotive to embedded to cloud. So it's a very diverse set of open source projects.
(03:23):
So the Eclipse Foundation itself today is a truly open source foundation, beyond the Eclipse IDE. As for the Eclipse IDE, yes, you're right. It's been there for over two decades now. And it has been used for many different purposes, from Java EE to C, and
(03:45):
it still continues to chug along. Some areas of the IDE are not as active as they used to be. But that's actually understandable, in the sense that those areas don't actually need that much activity. For instance, Java tooling, as you
(04:06):
can imagine, is chugging along, and it's getting updated as the Java specification changes and new language features are introduced to Java. So those are updated, obviously. Bugs are cleared, and compatibility issues or dependencies are updated, and so on and so forth. But that doesn't sum up to a huge amount of work. As a matter of
(04:29):
fact, we can even call it maintenance work.
Peter (04:31):
I was gonna say, so from an IDE perspective then, do you feel like that's more of a maintenance effort compared to some of the other, like you say, so many topics and so many products within the Eclipse ecosphere? Right? Do you feel like the IDE now is in maintenance mode? Not to imply that in a negative way, but it's very mature as well. Right?
Gorkem (04:52):
Yeah. It is a very mature product. Therefore, yes, there are a lot of parts of it that are in maintenance mode, but I don't think it would be fair to say all of it. There are other areas where a lot of movement happens. Like, for instance, the Eclipse IDE has language server support today. So I wouldn't call that a maintenance-mode feature. But it
(05:14):
is a feature that has been added after the popularity of language servers. Right? I don't think it's fair to say that all parts of the Eclipse IDE are in maintenance mode. There are some areas that are emerging, and therefore there is more activity. But the core of Eclipse is very stable. And also, that popularity and
(05:37):
adoption is a curse as well, perhaps. Because the moment that you start to make changes to that base, there are so many projects built on top of the Eclipse IDE that you cannot really change it. I haven't seen too many changes in Eclipse over the last decade that would introduce breaking changes
(06:00):
and would not cause a small civil war.
Peter (06:03):
That's interesting.
Gorkem (06:04):
Because you can't imagine how many industries, how many projects, are actually based on Eclipse. And this is not just the IDE. For instance, think about all the rich client platform applications, RCP applications. There are major industries that actually run on RCP applications that are doing
(06:24):
mission-critical stuff as well. It's really important for the Eclipse IDE, or the Eclipse core, to continue to be stable as well.
Peter (06:36):
That's a very important point, because I was thinking about it, and I remember, I'm gonna show my age now, folks. Back in the day, Eclipse was the IDE, was the editor, that I used to edit ActionScript for Flash applications and those platforms, because it was the first one that I found with all
(06:57):
the plugin architectures, where it's like, great, you can set it up for what you want, like so many other things. And like you say, even today... Flash is a good example of what we call a dead technology. But I don't think there is such a thing, because these things live on forever.
And like you say, once something becomes a standard within a
(07:17):
particular industry, or especially if a large company adopts something, they become very reluctant to change and update those things, which also makes it hard to go, like you say, too far into the core and make too many changes, because you cannot risk breaking all the platforms that are out there. Banking is a good example of that, still having COBOL
(07:40):
out there and those kinds of things. You have to be very careful. And once you bring open source into the mix as well, and you start having third party contributions, I would imagine it gets very complicated. But that's why I love a lot of these foundations that have these boards that look at these things and take a sensible, I was gonna say a slow, sensible,
(08:00):
approach. Right? Really think through what makes sense to move on and expand upon. And I think that, moving into more of the open source discussion now...

What's up, folks? I have been using SaneBox for years to monitor my email. Now, what does it do?
It's very smart. It learns over
(08:20):
time, and you can train it as well on individual emails. It will watch your emails, and you can have it, for example, send them to specific boxes. So by default, I have mine set up. There's a Later box where things like newsletters and all of that kind of stuff go, and anything that I don't need to deal with right now. I have another one that just basically filters out the spam
(08:43):
and makes it go away. That works fantastically, by the way. It's very smart. And I also have one where I can set them up to snooze, and I can also set up custom ones as well. I have another one, for example, that has all of my receipts. So I have trained it to learn what a receipt looks like for me, you know, things like Amazon or those online services, and
(09:04):
they all get filtered into the boxes. And at the end of the day, that means my inbox really has just the email that I need to deal with right now and that needs my attention. As I say, it gets smarter over time. You can train it. It's very smart to begin with, but I want to help you out here. Like I say, I've been using this for years, and there is a link. You can go to peterwitham.com/sbox, s
(09:27):
b o x, and get $5 off the service and give it a try. I have been using it for years, and I cannot tell you how much time it has saved me. My inbox truly is sane again, at last. So go to peterwitham.com/sbox and get yourself $5 off. I feel like that's a good thing.
(09:48):
I know some folks say it could be limiting, in that, oh, you quote play it safe. But I think that a lot of people now are starting to understand a lot of these tools, these platforms. Open source software in general is so prolific and so widespread that we cannot be, and here I am in Texas, we cannot be cowboys with
(10:09):
these things anymore.
Right? We have to be very responsible to so many companies, so many industries, and we hear about this all the time. One little change in open source breaks something critical for so many people, so many services. Right? I know you said open source, you're very committed to that. How much of that responsibility do you see, in that folks take a little bit of a
(10:32):
less sort of Wild West attitude these days, a more disciplined approach?
Gorkem (10:38):
With foundations like the Eclipse Foundation, the Linux Foundation, and the Apache Foundation, one of the things that you have is... let's say that you're relying on some library, whether it's hosted by the Eclipse Foundation or not. Right? The one thing that is definitely going to happen is that the people who are involved with
(11:00):
that library are very likely to change. And at some point, you're going to end up with a project, hopefully, with a new set of people. But there are a lot of projects out there that actually have little to no maintenance.
If you're relying on a project, and tomorrow that project just
(11:20):
disappears on you, or there are no maintainers left, and you're not in a foundation, there's nowhere that you can go. Someone can just delete that project and disappear. But if it's an Eclipse Foundation project, that will never happen. You're guaranteed that that will never happen.
Peter (11:40):
Yeah. I was gonna say, I've had it happen to me. I've worked on apps that have used a GitHub repo, basically third party source code. And, hey, the person that owned it decided they didn't wanna do this anymore, and rather than leaving it up, they pulled it down, and it broke a whole bunch of things. Now, they're perfectly entitled to do that. I've got no problem with that. Right? If you choose to
(12:03):
incorporate a third party solution, you know what comes with that, right? I think, as you gain experience in the industry, you learn that. So that's why I love this idea with these foundations that says, hey, there may be nothing new, but you're okay. Your legacy can live on. Right?
Gorkem (12:18):
Yeah. But in the foundations, you can also do new things as well. Right? If you look at the Eclipse Foundation, a lot of new projects. If you look at the Linux Foundation or the Cloud Native Computing Foundation, a lot of new projects, a lot of imported projects there as well. So it's not an impediment to build new projects in those
(12:40):
foundations. Yes, they do come with a bit more requirements than just pushing the code into a GitHub repository. But most of those requirements have a reason.
Peter (12:52):
Yeah. No. And it's interesting and timely, in a way. I've not read up too much on this, but I think it was either yesterday or maybe Friday I read that Apple had announced Swift and Java interoperability. Now that we have this combination where I can use Swift and Java together, I think that's gonna be interesting.
(13:13):
And I like to see lots of these things, because it enables you to use those new languages, those new skills, with the more mature ones as well. And it also invites folks to start using maybe some of their legacy code and things like that, and to adopt these new platforms without having to go wholesale
(13:35):
and convert all their code. A timely announcement there by Apple as well. And with Swift, of course, being mostly open source, I'll be diplomatic about it, we have that as well, and it makes everything very transparent.
So whichever side of something you're sitting on, you can go look, just like everything else with open source, and choose to
(13:55):
use it, or maybe fork it and use it yourself and make adaptations that way as well. Something else that I wanna ask about here, because with Jozu, CI/CD is something that... I've done a few episodes talking to a few different folks, but it is critical these days. Right? Especially with the complexity of software, and things like running
(14:15):
automation with tests, build pipelines, all of these things, we don't sit here... I was gonna say, we Xcode users famously joke about how you hit the build button on Xcode and you go home for the day.
So CI/CD, let's talk about that, because it's very important for
(14:40):
me. Not only do I like the build process, but being able to run those automated tests is a big one as well.
Gorkem (14:48):
Yeah. As you said, CI/CD and automation are really important for any project. But I think CI/CD actually goes a little bit further than just being able to compile your project and test your project. Right? One thing that we have learned in the last 5 to 10 years is that our
(15:11):
supply chains need to be secured. And that pretty much starts with CI/CD. Or let's put it this way: CI/CD is actually the gatekeeper for that. I would be really worried if you were able to just do your build on your machine and push that
(15:31):
binary to someone for someone to use. That would be a very,
Peter (15:36):
very safe
Gorkem (15:37):
to have. Yeah. But today, CI/CD involves being able to build things, but also recording what you build as well. Because you want to be able to have secure supply chains. And part of that is, oh, I am actually building in an environment where I know what that is. We had these problems with, what was it, the SolarWinds project?
(16:03):
Right? The SolarWinds problem. Right? If you're not able to control your CI and CD environment closely, and record what you have used in your CI/CD environment, and record what you have used as part of your dependencies for every build that you make, and provide that in a manner that you can prove hasn't been tampered with, then your
(16:26):
supply chain will always be in question.
So there are tools and techniques that have emerged in the last 3 to 5 years, in response to incidents like SolarWinds, where you are able to record your build environment
(16:47):
and your dependencies and, finally, your binaries, as well as your test results, if those are needed. One of the big end results is an SBOM, right? A software bill of materials, which is something that you can sign and hand over to your DevOps or SRE
(17:12):
organization, or just store, so that in the future you can come back to it. And SBOMs are essentially JSON or XML documents in which you can search your dependencies. And there are tools out there that you can actually use to turn an SBOM into a searchable graphical tree,
(17:35):
even.
And then there are tools, even GitHub Actions has tools, where you are able to record your build environment and your dependencies as well. And there are other tools: the CD Foundation has Tekton, which I was involved with in the past, and that does a very good job with this. And pretty much
(17:58):
every other CI/CD tool out there, I'm guessing Jenkins too, but I haven't touched Jenkins in a while,
Peter (18:04):
It's been a while. Yeah.
Gorkem (18:05):
is able to... it comes with these sorts of capabilities. Right? So this is very important. Automation is one part of it, but you also need to be able to think about the secure supply chain.
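As Gorkem says, an SBOM is essentially a JSON (or XML) document you can search for dependencies. As a minimal sketch of what that looks like in practice, here is a tiny, made-up SBOM in the CycloneDX JSON style being searched with Python; the component names and versions are invented for illustration:

```python
import json

# A tiny, invented SBOM in the CycloneDX JSON style: a flat list of
# components, each with a name and version.
sbom_text = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.15.2"}
  ]
}
"""

def find_component(sbom: dict, name: str) -> list:
    """Search the SBOM's component list for a dependency by name."""
    return [c for c in sbom.get("components", []) if c["name"] == name]

sbom = json.loads(sbom_text)
hits = find_component(sbom, "log4j-core")
print(hits)  # one match: the log4j-core 2.14.1 entry
```

This is exactly the kind of question a stored, signed SBOM lets you answer after an incident: "which of our builds shipped this dependency?"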
Peter (18:17):
Hey, folks. If you like what you're hearing in this podcast, and you wanna help this podcast to continue going forward and having great guests and great conversations, I invite you to become a Patreon supporter. You can go to, where you will get ad-free versions of the podcast along with other content.
Gorkem (18:37):
And things get a little bit more complicated when you move into consuming AI and ML as well, because you still have a secure supply chain concern. So you still need to be able to retrieve and record your AI/ML projects. And that's one of the reasons why we started the KitOps project.
(19:00):
What the KitOps project does is it uses OCI artifacts. And by OCI, I mean the Open Container Initiative, not the Oracle one. Apparently, there is an Oracle one as well with the same acronym. So an OCI artifact is essentially like a Docker image,
(19:20):
but it can include different things. So what we came up with is an OCI artifact for storing AI and ML artifacts. What kind of artifacts can you store? You can store model weights, and models themselves, essentially, datasets, code, documents, and configuration.
(19:40):
So with everything stored as an OCI artifact, the benefit is you can actually use existing tech like SBOMs, signing, and similar techniques to what you do today with your applications, and just use the same tools. For instance, for signing, you can just use a tool like Cosign, which you would use for your
(20:03):
applications, to sign your artifacts, sign your SBOMs, and generate your SBOMs. Right? And then that would put you into a secure supply chain trajectory that already exists in many organizations. The other benefit is these OCI artifacts are stored in an OCI registry, which is your Docker
(20:25):
Hub, your GitHub Packages, and so on and so forth. So that means that the existing mechanisms for your authorization and auditing apply to your AI/ML artifacts as well. That's what we mean when we say,
(20:46):
oh, it's a standard you are already using, so you're able to adopt your AI and ML into that standard. Of course, the challenges for AI and ML do not end there, with CI/CD. Because when you think about a classic application that you build, you take the source code, you go through your build
(21:09):
process, and at the end, your binary is the same binary that you would get if you did the same build again with the same source code. With AI and ML, that's not the case.
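The "record what you build so it can be proven untampered" idea underneath all of this can be sketched generically: compute a content digest for each artifact (model weights, dataset, config) and keep those digests in a manifest, the way OCI registries address content by SHA-256 digest. This is an illustrative sketch, not KitOps' actual implementation; the file names and byte contents are invented:

```python
import hashlib

def digest(data: bytes) -> str:
    """Content-address an artifact the way OCI registries do: sha256."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Hypothetical artifacts of an ML project: weights, dataset, config.
artifacts = {
    "model.onnx": b"fake model weights",
    "train.csv": b"fake training data",
    "config.yaml": b"lr: 0.001",
}

# The manifest records one digest per artifact. Re-hashing later and
# comparing against the recorded digest proves nothing was tampered with.
manifest = {name: digest(data) for name, data in artifacts.items()}

def verify(name: str, data: bytes) -> bool:
    """Check a retrieved artifact against the recorded manifest entry."""
    return manifest[name] == digest(data)

print(verify("model.onnx", b"fake model weights"))  # True: untouched
print(verify("model.onnx", b"tampered weights"))    # False: modified
```

Signing the manifest itself (with a tool like Cosign) is what then makes the whole bundle attributable as well as tamper-evident.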
Peter (21:22):
I was gonna say, that's the interesting aspect here. And I'm so glad that you brought this up and explained it, because for a lot of us, myself included, I'm certainly no expert on this, and that is a major concern, right? With conventional source code, in theory at least, we should be able to repeat builds over and over again, with the same
(21:43):
expected results. And, of course, one of the concerns for folks like myself, who are not as educated on these things as arguably maybe we need to be at this point: immediately we start thinking, okay, AI, ML, is it changing things between builds? How would I know that? And, of course, like you say, with that comes a trust factor. Right?
Gorkem (22:03):
So, yes. Our brains as software engineers are trained to think that if I take the same source code, if I check out the code from GitHub or my Git repository and put that through my build, I'll get the same result. That's how our brains are trained. That's what the whole process of GitOps actually depends upon. Right?
(22:27):
That's one basic rule that we all agree on. But with AI and ML, that's not the case. Because you can have the exact same code, the exact same dataset, do your training, and it will be a slightly different result. So with AI and ML, I don't think our conventional or existing truths about how we do GitOps are
(22:51):
going to apply. So one of the things that we are working on is: you do your AI pipeline, but you also understand that your pipeline is not going to be 100% repeatable. You're gonna get a result. You're gonna store those results. And then, from those results, you're gonna
(23:12):
select the one that is most likely to be the best for what you're trying to achieve. In AI/ML, this is called experimentation.
Right? You will get multiple experimentation results. And you will play with configurations, get another experimentation result, and so on and so forth. You need to be ready to do
(23:34):
these experimentations, be able to compare their results, select the correct one for what you are trying to achieve, and get that to production. So there is all this process that needs to happen.
And this is not, as you can imagine, a linear process, as it was with applications anymore. This is a
(23:57):
process that needs to get feedback and make decisions, get feedback and make decisions. So the automation of AI and ML projects, if you're trying to adopt AI and ML at scale, and my guess is many enterprises will try to do that in the next decade, is not as linear. So it will probably
(24:20):
need different methods, techniques, and tools compared to what we have for CI/CD for applications, ones that can actually react to that loop of feedback, or experimentation. And this is just the experimentation part.
There are also things that you need to
(24:41):
consider when you are going to production, when you are doing inference with your models. And there are also pipelines that you need to build there. I think that's a little bit better known than getting data into training. I guess, to summarize, CI/CD for AI and ML is going
(25:02):
to be much more intelligent, maybe, or more flexible, than CI/CD for applications today.
Peter (25:10):
And I wonder... sometimes I think there's this nervous apprehension with a lot of developers on that very topic. I'm hoping that we've got past that phase of realizing, okay, it's not going to take our jobs. Yeah. We've all matured and grown up and realized that's not gonna happen. And now we need to build our trust in these technologies
(25:36):
and say to ourselves, we don't necessarily need to know every single thing. We hope that over time, as these models become more experienced, just like us, right? As we become more experienced and more evolved with these things, we adopt better techniques, we learn better ways. We hope these systems are gonna do the same
(25:56):
thing. And I guess the question is, how do we best educate those around us that it's okay to not necessarily understand exactly 100% how this works, and to essentially trust in the system?
Right? And I get it. It's early days. I'm not saying tomorrow we should just flat out trust these things, but the theory
(26:19):
being, they're gonna get smarter over time. They will be smarter than us.
Right? I think that's a fact, and we have to live with it. But our responsibility is how we use that, and control it, and have that trust with it, without just, like some folks say, oh, you let the AI build this thing and then you push it to production.
(26:40):
No. You don't just let it do it.
Gorkem (26:42):
Yeah. No, I wouldn't push anything to production that is AI generated without
Peter (26:47):
Understanding it.
Gorkem (26:48):
Looking at it. You need to look at it pretty closely. Yeah. It gets better all the time. It gets trained with data. And the more GPUs you can throw at it and the more data you can throw at it, the better the end result. I think the most important thing with the AI revolution that we are
(27:09):
having is that we are able to throw more resources at training and get better results. So I think that's the big difference. It looks to me like we are still able to throw more resources and get better results. We haven't reached a dead end where you throw more resources and the results are not better.
(27:30):
Right? That would be the end of that. But we are not there yet. The models are getting better, but that doesn't mean that they will be able to cover everything. And this is where we're talking about replacing software engineers.
I have been experimenting with
(27:54):
generative AI since ChatGPT came out, right? To understand what it is capable of and what it is not, in terms of software development. And it gets better. But at the end of the day, it will never be 100%. Some of the minuscule tasks, yes, you can just leave to it.
But when you design a system that is a little bit off the
(28:17):
beaten road that the AI has been trained with, it starts to introduce bugs. Right? And to my horror, the code looks okay. It looks like it should function. And then you come up with this bug, and now you're in
(28:38):
a debugger.
Right? That's all you can do. If you don't have a good debugger, and if you don't know how to debug software, don't use it.
Peter (28:45):
Yeah. That's a really good point, because as you were saying it, it's worth reminding folks of that old saying: the output is only as good as the input. And as much as these models may be built upon fantastic programming techniques and patterns and code from all the fantastic software
(29:05):
developers out there, it's also picking up the bad ones. And thinking about that, the analogy would be: you could ask two different developers, and you could get a good answer and a bad answer.
Which one you go with is still your choice. And like you say, if it's not gonna work, in some ways it's a case of, you got the code from AI, you go into the debug session, and
(29:29):
then the bad programmer, when you go back and ask him, would be, oh, that's weird, good luck with that. So you do have to use your skills to understand what it's done, and not just trust, okay, I don't understand it, it must be fine. No. Learn from what it's done. And then, next time, you won't have to ask it.
Gorkem (29:47):
Yeah. It's a tool at the end of the day, and you need to learn how to use that tool as well. But what is interesting to me, with what is happening, to be honest, is this whole conversation about software engineers being replaced with AI and ML. I wouldn't say replaced, maybe I'll
Yeah.
Peter (30:06):
That's how I think.
Gorkem (30:07):
As well. But that's probably the table stakes conversation at this point. It's the most obvious one that we all look at and say, oh, yeah, you know what? We have all these software engineers, and we give them text and they produce text. And an LLM is essentially text in, text
(30:28):
out.
So we can replace that. That's not how it works. But then the interesting bit, for me, is that there are all these problems as software engineers that we weren't able to fully solve in businesses, where rule-based techniques were not enough and you had to do something more than rule-based.
(30:52):
And that's when you start to use ML techniques. Right?
It's, oh, I can't do this. How do I do prediction? How do I do categorization of data that is really not very well structured? Right? Those sorts of things.
Peter (31:07):
Alright. Here it is. The
one thing that I cannot do
without every day, and that ismy coffee. Anyone that knows me
or anyone that's listened to anyof my podcasts or anything else
knows that I absolutely cannotoperate without my coffee and I
love good coffee. So here's thedeal.
I'm gonna give you one free bagof coffee by going to
(31:28):
peterwidham.com forward slashcoffee. There is a wonderful
company out there that followsthe fair trade practices, helps
out a lot of independentroasters of all sizes, and the
operation is simple. What you dois you're gonna go to to
peterwidham.com forward slashcoffee. You sign up there. You
(31:48):
get a free bag of coffee sent toyou.
Yes. In return, they say thankyou to me by giving me some
coffee, but that's not thereason I'm doing this. The
reason I'm doing this is becauseI have found so many good
coffees that I just would neverhave come across, heard about,
or experienced without thisservice. Trade coffee is just
(32:10):
fantastic. You know, there areplenty of places out there.
We all know them that supplycoffee, good coffee. You can go
to the store, get the coffee.But there is nothing better than
discovering new independentroasters and supporting them,
discovering new flavors ofcoffee, new grinds for you can
set it up. It's very smart. Youtell it the kind of coffee you
(32:33):
like and over time it getsbetter and better as it trains
in on your selections and yourchoices and gives you exactly
the coffee you're looking forand recommending new ones that
that will be very similar.
Every time I get a new packet ofcoffee, I go through and
afterwards I try the coffee, Igo through the service and I
say, look, I loved this coffee.I thought this coffee was okay
(32:56):
or I say, look, I've this wasreally not for me. And every
time I do that, it makes theservice a little more accurate
on the next selection for me. Soagain, just go to
peterwhidham.comforward/coffee.Get your free bag of coffee
today.
If you're a coffee lover, you'regonna really appreciate this
service. I have been using itfor years at this point and
(33:17):
thoroughly recommend it.
Gorkem (33:19):
Thanks. I think those are the interesting bits for me, because those problems are still part of our businesses today. And because we weren't able to solve them in the past, or solving them was actually really expensive, we now have those problems lying around. I think what's gonna be our next step
(33:39):
is, oh, we need to learn how to adopt AI and ML as cheaply and quickly as possible to solve those sorts of problems.
One thing that you said earlier: do we really need to know what is happening inside an LLM 100% to be able to solve a categorization problem? Do we really need to know what is
(34:00):
happening inside an LLM to be able to convert unstructured text to structured data? Right? I think, for those kinds of problems, we start to adopt AI and ML there, and that will bring in the efficiencies to our business, where you are able to actually accomplish more for the
(34:22):
business, which you weren't able to before.
It all becomes: who can adapt to it faster, who can run it cheaper. Those sorts of concerns will start to be important for businesses. And who can do it securely as well. Let's not forget about that. That shouldn't be an afterthought.
Peter (34:42):
Right. And in response to
that question, I'm thinking my
answer there would be, yeah, Idon't necessarily, or we don't
necessarily need to understandsometimes how it gets to its
conclusion and its result, aslong as we can understand the
conclusions and the results andlook at them and say, we got
what we wanted. Right? It's thatold thing of going to school
(35:05):
showing show your work, right?Your homework.
Right? If the answer is correctand you understand how you got
there, sometimes the part inbetween, hey, it's fine to not
necessarily get that. Right? Yougot the answer. And the more
times that you get the answerright and the results you're
looking for, surely that's theimportant part.
Right? Because I was reading this thing the other day about
(35:27):
people starting to theorize about how we're gonna use AI and ML to solve medical problems that we don't have answers for. And even interesting things that I've never even thought of. There's this possibility, and it sounds crazy at the outset, I know, the possibility that maybe AI and ML can help us understand how to communicate in languages that
(35:49):
I don't speak. And someone even said animals. And at first you're like, that sounds really crazy. But then you realize, yeah, just because I cannot compute the answer, if something else can, surely that's the part that matters. Right? And giving it time to prove itself and then get it correct.
So I guess that would be my answer. Yeah. If I can understand the solution it gives me and I know that it's the
(36:11):
right solution, it's done its job, and it's safe to go with that. Right?
Gorkem (36:15):
And for instance, there is also a lot of automation in that as well, right? If you've ever worked on a project that involved televoice services, like support over the phone or complaints over the phone and that kind of thing, one of the things that happens, when I did that kind of work, is the supervisors
(36:37):
listen to some of those calls and try to find out if the customer is satisfied with the answers from support, and so on and so forth. They try to assess whether customer satisfaction is at the level they want it to be. The problem with that is, if
(36:58):
you have even a smallish operation with hundreds of people working at a time, you don't have enough manpower to monitor a lot of these calls.
Right? I had the pleasure of talking to one of the companies actually doing that in their call centers, where they have AI listening to these recordings and afterwards
(37:24):
doing a sentiment analysis, saying, hey, you know what? The sentiment for this call was positive at the end, and the customer was satisfied, and so on and so forth. And the interesting bit is, of course, they did this, they were able to automate it, and they were able to improve their results by making callbacks. If the sentiment score is under a certain
(37:48):
level, they wanted to make callbacks to actually solve the problem.
Of course, it didn't come easy. They had to make adjustments, because they realized that just by listening to the recording, you were not actually able to make the correct decision on sentiment. So they actually had
(38:08):
to pull in a few more data sources so that the AI could make better decisions. But when we talked, it was very interesting to hear about the iterative process the company went through to be able to do this. So there are things like that as well, where you cannot humanly automate this kind of work.
(38:28):
But for something like AI, which is essentially software that is running, it can easily listen to these conversations, combine other data, and come up with a sentiment score that will allow you to improve your business by doing callbacks.
Peter (38:44):
Yeah.
Gorkem (38:45):
It was an interesting
case for me, the whole iterative
nature of it. But there will beI can imagine hundreds of cases
should exist in a business thatis that this is not addressed or
was not addressed because of thenature of the problem.
Peter (38:59):
Yeah. It's interesting. You hit on an area that, without giving away too many details, is an area I'm working on with some folks at the moment, and we are looking at exactly those things: using AI to analyze audio, visual, and textual information in a volume where it would be humanly possible, but as a company, you would look at it and say it's
(39:22):
not viable to do it. You'd have to have so many folks working so many hours.
And if we can get the machines, the software, to understand and at least flag things it thinks are worth looking at, for whatever your criteria are. Yes, you may have a human go back and review it, but hey, if you can just trim, say, down to 10
(39:46):
minutes out of 10 hours' worth of something, then you've achieved your goal, right? And it's interesting how quickly that is improving and how accurate it's becoming over time, compared to, like you say, you just couldn't employ enough people. And also, of course, the added bonus is this software can run 24/7.
(40:07):
Right?
Yes. Of course, there are costs involved. You've got to power these and maintain them, but they can run 24/7 and maybe eventually do a better job. So it is all very interesting, these things that you didn't think of in the beginning that are now making you realize: oh, if you can solve this problem, you've got this other problem that's very similar. Right?
(40:27):
And that's fascinating. I think even with this being the early days of commercially viable AI and ML, we've come so far so quickly that it's almost impossible, I think, to predict where we'll be 5 years from now. How good is this stuff gonna be?
Gorkem (40:45):
And I always, again, now it's about time I show my age. I don't know if you remember, but in the early days there was this whole slogan about digital transformation, right? Or paperless offices, right? And when that whole idea came out, it was, oh, the paperless office.
(41:08):
And everyone thought that the next morning everything would be paperless at the office.
I know companies who went through that paperless office idea for a decade before they reached the point of becoming a paperless office. Some of them are still trying to do that. So I think AI and ML are going to be a little bit like that as well. It's going to take
(41:30):
a journey for a company to be able to automate the missing pieces of their processes, the ones they weren't able to automate before.
Peter (41:38):
Yeah. Yeah. And it's also, like you say with the paperless office, it's funny that even today, as I'm sitting here right now with I don't know how many computers around me, right? Desktops, laptops, phones, tablets, watches, all these things. And yet, when it comes down to it, if I have to do something in a hurry, there's a pad and a pencil right there.
(42:00):
And like I was saying earlier, these things never die. Right? They merely complement each other. Right? And even today, you think about all of the technologies out there that capture your handwriting on a piece of paper or a screen. Yes. Okay. Now it translates it into a system, but we are essentially still taking notes, for example, like we used to.
(42:21):
It's just the mechanism of doing it.
Gorkem (42:23):
One of the reasons, like, I take my notes on a tablet, but I actually do use handwriting for doing that. It's just what you're used to, but then it augments me in a way that, oh, I can just turn that into text very easily. Although with my handwriting, it's...
Peter (42:41):
Me too. Left-handed. So...
Gorkem (42:43):
It's a mess. And then I can turn that into text and I can make it searchable. And moreover, for those who can actually read my handwriting, I can send it directly to their mailbox if I need to. So it doesn't replace my handwriting; it just augments it.
Peter (42:59):
Yeah. No. You're absolutely right. And a lot of it, like we say about AI as well, it's context. Right? It's learning how to capture and use that context that we as humans just use without even thinking about it. Right? Because sometimes I feel like we underappreciate how clever our brain is, taking all of these things, Mhmm, and keeping that
(43:21):
context that gives them relevance. Like you say, searching. Yes, I can now search my handwritten notes. I could have done with this when I was a kid at school; it would have been priceless, and things like that. I'm very conscious of your time. Is there anything we haven't covered that you wanna cover here?
Gorkem (43:38):
We talked about open source. We talked about AI and ML, and how we see AI and ML at Jozu and the KitOps project. So I think we covered a good portion of what I'd like to talk about.
Peter (43:52):
Fantastic. Okay. So folks, it has been a fascinating conversation. We could probably keep talking about this for hours, as we do, where exciting topics like this spark the imagination. And whatever we can dream up, we can do these days. So, Gorkem, thank you so much for your time today and for the fascinating conversation. Please tell folks where they can
(44:14):
find you and where they can find Jozu. Go for it.
Gorkem (44:16):
Yeah. Thank you, Peter. You can find me and Jozu at jozu.com, and use my name at jozu.com if you want to reach me via email. If you want to get involved with the open source project, KitOps, kitops.ml is the URL for the project. And you have all the information on the website
(44:40):
to be able to use or contribute to the project.
Peter (44:44):
Fantastic. Yeah, folks, we will put everything in the show notes. Please go look at this. Right? Research this. It's a fascinating area. It really does blow your mind when you realize what's happening out there, what's possible, and how it can benefit you as well. So, yeah, go check out all the links. With that, you know where you can find me,
(45:06):
Compileswift.com and all the networks. With that, folks, that's what we got for you.