Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Mike Verinder (00:08):
Hey everyone,
this is Mike Verinder.
With Modern Software, we are continuing our series on Autonomous STLC.
Today we are going to dive in a little bit from a test product perspective of Autonomous STLC.
(00:28):
So I'm here with Gal.
Gal is one of the founders at Checksum AI, and he's going to share a little bit of his perspective on autonomous STLC, where that may be going, and what his viewpoints are on that.
Hey Gal, how's it going?
Gal Vered (00:41):
Good, how are you?
Mike Verinder (00:42):
It's good to see you, sir.
Thanks for joining us today.
Let's start off.
Why don't you just tell the audience a little bit about yourself and about Checksum and what you guys do?
Gal Vered (00:52):
Yeah, so shortly about myself: I've worked in tech for the last 10 years.
I worked at big companies like Google, as well as CTO of small startups, until we came to co-found Checksum.
And Checksum, in the short version, is that we generate end-to-end tests using
(01:13):
AI.
So for those who are not familiar, end-to-end tests are tests, at least from the Checksum perspective, that literally open up a browser, go to our customer's application and click on buttons to make sure everything is working end-to-end: the front end communicates with the backend, the database, everything.
(01:33):
So that's what Checksum does.
In short, our goal is to completely automate the process.
So when you use Checksum, we don't just provide some code that sometimes works and sometimes doesn't.
We detect the different test cases that customers need to generate.
We generate the tests themselves and we provide tests
(01:54):
in open source frameworks, specifically Playwright and Cypress, and we automatically maintain the tests.
So, essentially, we're trying to provide a full service to our customers, so they don't need to think about end-to-end tests, except actually running them.
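To make the idea concrete, an end-to-end check of the kind Gal describes might look like the sketch below. It is written in Python against a Playwright-style page API; the URL, selectors and credentials are purely hypothetical, and a real run would obtain `page` from a Playwright browser context.

```python
# Illustrative sketch only: a login flow written against a Playwright-style
# page object (goto / fill / click / inner_text). With real Playwright you
# would get `page` from a browser context; selectors and URLs are made up.

def login_smoke_test(page, base_url="https://app.example.test"):
    """Open the app, log in, and assert the dashboard renders."""
    page.goto(base_url + "/login")
    page.fill("#email", "qa-user@example.test")      # hypothetical selector
    page.fill("#password", "not-a-real-password")    # hypothetical selector
    page.click("button[type=submit]")
    heading = page.inner_text("h1")
    # The assertion is what makes this a test: front end, backend and
    # database all had to cooperate for the dashboard to appear.
    assert "Dashboard" in heading, f"unexpected heading: {heading!r}"
    return heading
```

Because the flow only assumes those four page methods, it can also be driven by a fake page object in a plain unit test, which is one reason code-first tests are easy to keep in your own repo.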
Mike Verinder (02:13):
Okay, cool.
So, if I'm hearing you correctly, it sounds like you have a number of AI solutions on top of web-based UI automation.
Is that correct?
Gal Vered (02:27):
Yeah, we have a system of AI models whose sole purpose is to generate and maintain a full end-to-end testing suite with very little input from the customer side, at least very little input required.
Our customers can provide more input if they want to see
(02:48):
specific tests, but within that, when we provide the tests, we provide them with Playwright.
It's an open source framework backed by Microsoft, so essentially it's like having an extension to your team, like having more engineers on your team that just sit down and write tests all day.
When you run the tests,
(03:09):
at the end of the day, it's Playwright, not Checksum.
Mike Verinder (03:13):
Yeah, Playwright's an amazing tool.
It works really well.
It's competed a lot lately against Selenium, which is another big part of my audience, but they're doing really well.
They've got a great tool, and it's a great tool to partner with, to be quite frank.
Gal Vered (03:30):
Awesome. On our side, we do find a lot of people want to migrate to Playwright, and using AI to do the migration process is one of the use cases we see from the market.
Mike Verinder (03:43):
Does Checksum help with that migration, or do you have to use a third-party vendor or something to do that?
Gal Vered (03:50):
No, we do the migration.
So at the end of the day, our model writes code, and this means that we can handle migrations and all types of weird cases, not just the very happy path of: we don't have anything, let's go ahead and just generate the test case.
Mike Verinder (04:08):
Yeah, one of the biggest issues, I think, with migrating from Selenium, and I know that's not the topic of our discussion today, has always been the framework itself,
(04:30):
because a Selenium framework is built however the guy wants to build it.
You know, it's not a standard framework.
It's however he wants to do it: if he has tables that run his Selenium software, or this job or that job coded in whatever to clean up his automation, things like that.
It's not the library so much as that framework aspect,
(04:54):
because, you know, how does that guy clean up his tests now, right?
Or how does he migrate away from that?
So I agree.
Gal Vered (05:04):
That cleans a lot of these things up, and also, it was just written in the last few years, so it fits better for modern development.
Mike Verinder (05:14):
Yeah, that's always been the hard part about that migration, though: that framework aspect.
So, well, cool man.
You know, Checksum's a pretty neat product and a pretty neat tool set, and I am interested to know,
(05:35):
you know, how you guys... The AI space is so big, and there's so many solutions and so much going on, and solutions that are great for a small startup sometimes aren't so hot for a big enterprise.
You know, enterprises are complex.
I know enterprises that still have COBOL and mainframes, and,
(05:57):
you know, desktops and web applications and mobile applications and services, just all this hodgepodge of stuff, and so it's interesting when you think of AI in an enterprise setting.
You know, Cursor is probably applicable, Replit is probably applicable in some situations.
(06:20):
So it's interesting to me how an AI-focused product company, you know, really chooses what to focus on, giving a solution out, right.
So requirements to test cases: that's a great solution that you have, and I try to think of a user's journey,
(06:45):
right, and they're creating requirements and they're making test cases.
And even in a big enterprise, banks have to have test cases; insurance, highly regulated places, have to have test cases.
So those are still popular; those are still a thing that you have to have.
How do y'all figure out what that roadmap looks like for
(07:11):
y'all and what to focus on next?
Are you engaging with your clients to do that?
Is it a little bit of engaging?
Gal Vered (07:18):
No, that's a great question. So first of all, when we started, we really started Checksum to solve our own problems.
So Checksum was started before ChatGPT was launched.
So it's not like we'd seen AI and what it can do and reverse engineered our way to some problem and solution.
Mike Verinder (07:38):
Yeah.
Gal Vered (07:39):
We always used transformers, but our first versions weren't even large language models.
So we had just seen this problem again and again and again, where your engineering team at a certain point becomes big, maybe it's Series B, maybe it's Series C, maybe it's even enterprise, and it becomes very hard to make changes in your application, in your code base, and know, be 100% sure,
(08:03):
that you didn't break something else, right? Because it's all dependencies on dependencies on dependencies, and you change one thing.
You don't realize that you cause a ripple effect that, like, five features later will just break something that's completely unrelated, at least in a direct way.
And the way it manifests is just,
(08:25):
your engineers spend 50% of their time on firefighting. So you plan a sprint, then the day after there's a breakage in production or in staging, and everyone needs to drop what they're working on and just fix those bugs.
And the way to solve it is typically end-to-end tests, because when you have at least a very straightforward, high-
(08:45):
impact testing suite that gives you immediate feedback on what's working and what's not, you're able to detect all of the issues before you even make the PR, or right after you make the PR, just solve them on the spot, finish the task and move on.
So, instead of doing this dance of: you finish the task, you move on to something else, you realize you created a bug, you go back to the task,
(09:06):
you move on to the next thing, you realize you created another bug, you go back to the task.
This back and forth of going back, fixing, working on the new thing, back and forth between QA and engineers, et cetera, et cetera.
Instead, you just have all of the feedback.
You fix it, you move on.
So it came from a really, really real-world problem.
But obviously, the problem with end-to-end tests is that they take a very
(09:28):
long time to write and maintain.
It's not just writing them; the maintenance is like any software project.
It's a project that you need to maintain forever.
And so we were very focused from day one on how we solve this very specific problem.
And with end-to-end tests,
(09:48):
because they depend on your front end and your backend, your database, they depend on a lot of stuff, not just on your code base, we haven't seen, and I don't believe we'll see in the future, off-the-shelf solutions able to actually solve the end-to-end testing problem.
Those tools are amazing, we use Cursor and Replit and they're great, but
(10:09):
I think you need a dedicated end-to-end test solution.
Mike Verinder (10:14):
Yeah, so when you talk about that: are you focusing on test data generation through AI, and maybe... go ahead, I'm sorry.
Gal Vered (10:24):
It's not like that; the main focus is to actually write the Playwright scripts, as well as the data setup and data cleanup as part of the tests.
So to an extent we do need to set up the data.
We work both with startups, which is more straightforward, and larger enterprises, which typically have their own test harness and ability to spin up seed data in a testing environment,
(10:46):
so we can hook up.
The advantage of us using Playwright, and of our model generating code, is that we can hook up to existing infrastructure and basically leverage it in test case detection, generation and then maintenance.
Mike Verinder (10:59):
So what's your opinion of an autonomous world?
I have this beautiful vision, and I can't tell you when it's magically going to happen, but someday we would have an autonomous SDLC.
You might have a product manager that's sort of tweaking the product a little bit, and then they may have, like, a BA
(11:22):
that sort of defines what they want and then sort of validates that the product manager's vision is implemented.
Right, but that's about it.
I have this hope that someday we could get to that state, and I see it happening.
I mean, it's already happened in web to some degree.
You could go to Wix, Wix.com, today, stand up a beautiful
(11:47):
website, integrate it with the e-com and all that kind of stuff.
One person can do that, right? Back in, what, 2000, that was a whole team that it took to do that.
So I understand enterprise is complex and our IT departments are complex.
But what's your view of an autonomous SDLC?
(12:10):
Do you think we'll ever get to a point where we'll be at that state, or do you think it's a mixed bag for the next 20 years, or what do you think?
Gal Vered (12:19):
I think it's very similar to autonomous driving; maybe that's a good analogy, where getting to a point where you can run experiments with cars driving completely autonomously happened very fast, but actually getting a fully autonomous car is a very, very long
(12:40):
tail of problem solving.
And we've been hearing about autonomous cars coming to the market for the past 20 years, right? Every year they say next year we're going to have autonomous cars.
And so I think it's the same way, where we're at
(13:03):
the point right now where 80% of the road took 20% of the time, but now completing it to fully autonomous will take a very, very long time.
But, with that being said, there's still a lot of value to create.
So if you continue with this autonomous car analogy: Tesla works great, right? You still need your hands on the wheel, but it's able to navigate, it reduces stress; it's a very cool product.
(13:26):
So I think in that case, great, now you have tools like Cursor or Replit, where, OK, maybe it doesn't fully build the product, but it makes a software engineer 5x more efficient, and Checksum is maybe more like Waymo in this case.
So Waymo takes you from A to Z completely autonomously.
You just click on a button, you get the car, but it doesn't do
(13:48):
everything, right?
So that's why Checksum is very focused on end-to-end tests, and we are thinking about ourselves, and I think many companies will start doing this as a result, as a service company: we're going to deliver to you an end-to-end testing suite.
That's our goal.
Checksum is not so much a product you use daily, but more of an AI agent that just delivers code for you.
(14:09):
And when we deliver code, we deliver it on GitHub or GitLab via pull request; we're very hooked into your existing tools, and so you barely interface with, like, a Checksum UI.
Mainly, Checksum does stuff as if you had another developer on the team.
And so I think Checksum, in this sense, is very similar to Waymo, and I think software engineering, and generally business processes with AI, will take the
(14:32):
same route.
You'll see very wide tools that require people but still make you 3x more efficient, and very narrow tools that automate the entire thing, but for a very narrow and focused use case, like Checksum: end-to-end tests.
We only do web and we mainly focus on Playwright, and that's
(14:56):
it, right?
We're very focused.
Mike Verinder (14:58):
Yeah. What do you think we're missing from an AI perspective in the end-to-end space?
Do we have CI/CD down?
Do we have mobile down, where we could be pretty seamless with mobile?
I mean, mobile was kind of an issue for a long time, but what
(15:22):
do you think our weak spots are?
Gal Vered (15:24):
Are you asking about testing, or generally?
Mike Verinder (15:28):
Testing, in combination with those aspects, right: testing with CI/CD, testing with mobile.
Gal Vered (15:39):
Yeah, so again, Checksum solves the web component of it pretty automatically.
We kind of build our own models and train on data, so we have a setup that we need to do per customer in order to make sure the accuracy is correct before we can unleash the model.
But that's all done on Checksum's side.
(16:02):
Web is pretty automatic today with Checksum. Our customers spend maybe an hour a week and get a full testing suite, and this hour is mainly reviewing the results of the tests we've generated and, you know, when a test fails, figuring out why it failed and actually fixing the bug.
(16:22):
So maybe an hour a week; that's pretty automatic.
I don't think CI/CD is a gap; it's just a way of running the tests.
I don't think mobile has caught up completely, but mobile is a bit harder because of the devices and the different operating systems.
Mike Verinder (16:35):
Yeah. Do you see a future for manual testing in that environment?
Gal Vered (16:41):
Yeah, I actually see a future for manual testing, in a sense, more than automated testing, because I think what could be automated will be automated by AI.
And you'll always need QA automation engineers, but you'll just need fewer; one QA automation engineer will be able to serve a bigger slice of the pie.
And again, with Checksum, a lot of the companies we work with
(17:06):
work with engineers directly, because if it's an hour a week, then you might as well just have engineers do it.
So it really depends.
But manual testing, real manual testing, not, like, regression, let's do a checklist, but actually understanding the different customers and the different edge cases.
And there are apps with thousands of configurations, so
(17:28):
you can't automate everything.
So it's understanding what is the highest risk for this release, while having an automated testing suite that gives you peace of mind and makes sure the core functionality is working, so you know 95% or 99% of your users will not break.
But automating the 1% of users that have those weird configs is
(17:49):
going to be very hard because, again, you're going to need hundreds of thousands of tests, and this is where manual actually is going to be important, or still is important, in the same way it is important today.
Because, OK, we released this feature and that feature, and I know this customer has some weird issues with this feature and they use it in a weird way.
So let me just do a manual regression suite for this new feature we released, in this weird way, which we're not going
(18:12):
to automate, because it's, like, one customer out of a thousand, and, you know, it's only when we touch this feature.
So I think manual still has a place today, in a similar way.
I don't think AI will replace it, because it requires a lot of context; it requires living and breathing your customers and your engineering team.
But with QA automation, it's easier to see how it's going to
(18:33):
be automated. Checksum is, to an extent, already automating it; if you just write code, it's easier to see how machines can do it.
Mike Verinder (18:45):
I got you.
So you mentioned Checksum really kind of started, or maybe you got the idea for Checksum, when LLMs, ChatGPT, came out.
Do you use ChatGPT on the back end, or do you integrate with Anthropic, or is that user-defined?
(19:09):
What does that look like?
Gal Vered (19:10):
Yeah.
So we started Checksum before those things were launched, and we actually had models which we trained.
By the way, until today, Checksum is a hybrid between models we deploy and train and,
(19:30):
for some tasks where off-the-shelf APIs work well, external APIs.
So we always try to pick and choose the best model, because end-to-end tests need to be very fast, need to be very reliable, and they need to know the specific application; they're very specific.
So this is why we need the hybrid.
Whatever we can upload to OpenAI and Anthropic and Google, we do, but it's not the bulk of the AI usage.
Mike Verinder (19:57):
I was also wondering what you think about the future.
Do you think there's a future for open source testing?
Why I say that is because I see the tools just getting better and better.
(20:22):
There's Playwright, and there's a number of test product companies out there built on Playwright.
And then Selenium is still great, but it seems like the products are getting really, really strong, and they're built on open source.
Do you see a big future for open source?
Do you see that still living
Gal Vered (20:41):
at the same time?
Yeah, it's a good question.
Obviously, open source will always be, to an extent, the infrastructure of everything we do in technology, right?
Our servers will still run Linux, and it's going to continue to be maintained, and I'm sure the next Linux or the next infrastructure open source project will continue to exist,
(21:05):
maybe one that hasn't been invented yet.
And people still use Docker to containerize applications, and again, the next thing will be invented and will be used, and Playwright is one example.
But I think, especially with AI, the app layer is extremely important:
(21:29):
productizing those models in a way that allows customers to actually complete full workflows and gain full advantage of those models, beyond just, hey, give me some piece of code here, piece of code there, and save a few minutes.
I think the app layer and the connections between everything are going to be super important, and again, to an extent, with
(21:50):
Checksum, where we focus on end-to-end tests, we can see how you can kind of drive results.
Mike Verinder (21:54):
Yeah, the reasons for open source used to be: well, test products are really expensive.
That's not really the case anymore.
It also used to be: our integrations are pretty in-depth, and I can never find a test product that can facilitate our integrations. But integrations these days are pretty commonplace.
It's not like it was six or seven years ago even, and
(22:18):
those were big reasons.
And, yes, open source started out with the free aspect; we want it because it's free. But we're not talking about every tool these days being $25,000 a license, right?
So tools are more competitively priced these days, and open source doesn't inherently give you things like AI solutions
(22:41):
that just have a huge return on investment, right?
So when you compare that, plus the infrastructure and the support structure that you get, I don't know, man, I get a little nervous about the open source future.
Gal Vered (22:57):
I understand your question better now.
I think open source will continue to be the main channel, specifically for testing.
Obviously, at Checksum, we took a bet on open source, right? We very deliberately provide code, and provide Playwright tests, versus providing a platform and tests that are,
(23:20):
like, displayed in a no-code fashion with a platform to actually run them.
So, very deliberately, we went with open source because, from our experience talking to prospects at the beginning and working with our customers, we see a lot of importance in using open source specifically for testing.
A testing suite is as strong as the engineers on the team are
(23:43):
connected to the results, and if your tests sit within your code base and are run within your GitHub Actions, and you can see exactly the code that's being run, and you can quickly make changes with code, with the tools you already know, that's the most important thing.
If we have some platform that none of the engineers ever go to, and they need permissions, and they don't even have permissions,
(24:05):
and they don't know how to use it and they don't know how to review the reports, that's where we lose them.
So, specifically for testing, I think open source is extremely important, and again, we made a very deliberate decision.
At the same time, I think the tools that are winning in the market today are the tools that allow you to write code and actually make things happen, and I don't see a shift towards no-code generally
(24:27):
in software development because of AI.
So I think the trend will be tools that allow you to leverage open source frameworks and tools, like we've been doing today, in order to get what you want to get.
Mike Verinder (24:42):
Yeah, okay. So let's go back to Checksum a little bit, you know, and how you're integrating AI.
How did you make that product and come up with it, and how do you deal with things like hallucinations and aspects of that?
Gal Vered (25:01):
Yeah, that's a great question.
At Checksum, what we found most useful is, instead of creating one large model that tries to do everything, we think about Checksum as a system of small models.
Each has a very specific task and each solves a very specific problem.
So we have a model with the goal of summarizing the HTML,
(25:25):
basically providing an HTML summary, because HTMLs today are huge and you can't fit all of them; even if you can, it creates issues when you fit the entire HTML.
We have a model whose goal is to decide what next action to take.
We have a model whose goal is to generate the best locator or selector, and we have a model whose goal is to generate the best assertions.
(25:46):
So, basically, we just have a series of very small models with very specific tasks.
Some of them are internally trained, some of them use external APIs, but this allows us to keep hallucinations to a minimum.
It allows us to generate tests in a cost-efficient manner, it allows us to make fast decisions that are more structured and, when something fails, it allows us to break it down.
(26:08):
So yeah, take a big problem, break it into small steps and build a system of small models, or small decision-making capabilities, versus dumping everything into an LLM and hoping you get the correct answer.
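As a rough illustration of that decomposition (not Checksum's actual models), each stage can be a small, single-purpose component. The deterministic stand-ins below show the shape of such a pipeline; every function name and heuristic is made up for the example.

```python
import re

# Deterministic stand-ins for the "system of small models" idea: each stage
# has one narrow job. In a real system each function would be backed by a
# trained model or an LLM call; the heuristics here are illustrative only.

def summarize_html(html):
    """Stage 1: compress a huge DOM down to its interactive elements."""
    return re.findall(r"<(?:button|a|input)\b[^>]*>", html)

def pick_locator(element):
    """Stage 2: choose a stable locator; prefer data-testid, then id."""
    m = re.search(r'data-testid="([^"]+)"', element)
    if m:
        return f'[data-testid="{m.group(1)}"]'
    m = re.search(r'id="([^"]+)"', element)
    if m:
        return f"#{m.group(1)}"
    return element  # fall back to the raw element

def next_action(summary):
    """Stage 3: decide the next step (here: click the first button seen)."""
    for el in summary:
        if el.startswith("<button"):
            return "click " + pick_locator(el)
    return None
```

Because each stage is narrow, a failure can be traced to one component, which is the debuggability benefit Gal describes.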
Mike Verinder (26:27):
So I've got another one for you, and it's something that I've seen a lot of: SaaS companies that are giving AI solutions and things that require LLM integration.
How do SaaS companies deal with that in a regulated environment where privacy is, like, a big issue?
Gal Vered (26:48):
That's a wonderful question.
It's hard.
It's privacy, and also, even if the data is not consumer data, companies today are really worried that large language models will be trained on their data, just because they feel like they're missing
(27:11):
something out.
Whether this feeling is correct or not, and whether you should care if someone trains on your data, I don't know.
But yeah, it's a big problem. With Checksum, luckily, we've architected our entire system to not collect PII, so we don't collect any personally identifiable information, and the data that Checksum is exposed to is mainly what your
(27:32):
front-end application looks like, which every customer of our customers has access to: the web app that you build front ends for people to use.
So we don't need access to the specific code base.
We don't need access to the backend systems, to the database.
We don't need access to PII or specific consumer data; we architected it in this way.
But as we do this, we see that companies really care about it,
(27:56):
and I think there will need to be some standards created as to what you can and can't share, and companies will need to make a decision about how much they're willing to give in order to gain AI efficiencies.
Checksum was able to do it with very little data, but some use cases you just can't do
(28:16):
if you're not willing to give data to an AI model.
And I'm assuming, as AI models become better and you can 5x your revenue if you just use AI, we'll see companies more willing to put data inside of AI.
So it's more of a reward-structure function.
Mike Verinder (28:35):
So do you help them sanitize that data, or do you make them aware that that data needs to be sanitized? Like if it's a social security number or a phone number, or anything really, last names, addresses and stuff like that.
Gal Vered (28:49):
So we don't work on real customer data, right?
We generally test in a UAT environment.
We log in as fake users, right?
So we try to never touch customer data, and we have sanitization mechanisms in place we can activate in case there are environments that are more hybrid and it's hard to separate.
Typically we just do the clear separation, but if we can't, we
(29:10):
have checks and balances in place that make sure there aren't any social security numbers or emails being sent, or any PII.
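A sanitization mechanism of the kind described can be as simple as pattern-based scrubbing before anything leaves the test environment. This sketch uses deliberately simplified, illustrative patterns; real PII detection is considerably more involved:

```python
import re

# Illustrative PII scrubber: replace common patterns before data leaves a
# test environment. The regexes are simplified examples for the sketch,
# not production-grade detection.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US SSN
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def scrub(text):
    """Return `text` with every matched PII pattern replaced."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Running the scrubber over everything that would be sent to an external model is one simple "checks and balances" layer of the sort mentioned.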
Mike Verinder (29:19):
Where do you see Checksum in the next 24 months? Smaller clients, enterprise clients?
Gal Vered (29:26):
If I tell you I know where Checksum will be in 24 months, I'll probably be lying.
We're hyper-focused; we have a product that works really well.
We go very fast, so we're hyper-focused on solving the correct problem at hand.
What we do know is where we want to be in 10 years, and where we want to be in the next month, or the next three months, and everything in between we kind of figure out as we go.
(29:51):
But, generally speaking, as we move towards more autonomous systems, and specifically, for engineering, more autonomous AI engineers, we believe that there is no autonomous AI engineer without an autonomous QA automation engineer, because if AI is
(30:12):
going to write more code, this code is going to be lower quality, and more code will be written.
So if you want AI to fully autonomously write code, you need a way to test this code very robustly, and you need a way to test this code across systems, because AI is very good at figuring out unit-level code.
(30:34):
It's still not good at understanding a full code base all at once, exactly how it works.
Yeah, so our vision, in the next, I don't know if two years, but in the next five or ten years, is to play a significant role in building autonomous AI engineering systems by providing the testing part and the feedback part that will be then
(30:56):
fed back to the AI model in an iteration loop.
So I'm imagining an AI engineer model will write some code, Checksum will test it and provide all of the failures to the AI model, the AI model reflects, and so on. And in a sense, that's what we're doing today, just behind the scenes.
It's not an AI engineer, it's a person who's maybe using AI to write code, but the immediate feedback on everything that's
(31:18):
working is, like, the most powerful thing about Checksum and an end-to-end testing suite in general, and I think this feedback will be necessary to push the envelope.
So that's the role we want to play in the overall grand scheme of things.
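The iteration loop Gal sketches, generate code, run the end-to-end suite, feed failures back, can be expressed in a few lines. In this sketch `write_code` and `run_e2e_suite` are stand-ins supplied by the caller for an AI engineer model and a Checksum-style test run; neither name refers to a real API.

```python
# Sketch of the generate -> test -> feed-failures-back loop described above.
# `write_code` stands in for an AI engineer model and `run_e2e_suite` for a
# Checksum-style end-to-end run; both are callables supplied by the caller.

def iterate_until_green(write_code, run_e2e_suite, max_rounds=5):
    """Generate code, run the e2e suite, feed failures back until green."""
    failures = []
    for round_no in range(1, max_rounds + 1):
        code = write_code(failures)        # the model sees prior failures
        failures = run_e2e_suite(code)     # the suite reports what broke
        if not failures:
            return code, round_no          # suite is green: done
    raise RuntimeError(f"still failing after {max_rounds} rounds: {failures}")
```

The interesting part is the second argument to `write_code`: the failure list is the feedback channel that closes the loop.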
Mike Verinder (31:31):
You mentioned reporting a little bit, or giving feedback, which I consider reporting.
I can tell you, as an engineering manager for 20 years, this is going to sound kind of weird, but one of the most important things to me was to have, like, a dashboard or a viewpoint into how well my release was doing and the health
(31:52):
of my release. When I got to, you know, two days before release, I needed to really be able to manage a lot of things: other people's expectations, right, everybody in the business and our clients, just a lot going on.
Does Checksum have the ability to pull all of that together and give, kind of on the execution side, for them to be
(32:12):
able to say, hey, we're ready to go or no go, kind of a status?
Gal Vered (32:19):
Yes, and today we focus very much on the engineering standpoint, like, is your app functionally correct?
But on our roadmap is definitely bringing in different stakeholders and different data points to be a central decision-making center for whether you can deploy a release
(32:39):
, and testing, whether your app is working, is the main part of it.
But we can definitely do more with it, because we're already somewhat connected to the new features, everything that changed in the release, right, because we're running the tests, so we can start publishing changes and informing customers, or informing product managers, or informing the company.
(33:01):
So we do have a lot of data that we understand about what changed in the release and what happened, and we can definitely operationalize it.
But, you know, as a relatively young startup, we're very focused on end-to-end tests, and we also see a strong pull from the market.
So we don't see any reason to diverge too much at this point, because we feel like there's still a lot we can do to, you
(33:25):
know, satisfy the core use case of:
Mike Verinder (33:26):
I want to know that my app is working, and if it's not working, I want to get it fast.
Yeah, yeah.
It's one of those cool things that happens when you become an end-to-end automation product company: you start collecting a lot of data, and you start realizing, man, look at all the insights that I could give if I repackaged a lot of this data.
Like, you could make improvements to your test
(33:49):
processes if you tweak this or you tweak that.
Right, you found five bugs last time, but you found them in this section of code, so this section of code is higher risk than the other areas, or whatever.
That's always been kind of a thing, right?
As a manager, I would always go back and do kind of that retrospective: how
(34:10):
do we get better?
I could see, as a product maker, I would love to be able to have that kind of data and to be able to push that back into an informed decision later.
Gal Vered (34:23):
100%.
We call it internally continuous quality.
So there's continuous integration, continuous deployment, and with Checksum we want to play a part in continuous quality: basically, at every point, give you the insights and give you the tools and the information you need in order to increase the quality of your product.
Again, end-to-end testing is just the start, where, like, are
(34:47):
things working?
But if there are bugs, we can then tell you, in the future, why bugs happen.
Which modules do bugs happen in the most?
Maybe you can refactor these modules, maybe you can, you know, do something about it so bugs don't happen in the first place.
So, yeah, I think there's a lot of opportunity here, and we kind of try to, you know, look far but also focus on the
(35:09):
near.
Mike Verinder (35:18):
So I've got one other, I guess, area that's kind of really been on my mind a lot lately, and it's... it's not really... I guess it is
product specific in a way, but it goes back to funding.
When you're a startup and you're going through the funding process and you're looking for investors, what are you seeing that they're looking at, or that they're interested in, these
(35:38):
days?
Is it just anything that you throw at them with AI in it, or is it specific aspects of that?
What do you think the VCs and the private equity companies out there are really interested in these days?
What really sort of turns them on?
Gal Vered (35:55):
Yeah, and I think, first of all, no, it's not everything you throw AI at.
I think people tend to not give enough credit to investors; of course, they're smart people, and they want to make sure there's actually a strong business behind it.
But AI did change a lot of things, and I do see investors, and maybe even more generally people, take things in
(36:19):
different ways.
Right, there are investors who think that building software and building UI elements will be commoditized, so advantages you used to have... if you built a CRM, right, you could have an advantage and a great business.
I see a lot of investors say that now it's very easy to copy features.
So, you know, application and UI and user experience is no longer
(36:41):
an advantage.
I don't know if I subscribe to it, but I do hear people say that, and they only focus on, like, deep tech that cannot be copied quickly.
And some even go further, where they think every company in the future will build their own CRM according to exactly their specifications, because it's going to be so easy.
And other investors are just focused very much on speed,
(37:03):
which is the same kind of insight but a different outcome: like, yeah, so now if you just move faster and you understand your customers best, you win, because copying software is easy.
So actually they focus on, like, the best team, the team that moves fastest and has the most insights.
And so, yeah, I don't think it's as uniform today, because of the latest changes,
(37:25):
and it will probably converge at a certain point to some thesis that controls the market.
Mike Verinder (37:31):
Do you think liquidity is opening up again in the market, or is that still pretty tight?
Gal Vered (37:39):
Yeah, I don't think I'm the right person to ask.
You know, we're doing well, so I have a sample size of one, and I don't know if I'm the right person to answer this question.
Probably you need, like, the investor side to tell you what they're seeing, if they're seeing a higher volume.
Mike Verinder (37:59):
I mean, I talk to a lot of boards.
I've talked to probably about six different test product boards, I don't know.
I've seen it a little bit all over the place, to be honest.
The money's there, it's just a little tighter.
One thing I have seen is they ask... I think it's interesting how much they ask.
They ask a lot of questions, a lot of times, because they aren't
(38:20):
the expert in the tech.
You know, so, interesting stuff, Gal.
I was just wondering what your insight was.
Gal Vered (38:27):
Yeah, no, I agree, it's an interesting period.
I think tech is changing very, very fast, and AI will definitely... tech will be the first thing that is going to change completely because of AI, and I think other industries will also have to.
Mike Verinder (38:45):
Can we expect to see Checksum at any conferences, any test conferences or anything like that, this year?
Gal Vered (38:52):
Yeah, definitely, we are moving very fast, so I can't tell you, "we have the calendar for 2025 and here's all of the things we're going to attend."
But on a month-to-month basis, it's definitely on our roadmap.
We want to engage more with the community at scale, and conferences are a great way to do it.
Mike Verinder (39:10):
Yeah, is that your approach: more meetups, more local events, or more conferences, or a little bit of a mix of both?
Gal Vered (39:20):
It's a mix.
We also see now... I don't know if 90% of the people know us, but we've made a good enough name for ourselves that we get a good stream of inbounds.
People are just hearing by word of mouth, or kind of become familiar with Checksum, and they find us.
(39:41):
But obviously, again, we want to engage with the community on a broader scale, so conferences are a good way to go about it.
Mike Verinder (39:51):
To get your name out a little bit, right.
I understand that.
So, you know, I know who you are, and you're an innovative test product company.
You're also from Israel.
I think that's awesome.
I'm so impressed with Israeli companies.
One of my first jobs was doing development work and testing
(40:11):
work with a company called Amdocs, which is an Israeli telecom billing company, and it's just so amazing to see such awesome stuff come out of such a small country.
You know, you would never think... just the innovation that's able to come out of Israel is just
(40:35):
unbelievable.
Gal Vered:
100% agree, and I think there's a few reasons for it, but yeah.
Mike Verinder:
What are your reasons for it?
Why are y'all so... it's like a little San Francisco over there, or something?
Gal Vered (40:51):
So we have a team in California, our engineering team is in Israel, and overall we're a US company with a big Israeli presence.
And obviously I sit in California, but I'm from Israel.
Why?
I think Israel is a relatively new country.
(41:14):
It was established on similar values to the US, in the sense of, like, pioneering, right, and this was very recently.
So I think the entrepreneurial culture is very prominent in Israel, because it's kind of used to it: like 70 years ago, when Israel was founded, people would just do stuff to make things happen,
(41:35):
and that's very aligned with startups.
And also, again, the culture.
When I was at Google, I left to co-found the startup.
When you do that in other cultures, like, your friends are like, why, right?
Like, why would you... hey, you had a perfectly good job making good money, and, like, cut your salary by more than half
(41:55):
to start a business that's unstable?
And I think, like, in Israeli culture, or at least with my friends, it was like, yeah, great, that's very ambitious, that's amazing.
So I think those things kind of, like, get people to try more.
Mike Verinder (42:12):
Yeah, well, it's like I said, it's amazing to see; you can give endless examples of software companies that have done really well.
So congratulations on all you've done so far.
I'll keep refreshing the website to see what you come out with and what you continue to come up with every day.
So thanks, Gal.
Gal Vered (42:33):
Thanks, and thanks for your time; it was a lovely conversation.
Mike Verinder (42:37):
Hey, this is Mike.
Thanks for watching part two of our series, Autonomous SDLC: A Test Product Perspective.
If you want to go back and check part one, which was Autonomous SDLC: A Developer's Perspective, feel free to look that up in our channel.
I do expect to do a part three to this sometime within the next month, so you're welcome to check that out as well.
(42:59):
Special thanks to Gal and the Checksum AI team for taking their time to talk with us.
Links to Checksum are in the description below.
They're a great product, and it's a great team over there.
There's a lot of energy.
It's a good company.
All right, thanks guys.