Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Michael (00:00):
Hey. I'm Michael
Dyrynda.
Jake (00:02):
And I'm Jake Bennett,
enjoying a wonderful Arnold Palmer Lite half and half iced tea lemonade.
Michael (00:09):
An actual Arnold Palmer. Arnold. Arnold. And welcome to episode one seventy one of the North Meets South web podcast.
Jake (00:18):
There it is. We're back.
We're back.
Michael (00:21):
Return. I have a I have a follow-up. I have a follow-up.
Jake (00:24):
Alright. Let's hear let's
hear your follow-up to your most
recent rant.
Michael (00:29):
So last episode, I said this coach is no good. He needs to go. We need to dismiss him. What the club has actually done in the meantime is formally announced his succession plan. So this year will be his twelfth, thirteenth, and final year at the club, and then he will ride off into the sunset.
Jake (00:51):
Say.
Michael (00:51):
What's No. No. They've already signed the next coach. They have signed the next coach. It's already... They've given him a three-year contract.
So
Jake (00:59):
I'll believe it when I see it. I just have a feeling somehow he's gonna find his way back into coaching again, don't you know?
Michael (01:06):
Considering how things have gone over the last twelve years, it would not surprise me if something changes his mind. Because there was lots of fluff that came out of the club, lots of blowing hot air, and, you know, he's the right man, and he's still passionate for it, and he's excited. And we're like the thing that ires me most, and I've been
(01:28):
enjoying the word ire the last couple of months.
Jake (01:30):
I like that. That's a good
one.
Michael (01:31):
Ire. What raises my ire above all else is that after all of these, you know, failures to capitalize on the best list we've ever had and the best players we've ever had and all of that, we will be deprived of the coach being sacked. He's just gonna be allowed to walk off into the sunset. So
Jake (01:49):
You wanted there to be some shame associated
Michael (01:51):
with that. Like shame. Yeah. He's just such a good guy, and everyone loves him and all of the... no. Look. Hopefully, we had two of our key defenders get injured, so they'll be a little bit undercooked if they make the start of the season anyway, which is in, like, four weeks from now. So Mhmm. Hopefully, we'll just lose badly in the
(02:12):
early
Jake (02:12):
stages. And then everybody can just be really mad at him, and he can get the firing he deserves.
Michael (02:16):
That's right. Just
usher him out. So, anyway, we'll
see what happens.
Jake (02:20):
Oh, man. Well, I wish the worst for you then, my friend. I hope it goes terribly, and I hope he gets just shamed into oblivion and that there's, you know, no honor for him leaving on a good season. Just
Michael (02:31):
We'll certainly boo him
every chance we get.
Jake (02:33):
Make him suffer. I'm just kidding. But, well, you know. Yeah. You know, I wish I had a rant to follow with. I think I could let me see. Can I cook something up? Has there been anything bothering me recently? I think that one of the things that I could have a rant about would be when you
(02:56):
build things for people and then they don't use them, and then they come back and they say, hey, could you build this feature? And you mean the one I built for you, like, six months ago? That one? Is that the one you're talking about? Mhmm. But sometimes even worse, and I'm just I'm ranting about this. Like, this is a thing. It is a thing. It's happened before, but
(03:16):
it hasn't happened recently. But what has happened recently is when you build a feature and you're like, okay, we're gonna release it for feedback. Like, I feel like we're far enough along. This has some value to it, but it's not like in its final form, but I wanna get some feedback on it. Right? And so, you've released it to the world, and you say, okay, guys, go play with it. Whenever a month goes by, and you come back, and
(03:39):
nobody's used it yet, but by golly, they have changes for you. There's literally not one record in the database yet that they've used. But they're like, you know, I think the reason why we didn't use it is because it would be really nice if it actually had this feature. It's like, use the tool. Use the tool, and then tell me what else you want. Like, you haven't even tried it yet. Oh,
(04:03):
man. That just makes your blood boil. I don't know. Maybe I'm in a small... you know, Michael, I feel like you and I I don't know. I'm trying to, like, evaluate the landscape here. Are the vast majority of people that are listening to this Nobody's listening to this. But the vast majority of people listening to this, are they, you know, people who work on a specific product for, like, you
(04:24):
know, a long time? Is it like, do you have internal users? Do you have customers? Mhmm. Or, you know, or is it like agency sort of work? Because in the case that it's agency work, like, who gives a crap? Like, if they haven't used it, like, you're getting paid to do the thing. Yeah. It doesn't matter. I mean, in some sense, like, I could say the same thing about myself. Like, I'm getting paid to do it.
(04:45):
like, I feel like for me, like, I'm serving our internal users for a lot of what we're doing. And so, it does like irk you when you, you know, you sort of go out there and build this feature that maybe you didn't have time for in the first place. And then it's like, hey, we want more changes to that thing. It's like, no, you don't. No, you don't. You just need to use
(05:05):
it.
Michael (05:06):
Use it. Use the thing. And then we'll figure out from there. Yeah.
Jake (05:09):
Yeah. Yeah. Yeah. Yeah. That's my only rant. That's all I got. That's as, you know, what's the word?
Michael (05:18):
It's as ruffled as your
feathers get.
Jake (05:20):
Yeah. Like in Lego Movie 2, this is the thing. Well, she's brooding. That's what it is. Brooding?
Michael (05:26):
She's
Jake (05:26):
brooding. They're gonna yeah. She's like, you get dark and moody and, like, brood. A brood session. A good brood session. So that's as brooding as I get. That's about as far as it goes. So anyway, you had some stuff that you messaged about. You know, not often do we get on this and have, like, a bunch of stuff that we've got queued up to talk about. Usually, it's just kind of whatever bubbles to the surface, but you came prepared today. So I'm gonna let you take the floor. Let's see what you got.
Michael (05:49):
Cool. So and I wanna know what you think about this if you've been in this situation before, which I think maybe you might have, given the kind of work that you do in your day job, and maybe if any listeners have been in a similar situation. Essentially, we have this quoting platform where we
(06:10):
have a panel of, you know, dozens of lenders that we integrate with, that will quote, that are on the panel, that we can send our brokers' customers to for the loans to buy things, cars, property, tools, whatever. Not all of the
(06:31):
lenders that are on our panel provide an API integration to quote for loans. Not all of
the lenders that provide APIs provide endpoints to make
quotes. Sometimes they just provide us endpoints to send a loan application, and it goes into their system, and then you use their system to do whatever we need to do. So the tool, the platform that we built, that we use, essentially provides an abstraction over all of these different lenders, and we create
(07:03):
business rules based on those lenders' requirements in order to basically provide a normalized view of, you know, what the monthly repayments would be, what the fees would be, what the broker fees would be, what the establishment fees, like, all of this stuff. And so for each lender, a lot of this is very manual. Once a week, they will send us or once a
(07:26):
month, they will send us an email.
Who that email goes to, I found out this morning, could vary in the business. It could go from a lender's BDM, business development manager, to one of our business development managers. Sometimes they send it to our head of HR for some reason. Sometimes it goes to our head of operations. We're trying to standardize that so everything kind of goes into one place, from which we then create a ticket in ClickUp, which will
(07:48):
then get processed.
But the format of all of these naturally is different. Like, sometimes it's just one person emailing, here's our updated rates. Sometimes it's, our updated rate is this, and here is a PDF with, like, all of the additional details. Sometimes it's like a mailing list that just gets blasted out and passed around. So it is then the job of someone in our team to go through that
(08:11):
PDF or that email and figure out what's changed and provide us we've got, like, this master spreadsheet, which has all the formulas in it, all of the values and things like that, which we then put into the platform and then code it up. And then we make sure that, like, given all of the same inputs, we get all the same outputs for all of the different scenarios. So there's, like, 20 or 30 scenarios. Now you would
(08:33):
write an automated test for all of these scenarios. Great. That's fine. It passes today. But then in a week, when we get the new rates
Jake (08:41):
Real quick. When you say okay. Okay. So you're saying you've automated tests for all these scenarios, meaning, like, give me three of them. What are the scenarios?
Michael (08:49):
Well, say it's like an individual person that is purchasing a car. Right?
Jake (08:54):
Mhmm.
Michael (08:54):
Or sometimes it's a joint application, so a husband and wife purchasing something. Or it's what we call, like, a sole trader, like someone who works for themselves effectively as a contractor, and they need to buy a new computer. So they would put in a loan for that. Like, so there's all these different situations.
Jake (09:12):
And so you'd say, like, given that these are the circumstances under which they're asking for a loan, they should get rates and things of this type from this particular broker.
Michael (09:21):
Yeah. So from each of these lenders, like, this lender sorry. Lenders a, b, c, d, let's say. So a and c are like, yep, we can provide you a quote. B gets knocked out. You know, you don't meet some eligibility criteria. And d is, like, we don't offer that kind of finance. So great. This is
(09:43):
fine.
But the problem with this is that these rates change. Right? Week to week or month to month, or whatever the frequency is for different lenders, the rules might change for different lenders. So we're always in there. We're making changes to the code when the rules change. But the problem with automated testing is that the
(10:04):
inputs change. So in a general test, you're saying, given this is the world that I build, these are the values that come out the other side. So you can make assertions and expectations around that kind of stuff. But the problem with these tests is that the expectations will change depending on, like, the business logic changing for a specific
(10:26):
lender periodically, or the values will change. You know, how much they'll lend or what the repayment fee will be will change based on data that comes in.
Sometimes, some parts of this system are user managed by our internal staff that go in and say, okay, we need to go and update the base rates. You know, it changed from 9% to 8%, or it
(10:48):
went from, you know, eight and a half percent to 9.25%. These things happen in real time without our intervention, and, like, these are just rate changes. They're not changes in behavior.
So in order for us to be able to determine that the system is working, we need real time data to say, like, yes, the current
(11:09):
state of the world is accurate based on these inputs. So I guess where this comes into play and, ultimately, my question is how would you build an automated test system, or automated suite of tests, for this system that requires real
(11:32):
time values that come external to the system and can be changed by anyone at any time? So we don't know that it's correct. And I think a lot of the base rate calculations are simple enough, but the logic to determine whether or not someone is eligible for a particular lender may change.
(11:52):
And so
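The per-lender scenarios Michael describes are the part that is easy to pin down in code. A minimal sketch of such scenario tests in Python; the lender names, the rules, and the `eligible` helper are all hypothetical, not the real platform's logic:

```python
# Hypothetical eligibility rules for a handful of lenders. The real
# platform's rules are far richer and change periodically.
LENDER_RULES = {
    "lender_a": {"products": {"car", "equipment"}, "max_amount": 100_000},
    "lender_b": {"products": {"car"}, "min_credit_score": 700},
    "lender_c": {"products": {"car", "equipment"}, "max_amount": 250_000},
    "lender_d": {"products": {"property"}},  # doesn't offer car finance
}

def eligible(lender: str, product: str, amount: int, credit_score: int) -> bool:
    """Return True when the application passes this lender's rules."""
    rules = LENDER_RULES[lender]
    if product not in rules["products"]:
        return False  # "we don't offer that kind of finance"
    if amount > rules.get("max_amount", float("inf")):
        return False
    if credit_score < rules.get("min_credit_score", 0):
        return False  # "you don't meet some eligibility criteria"
    return True

# Scenario: an individual buying a car for $40,000 with a 650 credit score.
results = {name: eligible(name, "car", 40_000, 650) for name in LENDER_RULES}
```

With these made-up rules, a and c quote, b is knocked out on eligibility, and d doesn't offer the product, which mirrors the a/b/c/d example above.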
Jake (11:53):
Yeah. That for me seems like the most challenging part. And I think the idea too is that, like, you know, the way that our brains are stuck thinking about it is typically in this, like, unit test world, where we're used to running it sort of in isolation. And it's like, I think you're sort of outside of that a bit. You know, the situation that you're talking about is a bit outside of that
(12:14):
specific scenario. And I don't I mean, just thinking off the top of my head, I don't know how you account for changes in logic without changing your code. Unless you can account for that change in logic by having a checkbox that is some you
(12:38):
know, that's the flag that changes it to be truthy or falsy or something like that. Mhmm. You know? So, like, if the person that's actually inputting the data is checking the box to say, yes, we do allow this particular type of loan, or no, we don't allow this particular type of loan, or there's some threshold that says if it's above this dollar
(12:59):
amount for this particular line of loan, then we do not allow it. Or if it's below, you know what I mean? Something like that. So it's like, you can only be as specific in your tests as the UI that you're allowing your users to update can be. So that makes it complicated because it's boy, I
(13:22):
don't know. Yeah. I'm trying to think of how it's it's more like an end to end test than it is a unit test. You know?
Michael (13:35):
And I think there's really two sorts of tests that we can write.
Jake (13:41):
The yeah. The other thing I was gonna say too is, like, there's that idea of, like, fuzz testing too. Have you ever heard of that? I think, like, Spatie talks about that. Okay. So it's essentially, like, you have, like, happy path tests that you create, and then you have, like, fuzz tests that are like, try to break crap. Like, just literally try and break it. Like, you should, you know, just take every path you could possibly take, down every avenue, down every road, and make
(14:03):
sure I don't get, like, some exceptions. I feel like Exploratory testing.
Michael (14:07):
Yeah. QA, we usually do that.
Jake (14:10):
I think they call it fuzz testing. Like, it's a thing. Like, you can automate fuzz testing, though. Mhmm.
Michael (14:15):
Or mutation testing is kind of similar,
Jake (14:18):
but maybe different.
That's the idea.
Michael (14:19):
Yeah. To go and, like, change things and see what happens. So there's, like, two kinds of tests that we can write, and I think the easier of the two is to test those business rules. Like, when the business rules change, you're changing code, you're probably changing tests. And, theoretically, you're changing the test first to say, okay, this previously said it was okay. And, realistically,
(14:41):
you're just checking for Boolean switches. Like, if you're expecting that, given this set of known inputs, the applicant would either be able to get that loan or not. That's easy enough to test. Yeah.
I think the trickier bit is, like, testing the actual calculations, and the calculated output. That, given
(15:03):
some known inputs, the output will be this. Because the variables change in a way that is separate to, like, the system itself. So if the interest rate yeah, if the repayment amount sorry. If the loan amount is, say, $40,000
(15:28):
and the interest rate today is 10%, then the repayment amount would be, let's say, $500 a month. Okay? But if the interest rate changes to 9%, you know, the calculation is the same, but you can't really make an assertion against the values because that changes in a variable way.
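One way out of the drifting-values problem Michael describes is to assert properties of the calculation rather than hard-coded dollar figures. A hedged sketch using the generic amortisation formula; the real per-lender formulas will differ, and the $40,000 over 60 months numbers are purely illustrative:

```python
def monthly_repayment(principal: float, annual_rate: float, months: int) -> float:
    """Generic amortised repayment: principal * m / (1 - (1 + m)^-n)."""
    m = annual_rate / 12  # monthly rate
    if m == 0:
        return principal / months  # interest-free edge case
    return principal * m / (1 - (1 + m) ** -months)

# Instead of asserting "the repayment is $500 a month" (which breaks the
# moment a base rate changes), compute at two rates and assert the
# relationships that hold for any rate:
at_ten = monthly_repayment(40_000, 0.10, 60)
at_nine = monthly_repayment(40_000, 0.09, 60)
```

A cheaper rate must always produce a cheaper repayment, and no repayment can be below straight principal-over-term; those invariants survive every rate change, which is exactly what a plain value assertion does not.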
Jake (15:48):
Yeah. That's right. It's it's man,
Michael (15:52):
that is a really tricky one, what you're actually testing. But we need to make sure that, like, when we apply all these business rules, the things that come out the other side are correct at the time that we make those changes. So, you know, maybe you don't test you know, you kind of just ignore the fact that the variables in the system exist and just say, like, given what we know of the world today with
(16:15):
these inputs and these variables, this is the expectation. And so long as those tests keep passing, the expectation and the reality of the system are correct. In production, in the day to day usage of the system, when the variables change, of course, the values will drift from whatever is in the test, but you know that the calculations are
(16:37):
correct.
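Michael's "given what we know of the world today" approach boils down to injecting the variables instead of letting the calculation read the live ones. A hypothetical sketch; in the real platform the rates live in a user-managed store, here they are just a dict passed in:

```python
def quote_repayment(amount: float, months: int, lender: str, base_rates: dict) -> float:
    """Repayment using whatever base rates the caller supplies. Production
    passes the live, staff-managed rates; a test passes pinned ones."""
    m = base_rates[lender] / 12
    return amount * m / (1 - (1 + m) ** -months)

# The test freezes today's variables, so it keeps passing even after a
# staff member bumps the live rate from 9% to 9.25%. That's a data
# change, not a behaviour change, and this test only guards behaviour.
PINNED = {"lender_a": 0.09}
repayment = quote_repayment(40_000, 60, "lender_a", PINNED)
```

Because the rate is an argument rather than a lookup inside the function, the assertion stays stable while the production values drift, which is the separation Michael is after.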
Jake (16:38):
Yeah. I think that's I think you're correct in that, like, what you actually have to test is that the business rules are being followed. So you would have to test the threshold to say, like so the example I gave earlier, which is like, okay, given that this is this amount, and this is for this particular type of loan, and this is the repayment, and this is the interest. If it's above this, or if it's below this, then we do or we do
(17:01):
not. That's the business rule. That's how we test it. And we're sort of saying, here's the edge. Yeah. This is the edge. And like, I'm gonna test one on this side of the edge, and I'm gonna test one on this side of the edge. And as long as those two pass, I can sort of guess or infer that everything in between there should also pass. Mhmm. And then, I think you actually could do something like fuzz testing, where you don't
(17:24):
necessarily even have to care about what the particular inputs are that are happening. But fuzz testing basically allows you to automatically test it with invalid or random inputs to find bugs or errors or vulnerabilities or things like that.
So, you know, if you you know, that might be something interesting to look into, because that could essentially account for any type of value that your people would possibly put in,
(17:47):
you know. So you could say, in any instance, doesn't matter, I should never get a negative repayment value. It should never happen. Mhmm. You know what I mean? And then just, like, okay, go to town. Try it. See if there's anything you can do that would possibly make that happen. Or, you know, you could put, like, sort of those guardrails on it and say Yeah. These tests here, these are, like, worst case scenarios,
(18:09):
like, it should always return a positive value, and then just start throwing random stuff at it. Mhmm. And let it go to town, and see if it can break it. If it can, then you know like, I don't have to wait for that magic instance when they put in something stupid and it breaks. I've tested that in advance. I've tested every possible value plus every possible value they might not think of in advance to see if
(18:30):
it'll break. Yeah.
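Jake's "never a negative repayment, throw random stuff at it" invariant can be sketched as a crude fuzz loop. Everything here is illustrative: the guard bounds are made up, and a property-based library such as Hypothesis would do this job properly; this is just the bare idea with the standard library:

```python
import random

def repayment(amount: float, annual_rate: float, months: int) -> float:
    """Guarded calculation: reject out-of-bounds inputs explicitly
    instead of letting the maths blow up downstream."""
    if amount <= 0 or months <= 0 or not 0 <= annual_rate < 1:
        raise ValueError("input out of bounds")
    m = annual_rate / 12
    if m == 0:
        return amount / months
    return amount * m / (1 - (1 + m) ** -months)

# Fuzz loop: hurl random (often invalid) inputs at the function and check
# the invariant "never a negative repayment, never an unexpected exception".
random.seed(0)
negatives = 0
for _ in range(1_000):
    amount = random.uniform(-1e6, 1e6)
    rate = random.uniform(-1.0, 2.0)
    months = random.randint(-10, 600)
    try:
        if repayment(amount, rate, months) < 0:
            negatives += 1
    except ValueError:
        pass  # explicit rejection is fine; any other exception fails the run
```

If this loop ever finds a negative value or an exception other than the deliberate `ValueError`, you have found the "magic instance" before a user does.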
Michael (18:32):
And by break, we mean, like, throws an exception because it's gone out of bounds or something. Yeah. Not like it should always just return true or false. It should never throw an exception or, you know Yeah. Not not know what to do. It should always you know? If the method is to return true or false, you should, no matter what the inputs are, always return either true or false. It should never throw an exception. You know, unless the method is specifically guarding for that
(18:56):
and throwing an exception in specific circumstances. So, yeah, I think I mean, that will be the approach that I think I will take when we start putting these tests in. So this all lives in, like, an external system that we kind of API out to. Like, we manage the system. It's just that this is the last remaining piece of a legacy system that never got pulled into, like, the new platform. And so the work that
(19:18):
I'm doing at the moment is to merge those two galaxies together and see what happens. So
Jake (19:23):
Yeah. I think maybe the idea too is, like, if and I keep saying fuzz testing. I promise it'll be the last time. But if you do that so what you might end up finding too is that you don't have strong enough typing on the inputs as well. So, like, it might be that that is the hero of the day. It's like, nope, this must be a positive integer that is
(19:44):
represented as, you know, between this value and this value. Like, those are the only acceptable values, you know? Yeah. And it might be that the fuzzing actually tells you, well, you don't actually have really good protections on this. There is no typing that's checking for this type of thing.
Michael (19:59):
Right.
Jake (19:59):
You're allowing strings in here that are not numeric at all. Mhmm. It's gonna break at some point in the future if you don't fix that. And so maybe that's what it does is it basically forces you to have strict typing up front, so that you can guarantee that your processes downstream are gonna adhere to your business rules, because Mhmm. You are doing the type checks up front.
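Jake's point about strict typing up front amounts to a boundary validator: reject junk at the edge so every downstream business rule can assume a sane value. The helper name and the bounds below are made up for illustration:

```python
def parse_loan_amount(raw) -> int:
    """Validate at the boundary so downstream rules can assume a sane
    integer; the accepted range here is hypothetical."""
    if isinstance(raw, bool) or not isinstance(raw, (int, str)):
        raise TypeError("loan amount must be an integer or a numeric string")
    if isinstance(raw, str):
        if not raw.isdigit():
            raise ValueError("loan amount string must be numeric")
        raw = int(raw)
    if not 1_000 <= raw <= 1_000_000:
        raise ValueError("loan amount out of accepted range")
    return raw
```

A non-numeric string, an out-of-range figure, or the wrong type altogether is rejected immediately, so the "strings that are not numeric at all" case can never reach the repayment calculations.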
Michael (20:19):
Yeah. Yeah. And I think getting the tests in place on a per lender scenario for those business rules will be the way to start. And then we just assume that the rest behaves itself, you know, and put tests in when we start hitting those boundary cases in production where, you
(20:42):
know, given these inputs, some error, some unexpected state was reached, and then go write the test specifically to account for that with those inputs. Mhmm. And, yeah, I think you're right. Just kind of avoiding testing the actual values is gonna be the only way to there's gonna have to be some manual step where we
(21:04):
send it out to a review environment and get someone in the business, or even just one of our developers, to go through when they're making those changes and hit the API and just be like, okay, given I have all of these inputs, make sure that the outputs match what the manual spreadsheet is, and then that's just gonna have to be the degree of testing we do at that time. Yeah. Because we have seeders and things like
(21:26):
that. But the problem with seeders is they're still a snapshot in time, where you've still got users of the system going and updating. Like, we've got a UI to go and update these base rates and these things, these variables that get put into the system, so that we don't have to make code changes to support those things. So
Jake (21:45):
Yeah. It is hard when, like, the test is specifically tied to the logic that you have written inside the code. So it's sort of like a validation test, you know what I mean? Where Yeah. You're not trying to test the framework, but at the same time, it's like, how do I check to make sure that these rules haven't changed? Well, I have to sort of reflect in my test the code that I've written. So Mhmm. In my form request, I have that these are the five validation rules that are running there. And so in my
(22:07):
test, I say, check that these rules are still present. So it's a spell check, you know what I mean? Yeah. But it's like, it's still there. If you end up having to change validation rules, you're gonna have to go change the test. I mean, that's sort of what you're saying with, like, the, you know, there are manual things per lender that must be checked, just based on, like, hey, they sent you an email and said, hey, by the way, no, we're no longer doing this type of loan. Mhmm.
(22:28):
Okay. Well, there's no automated way to say you could do that unless you have a configurable value for every single lender that, you know what I mean? You'd have to basically collapse all of that Mhmm. To say it's generic. It's genericized across the lender base, and the thing is, that's actually gonna end up being more work than it is to just write the business logic
(22:49):
rule and test it. You know? Yeah. And so, you just gotta, like, deal with some pragmatism there too. Like, we're gonna solve the problem in the simplest way we can without sort of, you know, turning our brains to mush making this stuff all the exact same across every single vendor. Yeah.
Michael (23:07):
Alright. Well, thank you for being a sounding board. I think that'll be the approach that at least we start with. Because, like, this is an older like I said, it's just kind of been sitting off to the side because it works, and we've had more important things to kind of focus on. But now it's like, okay, that thing is kind of raising some flags in certain, you know,
(23:30):
pen tests and audits and things like that. It's, how do we deal with that? Okay. We pick up all of the code that we need to make that specific functionality work, adapt the system that's using it to use this new stuff. And then at least once it's part of the main app, then it's gonna be tested. And we can start saying, okay, well, when you make changes here, you now need to write tests for it, because there's no more Wild West over there.
Jake (23:51):
Yep. Yep. Yep. Audits. Are you guys doing SOC 2?
Michael (23:56):
I think it's on the
cards. I think we were
Jake (24:01):
Are you guys doing
something else?
Michael (24:02):
Yeah. We do ISO 27001. Like, we're
Jake (24:05):
Oh, boy. That's a that's a harder thing even or even harder than SOC 2, I think.
Michael (24:08):
So two. No. No. No.
27,020 ISO 27,001 is the, like,
here's the list of stuff that wesay we're doing, whereas Sock
two is here's the list of stuffwe say we're doing, and here is
the proof that we're actuallydoing it.
Like, they can ask you at anytime We've talked
Jake (24:24):
We've talked to the
Michael (24:25):
auditors that can come in and say, like, okay, show me that you're doing this thing. And you have to be able to, like, show them that you're doing this thing. Whereas with 27001, you can kind of say, oh, yeah, that's like a known gap. Like, you can just say, you know, this is a known gap, and we'll
Jake (24:38):
look for It's an acceptable risk. Sure. Yeah. Okay. Interesting. I think we've typically called that, like, SOC 2 Type 1 and SOC 2 Type 2. So Type 1 is like, you know, yeah, we're gonna put all those things in place, but we may or may not be doing them, you know, and the SOC 2 Type 2 is after you've been audited on the things that you've put in place. And so Yeah. But boy, is it freaking expensive. It feels
(25:01):
like Mhmm. I feel like we need to change auditors at this point, because it's just every year they're bumping it up. It's like, guys, it's literally getting easier and easier for you every year. We're going through the same exact checks, the same exact things. We've done it in half the time it took last year, you know? We're doing the same stuff. And so but every year it's more expensive. So I'm like, yeah, maybe we just
Michael (25:19):
change it up. Unfortunately, yeah, you either need to find another vendor to do the audits. But in order to keep your certification current and, I would assume, keep with contractual obligations, or keep certain third parties that you work with working with you, then you've gotta keep your certifications up to date.
Jake (25:39):
Yeah. You do. But you can I mean, like, you basically own your controls. You know, with SOC 2, you say, like, here are the controls that we have in place, and then the auditor is basically just reading through your controls and checking. Saying, like, hey, here's the thing that you have in place to control this area, and then, like, are you doing it? Mhmm. Yeah, you know, GitHub has made it really nice. I mean, you know, you have the full history of everything, you know, and
(26:00):
then you just have to put rules in place for all your branches to be sure that only certain people can manage, you know, merge stuff, and then, you know, all the automated tests and everything have been super nice. Like Mhmm. You know, we're doing static analysis. We're doing unit tests. We're doing feature tests. And I can prove it. Okay, here's every pull request we've done for the last six months. You can see that every single one of them has been approved by a developer other than the person who wrote the code, you
(26:22):
know, and all that's able to be managed easily through GitHub branches and protection rules and things like that. So it's good stuff. There's tools out there now for people, like, if you're in a startup situation, I know that it feels like some of these startups have just sort of punted on this. And, like, Fathom Analytics, like, nope, we don't do SOC 2. Sorry.
Tuple, nope, we don't do SOC 2. It's not something we're
(26:44):
interested in. It's like, okay. And the funny thing is, it seems like they've been able to just skate by with it. They have these huge companies who are like, hey, we wanna use Tuple, and they're like, great. And they say, do you guys have SOC 2? And they say, nope. And they say, okay, we're gonna use you anyway. It's like, great. Like, it's just hilarious to me how many of them have been
able to just say, like, yeah, we don't do anything
Michael (27:03):
like this. You've gotta have this certification. Okay, well, I guess we're not gonna be able to take your business. Yep. Sorry. And Yep.
Jake (27:10):
And so then
Michael (27:11):
And like
Jake (27:11):
And they have like What they do have, though, is they have, like, a security page, like Tuple and Yeah. What do they They have a security page where they say, like, here are the controls that we have in place to make sure, you know, that we are being careful with the stuff, and here's the entire process of how our architecture works and all that. And it's like, I think then people look at it and say, oh, that's actually easier to read than a SOC 2 would be. Sure. Okay. Let's do that. We're good. You know? Yep. Yeah.
Michael (27:34):
Yep. Yeah. And it's good because, like, in the case of Fathom Analytics, their plans start at, what, $10 a month, $20 a month. Like, for $20 a month, I am not going to go through your arduous, you know, vendor onboarding process. Like, either you pay $20
Jake (27:52):
a month and you get what
Michael (27:53):
you get. Yeah. The procurement stuff. Either you pay $20 a month and you get what you get or, like, go somewhere else. That's fine. Right. So yes. Unlike Laravel Cloud, who is doing all of that process. Yeah. And, you know, you can't really skirt that one, can you?
Right.
Jake (28:11):
Right. So what I was gonna say, though, is there are companies, one called Sprinto. Mhmm. Which is actually pretty cool. And basically it's an express process to do SOC 2. Right. So they will help you get set up. So, like, that SOC 2 Type 1 I was talking about. Mhmm. They will say, like, hook up your, you know, architecture like, who's hosting your infrastructure? Yeah. AWS. Great. Okay. Here's what you need to do. Give us auditing permission, and we're gonna go audit all the rules and everything. Oh, yep. Looks like you need to have CloudWatch turned on for this. You need to make sure this bucket's encrypted. You need to do this and that. And you just go resolve the things. And then it says, okay, great. Do you have a policy for this? No. Do you wanna use ours?
(28:57):
Yes. Okay. Here's the policy. You know what I mean? Like, I mean, they make it, so, you know, who's your version control provider? GitLab, GitHub, which one? GitHub. Mhmm. Okay, great. Give us permission. Oh, looks like you don't have branch rules turned on for this. Go ahead and turn branch rules on. Looks like you don't have automated tests running for
(29:17):
this. You need to fix that. You need to turn on 2FA for all the people who have access to this. Here's this user who doesn't. Like, it just does all of that for you. And so Yeah. What used to be, you know, a $50,000 investment to get some company to come in and tell you what you need to do, and not even have them really do all that stuff. They would say basically to you, you need to run ScoutSuite on
(29:37):
your AWS stuff. Now, we're not gonna tell you how to do that. You just need to figure that out. No. Yeah. This company literally does it for you, and you're talking about maybe a $6,000 to $7,000 investment as opposed to 50. Yeah. Now, that only gets you the Type 1. They help you get set up, but then you have somebody come in and do the auditing of it, and, you know, they still partner with people who will
(29:57):
actually do it for a lot cheaper than these organizations who have been doing it the old school way forever. Because literally, you get a portal where they go in and say, yep, check, check, check, check. All the green boxes are good. You're done. Like, because you can prove it. They've done all the testing for you. And so, it's really good. Sprinto. Check that out if you're gonna do that SOC 2 stuff. And if your architecture sorry, not your architecture, if your
(30:23):
compute stuff is being used in one of the major providers. It just makes it really easy to check those boxes.
Michael (30:31):
Sweet.
Jake (30:31):
Yep. Indeed. We're at
thirty minutes, my friend.
Michael (30:34):
I think that's I think
that'll do. I think that
Jake (30:37):
will do.
Michael (30:37):
Good one.
Jake (30:39):
Absolutely. Alright, folks. This was episode one seventy two, I believe. Find show notes at northmeetssouth.audio/172. Hit us up on Bluesky or on Twitter at jacobbennett, at michaeldyrynda, or at northsouthaudio. And of course, if you like the show, please feel free to rate it up in your podcatcher of choice. Five stars would be absolutely incredible. Folks, we are both going to Laracon this
(31:00):
year. So if you have not yet bought your ticket, you should definitely do so. Save us a seat at the restaurant you go to. Shoot us a text. Hit us up, and we'll be sure to come join you. Love to see you there. Thank you.
Michael (31:11):
Great. See you all in two weeks. Also, this is episode one seventy one.
Jake (31:17):
One seventy one. Sorry, folks. Alright, everybody. Well, sounds good. See you later, everybody. Bye bye. Bye.