
August 21, 2025 33 mins

In this episode of Screaming in the Cloud, Corey Quinn talks with Jonathan Schneider, CEO of Moderne and author of books on Java microservices and automated code remediation. They explore why upgrading legacy systems is so hard, Schneider's journey from Netflix to building large-scale code transformation tools like OpenRewrite, and how major companies like Amazon, IBM, and Microsoft use it.

They also discuss AI in software development, cutting through the hype to show where it genuinely helps, and the human and technical challenges of modernization. The conversation offers a practical look at how AI and automation can boost productivity without replacing the need for expert oversight.



Show Highlights

(2:07) Book Writing and the Pain of Documentation

(4:03) Why Software Modernization Is So Hard

(6:53) Automating Software Modernization at Netflix

(8:07) Culture and Modernization: Netflix vs. Google vs. JP Morgan

(10:40) Social Engineering Problems in Software Modernization

(13:20) The Geometric Explosion of Software Complexity

(17:57) The Foundation for LLMs in Software Modernization

(21:16) AI Coding Assistants: Confidence, Fallibility, and Collaboration

(22:37) The Python 2 to 3 Migration: Lessons for Modernization

(27:56) The Human Element: Responsibility, Skepticism, and the Future of Work

Links

  1. Crying Out Cloud Podcast & Newsletter: https://www.wiz.io/crying-out-cloud
  2. Moderne (Jonathan Schneider's company): https://modern.ai
  3. LinkedIn (Jonathan Schneider): https://www.linkedin.com/in/jonathanschneider/




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
These were all the sort of basic primitives. And then, you know, at some point we said, well, recipes could also emit structured data in the form of tables, just rows and columns of data. And we would allow folks to run those over thousands or tens of thousands of these lossless semantic tree artifacts and extract data out. This wound up being the fruitful bed for LLMs eventually arriving is that

(00:25):
we had thousands of these recipes emitting data in various different forms. And if you could just expose, as tools, all of those thousands of recipes to a model and say, okay, I have a question for you about this business unit, the model could select the right recipe, deterministically run it on potentially hundreds of millions of lines of code, get the data table back, reason about

(00:46):
it, combine it with something else. And that's the sort of, I think, foundation for large language models to help with large-scale transformation and impact analysis. Welcome to Screaming in the Cloud. I'm Corey Quinn, and my guest today has been invited to the show because I've

(01:08):
been experiencing, this may shock you, some skepticism around a number of things in the industry, but we'll get into that. Jonathan Schneider is the CEO of Moderne. Uh, and before that, you've done a lot of things. Jonathan, first, thank you for joining me. Yeah. Thanks for having me here, Corey. Such a pleasure.

(01:28):
Crying Out Cloud is one of the few cloud security podcasts that's actually fun to listen to. Smart conversations, great guests, and zero fluff. If you haven't heard of it, it's a cloud and AI security podcast from Wiz, run by CloudSec pros, for CloudSec pros. I was actually one of the first guests on the show, and it's been amazing to watch it grow. Make sure to check them out at wiz.io/crying-out-cloud.

(01:53):
Uh, we, we always have to start with a book story, because honestly, I'm envious of those who can write a book. I just write basically 18 volumes of Twitter jokes over the years, but never actually sat down and put anything cohesive together. Uh, you were the author of SRE with Java Microservices and the co-author of Automated Code Remediation: How to Refactor and Secure the

(02:13):
Modern Software Supply Chain. So you are professionally depressed, I assume. I mean, I, as, like, most software engineers, I hate writing documentation. So somehow that translated into, you know, write a, a full-scale book instead. I, I honestly don't remember how that happened. A series of escalating poor life choices is my experience of it.

(02:35):
I think no one wants to write a book. Everyone wants to have written a book, and then you went and did it a second time. Yeah, a much, a much smaller one. That second one, you know, just the 35-pager, luckily, but, but, you know, still, um, it's always, uh, quite the effort. So one thing that I, I wanted to bring you in to talk about is that the core of what your company does, which is, I, I, please correct me if I'm wrong on this,

(02:58):
software rewrites, software modernization. Effectively, you were doing what Amazon Q Transform purports to do, uh, before everyone went AI-crazy. Yeah, it started for me almost 10 years ago now at Netflix, on the engineering tools team, where I was responsible for making people move forward, uh, in part, but they had that freedom and responsibility

(03:20):
culture, so I could tell 'em, you're not where you're supposed to be. And they would say, great, do it for me. Otherwise, I'm, I got other things to do. Uh, and so really that forced our team into trying to find ways to automate that change on their behalf. I never worked at quite that scale in production. I mean, I've consulted in there, in places like that, but the, that's a

(03:41):
very different experience, 'cause you're hyperfocused on a specific problem. But even at the scales that I've operated at, there was, there was never an intentional decision of, someone's gonna start out today and we're gonna write this in a language and framework that are 20 years old. So this stuff has always been extant for a while. It is, it has grown roots, it has worked its way into business processes, and a

(04:04):
bunch of things have weird dependencies, in some cases on bugs. People are not, uh, declining to modernize software stacks because they haven't heard that there's a new version out. It's because this stuff is painfully hard, because people and the organizations that they build are painfully hard. I, I'm curious, in your experience having gone through this at scale

(04:24):
with zeros on the end of it, what, what are the, what are the sticking points of this? Why don't people migrate? Is it more of a technological problem, or is it more of a people problem? Well, first I would start, and hopefully with a, a sympathetic viewpoint for the developer, which is, like, pretend I haven't written any software yet and I'm actually starting from today.

(04:45):
I look at all the latest available things, and I make the perfectly optimal choices for every part of my tech stack today, and I, I write this thing completely clean. Six months from now, those are no longer the optimal choices. Oh God, yes. The worst developer I ever met is me six weeks ago. It's awful. Like, what, what was this idiot thinking? You do git blame and it's you, and wow, we need not talk about that anymore.

(05:08):
But yeah, the past me was terrible at this. That's right. And, and always will be. Future you will be the, the next past you. So it's, it's, there's never an opportunity where we can say we're making the optimal choice and that optimal choice will continue to be right going forward. So, uh, I think that, paired with one other fact, which is just that the, the tools available to us have essentially industrialized software

(05:32):
production to the point where we can write net-new software super quickly using off-the-shelf and third-party open source components. We're expected to, because you have to ship, ship fast, and, you know, then what do you do when that stuff evolves at its own pace? So nobody's really been good at it, and I think the more

(05:53):
authorship, uh, automation that we've, that we've, uh, developed for ourselves, from IDE rule-based intention actions to now, you know, AI authorship, this, like, the time that we spend maintaining what we've previously written has continued to go up. I would agree. I, I think that there has been a, a, a shift and a proliferation, really, of

(06:16):
technical stacks and software choices. And as you say, even if you make the optimal selection of every piece of the stack, which incidentally is where, where some people tend to founder, they spend six months trying to figure out the best approach. Pick a direction and go; even a bad decision can be made to work. But, but there are so many different paths to go that it's a near certainty that whatever you have built, you're,

(06:37):
there, you're going to be one of a wide variety of different paths that you've picked. You've effectively become a unicorn pretty quickly, regardless of how mainstream each individual choice might be. That's right. Yep. That's just the nature of, of software development. I, I am curious, since, uh, you did bring up the, uh, Netflix, uh, freedom and responsibility culture.

(06:58):
Uh, one thing that has made me skeptical historically of Amazon Q's transform abilities, and, and many large companies that have taken a bite at this apple, is they, they train these things and build these things inside of a culture that has a very particular point of view that drives how software development is done. Uh, I, like, how many people have we met that have left large tech companies

(07:20):
to go found a startup, tried to build the exact same culture that they had at the large company, and just foundered on the rocks almost immediately? Because the culture shapes the company and the company shapes the culture. You, you can't cargo-cult it and expect success. How, how varied do you find that these modernization efforts are, based upon culture? I'm glad to say that, for my own story, I had a degree of indirection here.

(07:43):
I didn't go straight from Netflix to, to founding something. So I was at Netflix. I think that freedom and responsibility culture meant that Netflix in particular had far less self-similarity or consistency than, say, a Google, that has a very prescriptive standard for formatting and everything and the way they do things. And so I left Netflix. I went to Pivotal, VMware, was working with large

(08:06):
enterprise customers like JP Morgan, Fidelity, Home Depot, et cetera, working on an unrelated problem in continuous delivery, and saw them struggling with the same kind of problem of, like, migrations and modernization, like everybody does. And what struck me was that even though they're very different cultures, uh, JP Morgan much more strongly resembles Netflix than it does Google.

(08:30):
Um, Netflix's, uh, lack of consistency was by design or by culture, intentional. And JP Morgan's is just by the very sheer nature of the fact that they have 60,000 developers and 25 years of, of history and development on this. And so a solution that works well for, uh, dissimilar-by-design actually works well in the typical enterprise, which is probably closer

(08:53):
to Netflix than it is to Google. Yeah, a lot of it depends on constraints, too. Uh, JP Morgan is obviously highly reg... sorry, JPMorgan Chase, they're particular about the naming, people are. They're obviously highly regulated, and mistakes matter in a different context than they do when your basic entire business is streaming movies and also creating original content that

(09:14):
you then cancel just when it gets good. Right, right, right. Right. Yes. So there's, there is that question, I guess, of how this stuff evolves. But taking it a bit away from the culture side of it, how do you find that modernization differs between programming languages? I mean, I, I dunno if people are watching this on the video or listening to it, if we may, we take all kinds, but you're wearing a hat right now that says JVM,

(09:38):
so I'm, I'm just gonna speculate wildly that Java might be your first love, given that you did in fact write a book on it. It was one of my first loves. Yeah. I'm technically a Java Champion right now. Although, you know, I actually started in C++ and I hated Java for the first few years I worked on it. But, um, I actually think, uh... Stockholm syndrome can work miracles. It, it sure can.

(09:58):
It absolutely can. I, I don't know that the problems are, are that different. There's, you know, a lot of different engineering challenges. How statically typed is something, how dynamically typed is it, how accurate can a, a transformation be provably made to be? But in general, I think the problems are, um, the social engineering problems are harder than the, than the specifics of the transformation that's being made.

(10:22):
And those social engineering problems are, like, do I build a system that issues mass pull requests from a central team to all the product teams and expect that everybody's gonna merge them? Because they love it when, you know, random things show up in their, uh, in their PR queue. Or, uh, do product teams perceive that like unwelcome advice coming from an in-law,

(10:43):
and they're just looking for a reason to reject it, you know? And then they would prefer, instead, to have an experience where, you know, when they're about to undergo a large-scale transformation, they pull, or they initiate, the change and then merge it themselves. So, like, those are the things that I think are, are highly similar regardless of the tech stack or company. That's, uh, because people are people, uh, kind of everywhere.

(11:05):
Now you take the suite of Amazon Q Transform options, and they have a bunch of, of software modernization capabilities, but also getting people off of VMware due to, you know, extortion, as well as getting off of the mainframe, which, that last one is probably the thing I'm the most skeptical of. Companies have been trying to get off of the mainframe for 40 years. The problem is not that you can't recreate something that does the same

(11:26):
processing, it's that there are thousands of business processes that are critically dependent on that thing, and you can't migrate them one at a time in most cases. I am highly skeptical that just pour some AI on it is necessarily going to move that needle in any material fashion. I think that there's a, a, two different kinds of activities here. One is code authorship, that net-new authorship, uh, that's what the

(11:51):
copilots are doing, the Amazon Q is doing, et cetera. It's, it's really assisting in that respect. And then there's code maintenance, which is, I need to get this thing from one version of a framework to another. Maintenance can also include, I'm trying to consolidate one feature-flagging vendor to, or, two feature-flagging vendors to one. Um, but when I think, think of something like a COBOL to a modern stack, JVM

(12:15):
or .NET or whatever the case might be, I honestly see that less as, as a maintenance activity and more as an authorship activity, and you're, you're writing net-new software in a different stack and a different set of expectations and assumptions. Um, and so I'm skeptical, too. I don't, I don't think there's a magic wand. But to the extent that our authorship tools help us accelerate net-new development, those problems,

(12:39):
the cost of those problems goes down, I think, over time. Yeah, that, that does track and makes sense of how I tend to think about these things. But at the same time that the cost of these things goes down and the technology increases, it still feels like these applications that are decades old in some cases are still exploding geometrically with respect to complexity.

(13:01):
That's right.
Yeah.
Like, how do you outrun it all? Well, um, uh, to me there's not just one approach here, but I feel like, um, you know, for my own sake and my, where my focus is, is really trying to reclaim, uh, developer time in some area so that we can refocus that effort elsewhere.

(13:25):
And I think one thing I hear pretty consistently is that, because of that explosion in software under management right now, a developer is spending like 30 or 40% of their time just kind of babysitting applications and keeping, keeping the lights on. And that's something we need to, like, get rid of a bit, or, uh, minimize as much as possible, so that, you know, the next feature they're developing isn't

(13:47):
just a net-new feature, but is actually, you know, pulling some, like, old system into a more modern framework as well. That's just another activity that can go back onto their, uh, is something they can do. But that does track. The, I guess the scary part, too, is, having lived through some of these myself, where we know that we need to upgrade the thing, to break off the monolith, to master the wolf, et cetera, et cetera,

(14:11):
et cetera, and it feels like there's never time to focus on that, because you still have to ship features, but every feature you're doing feels like it's digging the technical debt hole deeper. It is. It is. Yeah. So I mean, that's, and this is what I mean, is, like, if we can take the assets that we have under management right now and, and, like, keep them moving forward, um,

(14:33):
then, um, we have, like, less drift and less, you know, um, complexity to deal with overall. It's an important part of, piece of, that puzzle, I think. As you said, you've been working on this for 10 years. Uh, gen AI really took off at the end of 2023, give or take, well, during 2023. And I'm curious to get your take on how that has evolved.

(14:55):
I mean, yes, we all have to tell a story on some level around that. Uh, your URL is moderne.ai, so clearly there's, there is some marketing element to this, but, but you're, but you're a reasonable person on this stuff and you go deeper than most do. I think a lot of what, what I've developed over the last several years, or our team has, has been, you know, accidentally leading towards this moment

(15:16):
where, um, we've got a set of tools that, uh, an LLM can take advantage of. So the first thing was, you know, when I'm looking at a code base, the text of the code is insufficient, and I think even the abstract syntax tree of the code is insufficient. So, things like tree-sitter, you know, that, I won't mention all the things

(15:37):
built on top of tree-sitter, but if it's just an abstract syntax tree, there's not enough information, often, for a model to latch onto to know how to make the right transformation. And the reason I started OpenRewrite in the very, at the very beginning, 10 years ago, was because the very first problem I was trying to, to solve at Netflix was moving from Blitz4j, an internal logging library, to

(15:59):
not Blitz4j. We were just trying to kill off something we regretted. And yet that logging library looked almost identical in syntax to SLF4J, any of the other ones. And so just looking at log.info, well, that looks exactly like log.info from another library. I couldn't, you know, narrowly identify where Blitz4j still

(16:19):
was, even in the environment. So I had to kind of go one level deeper, which is, what does the compiler know about it? And that is actually a really difficult thing to do. To just take text of code and parse it into an abstract syntax tree, you can use tree-sitter. To go one step further and actually exercise the compiler and do all the symbol solving, well, that actually means you have to exercise the compiler in some way.

(16:42):
Well, how is that done? What are the source sets? What version does it require? What build tools does it require? Like, this winds up being this, like, hugely complex decision matrix, to encounter an arbitrary repository and build out that LST. We built out that, that LST, or Lossless Semantic Tree, and then we started building these recipes, which could modify them.
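To make the logging example above concrete, here is a minimal sketch of the kind of type-aware matching an OpenRewrite recipe's visitor performs. The API shapes (JavaIsoVisitor, MethodMatcher, SearchResult) are real OpenRewrite, but the legacy logger's fully qualified name is a hypothetical stand-in for Blitz4j, since the raw text of log.info alone cannot distinguish it from SLF4J:

    import org.openrewrite.ExecutionContext;
    import org.openrewrite.java.JavaIsoVisitor;
    import org.openrewrite.java.MethodMatcher;
    import org.openrewrite.java.tree.J;
    import org.openrewrite.marker.SearchResult;

    // Flags log.info(..) calls only when the receiver's *type* resolves to the
    // legacy library. The source text is identical across logging libraries;
    // only the type attribution carried by the LST can tell them apart.
    public class FindLegacyLogCalls extends JavaIsoVisitor<ExecutionContext> {
        // Hypothetical fully qualified name standing in for the legacy logger.
        private static final MethodMatcher LEGACY_INFO =
                new MethodMatcher("com.example.legacylog.Logger info(..)");

        @Override
        public J.MethodInvocation visitMethodInvocation(J.MethodInvocation method,
                                                        ExecutionContext ctx) {
            J.MethodInvocation m = super.visitMethodInvocation(method, ctx);
            if (LEGACY_INFO.matches(m)) {
                m = SearchResult.found(m); // mark the match for reporting
            }
            return m;
        }
    }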
And those recipes stacked on other recipes to the point where, like, a Spring

(17:05):
Boot migration has 3,400 steps in it. And these were all the sort of basic primitives. And then, you know, at some point we said, well, recipes could also emit structured data in the form of tables, just rows and columns of data. And we would allow folks to run those over thousands or tens of thousands of these lossless semantic tree artifacts and extract data out.

(17:27):
This wound up being the fruitful bed for LLMs eventually arriving, is that we had thousands of these recipes emitting data in various different forms. And if you could just expose, as tools, all of those thousands of recipes to a model and say, okay, I have a question for you about this business unit, the model could select the right recipe, deterministically run it on potentially

(17:51):
hundreds of millions of lines of code, get the data table back, reason about it, combine it with something else. And that's the sort of, I think, foundation for large language models to help with large-scale transformation and impact analysis.
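A hedged sketch of the division of labor just described, with entirely invented interfaces (this is not Moderne's actual API): the model's only job is tool selection, while the recipe runs deterministically and returns rows and columns the model can then reason over:

    import java.util.List;
    import java.util.Map;

    // A deterministic recipe exposed as a "tool" a language model may call.
    interface RecipeTool {
        String name();                                // e.g. "FindFeatureFlagUsage" (invented)
        String description();                         // what the model reads when picking a tool
        List<Map<String, String>> run(String target); // the emitted data table, as rows
    }

    class ImpactAnalysisAgent {
        private final List<RecipeTool> tools;

        ImpactAnalysisAgent(List<RecipeTool> tools) {
            this.tools = tools;
        }

        // In a real system an LLM would choose the tool from its description;
        // this stand-in does a naive keyword match to keep the sketch runnable.
        List<Map<String, String>> answer(String question) {
            for (RecipeTool tool : tools) {
                if (question.toLowerCase().contains(tool.name().toLowerCase())) {
                    return tool.run(question); // deterministic run over the LSTs
                }
            }
            throw new IllegalArgumentException("No recipe covers this question yet");
        }
    }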
This episode is sponsored by my own company, The Duckbill Group. Having trouble with your AWS bill?

(18:12):
Perhaps it's time to renegotiate a contract with them. Maybe you're just wondering how to predict what's going on in the wide world of AWS. Well, that's where The Duckbill Group comes in to help. Remember, you can't duck the Duckbill Bill, which I am reliably informed by my business partner is absolutely not our motto. Uh, to give a, a somewhat simplified example, uh, it's easy to envision, 'cause

(18:36):
some of us have seen this, where we'll have code that winds up cranking on data and generating an artifact, and then it stashes that object into S3, because that is the de facto storage system of the cloud. Next, it then picks up that same object and then runs a different series of transformation objects on it. Now, from a code perspective, there is zero visibility into whether that

(18:56):
artifact being written to S3 is simply an inefficiency that can be optimized out, and just have it passed directly to that subroutine, or if there's some external process, potentially another business unit, that needs to touch that artifact for something, for reporting. Uh, quarterly earnings are a terrific source where a lot of this stuff sometimes winds up getting, uh, getting floated up, and it's, it is impossible, without having

(19:19):
conversations, in many cases, with people in other business units entirely, to, to get there. That's the stumbling block that I have seen historically.
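A minimal sketch of the pipeline shape Corey is describing, using the AWS SDK for Java v2; the bucket, key, and helper methods are invented for illustration. The point is that the round trip through S3 looks identical whether or not anyone else depends on the intermediate object:

    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    public class PipelineStage {
        private static final String BUCKET = "example-analytics"; // invented
        private static final String KEY = "stage1/output.json";   // invented

        public static void main(String[] args) {
            try (S3Client s3 = S3Client.create()) {
                // Stage 1: crank on data and stash the artifact in S3.
                s3.putObject(
                        PutObjectRequest.builder().bucket(BUCKET).key(KEY).build(),
                        RequestBody.fromString(crankOnData()));

                // Stage 2: immediately read the same object back and transform it.
                // Nothing here reveals whether the write is a removable inefficiency
                // or whether another business unit's report also reads this key.
                String artifact = s3.getObjectAsBytes(
                                GetObjectRequest.builder().bucket(BUCKET).key(KEY).build())
                        .asUtf8String();
                transform(artifact);
            }
        }

        private static String crankOnData() { return "{\"rows\":42}"; } // stand-in
        private static void transform(String json) { /* downstream processing */ }
    }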
I, is that the sort of thing that you're, that you wind up having to think about when you're doing these things, or am I contextualizing this from a very different layer?
I do think of this process of large-scale transformation, impact analysis,

(19:39):
very much like what you're describing, as, like, a, a data warehouse, ETL-type thing. Which is, you know, I need to take a source of data, which is the text of the code, and enrich it into something that's everything the compiler knows, all the dependencies and everything else, from that point. And once I have that data, that's a computationally expensive thing to do. Once I have that, there's a lot of different applications of that same data source.

(20:03):
I, I should point out that I have been skeptical of AI in a number of ways for a while now, and I wanna be clear that when I say skeptical, I do mean I'm middle-of-the-road on it. I see its value. I'm not one of those, it's just a way to kill trees and it's a dumb Markov chain generator. No, that is absurd. I'm also not quite on the fence of, this changes everything and every business

(20:25):
application should have AI baked into it. I am, I am very middle-of-the-road on it, and the problem that I see as I look through all of this is, it, it feels like it's being used to paper over a bunch of these problems where you have to talk to folks. I've used a lot of AI coding assistants, and I see where these things tend to fall short and fall down.

(20:45):
Uh, a big one is that they, they seem incapable of saying, I don't know, we need to go get additional data. Instead, they are, uh, they're extraordinarily confident and authoritative, and also wrong. I say this as a white dude who has two podcasts. I am conversant with the being-authoritatively-wrong point of view here; it's sort of my people's culture. So it's, it's one of those, how do you, how do you meet in the middle on that?

(21:09):
How do you get the value without going too far into the realm of absurdity? Well, I, I do think that these things need to be, they need to collaborate together. And so, uh, so it is with Amazon Q Code Transformation, that's, that's working to provide migrations for, uh, Java and other things. You see that, that Amazon Q Code Transformation actually uses

(21:30):
OpenRewrite, a rule-based or deterministic system, behind it to actually make a lot of those changes. An, an open source tool that, incidentally, you were the founder of, if I'm not mistaken. That's right, yeah. And that, that our technology is really based on as well. And it's not just Amazon Q Code Transformation, as we've seen. Uh, you know, IBM Watson Migration Assistant, built on top of OpenRewrite; Broadcom Application Advisor, built on top of OpenRewrite; Microsoft GitHub

(21:54):
Copilot AI Migration Assistant, I think, is the current name, also built on that. And they, they're better together. I mean, it's, you know, that tool runs, uh, OpenRewrite to make a bunch of deterministic changes and then follows that up with further verification steps. That's, that is the, the golden path, I think, is trying to find ways in which non-determinism is helpful, and to stitch together systems that are

(22:17):
deterministic at their core as well. I hate to sound like an overwhelming cynic on this, but it's one of the things I'm best at. Uh, the Python 2 to Python 3 migration, which beyond Unicode had no other real discernible reason, uh, took a decade, in no small part because the single biggest breaking change was the way that print statements were then handled as a function.

(22:37):
And you could get around that by importing from the __future__ package, uh, which affected a lot of, uh, two-to-three migration stuff. But it still took a decade for the system tools around the Red Hat ecosystem, for example, just to run package management, to be rewritten to take advantage of this. And that was, and please correct me if I'm wrong on this, a relatively trivial, straightforward uplift from Python 2 to Python 3.

(23:00):
There was just a lot of it. Going, looking at that migration, to anything that's even slightly more complicated than that, it feels like, past a certain point of scale, an impossibility. You clearly feel differently, given that you've built a successful company and an open source project around this. Yeah. I think actually one of the characteristics that was difficult about that Python 2 to 3 migration is, there's things like that that you

(23:21):
described that were fairly simple changes and that were done at the language level. But alongside that came a host of other library changes that were made, not really because of Python 2 to 3, but because there's an opportunity: they're breaking things, we'll break things, and everybody, everybody, let's just break things, right? And so a lot of people got stuck on not just the language-level changes, but all those library changes that happened at the same time.

(23:43):
And that's an interesting problem, because it's kind of an unknown-scoped problem, right? Well, how, how much breakage you have in your libraries very much depends on the libraries that you're using. Right now, um, so I mentioned earlier, like, the Spring Boot three migration, the two-to-three migration OpenRewrite recipe right now has 3,400 steps.
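For a sense of what "steps" means here: an OpenRewrite migration is a recipe composed of smaller recipes. The real Spring Boot upgrade is assembled largely declaratively in the rewrite-spring module; the toy composite below only shows the shape, with hypothetical type names, and assumes a recent OpenRewrite version where composition is done via getRecipeList():

    import java.util.Arrays;
    import java.util.List;
    import org.openrewrite.Recipe;
    import org.openrewrite.java.ChangeType;

    // Illustrative composite recipe: each step is itself a deterministic recipe.
    public class MiniMigration extends Recipe {
        @Override
        public String getDisplayName() {
            return "Mini migration";
        }

        @Override
        public String getDescription() {
            return "Two small steps standing in for a 3,400-step migration.";
        }

        @Override
        public List<Recipe> getRecipeList() {
            return Arrays.asList(
                    // Hypothetical type moves; real migrations chain thousands of these.
                    new ChangeType("com.example.old.Client", "com.example.next.Client", true),
                    new ChangeType("com.example.old.Config", "com.example.next.Config", true));
        }
    }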

(24:04):
I promise there's some part of two to three that we don't cover yet. I don't know what that is. Uh, but somebody will encounter it. And for them... In production, most likely. In, yeah, they're gonna be trying to run the recipe. They're gonna find something that, oh, you don't cover Camel or something. Great, you know, like, uh, and so, and that's fine, you know, and we will encounter that. And probably, if they use Camel in one place, they use it in a bunch of places.

(24:27):
And so it'll be worth it then to build out that additional recipe that deals with that Camel migration, and then, you know, boom, and then, you know, that, and then that's sort of contributed back for the benefit of everybody else. I think what makes this approachable or tractable is really that we're all sort of building on the same substrate of third-party and open source stuff,

(24:50):
from JP Morgan all the way down to a tiny, like, you know, 15-person engineering team, Moderne, like... Oh, oh, we, we can't overemphasize just how much open source has changed everything. Back in the Bell Labs days, seventies and eighties, it was, everyone had to basically build their own primitives from the ground up. It, yeah, it was all completely bespoke. Yeah.

(25:10):
Now it's almost become a trope, like, so, implement quicksort on a whiteboard. Like, why would I ever need to do that? Okay. Uh, I, I guess another, another angle on my skepticism here is, I work with AWS bills, and the AWS billing ecosystem is vast. Uh, but, but the billing space is a bounded problem space. Unlike programming languages, which are Turing-complete, you can

(25:33):
build anything your heart desires. Uh, even in the billing space, I just came back from FinOps X in San Diego, and none of the vendors are really making a strong AI play. And I'm not surprised by this, because I have done a number of experiments with LLMs on AWS billing artifacts, and they consistently make the same types of errors that seem relatively intractable.

(25:55):
Uh, go ahead and make this optimization. That optimization is dangerous without a little more context fed into it. So I guess my somewhat sophomoric perspective has been, if you, if you can't solve these things with AI in a bounded problem space, how can you begin to tackle them in these open-ended problem spaces? I, I'm, I'm with you, actually. And there's a counterpoint to this, which is that I think

(26:16):
that the, all the large foundation models are somewhat undifferentiated. I mean, they kind of take pole position at any given time, but... Right. Two weeks later, the whole ecosystem is different. Yeah. They kind of all roughly have the same capabilities, and there are some, like we said, there are very useful things they can do. There's some utility there. You know, there are places where non-determinism is useful,

(26:38):
and to the extent that you can apply that non-determinism, then great. You know, like, that, that, that's, that's fantastic. But I'm not in a position where I think a Spring Boot two-to-three upgrade or Python 2-to-3 upgrade applied to 5 billion lines of code is going to be non-deterministically acceptable, either now or six months from now or a year from now.

(27:01):
And maybe I'll be a fool and wrong, but I don't think so. Oh yeah. Honestly, this is, this whole AI revolution has turned my entire understanding of how computers work on, its, on their heads. Like, short of a rand function, you, you knew what the output of a given stanza of code was going to be, given a certain input. Now it kind of depends. It does. It really does.

(27:23):
Yeah, the problem I run into is, no matter how clever I have been able to be, and the people I've worked with, who are far smarter than I am, have been able to pull off, uh, the, these insights, there's always migration challenges and things breaking in production, just because of edge and corner cases that we simply hadn't considered. The difference now is, instead, because there's a culture in any healthy workplace about not throwing Steven under the bus,

(27:45):
well, throwing the robot under the bus is a very different proposition. I told you AI was crap, says the half of your team that's AI-skeptic, and, it's not the AI, it's your fault, says the, uh, people who are big into, into, AI, uh, AI business daddy logic. And the reality is, probably, these things are complicated. Uh, neither computer nor man nor beast are going to be able to

(28:06):
catch all of these things in advance. That is why we have jobs. I've noticed this just in even managing our team. You know, I, I catch people when they say, you know, but Junie said this, or, but, you know... Uh, don't pass through to me what your assistant said. You, you're the responsible party when you tell me something. So you, you, you have a source.

(28:27):
You check that source, you verify the integrity of the source, then you pass it to me. Right? You can outsource the work, but not the responsibility. A number of lawyers are finding this, uh, uh, uh, to be the case when they're, they're not checking what paralegals have done, or at least blaming the paralegals for it, I'm sure. Exactly. Always has been. Always has, but it's, I, I also do worry that a lot of the skepticism around this, uh, even

(28:47):
my own, my own aspect of it, comes from a conscious or unconscious level of defensiveness, where I'm worried this thing is going to take my job away. So the first thing I do, just to rationalize it to myself, is point out the things I'm good at and that this thing isn't good at, at the moment. Well, that, that's why I'll always have a job. Conversely, I don't think computers are gonna take jobs away from all

(29:08):
of us in the foreseeable future. The answer is probably a middle ground, in a similar passion, fashion, the way the Industrial Revolution sort of did a whole lot of, uh, of a number on people who were in, independent artisans. So there's a, it's an evolutional process, and I, I just worry that I am being too defensive, even unconsciously.

(29:29):
I, I think that sometimes, too. I, I really do feel like this is just a continuum of, of productivity improvement that's been underfoot for a long time, with different technologies. And I mean, I remember the very first Eclipse release, and the very first Eclipse release is when they were providing, you know, uh, rules-based refactorings inside the IDE.

(29:49):
And I remember being super excited every two or three months when they dropped another, and just looking at the release notes and seeing all the new things. And what did that do? It made me faster at writing that new code. And, you know, here we've got another thing that has very different characteristics. It's like, it's almost good at all the things that IDE-based refactorings weren't good at, but I still guide it.

(30:11):
And, you know, unlike a, yeah, I think a, a CEO said, uh, or a CTO said, IDEs will be obsolete by the end of the year. I don't believe this at all. I don't believe this at all. I think we're still driving them. I am skeptical in the extreme on a lot of that. I, I, I, because again, these, let's be honest here, these people have a thing

(30:32):
they need to sell, and they have billions and billions and billions of other people's money riding on the outcome. Yeah. That would shape my thinking in a bunch of ways, both subtle and gross, too. I try and take the, a more neutral stance on this, but who knows? I think it's not just neutral, it's a mature stance, and it's

(30:53):
one that's, uh, it's, it's a lot of experience going behind it. I, I think that you're right. I don't think we're, we're anywhere close to being obsolete. No, and I, and I also, frankly, I say this coming from an operations background, a sysadmin-turned-SRE type, where
I have been through enough cycles ofseeing today's magical technology become

(31:13):
tomorrow's legacy shit that I have tosupport that I am, I have a natural
skepticism built into almost everyaspect of this just based on history.
If nothing else,
you know what Vibe Coding reminds me of?
It reminds me of model drivenarchitecture about 25 years ago.
The like, you know, justproduce a UML diagram and don't
worry like the, the codal.

(31:36):
I'll ship the, I'll just generate the rest of the application. Or it reminds me of, uh, behavior-driven development, when we said, oh, we'll just put it in business people's hands. They write the tests, and, you know, don't, you know, we don't want engineers writing the tests. You want business... Like, I feel like we've seen this play out many, many, many times in various forms, and maybe this time's different. I, I don't think so.

(31:57):
And to be honest, I, I like to say that, well, computers used to be deterministic, but let's, let's be honest with ourselves, we'd long ago crossed the threshold where no individual person can hold the entirety of what even a simple function is doing in their head. They, they are putting their trust in the magic think box.
That's right.
Yes.
That's absolutely right.
So I really wanna thank you for taking the time to speak with me.

(32:17):
If people, if people wanna go and learn more, where's the best place for them to find you? I think it's, it's easy to find me on LinkedIn these days, or, uh, you know, go find me on Moderne, M-O-D-E-R-N-E, dot ai. Um, either place. Happy to, always, uh, send me a DM. Happy to answer questions. And we'll of course put that into the show notes. Thank you so much for your time, I appreciate it.

(32:39):
Okay, thank you, Corey. Jonathan Schneider, CEO at Moderne. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that maybe

(33:00):
you can find an AI system to transform into something halfway literate.