All Episodes

December 29, 2024 • 60 mins
KCAA: Inside Analysis with Eric Kavanagh on Sun, 29 Dec, 2024
Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Maximum pain relief. I use it all the time because
I'm always active, playing golf, working out, fixing up my
place right now, and I put it on in the
evening around my back and it gives me maximum pain relief.

Speaker 2 (00:15):
It's amazing.

Speaker 3 (00:16):
You have to get it if you're in pain.

Speaker 1 (00:18):
To find out more about ice bod, go to icebodactive
dot com and get yours today. This week they are
having a flash sale where you can save twenty five
percent off by using the promo code KCAA, plus they
have free shipping. Go to icebodactive dot com. You're
gonna be pain free and you're gonna love it.

Speaker 3 (00:37):
KCAA Radio, Loma Linda, where no listener is ever
left behind.

Speaker 2 (00:43):
The information economy has arrived. The world is teeming
with innovation as new business models reinvent every industry.
Inside Analysis is your source of information and insight about
how to make the most of this exciting new era.
Learn more at InsideAnalysis dot com.
And now, here's your host, Eric Kavanagh.

Speaker 4 (01:10):
All right, folks, hello, and welcome back once again to
the only nationally syndicated radio show all about the information economy.
It's called Inside Analysis. Yours truly, Eric Kavanagh, is here,
and I'm very pleased to have Arun Varadarajan. He is
the founder and chief commercial officer for a company called Ascendion,
and we're going to talk about a new development paradigm.

(01:31):
So, just to give some context for that: as anyone
who has worked with professional services firms knows, they typically
make their money by the hour, and if you want
to do something new, they're like, oh, we need more people,
more money, got to jack up the price, so we'll
make more money and give you new stuff. And then
there's the other side of the equation, which is that
a lot of times if you want to go faster

(01:52):
or for less money, they're like, well, it's going to
cost you in terms of quality. So you have to
give up something: either quality or price, money or ideas.
They'll even say don't start your project, as maybe you
can't afford it. And I think Arun has come up
with a pretty clever way to address that, and it's
an end-to-end engineering platform.

Speaker 5 (02:12):
They figured out.

Speaker 4 (02:13):
There are all these friction points one hundred and fifty
I think is what I heard him say, friction points
from idea to production. So what are those and how
do you solve them while we live in a very
interesting world these days where you can have teams work
together on projects without holding each other up. I give
an example all the time about how Google blindsided Microsoft

(02:34):
with Google Docs allowing multiple people to work on the
same document at the same time, which was an absolutely
brilliant innovation. Saves tremendous amounts of time and effort. You
don't have to throw things over the wall anymore. You
can also be looking at the same document, typing, editing,
and the technology tracks who did what. So you have
all these wonderful guardrails and safeguards built into the technology,

(02:57):
and that's what you want. And I think they have
something like that for a development platform. But with that,
Arun, I'll throw it over to you. Tell us
a bit about Ascendion, and what are these friction points?

Speaker 6 (03:07):
Absolutely. So, Eric, the Ascendion journey started about
four years ago. We had a group of us
who came from the overall engineering and software industry,
and we had realized that there were fundamentally three issues
that clients were facing. There was a crisis

(03:27):
of trust, there was a crisis of speed, and what
I call a crisis of capital.

Speaker 5 (03:34):
The crisis of trust was you.

Speaker 6 (03:36):
Know what, these guys will show up at my door
and say, hey, don't worry, I'll get this done for you.

Speaker 5 (03:41):
This will be the cost.

Speaker 6 (03:42):
I've got these accelerators, I've got this really good talent.

Speaker 5 (03:45):
I can get this work done for you. And then
they show up at the door and say that.

Speaker 6 (03:49):
And then when the reality, how do I say, manifests,
they find that, oh my God, I'm being
charged more than what I was told.

Speaker 5 (03:58):
You know, I'm not.

Speaker 6 (03:59):
I don't have full transparency into how things are being done.
I don't know what your engineers are doing. It's a
black box. I have no clue. The second is a
crisis of speed. Everybody comes and says, yeah, we will
accelerate you. We will drive more value to you.
And then, lo and behold, when you say I want to accelerate,
the first thing the service provider says is you need

(04:21):
to add another hundred people, right, and that costs.

Speaker 5 (04:24):
And then listen.

Speaker 6 (04:26):
One thing I can tell you is, when we add more
people into a software engineering project, it actually slows you down.

Speaker 5 (04:33):
It doesn't.

Speaker 6 (04:34):
It doesn't increase your speed, because when you add more humans
into the software process, which is very nebulous,
not super well defined, you will find that there are more
errors creeping in and you're busy fixing those errors. Right?
So it's actually a negative
move to add more people to projects. The third is the

(04:54):
crisis of capital. Now, what is the crisis of capital?
Over the years, because of the first two issues, clients
have been building software with a lot of technical problems,
and we call it technical.

Speaker 5 (05:09):
Debt, a lot of debt.

Speaker 6 (05:12):
So what happens is your capital is locked in this
legacy old code, and you're spending a lot of money
just maintaining it, and you don't have enough money to
do new things and innovation.

Speaker 5 (05:25):
And I call that the crisis of capital.

Speaker 6 (05:27):
So all this was creating, in my opinion, a logjam,
and a Gordian knot, as I call it. You know what
the Gordian knot is, yes? And I said, it's time to
untangle this. So the thesis statement for Ascendion was, let's
go back and understand what is causing this problem. And
to me, I went back to the manufacturing world, and
I saw how lean manufacturing transformed the entire manufacturing world.

Speaker 5 (05:51):
Right.

Speaker 6 (05:52):
Otherwise, earlier, if you had to do a line change,
or a die change or a product change.

Speaker 5 (05:56):
It took you days to do it.

Speaker 6 (05:59):
You had issues with product quality, you had a whole
bunch of things. With lean manufacturing, automation and, to
a large extent, AI fundamentally transformed manufacturing. And I said,
if manufacturing can do it, why are we lagging behind?
Why can't we do the same thing with software. So
we went back to the drawing board and really looked
at the entire engineering value chain, right from ideation through

(06:23):
to production, through to even post-production, and said, what
is causing this friction and these problems?
And to a large extent, I'm sorry to say,
it is the humans.

Speaker 3 (06:35):
Right.

Speaker 6 (06:36):
If you give four humans one requirement, saying, hey,
I want to do this very simple requirement, it could
be a two-to-three-line description of what you want the software to do,
and you give it to four different engineers, you will
get four different code

Speaker 5 (06:55):
Bases for the requirement.

Speaker 6 (06:57):
Sure, you will also get ten different bugs that each
of them will come up with. And I said,
this is ridiculous, it's time for us to change. So
that's when Ascendion was started, with the notion that we
are going to build a platform. The platform is fundamentally
going to have the ability to support the different actors

(07:20):
in the engineering value chain, use machine learning and AI,
and drive a high degree of repeatability and standardization.
See, if you look at the software process today,
it is all in documents. With the platform, what we're
doing is we're taking the process and instantiating it into

(07:40):
a physical platform that will drive the process and ensure
that there is standardization and scale. That is what we
have and thanks to all the AI work that has
been going on over the last two to three years,
we've been able to take this to a different level
and I'm happy to talk more about it.

Speaker 3 (08:01):
Yeah.

Speaker 4 (08:01):
Well, so standard components, that's one thing you can focus on, right,
making solutions modular. Maybe walk through what are some
of the foundational components in the platform and maybe explain
how it is. I'm just guessing here that developers can
work on different parts of the process without causing each
other trouble and without delaying each other, right, because that's

(08:24):
one of the huge challenges is when one group has
to wait for some other group to finish something and
then they get delayed. So now these guys are off
the ranch basically just twiddling their thumbs.

Speaker 3 (08:36):
You don't want that.

Speaker 4 (08:36):
You want everybody working at the same time without disrupting
each other. What are some of the component parts or
buckets if you will, of development that you've isolated.

Speaker 6 (08:47):
So we have gone even more radical, right? Our
radical thinking is, why do we even need humans
to actually do any of this? So if you look at our industry,
we do different things for our clients. Sometimes we'll
come in and build a new platform for a client

(09:08):
where we have to take it from idea to production.

Speaker 5 (09:11):
As you mentioned.

Speaker 6 (09:11):
Earlier. Or all the client will say is, I have
an old platform, I want you to modernize it. You know,
the old platform could be written in something like COBOL
or PL/I, which is a forty-, fifty-year-old language,
and they want to move it to a modern construct.
So there are several such asks from our clients, and
my view was, why can't I get AI to do

(09:34):
all of this? So today, if you look at our platform,
our platform is what is known as an agentic
AI platform. So I literally have an agent that can
do anything you want it to do, and I can configure
this agent to say, okay, agent, read this. So a
requirement in our world is called a user story, basically

(09:55):
saying, this is the story that you will present to
the user. Right? This is how the user will
use the software.

Speaker 5 (10:04):
So the user story is a very important input.

Speaker 6 (10:07):
Now I have an agent that will read the user
story and create the code. I have an agent that
will read the user story and create the testing that
needs to be done on the code and execute the testing.

Speaker 5 (10:22):
Right.

Speaker 6 (10:23):
So with this, what we're fundamentally doing, Eric, is this:
now, when you write a user story, if I'm a product manager, my job is
to write a user story.

Speaker 5 (10:33):
That's my role to write user stories.

Speaker 6 (10:35):
But the funny thing is, again, just like
I mentioned earlier about developers or engineers, if I give
somebody an input to write a user story, and the
input is typically

Speaker 5 (10:46):
Called an epic.

Speaker 6 (10:47):
So if I give you an epic and I say,
please write user stories for this epic, and if I
give it to four different product managers, I will get
the user stories written in four different ways and will
not cover all of the needs of that epic. Because
the human can only think so much. Right, But what
I'm doing with AI is I tell the AI A

(11:10):
here's the epic, generate user stories. And I have live
examples where I've done this time-and-motion
study and comparison between what the human output is and
what the AI output is. Right? So when I give
an epic to one of
my AI agents and say generate user stories, it may
generate fourteen, fifteen user stories that cover all of the

(11:33):
aspects of what the system needs to do in the
first iteration. Whereas if I give that to a human,
I sometimes see them coming up with seven or eight.
They're not able to think about the edge conditions,
the limits. You know, when you think about software, you've
got to think about all the eventualities and all the
possibilities that you may have to encounter as a user,

(11:56):
and many times, as human minds, we have these limitations. Right?
That gets taken away, and then I can do this
at scale. Instead of, let's say I have tons of epics
and I need to attack all these epics, and I need to add
more product managers, here I just add more agents,

(12:18):
and I still need product managers, because I will have
them eyeball what the agent creates; I'm not taking the human away
from the loop. But I'm fundamentally saying, hey,
if I had ten product managers doing the job, maybe
I just need

Speaker 7 (12:31):
Two, mm-hmm.

Speaker 5 (12:34):
These product managers are really.

Speaker 6 (12:36):
Improving the agent's ability and efficacy to do
this even better.
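The epic-to-user-stories workflow with the product manager kept as a reviewer can be sketched as follows. `story_agent()` is a stub for the LLM, and the aspect list and function names are invented for illustration.

```python
# Illustrative sketch of the epic-to-user-stories step with the
# product manager kept in the loop as a reviewer, not an author.
# story_agent() stands in for the LLM; all names are invented.

def story_agent(epic: str) -> list[str]:
    # Stub: a real agent would prompt a model with the epic text
    # and enumerate edge conditions a human might miss.
    aspects = ["happy path", "invalid input", "timeout", "audit logging"]
    return [f"{epic}: handle {a}" for a in aspects]

def review(stories: list[str], approve) -> list[str]:
    # The product manager eyeballs each generated story rather
    # than writing stories from scratch.
    return [s for s in stories if approve(s)]

approved = review(story_agent("Checkout flow"),
                  approve=lambda s: "timeout" not in s)  # PM rejects one
```

Scaling up means adding agents, not product managers: the `approve` callback is where the remaining human effort concentrates.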

Speaker 4 (12:43):
So, if you don't mind my asking, the agents
that you've designed, what language did you

Speaker 3 (12:49):
Use to design them?

Speaker 5 (12:50):
How do they run?

Speaker 4 (12:51):
Do they run in containers? Is this a containerized environment?
Tell me a bit about that.

Speaker 6 (12:56):
So our platform is completely containerized. Okay? We follow all the
modern methods. It's a platform. So our platform allows you
to create agents, and I can create an agent for
different use cases. So somebody may come to me, Like
one of my clients came to me the other day

(13:16):
and said, listen, I've got these Perl scripts. Now, Perl
is again one of the old scripting languages that nobody
uses anymore, right? So he said, I've got Perl scripts.
I really want to convert them into Java. And he said,
I'm trying to get my engineers to do it. It's
taking way too much time. And I've got tons of

(13:37):
Perl scripts. I've got these four hundred applications running in
my data center and I want to move them to
the cloud, and I can't really use Perl scripts if
I move into the cloud. I want to move
to the new language, Java. In the old days, what
we would have done is we would have tried to
use these code converters that would take the Perl scripts
and convert them into whatever language. That is not working for

(14:00):
us anymore, because we want to move to microservices
and containerized, component-based applications. What we're doing is we
have changed the process. So I have an agent, and,
interestingly, I put two of my principal engineers
with our platform, and we were able to get this
done in weeks for our client, who had actually taken months

(14:20):
to try and do this. So what we did was
we had what is known as a reverse-engineering agent
that came in and read the Perl scripts and understood them.
And these Perl scripts can be very messy. They can
have routines, I mean, I don't know how
technical I can get here, but fundamentally,
when you have these scripts, they call each other,

(14:43):
there's a lot of linkages here and there. Right? So
we actually put an agent that went and read all
of this and generated what we call a simple
Ascendion language, that's what we call it, which is just
English, that describes the Perl scripts' logic.

(15:04):
And then I take that, and then I get another
agent to come in and convert that language. And I
go and tell the client, does this logic make sense
to you? Right? Forget about the Perl scripts, none of
us knows how to read them. But does the logic
of what is getting done in the Perl script make

Speaker 5 (15:22):
Sense to you?

Speaker 6 (15:23):
He says, yeah, this part makes sense. This one I don't
even think I use. This part looks good. So I'm
able to also say, you know what, I don't even
need to waste money converting these Perl scripts, because they

Speaker 5 (15:36):
Are no longer used.

Speaker 6 (15:39):
So I now have a document that is super efficient,
that I reverse-engineered, so I can now forward-engineer it
whichever way you want. So I can take this document
and convert it again into what I call user stories,
which is, as I said, the requirement input
for development. And once I create the user stories, I

(16:00):
can have an agent that converts one user story
into Java. I can have one agent that can convert
it into C# or whatever you want, moving
completely into a microservices architecture. And while I'm doing that,
scripts and I can automate the testing too. So with

(16:23):
what is happening, Eric, is, my teams and I,
we are completely reimagining how

Speaker 5 (16:29):
Work gets done, right? There's no need for rework now.
The biggest, I'll

Speaker 6 (16:35):
Tell you, thirty percent of project costs is rework, because
the developer didn't understand the user story. The user story
was not written correctly because the product manager did not
understand the user needs and did not elaborate the scenarios
well. The testers did not test the software correctly.

(16:59):
These are all the pitfalls we are eliminating by
running these agents and bringing in a high degree of standardization
and scale.
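The modernization flow Arun just walked through can be sketched end to end: reverse-engineer each Perl script into a plain-English description, let the client mark which logic is still used, and forward-engineer only those into the target language. Both agent calls are stubs here, and every name and file in the sketch is illustrative.

```python
# Sketch of the Perl modernization flow described above. The two
# agent functions are stubs; the file names and still_used set are
# invented for illustration.

def reverse_engineer(script: str) -> str:
    # Stub for the reverse-engineering agent reading a Perl script.
    return f"plain-English logic of {script}"

def forward_engineer(description: str, target: str = "Java") -> str:
    # Stub for the forward-engineering agent emitting a microservice.
    return f"{target} microservice from: {description}"

scripts = ["billing.pl", "reports.pl", "legacy_cleanup.pl"]
still_used = {"billing.pl", "reports.pl"}   # the client's answer

descriptions = {s: reverse_engineer(s) for s in scripts}
services = [forward_engineer(descriptions[s])
            for s in scripts if s in still_used]
# legacy_cleanup.pl is never converted, so no money is wasted on it
```

The client review sits between the two agent passes, which is what lets dead scripts drop out before any conversion cost is incurred.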

Speaker 4 (17:11):
That's very interesting. You know, I understand conceptually what you're
talking about. Did you use GenAI for this reverse engineering?
Is that how you loaded a Perl script and said, hey,
what is this thing actually doing? Was that a GenAI application?
We got one minute left in this segment. Go ahead.

Speaker 6 (17:28):
A combination of GenAI and other AI. So we use
machine learning, we use, you know, deep learning, and we
use GenAI, depending on the type of problems we're

Speaker 5 (17:38):
Trying to solve.

Speaker 6 (17:39):
With GenAI, we've been able to take this to the
next level.

Speaker 4 (17:42):
Mm hmm, well I can imagine. I mean I remember
playing around with it whenever it came out a year
and a half ago or so, and immediately drawing the
conclusion that if it can write in French and German
and Spanish and English, it can probably write in
COBOL and C# and C++ and Java
and these other languages. And guess what, it can. You know,

(18:03):
my understanding, having researched this a good bit, is that
it'll get you eighty percent of the way there typically,
and then the last twenty percent is the last mile:
you have to do fine-tuning, manually checking things.

Speaker 5 (18:14):
You do want to make.

Speaker 4 (18:15):
Sure that, as you've already suggested, you have a human
being monitoring things, making sure that the outputs are accurate.
But the beautiful thing about software that works is that
it works, and if it doesn't work, then it doesn't work.
And you know very clearly, in a binary fashion, is
this accomplishing the task or not. And if it's a no,
you have to go back to the drawing board. But
this is very interesting stuff. Well, don't touch that dial, folks. We'll

(18:37):
be right back talking with Ascendion about a new way
of doing software. It's a lot faster and probably a
lot more efficient. We'll get into the details in our
next segment.

Speaker 3 (18:46):
We'll be right back.

Speaker 4 (18:47):
You're listening to Inside Analysis.

Speaker 2 (18:55):
Welcome back to Inside Analysis. Here's your host, Eric Kavanagh.

Speaker 4 (19:03):
All right, folks, back here on Inside Analysis, talking to
Arun Varadarajan. He is the CCO and founder of a
company called Ascendion. It's just like it sounds: ascend, with
i-o-n at the end. And they have developed a
platform for software development that is agentic. They
have all sorts of AI agents out there.

Speaker 7 (19:20):
Now.

Speaker 4 (19:21):
This is the talk of the town in Silicon Valley
and around the world quite frankly, because if a machine
can do a job better than a human, let the
machine do the job. I mean, humans make mistakes. The
machines make mistakes too; GenAI, we've talked about, famously makes mistakes.
It hallucinates things, so you have to do all sorts
of work to train it and to ground it. Basically,

(19:44):
but that's not really what we're talking about here. With
this agentic stuff, though, you still want to monitor what
they do and guide them. And frankly, this is my
big question about agentic AI: how do we monitor
their behavior and correct their behavior when they're wrong? How do
we orchestrate their behavior? Because you've got a bunch of
little guys out there doing stuff. Now, do they overlap?

Speaker 3 (20:05):
Is it log.

Speaker 4 (20:06):
Files that gets spun out of these things? How do
you actually know what they've done to where you can
correct them or optimize what they're doing?

Speaker 5 (20:14):
Amazing, amazing question, unbelievable question.

Speaker 6 (20:17):
So, first of all, let me explain to you what
an agent architecture is.

Speaker 5 (20:24):
Okay, yeah, complete, Right.

Speaker 6 (20:28):
So, in our platform, we can
build agents for anything under the sun. Right? But we
decided as a company that we're going to focus on
building this agentic platform around one value chain, which is
software engineering, because we said we want to be super
specialized in this area.

Speaker 8 (20:46):
Mm hm.

Speaker 6 (20:47):
So if you really look at an agent, an agent
has got five elements to it. First element is it
has what is known as a prompt template. Okay? That's
where you tell the agent: what is your goal?
What is your purpose? What am I expecting you to do?
How should you operate? How should you

(21:07):
how should you write the code? How should you write
the user story? I'm telling it a lot of things, right,
So I've started defining the framework around which it needs
to operate. The second element of the agent architecture is
the model. In our platform, I can literally use any model.
I can use Anthropic, I can use, you know,

(21:29):
Azure OpenAI, I can use Llama
from Meta. We can use different models, and we have
our own viewpoints on what models work best for what
type of use case. So our platform allows customers to say, hey,
I've got a relationship with AWS, so they can use
AWS's Bedrock service to choose the model, but we help

(21:52):
them with model selection. That is the second element of the
agent. The third element of the
agent is what I call its memory and its knowledge.

Speaker 5 (22:04):
Right.

Speaker 6 (22:05):
That is where I feed the agent with all of
the knowledge it needs. For example, when I'm writing code
for a client, I may be writing code on
authentication. My client may already have an authentication
framework saying, go get the entitlements from this server,

(22:26):
go get the access rights from this IDP or whatever.
I can teach all of that to the agent so
that the agent does not go and generate generic stuff.
I can also teach the agent all of
the technical standards, the kind of language that they use
when they write something, so it has this, what we call

(22:48):
in-context learning, or its knowledge base. That is
the third part of the agent. The fourth part of
the agent is where we define guardrails, because that's very important.
If you're writing, for example, an application that's using things
like Social Security numbers, et cetera, you want to make

(23:10):
sure that the application does not expose the Social Security number, right?
It still maintains the format, Social Security is, you know,
three-two-four, so it maintains the format, but I
don't want it to be exposed. Or you may have
certain ethical standards. You may say listen, when I design,
I don't want to think about race, or when I

(23:31):
think about design, I want to be inclusive. All of
those things can be put in the guardrails, and
we use a guardrail framework. And the fifth is every
agent can be associated with tools, because I may have
an agent that needs to use a tool to get
something done, right? It may have to use a testing tool,
or it may have to use a web-scraping tool. So
this is the architecture of an agent. Now what we

(23:54):
do is, when I am
building an agent, we actually have an agent that helps
me build an agent.
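The five elements Arun lists, prompt template, model, memory and knowledge, guardrails, and tools, can be sketched as a plain data structure. The field names and example values are illustrative, not the platform's actual schema.

```python
# The five elements of an agent, sketched as a data structure.
# Field names and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Agent:
    prompt_template: str                                 # 1. goal, purpose, operating rules
    model: str                                           # 2. the chosen foundation model
    knowledge: list[str] = field(default_factory=list)   # 3. memory / in-context knowledge
    guardrails: list[str] = field(default_factory=list)  # 4. e.g. never expose SSNs
    tools: list[str] = field(default_factory=list)       # 5. testing, web scraping, etc.

coder = Agent(
    prompt_template="You write Java microservices from user stories.",
    model="claude-3",                 # illustrative model id
    knowledge=["client auth framework", "client coding standards"],
    guardrails=["mask Social Security numbers in all output"],
    tools=["unit-test runner"],
)
```

Swapping the `model` field is the only change needed to move an agent between providers, which matches the point about customers choosing models through their existing cloud relationships.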

Speaker 3 (24:03):
Nice.

Speaker 6 (24:04):
And we are also starting to use genetic algorithms to
really improve the efficacy of an agent for a given
use case.

Speaker 4 (24:13):
And what was that again? You use what, genetic algorithms,
or evolutionary algorithms?

Speaker 6 (24:18):
Yeah, yeah, So what we are doing with genetic algorithms
is if I have a use case and I have
two, let's say, agent candidates, I input them
into my GA algorithm, and it will generate children.

Speaker 5 (24:35):
And you know how GA.

Speaker 6 (24:36):
I'm presuming that you
and your audience know how a genetic algorithm works. But it works
just like human evolution, right? It creates multiple
strains of the agents, and multiple versions of
the agents, multiple generations and children and whatnot. And what
we do is we have what is known as a

(24:59):
fitment function, right, or a fitness function that we have
defined for that use case, and the fitness function will
decide which of those children go to the next generation
literally like, you know, evolution, and we let that method
come up with a high-quality agent candidate for a

(25:20):
given use case.
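A toy version of that genetic-algorithm step can make the mechanics concrete: a population of agent (prompt) candidates produces mutated children each generation, and a fitness function decides who survives. The `fitness()` and `mutate()` bodies here are stand-ins for the real per-use-case evaluation; everything is illustrative.

```python
# Toy genetic-algorithm loop over prompt candidates. fitness() is a
# stub for the real use-case evaluation; all values are illustrative.
import random

random.seed(0)  # deterministic for the sketch

def fitness(prompt: str) -> int:
    # Stub: reward prompts that mention testing.
    return prompt.count("test")

def mutate(prompt: str) -> str:
    # Children are random variations of a parent prompt.
    return prompt + random.choice([" and test it", " and document it"])

population = ["write code", "write code and test it"]
for _ in range(5):  # generations
    children = [mutate(p) for p in population for _ in range(3)]
    # Selection: only the fittest candidates reach the next generation.
    population = sorted(population + children, key=fitness, reverse=True)[:2]

best = population[0]
```

The fitness function is where the "for a given use case" part lives: change it, and the same loop breeds a different kind of agent.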

Speaker 4 (25:22):
Okay. So, just to give some context
to the listeners, what you're talking about is kind of
what they have in the database world of
a query optimizer, right. A query optimizer figures out how
to do the job better, faster, more accurately, for example,
pulling data from various systems. There

(25:42):
are all kinds of different machine learning algorithms out there,
so you can use different means to
determine how efficient this is, and whether it could be more efficient.
What you're saying is that, basically, with these genetic algorithms, you
have a sort of a fleet of them. You use them,
see which ones work better than others. The ones that
work best, you continue that one, you don't continue the others,
and thus you are incrementally improving these individual agents and

(26:08):
the algorithms that they use.

Speaker 5 (26:09):
Is that right? Correct.

Speaker 6 (26:11):
And this is at the beginning, when I'm building an agent, right?
So right off the bat, I have a
very high propensity for accuracy in the agent's capability
to do something. We are
starting with good-quality agents through our process. Then we
also run a whole range of analytics. So every time

(26:36):
you know, when the agent does something, Let's say the
agent writes a piece of code for a developer and
the developer looks at it and says I like it
and clicks says like that goes back into my platform
and I'm capturing it. So we call it answer relevance.
So if an engineer says, test this code for me,
or optimize this code for me or whatever, and if

(27:00):
the engineer likes the output of
the agent, then that goes back as analytics. And I'm
constantly monitoring the analytics of these agents to.

Speaker 5 (27:10):
Improve either the prompt or.

Speaker 6 (27:13):
Give it more learning and improve its knowledge base, or
fine-tune the guardrails, or create or implement new tools.
And this is an evolving process that we follow, right?
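The "answer relevance" loop just described, record every like or dislike on an agent's output, and use the running score to flag which agents need prompt or knowledge-base tuning, can be sketched briefly. All names here are illustrative.

```python
# Sketch of the answer-relevance feedback loop: engineer votes are
# recorded per agent, and the running score drives tuning decisions.
# Names are invented for illustration.
from collections import defaultdict

feedback: dict[str, list[int]] = defaultdict(list)

def record(agent_name: str, liked: bool) -> None:
    # Called when an engineer clicks like/dislike on an output.
    feedback[agent_name].append(1 if liked else 0)

def answer_relevance(agent_name: str) -> float:
    votes = feedback[agent_name]
    return sum(votes) / len(votes) if votes else 0.0

record("code-agent", True)
record("code-agent", True)
record("code-agent", False)
score = answer_relevance("code-agent")  # 2 likes out of 3 votes
```

An agent whose score drifts down is the one whose prompt, knowledge base, guardrails, or tools get revisited first.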

Speaker 4 (27:27):
And that's largely through user provided feedback. Is that correct
or is some of the feedback also agentic?

Speaker 5 (27:35):
So very good point.

Speaker 6 (27:37):
What we're also doing is, literally,
we're mimicking the work world
in our platform. So we have two types of agents.
We have a manager agent and we have co-worker agents.
So the manager agent is actually managing all the co-workers,
and the manager may decide that

(28:00):
the same piece of work I'm going to give to
two agents and I'm going to see who comes back
with a better result. Right, So we can even have
two agents doing the same piece of work and find
out which is more effective and take that output too.
So that is something that we do, and we call them
workflows in our platform. So I can literally

(28:22):
create a workflow where I can concatenate multiple agents and
put a master agent on top and say, okay, go
build this piece of software. So one so the first
agent may just take the epic, understand the epic, generate
the user stories. The manager agent may say,

(28:43):
I don't like that. I'm going to get another agent
to come and do the same and see which one
is better. And then once he or she is happy
with the work that was done in terms of building
the user stories, she can invoke, and we can use
she and he interchangeably because these are, you know, agents,
so it can go and then kick off

(29:04):
the next set of agents. And if it finds
that the work needs more agents, our platform allows
the manager agent to spawn more agents so that
more agents can attack the work.

Speaker 5 (29:18):
So this is how the whole thing.

Speaker 6 (29:20):
So we foresee, at some point in time, as
we improve the efficacy of solving different problems for different clients,
we could potentially get to a true factory model,
like you see in a process chemical plant, where the process
is running and there's one guy or two people sitting
in a room with a cockpit,

(29:44):
tuning this and tuning that, and looking at
gauges and letting the factory run the process.
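The manager/co-worker pattern described here, the manager hands the same task to two agents and keeps whichever result scores higher, can be sketched simply. The `score()` function stands in for however the manager judges quality, and all names are invented for the sketch.

```python
# Sketch of the manager / co-worker agent pattern: fan one task out
# to two workers, keep the better result. score() is a stub; all
# names are illustrative.

def worker_a(task: str) -> str:
    return f"draft answer to {task}"

def worker_b(task: str) -> str:
    return f"detailed, reviewed answer to {task}"

def score(result: str) -> int:
    # Stub quality judgment: here, longer output wins.
    return len(result)

def manager(task: str, workers) -> str:
    # Fan the task out, then keep the best-scoring result.
    results = [w(task) for w in workers]
    return max(results, key=score)

best = manager("generate user stories", [worker_a, worker_b])
```

Spawning more agents when the work demands it is just extending the `workers` list; the selection step at the end stays the same.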

Speaker 4 (29:51):
Sure, sure, no, Well you're reminding me of these transformers, right,
the whole concept of the transformer. And I actually have
a thread going with Jürgen Schmidhuber to hopefully come
on my show someday, and we were talking about this,
and it's the same kind of thing where you have
an array of agents or automatons let's call them, and
there's one that manages them and thinks through. It's like, huh,

(30:14):
because, like, if you think about how these large language
models work, from what I understand, it's still next character,
basically next token: what is most likely, based upon the
prompt, my training, et cetera, which is a very
linear process. But with the transformers, now you've got a
sort of supervisor who can kind of look over you
and go, hmm, I think I'll go with you this time
and you that time, which I think is very clever.
And that's, I think, also reflected in Mistral, right, with

(30:36):
this mixture of experts. That whole architecture, yes, is
very similar, in that you have some agent that is
basically orchestrating the behavior of a group of agents,
and they all work as a team. Is that about right?

Speaker 5 (30:51):
Exactly, exactly. Now.

Speaker 6 (30:53):
The interesting thing is, now think about what I told
you in the initial part, and our thesis
statement of why we started doing this. If my client
comes to me and says, I gave you three files,
three COBOL files, to move to Java, I'm now going
to give you one thousand files, all I need to
do is run parallel workflows.
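Scaling from three files to a thousand by running parallel workflows can be sketched with a worker pool. `convert()` is a stub for one complete reverse/forward-engineering workflow run, and the file names are invented for illustration.

```python
# Sketch of fanning one conversion workflow out over many files.
# convert() stands in for a full reverse/forward-engineering run;
# with real agent calls, a pool like this supplies the parallelism.
from concurrent.futures import ThreadPoolExecutor

def convert(cobol_file: str) -> str:
    # Stand-in for one complete COBOL-to-Java workflow.
    return cobol_file.replace(".cbl", ".java")

files = [f"module_{i}.cbl" for i in range(1000)]
with ThreadPoolExecutor(max_workers=16) as pool:
    converted = list(pool.map(convert, files))
```

The headcount stays fixed while the worker count grows, which is the contrast with the "add a hundred people" model from earlier in the conversation.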

Speaker 3 (31:16):
Yeah, right, sure, and I do.

Speaker 4 (31:18):
That's what these GPUs are made for, right?

Speaker 5 (31:21):
Right, And I don't need to bring in more humans.

Speaker 6 (31:23):
I just need to have, right, you know, maybe
five, ten humans who are looking at the process.
So, let's say you're a senior software engineer,
what does this mean to you, because your role is
going to change. What I'm telling my senior software engineers

(31:44):
and my principal engineers is, earlier you would work on
writing the code for your client and optimizing the code,
right.

Speaker 5 (31:54):
Right, that's what you were doing.

Speaker 6 (31:56):
Now what I want you to do is to make
these agents more effective in doing that for your clients.

Speaker 4 (32:02):
Right, Well, I'll throw a theory at you. I think
you're going to agree with this because I think a
lot about AI and the impact on workflow, on day
to day lives, on how we actually build things in
particular software, and as a general rule, I think we're
at a massive inflection point. It's an inversion really, because
in the past we push the machines like think a lawnmower.

(32:24):
I'm pushing it along the way, using my strength and
guiding it myself. Well now, and in the near future,
it's turned around where the machines are sort of pushing us,
but we're still guiding them. We're still trying to shepherd them.
Say no, not this, or yes, more of that. So
but the point is that the impetus is now coming
from the machines. They're creating these things, and we're just

(32:47):
sort of shepherding and guiding them, almost like a gardener
does in pruning a tree.

Speaker 5 (32:51):
What do you think? Totally.

Speaker 9 (32:53):
So?

Speaker 6 (32:53):
In fact, I'll give you a very interesting example. There
is a role in the software engineering world called
a tester or a quality engineer. His or her job
is to test something. So
think about now, what's going to happen to her role.
So I've told her now I want you to stop
thinking yourself as a quality engineer. I want you to

(33:15):
think yourself as a quality systems engineer. What does that mean?
You are going to run the quality system. I'm not
going to do the quality engineering. You're going to run
the quality system. So you should know how to engineer
your quality system so that you can ensure and if

(33:35):
you know, you know, I still go back to manufacturing,
because these guys have cracked it.

Speaker 5 (33:40):
We've got to go back to manufacturing.

Speaker 6 (33:42):
They have this notion called total quality management where they
go to your supplier's supplier, all the way through to
the raw material to make sure that every part of
the inbound value chain is bringing in the quality for
you to build the right kind of ball bearings.

Speaker 5 (33:58):
Or whatever you're doing.

Speaker 10 (33:59):
Right.

Speaker 6 (33:59):
Sure, so with this quality system. So today, most of
my testers are only trying to test what the developers
have done. They're not testing, to a large extent, what the product

Speaker 5 (34:09):
managers are writing, or how the users are thinking.

Speaker 4 (34:12):
Oh interesting, now this is this is very important because
what you're speaking to is the criticality of understanding the
whole process end to end. And we talk a lot
about this on these various shows. Things like error propagation.
If it gets all the way upstream, it just spreads
to the entire environment. And now it's like tainted water.
For example, if you get some poison in the water,

(34:34):
if it's in the distribution system, then it's everywhere and
you've got a huge problem. Well, if you stop it
at the source, that's when you really make some headway. Hey,
but folks, don't touch that dial.

Speaker 10 (34:43):
Be right back.

Speaker 4 (34:44):
You're listening to the Inside Analysis.

Speaker 2 (34:49):
Welcome back to Inside Analysis. Here's your host, Eric Kavanagh.

Speaker 4 (35:00):
All right, folks, back here on a fascinating Inside Analysis episode.
We're talking to Arun Baradarajohn. He is the chief
commercial officer and founder, and they have built an agentic
platform that allows their clients to leverage the power of
agentic AI. What is that? The little AI agents are
little semi-autonomous mini applications that can do various things.

(35:22):
And I'll throw this over to you. What I've heard
from lots of companies who are doing something similar
to this. Variant, for example, has automatons. They have like
I think twenty of them now and they plan to
have another twenty soon. And what their CEO told me
is that they want each agent to do one thing
very well, because I had asked them, do you want
the agents to learn to do multiple things? And he

(35:43):
basically said, not really unless you're talking about the orchestrating agent,
sort of the manager agent that has the bigger picture
in mind. And if all this stuff is declarative in nature,
that's great, right, because you have an end state you
want the agents to achieve. The manager is watching to
see how that gets done, is instructing the individual agents

(36:03):
to do their thing.

Speaker 3 (36:04):
Go get this.

Speaker 4 (36:05):
Data, you process this data, you check the data, you
double check the data.

Speaker 3 (36:08):
You push it to production.

Speaker 4 (36:09):
All these different agents get their instructions right, and you
can parallelize that stuff like for go get that data,
spin up one hundred agents to go pull in a
big large amount of data and then start analyzing it,
acting on it, pushing it to production, et cetera. That's
all declarative, right, And then again my question is how
do you know, how does the real person manage that stuff?

(36:32):
What kind of levers can they pull to change what's happening?
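
The declarative flow described here, where a person sets the end state and a manager runs the fetch, process, check, and push steps to reach it, can be sketched like this. The step functions and state keys are illustrative assumptions, not the platform under discussion:

```python
# Sketch of a declarative pipeline: a person declares the end state,
# the manager runs the worker steps to reach it. The step functions
# and state keys are illustrative assumptions.

def fetch_data(state):
    state["data"] = [1, 2, 3]

def process_data(state):
    state["processed"] = [x * 2 for x in state["data"]]

def check_data(state):
    state["validated"] = all(x % 2 == 0 for x in state["processed"])

def push_to_production(state):
    state["deployed"] = state["validated"]

PIPELINE = [fetch_data, process_data, check_data, push_to_production]

def run(goal=lambda s: s.get("deployed", False)):
    """Manager: dispatch each worker step, then verify the declared goal."""
    state = {}
    for step in PIPELINE:
        step(state)
    assert goal(state), "declared end state not reached"
    return state
```

The lever a person pulls in this shape is the goal predicate and the step list, not the individual computations.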

Speaker 5 (36:36):
So to me, actually, I don't fully agree with that notion.

Speaker 4 (36:40):
Okay, that's fine.

Speaker 6 (36:44):
I actually think while I agree with that, you need
to have an agent kind of focused on an area.

Speaker 5 (36:52):
I wouldn't use the word task, but rather a space.

Speaker 6 (36:56):
You want to use the generative capabilities of AI today
because I can tell you all we want to do
is to create some boundaries and say, listen, thou shall
operate in this space and let me give you all
the knowledge you need to be effective in this space.
So we expect actually our agents to bring in a

(37:16):
high degree of creativity and I'll show it. I'll show
it to you when I show you the platform. Because
what's happening is when my manager agent tells an agent, hey,
write the code, and she comes back with the
code and says, here's my code, I'm ready
for you to move to the next step, he says, no,

(37:37):
I don't like some of this.

Speaker 5 (37:39):
Change it.

Speaker 6 (37:40):
You have not addressed some of these things, and it
goes and explores that and improves it.
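
The review loop just described, a manager agent rejecting a worker's draft and sending it back with feedback until it passes, can be sketched as follows. The worker and the acceptance check are hypothetical stand-ins for LLM calls:

```python
def worker(task: str, feedback: str = "") -> str:
    """Stand-in for an LLM coding agent; folds review feedback into the draft."""
    draft = f"code for {task}"
    if feedback:
        draft += f" [revised: {feedback}]"
    return draft

def manager(task: str, max_rounds: int = 3) -> str:
    """Review loop: reject drafts and send feedback until the draft passes."""
    feedback = ""
    for _ in range(max_rounds):
        draft = worker(task, feedback)
        if "error handling" in draft:  # hypothetical acceptance check
            return draft
        feedback = "add error handling"
    return draft
```

The bounded round count mirrors the boundaries the manager agent draws so the loop cannot run away.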

Speaker 7 (37:47):
So interesting.

Speaker 5 (37:48):
So what we are.

Speaker 6 (37:49):
Thinking is that these agents need to use reasoning because
there are decisions that need to be made. Should I
write it this way versus that way? So we want
the reasoning abilities, which is what agentic AI does. It
really leverages the reasoning abilities, because you

(38:09):
need that.

Speaker 7 (38:10):
Mm hmm.

Speaker 4 (38:12):
I mean that's what humans do best, right, I mean,
right now, that's our I think our leg up on
the AI agent's at the moment is that we really
can think through you look at this story about was
it Apple's new chip? They have a new quantum chip.
I think it was, that solved some problem like a
million times faster than it ever would have

(38:34):
been solved before. And it's because you have sort of
multiple layers of things, right. I mean, I've given the
example in the past of Carl Sagan when I was
a kid, blew my mind because he talked about two
dimensional characters and a two dimensional world and how they
can't see past each other. But if someone came along
and picked one of them up into the third dimension,
now they can look down and see everything. And that
is huge. I mean, think about getting through a labyrinth.

(38:56):
If you can get up in the air and look
down at the labyrinth, now you can figure out exactly
how to get out of there, whereas before it would
have been a very painful trial and error type environment.
And I think what you're hinting at here, or even
openly articulating, is that you know, if you have the
right array of agents and the right architecture, then they
can all sort of check and balance each other and
much more strategically solve problems. Is that right?

Speaker 5 (39:18):
Very true?

Speaker 6 (39:19):
We have, in fact, the peer
agents talking to each other already, and the manager agent
comes in and just kind of draws some boundaries and says, hey, guys,
you know, let's not go crazy, right, stay within this box.
If I find that you're not giving me the output
that I want, I want you to go back and

(39:40):
generate again. Now the thing is, in
fact, when I sit down with my team, we
are not trying to solve the simple problems that you
know these RPA type of companies are doing right, because
robotic process automation is very deterministic in go fetch data,

(40:02):
my agents can do that, but I'm not really I
want to solve the tougher problems. So for example, if
you take testing, I'll go back to testing as one of the
starting points. I will go to
the development team and say, hey, guys, what are
you going to be building in the next round? They
call it sprints. You know this agile world. By the way,

(40:23):
Agile is also going to go away with all of this.
I will tell you about that at another point. It's going to die.

Speaker 5 (40:30):
Now.

Speaker 6 (40:31):
Now what's going to happen is I'm going to go
as a test manager or whatever and ask the development
team what are you going to build in the next round?
Because why is that important? Because I need to now
figure out what should I test? Where should I test? Right,
it's important because I may be building some things that

(40:52):
are new, I may be changing things that are already
in existence.

Speaker 5 (40:57):
Right, so I'm getting it.

Speaker 6 (41:00):
I actually have an agent now where I can get
your sprint plan. I can get, you know, what
are you planning to do? I can get your schedule
that you have. I can get a whole bunch of things,
and then the agent is able to generate saying this

(41:21):
is the testing you need to do. That's not like
going and fetching a piece of data. That is the
reasoning power.
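
The sprint-driven test planning described here, deriving what to test from what the team says it will build or change, can be sketched with simple rules. The sprint-item shape and the two rules are illustrative assumptions, a stand-in for the agent's reasoning:

```python
# Sketch of sprint-driven test planning: derive what to test from what
# the development team says it will build or change next. The sprint
# item shape and the two rules are illustrative assumptions.

def plan_tests(sprint_items):
    plan = []
    for item in sprint_items:
        if item["kind"] == "new":
            plan.append(f"write new test suite for {item['name']}")
        else:  # a change to something already in existence
            plan.append(f"run regression tests around {item['name']}")
    return plan
```

A reasoning agent would weigh far more inputs, schedules, dependencies, risk, but the shape is the same: plan output driven by declared upcoming changes, not by fetched data alone.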

Speaker 7 (41:30):
Right.

Speaker 6 (41:31):
I'm saying over time that things like design architecture. So
when I used to when we used to build systems,
there's a lot of time we used to spend around
architecture and design because there are a lot of decisions
that need to be made that have implications on various aspects.
It could be performance, it could be, you know, user experience,
it could be a hundred things. And for the human mind

(41:53):
to really do trade-offs beyond four or
five dimensions is not possible, right. I think
some of us are lateral thinkers, but
we may miss some of these things. So those areas
where you really need to think about design and
really come up with what works best.

Speaker 5 (42:14):
I think these agents are going to do better than us.

Speaker 4 (42:16):
Wow. No, I can say that because when
I think about how computer systems are working now. I mean,
first of all, we have just wild, crazy innovation in
all directions. I mean these deep learning modules like you
see Gemini and Claude and all these various things. I
try to explain to people there is no limit to

(42:38):
the number of permutations for how these things can be built,
and they can even be dynamic and change over time
and sort of readjust you know, dynamically adjusted software defined
software development. That's kind of what it falls down to, right,
And that's what you're what you're getting at. And because
so whenever we talk next, I want to get into
the details of the reasoning of these things and how

(43:01):
you score and manage the scores of all the agents
in certain environments for certain tasks. Because if you are
incrementally inching closer to more and more efficient design, that's
very very very interesting. I mean, right now, if you
I could just make a blanket statement and I'll give
you ninety seconds to comment done, and if you think

(43:22):
about the compute that happens in any large organization. I
mean VMware came along and optimized that to a large extent, right,
optimize the use of CPU and things of this nature,
which is important, but all in all, if you think
about all the unnecessary crunching of data and processes that
are not generating value, not even needed, we're probably at
like twenty percent efficiency, I think. And so if you

(43:45):
get this right, you can save Like we're talking trillions
of dollars at scale, we've got a minute, thirty seconds.
What are your closing thoughts?

Speaker 5 (43:52):
A one hundred percent?

Speaker 6 (43:53):
Let me tell you the first market I want to
disrupt is, there's a low-hanging fruit. And I
know I'm going to piss off some of my peers
in the industry, which I'm fine doing, because that's what
we're here for. They are making oodles of money and literally,
you know, holding the clients to ransom around testing.

Speaker 5 (44:15):
You will be amazed.

Speaker 6 (44:17):
These companies have thousands of people testing software every
day, and some of these companies, I mean, I don't
want to name them because I'm not here to necessarily
shame them, but they really have to reflect, some of
these large firms, and you know the names. I don't
need to tell you. They make billions of dollars in testing.

(44:38):
And you know how they do the testing? They
just put butts on seats, butts on seats.

Speaker 5 (44:46):
That's all going to go.

Speaker 6 (44:48):
And let me tell you with this model, I think
in fact, when I started Ascendion, my notion to
the team was, I think CIOs are spending, let's say
CTOs are spending, one hundred dollars. I don't think they
should spend more than fifteen dollars.

Speaker 4 (45:08):
Yeah, I'm with you. I mean, I totally see this,
and we're going to pick this conversation up at a
future show. I am sure, wo folks, look these folks
up online. Ascending in I think is ascendant. I'm pretty
sure about that. They certainly have the right idea because
you can optimize the freaking daylights out of what is
being done in the world of computing these days. I'm

(45:29):
talking absolutely massive gobsmacking savings if you do it right,
and it'll be vastly more efficient.

Speaker 5 (45:35):
We'll talk to you next time.

Speaker 4 (45:36):
Folks, you've been listening to Inside Analysis.

Speaker 3 (45:37):
We supply the words, you paint the picture. KCAA.

Speaker 11 (45:43):
In these times, sometimes you have to take away the stress
or dress to impress. Lux Transportation has just the ticket
for you and your needs to relax, kick your feet
up in style and make some new fun memories. Looking for
a hassle-free, comfortable ride? Experience the ultimate luxury transportation
in a sleek, luxurious black-on-black SUV. Private, discreet
rides, airport runs, sightseeing, proms, weddings and romantic date nights

(46:08):
are all ahead in your future with Lux Transportation. Karaoke?
Disability friendly? No problem. We're also senior citizen friendly too.
Call or text mister Holland to take away the
stress at nine five one three nine nine five
four two five. That's nine five one three nine nine
five four two five. Lux Transportation, when you need
the absolute best.

Speaker 3 (46:29):
Are you looking for a good union job? The Inland
Empires fourteen thousand members strong Teamsters Local nineteen thirty two
has opened a training center to get working people trained
and placed in open positions in public service, clerical work
and in jobs in the logistics industry. This is a
new opportunity to advance your career and raise standards across

(46:53):
the region. Visit nineteen thirty two Training Center dot org
to enroll today. That's nineteen thirty two Training Center dot org.


Speaker 12 (47:55):
This important, time sensitive message is brought to you by
this station's sponsor, George Letzfield Associates, who has important Medicare
information for all current and future Medicare recipients about some
big changes happening. Medicare Clarified is a nonprofit consumer
service organization.

Speaker 13 (48:15):
It's more important than ever to review your Medicare plan
for twenty twenty five from October fifteenth through December seventh
to find out if you're in the right plan for you.
People are calling nine five one seven six nine zero
zero zero five nine five one seven six nine zero
zero zero five A popular and local Medicare plan is improving.

(48:39):
Others are raising copays and adding deductibles, biggest changes in
the Medicare drug program in fifteen years.

Speaker 12 (48:47):
We thank George Letzfield and Letsfield Insurance for their generous
support of this radio station.

Speaker 14 (48:55):
Hi, I'm Lanny Swardlow, and I'm back on KCAA
ten fifty AM and Express one oh six point five
FM every Tuesday at eight pm. My show is beyond
common sense. It's Lanny Sense, featuring me, Lanny Swardlow, KCAA's
resident gay, Jewish, liberal, pot-smoking, race-mixing, left-handed atheist,

(49:17):
an evangelical fundamentalist Christian nationalist's worst nightmare, with subjects that
no one else will touch in quite the same way.
Every Tuesday at eight pm on Express one oh six point
five FM, the Legacy ten fifty AM, and live streaming
on KCAA radio dot com.

Speaker 15 (49:38):
KCAA Radio has openings for one hour talk shows. If
you want to host a radio show, now is the time.
Make KCAA your flagship station. Our rates are affordable and
our services are second to none. We broadcast to a
population of five million people plus. We stream and podcast
on all major online audio and video systems. If you've
been thinking about broadcasting a weekly radio program on real

(50:01):
radio plus the internet, contact our ceo at two eight
one five nine nine ninety eight hundred two eight one
five nine nine ninety eight hundred. You can skype your
show from your home to our Redlands, California studio, where
our live producers and engineers are ready to work with
you personally. A radio program on KCAA is the perfect
work from home avocation in these stressful times. Just type

(50:24):
KCAA radio dot com into your browser to learn more
about hosting a show on the best station in the nation,
or call our CEO for details at two eight one five
nine nine ninety eight hundred.

Speaker 16 (50:37):
What does it take to take on Alzheimer's? Awareness that
nearly two thirds of those diagnosed are women, including black women,
dedication to lowering your risk by eating healthy and monitoring
blood pressure, and confidence to talk to your healthcare provider
about screening and early detection. You have what it takes
to take on Alzheimer's. Learn more at take

(51:00):
on alz dot com. Brought to you by the California
Department of Public Health.

Speaker 17 (51:07):
Be responsible, don't drink and drive, choose a designated driver.
Our sponsor, MVP Rooter, is family owned, serving the San
Diego and Riverside areas with quality and pride for
all your plumbing and drain needs, whether it's residential or commercial.
Call the pros at nine five one nine zero zero
eight six eight seven for Riverside, or six one nine
six three eight eight six eight seven for San Diego.

(51:30):
To learn more, visit MVP rooter dot com. MVP Rooter, a
BBB accredited company, reminding us to never drink and drive.

Speaker 18 (51:40):
Challenging conventional wisdom can advance society's understanding of truth good.
Arrogantly challenging the complex balance of nature, however, can go
kabluey very bad. In recent times, there's been an unfortunate
tendency for some scientific hotshots to send society off on
techno tangents to quote remake nature promising miracles. About seventy

(52:05):
years ago, for example, a so-called ag science genius
promised that dumping synthetic pesticides on monoculture crops across the
globe would end hunger chemical giants and governments rushed to
do the dump, but the fix ultimately resulted in the
ongoing poisoning of Earth's land, water, food, and people, while
enriching agricultural monopolies and allowing hunger to rage. Unfortunately, insistence

(52:30):
by technologists and profiteers that they can outsmart and overwhelm
nature is now being pushed with cosmic vengeance. A covey
of arrogant academics and billionaire backers are saying, trust us,
we can handle that little global warming issue. One is
named David Keith, running a one hundred million dollar, quote, stratospheric
solar geoengineering scheme named SCoPEx. Keith proposes to solve global

(52:55):
warming by, get this, dispensing volumes of sulfur dioxide into
the Earth's stratosphere to quote regulate the amount and location
of sunlight around the globe. Gosh, what could go wrong
with that? Never mind the unknown consequences of tampering with
basic nature, argues Keith, for his bold techno fix to
global warming bypasses the political difficulty of ending our fossil

(53:19):
fuel addiction, so we should just do it. This is
Jim Hightower saying Keith does admit that he can be
inappropriately forceful. I'm intense, he says. Well, then let's all
chip in for some therapy sessions to help him overcome
his megalomania before he makes an irreversible mess of the
only planet we have that sustains life. The Hightower

(53:40):
Radio Lowdown is made possible by you, the subscribers to Jim
Hightower's Lowdown on Substack. Find us at Jim Hightower dot substack
dot com.

Speaker 7 (53:50):
Hey y'all, Burl here, good news for once. My neighbors
is jealous of me. You want to know why, because
my grass is growing and looking green, and I can
see it all out in my front yard, and I don't
even have to overwater it anymore. You know how I
did it? I listened to them water boys on the
Water Zone every Thursday night on KCAA. Well, I got

(54:11):
me a smart controller, and now it waters at night
and looks darn tooting. No more sneaking around and hooking
up my hose to my neighbor's spigot in the middle
of the night, and his dog won't bite me anymore.
And you can do it too. Listening is easier than ever.
KCAA is now screaming online. It's streaming.

Speaker 3 (54:31):
What it's streaming, you don't.

Speaker 7 (54:34):
Well, I don't know much about streaming, but they doing
it apparently at KCAA radio dot com. So AnyWho, listen
to the Water Zone and fix your yard up right,
right here at KCAA, the station that leaves no listener behind.

Speaker 10 (54:49):
Bob Vila here with my home improvement tip of the day.
When I talk to homeowners about safety, it often centers
around using tools, ladders, and so forth. But there are
a lot of other ways that you may be injured
in your home. One of them is by mixing the
wrong chemicals. You've probably heard you shouldn't mix bleach with ammonia.
That's true. It produces vapors that can damage your lungs
and even kill you. Also on the don't mix list

(55:12):
bleach with vinegar. When combined, they give off a chlorine
vapor that's similar to the poison gas used against Allied
troops in World War One. Bleach shouldn't be combined with
toilet bowl cleaners either, since they too can produce toxic fumes.
Also steer clear of combining highly acidic products with products
that are highly alkaline. They can cause serious chemical burns if
they come into contact with your skin. Before using any

(55:35):
household product, it's best to check the label. Potentially harmful
interactions are often listed there. Get more info at bob vila
dot com and right here at home with me, Bob Vila. Hey,
this is Gary Garb.

Speaker 1 (55:51):
If you work out like I do, or have a
job where you sit all day and your back hurts
and you're in pain and you don't know what to do,
I have the perfect solution for you. It's Ice Bod.
Ice Bod Active is form-fitted compression wear with pockets that
fit ice-cold gel packs called flexpods. These flexpods fit around

(56:11):
your joints ensuring maximum pain relief.

Speaker 2 (56:15):
I use it all the time because I'm.

Speaker 1 (56:17):
Always active, playing golf, working out, fixing up my place
right now, and I put it on in the evening
around my back and it gives me maximum pain relief.

Speaker 2 (56:28):
Laker's legend.

Speaker 1 (56:29):
James Worthy is a founder of this company and really
believes in it. To find out more about ice bod,
go to ice bodactive dot com and get yours today.
That's ice bodactive dot com. This week they are having
a flash sale where you can save twenty five percent
off by using the promo code KCAA. Go to ice

(56:49):
bodactive dot com.

Speaker 19 (56:51):
What is your plan for your beneficiaries to manage your
final expenses when you pass away?

Speaker 5 (56:58):
Life?

Speaker 19 (56:58):
Insurance, annuities, bank accounts, investment accounts all require a death certificate, which
takes ten days based on the national average.

Speaker 7 (57:08):
Which means no money is immediately available.

Speaker 19 (57:10):
And this causes stress and arguments. Simple solution: the beneficiary
liquidity plan. Use money you already have, no need to
come up with additional funds. The funds grow tax deferred
and pass tax free to your named beneficiary. The death
benefit is paid out in twenty four to forty eight

(57:31):
hours, without a death certificate. To get your money without a death certificate,
call us at one eight hundred three zero six fifty eighty.

Speaker 7 (57:41):
Six, Pilastina CACAA Loma Linda at one O six point
five FM K two ninety three c F Burrito.

Speaker 3 (57:47):
Valley, Located in the heart of San Bernardino, California, The
Teamsters Local nineteen thirty two Training Center is designed to
train workers for high demand, good paying jobs in various
industries throughout the Inland Empire. If you want a pathway
to a high paying job and the respect that comes
with a union contract. Visit nineteen thirty two Trainingcenter dot

(58:11):
org to enroll today. That's nineteen thirty two Trainingcenter dot org.

Speaker 9 (58:22):
NBC News Radio. I'm Lisa Carton. Jimmy Carter, the thirty
ninth President of the United States, is dead at the
age of one hundred. Carter's death came after a February
twenty twenty three announcement that he had decided to enter
hospice care and spend his remaining time at home with
family after a series of short hospital stays. Carter served
a single tumultuous term and was defeated by Republican Ronald

(58:45):
Reagan in nineteen eighty a landslide loss that ultimately paved
the way for his decades of global advocacy for democracy,
public health, and human rights. Former President and Missus Carter
worked with Habitat for Humanity in communities throughout Georgia and
globally for nearly forty years. More from Liz Kennedy.

Speaker 20 (59:04):
In nineteen eighty four, Jimmy and Rosalynn Carter created the
Carter Work Project, working alongside volunteers with Habitat for Humanity,
building and advocating for affordable housing. It was an experience
that fulfilled the couple.

Speaker 8 (59:18):
Every time we've ever been out as volunteers leading a project,
no matter where it's been, in this country or
around the rest of the world. At the end of
the Habitat project, we always feel that Rose and I
got more out of it than we put into it.

Speaker 20 (59:32):
The former president once said, Habitat provides a simple but
powerful avenue for people of different backgrounds to come together
to achieve those most meaningful things in life. A decent home,
but also a genuine bond with our fellow human beings.
I'm Liz Kennedy. Carter is widely revered for his championing
of human rights. His brokering of the Camp David Accords

(59:55):
with Egyptian President Anwar Sadat and Israeli Prime Minister
Menachem Begin in nineteen seventy eight remains central to his legacy.
Carter also received the Nobel Peace Prize in two thousand
two for his efforts to push for peace across the globe.