Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Are you looking for a good union job? The Inland Empire's
fourteen thousand members strong Teamsters Local nineteen thirty two has
opened a training center to get working people trained and
placed in open positions in public service, clerical work, and
in jobs in the logistics industry. This is a new
opportunity to advance your career and raise standards across the region.
(00:26):
Visit nineteen thirty two Training Center dot org to enroll today.
That's nineteen thirty two Training Center dot org.
Speaker 2 (00:37):
Listen to KCAA Loma Linda at one oh six point five FM,
K two ninety three CF Moreno Valley.
Speaker 3 (00:43):
The information economy is upon us. The world is teeming
with innovation as new business models reinvent.
Speaker 4 (00:50):
Every industry.
Speaker 3 (00:51):
Inside Analysis is your source of information and insights about
how to make the most of this exciting new era.
Learn more at insideanalysis dot com. And
now here's your host, Eric Kavanagh.
Speaker 5 (01:10):
Hello, and welcome to Inside Analysis. I'm your host,
Yell LeBlanc. On this episode, Eric Kavanagh is interviewing Alex Gallego,
who's the CEO of Redpanda Data. Together, they will
talk through the new landscape of LLMs and how the
innovation of AI is affecting companies all around. Stay tuned,
(01:33):
you're listening to inside Analysis.
Speaker 6 (01:35):
I'd like to dig deeper into Redpanda.
Speaker 7 (01:38):
I mean, I know you've basically rewritten the whole
platform to do streaming, to do it efficiently, because Java
is not very efficient, right? So you said, well, just
go back to the drawing board and hack this thing out.
Speaker 6 (01:50):
How long did it take you to do that?
Speaker 8 (01:52):
Because in many ways it feels.
Speaker 9 (01:57):
It felt to me like it was a natural evolution of,
you know, decades of work. While I was in school,
I took a job at a forex trading company and
I really got introduced to a bunch of
low latency things and how markets work. And I
(02:18):
ended up writing some trading algorithms in a weird
language called MQL4. It's like this weird, obscure language,
kind of janky, to be honest, very Python like, but
not as cool. And then I wrote a
bunch of trading apps on this thing called pips, and
(02:39):
a pip is like whatever, like one hundred dollars or
something like that, just like some really large spread, but
you want to deal in terms of basis points. Long
story short, that's where I really started to
lean into things that I found really fun, which were
like low latency, high performance things. And so then I
went to work in ad tech. Well, I guess when I graduated, first
(02:59):
I worked on an embedded database, and I worked on
the grid, this was for the BlackBerry phone. We wrote
this C parsing code where we were pulling this
three dimensional database into a series of doubles, because
it was really efficient to encode for, I think, OpenVMS,
and so we wrote the color coding on the S
(03:23):
and P five hundred table with dynamic rendering. That
was really cool. And so, long story short, when I
think about the evolution of Redpanda, it felt to
me like the natural evolution of the ideas that I
had worked on, which went to ad tech in
New York for a company called Yieldmo, and then for
a stream processing company that I wrote the tech for
in twenty fourteen through twenty sixteen, that I sold to Akamai.
And so anyways, in many ways, it was like, why
is the world so complicated? Why does it have to
be so hard? And everyone that has been on call,
you know, usually your pager goes off at
(04:03):
three a.m. I don't know why, but it's like somewhere
around that time. It's when you're in
your deep sleep and you get paged, and then
you wake up and you're like, I don't understand why
this is so complicated.
Speaker 8 (04:15):
And so.
Speaker 9 (04:17):
It was like, well, why couldn't we do better? Like,
what's the mechanical reason? Like, okay, let's remove all
the hype. I just want to understand, fundamentally, what is
the job that this software is doing that is so
hard that it requires it to be so complicated? It turned
out not to be that difficult, frankly, and so I
wrote something from scratch, really for me, and I
(04:40):
left Akamai, I think December twenty eighteen, somewhere around there,
and I started writing Redpanda in January.
But I would say I had a pretty
clear goal, and so it was just a matter of
writing the code. And so that part didn't take that long,
maybe like three months to have a prototype, a couple
(05:02):
of months to have something that worked well.
Speaker 7 (05:07):
And of course the world has just sped up in
all directions around us now, so we're all kind of
riding this new wave. And you know, when I think
about streaming, I've always, I mean from the earliest days,
I was like, oh my goodness, if you do this right,
you can supersede ancient architectures, or what
should be viewed as ancient architectures. But you have to
(05:28):
do the whole thing. It's like when you try to
fuse these two worlds together, that's where the impedance mismatch
kicks into high gear and you've got to figure out
some way.
Speaker 6 (05:37):
I mean, you know, I.
Speaker 7 (05:38):
Heard stories of at Vertica where they were taking all
kinds of streaming dages just like filling it out into
you know, a traditional database, and you know it's a
column oriented database of course, but still it's like, Okay,
you know, do you need all that? Like is that
really really necessary? And the short answer, of course is no,
not really. But you know, for for like net new
(05:58):
use cases, I think you guys would be a dream, right?
But in traditional architectures, do you find that's kind of
a hindrance? Is it that the behemoths, the big Fortune
two thousand companies, they're just, you know, like, oh
my God, can we even handle this? Is
that still a hurdle, or do you see more
people figuring it out and just architecting around it?
Speaker 8 (06:20):
Yeah, good question. So, on that, that's really our
sweet spot.
Speaker 9 (06:26):
I know it's counterintuitive, because it's like, okay, well, here's
this, you know, startup. But the brands that we work
with, like, you know, probably drove you to work today.
I guess if you work from home, maybe not, but
for those of you that commute to work, like, you know,
whether it's the largest electric car company, the largest
city and largest bank in the US, the largest, like, ISP
in Europe, one of the largest banks in Australia, whatever.
(06:49):
When you look at the brands that we tend to
work with, they tend to be this Fortune
two thousand. And I think fundamentally what happened is twofold.
One, AI has definitely accelerated the pace at which people
are wanting to move towards more interactive architectures.
Two, people realize that really the log is the
source of truth in an architecture, and databases are caches.
(07:13):
And I think this wasn't my idea, it was
popularized by Amazon. They're like, the log is the source
of truth, and everything else is just the cache.
And I guess, you know, third, when
we started talking about Iceberg and so on, people are like,
finally, all the blocks fit. You know,
(07:33):
when you put the final piece on your Lego,
it just looks beautiful and it works. It's
when you can start to glue analytics, which is
looking behind the present, which is really what Redpanda
is, with Apache Iceberg as the glue, and then
the future, with autonomous decision making with agents, by
(07:55):
carrying context. I was like, now I think people understand
the timeline horizon of how an application works. And so
we're seeing the opposite. We're seeing actually acceleration in the
Fortune two thousands, or five thousands really, more than
I saw early on. And I don't know if it's
a function of brand. Maybe we have better marketing. Like,
(08:15):
it's kind of hard to tell, and I don't want
to, you know, correlate and make incorrect assumptions here.
And so I think fundamentally, if you were to
ask me, what's your gut, I think it's that the
world is thinking about applications differently, in large part because of
AI, and in that world, Redpanda plays a critical role.
(08:41):
And I'm not sure, you know, in the context of
our BYOC, for example, there are, like,
Speaker 8 (08:48):
You know, many alternatives.
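The "log is the source of truth, databases are caches" idea mentioned above can be sketched in a few lines of Python. This is a toy illustration, not Redpanda's API: a key-value view is just one materialization of an append-only log, and it can always be rebuilt, for any point in time, by replaying the log.

```python
# Toy sketch: the append-only log is the source of truth;
# the key-value "database" is a cache rebuilt by replaying it.
log = []  # source of truth

def append(event):
    log.append(event)

def materialize(upto=None):
    """Rebuild the 'database' view by replaying the log (optionally a prefix)."""
    view = {}
    for evt in log[:upto]:
        view[evt["key"]] = evt["value"]
    return view

append({"key": "AAPL", "value": 187.0})
append({"key": "AAPL", "value": 187.5})
append({"key": "MSFT", "value": 402.1})

db = materialize()  # the cache holds only the latest values...
history = [e for e in log if e["key"] == "AAPL"]  # ...the log keeps every event
```

Because the log is retained, dropping the "database" loses nothing: any view, including views as of an earlier offset, can be regenerated.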
Speaker 7 (08:50):
You know, you just reminded me of something. And I
haven't read too much about this. And We've talked about
streaming for years. I've talked about streaming for many, many years.
Speaker 6 (08:57):
There's MQSeries.
Speaker 7 (08:59):
There's all kinds of things in the past that were streaming, right?
It's like little message queues, basically. MQSeries, all kinds
of different message queues, which is kind of what streaming is.
Speaker 6 (09:08):
Is that the category that it's in?
Speaker 7 (09:11):
But to your point, there is a maturation now in
the environment, and I'm thinking to myself, this is a
real time application architecture, right. That's what it really boils
down to. It's like, guys, look, you want an application
that does something. What is the fuel that it's going to
use to do that something? Is it going to be
a traditional static database, which is the traditional way of
(09:31):
doing things? In which case, guess what, that database will
slow you down. Like, sooner or later, it's going to
slow you down, either because it's going slowly, or things
attached to it are going slowly. But the point
is, it's just this ever growing ballast, if you
kind of look at it in a certain way. Do
you really want that? And I think your answer
is no, not really. What you want is for your
(09:53):
apps to use real time data that is live right now,
that is interactive with the users or the partners or whatever.
And that's what you're enabling, right It's a real time
application architecture.
Speaker 8 (10:05):
Yeah, and so I think two comments.
Speaker 9 (10:09):
The first comment is that I think the reason why
a database is not enough is because you lose data. And
so, said another way, you need the highest fidelity of
the data to be able to reconstruct a different materialization
of your data. And so if your data is in
a relational database, let me give you an example. It's
(10:33):
average sales price, or like the last price of
a ticker symbol. Actually, that's probably a
better example. The amount of data that flows with
every tick of that symbol as it's being traded is tremendous.
But when you ask a database, it gives you one
kilobyte of data back, like, this is the company name,
this is the ticker symbol, whatever, this is some price,
(10:53):
but the amount of input to the database could be
a gigabyte per second. And so what happens is you
lose data, right? Like, fundamentally it is a
lossy compression algorithm, almost, right? Like, you're sending all your data,
but you're only querying the last point in time. And
so that's what databases are. And so I think
it is limiting, less from, like, particular, you know, I
(11:17):
think, throughput. Databases have gotten really good at throughput. Costly,
you know, so maybe the other limit is dollars, but
I think that now modern databases are pretty good
at handling the throughput. Latency, not so good.
But more fundamentally, it's about, like, you're losing
access to the real data. And so what happens is
it forces the architect and engineer to have, like,
(11:42):
computed all of the ways in which they wanted their
data before they write an application. And what the log
gives you is freedom. You're like, well, I want to
store it in really cheap storage, let's say S3
or something S3 compatible, and then today it's
Splunk pulling it, tomorrow it could be whatever, Elastic,
whatever it is. And so that freedom is just key,
I think. That's where I see the biggest
(12:04):
pillar from an architectural perspective.
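Alex's ticker example can be made concrete. A point-in-time query against the database returns only the last value, a lossy summary, while the retained stream can still answer questions nobody planned for when the schema was designed. The tick data below is made up for illustration:

```python
# Every tick is retained in the stream; a typical database row
# would keep only the last value (a lossy compression of the feed).
ticks = [
    ("ACME", 10.00), ("ACME", 10.05), ("ACME", 9.95),
    ("ACME", 10.10), ("ACME", 10.20),
]

# What a "last point in time" database query gives you:
last_price = ticks[-1][1]

# With the raw stream retained, tomorrow's new question is still answerable:
avg_price = sum(price for _, price in ticks) / len(ticks)
max_price = max(price for _, price in ticks)
```

If only `last_price` had been stored, the average and the high would be unrecoverable; keeping the log preserves the freedom to compute any new materialization later.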
Speaker 6 (12:07):
That's very interesting.
Speaker 9 (12:10):
And then for AI, I think that changes a little bit too.
But let's put that away for a second.
Speaker 6 (12:14):
Yeah, what do you mean by that?
Speaker 9 (12:16):
So, okay. Historically, I think we talked about the challenges
of picking a database, yes. In old school databases, perhaps scale,
you know, on throughput and latency, those were limitations.
In modern cloud databases, I think throughput is okay. Latency
is still not great, and so maybe for some use
(12:38):
cases it does tend to be. But we talked about
the architectural primitives that you get when you adopt
the log as the source of truth.
Speaker 8 (12:44):
What happens is when.
Speaker 9 (12:46):
I'm going to define an agent as, like, an object
with an LLM, right, for simplicity.
And when you have these things, I guess the
future is going to be about this autonomous decision making.
And so if you look at a timeline,
and you say analytics, stuff like Databricks and so on,
(13:09):
they were all about looking at the past: help me
do my job better. What were my best sellers, what region
is the best producing region, what product is the best
producing product, what should be my average sales price?
Speaker 10 (13:21):
Right?
Speaker 9 (13:21):
Like, help me do what I do better by looking
at the past. That's analytics. Redpanda has been classically,
and we'll talk about this next, about being the best
operational system in the world. Right? So if you're running
your system, and let's say you're trying to
protect whitehouse dot gov from getting taken down by
North Korea bots or something like that, you need to
(13:45):
know the movement of IPs around the world. Or maybe
you're trying to track fraud detection or whatever. So that's
the present. That's the operational sense. The future is about
carrying the context of the past and present so that
this code can make autonomous decisions for you. And so
let me now loop in all the timelines: past,
(14:06):
present and future. And so the past and present are
going to be bridged by technologies like Apache Iceberg, which
is a way to have zero shot integrations with
all the query systems, and so that's an important
piece. And streaming engines, we'll get to in a second.
So that's where streaming engines play a super critical role. It's just
(14:27):
exposing the data in a way that is accessible for
many, many tools, right? DuckDB, ClickHouse, right, like
Redpanda or whatever, Pinot, Trino. Basically just about every
database is going to speak Iceberg. So that's
a critical role for streaming engines. And then for the future,
it's about carrying context for the agents to do their job.
(14:50):
And so let's say that the job of an agent
is to determine yes or no to a credit card
transaction, or a credit period or whatever.
You need to give it context. And so think of
a self driving car, right? Like, you need to carry
the context, either the images or the previous five
transactions or some sort of window, so that the agent
can make a decision. So as part of the prompt,
(15:12):
you can tell the agent, hey, is this or is
this not a fraudulent credit card transaction?
Speaker 8 (15:18):
These are Alex's last five.
Speaker 9 (15:20):
credit card transactions, and he just bought a bagel in
San Francisco. So if it gets transacted in the UK, like,
obviously mark it as fraud, and give me the explanation so
I can render it on the screen. Right? And so
I think that's why streaming engines are seeing an acceleration:
one, as a glue to analytics and the present with Iceberg,
(15:41):
and then second as a way to carry context for
this autonomous decision making.
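The fraud-detection pattern Alex describes, carrying a sliding window of recent events into an agent's prompt, might look roughly like the sketch below. The window size, field names, and prompt wording are illustrative assumptions, not a Redpanda or LLM-vendor API; a real pipeline would feed `on_transaction` from a topic and send the prompt to a model.

```python
from collections import deque

WINDOW = 5
recent = deque(maxlen=WINDOW)  # last five transactions; oldest drops off first

def on_transaction(txn):
    """Called for each event consumed from the stream."""
    recent.append(txn)

def build_prompt(candidate):
    """Fold the carried context into the agent's prompt."""
    history = "\n".join(f"- {t['city']}: ${t['amount']:.2f}" for t in recent)
    return (
        "Here are the customer's last transactions:\n"
        f"{history}\n"
        f"New transaction: {candidate['city']}: ${candidate['amount']:.2f}\n"
        "Is this fraudulent? Answer yes or no and explain why."
    )

# Simulate six small purchases in one city; only the last five are kept.
for txn in [{"city": "San Francisco", "amount": 4.50}] * 6:
    on_transaction(txn)

prompt = build_prompt({"city": "London", "amount": 900.00})
```

The streaming engine's job here is exactly the part the transcript emphasizes: keeping that window of context fresh so the autonomous decision can be made at the moment of the event.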
Speaker 7 (15:46):
So I'm trying to process this in my brain, right. So when
you have, like, with Kafka you've got topics, right? You have
topics that are streaming, and you can choose which ones
you pull together at any given point in time, as
I recall, and that becomes the fabric or the
context that you're talking about.
Speaker 6 (16:02):
So here you're.
Speaker 7 (16:04):
Talking about getting context from log files basically from systems
that are important to this particular potential transaction. And what
you're talking about is allowing the agent to be a
much more adaptive and real time semi autonomous entity that
will absorb context and just make a decision quickly. Yeah,
(16:25):
Like, that's the idea, is that it? So, in a sense,
you've deconstructed the traditional data flow that goes into an
application which makes a decision. Because, to your point, like,
all the data warehousing, that's my background, it's all the
data warehousing stuff, right? You do ELT and ETL and
all this stuff.
Speaker 6 (16:44):
You get into this.
Speaker 7 (16:44):
Big central repository, which we did because they figured out
that you couldn't really query an ERP. It's not what
it was designed for. And besides, what you want to
do is know how all this stuff relates to each other.
That's where the magic is, right, is how all these
different pieces of data relate. And so with Redpanda,
what you've done is expedited the process of feeding the
(17:05):
important bits from important systems into a real time context,
which the agents can then grasp as needed.
Speaker 6 (17:12):
Is that right?
Speaker 9 (17:13):
So what's cool about Redpanda, from an engineering perspective,
is that basically most of our code is in the open.
And so we will link an example here of what
I mean, like a simple, concise YAML file, so people
can just see the whole thing put together, and
it really becomes something that they can
run on their laptop.
Speaker 8 (17:34):
But that's exactly right. And so.
Speaker 9 (17:38):
First of all, I think it's worth it just to
share maybe a little bit of Redpanda. So we
did start Redpanda as, like, you know, a high performance
storage engine. And the idea is, like, what if we adapt
to how
Speaker 8 (17:50):
The hardware is. Right, so.
Speaker 9 (17:53):
Most software just operates on, like, you know, a
tremendous amount of layers of abstraction, and you lose performance
with every layer, because you're generalizing things. And so when
I started Redpanda, it was like, what if we just adapt
to how hardware works? And if you go all
the way down to even, like, core to core coherency protocols,
(18:14):
so you have, you know, cores, let's say zero
and one on a dual core system, single socket,
right, so a single motherboard. We don't have to
complicate the picture with multiple sockets on a motherboard, but
on one motherboard you have one chip, and the
chip has two cores, right? When you go down there,
when you're talking about the memory coherency protocol, it's all message passing.
(18:37):
And the insight here from hardware manufacturing people is that
you could do a bunch of useful work if you
don't block, right? And so if you adopt a similar
software architecture where it's all message passing, you can yield
at points of blocking, and then you can let the
CPU do a lot more useful work.
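The "yield at points of blocking" idea maps naturally onto cooperative scheduling. Here's a minimal single-threaded sketch using Python's asyncio; the function names and timings are illustrative only, and Redpanda's real implementation is native code, not Python. The point is that a task hands the CPU back at the moment it would otherwise block, so other useful work proceeds.

```python
import asyncio

async def fetch(label, delay):
    # The "blocking" point: instead of stalling the thread,
    # the task yields here and the scheduler runs something else.
    await asyncio.sleep(delay)
    return label

async def main():
    # Both tasks run on one thread. Total wall time is roughly the
    # max of the delays, not the sum, because each yields while waiting.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))

out = asyncio.run(main())
```

The same shape, every would-be blocking operation expressed as a message or future that the scheduler can park, is what lets one core stay busy with useful work instead of idling in a blocked thread.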
Speaker 6 (18:59):
And so, that's really interesting. Folks, don't touch that
dial. We'll be right back.
Speaker 11 (19:02):
You're listening to Inside Analysis.
Speaker 3 (19:11):
Welcome back to Inside Analysis. Here's your host, Eric Kavanagh.
Speaker 9 (19:19):
Redpanda runs on a network of N squared
single producer, single consumer, lock free queues that act as
mailboxes for every core. And you can assume that every
core is like an actor system on a super low latency network.
And so when you embrace that, by the way,
you get a simple concurrency, I guess, software structure, and a
(19:44):
simple parallelism deployment. And so anyways, performance, this
is like one of my favorite topics to talk about.
I want to stop us here, because this could be
a two hour call where we dig super deep
into the mechanics of it.
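The per-core "mailboxes" Alex mentions can be illustrated with a toy single-producer, single-consumer ring buffer. A real implementation depends on atomics and careful memory ordering in native code; this Python version only shows the shape: one index advanced only by the producer, one only by the consumer, and no shared lock. In the N-squared mesh, there would be one such ring per ordered pair of cores.

```python
class SpscRing:
    """Toy single-producer, single-consumer ring buffer (one core's mailbox)."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0  # advanced only by the consumer
        self.tail = 0  # advanced only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False  # full: the producer backs off instead of blocking
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:
            return None  # empty: the consumer goes do other useful work
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

ring = SpscRing(4)  # capacity 4 holds up to 3 in-flight messages
for msg in ["m1", "m2", "m3"]:
    ring.push(msg)
```

Because each index has exactly one writer, neither side ever waits on a lock, which is the property that makes the mesh behave like hardware-style message passing between cores.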
Speaker 6 (19:59):
This is so interesting for lots of different reasons.
Speaker 7 (20:03):
I mean, I've been around this business a long time,
so I learn from all sorts of different vendors about
different things they're doing, and it's always very interesting. But
in the recent past, meaning six, seven months ago, I
ran into a company called Hammerspace.
Speaker 6 (20:14):
Are you familiar with Hammerspace? Have I mentioned those guys?
Speaker 7 (20:17):
You need to look into these guys, because
they are thinking in a very similar way to how you're thinking.
What they do is, they're a parallel file system.
So this is big in the HPC world, for life
sciences and training models. They're actually used in training Llama 2
and Llama 3, so they're being used to deliver the
data to the GPU. And they did something that was
so damn clever it blew my mind. He talked about
(20:38):
how in the average AI training architecture, data has to
make eleven hops to go from where it is to
the GPU. It's like controller nodes and different things like
that in the storage environment, the network, all these different
steps along the way. But still, that's eleven freaking hops.
Speaker 6 (20:55):
So they talked.
Speaker 7 (20:56):
about how, I think it's VAST that cut off one
hop, and Qumulo cut off two hops.
Speaker 6 (21:02):
Maybe, but these guys.
Speaker 7 (21:03):
Were able to figure out by affinitizing the processing to
where the GPU server, which has built in flash memory
that often gets ignored. They call that stranded capacity basically
because a lot of times companies will want to just
have all their governance protocols and security around the storage
array and they don't want to have to try to
(21:24):
figure that out on the GPU array as well, right?
So it's these two separate worlds. Well, they figured out
how to affinitize that, to where, when the processor knows
that the data is on this machine, well, I don't
even have to go to the network, I don't have
to go to a storage array. Look, everything is right here.
So they've gotten down to four hops from eleven. And
if we're talking about training billions of parameters on these models, well,
(21:47):
guess what, cutting the hops by almost two thirds
is kind of a big deal, right? The governance thing
you talked about. And they went out and they got
two of the best developers from the Linux kernel to
work on it, such that this is native Linux now.
So, in other words, a lot of times with these sorts
of accelerators you have to put little agents everywhere in
order to make that happen. Not here.
(22:09):
And what they did is they abstracted all the metadata
management around file systems into this layer above, thus opening
up this massive data path underneath, so you can orchestrate
data wherever you want it to go. And that's all
handled in this Hammerspace abstraction layer, such that below,
the data is just flying as fast as it
(22:32):
possibly can, to be able to train large language models.
And that's the only company I've come across that has
gotten as deep as you're talking about, like literally understanding
the hardware protocols and how these things operate together.
Speaker 6 (22:46):
I just think, I mean, you guys need to talk.
Speaker 9 (22:48):
You know, what's interesting about that model is it's very
similar to how we approached it. Like, at a high level,
it's a complicated space. But if we take that as
an example, by the way, co-deployment of the data
with a bunch of shared memory stuff, that is
exactly how you start to eliminate the things
(23:10):
that programmers have invented, right? Like, many of the
bottlenecks are just made up, things
that don't need to be. And like, look,
at the end of the day, you can't eliminate complexity.
You can either manage it yourself or you can let
someone else manage it. And when someone else manages the complexity,
then you're onboarding all of their bad decisions
Speaker 8 (23:29):
As well as the good decisions.
Speaker 9 (23:30):
Right. And so an example is the Linux kernel page cache,
a great general... I mean, I wrote the first pass
of the storage layer, and once I did that, I
was like, wow, the page cache is a great
general algorithm, and it's a terrible purpose built algorithm, because
it takes a bunch of locks. It
(23:53):
has all these global objects. There's
a bunch of contention, there's a bunch of metadata that
has to get flushed, multiple cache lines, blah blah blah. Yeah,
like, I could do this so much more cheaply in
this hyper specific context. And so, very much cut from
the same cloth. We also have a bunch of, you know,
(24:14):
either BSD or Linux kernel hackers as part of the
engineering team.
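Alex's page-cache point, that a general-purpose cache pays for locks, global objects, and flush bookkeeping that a purpose-built one can skip, can be illustrated with a tiny single-reader cache. This is a sketch under stated assumptions (one thread, read-only blocks, plain LRU eviction), not Redpanda's actual cache:

```python
from collections import OrderedDict

class TinyReadCache:
    """Purpose-built cache for one single-threaded reader: no locks,
    no global state, no dirty-page flushing, just an eviction rule."""

    def __init__(self, capacity, read_block):
        self.capacity = capacity
        self.read_block = read_block  # fallback that actually reads storage
        self.blocks = OrderedDict()

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark as most recently used
            return self.blocks[block_id]
        data = self.read_block(block_id)       # cache miss: hit storage
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

reads = []  # records which blocks actually hit "storage"
cache = TinyReadCache(2, lambda b: reads.append(b) or f"data-{b}")

cache.get(1); cache.get(2); cache.get(1)  # second get(1) is a hit
cache.get(3)                              # evicts block 2
```

The general-purpose kernel page cache has to serve every process safely at once; a structure like this only has to serve one engine that already knows its own access pattern, which is where the cheapness comes from.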
Speaker 8 (24:17):
I feel like the.
Speaker 9 (24:18):
nerds that we tend to attract, some of my friends,
they tend to be, like, the same kind of nerds
that would work on HPC systems.
Speaker 7 (24:25):
Right. And that, I mean, you know, the thing that
really blew my mind when I interviewed David Flynn, he's
the CEO of Hammerspace, this was like September maybe,
and I was talking to him and I was like, dude,
you are the first person I've come across. And I've
interviewed two thousand companies, all right, in this data
world over the past twenty five years, a lot,
a lot of companies, and I always love
to get as far down as I can possibly go
(24:46):
to really understand who's tinkering where and what they're doing.
And you know, I joked to David, first of all,
like, shift left and all this stuff. I'm like, you cannot
shift any further left than into the operating system and
the kernel. Like, that is as
Speaker 6 (24:58):
Far left as you can shift. It's like, well way
over there.
Speaker 7 (25:01):
And I said, you're the first company I've come across
to do something like this. You know, I was in
Austin like fifteen, sixteen years ago at an HPC conference,
and I know it was fifteen years ago because I was tracking
Cloudera in the earliest days, and Hortonworks.
I know all those guys, Amr Awadallah, been friends with them
for a long time, very, very smart guys, right? Cloudera was going to
(25:23):
be the big thing, and then, you know, long story short,
Cloudera did pretty.
Speaker 6 (25:28):
Bad, yeah. Missed cloud.
Speaker 7 (25:29):
First of all, my business partner, Robin Bloor, he's
retired now, mostly, but he launched Bloor Research in the UK,
and he and I launched the Bloor Group here in the States.
And he always makes funny little comments.
Speaker 6 (25:41):
He goes, who put the cloud in Cloudera?
Speaker 7 (25:44):
No one. And that's the problem, because it wasn't designed
for the cloud. It's like, you called yourself Cloudera
and you're an on-prem solution. Like, what on
earth are you doing? But what I told David Flynn
was, you are the first person I've come across to
fuse HPC with enterprise big data processing, because these guys
never talked. I go to this conference and no one
(26:06):
from the big data world, analytics, BI, none of that stuff.
None of those people were talking to the HPC folks.
I'm like, why on earth don't you guys talk? Like,
there must be things they've figured out over here that
you can use. And I think it's because, you know,
it's the center of gravity, right? HPC is heavy
in life sciences, it's heavy in universities, so they
(26:27):
are their own little ecosystem, right, and they don't really
want to worry about this other stuff. And then in
the business world, that's the Clouderas and the Hortonworks
and the Verticas and all those guys.
Speaker 6 (26:36):
It's like two different worlds. I'm like, why don't you
guys talk? You're the
Speaker 7 (26:39):
First one I can't? He was like, wow, I guess
no one's ever noticed that before, Like, yeah, yeah, how
am I the only one to notice this?
Speaker 9 (26:46):
There's a few papers over the years that, you know,
we obviously read, and I wasn't the first one
to figure this out. I just think there's
just very few systems that are public
that do this. I know of a couple of proprietary systems
that have sort of taken this approach, but
(27:09):
it's actually pretty common to get a ten X thing.
And the way I like to frame it is that
sometimes you're lucky enough to reinvent the wheel when the
road changes. And look at what changed between the time Kafka
came out and Redpanda: hard drives
got super, super fast and really cheap, right? Like, now
you have these NVMe SSD devices that can write a
(27:29):
page to, like, the underlying storage in, I don't know,
like eight to sixteen microseconds, somewhere around there. Let's say
with a little contention, like, you know, twenty, fine.
So, super fast. And if, you know, you remember spinning
disk, you would right click your C drive
and hit defragment, and it would take overnight. It
would start making these AOL dial up noises, and so
(27:51):
so then, you know, hard drives got super fast. And yeah, exactly,
I think everyone in the US has nightmares from exactly
that, like the you've got mail era, or like, you know,
when your parents interrupted your music downloads because they
picked up the phone.
Speaker 6 (28:10):
That's right, Oh the good old days.
Speaker 8 (28:14):
It's terrible.
Speaker 9 (28:15):
And then CPUs became, you know,
multicore, like, really as a dominant factor, right?
You go from, I guess a long time ago, single
cores, to, like, whatever, ninety six core VMs
on Amazon as the norm,
Speaker 8 (28:33):
Like you know, readily available around the world.
Speaker 9 (28:37):
And so if you were to start from scratch,
sometimes you get to do things differently, and so
that was the original thesis of Redpanda.
Speaker 6 (28:46):
That's cool.
Speaker 8 (28:46):
Yeah.
Speaker 7 (28:46):
The other company you should talk to is Ocient. You
know Ocient?
Speaker 8 (28:51):
I've heard of them, but I haven't personally talked to them.
Speaker 7 (28:53):
They're doing hyperscale data warehousing. So basically, they
saw NVMe coming. And by the way, David Flynn, one
of the other cool things about him is, his last company
before Hammerspace, they focused on the NVMe protocol,
so he was focused on trying to make that a thing,
and now a handful of companies are really using that.
(29:13):
Hammerspace for sure uses the NVMe protocol, so does Ocient,
and they use it to do trillions of records. I mean,
just crazy amounts of records being brought in, and it's
a whole new era, you know.
Speaker 6 (29:25):
He jokes.
Speaker 7 (29:25):
He actually lives in a town where I used to be.
My first job out of school was at the Lemont
Metropolitan newspaper in Lemont, Illinois, which is in
a tiny little corner of Cook County, just outside of Chicago,
and it's actually older than Chicago. So the I and
M Canal now goes through Lemont, and that's what sort of
built the town. They were building this I and
(29:46):
M Canal, the Illinois and Michigan Canal they call it, to
get from Lake Michigan to the Mississippi River so
you could do transport, right? That's how far back it goes.
Speaker 6 (29:54):
But that's where he lives. I'm like, dude, I used
to do that. I was the editor of the local
newspaper in that town, right? Cool. But yeah, it's wild.
He's such a nice guy too. It's just Chicago, so
they're
Speaker 7 (30:03):
Pretty humble, hard working, you know, city of big shoulders stuff.
Speaker 6 (30:08):
But they spent two and a half years.
Speaker 7 (30:09):
Their first GA was like version seventeen or something. They
wrote drivers and everything, because they saw, all right,
we have to rewrite this whole pyramid to get to
a point where we can really leverage this stuff
and just go hyperscale.
Speaker 6 (30:22):
And they did so.
Speaker 7 (30:23):
Now it's just like, good God. Like, how are you
going to compete with that?
Speaker 6 (30:27):
You can't.
Speaker 7 (30:28):
I mean, they have old, old, old technology
and it's not going
Speaker 6 (30:31):
to work at this new scale.
Speaker 8 (30:34):
You know, that's what technology is about.
Speaker 9 (30:36):
In many ways, that's why the world has
space for companies like Red Panda to go in
and challenge basically multi-billion-dollar companies. It's like, okay,
well, product to product, let's compete.
Speaker 8 (30:49):
I'm game.
Speaker 12 (30:50):
Do you know?
Speaker 9 (30:50):
We feel pretty confident. It took a while,
you know. The first two years of the company, it
was just me and a bunch of friends. We didn't
work in a garage, but pretty much in a garage. We
were all remote originally, and we hacked basically day and night
for the first few years just to get something out,
like solid systems.
Speaker 8 (31:10):
I didn't realize just how.
Speaker 9 (31:14):
hard it is to do a good job. Like, you
could do an okay job quickly, in a year, maybe
two years. But to do a great job, it is
just so, so, so hard.
Speaker 6 (31:24):
So well, so many things to think through.
Speaker 7 (31:26):
I mean, that's the problem is like you know, and
I think a lot of people who don't know programming
don't realize that. You know, with programming languages, you're always
sort of dancing around the thing, right, the thing is
what you're trying to do, and it's like, huh, you
can't just go right at it.
Speaker 6 (31:40):
You always have to kind of go around it somehow.
Speaker 7 (31:42):
So it's like, what's the tightest circle, and
then the concentric circle around that, and around that,
or however you want to view it, layers, however you
want to get there. Still, it's like, what are you
trying to accomplish first and foremost? And
how can you build the foundation as strong and malleable
as possible to be able to build on that? Because,
(32:03):
you know, if you change your mind a year later,
it's like, oh man. Like, I remember I was
talking to one guy, one of the funniest conversations I had.
This is probably six, seven years ago now. We were
talking about Kubernetes and how it is not stateful, right,
it's a stateless environment.
Speaker 6 (32:19):
And I was talking to this guy, and he goes, yeah, I
think some of those guys are starting to think, maybe
we should have thought about that.
Speaker 7 (32:26):
Yeah, this whole architecture, like, maybe state was kind of important.
If it's not here, you've got to do it somewhere else, right? That's the point,
like, somewhere else has to be managing the state.
Speaker 6 (32:34):
And that gets pretty complicated.
Speaker 9 (32:36):
Right. State is the problem. That's the
pièce de résistance. State is the hardest thing.
I think I will know when AGI gets here: when
it solves all the Kubernetes deployments. That's
the test. If I can ask a prompt to
solve my Kubernetes, it's just like, that's AGI,
(32:56):
in my mind.
Speaker 8 (32:58):
That's fun.
Speaker 9 (32:59):
So what does this mean for users? I think, to
unpack this a little bit, here's what we're seeing
in prod, which is pretty cool. And this is
a shift. You mentioned, hey, Red Panda has been
classically good for the Fortune five thousand or whatever. That's been true.
But what we're seeing is that the same reasons
that led to the design space of what became,
(33:23):
you know, all the messaging systems, like Kafka and Red
Panda et cetera, are needed for these agentic workloads. And
so by that, I mean you want well-named communication channels.
And so in the Kafka parlance, that's called a topic.
In the SQL pipeline parlance, it's called a table, right? Like,
(33:43):
you basically want something that is well named. Like, I
want to exchange messages on this channel, and that channel
is going to be called orders, and the other channel
is going to be called fraud, and the other
channel whatever. So you want these well-named channels. But
what's interesting is that, from an end-to-end
perspective, you only care about the first channel and the
output channel. So effectively, when a user types in the
(34:05):
prompt, like, summarize the 10-K documents for a public company,
say MongoDB or whatever it is, then
it gives you a summary, right. But in between,
you could always make it smarter by introducing something in
the middle. So you could always introduce an agent in
the middle and continue to make it smarter over time,
as long as the end-to-end channels are maintained,
sort of like exported functions in C, if you will.
(34:27):
The second thing is that people wanted the same primitives
as microservices. They wanted access control lists, they wanted audit logging,
they wanted independent pipelines, they wanted global policies on what
models you're allowed to use and not allowed to use.
And so anyway, there's this full circle of like, hey,
(34:48):
AI is a totally different thing,
Speaker 6 (34:50):
like whatever, don't touch it. We'll be right back.
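The pattern Alex describes, well-named channels with an agent inserted in the middle while the end-to-end contract stays fixed, can be sketched with a tiny in-memory stand-in. A real deployment would use a Kafka or Redpanda client; the topic names ("orders", "fraud") and the toy scoring rule below are illustrative assumptions, not anything from an actual Red Panda system.

```python
from collections import defaultdict, deque

class Broker:
    """Tiny in-memory stand-in for a streaming broker with named topics."""
    def __init__(self):
        self.topics = defaultdict(deque)

    def produce(self, topic, message):
        self.topics[topic].append(message)

    def consume(self, topic):
        return self.topics[topic].popleft() if self.topics[topic] else None

def scoring_agent(broker):
    """An 'agent in the middle': reads 'orders', writes enriched events to 'fraud'.
    It can be swapped for something smarter later without changing either channel."""
    msg = broker.consume("orders")
    if msg is not None:
        msg["fraud_score"] = 0.1 if msg["amount"] < 100 else 0.9  # toy rule
        broker.produce("fraud", msg)

broker = Broker()
broker.produce("orders", {"user": "alex", "amount": 42})  # end-to-end input
scoring_agent(broker)
result = broker.consume("fraud")                          # end-to-end output
```

The end-to-end contract is just the two channel names; everything between them can grow more agents over time.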
Speaker 11 (34:52):
You were listening to Inside Analysis.
Speaker 3 (35:01):
Welcome back to Inside Analysis. Here's your host, Eric Kavanaugh.
Speaker 8 (35:06):
Let's take this back to the show.
Speaker 9 (35:08):
So you have all these phenomenal smaller models that are
producing state-of-the-art quality answers, but
they are small, you know, say sixty-four gigs of
RAM kind of thing. So, relatively, you can
run it on a single computer, basically, and
once you can do that, you're going to have these
(35:30):
networks of small models that are specializing in things. And
so when you give a flow an end-to-end task,
like, give me a summary, you can have these
models assume personalities. Like, one personality is the program that
is going to optimize the prompt, the other personality is
the thing that is actually going to perform the task.
(35:52):
The last personality is going to make sure that it's
not insulting your customers.
Speaker 8 (35:58):
It's like a safety model, right right, yep, exactly.
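A minimal sketch of that three-personality flow, with plain functions standing in for the small models. The stage names, the prompt rewrite, and the banned-word safety check are all assumptions for illustration, not how any particular model pipeline works.

```python
def optimize_prompt(task: str) -> str:
    # personality 1: tighten the user's task into a better prompt
    return f"Summarize concisely: {task.strip()}"

def perform(prompt: str) -> str:
    # personality 2: the worker "model"; here it just echoes a truncated summary
    return prompt.split(": ", 1)[1][:60]

def safety_check(text: str) -> bool:
    # personality 3: make sure the output isn't insulting a customer
    banned = {"stupid", "idiot"}
    return not any(word in text.lower() for word in banned)

def run_flow(task: str) -> str:
    draft = perform(optimize_prompt(task))
    return draft if safety_check(draft) else "[withheld by safety model]"
```

Each stage could be replaced by a call to a separate small local model without changing the flow's shape.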
Speaker 9 (36:02):
And so guess what, when you're trying to deploy this
to production, the thing that is the long pole for
large enterprises, this Fortune five thousand, it's not the model,
because they're not building the model. First of all, they're
just downloading it from Hugging Face or GitHub or
something like that. So that's not the
long pole. The long pole is: does the CIO
(36:22):
trust it? Is the input and output recorded, hence the
log? Are the right things accessing these audit logs? How
can I do global security policies? Do I
push it to an OpenID Connect provider like Okta to
make sure that these agents are governed, you know,
so I could sort of decommission a fleet of agents in
Speaker 6 (36:41):
governance, data governance, right? Yeah. So those are
Speaker 9 (36:46):
the things that I didn't anticipate would be pillars of
the acceleration of Red Panda in the enterprise, or, you know,
streaming systems in general.
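Those governance pillars, recorded inputs and outputs plus access control, fit in a few lines. The `audited_call` helper, the record schema, and the ACL set below are hypothetical stand-ins, not any real Red Panda or Okta API.

```python
import json
import time

AUDIT_LOG = []  # append-only record of every agent call: the "log" a CIO can inspect

def audited_call(agent_id, allowed_agents, fn, payload):
    """Run fn(payload) for an agent, enforcing a crude ACL and recording input/output."""
    if agent_id not in allowed_agents:
        raise PermissionError(f"agent {agent_id!r} is not authorized")
    result = fn(payload)
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "input": payload,
        "output": result,
    }))
    return result

allowed = {"summarizer"}  # global policy: decommission an agent by removing it here
summary = audited_call("summarizer", allowed, str.upper, "q3 revenue notes")
```

Removing an agent from the `allowed` set is the miniature analogue of decommissioning a fleet of agents via a central identity provider.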
Speaker 6 (36:56):
Oh, interesting. That's fascinating. I get it. I mean, well,
because you have multiple steps, right, in this. I mean,
I look at Mistral. I think that's probably the most
clever architecture, because again, you've got multiple agents that are
very good at certain things.
Speaker 7 (37:09):
And this is what I'm hearing from a lot of people,
like even Variant, which has been around for a good
long while, a pretty big company.
Speaker 6 (37:15):
They talk about their little robots.
Speaker 7 (37:16):
Because I asked the guy, are you going to try
to get your agents to do multiple things?
Speaker 6 (37:20):
He goes, no, we want them to do one thing.
Speaker 7 (37:22):
very, very well, and then they'll be called by another
sort of orchestrating agent. And that makes a whole lot
of sense. But even still, you look at some
of the newer models, like, oh, eighty-one percent accurate.
I'm like, okay, what part of your business can be
wrong twenty percent of the time? Not much, you know.
I mean, that's a big error for something
serious like operations, or fulfillment, or accounting.
Speaker 6 (37:43):
No you know all these things.
Speaker 7 (37:45):
No, no, no. But, you know, in marketing, okay,
marketing for content creation, things of this nature. But to
your point, this new AI agent application fabric, if
you will, needs to have the right mix of data
at the right time, and you can't be doing
these sorts of reaches into multiple different databases on demand
(38:07):
to find out what the history is or something. I mean,
that's what I think is very interesting. So I love
your point about how we thought it was going to
be architected this way and it's turning out to be
that way.
Speaker 6 (38:17):
That's it. That's interesting, you know, because it's
Speaker 7 (38:20):
A challenge for the organizations, and you know, I think
a lot of companies have got to be like, what
are we going to do?
Speaker 6 (38:24):
Like, how are we going to figure this thing out?
Speaker 13 (38:26):
Man?
Speaker 7 (38:26):
Because it's there's so many moving parts and it keeps
changing fast too. That's the other part, right, It's like
it keeps changing.
Speaker 6 (38:33):
I mean, I don't know.
Speaker 7 (38:34):
I look at ChatGPT, and I think this
DeepSeek thing was great news, just throwing
down the gauntlet to these people, because they had gone
down this road. They'd already said, okay, it costs so
much money to train models, so just get used to
the idea. Sam Altman's asking for, what was it, like
three trillion dollars or seven trillion dollars to build out
some whole new infrastructure. Dude, what are you talking about?
Speaker 9 (38:56):
Five hundred billion? It must be great. One day
we hope to be that scale too. But, you know,
the other problem here in the enterprise is this idea of
sovereignty and who has access to your data. That's
really, I think, the major heartburn for the CEO
(39:19):
and the CISO of these large companies. They're like, look,
the model companies are great, they're going to be here
for a while. We're going to send some select use
cases to those people. But I just don't feel, and it
doesn't matter what the legal language says, personally, emotionally or whatever.
I don't think this is a technology thing, really. This
is a trust thing at the moment. Maybe it
(39:43):
changes in the future, and everyone is like, I don't
want to use a vendor. But I'll run
all my infrastructure in Amazon anyway, so I don't pay
for software, but I run all my code on Amazon.
Speaker 8 (39:54):
I'm like, you know, what does this mean?
Speaker 9 (39:55):
So, you know, they just don't feel comfortable
sending their private data to OpenAI or Anthropic or
some of those places, and so what they feel comfortable
with is running the smaller models inside their infrastructure. And
so I think, through tool calling, et cetera, you're going
(40:17):
to need a glue system that looks like a log
with function calling, you know, maybe some support in the environment,
like connectivity. That is why we bought a company a few
months ago around connectors. You're going to need
to bridge internal data, say your
Oracle database for simplicity, with these global expert systems, right,
(40:39):
or these foundation model systems, right.
And so instead of the interaction being, here's the
prompt with all of the raw data context, you can
pass it first through a local model that is great,
and what you send is a digest of
the internal information. And so an example would be: the
the internal informations. And so an example would be the
(41:01):
raw data would be Alex's last five credit card transactions,
if you're trying to determine fraud, and then you can
ask this model, let's say DeepSeek or Meta Llama 3,
like, hey, I want you to summarize this into signals.
And so the signals would be not the
credit card information, not where I live, not where the,
(41:21):
but instead would be the high-level signals, like Alex
had five credit card transactions in downtown San Francisco with
the timestamps, and it doesn't really say much. It just
says about five things, right. So maybe the
level of sensitivity changes from, here's the raw
five transactions, which have Alex's address and his Social
Security number and his full credit card data. And
(41:44):
then, once it's gone through this prism filter
of this local model, you can even use a
foundation model. And so it's fascinating seeing the world change
in real time, pun not intended, where, you know,
people are trying to reason about, how do I
think about the world differently? But the meta point here
in real time. Pun not intended, where like you know,
people are like trying to reason about how do I
think about the world differently? But the metal point here
(42:04):
is that you need a thing that's going to glue
the world. And that thing, even if people write it
themselves, it doesn't matter if it's Red Panda
or not, it's going to look a whole lot like
Red Panda, right. Like, you're going to have to write
these things durably to disk, you're going to have
to figure out how to do HA, you're going to
have to figure out how to do ACLs and, you know,
centralized identity management and deployments and multi-region and clouds
(42:27):
and serverless and multi-tenancy and blah blah blah blah,
all of the things that are hard. You're going to
have to do all that, and, by the way, those are
not differentiators for people. And so it's been fascinating to
see the adoption of streaming as the glue that
is going to make the future of agentic workloads.
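The "prism filter" flow just described, raw records in, high-level signals out, might look like this in miniature. The field names, the made-up values, and the `local_digest` function are stand-ins for what a locally hosted model would produce before anything is sent to an external foundation model.

```python
# Raw records with PII that must never leave the building (all values made up).
RAW_TRANSACTIONS = [
    {"card": "4111-1111-1111-1111", "ssn": "123-45-6789",
     "city": "San Francisco", "amount": 25.0, "ts": "2025-01-01T09:00"},
    {"card": "4111-1111-1111-1111", "ssn": "123-45-6789",
     "city": "San Francisco", "amount": 30.0, "ts": "2025-01-01T09:05"},
]

def local_digest(transactions):
    """Stand-in for the local model: emit high-level signals, never raw PII."""
    return {
        "n_transactions": len(transactions),
        "cities": sorted({t["city"] for t in transactions}),
        "total_amount": sum(t["amount"] for t in transactions),
        "window": (transactions[0]["ts"], transactions[-1]["ts"]),
    }

# Only this digest, not RAW_TRANSACTIONS, would go to the foundation model.
signals = local_digest(RAW_TRANSACTIONS)
```

The point is that the sensitivity level drops at the boundary: the digest says "five transactions in downtown San Francisco," not the card number or Social Security number.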
Speaker 6 (42:47):
So here's what I'd love to do.
Speaker 7 (42:49):
We should try to get you on a show with,
ideally, David Flynn, and maybe Chris Gladwin. David Flynn is
from Hammerspace, Chris is from Ocient, because we could really get
into these things, right, because these are deep, deep architectural discussions.
Speaker 6 (43:02):
That's what our audience loves.
Speaker 9 (43:04):
I think people tend to want to hear about the
details too. This is the fun part, like the stuff
that we've been talking about.
Speaker 7 (43:10):
Yeah, and we love that. I mean, I don't want
just the high-level conversations. I want to get all
the way into the weeds. And, you know,
luckily we recorded all this, so I can go through
and ask, what was he saying there, and just
kind of unpack it. But, you know, when
you're talking about sovereignty, that's Hammerspace. That's what they
solve, because you have this parallel file system which basically
(43:30):
sits over all of your other file systems. So whether
it's S3, Google Cloud Platform, on-prem, who cares,
it doesn't matter. Now you can see all of your files and basically
Now you can see all of your files and basically
have this layer of abstraction, single pane of glass for
managing them. And just the for example, that the copies problem,
like what is it the average is nine or ten
copies of a certain document you have like spread all
(43:52):
around because of different backup strategies over the years and
migrations or whatever. That's the reality of de facto information
ARCT textures. Well, they can solve that problem. And then
you know, by the way, they're also feeding massive amounts
of data to your GPUs to train these models or
whatever it is you need done.
Speaker 6 (44:11):
And you know as well as I do, if you don't.
Speaker 7 (44:14):
have a view over everything, whatever you're not seeing is
where the problem could possibly be, right. So data governance,
per se, I mean, you know, until five, seven years ago,
Speaker 6 (44:24):
It was kind of a joke. It's like, Okay, you can.
Speaker 7 (44:26):
control access to a database, you can control access
to an app, and that was about it. There wasn't
a whole lot in between that you could use to
achieve de facto data governance. And now you can do that.
Now we can actually pull that stuff off. So, you know,
it's kind of amazing how so many developments have happened.
And to your point, and I think this is the
hot topic for you guys, is this new real time
(44:50):
application architecture. How do you feed these agents the context
they need to make those decisions in a tight window.
And I love this concept of the sort of intermediary
model, right, where it finds the signals that can then
be used by the foundation model to transact something where
you're not divulging important enterprise information, PII, whatever, I mean.
(45:11):
I think that's a very very interesting thread that we
could build something around. Frankly, we'll be talking to you
next time. You've been listening to Inside Analysis.
Speaker 14 (45:22):
Get all the facts all you need to know on
KCAA radio.
Speaker 12 (45:29):
From the Bureau of Economic Geology. This is Earth Date.
Mark your calendars and buy some eye protection because on
August twenty first, twenty seventeen, the US will experience a
total eclipse of the sun. A total eclipse happens when
the Moon passes perfectly in front of the Sun, blocking
(45:50):
its view from the Earth. Only a small path across
the US will see the Sun completely obscured, but that
path of totality stretches from Oregon to South Carolina, passing
over nearly fifty million people. If you were to view
the eclipse from space, you'd see a small moonshadow seventy
miles wide, cast upon the surface of Earth and moving
quickly across it at fifteen hundred miles an hour. From Earth,
(46:14):
you can experience this moving shadow too. If you're lucky
enough to witness the total eclipse from a mountain top,
you'll see the moonshadow racing towards you from miles away.
Eventually it will engulf you, Darkness will fall, the temperature
will drop ten to fifteen degrees. You'll see what looks
like sunset on all horizons. Birds will stop chirping, and
(46:34):
crickets may start, But don't blink. This ethereal scene will
last only about two and a half minutes before the
shadow races on. The experience can be awe inspiring, even unnerving,
and you'll understand how early cultures would have given powerful
religious significance to a total eclipse. If seeing one is
on your bucket list, head to the Path of Totality
(46:56):
on August twenty first, find some high ground and look
to the heavens, wearing the right eye protection. Of course,
I'm Scott Tinker, and I hope you'll enjoy this coming
date on Earth.
Speaker 15 (47:07):
Earth Date is produced by the Bureau of Economic Geology
at the University of Texas at Austin. Earth Date is
researched by Julie Hunnings, written by Harry Lynch, and distributed
by Mark Blunt and Casey Walker. For more stories, follow
us on Facebook or visit EarthDate dot org.
Speaker 13 (47:27):
What is your plan for your beneficiaries to manage your
final expenses when you pass away? Life insurance, annuities, bank accounts,
investment accounts: all require a death certificate, which takes ten days
based on the national average, which means no money is immediately
available, and this causes
Speaker 8 (47:47):
Stress and arguments.
Speaker 13 (47:49):
Simple solution: the beneficiary liquidity plan. Use money you already
have, no need to come up with additional funds. The
funds grow tax-deferred and pass tax-free to your
named beneficiary.
Speaker 16 (48:03):
The death benefit is paid out in twenty-four to
forty-eight hours without a death certificate. You heard me right:
without a death certificate. Call one eight hundred, three
zero six, fifty eighty-six.
Speaker 17 (48:18):
Tune into The Faran Dozier Show, music that marks place in time,
the soundtrack to life, Sunday nights at eight pm on
KCAA Radio, playing the hottest hits and the coolest conversations,
Sunday nights at eight pm on The Faran Dozier Show, with
an array of music, talk, sports, community outreach, and veteran resources.
Speaker 8 (48:39):
The hits from the.
Speaker 17 (48:40):
sixties, seventies, eighties, nineties, and today's hits. The Faran Dozier Show
on KCAA Radio, on all available streaming platforms, and on
one oh six point five FM and ten fifty AM. The Faran
Dozier Show on KCAA Radio.
Speaker 4 (49:10):
KCAA, where life's much better. So download the app on
your smart device today. Listen everywhere and anywhere, whether you're
in Southern California, Texas, or sailing on the Gulf of Mexico.
Life's a breeze with KCAA. Download the app on your smart
device today.
Speaker 18 (49:28):
I'll be saving yesterday in the dumpu.
Speaker 1 (49:35):
KCAA.
Speaker 19 (49:46):
With sixty years of fascinating facts, this is The Man
from Yesterday, and back in time we go, to this
time in nineteen fifty-eight. Sandra Dee's rise to fame
is coming quickly. The fifteen-year-old began modeling at twelve.
She was a Girl Scout and modeled at one of
her troop's benefit fashion shows, where a talent agent spotted her.
Speaker 18 (50:11):
I'm Sa'na benzon Win, I can't, I'm.
Speaker 19 (50:22):
Sandra. And from this time in nineteen sixty-two, CBS
announces that news correspondent Walter Cronkite will replace Douglas
Edwards on its early evening television news program, beginning next
month, in April of nineteen sixty-two.
Speaker 3 (50:38):
This is the CBS Evening News with Walter Cronkite.
Speaker 1 (50:42):
And that's the way it is.
Speaker 2 (50:43):
Thursday, March seventeenth, nineteen seventy-seven. This is Walter Cronkite,
CBS News. Good
Speaker 19 (50:48):
Night, And from this time in nineteen sixty now playing
in theaters, Sink the Bismark, Although not in the movie,
the hit song Sink the Bismarck is moving off the
charts by Johnny Horton.
Speaker 8 (51:00):
They find that terminate battle of ship as they get Sunship,
buss redouta Synctopis Marcos.
Speaker 19 (51:06):
The World and Ben sold us, with more at Man
from Yesterday dot com.
Speaker 1 (51:14):
The Taheebo Tea Club's original pure pau d'arco super tea helps build
red corpuscles in the blood, which carry oxygen to our
organs and cells. Our organs and cells need oxygen to
regenerate themselves. The immune system needs oxygen to develop, and
cancer dies in oxygen. So the tea is great for
healthy people, because it helps build the immune system, and
it can truly be miraculous for someone fighting a potentially
(51:37):
life-threatening disease due to an infection, diabetes, or cancer.
The tea is also organic and naturally caffeine-free. A
one-pound package of tea is forty-nine ninety-five,
which includes shipping. To order, please visit Taheebo Tea Club
dot com. Taheebo is spelled T like Tom, A,
H, E, E, B like boy, O. Then continue with the
word tea and then the word club. The complete way
(52:00):
to sign on is Taheebo Tea Club dot com, or call
us at eight one eight, six one zero, eight zero
eight eight, Monday through Saturday, nine am to five pm
California time. That's eight one eight, six one zero, eight
zero eight eight. Taheebo Tea Club dot com.
Speaker 20 (52:14):
Would you like to safely leverage bank money to earn double-digit
returns, income tax-free, with guarantees and no downside
market risk?
Speaker 6 (52:23):
How can you do this?
Speaker 20 (52:24):
This is Farreence, host of the Your Personal Bank Show.
One: you fund a high cash value policy one time
to earn dividends and interest. Two: establish a bank line
of credit using the cash in your policy as collateral.
When you earn more in dividends from your policy than
the interest the bank charges, you keep the difference, and
the difference has averaged two to five percent annually in
(52:45):
your favor for the past forty-plus years. Three: the
bank funds contributions years two through twenty-plus. Each year
the bank adds funds, your rate of return increases. Your
average rate of return can grow to strong double digits
annually within a few years. Contact Your Personal Bank dot
com, Your Personal Bank dot com, or eight six six,
two six eight, four four two two; eight six six,
(53:07):
two six eight, four four two two for more info,
or tune into the Your Personal Bank Show.
Speaker 3 (53:13):
Your Personal Bank Show airs Tuesdays at four pm right
here on case EAA ten fifty am and one oh
six point five FM, the station that leaves no listeners behind.
Speaker 2 (53:24):
Now here's a new concept: digital network advertising for businesses.
Display your ad inside their building. If a picture is
worth a thousand words, your company is going to thrive
with digital network advertising. Choose your marketing sites, or jump
on the DNA system and advertise with all participants. Your
(53:45):
business ad or logo is rotated multiple times an hour
inside local businesses, where people will discover your company. Digital
network advertising, DNA, a novel way to be seen and remembered.
Digital network advertising, with networks in Redlands and Yucaipa. Call,
in the nine oh nine area, two two two, nine
(54:06):
two nine three for introductory pricing. That's nine oh nine,
two two two, nine two nine three for digital network advertising.
One last time: digital network advertising, nine oh nine, two
two two, nine two nine three.
Speaker 21 (54:24):
What's wrong with Neil Gorsuch? His soul, I mean. As
one of the domineering right-wing extremists on the Supreme Court,
Gorsuch routinely supports enthroning plutocracy, autocracy, and his own brand
of Christian theocracy over people's democratic rights. But he also
uses his unelected, unchecked judicial position to take power and
(54:47):
justice away from America's least powerful, most vulnerable people, including
the homeless. For example, he ruled last month that an
Oregon city's ban on homeless residents sleeping outdoors was not
cruel and unusual punishment, never mind that the city provided
nowhere else for homeless individuals and families to bed down.
(55:08):
Gorsuch saw no problem with penalizing people who have to
sleep or camp out in parks, on the street, et cetera.
After all, he blithely explained, it was not a ban
on homelessness, but merely on sleeping outdoors. It makes no difference,
exclaimed his supremeness, whether the violator is homeless or a
backpacker on vacation, or, I suppose, say, a
(55:32):
Supreme Court justice sleeping under a bridge. To punctuate his cluelessness,
Gorsuch actually asserted that the law applied equally to everyone, except,
of course, that the homeless can't just go home after
being kicked out from under the bridge. This is Jim Hightower
saying it's rank injustice for Gorsuch, a child
of a politically powerful and rich family, product of Ivy
(55:55):
League schools and high-dollar law firms, possessor of enormous
personal wealth and multiple homes, to dictate let-them-eat-cake
rules for homeless people he'll never know or understand. Yes,
homelessness is a complex social scourge, but cavalierly criminalizing its
victims is itself a crime that solves nothing. Neil is
(56:16):
not morally fit to judge poor people. So how about
replacing him with a homeless person who actually knows something
about real life? The Hightower Radio Lowdown is made
possible by you, the subscribers to Jim Hightower's Lowdown on Substack.
Find us at jimhightower dot substack dot com.
Speaker 1 (56:33):
KCAA Radio has openings for one hour talk shows. If
you want to host a radio show, now is the time.
Make KCAA your flagship station. Our rates are affordable and
our services are second to none. We broadcast to a
population of five million people plus. We stream and podcast
on all major online audio and video systems. If you've
been thinking about broadcasting a weekly radio program on real
(56:57):
radio plus the internet, contact our CEO at two
eight one, five ninety-nine, ninety-eight hundred. Two eight
one, five ninety-nine, ninety-eight hundred. You can skype
your show from your home to our Redlands, California studio,
where our live producers and engineers are ready to work
with you personally. A radio program on KCAA is the
perfect work-from-home avocation in these stressful times. Just
type KCAA radio dot com into your browser to learn
more about hosting a show on the best station in
more about hosting a show on the best station in
the nation, or call our CEO for details to eight
one five ninety nine ninety eight hundred. One of the
best ways to build a healthier local economy is by
shopping locally. Teamster Advantage is a shop local program started
(57:42):
by Teamsters Local nineteen thirty two that has brought together
hundreds of locally owned businesses to provide discounts for residents
who make shopping locally their priority, everything from restaurants like Corkies,
to fun times at SB Raceway, and much, much more.
If you're currently a Teamster and you want access to
(58:02):
these local business discounts, contact Jennifer at nine oh nine,
eight eight nine, eight three seven seven, extension two twenty-four.
Give her a call. That number again is nine oh
nine eight eight nine eight three seven seven extension two
twenty four.
Speaker 10 (58:25):
NBC News Radio. I'm Chris Garragio, Pope Francis remains in
critical condition, with blood tests showing mild signs of kidney failure.
In an update today, the Vatican noted that the Pope
had a mild renal insufficiency, which is under control. The
eighty eight year old pontiff is being treated for double
pneumonia and is receiving oxygen therapy. The Vatican added that
Pope Francis continues to be vigilant and well oriented. He
(58:47):
took part in the Holy Mass from the apartment on
the tenth floor of the Gemelli Hospital this morning. This
is the third time in his twelve year papacy that
he has not delivered the Angelus prayer. Ukraine's President Volodymyr
Zelensky says he's willing to resign in exchange for
peace or NATO membership. Lisa Carton reports.
Speaker 14 (59:03):
The Ukrainian president made the offer at a news conference Sunday, saying, quote,
if it is peace for Ukraine and you really want
me to leave my post, I'm ready. Zelensky also said
he would also trade his position for immediate NATO membership
if it means the safety of his country. This follows
public disputes with President Trump last week after Trump implied
Zelensky was responsible for Ukraine's war with Russia and called
(59:26):
the Ukrainian leader a dictator. Zelensky also insisted that he
does not intend to stay in power for decades.
Speaker 10 (59:33):
Elon Musk is giving federal workers a deadline of midnight
tomorrow to justify their work or they'll be fired. In
a social media post yesterday, Musk said federal employees will
receive an email requesting information about what they worked on
over the last seven days, and any failure to respond
will be taken as a resignation. The move comes after
President Trump said he'd like to see Musk be more
(59:54):
aggressive in his efforts to slash the federal workforce. An
American Airlines flight that took off from New York was
escorted by Italian fighter jets after being diverted to
Rome for security reasons. The flight was on its way
to New Delhi when it requested a flight diversion to
Leonardo da Vinci