Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Private, discreet rides. Airport runs, sightseeing, proms, weddings and
romantic date nights are all ahead in your future with
Lux Transportation. Karaoke? Disability friendly? No problem. We're also
senior citizen friendly too. Call or text Mister Holland
to take away the stress at nine five one,
three nine nine, five four two five. That's nine five
one, three nine nine, five four two five. Lux Transportation.
(00:23):
When you need the absolute.
Speaker 2 (00:24):
Best, Killustina KCAA Loma Linda at one oh six point
five FM, K two ninety three CF Burrito Valley.
Speaker 3 (00:33):
The information economy has arrived. The world is teeming
with innovation as new business models reinvent every industry.
Inside Analysis is your source of information and insights about
how to make the most of this exciting new era.
Learn more at insideanalysis dot com.
And now here's your host, Eric Kavanaugh.
Speaker 4 (00:59):
All right, ladies and gentlemen, hello and welcome to the
only coast to coast radio show all about the information economy.
It's called Inside Analysis. I'm Eric Kavanaugh, here
with my old buddy from ages ago. We haven't seen
each other in like fifteen years. Time flies. We're having fun.
Mark Peco, he is quite the innovator. He's taught at TDWI,
(01:20):
He's taught all sorts of different classes, and he's a
real thinker. And so we were talking earlier about the
whole concept of value and what is value. People say,
when you're pushing a project, oh, there's no value? Well
what do you mean by that? So I'll just throw
it over to you to kind of dive in on that, Mark.
Value is this concept that everyone thinks they know, everyone
thinks they understand, but once you start kind of drilling down,
(01:43):
it can be hard to define, right?
Speaker 5 (01:45):
That's so true, Eric. I guess what got me thinking about
this over the last little while is there are many themes
of blogs and webinars and presentations that say, you know,
the failure rate of data-related projects is pretty high
because value wasn't created. I think, what does that
actually mean, value? I'm not an economist, I'm not a
(02:05):
financial person. I started thinking, how can I think about
value in a way that unifies people's thinking? And I
started thinking value is something I care about. And when
I say I there's different types of people as stakeholders
that care about different things, So how do we think
about value from Department A to Department B to Department C.
(02:27):
That unifies the flow and connection between these. So I've
started thinking about value as something created if
my objectives are met. Those objectives mean I personally
care about this happening, right? And I started to think
about what are the different things we care about in companies.
(02:48):
And I'm taking this further from a data-driven lens, you know,
because I've given talks before, you know, about data governance.
So people say, well, where's the value of data governance?
I think, what are you really asking? What does that mean?
So I started thinking about connectivity and flow in companies,
and I started thinking about value as something that has a
(03:09):
life cycle to it and it exists in different kinds
of stages. And I started thinking about how do I
organize this in a way that's simple and meaningful. So
I coined the term that I'm kind of building out
something called value literacy, right? And value literacy, well, literacy
in anything means I can have a conversation with you
(03:30):
about it because I know the concepts, I know the vocabulary.
So this means if I know how to converse with
my stakeholders, we should have something in common to discuss.
And if we think about the different kinds of people
that work in the data field within a company, it's
very very broad and very very diverse. And I started
(03:51):
thinking about this: how do I organize this in
a way that's kind of simple yet meaningful? And I
started creating. My first thinking was, let's think about value,
like energy, like charging a battery. When you charge
a battery, you're creating potential energy. But if that battery
has never been used for anything, it's there, but nobody's
using it. But when we use that battery in our
(04:13):
phone or in a device, maybe something's now coming on.
And so that's kinetic energy. You can now use your phone.
It's kinetic value because you're turning the potential into something actionable.
But people don't really care about having the phone turned on.
What they care about is booking an order on Amazon
with it, or communicating with their family or watching a video.
(04:35):
So what they really care about is the use you
have because the phone's on, And I call that realized value.
So you go from potential kinetic to realized. Now potential
is like raising data quality, building a data product, building
a report. Kinetic energy or kinetic value happens when you
(04:55):
use that to drive a process. So if I have
information to say, how do I schedule my salespeople,
or how do I allocate my resources, I can now turn
that into a better process. The realized value happens because
you've done that. Now you've got happier customers, you've
got more revenue, and you've turned this into that flow.
(05:15):
But there are two more building blocks of this. There's something
I'm calling structural value, which is really what data governance prepares.
And structural value is the creation of standards and policies,
and they then enable you to do data quality
to raise the potential value of data. But
there's one other thing we care about as companies. That's
being resilient, dealing with risks, dealing with bad weather events,
(05:39):
with technology failing or whatever, or safety issues. So
the fifth one I've called resilient value. Now, each
of these things, or value domains, is created by something
called capabilities, but some value also enables other capabilities.
(06:01):
So if you think about the world of data, think
about data generation as a capability. I get data because
I acquire it. I enter it into my screens. I
measure it from my plant, but somehow I generate data.
Then I have a capability to manage that data. I've got to
store and provision it, integrate it, protect it, make it
(06:22):
all the stuff available for others to use. Those are
both enabled by data governance capabilities, because I
position data governance as the ability to
share data. Now, if I can share data as
it's provisioned, the next capability there is called data consumption.
I can use the data that was provisioned because it
(06:42):
followed the policies, and now I can consume it in
whatever form. I'm a data scientist or an analyst or
whatever I am, and I'm now interpreting the data. I'm
consuming it to make a recommendation to my boss or
a decision maker. The last piece, or not really the last,
but the second last piece, is something called data utilization.
It's taking the data and acting on it. So I've scheduled
(07:02):
my staff at my restaurant better, I've got my salespeople
better allocated, I've got my plant running with more reliability, and
now the output of my company is better. That's the utilization.
It creates the realized value. The last piece of the puzzle,
I believe is something called data leadership capabilities. They're the
ones that set the tone for why this is so important.
(07:23):
So data leadership. Without that, these things won't connect. They're
isolated silos. But if you take a step back, just
like you're charging your battery so you can run your
phone so you can order off of Amazon and have
a nice book to read, that flow, in my view,
is what value can be modeled as. And so if
I'm working in IT, or I'm working in marketing, or I'm
(07:45):
working in HR, we all see this differently. But yet
if we all say we're failing because there's no value
being produced, we need to know which of those five
types of value we are talking about. Because the
word value isn't precise enough, and we just complain,
we whine about it, it's not working, and we point
fingers and we walk away. If we can talk about
(08:05):
it with a nice, simple picture about how it's connected,
we can realize it's breaking here because of that. Interesting.
And that's my premise. And to take this one step further,
I'm working to model this in a graph database. So
some nodes are value items and some nodes are
capability items, and if we can see how they're connected
(08:26):
and go to a workshop with our stakeholders, we can
have conversations around why is it not flowing, where's the bottleneck,
where are the speed bumps? And that's
where I'm kind of at with this.
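The graph idea Mark describes, value items and capability items as nodes with edges showing how each creates or enables the next, can be sketched in plain Python. Everything below is an invented illustration built from the five value domains and capabilities he names, not his actual model:

```python
# Hypothetical sketch of the value/capability graph: capabilities create
# value, and some value enables further capabilities. Node names are
# invented for illustration.
EDGES = [
    ("data governance",  "structural value"),   # capability -> value it creates
    ("structural value", "data management"),    # value -> capability it enables
    ("data management",  "potential value"),
    ("potential value",  "data consumption"),
    ("data consumption", "kinetic value"),
    ("kinetic value",    "data utilization"),
    ("data utilization", "realized value"),
]

def downstream_blocked(edges, broken):
    """Return every node whose flow of value passes through a broken node."""
    children = {}
    for src, dst in edges:
        children.setdefault(src, []).append(dst)
    blocked, stack = set(), [broken]
    while stack:
        for nxt in children.get(stack.pop(), []):
            if nxt not in blocked:
                blocked.add(nxt)
                stack.append(nxt)
    return blocked

# If data management is the bottleneck, nothing downstream gets created:
print(sorted(downstream_blocked(EDGES, "data management")))
```

Running that check for each capability in a workshop would show stakeholders exactly which kinds of value stall when a given capability breaks.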
Speaker 4 (08:40):
This is very interesting. I like the fact that you're
trying to dig into a better understanding of what value
really is because typically when people think value, they think, Okay, ROI,
what's the return on investment for this data governance project?
And the answer would be something like, well, our customer
satisfaction has gone up twenty percent. Our upsell capability has
(09:02):
gone up because the customers are more satisfied. So that's
the case where you can kind of track some monetary
value from the blocking and tackling work you did, which
is not very fun, improving your data quality. That's true ROI.
But to a deeper point, I think you are enabling
much more productive conversations about what to do next, because
(09:22):
that's really what every business person is trying to figure out,
what do I do next in order to get new
business, satisfy the business I have, whatever the case may be.
it's always pretty darn unwieldy, whether it is trying to
figure out a new market to approach, trying to figure
out a new product to roll out, a new service
to roll out. These are deep gritty questions that you
(09:44):
don't get answers to very easily, very quickly. And then
the other side of the equation is you kind of
never know. Like, I remember when I learned the concept
of opportunity cost. I was like, oh, wow, yeah, that's
pretty interesting. So if I'm working on this, I'm not
working on that. There's a cost of not working on
something as opposed to cost of working on something. So
what is the opportunity cost? What are you missing? All
(10:05):
these things come out. And I'll give you an interesting example
I learned at a risk management conference years ago. I was
talking to these big bankers and they're all trying to
figure out how to hedge their bets, how to move
forward in a safe but effective manner in a very
dynamic world, which is financial services. And I said, you know,
don't you find that sometimes, like voting a certain way
(10:28):
will cause personal tension, Like people take things personally and
they get upset about stuff. And the guy said,
that's a very good point. That's why on our boards
we either concur or don't concur. So it's like we
don't even really say yes or no. We either concur
with the assessment that you've made, or we don't concur.
So it's a softer way of kind of getting around
(10:49):
the rough spots of conversations. And I kind of feel
like what you're building is that softening but clarifying structure,
almost like an emulsifier or something. What do you think?
Speaker 5 (11:00):
I agree. I think it stimulates a conversation.
I think if you don't have literacy on a topic,
you can't converse about it. I'm a real basketball fan,
so if I'm talking to a person about basketball and
they don't understand what's important in basketball, they may know
the words, but we can't have a conversation because they
don't know what's important, right? And so you can have literacy
(11:20):
on any topic, and you need concepts and vocabulary and
some sort of an underlying model that kind of connects them.
Speaker 4 (11:28):
Right, that's cool. That's cool. So now you're building this out.
Will this be something you'll give speeches about and consult
with organizations on, essentially? Is that the vision?
Speaker 5 (11:38):
Well, I've been working on this now
on a part-time basis, maybe since late last year,
maybe around November I started. I've been thinking about it
for a long time, but I got kind of really
serious about it late last year. My goal is to
build maybe workshop content where we can assess and diagnose
where your value is trapped and why. So rather than complain about
value not happening, let's assess it in terms of what
(11:59):
to do about it. So I'd like to turn
it into workshop material. I've already given a presentation last
week to a group of CEOs to kind of position
the idea. So that's my goal is to build education
content right now. Maybe it's some consulting and definitely workshop
content, those kinds of things, right? But in parallel, one
more thing: I think the word
(12:20):
data literacy is kind of too narrow. I think the word
data storytelling is too narrow. I do a
lot of work in data storytelling, but I think it
is better talking about value storytelling because if we can
talk about value, we can have a conversation at a
higher level. Data is simply a component. It's not the
be-all and end-all. It's the value we're trying
to create as it flows through the company. So
(12:42):
that's my premise: to visualize this in simple
diagrams that show where it's stuck, where it's broken,
where it's not working, and ultimately where it does work,
so we can kind of see how
to fix problems.
Speaker 4 (12:52):
Yeah, I think it's absolutely fascinating. And you know, I'll
throw out another interesting tidbit here, which is, at the
end of the day, companies consist of people, processes, objects, assets,
and it's all driven by ideas. And you look at
some of the most successful companies, they pivoted hard at
(13:14):
some point in time. You look at Amazon, they were
all about being a book delivery service basically, and they
figured out, wow, we just built a fantastic supply chain
system essentially, or however you want to describe it,
and they just pivoted to be selling whatever you want
to sell and creating an interface that anyone could come
along and sell. And that's because they obviously had a
(13:34):
culture where ideas could flourish. You could share an idea
and you can really get somewhere with it. And I
think your approach to understanding value can help organizations get
that kind of atmosphere in that kind of culture, because
that's when you succeed. I mean, if you've got a
company of even ten people, that's a whole lot of knowledge.
(13:54):
It's a whole lot of moxie and savoir faire and
problems and issues and personality, whatever it is. There's a
lot of bad stuff, for sure, But if you can
work around the bad stuff and get to the good stuff,
that happens in honest conversations in the moment, referencing real assets,
real data, real ideas, real things that are needed. And
(14:15):
to do that in a collaborative way, in a collegial way,
that's the key. And I view your work in this
value matrix as being very facilitating to that goal.
Speaker 5 (14:28):
Absolutely. My whole point is to enable communication,
enable communication, and you can't communicate if you don't have literacy.
So where do you start? What are the concepts, what's
the vocabulary? How are they connected? And now, starting
to have a conversation in this space, we have some
building blocks to work on. Now this is a work
in progress. It's early days, so it's not refined yet.
(14:50):
There are probably other kinds of value I'm not considering, right? But
again I'm not an economist, I'm not a financial analyst.
So some people say this is too simplistic, but I'm
thinking if it helps break the deadlock of conversation,
then it's got value, using that word again; it's
got value to people because you can now solve problems
without being an economist or a financial analyst.
Speaker 4 (15:10):
Right. And I like that you do this
analogy to energy. Like, I remember learning potential energy and
kinetic energy. So you've got potential value, you've got kinetic value,
and then you've got realized value, right? Then you
also had resilience value and leadership value. Is that right?
Speaker 5 (15:27):
Structural. Structural is the fifth one.
Speaker 4 (15:29):
Structural, yes. Go back on that again. What is that?
Speaker 5 (15:31):
So, structural is laying the frameworks, the policies, the things
that kind of keep us structured.
Speaker 4 (15:38):
In the company, the data governance program?
Speaker 5 (15:41):
The primary output of governance to me is structural value. It then
enables data management to raise the quality based on that
those structural policies and standards. And so when people say what's
the value of governance, it produces structural value. Right, And
if you know what that is, let's talk about it.
Because that will tell you if it's successful, right? Things
(16:01):
downstream of that will depend on it.
Speaker 4 (16:03):
Right. Well, and, you know, what's the old expression,
the fish rots from the head down, right? So whoever
is at the top, if they're bad, if
they're doing bad things and making bad decisions, that's going
to reflect all the way through the organization, and that's going
to lead you to an unpleasant place. But the key,
I think is for extant organizations to leverage this kind
(16:24):
of thinking to understand where are we, where can we
go next? Realistically? What's plausible with our assets, with our people,
with our resources, what can we realistically hope to accomplish.
That's a big question these days, and especially it's a
big question because now with AI and all these technologies
and all the data that's available to us, you can
spin up a business model quickly. Before the cloud, it used
(16:46):
to be that you needed ten or twelve people.
Now they can do it with two people. So you can
spin up ideas quickly, but how do they scale?
Speaker 6 (16:53):
Well?
Speaker 4 (16:53):
In order to scale, you need that structural value, right?
Speaker 5 (16:58):
Correct. You know, if you're building anything, you need structure
to connect things, to have standard ways of doing things,
and if you don't standardize, you can't scale, right? You know,
standardizing enables the scaling, having some way you're doing
things in a standard way.
Speaker 4 (17:10):
Right, standard processes, standard procedures, standard file formats. I mean,
if you get all the way down to where rubber
meets the road, standard nomenclature for columns and things of
this nature. If you do things properly, it's easier down
the road to integrate and mix and match, right?
Speaker 5 (17:27):
And I think one of my big goals here is
also to get people out of their silos. Yeah, I
think one of the biggest barriers, and what I kind of
attribute these problems to, and I mean this
is not new to anybody, is siloed thinking, right? One
department thinks they're successful and the next one down the
hall doesn't, right? Well, then are we one company
or are we two companies?
Speaker 7 (17:47):
Right?
Speaker 4 (17:47):
Right?
Speaker 5 (17:48):
And that's the whole goal here, is to connect the dots,
get rid of silos, and understand how things
should flow, independent of your org structure, because that
can change.
Speaker 4 (17:57):
That's right. Well, and I'm reminded of something I learned
a long time ago from Maureen Clarry and Kelly Gilmore
of CONNECT: The Knowledge Network. They used to teach here
at TDWI all the time, and she taught me a
lot about organizational hierarchies and structures and talking to each other.
And what you want are all the managers talking to
each other across divisions. This sort of top down communications
(18:20):
structure that we've all become acclimated to is kind of
going out the window. And you're gonna need to be
talking left and talking right to people near you and
around you. In addition to understanding what your boss wants,
what your customers need, you really want to get that
context from around you. That's in the structural value, it's
in the systemic value. Well, look this gentleman up online,
Mark Peco. He is a TDWI instructor. We'll be
(18:42):
right back. You're listening to Inside Analysis.
Speaker 3 (18:50):
Welcome back to Inside Analysis. Here's your host, Eric Kavanaugh.
Speaker 4 (18:58):
All right, folks, back here on Inside Analysis. And
now I've got Josh Hicks with me from a cool
company called JumpMind. They're right down the street from me.
I'm in Pittsburgh. They're in Columbus, Ohio, doing really innovative stuff.
They're as old as DM Radio too, seventeen years old,
and they do a lot of data replication. It's just
one thing they do, but they do data replication extremely well.
(19:18):
We're not going to mention any company names. We're talking
thousands of stores, real time updates. And Josh, you were
telling me one of the real challenges with data replication
is when things go wrong, like a network goes down,
a store closes because of a hurricane. Well, these are
all real serious issues. When you're trying to track thousands
of data points and get them information about all their
(19:41):
products like updates and pricing or whatever. To be able
to do that seamlessly, that is a real magic bullet.
Tell us about JumpMind and what you guys are
working on.
Speaker 6 (19:49):
Yeah, so, like Eric said, we specialize in data replication across
a lot of different endpoints, a lot of different platforms now,
but it started with a lot of retailers having the
problem of being able to move this data between all
these locations and dealing with things that happen, network issues
and communication problems and low bandwidth and bad networks and
(20:12):
things like that. And so the happy path of replication
can be pretty straightforward and a lot of people can
probably work through it basically on their own with a
little bit of tech experience. But when you're dealing with
things like unexpected outages, and when you
have a node offline, you could have conflicts. So dealing
with conflicts and not causing backlogs in the replication and
(20:35):
being able to be resilient to networks that are down
and stores that are offline, or sites that are offline,
or databases that are offline. Because you're not really focused
anymore on just the retail platform. It's pretty much used
everywhere data is needed and moved and migrated, and with
so many platforms out there now, it's not just relational
(20:56):
databases we're dealing with anymore. It's a lot of different
platforms, with the NoSQLs and the warehouses, and
the cloud came along, right? So there's a lot of
benefits to utilizing lots of these data sources for different reasons,
and so moving that data between them is kind of
where we come in.
Speaker 4 (21:13):
Well, right. And we were talking about the history
of the database industry and how you know, twenty years
ago there were like five choices you could make. Now
there are four hundred and twenty five choices you can make.
I mean, it is that diverse out there, and let's
face it, individual companies can have any number of technologies
in play. But one thing I've always loved about effective
(21:35):
information architectures is that companies figure out where we're going
to keep data, how we're going to move data, how
we're going to use data. And you have these layers
of abstraction where something in the middle can handle any
source and handle any target. And that's the idea. That's
where you are. You're the intermediary, almost like the middleware,
as we've called it over the years, where you got
(21:56):
all these source systems. I got an Oracle database, I
got a Postgres database, I got a Sybase database, and
then I have all these point solutions, all these targets
where I want to get information to the stores, to
the manufacturers, whatever. If you try to do point to
point for every one of those, you're going to jump
off a cliff. You are not going to make it.
That's the unhappy path you talk about, yeah?
Speaker 6 (22:16):
Right, yeah. And being able to merge
all that together and not be vendor locked into a
specific product. Right, You're having a tool that has the
cross platform flexibility like we do. We don't really care
what platforms you're using. We just want to be an
option and we want to work with all those options
that you define how your data should move, where it
should move to, what platforms you choose, right? And that
(22:39):
becomes a big deal when keeping more
than one or two of them in sync, because now
you have to keep the order in which those transactions
occur, which is very important, and some might not be available when
others are. So keeping all that state in sync and
the history of what data has moved and what hasn't moved,
and then it needs to be performant.
Speaker 4 (23:00):
Exactly, right, managing state. I try to explain state
to people. It's a very interesting concept. The easy example
I give is when you are online, let's say on Amazon,
you're shopping for stuff and you're searching around, and then
you do the magical thing, which is I want to
go ahead and buy this stuff, and so you hit
Go to Cart, and now the state is, okay, I'm
(23:21):
in the state of getting ready to take the money. Right,
and if you say, oh no, wait, I want to
go find something, you're getting out of that state. Now
you're going back into your searching state to find things
to build your cart. But the point is when you
are in that cart and about to hit the purchase button,
that is a certain state. And what you talk about
makes a lot of sense of when things go down.
So if I'm logged in, I'm about to hit purchase
(23:43):
and the system goes down, my power goes out, or whatever,
where is that state captured and persisted such that
can log back in and get to where I was. Right,
And that's what you're talking about. Right, when things go down,
you need some ledger basically or matrix that's monitoring where
things were at the time of the failure so you
can pick back up and not miss a beat and
(24:05):
not miss something right.
Speaker 6 (24:06):
Yeah. The nature of how the product fundamentally
works is it will retain changes until they're committed at
their target. And now, obviously, commit is more
of a database term, but we deal with some other
platforms like the Kafkas and things like that. But being
able to make sure that that is committed before you
move on, and that's very important because we must retain
(24:29):
that and keep that state and know that where that
state of that information is until we've gotten a full
acknowledgement that it's committed. Right, and we have processes that
purge that stuff out once that state is resolved.
And there are all kinds of bells and whistles to turn,
and you can set different retention periods for how long
you keep history of that. But in a happy path,
(24:51):
you know, you don't necessarily need to keep
it around real long. If things are moving but something
comes offline, you need to be able to maintain that
state and play back hours potentially.
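The retain-until-committed behavior Josh describes, holding changes until the target acknowledges the commit and purging committed history on a retention window, might look roughly like this sketch. All class and method names here are invented for illustration; this is not JumpMind's actual API:

```python
# Minimal sketch of an outbox that retains changes until the target
# acknowledges the commit, then purges history on a retention window.
# Names are invented for illustration, not JumpMind's actual API.
import time

class Outbox:
    def __init__(self, retention_seconds=3600):
        self.retention = retention_seconds
        self.pending = []        # changes not yet committed at the target
        self.committed = []      # (change, commit_time) kept for history

    def capture(self, change):
        self.pending.append(change)

    def acknowledge(self, change):
        """Target confirmed the commit; move the change out of pending."""
        self.pending.remove(change)
        self.committed.append((change, time.time()))

    def purge(self, now=None):
        """Drop committed history older than the retention window."""
        now = time.time() if now is None else now
        self.committed = [(c, t) for c, t in self.committed
                          if now - t < self.retention]

    def replay(self):
        """Everything still pending can be played back after an outage."""
        return list(self.pending)

box = Outbox(retention_seconds=60)
box.capture({"sku": "A1", "price": 9.99})
box.capture({"sku": "B2", "price": 4.50})
box.acknowledge({"sku": "A1", "price": 9.99})   # target committed this one
print(box.replay())   # the B2 change survives the outage for playback
```

The key property is that a change is never dropped between capture and acknowledgement, which is what makes hours of playback after an outage possible.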
Speaker 4 (25:01):
Right, right, the unhappy path. Now, that's such a
good point. And you know, I'm
reminded again that if you have a good architecture and
you know where your marshaling area is for moving things around,
that allows you the flexibility to bring on different source
systems and different target systems. And I did want to
explain the commit, like a two phase commit is something
(25:24):
I think a lot of people can understand in the
banking world. If I send you one hundred dollars through Venmo,
the request goes through: take one hundred dollars out
of my account, put it in his account. I need
to get a message back that says, Okay, we got it.
It's now in his account. That is a two phase
commit right.
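Eric's Venmo example can be sketched as a toy two-phase commit: both accounts must vote yes in a prepare phase before either side makes its change permanent. This is a deliberately simplified illustration with invented names, not a production protocol:

```python
# Toy two-phase commit: phase 1 stages the change on both participants,
# phase 2 commits only if every participant voted yes. Simplified sketch.
class Account:
    def __init__(self, balance):
        self.balance = balance
        self._staged = 0

    def prepare(self, delta):
        """Phase 1: vote yes only if the change could be applied."""
        if self.balance + delta < 0:
            return False
        self._staged = delta
        return True

    def commit(self):
        """Phase 2: make the staged change permanent."""
        self.balance += self._staged
        self._staged = 0

    def abort(self):
        self._staged = 0

def transfer(src, dst, amount):
    if src.prepare(-amount) and dst.prepare(amount):
        src.commit(); dst.commit()
        return True
    src.abort(); dst.abort()
    return False

mine, yours = Account(250), Account(0)
transfer(mine, yours, 100)
print(mine.balance, yours.balance)   # 150 100: the money left one account
                                     # and arrived in the other, or neither
```

Either both sides change or neither does, which is exactly the guarantee that avoids the money-out-but-not-in problem Eric describes next.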
Speaker 6 (25:40):
Yeah, very, very common.
Speaker 4 (25:43):
We both understand I now have the money, and you
know that I have the money, right, as opposed to, well,
the money's out of my account, but it's not in
your account. That's a problem. That's the unhappy path for
money, man.
Speaker 6 (25:55):
With relational databases, yeah, we are asynchronous, because a
lot of people say, you know, I want a synchronous solution,
but don't really know what that means. Because you don't
necessarily want to open a connection on your source and
wait until it's committed on the target. That could hold
up your user experience. So making that asynchronous is a
big difference, and it's actually more performant for the user
(26:17):
because it allows them to finish the user experience on
the source, gather the data asynchronously, and process it at the target.
Now, it could be very fast and feel like it's synchronous,
but it's important that if there is any failure along
that pipeline, you're not holding up the user experience, right,
And that's a big deal when it comes to data replication, right,
(26:40):
not to affect the user. That's a lot of times
what the first question is, how is this going to
affect my users.
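The asynchronous pattern Josh describes, where the source transaction returns immediately and a background worker drains a change log toward the target, could be sketched like this. The queue-and-thread setup is an invented illustration, not how the product is actually built:

```python
# Sketch of asynchronous replication: the user's write returns right away,
# and a background worker applies the change at the target in order, so a
# slow or failed target never holds up the user experience.
import queue, threading

change_log = queue.Queue()
target = []

def user_writes(row):
    """Source side: enqueue the change and return immediately."""
    change_log.put(row)
    return "ok"          # the user experience finishes without waiting

def replicator():
    """Background side: drain the change log toward the target in order."""
    while True:
        row = change_log.get()
        if row is None:          # shutdown sentinel
            break
        target.append(row)       # 'commit' at the target
        change_log.task_done()

worker = threading.Thread(target=replicator, daemon=True)
worker.start()

print(user_writes({"order": 1}))   # returns "ok" immediately
print(user_writes({"order": 2}))
change_log.put(None)
worker.join()
print(target)                      # both rows arrived, in order
```

The queue preserves ordering, which matters because, as Josh noted earlier, the order in which transactions occur has to be kept when applying them at the target.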
Speaker 4 (26:45):
Well, right. I mean, everyone has experienced it, either in
the store or on the phone with a customer service center.
And what does the person say? Oh, our systems are
running slow today. Well, why is that? It could be
the database, it could be the network, it could be data replication,
it could be any number of things. And boy, the
more interconnected we get in the cloud, the more things
(27:08):
that can go wrong. I mean, there are
all sorts of things in between.
Speaker 6 (27:12):
As we've evolved in our experience with replication, we've
gathered more and more information and stats along the way,
which is really beneficial for pinpointing, when there's a problem,
where the bottleneck is.
Speaker 4 (27:24):
You know?
Speaker 6 (27:25):
Is it the extraction? Is it the transfer? Is it the load?
Is it somewhere in between?
Speaker 5 (27:29):
Is it? You know?
Speaker 6 (27:30):
What failure points, what retries did we have? We're gathering
all of those statistic points, which helps paint the bigger
picture. Now, again, back to the happy path. Those
may look real linear and be real smooth, but when
you start to grow, maybe you add more bandwidth,
you have more stores, you have more offices, you have
more data, and you start pushing more through the pipe.
(27:53):
You'll start to expose where the bottlenecks are. Maybe there
weren't bottlenecks first quarter, right? Second quarter, you've upped the amount
of users, you've added customers, you've added more data through
your pipeline, and being able to quickly analyze where those
fragments are and where those performance breakdowns are allows you
to get in and make adjustments well.
Speaker 4 (28:14):
And you bring up an excellent point about the history
and the tenure that JumpMind has, because over time
you do gather data, and so you'll understand when this
system connects to that system, we've had some problems. When
that system connects over here, we've had some problems. I mean,
there was the whole disastrous CrowdStrike crash a few months ago,
(28:37):
which was apparently because there was an update pushed and
they hadn't done a full null check and there was
one of those little formulas in there that just zapped
the systems and brought them crashing down. I mean, that's
the kind of thing that should not have happened, to be
blunt. I mean, I still haven't gotten to the bottom
of what really happened there, but the point is that
if you have been around for a while, as you
(28:58):
guys have you can capture lots of metadata and understand
what is a reasonable amount of time for this process
to execute.
Speaker 6 (29:05):
And actually a feature that's kind of interesting that we've
added to the product and about the last year and
a half. We call them insights, and what it's doing
is there's several of these that are running in the
background and they're watching trends. They're watching those. So we
started by capturing the metrics because that's the first pain point,
is to be able to gather certain metrics along the way.
(29:26):
But now these insights are running and trying to fire
and do kind of an AI analysis: Hey,
we noticed this normally takes this much time to push
this much data. You're up one hundred and fifty percent.
Speaker 4 (29:38):
Right, what's going on?
Speaker 6 (29:39):
Yeah, it can bring it to your forefront and say
these trends are happening. And then in some of these insights,
we can actually provide you an approval. It might
be we want to turn a dial
behind the scenes: we want to up some timeout,
we want to up a threshold, increase your connection pool.
So for some of those
(30:02):
metrics we have settings. But, you know, in the past
that might involve consulting, right? Go
in and analyze this. Now with these insights, they're
kind of bringing it to your attention: Here, we noticed
something that's abnormal. Approve, and we're going to tweak this dial
for you and adjust it. You know, maybe we'll up
(30:22):
your connection.
Speaker 4 (30:22):
Pool right now.
Speaker 6 (30:24):
Some people may or may not know what that means,
but you know, at least it puts it in front
of them to say, hey, this is what we'd like
to do.
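The kind of insight being described, comparing the latest run against its historical baseline and flagging a big deviation, could be sketched like this. The 50% default threshold and the suggestion text are assumptions for illustration, not the product's actual values:

```python
from statistics import mean

def insight(history, latest, threshold=0.5):
    """Flag when the latest run exceeds the baseline of prior runs by
    more than `threshold` (0.5 = 50% slower). Returns None when normal."""
    baseline = mean(history)
    change = (latest - baseline) / baseline
    if change > threshold:
        return {
            "pct_over_baseline": round(change * 100),
            # A hypothetical remediation, mirroring the dials mentioned above.
            "suggestion": "raise timeout / increase connection pool",
        }
    return None
```

With a baseline of 10-minute runs, a 25-minute run reports `pct_over_baseline: 150`, matching the "you're up one hundred and fifty percent" example.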
Speaker 4 (30:31):
Well, you know the old expression what gets measured gets
managed, exactly. And what I love, especially because these systems
are so complex and so interdependent. Now, yes, and there
are so many things that can go wrong, and you're
always like, is it my network? Is it me? I'm
the one who's doing the speed test. Like, maybe it's
me, maybe it's the other team, right? It's like blaming the tool, right?
(30:51):
That's the old "the poor workman blames his tools." But when you
can see it, and guess what, that also educates
the user, so now they know a little bit more
about what's actually happening. And it's like I found that
people will be very reasonable as long as they know
what's happening. It's when you don't know what's happening that
it gets very dicey.
Speaker 6 (31:09):
And one of the key things the owners have always
brought to the table since they formed
JumpMind is the support. We've worked hard at it. We
don't outsource any support. It's all the engineers that have
worked on the product.
Speaker 4 (31:23):
Wow.
Speaker 6 (31:24):
We typically go through about a six-month
learning curve when we hire new engineers before we
bring them on, because the support is everything.
We've all worked in it. As an engineer,
I'm still on call at times, and we've all been
in crisis situations, and there's nothing worse than not
having the proper support chain. Yeah, yeah. And
(31:45):
like you mentioned earlier, it might involve multiple teams, so
we want the data side of things in the replication
to always have the metrics and stuff that we can
provide is best support we can for it, because we
know what it's like to have an outage in the
middle of the night, right? And oftentimes when we're dealing with
an outage, it can increase a backlog very quickly, because
(32:07):
it is replication, right? So immediately a small problem becomes
a big one as things backlog. So the support
has been a huge part of the model. And quite frankly, I think
people are just moving a lot more data. We didn't
twenty years ago. It was a
big deal what your storage was. Storage was expensive,
data was expensive. Now we run into groups and I'm like,
(32:32):
do you even know what these tables are, right, because
they just have copies of everything, right, So that's it's
kind of fascinating seeing the progression of how much data
is moving now.
Speaker 4 (32:42):
Right, Well, and you actually just hit on one of
my classic talking points. And folks who've listened to these
shows multiple times have heard me say this multiple times.
But the dynamics have changed so much. I mean, I'm
a big fan of constraint based design because your constraints
are reality and you have to either address them or
work within them. Right, And back in the day, processors
(33:03):
were slow, storage was expensive. Network speed was.
Speaker 6 (33:07):
Slow on the forefront of everybody's mind.
Speaker 4 (33:09):
Right Now, processors are fast and you could do multi core,
you could do GPU versus CPU, the pipes are fat,
storage is cheap. So all those dynamics that drove
the design of old systems are different now, yeah,
and are gone. So what do we do now?
Speaker 8 (33:26):
Now?
Speaker 4 (33:26):
There is this concept of code bloat, remember? Now
we've got about a minute left, but code bloat
is when the processors got fast and the developers got lazy. Yeah,
who cares? Just go write more script to get it done.
Speaker 6 (33:36):
It seems like it's.
Speaker 4 (33:37):
Kind of a wave, right? Right, it goes up,
right, and we're going back down again. Well, closing thoughts,
where can someone learn more about JumpMind? And we
didn't talk too much about the product. It's called Symmetric.
Speaker 6 (33:48):
DS, yeah, for symmetric data synchronization. Right, you can find
us on jumpmind dot com under the data
menu, and there's a trial. Have
a look at it. And then obviously if you email
us, myself or one of the other engineers are happy
to kind of walk you through it and give a demo.
Speaker 4 (34:06):
I love it.
Speaker 6 (34:07):
We're very big on making sure, because we're maybe not the
right fit for every use case. So we love to
talk to people, hear what their story is, what their
use case is. We want to find a good fit. If
it's not a good fit, it's a waste of both
our time. Yeah, that's been a key.
Speaker 4 (34:21):
Well, that's a great sign, folks, I can tell
you. Look up JumpMind. And I remember
one of the coolest companies I knew, in the same sentence,
a guy would say, well, here's what we do and
here's what we don't do, right? And it's really important
to understand that. Well, folks, don't touch that dial, we'll be
right back. You're listening to Inside Analysis.
Speaker 3 (34:45):
Welcome back to Inside Analysis. Here's your host, Eric Kavanaugh.
Speaker 4 (34:52):
All right, folks, back here on Inside Analysis at TDWI
in Las Vegas, my old haunt. I'm hanging out with
some of my old buddies, and I'm here now with
Patrick O'Halleran. He is with the company called WhereScape.
I've known WhereScape for about fifteen years or so.
Michael Whitehead launched it and sold it to
Idera a long time ago, or a number of
years ago, I guess. But they're great for data warehouse
(35:15):
documentation because guess what, you know, how many developers like
to do documentation? None, No, developers like to do documentation.
Slightly less than one, slightly less than one, somewhere between
zero and one of the developers out there like to
do documentation. That's important, but they do a lot of
other stuff too, and it's an excellent tool for being
able to leverage the power of a Snowflake, for example,
(35:36):
or I guess a Databricks or any of these guys. Right?
So tell us a bit about what you're working on
with WhereScape RED these days. Sure, for sure.
Speaker 9 (35:43):
So as you say, WhereScape's been around for quite
a while. We started twenty five years ago in
New Zealand as a consulting company building data warehouses.
Speaker 5 (35:52):
We're doing a lot of.
Speaker 9 (35:53):
repetitive tasks and realized that those could be automated with
technical tools.
Speaker 4 (35:57):
Right.
Speaker 9 (35:58):
Those tools became very powerful and very popular, and some
people started asking if they could start buying those tools.
So at that point WhereScape stopped being a consulting company
and became a software company. We've got a few thousand
customers around the world, in Asia, North America,
South America, Europe, the Middle East, Africa.
Speaker 4 (36:17):
Wow. And you're still in New Zealand, right? Well,
that's a cool path to start as a consultancy, figure
something out and then build the software. I see that
quite often because as a consultant you're on the front lines,
you're trying to solve things, and to your point, you
keep running into these repetitive tasks and you're like, wait
a minute, why don't we just automate this task or
(36:38):
that task? Documentation being one of those things to say, okay,
what do we do? And that's one thing I love
about the cloud is that I think a lot of
the best practices around things like logging, things like
tracking what's been done, are just sort of baked into
the architecture of cloud. And now we see that with
a lot of these products too, right to bake in
things like documentation and testing. I mean, automated testing is
(37:02):
one of the real keys.
Speaker 9 (37:03):
Right, Yeah, So let's talk about documentation first. I'm a programmer,
you know, one of those guys that started coding on
IBM PCs at twelve years old, Turbo Pascal and
eighty eighty-six assembly and all that stuff. And like you said,
documentation is one of those phases. I describe it as
the phase of a project at the end where someone
runs around and grabs all the Post-it notes and types
(37:23):
them into Word, and as soon as they
Speaker 5 (37:24):
Hit file save, it's obsolete. That's fun.
Speaker 9 (37:27):
So the thing about WhereScape and automation tools, WhereScape specifically,
is that we're capturing what you're doing as you're doing it.
So it's not like you're writing a bunch of code
then we have to go figure out what that code means.
But through Wizards, through other automated procedures, intelligence built into it,
not just in code generation, but in design and architecture
as well. We capture the key points where you're doing things.
(37:48):
Something as simple as data lineage: you've got, say,
a fact or a dimension at the end of a long
chain of transformations, and a data analyst looks at it
and says, where does this number come from? Anytime you've
got a new dashboard, a new report, the question is
always do you trust these numbers? So the nice thing
about that kind of documentation is I can very very
(38:09):
quickly go back online and see exactly where that number
came from, what transforms it went through, what calculations it
went through. Maybe it came from data combined from three
or four different source systems. That's something that's virtually impossible
to do manually. Track-back diagrams, being able to present
documentation to the users. Something as simple as, when we're
(38:31):
profiling a source system to pull data in. If there's
extended attributes like in SQL server, if there's any kind
of extra information, they get a data catalog. If there's
things we can feed in from the data governance tool
or data cataloging tool, purview or whatever it is, that
can work its way all the way downstream into the
documentation as well. Wow, the documentation is literally a couple
(38:53):
of clicks of a button. And if you're in a
dev environment or a QA environment, every morning you can create
this large PDF that has all the necessary documentation in it
and then push that out to a shared drive or
someplace where people can get to it. So it's not just
having documentation, it's having it accessible.
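The track-back idea, walking a derived number back through its chain of transformations to the original source columns, can be sketched from captured lineage metadata. The column names and the `LINEAGE` mapping here are invented for illustration; WhereScape builds the real graph from the metadata it captures as you work:

```python
# Maps each derived column to the columns it was computed from
# (a hypothetical lineage graph, for illustration only).
LINEAGE = {
    "mart.revenue": ["stage.amount", "stage.fx_rate"],
    "stage.amount": ["src_orders.amount"],
    "stage.fx_rate": ["src_rates.rate"],
}

def trace_back(column, lineage=LINEAGE):
    """Return the set of ultimate source columns feeding `column`."""
    parents = lineage.get(column)
    if not parents:            # nothing recorded upstream: a source column
        return {column}
    sources = set()
    for parent in parents:
        sources |= trace_back(parent, lineage)
    return sources
```

Here `trace_back("mart.revenue")` answers the analyst's "where does this number come from?" by returning both underlying source columns, however many transformation steps sit in between.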
Speaker 4 (39:09):
Right, right, having it accessible. And you know, when I
think about documentation, I think about automatic documentation log files,
and log files are baked into many of these enterprise
systems as a way of knowing what happened, as a
way of knowing what the system did and thus being
able to go back and trace it. But there are
so many of them that you need to have a
process around analyzing the log files, and that's been a
(39:31):
whole thing over I don't know the last twelve fifteen years,
I've seen as companies figure out, hey, that log file
is a source of truth. Let's pay attention to that
and watch for changes and patterns in the log files.
And that's how you can understand what happened, what went wrong,
and then go back and remediate something. Right.
Speaker 9 (39:46):
Yes, Splunk is a good example, right? So that's
what they do, right? Yeah, they read in and parse log files.
We're even a step beyond that, because pretty much everything
that's going into the log files is already in our
metadata. So if you're running jobs to move your
data from the source system through all the transforms, through
your data foundation layer into you know, if you're doing bronze, silver,
(40:08):
gold layers or presentation layer or semantic layer, whatever it is,
all that data, all that processing is already in our metadata. Okay,
it's wide open. WhereScape's a wide open product.
We don't publish a format of the metadata. But we've
got plenty of customers, plenty of service partners that have
built scripts queries to go look at it. Maybe they've
got jobs that are processing more data or less data
(40:31):
than they used to. Maybe you've got jobs that are
taking longer, and you want to measure throughput. Those kinds
of troubleshooting things, which you used to have to go
through and look through logs for, you can pull directly
from our metadata. We've got a sister company called Yellowfin.
Speaker 4 (40:46):
I know them. Yeah, yeah, I know those guys.
Speaker 9 (40:47):
Okay, we're adding an analytics package to our tool itself,
our scheduler that's going to be built into the tool
using their technology, so that you can see the operations
of your data warehouse and how things are working and
when they're slowing down, when they're speeding up, where things work,
where things don't work, to help you not just identify
that there's a problem, but to identify exactly where
that problem is to help you fix it faster.
Speaker 4 (41:09):
Well, and you just picked up on one of the
hottest topics in the world of data today, which is FinOps, right,
because if you can understand what's happening in there, you
can understand what it costs you. Right, So running certain reports,
doing certain projects, what does it really cost? I mean
a lot of times it's very nebulous. I mean, you
can see what the line item is on your budget
when it comes through, when the bill comes in but
(41:31):
actually being able to have fine grain views into which
which query cost how much? All that kind of stuff
is extremely valuable.
Speaker 9 (41:38):
Right? Yeah, with, you know, cloud-based consumption
costs being the big cost for everybody. I won't mention
a specific cloud vendor, but there's a certain
one that's getting a cold shoulder sometimes, right, because a
lot of the consumption costs are unexpected. Right? So when
you can use analytics, and I'm sure it can be added
(41:59):
to our analytics package, to go query the cloud vendor and find
out what is it costing?
Speaker 5 (42:04):
What is it costing me to do this? Right?
Speaker 9 (42:06):
And it's, I used to be big in
application performance management, and ninety percent of doing a
good job is knowing where to focus. You can take
something that's very slow but doesn't run very often, and
improving it doesn't really give you much
of an improvement. Whereas you can take something that's just
a little bit slow but runs a lot more frequently
and work on that and get a bigger improvement. So
(42:27):
maybe you've got one particular SQL statement that's incredibly costly
but runs once a week. That may pop to
the top of some list saying, hey, pay attention to me,
I need some attention. But really what
you want to pay attention to is the total cost.
So yeah, having analytics packages to be able to slice
and dice that for you and give you, like I said,
someplace to focus, so you can
(42:49):
get more bang for your buck.
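The prioritization rule in that answer, rank by total cost (per-run cost times run frequency) rather than per-run cost alone, is easy to make concrete. The query names and dollar figures below are made up for illustration:

```python
def by_total_cost(queries):
    """Sort queries so the biggest total spend comes first."""
    return sorted(queries,
                  key=lambda q: q["cost_per_run"] * q["runs_per_week"],
                  reverse=True)

queries = [
    # Expensive but rare vs. cheap but constant.
    {"name": "weekly_rollup",     "cost_per_run": 50.0, "runs_per_week": 1},
    {"name": "dashboard_refresh", "cost_per_run": 2.0,  "runs_per_week": 700},
]
```

Per run, `weekly_rollup` looks twenty-five times worse, but `dashboard_refresh` costs 1400 a week against 50, so it sorts first and is where the attention pays off.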
Speaker 4 (42:50):
Right well, and to understand the inner workings of these things,
how they've come together, what value you're getting from them.
We just talked to a gentleman, Mark Pico, all about value,
understanding what value is. He's got some really interesting concepts there.
But you know, in the data warehouse automation world, let's
face it, there are lots of tedious tasks that come
(43:11):
together to create a data warehouse and then to use
the data warehouse. And WhereScape, you were really
the first that I came across to start focusing on
automating some of these tasks. Right, people just think of it,
get the data in and do the analysis and get
the analysis out. There's a lot of other stuff that
happens in between to make that all take place. And
you guys will sit on top of any environment, right
(43:31):
like data bricks or snowflake or whatever, right.
Speaker 9 (43:33):
Yeah, there are about, say, seventeen or eighteen different targets we support.
So we talked about this earlier today, you and I
did, about how WhereScape over the years has evolved from
a very specific product that works for SQL server and
works for.
Speaker 5 (43:45):
And works for Teradata.
Speaker 9 (43:47):
And as the tool grew in use and grew in expansion,
it's become what I would call a framework.
Speaker 4 (43:53):
This is very interesting, it's a very interesting concept.
Speaker 9 (43:56):
I'm from a Unix background, and I told you
it's been long enough ago that I still call it
Unix, not Linux. The best program is the program
that doesn't do anything; everything's been externalized into scripts
and into configurations and things like that. WhereScape has really
been turned into a framework where you apply packages, what
we call enablement packs, or rules or other things, to
(44:17):
really configure it for how you need to use it.
And I gave you the example of Databricks. About
two years ago, we had several prospects come
to us asking whether WhereScape supports Databricks,
and the first three times we said no, and the
fourth time we said, we need a different answer. So
one of our engineers took the Snowflake enablement pack,
which we had, and in about two or three weeks
(44:39):
had it working for Databricks. So now we technically
supported Databricks. But then we went through some beta
testing and refined it, because the enablement packs are geared
towards a specific technology, and within about four months we
had a productized release supporting Databricks. So we
went from zero to sixty in four months.
Speaker 4 (44:56):
Maybe that's not something to brag about. Yeah, No, zero
sixty in a few seconds is good. That's not too bad.
That's not too bad. Well, folks, we've got a podcast
bonus segment coming up here. Next, we're talking with Patrick
O'Halleran from WhereScape, the guys who really kind of
invented data warehouse automation way back when. And now I
got seventeen different targets, a lot of stuff going on.
We'll be right back. Don't touch that dial.
Speaker 3 (45:21):
Welcome back to Inside Analysis. Here's your host, Eric Kavanaugh.
Speaker 4 (45:28):
All right, folks, back here on Inside Analysis, talking to
Patrick O'Halleran of WhereScape. And you just mentioned in
the break there the design side, you're not just the
code generator, but you're helping in the design of the
data warehouse and helping organizations know how to get the
most of whichever their target system is. Right, let's say
it's Snowflake, which, let's face it was. I watched the
(45:51):
whole Hadoop era come and go, and I was
one of the ones scratching my head, thinking, oh,
I think you guys are missing something here. I
don't think it's going to solve all the world's problems.
Then what happened. Snowflake comes out, figures out how to
separate compute from storage. And the other thing they did
was very clever, is they figured out that you know what,
in the old world of data warehousing, the design of
the schema was really important and really constraining, because once
(46:14):
you had it, you weren't going to go changing it.
I want to change the schema? Like, what, are you crazy?
Don't even say those words to me again. And they figured
out, look, just tear it down, change the schema
and build it back up.
their big that's a pretty big innovation. But you guys
can help in that process, right, sure?
Speaker 9 (46:31):
Sure. I'd say a lot of our customers have come
to us, and a lot of people aren't familiar with what
data warehouse automation is, or just data automation, so
we wind up doing a lot of education on that,
explaining where the benefits are, where the time savings comes in,
and where their performance improvements come from. And it's not
just code generation. That's one level of automation. I did
a webinar with Kent Graziano.
Speaker 4 (46:53):
Yeah, I know him, the Data Warrior.
Speaker 9 (46:54):
Yeah, about different levels of automation. And think about cruise
control in your car. Okay, on my motorcycle, its cruise control.
Speaker 4 (47:03):
Was a throttle lock.
Speaker 9 (47:05):
That's that's automation.
Speaker 4 (47:06):
That's funny.
Speaker 9 (47:07):
Then you've got cruise control, which keeps a constant speed. Okay,
that's maybe another level of automation. Then you've got adaptive
cruise control, which is keeping a space between you and
the car in front of you, and it's adjusting its speed
that way. Those are all levels of automation. Well, WhereScape
has and data warehouse automation has different levels of automation.
So beyond just generating code, beyond just you know, generating
(47:30):
physical Python scripts and DDL and DML and all that
kind of stuff, WhereScape helps with the design as well.
So I can create a conceptual model of my sources,
of my data warehouse, either pull one in if I've
already got one created, or create one from scratch. And
we also always tell people that's a business process. Creating
(47:50):
your data warehouse is a business process, not an IT process. Yeah,
there's certainly technology involved in it, but it solves a
business problem. Okay, So you create your large conceptual model
independent of your source systems, addressing business issues and business
objects and such. You define all the attributes of all
those entities that you want to.
Speaker 5 (48:09):
Track and report on.
Speaker 9 (48:11):
Then you can go and you can profile all your
source systems, find out what data you have, assign attributes
to those attribute types, run profiles not just about the structures,
but the data itself, and then our rules engine can
do things with that. PII, for example: you can bring in data, flag
(48:31):
it as PII. Our rules engine can then do something
with it. What do you want to do? Do you
want to mask it? Do you want to drop it?
Do you want to encrypt it? Do you want to
put it into a separate object and then shut that
off to a separate schema. That's lockdown, You decide what
you want to do, you create the rules, and you
can be sure that anything you've defined as PII in
your source system is handled appropriately. Data quality checks, data
(48:55):
governance checks, things like that. Being able to grab an
object and say, I want this to be a
type two dimension. Data Vault
is built for automation. If you're familiar with Data Vault,
the whole concept, it's a very pattern-based structure for
creating what is an abstract, complicated foundation with hubs, satellites
(49:16):
and links, where things are broken apart in strange ways,
linked in strange ways. Can you build a data vault
without automation?
Speaker 5 (49:24):
I've never seen one.
Speaker 9 (49:26):
I would say two thirds of our Data Vault customers have
come to us because they've tried that and failed.
Speaker 4 (49:30):
Wow.
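The PII rules described a moment ago, flag a column and let a rule decide whether to mask, drop, or otherwise protect it, might look roughly like this. The rule names, actions, and hashing choice are illustrative assumptions, not WhereScape's actual rule definitions:

```python
import hashlib

# Hypothetical PII rules: column name -> action.
RULES = {"email": "mask", "ssn": "drop", "full_name": "hash"}

def apply_rules(row, rules=RULES):
    """Apply the PII action for each flagged column; pass the rest through."""
    out = {}
    for col, value in row.items():
        action = rules.get(col)
        if action == "drop":
            continue                      # column removed entirely
        elif action == "mask":
            out[col] = "***"
        elif action == "hash":
            # One-way tokenization so joins still work without exposing the value.
            out[col] = hashlib.sha256(value.encode()).hexdigest()[:12]
        else:
            out[col] = value              # not flagged as PII
    return out
```

The point of a rules engine is that the decision lives in one place: anything flagged as PII anywhere in the source systems is handled the same way, every time.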
Speaker 9 (49:32):
But the idea is that I can create my conceptual model
and my logical model independent of my physical model, embed
all that within WhereScape, attach my source systems to it,
and then push that all out to our builder tool
that will then create, like I said, the DDL,
the DML, the Python scripts and everything else. I
do it in demos all the time. I can
(49:53):
build a star schema with a fact table and a couple of
dimensions from two or three different source systems, and have
a physical model up in Snowflake or Databricks
or SQL Server or whatever, and I can do
all that in twenty, thirty minutes and have the documentation.
Speaker 4 (50:05):
Wow, that's crazy. Yeah. Well, it's almost like
an analogy to painting. You've got
a tool that allows someone to imagine what they want
that painting to look like, and then you hit the
enter button and it sprays it
all on the screen. So the original tool we had
was our builder tool. It's a bottom-up approach. You
(50:27):
define load tables and stage tables, and then all the
objects you want to have in your warehouse. I always
describe that as you've got a bunch of legos and
then you've built your dinosaur or your spaceship or whatever
it is.
Speaker 5 (50:36):
The design tool on top of that is top down.
Speaker 9 (50:40):
You tell it what you want to build: I
want to build a spaceship, a dinosaur, a castle or
whatever it is, and it then figures out all the
individual Legos it has to produce to build that. Interesting.
And we have customers that do both and do both successfully.
Speaker 4 (50:52):
So you can do top down and bottom up at
the same time and, yeah, see how they
meet together.
Speaker 9 (50:57):
I talk about the design tool as being like
an architect for a new house. They've got a blueprint
to hand that blueprint off to the general contractor who's
going to build the house. Great, Now, what happens in
the real world is the architect didn't understand a certain
zoning rule, or there's a big rock that they can't move,
or the homeowner decides they want the staircase moved. The
(51:18):
GC will just go ahead and do those things right.
The nice thing about our tool is that you can
do both. You can build the blueprint, you can have
the builder tool building things, but then you can
communicate information back, so your design never goes out of date.
Speaker 5 (51:31):
Your model is always up to date, which is a
huge feature. That's interesting.
Speaker 4 (51:34):
Yeah. So, I mean, people can go buy Snowflake and
they don't have to use something like WhereScape. But
it makes a lot of sense to do that because
any I was talking about this today with a couple
other people. Because you've abstracted out the design and build
layer from the actual management of the warehouse. And one
thing I love about Snowflake is how they'd say, you know,
(51:55):
we're going to control your data, you'll do everything around
it basically, and they have a very clean line of
demarcation between what they're doing what the partners are doing,
which I thought it like it's central to their success,
but what do you think?
Speaker 5 (52:06):
Yeah.
Speaker 9 (52:06):
The other nice thing is because you're working at an
abstract level, at a conceptual and logical level, you can
switch targets very, very quickly. We had a customer, I'll
just call them a large insurance company in the Midwest,
that had been using WhereScape on Teradata for
years, and they decided they wanted to do
a pilot project in Snowflake. Okay, so they said, what's
it going to take to recreate our data warehouse in Snowflake? Well,
(52:28):
we already had all the metadata, we had all the
information we needed to generate it, because they'd been using WhereScape.
So we went to our offshore team and said, what's
it going to take to convert this from Teradata
to Snowflake and then create the data warehouse? And they
gave me a number, and since I was the manager
of professional services, I bumped it up ten percent. I
bumped it up to forty hours. All right, forty hours,
(52:49):
and in a week, in a week, they had their
physical data warehouse replicated from Teradata to Snowflake.
Speaker 5 (52:55):
Wow, including all the objects, all the procedures and everything else.
Speaker 4 (52:58):
That's crazy.
Speaker 5 (52:59):
That's an exception. But that's that's a real case.
Speaker 4 (53:02):
Well, it's because you had the recipe in WhereScape.
You had the metadata, you already knew what the structure
would look like, you had the sort of matrix, the
blueprint, to use your term, and then you could just
go connect Snowflake and build it up.
Speaker 9 (53:14):
Yeah, and if they had customizations and things, that might
have slowed it down. But you know, even if you get
eighty percent of the way there, right, with the push of a button.
Speaker 4 (53:21):
Right, great. Wow, this is fantastic. Well, look this gentleman
up online, folks: Patrick O'Halleran. It's a good Polish name.
Just kidding, it's an Irish name. So yeah, look up
WhereScape, W-H-E-R-E-Scape. They're part
of Idera these days, I-D-E-R-A, and
we'll talk to you next time. You've been listening to
Inside Analysis.
Speaker 10 (53:40):
Get all the facts all you need to know on
KCAA Radio.
Speaker 7 (53:44):
KCAA where Life's much better. So download the app in
your smart device today. Listen everywhere and anywhere, whether you're
in southern California, Texas or sailing on the Gulf of Mexico.
Life is a breeze with KCAA. Download the app on your
smart device today.
Speaker 8 (54:20):
What is your plan for your beneficiary to manage your
final expenses when you pass away?
Speaker 4 (54:27):
Life?
Speaker 8 (54:27):
Insurance, annuities, bank accounts, investment accounts all require death certificates,
which takes ten days based on the national average, which
means no money's immediately available. This causes stress and arguments.
Simple solution the beneficiary liquidity claim. Use money you already
(54:48):
have no need to come up with additional funds. The
funds grow tax deferred and pass tax free to your
name beneficiaries. The death benefit is paid out in twenty
four for forty eight hours, out out a deficit as
a one hundred three zero six fifty eighty.
Speaker 10 (55:10):
Six to Hebot Club's original pure powdy Arco super Ta
helps build red corpuscles in the blood which carry oxygen
to our organs and cells. Our organs them cells need
oxygen to regenerate themselves. The immune system needs oxygen to develop,
and cancer dies in oxygen. So the tea is great
for healthy people because it helps build the immune system,
and it can truly be miraculous for someone fighting a
(55:32):
potentially life threatening disease due to an infection, diabetes, or cancer.
The tea is also organic and naturally caffeine free. A
one pound package of tea is forty nine ninety five,
which includes shipping. To order, please visit taheebo tea club
dot com. Taheebo is spelled T like Tom, A,
H, E, E, B like boy, O. Then continue with the
(55:53):
word tea and then the word club. The complete website
is taheeboteaclub dot com, or call us at
eight one eight six one zero eight zero eight eight,
Monday through Saturday, nine am to five pm California time.
That's eight one eight six one zero eight zero eight
eight. Taheeboteaclub dot com.
Speaker 2 (56:11):
Now here's a new concept, digital network advertising for businesses.
Display your ad inside their building. If a picture's worth
of a thousand words, your company is going to thrive
with digital network advertising. Choose your marketing sites or jump
on the DNA system and advertise with all participants. Your
(56:32):
business ad or logo is rotated multiple times an hour
inside local businesses where people will discover your company. Digital
Network Advertising: DNA, a novel way to be seen and remembered.
Digital Network Advertising, with networks in Redlands and Yucaipa. Call,
in the nine oh nine area, two two two nine
(56:54):
two nine three for introductory pricing. That's nine oh nine,
two two two, nine two nine three for Digital Network
Advertising. One last time: Digital Network Advertising, nine oh nine,
two two two, nine two nine three. KCAA Loma Linda
at one oh six point five FM, K two ninety
(57:15):
three c F Burrito Valley.
Speaker 10 (57:16):
Located in the heart of San Bernardino, California, the
Teamsters Local nineteen thirty two Training Center is designed to
train workers for high demand, good paying jobs in various
industries throughout the Inland Empire. If you want a pathway
to a high paying job and the respect that comes
with a union contract, visit nineteen thirty two Training Center
(57:40):
dot org to enroll today. That's nineteen thirty two Training Center dot org.
Speaker 11 (57:46):
For KCAA ten fifty AM, NBC News Radio and
Express one oh six point five FM: consumers in the
Inland Empire may be feeling the bite of inflation more
than any other metro area. WalletHub analyzed the Consumer
(58:07):
Price Index to determine how inflation has changed in the
short and longer term. The Riverside, San Bernardino, Ontario area posted
the fifth highest change of the twenty three metro areas
that were studied. Still higher were San Diego, Honolulu, Boston,
New York, and Chicago. The lowest were Denver, Phoenix and Tampa.
The San Bernardino County Department of Behavioral Health has partnered
(58:29):
with various key agencies to create Coast, the Community Outreach and
Support Team. The team launched in Fontana in twenty twenty one.
In twenty twenty three, the City of Ontario implemented the program.
The team consists of a behavioral health professional, a firefighter, law enforcement,
and a therapy canine. The purpose of the Coast model
is to provide residents with rapid access to crisis triage
(58:52):
in a non threatening manner. Beauty industry leaders unite to
support wildfire evacuees with free services. The CEO of Willpower
Integrated Marketing is spearheading an initiative to support families displaced
by the recent fires. A network of beauty professionals will
provide grooming services and essential personal care items on Monday,
(59:13):
March tenth at Burners Barber College in Pasadena. The owner
of the barber college lost his home in the wildfires,
but believes in the spirit of resilience and has offered
his facility as a hub to support the community. Weather
in the Inland Empire: highs in the mid seventies with
lows in the low fifties. Weekend: sunny on Saturday with
highs in the mid sixties and a chance of rain
(59:34):
on Sunday. For NBC News Radio, KCAA ten fifty AM
and Express one oh six point five FM, I'm Lillian
Vasquez and you're
Speaker 7 (59:42):
Up to date.
Speaker 2 (59:49):
NBC News on KCAA Loma Linda, sponsored by Teamsters Local nineteen
thirty two, protecting the future of working families. Teamsters nineteen
thirty two dot org.
Speaker 11 (01:00:04):
Welcome to the Fabulous Lifestyle Radio Show. Tune in for
a vibrant