
January 21, 2025 38 mins


Understanding Human Behavior in AI: A Deep Dive with Jay Van Zyl

In this episode of The Human Code, host Don Finley engages in an insightful conversation with Jay Van Zyl, a seasoned expert in artificial intelligence, machine learning, and business transformation. Jay discusses his intriguing journey into the intersection of humanity and technology, emphasizing the importance of understanding human behavior to build meaningful AI systems. They explore the nuances of hyper-personalization, the role of trust and reliability in AI-driven solutions, and the integration of computational social science with generative AI. The discussion also touches on practical applications of AI in enhancing customer experiences, especially in regulated industries like finance. Jay shares his philosophy on balancing the analytical and empathetic aspects of AI, aiming for solutions that are both technically sound and human-centric. This episode offers valuable insights for tech enthusiasts, entrepreneurs, and businesses looking to leverage AI responsibly and effectively.

00:00 Introduction to The Human Code
00:49 Meet Jay Van Zyl: AI Pioneer
01:55 Jay's Journey into Human Behavioral Sciences
03:50 The Philosophy Behind AI and Human Behavior
06:19 The Role of AI in Personalization
10:32 Challenges and Trust in AI Systems
15:10 Practical Applications of AI in Business
21:27 Future of AI and Human Interaction
29:34 Ecosystem.ai: Enhancing Customer Experience 
38:22 Conclusion and Final Thoughts

Sponsored by FINdustries
Hosted by Don Finley


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Don Finley (00:00):
Welcome to The Human Code, the podcast where technology meets humanity, and the future is shaped by the leaders and innovators of today. I'm your host, Don Finley, inviting you on a journey through the fascinating world of tech, leadership, and personal growth. Here, we delve into the stories of visionary minds, who are not only driving technological advancement, but also embodying

(00:23):
the personal journeys and insights that inspire us all. Each episode, we explore the intersections where human ingenuity meets the cutting edge of technology, unpacking the experiences, challenges, and triumphs that define our era. So, whether you are a tech enthusiast, an aspiring entrepreneur, or simply curious about the human narratives

(00:44):
behind the digital revolution, you're in the right place. Welcome to The Human Code. Today, we're thrilled to welcome Jay Van Zyl, a pioneer in artificial intelligence, machine learning, and business transformation with over two decades of experience shaping industries like finance, cybersecurity, and automation. As the founder of ecosystem.ai,

(01:06):
Jay leads the charge in developing no-code platforms that provide actionable insights, enabling organizations to make smarter, real-time decisions. In this episode, Jay shares his philosophy on understanding human behavior as the foundation of meaningful AI systems. We'll explore the intersection of computational social science and generative AI, the importance of trust and

(01:27):
reliability in AI-driven solutions, and how hyper-personalization can redefine user experiences across industries. Join us for an insightful conversation about the role of AI in empowering businesses and individuals, and how understanding the complexities of human behavior can lead to transformative outcomes.

(01:47):
I've got Jay Van Zyl here with me. The man is awesome. I am really looking forward to this conversation, Jay. You've got such a rich background, and I'm going to start you off with the classic question: what got you interested in the intersection of humanity and technology?

Jay Van Zyl (02:03):
Don, the way that I grew up, in the context of science and math and in a world of absolutes, you get to a point where you really believe that everything can be modeled, can be framed in some very specific outcome with highly deliberate and accurate answers and frameworks.

(02:24):
And it's only when you eventually realize that, oh dear, this human in the system doesn't do what it's supposed to do. It completely confuses you, because nothing that it says it's going to do will it eventually do. There are high degrees of uncertainty as it goes through cycles of change.

(02:44):
People get angry about things they're not supposed to. They get happy when they might not need to. They are just very dynamic beings. And in the 90s, I got exposed to the concept of human behavioral sciences. And as I got more into it, I realized that there were very few actual frameworks and capabilities that we could

(03:05):
leverage reliably to build algorithms. And it became a journey. And my journey started in learning, human learning. So that means single-loop and double-loop learning, Schön, and people like Peter Senge with The Fifth Discipline. That was my kind of foray into the discipline: to try to

(03:27):
make sense of not necessarily the psychology of the human in their own right, but how they fit within a collective, that collective being their family, their work, their friends. And how that fits within a society. So it deeply intrigued me how these various elements are used

(03:48):
to make sense of how humans behave. So what transpired over time was this notion that if you want to invent any technological tooling capability, or you want to create some kind of way in which you're going to assist a human in achieving better outcomes, then you need to first understand, or attempt, let's call it attempt, to understand

(04:10):
the human, or empathize with an individual. It's tough to empathize, because you can't really think the way that people think who are nothing like you, but you attempt to understand who they are, and then you want to make sure that the tooling that you provide can service that individual. They must feel great about what they experience. And then, if they are in a society, or let's think about them with their

(04:34):
friends and their family: they're in their local coffee shop of people that they meet only on Mondays, and then they're in a restaurant of people that only meet on a Friday. You might find that people have got these different swirls of relationships. They form in different ways, and people shift their personalities almost to fit these communities. So you need technological capabilities to allow people to

(04:56):
do and to deal with these ongoing changes. And if you don't make sense of any of that, then the easiest thing, the kind of opt-out, is to make an assumption that all of society, based on a stereotype, are all the same. And I think that really dawned on me: the fact that many parts of the research that I observed over time were heavily focused on genericizing or stereotyping or

(05:23):
simplifying: all humans that look like this, all humans that speak like this, all humans that read these things, all humans that engage in the following activity must be the same. And that really scratched at me, and I really couldn't make sense of that. And I wanted to understand: who am I as the individual in that system? That's what intrigued me.

(05:44):
That's what kind of got me into this. And I kept on reasoning about it, and it's been keeping me busy for twenty-something years.

Don Finley (05:50):
It's amazing, because it sounds like a very spiritual process as well as an analytic process too. Because with that individual in a system, we all know that we can get attached to the team. We can get attached to the society. We have an identity that reflects upon the society that we live in, the geographic region that we're in as well.

(06:10):
And so we do it to ourselves, but you're also saying, from the opposite side, trying to understand who your customer is or who that population of people is: you want to get to a hyper-personalization of that space, or at least that's what I'm...

Jay Van Zyl (06:24):
Yeah, I think, maybe to reflect on a word I've thought about over time, and maybe slightly differently than most: we don't really think about a word like spiritual, or that it's like a spiritual experience. It's more: do I have a key philosophical hook from which I can drive my reasoning, so that if I want to take that reasoning into

(06:48):
some kind of technological invention, it will provide me enough guidance that the output of what I produce will be... And I think that a philosophical tone for us made more sense. Because if I think about all the elements of deciding how I will engage with, let's say, a one-on-one chat, let's take a

(07:08):
couple of examples. If I chat on Facebook chat or Telegram or WhatsApp or whatever the tool is that I use to have a one-on-one conversation, or iMessage, whatever that might be: the technological platform is purely a carrier of the intent that sits in my mind in communicating with another human who needs to receive it reliably

(07:30):
and communicate back with me. So that means that there is nothing in the platform itself, in the technological invention itself, that assists me in becoming better or poorer at who I am or what I am. It is the fact that I'm connecting with another human at the end of the line that plays the role of connecting with me.

(07:52):
So that means that we often confuse this technology. Because if I deal with a technology that's going to get me to connect with my immediate friends... and if you think of the strength of weak ties, that is a breakthrough bit of reasoning in the economics of graph theory. So if you take the social graph by Meta, or you take the

(08:13):
knowledge graph by Alphabet, or you take the economic graph by LinkedIn, or any one of the companies who built an entire business on the fact that there's a collective that believes certain things: the collective has a far greater impact on you as an individual, because you keep on reviewing, and essentially get bombarded with, content that is

(08:34):
produced by people that are one, two, three, four degrees removed from your immediate relationship. So that means that if I think of, on one end, having a personal conversation with somebody over a chat, and it's only about us, you and I, in this situation, that's very different than all of us, where everybody can contribute and I might not know

(08:55):
half the people. And the unfortunate thing is that humans behave very differently when they go from this very private conversation to a conversation that gets more public, because they put on a face, they change the entire style of engaging. They behave differently. And I think that deeply intrigued me. Because if you think about any new invention, and I want to deliver something

(09:19):
that's better to you as an individual, I need to have at least a philosophical view of how people behave differently under these different situations. Because if I think it's all just the same, then I'll be in trouble. I won't be able to create something that's truly impactful and meaningful. And that's the thinking. Yeah.

Don Finley (09:39):
I can really appreciate that. That's a nice, solid foundation in order to take a look at the world, take a look at how you can actually make somebody's life better. And the one word that I continue to resonate with is the intention. And you are right: it takes intention for us to get onto this call. It takes intention for us to go. And the technology doesn't move itself in that regard, right?

(10:02):
Like, years ago we probably would have recorded this in person, right? But now we have the opportunity to record this halfway-around-the-world kind of approach. And that's the absolutely fascinating aspect of where it is. But the intention that you and I have to connect hasn't changed. The technology has opened up an opportunity. Now, we're

(10:24):
sitting in a wave of AI right now. And I think that we're starting to see a lot of fascinating solutions come out of it. How has your philosophy played out in this latest tech rush of artificial intelligence, and in having a shovel that can now show some choice in where it's applied?

Jay Van Zyl (10:44):
Yeah. Listen, that's a great entry point into, I think, how any technologically enabled company figures out how they think about the tooling. And for us, we want to make sense of a human situation or an intent reliably. And I think the really key thing to think about is reliability as

(11:06):
the key construct: if I behave in a way, and I know that I want a response because of a certain set of behaviors, I don't want an engagement with the technology that's going to make up things on the fly. So that's why we think about the AI world in the way the discipline is broken up into two kinds. If you

(11:27):
think about the latest movement with generative models: we think of generative models and discriminative models maybe differently than most. Because we truly believe that if I have a model that's going to detect that I have spent money in a certain way, that I am still engaging, that I have connected with certain people, that I have been to the office today, that I

(11:49):
am driving my car down a freeway at this speed: all those are bits of evidence. You don't need a model that is going to make up stories about what you're doing. I need the evidence of what is accurate and precise about the things that you are engaging in and doing over time. And I want to then enable that using various kinds of

(12:14):
generative tooling.
So I want to enable it with something that is linguistically useful, that is aesthetically appealing, and that might also be, from an auditory point of view, something that I can turn into a voice that is soothing or that makes me feel great about my situation. So that means that the generative piece is incredibly important, to be good at what it does.

(12:37):
So that means that in our world, we have an intent detection engine. And an intent detection engine is: if you're busy spending, and let's say that I detect that you might be getting yourself into trouble, because the algorithm is showing me that people who spend at this velocity, based on the current debt situation,

(12:58):
income ratio, whatever that might be, are showing signs of stress: I need to now take that, based on who you are as a person, and translate the language into something that will make you act in a way that is appropriate for you, without feeling like you are being treated wrongly. So that means that if you are a person that is, let's say we

(13:20):
take a well-known framework like the Myers-Briggs Type Indicator, the MBTI, and you're, let's say, a thinker or a feeler: if you are a thinker, and we have detected from your historical behaviors that you are a thinker, I want to talk to you in facts and figures. I want to say to you: you do know that if you continue to spend at this rate, you're going to have a 20 percent deficit at

(13:42):
the end of the month. You're going to have to earn more money. You are 100 short of this particular payment; you need 60 to solve that. And I'll give you the facts. But if you are a feeler, you want to be able to say to the person how bad you're going to feel when you can't pay your bills this month, because your family are not going to be happy with you. So I think what is happening is this intersection of accurate and precise evidence-based activity

(14:07):
of what you're doing, and then using generative AI inventions. In our case, we fine-tune our own models. We've got quite a big stack of tools that we bring together to make sure that these tools can function together into something that will serve the customer at the end of the day, or serve the employee, who is also a human.

(14:27):
Because what we found is that if I'm on the phone with you, and let's say that I'm a private banker or an insurance representative or whatever the role is: in the first line, if you think about classic support, I'll just speak to some agent, and it's like they don't understand me. They don't really care about me. They're really friendly. They're trying to solve my mechanical problems.

(14:48):
But as soon as I have an investment portfolio, I have an advisor. I have a far more client-centric relationship. I need and expect you, on the other end, to know me better, because I'm the one who's spending millions in my portfolio with you. I'm the important one as your customer. So how are you going to talk to me? So it's tough for companies to make sense of this.

(15:11):
So now what we've been working on is saying: let's say I can reliably categorize, predict, determine your personality in the context of, say, your spend or your money behavior, or your generic behavior in the context of your retail spend, whatever the algorithm is in the company's situation. You then want to enable the representative of the organization to extract from those behaviors something that

(15:35):
they can use as a cue to speak to that human, but it must be reliable. We've been bringing the generative part of the AI world that we know works well, that is linguistically useful and the like, together with accurate and precise outputs, and morphing them together, so that the person receiving it can see: oh, this really works for me, because I can see that it is not making

(15:58):
up something about a client that it doesn't know anything about. Yeah. So that's maybe a longer explanation, but that's how we see that.

Don Finley (16:06):
I think there are a couple of beautiful notes that you have in there. One, we've got to be careful as far as how much trust we give to technology in any one space. Hallucination in generative AI models is still a challenge. There has been some work to alleviate some of that, but additionally, it isn't an oracle of information.

(16:27):
And then additionally, just as a human does it, right? Like, we have processes, we have things that we go through. We synthesize data from other pieces of data to create knowledge and information. There's no reason to throw those processes out because a new tool has come by. The ability for us to audit information is incredibly important for financial services.

(16:49):
Like, we were looking at AI to assist in underwriting, which is a highly regulated space, and you definitely want to understand why you're making decisions on who you're lending money to and who you're not lending money to. And so the technology isn't, I would say, mature enough today to be doing AI underwriting.

(17:12):
But at the same standpoint, there are certain aspects of it that can be done in this moment. And I think you've laid out a good paradigm that we all can learn from: we have to use a tool for what it's good at. It is good at synthesizing information from a subset of data and presenting it back to you in a personalized way. In fact, that's one of my favorite uses of LLMs: to

(17:35):
take... are you familiar with the I Ching? The Book of Changes. It's an ancient Chinese...

Jay Van Zyl (17:42):
Yeah.

Don Finley (17:43):
Basically, what I'll do is I'll throw the coins. And for the listeners who aren't familiar, it's a divination tool in some capacity. So you use it to ask a question, you throw the coins, and then it gives you a hexagram that is representative of the situation that you're in. But I'll tell you, Jay, all the language is highly culturally

(18:07):
Chinese. And I don't have that background, but I do have an appreciation of it. So what I'll do is I'll send it over to ChatGPT, or I'll send it over to one of the other LLMs, and I'll be like, hey, translate this into a Western-like context for me. And it does really well with that aspect. And so I think, yeah, you've got a good idea as far as

(18:28):
let's base the information that we're using on hard, objective, provable, factual processes, and then use the LLMs, the generative AI, for that last mile: that emotive type of capacity, how we can connect to it culturally, but also as individuals.

Jay Van Zyl (18:50):
No, definitely. And I think I like your example. Because if you think of how the models are trained: if you take a list of tokens from an input source, tokenize them, and go through an encoding process, the latent space basically only contains the patterns and the common

(19:10):
behaviors of what came out of the input data. It has no understanding, reference, or link to the evidence that was used to create it in the first place. So you'll see that we have a RAG framework. We've got Retrieval-Augmented Generation frameworks and the like to make sure that the knowledge that we provide comes

(19:31):
from a source that we know has been pre-approved, pre-vetted, and that has some kind of empirical grounding for the hypothesis that you're busy implementing in that moment. Whereas a generative model, because it generates tokens based on a cost function: the cost function is trying to generate the next token in the cheapest possible randomized way

(19:51):
based on its context. And obviously the context is what's known to the world as a prompt. So that just means that the model decoding process, in generative versus discriminative models, needs to be at the forefront of your thinking when you design any of these capabilities. And most of us, in my team at least, are all deeply technical.

(20:12):
We train models, we build architectures at that level. We make sense of it. But the people who don't, who just believe it blindly, think that a learning model is intelligent, which it's not really. It's a probabilistic model that generates the next token. And they believe that it's sentient, which it's obviously not. It does create some confusion in our client

(20:33):
base, which I often have to be careful of. I have to stop myself from saying certain things, because somebody might have a belief and say: hang on, I'm talking to this and it does like me. And I would just say to them: the linguistic engine determined, by the cost function, that somewhere in the input data, in the latent space, there's somebody that talked about that. And now it can present that to you.

(20:53):
It doesn't mean that's actually what it feels, because it doesn't feel; it is not a biological system. And I think that's the whole point. Maybe also, in the context of my business, we really want to agonize over: are there better ways to understand the biological system, the genetically enabled biological system that is the human?

(21:14):
Is there a better approach and understanding to make that person feel better about who they are, what they're doing, where they're going, what they're spending money on, how they're conducting their lives, how they're dealing with their families? Is there a way that you can understand that better? Now, people might say in the humanities that, oh, we've all figured it out: we've got psychoanalysis, we've got genetics, we've got social

(21:37):
sciences, we've got anthropology. But not really, because most of the studies done historically were done with students at universities and in closed-group communities. It's only now that we have access to this enormous evidence base of human engagements that we can construct new hypotheses.

(21:57):
That's why I like computational social science as a discipline: because you don't believe what people say. You only believe the shreds of evidence that they are leaving behind. So there's no point in me telling you I'm vegan, all right, and telling you that every day. And then if you look at my financial transactions, you see that I buy my Wagyu beef and all that sort of thing every morning. Then,

(22:20):
what's the point? It might mean that I keep on giving my financial transactions to somebody else. But if I want to then communicate with you about a certain eating habit or a style or whatever it is, and I don't understand that, I look at what you're trying to tell the world you are versus what you actually are. I think there's a lot to be done in that space, which in the

(22:41):
previous era, I think, almost created a creepy situation for a lot of people. We say: how are they listening to me? They're hearing everything we do. Personalization was often seen as something that people don't like. So the question is: how do I create it to make sure that you feel safe, and it's engaging, and I know that it's not going to be

(23:04):
something that wants to exploit me? And that's what we really agonize over.

Don Finley (23:10):
And God, I've spent a lot of time agonizing over that as well. Because you look at how social media was the big aha of this, right? Like, you get the advertisement for the thing that you were just talking about, or the thing that you just Googled. And so we all had that little "is it listening to us?" moment. And now the data is a lot cheaper to get from somewhere else, and people are just trading data sets constantly in order to

(23:33):
get you that advertisement and build it out. But additionally, Facebook, Instagram, any of the social media networks are optimized for attention. And the AIs that they were using recognized that attention is easier when you're angry, fearful, upset, or something else. So it has a negative impact on our emotional state

(23:56):
to be engaging with these systems that want to keep us attached to them. That level of trust that I think we all have for these algorithms is a bit of mistrust, being that it wasn't aligned to what our intentions were: being able to connect with family, connect with people, to share our information, to share what we have

(24:16):
with this world. And the algorithms seem to want us to do something else, right? So how do you play in this world, where you're creating algorithms over here and additionally creating AI systems that help to engage with people, and that level of trust is low because the system has its own intention and goals that it's been designed for?

(24:36):
Whereas in today's world, we're getting more emotionally... "emotionally capable" isn't the right term. It's definitely an imitation of emotion, right? It's the mimicry that we use to connect with humans, which the machine is now able to reproduce in a way that actually creates that emotional response. That speaks to

(24:57):
how we have to be careful talking about sentience, and talking about the actual intelligence that's there, when it is a probabilistic machine that just knows: hey, I can do this. So I guess the question comes about: in the systems that you're building, how do you broach that approach of making it a system that is there for the end user?

Jay Van Zyl (25:14):
Yeah. Maybe just to stand back and look at this holistically: in the social media era, if we can call it that, the evidence that is collected about a society or a community is self-declared. As we know, it's self-exposed, and it is mostly

(25:37):
reflective.
If I go to Instagram and I'm busy shaping a story about a persona, the persona that I'm creating is a persona that I think the world needs to see about me, not necessarily who I am. And all social media platforms are essentially working on the back of that core assumption: that the humans that participate

(26:01):
will provide content, and evidence basically as content, that will show some things that they are doing. So it means that if you become an influencer and you're at some island in Italy or in Greece, and you take a picture, it is likely that you are there. So that's the assumption that you have to work from: you create the image.

(26:23):
And if you are a person who's always happy about a situation, you can never be sad in your public persona. So you create this image. You essentially work off the construct that most of our large clients worked on for years as the basis of design, and that is the concept called a persona: a generic description that is an abstract representation of

(26:47):
a collective of people. If we now take the world we're moving into, because people are pushing back on that: let's take your bank, your telecommunications company, your insurer, your retailer. What they initially thought about the data that they collect about you is that it's really just transactional. And what we believe is that the human sits in that data

(27:10):
somewhere.
There is evidence that is far more reliable, and far more closed off in private, than you have in your public situations. So that means that when you engage with your bank or your telco or whatever companies you're engaging with,

(27:30):
they cannot expose, they cannot sell, because they are regulated. They are forced to make sure that they do it in a safe way. So that means you think about the evidence that you have collected far more carefully than you do if you are just a generic public tech company. So, what we've been doing: our clients are mostly large enterprises who really want to service their clients better.

(27:52):
They really believe it. They find that if I'm going to implement customer-lifetime-value practices: can I give some things on the journey to you for free? Can I assist you to get to a better outcome? And if you are in a better place, we will be in a better place. Why? Because I've got evidence, reliable evidence, of the things

(28:15):
that you are doing day by day that I can use to make those decisions, within my risk and compliance and all the things that I need to conduct my business safely. If you look at public media companies, you have none of that. In actual fact, you have maybe a fraction of it, to be honest, because you might have the money that I spent

(28:37):
on getting to you through some kind of campaign, to say that I want to target a person who's at a coffee shop at this age doing these kinds of things, and I want to pay, and then what I want to do is put up an advert that might want to manipulate you. And there's no way the platform can stop you, because it's considered free media. You don't know if any of the evidence is true, or whether what it's presenting you is even practical, or if it's been

(29:01):
made up.
You have no understanding.
I think we're moving into a space now where companies realize that there's evidence they collect about you: when you paid for that coffee, you paid for it. There's evidence somewhere in there. They might not know that you had a cortado with almond milk, or soy milk with your cappuccino, or whatever that might be, but I do know that you were there, and I can

(29:23):
rely on that far more accurately as an input data point to determine your behaviors than I could in the social world. So that's what we've been working on: a technological invention, a series of them. In fact, that's why we call it ecosystem: an ecosystem of platforms and technology that come together and keep on solving those kinds of problems.

(29:44):
So that means that if I know nothing about you, never engaged with you, you are new to my business, my algorithms cannot be the same as if you have been engaging with me for many years. So if I know nothing about you, should I just treat you as some generic persona, or should I attempt to

(30:06):
get to understand you based on your preferences on the platform where you are engaging with me? So if I'm a bank or whatever, and I see certain transactions, and I see that this person is never taking up a loan, but I can see that they are doing home renovations: are they doing it on their own home or on a rental? I see they're not paying rent, so wherever they're going... So you could make far better decisions. One of the

(30:28):
algorithms we've been working on is called the money personality: it works out your debt situation, your protection situation. So, do you have insurance and the like on your vehicle? And so that means that there are better ways to get to understand the person, and then make sure that you can service them. So that's what we've been working on.
And then, and the outcome isthat, we call it the real time
behavioral prediction engine isthat let's say you go to a

(30:52):
website, to an app, you phone a call center agent or whatever that might be.
You want, the moment that you connect with them, it to know that you are a person that's, let's say, highly ritualistic.
You go check your balance, you pay the bill, you leave.
You are intentional, you're ritualistic, and you go.
Or you're the person who likes to look around, and there's a banner to say there's a new band in New York.

(31:13):
I know that Chase Bank has been doing this.
It puts banners to say there's some show that you can go and attend.
Click here, and you can just take it off your account, and it needs to know at least that level of engagement with you.
So I think that because those are baby steps, you need a technological platform to automate it for you, because for a human to figure it out across 10, 20, 30, 50 million, a

(31:34):
hundred million customers makes it impossible.
So you want something to do it automatically for you.
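The "ritualistic versus exploratory" distinction described above can be sketched as a tiny classifier over a customer's session events. This is a hypothetical illustration, not ecosystem.ai's actual engine; the event names and the 0.8 threshold are assumptions made for the example.

```python
# Hypothetical sketch: labeling a customer's engagement style from session
# events, in the spirit of the real-time behavioral prediction idea above.
# Event names and the ritual_threshold value are illustrative assumptions.
from collections import Counter

CORE_ACTIONS = {"check_balance", "pay_bill"}  # the "in, do the task, out" actions

def engagement_style(events: list[str], ritual_threshold: float = 0.8) -> str:
    """Label a session history as 'ritualistic' (intentional, repetitive)
    or 'exploratory' (browsing banners, offers, new features)."""
    if not events:
        return "unknown"  # cold start: no evidence about this person yet
    counts = Counter(events)
    core_share = sum(counts[a] for a in CORE_ACTIONS) / len(events)
    return "ritualistic" if core_share >= ritual_threshold else "exploratory"

# A customer who checks the balance, pays the bill, and leaves, every session:
print(engagement_style(["check_balance", "pay_bill"] * 10))  # ritualistic
# A customer who also clicks banners and browses offers:
print(engagement_style(["check_balance", "view_banner",
                        "browse_offers", "view_event_promo"]))  # exploratory
```

The point of the sketch is the cold-start branch: with no events, the system can only fall back to a generic treatment, which is exactly the "should I treat you as some generic persona" problem Jay raises.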

Don Finley (31:41):
we have a similar view of how we treat some of these problems between our two organizations.
And I really appreciate that.
And at the same time, I'm going to offer you how we talk about this.
I'm basically like, look, what you're looking to do 30 million times is you're looking to do one thing and do it 30 million times.
So if you can define how you want to approach this for one

(32:03):
person and that level of personalization that you're talking about, if you can do that, then we can train an AI or instruct an AI to interact with somebody enough to get that information out so that you can personalize it for the 30 million, instead of trying to make population assumptions and grouping and bucketing people before you have

(32:26):
the necessary information.
But coming back to this, how would you recommend that either companies or individuals take their next step with this new technology and the new AI, and how they can engage with ecosystem?
Yeah.

Jay Van Zyl (32:43):
We have a series of what we call prediction stories that we've been trying to use to make sure that companies can make sense of it.
And maybe just to get really practical for a moment.
Let's say that an organization is busy with its digital transformation journey, and most companies are these days.

(33:03):
And a digital transformation is mostly about generic automation.
Get a person off paper, obviously nobody faxes anymore, or trying to get them off email, and get them onto some kind of app where they can take a picture of the ID or just scan it and send it in and identify who you are and engage with you

(33:24):
all the way through to, if you're an existing customer, just go check your balance on the account and the like.
So we've created a set of prediction stories that basically go: predict X from Y for the purposes of Z, and then using behavioral science.
So predict X from Y for the purposes of Z using a science.
And we've created some custom GPTs, by the way.

(33:45):
So if you're interested and you go to OpenAI's custom GPTs, you type in ecosystem.ai.
You'll see that we've got quite a few of these published now.
We've got a value proposition use case designer, a smart message recommender, and a couple of the lessons that we've learned in companies over time.
We've now turned that into publicly, freely accessible GPTs,

(34:07):
so people can go and do queries on it.
So if you go to, let's say, the value proposition canvas, and you have a problem that you're working on, and let's say that you want to engage with customers that have just joined you.
So let's say that you're a bank and they've just opened an account, they just engaged with you.
So they're called new to business, so they're NTBs,

(34:28):
they're new to business.
And you want to activate them.
You want to make them feel comfortable that they belong, that they've made the right choice coming to you in the first place.
What do I need to do to service them?
So what you do is, if you use, let's say, our value proposition canvas, it will then tell you to say, the job to be done of the person using that device will be, can I sign up, and because

(34:49):
I'm new here, can you guide me in a lot more detail than if I've been here the second, the third, the 10th time, because that means that I've learned how everything works.
And as we know, most of the specialized technologies are not really publicly accessible.
So your banking app is not something that anybody can just

(35:11):
go and access.
It's purposeful for the job that it does, or you're going to go top up your data bundle at your telco provider, or whatever it might be, or buy additional services.
It's quite specific to that purpose.
So that means that you need to have a way that you can guide this person.
So that means that in those GPTs, we've created some of this

(35:32):
to at least help people to understand it.
That'll help you to say, the job to be done is to smooth the journey for the customer.
The pain that they experience is that it's difficult to navigate.
It's not always the best language to use.
They don't know if it's for them or not.
And the gain would be, can I figure out from the beginning who this person is?
Can I learn at a rate that is faster than if I had to do it

(35:55):
manually?
And can I get models to converge on who these people are?
So that means that the pain relievers are the essentials.
And the gain creators are the things that will separate me from the rest.
So we've created a set of these gain creators to say, if I'm going to have it, let's take a classic example.
I'll go to a menu option on a gambling site.

(36:16):
And it's one of the cases we did recently.
You always go to check your balance, and the menu option is on the far right, because it's not for everybody.
But for you, the person who goes there every single time, how quickly should the menu converge to move that to the left, to move it right into your line of focus?
So that means that you can set that up.
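The menu-convergence idea can be sketched as per-user re-ranking by an exponentially weighted click score, so an option a person hits every session drifts toward the front. This is an illustrative sketch, not ecosystem.ai's implementation; the 0.9 decay rate and the option names are assumptions.

```python
# Hypothetical sketch of menu convergence: re-rank options per user by an
# exponentially weighted usage score. The decay rate is an assumption.
def update_scores(scores: dict[str, float], clicked: str,
                  decay: float = 0.9) -> dict[str, float]:
    """Decay all existing scores, then reward the option just chosen."""
    new = {opt: s * decay for opt, s in scores.items()}
    new[clicked] = new.get(clicked, 0.0) + 1.0
    return new

def render_menu(options: list[str], scores: dict[str, float]) -> list[str]:
    """Order menu options by personal score, most-used first."""
    return sorted(options, key=lambda o: -scores.get(o, 0.0))

options = ["offers", "transfers", "loans", "check_balance"]
scores: dict[str, float] = {}
for _ in range(5):  # five sessions, the same ritual every time
    scores = update_scores(scores, "check_balance")
print(render_menu(options, scores))
# ['check_balance', 'offers', 'transfers', 'loans']
```

The decay rate is the tuning knob Jay's question points at: it controls how quickly the menu converges on the individual's habit versus how much weight it keeps on the default layout.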

(36:38):
So in our world, you can configure it so that for every single human, it connects with what you provide digitally.
You can get banners and panels and language and a whole lot of things to converge, and what you offer them, what you're about to sell them, will then follow a slightly different guidance because you now have pricing rules and everything that is in

(36:59):
it, but you want in real time to figure out if this person is likely to engage with this action or not.
So that just means that starting with understanding what the problem is, knowing what actual pains it is that I need to relieve, and what gains I'm attempting to derive from this technological platform

(37:19):
is a starting point for us.
And that means that at the end of that, it will then say to you: predict the menu option from the person's continuous behavior for the purposes of immediate action using a behavioral science called, let's say, loss aversion or continuous engagement or systematic desensitization or whatever the algorithm is.

(37:40):
Once you know that and you put it into the use case designer, it will tell you exactly how to construct it in our product, right down to actually doing it.

Don Finley (37:49):
basically your product is like having a team of people available to you that have an understanding of some of these new cases and, like, interacting.
And I think that's really an awesome piece of, like, user experience that you just described, around, hey, I'm finding collaborators in the problem solving aspect of this.
Whereas most no-code tools that we come across

(38:11):
tend to, you're still relying on all of the human ingenuity of being the problem solver, where now you're bringing tools to the forefront that help you to solve the problems as well.
Jay, I gotta thank you so much for being on today.
It's been an absolute pleasure getting to know you better and also to share your story and your insights with the group.
So thank you once again.

Jay Van Zyl (38:31):
Thanks Don.
I love talking to you.
It was a great conversation.
Thank you.

Don Finley (38:35):
Thank you for tuning into The Human Code, sponsored by FINdustries, where we harness AI to elevate your business.
By improving operational efficiency and accelerating growth, we turn opportunities into reality.
Let FINdustries be your guide to AI mastery, making success inevitable.
Explore how at FINdustries.

(38:57):
co.