
November 17, 2025 · 46 mins

Systems should make life easier, not more complicated. That idea runs through our conversation with technology strategist Vipin Saroha, whose journey from SAP in India to Geneva to advising global institutions shaped a simple practice: start with the problem, then use data and AI to serve people with clarity and care.

We dig into what most teams get wrong about data—confusing volume with insight and falling into confirmation bias. Instead of chasing clever dashboards, we map a workflow where hypotheses are tested, methods are transparent, and systems explain themselves in plain language. The result is trust. And trust is what unlocks adoption, the critical moment when data actually changes a decision. From HR policy Q&A to legal discovery, we show how AI can strip away repetitive labor so humans focus on context, tradeoffs, and fairness.

Designing for the public means building for real settings: clinics with noise, fields with poor connectivity, and city services that must be accessible, secure, and easy to use. We explore digital twins, predictive maintenance, and crowdsourced reporting—and why each only works when the loop closes and action is visible. Along the way, we share a framework for people-first AI strategy: educate users, co-design with business owners, choose use cases where automation is safe and useful, and require explainability where stakes are high. The through line is constant: human judgment at the end of the loop, with AI as the force multiplier.

If you care about ethical AI, public sector innovation, and data that leads to better outcomes—not just faster reports—you’ll find practical steps you can apply today. Subscribe, share with a colleague who wrangles dashboards for a living, and leave a review with one question you want AI to help your community answer next.


Check out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats.


The views and opinions expressed (by the host and guest(s)) in this podcast are strictly their own and do not necessarily reflect the official policy or position of the entities with which they may be affiliated. This podcast should in no way be construed as promoting or criticizing any particular government policy, institutional position, private interest or commercial entity. Any content provided is for informational and educational purposes only.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_05 (00:00):
How I can very easily visualize it, understand it, and use it in the right context as well.
So that's where the whole idea of data analysis comes in.
And then AI is a layer on top of it, which makes it much easier for us to understand that data.
I don't need to be an Excel guru to do a calculation.

(00:22):
I can use an AI tool to do that for me.

SPEAKER_02 (00:34):
We're joined today by Vipin.
He is a technology strategist who knows that behind every AI model or data dashboard there's a bigger mission: delivering real public value.
Vipin's work spans from India to the US to Switzerland, from SAP to

(00:57):
the UN, and from product innovation to digital transformation.
He has helped governments, global organizations, and major consultancies rethink how they use data and AI, not just to become smarter, but to become more human-centered, more ethical, and more effective.

(01:18):
In this episode, we explore how AI and data can do more than just optimize.
They can align a system with purpose, drive smarter decisions, and unlock the kind of information and innovation that benefits everyone, not just the bottom line.
Welcome.

(01:39):
Thank you so much.
Please introduce yourself and what led you to be here in Geneva, Switzerland.

SPEAKER_05 (01:46):
No, thank you. I think you already made a good introduction; just a little more context on that.
I'm a software engineer by training, but then I dabbled in the area of the public sector.
That's when I decided to do a public administration master's as well, still focusing on how data is

(02:07):
used for decision-making in public sector organizations.
That has been the trajectory, and now I'm in a position where I advise those agencies, and occasionally governments, on how they can effectively use data.
And now with AI, how they can use it, how they

(02:30):
can even identify what use cases they have for AI.
Because it's very easy to say AI will solve a problem, but if we don't have the right data, then none of it will work.
And then it's very easy to blame the technology, and not the inherent issues we have within the structure of the organization or the data that we have.

(02:52):
So that's what I'm doing.
What led me to Geneva: I started my career in India with SAP, a pure technology play, working with Fortune 500 companies on how to use SAP products.
And that's when I realized I wanted to do something in the public sector.
So I went to the US, was there for three years, finishing my

(03:16):
master's while working with the government there, in the state of Georgia.
And that then landed me at the UN in Geneva over 10 years ago, in 2015, I would say.
First I was with a humanitarian organization, helping them do information management: how

(03:39):
they manage their data and their information in times of crisis.
After that, I spent around seven years at WIPO, the World Intellectual Property Organization, also helping them work with governments to identify where we can bring in operational efficiencies by building digital solutions and new products.

(04:01):
I did that for seven years and then decided it was time to move back to the private sector, but not to leave behind all the experience I had gained.
So I am part of the international development team at EY, where we work with UN agencies and big international NGOs to advise them on digital solutions,

(04:23):
AI, their ERP systems, and so on.
So I've not completely left behind what I have learned, but now I'm coming in from an external perspective to help them be more efficient and really not lose ground with this whole AI wave.

SPEAKER_02 (04:39):
Yeah, so not everything can be done with AI.

SPEAKER_05 (04:43):
Not everything should be done with AI.

SPEAKER_02 (04:45):
Okay, but that's also the thing.
It's interesting, the perspective that you have, because you have been inside so many different ecosystems and organizations, and the way they approach issues is quite different.
You have a very unique view, and you can also

(05:06):
understand your counterparts: the reasons why they decide the things they decide, or how they make those decisions themselves.
It's very privileged insight that you have, now working on the other side.
Yeah.

SPEAKER_05 (05:19):
Yeah, for sure.
I think that's the whole idea as well: it's very easy to say how an organization should function if you've never worked in that context.

SPEAKER_04 (05:29):
Yeah.

SPEAKER_05 (05:30):
The true value comes once you've been in there, you've seen the problems, you've seen it firsthand.
That's what makes the big difference in even saying whether a certain solution will or will not work in the context of that organization.

SPEAKER_02 (05:46):
Yeah, it's true.
It's easier once you've walked a mile in their shoes; then you can say, okay, I know how to address this issue, and I know how I can help you.

SPEAKER_01 (05:57):
Exactly, exactly.

SPEAKER_02 (05:58):
So, from this intersection: you work in AI, data, and public sector strategy.
What originally drew you to this space?

SPEAKER_05 (06:12):
As I said, I have a software engineering background, but I was never the kind of person who'll sit behind a computer and just write code all day long.

SPEAKER_01 (06:22):
Sounds boring.

SPEAKER_05 (06:23):
Yes, it is.
But then again, there are people who love it.
My brother does that day in, day out; he loves it.
I didn't. I wanted to talk to people and understand their problems.
But at the same time, I didn't want to lose everything that I had learned.
I'm not a salesperson; I want to help organizations using what I've learned, whether

(06:47):
through my education or through my experience.
And that's where there was a good intersection: me wanting to talk to people and understand their problems, and using technology and data to solve those problems, so that whatever we are trying to do is fact-based.
It's very easy to say it's fact-based because I think

(07:07):
that's a fact.
It's another thing to actually show that it is fact-based because there are facts behind it.
That's where I want to be, and I'm still working on it, because there's something new to learn every day, whether it's AI or data analytics: how we can effectively use that data to make

(07:29):
decisions, to make the organization better and more effective.
That's the whole idea behind what I want to do and what I aspire to keep doing.

SPEAKER_02 (07:37):
Okay, so it's making sure that the information is accurate, in order to then use the technology based on that information.

SPEAKER_05 (07:46):
Exactly.
And then, how do you even use it?
Everybody has data lying around: I have it on my laptop, you have it as well.
But it's also about how easily I can visualize it, understand it, and use it in the right context.
That's where the whole idea of data analysis

(08:06):
comes in, and then AI is a layer on top of it, which makes it much easier for us to understand that data.
I don't need to be an Excel guru to do a calculation; I can use an AI tool to do that for me.
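
A minimal sketch of the kind of calculation he is describing, for readers who want it concrete. The data and column names below are invented for illustration; in practice, an AI assistant would generate a snippet like this from a plain-language request.

```python
# Minimal sketch: the "Excel guru" calculation as a few lines of pandas.
# The data and column names are invented for illustration.
import pandas as pd

# Stand-in for the data "lying around on my laptop".
requests = pd.DataFrame({
    "district": ["North", "North", "South", "South", "East"],
    "days_to_resolve": [12, 15, 7, 9, 11],
})

# Average resolution time per district, slowest first: the kind of
# summary that would otherwise take a wall of spreadsheet formulas.
summary = (
    requests.groupby("district")["days_to_resolve"]
    .mean()
    .sort_values(ascending=False)
)
print(summary)
```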

SPEAKER_02 (08:22):
Yeah.
I'm very bad at Excel.
As a good lawyer, I'm very bad at math.

SPEAKER_01 (08:27):
So that's okay.

SPEAKER_05 (08:28):
You're good at other things.
And I am very bad when it comes to legal things; I try to avoid them as much as possible.

SPEAKER_02 (08:36):
That's right, everyone has their own strengths.
You've helped shape digital strategies for large institutions.
In your experience, what's the biggest misconception about using AI and data for the public good?

SPEAKER_05 (08:57):
It's a very interesting question, and I think it even goes beyond the places I've worked.
The biggest misconception is that just because I have data, I should have the answers.
But if I don't know how to use the data, if I don't even know where to look for that data, that's the

(09:18):
biggest issue: merely having data will not give you the answer.
We need to be able to ask the right questions as well.
And what happens a lot of the time (this goes off on a bit of a tangent from your question) is that we tend to come up with an answer first and then try to find data to justify

(09:43):
that answer.
That happens more often than not, unfortunately.
So I would say the biggest misconception is: I have a decision, I have the data to back it, but what order did I go through?
Did I look at my data first with a hypothesis, and then did I

(10:05):
prove or disprove that hypothesis with the data, and then come to a final answer?
Or was I going the other, wrong way around?
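
A minimal sketch of the hypothesis-first order he describes, with made-up numbers: state the hypothesis, then test it against all of the data (here with scipy), instead of collecting only the rows that confirm it.

```python
# Minimal sketch of hypothesis-first analysis. The numbers are
# invented for illustration.
# Hypothesis: the new online permit process reduced processing time.
from scipy import stats

old_process_days = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
new_process_days = [10, 12, 9, 11, 13, 10, 12, 9, 11, 10]

# Two-sample t-test over ALL observations: is the difference real
# or just noise?
result = stats.ttest_ind(old_process_days, new_process_days)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# The data gets the last word: if the evidence is weak, we revise
# the hypothesis, not the data.
if result.pvalue < 0.05:
    print("Evidence supports the hypothesis.")
else:
    print("Evidence does not support the hypothesis.")
```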

SPEAKER_02 (10:15):
Yeah, and also the lens you're looking through: if you're only looking for data that confirms your hypothesis, you're never going to even acknowledge the data that doesn't.

SPEAKER_05 (10:26):
Exactly.
And that leads to problems as well, because then we are only looking within the same little bubble we sit in.
And when it comes to the public good, the data is there; it's just a question of how we use it.
That's the point.
And more often than not, we tend to just look at, as you said,

(10:49):
things that justify and verify what we already believe.

SPEAKER_02 (10:55):
And then what do we need the data for, if we're going to make the same decision anyway?

SPEAKER_05 (11:00):
Yeah, exactly.
That's why I'm saying people then tend to work towards finding data to justify that answer.
And you might find some outliers, but an outlier is not the norm.
It is an outlier for a reason; it happens maybe 2% of the time.

SPEAKER_02 (11:19):
Perfect.
So, you've implemented AI for the public sector, digital transformation, and even IP modernization.
What's a success story that reminds you AI can deliver long-term value, not just efficiency?

SPEAKER_05 (11:42):
I think AI in terms of long-term value is about how we use that AI, whether it's a solution or a feature, and there are a lot of use cases around that.
For me, an AI solution that gives long-term value is one that relieves people from doing mundane work and

(12:06):
repetitive tasks.
I can't go into the details of the projects I've worked on, for confidentiality reasons.
But to give you an example: we have all worked with HR departments, and we all have questions about HR policies

(12:28):
as well.
I've been on paternity leave; I've looked at HR policies and never been able to fully figure them out, because they've all been written through a legal lens.
With an AI solution, you are able to easily understand what a policy means for you.
So I don't need a group of people just sitting

(12:49):
there answering questions that a solution can answer, and now I have those people freed up to do actual innovative work.
That's the long-term strategy.
There is an efficiency gain, yes, but that efficiency gain only gets you a certain distance.
After that, it is the human who will drive the

(13:10):
conversation, and that's the long-term value: how can I free up my people from repetitive tasks?
If it's a repetitive task, a machine can do it, and it should do it, so my people can think.
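
A minimal sketch of the policy Q&A pattern described here: rank the policy passages by relevance to an employee's question, so only the best match needs attention. The policy texts are invented placeholders, and scikit-learn's TF-IDF stands in for whatever retrieval (and language-model answering) a production system would actually use.

```python
# Minimal sketch of retrieval-based HR policy Q&A, assuming
# scikit-learn is available. Policy texts are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

policies = [
    "Paternity leave: eligible staff may take up to 4 weeks of paid leave.",
    "Annual leave: staff accrue 2.5 days of leave per month of service.",
    "Remote work: staff may work remotely up to 2 days per week.",
]
question = "How many weeks of paternity leave can I take?"

# Rank the policy passages by similarity to the question.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(policies + [question])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

best = scores.argmax()
print(f"Most relevant policy ({scores[best]:.2f}): {policies[best]}")
```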

SPEAKER_02 (13:25):
Yeah.
For example, something I have done in the past: thousands of pages, because you need to find specific information.
It takes a person weeks, if not months, to do it, but an AI can do it right away.
There's no added value

(13:46):
in you wasting weeks of your time just looking for one very specific piece of information in a sea of documents.

SPEAKER_05 (13:53):
Exactly.
And you're a lawyer, you said, right? So you would know better than anyone: for any legal case, you have paralegals sitting there identifying that one case, going through books and so on.
With AI, it's a matter of a few seconds to find it.
But at the end of the day, that's where the human intelligence comes in, and where the long-term value

(14:15):
is generated: now I have people who can use the AI solutions to be more productive.
They use the solution and then apply their intelligence: it'll give you 20 cases, but for that particular matter you still need to find the two that exactly match.
Technology is moving so fast that it will even

(14:37):
give you a closely written draft in the right form as well, but it still needs to be validated.
You cannot go to a judge and say, this brief was written by an AI, so give me a ruling on it.

SPEAKER_02 (14:53):
Yeah, exactly.
The human factor is very important, because as a lawyer, in this example, you will decide if it's actually relevant, if it fits what you need, or if it highlights what you want to highlight.
Because sometimes it's a great case, but it's great for the other party, not for the argument you want to build.

SPEAKER_05 (15:15):
And the human has to be at the end of this loop, be it data analysis, be it AI.
If you don't have a human at the end of the loop, you lose the certain level of trust that comes with the fact that we humans are the ones deciding, and with it the efficiency gains you get out of it.

(15:37):
It's all about how I can be more productive using it.
I shouldn't need to fill out a form that can very easily be filled out for me because the format is the same every time.
It's there to enhance human productivity, exactly, and the intelligence at the same time, because there's a certain

(15:59):
capacity we as humans have in terms of brain and focus.

SPEAKER_01 (16:04):
Energy.

SPEAKER_05 (16:05):
Energy, exactly.
I can't keep spending 24 hours just to find one case law.
But if I have a good starting point, knowing these are the only 20 I need to look at, I can focus on those 20.
And even if those don't make sense, I can ask for more and identify more.
I have a companion who can help me go through that data

(16:27):
quickly.
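
A minimal sketch of that companion-plus-judgment loop: the machine ranks a shortlist, and nothing counts as accepted until the human explicitly says so. Case names and relevance scores are invented for illustration.

```python
# Minimal sketch of a human-in-the-loop shortlist: the system ranks
# candidates; a human must explicitly accept each one. Case names
# and relevance scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    relevance: float               # machine-assigned ranking score
    approved: bool | None = None   # None until a human decides

def review(shortlist: list[Candidate], accepted: set[str]) -> list[Candidate]:
    """Apply the human's decisions; the machine never self-approves."""
    for c in shortlist:
        c.approved = c.title in accepted
    return [c for c in shortlist if c.approved]

shortlist = sorted(
    [
        Candidate("Smith v. Jones", 0.91),
        Candidate("Doe v. Acme", 0.87),
        Candidate("Roe v. Widget Co", 0.62),
    ],
    key=lambda c: c.relevance,
    reverse=True,
)

# The lawyer reads the top candidates and keeps only the exact matches.
kept = review(shortlist, accepted={"Smith v. Jones"})
print([c.title for c in kept])
```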

SPEAKER_02 (16:29):
Yeah, it's guiding you through the field of information.
Exactly.
Governments are increasingly adopting AI tools, but trust and transparency are still concerns.
What's one step we should take to make AI strategies more inclusive and people-first?

SPEAKER_05 (16:51):
It's a very interesting question.
It has been there, I think, since the whole AI thing started: how we build trust.

SPEAKER_01 (16:59):
Do no harm.

SPEAKER_05 (17:01):
Yeah, exactly. But then, what is the definition of "do no harm"? That's the key thing.
It's not one step, it's a series of steps.
It starts with educating people, to even tell them what AI is; you have to take them through the whole

(17:22):
journey, because AI just blew up on the scene quickly, and suddenly everybody was expected to know everything about it inside out within a year or two.
With a transformational technology like this, I think there's a lot of work that still needs to be done to educate

(17:44):
people, not just in schools and universities but at work as well, so that they know what AI is, what it can do, and what it cannot do.
That's the key thing.
Then, work with those people to identify, within their processes, the tasks that can be done by AI, where

(18:05):
you can build a solution on that.
If my task is, day in, day out, talking to people, talking to governments to figure out what their problems are, AI will not be able to replicate that, at least not in the near future.
But the back-end analysis is where AI could come

(18:26):
in.
So it's a series of steps: we need to educate, then we need to figure out where in our process an AI solution can be placed, and then develop that solution.
But those solution decisions should not be technology decisions; they need to be business decisions.

(18:47):
If you are a lawyer, you know best where in your process of analysis you spend the most time, and that tells you where you can save the most time.
The solution has to be for you, and not because I, as a technology person, told you the solution will work.

(19:09):
So it's the involvement, and that's the tenet for any kind of transformation.
AI just brought it to the forefront. Even if you do a big ERP transformation or a big data analytics transformation in any organization, those are the steps you need to

(19:31):
follow.
You need to involve people at the right stage, you need to communicate effectively, and then own the risks, because there will be risks.
But at the same time, there are risks with a person's analysis too.
Whatever analysis I give is based on my knowledge, and my knowledge is also limited.

(19:51):
I can't give you an analysis outside of that, and it could be wrong as well.

SPEAKER_02 (19:55):
Yeah.
And I like your approach that it's not a technology solution, it's a business solution.
It has to serve the purpose of the business, of the organization, what you're looking to achieve.
It's not technology for the sake of technology; it's technology because you need it for X or Y.

SPEAKER_05 (20:18):
And we're seeing more and more of that: governments wanting to use it to make people's lives better, in the field of education, in forecasting, be it the weather or the best time for cultivating your fields.
The solutions are out there.

(20:39):
It's all about educating people, making sure they know how to use them to the best of their abilities.
And it shouldn't be just another solution or app on my phone that I use once in a while and then forget about.

SPEAKER_01 (20:53):
Yeah, or it drives me crazy every time I need to use it.

SPEAKER_05 (20:56):
Yeah, exactly. And I agree, it has to be easy to use.
That's also why AI has come up on the stage so fast: it talks to you the way a human could talk, or at least it mimics that.
And we need to make sure that people don't lose sight, in the process, of the fact that it may talk like a human, but

(21:20):
it's not human.
I need to apply my own judgment on top of it.
We need to keep in mind that it's still an AI, not a human; it can be human-like, but it's not a human.
Obviously it can do a lot of things; it's all about finding the right solution.
Do I need an AI solution for that piece of work or not?

(21:41):
And if everybody is convinced of that from the very beginning, the adoption of any solution (and I'm not even talking about AI at this point) is much higher, whether by masses of people at a national or subnational level, or in an organization.

SPEAKER_02 (22:03):
Yeah, it's more than just technology.
So, your background combines strategy, product design, and data storytelling.
When designing tech for public use, what elements should never be overlooked?

SPEAKER_05 (22:21):
I think the biggest tenet, when it's for public use or the public good, is that we should not lose focus on the main objective.
As we were discussing earlier, we are doing it for a certain objective, to achieve a certain outcome, and not doing it for the sake of doing it

(22:42):
just because we have the mandate and I have the skill set and the financial resources to do it.
If it's going to help somebody, that's when we do it.

SPEAKER_02 (22:51):
Okay.

SPEAKER_05 (22:52):
And if we're not doing that, then, as I said, it'll be one of those solutions where it's like: now I have 20 dashboards to look at the data, but did I ever ask for that?

SPEAKER_01 (23:04):
Was it really necessary?
Yeah.
So many circles.

SPEAKER_05 (23:08):
Exactly.
So it's all about: do I need it, and what's the outcome of it?
And does it serve the wider public good in that sense?

SPEAKER_02 (23:19):
And the user experience is so vital.
That's been a common theme for about a decade now: how are we navigating this technology, how are we using it?
Is it obvious enough that anyone can use it right away,

(23:39):
or do you need a learning curve for all of it?
So it's important to keep in mind who you are making the technology for, and the real-life interactions they're going to have with it.
Are they going to have time to sit down and use it, or are they going to use it while walking in a field or in a busy hospital?

(24:00):
You need to understand the different realities that they will have.

SPEAKER_05 (24:04):
Yeah.
And also the people that will be impacted by it.
If it's a solution that has a wide impact and is very critical, like health, which you mentioned, there need to be stronger guardrails around it.
The data security needs to be much stronger than for something like checking what events are in my

(24:25):
locality.
So it's about that: what's the purpose of whatever solution I'm going to develop, and who is it going to serve?
Because that's what will define how I need to structure everything around it.

SPEAKER_02 (24:40):
Yeah, because it comes from the ground up.

SPEAKER_05 (24:42):
Exactly.

SPEAKER_02 (24:44):
How do we avoid building smart systems that feel cold or distant from real human needs?

SPEAKER_05 (24:54):
That's a tough one.
How do we avoid it?

SPEAKER_02 (24:58):
Technically, it's warm and fuzzy.
Yeah.

SPEAKER_05 (25:01):
I think it's not even about feeling cold or distant.
I can have a conversation with a GPT tool, ChatGPT or Gemini, and it'll feel human-like, it'll feel warm.
But it goes back to the education part of it:

(25:29):
if I know what is behind it, I'll have a certain level of trust and knowledge of how to use it.
I wouldn't even say it has anything to do with coldness or distance; it's just about how I use it.

SPEAKER_03 (25:48):
How I use it.

SPEAKER_05 (25:48):
I can use Google search to find a hundred things; there, I know for sure there's nobody behind it.
I can do the same thing with ChatGPT: ask the same question, and probably get even more detailed results.
So it's all about how we are using those solutions.
There are solutions out there which are helping people, young

(26:12):
adults, to get some sort of psychological help as well, because there's somebody to listen to them without judging.
So if we just take that definition, systems are neither cold nor distant nor warm nor fuzzy.

(26:32):
Systems are systems; it's how I use them and how I feel about them.

SPEAKER_02 (26:36):
So it's the human impression of the system.

SPEAKER_05 (26:38):
Exactly.
My car is my car: it'll drive the way I drive it, and that's how I will feel about it.

SPEAKER_02 (26:50):
Yeah, it's true.
It's either just a means of transportation, or it's something that you love and think about.

SPEAKER_05 (26:55):
And you are attached to it as well.
So it's just how we as people, we humans, want to use it.
But I don't think technology should or can be seen in this black and white of cold, distant, warm, fuzzy.
It'll do the job if you ask the right questions.

SPEAKER_04 (27:15):
Okay.
Okay.

SPEAKER_05 (27:18):
And sorry, I'm not sure if this was the answer you were looking for.
I don't really have an answer for this question.

SPEAKER_02 (27:23):
No, you already gave the answer: it's how we use the technology.
A tool can either be used in a way that feels cold and distant, or in a way that enlightens your life or keeps you company.

(27:45):
So, as you said in your answer, it's how you use it and how you make it a part of your life.
Again, it's a decision not necessarily of the designer of the technology, but of the user of the technology.
So, speaking of design: if we could redesign public digital

(28:05):
services from scratch today, what principle would you put at the core?

SPEAKER_05 (28:14):
That it is there to serve the wider public good.
If we're talking about public digital services, what I'm thinking of is: if I have to go get my passport renewed, or get changes made to my documents, it should be kept in mind how I am going to use it, so that it

(28:36):
serves that purpose without complicating my life and making me fill out 20 different forms.

SPEAKER_02 (28:41):
Of information you already have about me.

SPEAKER_05 (28:43):
Exactly.
So I think it's all about who's at the center and who's going to be served by that service, and we need to keep that in mind when we design anything.

SPEAKER_02 (28:56):
So, people-centered.

SPEAKER_05 (28:59):
And this is not a new concept, the human-centric approach.
It's a concept as old as technological innovation.
And it's right now more important than ever that whatever approach we take is a human-centric approach, be it at an organization level, be

(29:19):
it at a government level.
If we don't have that, then we miss the point of developing something altogether.

SPEAKER_02 (29:27):
It makes no sense, yeah.
Okay, so: what's the most important human mistake you've seen an AI system mimic?
And were you surprised?

SPEAKER_05 (29:44):
I am sure you have come across people who say, with one hundred percent confidence: no, what I'm saying is correct.

SPEAKER_01 (29:51):
Yes.

SPEAKER_05 (29:51):
And when you ask for facts, and then you look at those facts, you realize it's all wrong.
But it is mimicking human nature, human behavior, because it was trained that way: to give answers in a confident manner, and to always give answers.

SPEAKER_03 (30:07):
Yeah.

SPEAKER_05 (30:08):
You have to explicitly tell it (and now this is being incorporated) that if it doesn't know something, it should say it doesn't know.
But it was always trained to give you an answer, no matter where the answer was coming from.
Any answer.
Even now sometimes, and especially when it all first came out, you could ask for references and it would give you

(30:28):
references to things that didn't exist.
But it all goes back to the fact that humans trained it; it's mimicking human behavior.
This is how we expected it to work, and we didn't have proper data in place for it to be trained on.
So it's a garbage in, garbage out kind of situation.

(30:50):
If my data is not good, my results won't be good either.

SPEAKER_02 (30:56):
So it's the overconfidence, the not knowing when you're right or wrong kind of approach.
You've designed dashboards, systems, and integrations.

(31:20):
What have you seen that's more powerful than data?

SPEAKER_05 (31:24):
The usage of data; how we use it for decision-making.
I can give you all the data and all the analysis behind it, and you can still refuse to use it.

SPEAKER_01 (31:37):
Yeah.

SPEAKER_05 (31:37):
It's the human who's going to accept it, and the human who's going to reject it.

SPEAKER_02 (31:44):
It's a decision there.

SPEAKER_05 (31:46):
It's the decision makers in that whole situation.
Systems are there to serve them, but I have to be a willing recipient, to want to use it.

SPEAKER_02 (31:57):
Because otherwise all the work is for nothing.
Yeah.
So now we move to the flash section.

SPEAKER_04 (32:04):
Okay.

SPEAKER_02 (32:05):
Data dashboards that show what's happening or ones
that ask why it's happening.

SPEAKER_05 (32:16):
Ones that ask why it's happening.

SPEAKER_02 (32:19):
Deleting 10 million data points to protect one
person or keeping them for thegreater good.

SPEAKER_05 (32:28):
I don't know who that one person would be, but I would keep them for the time being, for further analysis.

SPEAKER_02 (32:37):
A system that learns from users or one that teaches
them how it works.

SPEAKER_05 (32:43):
A bit of both, actually.
It has to be both.

SPEAKER_02 (32:46):
Okay.

SPEAKER_05 (32:47):
Otherwise it's not evolving.

SPEAKER_02 (32:49):
Okay.
It doesn't make sense.

SPEAKER_05 (32:51):
No, you can't have a system that cannot learn but at the same time teaches how it works, because then it'll get stuck in the same loop.

SPEAKER_02 (32:59):
Yeah.
Public infrastructure run bypredictive AI or by human
intuition.

SPEAKER_05 (33:07):
Can I say neither?
No, again, it has to be a mix of both.
We've seen examples of... sorry, I know it's a flash section, so I won't.

SPEAKER_01 (33:18):
Don't worry about it.
I'll say: expand.

SPEAKER_05 (33:20):
I think it's both. We've seen examples where predictive algorithms provided public services and failed miserably, because they were biased, because the data was biased.
But if you just rely on human intuition, there is also bias.
So we need to make sure that they both work together, to

(33:41):
be as unbiased as possible.

SPEAKER_02 (33:43):
Yeah.
So it's a combination.

SPEAKER_05 (33:45):
It's a combination.

SPEAKER_02 (33:48):
A button that slows a system down or a button that
makes it forget.

SPEAKER_05 (33:56):
If it's about making it forget my data, if I don't want to be part of it, then yeah, I would say the button that makes it forget.
And I think we already have that in place with the GDPR: I have the right to request all my data and to have it deleted.
So yeah, I would say the second one.
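
The "button that makes it forget" corresponds to the GDPR right to erasure. A minimal sketch of what the back end of such a button might do; the store names and the in-memory database are hypothetical, and a real implementation would also have to cover backups, audit logs, and third-party processors.

```python
# Minimal sketch of a GDPR-style "forget me" handler. Store names and
# the toy database are hypothetical; a real system must also handle
# backups, audit logs, and third-party processors.
from datetime import datetime, timezone

USER_STORES = ["profiles", "activity_events", "support_tickets"]

def erase_user(user_id: str, db: dict[str, dict]) -> dict:
    """Delete a user's records from every store and return a receipt."""
    deleted = {}
    for store in USER_STORES:
        removed = db[store].pop(user_id, None)
        deleted[store] = removed is not None
    return {
        "user_id": user_id,
        "deleted": deleted,
        "erased_at": datetime.now(timezone.utc).isoformat(),
    }

# Toy in-memory "database" to exercise the sketch.
db = {store: {"u42": {"some": "data"}} for store in USER_STORES}
print(erase_user("u42", db))
```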

SPEAKER_02 (34:15):
Okay.
A silent algorithm or onerequired to explain itself in
plain language.

SPEAKER_05 (34:23):
Yeah, it should explain itself.
We have that in place; I've worked on projects where we developed such solutions, and this is all about building trust as well.
For a simple task, asking a question, you may not need to know how it came up with an answer.
But when you're doing a big, complex analysis, I want the system to tell me what the thought process was behind it.

(34:46):
And these capabilities exist.
If it's an enterprise-level solution you are working on, you can very easily embed it so that you always see how it came up with the solution, what it searched for, what it reviewed, and it gives you all of that analysis on top of just the answer.
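
A minimal sketch of that idea: the system returns not a bare answer but a structured object carrying what it searched, what it reviewed, and the steps it took, so the reasoning can be inspected alongside the result. The field names and contents are illustrative, not any particular product's API.

```python
# Minimal sketch of an "explained answer": the response carries its
# own provenance. Field names and contents are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ExplainedAnswer:
    answer: str
    searched: list[str] = field(default_factory=list)  # queries issued
    reviewed: list[str] = field(default_factory=list)  # sources consulted
    steps: list[str] = field(default_factory=list)     # reasoning outline

result = ExplainedAnswer(
    answer="Eligible staff may take up to 4 weeks of paid paternity leave.",
    searched=["paternity leave duration", "leave eligibility"],
    reviewed=["HR Handbook section 3.2", "Staff Rules 2024, p. 14"],
    steps=[
        "Retrieved the two sections matching the query.",
        "Checked the eligibility clause against the question.",
        "Quoted the duration from the most recent document.",
    ],
)

# The UI can show the answer and, on demand, the trail behind it.
print(result.answer)
for step in result.steps:
    print(" -", step)
```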

SPEAKER_02 (35:05):
So it's not only the answer itself, but also the process.

SPEAKER_05 (35:09):
The process of reaching an answer.
Because that can also lead you somewhere else, exactly, and then you can fine-tune that process.
Maybe the thought process is wrong.
It's the same with humans, right?
If somebody tells you, this is how I approached this problem, you can say, okay, there are two mistakes in your approach.

SPEAKER_02 (35:28):
You started wrong here.
Yeah.

SPEAKER_05 (35:30):
So you just need to shift course.
That's the whole idea behind it: it needs to be able to explain how it got to that answer.
And it exists; it's just a question of how we want to showcase it, and of who controls the technology, to be able to show you that.

SPEAKER_02 (35:55):
Now we can play "true or futuristic."
You have to choose true or futuristic.
True means it's happening right now, it's everyday life, or it's about to happen; futuristic means it's way in the future, it's not going to happen for now, it's almost

(36:17):
sci-fi.
Okay, first question: public dashboards will come with a bias alert that pulses when the data tilts.

SPEAKER_05 (36:31):
I think futuristic, for the fact that I don't think anybody wants to develop it.

SPEAKER_02 (36:40):
A global open source data set will be declared a
digital commons.

SPEAKER_05 (36:48):
Futuristic.

SPEAKER_01 (36:49):
Okay.

SPEAKER_05 (36:50):
Because it requires a lot of collaboration.
And which data set are we even talking about? For what?

SPEAKER_01 (36:58):
So who owns it?

SPEAKER_05 (36:59):
Who owns it? Where is it coming from?
Parts of it are probably digital commons already.

SPEAKER_01 (37:05):
Okay.

SPEAKER_05 (37:05):
But when you look at the whole thing, it may not be, and it may never be.
When you're talking about, say, patent applications, there's a lot of data that has to be kept safe and protected.
That will never make it to the public eye until it's accepted or rejected.

SPEAKER_02 (37:23):
So no.

SPEAKER_05 (37:25):
No, so no.

SPEAKER_02 (37:27):
Citizens will be able to request edits to their
government profile the way thatthey request Wikipedia changes.

SPEAKER_05 (37:36):
Yeah, it's true.
I've seen that happening as well.

SPEAKER_02 (37:41):
Governments will auction unused citizen data back
to the public with royalties.

SPEAKER_05 (37:48):
Ah, I hope that never happens.
So I'll stick with futuristic, and not even futuristic: I hope it never happens.

SPEAKER_01 (37:56):
No, no, a big "no" sign.

SPEAKER_05 (37:59):
They might want to sell it back to the public sector or the private sector for royalties, but I don't think the public is going to buy that.

SPEAKER_02 (38:07):
Every new digital law will be tested by an AI trained on citizen behavior.

SPEAKER_05 (38:18):
Yeah, I would say futuristic.
I've not seen things moving in that direction yet.

SPEAKER_02 (38:24):
Okay.
Public service bots will beevaluated on empathy, not just
efficiency.

SPEAKER_05 (38:34):
That's true.
Yeah, I agree.
It should happen, and I think there are some side ventures training bots to be more empathetic.

SPEAKER_01 (38:45):
Okay.

SPEAKER_05 (38:45):
As well.
And you can also ask it to be empathetic.

SPEAKER_01 (38:50):
Like be kind.
Yeah.

SPEAKER_05 (38:52):
So you can. It's all about how you frame the question.
If you go into any AI solution and say, look, your persona is somebody who's very empathetic, who listens, and XYZ, it will take on that persona.
And it will give you the answer in that empathetic manner

(39:12):
already.

SPEAKER_02 (39:13):
Okay.
Cities will have digital twins that negotiate resources in real time with other cities' twins.

SPEAKER_05 (39:24):
So there are two parts to it.

SPEAKER_01 (39:26):
Okay.

SPEAKER_05 (39:27):
Cities with digital twins: true. It's happening, not yet at a very large scale, but it is happening in a lot of countries, in Asia, Africa, Europe.
The negotiating part, though, is not what digital twins will ever do.
That's not their role, because digital twins are effectively there for us to analyze how services are to be rendered.

(39:51):
It's a real-time visualization of how a city functions.
Negotiating resources? No.
So: yes to the twins, to understand better how the city functions and what services you need to provide where, but no to the negotiation part.

SPEAKER_02 (40:11):
I think that's where you need a human.

SPEAKER_05 (40:13):
That's way too futuristic, if it ever happens.

SPEAKER_02 (40:18):
Okay.
So there will be a constitutional right not to be optimized: you should have the legal right to be left imperfect, inefficient, unprofiled, and unquantified by AI.

SPEAKER_05 (40:40):
I don't know.
I think it has to be futuristic.
But at the same time, there are discussions around this topic, about who should own the data.
Nothing is coming in the near future, though.

SPEAKER_02 (40:58):
Okay.
Public infrastructure will run predictive self-diagnosis and call for maintenance before anyone notices a problem.

SPEAKER_05 (41:08):
Yeah, that's true.
That's already in place.
And it's not even just AI; this is just predictive algorithms.
There are services that run like that.

SPEAKER_02 (41:18):
So there's a pothole here.

SPEAKER_05 (41:20):
Yeah, I wouldn't even go that far.
You have nuclear reactors, for example, which can tell you when the next maintenance is due, and things like that.
So this is already there.
Is it as widespread as it should be? No, because it requires a

(41:40):
lot of other things to fall into place before you have that for public infrastructure.
But that's then connected to digital twins as well.
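
A minimal sketch of the predictive-algorithm flavor he describes, no AI required: watch a sensor's rolling average and raise a maintenance call once it drifts past a threshold, before the drift becomes a visible failure. The readings and threshold are made up.

```python
# Minimal sketch of rule-based predictive maintenance: flag drift in
# a sensor's rolling average before it becomes a visible failure.
# Readings and threshold are invented for illustration.
from collections import deque

BASELINE = 60.0    # expected vibration level (arbitrary units)
THRESHOLD = 1.10   # call maintenance at 10% drift above baseline
WINDOW = 5         # readings per rolling average

def monitor(readings: list[float]) -> int | None:
    """Return the index where maintenance should be called, else None."""
    window: deque[float] = deque(maxlen=WINDOW)
    for i, value in enumerate(readings):
        window.append(value)
        if len(window) == WINDOW and sum(window) / WINDOW > BASELINE * THRESHOLD:
            return i  # drift detected before outright failure
    return None

readings = [60, 61, 60, 62, 63, 65, 67, 68, 70, 72]  # slow upward drift
alert_at = monitor(readings)
print(f"Maintenance call at reading #{alert_at}" if alert_at is not None
      else "All normal")
```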

SPEAKER_02 (41:50):
So they need to work in tandem to have that proper predictivity.
There are a lot of apps coming out, or there were at some point, for citizens to report what needs to be done in the city.

SPEAKER_05 (42:06):
Yeah, that's crowdsourcing.
It's a good concept and it's been there for a long time.
I think UN agencies have also leveraged it; there have been a lot of open source solutions built around crowdsourcing.
So yes, that's in place.

(42:27):
It's all about the fact that a lot of these are city-level initiatives.
If a city is willing to invest in it, report on it, and take action on it, that's when you can make it effective.
I'm from India. I've seen cities where people report and the municipal

(42:49):
services take action fairly quickly, but there are other cities where they don't.
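
The show notes call this closing the loop: a report only builds trust if the reporter can see action being taken. A minimal sketch of a report whose status history stays visible to the citizen who filed it; the statuses and fields are illustrative, not any particular city's system.

```python
# Minimal sketch of loop-closing for crowdsourced city reports: every
# status change is recorded and visible to the reporter. Statuses and
# fields are illustrative, not any particular city's system.
from dataclasses import dataclass, field

STATUSES = ["reported", "acknowledged", "scheduled", "fixed"]

@dataclass
class Report:
    what: str
    where: str
    history: list[str] = field(default_factory=lambda: ["reported"])

    def advance(self, status: str) -> None:
        """Move the report forward one step; jumps are rejected."""
        if STATUSES.index(status) != STATUSES.index(self.history[-1]) + 1:
            raise ValueError(f"cannot go from {self.history[-1]} to {status}")
        self.history.append(status)

pothole = Report(what="pothole", where="Main St & 3rd Ave")
pothole.advance("acknowledged")
pothole.advance("scheduled")
pothole.advance("fixed")
print(" -> ".join(pothole.history))  # the visible loop, start to finish
```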

SPEAKER_02 (42:53):
Okay.

SPEAKER_05 (42:54):
Not just in India; even here, or in the US for that matter, nothing then happens.
Okay.
And then people don't report.

SPEAKER_02 (43:01):
Yeah, because they feel that they're not being
listened to.
Exactly.
Yeah.
So that's lost.
Yeah.

SPEAKER_05 (43:06):
Yeah, exactly.
The crowdsourcing concept is not new; it's been there, and a lot of good products have been built on it.
To stretch it a little bit: the whole concept of open source.
Any kind of open source technology is community-driven.
Anybody can go in, log in, suggest improvements, and the

(43:29):
community will review those improvements and accept them.
So, from the technology field: everyone, for everyone.
Exactly. So those communities exist as well.

SPEAKER_02 (43:42):
Yeah.
Final question now, for the long view, to reflect: if you had the chance to influence the next generation of public AI and data systems with a single idea, not a feature but a foundation, what principle would you build into the core of every design?

SPEAKER_05 (44:05):
I think it's what we've discussed earlier as well.
I would make sure that we have the human at the center of it, and that we are very clear from day one why we are doing it, what the objective of it is.
That has to be the core tenet.
Whatever solution is being developed needs to be for the

(44:27):
public good, needs to be for the people, and somebody needs to have talked to them and listened to them before designing it.
You need to have that human-centric approach to this whole thing.
Again, as I said, it's not a new concept, but it is something we have to keep honing and keep reminding ourselves of, so

(44:50):
that we don't forget.

SPEAKER_02 (44:52):
Pushing it in every second.

SPEAKER_05 (44:53):
Exactly.
It's the power of repetition.

SPEAKER_02 (44:55):
Yes, that's how you learn.
Thank you so much for your time. Thank you for sitting down with us.
Thank you for having me, and for talking about how AI can be used for the public good, and how we cannot lose the human.
That's the main takeaway, I think, from this episode: the human is the key to every technology.

(45:19):
So technology doesn't just shape what we do; it reflects what we believe, what we protect, and what we dare to imagine.
Thank you for reminding us that data and AI are not just about systems or speed.
They're about choices, values, and the invisible architecture behind every decision that affects our lives.

(45:42):
Because building for the public good means asking better questions, designing with care, and always making space for what doesn't fit neatly into the model.
The future is not something we wait for; it's something we shape one decision, one system, one conversation at a time.

SPEAKER_05 (46:03):
I think it was a great conversation.
It's an important topic, and let's hope people keep remembering that and continue doing the good work.

SPEAKER_01 (46:14):
Thank you, thank you so much.
Thank you.

SPEAKER_00 (46:20):
Thank you for listening to Intangiblia, the podcast of Intangible Law: plain talk about intellectual property.
Did you like what we talked about today? Please share it with your network.
Do you want to learn more about intellectual property? Subscribe now on your favorite podcast player.
Follow us on Instagram, Facebook, LinkedIn, and Twitter.

(46:41):
Visit our website www.intangibia.com.
Copyright Leticia Caminero 2020. All rights reserved.
This podcast is provided for information purposes only.