
February 14, 2025 43 mins

Is AI just another tool, or is it transforming into something much bigger? In this episode, Daniel Manary sits down with Atif Khan, Chief AI Officer at MessagePoint, to explore what it really means to be an AI-first company, how AI is evolving into a cognitive companion, and why the future of AI lies in multi-agent systems rather than single-task automation.

With over 25 years of experience in AI and data science, Atif has led the development of transformative AI platforms in Customer Communication Management (CCM) and Customer360 solutions. He also mentors startups at Communitech, helping them scale their AI capabilities. If you’re a business leader, AI practitioner, or just curious about where AI is headed, this episode is a must-listen!

🔑 What You’ll Learn in This Episode

📌 AI in Business: From Hype to Reality

  • ✅ Why AI is not just automation—it’s an extension of human cognition
  • ✅ The biggest misconceptions about AI in the workplace
  • ✅ How AI should be measured for real business impact

📌 What It Means to Be AI-First

  • ✅ AI-first is more than a buzzword—it’s a mindset shift
  • ✅ Why integrating AI into workflows is more about strategy than technology
  • ✅ The key difference between using AI and rethinking business operations around it

📌 The Future of AI: Multi-Agent Systems & Cognitive Companions

  • ✅ Why the future of AI isn’t a single assistant—it’s teams of AI agents working together
  • ✅ How AI is evolving from task automation to thought processing and decision-making
  • ✅ Practical use cases of multi-agent AI systems in business

🔗 Resources & Links

🚀 Enjoyed this episode? Leave us a review & share it with a friend!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
MessagePoint is redefining itself as an AI first organization.

(00:04):
That means how we look at data is different.
How we look at technology is different.
How we solution for things is different.
Right.
And when you do that, you realize that you can actually go further.
You can conquer a taller mountain.
There's a misconception that agentic thinking, even if the term is newer, is something new.
Right.

(00:24):
Yeah.
It's not actually, if you look at AI research over the last couple of decades,
multi-agent systems is a well-established research field within AI.
Welcome to Artificial Insights, the podcast where we learn to build AI
people need and use by interviewing product leaders who have launched AI products.
I'm your host, Daniel Manary.

(00:46):
And today I'm joined by Atif Khan, the chief artificial intelligence
officer of MessagePoint.
I met him through a local meetup group dedicated to generative AI, where he
has been leading it and given us all a chance to explore deeper with him.
And I think you'll be excited to hear how they've used

(01:07):
generative AI at MessagePoint.
Atif, could you introduce yourself to our listeners?
I'm Atif Khan.
I'm a chief AI officer now at MessagePoint.
But I've been in the AI space for a long time.
I organically grew in the industry as software developer, computer scientist,
and then went back to university to work on my grad studies to understand, not

(01:29):
just the engineering part of things, how software systems are put together,
but also what's the science behind it.
And during that part of it, I had done software engineering for a long time.
I was like, okay, now let's do something different.
So got involved in AI in a capacity that was like, okay, so this is what I want to do now.
And as part of my journey, I literally wanted to teach machines how to think.
So "machine whisperer" is the term I used a lot of the time for my

(01:52):
initial PhD journey.
But then what I realized was that I'm also an applied AI person.
For me, it's not about just learning the science or learning the theory.
It's actually making things happen, building things.
So the engineering part of me is more dominant.
And as a result of it throughout my career, I've built, worked with young

(02:15):
companies, I've built things, built AI platforms, built big data platforms.
Made a lot of mistakes and as a result of it, like I think I've been, I've become
wiser in terms of like how to engage with technology.
So I've also advised at Communitech as a data growth coach with the, with young
companies, like just trying to figure out like how do we even engage with technology.

(02:36):
So this different hats that I wear, but in the end, I'm just a technology enthusiast.
I like building things and I like to understand things.
For me, building requires foundational understanding, so that I can break it and put a
technology together again, so that it's now in line with my intentions, in line

(02:57):
with what I wanted to do, and not just engaging with it as an audience, right?
Yeah.
I've interviewed a number of people on that spectrum for the show before, where
some would rather think about the requirements and how people are going to
use it, and some would rather think about how it's built and then move towards

(03:17):
how it's going to be used, and for me, I really appreciate that. I need to know both.
Like we're going to break it apart and put it together.
I think we lived in that framework where we were either one or the other.
We were either scientists that we knew, we just thought about theories and thought
about algorithms and, but never actually implemented them at scale in industry.

(03:39):
And, or we were just in the industry.
It's like, ah, this algorithm is too difficult.
It'll take forever just to complete.
We're not implementing it.
Right.
And now we're at a point where technology has allowed us
to do things bigger, better, faster in the industry.
I think we need to know both.
Even with my teams, there's heavy requirement and influence that you need to

(04:01):
foundationally understand what you're building, but also you need to understand
how it changes lives, how it becomes tangible.
Right.
And one is not complete without the other.
Right.
It just doesn't happen that way.
Could you give us an example of one time that's happened for you?
Yeah.
So in my company, I normally hired through, let's say, the different
master's programs around data science and machine learning.

(04:23):
So the idea is that, okay, so these good kids will come in and they'll work with
us for a while, four months, eight months type of stuff, and then they'll move into
full-time employment type of stuff.
Right.
And then I see this very clearly where this divide now happens.
So a young person, I think she's in her first week at work or something like
that, and I've given her a classical problem: could you go build a

(04:47):
classifier for me?
And literally a couple of hours later, I think I realized that she was crying.
And I'm trying to understand what just happened.
And so I talked to her, I said, what's going on here?
She goes, my classifier is not working.
Right.
And then I realized the mistake she was making: she was literally
taking the classroom activity and trying to apply it to how she was going to build a

(05:09):
classifier.
The data was noisier, there was different techniques that we had to apply.
And she wasn't exposed to that.
So she felt that she wasn't prepared well enough or she was missing something.
Right.
And for me it's like, no, what you're doing is correct.
You just need to do it slightly differently.
Just put on this other lens: don't expect all your data to be deduplicated, all your

(05:31):
data to be like nice and clean.
Just wear the different lenses and look at the data differently and you'll be fine.
Right.
And that has stayed with me for a long time, because it wasn't just
a one-person thing.
Like I think it's a shift when we train people, how they're learning and getting introduced
to that technology and when they're applying the technology, there's definitely a difference.

(05:54):
And sometimes people are smart and they'll just jump over that bridge really fast.
Sometimes it gets stuck there.
And I think learning both sides of like how something's supposed to work and how you
apply it could be different.
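To make the classifier story concrete, here is a minimal sketch (not from the episode; the data, labels, and function names are illustrative) of the kind of preprocessing a classroom exercise skips but real-world data demands: normalizing and deduplicating noisy text before it ever reaches a classifier.

```python
import re

def clean_examples(raw_examples):
    """Normalize whitespace and case, drop empty rows, and deduplicate,
    keeping the first occurrence of each normalized text."""
    seen = set()
    cleaned = []
    for text, label in raw_examples:
        norm = re.sub(r"\s+", " ", text).strip().lower()
        if norm and norm not in seen:
            seen.add(norm)
            cleaned.append((norm, label))
    return cleaned

raw = [
    ("Refund my  ORDER", "complaint"),
    ("refund my order", "complaint"),   # duplicate after normalization
    ("   ", "spam"),                    # empty noise row
    ("Love the product!", "praise"),
]
print(clean_examples(raw))
```

Only after a pass like this does the "classroom" training step behave the way the textbook promised.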
Another example I use is like clustering is a common enough technique that we all learn
in school and we do clustering with dots, with points.

(06:17):
Right.
And I was working with content that had, it was some sort of document that had bounding
boxes around different things.
So it was no longer points.
It was rectangles.
Right.
So now I think the traditional algorithm broke down a little bit because if you just do points,

(06:37):
then rectangles start to overlap and all hell breaks loose.
Right.
So you need to just change your mindset a little bit.
Okay.
So what does it mean if it's no longer a point, if it has something, if it has a 2D shape,
for example, how do I cluster it now?
Right.
It's not that the fundamentals of clustering are changing; the application now has to be much more
informed, and then you make those variations.
Right.
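One way to make that mindset shift concrete is to swap the distance measure. This is a hedged sketch (assumed, not Atif's actual implementation): instead of point-to-point distance, use box overlap (intersection over union) and group rectangles greedily.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def cluster_boxes(boxes, threshold=0.1):
    """Greedy single-link clustering: a box joins the first cluster
    containing any member it overlaps above the threshold."""
    clusters = []
    for box in boxes:
        for cluster in clusters:
            if any(iou(box, other) > threshold for other in cluster):
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return clusters

boxes = [(0, 0, 10, 10), (5, 5, 15, 15), (50, 50, 60, 60)]
print(len(cluster_boxes(boxes)))  # the overlapping pair groups together; the far box stays alone
```

The clustering idea is unchanged; only the notion of "close" was adapted from points to 2D shapes, which is exactly the variation being described.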

(06:58):
That's amazing how little actually has to change to just say you're now in contact with
the real world and how you built it before isn't how you need to build it now.
Yeah.
Yeah.
Okay.
And one question that I think is on everybody's mind for a podcast like this is, are you an
AI?
Sorry, let me ask that again.

(07:21):
Are you an AI?
No, I'm not an AI.
However, if you look at my name, my first two initials are AI.
So it's AIK, AI Khan, and this is a joke that my daughter unearthed or uncovered.
It's like, dad, do you know that AI is in your name?
It's like, what do you mean?
And sure enough, so she showed me my name and I had not even reflected on this before.

(07:42):
Right.
But my middle initial is I, so literally it's AI Khan, right?
But I'm not an AI.
No.
That's good to know.
I feel like it's close enough to just be referenced like that with your daughter.
That's hilarious.
Last year, she started her own AI initiative.
So the journey was actually interesting because ChatGPT showed up and she was not allowed

(08:04):
to touch ChatGPT at school.
And I wanted her to learn this because I think from my perspective, it was like, you're not
avoiding this.
You have to learn this.
So she was caught in dilemmas.
Like, I'm not allowed to use it at school, but you want me to use it.
So now help me understand it.
And then when she uncovered the potential, she's like, literally her question was, but

(08:26):
my friends are not benefiting from it.
So the idea showed up in her mind: let's create a youth-led organization that's
teaching AI to youth.
And then she founded Youth Tech Labs, where I brought people in from industry, from
academia to teach young kids what AI is all about and the different aspects of it.

(08:48):
Right.
So it ran successfully last year.
They're now into the second year.
They have engagement from local companies, who are sponsoring
the five teams that were chosen last year to build their ideas into products.
It's incredible how grade 11, grade 12 kids are making an impact in the real world with
this technology.
Right.
Yeah.

(09:09):
Wow.
That's super cool.
And even just to start an initiative like that and to get it supported is amazing.
She has surprised me nicely.
Yes.
I thought she was going to run away in the other direction.
Like, no, it's not for me.
Right.
No, but she has literally taken the bull by the horns and was like, let's make this my
own.

(09:29):
Wow.
That's amazing.
One other question that came to mind when you were introducing yourself is that title,
CAIO, Chief AI Officer.
And I know that's a recent thing for you.
So I'd be interested in knowing what does that mean?
What is it, Chief AI Officer?
And then how did you get there?

(09:51):
I think it was a natural progression to the journey that I was already on.
So initially, my focus on AI was either research or build a product.
And then as we got more comfortable with AI and playing with it, it dawned not just on
me, but I think on all of us, that AI is here to make an impact, not just in your product

(10:16):
or in your research part of it, but it's actually transformative enough to change everything.
So how you do your daily work, how you think about solutions.
For example, if you're sitting in 10 different meetings in a day, how do you actually consume
that content out of those 10 meetings at the end of the day?
So it was pretty clear that changing the product, changing the research is definitely the starting

(10:39):
point, but changing the entire organization is where the right bang for your buck is.
So everybody has to be somehow AI enabled or has to start thinking about moving away
from the horses and start thinking about the cars, right?
That type of thing.
So it needs to naturally come in into their way of existence, their way of thinking, their

(11:02):
way of working.
So now the emphasis is, okay, so now that we've successfully built this thing for the
last few years and taken to production, how can we go the next mile now and take the next
step where we take the organization to the next level?
So as part of that conversation, I think that the idea behind Chief AI Officer, and it's

(11:23):
not unique to us.
I think a lot of other people are thinking about it in the same way as well.
It is still new, it's still fresh.
And sometimes I think, why not a CEO?
Why Chief AI Officer?
And I think in my mind, the idea is that AI is not just a technology piece, it's not just

(11:43):
a new computer, it's not just an email.
It has this cognitive ability behind it as well that we'll have to deal with.
It has an impact both technically and from a society perspective, from an organization
perspective, from how we use it.
So there's ethical issues, there's responsible AI issues, things like this.

(12:05):
The picture is quite complex, right?
Once you've built something and you lift up your hands like, okay, so where am I now?
And then you realize that these are all these things that you didn't care about before.
You now have to figure out how to care about them, how to put it in a way so that governments
don't fine you, how to put it in a way so that it's aligned with the right way of how

(12:26):
society is moving forward.
I think those are concerns that we will have to figure out answers for very quickly, right?
And as part of this new role, I think my shift is now to start thinking about these things
as well and start engaging with them.
I'm talking to you, or I'm part of different organizations locally.
So the Chamber of Commerce, for example, is a good place.

(12:47):
I'm involved with the Canadian Chamber of Commerce as well as the local Chamber of Commerce
here in KW.
But just try to figure out like, okay, so what is it that we're all trying to do?
Like kind of how do we move forward?
And it's more than building, right?
So just literally in my mind, that is that role, the bigger picture and then the ability
to be able to draw that bigger picture for other people.

(13:07):
And has that resulted in changes in the company already?
Yeah.
So I think within MessagePoint, for example, we've been playing with AI for
almost eight and a half years.
I came back to MessagePoint roughly eight years ago, just to establish the AI practice.
And literally at that particular point in time, it was just another augmentation of
our product offering.

(13:29):
And now it's AI first.
MessagePoint is redefining itself as an AI first organization.
That means how we look at data is different, how we look at technology is different, how
we solution for things is different, right?
And it's not always just throw more people at it; it has to be right, figure out

(13:49):
the right optimizations.
And when you do that, you realize that you can actually go further.
You can conquer taller mountains just because putting the right foundation helps, I think.
Right?
So AI first definitely helps.
But it is a mindset that is not there yet, and it requires some sort of twisting and reimagining,

(14:11):
and it's like, how do I even think about these new things?
And especially hard if you don't even understand those new things, right?
Maybe that's a good place to spend some time then is what would you say it means to be
AI first?
And what's maybe one example or way that's worked out so far to twist the way that people

(14:31):
think about it?
Well, I think people will define AI first differently.
Let me start.
The first step for me for this journey is that when I look at AI, from my perspective,
it's an artistic expression of cognition, right?
So that means when I'm thinking, can I take that thinking and have machine reproduce it,

(14:54):
have machine enhance it, right?
So the way people paint and the way people write books is literally just
that, but it's bigger and it's better, right?
So starting from there, if you can understand that it is that thing, so it is an expression
of human cognition, an extension of human cognition, it allows you to take expert knowledge,

(15:15):
capture it and reproduce it on demand.
I think once you understand that, then at least from my perspective, what it means to
be AI first becomes very apparent, right?
So you need to take that concept of taking things that you were doing before that required
human level involvement, human level tasks.

(15:35):
Can we potentially look at a solution that is now machine first, right?
Understanding where there's a difference between the AI muscle and the physical muscle that
we have, in terms of a person can't possibly consume thousands of documents in a finite
amount of time, where the machine can, right?
So understanding like, okay, so if you're developing a new solution or if you're looking

(15:58):
at a new strategy, leveraging that part of the machine that can actually do that heavy
duty lifting, the machine flexing that muscle is much easier, much better, but literally
also marrying the two together so that one is not independent of the other.
So the machine is some kind of muscle, the human is getting benefit from the muscle.
The machine is not just flexing that muscle just for the sake of, hey, I'm a cool AI machine

(16:20):
and I can read a lot of documents or whatever, right?
There has to be a need, a use case that we're trying to solve.
There has to be a purpose behind an activity that we're trying to do.
And so, with the marriage of the two, the human being is now able to do things differently just
because they're thinking about problems differently.
For example, if you're trying to figure out whether

(16:44):
all the pages across documents are similar, you're not just printing
them, putting them on a table, and looking at them physically. The AI-first strategy will
now require you to say, okay, can I just have the machine label those pages for me, and then
I can go back and exert my control or my opinion on it and talk about it, right?
So just thinking differently, but understanding that AI allows you that cognitive ability

(17:06):
that we didn't have before and apply it and express it in a way that we didn't know how
to do before.
And is there something you've implemented so far at MessagePoint that is, that enables
that in a new way?
So the business that MessagePoint is in is customer communication management.
And what that really means is there's a lot of text that people generate using the MessagePoint

(17:29):
platform.
Business communication is not just the text that shows up, but also the
images and how it looks.
There's a lot of effort that goes behind the scenes in terms of choosing the right language,
phrasing it correctly, putting it together on a piece of paper so it looks the right
way.
How do you take that?
Let's say it's a piece of paper, a two-page print piece.

(17:51):
How do I take that to an email so it's not losing its impact?
And then more importantly, I think, as a communication company, multiple languages: the ability
to make people functional even if they don't speak a given language, for example,
so they can still understand what it is that they're working on, right?
It's the sentiment.

(18:12):
So there's a lot of pieces that, over the last six, seven, eight years,
we were able to bring to the table, where the CCM pieces are now foundationally different.
One of the big things that we did was around this ability to be able to look at content
and tell you why these two things are different, regardless of language, or look at translations
and tell you, explain to you why this is a good translation, why this is not a good translation,

(18:34):
right?
Wow.
There are the normal, usual suspects.
The machine creates some content; in a regulated industry, maybe the machine does not create
all the content, but maybe it morphs content from an approved template, right?
And then the human being goes back and looks at it.
But this translation was one of them: the machine can do the translation.

(18:55):
Sure, that's fine.
But then the extra mile for me was literally like, once we do the translations, we have
human beings looking at the translations to figure out whether they're good or not.
Why don't I just train a machine to do that?
Right?
It was just a natural extension.
Part of it was literally just looking at CCM use cases and say, understanding how we can
change them.
And then part of it was, now that I can change them, let me just go the extra mile as well

(19:17):
and just do this other thing that we didn't think was possible before.
Right?
And I think maybe that's where knowing how it's built and how to take it apart is the
most beneficial.
And what would you say are some things that we should be aware of and keep in mind when
we're looking for those AI specific use cases?

(19:39):
In the end, AI is just an enabler.
Right?
I believe if you take an AI stamp and stamp on top of your product and say it's an AI
product, your product is not a new product.
It's not a different product.
It's not a better product.
AI is an enabler.
And I think we need to understand how the AI piece matches with what is

(20:00):
it that you're trying to do with the use cases?
What is it that you're trying to do?
Right?
To find success with AI, I believe that is the key thing.
If you miss that, you miss out. A lot of us are not going to be producers of AI as a product.
So OpenAI is a producer of an AI product.
Google is a producer of AI as a product.
Most of us are not going to be producers of an AI piece or offering, but we will be

(20:23):
enhancing our own offerings using the application of AI.
Right?
And I think that's where the magic lies.
So not just putting AI stamp on top of it, but understanding how do we do these things
differently, bigger and better.
Vision is important.
And vision is both from the AI perspective and also: what is it that
I want my business to do?
What is it that I want my solution to do?

(20:45):
Right?
And then with vision comes strategy.
So I want to be able to do this, but this is how I want to execute.
And strategy is where the AI influence can come in a little bit more as well.
Because you can now have a different strategy that you couldn't possibly have before because
it just was not feasible.
Right?
I think you mentioned experimentation.
So playing with it, understanding it, breaking it, putting it back together again, more importantly,

(21:12):
knowing its limitations and understanding those limitations will keep changing.
So whatever you've learned today is evolving rapidly enough that it's not going to stay
stuck there.
And then augmentation, put it on your thing, try to take existing use cases, existing people
workflows, enhance those, make it part of you.

(21:34):
Like spell check, put into the word processor: let people write whatever they want to write,
but let the spell check figure out dynamically, on the fly, when people have done something
wrong.
And then the last part is literally implementation.
In the end, if you've played with the technology, researched with the technology, but you haven't
benefited your core use cases, your core product, and you've not implemented, not taken to production,

(21:58):
it's kind of game over.
You have not actually benefited from it.
But the idea that this new thing can make real difference once you align it to what
you're trying to do, I think that's the foundation of it all.
And it sounds like finishing is one of the important things.
Finishing is definitely one of the important things.

(22:20):
Yes.
What advice would you have for a leader who says, we've built something, but now we really want
to be AI first?
Let me maybe give you a slightly different example.
So let's say I give you a thousand numbers to add or a million numbers to add.

(22:41):
Adding numbers is an easy task.
We've all learned it.
We're all more than capable of doing it.
Certain people will literally just take the thousand numbers and start adding them manually.
Certain people will just say, okay, maybe let me find some patterns in the data and
then maybe I can use those patterns in data.

(23:02):
And this comes with experience.
The more you know about your industry, the more experience that you have, the better
you can do this.
Certain people will say, okay, let me maybe just go to the tooling.
So I'm going to pull out Excel and I'm going to put all those numbers in Excel and get
Excel to do it.
Why should I do it?
We're very comfortable with the Excel world today.

(23:23):
So going back to AI first: if you talk to a lot of people and say, here's a thousand
numbers, can you add them?
I think it'll be very hard for us to find somebody who would just sit down and start
manually adding them together.
A lot of people will take you to the Excel level, right?
Which is good.
That means they understand there's a technology that exists that can potentially do this for

(23:44):
you and they can go and leverage this technology.
The AI step is literally: instead of me looking at those thousand numbers and even importing
them into Excel, why don't I just show a picture of those thousand numbers to the machine and
ask the machine what the sum is, right?
And that's the mind shift that we need, where generative AI is Excel-like, but better, where

(24:05):
it's taking away the need for you to learn the technology pieces as well.
So you still have the business problem.
You still have a determined outcome that you're looking for.
Today we're so used to addressing it by applying tooling to it.
And we've spent a lot of time learning that tooling and figuring out what the
right tooling is.
Generative AI, the next level of that transformation, is literally like, well, it

(24:28):
also knows your tooling, right?
So potentially it can look at those numbers.
It can write a Python script in the background, add all those numbers, give you the
Python script so you can validate that the answer is correct, as well as give you the
answer in real time, right?
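The script an LLM hands back alongside its answer could look something like this hypothetical sketch (the function name, regex, and sample text are illustrative, not from the episode), which a human can run independently to validate the sum:

```python
import re

def sum_numbers_in_text(text):
    """Extract every number (including decimals and negatives)
    from free-form text and return their total."""
    return sum(float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", text))

page = "Invoice totals: 120, 89.50, and a credit of -9.50"
print(sum_numbers_in_text(page))  # 200.0
```

The point is not the script itself but the workflow: the machine does the extraction and arithmetic, and the human keeps a cheap, inspectable way to check the result.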
So in my mind, AI first approach is literally a mindset shift, which is now looking at AI

(24:53):
as a more of a tooling that can interact with you at a cognitive level.
They can understand your needs at a cognitive level.
That means solutioning should change accordingly as well, right?
You should not be limited to, let me go back to Excel.
You should now be entertaining:
can I just take a picture of these thousand numbers and send it to my LLM, for example,

(25:15):
right?
It's going to take some time.
I think if I can use an example from my own life, my grandfather never trusted the bank
tellers, right?
My father never trusted online transactions.
Like if he always wanted to pay his bills in the bank itself, he didn't want to do it

(25:38):
online.
And I've never seen a bank teller or paid my bill in the bank.
I just do it online, right?
And I'm sure there's a whole generation in China, for example, that doesn't even know what banks
are.
There are entire transactional ecosystems on WeChat.
So I think it's a mind shift in terms of our change that is required to say, how do I look

(26:00):
at the problem differently, right?
And not just using the words, that we're using AI or we hired a whole bunch of PhDs
and now we're AI.
It's literally looking at the problem differently that describes or defines you as
an AI-first company.
The application comes after, like once you can see it, once you can imagine it, it'll happen,

(26:20):
but it's the ability to be able to say, I'm not using Excel anymore.
Why can't I just show this machine the numbers and have it added together?
That's the AI first mentality, right?
And it's not very common right now.
It will become common as more younger kids show up in workplaces.
I think the internet went through the same thing as well, where a lot of people didn't know,

(26:41):
when you connect the information and make the cost of accessing that information
really cheap, what the benefits of it are.
It took us a little while to realize what we can do with it, and all kinds of stuff showed
up.
Napster, I think, taught us how to make the cost of content delivery cheaper,

(27:01):
right?
For good or for bad.
And now you have these companies that are literally that: Netflix, for
example, is a content delivery company, right?
Spotify is a content delivery company.
But I think with generative AI, for example, and with the AI-first approach, we're literally
at the cusp of similar types of transformations. So what is it now?
Now that we can make cognition cheaper and easier, what is it that we can do with it now?

(27:26):
Right?
And the solutions will come.
But it's getting into that frame of thinking, like getting into the mindset of like this
is how I'm going to address this.
This is how I'm going to look at things.
That's what AI first means.
And does that mean, then, that starting with helping the people in the company to think AI first
about their problems is a precursor to actually making a product that has the AI in it that

(27:49):
solves a problem?
So I think you're absolutely right.
And then as part of my new role, I think that is definitely one of my mandates, where every
organization within MessagePoint has to become AI first, first by learning about
AI, and then by trying to solve their problems and do their day-to-day activities
using AI if possible.

(28:09):
So that means that we not only have to teach them what it is, but also enable them by giving
them access to the right tooling, setting boundaries in terms of how we're expecting
them to use it.
I'll give you a more common example.
A year and a half back, we were talking about Copilot for coding for all our coders.
Right?
And we found uses like having Copilot create unit test cases automatically based on

(28:32):
the code that people have written, or having it do code reviews so that it can find things.
I think that was just a natural application of it.
Right?
But that meant that we had to provide Copilot to everybody, each and every one of our developers,
as a first step.
Then the conversation was, do we show them how to use Copilot?

(28:53):
And then there are different schools of thought.
One is like, I'm going to tell you exactly this is how I'm expecting you to use it.
My take on that was slightly different.
I said, no, I think everybody, every developer's journey is going to be slightly different
with Copilot.
I don't want to influence that journey; I don't want to stop them from
learning something that they would have learned on their own.
I want that journey to be organic with the realization now that my expectation from the

(29:16):
developer is that they'll be able to do more because they have this tooling.
So for example, if they're automatically generating the unit test cases, then they will be spending
less time writing test cases by hand.
We'll be spending less time doing code reviews as a team, for example.
So I had to change my expectations.
And then in return, they had to figure out: if I don't use the tool, I can't meet these

(29:41):
expectations.
Ooh.
Right?
So it almost became a necessity.
It's like, if you want to go there and if you want to play in the big boys' park now,
you have to have different equipment.
And it worked out really well.
I think people didn't feel like they were pressured to learn something.
I think everybody's journey became their own.
But at the same time, we all agreed what the desired outcome was as an organization.

(30:06):
These are the things that we're looking for improvements on:
fewer code reviews, more robust code that has better coverage from a testing perspective,
and all that stuff.
And then these are things that we're concerned about today anyway, and we were just doing
them manually before.
Right?
Yeah.
Sure.
All things, importantly, that you can measure as well.
You can definitely measure each and every one of those on a daily basis with every commit.
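As a hedged illustration of the CoPilot workflow described above: given a small function, an assistant might draft unit tests like these. The function (`word_count`) and the tests are invented for this sketch, not MessagePoint code.

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words (hypothetical code under test)."""
    return len(text.split())


# Tests in the style an AI assistant might generate from the function above,
# covering the normal case, the empty case, and irregular whitespace:
def test_simple_sentence():
    assert word_count("the quick brown fox") == 4


def test_empty_string():
    assert word_count("") == 0


def test_extra_whitespace():
    assert word_count("  spaced   out  ") == 2
```

Run with a test runner such as pytest; the point is that edge-case tests a developer might skip come essentially for free, which is what shifts the team's expectations.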

(30:28):
Yes.
Oh my goodness.
And I think you mentioned just before that we're essentially at the cusp of figuring
out how we can provide a lot of value in society with AI, similar to how the internet
made it cheap to deliver content.
So I was interested to know, what do you see as that future?

(30:52):
What do you see as potential big opportunities for value?
I think if I generalize that question, maybe it's: where is AI heading?
Is that okay?
Yeah.
It is hard to predict where it's heading.
Right.
But at the same time, I think there's patterns that we can see and those patterns are tangible.

(31:13):
The patterns are extremely real.
It's changing at a rapid pace.
So there's no denying that.
That means it's getting exponentially better, day by day.
Right?
We're going to get into a world where it will be around us all the time.
So for example, if you're sending a text message, it will just be there.

(31:37):
Like these days, for example, you don't have to build an emoji.
You don't have to figure out the exact text combination.
The emoji just shows up as you're writing stuff, right?
It's changing fast enough.
It's going to be everywhere.
That means we need to figure out what the rules of engagement are.
What does governance look like?
Right?
If I'm using AI as a student versus if I'm using AI as an employee versus if I'm using

(32:01):
AI as a doctor, what does that mean?
And I think it's going to be different for different people.
But with AI, especially the way things are shaping up, if people believe
that it won't be there, or that their businesses or their lives are not going to be impacted
by it, I think they're living in a bubble that will burst pretty soon.
I also believe that, unless maybe somebody's very sick, I'm sure that

(32:31):
most of the people that you'll meet today will have their cell phone on them.
Okay?
It's a very capable device: a device that records your communications,
a device that may potentially be constantly listening to you, a device that knows exactly
where you're going.
The extension of that, when you contemplate that AI is going to be everywhere,

(32:53):
is literally the digital you, or a digital twin with AI.
So I used to spend a lot of time gaming.
That means spending time in worlds that are not real: walking around those
worlds, making transactions in those worlds, fighting battles, and all that type of
stuff.
Right?

(33:14):
I think about the natural world, the physical world, and the digital extension of it;
by that I mean what you can potentially do.
So if you go to a market, maybe you can buy and sell things differently, better.
Maybe you're informed about what is available in that market through your digital twin in
a much better, much deeper way.
Your transactions could be different.

(33:35):
If you want to inspect something, like you're interested in buying something, maybe that
experience is now different as well.
But this idea of the digital twin shows up both from a consumer perspective and also
from, for example, a research perspective or a simulation perspective.
As an engineer, the importance of simulation was basically drilled into me.
Like, you don't put engineering solutions out there without doing simulations.

(33:59):
But I think this ability to be able to create new universes, create new worlds on demand,
guided and constrained by what we believe is possible.
I think that's going to be a reality around us.
In all of that, it's the technology part that really excites me, but I think the governance
part is also important.
The ethics of how we integrate AI into our daily lives.

(34:19):
And then in the end, it's going to be a complex journey.
Right?
So we have tooling like this today.
We have cars, but we have regulations around cars.
Cars make us mobile enough that I can go and still report to a company in Toronto,
for example, right?
But at the same time, a car can also be dangerous.
Right?
And you can see a lot of that.
You can say the same thing about a microwave, right?

(34:41):
It just makes my life experience better, but it could also be deadly if not used right.
Right?
I think we're going to learn very fast that it's going to be everywhere.
Our existence is going to be enriched by it as well, but we need rules around
how to engage with it.
Very fast, very quick.
I like that point about governance.
I often hear it in a stuffy corporate way where it's more abstract, but to say that

(35:05):
it actually is governance that's going to decide how we use it as a society on a daily
basis is, I think, the more beneficial perspective, but not as common.
I think AI will maybe make the term governance less of a dirty word; it'll make it more
real for people.

(35:26):
Like, if I know the rules, and I know why they exist, then maybe it's
not just, I'm not a corporation, so I don't have to worry about it.
If you're driving a car, you still have to obey the laws, right?
You still need to know that when the light's red, you should stop.
When there's a pedestrian crossing, you should let the pedestrian go.
I think governance will become more tangible, more understandable, and less complex, because

(35:50):
the term itself is overused.
There's this thought that maybe it only applies to legislation, or maybe it only applies
in the corporate world; I think it applies now in our daily lives as well.
People will just understand it better.
And as a result of it, we have social norms: social niceties, social ways of engaging
people.

(36:10):
I think similar types of things will show up for AI as well.
And last question, was there anything you'd like to share about what you're working on
now or how people can connect with you and follow you?
Sure.
I'm going to say, if the answer is not generative AI, I'm not doing my work
properly.
So for the last two, two and a half years, I've been heavily involved in different parts of

(36:33):
generative AI, which is thinking bigger and better with generative AI.
So it's multimodality, it's agentic workflows.
So literally trying to grasp how and where this beast is heading, right?
And then trying to stay on top of it to be part of that journey and not fall off, right?
So there's a lot of time we're putting into reimagining our product, reimagining

(36:56):
our solution space, being AI first, as we discussed.
I'm going to say this literally consumed me over the last two years.
Mixed in with that, I'm also trying to understand the tangential aspects or concerns
around bias, around what legislation is now involved, and what it means to integrate

(37:17):
AI into somebody's life so it can be impactful, right?
So those journeys are becoming more real as well.
But a lot of time is literally just spent on the AI journey.
If you want to get in touch with me, I'd say the best way is to show
up for one of my peer-to-peer AI sessions.
For me, I think it's incredible how people from different backgrounds have shown up

(37:40):
and built a little community, and then asked and addressed and reflected on questions that
just needed to be talked about, right?
And if you want to reach out to me and make an impact, there's no better way than to just
show up for one of those sessions, take part in the conversation, and be part of that
community, right?
LinkedIn is good, but again, I think for me, an organic way is to just reach out through

(38:05):
Communitech, for example, or through the generative AI peer-to-peer sessions.
Awesome.
What links are those, for the people who are local?
And if you have one minute, I was curious to hear what you thought about agent use cases.
Is there, I don't know, some insight you could share on that?
Yeah, for sure.
So there's a misconception that the agentic way of thinking, maybe

(38:30):
because agentic is a newer term, is something new, right?
It's not, actually.
If you look at AI research over the last couple of decades, multi-agent systems is a well-established
research field within AI, where people have done a lot of good work in terms of how
different AI agents come together, collaborate in real time, experience their environment,

(38:52):
and then make decisions autonomously, right?
What's new today is that the implementation is being done using generative AI; hence
the agentic solutioning, let's say, around multiple agents coming together, but using generative
AI foundational pieces to make that happen, right?
The generative AI part is new, but the multi-agent part is not new.

(39:15):
So, A: first of all, I don't think it's new.
B, I think there's a lot of untapped potential.
If I would just go back and, I don't know, pick up some interesting papers from
the last two or three decades, there are a lot of foundational concepts that I think will
start to surface, where we will figure out that for these agents to communicate, for

(39:35):
these agents to observe, for these agents to do reasoning or reflection on what problem
they're trying to solve, a lot of these things were solved already, just in a different
scope, right?
So it's not entirely new; its implementation is new, but the design patterns that we
developed before under multi-agent systems research, I think that's all applicable.
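A minimal sketch of the classic multi-agent pattern described here, where agents observe a shared environment, contribute, and reflect. All names are hypothetical and the agents' "reasoning" is stubbed; in a generative-AI implementation, each `step` would call a language model.

```python
from dataclasses import dataclass, field


@dataclass
class Blackboard:
    """Shared environment the agents observe and write to."""
    task: str
    notes: list = field(default_factory=list)
    answer: str = ""


class ResearchAgent:
    """Observes the task and contributes a finding (stubbed)."""
    def step(self, board: Blackboard) -> None:
        if not board.notes:
            board.notes.append(f"finding about: {board.task}")


class SummaryAgent:
    """Reflects on accumulated notes and produces the final answer."""
    def step(self, board: Blackboard) -> None:
        if board.notes and not board.answer:
            board.answer = "; ".join(board.notes)


def run(board: Blackboard, agents, max_rounds: int = 5) -> str:
    """Let agents take turns until one of them settles on an answer."""
    for _ in range(max_rounds):
        for agent in agents:
            agent.step(board)
        if board.answer:
            break
    return board.answer


board = Blackboard(task="compare plan A and plan B")
result = run(board, [ResearchAgent(), SummaryAgent()])
```

The blackboard architecture here is one of the decades-old multi-agent designs the conversation refers to; swapping the stubbed `step` bodies for generative-model calls is essentially what makes such a system "agentic" in the modern sense.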

(39:59):
So I sense and I feel that within the next six months to a year, this will explode, not just because
LLMs are better, but because we're still making the connections back to the actual research
that made it happen, right?
If there's an analogous example, it would be like this: we knew neural nets were useful,
it just took us a while to figure out when deep learning became useful and practical; when

(40:23):
backpropagation became real, like within our grasp, these things became more useful,
right?
So I think we just have to connect those dots.
There's a lot of multi-agent research that's already out there that I think will be very
useful, and it's going to show itself within the ecosystem as well.
Are there any use cases that you're currently using agents for?
So MessagePoint is a complex platform; there are different aspects to it.

(40:48):
It's a specialized platform, and this is not just MessagePoint, there are a lot of others,
like medical systems, for example, or banking systems, same thing, where if I want to get
my work done, maybe it takes me 20 clicks, right?
There's a lot of manual, user-driven work; that's how a lot of SaaS systems are used as well.
And what that means is literally that now parts of MessagePoint will be represented by expert

(41:10):
agents that understand that little piece of MessagePoint or the platform itself, right?
And then having those things come together and in real time say, I'm trying to do X,
Y, Z, can you guys figure this out, help me do this, and let me know which button
to push?
That's magical, right?
It changes how fast somebody can become useful in our platform, right?

(41:31):
That ramp-up time goes away.
If you don't know all the concepts around CCM, that's okay, you can still be relevant,
you can still do your work.
If you don't know Spanish, for example, it's okay, you can still do your work, because we
can manage things behind the scenes for you, or the agents can manage things behind the
scenes for you.
But with the agents coming together and working together, giving the answers, you can almost

(41:54):
think of it as describing a new recipe, or discovering a new recipe on the fly,
and then that recipe now gets you a new dish that you didn't think was possible before.
It sounds like thinking of them as mini concierges.
Yes, yes.
That's exciting.
People have tried to build this as one, like, okay, my entire system is just one agent.

(42:15):
I think there's just too much onus, too much prompting, too much knowledge that one agent
has to learn.
There have to be multiple agents, each a little mini specialized version of
something.
And as soon as you do this, you now have to figure out how these things interact,
right?
And this is where I'm saying the multi-agent research that's been done before
is going to become very relevant very fast.
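The "expert agent per platform area" idea can be sketched as a simple router; a real system would use a language model for both routing and guidance, and the areas and replies below are invented for illustration, not actual MessagePoint features.

```python
# Each expert "agent" knows one slice of a hypothetical platform and answers
# with the next button to push; the router picks the matching expert.
EXPERTS = {
    "template": "Open the Templates tab, then press 'New Template'.",
    "translate": "Select the content block, then choose 'Translate' from the menu.",
    "approve": "Go to the Review queue and press 'Approve'.",
}


def route(request: str) -> str:
    """Send the request to the expert whose specialty keyword it mentions."""
    for keyword, guidance in EXPERTS.items():
        if keyword in request.lower():
            return guidance
    return "No expert matched; escalate to a human."


print(route("How do I translate this paragraph into Spanish?"))
# prints: Select the content block, then choose 'Translate' from the menu.
```

Keyword matching stands in for what would really be a model-driven dispatch, but the shape is the same: many small specialists plus a coordinator, rather than one agent that has to know everything.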

(42:38):
I started this podcast because I wanted to stand at the gate of businesses using AI and
see what separated hype from lasting impact.
Back when cities had walls, you had to go into the city to do business at the market.
So if you wanted to talk to someone, you waited by the gate until they came in or came out.
Do that enough times and you could talk to everyone.

(43:02):
That's what I want to do, stand at the gate of people doing business with AI and talk
to them, see what they do and why they do it.
If you know someone that's making an impact in the world of AI, would you connect them
with me?
You can find me on LinkedIn or shoot me an email at daniel@manary.haus.
That's daniel@manary.haus.

(43:28):
Thanks for listening.