Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I don't know if you know this, a lot of hospitals still have single servers running on DOS.
(00:05):
You haven't found a new answer for that for DOS 3.1 for cardiac machines?
Come on, there's got to be a better solution.
AI is not going to take your job.
The person who knows how to use AI is going to take your job.
If you upskill everybody, the people who work with you, for you, and partner with you will
be that much better.
(00:26):
Welcome to Artificial Insights.
The podcast where we learn to build AI people need and use by interviewing product leaders
who have launched AI products.
I'm your host, Daniel Manary, and today I'm joined by Chris Wynder.
Chris has had an unusual career, having received a PhD in developmental neurobiology from Rockefeller
(00:47):
University, and eventually he became a vice president of research and development at a
biotechnology firm.
Then, he pivoted his career into tech through information management.
Now, he's the director of marketing at OpenText, where he helps hundreds of thousands of people
organize their company's information and make use of process automation and AI.
(01:08):
Chris, could you introduce yourself to our audience?
Yeah, my name is Chris Wynder.
I am the director of the developer program and product marketing director for our Thrust
portfolio products.
So OpenText is moving into the cloud age.
One of the things that happens when you move into the cloud age is you start to have a
whole bunch of random APIs that sit around.
(01:30):
Part of what we've done is, as our partners have matured and as our customers have matured,
we've understood that you need to make those APIs available.
What the thrust program and the developer program, which sits underneath it, does is
makes those available to developers, whether they are commercial developers or working
for a company, so they can try them and then eventually either commercialize them or use
(01:53):
them within their company to boost productivity and to reduce some of the unnecessary pieces
that people often see.
Cool.
And OpenText does what?
That's a great question.
I think when I started seven years ago, you could say we handle content and records.
Over the last seven years, OpenText as a company has expanded exponentially.
(02:18):
And as of now, I think the way that I usually talk to people is we have products that businesses
use across 10 different technical categories.
If you need something to get your developers more productive, we have tools for that.
If you need something to manage your service management for your organization, we have
(02:39):
tools for that.
Plus, we have the standard information management, whether you call that data, whether you call
that content, we have tools for that.
And then we have all the tools to analyze that and put it into work.
I often like to say that the core value of OpenText is we put what you do in context.
We have the tools that allow context for whatever it is, whether it's IT, whether it's a line
(03:03):
of business, whether it's your customer management, we have the tools that allow you to put it
in context and go faster.
Very cool.
And what do you get to spend most of your time doing?
Me?
A lot of my time is spent on essentially what in the IT consulting world we call enablement
and change management.
Some of it's with our customers and partners where my team, my team of three of us spend
(03:28):
a lot of time explaining how to use the tools.
We explain a lot of times how do you work with the tools.
When you talk about these things, it sometimes gets scary, right?
Well, what about this?
What about that?
Sometimes you just have to be like, we're the group that comes in and we're the developer
whisperers. A little bit of: time out.
Your job hasn't gone away.
(03:49):
Let's talk about what you need to do.
What's your goal for this project and where does it fit?
It's not for everything and everyone, but there's a use case that can bring value to
your work and simplify what you do.
So we spend a lot of time doing that.
That's really the bulk of our days nowadays.
And I think it's super cool that you haven't even mentioned the word AI yet, but
(04:14):
we know it's coming, but people are still worried about their jobs going away.
And so I'd like to dig into that later.
But for now, I think the question on everybody's minds with a podcast like this is, Chris,
are you an AI?
No, I'm not actually.
And you can tell because I stutter.
I do use inappropriate terms sometimes like an AI.
(04:36):
So it's a valid question sometimes with me.
That's a good point.
When you think of an AI, you don't typically think of replicating, I'll use a graphical
analogy, like pores on a human face.
Exactly.
So that's fun.
I like that.
But all right.
So getting back to that automation aspect of it, what do you see people doing with the
(04:59):
services you provide at OpenText?
So there is a wide variety of things.
And I think one of the coolest parts about my job and what my team does, we run hackathons,
so we throw our tools out in the wild and say, what can you make?
Here's a challenge.
What would you make?
The first time we did that, I was, frankly, absolutely floored.
(05:20):
What you could do with our information management tools, or the products known as Thrust
services.
It's everything from workflows, content extraction tools, text extraction, document file types,
just nice, simple, basic tools that everybody needs.
But we had somebody who combined that with blockchain to create a new interface for hospitals.
(05:43):
It's really the Lego building side.
That's what's really cool about it to me is that these Thrust services really provide
the creative pieces or building blocks that a developer can use to build whatever they
need.
It's not the whole puzzle, but it's certainly a lot of the blocks and the way you use them,
the way you put them together and build around them.
(06:05):
There's some really cool things that we've seen customers and partners do.
I think a lot of it is though around the idea of process automation.
A lot of our customers and a lot of our partners focus on large enterprise and you can imagine
task management and just getting data from this cloud to that cloud is a really arduous
(06:27):
task.
What you use OpenText for in our services is that piece and that's how we see the automation
fitting in is just you need context in every system and you need the same context in multiple
systems.
I have some experience with iPaaS, integration platforms as a service, for example.
(06:48):
How does that compare with the functionality they offer?
Yeah, I'd say that ours is more of a slim slice of that because it deals with really
the content and data piece.
It's not a full, robust iPaaS.
It's really designed for more of that process piece.
For example, what do you do with an invoice?
What happens to an invoice?
(07:08):
If you're a large business, you need part of it in your ERP system and by part of it,
I'm using big air quotes here that the audience can't see.
You need part of it in your customer service and you need part of it in your billing system.
What do you normally do?
Most companies just take that and they make three copies and there's a copy in each and
then the customer goes, hey, that's actually not the bill.
(07:29):
You forgot you had two of these and not... What happens next?
The systems get out of alignment because now you're dealing with different context.
What you could do, which is actually a better way to do it, is just take that data and have
one source of truth of data and have the data show up in your ERP system, your customer
service system, and then you have one place to update.
(07:53):
That's really one of the coolest things we see and that's what automation is to me.
When we talk about the backend, how do we simplify the stuff that people need access
to do their jobs?
Not getting rid of people, it's stopping them from throwing their computer through the wall.
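The single-source-of-truth pattern Chris describes could be sketched roughly like this. This is a hypothetical illustration, not any real OpenText API: the class names, fields, and invoice data are made up. The point is that each downstream system reads a slice of one shared record instead of keeping its own copy.

```python
# Hypothetical sketch: one invoice record stored once, with each downstream
# system (ERP, billing, etc.) holding a reference into it rather than a copy,
# so a correction in one place is immediately visible everywhere.

class InvoiceStore:
    """The single source of truth for invoice data."""
    def __init__(self):
        self._invoices = {}

    def upsert(self, invoice_id, data):
        self._invoices[invoice_id] = dict(data)

    def get(self, invoice_id):
        return self._invoices[invoice_id]

class SystemView:
    """A system's view onto the store: it reads only the fields it needs."""
    def __init__(self, name, store, fields):
        self.name = name
        self.store = store
        self.fields = fields

    def read(self, invoice_id):
        record = self.store.get(invoice_id)
        return {f: record[f] for f in self.fields}

store = InvoiceStore()
store.upsert("INV-1", {"customer": "Acme", "amount": 1200, "status": "open"})

erp = SystemView("ERP", store, ["customer", "amount"])
billing = SystemView("Billing", store, ["amount", "status"])

# Correct the amount once, in one place...
store.upsert("INV-1", {"customer": "Acme", "amount": 900, "status": "open"})

# ...and every view reflects it, so the systems can't drift out of alignment.
print(erp.read("INV-1"))      # {'customer': 'Acme', 'amount': 900}
print(billing.read("INV-1"))  # {'amount': 900, 'status': 'open'}
```

With three independent copies, the correction would have to be applied three times, and any missed copy is exactly the "hey, that's actually not the bill" failure mode above.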
It sounds like you can save a lot of headache by being the source of truth and then having
(08:17):
other people hook into that rather than just passing things around.
Exactly.
Okay.
Now, speaking of automation, I feel like that word is big and currently maybe a little polluted.
How would you define AI and how does that fit in with automation?
Yeah.
I think when I put on my old curmudgeon hat, and I'm just about to have my 49th birthday,
(08:41):
so I get curmudgeonly sometimes, generative AI, which is really what we're talking about
nowadays when we talk about AI, is just another tool in the automation playbook.
Does it replace everything that you could do with the old school, it's like a five-year-old
tool, the old school tool of robotic process automation or the even older tool of workflow?
(09:06):
Not really, not yet.
Will it get there?
Maybe, but the things that it does well for a business are really different than the things
it does well when you talk about ChatGPT and the other things, which are really cool
tools that I personally use and have made my personal productivity much better and made
(09:27):
it easier for me to wrap my head around what I do in a job.
I think on the personal side, AI is a better automation than anything else that I've seen.
On the business side, it leaves a lot to be desired because of lots of reasons, but I
think the main one is how secure are you in opening the kimono of all your intellectual
(09:50):
property, all those emails that you really don't want exposed to customers, to competitors,
all that's really required to get good information.
The reason public AI works so well is it has all that, right?
Whether you're talking about the arguments that OpenAI has gotten into with Sarah Silverman
(10:12):
or any of the other content creators, it has a big robust engine underneath it of anything
you could think of, and it has a way to connect those.
That's the value of these large language models is it comes up with these really interesting
and new connections between pieces of data.
Now if you're enterprise, you don't necessarily want those, you don't really want that puzzle
(10:33):
completed and you don't necessarily need something brand new.
You just need to know what is actually in there.
You mentioned that generative AI has a niche of solutions that it's really good for.
What have you seen at OpenText that uses generative AI really well?
(10:55):
I think one of the coolest tools, and we debuted it last year and coming up we're going to
show an even more robust version, is the idea of having a user interface which mimics how
we think.
So the idea of having like, five years ago we would have called it a chatbot, having
a chatbot where you just say, hey, where's all the sales data for North America for this
(11:21):
product?
And then you get this list and you get the summary and you get the links to it.
And we have this solution that we call Aviator and really it's that idea of taking, how do
you create a user interface and a user experience that allows them to do their job and have
access to maybe the summary of information that they may not be able to have the full
(11:43):
monty to.
And that's the coolest kind of business use case I've seen.
Again, to me it fits into that personal productivity, not business productivity side.
I think GenAI and these tools are so cool for that and do so many really interesting
things.
I'm looking forward to seeing what you can do when we get rid of some of the risk and
(12:04):
trust issues that people currently have.
Okay, so I think you mentioned the distinction twice now of personal versus business.
And that sounds like a business tool, like a business analyst perhaps could use it.
So why would you call that personal productivity?
Well because it still has, so it's built with shackles.
(12:26):
You can as the back end say, thou shalt not have access to this.
Or you could say, Daniel can have access to the full monty, everything.
Chris can only have access to the things that are related to what he does for his day job.
So Daniel has a role that is visionary and aligned to corporate strategy.
(12:49):
Chris only has a role that's aligned to the developer strategy.
And we're not where we want to be, where we want the lower levels to have access to all
of that corporate strategy because it brings intellectual properties and all these other
pieces.
So that's why I call it a personal.
It's more on the personal because it allows you to do your job better with the kind of
(13:10):
information that you're allowed.
When I think of business tools, I think of more of these platformy tools where you really
have, you can bring in new information and interesting insights into more than one person
at a time.
Where you could allow more of these collaborative pieces.
Think of, I always fall back into the life sciences side.
(13:31):
One of the coolest tools in life sciences, and it actually just won a Nobel Prize,
is DeepMind.
And the research behind that, it makes a whole bunch of things available to multiple people
at once so that they all can do their own thing, but with a single goal.
So the goal there is to say, how many protein structures can we make and make them look
(13:52):
better and make everybody's job easier?
That's to me what I would consider more of a business tool where it has that role in
a larger picture, aligned to a corporate strategy.
Where it's a business tool across departments and you're not thinking of it as in how it
fixes an individual user's day.
It may replace some of the tasks that a user does to interact with other parts of the business.
(14:18):
It's a fussy line.
I realize it's not, my definition there is not crystal clear and it's a bit of a line
in the sand.
Yeah.
I think it is meaningful though.
I like that because on the one hand you've got software like Excel and that would be
a personal productivity thing.
Is there a non-AI analog to a business tool?
(14:43):
When I start thinking and I always go back to, I always fall back onto the automation
side, there's a couple that I think of like decision trees.
So you can automate decisions that can mimic the corporate strategy.
So for example, my favorite one that I've been talking about for years is what I call
no touch travel approval.
It's that if you are a company that has people living in multiple zones and you have a salesperson
(15:10):
who lives in, we both live in Ontario, that lives in Ontario and has to go see a customer
in Paris, Ontario.
And so they submit their bill and if the approval is in a different country, they're like, wait
a second, why are you going to Paris?
Because maybe the invoice or maybe the bill just says Paris Hilton.
(15:35):
Now did they go visit Paris Hilton, the celeb?
Did they go to the Hilton in Paris, France?
You can automate that decision based off of the context of where do they live and some
deeper information.
And that doesn't necessarily require AI.
Could it be done better with AI?
Could you make that a bigger set of decisions?
Yeah.
(15:55):
But that's when I started thinking of business automation.
Yes, it's for the individuals, but it aligns to a strategy because you can take that decision
and say, you know what, we're comfortable with automating it up to $5,000 for no touch
travel approval.
And you just do it, you get your money, you go back.
And when the bucket's empty for the travel, you're not allowed to travel.
(16:18):
And you can make that a global piece, right?
So it's not just for the person, it's to take the steps out of a person's day, but it's
to align the person's day with the corporate strategy.
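The no-touch travel approval Chris outlines is essentially a small decision tree, which could be sketched like this. Everything here is illustrative: the function names, the $5,000 cap, and the rule for disambiguating "Paris" by the traveler's home region are assumptions for the sketch, not any real OpenText workflow.

```python
# Hypothetical sketch of "no touch travel approval": an automated decision
# tree that uses context (traveler's home region, destination, a spending
# cap, remaining budget) instead of routing every expense to a human.

AUTO_APPROVE_LIMIT = 5000  # "we're comfortable automating it up to $5,000"

def resolve_destination(raw_destination, traveler_home_region):
    """Disambiguate 'Paris' using the traveler's home region as context."""
    if raw_destination.lower() == "paris" and traveler_home_region == "Ontario":
        return "Paris, Ontario"   # the nearby town, not Paris, France
    return raw_destination

def approve_travel(amount, raw_destination, traveler_home_region, budget_remaining):
    """Return (approved, reason) for one travel request."""
    destination = resolve_destination(raw_destination, traveler_home_region)
    if budget_remaining < amount:
        # "when the bucket's empty for the travel, you're not allowed to travel"
        return (False, f"budget exhausted; {destination} trip needs human review")
    if amount <= AUTO_APPROVE_LIMIT:
        return (True, f"auto-approved: {destination}, ${amount}")
    return (False, f"over ${AUTO_APPROVE_LIMIT}; escalate {destination} to approver")

# The Ontario salesperson's $800 trip to "Paris" sails through untouched:
print(approve_travel(800, "Paris", "Ontario", budget_remaining=20000))
# → (True, 'auto-approved: Paris, Ontario, $800')
```

None of this requires AI; it is plain rules encoding the corporate strategy, which is exactly the distinction being drawn.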
I feel like that's a tough thing to grasp, thinking at the strategic business level.
It's really high level and I love this analogy, but I also hate it.
(16:45):
It's similar to the Supreme Court decision on pornography.
I don't know how to define it, but I know it when I see it.
To me, that's that difference between personal productivity and business productivity.
Businesses are always made of people.
And so the more productive a person is, the better the business can be off.
That's an easy statement.
But to me, business tools allow corporate to control the strategy, to align process
(17:12):
to strategy.
A personal productivity tool doesn't have that barrier on aligning to corporate strategy,
but it does allow you to get your job done, assuming that you're aligning to the corporate
strategy.
So it's where the assumption is made, right?
If that's at all.
Yeah, it does, because then a business productivity tool is one that is inherently aligned with
(17:35):
the corporate strategy.
Yeah, and that is different.
That's very different because otherwise we got to know, we have to have a lot of meetings.
Human judgment.
And how has AI affected your job?
So again, I'll fall back onto that business versus personal.
(17:58):
On the personal side, it's actually made things faster.
On three different fronts, which is really cool and why even though I often sound like
a Danny Downer on AI, I actually just absolutely love it.
One of my side projects and my pet projects that I do within OpenText is I do a market
scape of all the different vendors who have APIs.
(18:21):
I track 250, 300 vendors across gosh knows how many categories.
Just using AI to ask who are the top 10 vendors that show up in everybody's list has made
my life easier because I'm no longer tracking 200.
It also helped me create the formulas that allow me to take more of a time track look
(18:43):
within the existing data sets that I have.
So now that I can say, okay, I built the formula using AI that allowed me to connect different
data sets that were built differently and now do re-ranking live and add more robustness
and tools to it.
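That kind of re-ranking across differently built data sets could look something like the following. This is a speculative sketch, not Chris's actual formulas: the vendor names, scoring scales, and the min-max normalization are all assumptions made for illustration.

```python
# Hypothetical sketch of re-ranking a vendor market scape: two lists scored
# on different scales are normalized onto 0..1 and combined into one live
# ranking, so data sets "built differently" can be connected.

def normalize(scores):
    """Rescale one source's scores to 0..1 so sources become comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1  # avoid division by zero for a flat source
    return {vendor: (s - lo) / span for vendor, s in scores.items()}

def rerank(*sources):
    """Sum each vendor's normalized scores across all sources and rank."""
    combined = {}
    for source in sources:
        for vendor, score in normalize(source).items():
            combined[vendor] = combined.get(vendor, 0.0) + score
    return sorted(combined, key=combined.get, reverse=True)

analyst_list = {"VendorA": 9, "VendorB": 6, "VendorC": 3}    # 1-10 scale
survey_list = {"VendorB": 88, "VendorA": 70, "VendorD": 95}  # 0-100 scale

print(rerank(analyst_list, survey_list))
```

A vendor that appears near the top of multiple lists accumulates score from each, which is the "top 10 vendors that show up in everybody's list" effect described above.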
And finally, just the idea, I have a learning disability, which is my writing.
(19:08):
So I have what's called dysgraphia and dyscalculia.
And so it's the idea, I'm not dyslexic, but when I see grammar, it's often swapped.
So my writing is often like I can write two paragraphs without a period.
It'll look like two paragraphs, but there may not be a period in there.
(19:28):
And so what I use AI for is on that personal side where I chuck in what I thought was a
really cool, really good two paragraph blog and ask it to summarize and check the grammar
and it'll make those changes for me.
It's not changing my ideas, not writing my content for me, but it's ensuring that my
voice is heard.
(19:49):
Now, that's awesome.
It's a wonderful use of AI to fix those human mistakes.
And is there a way that it's affected your job professionally?
Not from a tool perspective.
I think it's changed how we communicate because we deal with a lot of developers.
(20:09):
It's changed how we communicate what you do.
I think it's changed some of the enablement that we do because it's top of mind.
I'm still in marketing.
I'm still trying to get people to come and use our tools.
So we're always talking about in the context of GenAI and LLMs and these other newish tools.
So it hasn't changed my job.
(20:30):
It's changed the focus of my customers.
And so to align to my customers, I've had to spend more time learning GenAI than I spent
learning any other single tool this fast.
I empathize with that.
As a person on the technical side, it's like the state of the art changes every week.
With the customers that do come to you, what are their top of mind and the problems they're
(20:53):
trying to solve?
So when I talk to them, a lot of the conversations are around their high volumes.
So the ones who come to us and want to talk about our data tools, our AI tools are the
ones that have ridiculous volume.
So for example, we just went through a whole exercise with a healthcare company in the
(21:13):
US and they just wanted to embed some logic and the processing for about 19 million documents
per year.
And we're like, okay, 19 million invoices? And they're like, 19 million, full stop.
(21:35):
We don't always know what we get.
Sometimes it's insurance documents, sometimes it's invoices, sometimes it's EOBs and EODs,
which are the explanation of benefits and the explanation of denial.
So they wanted to build basically a funnel that would allow them to take all this in
(21:57):
and shoot it off to where it needed to be.
Now some of that technology is just straight up processing power.
Like you take an OCR engine, a file type, a workflow and Bob's your uncle, you got about
70% of that.
But 30% is a big deal when you're talking about 19 million documents.
(22:20):
That's still more than you want one person going through.
And so you still need some additional logic that isn't just built into a workflow and
OCR.
You need something that can look into the document and say, oh, it references, you could
use something as simple as regex where it says, oh, this looks like a social security
(22:41):
number.
I'm going to start off and figure out if it's a social security document.
So that's a logic.
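That regex-based routing logic could be sketched like so. This is a minimal illustration of the idea, assuming the text has already been extracted (e.g. by OCR); the SSN pattern and the document categories are made up for the example, not the partner's real classifier.

```python
# Minimal sketch of routing logic layered on top of OCR: scan a document's
# extracted text for a Social Security number pattern and use that as one
# signal for which processing pipeline the document should go to.
import re

# US SSN shape: three digits, two digits, four digits, dash-separated.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def route_document(text):
    """Return a coarse routing decision for one document's extracted text."""
    if SSN_PATTERN.search(text):
        return "social-security-review"   # hold for stricter handling
    if "explanation of benefits" in text.lower():
        return "eob-processing"
    if "invoice" in text.lower():
        return "invoice-processing"
    return "manual-triage"                # the ~30% that still needs a look

print(route_document("Patient SSN: 123-45-6789, claim attached"))
# → social-security-review
print(route_document("Invoice #8841 for imaging services"))
# → invoice-processing
```

In practice the "additional logic" beyond regex is where a trained model earns its keep, but even this simple rule keeps a large fraction of 19 million documents away from human eyes.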
And that's actually something where we're finding our partners, in particular, saying,
hey, I've got this database of document types that we've worked with in the past.
How do we turn this from something where we handcraft it for every customer into a solution
(23:07):
that we can now use to process documents, put them to the customer?
Because a lot of them deal with health care and life sciences.
They don't want to hold the documents long term, but they do need to process them.
And a lot of hospitals, I don't know if you know this, a lot of hospitals still have single
servers running on DOS.
Yeah, when I was a consultant, we went to this one site and we're like, okay, can
(23:34):
you turn that machine off? You're running DOS 3.1?
Yeah.
And it's 2010.
And they're like, yeah, we're hoping that we get another couple of years out of it because
we don't know what we're going to change it over to.
And you're like, oh, can you just turn it off?
They're like, no, it actually runs all the cardiac machines on the floor.
(23:54):
And you're like, you haven't found a new answer for that for DOS 3.1 for cardiac machines?
Come on, there's got to be a better solution.
And so it's that idea that hospitals typically don't run good infrastructure from a software
side.
(24:15):
So they're always looking for somebody who's going to take their stuff away and give them
a nice simple data set, but you have to follow the rules.
So a lot of our partners are like, we ought to follow the rules.
We know how to take it away and give it back to them.
But we need better margins because the documents are getting crazy and some of them are old
and some of them are new.
(24:35):
And so they built out these corpuses, these essentially LLMs or what do they call them
now?
Mini LLMs or SLMs or something.
They built their own.
So how do you connect?
How do you bring the pipeline there and then bring it out, which is a really cool problem.
So it's been one of the more interesting things we've done is the, how do we get the data
(24:57):
into our specific AI?
And then how do we ensure that it gets out and make sure that it's secure in, secure out?
They deal with that stuff, but as long as we can deal with the secure in and the secure
out, it's a really good partnership for both sides.
And it sounds like there's a lot of need for AI in that because there's just so much variation
(25:20):
and absolutely any doctor's writing, for example, maybe that's in there too.
It's not easy to read.
So what would be the advantage of having that slim LLM, using a custom LLM versus we talked
about the big public ones potentially using your data?
(25:43):
What are the advantages to one or the other?
So there are three of them.
Let's start with the more concrete ones.
One is frankly just trust.
The problem with a big large cloud company like the majors is that they're putting everybody's
data into the same data source and telling you that they tokenized it and put a firewall
(26:05):
between X, Y and Z. And we all know that there's some tests you can do that show that their
walls aren't quite as thick as they're telling you.
It's more of a security by obscurity.
And that's not going to fly for healthcare and healthcare information.
When we get into these highly regulated use cases, whether it's government or healthcare,
that just doesn't fly.
That's fine for the manufacturing documents for IKEA's chair.
(26:29):
We could all figure that out.
But when it comes to healthcare, you're putting people's lives at risk, frankly.
And so that trust piece is a big piece of it.
You like to have a face.
And so if you've been working with a partner for years and they've managed your documents
for decades, that's the face you want.
You want to know that they've upped the technology and that trust in that personal relationship
(26:52):
you have is still there.
So trust is the big one.
It's the tip of the iceberg.
But not just trust of the LLM either.
So typically when I hear trust, I would think of the output is good, my data is safe.
But you're also referring to the relationship with a company.
If a hospital has a vendor, getting on that vendor list is hard.
(27:15):
And so if you can bring that in-house as a trusted partner, then you're ahead of the
game.
With the very large LLMs, we do see hallucinations.
They're worse in healthcare.
And I think that has finally come out publicly simply because healthcare has weird jargon
that kind of looks like real words, but isn't.
So the thing about an LLM, it's not designed to say, I don't know, it's designed to say,
(27:39):
well, there's a gap in here.
Let's pull a nugget from here and a nugget from there and give you an answer and then
give you the little might be wrong.
Well, you can't have might be wrong in healthcare.
So just the fact that it creates answers because that's its job is to create answers actually
ends up being a negative thing.
So you can't necessarily trust the data.
(28:01):
So you have the trust of the company, the trust of the data.
And the last one's a little more esoteric, but I think it's becoming a real thing as
far as what I'm seeing is the amount of processing power, whether we talk about electricity,
whether we talk about CPUs to run these large LLMs.
Do you really need that to check a three digit code in a document versus a database?
(28:25):
I mean, in theory, you can do that in Excel.
Do I really need Nvidia's $12 million data center to do that?
I don't think I do as a hospital.
Now as a partner who's looking to be a global power, absolutely I can see that.
But as a hospital, as your customer, it's not really for you, right?
(28:51):
And then you have to pay.
And what we're finding is that consumption costs significantly more than if you ran it
from a smaller system.
Now it's really cool with ChatGPT and Gemini because you want that big, that consumer,
that personal knowledge.
I want to know the whole corpus.
(29:11):
And I like the fact that it gives me wacky answers, but it gives me the link to it.
In healthcare and a lot of business processes, that's not a good thing.
I don't want any of the folks that work for me to find something about OpenText from
ChatGPT and start writing about it and putting it in there as if it's true and part of our
(29:36):
corporate strategy.
Because then that gets up to the bosses and now we're all in trouble.
So there's that idea of the trust falling down, but then the cost, it's the trust and
cost kind of matrix, right?
How much do I trust the data?
How much do I trust the person?
(29:56):
And how much does it cost me?
If I have low trust in the data and the person, but it costs me a million bucks, how happy
am I?
So it also sounds like you're saying that it's actually beneficial to the output to
use a smaller model, to use a more limited model.
(30:17):
I think it can be.
I think in use cases where we're talking about really specialized areas, so geology for gas,
where the geology of China really has no relationship to where you'd find gas in Northern Ontario,
where there might be some high level things, but the devil's in the details there, where
(30:40):
you really want to be in the right place.
You don't want it going, well, somebody found 12 million deciliters in the Mongolian desert
and the Mongolian desert's really dry and it's really sandy and it looks just like the
bottom of Lake Superior.
So there must be gas there.
You don't want to make a business decision on Mongolia looking like Lake Superior.
(31:01):
It just doesn't match the smell test.
So there's places where a larger set of data and more information is just not beneficial
and really specialized industries, that's probably very true.
And you'd really like more guidelines, so smaller data sets are probably
really beneficial, not just from a speed perspective, but from an accuracy perspective.
(31:27):
And is that something that you've had to do at OpenText or you've seen customers do?
We haven't had to do it.
I think we've taken a very conservative timeline with AI and because we do have a very strong
strategic partnership with Google and up here in Canada, Google has a healthcare team and
(31:50):
because of our partnership, I've gotten to hear them speak and whatnot.
And I know there's parts of our engineering teams that are working with the Google engineers.
We have left that to the partner.
And so we do the same thing on the smaller side where we have partners who are bringing
the idea to us.
As big as our portfolio is, we're sticking to our knitting.
(32:11):
We have the AI tools that we build are to make our products better for our customers,
not a general big, huge AI to replace.
That may come, but for my part of the business, which is working with developers and our API
business, it's not something where we have near-term roadmap items.
(32:32):
And one of the more interesting use cases for an LLM is an agent, like something that
can act on it.
And it sounds like with all those connections, an agent that can act on the connections
makes a lot of sense.
What have you seen in terms of agentic use cases?
I haven't seen a lot other than the one that we debuted last year, which we call Aviator,
(32:55):
which is that kind of chat bot.
We have one for code and we have one for interrogating your records.
That's the only one I've really seen.
I like what they've done there.
It's got some limitations on the back end.
It's designed to work inside of your business.
I think, I mean, honestly, Daniel, I haven't seen anything better than what Gemini and
ChatGPT do for almost any other use case, which doesn't pain me to say, but I really
(33:22):
don't see anything better than those.
I've seen some really cool other GPTs that people are trying to build and some other
prompt engineering tools that are really interesting.
I really like where the technology is going because it sounds like people are thinking
about these problems and how do we solve this for a business problem or a personal problem.
(33:45):
But I haven't seen a lot in real world use cases where I'm like, oh, that changes my
work structure.
That makes life easier.
And it sounds like with Aviator, it could make someone's life a little easier being
able to talk to your data, but maybe there just isn't that use case yet that hooks it
up like ChatGPT did when it came out.
(34:08):
Not that it's filtered through my part of the world.
I know there's a lot of really cool things that are kind of still in skunkworks mode
that I've seen that you're like, oh, they figure out how to meet the trust standards
and the technical standards that we hold ourselves accountable to.
That's going to be really awesome.
But there's one tool that we just released, which is our Developer Aviator, which takes
(34:30):
our code and allows you to say, okay, if I had this, I prefer Python.
So how do I take your JavaScript tutorial and turn it into Python?
And it gives you that. Now, I do know that our PM team is not as excited about that tool as
I am or the SCs are, in part because they're like, well... and I look at it and I say, that's
(34:54):
a ridiculous thing to worry about.
But I know it does actually affect the quality of the code and whether the code runs properly.
Why is it putting an extra space in there?
Is that a space or is that a return?
And I totally get it, but it seems obsessive.
But you have to be, when you're telling somebody: you copy this code, it's going to work for
you.
(35:14):
There are those little things where it looks really cool to somebody who's not totally
technical, like me.
But I get some of the angst and anger I hear from some of the developers who look at the
outputs of some of these tools and go, why is everybody using this?
I do get it.
I also cheat and go, well, what if it did it for me?
(35:37):
Yeah, yeah, exactly.
I had to build something with Plaid's AI, which hooks up bank information, and they
have a chatbot that doesn't quite work agentically, but it's trained on all the docs, and it outputs
stuff that makes no sense and comes from nowhere all the time.
So I guess I could understand a little, but I, well, I feel myself getting, like you,
(36:00):
a curmudgeon when it comes to AI sometimes.
And so I'm not sure whether to be excited for these agentic use cases, but it sounds like you've
seen some things that are very exciting.
Is there another example of what you're excited to see in the future?
It is a life sciences use case that involves a really intriguing way to combine, and this
(36:26):
is going to feel like I'm throwing marketing jargon at you, so I'll apologize ahead
of time, a chat-like UI with secure record-holding via blockchain.
So, in their example, you can now use the chatbot as a patient
(36:46):
to say, hey, I'd like to make my record shareable to Dr. X, but only on a PC and only in this location.
So it then goes to the blockchain and says, hey, I need a token, and I want it
(37:08):
time-limited, and I want it like this.
And so it does, in some ways, all the coding in the background, which I feel really
simplifies the patient experience.
It's a really cool use case because it's all that stuff that actually needs to happen
in the background, that typically you'd have to have a developer do, and you can't just
(37:28):
release that into the wild into a patient's hands.
It's really cool.
They're really early.
There's some gaps in how it does it, because all of a sudden if your computer moves,
it's too sensitive.
And I don't a hundred percent understand it, but they basically said it's using
too many decimal points in the GIS.
(37:49):
So literally he's like, Chris, if you had the same computer and you went three blocks,
it may not let you access it.
And I'm like, that's cool, but I guess I'm not working from home.
Yes.
Now I like that as a use case too, because, well, one, blockchain is esoteric to begin
(38:11):
with, and two, no one wants to have to code if they don't want to.
But one of the other topics that I think is super big in AI is how it's going to impact
broader society.
Not just jobs, but how are we going to use it and adapt to it?
(38:31):
And what's your perspective on that?
I think it is a tool.
I do like the phrase, and I wish I knew who initially said it: AI is not going to take
your job.
The person who knows how to use AI is going to take your job.
I don't think anything that I've seen, and again, I see what most people in the public
(38:54):
see, there's nothing that I've seen that makes me go, holy geez, lots of jobs are at risk,
as in a whole job category.
I don't think this is like the steam engine to the gas engine.
I don't think this is like the car and the horse.
I do think there are some jobs that will get downgraded from a full-time job to a part-time
(39:20):
job, maybe unnecessarily, maybe it's actually a good thing.
But there are some, like last night I was at a UX symposium and it was really interesting
hearing one of the more senior women talk about it.
And she said, this happened with print and I was replaced until I wasn't.
(39:42):
And then it happened when graphic design went out.
And then I realized that I was actually never a graphic designer.
I was a user experience designer.
So are graphic design jobs going away?
Probably not.
Is the skill set valuable still?
Yeah, absolutely.
Will the person who is currently a graphic designer make the same amount of money in
(40:06):
five years doing only what they do today?
No.
I like that framing for the question a lot.
But the problem is that that's a really deep explanation.
And for most people, all they hear is, I can't do my job tomorrow.
And that's actually probably true, but that's always been true.
(40:32):
Maybe it's accelerated, maybe it was generational.
You wouldn't do the same, even if you're an ad exec, you wouldn't do the same job that
your dad did.
So maybe it's just the timeline is getting so compressed of how fast these jobs change.
My son is currently just in university.
He's at Waterloo, and he's in geomatics.
(40:56):
And at first he was a little disappointed because he'd always wanted to do just pure
development.
He's been coding for just about a decade, and he's only 18.
And then he started doing his research.
He's like, I think this is actually a good thing.
Looking at where he could go, he didn't think about what jobs exist today.
(41:18):
He looked at the potential for years down the road and said, if I get the right co-ops
and I plan this right, it can actually be a really good thing for me.
It can be a job that's there in 10 years. I'm not trying to get the job from
five years ago.
I'm trying to get the job from five years from now.
That does feel like a shift in mindset.
(41:39):
Yeah.
Where I think you're right.
My parents' generation, they would see it as strange to change careers, or
jobs, every three years.
And yet that's so typical nowadays.
And it does feel different.
You can't aim at the past, you gotta aim at the future.
And are there any areas that you feel like AI is not ready or should not be used at the
(42:01):
moment?
Healthcare, healthcare is the major one where I feel like I know enough to know why I'm
scared.
Some of the other ones, like there's parts of marketing where I'm like, I don't think
you could do that.
But, obviously, my title is in marketing, I work in marketing.
So there's a little bit of personal bias there, to be fully transparent.
(42:25):
I think there's parts of my job that can't be done by it.
I think there's going to be somebody who can do my job twice as fast simply because
of AI.
But healthcare, and things where you require an exacting, no-changes-to-it detail, where
if you don't have the answer, it's an escalation, not a fill-in-the-blanks and hope it's close
(42:49):
enough.
Right?
We can all laugh at the seven-fingered handshake.
But if a robot misses by a tenth of a millimeter on a cancer, you could die.
So you don't want it to go, wow, is that a decimal point or was that a dot on that handwritten
note that is the basis for the surgical... you know what?
(43:14):
It's good.
I'll just take a little bit more.
You don't want the hallucination to say, if I don't know where the decimal
point is, put it at the farthest to the right.
That's not right.
You want it to say, hold on, I can't do this.
That's just a hard thing to engineer into a system, because there's always
(43:34):
going to be times where it needs to find new data.
The line between finding new data and saying the data doesn't exist is really... it doesn't
exist nowadays.
Right?
Because in the internet you can find an answer for everything.
And the AI model is designed to give an answer for everything.
Absolutely.
Right?
Yeah.
(43:54):
Yeah.
That makes a lot of sense.
If it needs precision, guessing isn't good enough.
And, and just as a last question, is there anything you'd like to share about what you're
working on now?
Yeah.
I think what's really cool about what we're working on now, and it goes back to that
idea that a lot of what my team does is enablement, is we're working a lot to help developers
(44:18):
define where they can go.
So whether that is, you know, building out a skill set around taking a developer to be
an entrepreneur, or whether that is teaching them how to do research or facilitate a conversation,
we're building out a corpus of that, and we're going to make it free to everybody.
And that's one of the things I'm really excited about.
(44:41):
Because if you upskill everybody, the people who work with you, for you, and partner with
you will be that much better.
And you become that much more interesting to work with if you're not just transactional.
And that's where OpenText is going.
We understand that it's not about the transactions anymore in this new age.
It's about the partnership, the trust, and the ability for everybody to win.
(45:02):
That's beautiful.
I think that's one of my favorite things about AI.
It lets you focus on what it means to be human.
Absolutely.
Yeah.
Well, thank you, Chris.
Is there a way that people can follow you or get in touch with you?
Yeah, absolutely.
The place to start is probably LinkedIn.
I'm pretty open about connections.
I do run a blog on Medium, and you can find me by just my name.
(45:26):
And I'm always happy to connect with people.
I love to talk about this stuff, and I talk about it all the time.
Thanks for listening.
I made this podcast because I want to be the person at the city gate to talk to every person
coming in and out doing great things with AI and find out what and why.
And then share the learnings with everyone else.
(45:47):
It would mean a lot if you could share the episode with someone that you think would
like it.
And if you know someone who would be a great person for me to talk to, let me know.
Please reach out to me at Daniel Manary on LinkedIn or shoot an email to daniel@manary.haus,
which is Daniel at M-A-N-A-R-Y dot H-A-U-S.
(46:12):
Thanks for listening.