Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hi and welcome to another episode of the Data Revolution podcast.
(00:19):
Today my guest is Ross Dawson, whom I've known for many years, and he is a world-renowned
keynote speaker, strategy advisor and entrepreneur and all-round smart, nice person. Welcome,
Ross.
Pleasure to be here, Kate.
So I think we're going to have an interesting chat about AI. So what's got you really passionate
(00:39):
about AI at the moment?
Well, for me, this is sort of a masterful invention of humanity, but we have to be looking
at how it complements humanity. So this frame here of humans plus AI: how can humans
and AI work together? So my lens on AI is always around how can this augment humans,
how can this increase our capabilities to do the things we want to do, taking that stance.
(01:06):
And essentially, if we look at cognition, this idea of taking in information, making sense
of it and being able to take useful actions as a result, human cognition and what we might
call AI cognition are quite distinct and highly complementary. So I look
at the spectrum of decisions which we need to make. And some of them are very low-level,
(01:33):
which can be data-based. So there's a lot of, I mean, the classic one is your small credit
lending decision or fraud detection or things like that. These are ones where you have lots
of data and you can make an effective decision around whether to proceed or whether to flag
something or so on. Through to the more complex ones, which is: what should I do with my life?
Or should we make this strategic acquisition, or things like that, where essentially, you
(01:58):
know, human intake of data and processing of that is more effective. But bringing those
together, we can look at what are either AI first decisions, where we can use a whole
array of wonderful techniques that have been developed over the years for essentially processing
lots of data to be able to predict effectively and make useful decisions. And the far broader
(02:23):
context in which humans work, and where we have human-first decisions, which are incredibly
complex, involve human and ethical issues, and are not ones which we would necessarily entrust
to AIs as they stand today, but where AI can play an extraordinarily valuable role in
supporting or complementing or augmenting the decision-making process.
(02:44):
So could you perhaps explain for us just what you're talking about? I think you were
basically talking about things like machine learning for those automated decisioning processes,
you know, where we want to know: have you got a good enough credit score to do this?
So what are the other kinds of AI technologies that you're talking about?
Well, I mean, I love the sort of the, you know, it is simplistic, but I think it is
(03:08):
very useful to distinguish between, you know, what I call analytic AI, or what some people call
traditional AI or good old boring AI, and generative AI. And of course, you know, there
are a lot of overlaps between them. But one of the most important distinctions between
them, to my mind, is that machine learning is essentially domain bounded. It takes in
(03:33):
data where you have some parameters around what that data is. So if you have credit data,
you're not going to throw in the price of fish or whatever it might be.
Well, I hope so. Yeah.
So, well, yes, you've got to do some data cleansing on the way to make sure you do have
the appropriate set, which in fact goes to that point. You know, we do need data cleansing,
(03:54):
we do need to be able to have the right data and to train machine learning, which is useful.
And so, but that is always domain bounded. And so if we're looking at predictions (you
know, AI has traditionally often been described as a prediction machine,
and it can be very effective), that's always been in a specific domain, whereas generative
(04:14):
AI is unbounded. So it's trained, you know, essentially on all sorts of content that
humanity has created, so that it can do some things, it can do unpredictable things. Yes.
Yes. And, you know, you can look at any domain; you could go, you know,
have conversations or engage with a generative AI system or broadly trained LLM on any topic
(04:38):
you choose and just hop about wherever you want. And machine learning is completely
distinct because it is bounded. So while there are a number of different, you know, distinctions
between machine learning and, sorry, traditional AI, or analytic AI, since
there's a lot of related techniques in there, that's where you have to have
(04:59):
specific data sets for machine learning, with domain-bounded training, and with generative AI
you have unlimited data sets. And part of the magic which happens in those is that, you
know, it does pay attention to the relevant pieces so that we can find what is
useful and pertinent to us.
And how do you see RAG models, retrieval-augmented generation models, playing into that?
(05:23):
Well, essentially, you know, those are essentially generative AI techniques, because
these are still content, there's, you know, text or a whole variety of other
domains, and it's simply being able to use, you know, essentially the large language
model type techniques to be able to retrieve those, building some structure around them.
(05:44):
And I think, you know, the whole rise of vector databases is absolutely critical in
being able to embed the relationships between all of the information so you can
pull this out in a meaningful way.
So essentially flat search, as it was with, you know, the Googles and a lot of the old
enterprise search, was: find a text string in this bunch of data. And, you know, whilst
(06:07):
some of those algorithms improved a little bit over the years, it's a completely different
thing when that data is structured in vectors, where you can actually implicitly identify
the relationships between the, you know, the tokens or the elements within that data so
that you can find what's relevant.
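The contrast Ross draws between flat text search and vector search can be sketched in a few lines of Python. This is a toy illustration only: the document names and the tiny three-dimensional "embeddings" below are invented by hand, whereas a real pipeline would use an embedding model and a vector database.

```python
import math

# Toy illustration (not a real embedding model): each document is mapped to a
# small hand-made vector whose dimensions loosely stand for latent "meaning".
docs = {
    "credit risk scoring":   [0.9, 0.1, 0.0],
    "fraud detection rules": [0.8, 0.2, 0.1],
    "fish market prices":    [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def flat_search(query_text):
    # Old-style "flat" search: exact substring match only.
    return [name for name in docs if query_text in name]

def vector_search(query_vec):
    # Vector search: rank documents by semantic proximity, no shared words needed.
    return max(docs, key=lambda name: cosine(docs[name], query_vec))

# A query about "loan default" shares no words with any document title,
# so flat search finds nothing...
print(flat_search("loan default"))        # []
# ...but its (hypothetical) embedding sits close to the credit-risk document.
print(vector_search([0.95, 0.05, 0.0]))   # credit risk scoring
```

The point of the sketch is exactly Ross's: the string matcher sees no overlap, while the vector representation implicitly captures that a loan-default query and a credit-risk document are related.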
So it sounds as if what you're talking about is kind of going against
(06:32):
what a lot of people have been talking about, that generative AI works over
largely unstructured data. You're saying we need to structure the data again to drive this.
Is that what you're arguing?
Yes, well, the process, the training of LLMs, essentially does implicitly build up, you
(06:53):
know, the embeddings, which are overlaid on all of the ingested data and start to provide
these kinds of structures, which make it more useful. And so a couple of layers above that
is where you can start to build knowledge graphs and ontologies. And the ontologies, which could
(07:15):
describe anything from understanding a domain of medicine through to the structure of a
supply chain, are something where essentially you do need some human oversight. You know,
what is the actual meaning of the relationship between a part number and, you know, the
location of where the, you know, the refrigerator supplies go. And so this starts
(07:41):
to become: there are layers, from essentially the embeddings which are implicit in the processing
of data ingested into LLMs, to building more structures, knowledge graphs and ontologies,
where you do have some direction to make sure they are as useful as possible, to be
able to pull out what is meaningful and relevant.
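As a rough sketch of the "layers above the embeddings" Ross describes, a knowledge graph can be as simple as a set of typed triples. The part and refrigerator entities below are invented for illustration, echoing his supply-chain example; real systems would use an RDF store or a graph database rather than a Python list.

```python
# A tiny knowledge graph: explicit, typed relationships between named entities.
# Entity and relation names here are hypothetical, for illustration only.
triples = [
    ("part-4711", "is_component_of", "refrigerator-X200"),
    ("refrigerator-X200", "shipped_to", "warehouse-sydney"),
    ("warehouse-sydney", "located_in", "Australia"),
]

def neighbours(entity, relation):
    """Follow one typed edge outward from an entity."""
    return [o for s, r, o in triples if s == entity and r == relation]

def reachable(entity):
    """Everything reachable from an entity, following edges transitively."""
    seen, frontier = set(), [entity]
    while frontier:
        node = frontier.pop()
        for s, _, o in triples:
            if s == node and o not in seen:
                seen.add(o)
                frontier.append(o)
    return seen

# Unlike token proximity, the graph records *what kind* of relationship holds:
print(neighbours("part-4711", "is_component_of"))  # ['refrigerator-X200']
print(sorted(reachable("part-4711")))  # ['Australia', 'refrigerator-X200', 'warehouse-sydney']
```

This is where the human oversight Ross mentions comes in: someone has to decide that `shipped_to` and `located_in` mean different things, which is precisely what an ontology captures and what raw embeddings leave implicit.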
(08:03):
You've really cheered me up because, you know, I've had a grand passion for graph databases
for a very long time and I think they're coming into their own now, which is really
great.
Oh, yes. Well, I mean, my most successful LinkedIn post of this year, with, you know,
1500 likes or whatever, started off saying ontology may be the word of the year, and,
(08:26):
you know, then described a little bit why, and, you know, just people are recognizing,
you know, that it is building the structure of relationships between data.
Yeah, but not in a relational database because it's very costly to do searches on, for example.
Yeah, well, also, relational is essentially row-column. So you don't have
(08:50):
many parameters to play with in terms of mapping the relationships. You know, you
just have a couple, as opposed to, you know, the unlimited scope of, you
know, essentially embeddings. And one of the points being, you
know, embeddings look at a token level, in terms of what is the, you know, the proximity
(09:12):
of different tokens, and thus to be able to make the next-token prediction, through to
the, you know, knowledge graphs and ontologies, which refer to higher levels, which is not
just a phoneme or something within a word, but something which is, you know, meaningful,
such as, you know, a drug or a, you know, a part or a...
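The token-proximity point can be made concrete with a deliberately tiny bigram model. Real LLMs learn dense embeddings and attention weights over vast corpora rather than raw counts, so treat this purely as an illustration of "predict the next token from what tends to follow it".

```python
from collections import Counter, defaultdict

# Toy corpus; in a real model, tokens would come from a tokenizer over huge
# amounts of text, and "proximity" would live in learned embeddings instead.
corpus = "the drug reduces risk . the drug reduces cost . the part ships today".split()

# Count which token follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    # Pick the most frequent continuation seen in the corpus.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # drug  ("the drug" occurs twice, "the part" once)
```

The model knows only that "drug" often follows "the"; it has no notion that a drug is a meaningful entity with relationships to diseases or suppliers, which is exactly the higher level that knowledge graphs and ontologies add.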
(09:36):
Which kind of sounds a lot like what people are talking about using RAG models for, for
that higher-order understanding. So that's really interesting. I wanted to dig into the
human side of it, because one of the things people are often talking about is how AI
is going to take all the people out of the loop all of the time. And I kind of don't
really like that idea, because I like your idea of it being augmentative of humanity, not
(10:02):
replacing us, except where we want it to, for boring stuff. I'm okay with it replacing us for boring
stuff.
Well, I mean, I think there's two aspects to that. One is, part of it is through design.
If we aim to automate humans out of work, then that's more likely to happen. But I think
we can design it in a way which does amplify our humanity and create the work, if you
(10:26):
want to work. But in fact, I actually think that work will be destroyed at a far, far,
far slower pace than most pundits believe. And I'm pretty hard over on the optimistic
end of the spectrum. I did a little mini-report earlier this year, which I called 13 Reasons
to Believe in a Positive Future of Work. And, you know, without going
(10:52):
through them blow by blow: I think that not only are there many jobs that are highly
relevant to people, we value what humans do. And there is so much of what
humans do which is actually not substitutable by AI. And, you know,
(11:17):
the pace of change, you know, how long have you been hearing people
saying, you know, automation is going to destroy jobs? We've still, you know, got the biggest population
in the world's history and, you know, just about the lowest unemployment ever, so it hasn't happened yet. And
I don't see it coming too soon either. There's a whole lot of structural reasons,
you know, part of it is, you know, social impetus and so on. And we are also
(11:41):
just, we continue to create new roles, what it is we see people are useful for, to be able
to make people more productive. And so I think one of the key things within organizations
here is that, you know, you just hear from somebody who's saying, you know, people are
frightened about things that can make them more productive, because suddenly, well, they
can do that job in half the time. Well, that's because you've defined a job as a box.
(12:06):
You know, I've railed against this for a very, very long time. If you say, well, this is a box,
you know, let's find somebody that fits that box and we'll stick them in it,
then as soon as you automate it away, or part of it, then you've replaced the
person, as opposed to saying: here is a person who works for us who's amazing, in all sorts
of ways that, you know, we've discovered and we have yet to discover and they have yet
(12:26):
to discover, and we can create an organization where we can amplify their productivity to
do, you know, unimaginably more value-creating things than they ever did before.
And that's such an important thing. You know, I was at a board dinner the other
night with the chair of the AICD, and she was talking about Australia's productivity
problem, you know, we've got a genuine productivity problem in this country. And there was a recent Australian
(12:52):
government report on generative AI use, like Microsoft Copilot, inside government, and
all of the executives said that generative AI was really powering their work, you know,
it was helping them get through stuff more efficiently. And, you know, that's the kind
of step change in productivity improvement we need as a country, across the board.
(13:15):
Yeah. And it takes a lot of things. Part of it, you know, on a very crude level, is that
we need to design the system, the AI, whatever, so that it is usable and useful and complementary.
And we also, of course, need the skills of humans to be able to use them well. And we
also need to redesign organizations so that they're not saying, okay, well, here's some
(13:37):
people, can we replace them or not? But: here is an organization which we redesign, reconceive,
as in, how can people and AI, you know, create more value than they ever did before?
It's really fascinating because I'm going to be doing some work with a company that
is literally redesigning all their processes around AI. And they had stuff that they did
(14:03):
in the real world where they would leave the building and go out and do stuff. And now they
don't need to do that. They can do it with AI. And so they're reinventing how their company
works at a really fundamental level right now, you know, as in the next couple of weeks.
So that's a really interesting, like they're obviously an early adopter in this space,
(14:24):
but they've kind of seen the writing on the wall and they, they reckon if they don't
do this, they won't exist in a couple of years.
Yeah, well, it depends on the industry, but some organizations are more
challenged than others in terms of what their business models are. And, you know, within
any industry, if you think that way, you do have to be looking at how
(14:49):
your adjacent companies are working, you know, are you moving as fast as them, and being
able to amplify your capabilities, taking your resources of people or assets or relationships,
whatever, and creating value for customers. And that's going to shift so fast
for the fast movers. But the thing is, there aren't that many fast movers. So it's
(15:12):
actually not as hard as I think some leaders think to actually be, you know,
reasonably, you know, somewhat ahead of the pace at least.
Well, I think it's just the traditional Christensen's innovator's dilemma:
how do you innovate while keeping your cash cows alive? And, you know, we've all
(15:32):
faced that ever since I've been working, which is a very long time, you know, it's
always been a challenge. And now it's just AI-powered.
Yeah. Yeah. And so, business model. So in fact, one of my core focuses right now is AI-driven
business model innovation. So it's, say, well, how does AI change business models,
(15:52):
and a whole array of factors, from scalable efficiency, to AI-augmented products, to being
able to have AI as a customer, to being able to restructure value delivery, to how it impacts
platforms and ecosystems, and all of these facets. So AI changes your business model,
but it also gives you the opportunity to amplify and to accelerate your ability to innovate
(16:13):
in your business models. And I think that every leader must be focused on business model
first, saying: your business model is changing, and it is a massive opportunity now to reframe
what your business model could be.
But I think people are really struggling with that, and people are asking me this a lot, you know,
(16:36):
what kind of things should we do with AI? Like, they don't even have a mental map
to put the, you know, the really low hanging fruit, the operational efficiency, take the
friction out of processes stuff, which is really kind of obvious to me and you, but
they're not even understanding how they can even do that very basic stuff, let alone
(17:00):
reconceptualize their business model.
Yeah, well, that's where we need the leaders. And it is frustrating,
you know, this focus on cost efficiency. I mean, it's the easiest of all of those facets
of how AI can impact your business, the easiest and the most obvious, but there's
so many other ways in which it can be used to add value. And part of my broader thesis as
(17:25):
well is that talent is going to be so important that those organizations that prioritize people
in the shift and augment their people are going to be the ones where talent goes. And
if leaders kind of say, oh, how can we cut costs, which, oh, happens
to be headcount, they're not going to find it as easy to attract the most talented people.
(17:48):
So what sort of roles do you see that'll be really essential in building
this new kind of business, with this augmented human-plus-AI approach?
Well, I did a framework on humans in the future of work in 2016. I pointed to
(18:09):
three sorts of things, creativity and expertise and relationships, and the way all of
these things combine. And right in the very center of it, I put human-centered
design, and AI can't do human-centered design. And every aspect of what we do is a design
process. It's not just around customers. It is around the nature of what work is. It's
(18:31):
the nature of how we interface with, you know, the rest of reality. This is all a design
thing. How are we making sure that we are, you know, shifting things and moving things
more towards things that are human-focused and, as a result, more value-creating?
So I think this design, in the very broad sense, far beyond the way most people
(18:54):
understand design, is in everything. You know, it's in how we set up this conversation.
It's the whole nature of everything. So I think that's one critical piece.
And there's an interesting point around this idea of translation, in the sense of: how can
technologists, you know, communicate with business people is one of the most obvious
(19:19):
ones. And there's a whole array of other translation things, or how can the product designer sort
of, you know, reach the customers, you know, hopefully there's less of a translation
piece there. There are so many facets of how we are facilitating communication, that merging
of the minds. And, you know, again, sort of going back to the big picture, you know, I
(19:41):
think of an organization as collective intelligence, at least that's what it should be.
Yeah.
This idea of, okay, well, it's got a bunch of individuals in it who hopefully
are individually intelligent. And the measure is: is the organization collectively more
intelligent or less intelligent than, you know, the sum of the parts? And the opportunity,
(20:01):
clearly, is that we can have collectively intelligent organizations where we can combine
the insights and the capabilities and all of the facets of the people within them to,
you know, be amplified to degrees which, you know, we've never yet reached.
That's absolutely on the table.
That sounds like a delightful future. I look forward to it. One of the things that occurs
(20:24):
to me, though: what sort of things might one need to change in one's organization to
start to realize that kind of future?
Culture.
You know the saying, culture eats strategy for breakfast.
Yeah.
Well, you know, the culture.
I worked at one company 10 years ago and then I went back 10 years later. The culture had
(20:50):
not changed even though they had spent millions of dollars trying to change the culture. So
culture is really resilient.
It is.
Well, I mean, I wouldn't put my hand up and say I'm the change management expert, but
I mean, ultimately it is in leadership. And, you know, I think, you know, two of the many
(21:11):
dimensions of it. One is this idea of valuing human beings first.
You know, we value people, whether they be customers or people on the street or
people working there, so that it is truly human and people-focused. That's just kind
of a fundamental thing. If you haven't got that, you're not going to do
(21:34):
as well as organizations that do. That's one piece. And the other is experimentation and play.
As in, nobody's got the, you know, "this is the box, this is what you should
do, what you need to do as an organization". You know, you can look around, you can learn
from others, but essentially the only way to do it is to be curious and to experiment
(21:58):
and to play and to learn. So, you know, this old, but I don't think
trite, idea that we have to be learning organizations, now more than ever,
with everything moving faster and faster, unless people want to learn, and do learn, and are
rewarded for learning.
That means that really, organizations, if they want to be that kind of organization,
(22:20):
then they need to start to think about how they're going to create a sense of psychosocial
safety inside their organization, so that it's okay to be wrong. It's okay to fail fast.
It's okay to try things, try new things. And, you know, that's a really important
thing for people to understand. This is about innovation as a practice and
(22:46):
AI as a tool. And if you understand that, then for that to be effective, you need to have
that sense of psychological, psychosocial, safety. And that's a thing that organizations
are looking at now, which is really good.
Yeah, well, it's certainly explicit now, as in saying, yes, we do need to create psychological
safety in organizations, and there's lots of consultants helping to do that. But it just
(23:09):
goes back to the intent. Do we prioritize people over, you know, things? And again,
you know, I always believe there's not the conflict between profit and whatever,
you know, social or environmental good you're seeking. You know, the path
to the greatest profits, as long as you have a more than six-months time frame,
(23:34):
you know, is always to prioritize people, because that's going to get you to where you
want to be, the most successful organization.
Yeah, I love that. That's one of the things I've always loved about you. You always had
that focus on people, and people making organizations stronger and stuff. So what would be
your key takeaway for organizations? Like, what do they need to start to think about
(23:58):
doing in the short term to be able to participate in this kind of future?
One of the key ones is capability development. As in, you know, everyone's
talking about the learning organization. All right, yes, we do need to learn. But
it's saying, well, what are we going to become? You know, it's the Bob Dylan quote,
(24:22):
you know, he not busy being born is busy dying. And it's the same thing for organizations, as
in this continual rebirth. And that's the way, when people ask what innovation is, for
me it is renewal. You must be continually renewing yourself. And so this requires
asking: what are the capabilities that matter? How can we get better at those capabilities?
(24:47):
You know, how do we learn as individuals? What are the processes we develop? How do
we get our data infrastructure, you know, decent? How do we get our data pipelines
good? Because that's a pretty important part of it. You know, again, it's nice, we're
talking at a very high level a lot, but you do need your data pipelines; your data infrastructure
(25:09):
and data architecture are absolutely fundamental capabilities for all of this as well. And
so you're saying, well, what are all these layers of capability development? Just say,
okay, it's a journey. We're not going to get there tomorrow or in a year, but we can see
the direction. These are all the sets of capabilities we need to develop. Let's get busy developing
those capabilities. And the best way to develop those capabilities is to do stuff, experiment
and learn by doing.
It's really interesting that you mentioned the data platforms and data pipelines,
because, you know, I spent the last five years at UNSW building a data platform that could
serve up traditional business intelligence and analytics, AI and ML, low-code apps and
data integration. And, you know, I keep having people go, oh my God, you built all of that,
(25:57):
it's amazing. And I'm like, if you haven't done that, just start today. Like, it only happens
if you start the journey. So take one step towards it.
Exactly.
So it's so fundamental. Now, one question I did want to ask you is: how do
(26:19):
you think the different parts of the organization, like the board, the executive
and then the rest of the team, how do you see their roles in this area that's emerging?
Well, that's an interesting question. And, you know, I do a lot
(26:39):
of work with boards, and I guess the dynamic between the executive and the board is, well,
it's always interesting. And one of the key points is where does the strategy
reside, you know, in terms of framing the business models of the organization. And,
(26:59):
you know, there's different opinions and different debates around this. And, you know, obviously
it needs to be co-created. And essentially, you know, at its best, the board creates the
conditions whereby the executive can do the hard work of being able to frame and to propose,
you know, the details of how it is they will implement that, and, you know, to be
(27:22):
able to push back and improve and refine and hopefully approve what is proposed. And
so, you know, the board creates the frame within which the strategy innovation
and business model innovation and, you know, the culture development and so on can
happen. And, you know, there's a lot of hard work to be done below
(27:48):
that. And, you know, one of the key points, of course, is, you know, diversity on every
dimension that you can think of. And that includes, of course, the layers of the organization,
where, you know, frontline people, as well as board members, as well as the executive team,
as well as lots of layers in between, all have different perspectives. And if you
just have the board and the executive, you're not going to have the
(28:08):
diversity of perspectives you need.
Yeah. Some really important points there. So this is a pretty good time
to wrap up. Thank you so much for your time, Ross. It's been really great talking to you.
Always a pleasure. Fantastic talking with you, Kate.
And that is it for another episode of the Data Revolution podcast. I'm Kate Crothers.
(28:30):
Thank you so much for listening. Please don't forget to give the show a nice review and a
like on your podcast app of choice. See you next time.