
November 22, 2024 • 37 mins

Had a great chat with Kobi Leins about some good feedback on the Australian Government's Microsoft Copilot pilot (full story here: Australian Government trial of Microsoft 365 Copilot). We also had a wide-ranging chat about the upcoming training deficit all organisations have in respect of AI, and the recent changes in the federal AI space.

Links to stuff we mentioned:

https://www.digital.gov.au/initiatives/copilot-trial/microsoft-365-copilot-evaluation-report-full

https://www.microsoft.com/en-us/worklab/work-trend-index/copilots-earliest-users-teach-us-about-generative-ai-at-work

https://www.industry.gov.au/news/four-new-centres-help-australian-businesses-adopt-ai

https://substack.com/home/post/p-151681219


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to another episode of the Data Revolution podcast.

(00:19):
Today my guest is Dr. Kobi Leins.
Again, great to have you back on the show, Kobi.
Thanks for having me back.
It's an honour to be back.
It feels like just yesterday we were having a conversation.
Yeah, and today we're going to talk a bit about Copilot and a bit about some of the movements in the AI space in Australia.
So yeah, we're just going to have a chat about that.

(00:41):
So I was reading the Commonwealth Government, the Australian Government trial of Microsoft Copilot report, and I was sharing that with Kobi earlier.
And I thought that might be interesting to actually have a chat about.
As you do, as one reads these reports just for fun, Kate, what did you find most interesting about it?

(01:05):
I thought the really interesting thing was that it was pretty positive, overwhelmingly positive, but not universally positive.
So there were some things that people said were not as effective as other tools.
So that was around coding.
So the thing people seem to love, and this seems to be really consistent with my experience at UNSW, they love the meeting notes.

(01:31):
Everyone loves fewer meetings and the ability to not have to attend all the meetings.
That is the thing that everybody loves.
A lot of people say it makes them more productive.
Hilariously, it was people in the senior executive service and corporate roles that found that they got a lot of productivity benefits.
So we don't know, and I probably need to delve into it a bit more to find out what happened with the frontline staff.

(01:55):
So the people who are actually dealing with customers, and whether they were included, or whether they had a good experience.
So I don't know what their experience was, but from the senior executive service and corporate services, overwhelmingly positive feedback.
So that makes me think also of rolling out Copilot in a large organization, and just recently I was reading the Thesis Whisperer, who in academic circles was gushing about the benefits of Copilot.

(02:22):
Particularly for these kinds of things: taking minutes, not having to attend meetings. Attending multiple meetings was one of the pictures we had from Microsoft about using Copilot.
But then I also wonder about some of the non-verbal relationship-building stuff that happens in those meetings, particularly when they're in person and you're sort of having those conversations. It sort of comes back to what is productivity, and then also how those people report it,

(02:44):
because most of these reports are self-reported, at least the ones I've seen. Is that the same for this report as well?
Yeah, so it's the government assessing their own activity. They did it as a proper pilot. They did surveys; they've got charts in the report.
It's a really nice summary though. And it's funny, and I'll put the links to both of these in the show notes for people, because there is another article that I've been sharing as part of my AI for Organizational Innovation course that I'm delivering at the

(03:17):
Australian Graduate School of Management at the moment, which is by Microsoft: What Can Copilot's Earliest Users Teach Us About Generative AI at Work. And it's a first look at impact on productivity, creativity and time.
And because Microsoft wrote it, obviously, it's overwhelmingly positive. And, you know, in my last class, people were saying, well, we might take that with a grain of salt.

(03:42):
So I think this one will be better for people to have a look at to get a real feel for what it looks like across a very large organization.
There is some interesting, slightly more independent research from the Boston Consulting Group (I use some of it in my trainings as well) showing that the benefits vary vastly, to your point, depending on what roles people have.

(04:05):
So people who have more access to more information, I mean, it's really obvious when you say it, but they're going to have more benefit from a tool like this that can skate across and collate and curate what they can access already.
But someone who's more junior or frontline or has limited access is probably going to get less benefit. But that said, for more junior staff who are not as competent or, I mean, I'm assuming that junior people are less competent.

(04:27):
Let's reframe that. For people who are less competent at writing generally, they're going to get more benefit out of these tools because they're great for a first throw down and then you can go through and review them.
But for those who are really good writers or think by writing, they're not getting as much benefit out of these tools. So there's a whole lot of research on those sorts of connections.
And then the wonderful sort of sidebar: people worried about losing jobs are going to start putting documents not on Teams and shared sites, particularly lawyers who have precedents or other kinds of documents they don't want to share, because they want to keep their value in the workplace.

(04:56):
They will game some of these tools as well, which I also looked at in terms of a side effect of some of the tools being used. But, you know, I look at it from the red-teaming angle: what are people going to do that's less expected, or how are people going to change behaviors?
And that is one of the interesting things because corporates are overwhelmingly on Microsoft, you know, they use Microsoft Office. They've all migrated to the cloud version of that product.

(05:22):
But now with Copilot baked in, whether you want it or not, it starts to really raise questions for organizations that have needs like lawyers.
And I know a lot of law firms have been exploring developing their own internal custom large language models.
And many of them are already. Yeah.
Yeah. So it'll be interesting to see how that plays out in future. But yeah, so it was an interesting piece of work.

(05:48):
Tell us what's happening in the AI space in Australia. There were some government movements recently, weren't there?
Yeah. So the National AI Centre has moved under the Department of Industry, Science and Resources, into DISR.
And the interesting thing about that is that it's now becoming operationalized. So there are some large sources of funding that are being distributed to various organisations that are going to be AI enabling.

(06:14):
It's more AI enabling, it seems, than AI management and governance. And I make that distinction because, to your point about certain framing of messages,
I think there's still a push to, you know, accelerate the uptake of AI and to use it more, which I don't disagree with.
I would just add at the end of it: it needs to be managed and used in a sensible way.
So it's going to be interesting to see as those centres roll out their work, what they're offering.

(06:41):
I co-, well, I ran a workshop for Standards Australia in Sydney a couple of weeks ago.
And I think one of the questions I'm getting just about everywhere I present now is: how do small and medium enterprises even start to engage with managing and governing AI?
And there's a real gap in that space because the large corporations can tack an AI person onto their privacy team or in their tech team.

(07:02):
And they're starting to do that. So they're starting to hire chief AI officers.
But for the small and medium enterprises, they really are going to need a resource where they can either have services and products pre-approved, or at least have some kind of uplift or skills provided to help them do that.
There's a massive gap there, I think.
I was literally talking with Gladwin Mendez, who's, you know, a well-known chief data officer in Australia and New Zealand, yesterday about this.

(07:28):
He's actually set up a new business for fractional chief data officers. And that's the way of the future, because he was saying the same thing.
All of these big corporations can do it themselves. But all these smaller corporations need the support, and they probably need more fractional support from more roles, but they don't need it full time.

(07:51):
There are two pieces to that. I think, firstly, even the larger corporations don't necessarily need someone full time, for two reasons.
And these are the two parts.
One is there aren't enough of us to go around to do the work. So some of us have been doing this work for a long time and know how it needs to be done.
There just aren't enough of us. We're going to have to uplift and train, and that's going to take a little while.

(08:13):
The other piece is you don't really need someone full time because you actually need to do it incrementally and do some internal work.
So you can actually have someone come and help you, then you go do the work internally, and they step out and come back.
And I'm seeing that with a lot of companies that want policies created. Okay, you've got your policy created.
That's been settled with your execs and your board, everyone's got buy-in, you've socialized it. Then you need your processes.

(08:38):
And in between, you don't necessarily need someone full time shepherding that; it needs to be driven, you know, at the highest levels.
So then you can duck and weave and even provide a lot of assistance without being there full time. I think he's really onto something.
Yeah, yeah. So you know, it's really interesting that he's identified that gap in the market.
And, you know, I think that's going to be growing area for the future.

(09:03):
Well, the other thing is just the training that people are going to need. So I've been having a lot of conversations with organizations.
And they're even struggling to grapple with just basic AI literacy programs. Somewhere like UNSW would solve it pretty easily, because they're used to teaching.
So they had a new online course for staff up and done pretty quickly. But other organizations that don't teach as a business are really struggling with this.

(09:32):
How do we get just a basic level of AI literacy across our entire workforce?
Yep.
And how do we then work out what skills our technical people need and what skills our business people need to build?
So, you know, so there's going to be a lot of training needed across the entire workforce.

(09:53):
Yes, and translational work. So I think, again, back to the chief AI officer point: there are starting to be full-time chief AI officer roles, and I'm watching the
space fairly closely. A lot of them are technical. They're looking for people who can build AI and ML and also govern and also educate, and those are different skill sets and
different framings.

(10:14):
Well, yeah, I think being able to do translational work, not everyone can do that.
So how do you find someone who actually has the governance and management and the technology and architecture skills? Like, I can do that because, you know, I'm very old and I've been working across data.
You're also amazing. But yes, yes.
But finding that in one person is exceptionally rare. And they're going to need more than one person, is what I'm saying.

(10:45):
And I think that educational piece, again, you can't do one-size-fits-all. So the trainings that I do with data scientists are very different to the trainings that I do with lawyers. This week I gave a presentation to the Victorian
Comprehensive Cancer Alliance, which is very different again: talking to medical professionals about what kinds of automation and AI could be used and what things to think about.

(11:06):
There's an industry-specific angle, and an ability to draw lines and learn lessons from others, that I think is really important.
On that note, I'm going to do a shout-out for Kendra Vant, who has a Substack, Data Runs Deep; I'll put a link in the show notes.
She made a point today in her blog which I thought was brilliant and I haven't seen in many places.

(11:28):
When you hire someone to do this work, let them speak publicly about it, including the red teaming and the things that went wrong and that you didn't do.
There are two benefits to that. One is that you're actually showing that you're thoughtful and mindful, but the other is that you're going to attract others who want to do similar kinds of work with responsible organizations.
So don't just tell your customers and the broader public what you've done right. If you've stopped something going through, or you've had some kind of system that you've reviewed and gone, no, that's a bad idea.

(11:54):
I've heard of a couple of funny examples this week that I can't share, but definitely share those, share them with the public and also let others learn, because that's how the uplift is going to occur: when others hear that other companies are stopping doing things.
So I thought that was a really excellent point that I'll definitely weave into my trainings going forward as well.
Yeah, and you know, I know that the Australian National University, when they had their big cyber breach, were very open about it and shared a lot of security learnings across the higher-ed sector.

(12:21):
And it was the first time any organization in higher ed had really done that, and we all got real value from finding out what had happened and how it had happened.
And it really helped to uplift our knowledge of cyber breaches. So, you know, I can't imagine why it wouldn't work in the AI space as well.
100%. I think also for people to understand tolerances. And I really enjoyed the workshop with Standards Australia for the case studies.

(12:47):
People got really into the case studies and had different views and different opinions, and in that particular piece each table had different C-suite roles and a vendor.
So it was as realistic as it possibly could be to sort of say, what are your interests here?
What questions would you want to be asking? What would you need to be thinking about?
In that case, it was to align with the ISO standard, but even just thinking about aligning with your own processes without the ISO standard and managing risk internally.
That's a really helpful framing. But the sense I get is that people are hungry now for practical applications.

(13:14):
They've tried their PoCs, they've done their Copilot trial or they've heard about someone else's Copilot trial, and what they want now is really concrete guidance on how to embed policies, processes, and the people and cultural piece as well,
which is often forgotten in how to manage these systems going forward.
Yeah. And, you know, I think of the hardware vendor example that I've used extensively, where they did a pilot last year.

(13:39):
So this time last year, they turned on a pilot, and a couple of weeks later they turned off their pilot because it had surfaced so much confidential information. The thing that made them close it down was that somebody was able to discover a spreadsheet with everybody's
bonuses, which didn't go down well, to be honest. So that pilot got shut down pretty quickly. But I was having a chat with Microsoft, some of their senior folks from Seattle, about that, about this time last year.

(14:10):
And they were just saying their baseline assumption is that every organization has its information and data governance sorted.
That's their base. They've released everything and gone, we think you'll be fine because you've got this sorted. And increasingly, it's obvious to me that many organizations have not even started on this journey.

(14:31):
So I'm running a Data Governance for Leaders course at AGSM, and, you know, the number of people that are coming saying, how do we start? How do we get started with data governance? So they're not even looking at AI governance; they're just saying, how do we do data governance?
And that's where you start. So this is where our work ducks and weaves: people come and say, I want to use AI, or I'm using AI, or, you know, I've thought about doing something.

(14:54):
And the place you start every time is data governance: how much do you have lined up internally? And also, and this is also in Kendra's blog today, you can't run a PoC with a few things sort of held together with Blu Tack and sticky tape and then go, we can roll this out wholesale.
If you haven't sorted your data governance to begin with, and you haven't got ongoing processes to do that, your AI is going to be not very good, potentially harmful, and not give you value for money.

(15:21):
So that's an interesting thing, because one of the things that I'm talking about when I'm doing a lot of public speaking these days is just having an agreed pipeline from proof of concept to pilot to production, and to have the right stage gates and the right
approvals, so that you have a systematic process for saying, yeah, this PoC was good. It's good to go. Now it can go into pilot. Now it's good to go and now it can go into production.

(15:46):
And then to have your ongoing assessment. And the thing that I'm telling people now is: AI is like buying a puppy. It's not like the old days, where you'd implement a CRM or some kind of accounting system and you'd leave it.
You would just leave that thing alone until you did the next upgrade. AI needs to be tweaked and adjusted on a regular basis, otherwise it will drift. You'll have model drift, and your answers can be egregiously wrong in a very, very short time.

(16:15):
So, you know, it's like buying a puppy. You've got to feed the puppy, and people are not understanding this. So it's always a bit of a shock to them when they realize that AI doesn't just sit there and run properly forever.
It needs to be tweaked.
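The model-drift point lends itself to a concrete check. Here's a minimal sketch of one common approach, the Population Stability Index, for comparing a model's live inputs against the data it was built on; the bin count, sample data and thresholds are rule-of-thumb assumptions for illustration, not anything from the episode.

```python
# Minimal drift check: Population Stability Index (PSI) between a baseline
# sample (what the model was built on) and current production inputs.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between two samples of one feature; higher means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid division by zero and log(0).
    b_pct = np.clip(b_pct, 1e-6, None)
    c_pct = np.clip(c_pct, 1e-6, None)
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # illustrative training-time data
current = rng.normal(0.4, 1.2, 5_000)    # illustrative shifted live data
# Common rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 act now.
print(f"PSI = {psi(baseline, current):.3f}")
```

Run on a schedule against each model input, this is one simple way of noticing that the puppy needs feeding before the answers go egregiously wrong.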
Yeah, I've just got a puppy wandering past. Puppies also require puppy school and socializing and connecting with other puppies; that analogy could go on and on and on.

(16:39):
The IEEE's got some really good frameworks around procurement. I don't know if you've seen those. They have very specific requirements for procurement that can be really helpful.
The ISO standard is more about the AI management over the life cycle, but both of those require ongoing review. So to your point, you absolutely can't just acquire and forget.
But also you attach things to other things. To use the puppy school analogy, these puppies are going to play with other puppies, and you want to make sure they're not biting and not doing things they shouldn't be doing, even if they're teething.

(17:09):
The other thing is definitions. I've seen PoC and pilot used as terms just to get things over the line, when they're at a scale where they're clearly not either of those. So having definitions internally in your policies that say a PoC is this size group for this period of time,
and a pilot in the real world is this, is really important, so that people don't game getting things out into the wild by saying it's a PoC or a pilot.
So that's where your policies come in. And all of those pieces are in interplay: the policies don't work without the processes, and without people trained.
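To make the definitions point concrete, here's a minimal sketch of what those internal PoC and pilot definitions could look like as something checkable; every threshold, stage name and approver role here is an illustrative assumption, not a figure from the episode or from any standard.

```python
# Sketch of policy-defined stage gates so a "PoC" can't quietly become
# production. All limits are made-up examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class StageGate:
    name: str
    max_users: int          # how many people may touch it at this stage
    max_days: int           # how long before it must be reassessed
    approved_by: str        # who signs off to move to the next stage

STAGES = [
    StageGate("poc", max_users=10, max_days=30, approved_by="data governance lead"),
    StageGate("pilot", max_users=100, max_days=90, approved_by="AI risk committee"),
    StageGate("production", max_users=10_000, max_days=365, approved_by="executive sponsor"),
]

def check_stage(claimed: str, users: int, started: date, today: date) -> list[str]:
    """Flag a project calling itself a PoC or pilot when it clearly isn't."""
    gate = next(s for s in STAGES if s.name == claimed)
    issues = []
    if users > gate.max_users:
        issues.append(f"{users} users exceeds the {gate.name} cap of {gate.max_users}")
    if (today - started).days > gate.max_days:
        issues.append(f"running past the {gate.max_days}-day {gate.name} window")
    return issues

# Example: a "PoC" quietly rolled out to 400 staff for five months.
print(check_stage("poc", users=400, started=date(2024, 6, 1), today=date(2024, 11, 1)))
```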

(17:36):
So this is probably the biggest thing. A friend of mine was at a talk this week where Sean Brady was speaking, and she was so excited. She said, oh look, you know, they're talking about how everything is a system, and how we as lawyers always look at that end result, when actually it's the whole system
interacting. She said, this is stuff you've been saying for years. I said, yes, yes it is. But she said the other thing you talked about was culture. One of the biggest pieces of this, and one of the hardest pieces, is actually having a culture where people feel safe to call stuff out.

(18:05):
Yeah, and that feeling of psychological safety is so important to start thinking about consciously developing across your organization. You can even do it in your own little pocket, you know; it can be safe in your own team, even though the rest of the organization might be slightly toxic.
You can literally build that kind of culture, but it's so important, because there are often very early warning signs that get ignored, and, you know, we're going to have plenty of case studies.

(18:34):
Oh yeah, they're already starting to happen. I mean, the other thing was, she said they used Boeing as an example, and again, lawyers look at the fault, you know, what's happened with Boeing, but in effect that was a cultural piece, right, the
board.
Boeing's board always had engineers; they always had safety first. And then they very, very rapidly removed a number of those safety perspectives, in addition to the focused people.

(18:59):
Sorry?
They were risk and safety focused people. Yeah, and it became very much a sales-oriented board.
They then turfed them. But that goes again to your processes: you want to have carrots and sticks for all of these issues. You can't just go, I've got this policy that floats alone in the corporate ecosystem and I hope someone pays attention to it. Unless your

(19:21):
policy is embedded, you know, in your code of conduct if something goes wrong, and unless it's in the KPIs of your execs to get rewarded when it goes right, there's no incentive for people to change, or there's less incentive for people to change.
Yes. And, you know, having implemented data governance and been part of the recent revamp of it at UNSW, one thing is that once you've released it, there's an ongoing need for comms forever. It's like painting the Harbour Bridge: as soon as you finish, you start the

(19:53):
work again. And so we used to sit down every year and work out a proper comms schedule, because you get new people in the organization, new roles, and the organizational memory of practice can very quickly dissipate if you're not always communicating
about it.
Yep. And you reach different levels of maturity as well. You want to be uplifting and encouraging and bringing others along.

(20:17):
And the ongoing nature of it is really important, and I think that's probably one of the biggest things that the standard will be encouraging for those who are complying with the ISO standard, but it's just generally good practice in your business, with or without the
standard. The thing that I keep coming back to is: if you don't do this well, it's going to cost you a lot of money, both in terms of the tools that you're buying that won't serve their purpose, and also in reputational harm, potentially even, you know, fines and litigation.

(20:42):
You want to get it right. It can do real harm to real people. So, you know, like the Boeing example: planes fall out of the sky.
Yep.
You know, so it might not be that serious, but people can be harmed. Look at Robodebt; people were significantly harmed by Robodebt, which wasn't AI, I just want to make that point.

(21:04):
But, an Excel spreadsheet. Yes. You know, and this is the other part of the problem. There are a lot of snake-oil salespeople around at the moment with AI, who are trying to sell solutions that are just rubbish, you know, that are underpinned by a spreadsheet, or, you know,
somebody told me the other day about one that was a Ctrl+F in a Word document.

(21:30):
Wow. Imagine paying for that. I think, circling back to the Thesis Whisperer, what I was trying to say at the beginning is also, when you come back to your problem statement: is your problem that you need to be able to attend more meetings?
And I've been in these jobs, right? I've had these jobs where you've got back-to-backs across international time zones from eight till seven at night.
Maybe we just need fewer meetings and other ways of managing work. I know that's an outrageous thing to say, but maybe the problem is the meetings.

(21:58):
Could this meeting be an email?
Could this meeting be a to-do list, and people actually do the work? I'm not saying I don't love meetings. I love a good meeting when it's got an agenda and it's got an outcome; I'm all over it.
But a lot of our business practices are quite poor, and AI is not going to fix that, just like a lot of the HR tools that are coming out are not going to fix your cultural problems.

(22:19):
This goes to my other thing, which is email. You know, if we're using generative AI to fix our email...
If I'm using generative AI to write my email, and I'm not reading the substance of yours,
and I'm using mine to read yours,
what is the purpose of that communication? How might that communication be different? Because if AI can read it and write it, we're not really participating in that communication.

(22:43):
How real is that communication? I think this is going to cause us, well, sensible people, to start to think about these things, because the purpose of email is communication.
So how can we undertake the communication task more effectively? Because, you know, email is ancient technology; it's from the 70s.
And, you know, there were some attempts by Google to make it go away. You remember Google had a stab at it? Yeah, I do.

(23:14):
A lot of people are very wedded to email, but it's probably the most inefficient form of communication.
Yeah, I didn't like me on that podcast, because it was three in the morning and I don't really remember much of it, but I interviewed Emily Bender, who I fangirl very, very deeply, on the Carnegie AI and Equality podcast.
Emily's been doing this work forever.
And she talks about the symbol and the meaning, right? So what generative AI is doing is capturing the symbols, but not the meaning behind them. And we all know, again, those of us who are older, to pick up the phone; there are times where I'll pick up an actual telephone.

(23:45):
Because once it gets past a certain number of texts or emails, you're like, this is not really what should be happening here. I just need to have a conversation.
So, again, going back to your business practices. I think one of my favorite ads while I was at
Bupa was actually about how to have fewer meetings. It was like, this is Bob, Bob has too many meetings; he called it out, and they showed how, culturally, you should do that. But again, creating those spaces. People go to the easy answer, really, that I want to be watching all of these meetings at quadruple speed, which, you know, a lot of kids do with lectures as well.

(24:15):
This is a thing: we're kind of digesting information differently.
But should we be presenting the information differently? Because if it's so boring that you need to watch it at four-times speed,
you're probably not doing it right. Well, you know, people have different minds. My mind likes to consume data fast, and I find a lot of people very slow.
So if I can watch a video of them at two-times speed, it's actually a good speed for me. Real time is slow for me.

(24:44):
Yeah, I think we're quite similar. I had complaints at Melbourne when I recorded lectures, because I speak so quickly that people couldn't watch it at double speed. So the complaint was that I need to speak more slowly. I'm like, no, just watch me on single speed.
This is how my brain works. Sorry, can't fix that.
But probably what's going to happen is, you know, there are advances in science and medicine that are starting to understand different brains. And I think one of the outcomes of this is we're going to understand that there is no one normal brain.

(25:14):
We've had this idea that there's a normal brain and then there's the neurodivergent brain. But I think we're going to discover that all brains are just different ranges of ways of thinking, and that those ways of thinking will work for some and not for others.
Yeah.
I really enjoy the work that I'm doing with boards and execs to really drill down on what steps they can take for all of these pieces. I just think there's so much work to be done in this space, and there are so many people who are hungry for more information about how to do it better.

(25:44):
They don't want the negatives, but they also want to not be buying tools that are Ctrl+F functions underneath.
There's just, yeah, we're going to need an enormous amount of training to uplift, not just consultants; people educationally are going to need to have more work done.
UNSW is doing some great work in this space, but there are going to need to be some serious changes across universities to be able to respond to this.

(26:07):
Yeah, well, you know, there are times when it's just really handy having an entire organization that teaches and can turn around new courses really fast.
But, you know, I just look out at all the organizations that don't have that capability, and also often don't have the funding to go and pay for these courses, you know, because there are a lot of courses that you can go and buy.

(26:31):
If you have to educate your entire workforce, that becomes an expensive proposition.
So, you know, the question is, does it need to be face to face? Probably not.
And really, there's some quite good learning available online for free already from a lot of the big vendors.

(26:52):
But, you know, one thing that I've detected, and it's come up in conversations with a number of CEOs recently, is that they've all made the big push to cloud.
And they're typically on one of the three big clouds.
And they're starting to get price sticker shock now; they're starting to realize that everything they do in the cloud costs money, and they're starting to talk about repatriating some of their processing back into data centers.

(27:23):
So what I'm going to predict is that we're going to see a lot more private clouds hosted in internal data centers, interconnected with the public clouds, and people are going to start choosing where they run workloads.
And I'm predicting a lot of organizations with sufficient technical skills are going to start running their own large language models, things like Llama, in their own private cloud on-prem, and only running what they need to in the public clouds,

(27:59):
because the costs are going up exponentially. And one of the other things is, you know, OpenAI, I was looking at their numbers recently.
They basically spend $2.35 to make a dollar at the moment. They took billions of dollars of investment recently, and they've spent a lot.
And that's why they need the hype, because the sales have to continue to justify the price.
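As a concrete illustration of the on-prem prediction above, here's a minimal sketch of calling a locally hosted Llama model instead of a public cloud API. It assumes an Ollama server running on localhost with a llama3 model already pulled; the host, model name and prompt are placeholder assumptions, not anything from the episode.

```python
# Sketch: query a self-hosted Llama model via Ollama's local HTTP API,
# keeping the prompt and the answer inside your own infrastructure.
import json
import urllib.request

def ask_local_llama(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to a local Ollama server and return the reply text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",   # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llama("Summarise our meeting-notes policy in two lines."))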

(28:23):
It's not like a Ponzi scheme.
It's a little bit like a Ponzi scheme, and I encourage those who are being sold to, to ask questions.
But I do think there's also a benefit beyond cost in that model, which is, you know, smaller models that are tailor-made and ring-fenced and supervised can often be far more effective as well as cheaper.
This is the bit where, you know, I had someone reach out this week and say, you know, I know you're worried about catastrophic risks.

(28:47):
It's like, no, I'm actually more worried about what's happening right now in the way that people are being harmed or that businesses are making poor decisions.
And I think one of my other favorite quotes was from a C-suite exec who said, I wish that people in corporations would manage money like they manage their family budgets.
Like, imagine if you came to your partner or your family and said, I want to spend all of this money on this new thing.
The first question you'd be asked would be: why? What's the problem that we have that we need to buy this thing for, right?

(29:12):
And that's what you should be doing in corporations as well. What's our problem that we're trying to solve, and what's the cheapest, most stable, safest solution to do so?
Yeah.
So starting from that point, rather than from whiz-bang AI and OpenAI selling hype.
But I think we're getting there. I think more people are asking more questions than they were even three or six months ago.

(29:33):
But part of the problem is, you know, it's been a perennial problem. It was a problem for me in 2000: my CEO at AMP went and played golf with a bunch of blokes.
Oh, the golf sales.
We need to do e-commerce, he said. And I was like, do we even know what e-commerce is, George?
And he didn't have an idea. We didn't have a business need for it, but we did e-commerce because he had a whim.

(29:56):
So management by whim has been with us for a while, and will be with us for a long while. But, you know, organizations that can resist management by whim and systematize the prioritization of use cases,
that can systematize what I'm calling the pipeline to production for AI, from proof of concept to production, organizations that can manage that effectively, and can tie all of that to strategy, will

(30:28):
start to win. So, you know, if you're doing something on a whim, it's probably not going to stack up.
There won't be broad support across the organization.
And again, change management needs to be across all of that stuff.
And typically when you're running a project on a whim, you don't usually line up all the typical project resources.

(30:49):
So, you know, I think there are going to be some interesting winners and losers. The people that can do that whole prioritization-to-production pipeline stuff will do well.
And the others, just doing random stuff, will just spend a lot of money, and all the cloud vendors will love them because they'll be spending all the money.

(31:12):
I'd say, in addition to the whim... and I think we need to find another word. Maybe we just call it the whim purchase. I've always referred to them as the golf purchases, but maybe I need to reframe my language.
We don't want to offend the golfers among us.
There are golfers. Some of my best friends are golfers, Kate. I just want to make that very, very clear.
I know many golfers and love them. We're not judging golfers on this podcast.

(31:38):
One of the other layers to that is, I saw just this week someone announcing that they'd been to the Singularity University executive program. And I'd say, in addition to the whim purchases, there are also the strategic snake-oil-selling vendors. So there are trips that people take to Silicon Valley.
There's Singularity University, which, by the way, is not actually a university.

(32:02):
People need to be skeptical and ask questions about who's telling them what, for what purpose. And I always talk about social silences: what are the things that are not being said? Like, what's the cost?
What's the environmental impact? What's the long-term sustainability of this project for your company versus, you know, perhaps a slightly cheaper, simpler version?
I think that educational push has become quite sophisticated, almost like a contagion at some levels, where people drink the Kool-Aid and then come back and say, well, you know, everyone else thinks this is brilliant.

(32:30):
You still need that culture where someone can put their tiny hand up and say the emperor's got no clothes on and here's why.
Yeah. Well, you know, I generally think this is yet another thing in the technology space that's like teenage sex.
A lot of people have fear of missing out; you know, they think everybody's doing it, but a lot of people aren't actually doing it.

(32:52):
Only a small proportion of people have really got AI in production.
But they all talk about it as if they do, much like teenage sex as well.
I'm part of a global chief data officers forum; you know, we meet a couple of times a year, and generative AI has the shortest adoption curve I've ever seen.
So we met in November last year: nobody was touching it.

(33:15):
Everybody was trying to work out what it was and how they'd get value out of it.
We met in Q1 this year: everybody had at least a proof of concept in production.
Yeah.
And that was with a whole lot of American and European banks included in that.
So, the shortest adoption curve: from I don't know what to do with it,
to I've got something in production.

(33:37):
But they are PoCs in production; they're not really production. They may be scaled-up PoCs, but they haven't worked out how to make a profit.
They haven't worked out their cost models, and they haven't worked out how they manage risk across it and how they actually manage AI.
So I think there's a big gap in organisations around the equivalent of application management, which we've been doing for many years.

(34:04):
And I think that's what's really important about AI, because, you know, it's kind of a mishmash of things.
So some organisations will have their own custom language models; some will be using, you know, cloud-based AI technology.
And one of the things that you're going to want is some kind of single pane of glass across all of that, so that you can manage it properly, and boards are going to want to understand, well, what's my risk profile across all this?

(34:31):
So I can really see some gaps there.
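For the single-pane-of-glass idea, here's a minimal sketch of what an AI system register, rolled up into the board-level risk view just described, could look like; the fields, example systems and risk tiers are illustrative assumptions, not from the episode.

```python
# Sketch: one register for every AI system in use, custom-built or
# cloud-based, aggregated into the risk profile a board would ask for.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    kind: str            # e.g. "custom LLM", "cloud API", "vendor feature"
    owner: str           # accountable business owner
    risk_tier: str       # e.g. "low", "medium", "high"
    last_reviewed: str   # ISO date of the last governance review

REGISTER = [
    AISystem("meeting-notes copilot", "vendor feature", "IT", "medium", "2024-10-01"),
    AISystem("claims triage model", "custom LLM", "Operations", "high", "2024-08-15"),
    AISystem("marketing copy drafts", "cloud API", "Marketing", "low", "2024-11-01"),
]

def board_view(register: list[AISystem]) -> dict[str, int]:
    """Roll the register up into counts per risk tier for board reporting."""
    return dict(Counter(s.risk_tier for s in register))

print(board_view(REGISTER))   # e.g. {'medium': 1, 'high': 1, 'low': 1}
```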
I'm just in the process of drafting a two-pager for Australian boards, because I think one of the biggest gaps is just that. Most of the pieces of work I do are about being curious, about giving people the power to be able to ask questions, because there is such hype around these tools.
And I really enjoyed speaking to a room full of surgeons and doctors and nurses and cancer researchers who said you know it's really helpful because it enables us to go into rooms and ask questions.

(34:57):
And I think people are hesitant, they're reluctant, because they feel like they don't know something, and the more senior they are, the less likely they are to necessarily want to ask or feel safe to ask.
And again, it's just empowering people and giving people the right questions to ask, because these tools actually embed your company values and strategy. And I think that's something that boards also need to understand: when you make some of these choices, they're not just operational, they're also quite strategic.

(35:19):
And where that operational-versus-strategic divide sits is quite complex, but being able to ask those questions and to have that information is really, really important.
Yeah, there's so much work to be done in this space, definitely.
There's the board space, you know, which you've just spoken about, but then there's the understanding for the executives, because when we're talking about AI, what do we mean?

(35:43):
Yeah. I mean, do we mean generative AI? Do we mean agentic AI? Do we mean deep learning? Do we mean neural networks? Like, what do we mean?
You know, do we just mean large language models, or is it something else? Is it machine learning? All of these things. They're the questions people need to be empowered to ask: what technology do you actually mean when you use this umbrella term?

(36:07):
Or is it a Ctrl+F function? I would say, you know, AI needs to be defined, and Lesley Seebeck wrote a lovely blog on this as well, about the AI definition.
My favorite one is actually from 2004: the Administrative Review Council's work on expert systems for the Australian government. It's great work from back in the early 2000s, a long time ago. It says that if something's nudging something else to make a decision, or is providing input, then from a legal perspective it's in a sort of administrative role, and you've got administrative

(36:34):
law that comes into, you know, decision-making play. So whether Robodebt was a spreadsheet actually doesn't really matter, and you're allowed to ask, to your point. And also, a little bit like the teenage sex equivalent: everyone calls it AI until it works, and then it's not AI anymore.
AI is a sales pitch to get it through the door. But what is it really? Being able to ask those questions, and realizing that everyone needs to be asking these questions at each level to know how to manage these systems properly, is really important. But yes, I could go on for hours; there's so much work that's needed, and so much appetite for that work.

(37:08):
I think that anyone who wants to work in this space will have endless amounts of work going forward.
Indeed, and that seems like a good place for us to draw this to a close. Thanks for your time, Kobi; always a pleasure to chat with you.
Lovely to talk, and I love how the work that I'm doing just opens up more of the work that is needed from you. It's completely symbiotic.
Thank you, bye.

(37:30):
Thank you for another episode of the Data Revolution podcast. I'm Kate Carruthers. Thank you so much for listening. Please don't forget to give the show a nice review and a like on your podcast app of choice. See you next time.