All Episodes

April 9, 2025 48 mins

For this episode of Open the Pod Bay Doors, host Ian Gardiner is joined by Rob Sibo, Head of AI/ML Customer Engineering ANZ at Google.

Ian and Rob sat down in late 2024 to discuss all things AI, from demos to production: scaling AI in 2025. Rob's focus for the coming year is clear: move beyond quick wins and proofs of concept to real production use cases. He anticipates a leap in conversational and multimodal AI agents, ones that can autonomously act on your behalf via voice, text, or other inputs.

In this incredibly insightful session you can expect:
🧠 Insights on AI Adoption
📊 Advice for Startups & Builders
🧭 On Ethics, Responsibility & The Agent Era
🌐 How to Get Involved with Google Cloud

“In 2025, the conversation won’t be about what’s possible, it’ll be about what agentic experience do we want to create.”

Quick Fire Round:

📚 Book recommendation – I'm a sucker for Dune by Frank Herbert. I've read them all twice
🎧 Podcast recommendation – The Well, Babbage, and The AI Daily Brief
👨‍💼 Favourite CEO – Sundar Pichai (Google) and Sam Altman (OpenAI)
🗞️ News source – Google News
🛠️ Productivity tool – OmniFocus
📱 Favourite app – The Gemini App and Apple Music
📺 TV or movie recommendation – Historical dramas, whether they're accurate or not: Medici and Ancient Apocalypse
🎤 TED Talk topic – Ethical and responsible AI

Google Cloud program mentioned:
The Google for Startups Cloud Program helps pre-seed to Series A startups thrive by giving them access to the technology, community, and resources they need to start and scale their business.

Check out the additional benefits for AI and Web3 startups.

Eligible AI startups can get up to US$350k in Google Cloud credits, dedicated technical support, hands-on AI training, exclusive access to events, webinars, and more. Apply today.

Web3 startups can get up to US$200k in free credits, invite-only access to a gated Discord channel, foundation grants, VIP event access and more. Apply today.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:08):
Hi, everyone. Ian here. All right. We're talking about AI
once again. I know it's a constant theme on this podcast,
but I think rightfully so. It really is one of
the biggest tech startup macro trends that we've seen for
a number of years. We're sitting in November 2024 right now, and really, it's two years since OpenAI released ChatGPT to the public. And the adoption has been incredible. And it

(00:29):
really is impacting just about everyone and every company out there.
And I'm sure, you know, you listeners can resonate with
that. On the show today we've got Rob Sibo. He
is part of the Google AI and machine learning team.
Fascinating guy. He's had a pretty varied background, mostly in consulting,
but now at Google. And yeah, one of the interesting

(00:50):
themes that I picked up was how he's not really
selling to CTOs in the way that he would have
done in the past, but he's selling to the business line.
So marketing, finance, legal, whatever it might be. But marketing
in particular is what we dig into. Lots of great
insights in this one. I really enjoyed my conversation with Rob.
So why don't we jump over and have a listen.

(01:15):
All right. Welcome everyone. I am sitting here with Rob Sibo. Rob, welcome.

S2 (01:19):
Yeah, thanks for having me.

S1 (01:20):
All the way from Pyrmont to Bondi Junction. I really
appreciate that you are the head of AI and ML
for customer engineering in Australia and New Zealand at Google,
so that's a.

S2 (01:31):
Mouthful. Yeah, yeah, I've had shorter titles, but it works
pretty well for me now.

S1 (01:34):
No, that is very cool. We've had a couple of
people from Google on the show, and AI and ML
is definitely a focus. I mean, maybe we should start there.
I mean, like, why is AI and ML such a
focus for a lot of the big tech companies, but
Google in particular right now?

S2 (01:49):
It's funny because everybody is now an expert in AI and ML. Everyone has sort of got on to the value proposition and they understand the potential, or at least
they think they do. I think it's a big one
for us, at least in the tech industry, because especially
with generative AI, it opens up a lot of, I guess,
call it promise areas that were sort of talked about
even two, three, four decades ago when it first came out.

(02:12):
And with generative AI, you're finally bringing the human aspect
to it. And we used to talk about stuff like
robotic process automation or document management, but without generative AI,
there was a lot of false selling or a lot
of magic sauce that went behind it. But with generative AI,
we really are able to live up to those promises.
So I think that's why everybody, everybody from Salesforce to

(02:33):
SAP to Google, everybody's getting plugged into the craze right now.

S1 (02:37):
It's probably worth just a little bit of background on you.
You don't sound like you're from these parts. Well, we're
all immigrants here, so I can't really talk. But like,
how did you end up in Google and how did
you end up in Australia?

S2 (02:49):
Yeah, yeah. Well, so I mean, going all the way back, I went to school at Carnegie Mellon, which is, which is a great computer science school. And that's really
kind of where I.

S1 (02:56):
Where is.

S2 (02:57):
That Pittsburgh. Actually, um, you know, and it was a
great school to get me kind of into the AIML
world back, you know, just post the the kind of
the winter of AI. Right. And we started talking about
neural networks and all this craziness. But then I actually
took a role with consulting. So I did a good
number of years with consulting and, and uh, after a
while I got the travel bug and they said, hey,

(03:19):
would you like to move to Asia? And I said, well,
I don't speak Cantonese or Japanese. What other options? And
they said, well, there's this little, little country called Australia.
Why don't you go down? And to be honest, I
showed up with two suitcases and I loved it. I
loved the country, but I also loved the potential. It
was it was small enough that you could really make
a difference versus the US, which was frankly, just such

(03:39):
a big machine. Um, so yeah, after about 20, I
don't know, four years with consulting across various places, including
some small startups here in Australia, um, came over to
Google and fortunately enough, the right timing with generative AI
kicking off. So now get a chance to kind of
lead the charge there. Yeah.

S1 (03:57):
And you spent some time in the UK as well?

S2 (03:59):
Yeah, I spent a number of years in
the UK, Canada, across Asia, really working with enterprises and
small businesses, public sector as well as commercial. Um, you know,
really bring in data warehousing when that was kind of
the big topic. Um, data warehousing, machine learning, data science,
a lot of the old traditional stuff, really. Um, but
I really understood across these organisations how data is really

(04:22):
fueling a lot of that other value added. And you
can't really skip that. You need to get the data right.
You need to get some of the organisational challenges sorted.
And then you can get the reporting and machine learning
and data science use cases going. So consulting was great
just to get a breadth of that and really see
how the sausage is made, if you will. Um, and
now jumping over to Google, it's sort of now you

(04:42):
can see the products, the Lego blocks if you will,
especially some of the innovation. But really being able to
piece those two together is really what we're probably here
to talk about anyway. Yeah.

S1 (04:50):
And it's a relatively new role for you. Is that right?

S2 (04:53):
Yeah. Yeah, yeah. I think I joined back in February, March of this year, so 2024. So yeah, trial by fire, joining just as Gemini really hit the street and started making kind of its momentum.

S1 (05:05):
And so how do you spend your days? Like, who are you hanging out with?

S2 (05:09):
Yeah. Well I start the morning catching up with the
US and what I missed, and seeing if there's a new Gemini this or a new Imagen that. Um,
and then you get in the office. And to be honest,
just like today, I've already met with two clients. Um,
I think there's no shortage of people who are keen
to talk about AIML. Um, so I met with a
law tech firm, which was quite interesting. Um, and then

(05:31):
I also.

S1 (05:31):
Got a law firm, but not a.

S2 (05:33):
Law firm. Um, you know, and I think this is an evolving area, kind of like there's medical tech, medtech groups like Heidi Health and others. Um, there's a number of organizations that are starting up trying to build LLMs, fine-tuned or just off the shelf, that do everything from auto-redacting of law stuff or, um, you know, um, accident reports, police reports, um, some evidence.

(05:56):
So applying just enough of that law background, that industry
expertise and the technology.

S1 (06:01):
I mean, you kind of touched on two of the
topics I was going to ask you about. So, I mean,
when you think about what's going to be impacted, which
sectors are going to be impacted by AI? Law is
definitely up there and the ones that are mentioned and
medicine as well. I mean, medicine has had the in fact,
on the very first podcast we did of this one,
which must be almost seven years ago, we had Daniel

(06:21):
Petre from AirTree talking about really AI and ML, you know, where, uh, you know, for image recognition within, uh, you know, breast cancer tissue scans. Yeah. So that was the other one
that was, was particularly mentioned. So it's interesting that those
are still the two that you brought up in the
first one. And there's more. But yeah, I mean you're
touching on to a bit more.

S2 (06:42):
Yeah. And I think if we look at the verticalization
of AI ML, I mean, there's some industries that are already, well,
kind of covered, right? I mean, actuaries and insurance industry,
they've been doing this for literally 2000 years or so
when they started talking about lifespan calculations and algorithms. They're
they're still innovating. There's a lot of great stuff you
can do with it. But really what we're seeing now
is sort of the the ones that were too hard,

(07:04):
like medical or law maybe, or too controversial, or the
ones that just it wasn't right yet. It wasn't profitable
yet to explore. So you're starting to see whether it's
verticals like HR operations or even IT, where maybe there
wasn't enough confidence that we can make a difference, or
industry verticals like law or medical, where really now with

(07:24):
generative AI and some of the more mature machine learning
processes in place and the data and the confidence, I
should say, in the political arena to start using these models,
because medical and law are quite controversial if you get
it wrong. We're now maturing, and we're culminating to this
point where we can now start to service these underserved
areas better. Um, so I really do think that we're

(07:44):
at this pivoting point. Um, you know, no matter how
you define your verticalization. Yeah.

S1 (07:50):
I mean, it's been two years, roughly. In fact, almost to the day since OpenAI first released ChatGPT and unleashed this new wave of AI. Uh, you know, some
of the criticisms back in the early days were around
the getting it wrong, which you just mentioned, you know, hallucinating. Hallucinating. Yeah.
I mean, where are we at with this hallucination? And

(08:10):
should we be worried about it? Because there's kind of
things that I've seen creeping in around fixing it, but
to you.

S2 (08:17):
Well, I think there's been, there's a number of technical, but there's also sort of organizational as well as cultural, norms that are now being developed. So it's been two years.
People are a lot more confident in, you know. Well,
I guess effectively that there's value in working with large
language models or multimodal models. There's value in it, even if there are hallucinations. So take it with a grain

(08:39):
of salt and it's still a bit early to talk
about it. But the US political race, there's been some
good commentary saying that effectively, people are a lot more
skeptical of fake news or real news that came through.
So the impact of some of these hallucinations or, you know,
fake news type of, uh, you know, content had a,
had a lower impact than potentially it did even, you know,

(09:00):
4 or 8 years ago. So I think people are
a lot more aware of the risk. And that makes
it a little bit easier to work with, but also
the technology. Google spent a lot of time and energy
as well as others. But to build guardrails, auto enforce policies,
there's best practices around prompt engineering. There's better sourcing of
data to help augment the model so that instead of

(09:20):
making up something, it actually has the information at its fingertips. Um,
you know, but the models evolved quite a bit, um,
from two years ago. Gemini. When we came out with
our first Gemini branded model. Now we've been doing this
for decades, to be honest. And we invented the
transformer architecture about eight years ago. Um, but, you know,
if we look at just Gemini and, you know, we're
now we're now well into it, multiple releases, um, a

(09:43):
lot of the information that's going into these models are
vetted better. The training mechanisms are improved. Um, I think
the checks to make sure hallucinations are mitigated are put
in place. So I think there are better results coming out of the models, but I can't belabor
the point that it also makes a difference that people
are more used to working with them and looking for

(10:05):
potential areas that are risky. And the safety guards, I guess the checks that we're putting into place, are a lot more mature than they were before.

S1 (10:13):
Yeah. Just on that transformer architecture, I mean, I think
I did remember that it was Google, that it was
a Google researcher or research lab. I mean, I don't
know how much of this is reasonable to ask you
because you're fairly new in the role, but, I mean,
can you take us back to the early days of.
So it was eight years ago out of a Google lab.
And I mean, how did that cascade all the way

(10:34):
through to OpenAI picking it up and throwing out ChatGPT first?

S2 (10:38):
Yeah. I mean, you know, to go a little bit
in the background. I mean, you know, part of, you know,
a lot of innovation comes from necessity, right? And, you know,
Google just trying to fuel this massive search engine really
had to look for different ways. And we and we
went down a few tracks. One was hardware, so of
course tensor processing units came out of this. And you know,
we're on the sixth iteration of TPUs at this point.

(10:59):
So we've been doing this for a decade or so.
In parallel, we had to say, how do we make
our algorithms more efficient? How do we make them more effective?
And even, you know, we just recently had some Nobel Prizes come out. So, you know, individuals within the Google ecosystem, whether it's Geoffrey Hinton or Demis from
the DeepMind organization, you know, a number of people are

(11:20):
getting these awards and recognition for some of the stuff
they did ten years ago. But the transformer architecture is
effectively an algorithm that really unlocked the ability to say,
we're going to scale. We're going to start taking a
lot of information, you know, you know, much more than
we could have ever conceived. We need to be able
to apply attention. So there was a paper written called, basically,

(11:41):
"Attention Is All You Need", a bit tongue in cheek.
And the idea is that as part of this architecture,
we're going to help the algorithm know where to spend
its time, where to focus its attention, and then build
on that. And a lot of what the LLMs are
doing is leveraging that sort of trick, if you will,
to be able to comb through vast amounts of information effectively,

(12:01):
everything in the public domain, including YouTube and such, and
start building models that can represent that information. And then
you add in some of the other innovations, whether it's
reinforcement learning, deep reinforcement learning, even deep neural networks. Going
back to Geoffrey Hinton and others, there's been a lot
of great, you know, they call them grandfathers of AI,
but a lot of them working with, whether it's our

(12:22):
infrastructure or our data, have built up a lot of
this great innovation. Um, and, and Google has just been
a great place to sort of support that. Much like
an incubator an accelerator might do nowadays. Um, you know, these, these,
these innovators and researchers came for the data and the
compute power and and of course, along the way, we
were able to leverage a lot of that great work.

S1 (12:43):
But it obviously wasn't patented or I mean, because the
transformer architecture is well known and used by all the
AI companies. Now, is that right?

S2 (12:53):
Yeah. No. That's right.

S1 (12:54):
Yeah. So I mean, how did you not think about
patenting it or. I mean, what was the what was
the thinking?

S2 (13:01):
Yeah. Well, on one hand, a lot of this came
out of research. So in some ways there's probably some
legal precedence to say, look, you know, the university or
the research group probably owns part of this. Yeah, a
lot of it, though, is the ethos behind Google. There's
a strong desire to make sure stuff is open source.
And just after we released Gemini, a few months later we started releasing Gemma, which is our open-weights version

(13:21):
of the model. You know, maybe not quite as capable,
and I think you'll see this trend. Just because we
released the transformer doesn't mean that that's all of our
secret sauce. There's still quite a bit that makes these
solutions work. There's a lot of magic that goes into
training these models, but the architecture itself is an evolution
from previous ones. Um, you know, LSTMs and other things
that have been evolving over the last few decades that

(13:43):
really allowed for deep neural networks to really blossom. These
things are being evolved in the US, Europe, you name it.
A number of white papers and academic papers coming out
were massive, but we've been we've been a huge proponent
of transparency and openness. Um, even Keras and some of
our own machine learning libraries, um, open sourced very quickly
to help, um, early days machine learning engineers scale.

(14:08):
TensorFlow was another example of this. Um, and we'll continue
to do that. So a lot of the LLM guardrails, um,
those are becoming open source. So a lot of our,
you know, explainable AI and trustworthy AI libraries are actually
open source. Um, you know, and SynthID, which is
our watermarking for generated text, video, images. Those are open

(14:32):
source now so that you can go and actually check
them whether the content has been created on something like
image or video.

S1 (14:37):
Yeah, it's a good segue. So maybe we'll just go
down that path now. I mean, I read The Coming Wave by Mustafa Suleyman. A great book, recommended to the listeners
out there, you know. But one of the punch lines
that I remember from it was, we kind of need
to regulate, because there's a lot about, you know, if

(14:58):
if AI gets into the wrong hands, then, you know,
before you know it, you've got bioweapons and, uh, you know,
intelligent drones and bioengineered weapons, which I just mentioned. Um,
so this concept of the ethics and responsible AI, I mean,
that's one of the areas that you do touch on.

S2 (15:16):
Yeah, yeah. I actually on one hand, you know, for
the cynics out there, I mean, I do also agree
with Yann LeCun, who says that, you know, my cat
is smarter than any of the AI that's out there. Um,
I mean, there's certainly a truth to that. But that said,
you still can't wait for, you know, the, you know,
the singularity to occur to then say, oh, wait, maybe
we should try to regulate this.

S1 (15:35):
Yeah. And you can't, can't write your essay for you. Yeah, exactly.

S2 (15:38):
Not yet. Yeah, yeah. Um, so I think there's there
is definitely a lot of work you need to put
into it proactively. And actually, you know, DeepMind and others
have had groups focused on responsible AI research for probably at least a decade, and Google came out with
their AI ethics framework well before any of our other competitors,
because it was an important thing. Um, I think we
made a very hard decision back in June 2024, when

(16:01):
we started seeing some of the early results from our
Imagen software and said, you know what? We're going to pull the capability to generate humans until we
figure this out. While others were sort of going on
and kind of exploring and processing this. Um, we wanted
to make sure we took a little bit more of
a conservative and well thought out process. Um, but I
do think if I, if I take a step back
and draw comparisons to GDPR, which was a similar kind

(16:24):
of phase from 2016. Um, EU focused, but it was
around data privacy and protection, and they put a lot
of great rules out there, including the right to be forgotten,
the right to understand what my data is being used for.
I think you can, you know, you can extrapolate that
out to ML, and I think you should be talking
about the right to reasonable inference. You know the right
to know how your data and your models are being used.

(16:46):
The right to challenge and the right to say it's
all fine and dandy that you got this model to
predict my mortgage. I want to talk to a human.
Can I have an arbitrator involved in this? So I
think any company needs to start thinking a little bit
about how do you put these rights of the consumer
or rights of the employee into perspective? Now, don't take
back from the AI, ML, but just start thinking about
how do you mitigate the risk. Yeah.

S1 (17:09):
Yeah. I mean, any any further thoughts on that regulatory aspect?
I mean, how important is it for you or Google
that governments are kind of stepping in and doing regulation
around this?

S2 (17:21):
I personally I do have a bit of a bias,
I suppose, to, you know, governance and guardianship and regulation
of sorts. I think that putting the guardrails in place enables a company, even one like Google, but also companies like CBA or Optus or whatever, to explore more freely when you
know what you can and cannot do. And back in

(17:41):
the old days of customer data, there was this misconception in
some ways that you can't touch customer data. You can't
do anything with it. There's no lifetime value calculation. It's
just too secure, too private. The reality is, there was
really not a lot of regulations that said what you
can and cannot do with it. So if we had
guardrails that said, you can do these things with it,
but you can't do this, that would have allowed innovation,

(18:02):
that would have allowed for a whole ecosystem of customer
insights and analytics ten, 20 years ago. But I think
there was a lot of fear about how long you
could store customer data and what you could do with it.
And most people said too difficult, too risky, do nothing
with it. So actually, I think regulations can in the
right amount can actually spur innovation.

S1 (18:23):
Yeah. Yeah. And I think it's it is. I mean,
I'm not necessarily pro government and regulation, but I do
think this is something that needs to be regulated. And
I just, you know, it's hard. It's only been out
for two years. So governments don't are not that fast moving.

S2 (18:39):
Yeah. Yeah. Well and I've seen so you know in
Australia just focus on here. You know, last year, 2023,
the most of the politicians were more about handling the
fear that was coming out of the population and talking
about how this needs to be heavily regulated. This really
needs to be almost paused until we can figure stuff out. Well,
now in 2024, a lot of the politicians have shifted.

(19:01):
Now they're looking at the opportunity cost of not getting
on board with this, not learning along the way, not
starting to position ourself. Um, you know, the comment around,
if we don't start to play a, you know, a
first person kind of role in these regulations and these
ethical privacies, we're now going to have to be forced
to be compliant with EU or US regulations, and we

(19:23):
want to make sure that we have our agenda, um,
represented in that. So I think a lot of the
political kind of feelings have shifted this year, which is great. Um,
they're talking about bringing more machine learning engineers into the country,
which I know is another controversial topic. But, um, bring
in the right people into Australia can help spur innovation,
and I think we need to include that as part

(19:44):
of any sort of political agenda.

S1 (19:46):
Yeah. I mean, jumping back, I think we talked about
this before, you know, in our precall before this, but
the waves of innovation that have happened over the last 30,
40 years, I mean, I reckon I've got ten years
in you. So I actually do remember when the internet, uh,
came out AOL. Yeah, yeah. Dude, I was I was
a CompuServe, uh, customer back in Scotland, which is, you know,

(20:09):
but look, and look, you can't deny it, the internet has been probably the most transformative, uh, human invention in the last 50 years. Uh, but beyond that, we
had the mobile phone revolution. So, you know, again, it
took a while for it to become where we are
at now with the pixels and iPhones that are out there. Uh,
the next one would have been cloud computing, you know,
and again, I was part of the, the early wave

(20:31):
in that. And, you know, you at Google Cloud is
definitely a core part. But again, it took a little
while to get mass adoption and it's still happening. This fourth wave, AI, has really been unbelievably fast and I
think there's more to come. So the question around that
is just this the adoption curve has been incredible. You know,
I don't know the numbers offhand, but whether it's Gemini,

(20:54):
Claude, ChatGPT, I mean, yeah, it is in the public zeitgeist. Um,
but it's really been driven by individuals. So the question
is really around, why is the individual adoption so high?
But it's taken the corporates a little while. I mean,
is that a good question or anything to comment there?

S2 (21:10):
I think I think there's, there's a and you're right.
Like I even take it further back. Right. And like
I think maybe we talked about the automobile. Right. And
that was you know, they, they invented quote unquote invented
automobiles back in the end of the 19th century. It
took at least 50 years to really get the first
commercial vehicle out there in any mainstream. And then it
took another decade or two to invent the seatbelts. And like,
this was a slow rate of innovation. And then I

(21:32):
love the, the website one, and I think I probably mentioned it. One of the observations I have is that, going through that whole dot-com craze, you know what? It took a lot of upfront money and skills and labor
to build your first website that was actually functioning. We
were developing it on the way it took an enterprise
or company size investment to make that happen. So it

(21:54):
took a good ten years to really get reasonable websites
that could actually be more than just a bulletin board. Um,
fast forward mobile phone and mobile apps a lot quicker
because we started building the infrastructure that would make it
a lot easier to get on board and grow into
the ecosystem. I think we started understanding the ecosystem play
a lot better than with the websites, where there was none. AOL tried this, CompuServe tried this, but no one

(22:17):
was really taking the approach of come build your website
on us and make it super simple. So everybody had
to reinvent the wheel. They couldn't stand on predecessors backs.
Fast forward to LLMs. Yeah, there was very little barrier
of entry. You didn't even have to put a credit
card down, frankly. And because of its natural language base,
you didn't have to have a computer science degree,

(22:39):
or have a grandson who graduated from college with a computer science degree to build your website. You would just
open up your your laptop or your iPad and start
playing with Gemini or ChatGPT. So all of a sudden
we're seeing this upswell where it's the masses trying to
drive the adoption of this, and they show up to work.
And guess what? If I can use ChatGPT or Gemini

(23:01):
at home, why would I want to go back to
my old school sales or CRM process? So now all
of a sudden, businesses are trying to catch up to
the employee, or they're trying to catch up with the
customer expectations. But fortunately, the barrier of entry, both from
a cost, but also time, is so small that generative AI,
even though it's been two years, we're already seeing great
use cases. You know, year and a half, two years ago,

(23:23):
even with our PaLM models, which were the predecessor to Gemini. Yeah.
So I really do think that, yeah, it is quite interesting.
So websites took quite a while to really get into
the popular culture, but the barrier is really going down. And
you can only imagine that as we keep going, more digital,
a lot more reuse and ecosystem play, with a lot
of the companies popping up and the technology becomes even

(23:45):
more human like rather than coding and syntactics, it makes
it easy for anybody to really start using these things.

S1 (23:52):
And I guess this is, I'm guessing here. Rob. I mean,
this is part of your role. Like you go in
front of big-ish customers, whether public service or banking.
I mean, yeah, you can maybe talk about some of
the broad parameters, but you're talking to an audience that
actually understands what you're selling. If I can put quote
marks around selling, talking to them about AI and AI adoption,

(24:16):
they probably know how it works because they've used it
at home or they've seen their kids use it, but
they haven't thought about that from a corporate perspective. Is
that kind of what you're doing.

S2 (24:25):
You know, about, let's say, ten years ago to be
somewhat dramatic? My, my, my conversations would have been with
the people in the know, you know, machine learning engineers,
data scientists, you know, XGBoost this, random forest that, maybe playing with neural networks. Now, though, it's actually the lines of business, and, you know, to all our discussion previously, they've started playing with generative AI. They've even built

(24:46):
some images there maybe in the marketing department. And they're like,
if I could generate images for all my social media posts,
that would really save me a lot of time. So
I'm actually spending a lot of time working with them,
and I'm talking about the experience or the journey, whether
it's the employee or the customer journey and the makes
or breaks along that journey and how, you know, in
my role, yes, I'm trying to sell Google, but how

(25:08):
can Google help fill those gaps? Um, or, you know, okay,
you've shown me some of the great stuff like Project Astra,
you know, this, this, this streaming agent. How does that
apply to me? I'm sitting in a food processing organization
in Queensland, so I have to not get into the
weeds with machine learning this and MLOps that and sort
of talk about here's what you could do. We can

(25:30):
improve the way a customer interacts with your organization. We
can create the same experience that you have when you
walk into the brick and mortar store or branch with
your digital channel. How do I do that? Okay, well,
let's break this down. What are the differences? It's not
just digital versus analog. There's there's experience. There's a personalization.
There's a relationship that you build when you can look

(25:51):
a salesperson in the eye. How can we replicate that
with generative AI and traditional machine learning and other stuff?

S1 (26:07):
Can you talk about specific case studies, either named customers
or types of customers, with the projects that you've been
working on?

S2 (26:15):
Well, so, um, maybe in two buckets. There's probably
the "too hard until now" to tackle; I'll cover that one.
But first, probably the obvious one and a bit of
a public reference for us: The Iconic. The Iconic
is a great retail, you know, online presence. They had
a lot of great, you know, subscribers and users, and
I guess call them customers. Um,

(26:37):
but they wanted to build a more of a natural conversation.
You know, if you go into a store, you don't
go in saying, I want a red t-shirt. You
go in with the knowledge that you have a barbecue
this weekend, or you have a fancy corporate dinner tonight,
and you say, I need a new outfit. So you
go in there, and now with The Iconic, if you
go to the search bar at the top, you can
type in, you know, I have a barbecue this weekend.

(26:59):
It's themed like a polo or a derby theme. You know,
what do you have? It understands your intent and your
objective. And then it does the typical
ranking and filtering of items and it gives you some recommendations.
But we're bringing back that in-person experience to their digital
kind of channel. Um, so that's a great example of
a very simple thing to do, but it's going to

(27:20):
get them multi-millions in extra revenue. In the "too hard" bucket, though,
we have one startup that's in that medtech area. Um, doctors, physicians, masseuses,
you know, physios. Everybody's writing up notes at the end of,
you know, visits. Um, whether that note then gets just
transcribed and put into a locker, or is it being

(27:40):
used as a referral and automating that process? There's a
number of companies here in Australia that have been starting
to use our generative AI products to start building those transcriptions,
but also extracting stuff like pharmaceutical requirements, treatment recommendations, you know,
double checking the effectiveness and the quality of service.
So you can start doing a lot more with that

(28:01):
transcription than just simply transcribing, filing and forgetting. We're actually
now using that to try to spur on the next
conversations with the physician or with the customer. Yeah.

S1 (28:10):
Yeah, it is interesting. Like you move from CTOs where
that hardcore original AI/ML was to the lines of business.
I mean, that trend's going to continue for sure.

S2 (28:22):
I think so, because, to use the word democratize,
it really has democratized it to the common people. Um,
you know, and we can't take all the credit for
this. I mean, you know, the smartphones certainly
helped quite a bit. Websites before that. Getting people used
to having information in a digital format has been kind
of a build up until now. Now we're making those

(28:43):
devices intelligent almost at a human level. So now you
almost have a perfect agent or assistant or oracle that
can help everybody, whether you're a, you know, doctor, a politician,
an academic or a farmer, um, you know, working with
a few farmers just to pick on them a little bit.
But whether they have good cell phone reception

(29:03):
or mobile phone reception or not, they can use their
phone to scan potential insects on their leaves and then
run an algorithm. It used to be you'd have to
take that picture, go back home and look up on
Google what it is. Well, now, using our algorithms, you
can recognize what it is. Yep, that's an aphid.
You know, here's how you can start treating it. And
here's the potential cost to order this. Oh, and

(29:25):
you can go order it over at this supplier. So
really making it easy for whoever to start using
these products. So yeah, it's definitely not going anywhere.

S1 (29:34):
Yeah. Let's move from the big end of town, where
I think it sounds like you spend most of your time,
but I know your team has, uh, activities within the
smaller end or the start up end of town, so, uh, yeah.
What sort of things are you doing? Or what would
you suggest that our startup audience listening should be
thinking about when it comes to AI and ML?

S2 (29:55):
Yeah. Well, I think from a mechanics standpoint, some things haven't changed.
You know, what is your unique differentiator? You know, yes,
there's the algorithms, which I think will still play a
role despite how smart Gemini is. We're still going to
have deterministic or traditional machine learning that's going to play
a key role. So focus on the algorithm as your
secret sauce. But you can no longer build a company
and get a good valuation just on an algorithm. Or

(30:17):
by putting machine learning in your title. We now need
to look at what data do you have and call
that your moat, because I think everyone in the enterprises
who you'd be selling to are looking at, I have
all this great data, I'm not doing anything with it,
but that in its own right is a huge asset
for them. So as a startup, you want to understand
how can you make use of the data, whether it's
your own proprietary data or a potential customer's. What algorithm

(30:42):
are you going to put on top of that? And
then that's the value proposition. Yes, we're seeing the
SAPs and the Salesforces and others start to try
to build out their solutions. But we're also seeing a
massive ecosystem of startups that are filling the gaps. And frankly,
that's always going to be the case. We're at a
great phase right now. A lot of great ecosystem is

(31:02):
growing out of this, and Google wants to provide the
platform to build those solutions. We're not looking to build
out a lot of these industry solutions. Um, but while
I'm at it, Andreessen Horowitz actually came out with
a good paper on this recently. There's probably two
courses that you can go down. You
can go down somewhere where it's been completely underserved. You know,

(31:23):
this is the back office of the future where no
one's spent a lot of time. There just
wasn't enough incremental revenue or cost savings, so nobody was
buying it and therefore there were no startups. But now
we're seeing startups pop up where they're building out the
true robotic process automation. They're building out the true mortgage
filing and pre-approval process and doing it in a minute

(31:44):
or whatnot. Right. There are startups that are just focusing
on a very niche, very specific, industry-aligned use case.
And I think that will continue to be a strong
area for the ecosystem. The other area, and it's
a little bit riskier, is your horizontal plays: I'm
going to build a better document processing solution or I'm
going to build a better call center solution. I think

(32:05):
you got to be careful with that. But if somehow
you can, I guess to use the phrase, if you
can wedge yourself into somewhere like a Salesforce or an
SAP and find a differentiated but very sticky area for you,
that might be a good play. But I would really
focus on that underserved, underrepresented area where I think there's
a huge growth potential.

S1 (32:25):
I'm going to ask a crystal ball question.
And this may be, you know, Rob Sibo's perspective, not
necessarily Google's. But yeah, from our investor VC perspective. I mean,
I guess all of us in the industry are
trying to work out what the implications of this increased
adoption of AI are. You know, one of the things
VCs do is we provide funding to SaaS companies. And

(32:50):
what do SaaS companies use the funding that we give
them for? Engineers, to build the product out. In a
world of, you know, super duper AI a few years
from now, you can almost show them a product like, uh,
I don't know, I'm going to pick one at random,
Slack, and say, hey, can you build me Slack? And
the AI will go off and do what a bunch

(33:12):
of engineers could have done and build that. So the
implication there, I mean, I'm picking on Slack, but it
could be any of the SaaS companies we invest in.
So the prospect of not needing engineers to build out
a credible, maybe not as good as the
original, SaaS product is real. So what is the implication
of that? And I'll give my quick answer maybe. And

(33:33):
then I'd love to hear your perspective. But it kind
of throws you down the stuff that VCs don't
always do, which is the deeper tech. So hardware systems,
infrastructure, rather than just, well, maybe not infrastructure, but rather
than just the easy SaaS play. Any thoughts on this?

S2 (33:51):
Yeah, I think, to borrow from The Innovator's Dilemma,
there's always been a good interest in incremental, you know,
sort of quick, just minor improvement type of solutions. And
this is building a faster car, or building a Slack
that's maybe a little bit more friendly for business users. Um,
I think that will always continue to be
kind of an arbitrage game. You know, timing and everything else. Um,

(34:14):
but I do think that there is now opening or
there's opportunities for that deeper innovation using generative AI and
all that. I don't think that it's going to be
from a company who has a smarter LLM. There's
a lot of big players out there, super well funded,
as you know, um, that, that you're going to be
competing with. I wouldn't bother with that. I think there's
a lot of great Lego blocks out there with all
the models, open source and first party kind of Google ones. Um,

(34:38):
what I'd be focusing on is this area has been underserved.
This area has been too risky until now. There's
not an incumbent, ideally, that's focused on this area, and
that's because it was not profitable or too difficult. So
finding a good business model that fits within that criteria,
I don't think you have to have the smartest model

(34:58):
or the smartest engineers, even just a really sticky use case. Um,
so I think Australia is really well primed for that.
I think there's quite a few, um, startups kicking off.
I mean, I think I saw the statistic: about 2,000
startups each year kind of getting kicked off. Um, a
number of them come to Google because it's a good
platform to be on. We try to keep it simple. Um,
we also try to have a number of accelerators that

(35:19):
sort of help accelerate that kind of quick ramp. But
really what you want to do is test it in
the market. So it doesn't remove the need
to go out and test your ideas in the market.

S1 (35:30):
I'd love to hear a bit more about that. So
the accelerators. Are these Google-run and managed accelerators,
or are you supporting third-party ones?

S2 (35:37):
Supporting third party. Yeah I think, you know we're certainly
happy to throw in credits to, you know, learn and
train and sandbox if you will. But we're not putting
in money as like an investment fund or anything like that.
But what we do a number of times a year,
I think probably about quarterly, we'll run an accelerator, multi-week.
Part of it is, you know, giving them exposure to

(35:58):
VCs and investors. So there certainly is some opportunities that
pop up along the way. But primarily it's about enabling.
It's about scaling. Maybe you've got your first funding or
maybe you're about ready to get your first funding. We
kind of try to hit them at different degrees of maturity,
but you're at a point where you need help, whether
it's enablement or scale coaching. Um, so we can certainly
help out over a few weeks, um, and then probably

(36:20):
wrap up with a showcase at the end. And if
we can help connect them to people like yourself or
other investors, it's a great opportunity for both sides to
get kind of a peek into some of the emerging
AI solutions.

S1 (36:31):
And you've done this, I think, a couple of times
so far. Yeah.

S2 (36:33):
Yeah. I mean, since I joined, we've done it twice
at least. And I think this week or maybe next
week we're wrapping up a second cohort. And these are
based on AI/ML. We do a number of other events
around kind of DevOps and other types of ecosystems around
some of our partners in the software industry. We do
a number of other things, but just in AI/ML,
we're really ramping up a lot of these support things

(36:54):
for for startups.

S1 (36:56):
Yeah, I mean, I often ask where to go. I mean,
the Google corporate site is probably quite big, but
what should people look for
if they want to find out more?

S2 (37:08):
Well, I mean, we do try to make it quite simple.
If you go to the website, um, right off
the bat you'll see a banner that says apply for
some free credits and, you know, you get in there.

S1 (37:17):
What is the website? So google.com is obviously not it,
but is it Google Cloud or...?

S2 (37:21):
Yeah, cloud.google.com. And, you know, it's probably changing quickly.
If you go to gemini.google.com, you can also probably
see a banner at this point. But, um, find a
way to get in there to look at some of
the Google stuff, because very quickly
you'll see some of the great training. And there is
actually a lot of great, you know, self-training or,

(37:42):
you know, computer based type training. Um, that's actually quite good. Um,
so you don't have to go out to A Cloud
Guru or Coursera or any of these others. You can
get some pretty good training on Google Cloud Platform and
the certification process. But we'll give you credits; it
depends on the initiative, but I think it's US$300, um,
to use for, uh, I think it's like 90 days
and that gets you going. Um, and once you get going,

(38:04):
depending on your size and need, you can reach out
to somebody through the process for sure.

S1 (38:08):
Yeah. All right. I'm down to my last couple of
questions here, Rob. So let's maybe look forward
to next year, 2025. Um, for you, or your team,
or Google Cloud, what's the focus for next
year that you want to touch on?

S2 (38:21):
Yeah, I think right now we're still just scratching the surface,
both from a technology perspective, where it's
been two years of sort of cobbling together the sticky
tape and ironing out some of the rougher patches and
getting the regulations right, um, but also with customers,
where we're still just doing the first quick wins. There's a
lot of stuff in the pipeline where customers are going

(38:42):
to do some pretty fancy stuff with conversational agents next year. Um,
you know, having a multimodal conversation, um, where you can
ask them to do something, such as cancel your credit
card. Yep, this is fraud, please cancel my
credit card, cancel these transactions, and start doing this without
any human intervention. But all of this stuff is still
in the POC or pilot phase, you know? So 2025.

(39:05):
I'm really looking forward to more great examples of stuff
hitting production. Um, we even have some text-to-voice
stuff coming out where, um, customers are adopting this
to try to automate some of the B2B interactions,
where I'm going to call up somebody and see how
they're doing if they have this, um, you know, do
you have this t-shirt? Do you have availability in

(39:26):
your hotel? And having that completely automated as an agent
to agent type of conversation. And these agents don't have
to be Google. It could be the Google agent working
for company A interacting with this agent. And it's not
through an API. It might be literally through a voice call.
And that's when I think we'll be getting some pretty
silly use cases where two fake humans are talking to
each other in human language. But, you know, hopefully it

(39:48):
can be, you know, more efficient and save some time,
but they can start doing that in the evening. They
can do it on the weekend. So a lot of
these great interesting use cases start popping up. But also
I think a lot of the great agentic type of conversations.
What is morally responsible, ethically responsible for these agents to do? Um,
how far can they go in automating certain things? Um,

(40:09):
I think that's where we're going to start really pushing
the limits next year, where we start to say, okay,
as a company, we need
to start defining what norms and acceptable parameters are. I
can't rely on Google or somebody else defining those for me.
So I think next year will be the year of
scaling and learning. So buckle up.

S1 (40:27):
Yeah. Any asks of our audience? Like, are
you hiring? Have you got an event that you want
to talk about, or where to find you? I mean,
just yeah.

S2 (40:37):
Yeah. Look, we are very proud of our
touch on the startup community. Um, and we would love
to have more participation. Um, we host a number of events,
whether it's hackathons. I think we have an AI for
Good hackathon that's starting next week. Um, that's not
a startup thing; that's just engineers out there. So if
you have engineers or BAs or anybody who wants to

(40:59):
get involved in things, we would love for you to
participate in our existing events. Um, next week, actually, we
have a data and analytics conference where a number of
people are going to show up. Um, Google does this
quite a bit. Um, I'd love to have more attendance
and participation. I would love to have a good, strong
cohort of startups who are looking to do
AI/ML work join us next year. I'm sure we'll do

(41:20):
something in Q1 and Q2, and I can connect you
to Michael and others who have some of that information. Um,
but getting more is always going to be better.
We can scale out our resources. We can scale out
our capacity. Um, so I'd love to just see that
really grow.

S1 (41:36):
Yeah. All right. Well, look. Thank you. That's the end
of the formal part. That was fascinating. I learned a
lot there, Rob. So thank you very much. Um, should
we jump to the quickfire round and get some, uh,
some of your insights? Um, can you give us a
book recommendation?

S2 (41:54):
Oh, I'm a sucker for Dune, Frank Herbert. Yeah, yeah,
I can say I've actually read, you know,
all of them twice. Um, yeah.

S1 (42:01):
Apparently they're pretty dense. Yeah. What you've seen in
the first two movies is about half of the first
book or something.

S2 (42:07):
Oh, yeah. Very dense. And no pictures, right?

S1 (42:10):
Yeah.

S2 (42:10):
Um, but, I mean, it's like anything: you either love
sci-fi or you don't. And I enjoy the way
that sci-fi allows you to explore philosophical or potentially
futuristic type of situations without being bogged down by the
reality of now. Certainly the reality of history, which is fixed.
So I love the Frank Herbert kind of series.

S1 (42:30):
And it was written a long time ago, wasn't it?

S2 (42:32):
Yeah, yeah, yeah, it was the 60s or so. Um,
which is interesting because his background is more of a,
you know, journalist. And he was sort of talking about
certain things, but it just had a brilliant mixture
of real-world, um, cultural norms, but also what
could really happen. So quite a
good story for us. Yeah.

S1 (42:51):
And I've been asking other guests this, but how do
you consume your books? On the sofa? Hard copy?

S2 (42:57):
Yeah, hard copy, and I do spend money to
buy the nice, uh, hardback books. You know, call me
old fashioned. I've done the iPad ones, you know, but
I just can't be loyal to it. I get
too distracted.

S1 (43:08):
So you end up with a big library?

S2 (43:10):
Yeah, yeah, yeah. So I gotta
buy another house to get a big, uh, library room.

S1 (43:14):
Yeah. You just gotta put it behind you and then
have it as your background for the podcast and the webcasts.

S2 (43:19):
Yeah, exactly.

S1 (43:20):
Intelligent? Yeah. Um, all right, so next question. Uh, podcast.
Any recommendations there?

S2 (43:25):
Yeah. I mean, recently, The Well. I do love The
Economist, and Babbage; these are all good ones and
well worth the money, even though they're way overpriced here
in Australia. Um, but Babbage is great and stuff.
But I also really love the AI Daily Brief, which
is one that's that's really come to my attention this year. Um,
he does a really good job, um, talking both
about politics and economics of AI, but also some of

(43:48):
the innovation. He's not a technical guy; he's
not going into why the transformer model is better than,
you know, this or that. Um, but yeah, that's a
really good one on podcasts. And you can get that
on Spotify and stuff like that also.

S1 (43:59):
Yeah. What about news source?

S2 (44:03):
Um, well, Google News. Um, there you go. Um, I
have to admit, like popular news or current news, probably
not as heavy into it. And that's where I rely on,
you know, stuff like The Economist and some of these,
you know. Not that I'm a cynic, but, you know,
with so much news and the capability to publish your
own ideas, I do value the role that editors

(44:26):
have to curate and, you know, approve a lot of
the content that gets pushed out there.

S1 (44:30):
Yeah. Do you have a favorite CEO?

S2 (44:34):
Oh, well, of course I'd have to say Sundar. Yeah.
You know, surprisingly, I think he actually
does do quite a good job at what he does. Um,
other than that, um, you know, I have to give
Sam Altman credit. I mean, just, you know, going from
kind of Y Combinator and then, you know, founding
this thing and then really scaling it. I think he's
done an interesting job, and it's not an easy one

(44:55):
by any stretch. I wouldn't want to be him. Um,
but he's building a gorilla out there.

S1 (45:01):
Sam Altman's the CEO of OpenAI,
for those that don't know. But it's remarkable how he's
migrated it from this weird not-for-profit thing into,
you know, full-on corporate bloodthirsty. Honestly.

S2 (45:14):
I mean, I followed OpenAI. I spent about four years
in Silicon Valley working with all these companies,
and I actually liked them when OpenAI did a lot
more of that open source stuff. They came out
with Gym, which was a great library for testing reinforcement
learning models, and it was open source. It was like,
we need to build this for the benefit of humanity.
I think they've kind of diverged a little bit, which
is unfortunate, but they're going in a good direction.

S1 (45:36):
So, um, what about an app? What's your favorite app
on the phone?

S2 (45:41):
Oh, well, my favorite app at the moment is actually, um, well,
it's the Gemini app. But skipping past that, the other
one that I really love is the, um. Oh, now
you've got me in kind of a brain-freeze moment. Um.

S1 (45:57):
Is it like music or is it entertainment? Yeah.

S2 (46:00):
Um, it really is. It's Apple Music. Um, you know,
not to not to throw Google under the bus here,
but yeah, Apple Music's pretty good. So I'm using that
quite a bit just to kind of relax. Yeah.

S1 (46:10):
Cool. Uh productivity tool. How do you stay efficient?

S2 (46:14):
Yeah, OmniFocus is one I use quite a bit for
kind of the to-do list. I, you know, I
think they're pretty good. Um, they work across multiple platforms,
which is awesome. But I have to admit, keeping a
to-do list is not as natural. Um, so kind of
keeping up with that, um, is also part of the process. Yeah.

S1 (46:32):
Uh, TV or movie? Like, what's your favorite, uh, watch
on the watch list?

S2 (46:37):
Oh my gosh. Well, at the moment, um, I love
the historical dramas, um, whether they're accurate or not.
So there's a lot of great ones; the Medicis
was one, probably about a year ago. Right now I'm
watching this, uh, Ancient Apocalypse, which is, yeah,
a little bit of a doomsday-scenario type
of conspiracy guy. So you got to look past some
of that.

S1 (46:57):
So are these fiction or nonfiction?

S2 (46:58):
These are, uh, fictional drama type of things. You know,
no one can ever say I'm wrong because no one
was there to actually record it. So I'm going to
just throw this out there and paint it like
a Braveheart.

S1 (47:09):
Would you put Braveheart in that fictional drama?

S2 (47:11):
Probably. Probably.

S1 (47:13):
Yeah. Well, just for the listeners, I
don't think William Wallace had an affair with the Princess
of Wales. Yeah, exactly.

S2 (47:19):
But, you know, it's like my old boss used to say:
never let the truth stand in the
way of a good story.

S1 (47:24):
You know, I mean, last question: if you were asked
to do a TED talk, what would you talk about?

S2 (47:29):
You know, I think probably responsible AI. I think
there's a lot of great technology. I'd love talking about
the tech and the capabilities, but I'm an optimist. And again,
I believe that you can be an optimist and still
talk about ethical and responsible AI, because I do think
that the guardrails allow you to really speed up and innovate.
So I'd probably spend my time preparing for that kind
of talk.

S1 (47:48):
Yeah. Fantastic. Rob, if someone wants to track you down online,
are you on LinkedIn?

S2 (47:53):
Yeah, LinkedIn. LinkedIn's the best. Probably easier to get hold
of me on LinkedIn than email.

S1 (47:57):
So Rob Sibo. Sibo?

S2 (47:59):
Yeah.

S1 (47:59):
Sibo or Sibo?

S2 (48:02):
Yeah, on LinkedIn.

S1 (48:03):
Rob, appreciate you being on. And thanks everyone for listening.

S2 (48:06):
Yeah. Awesome. Thanks for having me.