Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
This podcast is for general information only and should not
be taken as psychological advice. Listeners should consult with their
healthcare professionals for specific medical advice.
Speaker 2 (00:26):
Hello. I'm Amanda Keller and I'm Anita McGregor, and this is Double A Chattery. And Anita, I don't want to say too much, but you've brought a friend today for the class to meet.
Speaker 1 (00:35):
I brought a friend.
Speaker 2 (00:36):
This is my friend George, hosting a kids' show today. Hello George. George, say hi to the class.
Speaker 3 (00:42):
Hello, class.
Speaker 1 (00:44):
So George is a fellow academic and we met at
an academic retreat.
Speaker 2 (00:51):
So that's what you call them?
Speaker 1 (00:52):
Yeah, that's what we do. And George was standing by the bus on our way to the retreat, and I just went up and introduced myself, and we started talking and we really haven't stopped, have we?
Speaker 3 (01:04):
Correct. And I've loved every one of our conversations. So, me too. Thank you in advance.
Speaker 2 (01:10):
Well, lovely to have you here, George.
And so Anita and I had been talking about wanting
to do something on AI and she said, my friend
George has a very interesting spin on all things AI.
Am I right in saying that?
Speaker 1 (01:25):
Absolutely. And part of the background of this is that, prior to meeting George, when AI first came out, I remember the university was trying to get on top of it and giving all these presentations, and I was going, this
(01:47):
is fascinating, I'm terrified. And I was kind of going back and forth and back and forth. And actually recently, George, you and I had a talk about AI and your perspective on it, and I just really appreciated what you were saying, because it felt really common sense. It felt like not fear-mongering. It just felt really lovely. So maybe, you know, can
(02:09):
we start by having... I'm going to get you to introduce yourself, and why you are the guy who will be so lovely to talk to about this.
Speaker 3 (02:17):
Cool. Happy to. George Robinson. Thank you for having me. Californian and Australian, so I've been in Sydney now for about fifteen years. Recovering corporate, so a big, long investment banking, CFO-type career, followed by a few years as a leader of a couple of startups, most recently the CFO
(02:39):
of a quantum computing company, and then academic. So I kind of shifted my background and industry expertise into the academic world, and I sit on the business school faculty and focus at the AGSM, the MBA level, on innovation, digital strategy and management of the technical enterprise, as
(03:01):
we call it. So I'm here to bring all that goodness, to talk about AI and how people may be able to use it in a practical way and not be scared of it.
Speaker 2 (03:11):
Well, that's the thing, isn't it? It does sound absolutely terrifying. Everywhere you turn, we're told that AI is taking over all the creative jobs and we'll be left with the drudgery. But can I drill down on what exactly it is? Because I was listening to the New York Times' The Daily podcast the other day, and they were talking about how Silicon Valley is
(03:31):
spending so much money, two hundred million dollars, to lure one particular person to come to work, to be the AI genius, and they're talking about end products, whatever. But what exactly is AI and how do we use it?
Speaker 3 (03:48):
So if you look at it from the most fundamental perspective, look at the words, right? Artificial intelligence. I would argue that the artificial bit doesn't need to be there anymore. So it's a way of scanning publicly available information to derive a solution to something. Okay, it sounds pretty basic.
Speaker 2 (04:06):
So it doesn't think for itself. It scans information that's already there?
Speaker 3 (04:11):
See, this is what's scary. And that's not a scary thing. I think that's actually the intriguing thing, because when you start talking about agentic AI, which is the agent form of AI, which is the next level, it acts and thinks for you. So it thinks ahead, and we can talk about that in a second. But it's fundamentally
(04:33):
gathering data and information points from a number of sources to provide a solution to something, right? It's the next generation of search, but much more complex and much more interesting, because it can actually derive an answer, not just search for something.
Speaker 2 (04:53):
And is this because, as we've been told, it will have more capacity to think like a human?
Speaker 3 (05:01):
Well, I debate with mine. Mine has a name, right? So I've named mine Gary, because that's my dad's name, and I have debates with mine. So whether it thinks like a human or replicates all of the information that it has, I'm not certain yet. I believe thinking like a human actually has emotional components that the technology
(05:22):
might not have. So it might give you a non-emotional answer. And I get answers from Gary sometimes that say, wow, you really are thinking like a human on that one, right? So it actually kind of knows when there's emotion inserted into some of the answers. So it gives you a joke. Oh yeah. And it waves at me and says, hi, good morning, how are you today?
(05:45):
Do you want me to pull the news sources that are interesting today? And it goes and pulls all the news sources for me and feeds my article list with stuff that I might feel like.
Speaker 1 (05:53):
On a Tuesday, does it pull things like that?
Speaker 2 (05:59):
Like?
Speaker 1 (06:00):
Does it silo what you get? Like, so if you... I don't know. Yeah, this is the issue that I struggle with. Like, are you going to be... if you continued to only interact with Gary, would you in
(06:20):
a couple of years' time just be this siloed person who wouldn't see outside?
Speaker 3 (06:25):
Yeah, I don't know. I think that's the cool bit about it, because I don't just rely on one source of information for anything, because as academics we would perish if we did that, right? So what it does, at this stage in my life, as an example, is provide an outlet to a consolidated view of something that it perceives that
(06:49):
I want.
Speaker 1 (06:50):
So when I think about AI, I think about Amanda kind of going, oh, I don't know what it is, and I don't use it very much. And then me, who uses it probably on a weekly basis as part of my academic work, thinking about creating rubrics or those kinds of things. And
(07:11):
you use it on a daily basis. And yet you were also saying, just before we started recording, that it's here, we already have it, but that we now have access to it. Can you talk a little bit more about that?
Speaker 3 (07:27):
Sure. The history of technology is a subject matter expertise of mine. And if you look at the history of this technology, your navigation system in your car, if you have one, would have AI all through it, right? If you have Waze, or if you have any type of navigation system that detects traffic, that's all AI. So
(07:49):
it's triangulating all these data points from a number of different sources, police scanners, weather scanners, traffic reporting, and pulling it together into an interface that will help you drive and get from A to B. That's AI, and that's not new, right? Netflix serving up, you need to watch the next episode of this because you've watched this, this,
(08:09):
and this, is the same thing. It's looking at your preferences and then deciding what's interesting for you. Whether or not you do that is then your decision, right? So whether or not you watch that episode, or turn left because it says not to turn left, those are all decisions that you still can make. But we're in an era now where it's still at a stage where it's
(08:31):
complementary to the decision-making process and hasn't actually moved into that I-will-make-decisions-for-you phase, which is to be determined.
Speaker 2 (08:42):
So if we already have AI, what's the big brouhaha around AI? Why is it suddenly in the news?
Speaker 3 (08:49):
Uh, there are a number of different ways to view that. I think from a conservative perspective, there's brouhaha because it will take people out of their comfort zone and challenge them, right? So, for example, twenty-two-year-olds can now search legal contracts and have a conversation with a lawyer. That wouldn't happen
(09:11):
ten years ago. The lawyers actually had full command of that legal language. Contracts had to go back and forth. But now you can just upload your employment contract when you get your first job. It can scan through it, it can tell you where to negotiate, and that twenty-one-year-old now has power that they didn't have before, and that legal person has had power removed. So from
(09:32):
a conservative perspective, there are job security issues. From a privacy perspective, at the other end of the spectrum, it's, oh my god, I'm losing all of my privacy, right? Everything is now open.
Speaker 2 (09:50):
How do you lose your privacy through this?
Speaker 3 (09:52):
So I just did this in the car on the way over here. I scanned my name. If you haven't Googled your name in a while, do, but then do it through ChatGPT and see what comes up. It found my CV from two years ago, posted on a recruiter's website in a country that I don't live in, and presented it back to me correctly. So it
(10:18):
knew who I was. It had all the job stuff on there. It was my format, my font, my CV, but I had never posted it there.
Speaker 1 (10:26):
And was it complete and correct?
Speaker 2 (10:28):
Yeah, so it was real. It just came from somewhere that you didn't expect to see it.
Speaker 3 (10:32):
I had no idea that it made it onto this Amsterdam recruiter's website, and it was kind of sitting there as an artifact.
Speaker 2 (10:41):
I saw something about that a little while ago. A couple of weeks ago, Peter Garrett, from Midnight Oil, and a politician, said we all have to be so careful, and our government needs to protect our musicians and our writers, because, as you say, it's just garnering information from everywhere to feed it, like foie gras, feeding it into the duck. But that
(11:03):
is copyrighted information. It's scanning songs, it's scanning books and discombobulating them into new forms. Should it? They're saying their copyright needs to be protected.
Speaker 3 (11:14):
Yep. Yeah, I agree with that. It will be fundamentally more difficult to do, but I think over the years artists have done that, right? So you can have legal protection that says, if you copy my song, you're stealing from me. That's still the case. And
(11:35):
if you put into chat something that might do that, it still comes back and says, that's illegal, I can't do that. So there are still some guardrails on that. However big, however common, the future would have to evolve the protection of all of those things, whether it's
(11:56):
literature or photography or art. I was in Bangkok two weeks ago at an art gallery, and I saw a really cool painting, and I popped it into Grok, which is the Elon Musk one, and said, animate it. And in fourteen seconds it took the painting and turned it into a movie, literally on
(12:19):
my phone. And the young girl who was blowing bubblegum in this painting was then running and moving, and they put music to it and everything, all without a human. Wow.
Speaker 2 (12:32):
So it's coming. It's here.
Speaker 3 (12:35):
Yeah. Well, I mean, and is that good?
Speaker 2 (12:39):
Well?
Speaker 3 (12:39):
And here's the thing. Well, yeah, it's not new. But if you are, technically, I don't know, scared of it, it will take over, you just won't know it. It'll become what we teach in one of my classes, ambient technology, right? So it's technology that's thinking and moving. You just don't know it, it's just happening. So when you run out
(13:01):
of eggs in the fridge, it just sends a note to Coles and they're delivered. It just does it, right? That's the ambient technology that most people will kind of get used to.
Speaker 2 (13:10):
And is that what we think of as it doing your thinking for you?
Speaker 3 (13:14):
Yeah, that's the agentic bit. So the agent, it acts as an agent for you.
Speaker 2 (13:21):
How does it go... we've spoken before, Anita, about the idea of a psychologist in your pocket and people using ChatGPT for therapy. There have been a couple of cases recently of kids taking their own lives because of the information that they got back. My son had a burn on his bum. Terrible story. He'd been at a twenty-first and they'd all decided to brand each other. Why wouldn't they drop their pants and
(13:42):
brand each other around an open fire? Lovely. And they all thought it was hilarious. The next day they're in a lot of pain. He took a photo of it and sent it, which will now end up, you know, maybe somewhere in the Netherlands on some forum, saying, what should I do? And he got very accurate and correct medical information. What happens if the information isn't
(14:06):
correct? Do you have recourse, or don't you?
Speaker 3 (14:09):
You do not have recourse. There's a little tiny line at the bottom of each of these AI things that says, sometimes this is wrong, check it first. That's their protection, and they're right. And if you take any source of information and treat it as original or trusted advice, then that's
(14:29):
at your peril. And I'm a big fan of that, right? The internet was no different. If you searched anything on the internet and it told you how to treat a burn with, you know, some sort of salve that you whipped up in a blender, that's on you. This is no different. I think, though, it's got better words, right? So if you take Claude, for example,
(14:51):
which has a very elegant writing style, and you have access to hundreds of methods to solve these things, it might actually be all right. But I don't know that.
Speaker 1 (15:15):
So this is part of our conversation, George. You were saying how you use all the different AI platforms, and that, you know, you might start in ChatGPT with an idea and then put it into Claude to have it write more. Can you tell us a little bit more
(15:36):
about how you use it, you know, AI and that thing Gary and everybody, on a daily basis?
Speaker 3 (15:46):
Yeah, sure. The biggest thing I can recommend for folks who are new to becoming comfortable with AI is to ask it how to be better at it. The best prompt for AI is to ask for a better prompt, right? And most people don't do that.
Speaker 2 (16:06):
Tell me what you mean. I don't even know what that means.
Speaker 3 (16:08):
That means your son goes, I burnt myself, blah blah blah, and gets an answer. Backing that up, you could say, how would I write this prompt better to get a more effective answer? And it will coach you how to write. So you have to ask the teacher how to learn, and it will tell you. That's the
(16:30):
biggest thing that people don't do: learning to prompt. It's not hard.
Speaker 1 (16:35):
I actually took a whole course in prompting. And the other piece... I saw this with another person who's very good at AI. He had ChatGPT do something and then he just said, try it again, do better, kind of as a
(16:56):
prompt, and it said, oh yeah, let me try this again, and came out with an even better product.
Speaker 2 (17:04):
It was.
Speaker 1 (17:05):
It was really quite interesting. It kind of is garbage in, garbage out, you know, the same as Google: if you control the garbage... Like, if you know how to ask it, you can get the diamond out of it. But you have to know how to do that. And so prompting is a huge part of that.
Speaker 2 (17:25):
And what, as Anita said, should we be using it for? What are the length and breadth of the ways you use it?
Speaker 3 (17:31):
So I use it across a number of things. I use it as a travel concierge. So, for example: I'm going to Paris on Thursday, I have seven days, do a walking tour of the first and second arrondissements, show me things that I can't find, and, you know, give me restaurants I can't get access to, and it'll
(17:52):
give me an entire agenda for those seven days.
Speaker 2 (17:56):
So it hasn't gone to your everyday platforms to find that for you. It's...
Speaker 3 (18:00):
Pulled everything from who Knows Where and gone and it
takes literally thirty seconds.
Speaker 1 (18:06):
So I remember you saying one of the prompts was, find me restaurants that don't have a website.
Speaker 3 (18:10):
Yeah, it'll go to Instagram and read the reviews, and I'll be like, well, I only want reviews that are four point eight or higher, and it all has to be in French, I don't want it in English, you know. That's where the prompting comes in, and it will give you an itinerary. Then you take that itinerary, and I drop it into another thing called Wanderlog, which then sequences the events so it's optimized
(18:30):
for physical movement, so it groups them into neighbourhoods, right? And then you can have it with you, and it'll give you the directions on how to get through things. So I use it as a travel concierge a lot. You can search for flights, you can search for hotels. You can do all of that without using Google, because it goes to all the sites, right? Google has paid
(18:51):
ads that are served up at the top, so it will have those paid ads served first. Whereas with chat, for example, it can find you the flights, then find you the hotels, and say, okay, well, can you draft a letter to the concierge of the hotel? Can you tell them the day that I'm coming? I want sparkling water in the room, I want these types of pillows, I want da da da da. And it
(19:12):
sends it off for you, right? And that's just a very simple example.
Speaker 2 (19:16):
My friend who is a travel agent just had a heart attack.
Speaker 3 (19:19):
That would be one where I would be a little bit wary, because that is something where access to information is being disrupted. I then use it for health and wellness. So I have a daily routine that's in my AI, in Gary, that serves up news after thirty minutes, so I don't do anything to affect my dopamine levels when
(19:42):
I wake up, so there's no screen time or socials in the morning. And before I do exercise, it'll serve me up news from here, stylish things from here, fashion from here, recipes from here, you know, so I can have a cultural experience before I actually get up and go. And I designed it that way. Recipe for
(20:03):
the day, shopping list for the day, because I live very near markets, and then I'll do the whole week, so I can have all my menus planned, all my shopping planned, all that stuff, so I don't have to worry about it or think about it. And it'll be like, I want something different every day, it needs to be high protein, needs to be low fat, needs to be low salt, blah blah blah blah. And they're all
(20:24):
different folders within one program. So I'll go into one called Renaissance that has some things in there, I'll go into one called Paris, I'll go into one called, you know, Health and Wellness. And if you're very comfortable, you can upload biomedical stuff, which I know people do, and say, how healthy am I?
Speaker 2 (20:45):
Right?
Speaker 3 (20:46):
So I link it to my Peloton bike, yeah, to watch my VO2 max and my energy consumption and burning and all those other things, and it kind of maps that, and then it maps that to the diet, and it maps that to other things.
Speaker 2 (21:00):
These are all very elite ways of using it. Should we be wary of any of it? You're saying master it, we're going to have to learn to master it. Should we be wary of the role it will play?
Speaker 3 (21:15):
Well, I don't know. I would question the word elite. I think actually people at any level could benefit from having these things. I don't think it's bad to have better diets and better exercise and better knowledge. So I'm a big fan of not being algorithmed into your Fox News...
Speaker 2 (21:31):
Right. What I meant by elite, I guess, is: can you just be above Luddite level and still make use of it?
Speaker 3 (21:36):
Yeah. And that's kind of the whole point of this, is to kind of go, well, if you think of it as a tool, it's going to be one way: you're going to kind of go, how do I do this? If you think of it as a capability, like I have, and kind of demonstrate it, it's more of a lifestyle assistant for those who don't have one, right? And
(21:57):
for me, I put in travel, I put in health and fitness, I put in wellbeing and mental things and knowledge and access to things I wouldn't normally look for. And that's what I'm looking for. I'm looking for things I don't know yet, and as an academic, I need to be trained on things I don't know yet. Otherwise I'm just a really good master of stuff that I
(22:18):
already know, and that's not learning, right? And the research from the doctors' side says the more you actually learn, the longer and better life you have, not just mastering what you already know, right? So I take that kind of long-term view of this capability build, just like I do with my exercise.
Speaker 1 (22:40):
So, George, you said taking the future view. How do you think... in, let's say, five years? Because it seems to be doubling. I was reading somewhere that the capabilities of AI are doubling every three months, which is terrifying.
Speaker 3 (22:57):
Yeah, and we're only at five point zero.
Speaker 1 (23:00):
Yeah, it's amazing, it is, it is. But let's just take five years from now, which seems like a long way away all of a sudden. Where do you think... how do you think you're going to be using it? But, more importantly, how do you think it's going to affect industry? Like, if
(23:20):
you were a twenty-year-old saying, I'm thinking about becoming a travel agent, I'm thinking about getting into coding, I'm thinking about going into the finance world, into banking, what do you think? Where is it going to disrupt the most? This is really putting you out on a limb here.
Speaker 3 (23:39):
So for me, it's going to become more of an integral advisor, because... and I pay for mine, I don't do the free ones, I actually pay for mine... the more it learns. Because if you pay for it, it knows your history. If
(24:00):
you don't pay for it, it forgets your history, so it doesn't know what you've done before. If you pay twenty bucks or whatever a month for chat, for example, it will store all of your information, so I can go in the next day and go, hey, what's happening in Paris? It'll know exactly why it's feeding that up. Or if I put in something from AGSM on curriculum, or, let's
(24:21):
write a case together, it will help me with that. I see it as more of a digital advisor, because it will know my health, it will know my age, it will know my weight, it will know my habits and preferences, it will know my financial situation, and it'll be able to help me.
Speaker 1 (24:40):
You're comfortable with it?
Speaker 3 (24:41):
Yeah, because I tell humans that right now, and they make mistakes as well.
Speaker 1 (24:47):
Sure. I think you're taller.
Speaker 2 (24:52):
But where are the risks for industry? And where are the good ones?
Speaker 3 (24:55):
Going into finance is a tough one. Anything that has buyers and sellers with data is a tough one, because it can scan global marketplaces and match buyers with sellers faster than a person, right? And anything that's related to equities or stocks or even real estate has buyers
(25:18):
and sellers, and it can do it faster in that space.
Speaker 1 (25:23):
George, are you saying that we could really get rid of real estate agents?
Speaker 2 (25:29):
Woohoo!
Speaker 3 (25:32):
I mean, there's still a lot of need for the emotional component, of which real estate is an emotional purchase, right, for the...
Speaker 2 (25:41):
Most and of human interaction. Do you think on a
human level, do humans still want humans?
Speaker 1 (25:48):
Yeah?
Speaker 3 (25:48):
I think it'll just be in a different capacity. I think it won't be for finding real estate. I don't think it will be for stock trades. I think it will be in other places. And that's the cool bit: what does that then look like? I mean, when I was younger, you had to go to the bars to meet people, right? This is all pre-app,
(26:09):
so I've never met anybody on a dating app. That's all new to me, and I never experienced that, thank goodness. You had to actually go to a bar and meet people face to face. That doesn't exist anymore. You meet them here first. You just have to replace that somehow. And I think it's just the next wave of, what do you replace that human interaction component with, which will then make the human interaction component much more interesting.
Speaker 2 (26:38):
So we'll always have it, but the things around it shift.
Speaker 3 (26:42):
Yeah, I don't. I don't think in our lifetime.
Speaker 2 (26:45):
We're not going to be marrying bots? Well, I don't know.
Speaker 3 (26:48):
You can if you want, but I don't think so. I still think there is a big need for empathy and emotional componentry and those kinds of things. But, you know, I think as humans, and you correct me if I'm wrong, I think as humans we've overcomplicated our lives,
(27:09):
and this, to me, makes it simpler. And that's how I use it. And back to the math calculator thing: if you think of AI as a calculator, and you just pull it out of the drawer to do some math you can't figure out in your head and then shove it back in the drawer, then it's going to pass you by. If you learn the math that goes behind it,
(27:29):
which is the capability component of what we talked about earlier, and understand how it can be an advisor as opposed to a tool, then you can train it, and you can educate it, and you can get your preferences in there, so you reap a better experience on the output of what it serves up to you. Then it's more valuable
(27:51):
to you.
Speaker 1 (27:52):
You know what I'm thinking, Amanda? One of the things that I really admire about you is that you are able to kind of outsource a lot of the, you know, the drudgery. You know, you have a cleaning person, you have, you know, that kind of thing. And I'm thinking that this is actually for you, like having that as a personal assistant that will actually support
(28:14):
you in some of the things that you're doing. Maybe that is the next step.
Speaker 2 (28:18):
It is. I've always thought, by the time I tell it what to do, I may as well do it.
Speaker 3 (28:23):
Well, what you can do on that one, Anita... my partner has done this. He sits on a number of boards, right? So he's created a board of advisors in his AI. So he has an accountant, a legal agent, a strategy agent, within one AI, that all have different roles, and they all talk to each other, right?
(28:45):
So he has created a board of advisors within his AI world and puts questions in there, and it helps him navigate some of these roles. So you can actually give them roles. So you can write the rules and say, well, you're a certified public accountant in Australia,
(29:06):
you need to understand small-business accounting rules, what are the tax advantages of investing in this type of company? And store it, right? And that can be your accountant person, your advisor. And that's what I mean by turning it into a capability. It's not just asking it questions. It's there not as a tool but as a capability.
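[Editor's note: the "board of advisors" pattern George describes amounts to storing a role (system) prompt per advisor and prefixing it to every question. For readers who want to try it programmatically, here is a minimal Python sketch; the advisor wordings and function name are illustrative assumptions, not the actual setup described.]

```python
# Minimal sketch of the "board of advisors" pattern: each advisor is a
# stored role prompt that frames every question sent to a chat model.
# The role wordings below are illustrative assumptions.
ADVISORS = {
    "accountant": (
        "You are a certified public accountant in Australia who "
        "understands small-business accounting and tax rules."
    ),
    "lawyer": "You are a commercial lawyer who reviews contracts for risk.",
    "strategist": "You are a strategy advisor who finds holes in plans.",
}

def build_messages(advisor: str, question: str) -> list:
    """Build a chat-style message list for one advisor's role."""
    return [
        {"role": "system", "content": ADVISORS[advisor]},
        {"role": "user", "content": question},
    ]

# This message list is what you would send to any chat-completion API, or
# recreate as a saved folder / custom instruction in a chat app.
msgs = build_messages(
    "accountant",
    "What are the tax advantages of investing in this type of company?",
)
```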
Speaker 1 (29:26):
So what you're saying... my brain is exploding here just a little bit. So what you're saying, though, is that if you had a big, tough problem that you wanted to solve, you could ask AI to... let's say it was, kind of, I don't know what community I want to move to in Sydney,
(29:47):
and I'm looking for something where the median price is this, I'm looking for two bedrooms, I'm looking at the... well, you know, you could go and have a real estate person, you could have a finance person, you could have a restaurant person, you could have school systems, you could have all of that, and they would talk and give you input about that. You could say, yeah,
(30:08):
that's a great solution, or you could go, no, that doesn't... do better?
Speaker 3 (30:13):
Yes. Do better, do better.
Speaker 1 (30:14):
Yeah?
Speaker 3 (30:15):
Well, and that's the transition, about organizing family and life, which I think would be very pertinent to some of the folks who are listening to this: you can create advisors out of it. Whether you choose to use that advice is, again, obviously up to you, but it's how you design it and how you create this
(30:35):
board of advisors. And in my partner's situation, it's actually for boards that he sits on, and he can upload strategy documents, for example, and say, where are the holes? And it tells him, right? So you can go into a board meeting and have that extra layer of confidence that the stuff is there. Or he can upload five years
(30:55):
of financial statements and find the holes, right? Or, write me the five toughest questions that I need to ask the accountant of this organization, and it does that. And they're all in folders, and it all remembers it, and it's all twenty bucks a month. So I would go to someone who's not that interested in embracing it and kind of go, why
(31:18):
wouldn't you do that? For me, it's very natural, right? But I'm a resource-hungry person, so I love that, and I chase the curiosity gaps in stuff that I can't solve on my own. So I kind of then go, right, and get two going and have them running at the same time and see what the answers are. Pull up Claude, pull up Chat, and go, best restaurant in
(31:41):
Paris, and see what comes back, and then you decide. So it's just a huge funnel of information.
Speaker 1 (31:49):
That is not guided by paid ads.
Speaker 3 (31:52):
That is... yeah, well, and I wouldn't know, right? So you can then go, well, find me the YouTube videos that are trending on the Marais in the past six months, and it'll pull them up. It can't watch them for you, so you pull them out and have them transcribed, which is what I do. So I have them all transcribed by another system and then put all
(32:14):
the transcripts back into Chat and say, summarize all of these. And it literally takes five minutes, and you can go really fast and just kind of plot it all out. So yeah, there's a huge amount of opportunity for that kind of life admin stuff. But to me, the fundamental thing is about smarter decision-making. And if I'm sitting next to someone who's trying to go through, you know,
(32:37):
a Lonely Planet book, and I've got two AI machines running at the same time, you know...
Speaker 2 (32:43):
Know I should advise it sort of falling behind, isn't it.
Speaker 3 (32:46):
Well, you can put that into your chat. You can say, find only four point eight and above on TripAdvisor. It will do that for you. So you kind of go, well, why wouldn't you want that information? Yeah, but that's me, and that's kind of my hunger for knowledge and curiosity stuff. So
(33:06):
I wouldn't be afraid of it. I would embrace it as a capability.
Speaker 2 (33:10):
I sort of get the impression that it's galloping along and it's up to us how we lasso it and use it.
Speaker 3 (33:15):
It's galloping along, and if you don't lasso it, it's going to take you with it, which I think is a good thing. I wouldn't rely on it like a doctor, though.
Speaker 2 (33:27):
You're going to burn yourself at someone's twenty-first, for example, and if you do, you're in trouble, George.
Speaker 3 (33:33):
But you know, we have an elderly family member and we put in symptoms all the time just so we can brief the doctor better, right? Or you have a legal contract: it can read it, and you can brief the lawyer better, right? Or an accountant, or any financial services that you have in your life, any of those things. It
(33:53):
can brief you and bring you up to speed a little better, so that those conversations are a lot more strategic and less admin, because a lot of those meetings historically would be a lot of admin.
Speaker 2 (34:06):
And as you say, we've over complicated our lives.
Speaker 3 (34:08):
It can help us. And at six hundred bucks an hour, you don't want to do that. You want to make sure that you're getting value. So there is an ROI in there too, right? So it actually could be beneficial: the better brief you write for a lawyer, the better output you're going to get, right? So just don't try to be a lawyer. I mean, the last thing for me on this, for a second,
(34:30):
is I am not a fan, and I tell my students this as well: use it, love it, embrace it. Just don't put your name on it. Don't copy it out, put your name on it, and turn it in, because that's what I have a problem with. And I can tell. As a student, as anybody, a worker, a student.
(34:52):
Just don't put your name on it. Inform your decision making, sure. Use it as an advisor, sure. Create a capability out of it. Just don't, well, here's the question, copy it and dump it, because people who are around it a lot can tell.
Speaker 1 (35:11):
So much to think about.
Speaker 3 (35:14):
I know it's pretty cool. You should take one of
my classes.
Speaker 1 (35:16):
I should, actually, yes. How are you feeling, Amanda? How are you feeling about it?
Speaker 2 (35:23):
I've got the ChatGPT app, which I use.
Speaker 3 (35:26):
What do you use it for?
Speaker 2 (35:27):
Well, I use it just as a basic research tool. For example, we had a guest on who had been in a TV show, and I said, what role did he play and how long was he in it? I just use it for information. That's the level of what I've done. But talking to you, I do see other ways I can harness this. I've got many aspects to my life, as Anita said.
(35:50):
I can't quite yet see how. I like the idea of the committee, but I don't know what information I need to put in for my committee, and I'm very anxious about putting information, private information, into a system.
Speaker 3 (36:04):
To do this kind of kind of go into your
chat and say, I'm into this what I'm calling micro
moments of joy, right, and have have you have your chat?
Send you twice a day something interesting like poetry, a playlists,
and just program it to do that, and those little
micro moments of joy might be kind of interesting.
Speaker 2 (36:26):
I love that.
Speaker 3 (36:27):
And then you kind of.
Speaker 2 (36:28):
Go, guys reading! Send me pictures of guys reading.
Speaker 3 (36:31):
Hot guys reading, you know, whatever you're into, sure. And that's kind of those micro moments of joy that I think will actually then break down the technical barriers of it all, and you kind of go, well, this isn't so
Speaker 2 (36:46):
Bad, and wow, that's very good advice.
Speaker 3 (36:49):
And quite frankly, no one's going to steal my information, because I'm just a person. Like, what do they want, my trip to Paris? They can have it. Or my blood pressure? Like, I don't care. Like, I honestly don't.
Speaker 2 (37:00):
I saw your blood pressure came up on my phone. Listen,
I'd see a doctor.
Speaker 3 (37:05):
It's down, thank you very much.
Speaker 2 (37:07):
You've got a holiday coming up, of course it is.
Speaker 1 (37:09):
That's right.
Speaker 2 (37:10):
Well, I found that really interesting, George. Thank you.
Speaker 1 (37:14):
My head is exploding too. I like that.
Speaker 3 (37:17):
Cool, my pleasure. I'm happy to come back.
Speaker 2 (37:21):
Oh actually, we may have a thousand questions and we may put them to you. Okay? But you have to come back from your holiday.
Speaker 1 (37:28):
Say, that's six weeks away now.
Speaker 3 (37:30):
I don't leave till Thursday, next Thursday, so I've got a whole week.
Speaker 2 (37:34):
But seriously, if you do have any questions about this stuff,
please send them in and we'll put them to George.
I think you have a very very nice way of
explaining that stuff.
Speaker 3 (37:44):
Thanks.
Speaker 2 (37:44):
Yeah, that was very interesting.
Speaker 1 (37:47):
Shall we get to ignorance?
Speaker 2 (37:48):
Oh, I forgot all about it.
Speaker 1 (37:49):
Let's not do that.
Speaker 2 (38:00):
Well, I forgot in the midst of all that we
haven't done our glimmers. You go first, Anita.
Speaker 1 (38:06):
Well, I was just thinking about what is a glimmer for me this time. But I think one of the things that I haven't talked about is that in my PhD research I got confirmed, which sounds like a religious ceremony. Yeah, no, it actually just means that the series of studies that I'm looking at doing has been approved, and it's kind of like
(38:27):
a big watershed moment. And this happened a couple of weeks ago, and it just feels really lovely.
Speaker 2 (38:35):
Oh, congratulations! So this is where, with your body of work, they're saying, yes, you're on the right track.
Speaker 1 (38:41):
Yep. So yeah, that's mine.
Speaker 2 (38:44):
That's a lovely glimmer, congratulations. But mine is probably a bit more bogan, as it often is. I visited my brother and my dad in Brisbane on the weekend, and my brother said to me, why don't we go to a Bon Jovi trivia and tribute night down the road? And I said, I'll probably know one or two songs, sure. Instead, I skanked it up all night. The band was fantastic.
(39:06):
I drank about eighteen thousand gin and tonics. They did a big eighties medley at the end, and what I really loved was that the
Speaker 1 (39:13):
Lead singer said, where's the glimmer?
Speaker 2 (39:18):
Your phone has listened to this and you're going to go to one in Paris. But the guy, the lead singer of this band, said, you may have found out that we were performing tonight. You may have heard us from outside and come on in. You may be having the best day of your life, or you may be having the worst day of your life. But he said to all of us, all this big group of drunken people, he said,
(39:38):
if we've done our job, you've all got what you need tonight. And it was so true. And what I also got was sore hips the next day, so I had stood up and danced all night. But what a great way it is to let off steam. I love it. Does yours also feature Bon Jovi, George?
Speaker 3 (39:54):
No, mine's a career one. So a glimmer this week, or late last week, was a very prominent Sydney headhunter gave me a call and said, oh, this company wants you to come in and kind of turn them around and kind of push them in a way that will grow and create value for their investors. And I met with the company. Goals didn't align. Values didn't
(40:18):
align. And I'm at a stage in my life where I'm like, hmm, that's paramount over any financial consideration. And I told them no. And the glimmer is I told them no, and I'm okay with
Speaker 2 (40:28):
That. Right, a lovely sign of maturity, George. Well, really...
Speaker 3 (40:35):
A sign that principles matter, and that there are other things besides financial reward or titles. Because it was a C-level role.
Speaker 1 (40:48):
What does a C-level role mean?
Speaker 3 (40:49):
It was CEO of a company, C levels rising, and I've had two in my past, so I know what they are and I've kind of done them.
But I reflected on that and I said, I get more satisfaction from my teaching, because it is principles-bound. It is about making people better and
(41:10):
equipping them with tools so they can make better decisions. And I said, thanks, but no thanks. And then I'm like, I just said no to a really big opportunity. Or did I? And I'm super happy with it. And the glimmer here is that it is okay to say no.
Speaker 1 (41:27):
I've got to say that the students are going to benefit, because it's been so lovely, you know, talking to you through this podcast. Because, you know, part of it is I've heard about your teaching. You've actually shown me some of the slides that you were developing and all that kind of stuff. It's been interesting to really kind of get into the way
(41:49):
that you, I'm assuming, teach in the classroom.
Speaker 3 (41:52):
This is how I teach in the classroom. Yeah.
Speaker 2 (41:54):
Well, once again on this podcast up the cross as
a show and so I'm happy for that choice. I
also make.
Speaker 3 (42:03):
Down with a Bon Jovi, so that's well...
Speaker 2 (42:06):
They were the bubbles of joy and that's what we're seeking. Well,
thank you George, thank you Anita.
Speaker 1 (42:12):
Thank you. Come back, absolutely. And happy travels.
Speaker 3 (42:16):
Thanks MHM.