Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
This is why they're investing so much into
(00:01):
AI because they realize their fundamental
search business is probably cooked.
It probably doesn't have that long left.
I dunno if you've been on an AI tool directory recently, but it's basically a graveyard.
It is full of applications that
people have launched and they
haven't thought about marketing.
Good morning.
It is Wednesday, the 9th of July,
(00:24):
and I'm gonna be giving a rundown
of the latest AI news and updates.
I'm live streaming on TikTok,
YouTube, LinkedIn, Facebook,
and Instagram right now.
If you are on any platform except for
TikTok, you'll be able to see my screen.
If you're on TikTok though it's
much easier to ask me questions
and it tends to be more active.
Anywhere you are is perfect.
(00:44):
So the news today. The big thing that's gonna be happening today, Wednesday the 9th of July, is the release, or additional news? No, actually, I think we're getting the actual release of the new Grok model.
So the Grok 4 release live stream is gonna be Wednesday at 8:00 PM Pacific time. That is
(01:05):
4:00 AM in London, I think.
So I'm gonna wait and
see how it is tomorrow.
So Grok is the model from xAI, which belongs to Elon Musk.
And it is the model that is built into
Twitter or X if you want to call it that.
So we're gonna get the new version, Grok 4, released today.
There is some excitement
and also a lot of concern.
This is always the case when it's
(01:27):
got anything to do with Elon Musk.
The excitement is that there
have been some leaked benchmarks.
The Grok 4 and Grok 4 Code benchmarks were leaked, and supposedly
it got 45% on Humanity's Last Exam. If you don't know what Humanity's Last Exam is, I'm gonna tell you about that very shortly.
(01:48):
But this is potentially very exciting, because
the highest previous score was,
I have it here, Gemini 2.5 Pro
got 22% on humanity's last exam.
If this is true and Grok can actually perform at this level, then that's a 23-percentage-point increase, which is fantastic.
(02:09):
So that will bring it up to
45% in humanity's last exam.
So HLE, which is Humanity's Last Exam. I'll actually show it to you, 'cause it's interesting.
It's a dataset of, I think, yeah, two and a half thousand questions, with the idea being that if you can master all of these questions, if you have the breadth of knowledge to answer everything in this, then.
(02:32):
We potentially have reached AGI.
There are definitions here that
get in the way, but here we go.
Benchmarks: Humanity's Last Exam is a multimodal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind, with broad subject coverage.
The dataset consists of two and a half
(02:53):
thousand challenging questions over
a hundred subjects, and we publicly
release the questions while maintaining
a private test set of held out
questions to assess model overfitting.
So this is really important when you're doing any kind of benchmark: you make sure you don't give the answers to the AI model, because if the AI model has the answers within its dataset, then of course it's gonna be really good
(03:14):
at exams. So you need to hold out some of those questions, some of the data, and train the model on other data, so that it can answer the held-out private questions.
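The held-out-set idea being described here can be sketched in a few lines of Python. This is just an illustration of the principle, not the actual HLE evaluation code, and all the data is made up:

```python
import random

# Toy benchmark: each question has a known correct answer.
questions = [{"id": i, "answer": f"ans-{i}"} for i in range(2500)]

# Split once: the public questions could leak into training data;
# the private ones are held out to detect overfitting.
random.seed(0)
random.shuffle(questions)
public, private = questions[:2000], questions[2000:]

def accuracy(model, subset):
    """Fraction of questions in `subset` the model answers correctly."""
    correct = sum(model(q["id"]) == q["answer"] for q in subset)
    return correct / len(subset)

# A "model" that has simply memorised the public questions...
memorised = {q["id"]: q["answer"] for q in public}
def cheater(qid):
    return memorised.get(qid, "no idea")

print(accuracy(cheater, public))   # 1.0 -- looks superhuman
print(accuracy(cheater, private))  # 0.0 -- the held-out set exposes it
```

The gap between the public score and the held-out score is exactly what the private question set is there to measure.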
On Humanity's Last Exam at the moment, the best model is Gemini 2.5 Pro, which has an accuracy of 22%.
(03:35):
So it's still relatively
low on this examination.
If you can see the screen, you
can see what kind of questions.
Gosh, here we go.
So here's a question about hummingbirds. I can't even read it. I'm not smart enough to even read the questions in Humanity's Last Exam. I've just seen a question come in: Stefan is asking, are the questions publicly available? Yes, they are.
If you go to agi.safe.ai, you
(03:57):
can see all of the questions.
So I'm gonna read one.
I'm gonna try and read one, but I'm not smart enough to even read it, let alone answer it. Hummingbirds within Apodiformes uniquely have a bilaterally paired oval bone, a sesamoid embedded in the caudolateral portion of the expanded, cruciate aponeurosis of insertion of m. depressor caudae. How many paired tendons are supported by this sesamoid bone?
(04:19):
Answer with a number.
I dunno about you, but I do not
know the answer to this question.
And then next to that,
there's a classics question.
Which is a representation of a Roman inscription originally found on a tombstone: provide a translation for the Palmyrene script. So, "DM Regina...", et cetera, et cetera. A transliteration of the text is provided.
(04:41):
So these are questions that have been
put forward by, generally, by academics.
So this Latin classics question is from
Henry t from Merton College, Oxford.
The ecology question about
hummingbirds is from Edward V at MIT.
That's a nice rhyme there.
So it's two and a half
thousand questions of this ilk.
I'm looking at some mathematics,
(05:02):
computer science questions.
Very high end questions.
And so far, Gemini 2.5 has done the best, with, sorry, 22% of the answers correct, which is pretty good.
So what we have seen and we'll find out
later today, but apparently the new Grok
(05:22):
4 model, there's been a leaked benchmark, and it got 45% on Humanity's Last Exam.
If that is true, then this
is a huge leap forward.
It may not be true.
This may be complete and utter nonsense.
It might be made up, or the subtler reason might be that it's been overtrained and overfitted
(05:43):
to Humanity's Last Exam.
So it's a form of cheating, basically.
So you can train a model to be
very good at one thing, but that
doesn't actually mean that it's
getting any closer to intelligence.
Obviously "intelligence" is a sticky word here, but it's just a way of basically training to the test. I dunno if you did A-levels or GCSEs.
That's what I personally did.
I was very good at learning what we
(06:03):
needed to do in the examination and
nailing that in the examination.
Did I learn better by doing that?
No, probably not.
And it's exactly the same with an AI.
So we will find out, because we are getting Grok 4 released later today. It will be overnight in the UK, or 8:00 PM in California.
So we will see.
(06:24):
Now, obviously the excitement has been attenuated somewhat. Because yes, we have a cool new model coming out, Grok 4, fantastic.
It's always exciting
when we have a new model.
But over the last few days, Grok (again, Grok belongs to xAI, which belongs to Elon Musk) has been extremely
(06:45):
anti-Semitic and racist, and it's been calling itself, I think it was, a Nazi bot and stuff like that. It's gone off the rails in the last few days. So much so that it has made NBC News here.
So: the AI chatbot Grok, which is produced by Elon Musk's xAI, wrote numerous anti-Semitic social posts Tuesday, after the artificial intelligence company released a revamped
(07:05):
version over the weekend.
So this is not Grok 4,
this is the previous version.
Just got an update and it's
made it incredibly problematic.
Unfortunately they don't go into
details, which is probably for the best.
I don't really wanna spread it around, but it was basically being horrendous
and it was saying terrible things.
It also took on the role of Elon Musk
at one point, which was very bizarre.
(07:26):
It started to talk as if it
was Elon Musk, which is odd.
There's some weird things going on in that
custom instructions in the back there.
So yeah.
Great that we have a new model. Great that it's maybe going to nail Humanity's Last Exam. But should it be Grok, and do we trust Elon Musk to push forward a model that is gonna be helpful for humanity?
(07:49):
Maybe not.
So, a post from Peter Yang, who dug into the system prompt for Grok, and he found this. It's written literally in the system prompt.
Anyone can look this up.
There's an instruction to Grok, which is: the response should not shy away from making claims which are politically incorrect, as long
(08:10):
as they're well substantiated.
So this comes to the question of guardrails, and how responsibly we use AI.
So this is the question of alignment.
When we prepare a model, when we prepare
a base model with the pre-training, it
doesn't come built in with guardrails
because it will be trained on the
internet and whatever sources we give it.
(08:31):
So the second phase of training a large language model like Grok or ChatGPT comes from applying these guardrails, from aligning the model, so that we can make sure it doesn't say things that are dangerous.
Like it won't tell you how
to make a bomb, for example.
And also that it doesn't spew hate.
So this is something that it looks like Grok and xAI are less
(08:53):
interested in than many other models, especially Anthropic's Claude, for example, and ChatGPT.
Elon Musk and supporters will say
that this is because it's about
freedom of speech and we want
an unfettered, uncensored model.
But the flip side of that is you can
get a model that spews hatred as well.
So it's a very difficult balance: how much alignment,
(09:15):
how much control should we put over artificially intelligent models?
And it is a decision that, at the moment, needs to be made by human beings.
So any decision that's made
by a human being is gonna
have its own biases inbuilt.
So yeah, the responses should not
shy away from making claims which
are politically incorrect, as long
as they're well substantiated.
That is the explicit instruction to Grok.
(09:36):
And there's another one. Apparently, at the same time this was added, the line below, about treating Twitter posts and web searches cautiously, was removed. So there used to be a line, which was: whatever results are in the response above, treat them as a first-pass internet search; the results are not your beliefs. That's been removed.
(09:57):
So now it will look up tweets, it will look up web searches, and then incorporate them into the answer as if they are the truth, basically. Which is additionally quite problematic, because if it's gonna look up an anti-Semitic post from the internet, it's then gonna present those ideas as its own.
This is probably the cause of why it's
(10:20):
gone off the rails in the last few days.
So we'll see what happens with the new release of Grok, which will be later today.
So Stefan asked, are the
questions publicly available?
Yes, they're on agi.safe.ai, and you can
go through them all and you can be like me
and not able to answer any of them at all.
You can also throw them into your AI, so you can put them to ChatGPT.
(10:41):
You can ask questions and see how they do.
It's fun.
A fun little exercise.
Okay.
I'm just gonna have a
look at some questions.
Layla is saying Grok is finally showing its true colors.
Yeah.
Removing the guardrails
has been a strange choice.
I dunno why they've done it just
before releasing a new model.
I dunno if they want to get into
(11:01):
the news to build up publicity.
This doesn't seem like good publicity,
but who knows what they're thinking.
It's tricky, and this is always gonna be a problem with any model that is trained on public data. Because when there are humans out there who spew hatred and racism, then the AI's gonna learn that from us, because it's given that content.
So that's why we always apply that second
(11:23):
layer where we put on the guardrails,
where we align the model and we make sure
that it doesn't go too off the rails.
Basically, it looks like they've
decided that's not important for them.
So I actually saw, where was it? It was down below Peter Yang's post. I saw a defense of this, which was...
(11:43):
I'm not seeing it anymore, but there was somebody defending Grok's behavior, saying that xAI, the way they build, is to push out new features very quickly.
They push them out very quickly into the public to see what happens, and then they make changes very quickly.
(12:04):
That was the defense which could be made, but it's basically saying
instead of building slowly and carefully
in private, they push things out and
they let the public decide what's
right, what's wrong, and then they
make changes based on that, which is
a valid way of building a company.
It may or may not be a valid way of
releasing a model into the general public.
It depends on what you think about that.
(12:26):
I think it's a bit problematic
personally, but I can see, I can
understand the argument there.
Okay, so the other big thing that was happening, and my gosh, there's a lot of adverts, this is very ironic actually. So, Google.
When you search something on
Google, now there is an AI
overview at the top of the page.
(12:48):
What this means is fewer people are clicking on websites.
So Google has historically been a search engine, a semantic search engine. What they've done over the last few decades is they've spidered, crawled, and indexed the internet, basically, so they know where all the information is online.
(13:08):
They've now used that information
to train their models.
Gemini, for example.
They're now using that ai, those AI
models to surface the information
directly at the top of a Google result.
So, say I'm about to buy a new guitar, for example, and I searched what would be a good guitar for playing
(13:29):
heavy metal or whatever it is.
Normally, 10 years ago, five years ago, no, two years ago, we would've gone to a website which would have: here are the top 10 guitars for playing heavy metal.
That's no longer the case. Because, one, we'll probably go to ChatGPT and say, hey, I want a heavy metal guitar, seven-string, what's the best one right now? And ChatGPT will help me. If I don't go to ChatGPT, which a lot of people are
(13:51):
doing, they're changing their behavior.
I'd go to Google, and Google would actually give
me an answer at the top of the page.
It would go ahead and say, oh, what you
want to get is a Jackson or a Schecter.
And it would give me the details without
me clicking through to that website.
What this means is all these people
who have created websites, like blogs, directories, review websites, for
(14:13):
example, they're not getting the
clicks anymore and they rely on the
clicks to drive their businesses.
But now because Google has basically
taken that information, trained their models, and then put it at the
top of Google, people do not need
to click through on the results.
That's a problem.
So Google's AI overview has been
hit with an EU antitrust complaint
(14:34):
from independent publishers.
Now, the reason I was saying
this is ironic is that this news article is from Reuters.
Reuters is a news agency.
Their model relies on people
clicking through to their website
and that's how they make money.
So if you can't see my screen right now,
what I'm seeing is, yes, the article,
but it is also covered in adverts.
It's absolutely covered.
It's very hard to read 'cause
(14:55):
it's covered in adverts.
This is another reason why people don't
want to go to websites and they're
happy to use AI overviews because going
to a website nowadays, in 2025, is an absolute ball-ache.
It's, you get your cookie popups,
you get the, oh, actually,
do you want to support us?
Oh, you've got an ad blocker turned on.
Maybe you should subscribe.
Okay, fine, that's fine.
And then you go through, we're
gonna show you lots of adverts.
(15:15):
Now it is a terrible experience, so no
wonder that people are just using AI
overviews rather than clicking through.
So the irony of what I'm seeing
right now is extremely funny.
So here we go.
AI overviews.
I can't even find the information
because it's squished between
adverts for bedroom storage.
(15:35):
Google's core search engine service is misusing web content for Google's
AI overviews in Google search, which
have caused and continue to cause
significant harm to publishers, website
owners, including news publishers
like Reuters in the form of traffic,
readership, and revenue loss.
And so this is the lawsuit
that's going ahead.
So if you run a website, you
might have seen this yourself.
(15:56):
I know a lot of people who run affiliate
websites are in trouble right now
'cause people are not clicking through.
They are looking at the AI overview. If you go on Reddit at all, if you go on r/SEO, r/Entrepreneur, et cetera,
there will be a lot of people saying,
Hey my website's traffic is down.
Does anyone know why?
Or they've worked out and they're like,
oh, people aren't clicking on my website
(16:16):
'cause they're looking at the overviews.
So here's one, for example: Google has stolen 90% of my revenue. They've actually deleted the post, but one commenter says, welcome to the zero-click internet.
What's gonna happen to all
of those websites that people
have worked so hard on?
Yeah, we do not need to click through
to get the information that we need
anymore because it is boom right there.
(16:37):
Which is good for the user, the
person who wants that information.
It's good for Google because it keeps
us on Google looking at the AI overview.
But it is bad for the people who have
created that website in the first place.
The people who rely on that traffic
to generate revenue, either through
advertising or by affiliate links on
their website or selling things directly
on their website, they're being cut out.
(16:59):
So the problem here is who's
gonna make new content?
Who's gonna make webpages,
who's gonna write blog articles?
The incentive for doing all of these
activities has just been removed
because Google are not sending
traffic to those websites anymore.
So why would I write a blog website
now to pull traffic to me when I know
(17:20):
Google are just going to show the
information from my blog articles
at the top in an AI overview, and no
one's gonna click through to my site.
This is the big problem here.
So Google are, I think, being
a bit shortsighted here.
They are in trouble because 80% of
Google's revenue comes from search.
And if people don't search anymore
(17:41):
'cause they're using AI, then Google will cease to exist as we know it.
Yes, they have phones, yes, they
have Gmail, et cetera, et cetera.
Those don't make as much money as search.
Search is about 80% of their revenue.
So what they're doing is trying to fix that problem, but that's just kicking the larger problem, of what the internet looks like, down the road.
(18:01):
Basically, the idea of having websites,
the idea of public information,
becomes less and less valuable
in the world of generative ai.
Google can make some money in the short to medium term by continuing to get people onto Google using AI overviews. Ten years down the line, though, that means that we aren't gonna have any more content on the internet, 'cause our AIs have already slurped
(18:22):
up everything they need already.
We're not gonna have anyone
creating anything new.
And we'll basically run out of new information to train the AIs on. So then we go into synthetic information, which is AIs training on AI information.
And that can get very messy, very quickly.
So it becomes a reversion to the mean, and potentially all the progress will drop off.
So it's very interesting.
(18:44):
Everyone is acting in
self-interest as always.
So the people who are suing Google,
they're acting in self-interest
because they're publishers and
they're not making any money.
Google are doing this
because they're losing money.
'Cause people aren't using Google as much.
So they need to do AI overviews.
The people who own the websites are
upset because they're losing money,
et cetera, et cetera, et cetera.
So everybody here is
(19:05):
struggling, and everyone's doing what you'd expect them to do.
They're protecting their
livelihoods, absolutely fine.
But I think in the long run, in five years' time, the internet's gonna change entirely.
So everyone needs to be thinking
a bit more forward about
what that's gonna look like.
And that's the big question.
So yeah, here's another one, from Search Engine Land. This was from May 2025.
(19:27):
New Google AI overviews data
search clicks fell 30% in one year.
That's a huge drop off in just a year.
And that's one year of AI overviews.
So if we see another year of AI overviews at the same rate, it's over for websites, basically.
(19:48):
I'm gonna hop over to the questions now.
Oh, there's so many. What have we got?
This won't hurt Google, since
companies pay Google to show
their websites on the first page.
Yeah, they do for now.
But if no one's going to Google, 'cause
everyone's going to chat GPT, then
no one's gonna pay Google anymore.
So this is the big problem.
This is what actually happened to Yahoo. Yahoo lost trust.
(20:09):
The searchers stopped
going to Yahoo because Yahoo broke
the bond of trust by filling the first
page with paid-for results without
declaring that they were paid for.
So for the end user, the person
who was going to the search engine,
suddenly this was not a useful
tool because they were being shown
junk whenever they went to Yahoo.
(20:29):
This is potentially what's gonna
happen with Google too if it is no
longer valuable to go to Google.
If I can't get the answer to my
question by going to Google, then
I'm not gonna go to Google anymore.
If I can get it much faster using ChatGPT, I'm gonna do that.
At the end of the day, people are gonna
use the easiest method, and if ChatGPT
becomes that, then we're gonna use that.
And unfortunately, as people get
(20:50):
more desperate, as the website owners get more desperate, they are gonna fill their websites with more adverts. They're going to do more SEO to try to get to the top.
So it's not gonna be about creating value for the person who's landing on that site. It's about monetizing that visit as much as possible, which degrades the experience further for the visitors as well.
So it's a race to the bottom, I think.
(21:11):
And Google is left holding the
bag because they're the ones who
connect visitors to websites.
And if the websites devalue, then Google
devalues 'cause they have nothing to show.
So I think Google is stuck between a rock and a hard place.
This is why they're investing so much into AI, because they realize their fundamental
search business is probably cooked.
(21:33):
It probably doesn't have that long left.
Thankfully, they've got lots of cash.
Freddy says, personally, I
can't visit a site full of ads.
Yeah, so I have an ad blocker.
I'm looking at my computer right now, and when I was trying to read the Reuters article, even though I have an ad blocker, it's an absolute mess.
It was hard to read the article
which is very funny because the
(21:55):
irony of a news company who relies on
clicks to drive revenue complaining
about AI overviews, when I know the AI overview is gonna give me a better experience than coming to their website.
So it's ironic, unfortunately.
So: I use ChatGPT a lot,
(22:16):
but how do I stop it from using em dashes without prompting every time?
So you can add it into the system instructions.
It will still slip them in,
so you still need to check.
I find Claude much better for this.
I personally use Claude over ChatGPT whenever I'm doing writing, or it's helping with writing, because I find its written style better. So, in either Claude or ChatGPT,
(22:38):
add in your custom instructions, things like: do not use em dashes.
I also have "use British English", because, as you can probably tell, I'm British, and otherwise it will go to American English.
So you can add in all of those little
things that are important to you.
The other thing you can do
is you can set up a project.
So if you do a certain type of writing again and again, say social media posts, for
(22:58):
example, you can put them in a project.
You can give that project
custom instructions as well.
And then when you have successful
writing samples, you can add
them into the project knowledge.
So it knows: this is what it should look like, this is what a good social media post looks like.
So then at least it has
references it can refer to.
And again, if there are no em
(23:19):
dashes in those references, it's not going to duplicate that style. So that's just kind of belt and braces, doing it inside a project. And then manually check.
We can't fully pass over all
responsibility to AI just yet.
We still need to do some checking.
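That manual check can also be partly automated: you can scan a draft for em dashes before publishing. A minimal sketch in Python; the choice of replacement punctuation here is just an illustration, pick whatever suits your style:

```python
def strip_em_dashes(text: str) -> str:
    """Replace em dashes (spaced or not) with a comma, as a simple cleanup pass."""
    return text.replace(" — ", ", ").replace("—", ", ")

draft = "Claude is great for writing — but always check the output."
print(strip_em_dashes(draft))  # Claude is great for writing, but always check the output.
```

A check like this is the "belt and braces" on top of custom instructions: even if the model slips an em dash in, it never reaches the published post.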
Laughing Truth says, I use lovable and
just built my first functioning app.
Awesome.
That's so cool.
Yes.
I need to know what the easiest.
(23:42):
I think your question was cut off. A mobile app, looked at Firebase, et cetera.
Oh, okay.
So you can build mobile apps on Lovable?
I believe so.
What I normally do is I will work in Lovable, then move to Cursor, because you have a lot more power there.
So if you are going down that route, then yeah, maybe move to Cursor.
It depends if you are doing an Android
(24:03):
app or if you're doing an iOS app as well, because iOS has its own ecosystem.
Check out a guy called Maren.
'cause I know Marson just did this.
Maren AI, Maren Ro, I think, is his name.
I hope I'm saying it right.
So he's on LinkedIn. Sorry, he's probably on LinkedIn.
(24:23):
He's on TikTok and Instagram. I know that he just produced and published an iOS app in one day.
And he's done a few videos about that, so have a look at what he used in particular, because whatever he used worked extremely well and very fast.
Literally, I was chatting to him, and then a day later he had published on the app stores.
(24:43):
Oh, wow.
'cause normally that takes a lot of time.
So check out Martin's stuff.
He knows a lot more about that.
AI SEO, is it possible? Yes.
I've been talking about
this a few times this week.
Especially the name; we need to come up with a better name, 'cause we dunno if it's LLMO, or AIEO, which sounds like Old McDonald,
(25:04):
or AEO. We don't really have a name.
But basically, the idea of: how do we rank in ChatGPT? So if I'm, again, searching for a heavy metal guitar, how do I, as a heavy metal guitar manufacturer, make sure that it is my guitar that is suggested?
(25:26):
Because AIs are trained on the internet, if a particular brand of guitar is mentioned a lot on Reddit, in web search results, in authoritative guides of what's the best guitar to buy, et cetera, then it will do well inside the AI as well.
I don't think there are any
(25:47):
particular shortcuts here.
There are things like: make sure you are listed on Bing Webmaster Tools, because that's what ChatGPT uses. ChatGPT is owned, or not quite owned, but Microsoft have a 49% stake in OpenAI. So ChatGPT is built on Microsoft systems, and it also uses Bing for that kind
(26:07):
of web scraping. So make sure that you've got your website indexed with Bing as well as on Google Search Console.
And then make sure you
have things like schema.
Basic SEO optimizations, having links, and having people talk about you, the kind of things that we do for SEO, seem to be relevant
(26:28):
for appearing in ChatGPT as well.
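Schema here means structured data markup (schema.org), usually embedded in a page as JSON-LD. A minimal sketch of what such an object might look like for a hypothetical guitar product; every name and value below is made up for illustration:

```python
import json

# Hypothetical schema.org Product object; all names and values are illustrative.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Seven-String Guitar",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "priceCurrency": "GBP",
        "price": "799.00",
        "availability": "https://schema.org/InStock",
    },
}

# You would embed this JSON in the page head inside a
# <script type="application/ld+json"> tag.
print(json.dumps(product, indent=2))
```

Crawlers (Google's, Bing's, and by extension the AI tools built on them) can read this machine-readable description of the page without parsing the prose.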
I'm sure there are little hacks
that probably work and can
get you there really quickly.
But if it's anything like SEO,
they will not work for very long.
So I think in the long run it's always
gonna be about just producing a lot
of value and being known to people
and being helpful to people, and having other people talk about you as well.
Putting out value into the world
(26:49):
generally is the best way to build
something that's sustainable.
And that's gonna be the
same on ChatGPT as well.
Should I be focused on building an
agentic AI app instead of a website?
It depends what your goals are.
Is this for building a business?
I think that websites are
increasingly less relevant.
Again, it depends what the business is.
(27:10):
Often you'll need a website
for people to check you out.
Basically, it becomes a piece of social proof, so someone can
at least go to a website and be
like, okay, yep, that's fine.
They exist.
They are an entity.
They are a company.
So that's still very useful.
It's almost like a business card.
It doesn't drive the business per se, but it becomes an element of legitimacy for the business.
So I think websites still
(27:30):
have a purpose like that.
In terms of driving the business, though, a lot of businesses do not rely on websites. And websites can drive business if you rank well on Google.
So search engine optimization, but that's
just one of many marketing channels.
There's also social media marketing.
(27:51):
There's also UGC, user-generated content.
There is advertising, so using
AdWords, using Meta ads, et cetera.
There are many ways to drive an
audience and many ways to drive
customers into your business.
So Google, SEO is just one of them.
So, if building a website and ranking for
(28:12):
search is a core part of your business plan, then yeah, maybe you still want to look at getting a website.
However, looking at what we're doing
right now with the drop in clicks,
with the drop in search, I would say
it's probably not the best use of time.
It's probably a risky way to build
a business knowing that this is
a channel that is potentially
collapsing from under our feet.
(28:33):
So I'd be looking at other things, like social media marketing, if you are building an AI agent, an AI app. So, we can all build stuff with AI.
Now that's a really exciting thing, but
we need to get it in front of people.
I dunno if you follow Greg Eisenberg, but he talks about the fact that we're now in an age of builders, that we can all code stuff in Lovable
(28:53):
and in Cursor, so that the barriers to entry for building are collapsing.
That means there's even more competition.
There's even more people putting
their ideas out into the world in
a physical form, a digital form.
What that means is distribution
becomes more important than
ever because it is very noisy.
I dunno if you've been on an AI tool directory recently, but it's basically a graveyard.
(29:15):
It is full of applications that
people have launched and they
haven't thought about marketing.
They haven't thought about who'd want it.
They haven't thought about
how they get the word out.
They haven't thought about
sales or anything like that.
There's no business behind it.
It is just, Hey, I built an app.
Do you want to use it?
And crickets, because there are so many
other people doing that at the same time.
So whether you build a website, whether you build an app, whether
(29:36):
you build a community, whatever you're building needs to sit within a business plan.
And AI has not changed that.
It still needs to be a good business plan.
We just use AI to leverage our work
and to work faster on that business plan.
But business fundamentals
are still in place.
The boring stuff, unfortunately,
but really important.
Google has done with AI exactly
(29:58):
what it did with social media. It doesn't understand it.
Yeah.
Do you remember what was there?
What was their social media?
I can't even remember its name.
The one with the circles.
Google Pl... Google Plus.
Yeah, it was Google Plus.
Gosh, that's so bad.
I even forgot it.
Yeah.
Google Plus.
Do you remember?
Google Plus?
They had the really cool idea of
having circles, so you could have
(30:19):
different groups that would see
different parts of your content.
So you could have a personal circle,
you could have a professional circle, you could have an audience circle, like customers, which is a great idea.
The rest of it, it was too little, too late, I think. With Google, it is such a large company and such a rich company that they have this policy,
(30:39):
basically, including the fact that you can work on your own projects, the 20% rule. They have this policy of: let's just throw some stuff at the wall and see what sticks. And then they shut down a lot of their projects afterwards.
So things like Google Stadia, which was a
really cool video game streaming platform.
So you didn't have a console,
but you could stream everything.
They were just a bit too early,
and the internet wasn't fast enough.
Now we stream a lot of our games,
(31:00):
or increasingly do, but Stadia was a
bit early, so they shut it down.
If you'd bought Stadia, if you'd signed
up for a premium one-year subscription
and bought the nice controller and stuff:
tough.
So Google do this with
their products a lot.
They will throw things at the wall and
see what sticks and then kill everything
else, which is a problem if you're
trying to invest in their ecosystem.
Because you might buy stuff, you might
(31:21):
start spending money, you might start
developing in a certain tool that
they release, and then they kill it.
They make it hard to trust them at
points because of the way that
they release things, which is a shame.
Bare Footprint is saying thanks:
I feel like I trained ChatGPT, so I'm
worried about changing to Claude.
Honestly, use whatever's easiest.
(31:41):
So I find most people will just
use the AI that they started with.
So there's a moat which appears,
especially with ChatGPT.
The more you use it, the more
it knows about you because
it has that long term memory.
So it makes moving from
ChatGPT really difficult.
So even if there is a
better model out there,
ChatGPT is in a great position now,
because ChatGPT knows so much
(32:03):
about us that it can perform better than
any model that you use out of the box.
So I think adding that long-term
memory was an incredibly clever idea
on ChatGPT's part; they're very clever.
They've created a moat which
stops us from jumping ship.
Jack says, as an accountant, how
much of a threat is AI to me?
So large language models are
(32:24):
not very good at mathematics.
They are language models, however, they
can call on mathematical tools to do
the actual calculations for them so
they don't do the maths themselves.
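That tool-calling idea can be sketched in a few lines of Python. This is a toy illustration, not any real chatbot's implementation: the CALC(...) call format and the function names are made up for the sketch, and a real system would have the model emit a structured function call instead.

```python
import operator
from decimal import Decimal

# Toy version of "the model calls a maths tool": instead of letting a
# language model guess at arithmetic, the host program computes the
# answer exactly. The CALC(...) format here is invented for the sketch.

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def calc_tool(expression: str) -> Decimal:
    """Exact decimal arithmetic for a simple 'a op b' expression."""
    left, op, right = expression.split()
    return OPS[op](Decimal(left), Decimal(right))

def answer(model_output: str) -> str:
    """If the (imaginary) model asked for a calculation, run it exactly."""
    if model_output.startswith("CALC(") and model_output.endswith(")"):
        return str(calc_tool(model_output[5:-1]))
    return model_output  # plain text answer, no tool needed

print(answer("CALC(0.1 + 0.2)"))  # exact decimal result: 0.3
```

The point is the division of labour: the language part decides what to compute, and a deterministic tool does the actual computing.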
So just plugging
accountancy questions into
ChatGPT, like giving it data,
won't work very well, because with
accountancy you don't
(32:46):
want to be creative with the numbers.
Like, by definition, a creative
accountant gets in a lot of trouble.
The way that large language models
and generative AI work is stochastic.
It is probability based, which means,
and this is why, when you put the
same question into ChatGPT multiple
times, it's gonna give you different
answers. That's no good in accountancy,
because you don't
(33:06):
want multiple answers.
That said, there are AIs being developed
which have a lower temperature,
so basically a lower creativity,
for want of a better word, which
can perform these tasks very well.
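To make the temperature point concrete, here is a small self-contained Python sketch. The scores are made-up numbers standing in for a model's preferences over three candidate words; real models apply the same softmax-with-temperature idea over huge vocabularies.

```python
import math

# Sketch of how "temperature" controls creativity: a language model picks
# the next word from a probability distribution, and dividing the raw
# scores by a temperature before the softmax makes that distribution
# sharper (low temperature) or flatter (high temperature).

def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]                 # made-up preferences for 3 words

hot = softmax_with_temperature(scores, 1.5)   # flatter: more "creative"
cold = softmax_with_temperature(scores, 0.1)  # sharper: near-deterministic

print([round(p, 3) for p in hot])
print([round(p, 3) for p in cold])
```

At a very low temperature, almost all of the probability lands on the top-scoring word, so repeated runs give the same answer, which is the behaviour you would want for something like accountancy.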
I would say that for any profession,
so this is the legal profession, the
medical profession, to a certain extent
(33:27):
accountancy, et cetera, any profession
like this, what's most at
risk are the lower levels.
So I talked yesterday, I think
it was about KPMG and a number
of the large advisory firms.
The big four are having reductions in
their graduate programs because they
need fewer people coming in:
they are training people on AI
and getting more work done with
(33:48):
a smaller number of people, and
investing that money into AI instead.
And same with Microsoft.
They just dropped 9,000 jobs, and at
the same time invested, I think, 60
or 80 billion into AI servers.
So this is what is happening
a lot in large companies.
So the risk is for lower level employees.
(34:09):
It's gonna be hard to, one, get the
positions, because there are fewer positions
opening up, and two, to train with people.
There's an interesting, I also
talked about this the other day.
There are fewer opportunities to
train within an organization, 'cause
more people are working from home,
so you can't shadow people anymore,
or it's harder to shadow people.
So AI is filling those gaps, in terms of
(34:30):
tutoring and in terms of replacing jobs.
So at low levels, if you are
studying or you've just entered an
industry like accountancy or legal
work, it's risky and it's gonna
be rough for the next few years.
If, however,
you are senior and retirement
is in sight, you're probably
gonna be fine, because industries
are relatively slow moving.
(34:51):
Especially protected industries like
the law industry, and any industry that
has certification and qualifications.
There's also the fact that if
you are a lawyer or an accountant,
there is legal responsibility.
So when you prepare somebody's tax
returns, or you prepare a case for
them, or take them on, oh, sorry.
Or you are a lawyer and you write a
contract for somebody, what that person
(35:13):
is doing by hiring your professional
services is passing the buck to you.
So you become
legally responsible.
An AI cannot be, at the moment.
So that is another
reason why some professions will
be protected, because we still need
somebody, a human being at the end who
is liable, whether it's an accountant
or a lawyer that we have hired.
So this will slow it down a bit,
(35:34):
but I think all of us, regardless
of our jobs, need to think about
how this is going to impact us.
We don't know.
And anyone who says that they know
which way it's going is probably BSing.
What I would say is, regardless
of where you are, it's a good idea
to have multiple options open.
(35:55):
So if your only income,
the only cash that you use for your
house and your family and living, comes
from a job, that's one source of income.
So if we're building a chair or a
stool, you wouldn't do that with one
leg 'cause that's very imbalanced.
So it's a good idea to be looking
at how we can flesh out our
income streams.
How can we get other sources of income
(36:17):
coming in? There are many different ways to do
this: start a business, start to consult
or advise; teaching people about artificial
intelligence is a good one right now.
But basically, setting up other
ways to make cash so that if our
salary, our nine-to-five job,
does come under threat,
or it gets removed from us,
we have options.
I think that's the only thing that we can
(36:37):
do that is proactively useful because we
don't necessarily know where we're going.
And we don't know the timelines either.
Oh, sorry, I'm just scrolling through.
Did you see the thermal
video of Grok data centers?
No, I didn't.
But I imagine they are fairly scary.
Let me have a look.
'cause the Grok data centers are gigantic.
(36:57):
I'm not seeing anything too scary.
It's the gas turbines?
If you show me the reference,
I'll be able to look that up.
Google Plus.
Yes.
Thank you.
Bill.
Jack says they also had
Google Duo and Allo.
See, this is what I mean.
Google have had so many products and
they've just disappeared into the ether.
There's actually a website
about like dead Google projects.
(37:19):
It's very funny.
Nixon says: love the green screen setup.
This isn't green screen.
This is my messy front
room, unfortunately.
Kent Clark, not Clark Kent, but Kent
Clark says, brother, do you have
any tips on copyrighting AI images?
I think it depends where you are.
It depends where you are, but
legally, right now, AI art or AI
(37:39):
images cannot be copyrighted.
I'm just checking what the current law is.
I think it depends.
It would depend on your
jurisdiction, but as far as I
know, AI art cannot be copyrighted.
Here we go.
AI images cannot be copyrighted.
However, the works they appear in, e.g.
storybooks, graphic novels,
would have a copyright
attached to the rest of the work.
(38:00):
AI images can be licensed for use, and
the type of license would depend on
the app you're using and whether the
license allows you to
produce commercial works.
So Midjourney only allows images produced
within the app to be used commercially
if you generated them, and if you generated
them under a subscription plan for them.
So you have to pay them.
So images produced under the free plan
or images produced by other people may
not be used by yourself commercially.
Okay?
(38:21):
So if you are creating an image,
basically you don't have
any legal rights to it.
It's gonna be, if anyone has any legal
rights to it, it's gonna be the company
who helped you to generate that AI
image, whoever created that model.
And even then, AI images
currently cannot be copywritten.
This is something that's being worked
(38:41):
through the law courts, and it's
gonna become more of an issue when it
starts to be used in things like film.
And when there's a lot of money being thrown
around, then I'm sure the filmmakers
will want to make them protectable.
But right now, as far as I know,
and somebody correct me if I'm wrong in chat,
AI art or images cannot be protected.
They cannot be copyrighted,
but I may be wrong there.
(39:05):
Phil's saying, Kent Clark, you can't;
you need to make changes to it.
Yeah.
And Kent, you're in the uk so I think
the answer is you cannot copyright them.
Jack's saying you didn't make the AI
image; you are the art
director and AI is the artist.
Yeah.
So I think that's the case.
Midjourney, or whatever AI tool you
use, they have the most rights to it.
(39:26):
Then we go down the rabbit hole of okay,
but that trained on other people's data,
that trained on other people's images.
So what are their rights?
That's a whole legal rabbit
hole that needs to be worked
out and has not been worked out.
But I think that you as a user of the
tool have very few, if any rights at all.
If anything, you're at risk,
'cause if you use a free image
generator and use that commercially,
to make commercial products, then someone
(39:48):
like Midjourney could come at you.
That said, I think Midjourney
are a bit busy now with
Disney trying to sue them.
Brandon Sanderson was
talking about this last week.
What was this about?
I love Brandon Sanderson, by the way.
Cool.
Those were the main topics
that I wanted to talk about.
So we had the release of
(40:10):
Grok 4, which is gonna be
coming out later today, potentially
mastering, well, not mastering,
but doing a lot better on Humanity's
Last Exam, based on the leak.
Who knows?
We'll find out I guess later today.
Also, we'll know whether that's
a load of rubbish or not.
However, that was also tempered by the
fact that Grok went off the rails recently
and started spewing antisemitic hatred.
(40:32):
So it's an interesting day
and we'll see how that goes.
And then the other thing was
the new lawsuit being brought
against Google for
their AI Overviews by independent
publishers in the UK and the EU.
So those are the big stories.
today. I will be back tomorrow,
as I am every single day, doing
a live update on the news.
(40:53):
And again, all of this is
collected and tidied up.
'Cause this is more rambling.
It is tidied up in YouTube
videos and a newsletter which
comes out every single day.
So my business partner will now tidy this.
He will get rid of my ums and uhs and my
kind of rambling, me talking about Brandon
Sanderson probably, clip it into shorter
videos and into shorts which go over
(41:14):
various different social media channels.
And we prepare it into a short kinda
summary email, which goes out every
day, which has time codes, which
take you to the specific video.
So you don't need to sit and watch
me yammer for an hour, although
you're very welcome and this is the
best way to ask questions as well.
If you ever just need to catch up.
Go and check the YouTube videos,
go and check the newsletter
(41:35):
and everything will be there.
And that's all free, obviously.
So,
fantastic.
I'm gonna head off now and I'm
taking the rest of the day off.
I'm actually going to the zoo,
so everybody have a lovely day.
Goodbye.