Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
ChatGPT was such a surprise. You know, ChatGPT was a jump ahead which I think surprised even the people who'd been working on it. How quickly after the World Wide Web had been established did you start to feel that people were going to do bad things with it? It was the 2016 elections. That
(00:20):
was a bit of a wake-up moment. People said, you know what, people may have been manipulated. There's been this assumption that we could create, you know, the first law of AI: you will not kill humans, you will not harm your creators. Those days have gone.
Hello and welcome to Ways to Change the World.
(00:42):
I'm Krishnan Guru-Murthy, and this is the podcast in which we
talk to extraordinary people about the big ideas in their
lives and the events that have helped shape them.
My guest this week has changed the world in extraordinary ways
and ways you may not even realise that you use day in day
out, whether it's on your personal computer, your mobile
phone or even your television set, because Sir Tim Berners-Lee
(01:04):
is the inventor of the World Wide Web, which underpins pretty
much everything we use on the Internet.
And his book, This is for Everyone, is a description of his own life. It's autobiographical, but
it also examines what has happened to the World Wide Web,
how it's been used by social media, where it's going in terms
(01:25):
of AI and how he feels we can build a hopeful, constructive
and good future. So Tim, thank you for coming.
This is a really hopeful book, in that you say it is still
(01:45):
possible to build a technological world in which we use the Internet in good ways, for good.
So there are people out there, non-profit people, open-source programmers and so on, who are all excited about this idea, because they can see a way in which it could be different, could be
(02:05):
better. We could have control of our data; we could have our digital sovereignty back.
So yeah, I'm excited about this new future.
I mean, the book is called This is for Everyone, and that was your founding principle.
That was the thing I said at the Olympic Games, when they gave me the chance of
(02:26):
sending a message to everybody in the audience. I said: this is for everyone. Yeah.
What did you mean by that at the beginning? What was your thinking at the beginning?
Because what you've done very clearly, actually, and really interestingly in the book is sort of describe how, you know, this wasn't a sort of a ping moment in which the World
(02:47):
Wide Web was conceived. It was a gradual evolution of thinking. But you were always thinking of it in terms of being for the greater good, for public service and for everybody.
It had to be universal. I wanted to be able to put anything on it. Even when I was sitting there at CERN with a lot of physicists all around me, I felt that the
(03:09):
system itself had got to be able to work with anybody. Anybody should be able to use it for anything.
So having designed it to work for anything, it ended up with more or less everything on it. Sometimes it seems to have everything on it.
So in those very early days, you're there at CERN and you're working on this idea of linking things.
(03:33):
At that point, what was your imagination as to where this was going, what it could be?
The idea was the web should be something which was very, very read-write. It should be a two-way thing. Any idea you had, you should be able to put into the web, and then anybody else browsing through that part of the web would immediately see that idea,
(03:56):
see that link, see that connection to something else. So that was what I wanted to build, for my own benefit first, and then I wanted everybody else to have the power of it. So originally it was a very collaborative system. That was the idea. It was supposed to be creative, collaborative and compassionate, if you like.
(04:19):
But maybe with Web 1.0, to a certain extent you can say it did feel like that. It felt like a very creative space to be in, originally, even before the first search engines, when people were making blogs and anybody could make a website. What was the first corrupting
(04:39):
force? Was it money or credits?
Well, I think a lot of people would point to advertising money, because, you know, we didn't get micropayments in the system. We didn't get ways of easily subscribing to things. And so for most people who just started a blog, the easiest thing to do was
(05:00):
just turn on advertising and collect the money. But advertising itself is a perfectly reasonable way of earning a living, so advertising itself isn't the problem. The problem came, I think, much later on, basically when the very powerful systems like Facebook and Instagram ended up not just advertising to you,
(05:26):
that's fine, but producing your stream in a way which will get you addicted. So it's the addictive systems which I think are the problem.
Yes, I mean, again, you explain this very clearly, I think, in the book. What is it that they are doing to keep us using their
platforms?
When you design something like
(05:47):
Instagram, then you have code in the server. Every time you click on something, every time you scroll, there's code in the back end which thinks: what shall I send? What shall I send this guy? And then they look at a huge amount of
(06:10):
data they've got about you. And the way the system is built, it's AI, so it's trained. You encourage the AI when it's doing something good. And what they do is they encourage the AI when it keeps you on the platform for longer. Now that was a conscious choice. So the AI is trained to keep people on the platform
(06:30):
for longer. So it ends up giving them things which make them angry. It could have been trained instead to make them feel more creative or constructive. You could look at to what extent people are getting involved in collaborative conversations on Instagram instead of getting angry.
(06:51):
And if you had just changed the way the AI works by training it for that, then we would be in a much better position.
Would it be as effective? I mean, this takes us to the essence of human nature, doesn't it? Are we just more likely to stay engaged with something when we are angry than when we are creative and happy?
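The design choice described a few turns above, rewarding the AI for time on platform rather than for something pro-human, can be sketched as a toy reinforcement-learning loop. This is a minimal illustration, not any platform's real ranking code: the content categories and their effect numbers are invented for the sketch.

```python
import random

# Toy sketch (hypothetical): a bandit-style feed ranker learns whatever
# behaviour its reward signal rewards. Categories and numbers are invented.
CONTENT = ["outrage", "news", "hobby", "friends"]
EFFECTS = {
    "outrage": {"minutes": 9.0, "constructive": 0.1},
    "news":    {"minutes": 4.0, "constructive": 0.5},
    "hobby":   {"minutes": 5.0, "constructive": 0.9},
    "friends": {"minutes": 6.0, "constructive": 0.8},
}

def train_ranker(reward_key, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: returns the content type the ranker learns to push."""
    rng = random.Random(seed)
    value = {c: 0.0 for c in CONTENT}   # estimated reward per content type
    count = {c: 0 for c in CONTENT}
    for _ in range(steps):
        if rng.random() < eps:          # occasionally explore
            c = rng.choice(CONTENT)
        else:                           # otherwise exploit the best estimate
            c = max(value, key=value.get)
        reward = EFFECTS[c][reward_key] + rng.gauss(0, 0.2)  # noisy feedback
        count[c] += 1
        value[c] += (reward - value[c]) / count[c]           # incremental mean
    return max(value, key=value.get)

# Same learning rule, different reward, different behaviour:
print(train_ranker("minutes"))       # time-on-platform reward favours outrage
print(train_ranker("constructive"))  # a pro-human reward favours creative content
```

The learning rule is identical in both runs; only the reward differs, which is the "conscious choice" being described.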
(07:15):
No, you're right. There'll be more. So that's why people spend more time on, maybe, Instagram and TikTok than on Pinterest. But, you know, there are places like Pinterest; there are lots of places. There are lots of wonderful things out there where you can collaborate with people, you can share ideas, you can
(07:36):
plan a party, you know, whatever it is you want to do, where you can do it in a way that is actually pro-human, which is that it won't make you feel bad at the end of the day, won't make you worry about your body image.
I mean, you've met all of these people, the tech titans now. Some of them. You met Mark Zuckerberg.
(07:58):
I've met Mark.
What is your impression of him?
Well, I met him quite a long time ago. So my impression of him then was that he was very keen, and he was blogging about how Facebook should save the world, that everybody should be able to meet on Facebook, that they should be able to solve the
(08:21):
world's problems. Climate change should be solved because people would have Facebook groups about it, which would be very effective. Back then, I think that was his vision. That was before he got involved in everybody using his goggles.
But now you say there are
(08:42):
engineering solutions to these things, that you could change them and that we could build social media that is positive.
Yeah, I think so.
How would we do this?
So Yuval Noah Harari has suggested, in his book Nexus, that we should legislate. We should have regulations that
(09:03):
say you can't use an addictive algorithm to make someone addicted. You know, we have rules against addictive drugs. If you have an addictive platform, he says, that could be illegal.
The people who make it addictive, they know who they are. They know, when they write that code, when they train the AI to keep you on the
(09:26):
platform, that they're making it addictive. They could train it for something else. We know a lot about this now, and so you could
make it illegal.
But could you take X or TikTok or any of these platforms that we all see our children addicted to, in perhaps, you know, a
(09:48):
non-definable scientific sense, but they seem addicted. Could you change them? Could you change X and Instagram and TikTok and make them non-addictive?
Yes.
It's just a question of changing the algorithm?
Yes. And they would find that people would switch more often between,
(10:10):
you know, Instagram and TikTok, probably. They wouldn't get stuck on one platform, but the total amount of engagement could still be very high. You know, the platform could still work. It could still be commercially successful.
It could still be commercially successful, I believe, yes, if you take out the addictive part of the
(10:31):
algorithm and leave in an algorithm which does encourage people to stay on the platform, but not in an addictive way.
How do you think it should be done? Should it be done through law? Through regulation?
Well, we've waited a long time. I think that's what we've got left, because the platforms have had a
(10:53):
long time to be able to do that. But also, if you work at Instagram and you're listening to this, you know, maybe you could go to your boss and suggest that we could try tweaking the algorithm to make it less addictive and see what happens.
Because, I suppose, the argument that's always given against that is: well, you can't legislate as one country when it comes to the
(11:15):
Internet, because the Internet is global. And even if people are in one country, they can use VPNs and everything else to pretend that they're in another country. So would it be possible, do you think? Could Britain make the Internet in Britain a safer place, a less addicted place, or would the world have to be working together?
It would help.
(11:36):
It always helps if the different jurisdictions work together.
So that's why, like with the...
But is that the only way of doing it, or is it possible to do it within jurisdictions?
You could do it within a jurisdiction. Like when Europe
(11:57):
introduced GDPR: that had a huge effect worldwide. The privacy guidelines had an effect within Europe, but even though the laws were only Europe-wide, it still had a big effect.
How quickly after the World Wide Web had been established did you
(12:21):
start to feel that people were going to do bad things with it?
I think it was the 2016 elections that were a bit of a wake-up moment for me and a lot of other people. I was surprised about the elections, but it wasn't the
(12:42):
results of the elections that I was concerned about. It's that people said, you know what, it may have been the web that actually led people to vote. People may have been manipulated by third-party countries, by nation states, or by the companies. They may have manipulated the people to vote for things which
(13:05):
are not in their best interest. And so that was, I think, a turning point.
You know, Nick Clegg is out
there selling his book at the moment. He used to work for Meta. He's saying, well, you know, there's no proof that using Facebook would ever change the way people would vote, for example. Clearly your suspicion is that it did. And if you look at the data that Facebook was gathering on people
(13:26):
and the marketing, it was very, very specific.
Yes. If you look at the marketing materials, if you listen to the Cambridge Analytica people talking, for example, I think it was very clear that all that money wouldn't have been paid to Facebook for advertising by all the various political groups if they didn't think that what people see on Facebook would change how they voted.
(13:48):
People might also be surprised that Facebook is the World Wide Web. Can you just explain how all of these apps, where you don't type in a URL or anything, are still based on your invention?
So when you go to facebook.com, you're using the web. And if
(14:11):
you're using an app, then there's usually the web underneath, because the app runs talking to you and getting you to talk to your friends. And then when it needs to send data, when it needs to, say, search through who follows whom and that sort of thing, all the data about who follows whom is back on facebook.com on the back end.
(14:34):
And so your app uses the web to go and pick up that data, the social graph basically, from facebook.com. So all the data about you is stored on the back end, whether you're using the app or the website.
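What "the app uses the web underneath" means can be sketched in a few lines. The host, path, token and payload shape here are invented for illustration (the real Facebook APIs differ), but the mechanics, an ordinary HTTP request to a back-end URL that answers with data, are the same web machinery a browser tab uses.

```python
import json
from urllib.request import Request

# Hypothetical sketch: an app screen asking "who follows Tim?" becomes an
# HTTP GET to a back-end URL. Endpoint and token are invented examples.
def build_followers_request(user_id: str, token: str) -> Request:
    url = f"https://graph.example.com/v1/users/{user_id}/followers"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

def parse_followers(body: bytes) -> list:
    # The back end answers with JSON; the app just renders names from it.
    return [f["name"] for f in json.loads(body)["followers"]]

req = build_followers_request("tim", "example-token")
print(req.full_url)  # the same URL/HTTP machinery a browser would use

canned = b'{"followers": [{"name": "Alice"}, {"name": "Bob"}]}'
print(parse_followers(canned))  # prints ['Alice', 'Bob']
```

The point being made in the conversation is exactly this: the app's screens are a front end, but the data trip is a web request.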
But this is not how you envisaged the World Wide Web
(14:57):
would be used, is it?
Well, I imagined we'd have things like Facebook. What I didn't know is that Facebook would become... it seems like everybody's on Facebook. People are on Facebook or LinkedIn or both, and they're frustrated because Facebook and LinkedIn don't talk
(15:18):
to each other. If we had standards, then you'd be able to do something. You know, with email, you can email somebody whether they're on LinkedIn or Facebook. Email goes across all the boundaries. You can email somebody using your Apple account when they're on Google. So you could legislate that that
(15:40):
must work, that you must be able to share your Facebook photos with your LinkedIn groups. Social networking has never been standardised. People have never been required to be interoperable between the different networking sites. But you could, and to a certain extent, Solid is the standard which would allow
(16:03):
Facebook and LinkedIn to talk to each other.
Just to explain, what is Solid?
So it's a protocol. It's like HTTP; it's a version of HTTP. But it's also a sort of lifestyle change for the web. When you use a web app at
(16:25):
the moment, the app, typically the one you use for facebook.com, assumes that all your data is going to be stored on Facebook's back-end servers, as we were just talking about. But in the Solid world, the app says to you: where do you want to store this data? Would you like to store it on your G Drive? Would you like to store it on your iCloud? Would you like to store it on
(16:47):
your Dropbox? And so on. Because Solid is a protocol, a standard. Solid originally stood for "social linked data"; that's why we called it Solid. So Solid allows your application to store the data wherever you like, and then you're in complete control of it.
(17:08):
It goes into your data wallet, and then you can control who has access to it.
And so you have your data rather than the social media companies.
Right. Yes, exactly.
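The data-wallet flow just described, the app writes to storage the user chose, and the user rather than the app grants access, can be sketched as a toy. The class and method names here are invented; the real Solid specification is an HTTP-based protocol (solidproject.org), not this in-memory model.

```python
# Toy sketch of the Solid idea: data lives in a pod the user chose, and the
# user (not the app) controls who may read it. Names are invented examples;
# the real Solid protocol is an HTTP-based specification, not this class.
class DataPod:
    def __init__(self, provider: str):
        self.provider = provider   # e.g. "Dropbox", "iCloud", self-hosted
        self.store = {}            # resource path -> data
        self.acl = {}              # resource path -> agents allowed to read

    def write(self, path, data, owner):
        self.store[path] = data
        self.acl[path] = {owner}   # only the owner, until they grant more

    def grant(self, path, agent):
        self.acl[path].add(agent)  # the user widens access, not the app

    def read(self, path, agent):
        if agent not in self.acl.get(path, set()):
            raise PermissionError(f"{agent} may not read {path}")
        return self.store[path]

# An app writes to whichever pod the user chose, not to its own servers,
# and the same photos can then be shared with an app on another network.
pod = DataPod("Dropbox")
pod.write("/photos/holiday", ["img1.jpg", "img2.jpg"], owner="tim")
pod.grant("/photos/holiday", "linkedin-group-app")
print(pod.read("/photos/holiday", "linkedin-group-app"))
```

The design point is the inversion: access control lives with the data in the user's pod, so interoperability between silos falls out for free.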
I mean, is it possible to move all of those social media giants onto this standard that you're proposing?
But we can build...
(17:30):
Without prohibitive costs and without breaking their business models?
Facebook could start offering to store your data on a Solid pod quite easily. You know, they've looked at using that sort of technology before. Years back, I talked to some of their engineers about that sort of
(17:52):
thing. So it's the sort of thing where Facebook could decide: you know what, it's most important for us that you're on the app, so we're less worried about whether we store the data. By default we'll store the data on facebook.com, but if you come along with a data wallet and you want your data, your social
(18:17):
graph, stored there, then we'll do that too.
So you could fix social media. You could fix the algorithm, and you could fix the data ownership issues as well?
Yeah. And remember, that's not crazy, because social media is only this little tip of the iceberg of things which
(18:38):
seem to be bad out there. There's an awful lot of good stuff on the Internet already, and social media would join all the good things like Pinterest and Wikipedia and so on.
How do you feel about the way social media has become so dominant now?
Well, I think the
(18:59):
monopolies are never good for innovation. If I was a small company trying to get some funding to start a new social media platform, the VCs who I went to for funding would say: well, yeah, but how are you going to get people off Facebook? Everybody is on Facebook.
(19:22):
And so as a result, the actual innovation we get in social media tends to be what happens at Facebook's labs, nowhere else. That's not good for innovation, and it's not good for users. So in general, I think we need to push back against the monopolies.
I mean, isn't the fundamental difference between you and all the other
(19:43):
pioneers that they have become ultimately motivated by money and you're not?
Well, yes, but now I have this company, Inrupt, which I co-founded a few years ago in order to fix it, so you can make money as well building the new systems which are pro-human.
Which promote the data sovereignty of individuals.
But you're asking them to make less money, aren't you? Or do you think they could carry on being as wealthy as they are?
I think they could carry on being very wealthy.
(20:27):
And if their monopoly was broken, if Facebook and Instagram were split up, then I imagine Meta would be smaller if you peeled Instagram back off it. But they would all be very healthy businesses if you just got them to store
(20:50):
your data in your data wallet instead of in their cloud.
So what is Inrupt?
Inrupt, the company, was started like seven years ago or so. In Inrupt, the "in" is for innovation and the "rupt" is for disruption,
(21:13):
hence inrupt.com. And it was started in order to build a new world. So we just went ahead and built an enterprise-scale, scalable, secure version of a data wallet. And that data wallet is in use: if you're in Flanders and the Flanders
(21:35):
government does anything with you on the web, your personal data goes through your data wallet. So we've got like six or seven million people out there who have got data wallets, and who are using the app software behind it for the cloud storage.
So I was looking back at our
(21:58):
interview 10 years ago, and you were warning us then about the dangers of gathering all this data, and you were saying no system is foolproof. You know, every system can be cracked, and you will always run the risk of bad actors getting their hands on
(22:19):
people's data and using it for nefarious purposes. Do you think that is still the case, or is it possible to build a secure system?
Well, systems like iCloud and Google Cloud sort of compete with Solid data wallets, so they're in the same space. They all have to be very, very careful to be very, very secure. And a huge amount of time and effort is spent making things
(22:41):
like Dropbox and Google Cloud and iCloud secure. And so a huge amount of effort is put into making Solid pods secure.
Do you think your fears from 10 years ago turned out not to be true?
It's a constant battle between people who are trying to break the systems and people trying to keep
(23:04):
them secure. To a certain extent, people argue that if all of the data is in one pod, then the eyes of the entire world are on the security of that system. Whereas if your data is partly in Facebook and partly in Google and partly in Pinterest and partly in Instagram, then any of those can be attacked,
(23:27):
and so you could argue that it's more likely that one of those would break. But I think in general, yeah, someday there will be a successful attack on a data pod. But hopefully that day will be a long way off.
So how vulnerable do you think
(23:50):
we are now to cyber attack? Which I know is a word that covers all sorts of things.
Well, that's a very broad question. Yeah, we're vulnerable. Everything, at every point. There's always somebody looking at every attack surface, every interface between people, every interface between people and
(24:13):
machines. So there's always somebody looking at a way to break into that. And so you just have to have constant vigilance.
So can we then move on to AI, which you also move on to in the book? What you're saying there is that AI can be a huge force for good,
(24:38):
but it's also very, very dangerous. How do we make AI safe?
I'm one of the people who believes that if you make an AI, this presence in the cloud, and you make it smarter than yourself, then that's an issue.
(25:00):
Because if it's smarter than you, it's going to be smarter than you. So if you're trying to contain it, you know, maybe it'll persuade you that it should be given complete control over you. Maybe it'll persuade you to put it in charge of a company where all of the people running the company are AIs.
(25:22):
And maybe that company will then progress and become more and more powerful, more and more intelligent. Maybe that company won't be aligned with the interests of humanity. So I believe, and there's a lot of people, AI researchers, people like Geoffrey Hinton who got the Nobel Prize, a
(25:42):
lot of people in that space, who believe that if you make something smarter than yourself, that's an issue: you have to contain it. Mustafa Suleyman as well wrote a book about this, The Coming Wave.
So I believe that, in a way, CERN was a great place to work. CERN was put together to make a
(26:07):
place where people would work internationally on nuclear physics, at a time when people didn't understand how the atom worked, and where there could be safety measures. So something like CERN for AI, I think, would be a great idea.
Would that work, given what you've said that, you know, the
(26:28):
basic starting point is you're trying to create something that's cleverer than you? I mean, doesn't that mean you have to have it in the box? You have to put it in a box.
So can you put it in a box is the question. Can you create it and keep it in a box? I think that's the most exciting thing to do. Yeah, I think it's more
(26:52):
exciting to do it than to just not create it.
Right. So is there anything that's happening right now that you think we should stop, or do you think we haven't got to that point yet?
I don't think we've got to that point yet. But it's also very hard to estimate, because ChatGPT was such a surprise. You know, ChatGPT was a jump
(27:14):
ahead which I think surprised even the people who'd been working on it. So we can get jumps ahead which surprise even the people working on them.
So is there a danger that the moment where we sort of cross the red line, if you like, could happen and we don't know?
Yes, that could happen. Yes, I think it could.
Well, doesn't that mean
(27:37):
that there isn't a safe way of developing it?
Well, I think that you have to think very hard about making a safe way, and there are people out there who are doing that. And I'm not an expert, of course, so I can't evaluate their
(27:57):
methods. But I trust the people who think it's an issue and who are working on it.
And do you use AI yourself now?
Yeah.
What do you use it for?
I use it to answer questions; I ask it for help solving problems and with code. When I get an error message which is really incomprehensible, then I go to
(28:26):
ChatGPT or something. So yeah.
And do you find it surprisingly good?
Yes, I do. Yeah. I wrote this paper about a thing called Charlie. And for me, the fact that it works for me is really important, because when I wrote about Charlie, we had Alexa, which worked for Amazon, and we had Siri, which worked for Apple.
(28:51):
And I imagined this Charlie. Who's Charlie? You know: Charlie, who do you work for? Well, I work for you, just like your doctor or just like your lawyer. And so I found that really useful.
And in fact, at Inrupt we've demonstrated Charlie. Charlie has access to your
(29:16):
personal data, and you ask it a question about what running shoes you should buy, or whether you should take time off, or something like that.
And typically, you know, something like Claude, a normal AI, will
(29:36):
give you some very general advice about what running shoes to buy. But with Charlie, you give it access to your Strava fitness data and so on. With Charlie, you've given data to it; it's your data.
(29:56):
And so it comes back and says: well, Tim, it looks like you've run a half marathon; maybe you're trying to run a marathon, so you should get these running shoes. But you do a lot of training, obviously, and you've run a lot, so you shouldn't use your race shoes for training. And because you run through woods a lot, you should get some trail shoes for your training,
(30:21):
and I recommend these. And when it says "I recommend these", this is an AI which knows about all the running shoes in the world recommending which ones I should buy as trail shoes for running through the woods.
And it's doing that in my best interests, which is really, really important.
(30:41):
That's what we have to push for, and policy-wise we can also legislate for that.
And will we always be able to know, you know, what our best interests are, and how to make sure that Charlie really has our best interests, or what Charlie starts to think are our best interests, which might be different? Charlie could be wrong.
(31:02):
Exactly, yes. So we have to train Charlie to take our best interests into account.
I mean, for the last 50 years or so, there's been this assumption that we could create, you know, the first law of AI:
(31:23):
you will not kill humans, you will not harm your creator. Do you think those times have gone? Were those naive thoughts?
That was the First Law of Robotics, as I got it from the I, Robot series: a robot will not harm a human
(31:45):
being. Those days have gone. Those days have gone because Isaac Asimov wrote those in the era of rule-based AI, when the robots people made were all based on rules. And, you know, the code we wrote back at MIT in the early days of the Semantic Web, it was all rule-based.
(32:07):
It was exciting code, but we were trying to make things work, to make the computers do the things we wanted, the things they could do for us so that we wouldn't have to do them. And we'd build more and more complicated rule sets, so that it would fill in our texts, or help us figure out, you know, our fitness and our health. But everything was rule-based.
(32:29):
And the language models, things like ChatGPT, they're not rule-based. So you can't put the First Law of Robotics in there. You have to train them not to do that. And every time it, you know, harms a human being, you have to say: no, that
(32:51):
was wrong.
What do you think we should be doing with our kids? I mean, the other thing you talk about in this book, you know, is your childhood and how you were taught by your parents. There's an assumption at the moment, I think, with young people, that we should be teaching them all to code. We should teach them all to be engineers.
I've always said that, yes.
But do you think that's
(33:12):
still the case, that you should teach the kids to code?
And do you think that's still the case despite AI? Because the other fear people have is: well, you know, I teach my kids to do all this stuff, and then AI comes along and does it better. Will they have jobs in a world in which AI can do what they can do better?
(33:34):
I think it helps to code, like it helps to be able to read and write. It helps to be able to be creative with English, it helps to be able to explain things, and being able to code is one of the tools which I think is important even when AI is doing a lot of the coding.
AI may be doing a lot of the essay writing as well, but if
(33:56):
you're one of the people who are using AI tools to do more and more powerful things, then I think I'd still put coding on the list of things to learn.
We're not doing very well on
on the list of things to learn. We're not doing very well on
that front, are we really at themoment?
We're teaching girls to code. And.
If you're, if you're involved ina, in one of those projects in
(34:19):
Africa, in, in, in Nigeria or Kenya to teach girls to code,
then, you know, keep up the goodwork.
So the yeah, we need to spend more time or more, more effort
doing that sort of thing. Ultimately, are you, are you
optimistic? I mean, you've said clearly there are ways to solve all of these problems and
(34:42):
to develop a better future. Are you hopeful that we will?
I am, yes. I am hopeful, yes, even though we're in the situation we're in at the moment. But on the other hand, there's the spirit of the open-source community. The people who are building this stuff, some of
(35:03):
the people I quoted in the book, just feel it would be so neat if we could get these systems working: I'm going to work on them just because, if we did, it would be so good. So yeah, it's that sort of spirit of the open-source
community. That's what makes me hopeful.
(35:25):
And, you know, we've spent this conversation talking about all the ways in which you could change the world and have changed the world.
Yeah.
If you could do one thing now, what would it be? If you could change things with one
(35:47):
fell swoop?
I think I would introduce Solid. I would get all of the collaborative apps that people are building to work with the data pods. That would mean that people would be able to share things with anybody, whether they were on LinkedIn or Facebook. It would be as though Facebook and LinkedIn were all merged together.
(36:09):
They would be able to run different apps, because they'd have a choice of apps. For example, let's suppose they use one app for making a slideshow, and then they send the pointers to the slideshow on the web. They share it with their kids and their family and friends.
(36:30):
And then the family and friends would all go to the slideshow, but each one would use a different app for actually viewing the slideshow or editing the slideshow. Because the slideshow would be this standard thing out there, stored in their data pod.
But what do you think it is that
makes you and people like you driven by "this is for everyone" and public service, when some are
(36:54):
clearly driven by personal wealth and power?
Maybe it's that once you get on one track, then it's hard to get off it. I'll pass on that one.
So, Tim Berners-Lee, thank you very much indeed.
(37:14):
Thank you for sharing.
Thanks for having me, it's been great.
It's been great. I hope you enjoyed watching that. If you did, then give us a rating and a review, and then other people will find the podcast. You can watch all of these interviews on the Channel 4 News YouTube channel.
Until next time, bye bye.