
February 11, 2025 · 40 mins

Welcome to season 3! My guest for this episode is Dr David Bray. He has been thinking deeply about AI for many years. We had a wide-ranging chat about AI, cybersecurity, new AIs, state-actor attacks and the ongoing grey-zone war, the need to use data to maintain free societies, and the importance of working locally and building community.

Some things that we touched on were the need to secure our hardware and data supply lines, and the need for human agency – data as a kind of human voice. David is a big advocate for data rights to be managed via existing contract law, which seems like a good idea to me.

https://datarevolution.tech/2025/02/david-bray/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to another episode of the Data Revolution podcast.

(00:06):
Today my guest is Dr David Bray, an old friend from the US who has a very distinguished background.
His story is he basically started working for the government, he tells me, when he was about 15.
He got a business PhD, ended up in Afghanistan, ended up as a CIO in government, and then did amazing stuff with all sorts of people including Vint Cerf.

(00:29):
So welcome David.
Great to be here with you, Kate. And yes, I fell on my head at an early age and it made all the difference.
Tell me a bit about your journey though, because I think it's really interesting.
Sure. I mean, you know, there's the phrase that life only makes sense in reverse. I would say I became my parents.
My mom was a school teacher. My father was a Methodist minister.

(00:54):
My mom was Catholic before she agreed to marry my father, so she had to leave her faith and I saw what that meant.
And his skill sets were actually healing fragmented congregations and capital planning.
So I realized I became both of them, just in a different setting, at the intersection of society, data and tech.
And it really is trying to heal where there are different coalitions, different groups, that all think there are different directions you should go with data and tech.

(01:21):
Sometimes I succeed in building coalitions and actually succeeding in doing large projects.
Other times, you know, all the coalitions want you to be exclusively on their side and that's a really difficult position.
But, you know, government found me because I was good with computers, so I had to get a work permit; later I was working in a classified capacity at age 17 with the Missile Defense Organization and some of the work that was going on there. I worked for Microsoft and Yahoo,
came back and dealt with the response to 9/11 and the anthrax events in the United States in 2001.

(01:51):
Later came the original coronavirus. I got a PhD focusing on when more people in tech make better decisions and when more people in tech make worse decisions, because that also happens.
And then I did what everyone does after postdocs at MIT and Harvard: went to Afghanistan.
I raised my hand about 45 days in and said, why are we still here? I gave briefings, but of course that was 2009-2010 and I'm not sure everyone was ready to listen to that at the time. I became a Senior National Intelligence Service executive, and along the way got the

(02:21):
opportunity to be the one nonpartisan, with six Republicans and six Democrats, in the United States reviewing all the research and development programs of the US intelligence community, then parachuted into a role at the Federal Communications Commission, where they'd had nine CIOs in eight years. Always a great
sign for CIO number 10 that it's a terrific job.
And on top of it they had had two advanced persistent threats prior to my arrival. So I couldn't even trust the IT systems I was inheriting.

(02:49):
And so we eventually moved everything to either public cloud or private hosting, which was partly to remedy some of the APTs, and it saved the taxpayers millions of dollars.
I went to go work with Vint Cerf with the People-Centered Internet coalition, and experienced a disinformation attack around that time. Short version is it turned out, after the fact, that small groups from both political parties here in the United States had manufactured

(03:13):
public comments, and they were spamming the system. The challenge is, that's permissible.
And when I called it out and said it looks like there is effectively an application-layer denial of service, they said, well, that's not a network-layer denial of service. I said, I never said it was.
And they said, where's your evidence? I said, well, less than 1% of US government agencies receive more than 10,000 comments in a 120-day public commenting period.

(03:40):
We're seeing 7,000-8,000 comments at 4am, 5am, 6am US Eastern time.
And they said, well, that's not proof, for instance. I didn't think I needed a 'for instance'. They said, why didn't you report it to law enforcement? I was like, because technically it's not a violation of law, it's just spamming the system.
Anyway, four years later it turned out that what I said was right.
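A minimal sketch in Python of the kind of baseline check that would flag this pattern: thousands of comments arriving in hours whose historical volume is near zero. The numbers and helper names are illustrative, not the FCC's actual tooling.

from statistics import mean, stdev

def hourly_baseline(history):
    """history: dict of hour -> list of past daily counts for that hour."""
    return {h: (mean(c), stdev(c) if len(c) > 1 else 1.0) for h, c in history.items()}

def flag_anomalies(today, baseline, z_threshold=4.0):
    """Return hours whose comment volume is far outside the historical norm."""
    flagged = []
    for hour, count in today.items():
        mu, sigma = baseline[hour]
        z = (count - mu) / max(sigma, 1.0)   # guard against near-zero variance
        if z > z_threshold:
            flagged.append((hour, count, round(z, 1)))
    return flagged

# Hypothetical numbers echoing the episode: ~8,000 comments at 4am against a
# historical baseline of a few dozen.
history = {4: [20, 35, 18, 40], 12: [900, 1100, 950, 1000]}
today = {4: 8000, 12: 1050}
print(flag_anomalies(today, hourly_baseline(history)))   # only hour 4 is flagged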

(04:04):
And, you know you just soldier on.
And along the way I got asked to do another commission. So I'm one of those people that's both shot at by both parties but also occasionally brought in to do stuff. They wanted, again, this was now eight and eight from both sides of the political aisle, but also our friends and allies, including
Australia, but also the UK, Canada, Germany, India and Japan, to look at what we should do in terms of cybersecurity, supply chains, AI and trust in society, bio as well as space.

(04:33):
Despite the 'fun', in quotation marks, of doing anything that was bipartisan, we actually succeeded in doing so in 2020 and 2021. The report was hand-delivered to both the Congress and the administration.
And as far as I know, because again things are happening, about 50% of our recommendations have been implemented and have not been repealed.
Oh, that's great. So it's an interesting area, the conflicts between data and cybersecurity, and I'm really quite interested in that, because, you know, if you can't trust the data...

(05:07):
Then you can't trust anything. Data is the foundation for everything, and now with AI,
there are some pretty novel attacks that state actors can do. So let's talk a bit about that.
Sure. So actually just today I was talking with some colleagues in the AI and healthcare space. There is research showing that if you change 0.001% of the tokens associated with a generative AI system, you get really bad outputs when it comes to medical results. That's a very

(05:42):
slight, small data poisoning resulting in massive outcomes. So you're absolutely spot on that generative AI is highly dependent on the training data set. I would not recommend placing a lot of faith in DeepSeek.
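For scale, a back-of-envelope sketch of what a 0.001% poisoning rate means on a large training corpus. The corpus size and the token substitution below are invented for illustration; the research David mentions concerns medical LLMs.

import random

corpus_tokens = 1_000_000_000_000          # a trillion-token training set (assumed)
poison_fraction = 0.00001                  # 0.001% expressed as a fraction
poisoned = int(corpus_tokens * poison_fraction)
print(f"{poisoned:,} poisoned tokens")     # 10,000,000 tokens: tiny share, big effect

# Simulating the attack on a toy token list: swap a random 0.001% of tokens
# for attacker-chosen ones.
tokens = ["dose_10mg"] * 100_000
for i in random.sample(range(len(tokens)), max(1, int(len(tokens) * poison_fraction))):
    tokens[i] = "dose_100mg"               # a medically dangerous substitution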
And in fact, if folks are interested there, there's a lot of evidence that it is less than accurate if you ask it about certain news events. And yes, you can maybe remove the filter that says Tiananmen Square never happened. But there are still some things that make you just go, hmm.

(06:14):
But again, that points to: every AI's got its things that it won't talk about; Microsoft won't talk about certain things about its past. You know, so I think the main thing you need to do is understand the provenance, like who wrote it and where's it coming from, and use it for things where
you know what happened. We know what happened at Tiananmen Square; we don't really need to ask DeepSeek. I used it to reformat all my references from Chicago style to APA 7 the other day. There you go. And I assume it didn't remove anything it found objectionable or anything like that.

(06:49):
Well, but it also points to, I mean, you probably remember, do you remember Napster? I think right now generative AI is having a Napster-like moment, where it's like, pay no attention to how we got this, pay no attention to the fact that we're not respecting any real intellectual
property or equity of the people that might have produced this data. Just enjoy the product. And so I'm waiting for a better approach that does do both provenance of the data but also possibly respects the equity of it.

(07:19):
Just a thought. Yeah, that's a good thought; I wish more people had that thought. So the interesting thing for me about DeepSeek was how OpenAI were whining, oh, they stole our stuff, and everyone was like, oh well, you stole ours, we don't care.
And that's where I'm hoping either OpenAI realizes they've got to take a different approach, or maybe someone who's not right now a leader in AI will.

(07:44):
So, back in 2017, the UK put out its AI strategy.
Lord Tim Clement-Jones, I want to give him a shout-out, was actually chairing it, and he and I met and we worked together. They called out the need in the UK for what are called data trusts. You and I might call them data cooperatives or data collaboratives: the idea that, using contract law,

(08:05):
you can have people come together and say, we agree to have our data be used for the following purposes, to include being used for AI.
And the nice thing is, you don't need any government regulation. It's just existing contract law that says we as a group have come together and we're going to negotiate the terms and conditions.
So I'm actually volunteering right now with an effort here in the United States for caregivers of infants, from when they're born to age three and ideally longer.

(08:29):
How do you make sure they get the necessary physical, mental and emotional care? We're working on a chatbot to help them, because for some of them the number one source of information is TikTok.
I don't know if that's wrong, but I raise it because maybe we need to do more pilots, and we don't have to wait for government to do anything, that actually treat data as a data trust, a data collaborative, a data cooperative.
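A minimal sketch of the contract-law idea in code, assuming members of a data cooperative record the purposes they consent to and every data request is checked against those terms before release. The member IDs and purpose labels are hypothetical.

from dataclasses import dataclass, field

@dataclass
class MemberConsent:
    member_id: str
    allowed_purposes: set = field(default_factory=set)   # e.g. {"chatbot_training"}

def permitted_records(members, requested_purpose):
    """Release only the records whose owners agreed to this purpose."""
    return [m.member_id for m in members if requested_purpose in m.allowed_purposes]

members = [
    MemberConsent("caregiver_001", {"chatbot_training", "service_lookup"}),
    MemberConsent("caregiver_002", {"service_lookup"}),
]
print(permitted_records(members, "chatbot_training"))   # ['caregiver_001']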

(08:51):
Yeah, that kind of thing is a great idea. And it's using existing law, because lawmaking moves so slowly; if we wait for them to catch up,
we'll have moved on by then anyway. And in the United States at the moment, government regulation is not in fashion, for some mysterious reason.
Well, you know, I know there's an initiative; I was just googling it.

(09:14):
Artists can use data poisoning tools to fight back against AI scrapers, which is a reasonable response to the threat that they're under.
It doesn't help anybody, though. And I think the argument that the AI companies are making, that we've already stolen it, you just have to live with it, is not very cool.

(09:38):
And to follow that through: the Edelman Trust Barometer came out for 2025; they released it about two weeks ago.
Trust is just down across the board. But if you look at it, one of the things people report is they feel like they have lost choice and agency in how information is delivered to them, but also in how information about them is used.

(10:02):
And you look at who they are lacking trust in: in Western societies it seems to be a lack of trust in governments, and it seems to be a lack of trust in social media companies.
But it also seems to be, you know, increasingly extending to the AI companies as well, the ones that have done the Napster-like moves.

(10:25):
And so I would say, as a customer retention strategy, folks have got to change. But it's going to require a company that's bold enough to be willing to show a better model, and maybe it's going to be one of the non-incumbents at the moment that sees this as a market
differentiator to do that.
Well, I'm fascinated by Microsoft. I love Microsoft, I have to do a lot of work with them. But, you know, you're having Copilot whether you want it or not.

(10:51):
That model, none of my friends like it; you cannot opt out of it, and it's so hard to turn off. It's just the classic anti-pattern: you need to have Copilot even if you do not want it. It's just a ridiculous model and it's annoying a lot of people,
but you have to have it. Oh yeah, I mean, having been in Seattle, I actually had to edit the registry to completely block it from ever being loaded back onto my system.
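For anyone curious what such a registry edit looks like, here is a hedged sketch using Python's standard winreg module. The key path below is the commonly documented per-user policy for disabling Windows Copilot, but whether it applies depends on your Windows build, so treat it as an assumption and back up your registry first.

# Windows-only: uses the standard-library winreg module.
import winreg

key_path = r"Software\Policies\Microsoft\Windows\WindowsCopilot"
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    # TurnOffWindowsCopilot = 1 is the commonly cited policy value telling the
    # shell not to load Copilot (assumed; behaviour varies by Windows version).
    winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD, 1)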

(11:18):
I know not everybody can do that, which does get to another question, which is: how can we incentivize companies like Microsoft and the like to change their business model? Because I think, if anything, what we see right now with the AI companies is that they're doubling down on the
software-as-a-service model. They're just doing AI as a service now. And I think that's because they think they can get massive valuations; some of them can get IPOs and the like.

(11:47):
If we look at DeepSeek, DeepSeek was almost like the dart that punctured that balloon, saying maybe the future of AI is not massive valuations with AI as a service on some mega platform, but instead it's running locally.
And maybe it's open source, which means maybe you need a model that, dare I say, is consulting services around the AI, which is a different multiple than SaaS, but maybe that's the future.

(12:13):
Yeah, and I think what they did at the back end, doing some clever ways of processing, was really good and really worth looking into. And I've long said that we're all going to have LLMs on our phones that know us really well; they'll be personal to us and they won't leave our phones, for example.

(12:37):
And that's the kind of future that most people want: a useful LLM that actually knows us, not Copilot offering to help me rewrite everything that I'm writing, because I know how to write.
Yep.
But I think that does point to, because I was invited to speak at an event two weeks ago on the geopolitics of AI. Right now, the large tech companies in the West had been sort of building a competitive moat: access to high-end GPUs that nobody else could afford.

(13:10):
And in quantity; no one else could afford access. You know, the idea that you needed to turn on nuclear power plants to power these things, that was the competitive moat they were building, which actually in some respects disadvantaged startups in Western societies from ever entering
the field. They couldn't get the data, and who has access to a nuclear power plant as a startup? But what's interesting is the model that you're talking about. Exactly that is, one, I think what consumers want; two, I think it's what's necessary if we actually want to have choice and agency in

(13:42):
thought and in reality. And three, how are we going to preserve privacy otherwise? Now, I know some people say privacy is dead. I am not one of them. And I think, you know, the only way free societies remain free 10 years from now is if we give more agency and choice, both in our data but
also in when and where we run AI. The question is, how do we incentivize investors and VCs to reward those companies that do that, versus the alternative path?

(14:10):
Well, you know, they're just jumping on the latest bandwagon. That's what they do. We only ever make progress from the people on the outside.
Tony Tether was the director of DARPA for a long time. He was big into the future being machine learning, and that shaped the investments DARPA made.
You know, typically, that's why you and I do what we do.

(14:39):
And so, you know, a lot of what we've got came from DARPA.

(15:03):
And so the question is, what are the investments we need to make today if we want to have things that are privacy-preserving, freedom-preserving, etc., 10 years from now? Because we have to place those bets now. And one person I will give a shout-out to, if folks aren't familiar: take a look at the work of Karl Friston.

(15:30):
I was just having a conversation with someone yesterday from the FAIR Institute, which is about valuing cyber risk in organizations. We were talking about how generative AI is not really helpful for those organizations because they need deterministic models,
so they can be reliable and consistent. And, you know, we're trying to do it by shoehorning generative AI in to serve that purpose, but it's still not really working, and then we're trying to put RAG models with it. So it's problematic.

(16:05):
But that's the thing: if we can have that truly, properly deterministic, but still generative-like. I really see generative AI becoming the front end of everything, so we talk to generative AI because it understands us when we just talk like we are now.
And at the back end, it has some generative, some agentic bits, but also some deterministic bits, because when I want a factual answer, I want a factual answer.
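A minimal sketch of that 'generative front end, deterministic back end' split: the model only interprets the request, while factual answers come from deterministic code. The call_llm placeholder, the tool names and the toy ledger are all assumptions for illustration.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model of choice")

LEDGER = {"kate": 42.00}

DETERMINISTIC_TOOLS = {
    "account_balance": lambda user: LEDGER[user],   # exact answer, same every time
}

def answer(user: str, question: str) -> str:
    # Front end: let the generative model (or a lightweight classifier) decide
    # intent. Faked here with a keyword check to keep the sketch self-contained.
    intent = "account_balance" if "balance" in question.lower() else "chat"
    if intent in DETERMINISTIC_TOOLS:
        return f"Your balance is {DETERMINISTIC_TOOLS[intent](user):.2f}"
    return call_llm(question)   # open-ended conversation goes to the model

print(answer("kate", "What's my balance?"))   # deterministic path, no LLM call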

(16:35):
Right. Yeah, well, I agree 100%. And it's worth noting, again, there have been so many flavors of AI: there are expert systems, there are decision support systems, there are rule-based systems, computer vision. You know, computer vision is great, and it's not part of this current generative wave.
I agree with you, I think generative AI has proven to be really good if it's about a conversation through text or speech. It's also really good, and this is where I caution people in cybersecurity:

(16:59):
generative AI is only as good as whether the present and future are embodied in the past training set. Well, if I'm a cyber attacker, the way I will attack you is I will do something completely different that I think is not in your training data set, so your AI system never picks me up at all. Now, that does present the opportunity for
generative AI to establish what should be the normal patterns of life in your organization. You know, going back to my example of the 'fun' that I had in 2017, I was able to spot that and say, that's not a normal pattern of life.

(17:32):
Now, I didn't know at the time it was being done by political actors; I just said, that's not normal.
And so I think that's where it gets useful, in that you can say, what are the normal patterns of life for my environment? And then if something's outside the normal range: is it a hardware failure, a software failure, or an exploit?
But that also runs the risk that what the attackers will do is they'll just try to blend into what's normal, if they can do enough reconnaissance.

(17:55):
Yeah, well, you know, and I think this is where you've got a difference between the script kiddies who are attacking and the state actors; the state actors get in, get in deep, and watch for a long time.
And the script kiddies are just, like, drive-by.
Yeah, they're just opportunistic and everything like that. Well, and that gets to a deeper question, which is also two things. One, I know we're here to talk about data and AI, but I'm going to share: about two years ago, the US Department of Defense had a competition to see if any company could do what's

(18:30):
called a deep hardware interrogation at the chip level: is the hardware really what it claims to be and nothing else? There were 37 companies, and one company won.
You know, I'm not the type that gives visible endorsements; I'm just saying this company won. And as a result they were asked to take a look at two routers from a US equipment manufacturer that had been in use for the last 10 years in an underwater nuclear environment.

(18:58):
I'll leave it to your imagination and let you guess. Anyway, they ran it and said, we're getting a weird result, can we open up the box? They opened up the box, and four hours later they found two small Huawei daughter boards soldered onto that router.
That was two years ago. I raise that because, and it was clear it was not a one-off, what the Huawei daughter board was doing, when the router booted up, was loading into memory instructions to phone home.

(19:23):
I would submit we're now in a world in which we haven't verified the hardware. And I think we'd be surprised at how compromised it really is.
Well, one organization I used to work for, we used to buy American routers, and we used to at least buy the ones that were manufactured in Japan, and they used to be shipped directly here.

(19:45):
The rest of them got routed through the US, I wonder why. And yeah, there were reasons.
Yeah, yeah.
Should we say, the US is doing it too?
Everyone's doing it.
Even worse, we're even worse: there are now examples of those routers being compromised in our own country, in the United States.

(20:09):
So the actors are actually able, through access via the OEM or the shipper or something like that, to compromise it in-country. And so when I brief people on cyber, it's like, there is a gray-zone conflict going on.
People are staying below the threshold of a war-like event. But that does mean cybersecurity is one of the ways in; there are other ways, disinformation attacks on companies, that's another way.

(20:35):
Sadly, I'm also seeing blackmail of private-sector C-suite officers or board members: getting them in a compromising position and saying, as long as you do what you're told, everything's okay. And I'm like, oh. So yeah, this is a challenge.
So I think you've raised a really good point, though, and there's a kind of data-related point there, which is our supply chains. So there are our physical supply chains, the hardware.

(20:57):
There are our data supply chains, because if we're going to be using automated decision-making in future, we need to make sure our data pipelines are pure and haven't been corrupted.
And one of the things I really care about, especially in a medical context, is the immutability of the data. Like, if I've got a pacemaker, I don't want that to get hacked.

(21:21):
Right.
Well, and the challenge is, and that's where I think for the pacemaker what you hope is, you know, I'm not a fan of 'formal methods solve everything', but the amount of processing you need to do on a pacemaker is small enough that formal methods can be a way that you can actually enumerate all the
possible states of the machine, and you make sure you've mapped it out and it can't be exploited. But that doesn't solve everything. Like, you know, people who say we're going to use formal methods for your web browser, it's like, yeah, that's nice.

(21:52):
It's not going to work.
The moment I install a plugin it's going to break. But anyway.
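A toy illustration of why formal methods suit small devices: the state space of this invented pacemaker controller is small enough to enumerate completely and check a safety property over every transition. Real devices would use a model checker such as TLA+ or SPIN; this only shows the idea in miniature.

from itertools import product

RATES = range(30, 181, 10)        # pacing rate in bpm (hypothetical bounds)
MODES = ("sensing", "pacing")

def step(rate, mode, event):
    """Hypothetical transition function for the controller."""
    if event == "low_heart_rate":
        return min(rate + 10, 180), "pacing"
    if event == "normal_rhythm":
        return max(rate - 10, 30), "sensing"
    return rate, mode

# Exhaustively explore every (state, event) combination.
reachable = set(product(RATES, MODES))
for (rate, mode), event in product(reachable, ("low_heart_rate", "normal_rhythm")):
    new_rate, new_mode = step(rate, mode, event)
    # Safety property: the controller can never command a dangerous rate.
    assert 30 <= new_rate <= 180, f"unsafe rate {new_rate} from {(rate, mode, event)}"
print(f"verified {len(reachable) * 2} transitions")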
So really, because you know we've seen some really bad supply chain hacks. Remember SolarWinds? That was really egregious, and it woke people up to the fact that their software supply chains need to be managed.

(22:15):
And you can't assume that even a trusted vendor is giving you good software.
Right. Or that software you cleared once remains clear going forward; in that case the compromise came the moment an update arrived. I mean, also look at XZ Utils.
You know, open source software is a battleground.

(22:38):
And so circling back to AI and building on what you're saying.
Currently, some of the best tools for optical character recognition on GitHub are actually backed by PRC contributors.
What could possibly go wrong.

(23:01):
And again, I don't want to, I mean, I'm a globalist at heart as well, but I need to recognize that nation states are playing nation-state games, and they're doing it for various reasons. And I think this is where, for free societies, where we have the luxury of choice because we do have the freedom of choice,
we've got to figure out how we up our game, given that the very freedoms and the very openness that we celebrate also make us a target.

(23:26):
Yeah. And so this leads in, because, you know, people don't realize that data and technology are utilities to us as humans, but nation-state weapons to others.
And above our pay grades, for most of us, there's literally a war going on. And, you know, some quite substantial attacks; like, you know, normal people have routers that have been attacked.

(23:58):
And the routers don't get updated, as part of this war that different states are fighting.
Yeah. And yes, exactly, and it's also the idea that they can pour kerosene on a wedge issue in a free society; the nice thing about free societies is you can disagree.
But if they can throw accelerants on it and make the wedge issue seem even more extreme...

(24:22):
Or if you take out the connectors, those that are trying to build bridges, like you and I. So, yeah, we'll wave at the camera right now, as our biometrics are given up online.
Yeah, I think it remains to be seen how a coalition of free societies, because I do think it's going to take a coalition, will respond, given that in some respects it goes to free speech, it goes to individual

(24:49):
agency, it goes to capital markets.
And the last thing I would say, real quick, is that none of the big generative AI companies from the West have, at the moment, made a profit off of generative AI alone.
They may have made it off of cloud.
Look at OpenAI's balance sheet: they are spending money to make money. They took nine billion of investment recently, and they'd spent it all before they got it.

(25:17):
And I don't know if you saw it, and in fact this was just today: Google has announced that they're committing 75 billion dollars just for 2025.
Yeah.
You know,
And the interesting thing is, when you look at China, they've actually done some really interesting stuff,
looking at their Qwen models and things.

(25:40):
They've been doing a lot of research too, so there's sort of an AI arms race happening right now.
Right.
And I think what I would hope is, so a year ago, in January 2024, I was asked to host five dinners on whether or not there was appetite from the investment community for better, more energy-efficient, less data-intensive approaches to AI, including active

(26:05):
inference, like Karl Friston's; he was someone I got to host a conversation with.
January 2024? Nope. Investors just wanted to double down on, you know, more cowbell, more generative AI as-is, and I'm like, oh.
I'm wondering if now the realpolitik may make some investors more interested. And again, I would say right now generative AI is really hard to enter into as a startup.

(26:29):
Because where are you going to get the data, where are you going to get the GPUs? And there are companies, and I will name a Canadian company so it doesn't look like I'm endorsing just US ones: there's Verses in Canada, and there are others in the United States, that are doing better approaches to AI, that are
not doubling down on deep learning.
The question is, will investors be willing to put money towards that? Because I think the future is not one platform to rule them all. The reality is, do any of us really want one AI platform to rule them all? Or, like you said, do we want to run it locally?

(27:02):
Well, the interesting thing for me is, if you've got a generative AI startup, you're mad, because they're a dime a dozen and you don't own your underlying assets.
And the real problem, it seems to me, is that the amount of data in the world that is needed to train all these models is already exhausted.

(27:28):
So we've got the real problem with these big models: they're huge, and bigger and bigger models need more and more training data. I think the future will be small LMs with small data sets that you can run, and I keep telling people, look to building your
own capability to run and build your own LMs. They're open source; go and get them, they're pretty easy to understand.

(27:53):
But look at doing small stuff with your own training data so that you own it, instead of, you know, giving your data away to somebody else, because your data is valuable.
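A minimal sketch of running a small open model locally, assuming an Ollama server (ollama.com) on its default port with a model such as Mistral already pulled. The model name and prompt are examples; nothing leaves your machine.

import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "mistral") -> str:
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",   # Ollama's default endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Summarise our Q3 incident reports in three bullet points."))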
100%. And if anything, so for the last, yes, it's been since about 2021, I've been trying to brief people and say data is not the new oil. Oil, you use it up, it's gone; data, you use it, it's still there. But also, on top of that,

(28:17):
if you involve the stakeholders associated with the data, it actually gets better. They refine it, they find errors, they fix it, they actually create more data.
And what I would ultimately say is I think we need a paradigm that says data is a form of human voice.
It's what you want and what you don't want, whether it's a policy action or a preferential action. Here in the United States, when we initially had COVID, the federal government rolled out funding to the states to put up testing centers for COVID.

(28:46):
And the states did what they thought was logical, which is they put them where people lived. Well, if you actually looked at the data, and this happened to be data from private companies, including a big tech company,
in certain parts of the country, including if you were Black or Hispanic, where you worked was about 45 minutes away from where you lived. And they put a testing facility where you lived, and it was open during normal work hours on weekdays.

(29:12):
Well, you can't actually easily get access to it, and so you had newspapers saying there's hesitancy on the part of these groups to actually get tested for COVID. It's like, no, no, no.
You put the testing sites in the wrong place, and the moment those states fixed things and put them where people worked, as opposed to where they lived, they saw no hesitancy whatsoever.
And so I think, especially in free societies, we may need to update our approaches to say data is a form of human voice. And let's use data for decision-making; it sounds like they had the data.
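A toy version of the testing-site lesson in code: if sites are placed by home address but people are 45 minutes away at work during opening hours, measured 'hesitancy' is really an access problem. All the data below is invented.

people = [
    {"home": "suburb_a", "work": "district_b", "commute_min": 45},
    {"home": "suburb_a", "work": "district_b", "commute_min": 50},
    {"home": "suburb_a", "work": "suburb_a",   "commute_min": 5},
]

def reachable_during_work_hours(person, site, max_detour_min=15):
    # During weekday opening hours people are at work, not at home.
    return person["work"] == site or (
        person["home"] == site and person["commute_min"] <= max_detour_min
    )

for site in ("suburb_a", "district_b"):
    served = sum(reachable_during_work_hours(p, site) for p in people)
    print(f"site at {site}: reaches {served} of {len(people)} during work hours")
# site at suburb_a: reaches 1 of 3  (home placement misses the commuters)
# site at district_b: reaches 2 of 3 (where people actually are on weekdays)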

(29:44):
So think about it: people would have to leave work early to go 45 minutes away. Yeah, during business hours. No. Well, and so again, I'll go back to my birth-to-three efforts. So, you know, it's troubling when you realize that a lot of single-parent caregivers are going to
TikTok for how to raise a child. But when we actually peeked underneath the hood as to whether we could find enough information to train a chatbot, with a nonprofit, because we didn't want to monetize their data, ever...

(30:14):
I mean, this is a vulnerable population. What we found was the information landscape was so fragmented at the state level and at the county level,
and some of these things were changing every three or four months, that even as a human it was difficult for me to find the right guidance for services and benefits and information on how to raise a kid.
And that points to a larger problem, which is, at least in the United States, we may need to defragment our information landscape.

(30:38):
So it's useful and usable, not just by humans but ultimately by chatbots that are trained for you, as opposed to by some mega company. I have found that at a university: students come to the student center, and there are multiple student
centers, at the university level, the faculty level, the school level, and students answer-shop, because if they go to one group and don't get the answer they want, they go around to another. And if you're trying to build chatbots, well, pre-generative-AI it was super hard to build good ones.

(31:09):
And our information landscape was so fragmented that finding the right answer from all of the myriad documents that were out there was actually really hard. And no amount of AI would help you, because you had to use a human to go, well, this document and this document actually contradict each other.
We need to do some harmonization work. Yep, all of this has to make sense so that we can then automate the responses. So it was a really funny conversation to have. Like, you guys haven't got your shit together.

(31:41):
Right, no tech is going to help you until you get your human act together. Exactly. Yes.
But you know, that is a lot of the conversations that I'm having now with people: all these people want to do AI, and they haven't got their plumbing right, so they haven't got their data pipelines right, they haven't got their data governance right, and they're going, we want to do AI, and I'm like...

(32:03):
Wow.
I know, it's like AI is a magic wand: what do you mean? You just wave the magic wand and all is good.
I see, you guys are saying it's magic. Now, on a similar note: obviously there are a lot of questions in the US about our export control policy when it comes to AI and tech capabilities, given DeepSeek.

(32:25):
When I saw DeepSeek happen...
I have seen over the last 10 years where the US did export controls over tech going to Iran, because we didn't want them to get nuclear capabilities, or we did similar things to Russia, or most recently China.
I was like, I really wish we had a combination of human and AI teams whose job was to figure out how someone might find loopholes in export control policy before it's passed.

(32:54):
So I just had fun: I picked a GPT of choice. And when there was an export control policy passed by the previous administration, at the tail end of that administration before they left, I said, give me five ways that you would work around this recently passed US export control.
The GPT gave me five ways to do it, and I'm like...
That was one of the things that hit me when ChatGPT first came out, like, oh my. And I was downloading with a friend some open source models like Mistral.

(33:25):
And I was like, oh God, this means bad guys can have the same power that we've got, on their devices.
They can take a copy. No one knows they've got it. So the bad actors have all the power of an LLM in their hands, and they can ask it questions like that and you don't even know they're doing it.

(33:46):
Right.
And then on top of it, again, what is generative AI really good at? As you know, it's really good at creating realistic-looking human content that's not from a human at all, whether it's text, audio, video and the like. And so we've produced almost the perfect fraudster and
scammer tools, and we haven't upgraded our society to deal with that fact. I mean, at least in the United States, we've seen that ransomware, you know, more than doubles each year in terms of global damages, but scams are believed to be 50 times

(34:18):
as bad, if not worse, in terms of money extorted, relative to ransomware. And so I think it's just going to be really bad.
Well, one of the things that we're telling people in Australia is for their families to have a code word, so...
100% same here. Yes.
We have a separate code word that you don't tell anybody, and you ask the scammer if they know the code word, and if they don't, don't give them money.

(34:44):
Because it's a deepfake, exactly; it may sound like a relative or a family member, but it's not. Exactly, yes.
So it's really interesting. But let's talk, I want to just take a moment to talk about the world landscape, because you talked about a free society, and obviously you value civil society, with the world seemingly, with a lot of

(35:08):
governments now, tending towards a more authoritarian, some might even say fascist, bent. What tools can we use to help us maintain a free society, or an idea of a free society?
So I think the first step is to step back and look at a sense of history.

(35:32):
And trying to remind people of this: you know, I get it, at least in the United States there are maybe some people that are fans of the current administration, and maybe other people who are not fans of the current administration, and everything like that.
You know, I go back to the Federalist Papers that set up the United States, in which they said, what is government but the greatest reflection of humanity? If all men and women were angels, no government would be necessary.

(35:56):
So they recognized that, you know, the whole point of the system is to have checks and balances, and they said they wanted ambition to counter ambition.
So it is worth remembering our second and third presidents.
When he was elected, the second president was John Adams, and his vice president was Thomas Jefferson, both of whom have monuments and statues in their honor.

(36:21):
Thomas Jefferson, as vice president to John Adams, hired a political hitman to spread disinformation about John Adams wanting to go to war with France, when that was not true.
And John Adams, as a sitting president, was writing op-eds saying his vice president Thomas Jefferson literally was the devil.
That, ladies and gentlemen, was the 1800s.
And, you know, if you know any history, you know that disinformation and misinformation have been around for a long time.

(36:47):
Exactly. I think the difference now is it can spread exponentially, at scale, around the world. Whereas before, you know, only a handful of people would have read that article about the president.
Well, not everyone even had access to reading and everything like that, I agree.
And what I tell people is, through the internet, through your smartphone, and now through generative AI, you have the capabilities the CIA and the KGB had circa the late 1970s.

(37:13):
You know, and there are amazing things you can do: you can call anyone at a moment's notice, we can have this conversation.
You know, ideally if you get someone's permission, you can geolocate them.
You can download commercial apps and get satellite footage as recent as 15 minutes ago at 0.25-meter resolution.
That's incredible. However, yes.
And so I think there's a whole lot of anxiety out there at the moment, to be candid, about the future. And I'm trying to figure out ways for people to step back and channel that anxiety into actually taking whatever steps they feel they can take in their own lives, because that

(37:49):
that that overcomes a sense of learned helplessness and helps you overcome the anxiety.
I mean I for one I go running a lot.
And that that helps channel it.
I think I'm not saying, I'm not saying be complacent but recognize that our, our sentiments as to where the world is going may be colored by our own preferences and I celebrate for one that we are still allowed to have those different preferences.

(38:14):
And so I think, in the end,
work at the local level. You know, in some respects it always, at the end of the day, comes down to the local level.
And so I think that's how, I mean, personally, I channel my energy into giving people, as stakeholders, equity in their data. I think a lot will flow from that.

(38:37):
In the United States, and I realize we're a little bit different from other parts of the world, I'm a big believer in helping people figure out how we shift from fee-for-service health care to value-based health care. And you'll actually find there are a lot of people, now aware of
market-based mechanisms, that want to do that too, because in some respects fee-based health care motivates the wrong outcomes.
So I guess for me, while it really looks like history, we're in the midst of living it; it remains to be seen where this will go.

(39:11):
And I think, find the opportunity to do what you can at the local level.
Because unless we are a politically elected official, which I'm not, that's at least what we can influence. So, yeah, and I think that's good advice for everybody: build local networks, because that's where you live. And find an outlet: running for some people, walking for me.

(39:33):
Yeah, whatever works for people, exactly, yes.
This has been a really great chat, thanks David, it's been lovely to see you again. Sorry I didn't get over in October; I was in London for the launch of our blockchain startup. I want to hear more later. Yes, please tell me more about your startup, and it's great to see you; hopefully we'll see you at CC this October, and we'll give a shout-out to Ray Wong too.

(39:54):
Yeah, he's always a marvel.
Thank you.
Thank you.