Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to this crossover show of Cybersecurity Today and Hashtag Trending.
Occasionally I get shows that I think are of interest to both audiences.
Most of us have been thinking about artificial intelligence and how
it's being used in programming.
And of course in cybersecurity, we know that it's being used by both
the good guys and the bad guys.
(00:22):
If you're in IT, or especially if you follow Cybersecurity Today, you
are aware of what I've described as the arms race in cyber threats.
The bad guys are constantly on the watch for software
vulnerabilities they can exploit.
The good guys are continually trying to find these vulnerabilities before
the bad guys can exploit them.
(00:44):
Companies are constantly looking for issues in their code.
They provide a lot of incentives for others to help find them, in
terms of bug bounties and other programs, but occasionally they
find out about a vulnerability only because somebody exploits it.
These are the so-called zero day exploits, where the vulnerability is
not known to the developer or the vendor until the exploit has happened.
(01:08):
Everyone's constantly on the lookout for these vulnerabilities.
And nobody, developer or vendor, wants a zero day attack on their
customers or their systems.
Developers and vendors are constantly looking for vulnerabilities,
and if they find them, they need to evaluate them and prioritize
them for patching or remediation.
(01:30):
Some vulnerabilities are real, but they're not likely to be
exploited, or not easily exploited.
And even if they were, some can't do a lot of damage.
On the other end of the continuum, there are those that are easily
exploited, or where the consequences of an exploit are severe.
(01:51):
But finding them and patching them is not enough.
You need to make the industry aware of these vulnerabilities so that all
users of the software will patch it, upgrade it, or remediate the issues.
At one time, this process was ad hoc, managed by individual
companies for the most part.
But today we have a centralized repository used by virtually
(02:11):
all vendors and developers.
It's called the Common Vulnerabilities and Exposures database.
Now, the CVE database provides a standardized, publicly accessible
catalog of cybersecurity vulnerabilities and exposures.
It also provides a scale for measuring the risk, impact, or
severity of a vulnerability, and it provides a single common identifier.
(02:35):
It's maintained by the nonprofit MITRE Corporation, with funding and sponsorship
provided by the US Department of Homeland Security and the Cybersecurity and
Infrastructure Security Agency (CISA).
It coordinates the work of more than a hundred other organizations, and
at one point there was, and may still be, a worry that the US government
(02:57):
might cut off funding as part of the DOGE or other austerity measures.
It seems, at least for now, like we may have dodged that bullet, but it's
still a huge worry, because the database is now essential for organizations to
manage a growing list of vulnerabilities.
It allows organizations and researchers to identify, reference, and share
(03:19):
information about specific security flaws using unique identifiers, CVEs. Once a
vulnerability is identified, it takes time to develop a patch or remediation.
And once that's available, you wanna make it widely known, so
that as soon as the vulnerability is published, organizations can patch,
upgrade, or remediate the problem.
(03:43):
But it's not as simple as it sounds.
Even once an organization is made aware of a vulnerability, and even when
there's an actual patch available, that patching and upgrading can take
days, sometimes weeks, depending on the severity of the vulnerability.
Systems are complex, teams are overworked.
(04:04):
You have to schedule the time to implement and test these.
It can take days, sometimes weeks, or months.
And in some cases, organizations never upgrade or patch at all.
Now, fortunately, it takes the bad guys some time to develop their exploits.
The CVE might give them some clues, but even if they know the vulnerability,
(04:28):
they still have to develop the attack and propagate it in some way. The vast number
of CVEs offers some kind of insulation.
There are about 40,000 of these raised per year.
More than a hundred a day.
As a result, only a small percentage of these CVEs actually have exploits.
(04:49):
Even the bad guys have to prioritize their activities.
So last year there were 768 exploits that were publicly reported. The
average time from CVE report to the development of an exploit in the
wild is approximately 170 days.
(05:10):
Even though organizations are relatively slow at patching,
they have some buffer time.
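A quick back-of-envelope check shows why that buffer exists; this is a minimal sketch using only the approximate figures quoted in the episode (about 40,000 CVEs a year, 768 publicly reported exploits), not official statistics:

```python
# Back-of-envelope check of the figures quoted above: roughly 40,000
# CVEs per year and 768 publicly reported exploits in the same year.
cves_per_year = 40_000
public_exploits = 768

cves_per_day = cves_per_year / 365                 # "more than a hundred a day"
exploited_share = public_exploits / cves_per_year  # fraction that ever see an exploit

print(f"{cves_per_day:.0f} CVEs/day, {exploited_share:.1%} exploited")  # 110 CVEs/day, 1.9% exploited
```

So under these numbers, fewer than two in a hundred CVEs ever get a public exploit, which is the "insulation" the host describes.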
But what if AI turned that on its head?
What if you could develop exploits in minutes, automated, with minimal effort,
turning those CVEs into exploits before anyone could actually mount a
(05:31):
defense? That would be devastating.
Well, it's happened.
Two Israeli researchers, Effie Wies and Nahman Khayet, discovered how
to do this, built a proof of concept, and put a post on LinkedIn.
I was able to track them down and bring in Nahman for an
interview for this weekend show.
(05:54):
My guest is Nahman Khayet.
He is the researcher many of you might have heard of. Over the past
week he found himself in a position, with one of his fellow researchers,
of developing a way to accelerate the development of exploits.
That's the only way I can explain it.
Well, I think it blew people away.
The average time, if I've got it right, for an exploit logged into the CVE database
(06:17):
to be developed is about 192 days on average, most research says.
You've taken that down to 15 minutes.
Less.
That's scary.
It is, it is.
We didn't expect such a response.
We knew that we did something that is very special, because at the end, what it showed
is that, you know, exploit generation is no longer bottlenecked by human expertise.
(06:42):
That's the real meaning behind it.
Again, when the vulnerability is getting published, you just know that
there is a vulnerability that may be exploited.
But you don't know how to actually weaponize it, make it an attack.
And you have to have the special knowledge of an attacker, of a
vulnerability researcher, to be able to make it into a working attack.
(07:03):
And, you know, it's 190 days for most of the CVEs, but for the critical ones, it
usually takes around four to seven days
to create the working exploit, so organizations understood it.
Yeah.
It's not something that is new for them.
They have these controls to patch critical vulnerabilities in their
systems quickly, and it usually takes a few days.
So I would say that we were able to do it in 15 minutes, and still, that challenges the
(07:28):
core assumption they had, which is having 24 hours to four days to patch the CVE.
It's like 400 times faster to create an exploit.
Yeah.
And you're not the first people to use AI for this, or at least for the analysis
of these weaknesses that could cause exploits. I know Google has
Big Sleep, which is named after a movie.
(07:50):
And Nvidia has Project Morpheus.
If you don't know who Morpheus is, you're not in the sci-fi community.
But those really would detect vulnerabilities and normally report them.
So there is the ability to automatically detect them.
But you've taken that a step further and said, we can turn that into an exploit.
(08:10):
Yeah.
Using AI Yeah.
I wondered how Big Sleep is structured.
They are able to find vulnerabilities, but the challenge
after finding vulnerabilities is to validate that they are actually
vulnerabilities. Well, that's gonna be one of the first things we'll talk
about, the positives of what you've done,
'cause right now you're just scaring the pants off people.
(08:30):
But the positives are that just because you see a section of code
and you say, we think there's a vulnerability in there, doesn't mean
that there really is one that exists, or that you could exploit it.
Exactly.
So how... first of all, why did you do this, and what was the idea behind it?
We have been researching how AI changes fundamental assumptions
(08:52):
in cybersecurity for a while.
You see that it changes every field. Even in security, you see AI-generated attacks
and AI email assistant attacks, and you see all these different
vulnerabilities in sites like Lovable.
We also did some small research around that and found
many issues there too.
It changes every aspect.
(09:12):
We thought about how it impacts vulnerability research in general;
if you're able to do something disruptive there, it'll be
really interesting for the community.
And when we realized that it's pretty easy,
we became a bit scared, because we don't feel like
the industry understands that.
They may have suspected it, but only after we actually published the
(09:34):
article did we prove, by the way, that it actually happens 10 or 15
minutes after the CVE is published.
Because somebody may think that we did it in a day or two and then, you know,
used an old CVE, did it manually or something, but we actually proved it.
We published it 10 or 15 minutes after the CVE was published.
When we did it, the industry finally understood that there is something real
(09:54):
here, and we have to defend differently.
We have to question our core assumption.
So that's the reason why we did it, because we think that
people don't really get it.
See, but this idea should have been around for ages. I
mean, it's so obvious when you say it: why don't we try this?
First of all, let's give a shout out to your fellow researcher too, because
Effie Wies is the genius behind all of that.
He is a genius.
(10:16):
We have known each other for many, many years and we did many things together, but
he's the genius behind all of that.
And he had some very cool ideas of how to actually make AI do it.
It's not that easy, okay?
You have to chain a few agents together, and then you have to
create a system that checks that the exploit that was generated is real.
Because sometimes AI hallucinates, and so there is a lot of tech
(10:38):
there, and he did a lot of the job.
Well, when you start out, first of all, you used Claude for this. Many
people call it Claude, but I'm Canadian;
we call it Clo.
I'm just not gonna call anything Claude.
I don't care.
But Claude, it should... first of all, why didn't it
prevent you from doing this?
(10:58):
They did prevent it. Hey, Claude, develop an exploit for me.
I mean, you'd think that would be basic guardrails.
So at the start, we told it, hey, develop an exploit for me.
And then it refused to do that.
What we actually did is we started to use Qwen, the Chinese model,
locally on a MacBook, because we also wanted to show that it's
(11:19):
cheap, that you don't have to pay a lot of money for that.
go with us and help us to generate it.
Because think about it,it's just a code issue.
It's just a code problem.
It's not an exploit.
You can say, just look at the code.
Tell me what's wrong here,what's wrong with the code queue?
It's not, too difficult.
(11:40):
Yeah.
And AI agents are doing that. But now, what are the issues in this?
You detected what you think is an exploit.
How do you find what you think is an exploit in the first place?
Okay, so what an exploit is: you have a
vulnerability somewhere in the code, and you need to find the route
to get to that vulnerability in the code.
(12:03):
Think about it: some of the vulnerabilities are not reachable, right?
The industry knows that very well.
You have findings, sometimes, that are generated by SAST tools or
SCA, but you don't know if they are actually reachable in your code.
What you try to do is find a path that leads to that point, to
understand if you have an actual way of attacking the system through
that. Which, by the way, is a problem for people who manage these exploits,
(12:26):
because they're getting all kinds of reports where people have run AI
and say, there's an exploit here.
And they look at it and go, no, this isn't an exploit.
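The reachability point can be sketched as a small graph search; this is a toy illustration under invented names (the call graph and function names are made up, not from the researchers' tooling): a finding only matters as an exploit if some chain of calls from an entry point actually reaches the vulnerable function.

```python
# Toy reachability check: breadth-first search over a caller -> callees
# adjacency dict. A static-analysis finding in vulnerable_fn is only
# interesting if an attacker-facing entry point can reach it.
from collections import deque

def is_reachable(call_graph, entry, target):
    """Return True if target is reachable from entry in the call graph."""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

call_graph = {
    "handle_request": ["parse_input", "log_event"],
    "parse_input": ["decode_payload"],
    "dead_code": ["vulnerable_fn"],  # the finding exists, but nothing calls dead_code
}
print(is_reachable(call_graph, "handle_request", "vulnerable_fn"))  # False: not exploitable
```

Here the flaw is real but unreachable from the request handler, which is exactly the "no, this isn't an exploit" case the host describes.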
How are you able to use AI to validate that exploit?
You have to create an environment where the old vulnerable version exists,
and another environment in which the new, non-vulnerable version exists.
(12:50):
And then you need to try your exploit on both of them.
If you see that the exploit works on the unpatched version and then
stops working on the patched version, then you have probably succeeded.
Okay.
So you need to set those two environments up, run your analysis, and
say, okay, this is a real exploit.
(13:11):
Or at least the code is exploitable.
I guess that's really what I'm saying: you understand the code is exploitable.
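The differential validation just described can be sketched in a few lines; this is a hypothetical stand-in, not the researchers' code, and `fake_exploit` is an invented example: run the candidate exploit against both builds, and accept it only if it succeeds on the vulnerable one and fails on the patched one.

```python
# Differential validation sketch: an exploit candidate is accepted only
# if it discriminates between the unpatched and patched builds.

def validate_exploit(run_exploit, vulnerable_env, patched_env):
    """True only if the exploit works on the vulnerable build and not the patched one."""
    return run_exploit(vulnerable_env) and not run_exploit(patched_env)

# Toy stand-in: an "exploit" that only works against the unpatched build.
def fake_exploit(env):
    return env == "v1.0-vulnerable"

print(validate_exploit(fake_exploit, "v1.0-vulnerable", "v1.1-patched"))  # True
```

The second check matters as much as the first: an "exploit" that also fires on the patched build is probably just a hallucinated or over-broad test, not proof of the vulnerability.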
How do you do that?
Is that the next step?
We have a few agents there, but if we want the whole flow, then we start
from the vulnerability advisories.
We extract the diff between the versions, because in the advisory you have a hint of
(13:32):
what the vulnerability is, and then you also have the version that it's fixed in.
You extract the diff between them.
You detect the vulnerable code inside,
using an AI agent.
A set of AI agents reviews that and does a technical analysis.
Then you put up those two environments and check your exploit.
And if it doesn't work, like if the exploit doesn't work on the vulnerable
(13:55):
version, then you start again and you feed that feedback back to the AI.
Okay.
And then you have such a loop that fixes itself.
You get the feedback from trying the exploit, and then
you give it back to the AI.
And the AI helps you to generate another option for the exploit until it works.
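That self-correcting loop can be sketched as follows; `propose` and `try_exploit` are illustrative stand-ins, not the researchers' actual agents, so treat this as a shape of the idea rather than an implementation:

```python
# Generate-and-retry loop sketch: an agent proposes an exploit, a
# harness tries it, and the failure feedback seeds the next attempt.

def exploit_loop(propose, try_exploit, max_attempts=10):
    """Retry exploit generation until one works or attempts run out."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = propose(feedback)
        success, feedback = try_exploit(candidate)
        if success:
            return candidate, attempt
    return None, max_attempts

# Toy harness: the "exploit" is just a counter, and only the third works.
def toy_propose(feedback):
    return 1 if feedback is None else feedback + 1

def toy_try(candidate):
    return (candidate == 3, candidate)

print(exploit_loop(toy_propose, toy_try))  # (3, 3)
```

The key design point is that the harness's failure output becomes part of the next prompt's context, which is how the loop converges instead of repeating the same hallucination.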
And of course, it's not that easy.
You may create this engineering work and make it all work, but at the
(14:16):
end you have to teach the AI how to find the vulnerabilities themselves.
Well, that's what I was gonna get at. I wouldn't think that
the CVE gives you the ability to understand the vulnerability
enough to develop an exploit.
I'm sure they're not publishing that, but they're publishing enough information
for you to start digging into it, and then you develop, first of all, the analysis
(14:39):
of the exploit and then the exploit.
This has caused quite a stir.
Why is it so difficult?
Why hasn't somebody done this before?
Because, again, you have the hints in the advisory,
but you need to guide the AI
to find it in the code. And every type of vulnerability...
(14:59):
Not every type of vulnerability, but you have classes of vulnerabilities:
SQL injections, SSRF, and different types of vulnerabilities.
And in order to find them in the code, even if you know the
diff, in order to find them in the code itself, you have to know
what such vulnerabilities look like.
So you have to teach the AI.
(15:21):
Look, here are hundreds of examples of this type of vulnerability;
this is the way I would think about it as a vulnerability researcher; now
try to imitate my way and do the same.
And to do that well, you have to have the expertise.
You have to know how to do it yourself.
So that's the reason why it's difficult.
And the second thing is that it's not that easy.
(15:42):
You have to chain a lot of agents together, because it's
too complex for one agent to do.
Okay.
Yeah, and that's what I'm trying to wrap my mind around: the
number of agents that are in there.
Can you just give us a bit of the flow of what those different agents are?
I mean, just so we can sort of understand what they are and how they work.
Yeah, I'll try to give more details, but I don't want to give too many details,
(16:05):
because I'm afraid that, you know, attackers are... I totally understand.
Yeah.
Because at the end, the idea wasn't to help the offensive industry.
The idea was to help people defend against that,
and raise awareness.
But I'll give some information.
You have one part, which is a set of agents that are
doing the data preparation.
Those agents are responsible for taking the information from the advisory, taking the
(16:28):
diff between the versions, the vulnerable version and the patched version, and
generating options for the vulnerable code.
That's one part of the agents.
The second part is those who are responsible for the context enrichment.
We know that in AI, one of the most important things is to create
the right context for the agents.
(16:48):
And then you have those agents who are responsible for doing
a deep technical analysis of these options of vulnerable code.
They're writing an exploitation flow.
Then they draft an exploitation plan from that flow, like how you would
exploit such a thing. You also have
a review agent who checks that plan.
(17:08):
And that's the second part of agents.
And the third one is the evaluation loop.
That's actually putting up those two versions,
trying the exploit, and feeding the feedback back to the context enrichment
one, to create a better exploit.
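The three agent groups just described can be sketched as a pipeline; every function name here is an assumption standing in for a chain of AI agents, and the advisory ID and candidate names are invented placeholders:

```python
# Skeleton of the three agent groups: data preparation -> technical
# analysis and planning (with a reviewer) -> evaluation loop.

def data_preparation(advisory_id):
    # Parse the advisory, diff vulnerable vs. patched versions, and
    # propose candidate locations for the vulnerable code.
    return {"advisory": advisory_id, "candidates": ["parse_header()"]}

def analysis_and_plan(state):
    # Enrich context, analyze each candidate, draft an exploitation
    # plan, and have a review agent check that plan.
    state["plan"] = f"reach {state['candidates'][0]} with crafted input"
    state["plan_reviewed"] = True
    return state

def evaluation_loop(state):
    # Try the exploit on vulnerable and patched builds; on failure,
    # feed results back to context enrichment (toy success here).
    state["validated"] = True
    return state

state = evaluation_loop(analysis_and_plan(data_preparation("CVE-0000-0000")))
print(state["validated"], "-", state["plan"])  # True - reach parse_header() with crafted input
```

The interesting structural choice is that validation failures route back to the context-enrichment stage rather than to a fresh start, so each retry carries everything learned so far.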
And so you've been able to do this in under 15 minutes.
You've been able to prove that it is an exploit.
(17:30):
I read about this in Dark Reading, where a lot of people... yeah.
Did you just release a paper?
How did people find out about the work you were doing?
We also, again, we were very surprised.
The response has been really intense.
We published it on LinkedIn.
We did, like, published a Substack article and posted it on LinkedIn.
We thought that some of our friends and colleagues would
(17:51):
respond nicely to that and maybe repost it, but then it just broke out and
everybody started to talk about it.
We saw it across CISOs, researchers, executives. Their
responses were both validation of what we thought is a problem,
like this 15-minutes thing is real,
and this is scary, because our vulnerability management
programs are not built for that.
(18:12):
They're not built to handle what we can do today.
A Microsoft Exchange vulnerability was reported, I guess, a couple weeks ago.
We cover these in Cybersecurity Today.
We report them regularly.
Two weeks later,
they're still ripping through that code.
And that's a relatively contained number.
I mean, there's going to be thousands, maybe more, worldwide, but a Microsoft
(18:34):
Exchange on-site implementation, that's a pretty contained environment;
that's a limited data set.
And yet two weeks later, it still wasn't patched.
I mean, they were still finding unpatched Exchange.
And I think this is a big problem, 'cause you can always find unpatched
stuff. There are exploits that people are fighting three years after the
patches have been out, so there's plenty of unpatched stuff out there even today.
(18:58):
Why?
I think it happens because of the talent shortages.
You have this vulnerability growth,
and you have the same amount of talent that has to handle all of that,
and you see all these mediums and lows stay in your organization.
You only try to handle the critical ones that you have, but
you don't go over all of them.
And you have problems in your organization, in the culture,
like how people respond to handling vulnerabilities.
(19:20):
Is that something that's, like, first priority for them or not?
I think those are the reasons, and we have to do something about it,
because the wave of attacks is coming.
That's been the reality for the last few years, but think about it 400 times faster.
It's not only the Exchange one that was published, and I think you'll be
able to do that in a very cheap manner.
You'll be able to generate exploits for all CVEs released, like, every day.
(19:44):
There were 40,000 of them last year.
The rate at which vulnerabilities are reported,
the year-on-year growth of it, by the way, is 17%.
It's only going up, and with AI-generated code, I guess it'll be much worse.
And if the stats I read were correct,
only like 700 or 800 vulnerabilities were
actually exploited in that time.
(20:06):
That leaves 39,300 other vulnerabilities, of which some, and
I agree that some of them, will be.
I mean, if they're reported at the CVE level,
that's not just speculation.
Those are real vulnerabilities.
Correct.
For these vulnerabilities, the vulnerabilities exist, but the ROI for
creating an exploit is not that high.
So people do not... you know, again, you need this special expertise, but if
(20:30):
those capabilities are published, I don't know, more people will be able
to do what I think is already happening
in some places.
And then, yeah, many more of them would be able to weaponize.
I like to call it: the N-day is the new zero day.
Okay.
Yeah.
The N-day is the new zero day.
Every CVE, even one that was published two weeks ago, or two months ago,
or a year ago, can be like a new zero day on your organization.
(20:53):
We're not gonna start referring to N-days anymore.
We're gonna start with N-minutes, you know?
Yeah.
This is N-minutes, you know. And for those who are listening
to this who don't understand: zero day is the date of the detection
and reporting of the vulnerability.
The N-days are what we start to count for how long it takes
to get an exploit together.
And we did that in days.
You know, and now we could be talking about N-minutes.
(21:16):
Can I just go back to this? Because...
I run a publication now,
but I was a CIO, so I did run IT shops, and you know, we do our patch days.
And here's the problem you have: patching requires making your systems
possibly vulnerable to downtime.
Yeah.
(21:36):
So you have to schedule that.
You have to do backups.
You're not going to post new code and not have a backup that you can restore
to. Then you'll apply the patch.
Maybe you'll have to test the patch, because they don't always work.
And so, you know, you have users or even testing people around to say,
we applied this patch; are your systems coming back up?
(21:58):
Do they still work?
These are all the constraints now.
Maybe smarter people, people with more money, you know, people who
could find better ways to do things, could collapse that. But unless you
can get to something like virtual patching that you can just apply,
see if it works, and get rid of it,
I don't know how we're gonna cope with this.
(22:21):
Yeah.
I think two things.
First of all, response times must accelerate drastically.
We have to assume that in a few months, maybe a year,
vulnerabilities will be commoditized.
You'll be able to easily find vulnerabilities in things.
And there is a storm coming in that sense.
(22:43):
We have to accelerate our response time.
That's the first thing.
And the second thing is, we must invest more. We need to assume, our security
strategies must assume, that the attackers already have a working exploit.
And if we assume that, what it leads to is
that we have to put runtime protections and compensating controls in our systems.
(23:05):
Create a more resilient architecture from the beginning.
Otherwise I'm not sure how we can handle this wave of change.
Well, one idea that I had, and it just makes me sort of crazy thinking about
it, is: if Google can develop something that can find potential exploits, and
Nvidia can, why aren't people running that on their code before they release it?
(23:27):
Great question.
I think.
Yeah.
So what I'm just saying is, maybe the only answer may be
building code that has fewer exploits; that's like the solution
when you look five or ten years ahead. But think about all the vulnerabilities
that you have now in production.
And that's like your real problem.
Even if I put those controls into my development lifecycle,
(23:49):
you have systems running for ten years full of mediums and lows and
highs that you can exploit easily.
Yeah.
What do you think?
Have you come up with ideas for the next projects you wanna work on
in terms of helping address this?
Yeah, we are trying to understand how to create defender tools that use the
(24:10):
same techniques as the attackers, to auto-generate those mitigations and
detection rules, to close that gap.
We're still talking; we get a lot of responses, like,
people have different ideas.
It's really interesting. The industry is full of smart people with great ideas.
So some of them talked about, like, detection rules and mitigations for CVEs,
(24:31):
and we are looking into that direction.
We still think that it'll take time for organizations to adapt.
So we try to raise awareness that people need to advance their response times,
accelerate them as much as they can.
What are your recommendations for people advancing what they're doing?
Are there tips or tricks that you could offer that people could apply
(24:54):
that might help them be better at mitigating these vulnerabilities?
I think they need to build their architecture more resiliently.
Meaning: reduce the ways in, so that even if you exploit one
vulnerability, in order to get to the crown jewels of your organization, like
the important DBs and stuff like that, you have to find the whole way in.
(25:19):
So it's not only just protecting the entry to the organization.
You need to think the whole way through how you secure everything, how you
minimize the attack surface, to get into things in your organization.
So what I recommend is, when you build software and when you build architecture,
especially when it's composed of different parts, think about that.
(25:39):
Think about how you reduce that attack surface as much as you can.
So when you go on to fix the vulnerabilities that will be published later,
you'll need to do less work.
I was thinking about this 'cause I did an interview with the gentleman who's
called the Godfather of Zero Trust, and as I was preparing for this interview,
I was thinking, is there a way to write code so that we can do what you were
(26:01):
talking about? To say, hey, everybody's gonna break into some part of the code.
Can you segment it in the same way you'd segment a network, so that just because you
get here doesn't mean you can get there?
And it goes down to this whole idea of, and I think maybe something
we don't think about enough is, what are we actually protecting?
What is the most important thing to protect in code?
(26:21):
And I think that's something we don't give enough attention to.
We really don't think about it:
of all the things we wanna protect, this is it.
I don't know.
It's interesting to think about it from that perspective.
Well, you have the easy answer, which is:
write better code.
Yeah.
Well, I don't like that answer, because at the end, you have
deadlines, you have to go fast.
You want to use those coding agents that make your organization
(26:45):
go at a much faster velocity.
We can't stop that.
In order to make our organization more secure, we have to adapt to
that change. You can use one thing, which is AI coding agents that help
you to write more secure code.
And then if you use those coding agents, they can help you to create
more secure code from the beginning.
I think that the different solutions in the market for SAST, for example,
(27:07):
have to do a much better job.
Like, they have to find real vulnerabilities and they have to
work together with the coding agents,
as fast as they can.
Because this may help us to reduce the amount of vulnerabilities
coming from the first place.
But yeah, we have a problem in that.
And when you talk about protecting, what especially are we protecting? We
don't think we are protecting code;
we want to protect our data.
(27:28):
It depends on the organization.
We want to make sure the uptime is high.
So I think that we need to think about it again; that's the reason why I go to
architecture, because it doesn't matter if you have any vulnerability here or there.
You have to make sure that even if you are breached, the attacker
can't find your critical assets.
You have to think that way.
That's the reason why you have to think about it when you deploy your software.
(27:49):
You have to think about it as if an attacker has come and
already run code on your endpoint.
What now?
Yeah.
And I think the aim of that, in terms of how I was expressing
it: everybody thinks about uptime.
Uptime is big in code.
You don't want crashes.
You don't want stops; you want uptime.
If we apply the same rigor to protecting our assets, mostly
(28:12):
data, but other things as well, if we apply that same rigor as we
do to uptime, we might get better code.
I talk with practitioners, and some of them say to me that they tried to use
this continuous penetration testing, the new continuous penetration testing
solutions, to convince their organization that the findings
(28:33):
that they find are important.
They use Horizon3, for example, to attack the organization, but not
to actually find something; to convince, like, show decision makers:
look, this issue is important.
So that's one way of addressing the issue.
Like, for example, always finding new problems in your
system, new ways of attacking it, and that way raising awareness.
(28:54):
So you're a little overwhelmed by the attention that this has brought.
What's your favorite interview that you've done so far?
What was fun about this?
I really like the whole attention, but I really like that we found
something that actually interests people.
And we had some other ideas of things we can do.
I really liked... I had one interview in Israel, in Hebrew.
(29:14):
They tried to ask us, like, also why we are doing it.
Okay, why we did it.
And the true answer is that it was just really interesting.
Like, check what we can do with AI. So what are you up to next?
What are you gonna do next?
But you tell me. It's just us and a few thousand, another 10,000 people.
Nothing big.
So again, we keep exploring whether AI can adapt to zero-day-style
(29:36):
challenges too. Like, not having the advisory itself, to try to exploit vulnerabilities.
We want to keep trying to use those generated exploits to find maybe
new vulnerabilities, which is also another assumption that we may
break, if we're able to do that.
But the main effort now is investigating how we can create defender tools,
(29:59):
how we can maybe release something open source that helps the industry
protect against such things.
I also want to mention that we didn't publish the whole idea behind it.
Like, when you read the article, you believe that it's possible.
You see the proof, but you can't reproduce it easily.
And I think that buys us a lot of time; you still have to be
an expert to be able to do that.
(30:20):
But once you can do it, it becomes really easy and cheap.
It buys us some time.
There are a lot of smart people out there who are trying to
figure out how you did this.
At this point, I just wanna take a moment to make sure we give
a shout out to your research partners.
It was Effie, I can't remember his last name.
Effie Wies and Nahman Khayet.
Thank you so much for sharing this with the world.
(30:42):
Hopefully we'll get you back on the program when you work
out the solutions to this.
A lot of people will be interested to hear about those as well.
Sure.
Thank you.
Thank you for having us.
Great.
Thanks.
And that's our show for this weekend.
Thanks for joining us.
Just to note: if you're getting this on an Alexa or Google smart speaker and
you're experiencing some interruptions,
(31:04):
dreadfully sorry.
But our tech guys are working like mad, and in fairness,
Google has been very supportive.
It's next to impossible to find a person at Amazon, but we're chasing
it and trying to get it resolved.
It's one of those things that works and then doesn't, the hardest
type of bug to chase and fix.
We will get it fixed, and I understand it's part of many people's morning
(31:27):
routine, but until that time, you can find us on every major podcast platform.
I'll be back Monday morning with the tech news on Hashtag Trending.
David Shipley will be on the news desk Monday morning for Cybersecurity Today.
Thanks for listening, and have a wonderful weekend.