Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Tim (00:15):
Hello and welcome to another episode of the Cables to Clouds podcast monthly news update.
And uh, as always, I'm Tim, and I'm here with Chris, my excellent co-host, and we're just gonna dive right into the news.
Um, of course, we have the news available in our show notes.
Uh, you can get a list of all past, uh, present, but not future
(00:37):
yet, uh, articles for uh your viewing.
Chris (00:41):
We don't have that
technology yet.
Tim (00:43):
Yeah, we're not there yet.
Uh, we will be though, because AI's gonna bring it to us somehow.
All right.
Uh let's just jump into it.
Our first article is about uh Huawei releasing a new open source technique that is supposed to shrink LLMs to make them run on less powerful and less expensive hardware, which, you know, if it works, God, it's about time.
(01:05):
Somebody went the other way, versus uh making it run on bigger, more expensive, and more energy-hungry hardware.
So uh, according to this article, uh Huawei's computing systems lab has introduced a new open source quantization method for LLMs, uh, aimed at reducing memory demands without sacrificing output quality.
So they call this thing SINQ, S-I-N-Q, and uh the basic gist
(01:29):
of it is that, you know, from a network engineering perspective, it reads kind of like cut-through switching or something.
So it's basically shrinking the uh heavy math involved with, you know, AI LLM training and LLM uh inference, to just lower the amount of memory that's
(01:49):
required to run some of these mathematical algorithms, um, without impacting the uh output quality too badly.
So it mentions that it is available, um, you know, on GitHub and Hugging Face.
Uh, it's using the Apache 2.0 license, so it's completely open source.
Uh, they've tested it on a few models like DeepSeek,
(02:11):
for example, and the article goes into the details far, far more than I could, as someone who is not an AI geek, you know, still on the learning side; but it goes into the algorithms and how it actually cuts down the math.
And uh, yeah, so this is a really interesting article.
I would love to see this validated.
(02:32):
Um, God knows we could use some cheaper hardware for once on this.
Uh, you read this article recently.
What do you think?
Chris (02:41):
Yeah, uh, I think, kind of to what you said, it kind of reads like, you know, the idea that we all commonly see in network engineering about doing a little bit more with just kind of the first few details of, uh, you know, the packet, rather than looking at the entire thing end to end, right?
And only looking at the stuff that's relevant.
So that's kind of how it reads.
(03:02):
Um, but to your point, there's a lot of math involved here that goes very in-depth about floating point numbers and things like that.
So if that's your shtick, then I highly recommend checking out the article and reading into it, because it is quite interesting, um, but probably a little above my head on a lot of this.
Um, it does make me wonder two things.
And the first thing is, where does this uh plan to
(03:23):
get us?
Like, I understand running on cheaper hardware is easier, but like, is this kind of moving towards, you know, running these models locally on, um, things like handheld devices, like actual, you know, smartphones, or even OT-type devices, um, scanners, things like that?
Or does this put it running on pieces of infrastructure, um, you know,
(03:46):
on site, inside of a rack somewhere?
Um, and the other thing is, uh, unfortunately we have to call this out, but this is brought to us by Huawei, which, Huawei is one of the most consumed, um, you know, vendors for IT infrastructure in certain parts of the world, um, but
(04:09):
they also have uh kind of a questionable reputation about them in the rest of the world.
You know, there are certain organizations that will not touch Huawei just based on uh that reputation alone.
So I wonder if that will um kind of impede um potential adoption of this, which is, you know, this is a completely open source thing.
It's free to use, um, based on the Apache 2.0 licensing, so
(04:32):
it's ready to be used um for free.
Um, but I wonder how much, you know, it'll actually take off, just because of that reputation.
Tim (04:40):
Yeah, it's a great point.
Maybe that's why they went the way they did, actually, with the open source uh licensing and just kind of throwing it all out there, uh, kind of being aware of that, or maybe, who knows, right?
But that's a very good call-out.
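The rough mechanics being described, squeezing weights into fewer bits to cut memory, can be sketched in a few lines. To be clear, this is a generic int8 quantization sketch for illustration only, not Huawei's actual SINQ algorithm; the per-row scaling scheme and shapes here are assumptions:

```python
import numpy as np

# Hypothetical illustration (not SINQ itself): plain symmetric int8
# quantization of a weight matrix, showing the ~4x memory saving that
# methods like SINQ aim for while losing far less accuracy than this.
def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus one float32 scale per row."""
    scale = np.abs(weights).max(axis=1, keepdims=True) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Approximate reconstruction of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # → 4 (float32 is 4 bytes, int8 is 1)
print(np.abs(w - dequantize(q, scale)).max())  # small reconstruction error
```

The interesting part of a real method is doing this without the quality hit; the sketch just shows where the memory arithmetic comes from.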
Chris (04:55):
All right.
Uh, next up, this will be a brief one, but it's just funny to even see this written down.
But um, we have an article here from CIO Dive saying that global AI spending is expected to approach $1.5 trillion this year, according to Gartner.
Um, talking about the obviously large investments from many
(05:16):
different organizations.
Um, it actually doesn't seem like there's a lot of customers investing in this; it's all other vendors investing in this, um, other service providers, et cetera.
Um, but just crazy that we're thinking in terms of trillions now, um, $1.5 trillion in spending.
Um, so, you know, those are the numbers that we're talking
(05:37):
about.
If you think tons has been spent already, um, it sounds like it's only going to increase at a pretty uh exponentially larger pace.
Um, and I think ChatGPT just launched, uh, I forget what it's called, where you can actually
(05:59):
let ChatGPT do transactions for you.
Like, you can give it your credit card info and it can do purchases for you, like in-app purchases.
So, um, you know, this is the future we're headed towards.
Tim (06:12):
How, I mean, how could that go wrong?
Oh man.
Now, the article points out as well, like, capacity constraints continue to be a problem, and simply throwing more money at investment in AI is not really solving that problem.
Like, I mean, there's plenty of money being
(06:33):
thrown at development of new data centers and grid, but like, the grid is what it is, you know?
Like, we've got companies now buying nuclear reactors to fuel their AI workloads.
Um, but like, you can't build it but so fast, right?
This whole thing, this whole bubble, could have popped long before the first data centers that were built as a
(06:55):
result of all this money changing hands even go online.
Um, so I'm very curious to see where all this money is really going.
Because the money's getting spent very, very quickly, but all of the horizons on the result of the thing we're spending money on are, like, years away, which, as we've seen AI
(07:15):
changing this quickly, it's gotta stall out at some point, because they're just gonna run out, you know, of runway, essentially, no matter how much money they throw at it.
So that's why I think the Huawei thing actually is somewhat interesting to me, because I think we are getting to that point where we're gonna have to stop worrying about, you know, getting to the next level of technology, and more and more start worrying about, like, how do we make this
(07:37):
thing scalable, you know, not just balls to the wall against, uh, you know, the infinite sky.
Chris (07:44):
So, scalable and maybe just a little profitable. Maybe making some of this profitable would be good.
Oh no, there's plenty of profit.
Tim (07:52):
I mean, Larry Ellison made tons of money, right?
Like, yeah, I mean, granted, it was the same money changing hands between the same companies, but hey, look at that; they call that the velocity of money.
They call that, in economics, the velocity of money, right?
Um, and speaking of, actually, we have an article uh from Yahoo Finance, talking about an
(08:16):
analyst who sees a 38% downside for Oracle, um, mentioning that it's a, quote unquote, risky blue sky scenario.
What they mean by that, uh, or what the analyst means by that, basically, is that essentially he's saying it's a bubble, right?
Like, the Oracle deal, the five-year deal or whatever that was uh signed, you know, for all of this money; he's saying
(08:38):
that, you know, the price of the stock is so far above the value of what is being delivered.
Uh, you know, a five-year window with AI is just absolutely, phenomenally long.
And, you know, basically saying, you know, the stock is not worth that.
And these five-year deals that are being penned probably won't actually go
(09:02):
the distance, right?
That's basically what the analyst is saying.
So very, very interesting to see that, finally, somebody's calling out the bubble for what it is, kind of.
Chris (09:14):
Yeah, that's exactly what I was gonna say.
You know, just previously you were speaking of a bubble.
Um, this is kind of the manifestation of the bubble, right?
I mean, you know, obviously NVIDIA is in this camp too, um, kind of adding to this, but I mean, Jensen Huang is not a stupid person, but also he's not gonna go against the grain on this one, right?
(09:34):
If the path forward, you know, by the market, seems to be OpenAI and Oracle um doing this deal, there's money to be made for him no matter what, right?
So that's, uh, not a bad triangle to be a part of, for now.
But yeah, it just seems like there was a fallout with the relationship with Microsoft, so now we've switched
(09:54):
over to Oracle, but the facts are still the same at the end of the day.
It doesn't matter which, uh, you know, which big corporation is funding this; like, OpenAI is not even profitable, or not even uh speculated to be profitable, for another, what, five years, something like that.
So, maybe.
(10:16):
Um, yeah, maybe that's projected, right?
Um, but I'm sure we can run this fucker into the ground well before then.
Uh, oh yeah, no doubt.
Tim (10:25):
But it's funny, because, you know, they're burning this money, and how they're burning this money is promising it to, like, Oracle, and then Oracle doesn't have the capacity that OpenAI is buying.
They have to build it themselves.
And to build it, they have to go to NVIDIA and give them a hundred billion dollars to buy the infrastructure to
(10:46):
build the data center for OpenAI.
It's just ridiculous, this velocity of money thing.
Chris (10:52):
Maybe this will happen again with uh Google or something, after this relationship goes sour.
We'll see.
Tim (10:59):
It's truly ridiculous.
I mean, if you think about it, like, there's so much money being created from, you know... it's like Wolf of Wall Street style.
Like, it's a fugazi, it's fucking fairy dust, you know?
Chris (11:09):
All right.
Uh, next up we have an article here from CyberScoop.
Um, this is pretty big news, probably reported on by multiple um outlets here, but this is the article that we pulled up today.
But um, so Cisco has uncovered, um, just a few weeks ago, a new SNMP vulnerability um that allows attacks on IOS XE-based
(11:32):
devices.
Um, it looks like this flaw, which, I don't know if it calls out exactly what the severity of the CVE is, but um, there's several of them.
Yeah, right.
Um, but the flaw allows authenticated um remote attackers with low privileges to force targeted systems to reload, causing denial of service.
(11:52):
Higher-privilege attackers could execute arbitrary code with root-level permissions on affected Cisco IOS XE devices, effectively gaining complete control.
Um, so obviously, like, uh, I hate kind of coming in here and just reporting on CVEs and things like that, because this happens to all of us, you know.
But Tim and I both work in Vendorland.
We're both going to be coming across CVEs on a pretty
(12:14):
regular basis.
That's just the nature of the game.
Cisco has taken, you know, the approach of pushing out an immediate patch, and is urging people to patch quickly and uh effectively.
Um, the problem is, patching takes time, right?
So, um, and not everyone does it on time.
And uh, you know, hopefully, I mean, to be honest, let's
(12:36):
be completely honest here.
We shouldn't be exposing SNMP to the internet anyway.
Uh, so if that's the uh route that you've taken, then I would patch immediately, because, to be honest, you're already doing bad practices.
Um, so at least patch, and then maybe move that to a um more
(12:57):
modern uh monitoring capability.
But um, yeah, just wanted to call this one out, because it's pretty important.
So if you are um potentially affected by this, um, definitely patch, and please, please stop exposing SNMP to the internet.
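For what "stop exposing SNMP" looks like in practice, here's a minimal IOS-style sketch; the subnet, community string, and credentials are placeholders, and this is generic Cisco syntax, not remediation guidance specific to these CVEs:

```
! Hypothetical sketch: restrict SNMP to an internal management subnet
! (10.0.0.0/24 is a placeholder) instead of leaving it reachable broadly.
access-list 99 permit 10.0.0.0 0.0.0.255
access-list 99 deny   any log
snmp-server community Ex4mpleC0mmunity ro 99
! Better still, prefer SNMPv3 with authentication and encryption over
! v2c community strings (group/user names and passwords are placeholders):
snmp-server group OPS v3 priv
snmp-server user monitor OPS v3 auth sha StrongAuthPass priv aes 128 StrongPrivPass
```

Patching still comes first; the ACL just shrinks who can reach the vulnerable service at all.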
Tim (13:15):
Yeah, this one, there's actually like three or four of these um that were released at the same time, or just close to the same time.
But yeah, and they're all really around SNMP, and most of them are high severity.
But all of them are like, why would you, yeah, why would you find yourself in this situation where somebody could exploit this?
You know, if you've done any basic cybersecurity, you really
(13:39):
shouldn't have found yourself in this uh hole.
Not to make excuses, right?
I mean, a CVE is a CVE and needs to be fixed, but uh, all right.
Um, our next one has to do with Equinix.
Equinix just did uh what they're calling an inaugural AI summit.
And so they've released kind of a press release after that summit, talking about kind of the new features, functionality, and products that have come out of this summit uh that Equinix is offering.
So let's go through it.
Um, they're calling it distributed AI infrastructure to help businesses accelerate.
Um, they mentioned Fabric Intelligence; so Equinix Fabric, of course, is um kind of their uh automation platform where you can uh create virtual machines and connect virtual
(14:24):
machines to stuff that's in the Equinix rack, or to third parties, or to the internet, or to, you know, cloud providers, or whatever.
Um, they're adding what they're calling Fabric Intelligence, which is AI insights based on telemetry tools that are available uh to be connected to the Equinix fabric.
So they can do things like, you know, not only find the
(14:45):
insights; like, you can chat with your uh network, essentially, type of thing, where you can say, hey, where am I having problems, or something like that.
Um, but it also says here specifically that, you know, it can take corrective action, uh, around making the network responsive to demands and stuff.
So pretty interesting, if true, and, uh, you know,
(15:08):
let's see how it gets implemented.
Um, it also mentions an AI Solutions Lab at, quote unquote, Equinix Solution Validation Center facilities.
So I think what that means is that there's, like, certain Equinix facilities where they'll have this connectivity to global AI uh partners.
So this reminds me very much of the uh Megaport AI
(15:28):
Exchange thing that Megaport just came up with, with the idea of just kind of inference, or AI as a service, where they'll connect you to third-party providers, you know, like maybe with one of these many data centers that are being built to house AI workloads.
Um, so, interesting.
I don't know how much of that is out there yet, still, like these third-party providers that will give you AI
(15:50):
as a service, but they're building the capability out.
And again, Megaport already did something similar with the uh AI Exchange, so it makes perfect sense for Equinix to offer this.
Yeah, that's kind of the long and short of that one.
Anything to add there?
Chris (16:04):
No, yeah, I think I was um kind of with you.
I was gonna draw some correlations to the uh Megaport AI Exchange that they launched.
I think it was earlier this year, or even close to late last year.
Tim (16:17):
I think it was... yeah, I think it was this year.
Yeah, earlier this year, I think.
Chris (16:21):
Yeah.
Um, so uh, this sounds like, you know, Equinix is also heading in that direction.
It sounds like they've um not necessarily one-upped it, but obviously they have a few more capabilities that they've announced in this one summit as well.
Um, sounds like the Fabric Intelligence piece is available Q1 2026, so that's coming up next year.
(16:43):
And then the AI Solutions Lab, which we've kind of, you know, made um adjacent call-outs to the Megaport AI Exchange, is available now.
Um, yeah, the thing is, these things sound like big announcements, but at the end of the day, it's just kind of connectivity between, um, right, services.
(17:03):
Yeah, services, and um a service provider and a customer, which, you know, that's kind of Equinix and Megaport um together.
That's kind of their bread and butter, right?
It's building connectivity.
Um, the thing is, even at this point, um, there's only a certain caliber of customer that is actually
(17:24):
building their own AI-type applications and things like that, right?
So um, probably the kind of top end of town is who's gonna use services like this, which, uh, obviously, they're gonna pay the most money as well.
So that's good, for sure.
Um, but I just wonder how much this stuff is even getting consumed, versus, you know, some of it is vaporware at this point; you know, it's announcements of things that
(17:46):
haven't actually been baked into the platform yet.
But um, I wonder if this is actually getting consumed, because I would see adoption probably being rather low at this point.
Um, but that could hockey-stick at any point; but that's where it's at for now.
Tim (18:01):
Well, and I think it's easy for Megaport and Equinix to just, essentially, build the road, right?
It doesn't even really cost them much to do that, right?
And then they can offer it as a service for interested customers.
So for them, it probably makes perfect sense, right?
Chris (18:15):
Probably makes much more sense for Equinix, because they probably own the goddamn data center anyway.
So they're selling the data center space to the provider, and then just being like, yeah, we'll uh plug some cables in in the meet-me room for you and bring you stuff.
Yeah, it's probably relatively easy for them.
All right.
Sorry, I was getting my windows situated.
(18:36):
Okay.
All right, next up we have an announcement from uh Alkira.
So, if you're familiar with Alkira, um, they are a uh multi-cloud networking vendor, um, adjacent to um the likes of Aviatrix or Prosimo, etc.
Um, they have basically launched uh two products called Alkira MCP and Alkira NIA.
(18:58):
Um, Alkira MCP, you can kind of already determine what that is if you've um been somewhat plugged into the world of AI at all.
So they have an um MCP server that's basically allowing you to interact with their AI agents and talk to your network built on Alkira.
And then they have this other tool as well, called the NIA,
(19:21):
which is the Network Infrastructure Assistant, um, which, it sounds like those things probably go together.
I would imagine the MCP is how you would talk to the NIA, ultimately.
Um, but you know, Alkira is kind of building that multi-cloud backbone.
You know, they have their own cloud exchange points, or CXPs,
(19:42):
where they um interconnect with each public cloud provider and kind of build that backbone for you to build between uh cloud regions, or even cloud providers, um, and even into on-prem, uh, with the likes of their um hybrid connectivity offerings.
Uh, it points out, I'll quote this directly, but there
(20:03):
is a section in here from Alkira that says why this is different.
Uh, and it says: point tools give snippets, not the story.
Alkira pulls together everything, on-prem, multi-cloud, between regions, into one reliable picture, and lets your, sorry, lets your in-platform copilot or trusted AI assistants use it to help gain a faster path from observation to action to verification.
(20:23):
Now, you know, obviously, if you're using your own uh in-platform uh AI tool like Copilot, or any of your trusted AI agents, obviously this does give them a mechanism to talk to the network that you've built on top of Alkira, which makes perfect sense.
Um, you probably can gain a lot of insight from that interconnectivity.
(20:44):
Um, but I would question how much of that can be between on-prem, um, because, I mean, Tim and I worked for one of these vendors for a long time, so we have a lot of context around this.
You own most of the cloud network, but you may not own any, or much, of the on-prem networks.
(21:06):
And then there's even things like, with the introduction of um providers like Zscaler, or any SSE or SASE-type providers, they may also integrate into this, but that doesn't necessarily mean that those types of things are going to be exposed via the MCP, or via this network assistant that they're offering.
So I would question really to what uh extent you can go end to
(21:28):
end.
Um, I'm not saying there's no value in seeing uh what Alkira owns, um, from their platform, but I do question whether or not it can really go as far as they're claiming.
But we shall see.
Um, any thoughts from you, Tim?
Tim (21:44):
Nah, I think you nailed it.
I mean, that's the question.
The question is, how much can the NIA see, to provide the insights that you're gonna chat to it about?
And then the MCP, I mean, it's an MCP server, right?
So it's going to be exactly as useful as what you can write to do with it, right?
I mean, in theory, it should basically
(22:05):
work just like an API type of thing, where, you know, you can make changes, or connect to, or whatever you need to do with Alkira, using their MCP server uh instead of their API or something like that.
I just think of it like an AI API, at the end of the day.
So if you build an agent to make connectivity, or use Alkira in some fashion, now you've got an MCP to do
(22:26):
it.
You don't have to write the uh API stuff yourself.
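To make the "AI API" comparison concrete: MCP is JSON-RPC 2.0 under the hood, so an agent invoking a tool just sends a structured `tools/call` request. The tool name and arguments below are invented for illustration; Alkira's actual MCP tool names aren't documented here:

```python
import json

# Sketch of the request an MCP client builds when an agent calls a tool.
# "get_segment_status" and its arguments are hypothetical examples, not
# real Alkira MCP tools.
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as MCP clients do."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

req = make_tool_call(1, "get_segment_status", {"segment": "prod-vpc"})
print(req)
```

The point is that the server, not your agent code, owns the API plumbing; the agent only needs to pick a tool and fill in arguments.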
Uh, the last one honestly feels a little science fiction-y, uh, and it's a little over my head, but we're gonna cover it anyway, because it is really cool.
Uh, so Cisco is expanding its quantum networking portfolio with new software prototypes.
So uh, what that means is, and this is where things really
(22:47):
start to fall apart, so I've read this article a few times.
So they have a package of prototype software that will facilitate, quote unquote, distributed quantum computing networks and support real-time applications.
So the quantum lab designed a software stack that includes three layers, and I'm just reading this straight from the article, because I don't want to say it wrong.
An application layer with a network-aware distributed
(23:10):
quantum computing compiler that supports quantum algorithm execution in a networked quantum data center.
That means very little to me.
Uh, a control layer with quantum networking protocols and algorithms that support the applications and manage the devices that make up the quantum network, uh, through APIs; and then a third layer for device support, that's like an SDK and
(23:33):
API to the physical devices.
So basically, all I can get from this is that we're talking about a big quantum software development kit that, like, you know, goes from the application layer down to the hardware layer, and Cisco's making this available.
What's really cool about this is, so OutShift, which is, you know, essentially Cisco's kind of spun-out think tank, if
(23:53):
you will, uh, you know, for startups, announced a quantum entanglement chip.
So they're able to actually do quantum entanglement with photons, uh, via this networking chip.
And what this allows you to do, because quantum entanglement means essentially instantaneous state transfer between two entangled photons, is to distribute your computing in a
(24:17):
way that essentially there would be no latency in any kind of, like, network operation, you know, any operations between these uh networked quantum computers, which is crazy.
You don't have to worry about fiber or anything, apparently.
Like, can't get faster than entangled photons.
So, 200 million entangled pairs per second.
The chip operates at room temperature, uses minimal power, and functions using existing telecom frequencies.
The thing sounds like freaking magic.
Like, I don't know what else to say.
We're just talking about building data centers that are gonna suck up the ocean and burn down the rainforest, and these guys are talking about quantum computers that operate at room temperature; like, crazy stuff, right?
(24:59):
So, I mean, again, read the article.
I can't do it justice.
The level of just craziness involved in this uh new software stack is just really, really interesting, and what it might allow.
It's funny to see technology not converge, but diverge, just take two completely separate paths.
(25:20):
So here we've got AI doing what it's doing, just eating every resource on Earth to become smarter, faster, better.
And then we have stuff like this that's kind of taken the other track, where quantum computing is actually so efficient that you're almost, you know, you're not quite getting energy back, but, like, it's almost not using any.
So I don't know, what do you think?
Chris (25:38):
No, yeah, I mean, uh, like you, Tim, I am but a simple man.
So much of this article does go over my head.
Um, but the one thing that I do like that they called out was the importance of the error correction, um, to make sure that this is obviously accurate for uh transmission of, you know,
(26:00):
photons and things like that.
Because that's the thing that was, I think, the craziest thing: it just sounded like all the existing infrastructure doesn't need to change.
Um, maybe there's some chips involved here, but, like, you can use the existing fiber, etc., like you said; runs at room temperature.
Whereas, when we're hearing about these quantum computers today, it's like, oh, it needs to run at this temperature, it
(26:21):
needs to be in this contained environment, because the um errors that can be introduced by such a small variance in that type of stuff um could be critical, and potentially resource-intensive and wasteful if it, you know, has to rerun some of these calculations.
So uh, like you said, the proof is going to be in the pudding.
Uh, I think, you know, obviously there's a lot of detail in
(26:43):
here, and Cisco's probably skating towards the puck in a lot of these um types of scenarios.
Um, but um, I'm interested to see where this goes.
The thing that's crazy about this is, like, this is all related to, like, computation, right?
It's not necessarily related to the network.
Um, so it's kind of hard to grasp um what this means for us
(27:07):
as, like, network operators, right?
It's like, it just sounds like shit's gonna be moving really fast, like, all the time.
There's a lot of data moving really fast.
Um, so it's hard to kind of grasp what this means for where things are gonna go, but um, things are changing.
The only constant is change, right?
And I think that's uh definitely prevalent here.
Tim (27:30):
Yeah, I forgot to mention one thing that I thought was also pretty cool.
Um, another research prototype, this doesn't exist as such yet, but this idea of Quantum Alert, which is uh like a security feature that can tell, between two endpoints on a quantum link, if there's an uh intruder in the system, basically.
Because it's entanglement-based, apparently they can rapidly detect if somebody shouldn't be there,
(27:53):
essentially, if there's a thing in this that shouldn't be there.
So they've already talked about this, and they're talking about, like, um, you know, basically, by an intruder even entering, you know, the entanglement has been screwed with, and therefore it's detectable.
So, think about cybersecurity, the future, with quantum entanglement.
I don't even know what to say about this, right?
(28:14):
Like, anyway.
So this is really cool.
I highly recommend everybody read this article.
Don't skip out on it.
It's really, really interesting stuff.
Um, and with that, I think that does uh finish off our news uh episode.
Uh, thank you for sticking with us, assuming that you've made it this far, which, if you're hearing this, you know, you must have.
So, thank you.
Chris (28:35):
Unless you skipped to the end, because you just wanted it to end that much faster.
Tim (28:39):
Yeah, there it is.
Or you skipped to the last story, because this was probably the most interesting one.
So, anyway.
All right, everybody.
Well, thanks for sticking with us, and uh, we will see you next month with another news update.