Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Glenn Beckmann (00:12):
Hi everybody. Welcome to another episode of At the Boundary, the podcast from the Global and National Security Institute at the University of South Florida. I'm Glenn Beckmann, communications manager at GNSI, here today again to be your host for At the Boundary. Today on the podcast, it's all about our Future Strategist Program and the student-led conference they're hosting in a
(00:35):
couple of weeks here at USF. It's called the Cyber Frontier Summit. Lily Shores will be here, and she'll be talking with one of the featured speakers at the conference, Joe Blankenship, co-founder and chief data officer at Certus Core. Before we bring them into the studio, however, just a couple of quick notes. Our latest GNSI
(00:57):
newsletter drops this week. Featured in the newsletter will be our latest Decision Brief, a deeper dive into a topic we discussed on the podcast a few weeks ago: the military recruitment crisis. GNSI research fellow Dr. Guido Rossi digs a little more into that crisis and tries to formulate potential answers and solutions for military leaders. If you
(01:19):
haven't had the chance to check out the latest episode of our GNSI video series on YouTube, Dr. Mohsen Milani's book talk has become quite the attraction, 32,000 views and counting, as he discusses his latest book, Iran's Rise and Rivalry with the US in the Middle East, with GNSI faculty senior fellow Dr. Randy
(01:40):
Borum. If you haven't had a chance to look, we highly recommend you go over there. We'll drop a link in the show notes. Tomorrow, we're publishing another video episode as GNSI strategy and research manager Dr. Tad Schnaufer sits down with Dr. Maria Snegovaya, a senior fellow in the Europe, Russia, and Eurasia Program at CSIS, that's the Center for Strategic and
(02:02):
International Studies. Their conversation is a continuation of her appearance at Tampa Summit 5 earlier this month, where we examined the lessons for future conflicts arising from the Russia-Ukraine war. If you don't want to miss any of these episodes, we recommend you subscribe to our channel while you're there. Okay, as we told you earlier, in today's
(02:24):
podcast, it's all about our Future Strategist Program and their upcoming conference, the Cyber Frontier Summit. That conference is scheduled for April 15 at the Marshall Student Center here at USF in Tampa. Our team at GNSI has been helping support the conference, but the students of the FSP have been doing all of the heavy lifting, and man, they've built quite an
(02:47):
impressive event. On the agenda: panel discussions about AI in cybersecurity operations, zero trust architecture for critical infrastructure, quantum readiness for protecting data, securing the digital economy, and the intersection of cyber policy, strategy, and modern warfare. One of the most
(03:08):
compelling and rewarding aspects of this conference is that the students will lead and moderate all of the panel discussions. Then a student research poster event will also be featured during the conference. In addition, GNSI and Cyber Florida Executive Director, retired Marine Corps General Frank McKenzie, will be the keynote speaker. You can see the
(03:30):
complete agenda and list of speakers on our website. We'll drop a link to that in the show notes. One of the featured speakers in the "Securing the Digital Economy: The Future of Trust and Transactions" panel will be Joe Blankenship. He's the co-founder and chief data officer at Tampa Bay startup Certus Core. It's a software company started by veterans, dedicated to
(03:53):
changing the way people and machines interact with data. Let's bring him into the studio now, along with Lily Shores, an officer with the GNSI Future Strategist Program and one of the primary planners for this upcoming conference. She's working towards her undergraduate degree in international studies at USF and is an aspiring national security professional. I'll hand it over now to Lily.
Lily Shores (04:25):
Thank you so much for the introduction. We are welcoming Joe Blankenship, the Chief Data Officer of Certus Core, and we are going to talk about cybersecurity. So my first question will be: as the digital economy continues to grow, what are the emerging cybersecurity threats that businesses need to
(04:45):
be most vigilant about?
Joe Blankenship (04:46):
Oh, man, this is really a question and a really good topic for discussion, especially in 2025 going into 2026. We're two years into the Gen AI trend, and it's only going to move forward with more speed in the coming years. So I would say that when it comes to digital economies, specifically businesses, the cybersecurity threats that we have to worry about are the same old, same old:
(05:08):
human vectors. They're going to be the biggest things: malware, phishing, ransomware.
Systems, in terms of just their resiliency against cyber threats, have become much better, but it's still, you know, people clicking on the wrong things, getting into the wrong spaces, looking at the wrong sites. They're going to be persistently the main vectors through which, you know, threats occur. And once again, there are no easy solutions to that,
(05:30):
especially since AI is getting better at generating automated responses and getting these AI agents in the loop, where they are presenting new paradigms, essentially, in how people are going to be engaging with AI and how they can be tricked by it. So there are plenty of examples of, you know, cybersecurity firms taking big leaps with AI to figure out what the best placement is for them.
(05:52):
You know, ReliaQuest is a local company; they're doing that right now. But once again, it's early days. You know, it's roughly two to three years into Gen AI and LLMs and their utilization, and we're still trying to find out the best paradigms through which we can actually use these systems to help counter bad actors, black hats, malicious software. AI has presented new paradigms in how malware is
(06:15):
generated and ransomware is being coded and deployed, and it doesn't help that AI is producing more human-readable emails and other types of social media content that humans are engaging with now. So I don't know, from your perspective, like, what's been on the more, I guess, non-traditional routes, you know, social media, stuff like that, if
(06:36):
you've seen trends like that, but I know that within at least your traditional business, enterprise-type stuff, you know, spam emails, phishing, smishing, ransomware, malware, those kinds of things have proven to be persistently more tricky as time has gone on. So.
Lily Shores (06:55):
So how would you, like, could you expand on what you mean by the human vector part? Yeah, absolutely.
Joe Blankenship (07:02):
Once again, when it comes to enterprise security and people using information and communication technology infrastructure, emails, chat, content management systems, knowledge bases, their data infrastructure, you know, databases, data APIs, we're assuming that the humans that are in the loop, you know, IT professionals, data analysts,
(07:25):
non-technical leadership, are using technologies responsibly within, you know, the training guidelines for cybersecurity practices within organizations. And what I mean by human vectors is essentially those people and their interactions with the technologies that they're using inside the organizations, you know, their laptops or devices, even their cell phones. You know, we in many cases allow people to use their personal devices to, you
(07:48):
know, interact with and/or use, or, you know, gain access to business and organizational infrastructure. So the challenge there is: can you trust their personal devices? Do they have proper authorization and trusted infrastructure to know that their identity is being protected and essentially
(08:09):
delineated from personal stuff versus business stuff? So it's one of those things where it's easy for people to copy-paste from one thing to another thing. You know, it's just simple stuff that humans do to expedite doing their jobs fast and efficiently, to get the effect for leadership and organizational goals, but it also, as a byproduct, essentially kind of derails the security practices that were intended to
(08:32):
keep people from making missteps by trying to gain expediencies and effect elsewhere. So I know that in defense, I see people copy-pasting things into the wrong systems and accidentally transferring things between classification levels. Happens all the time. But even within more traditional enterprises, you know, firms like Deloitte and Accenture, it's still
(08:53):
very tricky, because they have R&D systems, they have more business enterprise systems, and they still need to maintain segregation for a lot of both data privacy and personal information, and techniques, tactics, and procedures. You know, there are a lot of things that you wouldn't think would be indicators or vectors for hackers to gain access to systems, or cybersecurity, you know,
(09:16):
vectors to essentially leverage what they do and how they do it to gain access to the systems, but it happens all the time. Bad, bad, bad password practices: using passwords that are just too simple, and the password systems and checkers themselves within the enterprise systems don't catch the simplicity, or don't catch that they can be brute-forced. So there are a lot of things in terms of just human activity that are still the main
(09:40):
threats for business infrastructure. And I think, within the context of the digital economy and businesses that are looking to be fully digitized, especially now that, post-COVID, most people are fully remote and use a mix of personal and company devices, these are just complicating factors in terms of the cybersecurity threats that a company can engage with or can incur as a result of even emergent digital economy paradigms. So,
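The weak-password problem described here is easy to make concrete. Below is a minimal, illustrative sketch of the kind of check a password system could run: estimate the attacker's search space from the character classes used, then ask whether that space fits inside a guess budget. The guess rate and time budget are assumed numbers for illustration, not any particular product's behavior.

```python
import string

def charset_size(password: str) -> int:
    """Estimate the attacker's search alphabet from the character classes used."""
    size = 0
    if any(c in string.ascii_lowercase for c in password):
        size += 26
    if any(c in string.ascii_uppercase for c in password):
        size += 26
    if any(c in string.digits for c in password):
        size += 10
    if any(c in string.punctuation for c in password):
        size += len(string.punctuation)
    return size or 1

def is_brute_forceable(password: str, guesses_per_second: float = 1e10,
                       max_hours: float = 24.0) -> bool:
    """True if the full search space fits inside the attacker's guess budget."""
    space = charset_size(password) ** len(password)
    return space <= guesses_per_second * max_hours * 3600
```

A short all-lowercase password falls within a day's guessing budget at these assumed rates, while a long mixed-class passphrase does not, which is exactly the simplicity that weak enterprise checkers fail to catch.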
Lily Shores (10:03):
So how would you balance between the need to keep, like, efficiency, but also trying to keep, like, the infrastructure secure? Oh, it
Joe Blankenship (10:12):
comes down to training, comes down to personal responsibility, comes down to making sure that people can be trusted with what they do and how they do it. But I'd say education is definitely the first step. Make sure you have something like KnowBe4; KnowBe4 has good educational and practical use cases for how enterprises train human beings to use their systems, while also enabling people that are
(10:34):
non-technical to engage with technical people inside the organization, people that are in charge of maintaining the infrastructure and all the technologies that go into how the company is run, so they can better communicate to them what they're seeing, how they're seeing it, and how they can best counteract malicious actors going against the overall enterprise infrastructure. Secondarily, it would be making sure that you
(10:55):
have good software in place on company systems and people's devices to make sure that certain actions cannot be performed within certain contexts of using applications for company use. That way, you're kind of helping keep personal and professional-type activities separate on devices, where you might be, you know, using a personal device. So
(11:16):
that'd be the second part. The other part is, you know, leveraging AI. You can't escape AI nowadays, so you're going to have to find creative ways to enable people to understand what is at high risk, using AI to enable them to understand the risk and to help them counteract risky activities within the context of both the digital economy and how businesses need to observe how people are, you know, kind of
(11:39):
working within these emergentparadigms.
Lily Shores (11:42):
So there are, like, a lot of concerns around AI, and a lot of people are actually scared to use it. So how would you introduce the use of AI to someone who is more cautious about implementing that in their, like, personal life or their work?
Joe Blankenship (11:57):
Oh, start with the basics. AI is not threatening. It's not, you know, it's not Skynet. We're not battling Terminators here. These are NLP algorithms that are very nuanced and trained on a lot of data to produce a very good effect: human-readable responses, good prompt engineering. But yeah,
(12:19):
it's starting with the basics, starting with, like, a focus on what they do and how they do it. You know, what are their business goals? You know, what's the vision and mission there? How does AI fill those gaps in the mission and vision goals that humans can't, or how can human activities be better enabled through AI? So I would, you know, start small, start with practical effects. You know, you don't need to, you know, conquer the world in one day with AI. It's one of those
(12:40):
things where it's going to be an incremental and gradual process for organizations to both learn about what it is and how it can be best leveraged for effect within their organizations. And
Lily Shores (12:50):
then couldn't AI also be used to attack cybersecurity systems?
Joe Blankenship (12:55):
Oh, absolutely. It already has, to my knowledge: the ability for people to prompt out malware and other bad scripts from AI access points, things like ChatGPT, even things like Anthropic. You know, Claude has been really fine-tuned to be a very safe LLM and a very safe AI. But even
(13:16):
with that, you can use prompt engineering to essentially trick these things into telling you exactly how to produce these malicious things that could be used to break people's security practices. Like I said, malware and ransomware are just two examples of scripts that could be generated from these LLMs and deployed within a matter of minutes if someone wanted to. It's one of those things where I think it's twofold. It's one, on
(13:38):
the ability of human beings to interact with these things in an ethical manner, but also a heavy lift on the ChatGPTs and, you know, the Facebooks and the Anthropics of the world to really constrain what those foundation models do and how people can leverage those foundation models within ethical boundaries of utilization, without keeping them from being
(13:59):
innovative and creative with those LLMs at the same time. Which is, like I said, early days. So there are no good answers to that yet. True.
Lily Shores (14:07):
So when it comes to, like, protection, as you said, how would, like, blockchain technology be integrated into cybersecurity solutions to protect the digital economy, and, like, what are some specific sectors where it has already shown promising results?
Joe Blankenship (14:23):
Oh, man, that's, yeah, once again, good questions. I would say, just initially, blockchain and decentralized technology writ large. Keep in mind, blockchain technologies are an ensemble of a bunch of different technologies pushed together for an effect. But I would say the two big things that I would think of right off the top of my head would be data protection and transaction verification. I
(14:45):
think those two things have been shown to be viable and very useful in the context of decentralized technologies, like blockchain technologies, in more conventional and traditional business use cases. Specific sectors that I think have benefited, or are going to benefit probably most from this, are going to be your supply chain management and vendor due
(15:06):
diligence. I think, in terms of organizations knowing who to do business with and how those organizations have done business in the past with other organizations, it is going to be increasingly important for people to do safe things with other safe organizations, to avoid hacks like Target experienced, like Sony experienced, and to move towards, you know, maybe broader conversations on
(15:27):
how data is becoming, like, essentially more and more critical as it's being used to train and fine-tune AI agents, but also how just basic digital economy practices work. You know, I buy this, it was spent here, they're tracking these transactions, and better solidifying a chain of responsibility and a chain of
(15:50):
provenance and lineage on what is happening inside the digital economy, and how blockchain helps you best preserve that canonical knowledge and protect against malfeasance within those transactions over time, people going back there and changing records and so forth. But I would say, with supply chain management as well, we can see, and I guess this kind of leads into the
(16:12):
regulatory stuff as well, that nation-state actors, governments around the world, you know, they're leveraging things like sanctions, like tariffs, things like that. And it's becoming more and more important to track the effects of these things, both in terms of the broader macroeconomic picture, looking at how the digital economy is being affected and how it's affecting larger
(16:35):
markets, but also how that's affecting smaller spaces of the economy: just-in-time manufacturing, logistics, multimodal transportation, container shipment, stuff like that. And when it comes to not just the vendor due diligence, looking at what organizations and people have done with other organizations, but looking also at how that affects things like supply chain management, how that affects broader macroeconomic effects
(16:55):
through smaller economic shifts, in terms of how the physical world meets the digital. So I think blockchain technologies, and I guess decentralized technologies writ large, are going to be critical in gaining insights on how those things are done, both practically and, I guess, more safely in the long term.
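The tamper-evident record-keeping described here, a canonical record that protects against someone going back and changing records, can be sketched as a minimal hash chain. This is an illustrative toy, not any production blockchain; real systems add distributed consensus and much more on top.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records so each block commits to everything before it."""
    chain, prev = [], GENESIS
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every link; any retroactive edit breaks the chain."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Changing an early record invalidates every later hash, which is the "canonical record" property being described: auditors can detect malfeasance without trusting the record keeper.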
Lily Shores (17:16):
So when you say, like, decentralized economy, like, what do you mean specifically? Because I feel like a lot of people will hear those words and, like, hear blockchain, but they don't necessarily know, like, what it actually means, or, like, the goal of this technology and why it's so important as we move forward. Yeah.
Joe Blankenship (17:34):
Yeah, absolutely. So once again, this would be my own terminology; I'm sure there'll be feedback from folks in the audience. So, centralized: if you think about centralized economies, these are things that are regulated by nation-states, or states writ large, at different scales of state government, mainly because there's an implication of taxation, an implication of regulatory purview over what
(17:56):
happens within certain scales and political boundaries of an economy. A decentralized economy or a decentralized technology would be something that kind of supersedes that: Bitcoin can be interacted with across national borders in a way that does not require states or regulatory actors to be inside that loop. So I think, as a baseline, that was the promise of Bitcoin and
(18:19):
other cryptocurrency technologies, and that's still the promise of things like Ripple and things that have been promoted as decentralized technologies that don't focus so much on the cryptocurrency aspect of it, but focus on the decentralized canonical preservation of knowledge of how transactions occurred over time. So if you look at it from that perspective, then you have something that is a better footprint for tracking and
(18:40):
keeping safe transactional knowledge and information, while also keeping people's information transparent at a scale that, you know, protects personal data privacy but also encourages purview in a way that's not invasive, I guess that would be a good word for it. So yeah, I think that would be the distinction there, and kind of the implication of using decentralized, or going decentralized, versus more
(19:02):
centralized, or something that's more controlled in a central regulatory
Lily Shores (19:06):
manner. And I'm just wondering how, like, with blockchain technology, how that interacts with, like, AI, and how, like, those two technologies, since they're both emerging, like, how they interact with each other, or almost, like, counteract one another.
Joe Blankenship (19:20):
Ooh, okay, so working with and working against. So once again, my knowledge as of recent is not very deep, but I do know that they, like any other technology sector, are looking for best practices, and they're looking for the most practical ways to apply these things over time. In application spaces where it's
(19:43):
more about the cryptocurrency, more about leveraging the digital economy aspects of it, I think it has been more minimal, in the fact that they use it for the same things that most people use it for, at least from the technology standpoint. You know, it's co-piloting of code and producing stuff like, you know, decentralized contracts, but also summarization, understanding broader effects within those ecosystems, and how
(20:05):
people can, from a non-technical perspective, better engage with more technical aspects of those economies and those communities of practice. But against, I would say, I think it'd be people running too fast to a finish line with the assumption that AI is going to stop or bridge a technological gap that still has no clear answer from a human perspective,
(20:27):
and then people rush to do something with an LLM or AI agents like that, which only further complicates the initial challenge. And I would say, as just a segue off of that, at least from my personal experience of building technologies that enable better artificial intelligence, better generative AI, better LLM utilization, there's, I think,
(20:47):
a presupposition with most customers in that space that AI is going to magically solve some kind of problem that they don't have a clear way to contextualize themselves, like within an organization, whether it's an NGO, a nonprofit, a for-profit business organization, or a digital economic segment writ large. If an organization cannot clearly describe or
(21:09):
contextualize an issue within their vision, mission, and goals, or they have no way to clearly connect an objective they have to a practical methodology through which that objective can be realized, an AI agent won't really help with that. It will, from an initial prompting point of, like, hey, what do I do? It can give you a general lay of the land topically, based on semantics and syntax, like what you're kind of looking for,
(21:31):
but the due diligence is still on the human being to bring the technology and the practical aspect of what they want to do and how they want to do it to the table. You know, you can't really depend on AI to do that for you. AI is a great force multiplier. It can give you a lot of insights from a lot of different corpora of information. But I would say that there's going to be an increasing trend,
(21:54):
increasing feedback from markets, that AI isn't going to be the panacea that solves everything. It's going to be something that has either made their lives a lot easier because they kept something small, or that's made their lives more complicated because they went too big too fast, and now they have to bring in more human beings with technical backgrounds in AI engineering, data engineering, data science,
(22:14):
to then readdress things that were complicated through leveraging AI in a way that was unintended, or one that does not produce consistent or rigorous results as an output of those processes. So once again, still early days. I think we'll see a lot of feedback, especially moving from 2025 to 2026, because it seems like this year people are now realizing, like, okay, LLMs do
(22:37):
have a lot of limitations. You know, they require a lot of fine-tuning, a lot of RAG and GraphRAG processes to better train these foundational models with your information to make your responses more fruitful. So we're getting to a point where having the ocean isn't enough. You just need a little tiny pond of your specific knowledge to help you better direct knowledge to your solutions. So I hope that
(22:58):
answered the question.
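The "tiny pond of your specific knowledge" idea is essentially what RAG (retrieval-augmented generation) does: retrieve the documents most relevant to a question, then ground the model's prompt in them. Below is a toy sketch using simple bag-of-words cosine similarity in place of the learned embedding models real RAG pipelines use; the function names and prompt wording are illustrative assumptions.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k documents most similar to the query (the 'tiny pond')."""
    qv = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list) -> str:
    """Ground the model's answer in retrieved context instead of the whole 'ocean'."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The retrieved context is what gets sent to the LLM, so the model answers from your specific knowledge rather than its general training data.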
Lily Shores (23:03):
I think it did. There were some, like, technical words I didn't necessarily understand. So when you say, like, LLMs, like, what are you
Joe Blankenship (23:12):
LLM, or large language model. Okay, so something like ChatGPT from OpenAI: that API gives you access to a large language model. Essentially, it's a next-word prediction model based in natural language processing algorithms. You essentially give it a prompt, and it gives you the most human response based on an
(23:33):
expert prediction. So now, once again, there's a lot of variety in those things. There are also open source models, so Llama from Facebook: they have an open source model where you can download it and build it yourself. There are ones that are privately owned, like ChatGPT, and I believe Claude from Anthropic is still closed source, I forget. But, yeah, large language models:
(23:54):
when people talk about AI nowadays, they're talking about large language models.
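The next-word prediction idea can be illustrated with a toy bigram model. Real LLMs learn vastly richer statistics with transformer networks over huge corpora, but the interface, context in, most likely continuation out, is the same; the tiny training sentence below is purely illustrative.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus: str):
    """Count which word follows which; LLMs learn far richer versions of this."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word: str) -> str:
    """Return the most frequent next word seen in training."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else ""

model = train_bigrams("the cat sat on the mat and the cat slept")
# In this corpus, "cat" is the most frequent word following "the".
```

Chaining `predict_next` on its own output generates text, which is, in miniature, what an LLM does token by token when you give it a prompt.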
Lily Shores (24:00):
Okay. And then there was a point that you brought up where, like, you have to bring in more people that have, like, a technically trained background. But wouldn't that also open up more, like, human-error mistakes? And how could those mistakes be training the AI, and, like, how does that all connect, in, like, how bringing in more humans to train
(24:25):
AI can also open it to having more human error within the AI? Absolutely.
Joe Blankenship (24:31):
Yeah, it's a vicious cycle. And it's just, you know, you think about old software paradigms from DevSecOps: people build software, the software presents a new security issue, they have to solve that security issue, they refactor and redeploy the software, and then hope they're not repeating the same mistake twice. Like I said, AI is just going to exacerbate that. If you don't know how to use it, if you don't constrain its use, then you're going to, yeah, you're
(24:52):
going to open it to more errors. The humans are going to come in there. The human injection will provide more solutions, but it will also create new problems. But that's the way technology has worked at large for many, many decades. So it's one of those things where it's not really AI; it is really just a human loop. And once you get past the initial questions of, like, what are the major, you know, cybersecurity
(25:13):
issues? Well, it's still going to be a human loop. And once again, decentralized technologies solve a little bit of that, but, once again, it also leaves the door open to new paradigms of risk that we just did not account for when the previous technology didn't exist. But overall, it's a good process, and it's a good process to be engaged with, because overall, if you solve old problems, you get new problems.
(25:34):
That's still progress. It's constructive progress, so long as you're leveraging technologies to constructive effect within organizations. So within the digital economy, leveraging AI for cybersecurity effect, it's good that you're finding out new issues, like new zero-days, new things like that, new malware paradigms. You know, it's one of those things where it's only going to help people get better at security
(25:55):
practices, and it's going to help improve cybersecurity as a community of practice moving forward. It's just, once again, early days, where people are learning, like, oh, these types of LLM and AI applications, you know, solve this, but they exacerbate these other things. So like I said, overall it's a good process. But yes, to your point, it is very, very tricky to bring in a human loop that may have no pre-existing knowledge of the previous issue
(26:16):
and then reproduce that issue in a way that was not intended the first time around. So yeah, that will happen for sure.
Lily Shores (26:21):
Yeah, it was just an interesting, like, point I noticed as you were talking: even as they train the AI to be better, like, is the training empty of errors, type of thing. So I was just very interested in that. And then, a point that you brought up earlier about, like, regulatory frameworks: what
(26:43):
regulatory frameworks do you believe are necessary to secure the digital economy, especially as technologies like AI and blockchain continue to evolve?
Joe Blankenship (26:52):
Oh, man. So I am one to say that we probably don't need more regulation, faster. I guess a better way to explain that would be: at least in the history of computer technologies over the past four or five decades, the pattern with regulation is that governments often rush to
(27:13):
regulate stuff too quickly, before they understand the implications of how the technology actually works in the first place, whether it was the CFAA, or SOPA and PIPA, or, you know, GDPR. It's like governments rush, and rightfully so: they rush to protect individual data privacy and, overall, you know, to control how bad actors can actually gain
(27:34):
access to these things, and to make sure citizens are protected before bad actors actually compromise, you know, their overall day-to-day lives. You know, a hack of your personal information could lead to a credit score issue that completely destroys your credit history for getting a home loan, you know, stuff like that. And governments are concerned about that, because that affects revenue, affects taxation,
(27:56):
affects a lot of stuff in terms of centralized regulatory systems like a state, you know, state government, national government. So when it comes to the digital economy and kind of understanding how to regulate and what to regulate, I think, once again, it's not very clear how AI is going to affect that, or how blockchain should be regulated. Once again, blockchain is a decentralized technology, so there's a question of, even,
(28:17):
should that be regulated in the first place? Especially since the goal of blockchain technologies like Bitcoin in the first place was to be completely removed, you know, from regulatory purview. You know, the chain was supposed to be the regulatory mechanism, using consensus algorithms like proof of work to engage and control how the canonical record was produced and how it was maintained.
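The proof-of-work consensus mentioned here can be sketched in a few lines: miners search for a nonce whose hash meets a difficulty target, and anyone can verify the result cheaply. This is a toy illustration of the mechanism, not Bitcoin's actual block format; the data string and difficulty are assumed values.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose hash has the required leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def is_valid(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Verification is one hash, which is what makes open consensus workable."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry, expensive to find, trivial to check, is what lets the chain itself act as the regulatory mechanism: rewriting the canonical record means redoing all the work.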
(28:39):
However, beyond a very basic scale of operation, that becomes very tricky very quickly. You go from something like Bitcoin to Ethereum; Ethereum has a much broader application space with decentralized contracts, and those contracts have a broad number of application spaces. But, to your point, once you bring human programmers into the loop producing those contracts, you open up a lot more risk paradigms for people who have commodities or cryptocurrencies on those systems. It then only becomes more complicated when
(29:03):
you have nation-states like China who want to produce digital currency systems, and they want to use those things, or leverage them, within a global economy. A global economy that, in my opinion, when it comes to the definition of the digital economy, is almost 100% digital: almost every currency you can think of has a digital footprint. Every economy, at least from a macroeconomics perspective,
(29:24):
has a digital, you know, representation somewhere within a national or regional government regulatory system. So when it comes to, like, you know, how do we use AI and blockchain to, kind of, you know, control these things while they're also kind of evolving themselves: very, very murky waters. I think it's just way too early to kind
(29:49):
of determine, outside of, like, basic data privacy and interoperability. I think it'd be very hard to say how we can constrain these systems in a way that is safe for the people using them but is also constructive for people trying to apply them to accelerate good things inside the digital economy. You know, everybody wants faster access to commodities for a cheaper price.
(30:11):
And every time we try to leverage technology, that's kind of the goal. That's been kind of the Industrial Revolution goal, you know: we build a technology to bridge gaps, and that accelerates things, reduces costs, specializes labor. But at the same time, you're also talking about shifting labor. You shift labor skill sets, which take time to retrain. That means that there's a lowering in terms of
(30:31):
labor access and different types of jobs, which affects salaries, which affects overall market penetration and market capabilities for people who have those kinds of jobs. So there are these weird cascading effects and, you know, ebbs and flows of the economy when you're talking about AI and blockchain, and how those things can be both regulated or can help regulation. Because I think there's a lot in
(30:52):
terms of decentralized ledgers that can help regulatory systems. When it comes to keeping track of who votes on what, or what was voted on and what kind of digital economy impact it had, it'd be nice to have a canonical record of that. And I think it would help people understand which pitfalls not to repeat the second time around when it comes to implementing regulation on a grand scale. So I think with
(31:15):
that, plus AI giving people more generalized, layman's access to the highly technical legalese inside regulatory systems, I think that's another benefit of AI in terms of regulation and how we can better engage with regulatory processes. I think the average citizen in most countries has a really bad time getting good enough access to the people who represent them in government and
(31:36):
understanding what they're doing and how what they're doing is affecting them individually. I think that's something that AI, in terms of regulation and access to regulation, can really help with, because AI can give you, like, the one-paragraph, simple answer to things, and really help you become a more engaged citizen, I think. But it'll do it in a way that helps you with data privacy and with integrating your concerns
(31:57):
into a broader conversation as well. So there are those intrinsic, interstitial spaces that blockchain and AI also help with, which aren't immediately clear when people are talking about regulating these technologies and keeping them safe, so people don't run too fast, too far with these things and break a lot of things. But I also think, if you go beyond the
(32:21):
digital economy, you know, governments loom very large in terms of regulation, because they're the main regulatory bodies for anything. They have a lot of legacy systems that they deal with, and a lot of them are based heavily in physical reality. You know, it's like old infrastructure for electricity and water. All these systems have very
(32:43):
antiquated technology connections. AI will expose those a bit more, because we've done a really good job of writing up what's wrong around regulatory systems without fixing it. So when it comes to cybersecurity around blockchain and AI, to bring it back to the original, you know,
(33:04):
cybersecurity focus, it's that it will exacerbate hacks that you don't expect, and there'll be hacks based on older technologies. You know, if we have dam systems and other important systems that are based on old 19th- and 20th-century technologies, and we've written about all the issues with these port systems, all these supply chain
(33:24):
management systems, all these vendor due diligence systems, and we haven't put the actual technology offices in place, then AI is going to expose what we definitely know we have gaps in but have no solutions for. Which means we're in a race, essentially, with bad actors in the cybersecurity space to fix these things, because AI is really helping them more than helping us at this point. But there's also opportunity there for us to find those things first and really
(33:47):
address them and actually produce good regulatory artifacts, you know, laws and so on, that help us bridge those gaps and fix those issues. So once again, it's one of those things where AI and decentralized technologies are really ramping up people's awareness of things that we have neglected over time. And to me, it's hopeful, because it
(34:09):
does give you a chance to really fix things.
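The "canonical record" Joe describes, an append-only log where each new entry commits to everything before it, can be sketched as a minimal hash-chained ledger. This is a hypothetical illustration, not any production blockchain; the entry fields and function names are invented for the example:

```python
import hashlib
import json

def entry_hash(entry):
    # Canonical JSON serialization (sorted keys) so the hash is reproducible.
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append(ledger, action, voter, vote):
    # Each entry records the previous entry's hash, linking the chain.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"action": action, "voter": voter, "vote": vote, "prev": prev}
    entry["hash"] = entry_hash(entry)  # the hash covers the prev-link too
    ledger.append(entry)
    return ledger

def verify(ledger):
    # Recompute every hash and prev-link; any retroactive edit is detected.
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, "Digital Asset Rule 1", "Alice", "yes")
append(ledger, "Digital Asset Rule 1", "Bob", "no")
print(verify(ledger))     # True: the chain is intact
ledger[0]["vote"] = "no"  # retroactively alter the record...
print(verify(ledger))     # False: tampering breaks every later link
```

Because each entry's hash covers the previous entry's hash, changing any historical vote invalidates the chain from that point forward, which is what makes such a record tamper-evident and "canonical" in the sense used above.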
Lily Shores (34:13):
Well, that was very interesting. Do you have any final thoughts or points that you wanted to touch on that we haven't gotten the chance to yet?
Joe Blankenship (34:21):
I mean, at this point I'm beating a dead horse, but it's one of those things where, you know, with emergent technologies, there's a lot of potential there. They were created for a reason, and those reasons weren't malicious in any way, shape, or form. Technologists just build technologies for cool stuff. It's always the second stage. In cybersecurity especially, cybersecurity professionals, hackers, they all see
(34:43):
technologies as: it's a system, how can I break it? If you look at it in terms of it being just another system on top of systems, then you can start to see both sides of the arguments. You see all sides. I wouldn't say false dichotomies, but it's critical, especially in terms of the digital economy, because when things are digitized, they're moving at the speed of an electron, you know,
(35:05):
through wires, cables, fiber optics. You really have to think about the implications of deploying these technologies before they're really matured. But then again, the question is: what is maturity at this point? Is it to the point where you can ask it questions, or is it where you can get the same answer back twice based on different prompts? You know, it's still early days. I
(35:25):
think people should be conservative in their estimations of how to leverage these things, while also not cutting off the potential for them to be used in a broad array of solution sets. And that means constraining what we would do regulatorily, to allow cybersecurity professionals to really engage with these things on a practical, fast level, where they can really address zero-days, address these
(35:46):
emergent issues with LLMs and Gen AI, to help better secure the digital economy and businesses writ large: how humans in the loop may affect, or be affected by, these things, but also how infrastructure in and of itself can be better secured against these kinds of activities. And like I said, decentralized technologies are a good part of that. Data privacy and interoperability are something that's
(36:09):
gonna be continually addressed as we go along, from year to year, especially into the 2030s. If these models continue to gain speed and continue to gain applicability, they'll only become more and more embedded in your day-to-day life. You know, the Siri of yesterday is going to be replaced by a Gen AI agent that is much more tailored to your experience on your smartphone than the one of a year ago, five years ago. So it's going to
(36:30):
be great, because it's going to give you access to all your stuff, and it's going to know exactly what you do and how you do it, and it can give you better advice on how you've done what you've done. But it's also going to be a repository of all of your individual knowledge that could then be copy-pasted onto someone else's system to replicate a digital version of you. So I think, moving forward, we need to be very cognizant of how AI is embedded in our day-to-day lives, and how it can
(36:53):
be used to enable the best parts of it, while also constraining exactly how a malicious actor could leverage it for something negative, and how we could address the regulatory aspects of that in a way that doesn't cut you off from using the technology in the first place.
Lily Shores (37:09):
Okay, well, that is a great point to end on. Thank you again so much for joining us today.
Joe Blankenship (37:13):
Yeah, absolute pleasure. Thank you.
Glenn Beckmann (37:24):
There you have it: a conversation between Lily Shores of our GNSI Future Strategist Program and Joe Blankenship, co-founder and Chief Data Officer at certiscorp, a Tampa-based startup. A special thanks to both of our guests today, and we're really looking forward to hearing more from Joe at the upcoming Cyber Frontier Summit, a student-led conference on the Tampa campus of USF on April 15.
(37:48):
There's no cost to attend, but registration is required. You can find more info in the show notes. Next week on At the Boundary, our special guest will be Dr Zachary Selden. He's currently an associate professor at the University of Florida. Previously, he was the director of the Defense and Security Committee of the NATO Parliamentary Assembly and the
(38:09):
author of a book called Economic Sanctions as Instruments of American Foreign Policy. That book will be the primary focus of our conversation with him next week on the podcast. Thanks
for listening today. If you like the podcast, please share it with your colleagues and your network. You can follow GNSI on our LinkedIn and X accounts at USF underscore GNSI, and check
(38:32):
out our website as well, usf.edu/gnsi. Or you can also subscribe to our monthly newsletter. That's going to wrap up this episode of At the Boundary. Each new episode will feature global and national security issues we've found to be worthy of attention
(38:53):
and discussion. I'm Glenn Beckman. Thanks for listening today. We'll see you next week at the boundary.