All Episodes

March 10, 2025 • 39 mins

Text the ATB Team! We'd love to hear from you!

🎙️ AI, Cybersecurity & the Future of Computing—A Must-Listen Episode! 🤖💡

Professor John Licato joins guest host Glenn Beckmann on the "At the Boundary" podcast to discuss groundbreaking AI advancements and the launch of USF’s new College of AI, Cybersecurity, and Computing. Dr. Licato breaks down the impact of AI models like DeepSea and Lucy, the growing potential of quantum computing, and the critical work of Actualization AI in ensuring AI privacy and accuracy.

Don’t miss this in-depth conversation on the future of AI and its ethical challenges. Listen now! 

At the Boundary from the Global and National Security Institute at the University of South Florida, features global and national security issues we’ve found to be insightful, intriguing, fascinating, maybe controversial, but overall just worth talking about.

A "boundary" is a place, either literal or figurative, where two forces exist in close proximity to each other. Sometimes that boundary is in a state of harmony. More often than not, that boundary has a bit of chaos baked in. The Global and National Security Institute will live on the boundary of security policy and technology and that's where this podcast will focus.

The mission of GNSI is to provide actionable solutions to 21st-century security challenges for decision-makers at the local, state, national and global levels. We hope you enjoy At the Boundary.

Look for our other publications and products on our website publications page.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Glenn Beckmann (00:12):
Hi everybody.
Welcome to another episode of atthe boundary, the podcast from
the global and national securityInstitute at the University of
South Florida. I'm GlennBeckman, communications manager
at GNSI, and your host for atthe boundary today on the show,
one of our favorite guestreturns as Professor John

(00:32):
licatto joins us to talk aboutall things artificial
intelligence. I know it'sdifficult to believe, but a lot
has changed in the AI worldsince the last time we had John
on the podcast before we starttalking with him. However, I
want to say thank you from theentire GNSI team to everyone
involved in last week's GNSITampa summit five, the Russia,

(00:54):
Ukraine war lessons for futureconflicts. The conference was a
smashing success again, and ourkeynote speakers, retired Marine
Corps General Frank McKenzie, aswell as John Kirby USF, alumni
class of 85 go Bulls and formerWhite House National Security
communications advisor, wereexemplary, as were all of the

(01:15):
speakers and experts who spenttime With us, sharing their
insights analysis andexperiences. We're grateful to
all of the people who attendedTS five, as well as to the
groups involved with theconference, the USF Institute
for Russia, European andEurasian Studies, Iris, the
College of Arts and Sciences,us, CENTCOM, and also the
research one program, part ofUSF research and innovation and

(01:38):
a very special thank you to Irisdirector, Dr golfel Alex
Apollos, who played an integralrole in developing Tampa summit
five. If you weren't able toattend last week, the videos
from the conference will beavailable this week on our
YouTube channel. When yousubscribe to our channel, you'll
be notified when those summitvideos are online, as well as
any other new content we create.

(02:02):
Speaking of content, the bestway to keep up with everything
going on at GNSI is to follow uson our socials, at USF,
underscore, GNSI on YouTube andx at USF, GNSI on LinkedIn.
Okay, it's time to bring in oneof our favorite guests of the
podcast, and he's certainly beenthe most frequent guest. What is
this? John,

Dr. John Licato (02:23):
five No, second or third. No, no, no. We

Glenn Beckmann (02:27):
had you on, and then you were on with Norma, and
then also two Times with Craig.
Yeah.

Dr. John Licato (02:35):
Craig might tell Yeah, right, yeah, the two
parter, yeah. So

Glenn Beckmann (02:39):
I think, but anyway, but this is the first
time we've had you on whereyou're appearing as a big, big
star. Oh yeah. So to betterexplain, John is one of the
stars of a new University ofSouth Florida marketing campaign
the next phase of the Be boldcampaign for USF this time
around, be bold is focusing onthe heroes all around us here on

(03:01):
the USF campus, students,faculty, staff everywhere. So
John was chosen to be part ofthis campaign. So give us the
story. Did you answer a castingcall? How did they find you?
Give us

Dr. John Licato (03:13):
I got, I got nominated, and, you know, I
guess they figured out that, youknow, I do a little bit of AI
work, and have talked about inthe past. So, you know, for lack
of better choice, I suppose theyfilm me. And, yeah, I was, I was
really honored to be chosen forit. And, you know, it's, it's a

(03:35):
really exciting initiative thatthat I'm able to talk about. So,
you know, it's Be bold campaign.
I guess. The idea is that thisnew college that we're
establishing, College of AI,cyber security and computing,
right is is really a courageousmove to take right, because

(03:56):
putting AI as the first word inthe college's name in the middle
of you know, what some mightdescribe as a hype wave. But I
think those of us in the fieldknow that it actually is a real,
lasting technological advancethat, you know we can't, we
can't ignore that's affectingevery single field of study,
every single job on Earth,right? We're still talking about

(04:19):
it right in this today'spodcast, we're going to talk
about it. So I think it's a boldmove to try to take leadership
in this field and to do so forthe state and be one of the
first in the country to set up acollege like this. So I think
the marketing campaign is reallyset up to make that clear that
you know what we're doing hereat USF is not an easy move, and

(04:39):
it's a

Glenn Beckmann (04:43):
little bold, yeah, no, it's fantastic. And we
were talking before we startedrecording, and I'm really happy
to hear that the acronym for thenew college cake is getting some
traction, although we're notsure how long maybe.

Dr. John Licato (04:57):
Yeah, I do like the acronym cake. Caicc, the
name might change. We'rehopefully going to find out news
about a potential sponsor nextweek. So we're all really
excited to find out what's goingto happen, and things are going
to change really quickly overthe next few months. You know,
we're expecting to officiallylaunch the college in the fall,

(05:19):
and that's that's not too faraway. So,

Glenn Beckmann (05:21):
right? And I think if I, if I remember
correctly, you're, you're goingto begin offering a an
undergraduate degree inartificial intelligence, right?
That's

Dr. John Licato (05:30):
right, yeah, we're in the stages of planning
everything out, you know,getting all the curriculum
approved, creating all theelectives for it. Initially,
it's going to spin out from thecomputer science degree that's
already existing, you know,which already has a lot of AI
electives that the faculty havebeen teaching. But, you know,

(05:50):
we're really hoping that it'sgoing to turn into its own
thing. It's going to be one ofthe few AI centered degrees in
the country. And just anotherexample of how we're trying to
take leadership in this

Glenn Beckmann (06:02):
field. Yeah, well, and obviously, you know,
GNSI is all about nationalsecurity, but what we've
emphasized over the couple ofyears we've been around is that
national security takes on manydifferent forms. It isn't just
military, it isn't just thethree letter agencies in
Washington, DC. It's all kindsof different things. You know,

(06:22):
we've heard talk about thestories of, I believe China is
trying to develop AI hospitals.
And then I know that couple ofmonths ago, you were, you guys,
part of the the agreement withUSF Health for the AI, the voice
recognition research thatthey're going to try to use to
diagnose patients with, usingvoice and AI,

Dr. John Licato (06:45):
oh, yeah, no, I'm not directly involved. But,
you know, I know the people thatare working on, yeah, good
researchers,

Glenn Beckmann (06:51):
yeah. So it's, it's really fascinating to see
where it all goes. And just alittle bit that I personally
have been involved in, just, youknow, just using little chat
gpts and things like that. It's,it's, it isn't, it's not going
anywhere there. There is nodoubt about that. So since the

(07:11):
last time you were on thepodcast, there have been some
big developments in the AIspace, and that's not a
surprise. I guess the two thatcome to mind immediately are
deep seek and Lucy, and we cantalk about each of those a
little bit more deeply. But forany, for any of our listeners
who are unaware of those twothings, can you kind of give us
a little brief description ofthose two programs? Sure,

Dr. John Licato (07:35):
yeah, I'll start with deep seek. So this,
this made the news a coupleweeks ago the the company that
created deep sea, because theChinese company, and, you know,
just kind of set the context theGPUs, graphics processing units,
which is essentially thehardware that does a lot of the

(07:56):
heavy lifting For training deeplearning systems, right? Is our
export restricted? So we're, youknow, we're limited on how many
of those we can export to China,for example, right now, the
company behind deep sea is aChinese company, and they came
out with this version of theirLLM, which called Deep Sea r1

(08:22):
December, late December, orsomething, something like that,
and it was able to show animprovement on a bunch of
benchmarks of reasoning, evenbeyond the performance reported
by some of the best you know,American models that we know of,
right? Open a eyes, models andsome of the open source stuff,

(08:44):
right? And that caught everybodyby attention, because, you know,
they wondered how, how did they,how did they know how to do
this, right? How did they, youknow, because open AI, for
example, is is fresh off theheels of these many billions of
dollars of investment. And partof the way that they got that
investment was they made thecase that we need to make

(09:06):
language models really big. And,you know, we at open AI, have
this, this lead over everybodyelse. We have technology that's
so far ahead of other AIcompanies. And all of a sudden
this Chinese company comes out,and they got things that can
beat them on a bunch ofreasoning benchmarks, right? So
immediately Nvidia stop stockdropped, and, you know, I think

(09:28):
it was reported as the largestsingle day loss in value of a
company's stock in history, like600 billion or something. That's
saying

Glenn Beckmann (09:40):
something, because there have been some
really big crashes, especiallythe last 20 years.

Dr. John Licato (09:45):
Oh, yeah, yeah, this, you know, and part of it
might be just market correction,right? Nvidia's stock, because
they're the ones that areproviding the most commonly used
GPU. Their stock has been justincreasing tremendously over the
past. Months, and maybe it'sjust a correction for that,
right? So, whatever it is, youknow, people said, I don't know

(10:06):
how they're doing it. Did theyfind some secret technique that
doesn't require GPUs anymore?
And, you know? And the one thingthat makes what they did
remarkable is, not only did theybeat a lot of bench, uh, not
only get, did they get the bestperformance on a lot of
benchmarks, but they made all ofthe details available. So they
completely published how theydid it, the training technique

(10:30):
they used. They made it allfree, right? We can download the
the full weights for theirlargest, uh, 600 plus billion
parameter model, right? You canuse it locally. They made a
version where you can it's a bigmodel, so you have to have a lot
of compute power to run itlocally. If you don't want to do

(10:51):
that, you can interact withtheir version on their website,
just like you can do with openAI. And they made that much,
much cheaper than, for example,open AI's API.

Glenn Beckmann (11:04):
Were you surprised by that? Because the
Chinese have a well earnedreputation for being very
secretive about their IP. Wereyou surprised that they just
made it available for anybody tokind of comb through, look
through?

Dr. John Licato (11:16):
It was very surprising. Yeah, in some sense,
you can describe that as maybe,maybe a offensive move, right?
You know, it caused a massivedrop in confidence of the future
of Nvidia and open AI's leadright, and kind of sent a
message that, Look, you guysaren't as far ahead as as you

(11:39):
claim you are, right? They may,by making the details available
about how they did the training,they make it so that a whole
bunch of other companies now canpick up where they left off and
build off of it, right? So, youknow, in that sense, maybe,
maybe it was a strategic move.
Oh yeah, okay, it certainly wasa strategic Yeah, right, yeah.
But I mean, speaking as ascientist, I'm very happy they

(12:05):
did, because it's really nice toactually know how the technology
you're talking about works.

Glenn Beckmann (12:15):
So deep sea came out, was introduced in a
spectacular fashion. Um, Lucywas just as spectacular, but in
a bad way. You know, it crashedand burned. And, in fact, like,
they turned it off two or threedays after it, it was
introduced. So, yeah, tell us alittle bit about about that,
what your reaction was, and, youknow, to the whole thing. Yeah,

(12:38):
that's so what is? What is Lucy,first of

Dr. John Licato (12:41):
all. So one thing that they all have in
common, Lucy, deep seek, chat,GPT, they're all based on llms,
large language models, right?
And they're all based on verysimilar underlying technology,
what we call the transformerneural neural network
architecture. And the thing is,it's not when you're trying to

(13:03):
figure out how to make thesemodels better and more capable.
It's not just a matter ofthrowing more compute power at
it and then magically it's goingto turn into something smarter,
right? To be fair, there is someof that right there. The past
few years, we've seen thatsimply taking the models that we
have and then making them twiceas large, three times as large,

(13:24):
does seem to increase reasoningcapability, but we're reaching
kind of a plateau with thatright. We're realizing that you
have to experiment withdifferent things. You have to
find small tweaks in thearchitecture. You have to change
how it incorporates its trainingdata. You have to try a whole
bunch of different things,right? No one company can do

(13:45):
that, and so, you know, that'sone reason why what deep seek
did, I think, is actually goodfor the science, because now it
makes it so that a lot ofdifferent companies can try
different possibilities, and alot of them are going to fail.
Some of them are going todiscover some new tricks, and
then if they reveal theirdetails, it advances the whole

(14:05):
field, right? That's, that's theideal. That's, that's how we
love for it to work doesn'talways work that way, because
money is involved, right? Soanyway, Lucy was one of those
cases where, unfortunately, whatthey tried did not work out as
well, right? They their basicidea is the French company, and
what they wanted to do wascreate a language model that was

(14:28):
trained on, number one, a lot ofFrench language data. So a lot
of the models that we're usingopen AI, they're trained on
massive corpora of primarilyEnglish text, right? Just
because they're widelyavailable, you know? And then
they wanted to say, Okay, we're,we're not going to remove the

(14:51):
English text. So there's a lotof data there, but we're going
to make it think, like, I thinkit's like a 30, 35% of. French
text, something like that sameamount of English text, and then
you got programming code,multilingual stuff. So they
wanted to make this sort ofnatively French trained language
model, and they use the smallerarchitecture size as well. So 7

(15:13):
billion parameters, I think,compare that to the largest
version of deep seek, which islike six, 50 billion parameters,
right? So when you do that, yougot to expect that its
capabilities are going to bereduced. They also did not use a
lot of the so when you take alanguage model, you train on a
lot of data, we call that pretraining, and then you have to

(15:35):
do some subsequent stages oftraining, like rlhf, which is
where you train it on humanconversation data, so it learns
how to talk more human like and,you know, engage in interactions
that seem more natural. There'sa whole bunch of the subsequent
training stages you gotta do torefine it. They didn't do a lot

(15:56):
of those subsequent trainingstages not to say they couldn't
have just, just didn't. Theydidn't do it yet, right? And
they got excited about it. Theyput it online, and then I think
people used it and expected thatit was just as powerful and just
as well trained, tested as chatGPT, and it wasn't. And then
they they had to take it down,but

Glenn Beckmann (16:22):
so it got a little bit of an unfair I think.
So there's a little bit there,yeah, I think so the
expectations, it wasn't, itwasn't that it was, it was just
unfinished. And, yeah, theexpectations were so high,

Dr. John Licato (16:36):
I think, yeah, you know, to compare this to
five years ago, right, when aresearcher would try something
unique. They say, Oh, I'm goingto try this little thing
different with training mylanguage models, right? They put
it out. Only researchers payattention. Researchers know the
limitations of these things.
They know what they're trying todo, right? But now that AI is in
the mainstream, right? Someoneputs out a language model. They

(16:59):
make it so anybody can use it.
You're gonna have people thatdon't understand what it's for.
They use it expecting that it'sfully open AI, and then it goes
and says silly things,hallucinates, right? And that's
I

Glenn Beckmann (17:15):
remember your conversations with Craig Martell
when he was a guest on thepodcast. The both of you, the
conversations you had. And Craigwas the first chief artificial
intelligence and Digital Officerfor the Department of Defense.
He stood up that entiredepartment for the DOD. I
remember something you twotalked about, and it was the
trust bond that humans have tolearn to trust. Ai, so the trust

(17:41):
bond between Lucy and humans isbroken, yeah, is, is there any
hope that for for the people whodevelop Lucy, that Lucy 2.0
might be able to bridge, bridgethat broken bond? Oh,

Dr. John Licato (17:55):
yeah. Oh, absolutely, you know this is
they're, they're probably goingto have to hire a marketing
person you know and and figureout how to communicate the next
release. I Yeah, it's alwayspossible to create another
version that can fix it. I thinkwith Lucy itself, you can, if
they've used some of the tricksthat we already know work, like

(18:17):
rlhf and training on humanconversational data, that sort
of thing, right? Maybe evenusing the reinforcement learning
techniques that were pioneeredby the deep sea team, right? I'm
sure that they can get someimprovements. And then, you
know, you know, they're using a7 billion parameter model, which
is relatively small for eventhough, five years ago, that

(18:39):
would have been massive, right?
But you know, they're going toneed a lot of funding to do
training at the scale that theyneed, and if the government's
supports them, then they canprobably acquire that but they
can easily show improvement onbenchmarks, and just as easily
companies that are in the lead.

(19:01):
Can can drop. And, you know, seethat with open AI, right? They
just released GPT 4.5 and it'sbeen long awaited, but I'm not
hearing really a lot of buzzabout it, right? I'm not hearing
a lot of excitement about it.

Glenn Beckmann (19:17):
So outside of the university, you're also the
founder of a company calledactualization. Ai, yes, on the
surface to someone like me whojust, I can spell artificial
intelligence, and that's aboutthe breadth of my knowledge of
it. It seems like you, you'redeveloping, or you have a
product that that would kind ofprevent what happened to Lucy,

(19:41):
proactively. Yeah, is that afair assessment? It seems to me
that what you're working on, andyou tell us a little bit about
it, is to prevent any future AIsystems from having those false
answers and nonsense answers andjust ultimately breaking the
trust bond. Between the peopleusing the system and the system,

Dr. John Licato (20:02):
yeah, yeah, that's a great transition,
because I think, you know,researchers understand that
large language models aresubject to hallucinations. Even
if you tell them to dosomething, you tell them don't
give away this piece ofinformation. They might just
slip up and give away that pieceof information, right? Which,
you know, researchers arefamiliar with that, but because

(20:25):
AI is now in the mainstream, andevery company is trying to throw
AI into their products, right,they are not necessarily aware
of these possibilities forfailure, right? What I'm
thinking, what I'm seeing for2025 is that companies that are
embracing AI are going to lookfor more confidence. They want

(20:48):
to know if I'm going to put AIinto my chat bot and put that
out on my website. How do I knowit's not going to give away
private information? How do Iknow it's not going to make up
stuff, right? Just say violentthings, for example, and that's
what we're trying to do withactualization AI is to give you
that confidence, and we're doingthat by providing testing tools.

(21:09):
So let's say that you have an AIproduct, a chat bot, and you
want to be able to test to makesure that it's not going to
violate privacy, right? Well,our tool helps generate test
cases that are customized toyour use case so that you can
find out when your languagemodel or when your AI fails and

(21:31):
fix those failure points beforeyou actually put it out into the
market, right? And what we'retrying to provide is some
confidence that you haveactually tested. You've been a
actualization AI approved beforeyou put that product out, and it
does something embarrassing thatmight you know or might might
leave your company liable. So weare NSF, SBIR funded National

(21:56):
Science Foundation, and thatfunding started in late 2024 so
we're very early stage, but youknow, this is a University of
South Florida spin off and anapplication for AI technology
that think could benefit a lotof people.

Glenn Beckmann (22:14):
Yeah, that's fantastic. So taking a small
pivot, but maybe not too largeof a pivot. I was really
interested to see Microsoftintroduced a couple of weeks
ago, their newest quantum chip,Majorana one. Now Microsoft,
being Microsoft, they've gone sofar as to claim Majorana one is

(22:36):
a fourth state of matter. It'snot a solid, it's not a liquid,
it's not a gas, it's a fourthstate of existence. From a
marketing perspective, that'slegit. I mean, that's that is
hyperbole at its purest farm.
Ignoring that for a second, whatare your thoughts on something
like that and its potentialeffect, not necessarily on

(22:57):
quantum computing, but on AI,will it make AI building better,
faster?

Dr. John Licato (23:08):
Yeah, it is really exciting. You know, when
we think about what is going tocome next, so AI changed the
world. Got everybody excited,and we think about what's going
to come next, what's going to bethe next big change, right?
Quantum computing? QuantumComputing is definitely one of
those that's on the horizon,along with major advances in
robotics and so on. I'm not aphysicist, so I don't know if

(23:33):
the fourth state of matter claimis pulls any water. I think
there already are more thanthree states of matter. I think
plasma is counseling. Imagine

Glenn Beckmann (23:42):
that the Microsoft marketing guy didn't
tell the truth. Yeah,

Dr. John Licato (23:46):
I don't know if he has a physics degree, but
we'll have to check it anyway.
But so anyway, yeah, thepossibility of quantum computing
is something that AI researchersare paying attention to. It's
been known for quite a whilethat if quantum computing can
scale because the way that itdoes computations allows for

(24:06):
many computations to go on inparallel that could break a lot
of you know cryptographyalgorithms that We use a lot of
ways that we use mathematicallimitations to protect our data,
right? Um, if quantum computingcan actually perform the

(24:28):
computations that we hope itdoes, then it might be that we
can break, uh, credit cardencryption, uh, reasonable time,
right, which, which is a huge,huge problem, right? Um, lot of
systems are are built on on theassumption that it's not
breakable, right? The good newsis that there are a lot of
people working on what we callpost quantum cryptography. So

(24:50):
they're trying to figure out, ifwe do have quantum computers,
how do we make even tougherencryption so that even Quantum.
Computers can't break it. Andquantum computers do have some
limitations. So they can't juststraightforwardly, run any
algorithm, any AI algorithm thatyou throw on them. There's

(25:10):
limitations in how you canactually read the results of a
quantum computation, because,you know, with qubits, if you
read the result, then itcollapses it down to so that the
superposition no longer holds.
And, you know, and, and, yeah,the it seems that one class of

(25:31):
algorithms where quantumcomputing might bring the most
benefit is certain types ofoptimization. And optimization
is, I mean, that's, that's allAI. That's AI is always doing
optimization solving, right? SoAI could be something that
benefits quite a bit fromquantum computing. So we're

(25:52):
paying attention to all theseadvances.

Glenn Beckmann (25:55):
So I've only briefly read a little bit about
it, but you probably can touchon it a little bit more deeply.
The power requirements, or AI,are, in a word, massive, and to
the point where I see companies,you know, some of the tech
giants actually consideringbuilding their own power plants,

Dr. John Licato (26:18):
nuclear power plants, to power the AI,

Glenn Beckmann (26:22):
what are your thoughts on that? And, you know,
and there's obviously thecorollary, the effect on the
planet, that kind of powerconsumption and the need to
create that power to run those

Dr. John Licato (26:33):
things, yeah, yeah. The So, just to kind of
put those into into numbers,it's been estimated at GPT three
to train it once cost about$100,000 just in electricity
costs, right? But remember, Isaid that it's not just a matter

(26:58):
when you when you create anstate of the art AI system, it's
not just a matter of training atonce. And then all of a sudden,
got, you know, new, newbenchmarks being broken, right?
You have to train it. And thenyou figure out something doesn't
work, and then you go back andtweak one of the parameters, and
then train it again. And you gotto keep doing that, a little bit
of trial and error, right?
There's, there's a little bit ofart to it, too. You might train
it 1000s of times, and then yourealize that none of them work,

(27:19):
and it's just there's no way toguarantee that, right? So if you
multiply that $100,000 cost byhowever many times it took to
train it, that's already a lotof electricity used, and that's
just one company, and that wasGPT three. GPT three is
estimated to be, you know, likeanywhere from 10 to 100 times
smaller than GPT four. And youknow, we're on GPT 4.5 now,

(27:43):
right? And then that's onecompany. So now you've got all
the other companies, Lucy anddeep seek. So the power costs
are massive, and there is a lotof research in how to do the

(28:03):
computation more efficiently,right? However, the trend in
machine learning and deeplearning in AI is that whenever
they give us more powerfulhardware and they allow us to do
more computation with with lessmoney and less energy, we find a
way to fill it up again, right?

(28:24):
And there's no reason to believethat's not going to change it.
They give us more efficientpower consumption. So I don't
think that's going away. Thattheir AI is such a powerful
asset for any country ororganization to have that it's
it's it's going to be an armsrace, right? If, if one company
says, if one country says, Well,we're going to make it so AI can

(28:47):
only use this much power, andthat's it, right? Then they're
going to fall behind quickly inthe in the arms race. And
that's, yeah, that's, that's notsomething that I think countries
are willing to do right now. We

Glenn Beckmann (28:59):
just have to figure out how to generate
massive more amounts of powerand do it cleanly, right? Yeah,

Dr. John Licato (29:08):
that's, that's the goal, but not only because
of AI, right? And, you know, ohyeah, electric cars and just
general power consumption isgoing up. So, yeah,

Glenn Beckmann (29:15):
for sure. So as someone who's both researching
and building AI, what's onedevelopment in the field that
you're really excited about? Oh,yeah,

Dr. John Licato (29:26):
it's so hard to not so hard to limit that down,
right? Oh, and we'll give youtwo, okay, that might make it a
little easier. So, yeah, I'llfocus on things that we're
anticipating for 2025 so one ofthem is, as I already mentioned,
the increased attention tosecurity for models. You know, a

(29:50):
lot of companies are realizingthat AI is not magic, that it's
incredibly powerful, but it'snot magic. You have to, you have
to test it. You. Have to findout what its limitations are.
You have to account for those.
And I think people are going tobe looking for a lot more
confidence building solutions.
So that's what we're trying toprovide with actualization AI.

(30:13):
Another thing that isanticipated with this year is
the rise of agentic AI, sothat's an agent is essentially,
you can think of it like aclassic, large language model,
except now it has access totools. So if you it can search
Google on its own, it can fillout websites, right? There are

(30:36):
agent frameworks emerging sothat the AI can decide, can, you
know, move the mouse around andclick on things and fill out
forms on the what on yourcomputer? Right?

Glenn Beckmann (30:47):
Job seekers everywhere are jumping for joy
right now,

Dr. John Licato (30:51):
it's, you're gonna see that. I'd say a month,
I'd say a month from now, right?
Yeah, you're and what makes themdifferent is they have a higher
level of agency. So it's notjust a chat bot that you give
more tools to, but you give itthe ability to decide when to

(31:12):
use those tools more agency.
That's where the agentic hypephrase comes from, right? So
we're going to see more of that,and we're going to see
frameworks to make it easy touse. So you know, if I, if I
tell Siri to search the web, itcan already do that, right? But

(31:33):
if I tell Siri go to thiswebsite and copy and paste this
text and then put it into thisPDF and then print it out,
right? Those are the kinds ofthings that are in the agentic
space. So, I mean, the theincrease in capability of the
tool, the AI tools that wealready interact with, is gonna,
I mean, if it hasn't explodedalready, it has, just imagine

(31:56):
what it's gonna look like. Yeah,

Glenn Beckmann (31:57):
I think it's just gonna be an ongoing series
of explosions from now untilhowever many years from now?
Yeah, we

Dr. John Licato (32:03):
haven't hit the peak yet. That's, that's, that's
for sure. Well,

Glenn Beckmann (32:08):
I want to wrap up today with a question that
I've kind of wrestled withmyself for a number of years,
and now, with the accelerationof AI across literally
everything, I've walked right upto the line of despair. In some
cases, I have two daughters, 25and 22 years old. I've tried to
coach them through theirformative years, the social

(32:28):
media phase, you know, and triedto tell them about the
artificialness of thoseplatforms and the deliberate
manipulation and things likethat. Now, with the ubiquity of
AI, I've been telling them, thegreatest challenge they their
friends and the youngergenerations are going to face in
the future is to be able to tellwhat's real and what's not, and

(32:52):
also to care that there's adifference, because the lines
between those two worlds are notOnly blurred, they're almost non
existent anymore. What do youthink? Am I being fatalistic?
What do you think about it?
Yeah,

Dr. John Licato (33:06):
it's, I mean, are we? Are we just being, are
we just being old men about thisand saying more than I know that
this, this insistence on realityis, you know, important, and the
new generation doesn't, doesn'tcare about, you know, I saw, I
saw a comment on the socialmedias, right? That That said,

(33:29):
Oh, man, it's so funny whenpeople just lie for no reason,
right? And you see this on, youknow, Instagram posts, where
someone posts a video clip, andthen the comments will will say,
What movie is this from? Andpeople will just put random,
random movies, just, you know,just to be funny, right?
Whatever. But, but there is a,there is a sense in which it

(33:51):
doesn't even matter whether whatyou say is true anymore, right?
It's a, it's, it's, it's funnierif it's not, the confusion that
it causes is fun and okay, maybethat's maybe that's just, maybe
that's just a thing to do whenyou're young, right? But there
is a deeper problem that we needto be aware of, which is that

(34:12):
sometimes being accuratematters, right? And because it's
going to be so easy to createfalse versions of everything
that we typically use to verifytruth, like videos, right?
Videos aren't quite there yet,right? I don't, I don't know if

(34:35):
the ability to generaterealistic videos is going to
accelerate that much this year,maybe maybe five to three years,
right? But it's definitelycoming right now. There's
definitely going to reach thepoint where you can't tell the
difference. There is reason tobelieve that we are never going

(34:57):
to be able to tell thedifference between. Mean AI and
human content. There's deeptheoretical reasons why that may
be the case. I can, I canexplain it briefly so we have
this paradigm in AI calledadversarial learning, right,
where you have a generator,something that can create an
image of cats, right, and then adetector that can tell whether

(35:22):
an image of a cat is a real or afake one, right? Okay, so the
problem is that once you havethese two systems and they have
you have the ability to trainthem. You can have them operate
in an adversarial way. Thegenerator generates images, the
detector tells the differences,and then they learn based on

(35:42):
that. And then the generatorlearns how to do more realistic
images of cats, and the detectorlearns how to tell the
difference between those andthey just keep iterating, right?
That's an arms race. So theykeep going. They keep getting
better and better until a humancan't tell the difference
between the what the generatorcreated, and an actual human, an

(36:04):
actual image of a cat. But asthey get better and better, they
start to reach this point wherethe signal disappears. There's
no longer any variance that thegenerator and the detector can
use to actually tell thedifference between you, because
the variance approaches thevariance that you would expect

(36:30):
in reality, and there's nolonger any signal, so it could
actually tell the difference. Soyou know that same pattern can
exist with videos text, right?
It could just be that we'realready at that point with text
generation where I show you arandom piece of text you can
never say with 100% confidencethat this is chat GPT generate.
So what do you do? What do yousay to the to kids who are

(36:56):
growing up and uh, because

Glenn Beckmann (37:03):
Don't, don't. We need them to be able to tell
what's real and what's not real.

Dr. John Licato (37:09):
Yeah, we need some ability to tell, but I
think that we have to acceptthat That ability is just not
going to exist, and we have tothink about what that world's
going to look like I think it'seven more important to teach
them to value the difference,right? That is something that I

(37:31):
don't know how to do. I don'tknow how to get people to
actually care that what they'relooking at is fake. Maybe that's
just one of those old manthings, values, I hope

Glenn Beckmann (37:42):
not. I'm gonna go crawl off into the corner
corner and curl up in a ballnow. Yeah, nothing's real. John,
thanks so much. It's alwaysgreat having you on and look, I
can't wait to see your story onthe on the new Be bold campaign
and tremendous success to you.
And cake, yeah, and also toactualization AI and doing lots
of great things. And USF islucky to have you, and we're

(38:04):
lucky. We're really gratefulyou're willing to share some
time with us. Always happy to behere. Many thanks today to
Professor John licatto from theUSF College of artificial
intelligence, cybersecurity andcomputing, or as the close
personal friends of the collegecall it cake. It's a new college
created here at USF less than ayear ago. John is playing a key

(38:24):
role in its creation, and he'salso one of the stars of the new
USF Be bold marketing campaign.
Keep an eye out for him on allyour screens. It's been a great
update on what's going on withUSF, new college, as well as a
deeper look behind the currentheadlines in the AI space, John,
we look forward to the next timewe have you on the podcast,

(38:47):
assuming, of course, we'll beable to afford you anymore. Next
week, on at the boundary, we'llgather together some of our
thoughts on the recentlycompleted Tampa summit five. We
started this after our previousconference, and it was a great
success, though we thought we'dtry it again. Next week, we'll
have a round table conversationabout key takeaways from Tampa
summit five. If you don't wantto miss it or any of our future

(39:10):
episodes, be sure to subscribeto the podcast on your favorite
podcast player.
That's going to wrap up thisepisode of at the boundary. Each
new episode will feature globaland national security issues we
found to be worthy of attentionand discussion. I'm Glenn

(39:31):
Beckman, glad to be with youtoday. Thanks for listening, and
we'll see you next week at theboundary. You
Advertise With Us

Popular Podcasts

Bookmarked by Reese's Book Club

Bookmarked by Reese's Book Club

Welcome to Bookmarked by Reese’s Book Club — the podcast where great stories, bold women, and irresistible conversations collide! Hosted by award-winning journalist Danielle Robay, each week new episodes balance thoughtful literary insight with the fervor of buzzy book trends, pop culture and more. Bookmarked brings together celebrities, tastemakers, influencers and authors from Reese's Book Club and beyond to share stories that transcend the page. Pull up a chair. You’re not just listening — you’re part of the conversation.

On Purpose with Jay Shetty

On Purpose with Jay Shetty

I’m Jay Shetty host of On Purpose the worlds #1 Mental Health podcast and I’m so grateful you found us. I started this podcast 5 years ago to invite you into conversations and workshops that are designed to help make you happier, healthier and more healed. I believe that when you (yes you) feel seen, heard and understood you’re able to deal with relationship struggles, work challenges and life’s ups and downs with more ease and grace. I interview experts, celebrities, thought leaders and athletes so that we can grow our mindset, build better habits and uncover a side of them we’ve never seen before. New episodes every Monday and Friday. Your support means the world to me and I don’t take it for granted — click the follow button and leave a review to help us spread the love with On Purpose. I can’t wait for you to listen to your first or 500th episode!

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.