
September 11, 2025 38 mins

The SNIA Data, Storage & Networking (DSN) Community launched a multi-part AI Stack Webinar Series designed to provide education and a comprehensive view of the AI landscape, from data pipelines to model deployment. Hear experts Erik Smith, Justin Potuznik, and Tim Lustig answer frequently asked AI questions:

•       Exploring the differences between AI (the broader concept), machine learning (systems learning from data), and deep learning (neural networks extracting features from massive datasets)

•       Understanding the "token economy" monetization model where AI services charge based on inputs/outputs 

•       Examining the shift toward on-premises AI deployments driven by data sovereignty, security concerns, and cloud cost management 

•       Implementing security through data validation, sanitization, and guardrails to protect AI systems from misuse 

•       Recognizing AI's transformative potential beyond current generative applications into agentic systems and physical embodiments  

If you'd like to contribute to the AI Stack Webinar Series or have topics you'd like to see covered, contact the SNIA Data, Storage & Networking (DSN) chair at dsn-chair@snia.org or visit the SNIA website at snia.org. 


About SNIA:

SNIA is an industry organization that develops global standards and delivers vendor-neutral education on technologies related to data. In these interviews, SNIA experts on data cover a wide range of topics on both established and emerging technologies.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
All right, welcome everybody to another amazing Experts on Data podcast here with the SNIA community. My name is Eric Wright. I'm the co-host, or rather the host, of this amazing podcast, also the co-founder of GTM Delta. You can find me everywhere online. I'm Disco Posse on all social media, so always love to connect

(00:24):
with folks both through SNIA and anywhere in the world, and I'm really, really lucky today because I got some fantastic humans and we're going to talk about what seemingly is the most non-human hot topic these days, which is around AI, the AI stack, and really a lot of what the AI Stack Webinar Series

(00:46):
that's coming up from SNIA is going to be about, and really why it's important. So thankfully, I got experts in the room, where they always say the last thing you want to be is the smartest person in the room. Never a problem for me on the Experts on Data podcast. So quick round of intros. We'll start with, I'll say, the first Erik, because I like to say that I'm the other Eric.

(01:07):
So, Erik Smith, you want to give a quick intro and then we'll work our way around.

Speaker 2 (01:13):
Sure. Thanks, Eric. I'm Erik Smith. I'm a distinguished engineer working for Dell's CTIO team, and I'm also the chair of the SNIA DSN community.

Speaker 1 (01:23):
Fantastic. And Tim?

Speaker 3 (01:26):
Hello, good afternoon, evening. I'm Tim Lustig. I work for NVIDIA, where I'm a relationship development manager for the Inception program.

Speaker 1 (01:35):
Fantastic. And last but very clearly not least, also because one of my sons is named Justin, so I love the name. Justin, introduce yourself and tell us where you're from.

Speaker 4 (01:46):
Hey everybody, Justin Potuznik. I'm an engineering technologist at Dell and I work with the first Erik. And, yeah, I'm up in Minneapolis right now.

Speaker 1 (01:55):
Fantastic. So let's start with: what is the AI Stack Webinar Series? Just to give folks a bit of a preview of what they can expect. I know we've already got the first one live, so, depending on when people are watching this, we may have more than one that's already published. But with that, I think, Erik, you wanted to walk us through

(02:15):
what the team's working on with this.

Speaker 2 (02:18):
Sure, thanks, Eric. So the AI Stack Webinar Series is basically a planned series of 11 webinar segments designed to give IT professionals a clear end-to-end view of the AI landscape. By that I mean, instead of diving right into all the niche details, we walk through the stack as a whole, covering

(02:40):
everything from the data pipelines to infrastructure to model deployment. It's really just all about enabling people to see how the layers fit together. And again, the goal isn't really to make everybody an expert overnight, but to provide what I call a framework of understanding that cuts through the noise, reduces overwhelm and confusion, and just gives people the confidence to

(03:02):
start experimenting, asking better questions, and building their own path toward AI. So yeah, that's the basic idea.

Speaker 1 (03:10):
One of the things we often have to begin with is pure definitions. So we have this idea of AI, ML, and deep learning, and we often kind of use them interchangeably, or at least the marketers do. God bless us fine folks; I'm a marketer and a technologist.

(03:32):
I'm on split duty, but in the community I find there's a little bit of confusion. Sometimes we talk about AI versus machine learning versus deep learning. So how can we best describe that? And maybe, Tim, let's call on you: give your take on how you see those definitions being important.

Speaker 3 (03:57):
Yeah, good question. First off, and just to add on to what Erik was saying, the AI Stack series starts very general and then gets deeper, so anybody can jump in at any time; if you have experience with AI, you can join somewhere in the middle. For those who are beginners and want to know a little bit about how artificial intelligence, machine learning, and deep learning

(04:18):
work together: artificial intelligence is really the larger bucket that encompasses both machine learning and deep learning. When you get a little bit more specific, you can dive into machine learning, which basically learns from data to improve over time without being explicitly programmed. So it's a more specific slice of artificial

(04:40):
intelligence, with more training to it down the line.
When you get into deep learning, it's a more specialized area within machine learning, and deep learning utilizes multi-layered neural networks, inspired by the human brain, to basically automatically extract features from massive data sets. This makes it especially powerful for complex tasks like

(05:01):
image recognition, language translation, voice assistants, things of that nature.
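Tim's definition of machine learning, a system that learns from data to improve without being explicitly programmed, can be sketched in a few lines. This is a toy illustration (not anything from the webinar itself): instead of hand-coding the rule y = 2x + 1, we let a closed-form least-squares fit recover it from example points.

```python
# Toy illustration of "learning from data without being explicitly
# programmed": fit a line y = w*x + b to a few points by least squares,
# instead of hand-coding the rule. Real ML uses libraries and far
# larger datasets; this only shows the idea.

def fit_line(xs, ys):
    """Closed-form least squares for a single-feature linear model."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# The "training data" encodes the rule y = 2x + 1; the fit recovers it.
w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(w, b)  # 2.0 1.0
```

Deep learning replaces the single hand-chosen feature here with many stacked, learned layers, which is what lets it extract features from massive datasets automatically.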

Speaker 1 (05:08):
And this is, I think, also why, from the outside, people get very confused: the normies, the normal folks who don't inject themselves into the news every day about this. We often see things too when people say, like, AI, and it gets confused with generative AI. Like, you know, I've been working for decades in this

(05:30):
industry with AI, but generative AI suddenly is the thing that we know AI as. So it is funny that it's been commoditized, that AI means ChatGPT. You know, pardon, I'm calling out a particular vendor, but we know it's like Google; it's become a verb as much as anything else. So it's interesting.

(05:51):
That's why I like to call out the definitions. Now, the other thing that's interesting is the switch to, I'll say, the token economy. And so, what do you each think about this? The way that we're going to manage: how do we sort of charge for and understand consumption of this stuff?

(06:11):
Because it's no longer raw compute as far as, like, MIPS and hours and, you know, megahertz. Now, the tokenization of how we charge for this stuff, or calculate it, is going to be really tough for a lot of people to understand. So, Justin, I'll start with you.

(06:31):
So what does a token mean now in how we describe the technology and how we're using it?

Speaker 4 (06:39):
Sure. So to start at the base level, right, a token is just any input or output coming into or out of one of these models. Right, that can be words, that can be portions of a picture, what have you. It's sort of settled as the lowest common denominator for a lot of the cloud-based models, especially because, again, what

(07:00):
you're putting in is tokens and what you're getting out is tokens. So, you know, the models themselves, once they're loaded into that GPU memory, are essentially static in inference, and then it's just what you put in and out that makes the difference. That will change. I mean, we still have all those MIPS and gigs and everything else; they're just obfuscated by that token in the cloud. When we do look at on-prem systems, though, tokens are

(07:24):
still useful and valuable as a way to measure, and maybe to charge out to your customers. But, as the folks building those systems, you still are going to need to care about what's behind it and really build the system that way.

Speaker 1 (07:37):
Yeah, I guess maybe it's definitely where we're going to see the merger of, you know, understanding the costs, and thus where we can apply margins and make these viable, you know, business platforms as well. So it makes it fun. Now, the other thing is people getting started with AI and
Now the other thing is peoplegetting started with AI and

(07:58):
they're beginning their journey. You know, what do we start with? Do we need super servers? Do we need GPUs? Where does AI begin for people in different waves of adoption? And maybe let's start with you, Erik, on that.

Speaker 2 (08:20):
I mean, it depends on what you want to do. If you're trying to train the next large language model, you're going to do that with your own hardware, and, you know, there are several examples of my company working with other companies to do that. But you don't have to start there. You can start with something that I've been doing a lot of

(08:53):
work with, actually. I think colloquially it's called vibe coding, and vibe coding is basically using natural language to interact with a chatbot and describe what you want to create in terms of an application. And a great example of somebody using vibe coding to produce an application is actually the binary digit trainer app that we provided as a demo for this session.
So I think at the end of the session there was, like, 10 minutes where we went through how you train a model, how you

(09:17):
can use it for inferencing, what checkpoints are, and it kind of goes into all the details. For the application, the site that I used was called Replit, and I did use a lot of ChatGPT-5 to sort of help when Replit couldn't get it done by itself. But that's a really good place to start, and it really unlocks

(09:41):
you from whatever skills you might be limited on, and allows you to work at the speed of your imagination. So that's where I think this is going to go, and how you can get started.

Speaker 1 (09:52):
Yeah, and I think I like the idea that we very quickly jumped to fractional availability of resources, because the cloud was there, the model was available, and then, you know, of course, getting access to that king-sized hardware that you need to run these, like, supermodels and do fast training.

(10:14):
Yep, it became tough to get, but it also became quick. Almost like the SETI project, you know, you've got all these people who are like, hey, I'll share part of my access to my grid with you, and we really got to a sharing economy with hardware quickly, which I'm encouraged by. Because

(10:34):
that, as you say, with vibe coding, now you can kind of vibe build, or as I call it, vibe vulnerability creation. But you can definitely quickly get started and get those prototypes ready, and gosh, the bar has been lowered so beautifully that anybody can access this stuff, and it's not

(10:55):
just us happy nerds who are just reveling in what we can do with it. I went to my barber the other day and he's telling me about stuff that he's doing with, like, generative AI, and that's just such a fantastic thing: what we do is being used every day by everyday people, and I think that is the beauty and the commoditization in the community offering of

(11:19):
everything with AI.

Speaker 4 (11:20):
Well, Eric, I think you make a good point that one of the weird things with AI is, unlike previous compute revolutions we've seen, it was already in everyone's hands before they even realized it, right? I mean, the vast majority of folks have a smartphone that's doing some form of AI on it, and that was true even before, you know, the iPhone moment of ChatGPT getting launched a few

(11:42):
years ago, right? And the nice part is AI can work on a platform as small as that, or, like you said, it can be distributed and fractionalized through a cloud. And what we see is this idea of, essentially, the impact you want to have, whether it's on one person for five minutes or a thousand people, you know, concurrently; that's how the hardware need

(12:05):
scales, right? That's how your compute need scales. So, yeah, it's really easy to get started as one person, kind of just picking away at a keyboard and trying to make what you need happen, and then it can scale very quickly, assuming you can get the hardware. Right, to really just

(12:41):
connect with the rest of the vendor ecosystem so that we can accelerate all of us.

Speaker 1 (12:46):
That rising tide lifts all boats type of delivery, that we can all go faster together.

Speaker 3 (12:54):
Yeah, Erik did a great job kind of starting the discussion around bringing this AI stack to SNIA, and SNIA is an extension arm, an educational piece, and I've been involved with it for quite some time, as well as Erik, I believe, too. And that's actually what we're trying to do: work together as
(13:15):
teams to make sure we're educating the community. And there are other arms within SNIA that work together to make sure that standards are set, and it's all goodness for everybody, whether we're educating or whether we're coming up with new standards to take the technology further.

Speaker 1 (13:39):
We are at this importance of a multi-vendor effort. I'll say it's coopetition, in a sense, because each organization has specific commercial goals, but yet we can all beautifully come together, because we have a shared belief, you know, and a shared goal of all of us getting there, like advancing the entire ecosystem.

(14:01):
And this is probably the first time I've seen such beautiful interplay, because it is not just vendors, but it's every kind of vendor: network, storage, compute, memory, GPU, hyperscalers, the local folks that are doing, like, the mini AI desktops of the world.

(14:21):
So that really, as you said, is defining the standards, bringing the people together, and that allows us to have that base that we can really quickly accelerate from. Now, on the storage side (you know, we're kind of big on storage here at SNIA), what is the impact now on storage, in how AI

(14:44):
is changing the consumption pattern? And let me just see... I'll let you volunteer, but I haven't talked to Erik in a second.

Speaker 2 (14:53):
Oh, yeah, no, I'm happy to take it. Yeah, so it depends. You know, and we had this subject come up during the webinar as well: how is AI changing storage? And it's having a massive impact. All of a sudden, latency and throughput are extremely important. I mean, they've always been important, but they're much more

(15:16):
important these days, and it really breaks down. You know, one of the questions that we got is: do you need all SSDs, or can you just have some HDDs and SSDs? Because it's expensive, and we get that. And it really depends on the modality of the model, you know, what it's doing. One great example is checkpointing. You've really got to have very, very low latency and high

(15:42):
bandwidth to be able to get those GPUs back to training as quickly as possible, because you're losing money for every moment that you're checkpointing. And then, depending upon the type of data, video has different requirements than audio, than text. And so you really need to know what you're training, what the data is, and then you would structure your
(16:03):
storage solution appropriately.
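The checkpointing economics Erik describes come down to simple arithmetic: GPUs typically sit idle while a checkpoint is written, so write time is lost training time. A back-of-envelope sketch, where the checkpoint size and storage bandwidths are illustrative assumptions rather than benchmarks:

```python
# Back-of-envelope sketch of why checkpoint bandwidth matters: write
# time is GPU idle time. Sizes and bandwidths are illustrative only.

def checkpoint_stall_seconds(checkpoint_gb, storage_write_gbps):
    """Seconds of GPU idle time per checkpoint at a given write GB/s."""
    return checkpoint_gb / storage_write_gbps

# e.g. a hypothetical 1 TB checkpoint written to slow vs fast storage:
slow = checkpoint_stall_seconds(1000, 2)    # 500.0 s idle at 2 GB/s
fast = checkpoint_stall_seconds(1000, 50)   # 20.0 s idle at 50 GB/s
print(slow, fast)
```

Multiply that stall by the number of checkpoints per day and the hourly cost of the GPU cluster, and the case for low-latency, high-bandwidth storage in the checkpoint path makes itself.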

Speaker 1 (16:07):
And now we definitely have... we thought it was all going to end up in the cloud, and there was this sort of race as to which cloud would be the fastest to acquire all of the GPU hardware. But I'm seeing much more of that moving on-prem, because the idea of sovereign AI is also super important to, you know, vendors or, like, you know,

(16:28):
companies and customers who need to have a bit more control. And it's quite often at the state or province or country level; you need to have separation just for regulatory purposes. There's a whole lot of interesting boundaries where now sovereign AI and on-prem AI are important.

(16:49):
Justin, you mentioned before about on-prem. What are you seeing as far as the shift to a lot of on-prem experimentation, where probably two years ago that seemed a bit far away?

Speaker 4 (17:02):
Absolutely. And to the points we've made earlier, cloud is great to very quickly spin up and get started and, kind of like we said, it's a great place to have those experiments and that sort of thing. As you mentioned, though, as you start to use more and more of your own data, where that data resides is often very regulated. There are security concerns, that sort of thing.

(17:23):
Many different countries are trying to, you know, promote their own systems and do this at a national level. So all those factors, as well as exploding cost in the cloud, are both driving and allowing folks to bring this stuff back on-prem, and they're finding that actually there are a lot of advantages to that anyway. Often that's where your data already is, and you were having to

(17:46):
lift and shift it, and that brings its own set of complexities. And then, if you own the hardware, you can choose how it gets used. You can have it doing one thing for one block of time of the day and something else during a different one. So, additionally, I think we've seen different-size models allow for different scaling factors.

(18:06):
Even within a few servers, you can support many users and do that sort of middle tier of the continuum, where, you know, you start with one data scientist in the cloud poking around proving viability, and then you move to that point where, hey, I want to get 500 people using this system to really prove out that it can work and that the quality is

(18:29):
there, and my scaling factors. That's generally when we see folks really start looking at on-prem solutions to do that. And then you can just scale up that middle-sized system as your user requirements demand it.

Speaker 1 (18:45):
Now let's talk about risk. There's definitely a lot; gosh, I could go for eight hours with you guys and we could cover a lot of stuff. But I want to quickly tap into what we see as kind of the risks that we know we should be very mindful of, and what are you seeing around how we mitigate some of those risks? Especially with stuff where there are opportunities for people

(19:05):
to do data injection inside models. It's very hard to untrain a model, you know, and expensive, and so there's always that risk that the foundation models could potentially be poorly built, and from there it's really hard to undo. So how do we set guardrails and stuff like that? Tim, we'll start with you.

(19:28):
What have you seen as kind of early risks, and what are you seeing around mitigation that's being more widely understood now?

Speaker 3 (19:38):
I would say a key thing is to prevent bad information; you want good information in and good information out. Put controls around the data that you have. You want to make sure that the model is being trained straightforwardly. Methods that ensure the quality of the data are very critical, and this can be achieved in multiple ways:

(19:58):
making sure that the data you have is trusted, comes from trusted sources, and has been protected. There's automated checking with human oversight, as well as additional applications out there that can basically go through the data and check for anomalies and things of that nature. But key strategies: you need to make sure that the data is

(20:21):
validated, you need to make sure that you're sanitizing it, and monitor the sources of data.

Speaker 1 (20:28):
And Erik, I imagine you've also had similar exposure, especially given the cross-work you're doing within SNIA. What are you seeing as kind of identified risks, and where are people getting excited about figuring out how to build those guardrails in early, before we get caught out?

Speaker 2 (20:47):
Yeah, I come at it from a different angle, you know
, as a user of the technologyright now so I don't do a lot of
work, training models orhardening them against hackers,
but a lot of what Tim said andyou know.
So I don't do a lot of work,training models or hardening
them against hackers or you know, but a lot of what Tim said is
you know things that I've heardas well.
You know you got to be carefulabout that.
Don't use just raw data, usecurated data.
You know, and, rag, you have toknow where your documents are

(21:11):
coming from, those sorts of things. So those are somewhat obvious. But, as a user, one of the things that I find frustrating from a security perspective is that my company (and I'm not arguing with their rationale for doing so) doesn't want us to put any IP into any sort of publicly

(21:33):
available chatbot. So, you know, like scraping docs from one vendor and throwing them into ChatGPT-5 and saying, summarize, we can't really do that. And we do have our own model and our own ways of doing things like that. But what I'm finding is that there's this tension between

(21:56):
approved tools by the company and what's available, and that gap is huge, because it's changing so fast. There's a project that I'm working on right now; if I could do that completely in a vibe coding tool, or at least get a prototype or a proof of concept ready for it... One of the challenges that I'm dealing with is what training,

(22:20):
what data can I give it? This application that I'm thinking of would use company proprietary data, and I cannot upload that into the tool, so it makes it challenging that way. And there are also concerns about who owns the app, you know, what's the licensing, and those things are

(22:44):
I don't think they're fully settled yet. At least, I wasn't comfortable, as I was looking through the literature, about what the answers to those questions are. So that's kind of how I come at security from an AI point of view today.

Speaker 1 (22:57):
Yeah, and it's funny too, because, you know, there's no greater lie than one backed by statistics, other than, of course, the one that says, I have read and agree to the terms and conditions. Like, no, no, let's all be real. I don't even click the link half the time; I don't even pretend to read it.

(23:17):
So, you know, I'll say that on the enterprise side too. Justin, not that I'm saying you're only enterprise, but you obviously see folks that have gear that they want to put into use. Where are you seeing them working with guardrails, and what

(23:41):
are the tools and tips you're seeing?

Speaker 4 (23:44):
Well, I think, firstly, we've been saying this "guardrails" as a phrase, and I think it gets used two different ways, and it's worth exploring that. You know, one, it's used as sort of a blanket term for how we keep our AI from going off the rails, right? And then the second way is that guardrails are a component of your AI system. And I think that's what a lot of folks, when

(24:05):
we dial back out and look at security as a whole, we recognize, or we're starting to see folks recognize: it's more than just one model sitting in a container, tokens in, tokens out. You need to build an AI system, right? And that system probably has multiple models in it and multiple agents and multiple tools. And so, you know, I completely agree on the points we made

(24:28):
about the quality and the security of your input, and how you build that model is important. But then after that, it's like having a teenager: you did all that work, and now you've handed over the keys to a car and away it goes, right? The folks doing the training don't sit on it forever. So we have stuff like traditional injection attack

(24:49):
vectors that are now suddenly available. How do we guard against those? How do we deal with bad actors specifically trying to hit the system and either get information out of it that we don't want them to, or tilt that AI system? So guardrails, as a specific component of your AI system, are a very important one, and they're very tunable.

(25:11):
You can use them for any number of things, but they're especially good for forcing your system to answer with: no, that's not something I talk about. That's a big piece that you can do with guardrails, right? Especially for companies; all you want it to do is talk about, you know, a specific set of

(25:32):
things or a specific section of the world. So you don't want it to answer political questions, or, you know, the weather, or what have you, right? If it's not a science system, don't ask it science questions, or don't respond to those. So, have that larger system that's going to protect against injection on the front end, right, and that's standard web

(25:55):
injection protection that we've all had, or database injection protection. But have guardrails on the back end, have injection protection on the front end, have multiple layers, like any good security system. And I think folks, for a little while anyway, were really trying to find a silver bullet, one thing to do it all, and, like everything else in security, it's going to be a layered, nuanced system that filters at different layers and

(26:16):
makes sure only the good gets out, and we stop the bad somewhere.

Speaker 1 (26:21):
Yeah, it's funny too, because, with vibe coding being a fast way to prototype stuff... unfortunately, it's fast to prototype a tool, but people very rarely prototype the security and vulnerability protection inside it. And I say it with all honesty, because if you don't think like a systems architect when you're building that...

(26:44):
I love the capability that it's opened up; I want everyone to have access to be able to do these things. But what I want to do is also remind them: hey, when you put this thing into the world, you know, on a Vercel or a Replit or whatever, or you put it out on Heroku, the whole world has access to it. Like, fire up a Windows EC2 instance and
(27:06):
you'll find out real fast howmany you know network
connections are poking aroundlooking for RDP.
It is just all these fantastichoneypots are out there, and so
whenever somebody vibe codesomething, the first thing I do
is I check it.
I'm like, hey, this is reallycool.
And then I usually send themback their system prompt and all
the model information that Ican pull from it with like two
queries.
So I do like that we'restarting to think about security

(27:30):
and, you know, hopefully itbecomes more you know, before it
goes out the door, and I thinkthat's the, it's too late, we
can't stop it, like, and I it'sgreat, I love that it's out
there and we're already using it.
If you think you're preparingfor AI, that's like preparing
for oxygenation, like sorry,you've been using it every day.
It's just you didn't know it.

(27:51):
So, Tim, let's talk about the positive. I'd love to hear: what's the thing that really makes your heart beat faster about what is being done with all these tools and technologies we're talking about?

Speaker 3 (28:10):
Well, you know, I like to think of it as Jensen said: we're at the iPhone moment, and Justin said that earlier too. So we're really just seeing the tip of this iceberg, and, you know, where it's going to go is really going to be incredible in the next, you know, five, ten, and so on, years. And you talked a bit about, I think, generative AI.

(28:32):
That's really what we're seeing today. We're able to create text, we're able to create images. Around the corner, we're going to be seeing it deployed in factories, where we have agentic AI, and in businesses, where we're able to have these agents that are going to be working together inside an artificial intelligence system, that are going to accomplish so much more, and we're going to see

(28:54):
efficiencies just escalate. You know, then, past that, we're going to get to physical AI, where we've got robots that can interact with environments, and we'll have them trained so that they can do things that they're not programmed to do; they can learn on the fly. And, you know, I think it's a really exciting time right now. I think this AI Stack series is great because we'll get people

(29:18):
in at that ground level. People who know a little bit more can scale up with us as we go through more of the webinars. I really want to encourage people to stay in touch with SNIA and just follow along with the AI Stack and join where you think it fits with your abilities.

Speaker 1 (29:35):
Absolutely. Erik, I'd love to hear: what's your thing that gets you really jazzed about what we're seeing as the outcomes from all the nerdness that we're excited about?

Speaker 2 (29:49):
Yeah, I see it... I know it's causing a lot of chaos right now, you know, with employment and everything else. And it's funny: there was a recent MIT study that was put out that said, like, 95% of all Gen AI pilots have no
(30:10):
deliverable, measurable impact.
And what struck me about that was, when I read it, I sort of started thinking about what was being said about the cloud back when we were all thinking SaaS was going to be everything, everything was moving to SaaS. And the reason it sort of reminded me of that is because back then we had shadow IT, and

(30:32):
if you wanted to get something done, you'd go get a VM, you'd go get an instance, and you'd do something really quick and then you'd pull it off. I see the same thing happening with AI, and there's this concept of shadow AI, or shadow AI services, doing the same thing, sort of like what I did with this binary digit app,

(30:53):
just to create a demo because, you know, we needed one and that was the only way it was going to really get done, at least based upon what I was able to do. And so what I think is going to end up happening is, much like we had with the cloud, we're going to see that this is going to increase productivity in ways that we

(31:14):
can't even imagine at this point in time. And thinking back to when we were thinking about cloud, I mean, this is yet another extension of that sort of mindset shift: when we think about what we were able to do with on-prem IT equipment and what the cloud enabled, and now what we can do with AI, I don't even have to think about the

(31:34):
infrastructure anymore. I can just use natural language and tell it what I want it to do. I just think... I don't know that my imagination is big enough to come up with all the ways that's going to impact us.

Speaker 1 (31:47):
Yeah, I forget, was it Ilya?
I'll butcher his last name, but a sort of famous early OpenAI guy.
Yes, yes. And I think one of his tweets at one point
a few months ago was that the programming language of the
future is English, and it's such an interesting way to talk

(32:07):
about it.
We've done it; this is what we've always wanted.
I want to be able to interact with my system in a way that is
natural for me to do so, and then create the right conversion
layers in between, and we've done this with programming for
so many years.
And now we can go one step higher, and it's just now

(32:28):
really the way that observability is meant to look
at the system as a whole: now we treat the inbound inputs
as a system as a whole, and we can do it in natural language.
It's super, super cool. Justin, what do you see as the
stuff on the ground that you've seen come out

(32:49):
that has maybe even surprised you in how people are using
these tools?

Speaker 4 (32:54):
Well, I don't know about surprise, but one area...
you know, when I talk to customers and they want to start
their AI journey, I tell them: think of two
really easy examples that come to mind for you for AI, and then
tell me what your two hardest problems are.
And those are like your four starting use cases to go tackle.

(33:15):
What's interesting is often one of those two hardest use cases
will be one of the first they actually get a working system
for, and then everything changes for them, right?
Suddenly they're an order of magnitude faster at processing
something or making a decision or what have you, and they're
very surprised: that's a problem we've had for 20 years

(33:36):
and we've built systems around the fact that it's just never
going to get any better.
And now we can use AI and we've made it better, and we've made
it just another step in our chain to execute.
So I think some of that's surprising, but it's also very
powerful. And that's what we see: if you put AI in the right
places, the change that it can have on

(33:57):
an organization is huge, you know.
Additionally, it just helps us scale as a company, as a
population, as humanity.
Right, there's always more work to do than there are people
willing to do it, or able to do it, at any given time.
This can fill those gaps, right? And I know that leads to
discussions: oh, you know, robots taking over the world, no one will have

(34:19):
a job.
To be honest, again, we already have more work than we have
people.
So let's fill the gaps, right?
And how much better does the quality of life for all of humanity
get when we can do that, when there's no waiting for anything?
Right. And it's about empowering the people we do have. Luckily we're
seeing some folks talk about that, and, you know, it's not

(34:40):
that AI is going to take over from people.
It's going to make the people we have doing the job so much
more effective at what they're doing.

Speaker 1 (34:48):
Yeah, and I think this is another perfect point: whenever
people say AI is the end, at the same time,
AI is the beginning.
It's this sort of dichotomy: we see it as the end of many
things, but then you open your eyes wider and really look

(35:09):
around at what it creates, and you look back over time, into
patterns, over history. And since we're talking about
generative AI, let's delve into the crucial and critical
paradigms of the past.
You know, the reason why generative AI does what it does
is because it's taking history and then compacting it

(35:31):
together and then distilling it out in tokens and
phrases. And now what we're going to get is new stuff that
never existed, that is going to continue to train and retrain.
And then the people, as you said, Justin, are going to move
faster, and now their biggest problem is no longer their
biggest problem, and then they find the next one.
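[Editor's note: the "distilling history into tokens" idea above can be sketched in miniature. A real LLM learns billions of parameters over subword tokens, but this toy bigram model, written purely for illustration, shows the same shape: count what historically follows each token, then emit the most likely continuation.]

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """For each token, count which token follows it and how often."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent follower of `token`, or None if unseen."""
    followers = counts.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus: the "history" the model compresses into statistics.
corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Generation then is just repeating this prediction step, feeding each output back in as the next input, which is, at a cartoonish scale, what the "distilling and retraining" loop described above does.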

(35:53):
Jim Keller said it in a great way.
I love this.
He says: we, as engineers, are in the business of solving
extremely hard problems until we create new ones. And so we're
just moving; we've subjugated a bottleneck and we found the next
one, and we're going to keep doing that, and it's going to go

(36:14):
faster, and I'm excited as heck.
But most importantly, I'm excited about the AI Stack
Webinar Series, because there is so much more.
We've literally danced across so many topics in a short time, but
people want to dig in.
This is a great place to do it, so we'll have links and, of
course, people can follow SNIA on all social media.
Make sure they subscribe to this podcast and many others.

(36:35):
But let's do a quick round table and remind folks where they can
reach you amazing humans, to have better and bigger
discussions on this stuff.
We'll start with you, Justin.

Speaker 4 (36:46):
Oh, I think you can find me on LinkedIn.
Otherwise, you can contact me through Dell.

Speaker 1:
Fantastic. And Tim?

Speaker 3 (36:56):
Similar, LinkedIn, as well as on X, @tlustig at x.com.

Speaker 1:
Fantastic. And Eric?

Speaker 1 (37:11):
Sure. And not just because you've got a great name, which I fully
support. Let's talk about how we can reach you.

Speaker 2 (37:18):
dsn-chair@snia.org will get you to me. And just
check out the snia.org website, look for the DSN community,
and you can contact the entire group that way.
If you have an idea or something you'd like to see in

(37:40):
the AI Stack Webinar Series, or actually if you'd like to
contribute, we do have a few speaking slots open, so we'd love
to include other people in this as well.

Speaker 1 (37:51):
Aha, yeah, that's it.
So there you go, folks, a call to arms.
Let's get more fantastic humans talking about fantastic
technology.
And, of course, if folks do want to stay in touch with me, I'm
Disco Posse all over the place. But, most importantly, make sure
you smash that like button and hit subscribe to this podcast,
because we're going to do a ton more.

(38:12):
And thank you all for sharing your time today, and we'll see
everybody on the next Experts on Data podcast and the AI Stack
Webinar Series.
It's all kinds of goodness, and it's free.
How much more could you want? This is it.
We've done it.
We've commoditized access to knowledge.
Gosh, it just doesn't get better than that.

(38:33):
So thank you all for joining us today.

Speaker 2 (38:35):
Thanks, Eric.

Speaker 1 (38:36):
Thank you.

Speaker 3 (38:39):
Thank you, thanks for having us.