
January 29, 2025 42 mins

Get ready to explore the groundbreaking evolution of artificial intelligence with our special guest, Scott Hebner, a principal analyst at SiliconANGLE and theCUBE. Scott's journey from IBM engineering to becoming a pivotal voice in AI analysis provides a fascinating backdrop to our conversation. We navigate through the transformative tech cycles he's witnessed, from the internet and cloud computing to the rapid advancements of AI today. Scott shares his insights on how AI is reshaping industries by optimizing content and simplifying complex tasks, significantly impacting both everyday life and business operations.

Discover the revolution of generative AI and its profound impact on the business world over the past decade. Uncover the transition from predictive AI to the innovative capabilities of generative models like ChatGPT, including the rise of specialized small language models tailored for specific sectors. We discuss the potential of AI in coding and simplifying tasks, offering a glimpse into a future where specialized AI models continue to enhance various domains. The conversation further explores the shift from task-oriented assistants to goal-oriented agents, emphasizing the need to integrate causality into AI systems for better understanding of objectives and consequences.

Navigate the intricate landscape of AI regulations and data privacy with us as we examine the balance between innovation and protection. We delve into the risks associated with AI, such as IP rights and biases, and the global competition in AI development, drawing parallels to cryptocurrency debates. The episode also underscores the importance of building networks for digital success, particularly within the Master of Science in Digital Media Management program at USC. Join us as we encourage students to engage with industry leaders and continue their educational journey, expanding their knowledge through connections.

This podcast is proudly sponsored by USC Annenberg’s Master of Science in Digital Media Management (MSDMM) program, an online master’s designed to prepare practitioners to understand the evolving media landscape, make data-driven and ethical decisions, and build a more equitable future by leading diverse teams with the technical, artistic, analytical, and production skills needed to create engaging content and technologies for the global marketplace. Learn more or apply today at https://dmm.usc.edu.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Mediascape: Insights from Digital Changemakers, a speaker series and podcast brought to you by USC Annenberg's Digital Media Management Program. Join us as we unlock the secrets to success in an increasingly digital world.

Speaker 2 (00:23):
Welcome to another episode of Mediascape: Insights from Digital Changemakers, brought to you by USC Annenberg's Digital Media Management master's program. I am one of your co-hosts, Anika Jackson, and I am here with one of my favorite, most recent AI connections, Scott Hebner. He is a principal analyst for AI at SiliconANGLE and theCUBE.

(00:44):
Scott, you're also a board member or advisor for so many different startups, and you also had a very lengthy career at IBM, moving up into various CMO and VP roles. So I'm really excited to have you here to unveil a little bit of the history of AI.

(01:06):
I've spoken about it a little bit on the podcast, but not gone super in depth, and I know that you have some recent papers and a really good perspective on where it's been, where we are, and where we're going next. So just thank you for being here.

Speaker 3 (01:18):
Thank you. This is going to be a lot of fun, and I appreciate it.

Speaker 2 (01:28):
These are always fun conversations for me, because I love talking about where we are and getting into the nitty-gritty, but also imagining what the possibilities are. What was your career trajectory into working in tech in these various roles and then moving into AI?

Speaker 3 (01:38):
Oh God, if I go all the way back: my dad was an engineer at IBM and I didn't know what I wanted to do. I just wanted to go to the University of Massachusetts and have fun, and he talked me into getting an engineering degree. That sounded good. I didn't realize it was going to be, you know, torture, but somehow I got through it.

(01:59):
I actually got through it because I had a friend, one of my roommates, who always had a Heineken in his hand but was really brilliant, so he was like my tutor. By the time I finally got my degree, I realized I wasn't going to be doing real live engineering work. That stuff's really complicated, as you can imagine, and I ended up going to IBM.

(02:21):
Long story short, I think I did more talking about what the technology can do, because I was able to bridge: I could understand the technology at a deeper level, but I was also able to communicate it. That got me into product management and marketing roles. I was the CMO of several of IBM's businesses and have been through three major technological transformations. When I was in college, believe it or not,

(02:44):
we were just starting to get PCs. There were no cell phones, no internet. If you wanted to look something up, you had to go to the Encyclopedia Britannica. So I was able to go through many of these technology cycles: the internet, then cloud computing, the Internet of Things, which we're all familiar with now. And then, most recently before I left IBM, it was AI.

(03:06):
So I was the chief marketing officer for IBM's AI business, and when I ended up leaving IBM I got into the analyst world. I've been focused over the last couple of years on what's really the next frontier, where is this all heading? Because it's so much in its infancy. I mean, it's really in its infancy. And I guess there's bad news and good news about being my age.

(03:28):
The bad news is you're older. The good news is you get to see the same movie over and over again, just with different characters and slightly different plots. So I think AI is following some of the previous technical transformations that have occurred in society and the world, but at a much faster clip. What you can do today versus what you could do three years

(03:48):
ago, versus what you're going to be able to do three years from now with AI, is absolutely incredible.

Speaker 2 (03:54):
Yeah, there's so much to discuss with this topic. One thing I really try to impart on my students, as we were talking about before we jumped on, is that I remember when we started talking about generative AI a couple of years ago, getting students to start using it, feel comfortable, and not be afraid of it. And now we see that in the consumer base, where brands talk about the latest technology in their TVs or in their products

(04:18):
instead of saying "AI-driven," because consumers are still wary. These students come with various degrees of expertise, or are brand new to the digital space, and they're trying to learn all the different tools. I'm really trying to encourage them: at least start using gen AI, start understanding what this tool looks like, and then in the next class we'll talk about another thing you

(04:39):
can apply on top of that, to enhance and increase their knowledge of the different tech stacks within the ecosystem.

Speaker 3 (04:47):
Yeah, I think that's the right approach, because there are so many new capabilities coming out at such a rapid pace. But the foundation for everyday people, not businesses but everyday people, I do think is generative AI, whether it's Copilot from Microsoft, or ChatGPT, or Perplexity, which is the one that I really like.

(05:09):
I use that quite a bit. It's amazing the things you can do. For instance, when I write a 15-page research note and want to put tags on it where I publish it, for search optimization, I just throw the thing in there and say, give me all the best tags, and then cut and paste them. You don't have to do it by hand anymore, right?

(05:30):
Or if you have one table with a whole bunch of data, say 500 different rows and columns, and you have another one from two years later, and you want to know what has changed.

(05:50):
You just throw them both in there and say, tell me what has changed, what was in row C but is now not in row C two years later, and it just does it for you. Not to mention doing research. I use it all the time. If you have a question like, how are statistical probabilities in AI different than causality, it gives you a beautifully written-out answer. I think the productivity boost for doing almost anything is really

(06:11):
really incredible. Again, it's just the start of what we're all going to be able to do. Having said that, I do think you've got to be careful with what we call hallucinations and bias, and sometimes it conceals influential factors, so you can't rely on it a hundred percent. There are still things that need to be fixed, but I think over time it'll get better and better.

(06:32):
But I'm a huge fan of AI. I think all this talk about it taking over humans is overblown. It's just going to make us all smarter and more productive, help us make better decisions and problem-solve better. It's going to make life easier for everybody. The good is going to way outweigh any potential bad that comes from it.
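
To make the workflow Scott describes concrete, here is a minimal sketch of scripting that kind of request, assuming the OpenAI Python SDK and an API key in the environment; the model name is a placeholder, and the same prompt works pasted directly into ChatGPT, Copilot, or Perplexity.

```python
# Minimal sketch (not Scott's actual workflow): ask a chat model to suggest
# publishing/SEO tags for a research note. Assumes the OpenAI Python SDK
# (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

note_text = open("research_note.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever chat model you have access to
    messages=[
        {"role": "system", "content": "You suggest concise tags for publishing and search optimization."},
        {"role": "user", "content": f"Give me the 10 best tags for this research note:\n\n{note_text}"},
    ],
)
print(response.choices[0].message.content)
```

The table-comparison example works the same way: paste both tables into the prompt and ask what changed between them.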

Speaker 2 (06:52):
And I definitely want to go down that rabbit hole, but maybe in a little bit. One of the things that I have been using and testing this week is custom GPTs, along with voice technology, so that I can just talk to it. I will say I love Claude, but for this particular use case I have a custom GPT that

(07:12):
somebody created, and it's taking me through an exercise for a new product that I'm developing. It's a course, and it has these other pieces, and so it's taken me through the whole process: building the brand persona, customer personas, pain points, messaging, demographics, psychographics, what I'm actually delivering. And I love the fact that I can voice-memo to it, so that, instead of

(07:35):
having to spend all the time typing, I can just speak. I love writing, but sometimes it's easier to get your thoughts out if you can voice them. It's picking it all up, synthesizing it, and then saying, okay, here are some other questions for clarification, and then summarizing the key points to put into my deck. That's, I think, a really beautiful use case for AI

(07:56):
technology, for gen AI technology specifically. So, you left IBM, you've been working with all of these different organizations, and you are truly getting deep into research and analytics, thinking about what's current and what's coming next. Will you take us back? We know that AI, big AI, has been around for 50-ish years.

(08:16):
What's happened in the last 10 years, which you were mentioning before we jumped on, that has really helped get us to the point where we are now?

Speaker 3 (08:24):
Yeah, if we go back about 10 years, I think two things started happening in the world of AI, particularly in businesses. The concept of AI in academia, research, and some of the very technical sciences, if you will, had been there for

(08:47):
decades, but in business it started in earnest about a decade ago, and I think what was really driving it was two things. One is that businesses had so much data, and so much of it changed all the time, that there was no way humans could understand it. There needed to be more intelligent ways to process the data, and analytical tools could only do so much.

(09:10):
The second thing that started happening is that companies came out with new platforms and tools that democratized the ability for everyday businesses to use AI. So the first phase, maybe up until a couple of years ago, was what we would call predictive AI. Think of it as the underlying foundation of all this, which

(09:31):
is basically massive statistical probability machines. They swim in a lake of massive data and can statistically identify patterns and anomalies within these huge data lakes, and from that they're able to predict what may happen in the

(09:51):
future, because they learn from all those patterns, and so they can predict or forecast something that may happen. That's how businesses started using it: predicting, well, what is my next financial quarter going to look like? Or, if there's a hurricane coming up the East

(10:12):
Coast, what should I do with inventory? It would learn from past experiences and be able to tell you, here's what you should do. It tells you the what. So that's how businesses used it; it was kind of under the covers of the everyday tools they would use. And then, as you alluded to a few minutes ago, in late 2022, I think, ChatGPT came out,

(10:33):
and that was the first generative AI tool that the masses started to see. What that did is take the predictive models under the covers to a whole new level with what they call neural networks. Your brain is, I don't know, billions and billions of little neurons that somehow communicate to process

(10:55):
information. Think of it as what handles all your memory and then your instincts. Like my dogs: if I get up in the morning and start packing, they know I'm going somewhere, and they're instinctive and they get, you know, defensive. That's sort of how generative AI works behind the covers. It builds all this memory in what's called a large language

(11:17):
model, and then it's able to process all that memory to actually generate content. So it can have a dialogue with you now, and you can say, I want you to create me an image of a rabbit being chased by a fox, and it will create it. Then you can start saying, well, can you make it more cartoonish, and can you put a big sun in the background?

(11:40):
It will just iterate with you, and you can create a nice image. And you gave some of the examples before; you're able to do some incredibly amazing things. The genre of AI that is predominant today comes from the big players, like Microsoft and OpenAI, and you mentioned Claude; you've got Google, all these players.

(12:01):
Think of them as one-size-fits-all, massive, solve-the-world models. They just suck up all the data; the data lake is the internet, so it's great for everyday things. What you're alluding to is sort of where I think this is starting to head: now you're starting to get what they call small language models, or

(12:23):
SLMs. They're not necessarily small, but think of the S as specialized. You're getting these more domain-specific, industry-vertical, profession-centric models. One of my three sons is getting a law degree down at the University of Texas, and there are ChatGPTs for law, for case law and doing all that kind of stuff.

(12:44):
There are ones for running a retail shop, ones for digital media and publishing. What you're seeing now is a whole bunch of these more specialized models that aren't as big and huge and massive, but they're really focused on the domain you're in and the use case you're trying to use them for. They may feed off the mother of all generative

(13:08):
stuff, but they're really built with more knowledge and more usability for what you're trying to do, and you can see how that's going to progress over time. Things are going to be built for very, very specific purposes, like potentially discovering new drug therapies. I think where we are now is that the big

(13:31):
one-size-fits-all models are being complemented by hundreds and hundreds, if not thousands and tens of thousands, of smaller, more specialized models, like the one you mentioned.
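
As a toy illustration of the "predictive AI" idea, the sketch below fits a trend to made-up quarterly revenue and extrapolates one quarter ahead; the numbers and the simple linear model are assumptions for illustration, not anything from the episode.

```python
import numpy as np

# Made-up quarterly revenue history: the "memory" a predictive model learns from.
revenue = np.array([10.2, 10.8, 11.5, 11.9, 12.6, 13.1, 13.9, 14.4])  # $M per quarter
quarters = np.arange(len(revenue))

# Learn the statistical pattern in the past data (here, a simple linear trend).
slope, intercept = np.polyfit(quarters, revenue, deg=1)

# "Tell you the what": extrapolate the learned pattern to the next quarter.
forecast = slope * len(revenue) + intercept
print(f"Forecast for next quarter: ${forecast:.1f}M")
```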

Speaker 2 (13:41):
Yeah, and I also know that AI is really good at coding. When we're thinking about people working in digital media, they may not know how to code, they might not know how to put together the back end of a website, but we're also moving into a phase where we can ask AI to create the code for us. The thing I would recommend everyone do, the easiest thing

(14:09):
to do with the gen AI stuff today, is ask it any question.

Speaker 3 (14:12):
Give up on your search engine, because you'll see that it's not even worth it. Go to ChatGPT or Copilot or whatever with any question you have. How do I do this? What happened in this timeframe? What day of the week was January 24th, 1942? Anything you want to know. I used it this morning.

(14:33):
Someone said, I can do a meeting at four o'clock GMT time. I'm like, what the hell is GMT time? That's a quick question, and by the way, you can do it by voice. But I think where it's really good is in how you actually accomplish something, like, I need to go through this, and actually asking it to do it for you, and that's where you get into coding. I think this is what we call in the industry AI assistants or

(14:56):
chatbots: they're built on huge amounts of memory of how people have accomplished a task in the past, and therefore they can work out the instincts of that. You can ask it to write code that does something and it will write it for you. That's a task, and it's interesting, because marketing

(15:21):
and sales, and coding, are the two biggest use cases in business for generative AI, because businesses are using them as assistants. It's incredible the amount of code they can write, the quality assurance of the code, and running a system to make sure it works. The productivity is off the ceiling, just off

(15:43):
the walls. But again, what it's good for is if you have a task to be completed. Where we're heading from here is more than just tasks, and that's where I think it's going to get interesting.

Speaker 2 (15:55):
And that's what I wanted to talk about next, because you alluded to the fact that you just put out a paper about this a couple of days ago. Looking over my show notes, one of the big topics was going to be the next frontier of AI we're moving into, which is AI agents. So can you talk a little bit about the transition from this

(16:15):
task-oriented use case into what is next, and how you're seeing it, either in action now or in the future? Will we all have our own personal agents? What does this look like?

Speaker 3 (16:30):
Yeah, so there are a lot of streams we can go down with a question like this, but let's keep it where I think the impact is going to be, in more of an everyday person's world. Today, like I've been alluding to, generative AI is really good if you have a task: create this image, summarize this 45-page research paper.

(16:50):
I can't read it all, I won't understand it, but what is it really trying to say? Those kinds of tasks it's incredibly effective at because, again, it works on memory, past learnings if you will, and is able to accomplish those tasks. Where I think this is going to head is what we now

(17:10):
call agents. A task is done by an assistant. So when you hear the terminology about an agent, what they're really saying is that an agent is more about a goal. There are two dimensions to think about. One, you're going from a task to asking it to help you accomplish a goal; I'll come back to that. The second is you're going from using it to get information to

(17:31):
using it to actually help figure out what actions to take. So think about a goal. If you ask it, I want you to create me some code that will automatically publish the article, track all the places people come to read it from, and then send me an email once a month, it will write the code for you. That's a task; you're asking it to accomplish a task. If you were to ask it, I want to grow my following by four times

(17:56):
and I want to become recognized as an expert in using AI in academia and research, that's a little bit more difficult, because you're asking it to solve for a goal. What does that mean? Well, that means it needs to reason. It needs to actually say, okay, first of all, I comprehend what you're asking me.

(18:17):
Two, I can help you plan what actually needs to be done. And if you're going to accomplish your goal, you're generally going to have to make decisions. When you make decisions, there are always consequences, and there are usually multiple different decisions you can make. So how do you know what the right decision is? And then,

(18:39):
collectively, making the right decision and being able to evaluate different options, and how each would change the outcome you're trying to accomplish, is still not the same thing as problem solving. How do I take a whole bunch of decisions, and decisions that other people may make, and actually understand how to solve the problem? That's a whole new world.

(19:01):
That's what's meant by being goal-oriented. So when you hear the term AI agent as opposed to AI assistant, that's the difference: task to goal. Sometimes you'll also hear agentic AI referenced, and that is for much more complex problem solving, generally involving an organization of people, like

(19:22):
teams of people, and that's when one agent just can't handle it by itself, so you have a network of agents that collaborate. They have their own decisions and their own data sets they work off, but they work together. It's like the wisdom of crowds, or swarm intelligence. If you ever watch birds fly, they're

(19:44):
actually acting as a swarm; it's the collective intelligence of the group that guides the group. And we can go, if you want, a little deeper into why today's AI cannot do that.
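
A schematic way to see the task-versus-goal distinction is sketched below; it is not any particular product's agent framework, and the actions and their assumed effects are invented for illustration.

```python
# Schematic sketch of assistant (one bounded task) vs. agent (loop toward a goal).
# The actions and their assumed weekly effects are invented for illustration.

def assistant(task: str) -> str:
    """An assistant completes one bounded task and stops."""
    return f"[output produced for task: {task!r}]"

def agent(followers: int, goal: int, max_weeks: int = 52) -> int:
    """An agent plans, acts, observes, and re-plans until the goal is met."""
    actions = {                     # assumed follower gain per week for each action
        "publish an article": 150,
        "post a short video": 400,
        "guest on a podcast": 250,
    }
    for _ in range(max_weeks):
        if followers >= goal:
            break                   # goal reached: stop acting
        # Plan: pick the action whose modeled consequence best advances the goal.
        best_action = max(actions, key=actions.get)
        # Act and observe: apply the modeled effect, then loop and re-evaluate.
        followers += actions[best_action]
    return followers

print(assistant("summarize this 45-page research paper"))
print("followers after agent run:", agent(followers=2_000, goal=8_000))
```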

Speaker 2 (19:56):
Yeah, I'd love to hear that, and I'd love to hear some examples and use cases of the move into agents and then agentic AI.

Speaker 3 (20:07):
Okay, so let's start with that, and I'll try not to go too deep here, but luckily my youngest son is getting a PhD in this stuff, so I have technical support. He's getting a math PhD, because what's underneath the covers of all this AI, by the way, is some incredibly complex and elegant math. It's all math. The way it works today with
(20:30):
generative AI is it'scorrelative statistics.
In other words, it'scorrelating data in a massive
lake of data.
It identifies patterns,anomalies, associations among
different data points.
Think of it as statisticalprobabilities being applied to
this huge amount of data, andthose probabilities could then

(20:52):
tell you, given what has happened in the past, your memory, here's what's likely to happen in the future, given whatever you're asking it. So think of it as a big machine that knows how to generate the probabilities of something happening, given what's happened in the past. That's good for tasks, because tasks tend to be repetitive, and

(21:12):
if you knew how to do it in the past, it can help you do it in the future. But it works in a static environment. When you start getting to goal-based work, you're starting to have to deal with mechanisms of causality. In other words, everything that exists has a cause, and that's how you figure out what the effects are.

(21:33):
Aristotle, I think, has a famous quote that says once you prove the cause, you immediately prove the effect, and, conversely, nothing can exist without its causes. And when you think about it, it's true. You do almost nothing all day long without thinking about cause and effect. Can I make it there and back before I have to do the podcast, which I just did? You're thinking cause and

(21:55):
effect. So, going back to the technical thing here: if today's AI does probabilities to help you execute a task, what causality tells you is how those probabilities change when the world around you changes. That's what causality means, and we all operate in a dynamic world where conditions and people and opinions change, in other

(22:19):
words, a dynamic world, right?
Yeah?

Speaker 2 (22:21):
all the time.

Speaker 3 (22:23):
So you need causality to be able to comprehend a problem or a goal and make the right decisions, because everything has cause-and-effect relationships. If I do this, the consequence is this effect. You take today's AI, which is based on statistical probabilities, and infuse into

(22:45):
it causality, what's called causal AI, so that it can understand how to do interventions, understand counterfactuals to the current state, and identify what they call confounding effects, things that may affect the outcome that you're not even thinking about. And think about humans: there's this thing called tacit knowledge.

(23:07):
You do a lot every day that, if I asked you how you do it, you probably wouldn't even be able to explain, because you just do it. It's called tacit knowledge. These agents are going to help humans do what is really the most

(23:27):
difficult thing, which is plan, problem-solve, reason, and make decisions. So an AI agent that's going to help you solve goals, a goal-based approach, needs to have those mechanisms of causal and tacit knowledge infused into it. And if you go back to what I was saying before, today's AI is

(23:48):
really just its memories. It's a whole bunch of data; it's trained on data from the past, and from that it learns to say, okay, given what happened in the past, and whatever you're doing now with your unique circumstances, here's what you can expect to happen in the future. But that's only good if you expect your future to be just like your past, which, in business and society and

(24:09):
everyday life, is not the case. That's why you really won't hear anyone today say that generative AI can help you make decisions, or problem-solve, or reason, because it really can't. It's just not designed to do so. What's starting to happen now is that there are new advancements starting to infuse that capability into the AI models,

(24:31):
and therefore you get these agents where you can have a goal: I am losing money in my 15 stores; for some reason, people have stopped coming. Why? The first question you're going to ask is, what is the problem? It's going to say, well, you don't have enough people at the store, at the cash registers, and people are waiting too long

(24:52):
in line to get out. So that's why you're losing people. Today's generative AI can tell you the what; it can tell you that. But as soon as you ask it, okay, I'm able to hire 22 more people because I can afford it, where would I put those people across the stores to optimize my revenue and my profit? That it can't do, because that's problem solving.

(25:16):
Think about what-if scenarios: say, well, we have four here and three there and two there. That's a cause and effect; that's one what-if scenario. What happens if I put six there and eight there and one, two, and three there? Then it would be a different outcome. So you're starting to problem-solve. Today's AI models aren't designed to do that, because

(25:36):
you're changing the conditions, you're intervening in the model, you're not relying just on past learnings; you're starting to do counterfactual reasoning, the facts being where I put people. And these models can actually tell you, yeah, if you put them here, people get in and out of the store this much faster and you'll make this much more money, given your run rate in each store. That would

(25:58):
be an example of this, and I think that's coming. Today it's helping you be more productive in doing tasks; in the future, it's really going to help you make better decisions and problem-solve. I don't think it's ever going to replace you. I don't think it's capable. Maybe 100 years from now, when I'm gone, who knows, I'll be living on Mars and all that kind of stuff.

(26:19):
But this is the near future, the years ahead. Learn generative AI now, because that's going to help you with tasks, but it's coming to where it's really going to be a great little companion to hang out with you and do things, and it's definitely going to be true in business. People are already building it and deploying it. But to your earlier comment, I do think you'll get

(26:40):
your personal assistants. I think these things will grow to where they get to know you. Right now they can get to know you, they know your calendar, they know your email, and then you can do tasks better. But it'll get more interesting and more useful.
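
To ground the store example, here is a toy version of the kind of what-if search Scott describes. Everything in it is made up for illustration: a hypothetical model in which extra cashiers shorten waits and shorter waits recover revenue, plus a brute-force comparison of allocation scenarios.

```python
import itertools

# Toy illustration of what-if (counterfactual) reasoning over staffing.
# All numbers and the wait-time/revenue model are invented for illustration.

stores = {          # store -> (current weekly revenue $K, current avg. wait in minutes)
    "A": (120, 12),
    "B": (95, 18),
    "C": (150, 9),
}
NEW_HIRES = 5                  # people we can afford to add
WAIT_CUT_PER_HIRE = 2          # assumed: each extra cashier cuts the wait by ~2 minutes
REVENUE_GAIN_PER_MINUTE = 1.5  # assumed: each minute of wait removed recovers ~$1.5K/week

def outcome(allocation):
    """Predicted total weekly revenue if we intervene with this allocation of new hires."""
    total = 0.0
    for (_name, (revenue, wait)), hires in zip(stores.items(), allocation):
        saved_minutes = min(wait, hires * WAIT_CUT_PER_HIRE)
        total += revenue + saved_minutes * REVENUE_GAIN_PER_MINUTE
    return total

# Enumerate every way to split the new hires across the stores (the what-if scenarios).
scenarios = [a for a in itertools.product(range(NEW_HIRES + 1), repeat=len(stores))
             if sum(a) == NEW_HIRES]

best = max(scenarios, key=outcome)
print(f"Best allocation across {list(stores)}: {best} -> ${outcome(best):.1f}K/week")
```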

Speaker 2 (26:54):
Yeah, and that's something I really welcome, because I do a lot of different things, as we all do. We have our personal lives, we have our professional lives, we might have our academic lives. I'm podcasting, I'm teaching, I'm working for a nonprofit, I'm a single mom. I can input all of these things and have it help me better plan

(27:15):
my schedule, or look ahead: you need to be thinking about this, this, and this, which are big deadlines or other things. So I really embrace this idea. One other thing: of course, we're recording this the week of the presidential election, and I know there was an executive order from Biden on AI that he says he's ahead on.

(27:36):
There have been different state implementations that have either happened or been vetoed, and there are different policy groups in different states. When it comes to thinking about gen AI and the future of AI, what do you think are some ethical ways we can move forward, when we're thinking about how we're approaching the

(27:56):
best use cases, the data that's being input, and just making sure that we are being good stewards of the way we're using this amazing technology?

Speaker 3 (28:06):
Well, to your point, this is going to become a huge challenge; I think it already is, particularly in businesses, and particularly in highly regulated businesses like financial services on Wall Street, or if you're doing pharmaceutical kinds of work. Some industries aren't regulated as much, while others have to report to the government. Last I saw, there were 189 countries across the world that

(28:28):
are coming out with AI-related regulations, and unfortunately they're all different. Then, like you alluded to, in the United States there are different approaches across different states too, so some are going to be more restrictive and some are going to be more open, if you will, to innovation, because there's always that trade-off between stifling innovation and

(28:49):
protecting people. Think of data privacy, people's actual privacy. That's a toughie, because AI is going to start figuring stuff out and you want to protect people's privacy. There's also intellectual property.

(29:09):
You can take someone, a famous entertainer, a singer, and you can basically create a new song with their mannerisms, their video, their voice, and it could be a winning song. But do you really have IP rights to that song? Because all you did was take some of the

(29:30):
essence of someone else. Is that intellectual property? There's a lot to that. And then you get into, let's say, the bad people. Actually, my father-in-law got a call from what he thought was one of my sons saying, I need four thousand dollars or something like that, and he found it suspicious.

(29:50):
You can rip off people's voices and then make phone calls or whatever, and people think it's real. For some of the elderly, like my mom before she passed, I worried about that kind of stuff, because you see how the scam works with phishing and email and text messages. And imagine you start getting a call from someone you think you

(30:10):
know and they're asking you to do something. So there's all that protection against the criminal element of all this. There's a lot to unpack when we think about regulations and laws and things like that. But, by the way, I alluded to this earlier: there are inherent problems in these models, in what they

(30:33):
call hallucinations. The model will say, super confidently, here's the answer, but it's completely wrong. It goes back to the technical stuff here, which is correlation. Correlating two events or two behaviors doesn't necessarily mean causation. Correlation doesn't always mean causation, and that is a

(30:57):
big source of a lot of the errors or inaccuracy. And then, depending on who trained the model and what data they used, you can get biases. There can be biases around demographics of people, and there can be biases in business. The more they open up these models and the more they train on your data, your stuff and

(31:18):
what you're doing, the less biased they'll be in your world, like if you're talking about the personal assistants. But then that circles you back around into the data privacy problems. And this is what makes me wonder, without getting on one side or the other in politics: you watch a lot of these people in the

(31:39):
Senate and the House of Representatives, and you sit there and say to yourself, are they really able to understand this well enough? They need to bring in a lot of data scientists and industry people who really understand this before they start making these regulations. There are a lot of competing countries, like China and others. You want to protect, and you have to protect,

(32:00):
this country; you just have to, and you should. But at the same time you don't want to go too far, restrict innovation, and have us fall behind. It's going to have a huge economic impact. It's kind of like the whole debate around crypto.

(32:21):
Should we go after crypto more aggressively? We're holding back right now, but a lot of other countries are moving ahead, and you can fall behind economically. It can become a big problem. It's the same problem here. So, no matter what profession you're in, AI can help you make more money, because they need smart people to figure this all out, I think.
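
Scott's correlation-versus-causation point is easy to see in a toy simulation: below, a hidden confounder drives two variables, so they correlate strongly even though neither causes the other, and controlling for the confounder makes the apparent relationship vanish. The scenario and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden confounder (e.g., summer heat) drives both variables; numbers are made up.
heat = rng.normal(size=n)
ice_cream_sales = 2.0 * heat + rng.normal(size=n)
fan_sales       = 1.5 * heat + rng.normal(size=n)

# The two series correlate strongly even though neither causes the other.
print("raw correlation:", round(np.corrcoef(ice_cream_sales, fan_sales)[0, 1], 2))

# Control for the confounder (regress it out) and the "relationship" vanishes,
# which is the kind of hidden influential factor a purely correlative model can miss.
resid_ice = ice_cream_sales - np.polyval(np.polyfit(heat, ice_cream_sales, 1), heat)
resid_fan = fan_sales       - np.polyval(np.polyfit(heat, fan_sales, 1), heat)
print("after controlling for heat:", round(np.corrcoef(resid_ice, resid_fan)[0, 1], 2))
```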

Speaker 2 (32:36):
Yeah. I was just reading an article yesterday about how Meta trains their AI tool on all of our data, except in Brazil and Europe, because of the data privacy restrictions around the GDPR and Brazilian privacy regulations.

(32:57):
So it's that concept of, if you are not paying something, if you're not paying a subscription fee, you are the product. We think about these social media platforms, or even YouTube and different channels, and how they're using that data. Then you think about who's on these platforms, whether it's X

(33:17):
or others. This could go on for a whole other half hour or hour of discussion, which we will absolutely save for another time, but these are all things that I think about when I'm thinking about data privacy and how we are good stewards. So I think everybody who's listening just needs to think about these issues, do some research, check what your

(33:38):
privacy settings are, and think about what you're feeding into the model, whether you're feeding it into an open model or you work for an organization that has a closed model, which is fantastic.

Speaker 3 (33:51):
That's good advice, because, at least in my life, there are thousands and thousands of these terms and conditions that pop up, and I just say okay and never read them. We all do. Some of them, depending on what you're doing and what you're dealing with, are probably worth reading, or at least trying to. But remember, you don't have to read it now. All you've got to do is cut and paste it into a generative AI

(34:11):
tool and say, tell me the essence of this. Summarize this for me. What do I need to know? What's going to impact me? Ask it whatever you want. How many misspellings do they have? You can ask it a million different things.

Speaker 2 (34:27):
You don't even have to really read it anymore, you just have to cut and paste it. You've just given me a new exercise to assign next time I teach 510 in the DMM program, where we look at websites and cookies in different countries and different industries. So thank you for that. I love that.

Speaker 3 (34:43):
I think the more you experiment with it, the better. I've learned this just over the last three or four months, since I've been doing a lot of writing and research; I'm finding things every day that I didn't know it could do. I must use the two generative AI tools I rely on, Perplexity and ChatGPT, I don't know, 20 or 30 times a day, literally. But I

(35:05):
think the big picture here is, and I mentioned the same movie playing over and over again with different characters, even if you go back to things like, and this may sound stupid, but I'll say it anyhow, when we all first got cars: great, you can get from one place to another, cars are great, but they're still dangerous. You can get in an accident, you can hurt somebody

(35:26):
else. You've got to go into this cautiously, in how you use it and in what you believe it is telling you. Don't assume everything is a hundred percent accurate, but it's definitely better than not having it, just like having the internet is better than not having it. I have this debate with my three kids all the time, because they're unbelievably smart. One of them, you can name any country you

(35:49):
know, and he can tell you the capital of every country in the world, countries I've never even heard of. Luckily, they used the internet over the years as they were growing up for good. Maybe they're just really smart, but I think a lot of it is that. I always tell them I could be as smart as you guys if I had grown up on the internet, but I didn't.

(36:09):
I had to go to the Encyclopedia Britannica or whatever. I think AI is the same thing. It's good, but you have to be cautious with it and realize it's not perfect. There are flaws, and there are a lot of things it can tell you that are either biased or wrong or conceal influential factors, not because it does it on purpose, just

(36:40):
because all it's doing is correlating patterns and behaviors and events, and it can miss something. So causality and correlation aren't equal. The other way of looking at this too, Anika, is that it's going to become a way of life, like the internet and your phone. So even from that angle, it's time to start really playing around with this stuff, because when you get into the business world you'll be using it. You're using it. More and more professors are using it,

(37:02):
and if you're in academia, researchers are using it. It's going to become just an everyday tool of life, like I think the internet has become.

Speaker 2 (37:09):
Yeah, absolutely. And you have a great research newsletter, the YouTube channel for theCUBE, and what you write on LinkedIn as ways that people can really learn more and get deep dives into what you've been sharing today.

Speaker 3 (37:24):
Yeah, what I'm trying to do goes back to the first minute of our conversation today: I play a technical engineer, but I'm not one. The math is beyond my pay grade. What I do try to do is understand what is happening, what real businesses are doing, and what some of the innovations are that are coming, and

(37:45):
try to translate them for, as I kind of joke, the mere mortals of the world, which is 99.9 percent of us. You don't need to be a deep analytical AI engineer with three PhDs; I'm just trying to lay out here's how it's going to impact the world, how it's going to impact business. So, yeah, follow me on LinkedIn, and I try to put out

(38:07):
some research in this area, and I'll be talking a lot about agents and goals and problem solving and causality. The other thing AI doesn't do really well today is explain itself. These models are black boxes: they just go off, do all their statistical stuff, and then tell you the answer. But going back to regulations, how do you do regulatory compliance when it doesn't tell

(38:28):
you how and why it recommended or forecasted something that you just used in a real business? Now I've got to explain to the government, to the regulatory process, how it was done. So that's another thing that will be coming in time: these models are going to be able to explain themselves much better, and you'll be able to intervene with them and basically have a

(38:49):
conversation with them. It's all coming. To me, it's not necessarily here yet, but it's inevitable; it's going to happen. Will it happen next year, three years from now, five? That's an open question, but the technology is there and we're going to get there, because we're barely getting started with the potential of this.

Speaker 2 (39:08):
Amazing.

Speaker 3 (39:09):
Yeah, it really is.

Speaker 2 (39:10):
Well, thank you for sharing some of your insights today, Scott. To be continued. We'll make sure to put your information in the show notes so that everybody can follow you on LinkedIn, subscribe to the newsletter and get more information, or perhaps reach out to you if they have a business case they need to solve and need your assistance. So is there one last thing you'd like to leave everybody

(39:34):
listening with today?

Speaker 3 (39:36):
Yeah, what you just said about reaching out, take that to heart. We're all busy, but we do our best to get back. As I look back over a 30-year career, I would say one of the most important things to do is build a network of people and try to stay in touch the best you can. It's kind of like exercise: you know you should exercise.

(39:56):
Some people do it less, some do it more, but it's just a good, healthy approach to life. And I think in the world of work and business, build a network out, which is easier now with things like LinkedIn, and try to stay in touch with people. It's amazing: since I left IBM and went on the

(40:16):
job search, nine out of 10 people I reached back out to in my network, almost everyone, was willing to help me out, and, generally speaking, the ones who didn't were just too busy, which is understandable. Some people are busy at different times and all that. But, yeah, learn from others. Your insights and your ability to be successful in

(40:40):
life are, in some ways, only as good as your ability to learn from others, and, unfortunately, someone like me learned that way later in life. I wasn't the best listener in the first half of my life, kind of like, I know the answer, and that's just not true for anyone. In the second part of my life,

(41:01):
so far, I've learned to listen and welcome all these comments and feedback, and I would love to hear from people: did this even make sense to them? Am I being archaic, or whatever? So, yeah, that would be my last thought: keep building a network and stay in touch.

Speaker 2 (41:18):
Fantastic. And that's a lot of what we're about at USC and in this program: building those networks, whether it's with somebody who's been a guest on the podcast, somebody who's a guest speaker in class, or just with each other and with your professors. If you're listening, we love to help you out, we love to connect you to our networks. This is how we grow, this is how we learn. I love that

(41:38):
I get to learn from experts such as yourself every time we have a conversation, and we'll have more conversations. Some will be recorded and some won't, and every time I'll walk away with something else I want to study and learn, another little piece I can add to my knowledge base. So thank you for that, Scott.

Speaker 3 (41:55):
You bet, this has been fun.
Yeah, it has been.

Speaker 2 (42:01):
Thank you to everybody who is watching or listening to this episode of Mediascape: Insights from Digital Changemakers. I'll be back again, or my co-host, Joseph Attia, will be back with another amazing guest to share some insights on your digital journey.

Speaker 1 (42:14):
To learn more about the Master of Science in Digital Media Management program, visit us on the web at dmm.usc.edu.