
December 26, 2023 44 mins

In the season finale, Graeme is joined by Intel’s VP of the Data Center AI Solutions Group, Jeni Barovian. With a role that oversees product strategy, she has a wealth of knowledge about the responsible development of AI and how it is ultimately received by end users. Focusing on a future where technology interacts with the human experience, this conversation offers a look into how open-source AI is empowering developers around the world to make new and interesting technology like the products discussed in this season.

 

Learn more about how Intel is leading the charge in the AI Revolution at Intel.com/stories

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
As technology progresses, AI becomes more routinely a part of
people's everyday interactions in the world. From self-checkout machines
to generative AI search engines, AI offers many ways for
people to pursue solutions in their day to day lives.
I myself have been using AI powered copilots to help
me with my coding projects. This AI assistant not only

(00:24):
understands my chosen coding language, but also plain English text
as well. It automatically creates code for my prompts. It's
been a massive help for me. These AI-driven coding assistants
are always there and available. It's like having a senior
developer by my side without the grumpiness. This was my
first experience working directly with an AI copilot, and for me,

(00:46):
it has reinforced my view that AI is a tool
meant to help, not hinder our endeavors. I, like many
developers using these assistants, can now focus on creative problem
solving and working on harder and more impactful solutions. This
synergy empowers myself and other developers to tackle complex projects
with confidence, leading to faster innovation and higher quality software.

(01:11):
As AI continues to evolve, so does the potential for groundbreaking
advancements in the tech industry. I'm all in. Hey there,
I'm Graeme Klass, and this is Technically Speaking, an Intel podcast.
The show is dedicated to highlighting the technologies revolutionizing
the way we live, work, and move. In every episode,

(01:34):
we'll connect with innovators in areas like artificial intelligence to
better understand the human centered technology they've developed. This has
been such an interesting season, filled with not only amazing guests,
but also mind blowing technologies that have so much impact
on the way we live. In episode two, developers Reshakish
and Nihadka protected farm life from pests using Intel's Open

(01:57):
VINO technology. In episode five, Joe and Juan showcased the
autonomous public transport and its positive impact on the local
community of Lake Nona. It's been a real honor to
talk and learn from all of our guests. As we've
shown throughout this season, AI exists at the intersections of culture, commerce, philosophy,

(02:18):
and education, and with that it is subject to a
very critical lens. In this episode, I want to dig
a little deeper into the relationship between AI itself and
the developers guiding these projects from development to distribution. Before
we get into the virtual nuts and bolts of things.
I want to introduce a very special guest joining me

(02:41):
now is Intel's Vice President of its Data Center AI
Solutions Group and General Manager of Data Center AI Solutions
Strategy and Product Management, Jeni Barovian. She leads a team
responsible for product strategy, product planning, management, and execution of
Intel's data center silicon, software, and systems roadmap. Jeni
also serves as an ambassador for responsible AI, championing the

(03:04):
ethical development of AI technology and ensuring responsible practices in
Intel's product development and deployment. I've been really looking forward
to this discussion. Welcome to the show, Jeni.

Speaker 2 (03:14):
Thanks Graeme, it's great to be here.

Speaker 1 (03:23):
I will just start with you, and this is
a new role for you. Can you tell us a
little bit more about the responsibilities you've inherited in this
new role? I mean, perhaps start with what it means
to be the VP of the Data Center AI Solutions Group.

Speaker 3 (03:39):
Yeah, So, simply put, I lead the strategy and the
product portfolio for Intel's data Center AI Accelerator business. If
we look at the compute power in GPUs and accelerators,
this is what has really enabled AI to skyrocket and
these products are a really critical growth area in the
industry and also for Intel specifically the silicon and systems

(04:02):
that we build. They operate on a multi year development cycle,
so at any point I could be working on selling
products that are already in the market, or pathfinding
and defining products that are as far as five years out.
And my day to day kind of includes what you
might expect about running a business, managing investments, working with
our development teams to build our products, developing teams themselves,

(04:25):
and employees. But by far the most important thing that
I do with our customers is understanding their needs and
their goals to build out AI infrastructure that will power
the world.

Speaker 1 (04:36):
So as part of that data center AI, I have
a background in microelectronics, so I'm quite interested in
some of the future in terms of Intel's AI
hardware and silicon. Is there anything you could tell us
a little bit about that?

Speaker 3 (04:50):
We definitely are innovating on a broad scale and a
long life cycle. As I mentioned, we're working many years out,
and so it's all about learning alongside our customers and
our partners on what they're sensing in the market
what they believe needs to be deployed in the future,
and this is moving faster than anything I've seen.
I've been at Intel for over twenty-four years, and

(05:11):
the pace of innovation in AI is tremendous. Most of
my time at Intel previously was actually in our networking business,
and that's kind of one of the superpowers that I
think I bring to this new role. If you look
at the needs of deploying at scale AI systems, it
really is about massively interconnected processors and accelerators. So the

(05:33):
networking is actually just as important as the compute itself.
So that's definitely an area of innovation that I'm particularly
interested in driving.

Speaker 1 (05:41):
Okay, and are there any sort of projects you could
give us examples of some of the innovation at Intel.

Speaker 3 (05:48):
What I'm really excited about are the products that we're
delivering in our accelerator and our GPU Max portfolio, our
Gaudi accelerators. These are powering next generation AI infrastructure, and
these are products that are available today, but we're working
on that innovation for the next generation as well.

Speaker 1 (06:07):
That's great. And could you explain really the critical
aspect of this GPU hardware, the graphics processing unit, and why
it is so important for the AI technology we use,
like chatbots and AI assistants? Why is it so critical?

Speaker 3 (06:26):
When you look at the unique needs of AI computing
by comparison to more traditional general purpose computing, it really
is about massive scale, and the type of processing that
we're talking about is massively parallel processing. That's
really what GPUs bring by comparison to general purpose processors.

(06:46):
General purpose processors also deliver a tremendous amount of value
in AI computing because very often what's happening is you're
serving multiple workloads at the same time. And so what's
great about working at a company like Intel is that
there's a variety of assets that we bring for all different
types of AI infrastructure that customers are seeking to build out,
whether we're talking about large scale deployments and centralized data

(07:09):
centers as well as infrastructure that extends all the way
out to the edge as well as to the client.
And so it really is about that continuum of AI
compute that really needs to be powered by both this
massively parallel graphics processor units, but also by different types
of compute depending upon the deployment size and the use case.

Speaker 1 (07:29):
Okay, and we've already discussed in previous episodes on this
show some pretty excellent and really interesting AI deployments using
Intel's partners. We also talked a little bit about the ethical,
responsible nature of developers when they're creating these sorts of systems.

(07:50):
What steps are Intel taking to keep these sorts of
AI implementations responsible and secure.

Speaker 3 (07:57):
Yeah, it's really important, as we develop and deploy AI technology,
that we look at the positive outcomes that we're trying
to create. So this really is about creating positive global change,
about empowering people with all kinds of new tools, and
really improving everyone's lives around the planet. So those are

(08:18):
the positive outcomes that we can create, but we also
need to make sure that we balance that with a
full approach to lower the risks and also optimize the benefit.
So it's about managing both sides of the equation. So
Intel has a tremendous amount of work underway in responsible AI.
We've been working on this for several years, and this
initiative that we have has really evolved to include very

(08:41):
structured and rigorous and multidisciplinary processes to advance AI technology
responsibly all the way from that entire product development life
cycle that I was talking about, from development through deployment,
and we have a whole framework at Intel focused on global
human rights principles that maps very well to that. We're

(09:02):
looking at this really from a multidisciplinary approach. We have
a responsible AI Advisory Council that reviews these goals throughout
the life cycle of an AI project, and we look
at potential ethical risks within these projects and actively mitigate
those risks as early as possible. And really our goal
is to actively manage it, but also be transparent about

(09:22):
our position and our practices so that we can really
address and advance solutions for shared challenges across the industry,
so that we're not only just improving our own products,
but we're improving what's happening across the industry as well.
And so we also contribute to various standards and methods
at the national level, the international level. There are organizations

(09:45):
like the Business Roundtable on Human Rights and AI, Global
Business Initiative on Human Rights, the Partnership on AI, and
so this is really an opportunity for us to come
together with our peers to establish these parameters across ethical
and moral and privacy standards so that we can build
thriving businesses and innovate really exciting technology, but do so

(10:09):
in a responsible way.

Speaker 1 (10:10):
Our audience may have heard about the various open source
or closed source AI models. I'd like to get your
thoughts about Intel's approach to this perennial open versus closed
source approach to development.

Speaker 3 (10:26):
Yeah, this is a mission that's really essential to Intel.
We look at this open approach and open ecosystem priority
as essential to lowering barriers to entry and unlocking AI
innovation for developers and for customers, and we're really focused
on accelerating an open AI software ecosystem specifically that is

(10:50):
needed to break down proprietary walled gardens and closed approaches.
We really believe that open ecosystems are more powerful than
closed ecosystems. They drive a higher level of innovation, and
they focus on democratizing computing. And we certainly have a
long history as a company in this area and it's
been true for many years and other areas of computing,

(11:10):
and it's definitely true for AI as well. We saw
the industry move from centralized computing to decentralized computing and
back to centralized computing with the cloud, and we believe
that the pendulum of computing's next swing to the edge
is now also underway. I talked about that end to
end AI deployment across the entire spectrum, and so as

(11:30):
we look at all these different areas, we have a
proven track record of working with open ecosystems, building open ecosystems,
partnering with software vendors and developers to scale technologies, and
we're definitely committed to continuing this legacy with AI, and
we really believe that to realize the full benefits that
we've been talking about and focusing on delivering AI everywhere,

(11:53):
it truly means AI everywhere to everyone, to developers, to
all your devices, to all types of compute, to all
of your use cases and business models, and it's
an open ecosystem and open source software that are foundational
to ensuring that this can be fully accelerated.

Speaker 1 (12:09):
You talked a little bit as well about some of
the tools that Intel are supporting. We've had earlier episodes
leveraging the OpenVINO platform, and there's also like some
other AI software tools like oneAPI, and also support for
open source projects like PyTorch and Hugging Face. Perhaps you
could explain to us a little bit about these tools

(12:31):
and why they're so useful for developers.

Speaker 3 (12:34):
Yeah, I'll talk about a few of these because it
really is about unleashing the power of developers to realize
the power of AI technology. We're creating this foundation in
our silicon and our systems and our software, but it's
really about empowering developers. So if we look at a
few of these, I'm going to start a little bit
actually with oneAPI and looking close to the hardware.

(12:54):
We have the oneAPI programming model that's really the software
foundation of our overall AI strategy. So this is really
about providing a standard programming model that allows these developers
to get value across multiple different hardware architectures. I talked
before about the fact that you've got heterogeneous hardware architectures
depending upon the type of AI solution that you're developing

(13:18):
and deploying, and so being able to access the value
of these different hardware types from one set of source
code is really crucial. It's a really critical aspect of
our goal to democratize AI and allow developers to access
all of this to build and deploy AI solutions everywhere.
So if we look at the different stages of AI

(13:39):
development in terms of both development of the model and
treatment of the data throughout the entire life cycle. One
of the really fastest growing areas of AI development is
in fine tuning. So there's so many models out there.
You mentioned hugging face, right, that's a massive community for models,
and it's just again the pace of innovation there is
absolutely huge. So if you choose a pre-trained model,

(14:02):
that's a great way to get a head start on a
solution that you're trying to build, and ultimately means faster
time to insight for your enterprise. And so you can take
a pre-trained model, and then it's really, okay, what
do you do to get it ready for what you're trying
to do? This aspect of fine-tuning, bringing
in your own data and making sure that it's ready
for your application, is a really crucial set of steps

(14:24):
to accelerate that development and deployment life cycle. Then once
you do that fine tuning, right, you've got your data,
You've fine tuned your model. Now it's about okay, how
do I deploy that and get that ready for inferencing.
So obviously you've got to worry not only about the
deployment but updates across the various locations where you've deployed
the model. And this is really where OpenVINO comes in, right.
The OpenVINO toolkit can really be a good

(14:45):
asset in deploying all sorts of AI technology, but specifically
vision and natural language processing models across a variety of
different types of deployment targets. And if we look maybe
a little bit more broadly across other tools, we've got
libraries that enable the AI ecosystem across a variety of
toolkits and optimized software and libraries and frameworks, particularly ensuring

(15:08):
that the popular frameworks are optimized for what developers are seeking.
So what this is important for is enabling orders of
magnitude performance improvements and increasing productivity for developers across AI workloads.
So if we look across all of these different tools,
whether it's, you know, the libraries, oneAPI, OpenVINO,
optimizations for these frameworks, our goal is really to give

(15:31):
developers the openness and the choice that they need with
all of the hardware architectures that they're using, and making
them easy to program and easy to access the power
of that underlying hardware and really ideally with one code
base across many different architectures.
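The fine-tuning workflow described above, starting from a pre-trained model and adapting it with your own data rather than training from scratch, can be sketched at toy scale in plain Python. This is an illustrative sketch only, not Intel or Hugging Face tooling: the "model" is just a line y = w*x + b, its "pre-trained" weights already roughly fit, and a few gradient steps adapt it to new data.

```python
# Toy illustration of fine-tuning: start from "pre-trained" weights and
# adapt them to your own data with gradient descent, instead of
# training from randomly initialized weights.

def predict(w, b, x):
    return w * x + b

def fine_tune(w, b, data, lr=0.05, epochs=200):
    """Gradient descent on mean squared error for y = w*x + b."""
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = predict(w, b, x) - y
            grad_w += 2 * err * x / n
            grad_b += 2 * err / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Pre-trained" weights: a model that already roughly knows y = 2x.
pretrained_w, pretrained_b = 1.9, 0.2

# Your own data: in your domain the true relationship is y = 2x + 1.
my_data = [(x, 2 * x + 1) for x in range(-3, 4)]

w, b = fine_tune(pretrained_w, pretrained_b, my_data)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Because the starting weights are already near a good solution, only a small amount of data and compute is needed, which is the same economics that make fine-tuning a pre-trained model attractive at real scale.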

Speaker 1 (15:46):
Yeah. Our guests previously weren't traditionally AI engineers or
AI developers, And I think one of the powers of
the tools that you were just talking about is that
you can be a developer, but not specifically an AI developer,
and this will give you a leg up to actually
start innovating and start experimenting with all of these models.

Speaker 2 (16:04):
Yeah.

Speaker 3 (16:04):
I think that's a really good point. And really the
developer landscape is changing. As you said, when you look
at the entire population of developers that we are seeking
to reach, something like eighty percent of developers operate at
the framework level and above. So meeting developers where they
are is absolutely crucial to our strategy. And so there
are those who will operate at that level and never

(16:26):
directly access the hardware, but they need to be able
to unleash the power of that hardware for the applications
that they're building, and so ensuring that no matter where
they are, we have the tools in order to empower
them to achieve those goals, that's really our objective.

Speaker 1 (16:42):
Yeah, and this leads onto the Intel Developer Cloud initiative
that has Intel's latest hardware and software for testing and building.
Can you explain a little bit about that and the
importance for the developer community.

Speaker 3 (16:56):
Yeah, I mentioned getting access to all of that great hardware, right,
Developers really want the most performant solutions, and for
that they need to have access to the latest and
greatest AI platforms. And so that's really what Intel Developer
Cloud provides. It's access to current platforms, and it's also
access to new and future platforms that may be available

(17:16):
prior to their launch for initial development and testing. And
so this is something that we've been building over time,
and we've expanded it to provide developers and partners early
and efficient access to a variety of Intel products and technologies,
in many cases from a few months all the way
up to a year in advance of full product availability.

(17:38):
And what's important about cloud based development is it's the
best way to get access to clusters of systems for
at scale development and testing. I talked a little bit
before about the fact that when you look at large
scale AI deployments, it's not just about a single node,
it's not just about a single processor. It really is
about clusters and large scale development, and so being able

(17:59):
to access clusters in a cloud based environment is really
important for developers to be able to not just do
that initial development and testing, but actually ultimately be able
to test it at scale. And so if we look
specifically at some of the products that are available in
Intel Developer Cloud, it includes our Xeon processors, so fourth
gen and fifth gen Intel Xeon Scalable processors. It includes

(18:22):
our Xeon Max Series, also our GPU Max Series processors,
our Xeon D processors, our Gaudi accelerators, and our GPU
Flex Series. So really what we're trying to do is
provide developers with that full breadth of solutions that meet
the needs of AI solutions that they're seeking to develop
across that entire spectrum. There was an actually really great

(18:43):
example that I heard from our peers in Intel Developer
Cloud about a professor at Kansas State University who was
specifically working on COVID research and so he was using
machine learning techniques enabled by oneAPI on Intel Developer Cloud,
and his goal was to focus initially on drug candidates

(19:05):
for clinical studies and experimental drugs, and he was able to,
through access to Intel Developer Cloud, use that compute power
that was available in that deployment in order to drive
breakthroughs in his medical research. And so, you know, it
definitely is about connecting our direct customers to the technology and
giving them access to our latest generation products, but it's

(19:25):
also about users of technology across the entire spectrum, including
key partners and research driving innovation and AI.

Speaker 1 (19:34):
We'll be right back after a quick break. Welcome back
to Technically Speaking, an Intel podcast. You have a really

(19:55):
interesting project with the Argonne National Laboratory. Can you tell
us a little bit about it?

Speaker 3 (19:59):
Yeah, this is a multi year collaboration, and
I'm really excited actually about the progress we've made and
I'd love to talk about some of our innovation there.
So Argonne National Lab is a leading research center in
the US, and it's really focused on being on the
forefront of our nation's efforts to deliver exascale computing capabilities

(20:22):
to advance science. And they've got a specific project around genAI,
which is a collaboration between Argonne and Intel and a
number of other partners to enable the power of genAI
to create state of the art AI models specifically targeted
at science. So these are models that are being trained
on scientific texts and code and science data sets from

(20:45):
a whole bunch of different diverse scientific domains. And so
if we look at using these foundational technologies, and we're
unleashing the power of the hardware, focused on a technology
called Megatron, and we're using the DeepSpeed framework.
This genAI project is going to serve a number
of different scientific disciplines. If you look across biology and

(21:07):
cancer research, and climate science, cosmology, material science, the list
really goes on and on when you talk about a
machine of this size. So we really have an opportunity
to work on really hard, large scale problems. And if
we look at the types of problems that are slated
for initial deployment on the Aurora supercomputer, it's areas like

(21:28):
developing safe and clean fusion reactors, neuroscience research, understanding the
dark universe, designing more fuel efficient aircraft. These are problems
that require an intensive amount of compute and an intensive
amount of data, and so it really is about a
level of scale of the number of parameters that are
supported by the model, far in excess of any computing

(21:50):
that's deployed today, and so it's really important for us
to partner with Aurora to prove out the ability of
this technology to be deployed at scale. And so what
we announced last month at the Supercomputing 2023
conference, in partnership with Argonne, was starting with a one
trillion parameter GPT-3 large language model on the Aurora supercomputer.

(22:10):
And the way that we were able to support that
very large number of parameters was to be able to
leverage the underlying capabilities of our Intel Max Series GPUs.
As I mentioned before, those GPUs are essential to driving
massively parallel compute to be able to compute that tremendously
large model and large number of parameters.

Speaker 2 (22:30):
But we didn't stop there.

Speaker 3 (22:32):
It was crucial to prove out the ability to handle
one trillion parameters, which we did on sixty four nodes,
which was far fewer than typically would be required, and
then to extend beyond that scale. So we ran four
instances on two hundred and fifty six nodes, which demonstrated
the ability to pave the path to scale to training
trillion parameter models with trillions of tokens on more

(22:54):
than ten thousand nodes.

Speaker 2 (22:55):
That's really our ultimate goal.

Speaker 3 (22:58):
So when you think about workloads that fall in these different domains that I
was talking about before, so things like modeling complicated chemical
processes in drug design, or enabling efficient screening of massive
chemical data sets to focus on innovation and molecular science.
This is the performance necessary to tackle these massive problems

(23:19):
and some of the most important questions that are facing science,
but ultimately really humanity today. And so it's really this
work that we're driving between Intel and Argon National Laboratory,
in which we've already started to prove that we've got
these tangible examples of how people can really expect unprecedented
levels of innovation and groundbreaking research focused on this foundation

(23:44):
of exascale computing.

Speaker 1 (23:46):
So we talked about the large models, maybe talk a
little bit about some of the smaller AI models. What
are your thoughts on that and how it's currently being used.

Speaker 3 (23:55):
The airtime really is on a lot of the large models,
isn't it. And certainly there's a tremendous amount of innovation
that's happening there. But they're very costly to deploy, train,
and operate. They've got tremendous potential and promise for
everyone who is developing them and ultimately using them, but
we need to make sure we solve the problems of
access and cost as well. And so really what we

(24:18):
want to look at also optimizing is not just those
large models, but also the other end of the spectrum,
as you said, and this is where we see even
more action, more new developments, more new use cases. That's
why these communities that are driving model innovation, and
Hugging Face is a great example, are really accelerating this end of
the spectrum. And really, smaller models are what enable a

(24:40):
much larger number of developers to make AI come to
life where they're focused on developing new applications, where companies
are extracting value by building new business models around AI deployments.
And one of the fastest growing portions of just the
overall AI development and data flow is looking at deployment specifically.
And one of the most interesting challenges in driving this

(25:02):
wider range of deployments is making these models smaller but
still delivering the same level of accuracy. And so you
start to look at deployment areas like smartphones and smart
home devices, to surveillance and industrial IoT, sports and entertainment domains, manufacturing, transportation.

(25:22):
There's so many different domains and this is an area
that I'm particularly really excited about. I talked about,
you know, how I came from the networking business at Intel
and also focused a great deal on edge computing, and
these are areas where we're not talking about at scale
AI computing, but we are absolutely talking about needing to
unleash the power of AI, and so these are areas

(25:43):
where those smaller AI models are essential. And if we
just look at one area in transportation, airlines are exploring
the use of AI copilots to help do things like
suggest a better altitude to prevent creating contrails, to meet
climate and sustainability objectives. If we look in the entertainment industry,
game developers are using smaller AI models that are based

(26:07):
on gestures and conversation data, you know, as gamers interact
to build more lifelike game characters. So the opportunities are
really endless as we look across both business use cases
as well as consumer use cases, and you can really
see in these examples where the needs of those small
models need to continue to grow and where we really

(26:28):
need to focus on driving innovation at that end of
the spectrum as well.
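One common technique for making a model smaller while keeping accuracy, as discussed above, is post-training quantization: storing weights as low-precision integers plus a scale factor. A minimal self-contained sketch (an illustration of the general idea, not an Intel tool):

```python
# Toy sketch of post-training quantization: store weights as 8-bit
# integers plus one float scale, shrinking storage roughly 4x versus
# float32 while keeping the restored weights close to the originals.

def quantize(weights):
    """Map float weights to int8 range [-127, 127] with a single scale."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.42, -1.37, 0.08, 0.91, -0.55]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each restored weight is within half a quantization step of the original.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err <= scale / 2)  # True
```

Real deployments use per-channel scales, calibration data, and hardware int8 kernels, but the core trade-off is the same: fewer bits per weight in exchange for a small, bounded approximation error.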

Speaker 1 (26:31):
Do you think in the future, you know, you might
have the small AI models, like, on everyone's desktop
sort of thing?

Speaker 3 (26:38):
For sure, we're already there.

Speaker 2 (26:40):
So you know, we talk a lot about ChatGPT.

Speaker 3 (26:42):
Everyone likes to go into their browser and innovate and
leverage the power of that technology by using that very
large model that exists in the cloud. But we also
have local GPT applications where you can use the power
of your local compute, even your PC. And of course
AI is driving a number of innovations around the AI PC,
and ensuring that you can unleash the power of AI

(27:04):
locally within your office, at your location, and as I mentioned,
with edge computing, even devices that have far less horsepower
than your local PC.

Speaker 1 (27:13):
I'm looking forward to it. Also, there's another partnership that
you have with Mila, which is the Quebec Artificial Intelligence Institute. Yes,
and it has the largest concentration of world class deep
learning academic researchers and they have a mission of inspiring
innovation and progress and AI for the benefit of humanity.
Can you talk a little bit about the work Intel's

(27:35):
done with them. Yeah.

Speaker 2 (27:36):
Absolutely.

Speaker 3 (27:37):
This is really under that mission that we have of
advancing AI everywhere, and we're really grateful for the collaborations
that we have to advance this across many different areas
and it really requires partners across the value chain. So
we talked about Hugging Face a couple of times. Hugging Face,
for those who don't know, develops tools for building applications

(27:58):
using machine learning, and it really exists as the single
largest model hub for transformers and large language models and
all types of models. And together we're partnering with Hugging
Face here at Intel to build optimization tools to accelerate
training and inference with transformers. And so what's exciting about
these partnerships is when we really see them in action,

(28:20):
and we see customers and partners leveraging these Intel optimizations
and tools to really advance AI and deliver on this
goal of AI everywhere. And our strategic research and collaboration
with Mila is really going to speed the research and
development of advanced AI to solve some of the world's
most critical and challenging issues. As part of the specific commitment,

(28:43):
more than twenty researchers between Intel and Mila are planning
to focus on developing advanced AI techniques to tackle areas
like climate change and material discovery, and also molecular science,
so molecular drivers of disease and drug discovery. And if
we look specifically in that area, biology in particular is

(29:06):
just a tremendously exciting frontier in natural sciences, and so
you can imagine the results that could be created. You've
got the opportunity to truly usher in the era of
precision medicine, where we have an opportunity to learn from
all of the massive data that exists to bring benefit
to certainly to humanity but specifically to individuals. And so

(29:26):
it's just one area of focus, but the potential, as
you might imagine, is huge, and this is where Intel
and Mila are really leaning into evolving that relationship and
really focusing on this opportunity for technology to really benefit humanity.

Speaker 1 (29:40):
We've seen that AI has been used already for customer
service and communication, and we've also talked about the ethical
and responsible use of AI. What are some of the
challenges that businesses face when trying to implement an AI
solution in respect to trying to keep it responsible.

Speaker 3 (29:57):
Yeah, we've already talked about the fact that as we
look at the promise of AI, it definitely is not
one size fits all, and there's a tremendous amount of
complexity and it doesn't just come from how do you
manage the data? How do you drive data science innovation?
There's also technology complexity that we have to work through
and help our customers work through in order to really

(30:18):
derive that business benefit. And so for us, it's really
about the innovation that we drive, but also the ecosystem,
so the data and AI software ecosystems and the hardware
ecosystems that are available across that entire spectrum of AI
solutions that I was talking about, from cloud and large
scale centralized deployments to edge to client. There's so many

(30:39):
different ways that this needs to be implemented, and the
complexity of that technology needs to be managed. As AI
gets widely adopted and deployed and consumption increases, we also
need to ensure that it's accessible and cost effective. I
talked earlier about the massive cost associated with training models
and with driving inference on large scale models, and so

(31:01):
this definitely is a top goal for us as we
look at helping our customers derive as much business value
as they can. So you've got to look at the
entire spectrum. How do you bring data into the model,
how do you keep track of your models and model versions,
how do you drive updates, how do you do training,
and of course fine tuning like we talked about before,
but also retraining, and then how do you look at

(31:22):
the different deployment options and then ultimately integrate them into
business applications. And really, you know a lot of existing
business applications, there's a legacy code base there, there is
legacy infrastructure that isn't going to go away, and so
you have to manage all of that together. And so
our goal is really to offer a complete portfolio of
AI hardware and software and tools optimized for any need

(31:47):
and then help them choose the best solution that meets
their needs.

Speaker 1 (31:50):
Yeah, and that kind of leads me to talking about business
in general. My family has a history in small businesses
and I'm particularly interested to know your thoughts on how
maybe the smaller end of town can leverage some
of this really cool stuff that's going on.

Speaker 3 (32:05):
Yeah, what's exciting about it is that even a small
business can take advantage of AI technology deployed locally, but
also take advantage of AI technology that exists in large
scale deployments as well that are interconnected through cloud based deployment.

Speaker 2 (32:17):
So how do we do this?

Speaker 3 (32:19):
It really is about taking these hardware and these software
tools provided by Intel, provided by our ecosystem to enable
organizations to accelerate performance and really expedite the results for
the goals and the KPIs that they're driving and really
ultimately improve their return on investment, and then just turn
it into a learning cycle, drive that AI development and

(32:41):
workflow process, but really learn from that and ultimately drive
improvements and streamline development for expanded use cases from there.
So it's kind of starting in one domain but then
having potential to expand to other areas as well. If
you look at an area like healthcare, this can mean
accelerating research and patient outcomes with more accurate analysis of

(33:01):
areas like medical imaging. If you look at manufacturing, there's
large scale manufacturing and very small scale manufacturing, right, but
all of them have potential to transform data into insights
that can improve performance and minimize downtime and improve safety.
Those are concerns at a manufacturing site of any size.
Retail's a huge area of focus for small business no

(33:23):
matter the size of the retail institution. They're all very
interested in wanting to understand their customers better. So you
look at areas like inventory management and loss management and
loss control and other key metrics, and so taking the
data that you have gathering even more data and then
using the power of AI to drive those business outcomes

(33:44):
is essential. Really, it just starts with understanding what the
business needs and challenges are before you ever even talk
about the technology. What are the business needs and the challenges,
and then partnering with Intel, partnering with our customers and
the value chain in identifying the best AI solutions and outcome. Really,
you know, for us, we're just trying to accelerate the
time to market for those AI enabled offerings across every

(34:05):
industry to maximize business value for both the large enterprises
as well as the small businesses.

Speaker 1 (34:12):
In terms of improving business productivity, are there any specific
examples of AI technologies at Intel that have actually achieved
some really significant gains?

Speaker 3 (34:23):
Yeah, so for sure, I'd love to share a couple examples.
So we did talk a little bit before about healthcare
and sciences, so I want to kind of shift gears
into a different area.

Speaker 2 (34:33):
Oh, shift gears.

Speaker 3 (34:33):
I was going to talk about manufacturing, So I guess
it's a little bit of a pun. So a major
beverage bottler in Asia turned to Intel and Aotu in
twenty twenty two to build a framework that could transform
a variety of manufacturing and safety inspection processes at a
number of their regional factories, and what they really hoped

(34:55):
to do was to take the safety monitoring process into
the digital AI age through machine vision and AI, but
still integrate that into their core IT systems. I talked
a little bit before about the fact that you need
to integrate into legacy IT systems and legacy software. So with
Intel and Aotu on board, the company embarked on a

(35:17):
whole safety transformation project and this was focused on revamping
security processes at these factories really all the way across China.
So this company, they were really focused on taking humans
out of the safety monitoring loop wherever possible to improve results,
and they had some tremendous results. They were able to
reduce manual workloads by eighty percent, they were able to

(35:40):
trim the costs that were related to their health and
safety and environmental compliance by sixty percent. They were able
to boost the violation detection rate from less than twenty
percent to ninety percent, which amounted to a four to
five x improvement in time saved as well, and they
also reduced safety violations by thirty five percent. So this

(36:02):
is through the use of AI technologies, it's computer vision,
and so it's taking inputs from multiple different cameras on
the factory floor and being able to detect different patterns
like worker movement or worker machine interaction, or being able
to enforce safety boundaries around equipment. They achieved tremendously
better results for the business's bottom line,
but also safety for the people who were involved in
the factory. And so really almost overnight they witnessed enormous
benefits that totally altered how they approach monitoring and inspection,
and they put this advanced solution in place, and it
really accommodated the very complex and very intense automation requirements

(36:41):
that they were seeking to build. Let me shift gears
a little bit again and talk about the entertainment industry.
So if we look at the broadcasting industry, it's really
benefited already enormously from digital transformation, but at the same time,
the hardware and the networking infrastructure sometimes really hasn't kept
pace with that level of change, and so many businesses

(37:02):
still rely on cumbersome and sometimes dedicated hardware that can
be very expensive, and very often they're actually locked into
specific brands and equipment. And so if you think about
the massive opportunity associated with very large entertainment events, so
eyes around the world tuning into Formula One racing or
the World Cup or next year's Summer Olympics in Paris. This

(37:26):
can translate into a lot of great business outcomes in
advertising and other revenue dollars for broadcasters, but viewers really
want a seamless experience as well, and so these broadcasters
depend upon a number of different vendors across so many
different events and locations, and they have to deliver on
those performance needs. They have to deliver on performance and

(37:47):
low latency and real time visibility and monitoring and really
rich viewing experience. It's not just about viewing the event,
it's about viewing additional data associated with the event, and
real time analytics and real time commentary. And they really
want a solution that is powerful and agile and easy
to deploy, that doesn't sacrifice performance, doesn't sacrifice quality. And

(38:08):
also they need to be able to from a technology standpoint,
leverage on site infrastructure as well as the cloud and
do that seamlessly between the different locations. So this is
where we've been able to innovate alongside our partners, and
in this case the example, I want to talk about
is Fox Sports. Fox Sports used TAG Video Systems'
monitoring and visualization platform, which ran on Xeon CPUs.
They use this in all of their control rooms and
their operations to show sixty four of the soccer matches
in the twenty twenty two FIFA World Cup in Qatar
across the US and also on the Fox and FS1 channels,

(38:46):
and these matches were being live streamed also on the
Fox Sports app, and they also had to be of
course available on demand for replays later. So the system
had to be able to monitor the integrity of over
twelve hundred sources and drive over one hundred and fifty displays.
So how did they tackle this massive challenge. They deployed

(39:08):
this really first of its kind live production system called
a flypack. This system includes a full control room, forty
tech core racks, ten venue racks, and these equipment racks.
They were built around Intel Xeon processors and they can
actually be flown in a plane. They can be flown
via seven forty seven from venue to venue instead of
having to travel aboard container ships. So the flypack arrives,

(39:31):
it's fully pre wired and it can be powered up
and ready to go within six hours and they report
that it's allowed them to shave weeks off of their
setup times. So it really is about improving interoperability and
portability and agility, and really it's transforming and revolutionizing how
live production is done. And so Fox Sports really also

(39:52):
has been able to change and add to the system
as their needs evolve. So a lot of really exciting
innovation in that combination of harnessing the power of compute
and edge applications and AI and really impacting a number
of different industries.

Speaker 1 (40:06):
And just to wrap up, I'd really like to get
your thoughts on the future of the data center and
AI technologies at Intel and what's your number one area
of excitement for the future.

Speaker 2 (40:20):
Yeah.

Speaker 3 (40:20):
I'm relatively new to this role, and in the short
amount of time that I have spent in this role,
I have seen growth like nothing I've ever experienced. The
pace of innovation in terms of the solutions that are
being deployed, but also the pace incumbent upon Intel to
innovate the technology that empowers those solutions, is massive. It's truly

(40:42):
undeniable that AI is changing the way that we're living
our lives, the way that we're working, the way that
we're driving business transformation, and I'm really an optimist when
it comes to the power of AI. My goal professionally
is to innovate and deliver the technology and the products
that really enable AI to help humanity become the best

(41:04):
version of itself. There are so many big problems that
we have to solve. We've talked about a couple of
them today. When you look at climate, to discoveries in
science, to driving social change, AI can be an engine
to create solutions for all of these and more.

Speaker 3 (41:21):
So it's really exciting to be part of a technology innovator that,
of course is one of the most significant in the
history of computing, but really on this precipice of AI
innovation and really driving this next phase of at scale
compute deployments and AI innovation to power those deployments and

(41:41):
create solutions for the future.

Speaker 1 (41:44):
I definitely share your optimism. Thanks very much, Jeni. That
was awesome.

Speaker 2 (41:47):
Thanks Graeme, it was really great to talk to you.

Speaker 1 (41:54):
Thanks very much to my guest Jeni Barovian for
joining me on the season finale of Technically Speaking, an
Intel podcast. This episode illustrates the significant advancements and
contributions made in the field of open source AI worldwide.
As a developer actively engaged in AI projects, I particularly appreciate
the support provided by corporations like Intel, both financially and

(42:16):
in terms of fostering open source developer communities. Jeni provided
an extensive discussion on the topic of responsible AI, emphasizing
the necessity for companies to ensure that AI implementations adhere
to ethical principles. In my view, it is imperative for
businesses to explore responsible AI initiatives and establish a set

(42:36):
of standards that can be clearly understood and implemented by
their executives, managers, employees, and contractors. It is essential for
employers to encourage their workforce to voice concerns if they
perceive any AI projects to be in conflict with their personal,
moral and ethical standards. Engaging in open and candid discussions

(42:57):
with our peers is crucial in developing technology that benefits
humanity as a whole. The development of large scale AI
models and supercomputers by companies like Intel may appear daunting
and seemingly unattainable for smaller enterprises. However, it's important to
remember that technological advancements often start with significant investments by
pioneers in the field. The first computers, costing millions of

(43:20):
dollars and possessing only a fraction of the power of
a modern calculator, were necessary stepping stones that led to
the explosion of personal computing and the advent of mobile
devices. In the same vein, the cost of deploying and
operating AI technology is expected to decrease over time, enabling
businesses of all sizes to utilize AI in ways that

(43:41):
positively impact their employees and customers. I'm excited to see
all the cool things that people come up with to
improve our lives. Thanks for following along for the first
season of Technically Speaking, an Intel podcast. To learn more
about how Intel is revolutionizing the future of AI, check
out Intel.com/stories. If you enjoyed this season,

(44:03):
please stay subscribed to get updates about season two, which
will be coming out in the spring of twenty twenty four.
Thanks again for listening. Technically Speaking was produced by Ruby
Studios from iHeartRadio in partnership with Intel and hosted by
me, Graeme Class. Our executive producer is Molly Sosher, our

(44:25):
EP of Post production is James Foster, and our supervising
producer is Nikia Swinton. This episode was edited by Sierra
Spreen and written by Tyree Rush.