
September 24, 2024 32 mins

As the scale of artificial intelligence continues to grow, open technologies like IBM's Granite models are helping enhance transparency in AI and improve efficiency across businesses. In this episode of Smart Talks with IBM, Jacob Goldstein sat down with Maryam Ashoori, the Director of Product Management and Head of Product for IBM's watsonx.ai, where she spearheads the product strategy and delivery of IBM's watsonx foundation models. Together, they explored the shift from large general-purpose AI models to smaller, customizable models tailored to specific needs.

 

This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies or opinions.

 

Visit us at https://ibm.com/smarttalks

 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. This season on Smart Talks with IBM, Malcolm Gladwell and team are diving into the transformative world of artificial intelligence with a fresh perspective

(00:24):
on the concept of open. What does open really mean in the context of AI? It can mean open-source code or open data, but it also encompasses fostering an ecosystem of ideas, ensuring diverse perspectives are heard, and enabling new levels of transparency. Join hosts from your favorite Pushkin

(00:45):
podcasts as they explore how openness in AI is reshaping industries,
driving innovation, and redefining what's possible. You'll hear from industry
experts and leaders about the implications and possibilities of open AI,
and of course, Malcolm Gladwell will be there to guide you through the season with his unique insights. Look
out for new episodes of Smart Talks every other week

(01:07):
on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts, and learn more at ibm.com/smarttalks.

Speaker 2 (01:22):
Pushkin. Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're diving back into the world of artificial intelligence, but with a focus on the powerful concept of open:

(01:44):
its possibilities, implications, and misconceptions. We'll look at openness from
a variety of angles and explore how the concept is
already reshaping industries, ways of doing business and our very
notion of what's possible. In today's episode, Jacob Goldstein sat down with Maryam Ashoori, the Director of Product Management and

(02:05):
Head of Product for IBM's watsonx.ai, where she spearheads the product strategy and delivery of IBM's watsonx foundation models. She is a technologist with more than fifteen years of experience developing data-driven technologies. The conversation
focused on how enterprises can use technology to build and

(02:28):
deliver greater transparency in AI. Maryam explained how Granite can be utilized to improve efficiency across various domains.
She discussed how these models are being used in real
world business applications, particularly in areas like customer care, where
AI can help enable quick, accurate responses based on internal

(02:53):
company data. Maryam provided a fascinating look into how enterprises have moved from mere experimentation with generative AI to actual production, navigating challenges such as increased latency, cost, and energy consumption.
She highlighted how the emerging trend of smaller models customized

(03:15):
with proprietary data can potentially deliver high performance at a
fraction of the cost, marking a significant shift in how
enterprises leverage AI. Whether you're an AI enthusiast or a
business leader looking to harness the power of artificial intelligence,
this episode is packed with valuable insights and forward thinking strategies.

Speaker 3 (03:42):
Let's just start with your background. How did you come
to work at IBM.

Speaker 4 (03:46):
I joined IBM right after I graduated. I have an AI background, and throughout the years, I've held many roles in design, engineering, development, and research, mostly focused on AI application development and design. In my current job, I'm the product owner for watsonx.ai, which is the

(04:10):
IBM platform for enterprise AI. What excites me about this job, I would say, is the technology advancements over the last eighteen months in the market. We've been witnessing how generative AI has been changing the market. The way that I see it is, gen AI has been perhaps one of the largest paradigm shifts when we think about productivity. The same

(04:31):
way that the Internet and personal computers impacted the productivity of the workforce, now we are witnessing another wave of all the opportunities it can unlock, especially for enterprise AI, when it comes to enhancing the productivity of the workforce and freeing up some time that can potentially be put into creating more

(04:54):
valuable work for the enterprise. So that's the major reason I picked this team: to have an impact on the market and the community, but also, of course, to use the skills that I gained through all these years at IBM to help establish IBM as the market leader for

(05:14):
enterprise AI.

Speaker 3 (05:16):
So you talked about gen AI as this sort of generational, transformational technological force, and I'm curious, just in terms of how it's going to come into the world, like, how do you see market adoption of gen AI sort of evolving from here?

Speaker 4 (05:33):
Well, last year was the year of excitement about generative AI. Most of the companies were experimenting and exploring with gen AI. We see that energy shifting toward how to best monetize that technology. Almost half of the market has moved from investigation to pilots; ten percent has moved to production. When you're exploring with this technology, you're looking for a wow factor,

(05:57):
you're looking for an aha moment. That's why very large general-purpose models shine. But as companies move toward production and scale, they soon realize the path to success is not that straightforward. For example, the larger the model, the larger the compute resources it requires. That translates to increased latency, that's

(06:18):
your response time. That translates to increased cost. That translates to increased carbon footprint and energy consumption. So think about that: at the scale of an enterprise in production, some of these can be a showstopper. For this reason, what we actually see emerging in the market is, instead of focusing on very large general-purpose models, coming back

(06:43):
to very small, trustworthy models that they can customize on
their own proprietary data. That's the data about their customers,
the data about their specific domains, to create something
differentiated that is much smaller and delivers the performance that
they want on a target use case for a fraction

(07:05):
of the cost.

Speaker 3 (07:06):
Uh huh. So let's talk a little bit more specifically about what you're working on. Let's talk about Granite. First of all, tell me: what is Granite?

Speaker 4 (07:16):
Granite is our industry-leading family of models, the flagship IBM models. These are the models that we train from scratch. When offered through our platform, we offer indemnification and we stand behind them. Today, it comes in four flavors: language, code, time series, and geospatial models. The Granite language series covers English, Spanish, German,

(07:44):
Portuguese, and Japanese. We have a combination of commercial and open-source language models in Granite. For example, we recently released the Granite 7B language model, a small, powerful English model. Our models are state-of-the-art models ranging from three billion to thirty-four billion parameters.

(08:07):
These are very powerful models that perform on par with, or in some cases outperform, the popular open-source models in their weight class. So, very powerful models.
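
For readers who want to try one of the open-weights Granite models Maryam mentions, here is a minimal sketch using the Hugging Face transformers library. The exact model ID is an illustrative assumption; check the ibm-granite organization on Hugging Face for the current checkpoints.

```python
# Minimal sketch: loading an open-weights Granite language model for inference.
# Assumes the Hugging Face `transformers` library (plus `accelerate` for
# device_map); the model ID below is illustrative, not confirmed by the episode.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-7b-base"  # hypothetical/illustrative ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the key obligations in the following service agreement:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```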

Speaker 3 (08:18):
So I get the big-picture idea about these models, but it would be helpful to just get a sense specifically of what they're doing. Like, can you give me any specific examples of how these models are being used in businesses in the real world right now?

Speaker 4 (08:32):
Well, the top use cases for generative AI are really content generation, summarization, and information extraction. Perhaps the most popular use case that we are seeing in the enterprise is content-grounded question answering. So, using these models as a base to connect them to a body of information, let's say,

(08:54):
their policies, their documents that are internal to the enterprise, and get the model to provide answers based on that content. One example of that is customer agents in customer care. When a customer is asking a question, previously the agent that responds to the customer had to answer the question,

(09:16):
and if they didn't know the answer, escalate it to the product specialist, keeping people on hold on the line to go figure out the answer and then come back. You can think of the time it takes to resolve an issue. But now, with LLMs, we have an opportunity to automatically retrieve the information from the internal documents of the company, formulate an answer, show it to the

(09:39):
human agent, and then, if they verify it against the sources it's coming from, they can just relay it directly to the customer. This is a very simple example of how it's impacting customer care.
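
To make the content-grounded question-answering pattern Maryam describes concrete, here is a minimal sketch of the retrieve-then-generate loop. The toy keyword retriever and the call_llm placeholder are illustrative assumptions, not IBM's implementation.

```python
# Minimal sketch of content-grounded Q&A (retrieval-augmented generation)
# for customer care. The retriever and the `call_llm` stub are assumptions.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank internal documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted model (e.g., Granite on watsonx.ai)."""
    raise NotImplementedError("wire this to your model-serving endpoint")

def answer_with_sources(question: str, documents: list[str]) -> str:
    context = "\n---\n".join(retrieve(question, documents))
    prompt = (
        "Answer the customer question using ONLY the context below, "
        "and cite which passage you used.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    # The draft answer is shown to a human agent, who verifies it against
    # the cited sources before relaying it to the customer.
    return call_llm(prompt)
```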

Speaker 3 (09:51):
So one big theme of this season is this idea of open, and one of the things that's interesting to me about the work you're doing is you are using not only Granite, this model IBM developed, but you're also using third-party models, right, from other places. So tell me about that work and how that is sort of

(10:13):
fitting into your kind of real-world, typical enterprise gen AI work.

Speaker 4 (10:17):
When it comes to model strategy, our strategy is really focused on two pillars: multi-model and multi-deployment. It means that we don't believe one single model rules all the use cases. And I think at this point the market has also realized this; enterprises on average today are using five to ten different models for different use cases.

Speaker 3 (10:39):
Oh interesting.

Speaker 4 (10:40):
So in our portfolio, if you look into watsonx.ai today, we are offering a large set of high-performing, state-of-the-art models: open-source models, commercial models that we are bringing through our partners, and also IBM-developed models. In addition to all of these, we also have an option to bring your own model from

(11:01):
outside the platform. Let's say you have a custom model that you made yourself; you can bring it to the platform. We're really helping the customers navigate through a wide range of models and pick the right model for their target use case. Throughout that, we've been heavily working with our partners, and, you know, this is the market

(11:22):
that is evolving rapidly. We've been at the forefront of speed to delivery. One example that I like to highlight is, recently Meta released Llama 3.1 405B, such a powerful model. On the same day that it was released to the market, we made it available on our platform to our customers. And not only did we deliver it on the same day, we are

(11:45):
offering competitive pricing but also flexibility in where to deploy. So we are giving enterprises an option to deploy these models on the platform of their choice, either multi-cloud, it can be GCP, AWS, Azure, or IBM Cloud, or on premises. The same for Mistral AI. Mistral AI recently released the

(12:06):
model Mistral Large 2; on the same day, we delivered it through the platform. That's an example of a commercial model. Llama is open source, but Large 2 is a commercial model that we made available through the platform. Great.

Speaker 3 (12:22):
So I want to talk about enterprise-grade foundation models. Just to get into it briefly: what's a foundation model?

Speaker 4 (12:31):
People associate foundation models with large language models, but large language models are really a subset of foundation models. Large language models are focused on language, but foundation models can be code generators, they can be focused on time series, as we talked about, they can be image models, they can be geospatial models. So a foundation model, as the term

(12:53):
suggests, is your foundation to create a series of subsequent models that can be customized for a downstream use case, and that's why they call them foundation models. An LLM is a good example of that, as a subset for language, that you can further customize on your specific data to get the model to do other work. So the

(13:16):
core of these foundation models, they are basically trained on an absurd amount of data, data sets that most of the institutions today are sourcing from the internet. So you can imagine what can potentially go into those models, and then it comes to the enterprise and they start using them. So for us also, when we started

(13:37):
looking into this in particular, it was triggered by customers asking us to provide client protections on these models, and we started thinking, let's look into how the models are trained and whether we are comfortable offering client protections on the models that are available in the market. And guess what? For a majority of these models, there is

(13:59):
absolutely no visibility into what data went into those models, not much transparency into how the model was trained, and the responsibility lies on you as the customer when you start using those models.

Speaker 3 (14:11):
So just to be clear, that is presenting, like, potential risk, real potential risk, to a company that is using these models.

Speaker 4 (14:18):
It is. It is a potential risk, in particular for the customers in highly regulated industries. So what we did for Granite was, when we started training these models from scratch, basically we went to the corpus of data that was available to us. So, for example, the very first version of Granite was exposed to twenty percent of its data

(14:41):
from finance and legal, because we have a lot of financial institutions as our clients. We worked directly with IBM Research to identify detectors for harmful information, like hate speech and profanity detectors.

Speaker 3 (14:57):
Okay, so we're talking about Granite. We're talking about this set of models IBM has developed. Let's talk about using Granite on watsonx compared to downloading open-source models. Like, how do those differ?

Speaker 4 (15:09):
When using Granite on watsonx, you get two things. The first one is the client protection and indemnification that we talked about; you get that if the model is consumed through our platform. And the second one is really the ecosystem of platform capabilities that we are offering to help you create value on top of those models.

(15:30):
So, for example, bringing your data to customize Granite for your own specific use case. But also, one thing that I like to highlight in particular is the AI governance. So when you get one of these pre-trained models, you put it in front of your own users. Through the input and instructions that the user provides to the model,

(15:53):
they can nudge the model to potentially create undesired behavior and change the behavior of the model. Because of this, it is extremely important to automatically document the lineage of who touched the model at what point, so if something happens, you can trace it back and see where it's coming from. And that's what watsonx.governance is offering: automatically documenting

(16:17):
the lineage. When you use Granite within the platform, you get all of those. You can have the end-to-end governance, you can have access to all these scalable deployment options that are available to you, to allow you to deploy on the platform of your choice that we talked about, either multi-cloud or on-prem,

(16:37):
and it also helps you have access to a wide range of model customization approaches: prompt tuning, fine tuning, retrieval-augmented generation, agents. There is a series of them available to use and apply to your model.
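
As a rough illustration of the lineage documentation Maryam describes, here is a minimal sketch of an append-only audit trail recording who touched a model and when. This is a generic illustration of the idea, not the watsonx.governance API.

```python
# Minimal sketch: an append-only audit trail for model lineage, so undesired
# behavior can be traced back to who changed what, and when. Names are
# illustrative assumptions.
import datetime
from dataclasses import dataclass, field

@dataclass
class ModelAuditTrail:
    model_id: str
    events: list = field(default_factory=list)

    def record(self, actor: str, action: str, detail: str = "") -> None:
        """Append one lineage event (e.g. 'fine-tune', 'prompt-update')."""
        self.events.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
        })

    def trace(self) -> None:
        """Replay the lineage in order, oldest event first."""
        for e in self.events:
            print(f'{e["when"]}  {e["actor"]:<12} {e["action"]}  {e["detail"]}')

trail = ModelAuditTrail("granite-customer-care")      # hypothetical model name
trail.record("data-team", "fine-tune", "internal policy documents v3")
trail.record("app-team", "prompt-update", "new system prompt for returns flow")
trail.trace()
```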

Speaker 2 (16:51):
This distinction between large language models and foundation models is eye-opening. Maryam emphasized that foundation models can be tailored to specific tasks, but with that versatility comes a significant challenge: the lack of transparency in how these models are trained. This poses a real risk, especially in highly regulated industries

(17:15):
like finance. Essentially, by using Granite and watsonx together, enterprises get powerful and customizable tools.

Speaker 3 (17:24):
So let's talk about the future a little bit. What
do you think are some of the big developments we're
likely to see in the realm of AI models?

Speaker 4 (17:32):
Very good question. I feel like the generative AI of
the past was powered by large language models. The generative
AI of the future is going to reason, plan, act
and reflect.

Speaker 3 (17:47):
Huh. And so, I mean, in the context of Granite in particular, like, what are we likely to see, both, you know, in the near term and in the sort of medium to long term?

Speaker 4 (17:58):
There are multiple elements to implementing the agentic workflow that I just mentioned. One element of that is the LLM itself being able to do the planning and reasoning and acting, and doing something that we call tool calling. So basically, a series of tools are available to the model.

(18:20):
You ask the model to call those and make a call. For example, we can say, hey, Granite, what is the weather like where Jacob lives? It's going to connect to a web search API, look up your location. Then it's going to connect to a weather API, retrieve the weather, and come back and formulate an answer and respond with that. So

(18:42):
during this process, it first has to plan the task of how to answer that question, look into what tools are available to it, and call them, and that's an ability of the model to do that. What we did with Granite was we expanded the Granite capabilities to be able to do function calling. So, for example, today we have an open-source Granite

(19:05):
20B function-calling model that is available on Hugging Face to try out, and you can grab the model, and the model has the capability to do tool calling. I'm anticipating that in the near future the planning and reasoning and acting and reflecting capabilities of large language models are going to continue to evolve.
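
Here is a minimal sketch of the tool-calling loop Maryam walks through, where the model plans a task and dispatches calls to registered tools. The tool registry and the JSON function-call format are illustrative assumptions, not the exact Granite interface.

```python
# Minimal sketch of tool calling: the model emits a JSON "function call",
# and the application executes it against a registry of tools. The stub
# tools and the call format are assumptions for illustration.
import json

def web_search(query: str) -> str:
    return "Jacob lives in Boston"           # stand-in for a real search API

def get_weather(location: str) -> str:
    return f"62F and cloudy in {location}"   # stand-in for a real weather API

TOOLS = {"web_search": web_search, "get_weather": get_weather}

def run_tool_call(model_output: str) -> str:
    """Execute one tool call emitted by the model as JSON, e.g.
    '{"tool": "get_weather", "arguments": {"location": "Boston"}}'."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["arguments"])

# The model plans: first look up the location, then fetch the weather there.
loc = run_tool_call('{"tool": "web_search", "arguments": {"query": "where does Jacob live"}}')
city = loc.split()[-1]
print(run_tool_call(json.dumps({"tool": "get_weather", "arguments": {"location": city}})))
```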

Speaker 3 (19:24):
So thinking now from the point of view of buyers and users of AI, really, people who are listening kind of from that perspective: as people are evaluating AI tools and solutions, what is the most important thing they should be thinking about? How do you think about kind of that process?

Speaker 4 (19:45):
I think they should always start with the area that they think would benefit from AI, and then, within that area, look into what data they have available to potentially feed into those AI service architectures. Do they have access to quality data? And the second question that they have to ask themselves is: do I have

(20:07):
a trusted partner that can supply what I need to
be able to implement AI. That can be a collection
of the foundation models that you're going to need, That
can be a collection of the platform capabilities that the
trusted partner can offer you to implement such a thing.
The third thing is go and evaluate the regulations. Does

(20:30):
regulation allow you to apply AI to the specific area that you are investigating and targeting for AI? And the last part, but not least, is back to the principles of design thinking: what is the problem in that area I'm solving with AI, and is AI even appropriate?

(20:51):
Because we want to make sure that you use AI not just because it's a cool, hot toy in the market, but because you are convinced that it can significantly enhance the user experience of your customers in that area. And once you have an answer to all these four questions, then maybe you have a good candidate to start applying AI.

Speaker 3 (21:11):
And what about from the side of project managers who are trying to just keep up with how fast things are changing, how fast innovation is happening? Like, what advice would you give those people?

Speaker 4 (21:24):
My advice would be: focus on agility. This is a market that is evolving rapidly, and the winners of the market will be those that are able to take advantage of the best the market can offer at any point in time. So in order to do that, they need to be open to experimentation, continuous learning, and to rapidly

(21:51):
adopt new ideas.

Speaker 3 (21:54):
And when you think about the future and gen AI, is there a particular, say, problem that you are most excited to solve?

Speaker 4 (22:02):
I think that would be productivity. If you look into the stats that are out there, there are surveys that confirm that sixty to seventy percent of our employees' time can potentially be enhanced through the productivity gains of generative AI. For example, I personally use my product

(22:23):
for content generation a lot, so the time that it frees up can potentially be put into generating higher-value work. And because of that, I'm super excited about all the opportunities it presents for enterprises to go and dedicate the time of their employees to higher-value items.

Speaker 1 (22:44):
Great.

Speaker 3 (22:45):
Okay, a couple of Granite-specific questions. So what are, like, the key things you want the world to know about Granite?

Speaker 4 (22:53):
Granite is open, trusted, and targeted. There are two ways to think about openness. One is open as in open weights: it's available for the public to download. And the second one is open as in there are fewer restrictions on how the customers

(23:13):
can legally use these models for a range of use cases. We have released Granite open-source models under the Apache license, which enables a large range of use cases. The second one was trusted. We talked about that; it's rooted in the trustworthy governance process that we established around how we are training these models and the responsibility

(23:36):
that we take for these models. And the third one is targeted, targeted for the enterprise. We talked about exposing Granite to enterprise data, and there are domain-specific Granite models, some of them, like COBOL-to-Java translation, that are targeted to solve specific enterprise needs. And that's Granite: open, trusted, and targeted.

Speaker 3 (23:58):
So there are a lot of models out in the world all of a sudden, right? It's a crowded market. Where does Granite fit in that universe? What is the market for Granite?

Speaker 4 (24:08):
We talked about the enterprise market shifting away from very large general-purpose models to targeted, smaller models, and Granite is a small model that enterprises can pick up and customize on their proprietary data to create something that is differentiated for a target use case. So Granite is well

(24:31):
suited as a small, domain-specific, business-ready model, tailored for business and trained on enterprise data to solve enterprise questions.

Speaker 3 (24:42):
You mentioned small as one of the things that Granite is. Why is that useful in some contexts for enterprises, for businesses?

Speaker 4 (24:52):
The larger the model, the larger the compute resources it requires. That translates to increased latency, that's your response time. It translates to increased cost, and it translates to increased carbon footprint and energy consumption. So at the scale of enterprise transactions,

(25:13):
when you move to production and you want to scale, some of these challenges can be amplified many times over: costs can add up, the energy consumption can be a serious thing, and the latency, depending on the application, can be a showstopper and a blocker, because for larger,

(25:36):
more powerful models, it just takes way longer to process and calculate the output.

Speaker 3 (25:41):
So we are going to finish up with a speed round, and I want you to just answer with the first thing that comes to mind. Don't overthink this, okay? Complete this sentence: in five years, AI will be... Invisible. Ah, I like that. What do you mean by that?

Speaker 4 (26:01):
Today, AI is everywhere. But if you ask my kids at home, they know AI. But if you say, like, how do you use AI, they don't know the answer, because it's so blended into their life that they don't feel like it's something that they are using. They are getting used to it. So when I think of the next

(26:22):
generation and the years to come, that generation is so used to AI being part of their life that they feel like it's just there. That's one. And the second one is the simplicity of interaction with AI, that you don't feel like you're interacting with a system. It's just there; you talk to AI, everything is automated. So I

(26:44):
would say the simplicity, and being blended in to solve the right problems, is the part that I'm referring to as invisible. Like, the Internet is everywhere and it's invisible. But we used to dial in. You remember the dial tone connecting to the Internet? It's gone. The Internet is completely invisible today.

Speaker 3 (27:05):
Right. Like, we used to talk about logging on, right? And you don't log on anymore, because you're always logged on.

Speaker 4 (27:11):
Yep, you're always connected.

Speaker 3 (27:12):
Yeah, what's the number one thing that people misunderstand about AI?

Speaker 4 (27:18):
AI is inevitable but should not be feared.

Speaker 3 (27:23):
What advice would you give yourself ten years ago to
better prepare you for today?

Speaker 4 (27:29):
I would say, develop a broad range of skills. Even
if you think they will not help you today, they
may be valuable in the future.

Speaker 3 (27:39):
So on the consumer side, right now, we hear a
lot about chatbots and image generators. But on the business side,
what do you think is the next big business application?

Speaker 4 (27:50):
AI influencers generating content.

Speaker 3 (27:53):
Huh. How do you use AI in your day-to-day life today?

Speaker 4 (27:58):
One simple example is LinkedIn posts. I love to just go to my product. I'll give you an example, which is my favorite one: Llama 3.1 405B. The post that I announced on LinkedIn, hey, IBM is releasing the model on the same day, was generated by Llama 3.1 405B. So, using the same model to generate

(28:21):
the announcement.

Speaker 3 (28:23):
Very elegant. Is there anything else I should ask you?

Speaker 4 (28:27):
Oh, we didn't talk about InstructLab. So when you grab a model, you start from the model, but you need to then customize it on your proprietary data to create value on top of that. So InstructLab is giving you a method, based on open-source contributions, to

(28:48):
collectively contribute to improve the base model. So if you're an enterprise, you can leverage your internal employees to all collectively contribute to improving the models. And I'll give you an example of why it matters. Like, if you go to Hugging Face today and look for Llama, there are

(29:10):
about fifty thousand different Llamas coming up. And the reason is because there is no way to contribute to the base model. If you're a developer, you have to make a clone, a copy of the model, and fine-tune it for your own purpose. We figured out the method that we call InstructLab to be able to collectively collect all that information and contribute to the base model and enhance it.

(29:33):
So that's InstructLab. I just wanted to highlight the value of being open, because that's another topic that has been emerging in the market over the past eighteen months. In particular, I believe the future of AI is open, and we've been seeing how the open-source market has been changing, how the models are accessible to a wider audience,

(29:57):
and good things typically happen when you make technology pieces accessible to a broader community to stress test them. And that's the direction that we've been adopting with Granite, and I feel like that's really the direction that the market is going to converge toward moving forward.

Speaker 3 (30:14):
Yeah, there's this interesting, I think maybe naively unintuitive, but it makes sense once you think about it, thing: that open-source things are safer. You might naively think, oh no, put it in a box so nobody can see it, and that'll be safer. But, like, it turns out, if you let everybody poke at it, the world will find the vulnerabilities for you and you can fix them.

Speaker 4 (30:35):
Right, That's exactly what's going to happen. Yeah.

Speaker 3 (30:38):
Great, it was lovely to talk with you. Thank you
so much for your time.

Speaker 4 (30:42):
The same here, thanks Jacob.

Speaker 2 (30:45):
And that wraps up this episode. A huge thanks to Maryam and Jacob. Today's conversation opened my eyes to how open technology and AI are intersecting to create more transparent and efficient systems for enterprises. From the power of smaller, more targeted models like Granite to the importance of trust and governance in AI, these developments are reshaping how businesses

(31:10):
operate at their core. As we continue to unpack the
complexities of artificial intelligence, it's clear that openness, whether in data,
technology or collaboration, is not just a concept, but a
driving force that can unlock new possibilities. Smart Talks with

(31:30):
IBM is produced by Matt Romano, Joey Fishground, Amy Gaines McQuaid, and Jacob Goldstein. We're edited by Lydia Jean Kott. Our engineers are Sarah Bruguiere and Ben Tolliday. Theme song by Gramoscope. Special thanks to the Eight Bar and IBM teams, as well as the Pushkin marketing team.
Smart Talks with IBM is a production of Pushkin Industries

(31:52):
and Ruby Studio at iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM. The conversations on this podcast don't necessarily represent IBM's positions, strategies, or opinions.
