Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey everyone, it's Robert and Joe here. Today we've got
something a little different to share with you. It's a
new season of the Smart Talks with IBM podcast series.
Speaker 2 (00:09):
This season on Smart Talks, Malcolm Gladwell and team are diving into the transformative world of artificial intelligence with a fresh perspective on the concept of open. What does open really mean in the context of AI? It can mean open source code or open data, but it also encompasses fostering an ecosystem of ideas, ensuring diverse perspectives are heard,
(00:31):
and enabling new levels of transparency.
Speaker 1 (00:33):
Join hosts from your favorite Pushkin podcasts as they explore how openness in AI is reshaping industries, driving innovation, and redefining what's possible. You'll hear from industry experts and leaders about the implications and possibilities of open AI, and of course,
Malcolm Gladwell will be there to guide you through the
season with his unique insights.
Speaker 2 (00:53):
Look out for new episodes of Smart Talks every other week on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts, and learn more at IBM dot com slash Smart Talks.
Speaker 3 (01:11):
Pushkin. Hello, hello. Welcome to Smart Talks with IBM, a
podcast from Pushkin Industries, iHeartRadio and IBM. I'm Malcolm Gladwell.
This season, we're diving back into the world of artificial intelligence,
but with a focus on the powerful concept of open:
(01:32):
its possibilities, implications, and misconceptions. We'll look at openness from
a variety of angles and explore how the concept is
already reshaping industries, ways of doing business and our very
notion of what's possible. In today's episode, Jacob Goldstein sits
down with Rebecca Finlay, the CEO of the Partnership on AI,
(01:55):
a nonprofit group grappling with important questions around the future
of AI. Their conversation focuses on Rebecca's work bringing together
a community of diverse stakeholders to help shape the conversation
around accountable AI governance. Rebecca explains why transparency is so
crucial for scaling the technology responsibly, and she highlights how
(02:19):
working with groups like the AI Alliance can provide valuable
insights in order to build the resources, infrastructure, and community
around releasing open source models. So, without further ado, let's
get to that conversation.
Speaker 4 (02:41):
Can you just say your name and your job?
Speaker 5 (02:43):
My name is Rebecca Finlay. I am the CEO of
the Partnership on AI to Benefit People and Society, often
referred to as PAI.
Speaker 4 (02:53):
How did you get here? What was your job before
you had the job that you have now?
Speaker 5 (02:58):
I came to PAI about three years ago, having had the opportunity to work for the Canadian Institute for Advanced Research,
developing and deploying all of their programs related to the
intersection of technology and society, and one of the areas
(03:19):
that the Canadian Institute had been funding since nineteen eighty
two was research into artificial intelligence.
Speaker 4 (03:28):
Wow, early, they were early.
Speaker 5 (03:31):
It was a very early commitment and an ongoing commitment at the Institute to fund long-term fundamental questions of scientific importance in interdisciplinary research programs that were often committed to and funded for well over a decade. The AI,
(03:53):
Robotics and Society program that kicked off the work at
the Institute eventually became a program very much focused on
deep learning and reinforcement learning, neural networks. All of the
current iteration of AI, or certainly the pre-generative AI iteration
(04:14):
of AI that led to this transformation that we've seen
in terms of online search and all sorts of ways
in which predictive AI has been deployed. So I had
the opportunity to see the very early days of that
research coming together, and when, in the early twenty tens or so, compute capability came
(04:38):
together with data capability through some of the Internet companies
and otherwise, and we really saw this technology start to
take off, I had the opportunity to start up a
program specifically focused on the impacts of AI in society.
There were, as you know, at that time, some concerns
(04:58):
both about the potential for the technology, but also in
terms of what we were seeing around data sets and
bias and discrimination and potential impact on future jobs. And
so bringing a whole group of experts, whether they were
ethicists or lawyers or economists or sociologists into the discussion about
(05:20):
AI was core to that new program and continues to
be core to my commitment to bringing diverse perspectives together
to solve the challenges and opportunities that AI offers today.
Speaker 4 (05:32):
So specifically, what is your job now? What is the
work you do? What is the work that PAI does?
Speaker 5 (05:38):
I like to answer that question by asking two questions.
First and foremost, do you believe that the world is
more divided today than it ever has been in recent history?
And do you believe that if we don't create spaces
for very different perspectives to come together, we won't be
(05:58):
able to solve the challenges that are in front of the world today? My answer to both of those questions is, yes,
we're more divided, and two, we need to seek out
those spaces where those very different perspectives can come together
to solve those great challenges. And that's what I get
to do as CEO of the Partnership on AI. We
(06:21):
began in twenty sixteen with a fundamental commitment to
bringing together experts, whether they were in industry, academia, civil society,
or philanthropy, coming together to identify what are the most
important questions when we think about developing AI centered on
people and communities, and then how do we begin to
(06:45):
develop the solutions to make sure we benefit appropriately.
Speaker 4 (06:48):
So that's a very big picture set of ideas. I'm
curious on a sort of more day to day level.
I mean, you talk about collaborating with all these different
kinds of people, all these different groups, what does that actually look like? What are some specific examples of how you do this work?
Speaker 5 (07:05):
So right now we have about one hundred and twenty
partners in sixteen countries. They come together through working groups
that look at AI from a variety of different perspectives.
It could be AI, labor and the economy. It could
be how do you build a healthy information ecosystem. It
(07:26):
could be how do you bring more diverse perspectives into
the inclusive and equitable development of AI. It could be
what are the emerging opportunities with these very very large
foundation model applications and how do you deploy those safely?
And these groups come together most importantly to say what
(07:46):
are the questions we need to answer collectively. So
come together in working groups. I have an amazing staff
team who hold the pen on synthesizing research and data
and evidence, developing frameworks, best practice resources, all sorts of
things that we can offer up to the community, be
they in industry or in policy, to say, well, this is
(08:08):
what good looks like, and this is how we can do it on a day-to-day basis. So that's what we do, and
then we publish our materials. It's all open. We make
sure that we get them into the hands of those
communities that can use them, and then we drive and
work with those communities to put them into practice.
Speaker 4 (08:26):
You used the word open there in describing your publications.
I know, in the world of AI on the sort
of technical side, there's a lot of debate, say, or
discussion about kind of open versus closed AI, and I'm
curious how you kind of encounter that particular discussion. What
is your view on open versus closed AI?
Speaker 5 (08:49):
So the current discussion between open and closed release of
AI models came once we saw ChatGPT and
other very large generative AI systems being deployed out into
the hands of consumers around the world, and there emerged
(09:11):
some fear about the potential of these models to act
in all sorts of catastrophic ways. So there were concerns
that the models could be used to develop viruses or biomedical weapons or even nuclear weapons, or through manipulation or otherwise. So there emerged,
(09:34):
over the last eighteen months, this real concern that these models,
if deployed openly, could lead to some level of truly
catastrophic risk. And what we actually discovered, through a whole bunch of work that's been done over the last little while, is that releasing them openly has
(09:56):
not led and doesn't appear to be leading in any
way to catastrophic risk. In fact, releasing them openly allows for much greater scrutiny and understanding of the safety measures that have been put into place. And so what
happened was sort of the pendulum swung very much towards
concern about really catastrophic risk and safety over the last year,
(10:18):
and over the last year we've seen it swing back
as we learn more and more about how these models
are being used and how they are being deployed into
the world. My feeling is we must approach this work openly,
and it's not just open release of models or what
we think of as traditional open source forms of model
(10:40):
development or otherwise, but we really need to think about
how do we build an open innovation ecosystem that fundamentally
allows both for the innovation to be shared with many people,
but also for safety and security to be rigorously upheld.
Speaker 4 (10:56):
So when you talk about this kind of broader idea
of open innovation beyond open source or, you know, transparency in models, like, what do you mean? Sort of specifically, how does that look in the world?
Speaker 5 (11:10):
So I have three particular points of view when it
comes to open innovation, because I think we need to
think both upstream around the research that is driving
these models and downstream in terms of the benefits of
these models to others. So, first and foremost, what we
have known in terms of how AI has been developed,
and yes, I had an opportunity to see it when
(11:31):
I was at the Canadian Institute for Advanced Research, is a very open form of scientific publication and rigorous peer review.
And what happens when we release openly is you have
an opportunity for the research to be interrogated to determine
the quality and significance of that, but then also for
(11:51):
it to be picked up by many others. And then secondly,
openness for me is about transparency. We released a set
of very strong recommendations last year around the way in
which these very large foundation models could be deployed safely.
They're all about disclosure and documentation,
(12:12):
right from the early days of pre-R and D development of these systems, in terms of thinking about what's in the training data and how it is being used, all the way through to post-deployment monitoring and disclosure. So I really think that transparency throughout is important.
And then the third piece is openness in terms of
who is around the table to benefit from this technology.
(12:35):
We know that if we're really going to see these
new models being successfully deployed into education or healthcare or
climate and sustainability, we need to have those experts in
those communities at the table charting this and making sure
that the technology is working for them. So those are
the three ways I think about openness.
Speaker 4 (12:55):
Is there like a particular project that you've worked on
that you feel like, you know, reflects your approach to responsible AI?
Speaker 5 (13:04):
So there's a really interesting project that we have underway
at PAI that is looking at responsible practices squarely when
it comes to the use of synthetic media. And what
we heard from our community was that they were looking
for a clear code of conduct about what does it
mean to be responsible in this space? And so what
(13:25):
happened is we pulled together a number of working groups
to come together. They included industry representatives. They also included
civil society organizations like WITNESS, a number of academic institutions
and otherwise, and what we heard was that there were
clear actions that creators could take, that developers of the
(13:48):
technology could take, and then also distributors. So when we
think about those generative AI systems being deployed across platforms and otherwise. And we came up with a framework for
what responsibility looks like. What does it mean to have consent,
what does it mean to disclose responsibly, what does it
mean to embed technology into it? So, for example, we've
(14:10):
heard many people talk about the importance of watermarking systems, right, and making sure that we have a way to watermark them. But what we know from the
technology is that it is a very, very complex and complicated problem,
and what might work on a technical level certainly hits
a whole new set of complications when we start labeling
and disclosing out to the public about what that technology
(14:32):
actually means. All of these, I believe, are solvable problems,
but they all needed to have a clear code underneath
them that was saying this is what we will commit to.
And we now have a number of organizations, many many
of the large technology companies but also many of the
small startups who are operating in this space, civil society and media organizations like the BBC and the CBC, who
(14:56):
have signed on. And one of the really exciting pieces
of that is that we're now seeing how it's changing practice.
So a year in, we asked each of our partners
to come up with a clear case study about how
that work has changed the way they are making decisions,
deploying technology and ensuring that they're being responsible in their use.
(15:18):
And that is creating now a whole resource online that
we're able to share with others about what does it
mean to be responsible in this space. There's so much
more work to be done, and the exciting thing is
once you have a foundation like this in place, we
can continue to build on it. There's so much interest now in the policy space, for example, in this work as well.
Speaker 4 (15:39):
Are there any specific examples of those sorts of case studies or the real-world experiences that, say, media organizations had that are interesting, that are illuminating?
Speaker 5 (15:50):
Yes. So, for example, what we saw with the BBC
is that they're developing a lot of content as a
public broadcaster, both in terms of their news coverage
but also in terms of some of the resources that
they are developing for the British public as well. And
what they talked about was the way in which they
(16:11):
had used synthetic media in a very, very sensitive environment where they were hearing individuals talk about personal experiences,
but wanted to have some way to change the face
entirely in terms of the individuals who were speaking. So
that's a very complicated ethical question, right, how do you
(16:31):
do that responsibly? And what is the way in which
you use that technology, and most importantly, how do you
disclose it? So their case study looked in some real detail at the process they went through to make the decision responsibly, what they chose, and how they intended to use the technology in that space.
Speaker 4 (16:52):
As you describe your work and some of these studies,
the idea of transparency seems to be a theme. Talk
about the importance of transparency in this kind of work.
Speaker 5 (17:03):
Yeah, transparency is fundamental to responsibility. I always like to
say it's not accountability in a complete sense, but it
is a first step to driving accountability more fully. So when we think about how these systems are developed, they're
often developed behind closed doors inside companies who are making
(17:25):
decisions about what and how these products will work from
a business perspective, and what disclosure and transparency can provide
is some sense of the decisions that were made leading
up to the way in which those models were deployed.
So this could be ensuring that individuals' private information was
(17:46):
protected through the process and won't be inadvertently disclosed or otherwise.
It could be providing some sense of how well the
system performs against a whole range of quality measures. So we have all of these different types of evaluations and measures that are emerging about the quality of these
systems as they're deployed. Being transparent about how they perform
(18:08):
against these measures is really crucial to that as well.
We have a whole ecosystem that's starting to emerge around
auditing of these systems. So what does that look like? We think about auditors in all sorts of other sectors of the economy. What does it look like to be auditing these systems to ensure that they're meeting all of those legal, but also additional ethical, requirements that we want
(18:30):
to make sure are in place.
Speaker 4 (18:32):
What are some of the hardest ethical dilemmas you've come
up against in AI policy?
Speaker 5 (18:40):
Well, the interesting thing about AI policy, right, is that what works very simply in one setting can be highly complicated in another setting. And so, for example, I have
an app that I adore. It's an app on my
phone that allows me to take a photo of a bird,
and it will help me to better understand, you know,
what that bird is, and give me all sorts of
(19:02):
information about that bird. Now, it's probably right most of
the time, and it's certainly right enough of the time
to give me great pleasure and delight when I'm out walking.
You could think about that exact same technology applied. So
for example, now you're a security guard and you're working
in a shopping plaza, and you're able to take photos
(19:24):
of individuals who you may think are acting suspiciously in
some way and match that photo up with some sort
of a database of individuals that may have been found,
you know, to have some sort of connection to other
criminal behavior in the past. Right? So it goes from being a delightful "Oh, isn't this an interesting bird?" to a very, very creepy "What does this say about surveillance
(19:47):
and privacy and access to public spaces?" And that is
the nature of AI. So much of the concern about
the ethical use and deployment of AI is about how an organization is making choices within the social and systemic structure it sits in. So so much about the ethics of AI
(20:10):
is understanding what is the use case, how is it
being used, how is it being constrained? How does it
start to infringe upon what we think of as the
human rights of an individual to privacy? And so you
have to constantly be thinking about ethics. What could work
very well in one situation absolutely doesn't work in another.
(20:31):
We often talk about these as sociotechnical questions. Right? Just because the technology works doesn't actually mean that it
should be used and deployed.
Speaker 4 (20:41):
What's an example of where the Partnership on AI influenced changes, either in policy or in industry practice?
Speaker 5 (20:51):
We talked a little bit about the framework for Synthetic
Media and how that has allowed companies and media organizations
and civil society organizations to really think deeply about
the way in which they're using this. Another area that
we focused on has been around responsible deployment of foundation
and large scale models. So, as I said, we issued
(21:14):
a set of recommendations last year that really laid out
for these very large developers and deployers of foundation and
frontier models, what does good look like right from R
and D through to deployment monitoring, and it has been
very encouraging to see that that work has been picked
(21:34):
up by companies and really articulated as part of the
fabric of the deployment of their foundation models and systems
moving forward. So much of this work is around creating
clear definitions of what we mean as the technology evolves
and clear sets of responsibility. So it's great to see
that work getting picked up. The NTIA in the United
(21:56):
States just released a report on open models and the
release of open models. Great to see our work cited
there as contributing to that analysis. Great to see some
of our definitions around synthetic media getting picked up by legislators in different countries. It's really just important, I think, for us to build capacity, knowledge and understanding in our
(22:17):
policy makers in this moment as the technology is evolving
and accelerating in its development.
Speaker 4 (22:25):
What's the AI Alliance and why did Partnership on AI
decide to join?
Speaker 5 (22:30):
So you had asked about the debate between open versus
closed models and how that has evolved over the last year,
and the AI Alliance was a community of organizations that
came together to really think about, okay, if we support
open release of models, what does that look like and
(22:51):
what does the community need? And so that's about one
hundred organizations. IBM, one of our founding partners, is also
one of the founding partners of the AI Alliance. It's a community that brings together a number of academic institutions in many countries around the world, and they're really focused on
how do you build the resources and infrastructure and community
(23:15):
around what open source in these large-scale models really means.
So that could be open data sets, that could be
open technology development. Really building on that understanding that we
need an infrastructure in place and a community engaged in
thinking about safety and innovation through the open lens.
Speaker 3 (23:36):
This approach brings together organizations and experts from around the
globe with different backgrounds, experiences, and perspectives to transparently and
openly address the challenges and opportunities AI poses today. The collaborative nature of the AI Alliance encourages discussion, debate, and innovation.
(23:57):
Through these efforts, IBM is helping to build the community
around transparent open technology.
Speaker 4 (24:05):
So I want to talk about the future for a minute.
I'm curious what you see as the biggest obstacles
to widespread adoption of responsible AI practices.
Speaker 5 (24:17):
One of the biggest obstacles today is an inability and
really a lack of understanding about how to use these
models and how they can most effectively drive forward a
company's commitment to whatever products and services it might be deploying.
So I always recommend a couple of things for companies
(24:39):
really to think about this and to get started. One
is think about how you are already using AI across
all of your business products and services, because already AI
is integrated into our workforces and into our workstreams, and
into the way in which companies are communicating with their
clients every day. So understand how you are already using
(25:02):
it and understand how you are integrating oversight and monitoring
into those. One of the best and clearest ways in
which a company can really understand how to use this
responsibly is through documentation. It's one of the areas where
there's a clear consensus in the community. So how do
you document the models that you are using, making sure
that you've got a registry in place. How do you
(25:24):
document the data that you are using and where that
data comes from. This is sort of the first line of defense in terms of understanding both what
is in place and what you need to do in
order to monitor it moving forward. And then secondly, once
you've got an understanding of how you're already using the system,
look at ways in which you could begin to pilot
or iterate in a low risk way using these systems
(25:47):
to really begin to see how and what structures you
need to have in place to use it moving forward.
And then thirdly, make sure that you put a team in place internally that's able to do some of this cross-departmental monitoring, knowledge sharing and learning. Boards are very, very interested in this technology, so think about how you
(26:08):
can have a system or a team in place internally
that's reporting to your board, giving them a sense of
both the opportunities that it identifies for you and the
additional risk mitigation and management you might be putting into place.
And then once you have those things in place, you're
really going to need to understand how you work with
the most valuable asset you have, which is your people.
(26:32):
How do you make sure that AI systems are working
for the workers, making sure that they're going into place.
The most important and impressive implementations we see are those
where you have the workers who are going to be
engaged in this process central to figuring out how to
develop and deploy it in order to really enhance their work.
(26:52):
It's a core part of the Shared Prosperity Guidelines that we issued last year.
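A minimal sketch of the kind of internal model and data registry Rebecca recommends as a first line of defense, written in Python purely for illustration: the ModelRecord class and its field names are hypothetical examples, not a schema prescribed by PAI.

```python
# Illustrative sketch only: one way a company might document the models it
# already uses and where their data comes from, as Rebecca recommends.
# Field names and the ModelRecord class are hypothetical, not a PAI schema.
from dataclasses import dataclass
from datetime import date


@dataclass
class ModelRecord:
    name: str                     # e.g. "customer-support-summarizer"
    owner: str                    # team accountable for oversight and monitoring
    use_case: str                 # what the model is used for in the business
    data_sources: list[str]       # provenance of training / fine-tuning data
    contains_personal_data: bool  # flags a privacy review before deployment
    last_reviewed: date           # when oversight last checked this entry


registry: list[ModelRecord] = []


def register_model(record: ModelRecord) -> None:
    """Add a model to the registry so it can be monitored going forward."""
    registry.append(record)


if __name__ == "__main__":
    register_model(ModelRecord(
        name="customer-support-summarizer",
        owner="Support Platform Team",
        use_case="Summarize inbound support tickets",
        data_sources=["licensed vendor corpus", "internal ticket archive"],
        contains_personal_data=True,
        last_reviewed=date(2024, 6, 1),
    ))
    # The registry is the "first line of defense": a simple inventory of what
    # is in place and what needs monitoring going forward.
    for record in registry:
        print(f"{record.name} (owner: {record.owner}), reviewed {record.last_reviewed}")
```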
Speaker 4 (26:58):
And then, from the side of policy makers, how should
policy makers think about the balance between innovation and regulation?
Speaker 5 (27:10):
Yeah, it's so interesting, isn't it, that we always think of,
you know, innovation and regulation as being two sides of
a coin, when in fact, so much innovation comes from
having a clear set of guardrails and regulation in place.
We think about all of the innovation that's happened in
the automotive industry, right? We can drive faster because we
(27:35):
have brakes, we can drive faster because we have seat
belts in place. So I think it's often interesting to
me that we think about the two as being on
either side of the coin, but in actual fact, you
can't be innovative without being responsible as well. And so
I think from a policy maker perspective, what we have
(27:56):
been really encouraging them to do is to make sure that you've got foundational regulation in place that works for you nationally. This could be ensuring that you have strong privacy protections
in place. It could be ensuring that you are understanding
potential online harms, particularly to vulnerable communities, and then look
(28:17):
at what you need to be doing internationally to be
both competitive and sustainable. There's all sorts of mechanisms that
are in place right now at the international level to
think about how do we build an interoperable space for
these technologies moving forward.
Speaker 4 (28:32):
We've been talking in various ways about what it means
to responsibly develop AI, and if you were going to boil that down to, you know, the essential concerns that people should be thinking about, like what are the key things to think about in responsible AI?
Speaker 5 (28:52):
So if you are a company, if we're talking specifically
through the company lens, when we're thinking about responsible use of AI,
the most important difference between this form of AI technologies
and other forms of technologies that we have used previously
is the integration of data and the training models that
(29:15):
go on top of that data. So when we think
about responsibility, first and foremost, you need to think about
your data. Where did it come from? What consent and
disclosure requirements do you have on it? Are you privacy protecting?
You can't be thinking about AI within your company without
thinking about data, and that's both your training data. But
(29:36):
then once you're using your systems and integrating and interacting
with your consumers, how are you protecting the data that's
coming out of those systems as well? And then secondly
is when you're thinking about how to deploy that AI system,
the most important thing you want to think about is
are we being transparent about how it's being used with
(30:00):
our clients and our partners. So you know, the idea
that if I'm a customer, I should know when I'm
interacting with an AI system, I should know when I'm
interacting with a human. So I think those two pieces
are the fundamentals. And then of course you want to
be thinking carefully about, you know, making sure that whatever
(30:21):
jurisdiction you're operating in, you're meeting all of the legal
requirements with regard to the services and products that you're offering.
Speaker 4 (30:29):
Let's finish with the speed round. Complete the sentence: in five years, AI will...
Speaker 5 (30:37):
Will drive equity, justice, and shared prosperity if we choose
to set that future trajectory for this technology.
Speaker 4 (30:47):
What is the number one thing that people misunderstand about AI?
Speaker 5 (30:52):
AI is not good, and AI is not bad, but
AI is also not neutral. It is a product of
the choices we make as humans about how we deploy
it in the world.
Speaker 4 (31:08):
What advice would you give yourself ten years ago to
better prepare yourself for today?
Speaker 5 (31:17):
Ten years ago, I wish that I had known just
how fundamental the enduring questions of ethics and responsibility would
be as we develop this technology moving forward. So many
of the questions that we ask about AI are questions
(31:40):
about ourselves and the way in which we use technology,
and the way in which technology can advance the work
we're doing.
Speaker 4 (31:49):
How do you use AI in your day to day
life today?
Speaker 5 (31:53):
I use AI all day every day. So whether it's
my bird app, when I go out for my morning walk, helping me to better identify birds that I see,
or whether it is my mapping app that's helping me
to get more speedily through traffic to whatever meeting I
need to go to, I use AI all the time.
(32:14):
I really enjoy using some of the generative AI chatbots, more for fun than for anything else, as a creative partner in thinking through ideas. And integrating it into all aspects of our lives is just so much about the way in which we live today.
Speaker 4 (32:31):
So people use the word open to mean different things,
even just in the context of technology. How do you
define open in the context of your work?
Speaker 5 (32:42):
So there is the question of open as it is
deployed to technology, which we've talked a lot about. But
I do think a big piece of PAI is being open-minded. We need to be truly open-minded, to listen to,
for example, what a civil society advocate might say about
what they're seeing in terms of the way in which
(33:04):
AI is interacting in a particular community. Or we need
to be open minded to hear from a technologist about
their hopes and dreams of where this technology might go
moving forward. And we need to have those conversations listening
to each other to really identify how we're going to
meet the challenge and opportunity of AI today. So open
(33:26):
is just fundamental to the partnership on AI. I often
call it an experiment in open innovation.
Speaker 4 (33:36):
Rebecca, thank you so much for your time.
Speaker 5 (33:39):
It is my pleasure. Thank you for having me.
Speaker 3 (33:43):
Thank you to Rebecca and Jacob for that engaging discussion
about some of the most pressing issues facing the future
of AI. As Rebecca emphasized, whether you're thinking about data
privacy or disclosure, transparency and openness are key to solving
challenges and capitalizing on new opportunities. By developing best practices
(34:05):
and resources, Partnership on AI is building out the guardrails
to support the release of open source models and the
practice of post deployment monitoring. By sharing their work with
the broader community, Rebecca and Pai are demonstrating how working responsibly,
ethically and openly can help drive innovation. Smart Talks with
(34:30):
IBM is produced by Matt Romano, Joey Fishground, Amy Gaines McQuaid,
and Jacob Goldstein. We're edited by Lydia Jean Kott. Our
engineers are Sarah Brugaer and Ben Holliday. Theme song by Gramoscope.
Special thanks to the eight Bar and IBM teams, as
well as the Pushkin marketing team. Smart Talks with IBM
(34:51):
is a production of Pushkin Industries and Ruby Studio at iHeartMedia.
To find more Pushkin podcasts, listen on the iHeartRadio app,
Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell.
(35:11):
This is a paid advertisement from IBM. The conversations on
this podcast don't necessarily represent IBM's positions, strategies or opinions.