
September 11, 2024 • 25 mins
Tech giants are sounding the alarm over California's proposed AI regulation, SB 1047. The Software & Information Industry Association and the Consumer Technology Association are urging Governor Newsom to veto the bill, arguing it would stifle innovation and harm the state's economy. On this episode of The Tech Zone, Paul Amadeus Lane dives deep into the controversy, exploring the potential impact of SB 1047 on the tech industry and California's future. Paul Lekas, from SIIA, is his guest.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
In this world of technology, things are ever changing, rearranging. You need someone to help you out.

Speaker 2 (00:08):
I know someone that can be alone. You'll be with Paul Amadeus Lane.

Speaker 2 (00:12):
In the Tech Zone. Welcome back to the show. It's me, Paul Amadeus Lane.

Speaker 3 (00:19):
Thank you so much. This is our second segment. If you missed our first segment, check it out. I talked to Steve Ewell from the Consumer Technology Association Foundation, and I also talked about social media and

Speaker 2 (00:34):
Just how we have to be smarter with it.

Speaker 3 (00:37):
I mean, people are making money off of it because of our anger, and to me, that doesn't make any sense at all. So if you missed my rant, please, please, please check it out. Here in California, there is a bill on the governor's desk that's ready for him to sign. Some who agree with this bill are saying that it's needed to make sure there are some guardrails

(01:00):
on AI and protections there, but there are also some who feel that this may stifle innovation. So we're going to talk to an organization that is urging Governor Newsom to veto this bill that is on his desk, and we're going to find out why. And also, this

(01:22):
organization, they were not alone in reaching out to the governor. The Consumer Technology Association was too. We're going to find out about this and more with my next guest. I am so delighted to welcome back to the program our good friend Paul Lekas, the senior vice president

(01:43):
over at SIIA.

Speaker 2 (01:45):
Paul, what's going on? How are you, Paul?

Speaker 4 (01:48):
I'm doing great. It is such a pleasure to be
back here with you.

Speaker 2 (01:51):
Hey, it's great to chat with you.

Speaker 3 (01:53):
I always enjoy our chats because we talk about some things that we should all be aware of, especially when it comes to not the Internet of Things, but the intelligence

Speaker 2 (02:05):
Of things. AI, AI and everything.

Speaker 3 (02:08):
Plus, I really like your name too, so you know, I gotta give you props for that, Paul. Let's get down to, as we say, not business but business, all right, Paul. So, there is a new, I guess, bill that's out here in California. It affects me and our listeners who are regional here in California,

(02:32):
but also our listeners who are worldwide and even here domestically.
Why don't we give them a little background about this bill, what it aims to do, and why many are urging Governor Newsom to veto it?

Speaker 4 (02:52):
Yeah, absolutely. So we're talking about a bill called SB 1047, which has now passed the California legislature. It is on Governor Newsom's desk to sign or to veto, and it is titled the Safe and Secure Innovation for Frontier

(03:14):
Artificial Intelligence Models Act. It is quite a mouthful, but
essentially what this bill aims to do is provide a
framework to address concerns around the safety and security of
the largest models out there. We call them frontier AI models,
basically models that are on the frontier of the most

(03:37):
advanced AI models today, the models that are powering things
like ChatGPT, Gemini, and other services that people use regularly.
What the bill would do is impose a number of requirements,
primarily on the developers of these models, requirements that they

(03:57):
would need to undertake before developing these models, after developing these models, and before they're put into service or made
available to the public, and it would focus on a
couple of different types of risks that are associated with
these sorts of models. Some of them are described as
critical harms, and these are things that include the creation

(04:20):
of chemical, biological, radiological, or nuclear weapons using these models, the potential for mass casualties or at least five hundred million dollars in damage, and the potential for other grave harms to public safety and security, and I'm quoting a little bit from the bill itself. There are also requirements around

(04:45):
what are called AI safety incidents, and these are a variety of different things that could happen by using an AI model, including malicious use, inadvertent release, unauthorized access, or behavior that is not intended from the models, either

(05:05):
because these models act in ways that are unpredictable or
because they're misused. And in addition to putting these requirements
on developers, including doing third party audits, conducting assessments of
the potential for risks, reporting any incidents that a developer
is aware of, it would also establish a framework for

(05:30):
the Attorney General to bring suits against developers. And I
think this is one of the reasons why the bill
has been so contentious, and not to get too far ahead
of myself, the bill has a number of other things,
but you know, we are talking about really advanced AI
models where the techniques and the technologies to accurately measure

(05:55):
the risk of harm, of misuse, of misappropriation, and so forth are still being developed. We don't have any concrete standards that developers can use, and across the industry community,
there's grave concern that if a bill like this were
to be enacted into law, it could essentially halt innovation

(06:20):
because the specter of liability is too great, it's too ambiguous,
and developers don't have a sense of, you know, what
would be appropriate here. There's a number of other concerns
I think with the general structure. But one other thing
to note is, you know, this is the sort of

(06:42):
thing that the federal government is looking at very closely
right now. They are looking at ways to work collaboratively with industry to try to develop the right kind of standards that we need to investigate and interrogate these models and see what they're potentially capable of. It's something that's

(07:03):
happening at the federal level, something that's happening internationally, and
you know, there is a view that it is premature to build a new system of liability here while
we're still kind of building the airplane.

Speaker 2 (07:20):
That definitely makes sense, Paul.

Speaker 3 (07:22):
If we have a standard here in California, and let's
say that the federal government passes a law, then the
federal law will supersede the state law.

Speaker 2 (07:34):
I assume there's going to be a lot of confusion, right, when it comes to a lot of things.

Speaker 3 (07:40):
And looking at this letter that was sent to Governor Newsom, it was not only your organization, but also a powerhouse, the Consumer Technology Association, too. And what was it like to

Speaker 5 (07:56):
Kind of have them get on board with SIIA, to really come together and collaborate, to talk about some things that needed to be put in front of the governor before he made a decision on what to do with this actual bill?

Speaker 4 (08:13):
Yeah. So, the Consumer Technology Association, CTA, is one of
the organizations in the tech space that we collaborate with
a lot. We work with them a lot on issues
and what they're hearing from their members. There's some overlapping
members of our two associations, but they have a number
of members that are not part of SIIA and vice versa.

(08:35):
They're hearing the same sorts of things that we're hearing: that there is concern that this sort of a bill
being enacted at a state level could really stand in
the way of the kind of innovation we need. But
another thing that we raised in the letter that I
think is really important is that nobody is going into
this thinking that AI models are inherently safe or don't

(09:00):
need some sort of guardrails. Frankly, I think the safety and security questions that we're talking about are at least as significant as the immediate issues that people are addressing and looking at right now around the socioeconomic impact of AI models in discrete situations.

(09:22):
But safety and security is of paramount importance, and if
we are going to develop a framework, we need to
do this in the right way. I think it's worth contrasting this with the area of privacy, and I'm
sure many of your listeners are familiar with the fact

(09:44):
that California has a privacy law and the federal government
does not, and California has really helped to move forward
the debate in the United States around privacy. We are
very hopeful there would be a federal law, but when
we're in the realm of AI safety, this is something
very different. It makes a lot of sense

(10:05):
for states and other bodies, states and counties and cities, to be looking at ways in which AI is used in their jurisdictions and, you know, to put guardrails around things like the use of AI in making employment decisions or criminal justice, those sorts of things
that people touch every day. But here we're talking about

(10:27):
something that is really at the threshold of where science
and research is and we are really concerned about the
impact of this on innovation. I think that will be
felt more acutely in California, given that this is a
bill coming out of California. But it's also hard to

(10:48):
see how the impact of a bill like this would
be limited to just California. In some ways, it would
be a de facto regulation of AI across the country in a number of cases. We're concerned, and this is something that CTA agrees with as well, that this is going

(11:08):
to really hurt our own industry. There may be some naysayers out there,
but the industry is really working to try to be
as safe and secure as possible. And we can't say
the same for AI models that are being developed potentially

(11:30):
in other countries that will probably swoop in and fill
the gap that is created if we're not able
to continue to release advanced models out of US developers.

Speaker 3 (11:45):
You know, Paul, that makes a lot of sense. Here's kind of the, I guess, concern when it comes to all things AI that we have now. We all know that something needs to be done. You know, you can look at this election cycle and how AI has

(12:06):
been used to just sow so much disinformation, deepfakes, all these other things

Speaker 2 (12:13):
that are going on.

Speaker 3 (12:14):
And what can be done, I guess, for the public now to, you know, protect themselves while the governments are trying to figure out a good policy that, number one, won't stifle innovation, but, number two, will have some guardrails and safety,

(12:38):
you know, for consumers, businesses, and so forth.

Speaker 6 (12:42):
What can consumers, businesses, and organizations do now to make sure that they're really having some self-accountability, to make sure that they are protecting themselves?

Speaker 4 (12:55):
Yeah, and that's such a great question. And you know, legislatures outside of California tend to move slowly. And you know, I think there are a lot of proposals before Congress that would be really helpful here.

(13:16):
But you know, California is moving very fast. California has
actually taken a number of steps that have been really
positive in the AI space. But you know, what can we do, beyond relying on our elected officials, to actually protect ourselves individually and try to make use of these tools

(13:38):
in a positive way. You mentioned misinformation. I think that is a concern that people are seeing all the time, and on that front, I think people have to be smart consumers; they have to be critical thinkers about what they see on the Internet. I

(14:00):
think it's really hard for legislators to try to stop the spread of misinformation online,
even AI-enabled misinformation. I think that we've seen some
progress in a few areas and I'm hopeful that will continue.
And that includes deepfakes of non-consensual intimate imagery.

(14:22):
We've seen a number of state laws, including in California, that are really focused on the people who are the bad actors here, the people who are generating and disseminating those kinds of harmful images. Regulation of content is something that's very difficult for the government to do, though, because of the First Amendment, except in

(14:45):
certain discrete situations where there's a grave threat to public interests. So people need to be smart consumers.
And I think, you know, you don't want to put
the onus on individuals, but we need to look with
a good degree of skepticism at information that we're seeing

(15:07):
out there. And I think there is some value in
relying on, you know, reputable sources of information to verify
anything that you see that seems to be shocking or
concerning or you know, out of the norm. I think
that you know, you're seeing in the news space that

(15:30):
reputable news publications are probably seen as more authoritative now,
and that's because there's a lot of information floating out
there that's not correct, and some of it is being
circulated inadvertently, and some of it is intentional. There are
malicious actors out there, for sure. I think that you know,
businesses that are looking to adopt AI can take a

(15:52):
number of steps, and there are frameworks out there to
assess the risk of the tools that they're using to
make sure that they are using them the right way.
I think everyone needs to be careful in adopting AI tools.
I think they can do a lot of good, but
we need to approach them in a reasonable way. Some

(16:15):
of the guidance coming out of the National Institute of Standards and Technology, NIST, is incredibly useful for those
who are developing AI as well as for those who
are looking to purchase AI or apply AI tools in
different places. And I think that that kind of guidance,
which is being developed in collaboration with academia, civil society,

(16:39):
and industry, is really useful. You know, we've done a lot of work on how to safely incorporate AI in classrooms, and I think that the bar
is being raised in that space, as in a number
of other spaces, and I'm optimistic that we're actually from

(16:59):
an industry perspective, creating tools and applications that are safer
and more secure and avoid a number of the concerns
or mitigate the concerns that were present in earlier AI models.
Now we're not totally free of things like algorithmic bias,

(17:21):
but many of the applications that are out there are doing a much better job with those sorts of things, now that we have a track record of how AI is being used in the field. I still do think that we need government to take some action here, but we need

(17:43):
good policy that is going to support innovation and also provide the kind of guardrails we need. You know, one issue
in SB 1047 that I think hasn't gotten enough attention is that it, you know, seeks, on the one hand,
to control the kind of grave potential harms that could

(18:05):
come out of these most powerful AI models. But in
other aspects of the bill, it's looking at things that
are more routine, garden-variety things that can be expected from AI models. And I think that's
where you're really seeing some of the pushback from industry

(18:26):
that you know, an obligation to report any incident that arises,
or an obligation to avoid any sort of impact on
public safety where that term isn't even defined, could lead
to liability, is really threatening to those who are looking to

(18:49):
build the more powerful AI models that are capable of doing a lot of things, a lot of really positive things, for our society right now. And that's why, you know, an effort like this needs more attention, it needs more focus, and we really have to think about whether it's the right approach.

Speaker 3 (19:11):
And just curious, Paul, the legislators here in our state, California, have they reached out to, like, SIIA or other, you know, thought leaders out there in AI to kind of get their input when they crafted this bill and put it together?

Speaker 2 (19:30):
Or do they just

Speaker 3 (19:33):
You know, rest on their own understanding of AI, maybe even misinformation out there? Do they try to kind of get the community involved

Speaker 2 (19:45):
in their decision making?

Speaker 4 (19:47):
Sorry. Yeah, I think, you know, there's certainly been outreach from industry and from other thought leaders, and this is the kind of bill that, reading it, does reflect engagement with folks out there who are really thinking seriously about this. But I also think it reflects an approach

(20:12):
that doesn't really account for the industry perspective, and reflects a view that the industry is always oppositional to legislation. And for

(20:35):
that reason, I think it is somewhat concerning. You know,
certainly I represent industry, I work with companies in industry,
leading AI developers, leading AI deployers. But I don't think
that there are just two ways to go about everything.
I think there are parts of this bill that are

(20:56):
very positive, but then there are parts that really aren't.
I've said before that I think the right way to
look at this set of issues is at the federal
level in a unified manner, in a manner where you
also have the infrastructure, the expertise, and the compute resources
necessary to work collaboratively with developers to try to find

(21:19):
better ways and find ways to mitigate the risks associated with those models. But the AI debate today has developed into something that seems more oppositional than I think it really should be. I think there are a number of different ways to go about developing policy,

(21:41):
and it's not a black-and-white situation, but this bill reflects a view that is supported by one end of that spectrum in the debate.

Speaker 2 (21:55):
Yeah, I.

Speaker 3 (21:57):
Definitely concur with that, looking at it and everything. And that's why we're glad we have organizations such as yours, the SIIA, to really keep us informed of what's going on with policy, but also innovation out there, and really kind of, you know, walking that great balance of innovation but

(22:21):
also safety as well. Paul, before I let you go, is there anything else out there you'd like to share with the audience who are watching us and who are listening to us?

Speaker 4 (22:34):
You know, I think that we've covered it. California being so active in AI, I think, has been very helpful for furthering the debate. But we need to take care here. We're

(22:54):
talking about regulating things where we don't actually have the
science in place and the technology in place and the
standards in place to achieve some of the goals that
we lay out, and we have to be careful there.
We can't just demand that industry takes

(23:15):
certain steps where those steps aren't possible. And the specter of liability here is so great that this kind of bill is going to have a real lasting effect on the U.S. AI industry. And I really think we need to
be looking at these things in a collaborative manner and

(23:35):
trying to figure out what the best solutions are. I
you know, would credit the California legislature for taking some
really positive steps on other AI bills, and I think
that it's great that they are so focused on this area,
and I think I've basically said my piece, Paul.

Speaker 2 (23:59):
All right, that sounds good. We'll call that a mic drop moment. And thank you so much, Paul.

Speaker 3 (24:05):
And if one would like to know more information about some of the great things that the SIIA is doing, how can they keep up?

Speaker 4 (24:13):
Yeah, well, follow us. We are on Twitter, now known as X, at SIIA Policy. Check out our website, siia.net. We are very active in the space and always issuing new thoughts and hopefully constructive recommendations

Speaker 2 (24:33):
On the way forward.

Speaker 4 (24:35):
Well, I appreciate the time, Paul. Thank you again for having me back on.

Speaker 3 (24:39):
A huge shout out to my friend Paul Lekas. I really enjoyed chatting with him. When it comes to AI and what it means for business, it's just a really balanced view of what's going on here in California, and Paul is in D.C. They're doing a lot of great things to make sure not only that things are safe, but also that innovation is not stifled.

(25:01):
So we want to hear your thoughts. Where do you come down on this? Please share with me. We would love to hear your view. All right, folks, it's time for me to make like a tree and get out of here. But until next time, do me a favor.
Stay healthy, stay safe, and remember I love you guys

(25:21):
your life.

Speaker 2 (25:22):
CES is right around the corner. Can't wait to see everyone there. Take care.

Speaker 1 (25:28):
In this world of technology, things are ever changing, rearranging.

Speaker 2 (25:34):
You need someone to help you out. I know someone that can be alone. You'll be with Paul Amadeus Lane, in the Tech Zone.