Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:12):
Welcome to Votes and Verdicts, hosted by the Litigation and
Policy team at Bloomberg Intelligence, the investment research arm of
Bloomberg LP. This podcast series examines the intersection of business
policy and the law. I'm Justin Teresi, antitrust litigation
and policy analyst here at BI. Just a quick word
about Bloomberg Intelligence for those unfamiliar. We're the investment research
(00:35):
platform on the Bloomberg terminal, with five hundred analysts and
strategists working across the globe and focused on all major markets.
Our coverage includes over two thousand equities and credits, and
we have outlooks on more than ninety industries and one
hundred market indices, currencies and commodities. Today's topic Artificial intelligence AI.
(00:55):
It seems to be everywhere, from simple use of ChatGPT
to the growing prevalence of AI across business platforms. It's
hard to sit through a single news show, much less
a whole day without hearing something about AI. Some say
it's the next big thing. Some say it's the next
version of the space race where the US and other
Western nations need to show their dominance. But one thing
(01:19):
seems to be certain: AI and all of its surrounding
issues have become ubiquitous, as have recent news reports regarding
antitrust implications and investigations around the technology by enforcement agencies
across the globe. And all of that begs the question:
who's controlling the development of AI and what exactly does
(01:40):
that mean From the perspective of competition in this developing sector,
are smaller AI actors really able to get off the ground.
With all of that, I'd like to welcome my guest,
Sarah Myers West, co-executive director of the AI Now Institute,
the leading policy research and advocacy organization on the subject
of AI. Hi, Sarah, thanks so much for joining me today.
(02:02):
Thank you for having me. Absolutely. So, I could spend
a half hour just telling you all about Sarah's skill set,
but what I'll say is Sarah spent the last fifteen
years or so focused on the role of technology companies
and the growth of their political influence. In her current
role as co-executive director, Sarah focuses on addressing
(02:22):
the market issues and infrastructure that guide the role of
tech in society, holding the sector to task in
serving the needs of the public, including, relevant to
our chat today, antitrust issues. Her research is featured in
leading academic journals and a wide range of media outlets,
and she's a go to for governments on questions surrounding
(02:43):
tech and AI, having testified before Congress and advised the
White House, European Commission, CFPB, and other agencies. Also incredibly
important for today's discussion. Sarah recently completed a term at
the FTC as a senior advisor on AI. She worked
on competition and consumer protection issues, and last, but certainly
(03:04):
not least, Sarah's also working on a book, Tracing Code,
which will examine the origins of data capitalism and
commercial surveillance. Sarah, there's so much more I could say
about you, but there's obviously a lot to talk about
in this space, so let's get to it. We're recording
this on July nineteenth, twenty twenty four. Obviously some really
big news today considering the Microsoft outage. Just wondering
(03:26):
if you have anything you'd like to lead off with
on that subject.
Speaker 2 (03:30):
Yeah, So, as we speak, there are airlines and hospital
systems and banks all around the world trying to bring
themselves back online after a bug in their systems that
brought everything to a sudden halt, and I think
(03:51):
what that really shows is the fragility of our infrastructure,
particularly given the very heavy reliance on one or a
small handful of players. In this instance, it was Microsoft
and CrowdStrike that were the two companies kind of behind this,
(04:11):
you know, massive technological outage that, you know, we're
all experiencing today.
Speaker 1 (04:17):
Yeah, absolutely. I think that point is super
well taken. And, you know, I think it really leads
us into our conversation about AI too, where we
might see that similar kind of issue of consolidation and,
you know, a few players, if you will,
kind of running the space. But if you could talk
a little bit about the role of big tech in
AI, I think that'd be a really great foundation
to lead things off.
Speaker 2 (04:38):
Yeah, so I think it might be helpful to sort
of step back and look at the broader history behind
the field of artificial intelligence. Over its almost seventy-year history,
AI has been used to mean a whole range of
different things, from robotics to the generative AI systems that
I think have really captured everybody's imaginations today. And,
(05:03):
you know, the definition of what AI is in the
present moment is in many ways defined by
an existing concentration of power within the tech industry.
So if you wind back the clock maybe twenty years,
you see these tech giants Google, Amazon emerging on the
(05:28):
scene and developing out business models that involve the collection
of massive amounts of data and the development of huge
computational infrastructures needed to process that data. And once you
had those two big pieces in place, these companies began
(05:48):
to develop new techniques of trying to process that data,
to make sense of it through analysis. Often it's, you know, sort
of basic statistics, but sort of combining massive data sets
with massive computational infrastructures. And those are two of the core
elements that you know, make up what we call AI today.
(06:12):
The third is the field of research, and the
big tech firms also began to dominate the leading conferences
in artificial intelligence. Their corporate labs became the
place that computer science graduates wanted to go get hired.
(06:33):
They were offering the highest-paying salaries. They became the
most prestigious places to go and work. And so between
the sort of business elements of what kind of AI
was being developed within these firms as well as defining
what constitutes the cutting edge in the research space, big
tech firms I think have really significantly shaped our understanding
(06:57):
of AI, what it can be used for, or what
is of most interest, over the span of at least
a decade and probably more. And so getting back to
our subject today of thinking about antitrust, it really brings
to question, you know, if we were to intervene in
(07:18):
the AI market in a more structural way, not only
sort of like tackling the, you know, concentration in
big tech, but their shaping hand in research and
development as well. I really wonder how that changes what
kind of AI we're building and what vision it serves,
(07:38):
because to my thinking right now, it seems somewhat stale
compared to what, you know, we were being sold a
year ago from companies like OpenAI. Like, I find
it kind of hard to get all that excited about
a chatbot and more excited about the vision of what
could be possible if we were to break up
big tech.
Speaker 1 (07:57):
Yeah, I think that's super fascinating. And, you know, just
even looking back at AI Now's most recent kind of
large-scale report on the subject, I saw the twenty
twenty three Landscape report that was put out. You know,
I think there are some really important
points that come from that that kind of guide, you know,
where we're headed here in this discussion too. And I
think those are that, you know, big data really is
(08:18):
big AI, or has the potential to kind of, you know,
influence where AI goes; it just is so dependent
on the data that it's taking in, if you will.
And that really nothing regarding AI is inevitable or
has to be inevitable. And I think with those two
kind of points in mind, you know, what do you
really see as the most pressing antitrust considerations right now,
(08:38):
maybe from a company standpoint, or problems that have really
emerged recently that you think are really ripe for a challenge?
Speaker 2 (08:46):
Yeah, great question. So, what I think is
most pressing right now is the ecosystem dominance of that
small handful of firms. So a year ago, when we
published that report, we were really calling out the
infrastructural dominance of big tech firms. And at that moment
in time, it was a really unpopular thing to say
(09:07):
that these cloud monopolies were shaping a race to the
bottom in AI development. Most people were saying, let's wait
and see. Maybe we're going to see startups starting to
bubble up, and you know, we'll have new entrants that
are going to be competitive with Google and Microsoft. But
now I think we're seeing a recognition that this heavy
concentration in these firms and their control over the data,
(09:31):
their control over the infrastructures are presenting real barriers to
innovation and competition in the market. Now there's a long
way to go on actually meaningfully tackling that issue. You know,
the DOJ just, I think maybe three weeks ago, announced
that it's going to be looking into Nvidia, which
is the maker of the chips that are powering a
(09:54):
lot of AI development. The French authorities are a little
bit further along. They conducted a raid on Nvidia's
headquarters last year. But I think that infrastructure is
only one part of the problem. These firms also control
crucial paths to market, and that I think is still
(10:15):
an undercovered antitrust issue. It's why we see promising startups
like Mistral striking deals with dominant firms like Microsoft, because
these new entrants need help connecting to their customers. There
just isn't really a clear business proposition for a lot
of new startups. And we've seen firms like Goldman Sachs
(10:37):
and Sequoia, firms that have themselves made big investments in
this market, issue increasingly skeptical reports about the potential of
new players to compete. So I think addressing those two pieces,
the role of the big firms in dominating the resources
that you need to build AI, the infrastructure that you
need to build AI, as well as their role in
(10:59):
shaping the market through their dominance of the ecosystem,
of the path to, you know, to profit, is key.
Speaker 1 (11:06):
Sure, sure, And I think from a really basic standpoint too,
you know, it's really fascinating. I think, you know, from
a standpoint of what the FTC or DOJ can do
in the context of traditional antitrust enforcement, right? You
see these situations where perhaps making a large investment in
a smaller AI startup, if you will, or hiring most
of that startup's, you know, skilled
(11:28):
labor who, you know, has the brainpower behind AI,
as a way to kind of get around that traditional
concept of a Hart-Scott-Rodino review or something like that,
you know, where it would be a traditional full-scale
acquisition. It raises a lot of questions about what
can be done from an enforcement perspective, and you know,
we'll get to that a little bit later on, you know,
in terms of legislation and you know what what might
(11:49):
reshape the way things are proceeding right now. But you know,
I think another issue that you know, I'm hearing a
lot about from my side here too, is you know
this concept of this AI you know space race, you will, right,
the US has to compete with other global entities that
are also developing AI. And you know, no surprise, I
think in political rhetoric we hear you know, China comes
(12:10):
up a lot, and this purported need for the US
to compete with China on the subject of AI. How
do you view that? And do you kind of think
it's this red herring, if you will, in some ways,
as, you know, a reason not to
really rein in AI from an antitrust perspective?
Speaker 2 (12:27):
Yeah, So the discourse around this AI arms race isn't
just an invention of this past year. It really goes
back a lot further. If you look back at some
of the discussions about the antitrust package that Congress
was considering a couple of years ago. That China arms
race discourse was very much utilized by industry in its
(12:50):
lobbying to push back on the notion that we needed
new antitrust rules for the industry. And you
saw prominent people in the defense establishment, many of whom
were also on the payroll of Eric Schmidt-adjacent firms,
out there saying that if you do anything to restrain
(13:12):
big tech firms, it's going to diminish our competitiveness with China.
And I think the Biden administration, to its credit, clapped
back by issuing this Executive Order on Competition that said, actually,
national monopolies are not in the national interest. What we
want is a really vibrant and competitive sector, and that's
what's going to set us up for success. But
(13:32):
I don't think that other parts of the government have
fully lived up to that vision. I mean, certainly, as
you know, Congress hasn't really meaningfully moved, and in part
that's due to fears about the US falling behind on
innovation if it regulates the market. But I think that's misguided.
If anything, regulation of a sector that's gotten this bloated
would really foster more innovation and would benefit a wider public.
Speaker 1 (13:56):
Sure. Things definitely seem a bit stagnant right now on
the Hill. There's no question about that. And, you know,
I think, building upon the rhetoric around, you know,
China and this idea of an arms race,
you know, I think there's this other ick factor, if
you will, sometimes with AI, where people think about how it
might implicate things like biometrics and surveillance, right? How does that
roll out in the future, and in what industries
(14:17):
does that become an issue? If you have any
thoughts about that aspect of AI too, I think
that's something folks are super interested in right now as well.
Speaker 2 (14:25):
Yeah, absolutely. I mean, biometrics I think haven't gotten as
much attention as they should over the last year because
of this, you know, surge in attention to generative AI.
But there has been a really quiet and significant expansion
in their use in places like cars, for example. You know,
(14:48):
if you're a delivery driver, one of the
ways that drivers are being monitored by their employers is
through the use of biometrics in cars that purport
to be able to evaluate, you know, whether they're falling
asleep or whether they're, you know, signaling aggression. And
(15:11):
that use case of what we call emotion recognition is
one of the areas that I'm especially worried about. Emotion
recognition systems have largely been disregarded by the scientific community
as pseudoscientific. You can't sort of with any statistical validity
(15:32):
take, you know, the measurement of somebody's facial expression and
be able to evaluate what their internal state looks like. But
that certainly doesn't stop a company from selling a system
that claims to be able to make those kinds of assessments.
And we're seeing the rollout of emotion recognition in a
(15:53):
lot of workplace surveillance contexts. You know, I mentioned drivers; call
center workers are having their voiceprints measured, and they're
getting feedback about, you know, whether or not they're sounding
pleasant enough to their customers, and all of these other
places where often they're used in ways that have quite
(16:16):
significant consequences for the people they're used on. So
I think that biometrics remain an undercovered area,
and one that we really need much more policy scrutiny of.
Speaker 1 (16:30):
Sure. I mean, so many privacy issues there, obviously.
And I think we're obviously a
bit behind other global counterparts in our privacy development here
from a legal standpoint in the US too. But yeah,
just so many issues there. And again, you know, kind
of roping antitrust into that situation too with biometric surveillance,
(16:51):
you know, beyond the privacy issues themselves, it's all of
that information in the hands of a select few people too,
if it continues to roll out the way it does.
Sounds like so many flags, I think, on
the play with that one. But, you know, kind of
turning back a bit to actual enforcement, or the regulatory
oversight here with antitrust. You know, we see
(17:12):
all these reports of investigations going on. I feel
like several times a week I'm seeing something about an investigation
happening either in the US or abroad, you know. I
know you mentioned the French investigation of Nvidia, for example.
But what do you see as the kind of problems
right now with the available methods for oversight, you know,
and the way that regulators are going about employing their
(17:34):
oversight or investigations of these entities?
Speaker 2 (17:37):
Yeah. So, I mean, in the current environment, you know,
our enforcement agencies are certainly doing the most. They're taking
a very assertive and muscular stance on enforcing the laws
on the books, and they deserve all credit for that.
They're also in an unenviable position of trying to play
(18:01):
whack a mole after the fact with companies that are
so big that they're very comfortable sort of experimenting out
in the wild and just seeing, you know, how these
technologies are going to work, and you know, have people
give them feedback on where they fail. And so, like,
the enforcement agencies and investigative journalists and independent researchers
(18:26):
are all like playing whack a mole, trying to find
and surface harms long after they've occurred, and then you know,
attempting to use the law to remediate harm. But in
some cases it's way too late. So I think
that that's, you know, far from an ideal mode of
(18:47):
regulating a sector that has such significant consequences for the public.
And there's you know, limited incentives for companies that have
grown this large to really comply. You know, for a
lot of these firms, they just allocate a budget line
for the fines that they're going to get hit with,
and so we need stronger incentives for compliance and stronger
(19:11):
incentives for firms to be you know, engaging in basic
levels of you know, testing and validation of their systems
even to make sure that they work as intended before
they roll them out, let alone make sure that they're
not causing harm.
Speaker 1 (19:26):
Right, right. And I guess also from the standpoint
of, you know, the FTC, you know, the DOJ,
I'm wondering if you have any
thoughts on just, you know, what they should be looking
out for from the perspective of merger activity,
or something that looks like a merger, right? You know,
I'm thinking about the kind of classic brick-and-mortar
merger context, right, where, you know, let's say two
retailers are merging, and I look at a map and
(19:47):
I draw a circle and say, hey, you've got, you know,
five stores, and that's a really concentrated relevant market here; the
merger is not good for this particular reason, right? But
obviously AI, you know, it's a different scenario. What kinds
of things should folks be looking at around merger activity
around AI, should we actually start seeing, you know, real
acquisitions in that space?
Speaker 2 (20:07):
Yeah, so a couple things stand out. One is mergers
that are conducted for the purpose of acquiring data. And
we saw examples of this back in the day. Facebook
acquired a company and the name of the company I'm
blanking on right now.
Speaker 1 (20:28):
Because there are too many companies.
Speaker 2 (20:30):
That's right, too many companies, too many acquisitions too. So they
acquired a company that was a VPN provider and once
they acquired that company, they pushed out a new research
tool on Facebook that users could sign up for that
would allow them to monitor those users' traffic using the
(20:50):
VPN and they were able to see, well, what kinds
of apps and services were those users using, and from
that they determined that everybody was using WhatsApp and that
was one of the factors that led to, you know,
what at the time was one of the largest acquisitions
in tech history, Facebook's acquisition of WhatsApp. So, you know,
(21:14):
the use of data obtained through acquisition can lead to
competitive harms. It can also lead to privacy harms. I mean,
certainly for the users whose activity was being monitored. There
were similar concerns raised.
For example, Amazon was considering acquiring One Medical, and the
FTC was evaluating that acquisition. FTC commissioners Bedoya and
Slaughter issued an opinion saying, you know, this merger
is going to go through, but we do want to
(21:54):
raise some concerns about the new pools of data that
Amazon's now going to have access to, because they're now
going to have access to sensitive health data that may
not, you know, violate privacy rules, but nevertheless could let
Amazon make inferences about people in ways that will
be highly consequential. So that kind of activity, I think,
(22:18):
is one thing that deserves scrutiny. And also, what you
raised earlier around, you know, these sort of non-traditional
paths of not acquiring companies but making these unusual business
arrangements through provision of compute credits, or things that don't
reach a Hart-Scott-Rodino threshold, sort of end runs around
(22:41):
traditional acquisition, are the other thing that we're seeing a lot
of in the sector.
Speaker 1 (22:45):
Got it, got it. So much to consider there.
And, you know, I think we've also been seeing so
much activity recently in the way of kind of private
lawsuits, or class actions if you will. You know,
really what stood out to me recently is these class
actions we've seen regarding the use of AI pricing algorithms
in setting residential rental rates, hotel rates, things like that,
where what's happening, or what's alleged to be happening,
(23:09):
is that, you know, competitors are acting in kind of
this hub-and-spoke, you know, conspiracy, where they're sharing
their private information with a third party but also getting
access to their competitors' information by virtue
of doing that. Could you talk a little
bit more about those lawsuits, and, you know, what
other kinds of suits maybe do you expect that we
might see coming up regarding, you know, AI and its
(23:31):
uses in those contexts?
Speaker 2 (23:32):
Yeah. So there's this quiet but endemic problem of algorithmic pricing.
It's gone everywhere, from hotels to realtors to groceries, insurance.
Even some companies are using algorithmic systems to set the
(23:53):
wages that they'll pay to workers. Veena Dubal has
documented really well the use of algorithmic wage discrimination in
the gig-work sector. And what that means is that a
lot of people are being squeezed on both sides: the
firms, you know, on the one hand, are
pushing prices as high as they can get people to
(24:15):
pay and trying to target them to pay as much
as possible, and then on the other side, paying them
as little as they possibly can, trying to you know,
utilize consumer surveillance data on both ends, and the firms
that have the ability to take advantage of information asymmetries
are the ones that benefit. So, like you've said,
(24:38):
we've seen these cases focusing specifically on the use of
algorithmic methods to set rents, and this is problematic for
many reasons. You know, we already have a real affordable
housing problem in this country that's exacerbated by the existence
of monopolies in housing markets and cities around the country.
(24:58):
And what this what this does is it essentially extends
cartel like behavior into markets where it otherwise couldn't extend.
It would be really difficult for rental companies in Houston
and Chicago to coordinate with one another on setting prices,
but through systems like, you know, those provided by RealPage,
(25:26):
it creates sort of this hub-and-spoke
model across all of the firms that are relying
on this one company. There are a couple of bills
that also try and tackle this issue. Senator Klobuchar has one,
Senator Wyden has one, but like you've mentioned, nothing's
really moving through Congress right now. So what that means
(25:47):
is these lawsuits are really the front lines of trying
to tackle these problems in the absence of regulatory movement,
just doing the best that they can with the law
we already have on the books.
Speaker 1 (25:59):
You know, I know both in the RealPage
suit and, you know, some larger hotel class actions
that are also pending, DOJ certainly has filed statements of
interest, you know, sharing its view that, you know,
it does believe that use of these pricing algorithms can
in fact be a violation of the Sherman Act. So,
you know, DOJ is out there. DOJ is, I think,
you know, commenting on the issue as best as it
(26:20):
can with existing law. But you know, to your point
about this legislation pending in Congress, you know there's no
shortage of antitrust legislation that seems to be
pending on the Hill right now. I know you testified
regarding these bills before Senator Klobuchar's subcommittee last December, but,
you know, beyond just those bills you just mentioned,
(26:41):
what do you view as the most pressing legislation that
kind of needs to get over the finish line right
now in DC?
Speaker 2 (26:47):
Yeah, I mean, there's very little hope of much
moving within an election year.
Speaker 1 (26:51):
Yeah, very true. Maybe even after, unfortunately. We'll see,
we'll see what happens.
Speaker 2 (26:56):
But yeah, I mean, so like my real but a
little evasive answer is, you know, our enforcement
agencies are trying to do the work, and in the interim,
making sure that they're adequately resourced is probably the
most likely thing that we will hopefully see. But, you know,
beyond that, it is striking that we still don't have
(27:17):
a federal privacy law in the US, and data privacy
law is AI policy. What we saw in the EU
after ChatGPT was released is that the swiftest action
regulators took took place under the GDPR, the EU's privacy law.
We saw, you know, the EU be able to clap
(27:40):
back on Uber and Lyft's use of algorithmic pricing methods
and algorithmic wage setting, also under the GDPR. So I
think that's a really good place to start. The other place,
also nonconventional: I think reforming our labor law and
whistleblower protections, because so much of what we know about AI,
(28:00):
which is often a very opaque sector, and many of
the early policy wins that we've seen, have come from
movement by workers, whether that's organizing or whistleblowing, or,
you know, codifying restrictions on how AI can be used
through collective bargaining. So I think protecting workers is the
(28:21):
other place where I think we can do a lot more.
Speaker 1 (28:25):
Got it, got it. And I think, you know, maybe
not so much on the labor pieces you just mentioned,
but, you know, I know that some of the legislation,
at least the legislation that might be a
bit more tailored or narrowed to a particular antitrust concern
in a specific industry, like, you know, the rental market
or the residential rental market, or, you know, things along
those lines, you know, some of that legislation does seem
(28:45):
to have some bipartisan support, you know. And I think
you might have alluded to this a little bit earlier,
you know, when we first started speaking too. But, you know,
what's your view on what's perhaps forestalling
this legislation? And if you could talk a little bit about,
you know, just the influence of tech in DC today.
I think that's also something that, you know, a lot
of folks might not be considering as part of what's
(29:06):
happening with the legislative angle as well.
Speaker 2 (29:09):
Yeah, tech's influence in DC has really grown and it's transformed.
Big tech lobbying now outspends the lobbying giants of yesterday,
like big oil and big tobacco.
Speaker 1 (29:21):
Oh wow. And I think that's always what people think
of when they're thinking about the classic lobbyists.
Speaker 2 (29:25):
And, if anything, there's been a huge uptick
in lobbying over the last year: in the EU over
the AI Act, and in the US through things like, you know,
Senator Schumer's Insight Forums, which, you know, really prominently foregrounded
industry voices. We're also seeing the re-emergence of, you know,
(29:50):
purported self-regulation through things like the voluntary commitments the
big firms made around AI through this Frontier Model Forum.
So big tech lobbying is very prominent and it's gaining
traction as you know, the industry tries to push the
narrative that AI is too complicated for government to
(30:14):
regulate and that we need to trust them to determine
next steps instead.
Speaker 1 (30:19):
Got it, got it. Honestly, Sarah, I feel like we
could take any one of these topics or questions and
make a separate and complete podcast episode about just that
question itself. There's just so much going on in the
space right now, I think, you know, as we even
said getting started. But kind of in closing, do you
have anything else you might just want to add, or
any considerations that you want to touch upon before we
(30:39):
wrap up?
Speaker 2 (30:41):
Yeah, I mean, so one other quick thing that I'd
add to, like, the last point about how lobbying has transformed:
we're also seeing new constituencies emerge, and I think that
we've seen this particularly come to the fore with the
elections. There's, you know, on the one hand, this
existential risk lobby, which was very effective in shaping a
(31:04):
tech agenda focused on, like, this sort of doom-and-gloom
vision for AI, and then on the other hand,
this effective accelerationism lobby, which is, you know, promoting unrestrained development.
That's the Andreessen Horowitz and Y Combinator sort of crowd, and
they've established their presence in shaping Trump's
AI agenda. And, you know, with those two camps, I
(31:29):
think it's really important to make sure that the
scope of the agenda, like the vision for what we
hope to see with this technology, doesn't just get narrowed
to that range of interests from Silicon Valley, because I
think that there's a lot more that we could build
and see, but the industry's version has sort of
benefited only that narrow set of actors. And that's where
(31:52):
I hope that stronger antitrust enforcement will give us
fresh perspectives, that we can really sort of, like, revitalize
this industry and make sure it's serving all of us.
Speaker 1 (32:03):
Got it, got it. Sarah Myers West, thank you so
much for joining us today, and hopefully we'll have you
back again sometime soon. There's just so much to
talk about here, and obviously I think we're going to
continue seeing all of this roll out, you know, quite
a bit more in the coming months and years ahead.
So thank you so much for joining us today. Really
appreciate it.
Speaker 2 (32:20):
Thank you for having me.
Speaker 1 (32:22):
Absolutely. So, a fascinating topic here, so many moving parts
from a litigation and policy perspective, and as we just said,
only going to grow busier with the passage of time.
I really appreciate you taking the time to discuss it
here today, Sarah. Also, thank you to the listener for
tuning in. As a reminder, you can read all of
our Bloomberg Intelligence research on the Bloomberg terminal at BI GO.
(32:44):
And with that, I'm Justin Teresi signing off. This was Votes
and Verdicts.