
December 12, 2024 35 mins

This week, Kevin Werbach is joined by Wendy Gonzalez of Sama to discuss the intersection of human judgment and artificial intelligence. Sama provides data annotation, testing, model fine-tuning, and related services for computer vision and generative AI. Kevin and Wendy review Sama's history and evolution, and then consider the challenges of maintaining reliability in AI models through validation and human-centric feedback. Wendy addresses concerns about the ethics of employing workers from the developing world for these tasks. She then shares insights on Sama's commitment to transparency in wages, ethical sourcing, and providing opportunities for those facing the greatest employment barriers.

Wendy Gonzalez is the CEO of Sama. Since taking over in 2020, she has led a variety of successes at the company, including launching Machine Learning Assisted Annotation, which has improved annotation efficiency by over 300%. Wendy has over two decades of managerial and technology leadership experience for companies including EY, Capgemini Consulting and Cycle30 (acquired by Arrow Electronics), and is an active Board Member of the Leila Janah Foundation.

https://www.sama.com/

Forbes Business Council - Wendy Gonzalez


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Wendy, thank you for joining me on the road to Accountable AI.
Very glad to be here, Kevin.
First, tell us a little bit about what Sama is and what role the company plays in the AI development process.
Sama is a data annotation and model evaluation company.
So we basically take care of structuring, validating, or fine tuning the data that goes into training and fine tuning AI models, whether it's traditional ML models or GenAI.

(00:34):
And where did it come from?
How has it developed over the years?
We have a bit of an interesting origin story.
So we actually started as a nonprofit back in 2008, really focused very specifically on a social mission of lifting people sustainably out of poverty by providing them jobs,
full-time jobs in the digital economy.

(00:56):
So we work in underserved communities.
We are working to provide first-time entry-level jobs in tech and full-time jobs, which are scarce.
So six out of seven people in Sub-Saharan Africa are in the informal economy.
And our goal was to provide full-time jobs with benefits, medical, et cetera.

(01:17):
And this was really based off of the belief that talent is distributed equally, but opportunity is not.
And the right way to really solve for a sustainable lift out of poverty is through work, not aid.
So the idea was let's do digital upskilling.
Let's bring people into jobs for the first time to give them the experience to allow them to get a next job and stay in the formal economy.

(01:41):
So how does that get us to the work that we do now?
Well, because we have a purposeful hiring model of hiring people on the basis of impact.
So if we just think about that for a second, especially knowing this is a Wharton podcast, most people hire on the basis of experience or college degrees, et cetera.

(02:03):
And we were trying to flip the model on its head to say, we want to provide entry into these jobs.
So we hire on the basis of impact.
So household income level, you don't need to have a degree, because our idea was we could do the digital upskilling.
And because of that nature of how we do the hiring, we were not expecting people to come in and begin to do software engineering or programming.

(02:25):
So we started with human judgment.
And that's really where this started was let's do human judgment tasks, whether it's
transcribing business cards like back in the day before optical character recognition, things of that nature.
And I joined almost 10 years ago now.
And when I joined the company, one of the things I observed is that human judgment is also really necessary in artificial intelligence.

(02:50):
You need to have humans validate and train the data.
So we started to shift our human judgment work, which is trainable.
Basically, you just need to have human judgment and have a platform where you can receive the tasks.
That's ultimately how we started.
It was really driven off of our social mission and wanting to create jobs through the digital economy.

(03:11):
So how do you go from that human judgment work, which presumably doesn't require significant skills and expertise, to things that may require more training or that someone
is just not able to do simply by being hired for it?
That's an excellent point and observation. What we've seen as well is that the kind of AI work that we did a decade ago is very different than the work that is being done now.

(03:38):
So one of the concepts we had in our social mission is really about the future of work: those skills change.
Our goal is to build digital skills that allow people to stay employed and in the formal economy.
So what happened is we ended up really needing to shift a significant amount of focus into training.

(03:59):
So not only what is AI and how do you leverage our platform, we also leverage our platform to sort of isolate those things that are human and let the machines do kind of the heavy lifting, if you will.
And we ended up building domain expertise.
So we have people who've been doing autonomous vehicles and are deep experts in understanding how to do LIDAR and sensor fusion.

(04:21):
all the way to building partnerships with local universities to support things like domain expertise in GenAI.
And can you talk a little bit more about what some of those tasks are that you do now, especially in terms of GenAI?
So we primarily focus on both supervised fine-tuning as well as initial model development.

(04:44):
So we provide either the initial training data sets.
We've been focusing quite a bit on multimodal models.
So the combination of video or images in context with text or passages.
AI agent type work with HR, legal, finance as an example.

(05:06):
So those are the types of work that we're doing now.
And it can also include validating synthetic data.
So the cost of developing these models is quite high, and they require vast volumes of data.
Whether we are validating synthetic training data, we are helping build some of the training data, or we are fine tuning and doing things like prompt engineering.

(05:29):
There's all these challenges around accuracy and bias in AI systems.
Does the work that your workers do address some of those issues?
Yeah, I'm a big believer, and I think a lot of others are, that for AI models, right, developing responsible AI really rests on four different pillars, right?

(05:50):
One is related to data provenance.
So where did the data come from?
Do you have the right data to create a, you know, fit for purpose application, right?
Because if it's too little, too much, you know, there's a whole bunch of challenges that can result in bias in the model.
So we really focus on that data provenance, that data governance.
But the other couple of pillars are human-centric validation.

(06:12):
You can leverage models to build models, but you can imagine you want to go from A to B, you might end up somewhere over at C or D if there's not a model evaluation or checkpoints in place.
So we provide that level of human validation.
But also, we spend a significant amount of our time talking to our clients about what does good look like.
I know it sounds like a simple question, but really understanding what is your use case?

(06:36):
how we are expecting it to perform really feeds into the process of not only ensuring that you have the right data, but that you're doing the right validation checkpoints.
And you can actually confirm that even after that model is deployed and new data is being ingested, that you know what it looks like, right?
You can validate and ensure that it stays performant.
So for someone who's not familiar with how the model development and fine tuning process works, is that essentially a worker gets a series of outputs and is saying, this is right,

(07:07):
this is wrong, and it's going to the reinforcement learning engine, or is there more to it?
Yeah, that is one example use case, where you can get a set of data.
So for example, in prompt engineering or fine tuning, the model has responded to a prompt and the human is saying, okay, this is accurate.

(07:28):
It's not accurate.
The sentiment is off.
That's super awkward wording, you know, as an example.
And so that's what a human is doing is just doing some of that fine tuning and saying, hey, the model didn't get it right.
And then typically the ML engineers, who are kind of turning the dials and knobs, right?
They're doing the weights and biases, the fine tuning.

(07:48):
We'll then say, okay, great.
We need to train more in this aspect to get the model smarter for whatever it might be.
It might be math or programming, et cetera.
So that is oftentimes what we're doing.
In some cases, we're actually validating synthetic data that's used to train the engine.
We could be validating the, or creating the data.

(08:09):
So, in some cases for specific applications, you need a specific type of data that's not easily accessible through your existing dataset.
And so that actually needs to be created to initiate the training.
So it's kind of like a full life cycle.
Typically it starts very iteratively and I just think of it as a big circle, right?
So you start, you train the model, you figure out, can it do certain functions?

(08:32):
You evaluate its, you know, scores and then tweak it.
And you continue to go through this process.
sort of over and over again until you get to a point of saying, this model's ready for prime time.
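As a rough illustration of the feedback loop described above, the sketch below (in Python, with hypothetical field names, domains, and thresholds that are assumptions, not Sama's actual schema) shows how per-prompt annotator judgments might be aggregated so ML engineers can see which areas need more training data before the next iteration of the cycle.

# Minimal sketch of the human-feedback loop described above: annotators rate
# model responses to prompts, and aggregated ratings flag the domains where
# the model needs more training. All names and thresholds are illustrative.

from dataclasses import dataclass, field
from collections import defaultdict
from statistics import mean

@dataclass
class FeedbackRecord:
    prompt: str
    model_response: str
    domain: str                                  # e.g. "math", "programming", "legal"
    accurate: bool                               # annotator's accuracy judgment
    issues: list = field(default_factory=list)   # e.g. ["awkward wording", "sentiment off"]

def domains_needing_more_training(records, accuracy_threshold=0.8):
    """Group feedback by domain and flag domains whose accuracy rate falls
    below the threshold -- the 'train more in this aspect' step."""
    by_domain = defaultdict(list)
    for r in records:
        by_domain[r.domain].append(1.0 if r.accurate else 0.0)
    return {d: mean(s) for d, s in by_domain.items() if mean(s) < accuracy_threshold}

# Example: one failing math rating pushes the math domain below the 0.8 bar.
records = [
    FeedbackRecord("Solve 2x+3=7", "x = 2", "math", True),
    FeedbackRecord("Integrate x^2", "x^3", "math", False, ["missing constant, wrong factor"]),
    FeedbackRecord("Write a haiku", "...", "creative", True),
]
print(domains_needing_more_training(records))  # {'math': 0.5}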
I would assume in some cases there are cultural or other kinds of issues that become relevant, and what is an accurate response for a model, especially if we're talking about

(08:57):
things like toxicity or something like that, is going to depend on some contingent judgment.
So if you've got people that are coming from all around the world, how do you ensure that you're matching up the decisions they're making with what the model developer wants?
Yeah, that's a really, really, I think, important point, because what role does cultural context play?

(09:20):
given that these models, they know no boundaries.
I mean, I think that's the thing that's kind of crazy about GenAI,
is that it's not like you're saying, hey, I'm building, you know, an autonomous vehicle that will operate in London, you know, versus around the globe.
You're talking about literally any topic and any subject that these models, you know, can be asked, so that cultural context can become important.

(09:42):
Like what does the...
We can see that performance, for example, in India, you know, with ChatGPT is going to look different than it does in the U.S. in terms of performance.
So one of the things that we do is we try to isolate and understand what is that level of context.
Now we found that, you know, with the strong English language base that we have in East Africa, we've been able to balance some level of training

(10:11):
with, you know, kind of existing knowledge.
I mean, the reality is that there'll be differences.
I remember working on a very large e-commerce project.
You had to identify pudding.
Well, pudding may be one thing to us.
What does blood pudding look like?

(10:31):
What does your pudding look like?
And so I remember...
in the UK, right?
As opposed to like a particular thing here, right?
Exactly, exactly.
Or a jello pudding cup, right?
And so those are some of the examples to where that cultural context can become helpful.
And there are ways to train and sort of guardrail for and do sampling and things of that nature.
But there is an aspect of that, which is also, I think, one of the reasons why it's very important that you get, you know, diversity, not just in the data, but also the people

(11:02):
that are actually touching the data.
And with regard to issues of bias, the challenge is often the humans.
It's not that the AI system generates these biased results.
It's training on data and decisions that are made by humans.
So if you are putting more humans in the loop in the process for developing and improving these models, how do you combat the potential for those humans to add to the biases?

(11:32):
Yeah, the potential inconsistencies, you know, I think that that's absolutely a factor, but I would answer that in really a couple of ways.
One is the model evaluation.
So how do you define quality is a really critical thing.
Cause if you don't know what you're looking for, nobody ever gets it right the first time, right?
You may not, you may be starting, you know, the old adage of garbage in, garbage out, you may be starting with the wrong starting point, which again, if you don't have all the

(12:00):
right data, then you get inference bias, right?
That just occurs naturally because you're missing something.
The other component of having an evaluation rubric is that you might have underfit or overfit your model, right?
So if you underfit, it means the next time this new data comes in, it's like, whoa, I don't know how to deal with this.
So it starts with understanding what are we really trying to tweak and work for?

(12:25):
And then there are a lot of different methods in which you can drive consistency.
So consistency can be anything from, we are leveraging, it sounds ironic, automation basically to make sure that the human-in-the-loop responses are consistent.
And it could be everything from automated quality checks, grammatical checks, that you need to ensure consistency, all the way to other approaches, which can include consensus

(12:53):
building.
As an example, that creates bias in a crowd situation because in the crowd, you only get paid when you get it right.
So there are so many numerous examples and studies that say you just keep hitting that button, you know, just keep hitting until you get it right.
And if that's how people are treating that, you know, just to get to the point of getting paid or, you know, I'm really in the Philippines, but I say I'm from the U.S., you know,

(13:19):
to get to, there's lots of gamesmanship that can happen in that place.
So, even in that scenario, sometimes the volumes in the consensus can also have some flaws.
What do we do to combat those problems?
From our perspective, we do a couple of things.
So one is we really focus very heavily on identifying the model evaluation standards and the quality rubric.

(13:42):
So working with our clients to understand, hey, what is it that we are really validating for, and identifying quality parameters associated with that and the accuracy.
And then we leverage everything from systematic quality checks to actual quality sampling,
you know, statistical sampling to ensure that we are seeing a level of consistency.

(14:05):
And we've got a way to judge what good looks like.
And in the cases of stuff that's subjective, I mean, it's not perfect, but it's going to get you to a place of understanding what are those ultimate sort of business
rules that are necessary to create an accurate output, which could be factual accuracy, which could be grammatical accuracy, which could be tone, sentiment, et cetera.

(14:27):
You name it.
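As a rough illustration of the consistency mechanisms mentioned above, the sketch below (Python; the labels, agreement threshold, and sampling rate are assumed for illustration, not Sama's actual process) shows consensus voting across annotators plus random statistical sampling of completed items for a second-level quality review.

# Minimal sketch of two consistency mechanisms: consensus across annotators
# and random statistical sampling for a quality audit. Values are illustrative.

import random
from collections import Counter

def consensus_label(labels, min_agreement=2/3):
    """Majority vote across annotators; return None when agreement is too low,
    so the item can be escalated for review instead of silently accepted."""
    top_label, votes = Counter(labels).most_common(1)[0]
    return top_label if votes / len(labels) >= min_agreement else None

def sample_for_qa(item_ids, rate=0.05, seed=42):
    """Randomly pull a fraction of completed items for an expert quality audit."""
    rng = random.Random(seed)
    k = max(1, round(len(item_ids) * rate))
    return rng.sample(item_ids, k)

labels_per_item = {"img_001": ["pudding", "pudding", "custard"],
                   "img_002": ["pudding", "jello", "custard"]}
for item, labels in labels_per_item.items():
    print(item, consensus_label(labels))   # img_001 -> pudding, img_002 -> None (escalate)

print(sample_for_qa(list(labels_per_item), rate=0.5))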
And you've talked about the different stages all the way from the original data provenance through the process where these kinds of mechanisms come into play.
I mean, do you have a sense, is there an evolution in terms of where there is the greatest need for these quality improvements, or is it always across the board at every stage?

(14:53):
Well, I think that's the tricky part about AI is that it doesn't sort of sit on a shelf.
It grows and changes as the world under our, you know, the ground under our feet changes.
And so it's an ongoing thing.
You have to have some level of performance and feedback mechanisms.
So for models that are put out in production, and we talk a lot about generative AI, but you could say the same thing for a self-driving car or autonomous vehicles.

(15:18):
It's the same thing.
where it may go on a new road and you have to have a feedback mechanism for a user, right?
Or internally to be able to flag and say, hey, that's not accurate, right?
Or, I don't know if you drive a Tesla or have ever driven those vehicles before, if it suddenly slams on its brakes, you hit the report button, right?

(15:39):
To capture the camera information to say, hey, you need to fix this.
So that feedback mechanism is constant.
A lot of estimates are that it even takes 15-plus percent or more just to maintain the models that are put out there.
So it's a long way to say that, yes, it does occur at each stage, but the sort of level of effort and the costs associated with it reduce ultimately over time.

(16:06):
And you're more in an evaluation and tuning stage.
But actually that addressed the next question I was going to ask you, which is whether in time there's enough data and enough accuracy in the models that the need for the kind of human
feedback you're doing decreases.
It sounds like that still continues because there's always something new.

(16:28):
I think it does still continue.
Does the volume continue at the same pace it used to?
No, I don't think so.
I mean, we've even seen that in the world today where, you know, if more can be automated, it will be automated.
But it's very difficult to, and I would say I think most experts would very, very much agree that you don't want to lose the notion of human-centric validation, that if something

(16:51):
is a model, there's not enough transparency that says this is both how it's been built
and how it's being checked.
So kind of when we come back to responsible AI frameworks, right, we talked about data provenance, we talked about, you know, transparency, we talked about model evaluation,
etc.
I think those things are really key because at the end of the day, there's going to be, you know, there's some activity that's happening towards regulation and compliance, if you

(17:19):
will.
At the end of the day, like really where we are able to balance innovation is going to be in standards,
right, and open source and, you know, getting everybody to come to the table and say, hey, this is the best practice way in which you develop models.
Because at the end of the day, the adoption is about the users, right?
So if you don't trust that system, and I think this is kind of this really interesting state we're at, Kevin, is that, you know, I'm in California and there was a bill that

(17:46):
failed pretty spectacularly, SB 1047, I think I got the numbers right on that, but...
It had input from Anthropic, had input from Meta, it had input from all these different companies.
So it was the right kind of intention of public-private partnership, but also was pretty flawed.
So lots of good intentions, ultimately failed because some of those weren't, I think, very pragmatic.

(18:09):
And there was a real concern of stifling innovation.
Yet at the same time, every poll that you read on that said eight out of 10 Californians think it's a good idea.
We want to know how those systems are being built.
We want to know that if you're building technology that's going to touch a financial system, say lending services, that it is not totally biased against a certain type of, or could be subject to

(18:32):
cyber attack.
So beyond the specific work that you do at Sama, how do you think we get to a moretrustworthy AI environment?
I think a lot of it is about education and standards.
So I think it's got to be a combined path of some level, some position.
I don't think you can solve innovation through government policies.

(18:56):
I mean, not to ding, you know, on that, but I think there's a role for a government to play, which is to encourage the right kind of behavior or discourage the wrong kind of
behavior.
So I think there is some framework.
It's a really difficult problem to solve, but I think there's value to identifying higher-risk areas.
I think there's value to saying,
If this technology can be used to, can be hacked or otherwise, right, and is embedded into critical areas of, like, national security, these are important things that we should be

(19:22):
doing.
Absolutely to have the right guardrails in place.
But at the end of the day, the decisions that are being made on these AI models aren't decisions that are being made in Washington, DC, or in Sacramento, California.
It's the engineer who's building it, who has to have that understanding and framework.
So I'm more of a fan of,
I should say, I think it's a two-pronged approach, which is this notion of responsible and transparent AI, getting those guidelines and best practices out there, enterprises who are

(19:50):
investing heavily in this should take responsibility to educate, train, share, and then that same thing needs to go on the consumer side.
I mean, a lot of people go and they start entering in, like, you know, I know enough to know that if I'm going to leverage any LLM, I'm not going to give or ask it information about myself
because

(20:10):
That is me then giving up my own personal data to train that model.
So I just opt out of the LinkedIn auto AI algorithms to collect my data and send more personalized...
So that education also exists on the consumer side because, the way I look at it, it's very simple.

(20:33):
Like even hairdryers have warning labels.
Don't plug it in next to water.
You need to have that kind of visibility, it's multi-pronged.
Let me jump back to Sama and your business.
How do you find quality people and then how do you effectively train them?

(20:55):
So we have a couple of approaches.
So we have our impact focus.
So that is the hiring and the basis of impact.
We go through kind of an AI 101, which is everything from just digital literacy to what is AI?
Like, what are the inputs that you're providing?

(21:17):
How does it train the model?
And being transparent about that, being transparent about how AI is used in our ownplatforms.
and teaching about use cases and sort of quality and business rules associated with creating, you know, predictable and accurate data.
So we go through sort of a baseline training and then of course, customer-specific training.

(21:38):
And what we are certainly finding with GenAI is that oftentimes, because of the also somewhat subjective nature of the task, you have to start with some level of domain experience.
So it's gonna be pretty difficult to validate a set of Python code or create training data for Python code if you have never programmed before.

(22:00):
And so this is where we have been focused on our same communities where job creation is quite important, but we're partnering with the University of Nairobi and Strathmore and
Makerere in Uganda as an example.
And there's been a longstanding, sorry?

(22:21):
Yeah, to try to create an employment ecosystem.
Sorry, there is a long-standing debate about outsourcing in many industries ranging from IT to apparel, and the question about whether this is actually benefiting people when their
prevailing wage is going to be significantly less than in the developed world.

(22:44):
So is there anything different about how that plays out within the realm that you'redealing with?
I think there's a couple of components.
The first and most important one is, there's a lot of wealth and opportunity that's being created from AI and from the digital economy.

(23:07):
Why should the global world not participate, or should it be relegated to a small group of people in Silicon Valley or in Austin or in other locations?
This is a huge opportunity and I think the world is flat.
There's an opportunity for
economic prosperity and growth across the board.
So that's one thing.

(23:28):
The second component is that things don't cost the same everywhere.
There's a purchasing power component associated with this.
It's, you know, you look to any global company and you tell me, is the, you know, front desk reception person going to make the same thing in Bangalore, India, that they

(23:49):
would in San Francisco, California, that they would in any of those other locations.
The way that we do this, I'm certainly a big proponent of this, and this is part of both responsible and ethical AI, is transparency in wages.
So it's very easy for this to be a race to the bottom.
So certainly what we advocate for in the model that we take on at Sama is we leverage the Anker methodology.

(24:14):
So that's a globally recognized methodology for identifying living wages, which is based off of these standards: to have safe housing,
to have 10% savings, to afford education, medical care, healthy foods, things of that nature.
In my location, what is that living wage?
That is really the representation or tailored approach.
And so we do that across every location that we work in.

(24:36):
Say this is what the living wage is, and it typically corresponds, it obviously rises with inflation.
And as an example, in Kenya, the living wage is going to be about
almost four times more than the minimum wage, the bare minimum wage.

(24:56):
Right.
So, so there is a relative kind of view if you can think about it like that.
And that's what we look at is we say, hey, the living wage is our benchmark.
We evaluate that every year through a series of surveys, leveraging this methodology.
And yeah, that, that is, it's fair.
We know that we're providing to meet the standards and, you know, it's not easy, the living wage goes up more than

(25:21):
you know, necessarily market wages do, but that's a commitment we've made.
And I think the transparency of this notion of full-time employment and living wages is really important because otherwise it is a race to the bottom, right?
You know, it's just going to continue to get cracked down further and further and further.
And I think the thing that we should be incentivizing, you know, well, one for humanity, but second also for accuracy, is do you want to have the, you know,

(25:50):
the people that are training your models or working for this, be incentivized to do as much as they humanly can in the shortest amount of time possible.
You want it to be quality.
It's really difficult to do that if you haven't set basically a safe and effective work environment.

(26:11):
And that's the benefit of a formal job.
You get sick, you still get paid.
You have to go on paternity leave or maternity leave.
You still have those protections, and I think those actually ultimately equate to better work at the end of the day.
How else do you ensure that you're operating in ethical ways?
You talked about the original instantiation of the company was as a nonprofit, but now you've raised venture capital, you've scaled it up a lot.

(26:37):
What does it mean to realize that commitment to ethical sourcing?
So we do in a couple ways.
The one primary way we do it is we hold to our impact hiring practices.
The goal is to be able to provide opportunities to people who've got the greatest barriers to employment.
So we do that.
We track that.
We have our commitment to living wages.

(26:57):
So we have an impact team that does that evaluation review every year.
We report on that in our UN SDGs.
So we hold ourselves accountable to ensuring that we drive that forward.
And then we work towards, you know...
There are five UN SDGs that we are targeted on specifically, which includes a fair and safe working environment, includes gender diversity.

(27:22):
So one of the things that was part of our core social mission was that we would hire at least 50% women or people who identify as female.
And that was a purposeful approach we took from a mission standpoint.
because there's a lot of data that tells you that when women succeed, communities succeed.
And so we've continued to keep that focus of diversity as well from kind of the bottom up to the top.

(27:49):
You had an incident with OpenAI where, as I understand it, workers were having psychological consequences from viewing toxic content for the processes they were involved in.
As I understand it, you canceled that contract.
So I'm interested to hear how you think about making those decisions in terms of what kinds of work to engage in and when to decide that you need to step away.

(28:10):
Yeah, I appreciate that.
There are a couple of key things.
So first off from an OpenAI standpoint, that was a short pilot project to do training.
It was one of two projects that we've done in harmful content.
And ultimately the learning from that was that's not part of our core.
We're not going to do this.
So we had, it represents two out of probably 3000 engagements and projects we've run over the course of the decade.

(28:39):
So it was a learning to focus on our core competencies.
And we did a couple of things that were really, I think were very good sort of learnings.
The first thing we did is we said, okay, how did we get here?
We didn't have a clear enough view of what we will do and what we will not do.
So we instituted a service line boundary.

(29:02):
And that's basically something we instantiated up through both our board and into our contracts.
One example is we will not
do any projects that involve harmful content.
We also took that step further and said, okay, are there industries or areas that we think we should just clearly draw that boundary on?
And we made a call to say, we're not gonna focus on weapons.

(29:23):
We are not gonna focus on, you know, big tobacco.
We just say, hey, these are things that we're gonna just draw on and say, hey, let's stay clear.
But the thing that is very interesting is that there is, the world changes, right?
What is harmful now could be different in 20 or 30 years.
Think about, you know, urban safety, could that data be taken to, you know, ultimately become, I don't know, invasive or surveillance?

(29:49):
Like there's just, there's sort of all sorts of things.
We said, hey, we don't know everything now, but let's create these boundaries.
And then let's also create something that we call an ethics guild, which is representative of our associates in the Stafford hub.
So the people are doing the annotation, no management, you know, people from our research team, marketers, you know, et cetera.
A cross-functional team that anybody in the company, if we have a new opportunity or client we're pursuing, can raise and say, this seems to fall into a

(30:17):
gray area.
Like it's, you know, it's clearly it's not harmful or it's not in this industry, but it seems like it's gray.
And so that process has worked really well.
We've had maybe a dozen or so cases raised since we launched the Ethics Guild.
And like one example was,
This is work that's right in our wheelhouse.
It was like computer vision, self-driving cars or something, smart cities or something to that effect.

(30:43):
But somebody said, wow, we've actually done some observation and this company is actually based in a country that has a really, really bad history of violating human rights and
some other things.
So it got brought up.
We looked at that and then said, okay, maybe one of the aspects of this is that countries or other,

(31:04):
you know, areas that we say these, you know, companies based in these countries, we wouldnot go down that path.
Right.
So that was an example of where something new came up.
We didn't have the, you know, we had a lot of things documented, but we had a mechanism to capture, you know, something new and draw a line in the sand.
So yeah, it's been, you know, there's definitely some learnings from that situation, but I think we have tried to channel that into, you know, just continually

(31:31):
improving and being really clear.
And last thing, it is a big world and AI is a technology that has implications everywhere, but the vast majority of the major AI development labs are in a small number of countries.
What can be done to better globalize these systems so that they are most useful to people in other places, like the developing world?

(31:59):
That is such a good question.
It's one that I do deeply think you need greater diversity, right?
I mean, AI knows no boundaries, right?
It's not like we built something here.
It's not relevant there.
I think one of the ways we can do this is we can create ecosystems.
So I was just in a conversation with some folks from the G7 who were really talking about, how do we find this balance of government support?

(32:24):
You know, some people are like, well, I think it's about, you know, compliance and, you know, regulation, and we're going to tell folks how to do this.
And then somebody was around and they said, you know, isn't that kind of like the opposite?
Like what if you're in a developing nation, like say some nations in sub-Saharan Africa, wouldn't we actually want to instead focus on creating an ecosystem?
I was like, yeah, that's, that's it at the end of the day.

(32:46):
What's really key is that the way that we get greater diversity and greater representation is, we get the, you know,
where the governments can engage is to build STEM programs, you know, like focus on providing, you know, labs and GPUs, and build those ecosystems so that ultimately there's
great diversity in both the talent and the technology that's being developed.

(33:10):
So I think that's really the right broader way to do it.
And then in the meantime, I think that there is a great opportunity for, you know, the companies, if they're thinking about this in that kind of, you know, broader responsible
AI framework to say, hey,
you know, who can I involve both in terms of the data and through the supply chain to create something that's more representative?
So that's probably how I'd answer that.

(33:32):
Great, Wendy, thank you so much for your time.
Thank you, Kevin.