Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Welcome back, Brand the Interpreter bunch, to another episode of the Brand the Interpreter Podcast. This is Mireya, your host, and, because I haven't spoken to you in a few weeks, happy fall. It's, as they say, "suera-ueda" (sweater weather). I say that as I'm sweating, wearing a sweater that is still a bit too warm for the season. But, nevertheless, it is fall, and the
(00:29):
autumn leaves on the trees outside tell me so.
And speaking of fall and autumn leaves: if you've never experienced the fall on the East Coast, particularly out here in the northern parts of Virginia, or just the DMV area. By the way, for my West Coast people, DMV is not the place where you go
(00:56):
to get a ticket and wait for your turn on anything related to your car. I had to learn that the long, hard way, because I kept not understanding what the DMV had to do with the conversation. But it turns out it is an acronym (for us interpreters, that is) for
(01:16):
the DC, Maryland, Virginia area. So there you have it. Now you won't be as lost if you ever hear "the DMV" out here on the East Coast. Anyway, the DMV, or the East Coast, does fall very differently; very beautifully, if I may add.
The colors of the trees are exactly like the scenic routes
(01:41):
that we see in movies, with those long, winding roads and rows and rows of beautiful trees in all sorts of different shades and colors.
I mean, the very first time that I experienced that, I thought, oh my gosh, this is real, and I felt like I was
(02:04):
actually dreaming, driving down those roads.
So I would say, if you're ever going to plan on coming out to the East Coast, this particular area (because, you know, this is where I've been driving around and spending the last year; can you believe that, a whole year out here?): it is beautiful, just
(02:27):
beautiful, during the fall. So, if you make plans to come out, try coming out during the fall so you can experience it.
Thank you for joining me today, again, by the way. I'm always, as I hope you know, very grateful for your support, and for always coming back to listen whenever a new episode drops. I hope that, with some of these long stretches between episodes,
(02:48):
you've had the opportunity to catch up. I know that they have not been coming out as often as they were just a few months back, but I'm hoping that it's going to be picking up speed here pretty soon. So appreciate, and take advantage of, the fact that I've slowed down just a little bit, to give you enough time to catch
(03:11):
up on just over 100 episodes in the show.
If you're new to this platform, welcome, and thank you so much for tuning in. I always appreciate you reaching out on social media and letting me know that you're new to the show, engaging with the guests who take the time to share their stories, and just generally interacting
(03:48):
with the show in general. It always means a lot, because of the algorithm. Obviously, when individuals are tuning in, it does its little magic behind the scenes to recommend the show as a potential listen for individuals that are listening to podcasts. So thank you so very much, and welcome.
So for today's episode, I had the opportunity to speak with a
(04:11):
representative from the Safe AI Task Force, and we got to talking a little bit about understanding a bit more, or a breakdown, of AI in the language services industry.
I know that many of us have heard so much uproar, and there
(04:33):
are so many different, sometimes conflicting, stories of what AI may or may not be in the language services industry. And we may perhaps have gotten even to the point where we're sort of tired of hearing about AI and what it could or could not do.
But I think that the more we are informed, especially the
(04:57):
more information that is very specific or very related to the work that we do, the better decisions we're able to make on it, including decisions having to do with whether or not we want to utilize it or, better yet, how to utilize it.
So whenever I have the opportunity to invite someone on
(05:18):
the show to expand on this topic, I will. And so today's show is really going to be not just learning a bit more about the topic of AI in the language services industry, particularly in the community setting, but also learning a little bit more about a task force called Safe AI. And here with us today to
(05:43):
talk about both topics is Holly Silvestri.
Dr. Silvestri has significant experience in the field of translator and interpreter training, in addition to running her own language service provider agency, as well as freelancing for other agencies and government entities. Currently, she works as a senior coordinator for translation, training and curriculum at the National
(06:06):
Center for Interpretation at the University of Arizona. Her working languages are Spanish, French and English. She is a founding member of the American Association of Interpreters and Translators in Education (AAITE), as well as the chair of the Public Relations Committee for the Stakeholders Advocating for Fair and Ethical AI in Interpreting (Safe AI) Task Force. She is also a member of the American Translators Association
(06:29):
and her state's professional association, Arizona Translators and Interpreters. Dr. Silvestri regularly presents on various topics relevant to the professions for these organizations and others around the United States. So, without further ado, please welcome Dr. Silvestri.
(06:50):
On behalf of Safe AI: Holly, welcome to the show. Thank you so much for being here today.
Speaker 2 (06:58):
Thank you for
inviting me.
It was lovely.
Speaker 1 (07:00):
Yes, indeed. I actually am excited, number one, because it's so, so close to all of the information that's coming out with regards to AI and the interpreting industry and, of course, with Safe AI, which we'll get into in just a little bit. So I'm honored, and I'm happy, that you're here on the show with us today to give us a lot of great information on behalf
(07:23):
of Safe AI.
But before we begin, I'd like for the audience to get to know Holly a little bit more, and just find out a little bit more about you and how you got involved in the language services industry, if you will.
Speaker 2 (07:37):
Sure, where do I start? I was born a poor... no. These days, I like to start with the fact that I am probably representative, in some ways, of the industry, or at least of a population that I tend to deal with, community interpreters. Because, like everyone else, I started out wanting to help my
(07:59):
parents, right? Because they were non-native English speakers when they first came to the States. So I, de facto, became the one who was managing a lot of things that were going on in the household. It wasn't until much later that I realized that this could be a career, like most people. And I got some training, which is a
(08:22):
very good thing, because I was doing all the wrong things, as everybody starts out doing. And then, at that point, I was thinking: okay, what am I going to do with the skills that I have? Because I was originally trained to be a foreign language teacher.
And then I added these, you know, the interpreting and
(08:45):
translation skills on top of that, and I thought, well, there must be a need in schools. There's a multilingual population there. And then that's where we met, because we all started AAITE together, and that has taken off and done very nicely.
And then the pandemic hit, of course. And then, around 2020, I
(09:10):
opened the news, and it was like: ah, ChatGPT, AI, oh my God.
I know that there were peoplein the industry that were just
as concerned as I was about theimpact that this might have, the
disruptive impact on bothtranslation which it has, of
(09:30):
course) and interpretation. And I thought, oh, there has to be some place I can do something about this and educate people. So I got involved with Safe AI. So that's just a short version of my trajectory so far.
Yeah, like a bite-sized version, right? Because, yeah, we all know...
(09:51):
Speaker 1 (09:51):
We definitely have
stories with regards to how we
got involved in the profession.
You also are with Arizona State University, is that correct?
Speaker 2 (10:00):
I'm with the University of Arizona, down in Tucson. ASU is up in the Phoenix area. But, absolutely, I work with NCI there. We're the ones who... well, I say "we" because I wasn't involved at that point, because NCI has been around for probably about 50 years now. It was originally started as a way to do research on this
(10:25):
topic and to sort of create a legal framework. They were very instrumental in getting the 1978 Court Interpreters Act put into law, and they were the first organization that created the federal exams for Haitian Creole, Spanish and Navajo.
(10:48):
Of course, the only surviving one now is the Spanish one, and we are no longer in charge. That grant has long since evaporated, and another company has taken over. But that's part of the reason that NCI exists, and I thought that I could do some contributing there as well. And I've brought a lot of my knowledge of educational
(11:09):
interpreting into that fold, and now we do a lot of professional development in that arena, as well as the legal interpreting arena.
Speaker 1 (11:16):
That's nice. Yeah, no, definitely, especially something that is offered not just at a state level. But I do believe that there's also opportunity, such as you just mentioned, with continued professional development, so others from other areas are able to also take part in the professional development, correct?
Speaker 2 (11:37):
Yes. And since... oh, I don't know the exact date, honestly. I think it was pre-pandemic, which was lucky for us, because we were able to continue during the pandemic. We went from an in-house, on-site program for CITI, which is the Court Interpreter Training Institute, to online,
(11:58):
so now we have a vaster number of people that come and do our trainings in the summer months, which is really nice. Yeah, and the country is really desperately in need of court interpreters. God, that's crazy.
Speaker 1 (12:11):
Yeah, that's crazy. I only say that because I do remember, back in the day, when I first started as well, that was my aim. My goal was to get started in the legal field, meaning I wanted to become a court interpreter. And one thing led to another, and, you know, I
(12:34):
ended up becoming a community interpreter and staying there. Again, I thought, oh, it's going to be temporary, until we were going through just that economic issue back in '08, '09. Oh, yeah. And I started seeing more layoffs than demand for court interpreters, even for Spanish in California. And I was like, oh my
(12:57):
gosh, I'm not going to have a job when I finish schooling, and that's the only reason why I made the swap.
And it was thanks to the guidance of one of my instructors, actually one of my professors, who said: you know, there's this new certification rolling out in the medical field, and I highly encourage you all to just go
(13:19):
for it. You know, you've had the training, the basic training, that would support you anyway. And so now, to hear that they're in need... I went from, oh, there was no need, to, now that there is a need, it's like, well, now I'm so
(13:40):
involved in community that I don't want to go to legal. I know, I know, yeah. But you mentioned, with regards to the technology, being able to bring professional development online, and I think that... I know, I mean, in the States, I want to say here in the US, but, I mean, really, when you go online and
(14:11):
you go virtual, anyone at that point can join. It could become a universal program.
Speaker 2 (14:19):
Oh, sure. We've had a lot of Mexicans join our CITI program.
I know that we've also had people from Europe, which is terrifically difficult because of the time change, but they do come to the synchronous sessions as well as the asynchronous ones. So it's been a real boon, I think, for a lot of conference
(14:39):
interpreters who have, perhaps, languages that are not as popular in the United States, but then can get called over here to do Zoom interpreting; mostly because, obviously, they're not going to fly over for a court case. But it's been a very enriching experience to interact with
(15:00):
those people as well, and to talk about the European environment for interpreting, which I think is a little different than here.
Speaker 1 (15:09):
Yeah. And I think, again, I go back to this topic of the technology component, because we have seen how it's been able to help the industry expand, or grow its wings, right? And in all sorts of different ways. Obviously, we've seen the technology just advance and really take a huge role in our industry. And one of those
(15:35):
technology components that just became like this buzzword was AI and, you know, ChatGPT and all these other different new technologies that have rolled out (been made public, I should say) in the last couple of years. Because I do believe, if
(15:56):
I'm not mistaken, they've actually been around for quite a long time, and it's just only been recently that they've been open to the public and that we, as the public, have been, you know, made aware of them. And basically it's like, oh, here you go, try it, play with it and see how you like it. And it just became this uproar suddenly, right? What did you begin to see when you,
(16:18):
you personally, started to hear about AI?
Speaker 2 (16:24):
I started to look at all of the different use cases that people were putting out there. Because, you know, obviously you want to keep your job, and I was like, okay, well, if this is a language-based, you know, generative AI, how good is it at translation? Because everyone keeps saying
(16:45):
it can translate, no problem. And I knew at that point we were still, and we still are, at the point of using MT, really, for what they call interpreting. But we'll get to that later.
(17:19):
So, you know, CAT, right? Because that's computer-assisted translation by a human; that's just bringing the machine in as an aid, whereas MT is when the machine does it, right. And that's when we started to... like, right before Gen AI, right, generative, um, artificial intelligence, which is what ChatGPT is, came in,
(17:40):
we had that wave of, um... I don't know if you did this; I used to do this. I would test Google Translate, like, every six months, to see how good it would get, right, with a difficult, you know, sentence. I'd be like, okay. And I know that I was training it, and that's a stupid thing to do, but there's no other way to figure out how good, you know, the competition is, because people
(18:04):
who don't know go and use that. And we had that initial neural-net wave, where Google suddenly got a whole lot better, right?
And then the second wave was when they started to use LLMs, and that's when Gen AI was born, right? So that's the second part of the puzzle that allowed it to be
(18:24):
released to the general public and to become this massive tool that everyone used. You know, some form of chat... GPTs and chatbots are the things that people got all excited about, and it's great. They can do all sorts of repetitive tasks really well,
(18:45):
but the translation part is what brought me to the table. And I said, well, you know, is this going to be a disruptor in the industry? And what is this going to do for interpreting?
Because, if they accept (which a lot of, you know, industries have)
(19:06):
machine translation with a human doing post-editing... if that was going to happen in interpreting, are we going to end up being like the court reporter, watching a, you know, a script appear on a screen and just correcting it as we go along? Is that the end of our brains having to do the work? Right, right, for the most part. So there's good elements to that, and there's bad elements to
(19:28):
that.
I thought, you know, there needs to be a nuanced response to this, because it could potentially massively expand people's knowledge of the industry, but also language access in this country, which needs to be expanded. But it also could be really disruptive to those who have put in the time and energy and money to train.
(19:49):
You know, if that's what interpreting ends up being, the training is going to be very different.
Speaker 1 (19:56):
Yeah. And we saw this massive wave of fear for many in the industry, correct?
Speaker 2 (20:04):
Well, yes. That mirrored the general population's fear. You know, everything about AI is extremes. I think part of that is just the media hype: anything that you put out there has to scream at you to get in front of people's eyeballs. So it was either AI is going to save the world (it's the next coming of Jesus) or AI is the devil and it's going to put us
(20:25):
all into this dystopian hell.
Speaker 1 (20:27):
Yeah, both ends of the spectrum, right? Right, yeah.
Speaker 2 (20:32):
So that was kind of why I sort of got involved in Safe AI. When I found it, I was like, oh, this is perfect. I need more information to be able to make an informed decision about what I'm going to be talking to my clients about. Because they were coming to me going, this Gen AI thing, it's great (you know, a lot of the school districts), and I was like, hold the boat.
Speaker 1 (20:52):
Yeah.
Speaker 2 (20:53):
Wait a minute. Because you do not know how this works, and you do not know if it's secure. Do not be running your IEPs through that, oh my gosh, to translate them, please. But you get that in the industry, right? You have to do a lot of client education before they get to realize, like, yeah, if it's free, then you are the product, not the thing
(21:18):
that you're translating. They are using your materials as, you know, information to train their bot, and that's public information then. And it's like, oh, that's not what you want to be putting anybody's private information in there for.
Speaker 1 (21:38):
Exactly, the lack of parameters. Immediately, it was just like, people just want to jump into it quickly, like, this is the answer to, um...
Speaker 2 (21:48):
Yes, like every new tech, right? It's a panacea. They've been doing this in AI; I went back and looked, just because I was curious. They've been doing this in AI since, like, the 1940s, even before AI had a name. You know, the first AI conference was in 1955, but in the 1940s, when they were messing around, you know,
(22:08):
with computers, they were already hyping this same hype. This is back when, you know, computers had, like, one-millionth the capacity of the cell-phone computer that you now have in your back pocket half the time, and they were talking about how it, you know, can think faster than Einstein. Well, yeah, okay, here we are in 2024. And I'm pretty sure, if that was the case in 1940, we would
(22:31):
already have sentient robots.
Speaker 1 (22:33):
So, yeah. I was thinking back, actually, when I saw how long ago... like, I'm like, what do you mean they've been having these conversations, and running, you know, whatever they're running in the background, since before I was born?
Speaker 2 (22:46):
Absolutely. Yeah, well, it's a tougher problem than most people realize. It is amazingly difficult to get something that, you know, the public would accept as the answer to their prayers. Right, we've been trained, all of us who have been... well, even if you moved to this country, you have the Star Trek
(23:08):
kind of thing about technology, right? Someday we'll have the universal translator. Someday it will be able to create food out of nothing. You know, because you grew up watching this and began to expect this out of the tech. And now it's sort of coming true, and people are like, oh, there it is. Like, no, not quite.
Speaker 1 (23:27):
I would have wished they would have started, actually, with... like, I grew up with the Jetsons, you know, as a cartoon. I would have liked for them to have started with the assistant helping me fold clothes and wash dishes.
Speaker 2 (23:41):
Absolutely.
I want the wash-the-dishes robot, thank you very much. And I wouldn't mind the flying car either.
Speaker 1 (23:48):
Yeah, I would have been right behind that, exactly. Like, sign me up, I'll test it. Well, in your search for, you know, trying to learn more about how this can impact the industry, our jobs and, of course, the way in which we approach the responses to the
(24:09):
people that we work with: you are part of, or became a part of, a task force named Safe AI. Now, if you would be so kind as to describe what Safe AI is and why this task force was formed. Although I imagine some of us have sort of come up with a conclusion already, with an
(24:31):
answer to that, why don't you walk us through it?
Speaker 2 (24:35):
Okay. So, like I said, you know, Safe AI is a task force at the current stage of its development. And it started because a group of stakeholders, which was pretty wide (it included interpreters, leadership in professional organizations that do professional development for
(24:56):
interpreters, LSPs, advocates and even some tech companies), all began talking and realizing that the disruption from AI would be really severe and rapid. And we wanted to mount an industry-wide response, to be part of the conversation and to impact policy in real time, not after the fact.
(25:18):
No shade to translators, as I am one myself, but we wanted to avoid, kind of, if we could, what happened to translators, who didn't really necessarily ask for a seat at the table and then just sort of primarily got stuck with doing, you know, MTPE, machine translation
(25:38):
post-editing. It's not 100% of the industry, but it is a large chunk now.
So we decided to form the Safe AI Task Force. And then, you know, we got to work, thinking: okay, we need something to talk about other than just the news, and what we suppose this is going to,
(26:00):
you know, be like, and what we suppose people think about it. That's true. So, you know, because you can sit around and whine in a room together, but that's only your opinion. You need data to be able to talk about: okay, is this actually impacting the industry? What are people thinking about it? And you can glean some of that from the press.
(26:21):
But, like we said, the press, uh... first of all, they very, very rarely know what we do, so their opinion I take with a grain of salt a lot of the time. Um, but also, you know, we needed to understand exactly what people expected out of this, in order to understand what the tech companies might want to sell.
(26:43):
Right, because they want to match those two up, right? People's expectations of any kind of artificial tech, and what they're making: they want there to be a match. So we needed some data.
So we decided to do the survey as our first step to getting
(27:04):
that kind of data. Because our mission was to, you know, document the state of AI capabilities in real-time language interpretation, right? And that includes all languages, by the way. And, which is unusual in the spoken-language world, we were working with signed-language
(27:25):
interpreters as well, because they were also very upset at the possibility of being replaced.
So we wanted to document the capabilities, which include speech-to-text, speech-to-speech, multilingual captioning, translation captioning, all of the elements of the language
(27:46):
industry that could be affected by this; identify where the key use cases of AI could soon be applied in each of the domains of interpreting (conference, medical, legal, educational, business, other settings); and then identify what the
(28:08):
positives and negatives of the impact on these use cases would be, so we could talk intelligently with our, you know, customers. Because it's a give and take: if there's a positive, there's always some drawback that comes from the tech. And then create
(28:31):
best-practice guidance for each of the specializations.
Right, because nobody was out there doing that either. They were all just sort of standing around with their mouths agape, going, oh. And we were like, okay, well, somebody has to come up with best practices. Because, if left to its own devices, the industry will do that de facto, and we'll be left out of the conversation.
(28:51):
We'll be left out talking about how language access could be of poorer quality as a result, and we didn't want that to happen, right?
And we would also be left out of the conversation like, which languages is this being applied to, right? Because everyone thinks that this is just a panacea; it's going to work for all languages. Yeah, well, no, right? Surprise, surprise.
(29:17):
Everything, you know, works out for the best. So we wanted to get that best-practice guidance targeted
to practitioners, to buyers, to vendors, to training and academic organizations, and then also to end users, to get the general public educated. Like, okay, you can choose this option, but this, this and this are going to be a consequence of
(29:40):
that, right? Much like we did with Zoom. Although Zoom, kind of, was thrust upon us because of COVID (there was no other option), whereas here you have a menu now, and you have to talk people through the menu. Like, yes, you can choose on-site. Yes, you can choose Zoom. Yes, you can choose, you know, AI interpreting. But here's where that's best, and here's where that is perhaps not your
(30:03):
best use of the money that you have.
Speaker 1 (30:12):
So let me go back a little bit and ask: Holly, Safe AI, in this context, is the AI artificial intelligence, or is it artificial interpreting?
Speaker 2 (30:18):
Ah, okay. Well, yes and no. Yes and no. To be clear,
(30:42):
I love the name, because it's a bit of a double entendre, because that was also the big argument in the press, right? Is AI safe, is it not safe, is it going to destroy the world, et cetera.
So we want it to be fair and ethical to everyone involved, right? Fair to the person that just got their MA in conference
(31:03):
interpreting and said, oh my God, why did I just spend eighty thousand dollars to be replaced by a robot? Ethical to the end user, right, to make sure that they're still having the same level of service, and that the quality and also the ethics involved are being maintained.
So there were a lot of factors that we wanted to address with
(31:28):
respect to, you know... instead of just handing it over, like: here you go, robot, go ahead.
So, yes and no, in the sense that we also think that the response should be nuanced, because there is a movement toward augmented interpreting. You know, for many, many of us, we know, as practitioners, that
(31:52):
your memory is one of the best tools you have as an interpreter, because you need to keep hold of a lot of information while you're doing multiple things at the same time.
And there have been early studies (not definitive, right, but early studies) on how this could possibly help with the
(32:15):
memory issue, right? There are certain studies that say, you know, with numbers, all the things that are hard (numbers, names, dates, all those things), you have to hold on to that, on top of managing the two languages, particularly in those high-risk situations. You know, in medical, you have to make sure that the dosage
(32:36):
that you're saying in the language you're interpreting into is the same exact dosage.
And in court, you know, yes, you make sure that it's the exact same time, because that's when the crime was supposed to have been committed, and you can't be changing the time just because you forgot. So it's important that those details maintain their integrity, and, you know, having a running transcript that you
(33:00):
could refer to, instead of just having to go off of what you heard, could help with that.
So there are beginning studies that say that that could be something this technology could be used for.
I don't think we're ready yet to insert it into training
(33:20):
programs, or to actually say, yes, this is the way we want to go. I've seen how that works. Because, immediately, when I saw that that was a possibility, I was like, oh, that's interesting. So I have been practicing my simul with friends with the captioning in Zoom running, right?
Speaker 1 (33:42):
You know, I once tried that, and I felt like... I was like, there's no way he just said "beast."
Speaker 2 (33:47):
Yeah, it's not great, because the caption, the speech-recognition technology, is not a hundred percent, of course. But, you know, you're not just feeding off it. Like, I wasn't not listening, I wasn't just reading the thing; you are correcting, like, as you go along. Because, no, of course they're not going to say that, that's stupid.
(34:08):
Wait, she just said this. And, you know, sometimes you burst out laughing, because you're like, no, that's wrong, ridiculous, absolutely.
Right. But I did, you know... that was the best approximation that I could come up with to try and practice a little bit. And I thought, oh, okay. And so I wrote a script that was full of, like, the numbers and the dates and things like that, and I was like, this
(34:30):
is actually really helpful to me, to actually be able to see that on the screen when it came up, right? Right. Sometimes the numbers were wrong, and then... so I think the technology's not quite there yet. But once it does get there, I'd like to see that, and it would be a very great help to
(34:51):
conference interpreters, right, when they're going all day long and they need that support. Even in court, I'd like to see that, if that's possible, absolutely. But, you know, there's a lot of factors that need to fall into place to make that happen, I think. Sure, we're not there yet.
Speaker 1 (35:07):
And one of those, I'm hoping, is the inclusion of the practitioners, you know, the...
Speaker 2 (35:13):
Oh, absolutely. That would, you know... then it's what I like to call augmented interpreting, right? That's my AI, right? That's just another way of saying a CAT tool that's just slightly different.
Right, this is helping me interpret, because it's giving me all the things that I don't need in my memory anymore on a screen in front of me. Absolutely fabulous.
(35:37):
If we actually got to the point where it just extracted it, and it didn't even have the transcript, and all I needed to do was look up and go, oh, that's the date, while I was interpreting? I am all for that.
Speaker 1 (35:47):
Only show me the dates, the names, the addresses, right, as they come up; the stuff we know to write down, right?
Speaker 2 (35:53):
If we get there, and it could do our little pre-session, like, what's that person's name, what's that person's name? And, oh, when did the theft supposedly occur, or whatever you're interpreting for? Um, I would like it to do that, sure.
Speaker 1 (36:07):
Wow. Suddenly... yeah, we're not there yet. We're not there yet, right.
Speaker 2 (36:10):
So if it were augmented interpreting, and not, you know, not just replacing me, but helping me do the job? Huh. Bring it on.
Speaker 1 (36:18):
Help facilitate; I'll take any help I can get. Yeah, no kidding, exactly. Short of actually implanting something, right? Well, you know, you never know.
Speaker 2 (36:26):
I'd probably die before that happens. But who knows? Maybe we'll have that running across our eyeballs. Who knows? Exactly. Twenty million years from now? We have no idea.
Speaker 1 (36:38):
You mentioned the survey, then, that Safe AI pushed out in trying to sort of gather information from all the stakeholders. And the stakeholders here, you had also mentioned, are the practitioners, so the individuals that are providing the service; language service providers, LSPs, correct?
(37:01):
Yeah. End users, so the LEP individuals that are receiving the service. And then, who else was part of this survey? Do you recall?
Speaker 2 (37:15):
Well, I think we tried... under LSPs, some of the tech companies function as both, so, you know, we tried also to get some tech companies to respond as well. The survey, of course, did have certain limitations.
Right, it was interpreting-centric. We wanted it to focus on language interpretation, both
(37:36):
spoken and signed. Eventually we had to separate, unfortunately, just due to time limitations. Because we had to translate the survey into so many languages, including signed languages, we had to separate the spoken survey from the deaf advisory group, which
(37:58):
put out their own focus groups and survey; just because doing it, and getting it out at the time we said we wanted to, was beyond our capacity at that point.
But it was interpreting-centric. It was, on our side at least, spoken language only, right? Because the deaf advisory group did their own
(38:19):
version of the survey for signed languages. It was also US-focused, for the obvious reason that we're all here. That didn't mean that we didn't have international respondents, right? Because when you send out a survey, you can't say, no, you can't respond. And we also realized that it was like a first step. There's definitely more work to be done and more data that
(38:45):
we need.
In fact, I'll talk about the second step that we took in a little bit. But it wanted to capture the current perceptions regarding the use of AI in interpreting from all of those different stakeholders. But, on their own, right, the findings of the report are not sufficient to develop guidelines that are permanent, right?
(39:09):
We need much more data, further research into, you know, use cases, scenarios and industries, to establish, like, that strong framework of, okay, when is it safe and good to use it in this particular case? Because it's a lot more complex than people realize.
(39:38):
So we had about 2,500 respondents, and definitely we had some from other countries. Right, we had 82 countries represented in the responses, but 79% were from the US, right? Because that was our goal: we wanted it to be US-centric, but we didn't want to say, no, you can't respond. Two-thirds of the respondents, of course, were interpreters. Because, I think... I'm guessing here, I may be talking out of turn, but I'm guessing that they were just as nervous, and they wanted to
(40:00):
express their opinion and their fear in some way. And it did come through in the survey results. And more than three-quarters of those interpreters that did respond were working in health care. And I think that also reflects the channels that we went through. Because getting people to respond to a survey (I don't
(40:21):
know if you've ever done it) is a nightmare. Yeah, it really is. To get people to respond, to get a balance, right. And we did have a challenge getting the end users as well. We got enough to make the data statistically significant, but of course you would always like more. But that's always the challenge: where do you go to find them?
(40:44):
How do you advertise?
And, of course, we didn't have money.
We already had to pay for the survey, which, I can tell you, was no mean feat. And, of course, the ethics of "do you want to pay people to respond to a survey" were also an issue, right? So you want to get them to voluntarily respond, and how do
(41:05):
you do that? It's a long story, but it is a difficult challenge, I think.
Speaker 1 (41:12):
Particularly the LEP community, I feel. Because, interestingly enough, we are speaking about technology in this case, and in this case more advanced technology. To identify the gap between the technology
(41:36):
that's there and the people that it's supposed to service... there's a major gap, a major discrepancy. Because I'm always thinking, for that particular community, it would be like boots on the ground. I'm thinking the surveys that, you know, back in the day, we'd go knocking on the doors.
Speaker 2 (41:51):
You have to go knocking on doors, right. Yeah, well, that's part of the problem of the digital divide in this country, right? We often talk about that, at least in the press, along the racial lines of Black and white communities, but it does affect the Latino communities as well.
I do think that there is an element of that. And, although I could get in trouble for saying this, there is an element also, particularly perhaps with the newer immigrants to this
(42:20):
country who are in that LEP population. They may not have the educational background to understand the technology 100 percent.
Speaker 1 (42:32):
Not that your average American has a fuller grasp either.
Speaker 2 (42:35):
Oh yeah, that's true, right? So, I don't, you know... no shade. Please don't write in and say, oh my God, she's a horrible racist.
It's not that. It's just, you know, your educational level is reflected, I think, in how much you read the newspaper, how much
you know about it, how much you have time to research it. Right? Sometimes you're just at the stage where it's like, okay, where am I going to spend the night, and I have to survive. I'm not reading the New York Times about AI right now; I'm trying to figure out how to get a visa to stay here.
Speaker 1 (43:03):
Right, yeah, yeah,
it's different.
Speaker 2 (43:05):
It's a different set of priorities, I think.
Speaker 1 (43:08):
For sure yeah.
Speaker 1 (43:09):
I mean, I spoke to individuals... this was, you know, before it actually hit the mainstream media and it became like this buzzword. But as we were beginning to hear more about it in the industry, I remember one time asking another interpreter in the field,
(43:29):
and the response was: I'd rather not hear about that stuff, because it's like doomsday for me and I don't want to know about it. So it's like, then you have the individuals that deliberately sort of, you know, bury their heads in the sand, if you will, just so that they don't hear about it. And so now you've got individuals in the industry that are uninformed by choice.
Speaker 2 (43:50):
Well, some of that was also, you know, because the press was ridiculously saying there will be some disruption. Any new technology will be disruptive. And I don't mean to be dismissive, because I had that same panic attack as everyone else. Like, oh my God, I just got comfortable in this career, and now it's going to disappear,
(44:11):
and now, change careers? Yeah, right. Uh, you know, I've done it enough times to know that it's going to be okay, but, at the same time, you're just like, oh God, again. So I understand that. And also, I do understand that people... like, the whole press thing was, you know, 50% of the jobs in this industry are going to be
(44:32):
lost. It was kind of doomscrolling in the beginning. Like, when AI just came out, everyone was just like, oh my God, ChatGPT is going to be your secretary, your doctor, your everything. It's like, wait, just calm down, calm down, people. So, so true. I understand that response.
Speaker 1 (44:50):
But I also feel like the responses from the survey, as you just mentioned, opened up the conversation. Not that it settled best practices, necessarily; there is more work to be done. But it did
(45:14):
demonstrate some pretty prominent findings, right? What did it reveal?
Speaker 2 (45:19):
Oh yeah. So, okay, I'm going to group them sort of under different categories, because I'm still in the stages of digesting. Because the report was 350 pages, and then the summary was, like, 50. I was like, well, okay. So, you know, some of us have a job and we can't spend all day reading. But we do have some categories that we can talk
(45:43):
about.
So, on the use and testing of what they call automated captioning, which is sort of the overall category of this: you know, 27% of the survey respondents said that they used or tested automated captioning, from moderately to extensively. And, of those that had used it, over 50% reported
(46:08):
their experiences.
So that just highlights that there is a growing integration of AI in interpreting services.
Whether we like it or not,they're already deploying it.
Okay. With respect to, you know, the acceptance or, let's say... I don't want to say rejection, but skepticism of this
(46:30):
particular technology: the study, you know, revealed a combination of both, which is kind of, you know, the general public's view as well, I think. There's a consensus on the potential of AI to contribute positively under certain conditions, right. But defining those conditions, and ensuring that AI's deployment respects
(46:54):
everybody's rights and needs and preferences, that's going to be a challenge, right. There are still some ethical questions to be answered. There are going to be, you know, privacy rights that need to be addressed. Lots and lots of questions remain with respect to that.
A lot of people thought, oh well, this is, you know, this is
(47:16):
just about cost. So let's talk a little about that. Because, yeah, the financial aspect of AI and its adoption was obviously, you know, identified as one of the driving forces. I do think, in some ways, that was a simple response by a lot of interpreters. Like, oh, it's cheaper, they're going to go for
(47:36):
that option, because a lot of us have seen that before, sadly. Right. But it wasn't only a cost-reduction strategy, which was an interesting part of this survey, right? Despite the potential sacrifices in quality, people did say that that was one of the factors.
But the study also noted, right, a variability in acceptance
(48:00):
based on who bears the cost of the interpreting services, right. So they were more willing to accept AI interpreting, whatever that may mean, as opposed to no services at all. Well, yeah, of course, right? You can have half a piece of bread, or you're going to starve.
(48:20):
Okay, yeah, I'll take it.
So there is a complex interplay, right, between the financial considerations, the quality of service and the ethical standards you have to maintain in order to do, you know, language access properly. But there are other considerations. A lot of the non-interpreters, right, the LSPs, were largely
(48:46):
concerned with, you know, making sure that they filled the slot that they didn't have an interpreter for; that they could provide the service even though there's a shortage of interpreters.
Quite frankly, I think that's somewhat misguided, because a lot of the time we're looking at filling services for LLDs,
(49:09):
languages of lesser diffusion. And I've got to be honest with you: the generative AI using LLMs is good for maybe eight languages, and none of them are those LLDs. Right, it's the big European languages.
(49:30):
Well, why?
Because if you look at the LLMs, they're trained on information scraped off the internet. Well, hello, the internet is, I think, 60% English-speaking, right? And then you've got the other big countries, the developed countries, primarily European, and some, you know, larger Asian
(49:50):
countries that have a presence. You know, well, go ahead and look: see if there are any Mixteco pages on the internet. I don't think so, right? So there's no data to train them on.
So there's a mismatch in the expectation. Right, this is the panacea: we're going to be able to fill all these things, because the computer is going to stand in
(50:13):
for the non-existent interpreter. Well, okay, except you have nothing to train it on. So good luck with that.
Speaker 1 (50:21):
Yeah. Immediately, I'm thinking, you know, of the unethical piece, which I am certain exists out there: going with that whole notion of, hey, it's better than nothing, but then choosing that primarily because of potentially cost-driven efforts, right, or cost-driven decisions, and then replacing
(50:47):
that particular interpreter because, oh, we tried to obtain or secure an interpreter of that language, no can do, but here's the next best thing. But it's like, how do we determine whether or not there was reasonable effort behind that?
Speaker 2 (51:01):
Because it's already difficult. Well, that's part of the, you know, part of the problem with the industry, right? Right. And the guidelines, and the fact that there's just a shortage of interpreters. I mean, I even think... you know, I work in Spanish, and that's the gorilla in the room, right? I still think there's a shortage of interpreters in Spanish as well. So forget about Mixteco.
(51:23):
Or, you know, pick another LLD, like Vietnamese, for instance. There's not enough material to train them on, so these bots don't exist in those languages. So I think that the companies that expect that, you know, the AI solution is going to work for them are going to be sorely
(51:45):
disappointed. But maybe I'm wrong. There may come a time when the internet has its own multilingualism (interesting!), and then there'll be enough information to train on. But we'll see. But, okay, let me go back to what I was saying.
There are also issues of transparency and informed
(52:07):
consent, right? An overwhelming majority of respondents advocated for clear disclosure when AI is used in interpreting, right? And emphasized the need for transparency and informed consent: that all parties should be aware of, and consenting to, the use of the automated solution.
(52:27):
I think that's only fair. The ethical considerations were really a major part of this as well. The majority of the respondents agreed that replacing people with machines for interpreting is not right. Of course, the sentiment is particularly strong among interpreters and service recipients, who view that shift
(52:48):
with apprehension, which we've talked about multiple times. That makes sense. And I think that our survey was perhaps skewed because of the number of interpreters who responded in that way, as opposed to the number of LSPs or, you know, tech people. But they also had diverse perspectives on AI use, in the
(53:09):
sense that there was a significant divide of opinions on the use of AI when no human interpreter is available, right? It was equally split between those who prefer automated interpretation to none at all, and those who would rather have no interpretation than rely on AI. I think some of that has to do with
So it was, you know, I thinksome of that has to do with
people's apprehension, just with technology to begin with.
Speaker 1 (53:39):
That's so interesting.
Speaker 2 (53:40):
You and I both know, as trainers of interpreters, there are different levels of technological capacity amongst those people and amongst the end users. We've already talked about the fact that there could be some technological challenges. So there are a lot of factors involved.
Speaker 1 (53:56):
Yeah, for sure. I know, now that I'm thinking about it, it's so interesting that that was the result. Because, thinking back, I took the survey, and I remember just pausing on that one, because it's like, oh, this one's a hard one for me. Because it is. It's like, do we leave
(54:21):
them there with no service? Which, you know, it's just a disservice at that point, right, not having any access, or having to come back. It's no different than, hey, our interpreter's gone, right, come back at a later time or something, right? Which is often a challenge for a lot of our end users, right?
Speaker 2 (54:34):
Yes, yes, they don't
necessarily have the funds to be
coming back and forth.
Speaker 1 (54:39):
Yeah, that one was a tough one for me. But going back to the other one, with regards to the disclosure component: I know that that, maybe, potentially for other people, potentially even those that are listening, might be like, well, duh. Yeah, they should be disclosing. But let me tell you, just out of the stories that I've been hearing thus far, it's not a "duh," you know.
(55:00):
And they're pushing it out as if an actual human gave the translation. And there's pushback between management and
(55:26):
the people that are actually supposed to be providing the service, with, hey, could we at least disclose? And management not feeling the need to have to disclose that it was.
Speaker 2 (55:35):
As a lawyer, I've got a lot of opinions on that that I can't really put into words right here. Yeah.
Speaker 1 (55:44):
So, meaning to say that, yeah, that question is important, because we absolutely see, in the results of the survey, people are saying how important disclosure is. Let individuals be transparent, and let individuals know that they're reading something that was created by a machine versus
(56:10):
a human. And I feel like it just creates this sort of sense of, hey, if there is a mistake, I understand why, because a machine did it, and I can say, by the way, there's a blip, right?
Speaker 2 (56:26):
There's a blip, yeah. I mean, think of the translators, right? I wouldn't want my work to be denigrated as having been done by a computer, and people assuming that I made mistakes. That would be very damaging to your professional reputation.
Speaker 1 (56:40):
Absolutely. And I feel just that it would almost give, like, this sense of, okay, well, they're putting something together for me to read, and they're letting me know it's a machine. And the communities that we service, for the most part (you know, this is generally speaking), they're appreciative of even the effort. And we see this time and time again. This is not new either.
Speaker 2 (57:01):
Right. We've always had to make certain adaptations. Like, I know well, particularly because I work primarily in the schools, and the schools never have enough money, right, for anything, let alone language access. So the budgetary concerns are always primary, and there's always some kind of give and take.
(57:23):
Right? The service is not 100% coverage, 100% perfect. There's always, in any service, right: it's based on the budget that you have, and therefore you have to make some accommodations. So this is not new. It's just, before,
(57:45):
it was fairly clear, I think, in a lot of the use cases that, okay, we can't provide you with an interpreter for the IEP, but we're going to get you a translation after the meeting, kind of a thing. So it was clear that this was an accommodation and that this was being done, and everybody in the room, you
(58:06):
know, assented and consented to it. So I think over time that will become more part of the conversation, because now there are multiple options, not just, you know, Zoom or in person.
Speaker 1 (58:22):
Yeah. So, going back: what other prominent findings did we come to see that are important, at least for this audience, to know about? What other things did we see as very relevant? I really liked the idea (before you answer, Holly), or the notion, rather, that there were a lot of medical, uh,
(58:44):
interpreters, or interpreters in the medical field, that responded to the survey. And I feel almost like... not to speak for them, but, you know, it gives a sense of, hey, we saw it happen to us when the pandemic hit. We had to sort of evolve. Many of us had never even operated in the
(59:06):
virtual world of, you know, interpreting. Not that it had just existed or come up; it'd been existing for years. But a lot of us sort of had to learn to go through the motions without anyone asking us, what do you think, or, how should we? You know, it was sort of like you got thrown in there and then you learned to navigate in reverse.
Speaker 2 (59:28):
Exactly. I think that was definitely the fact that a lot of interpreting moved to remote with COVID and, like you say, it was... I won't say a forced move, in the sense that there was no physical force involved, but it was an obligation given the circumstances. And people felt, I think, that, oh, this
(59:50):
time I want to have a say, right? And so I think that perhaps is the reason that we had a lot of medical interpreters responding as well. They were just like, well, this time, just a second, I have stuff to say.
Speaker 1 (01:00:01):
I'm given the platform... yeah, I'm definitely coming in and sharing my thoughts.
Speaker 2 (01:00:06):
It's so true, so true. You know, the other thing that this talked about, too: as it walked you through the use cases and asked people, is this okay to use AI for? Is this okay to use AI for? There was a definitive feeling, and I think this is instinctive: okay, this is going to work for low-risk, non-complex kinds of
(01:00:30):
conversations. That's a simple answer. The problem comes when determining what exactly that is, and anyone who knows anything about communication knows that it could be this kind of conversation in the first five
(01:00:53):
minutes, and then it takes a left turn. Right, we've all been there. You know, you're in the middle of a wellness exam and you're interpreting, and life is going great, and then, boom, a sudden cancer diagnosis. Well, that's not simple and non-complex and low-risk anymore, is it? Right?
(01:01:13):
Or you're in the middle of the regular questions for an immigration interview, and then, all of a sudden, bam, comes the: I've been abused by my husband, and that's why I took my children and ran. So, you know, you can't tell that in advance. Nobody's walking around with a sign going, I'm going to flip your world in
(01:01:33):
about five minutes, right? So deciding what the best use cases are, and how we decide that, and at what moment it falls out of that category... first of all, that's not something I want to leave to
(01:01:54):
a machine. Like, you know, here's the list of questions that mean it's complex. Or even human beings, like, oh, all immigration interviews are fine, simple, right? Or all school intake is fine and simple. Really? Uh, I can talk to you about this time when that happened and this happened. Anyone who does the job knows that you can't, in real life, say this
(01:02:18):
conversation is going to be this category for its entirety. It just doesn't work that way. So, you know, that's a simplified answer to a much more complex problem: how do you decide, and how far in, and at what point do you decide, oh, this has to flip over to a human being, if you start with,
(01:02:40):
you know, uh, an artificial intelligence chatbot? Yeah, like pressing zero for customer service. Yeah, exactly. And who's going to educate the consumer to make that "press zero," right? To have that work, there has to be education all around, right?
(01:03:04):
And are we gonna... what are we gonna do at the border? We're gonna be passing out pamphlets, you know: don't forget to press zero. Yeah, exactly.
So I'm not sure that people really have thought it through 100%. Like, oh, the robot's gonna take over. Really? It can't think. It just can't.
(01:03:26):
So, you know, take a Xanax, take a deep breath, and think about it
for a second, right? Even with self-driving cars, they've been promising that for years. They haven't gotten out of the suburbs of Phoenix, because they still can't deal with weather, you know, other than beautiful,
(01:03:47):
sunny skies.
So, you know, yes, this technology seems to be moving fast, and there's this big hype, but we're in the honeymoon period, and it's going to, you know, it's going to ebb and flow. So, deep breath, and think just a little bit deeper about what you do. Can you really be replaced by a robot?
(01:04:08):
I don't think so in a lot ofcases.
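Picking up the press-zero handoff idea from a moment ago, here is a minimal sketch of what an escalation check in an AI-mediated interpreting flow could look like, assuming an invented keyword trigger list and invented function names; real conversations signal risk far more subtly, which is exactly the speaker's point.

```python
# Illustrative only: invented trigger list and names, not a real system.
# The point is the "left turn" problem: any AI-mediated conversation
# needs a live path that flips control to a human interpreter.
HIGH_RISK_CUES = {"cancer", "diagnosis", "abuse", "asylum", "custody"}

def needs_human(utterance: str) -> bool:
    """Crude check for a conversation leaving low-risk territory."""
    words = {w.strip(".,;:!?") for w in utterance.lower().split()}
    return any(cue in words for cue in HIGH_RISK_CUES)

def route(utterance: str) -> str:
    # Real deployments would need far more than keyword matching:
    # cues can be implicit, multilingual, or spread across many turns.
    if needs_human(utterance):
        return "escalate: transfer to human interpreter"
    return "continue: AI-mediated interpretation"

print(route("The wellness exam results look normal so far."))
print(route("The biopsy shows cancer, I'm afraid."))
```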
Speaker 1 (01:04:11):
Yeah.
Speaker 2 (01:04:12):
I think we're safe.
Speaker 1 (01:04:15):
I believe not.
Not at this point, and who knows, in the foreseeable future, you know, to what extent necessarily.
But it definitely says a lot with regards to us as professionals, and, you know, that proverbial interpreter's toolbox.
What are we going to be adding to this interpreter's toolbox
(01:04:38):
that's going to position us in a way in which now we're more competitive? Not necessarily at par with AI, but to the point where, you know, we have some knowledge and some skill sets that sort of differentiate us, in a way in which I'm demonstrating I'm not afraid, you know, you actually are going to need me in this part. The
(01:05:01):
other day, I just saw a title for the first time.
This was something that I learned a couple of years ago, when we first started hearing about AI in the industry, and somebody, that was Bill Glasser, if I'm not mistaken, was the one that actually was talking about what we're going to see with new, potentially new, titles that do not exist
(01:05:24):
currently, and, thanks to AI, these new titles are going to come up.
And it was in the news, or on television, where I saw something about, you know, the new chief of AI, or chief of something with some AI component, and I was like, I tried to take a snapshot. It was live TV, so I tried to take a snapshot and didn't get it.
But that was the first time,
(01:05:45):
so to say, that they had a new job description, right? A new job description rolled out with regards to this. So we're starting to see that.
Speaker 2 (01:05:59):
And I do. I do think that that's the case in a lot of professions.
It's like most technology: is it going to be a disruptor, and are some people going to be left by the wayside? I do think so, and not only in our profession.
And you see this: anyone who's been in the profession long enough knows that if you're not learning, you will be left behind.
That's why I kind of love this profession.
(01:06:21):
You need to constantly up your game and up your skills, and that can take many forms.
It can mean adding a language.
It can mean adding a skill set that deals with, okay, if ChatGPT is the thing that people are going to be using, I better know how to use it.
I better practice: get the free version, learn how to do
(01:06:42):
prompts, because maybe you're going to be the one that they call on and say, hey, is ChatGPT good for this?
Well, you better know the answer as their language service provider.
That's what they depend upon.
Right, in almost all industries, when they talk about language access, they look to the person who's doing it.
(01:07:03):
So you better have the answer for them.
If not, they'll go to someone who will.
So, you know, experiment would be what I would say.
Like you said, experiment with it, don't be fearful of it, and see if you can expand what services you offer.
Maybe you also, in addition to interpreting, want to offer, you
(01:07:24):
know, secretary-like services, and have ChatGPT write the letter that they want in Spanish, and you edit it and then you send it back to them, because they don't have a bilingual person on staff but they need to communicate something, or something like that.
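As a concrete illustration of that draft-then-edit letter workflow, here is a minimal sketch assuming the current OpenAI Python client; the model name, prompt wording, and helper function are illustrative assumptions, not a recommendation of any particular vendor or setup, and the key is that the machine only drafts while the bilingual professional stays in the loop.

```python
# Minimal sketch of a "machine drafts, human edits" workflow.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set
# in the environment, and the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_spanish_letter(english_brief: str) -> str:
    """Ask the model for a first draft; a bilingual human must review it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Draft a formal letter in Spanish based on the "
                        "user's brief. Mark anything ambiguous with [?]."},
            {"role": "user", "content": english_brief},
        ],
    )
    return response.choices[0].message.content

# The draft is never sent as-is: the professional edits it first.
draft = draft_spanish_letter("Tell parents the school picnic moved to Friday.")
print(draft)  # reviewed, corrected, and only then sent to the client
```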
Just don't do the deer-in-the-headlights freeze. That, I think, is the best response and the best advice I've ever
(01:07:47):
gotten, right? Don't do that in an interpreting exam and don't do that in real life, because all that means is you're frozen with fear and you're not going to process properly.
Speaker 1 (01:07:57):
You're not going to process properly. I like that.
So, Holly, tell us now, where is Safe AI headed next? Now that we've got this over-300-page document and over 2,000 stakeholders that participated in this, what's next?
What's to come?
Speaker 2 (01:08:15):
Well, so, the whole objective, right, we talked about our objectives as an organization in the beginning, is to create some sort of beginning guidance document for language access and the safe use of AI, or client education, right?
Exactly what I was talking about.
(01:08:35):
If you don't know the answer, if they don't know the answer, they're going to come to you as a language service provider, and you have to have, well, general guidance gives you the ability to talk to them about what range of services you can offer and what the consequences of those ranges are.
It's much more complex than a lot of people realize.
(01:08:57):
I think I've said, I've talked about the simple, low-risk conversations.
I've talked about how, you know, there are maybe eight to ten languages that have the large amounts of data that the LLMs were trained on, and then it falls off a cliff.
So trying to solve for language access with this is probably not going to work for languages of lesser diffusion.
(01:09:23):
People don't realize that there are ways that your LLMs, if you want to train them, can be trained domain-specific, but it's costly.
That's the other thing that people don't realize, right?
This ChatGPT was not $1.99 to create.
(01:09:44):
It's a very expensive option.
So a lot of the places that say, maybe, oh, this is a cost-saving factor, may end up saying, oh, it's not so much of a cost-saving factor, because if I have to train my chatbot with an LLM on all educational things, right, it's going to cost me X
(01:10:04):
amount of dollars, and that is actually three times my human interpreter budget.
Speaker 1 (01:10:10):
So take a deep breath. LLM, large language model.
Speaker 2 (01:10:15):
See, the basis of all of this, the sort of generative AI which ChatGPT is a prime example of, and for everyone the stand-in for it, came about because of the LLMs, right?
Once, I talked about, like, testing Google, right? And, you
(01:10:36):
know, Google got much better once we had neural machine translation, right? And that was like 2017. It started to be like, oh, this isn't so funny anymore, like the sentence comes out and it makes a little bit more sense, because it had more data to train on.
Well, LLMs is when they had the gajillion amounts of data, which
(01:10:58):
makes that GPT sound really human, because it's trained on gajillions of data.
Now, you know, stepping outside for a moment, that takes time and, by the way, a massive amount of energy.
That's the other thing that people aren't talking about.
If you care anything about the environment, maybe that's not the
(01:11:20):
best solution.
Just saying, it eats up a heck of a lot of water.
Talk to the people in Ohio, where a lot of, you know, servers are, about how much water is being eaten up to cool the servers to allow this to happen.
(01:11:45):
Talk to the energy companies.
Like, you know, one training of one LLM is the same consumption in energy as about 30,000 to 40,000 homes for a year.
And then we talk about, hmm, you know, there's a cost-benefit analysis here, perhaps, but those are external factors that a lot of times people don't take into consideration.
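To put that homes comparison in rough numeric terms, here is a quick back-of-the-envelope conversion; the per-home figure is an approximate US average and the homes range is the speaker's own ballpark, so every number below is an assumption rather than a measurement.

```python
# Back-of-the-envelope check on the "30,000 to 40,000 homes for a year"
# claim. ~10,500 kWh/year is a rough US-average household figure; all
# numbers here are illustrative assumptions, not measurements.
HOMES_LOW, HOMES_HIGH = 30_000, 40_000
KWH_PER_HOME_PER_YEAR = 10_500

low_gwh = HOMES_LOW * KWH_PER_HOME_PER_YEAR / 1e6    # ~315 GWh
high_gwh = HOMES_HIGH * KWH_PER_HOME_PER_YEAR / 1e6  # ~420 GWh
print(f"Claimed energy per training run: {low_gwh:.0f}-{high_gwh:.0f} GWh")
```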
But the time and the money, definitely, right, to actually train the LLM.
(01:12:05):
If you want to maintain that privacy, uh, and the, the transparency, you know, oh well, like a lot of businesses, education may say, well, we want our own LLM, then we don't have to worry about, you know, the privacy issues and so on and so forth.
Okay, it's going to cost this much. Oh, okay, never mind.
(01:12:26):
Yeah, because they don't realize the tons and tons of money that went into ChatGPT. Like I said, it didn't appear without billions of dollars of Elon Musk Tesla money. Not my bank account, anyway, unfortunately.
So there's those issues as well.
(01:12:52):
I think, you know, this may not be the solution that everyone's saying it is, like, before, interpreting was so expensive and now this is the cheaper option.
Have you thought about that, actually?
Because I don't know if that's really true in some cases.
So that's, you know, the guidance is what we're looking
(01:13:14):
to create, a beginning guidance document, right?
We're also looking at doing a different amount of research, which we've started to put money towards, so that CSA is doing round two.
It's not a survey this time, but they want to come out with a much more well-researched decision tree, so that people can
(01:13:36):
then, you know, look at it and say, okay, in this case, what is my best option, so that there's a way to look at how those decisions need to be thought out and not just, you know, made at the drop of a hat, like looking at all the different aspects that need to be, you know, considered when
(01:13:57):
you're making that kind of a business decision.
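To make the decision-tree idea concrete, here is a small sketch of the kind of logic such a guidance document might encode; the factors, thresholds, and names are purely illustrative assumptions, not anything the task force has published.

```python
# Illustrative sketch only: the factors and rules below are assumptions
# about what a modality decision tree might weigh, not SAFE AI guidance.
def choose_modality(risk: str, complexity: str, language_data: str) -> str:
    """Return a suggested interpreting modality for one encounter.

    risk / complexity: "low" or "high"; language_data: "rich" for the
    handful of well-resourced languages, "scarce" for languages of
    lesser diffusion.
    """
    if language_data == "scarce":
        return "human interpreter"  # LLMs lack training data here
    if risk == "high" or complexity == "high":
        return "human interpreter"  # cancer diagnosis, abuse disclosure
    # Even "simple" encounters can take a left turn mid-conversation,
    # so any AI option needs a live escalation path to a human.
    return "AI-assisted with human escalation"

print(choose_modality(risk="low", complexity="low", language_data="rich"))
```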
So that's where we're putting our efforts now.
And then, of course, we're in the process of still digesting the survey results and, in addition to commissioning that work through CSA again, we're looking at becoming, I think, a
(01:14:17):
more permanent fixture in the T&I landscape, right, because we're a task force now.
We don't have a structure.
We're really not an organization.
It's just a bunch of people that got together and said, let's do this.
And this isn't a one-shot deal.
It's not like we're just going to say, here's your guidance, and
(01:14:38):
we're gone.
I think, because T&I is always changing and AI is advancing so rapidly, we're going to have to be iterative in our responses.
You know, every six months it's going to be like, oh okay, now it can do this, and we have to respond this way.
So we need to exist formally, and we're in talks to see how we
(01:15:01):
morph into something more permanent right now.
Speaker 1 (01:15:05):
I love it. Great.
I'm very much looking forward to that. I'd definitely love to continue being a part of it and just staying abreast of all the different findings and resources that hopefully will soon come to be as a result of all the data that's being collected and eventually compiled into perhaps something
(01:15:27):
that, like you just mentioned, is a well-researched decision tree that would be able to be utilized by a variety of different stakeholders in the industry.
I very much continue to look forward to being a part of Safe AI, to continuing to spread the information on the industry to our audience here at the podcast, and just being able to
(01:15:50):
continue the conversation.
So thank you so very much, Holly, for joining us, and you are so welcome.
Is there anything you would like to share with the general audience about Safe AI?
Speaker 2 (01:16:05):
We are always open to new stakeholder members.
Please come and join us.
You know, shoot me an email through the site, because they all come to me, because I'm PR, but I'm happy to put you on any of the committees.
You know, the more hands, the better.
This is the kind of thing that, on a personal level, can raise your profile, if that's your thing, but also it's very
(01:16:27):
satisfying to be able to contribute to the industry and to leave a legacy of, okay, after I'm gone, the ethics will still be maintained, the language access, broad access, in this country will still be maintained.
I think it's all of our duty to move the profession forward, and
(01:16:48):
this is one way that you can do that.
So if you have time, I could use some hands and brains.
Come on over.
Speaker 1 (01:16:55):
Absolutely, and I totally agree.
I think it's definitely a moment in time in which we haven't necessarily had the opportunity. Many of the standards, potentially in other areas, were created before some of our time in the industry, before we joined the industry, and this is definitely a great moment to take part, to be a
(01:17:16):
part of the solution, to be a part of the decision-making, or even just to have your voice counted.
There are so many different skill sets that you're able to put into service with something such as Safe AI, and so I definitely agree with Holly: please take part in this.
Speaker 2 (01:17:33):
Visit their website, which is what, Holly? Ah, yes, safeaitf.org, right.
Speaker 1 (01:17:40):
Safeaitf.org, and I'll make sure to have the link to their website, of course, in the episode notes, as always, and the ability for you to look a little bit into Safe AI and its mission. Hopefully, you're able to have more interested parties take part in Safe AI and volunteer
(01:18:02):
their time.
Once again, Holly, thank you so very much for the opportunity, and I look forward to sharing your episode with this audience.
Thank you so much.
Have a great day.