Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Let's be real here, we're not even in AI 1.0, we are in AI 2.0, and 2.0 is where it's omnipresent.
Welcome to episode 48 of Tool Use, the weekly conversation
about AI tools and strategies. I'm Mike Byrd, and today we're
talking about AI tools to save time and money while going
global. When working at a global scale,
being hyper efficient can make or break a company, and we're
(00:21):
going to explore which AI workflows can reduce cost for
customers by up to 80%. This week, we're joined by Olga
Beregovaya, the VP of AI at Smartling, a global leader in translation. Olga, welcome to Tool Use.
Thanks so much for having me. Absolutely.
So, why don't you give us a little bit of your background?
I've been in the language technology, localization, globalization, internationalization, natural language processing industry for over 25 years.
(00:45):
I come from a structural linguistics background and then pretty much needed to take classes continuously and learn on the job as it was becoming more and more obvious that structural linguistics by itself, as exciting a field as it is, might not necessarily be as applicable when it comes to the real job market.
So there I am, getting out of grad school with my second
(01:07):
master's degree in structural linguistics.
Like, OK, what am I going to do with this?
And then it became very clear that NLP is the path to go.
And then obviously, if you look at the evolution of NLP, you go,
OK, natural language processing, a little bit of computing.
And then as it evolves, you inevitably find yourself in the
world of machine learning, AKA AI.
So that's my trajectory and that's my journey.
(01:29):
Actually very cool. So coming from the origins with NLP, getting into more of the wide-scale LLMs, how have you found integrating AI into your personal life or your work life has gone throughout the past few years?
I think the first time it was like, key things are changing dramatically, was probably when the tooling, natural language processing tooling and overall
(01:51):
globalization tooling started pivoting from rule-based to more statistical-based and then machine learning-based.
So you could just see the tooling itself, the open-source libraries that are out there, and then of course Hugging Face taking over the world, right?
So eventually you just realize that, OK, Google tools still
(02:12):
serve the purpose, but professionally, there is so much
more that we can do with actual modeling and letting models make decisions. So I would say professionally, probably when GPT-2 came out and I started playing with it. I was like, holy Lord. GPT-1 was OK, like, OK, we can live with that. But then when 2 came out, it was
(02:34):
very, very apparent that the world of globalization and
translation is never going to be the same.
So it's basically gone from interesting, to oh wow, scary, to OK, here we are, it's inevitable. So I think now, I was just listening to Andrew Ng talking at a conference, who said, look,
(02:55):
guys, I mean, let's be real here.
We're not even in AI 1.0, we are in AI 2.0, and 2.0 is where it's omnipresent.
So I think, yeah, I mean, it's equally professional and obviously in our personal lives: we get Netflix recommendations, right? We get recommendations everywhere, everywhere we go.
(03:16):
Sometimes I'm laughing because sometimes I have some questions about the UX of AI assistants.
Because sometimes I'm just thinking like, OK, I'm sorry, it might sound horrible. I just want to be honest.
Sometimes in my personal life and in my own productivity world I'm back in the world of Microsoft Clippy, like, I mean,
(03:37):
this thing just pops up and basically takes over the
process. I think that's where the UX issue lies.
There are things to do, but otherwise we're definitely
there. Do you have any thoughts on
that, how the UX is going to evolve? Because everyone is getting tired of the chat interface. There's always that empty box
that you type instructions into to start from scratch, and
there's so much potential here to evolve.
Do you have any predictions on how things might go, or just
(03:58):
even in your idealistic world where you'd like it to go?
Yes, I think that's where we start talking about agentic workflows, right. So one thing is, part of the UX will just natively disappear because part of the UX will just inherently be taken over by agentic workflows.
(04:19):
So you know, you set the initial parameters, you go with
your asks and then you really let the agentic environment,
whether it's agentic with a capital A or agentic comprised
of multiple agents, eventually that environment will remove
some of the UX. Now, my prediction for the more copilot or assistant-based UX: I think eventually we're just
(04:43):
figuring it out. And I think eventually it'll just be a little bit less intrusive and a little bit less invasive.
Because right now sometimes I have a feeling like, oh, damn, let's just plant the chatbot somewhere.
And let's say like, hey, I'm your assistant.
Like, hey, assistant, I was going to write this letter myself.
So I think it will be more subject to user configurations
and user preferences. And I also think that
(05:08):
just the arrangement of the real estate, whether it's in English or any other language, I think the arrangement of the real estate has to be a tad less invasive. And we can talk about UX because, I mean, now we need so much UX.
We need it especially in our industry.
You need a prompt work surface, you need a post-editing work surface, you need fact checking. I think the whole world of UX
(05:30):
in translation, around translation, will never be the same.
Yeah, I fully agree. It's going to be a lot more
ambient. It'll do things for you before
you ask based on other triggers or whatnot.
It's very exciting, with a background in systems integrations, just knowing that we can kind of plop a little AI right in the middle of a pipeline and you can get more functionality out of it. I'd love to pivot to Smartling. How have you
(05:50):
approached incorporating it, whether it's workflows or at the engineering level? How has a global company adopted AI?
I mean, we are an AI-enabled company, right? And we offer and provide state-of-the-art, actually, let's say, yeah, definitely state-of-the-art AI-powered solutions to our customers. So when we tell our customers
(06:14):
like, hey, AI is really what's going to help you gain
efficiencies and cut costs, we better be able to eat our own dog food, right? I mean, practice what you preach, right? And I think that's a necessity. So let's start with the translation process. We are equally a services and
(06:35):
technology company, an AI-powered full end-to-end services solution. So we provide the best experience for our translators, giving them productivity tools, using automated quality estimation, automated quality assessment, and agents.
So by the time our translators, and I'm talking about our
(06:56):
internal translators, by the time the translation workflow
hits their desk, a lot of AI preprocessing steps have already taken place. Then, interestingly, back to the
UX, there is something in our industry that I suspect you
might not have heard of, which is called CAT.
Not a cat with a tail, but computer assisted
(07:19):
translation. There are so many industry memes where it is a cat, but it's not. It is a computer-assisted translation tool, and obviously we keep on enhancing our computer-assisted translation tool to plug in more AI.
Like right now, for instance, translations get a ranking of potential sentence complexity to see how much cognitive effort
(07:42):
they actually need to put into that sentence.
So the translator is shown the string with a label like, hey: works, doesn't work, good, bad, ugly, indifferent. So that's for translators.
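As a rough illustration of the complexity ranking described above, here is a minimal sketch in which a made-up heuristic (sentence length and long-word ratio) stands in for a real quality or complexity estimation model, and the resulting label is what a translator might see next to a string. The thresholds, labels and scoring are invented for illustration and are not Smartling's actual method.

```python
# Minimal sketch: label source strings by rough "cognitive effort" before
# they reach a translator. The heuristic (length + long-word ratio) is only
# a stand-in for a real ML-based complexity/quality estimator.

def complexity_score(sentence: str) -> float:
    words = sentence.split()
    if not words:
        return 0.0
    long_words = sum(1 for w in words if len(w) > 8)
    # Longer sentences and more long words -> more effort (roughly 0..1).
    return min(1.0, len(words) / 40 + long_words / len(words))

def effort_label(score: float) -> str:
    if score < 0.3:
        return "low effort"
    if score < 0.6:
        return "medium effort"
    return "high effort"

strings = [
    "Click Save to continue.",
    "The indemnification obligations hereunder survive termination of this agreement notwithstanding any provision to the contrary.",
]

for s in strings:
    print(f"[{effort_label(complexity_score(s))}] {s}")
```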
Now we have the most amazing transformation in our lead
generation and business development function.
(08:03):
And right now the effort has been cut down.
I don't remember the exact number, but let's say by 60%
because there is so much you can do with lead gen, right?
There is so much you can do with targeted mass mailers.
You can do so much with tracking whether your leads have actually been responded to or not.
So I would say that our business development function was
(08:24):
probably one of the first adopters in the industry, and definitely in our company, where we actually work with our global potential prospect base using AI.
So you already have two: you have translation, then you have lead generation. Then obviously my team, my team of data scientists, do they have any choice but to use AI?
(08:47):
So obviously I'm not even going to go there.
I would say that lately, I mean, obviously all the modeling tools, quality estimation tools, LLM as a judge, all of that is natively a part of my team's workflow.
What we introduced recently is we really looked very closely at DSPy, right? And we are pivoting from prompt
(09:10):
engineering to prompt programming, right?
I mean, and I think that's where the world is going, again, our industry or not our industry, there are only this many prompts that a human can engineer, right?
But if you give enough examples and if you are, I mean, if you are very explicit with what you want out of that prompt... Writing prompts is a part of my team's workload.
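To make the prompt-engineering-versus-prompt-programming distinction concrete, here is a minimal, library-agnostic sketch of the idea: the prompt is assembled programmatically from a task description plus labeled examples, so the team curates examples and specifications instead of hand-writing every prompt variant. This is an illustration of the concept only, not DSPy's actual API, and the task and example data are invented.

```python
# Sketch of "prompt programming": a prompt is built from a task spec and
# examples, so humans curate data and specifications rather than hand-write
# every prompt. Not DSPy's real API; illustrative only.
from dataclasses import dataclass

@dataclass
class Example:
    source: str
    target: str

def build_prompt(task: str, examples: list[Example], new_input: str) -> str:
    lines = [f"Task: {task}", ""]
    for ex in examples:
        lines.append(f"Input: {ex.source}")
        lines.append(f"Output: {ex.target}")
        lines.append("")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    Example("Save your changes.", "Guarde sus cambios."),
    Example("Log in to continue.", "Inicie sesión para continuar."),
]

prompt = build_prompt(
    task="Translate English UI strings into formal Spanish.",
    examples=examples,
    new_input="Delete this file?",
)
print(prompt)  # this string would then be sent to the model of choice
```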
(09:33):
And then obviously you have the whole programming side and our engineering team, and there, obviously, again, that's where the world is going. The world also is very well aware that AI writes code. Now, is this code always perfect? Question mark. But our engineering team is very diligent at using AI and testing AI before anything is
(09:57):
implemented. So here you are with pseudo testing, right, LQA automation. So what else would there be: coding, coding boilerplates using GitHub Copilot.
So I would say that we internally are a very AI-powered organization. There is still work to be done.
(10:17):
We're actually running an internal hackathon where we're
going to be... We polled every function in the company and said, OK, where are you?
Where are your efficiencies? Where can the efficiency be improved? And now we are going on a 90-day hackathon to see, OK, what else internally.
So that's again, eating our own dog food.
That's what we do. I love that. I'm a huge fan of
(10:40):
internal hackathons, giving people who are actually at the ground level doing the operations the ability to kind of take some leeway to try to build tools for themselves.
They're very powerful. I'm curious how you go about the build-versus-buy debate, where there are a few off-the-shelf tools. Every day there seem to be more and more, but a lot of companies with the resources will build
their own bespoke tools to accomplish certain goals.
(11:00):
Do you have a process for going through either auditing what
exists versus just building yourself, or are you partial to
just, you know, let's just build it because we know our stack better than anyone else? I would say the process...
I mean, first and foremost there's money, right?
I mean, you look at the ROI and you sometimes work backwards
from the ROI and you see, OK, I've just invested into this,
(11:23):
let's say, a model testing tool, model management and prompt management and prompt testing tool.
And you look, OK, here I am, 100 grand down.
But I have a sneaking suspicion that they might be using the same approaches, potentially the same software libraries, and potentially even the same, as I said, models as a judge. And then I was like, OK, well,
(11:45):
here we go. We can actually build it.
And that's, for instance, how we built our model performance estimation tool. Yes, there are tools in the market, but quite often... And I think we also need to talk about the democratization of AI. Everybody can access it.
And we also happen to be AI experts.
So when we see that, OK, the commercial ROI is not there and
(12:07):
we can build it ourselves, then we would just go with a build.
So I think ROI, and do we have the talent?
Are we willing to, like, you measure the T-shirt size, are we actually willing to go in there? Shall we do it ourselves?
And here we are: our model performance management suite uses the standard metrics. We see commercial solutions and we can most definitely build it
(12:29):
ourselves. So, ROI and capabilities. And I'd say, when would we make the buy decision?
I think it's again looking at internal capabilities and just
looking at, OK, is the juice worth the squeeze?
Is there something that's reasonably priced?
Is there something that's commercial?
I mean, is it something that, for instance, is pay-as-you-go or an unlimited subscription, something that
(12:52):
would take us years to build? And that's where we'd make that buy decision.
For instance, all our sales process and business development
process optimization, we definitely were very happy with off-the-shelf tools. Yeah, really: capabilities, come at it with open minds, and also be
(13:12):
very, very realistic on: shall we bother, and if we will, why?
Always got to have a justification for
it. Along the same lines, I'm
thinking, how do you know which experiment to run first or which process to prioritize first?
Is it justified by the bottleneck in the business, or do you wait until a certain engineer brings up a good use case?
What's the process for saying OK, here's the priority list,
(13:34):
let's run these experiments first?
My hypothesis, and I think it's a very well-validated hypothesis, is that innovation consists of three pieces. Piece one: state of the art, what's out there, right?
What's best in class. And we all live in the same universe.
And that's one universe where you wake up to
(13:56):
five new models a day. Right now, 10 new models a day, or versions of the same model. Like, hey, you know, we just added a couple trillion parameters. Well, OK, thank you.
We just finished our experiments with the previous version.
How fun is that? Right?
So part of it is what's state of the art, what's out there, and monitoring very closely both academic research, commercial releases and open-source releases.
(14:18):
I mean, we all read, like, I don't know, 40 TLDRs and 45 newsletters and follow, again religiously, what's happening in the Hugging Face and other communities.
That's a third, right? One third is just our own vision of where we want to be as a product and where we want to be as an organization. And that would drive, OK, these are the experiments that we need to run to cater to this product
(14:41):
vision, right? Like, for instance, we decided to introduce, for internal and external purposes, automated
language quality estimation, language quality assessment.
And LLMs are, again, not very bright when it comes to
assessing language quality. So that's where you're like,
yeah, hey, we need it. It's a pressing need.
You can look at 5% of your content or you can look at your
content across the board. So business requirements would
(15:05):
be business requirements, and vision would be one third.
And the other third would be just listening to your customers.
The customer comes to you, the customer comes to you with a
problem. And that's where we would know
what experiments to run. Now back to my initial statement
about not falling for the frenzy and not being enamored with everything, like, oh, damn, that's new, that's what I've got to experiment with. We have a very robust testing and
I've got to experiment with. We have very robust testing and
benchmarking environment. So before we embark on any kind
of experiment, we just assess the model and we'll see, is the model going to do what we want it to do?
Like, for instance, our agentic functionality: we would usually know, is it worth our while to look at something new, or are we good
(15:47):
where we are.
So, and then, just again, the market: traditionally the translation market was dramatically text-based, and now, like it or dislike it, you need to experiment with multimodality.
So that is also where the world is going, outside of our maybe a little bit myopic localization space.
Hope I answered your question. Yeah, absolutely.
(16:08):
And also I'd love to double click on the benchmarking
aspect. So it's very important for companies to have proper evals set up so that when they introduce a new model or try something new, it's not going to cause a regression. Do you have any strategies for how companies of any size can implement benchmarking to make sure that when they do an upgrade, it's actually an upgrade and not just, oh, it worked well in case A and then every other case regressed? OK, here is our world, our world
(16:31):
of global content delivery. There are what, 7,132, unless I'm mistaken, world languages. So every model change, process change, pipeline change, just multiply it by 7,132.
And then you'll find yourself in our universe.
(16:54):
And we quite often find ourselves... I mean, we're blessed with having a multilingual workforce.
So we have people from different backgrounds.
It's very easy to pick up the phone and, like, say hey, such and such, do you mind looking at Spanish and Portuguese? Or, like, I mean, I speak a few languages so we can do that as well. Our engineering team is
predominantly Eastern European. So we have those languages
(17:14):
covered. But I guess my message is: we need, as a global content, AI-powered global content delivery company, we need to think global.
We have no choice. So for us, we split
languages into language complexity groups.
We by now have accumulated enough knowledge about, OK,
(17:37):
Brazilian Portuguese is going to be a dream for this set of LLMs, whatever the task: assessment, translation or automated post-editing or anything in between.
But don't nurse the same hopes for Turkish.
So when we test models, when we benchmark models, we absolutely
benchmark across different language groups and different
(18:01):
content types. We actually are super rigid
about our golden data set. In collaboration with our
quality assessment team, data scientists and engineering, we develop a golden data set across different language groups,
different content types, different degrees of formality,
basically the golden data set that would allow you to run a
(18:23):
suite of tests. And once you've run it, you usually have a pretty good idea, OK, this is where I'm in danger
now. It's very important.
Again, I said that sometimes we ask a human to have a look.
We're also religious about direct assessment and engaging
humans to give us a point of reference.
But quite often it's a press of a button.
And here you are with your F-measures, right?
(18:45):
Precision, recall, accuracy. So it's somewhere between having a super robust golden data set and benchmarking environment where, like, you press the button, you look at a table and you see, OK, danger, no danger, red flag, let's proceed with further experimentation.
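To illustrate the kind of push-button check described above, here is a minimal sketch of benchmarking a candidate system against a golden data set, grouped by language, using precision, recall and F1. The rows, the "ok"/"issue" label scheme and the acceptance threshold are invented for illustration; a real suite would span many language groups, content types and metrics.

```python
# Sketch: score a candidate's quality labels against a golden data set,
# per language group, and flag regressions against a simple F1 threshold.
from collections import defaultdict

# (language, gold_label, predicted_label); rows and labels are invented.
golden_results = [
    ("pt-BR", "ok", "ok"), ("pt-BR", "issue", "ok"), ("pt-BR", "ok", "ok"),
    ("tr", "issue", "issue"), ("tr", "issue", "ok"), ("tr", "ok", "issue"),
]

def prf(rows, positive="issue"):
    tp = sum(1 for _, g, p in rows if g == positive and p == positive)
    fp = sum(1 for _, g, p in rows if g != positive and p == positive)
    fn = sum(1 for _, g, p in rows if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

by_lang = defaultdict(list)
for row in golden_results:
    by_lang[row[0]].append(row)

F1_FLOOR = 0.6  # assumed per-language acceptance bar
for lang, rows in by_lang.items():
    p, r, f1 = prf(rows)
    status = "OK" if f1 >= F1_FLOOR else "RED FLAG"
    print(f"{lang}: precision={p:.2f} recall={r:.2f} f1={f1:.2f} -> {status}")
```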
A few people have mentioned the golden data set, and that's becoming the norm, where you just need to know what data you're
(19:06):
actually looking to get out of the system when you put a given
input in. So when you make a change, things are looking good, all the evals are passing, what's the deployment like? Do you try to do a gradual rollout, or is it more flicking a switch at the global scale?
How does one deploy AI into their systems?
OK. Well, again, in our specific
case, I run the R&D department and we work super closely with
(19:31):
our product and engineering department.
So, whatever works... and what I'm going to say is absolutely mission critical. It works in the lab.
You start deploying it at enterprise scale, and we're, like, I mean, we're a huge enterprise-scale platform, right?
I mean, we have clients from all walks of life and clients of all,
(19:51):
all sizes. As you can see, all the huge logos on our website. So here we are: the thing works.
Yeah, yeah, go global, 5-minute latency?
Goodbye, right? Or, like, inference taking an hour. So we always make sure... Any
organization looking to deploy a successful AI implementation
into their pipeline absolutely needs to run latency and
(20:15):
inference tests and make sure that everything that runs in the
lab is actually scalable. Have a fallback mechanism and just ensure, you know, whatever it will be, queuing or multi-threading, multiple techniques. LLMs still lag.
And again, we all know that, right?
Some have mitigated it, some have not mitigated it.
(20:37):
We're blessed to be able to work with data science teams from the
most prominent research companies out there.
But first things first, check for latency.
Run your pseudo tests in the sandbox environment and make sure that either you don't introduce any delays, or your engineering, your brilliant engineering team,
(20:58):
has developed techniques for mitigating potential issues and
and deploying at scale. So that'd be number one: making sure you don't rest on the laurels if something works in the lab, right? So the first rule of deployment, basically: make sure that it's not going to introduce any surprises, and always have a Plan B, always
(21:19):
have a fallback mechanism. So that'd be that.
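As a rough sketch of the latency check and Plan B described here, this is one way to wrap a slow model call with a latency budget and a fallback path. The budget, the stand-in model functions and the fallback (a smaller model or cached result) are hypothetical placeholders, not Smartling's actual setup.

```python
# Sketch: enforce a latency budget on the primary model call and fall back
# to a cheaper path when it is exceeded. Both "models" are stand-in stubs.
import concurrent.futures
import time

LATENCY_BUDGET_S = 2.0  # assumed per-request budget

def primary_model(text: str) -> str:
    time.sleep(5)          # simulate a slow large-model call
    return f"[primary] {text}"

def fallback_model(text: str) -> str:
    return f"[fallback] {text}"  # e.g. smaller model or cached translation

def translate_with_fallback(text: str) -> str:
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(primary_model, text)
    try:
        return future.result(timeout=LATENCY_BUDGET_S)
    except concurrent.futures.TimeoutError:
        # Plan B: don't block the pipeline, serve the fallback path instead.
        return fallback_model(text)
    finally:
        pool.shutdown(wait=False)  # let the slow call finish in the background

print(translate_with_fallback("Save your changes."))
```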
But there are so many other deployment things you want to
take into consideration. Again, things work great.
How do you know whether the model is hallucinating or not?
So you obviously need the guardrails for mitigation of
potential semantic issues, right?
And that would be: is the model dreaming things up?
(21:41):
So you absolutely need to have a hallucination mitigation mechanism. And I'm not quite sure what's there yet with self-policing models.
So you do need some form of hallucination mitigation.
So you have scalability, you have hallucination.
Now, throw it all in or deploy gradually?
I mean, if you have a specific feature in your product or
(22:04):
a specific efficiency potential in your internal processes, you probably want to deploy that entire feature, right?
And test that in full. Because, I mean, what good is it going to do us if we just deployed our adaptive translation memory, right, and did not see how it works in accord with, for instance, our glossary management, glossary
(22:26):
insertion? So, I mean, services architecture, deploying by feature, but definitely not throwing the whole thing in. I would say deploy, deploy
gradually. We're an agile, agile company.
So once the tests have been completed... usually we operate in two-week sprints, and usually when, again, research has come up
(22:46):
with something, product has vetted it.
Usually engineering takes two weeks to deploy it and then test
it. So, continuous, nothing new maybe, right? Continuous, continuous
development, continuous integration, continuous
deployment. But risks, right?
Mitigate your risks, know your risks and mitigate your risks.
Exactly, we had enough risk introducing deterministic code;
(23:07):
as soon as you bring LLMs into the mix, it's just, you know, always have a Plan B, like you said.
One thing I'd like to touch on briefly:
You mentioned hallucination mitigation.
Do you find LLM as a judge is sufficient for that, or is it still going to require a human in the loop for something as
important as translation to get right?
I'd say that it all depends on the content risk tolerance.
(23:28):
And again, we are, I mean, we are all about... again, we don't fall for the wild, Wild West of AI.
We're all about measured deployment, and we have a fairly strict and fairly rigid grid of what content we can send through an AI-only pipeline and live happily ever after, and what
(23:50):
content types, for instance, we work with regulated industries.
We absolutely would say that a human would need to have a look and fact-check and check for hallucinations and anything, I mean, the usual suspects: hallucinations, biases. You
would decide based on the content type, based on the
language complexity and content risk tolerance.
(24:11):
You decide how much of a human in the loop you actually need.
And there are a lot of times where you actually
do need a human in the loop. Now, back to the 7,132.
OK, about fact checking. I'd like to check my facts on
how many world languages there actually are.
But back to that: we all do know that models tend to hallucinate
(24:34):
much more in other languages. And a lot of implementations
outside of the globalization industry are quite often either monolingual or naive. Monolingual, right, is when development and product are English-centric, and naive is: oh wow, OpenAI API, ChatGPT API, Gemini, you know, Vertex AI API.
(24:56):
Here I go. I solved all the world's
problems. And I think we are, again through testing, through correlating with the human judgments, correlating with direct assessment, we usually have a pretty good idea of how models are going to behave in a
particular language. Now having said that, we do have
(25:17):
automated AI-based checks in place that solve probably for 90% of hallucinations, of model hallucinations.
But then it's your own decision how much of the human you want to see, based on the parameters above.
But LLM as a judge, and especially, even more so perhaps, one LLM judging the behavior of another LLM,
(25:39):
it works in a lot of cases.
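For a concrete picture of one LLM judging another, here is a minimal sketch of the pattern: a judge prompt compares the source and the candidate translation, and anything the judge flags gets routed to human review. The judge here is a dummy callable so the snippet runs on its own; in practice it would call whichever model plays the judge, and the prompt wording and routing labels are purely illustrative.

```python
# Sketch of LLM-as-a-judge for hallucination checks: a second model grades a
# translation against its source; flagged items go to a human reviewer.
from typing import Callable

JUDGE_PROMPT = """You are a strict translation auditor.
Source: {source}
Translation: {translation}
Answer PASS if the translation adds no information absent from the source,
otherwise answer FAIL and explain briefly."""

def review(source: str, translation: str, judge: Callable[[str], str]) -> str:
    verdict = judge(JUDGE_PROMPT.format(source=source, translation=translation))
    if verdict.strip().upper().startswith("PASS"):
        return "auto-approved"
    return "route to human review"  # human in the loop for flagged content

# Dummy judge so the sketch is runnable; a real judge would be an LLM call.
def dummy_judge(prompt: str) -> str:
    return "FAIL: the translation mentions a discount not present in the source."

print(review("Your order has shipped.",
             "Su pedido ha sido enviado con un 20% de descuento.",
             dummy_judge))
```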
One thing, just along the lines of evals: you mentioned DSPy earlier, and I'm a fan of DSPy. I fully agree that the idea of prompt programming is going to become more important as people kind of figure out what patterns really work and what structure gets better results. How do you work with evaluating the output of DSPy, where the normal process would be, you
(26:00):
know, you give an input and your prompt, and it gives an output, and then you just rank it.
But with DSPy, there's an extra component in the middle that adds a little more twist to the pipeline.
Do you have any strategies around evaluating how DSPy is
improving performance in a system?
I don't necessarily think that we're looking at DSPy for
improving performance. I think we're looking at it more
as an efficiency tool, right? And just basically sparing the
(26:24):
time as opposed to full human prompt engineering.
Now again, we're in a very fragile space.
Linguistics, language, language is a fragile thing, right?
With human language, you really want to make sure that you deliver accurate outputs, even more so when it's translation.
So as of right now, I would say, and let's also remember, I'm not
(26:46):
the legs on the ground or pig feet on the ground, whatever
it's called. I wish my colleague who works
predominantly with DSPy were here.
But again, we usually just look at the accuracy of the output,
right? We programmed those prompts for
a specific task, and usually that's where you would measure whether the task was performed or not performed,
(27:09):
compared to a human-engineered prompt.
And this is where, again, you have a gamut of automated traditional NLP metrics, both text-based and semantic, referential and non-referential metrics.
So you can apply those to the output.
And also, again, we make a fairly strong investment into labeling and see how the DSPy output actually delivered on the task.
(27:34):
So somewhere, again, between human direct assessment and applying automated metrics, I would say that we measure that performance based on the output and based on whether the task was delivered on, was executed on or not executed on.
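As a toy version of the comparison described here (programmed prompt versus human-engineered prompt, scored with an automated text metric against references), here is a self-contained sketch. The outputs and references are invented, and the token-level F1 used below is a deliberately simple stand-in for the richer referential and semantic metrics a real pipeline would apply.

```python
# Sketch: score two prompt variants against reference outputs with a simple
# token-overlap F1, as a stand-in for a fuller automated metric suite.
from collections import Counter

def token_f1(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    if not overlap:
        return 0.0
    precision, recall = overlap / len(hyp), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

references = ["guarde sus cambios", "inicie sesión para continuar"]
outputs = {
    "human-engineered prompt": ["guarda tus cambios", "inicie sesión para continuar"],
    "programmed prompt":       ["guarde sus cambios", "inicie sesión para seguir"],
}

for variant, hyps in outputs.items():
    scores = [token_f1(h, r) for h, r in zip(hyps, references)]
    print(f"{variant}: mean token F1 = {sum(scores) / len(scores):.2f}")
```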
So Smartling definitely sounds like a company, like you said, AI-native. AI has been incorporated into a lot of
different aspects of the business.
When you talk to your colleagues or just see different
(27:55):
companies operating, what do you think slows them down from integrating AI into all these workflows?
Why do some companies adopt it very quickly and are willing to put it into the process, for that matter, while others are kind of dragging their feet and potentially getting left behind? If we look at enterprise AI
deployments, there is one factor that makes companies move very quickly, for the good or for the bad, and it's a C-suite mandate to
(28:18):
implement AI. And I was at a conference, or even more so probably a private event, not so long ago, which resembled a little bit of a support group: we're just mandated to implement AI.
What do we do? How do we implement it?
And sometimes, again, for the good or for the bad, without
(28:40):
necessarily measured deployment, a lot of companies are just forced into implementing AI into their workflows because they need to, because they have to, because everybody does.
And as we know, I don't know, it's probably year-old data,
right, that 75% of AI implementations are not
successful and 25% of AI implementations are successful.
(29:02):
I guess that's the data from a year ago.
I'm pretty sure those numbers have changed.
I think a lot of companies rush into it.
But then 7 experiments did not work, 7 implementations did not
work, but three did. So sometimes it's just a mandate
and it's like, I have to do it because I have to do it.
Some companies, I think, just have more of the innovative and adventurous spirit and are maybe a little bit less risk-averse.
(29:25):
Some companies, I would say, it's just the structure of their product: if they implement AI features into their product, just the architecture of their product, or the architecture of their platform, allows them to more easily integrate additional components, right, into their less monolithic and more service-based architecture. So I think it also has a lot to
(29:45):
do with what the platform architecture looks like.
So: mandate, less risk-averse, and the platform and the product, or whatever the solution is, allow them to integrate additional AI-based components because it's easier to do so. And I would say the companies
that are slower, I would divide it a little bit by verticals.
(30:08):
Again, I mentioned regulated industries and regulated
industries are just traditionally more conservative when it comes
to implementing new technology. I lived it with machine
translation, I lived it with text summarization, sentiment
analysis, and now AI. I would imagine that regulated industries like life sciences are just slower at implementing
(30:30):
AI, and probably for yours and my benefit, right?
I mean, they would probably, they probably should be
implementing patient chatbots or, like, adverse event analysis after thorough testing. So yeah, I think these would probably be the factors. And sometimes a company will move slowly, but they will go through a very rigid compliance,
(30:55):
regulatory, ethical AI process. They may move slower, but they
will probably land at a safer and more measured place.
Excellent. So when I first heard about
Smartling, I thought translation was one of those areas that seems ripe for AI to just completely decimate, where everyone kind of
has a translator in their pocket.
(31:15):
But the impression I've gotten, feel free to correct me if I'm
wrong, is that Smartling's not going for the general, like, everyday translation. It's going for mission critical, when you need an accurate representation of text in another language. How do you foresee different industries which are under threat from AI surviving?
Do they have to go along the same lines of working up the chain?
So instead of just going for mass market satisfaction, which
(31:37):
AI can probably solve in its own realm, but something where it's either more valuable or there's just a higher need for success,
how do you see other industries kind of surviving this AI wave?
Well, we always know that you definitely want to be a part of
the solution, right, rather than be a victim of the
solution. So I think that by now through
(32:01):
trial and error, and again, that's my thinking: '23 was, you know, damn, AI, hey, but we don't know what to do with it. '24 was probably a year of learnings and trial and error, right?
And implementation. I think by now, here we are halfway through '25, we're in a place where we've pretty much narrowed
(32:21):
down the use cases where it actually does work, right?
And we're talking about entry-level coding, right?
We're talking about, again, forgive me, people in those
industries, but we're talking about technical writing.
We're talking about SEO. For all we know, SEO as we know it may be a thing of history, right?
So there are so many industries, verticals, trades that most
(32:46):
obviously are impacted by it, because we do know by now that generative models, I mean, let's leave AGI out of the equation for now, but generative models, other ML models, do deliver. And I just mentioned a few industries, and the translation industry is one of them.
I think understand where it works, understand where it cuts
(33:10):
out the necessity for human steps, understand and reinvent the role of human professionals: that is basically it for industries maybe similar to ours, that work with text, that work with image generation, multimedia, voice generation.
(33:31):
I think, just understand: OK, this piece is now safely
taken by AI. How do I reinvent myself, and
how do I reinvent the workforce? As I said, inevitably there will be
jobs that will be gone in our industry.
There is this ongoing debate. Do I want my child to be trained
as a translator? And quite often, the answer is
straightforward. Translator is a no.
(33:53):
But do I want my child to be a multilingual prompt engineer?
Hell yeah. Do I want my child to learn to
be a program manager analyzing business intelligence delivered by AI? Yet again, hell yeah.
So we talk a lot. I personally talk a lot and
mentor students on this. I wouldn't go as far as survival
techniques, but there are so many roles for you and even if
(34:18):
the industry seems to be under threat, beef up your skill set and you'll be thriving.
I'd love to actually get into that,
that grain, you know, the human level where we are entering an
age where everything's changing faster than it ever has before.
What skills should people even focus on?
Because I love the way you framed translation versus
multilingual prompt engineer. What other types of pivots or
(34:40):
slight adjustments should people make to their education or their reskilling if they feel threatened by the oncoming AI wave? What do you feel is a
valuable skill in the next few years?
OK, so here is my own son. I hope I don't know, probably I
don't know if he'll listen to this or not.
My own son is getting his master's in data science.
But being, well, first of all, a mother, second, a control-freak,
(35:03):
concerned industry professional, I check his curriculum
religiously, making sure that they are actually being taught
what's not only relevant today, but will be relevant in a year, three, five. But no matter.
I mean, yes, and I'm fully satisfied with the curriculum,
but that aside, I think that familiarity, even basic
(35:26):
familiarity with data science is an absolute inevitability.
Understand statistics, understand the basics of machine
learning and data science. No matter which field you are
in, either you'll find yourself directly working in some form of
data science field, even in a rudimentary form, or you will be furnished with the output of AI, and you'd better be able to
(35:51):
interpret it and not just go, oh damn, the model just spat something out, I have no idea where it's coming
from. I will again go back to
the translation industry. The most sought-after unicorns are somebody who is multilingual, understands the principles of linguistics, can test and iterate, whether it's prompts,
(36:11):
whether it's RAG, whether it's whatever, wherever the content is coming from, and, most importantly, interpret the metrics that they receive. Initially, we started with a very naive... and again, I mentioned data labeling before.
Initially we started with thumbs up, thumbs down.
It's good enough, right? OK, well, it's formal in German.
(36:32):
It's not formal in German. Happy, unhappy, thumbs up,
thumbs down. But it only gets you so far.
But if there is a unicorn out there who equally understands the core principles of how the models operate, how ML operates and how LLMs operate, and is able to interpret the signals, that's an absolutely precious skill.
(36:53):
So my forecast is: get a degree in the trade where your heart is at. Wherever your heart is, pursue your dream. You don't need... I was just speaking at a linguistics conference where people are like, hey, we're in the humanitarian field, or rather the humanities, we're in the
(37:13):
humanities field. We're not born data scientists.
What do we do? Take yourself where your heart
takes you, but make sure that you augment your potential trade with the ability to understand the core principles of everything that leads to machine learning, and machine learning and data science, and be able to apply it to your day-to-day.
(37:36):
Yeah, again, quoting... I was just attending and presenting at that MIT conference, and quoting Andrew Ng again: you'd better know how to program, even a little bit, even a little bit, because you will.
Again, entry-level programming is going to be a thing of the past, but you understanding where the outputs are coming from is essential. So do what you want, augment
(38:00):
it with an additional set of skills, and there you go.
I think that's wonderful advice. People with a bio degree or in the life sciences, where they feel they're completely in the physical world, but if they just understand AI literacy, like you said, both interacting with AI as well as interpreting the output, being healthily skeptical, is I think very important.
So that's wonderful advice. Last question for me, in your
(38:21):
day-to-day life, what AI tools do you use and what do you wish
existed to help make your life just a little bit easier?
I mean, obviously the search, all of my search has migrated to Perplexity, ChatGPT. I still do Gemini summarization, and I usually have... I think at this point in time, I pretty well understand which of the models
(38:43):
will deliver the best results for a specific task.
So I would say search and analysis most definitely.
I am still very... I was talking to this professor who's like, why am I even teaching people when they can write their PhD? They can have an LLM write a pretty sufficient PhD within 5 minutes, why bother?
(39:03):
But I still like to do my own writing, maybe just to overcome
that staring at the blank page, the writer's block.
When I write articles or when I write blog posts, I definitely do, but I always just use it as the bare bones and as boilerplate. And then, obviously... I was talking to a recruiter who said that she
(39:24):
received 17 identical CVs and she was not thrilled.
I just want to make sure that I don't find myself in that space. One piece of advice, again: when
you beef up your career, there is so much happening in the
world of natural language processing, gen AI, AI in general. There are so many articles out
(39:46):
there, but who has the time? Just use summarization.
Just use summarization, extract core concepts.
And here you are. You've gone through ten articles
over one evening. So I definitely do a lot of
that. Absolutely do a lot of data
extraction and a lot of summarization.
And then again, I was speaking to a professor of ethics who said
(40:07):
that there is a new field in ethical AI, which is treating AI
addiction. It's becoming a thing.
And when I caught myself trying to write a poem about cyclists and motorcyclists in the style of Homer's Iliad, I was like, OK, well, that's a fun way of spending the evening. So I'd say, for entertainment
(40:30):
purposes only, just to see what that thing can do, equally in images and text. And obviously machine translation.
That's not going anywhere, so I'd say it's omnipresent.
It's everywhere. I just wish the UX was a little bit
less intrusive. Yeah.
And, and I think we'll get there.
And actually just on the blank page problem, one thing I've
kind of grown into is actually having the AI generate a list of
questions for me to answer to adhere to a certain structure.
(40:52):
So that way it's not actually writing anything, but it's
prompting me along the way so I can kind of keep things targeted
and structured in a more productive way.
Because I agree, like thinking is paramount to writing and we
need to keep doing it. We can't just offload everything to the AI. But Olga, this was awesome.
Thank you so much for coming on and sharing your insights.
Before I let you go, is there anything you want the audience
to know anywhere they can keep up with you?
I would say that the best thing I mean first of all I have my
(41:14):
own LinkedIn page so feel free to add me on LinkedIn and
usually the interviews and articles will always be linked there, so just add me. But even more so, Smartling is super active on LinkedIn. So if you just add Smartling, if you just follow Smartling on LinkedIn, you'd have pretty good visibility into my team's work and other teams' work.
(41:36):
If you go to Smartling's website, we, again, we post whenever there is a podcast, whenever there is a webinar, whenever somebody writes something. It's always reasonably easy to follow us. Awesome.
Well, thank you very much. OK.
Thanks so much for having me.