
July 4, 2024 • 33 mins

Send us a text

Discover the fascinating journey of AI technology from initial concept to real-world application with invaluable insights from Swathi Young, the CTO at Allwyn Corporation. Gain a deeper understanding of how AI transitions from individual use to organizational implementation and learn about the crucial importance of identifying clear use cases and business goals. Swathi shares compelling examples of how different sectors have successfully integrated AI, achieving quick wins and substantial returns on investment.

Navigate the complex landscape of AI governance and security, focusing on data stewardship, access control, and the critical issue of privacy. Hear about the challenges of proving machine learning model outcomes in high-stakes fields like fraud detection and criminal justice. We also delve into the potential of generative AI in cybersecurity, emphasizing the need for robust AI governance frameworks to ensure transparency, compliance, and responsible AI use.

Finally, explore the cutting-edge world of AI-driven wearables and video generation tools. From Meta's Ray-Ban AI glasses to health-monitoring rings and bracelets, we discuss their potential and the mixed reviews they've garnered. We address the challenges of mainstream adoption and the importance of user-centric design, as well as the ongoing debates around copyright in AI-generated content. Join us for a comprehensive look at the future of AI technology and its transformative potential.


Swathi Young LinkedIn
https://www.linkedin.com/in/swathiyoung/

Support the show



Follow TechTravels on X and YouTube

YouTube Channel
https://www.youtube.com/@thetechtravels

Tech Travels Twitter
https://twitter.com/thetechtravel

Tech Travels
https://techtravels.buzzsprout.com/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome back Tech Travelers.
Today we're diving into a topic that sits at the intersection of innovation and practical application: AI's path from prototype to production. And to take us on this journey today, I'm thrilled to have back on the show Swathi Young, CTO at Allwyn Corporation, who's also a renowned expert in artificial

(00:21):
intelligence, technology and leadership, to help kind of shed light on this topic. Swathi, thank you so much for joining us again on the podcast. It's great to have you back on the show.

Speaker 2 (00:34):
Steve, thank you so much. I appreciate the opportunity, and I always welcome digging deep into these very hot topics right now. I've had the good fortune of working on this for quite some time, and I guess I'll start off with, you know, helping our audience understand this, which is that we're really starting to see AI...

Speaker 1 (00:56):
It seems like the rate of innovation, and where it's going over the last couple of months, just seems to be so fast, and I think it's important for our listeners to just kind of understand, from a technology perspective, what it really takes to go from a proof of concept to a fully operational, fully scalable production system.

(01:16):
And I'd love to get into this here, but I want to throw this number out: there was an industry study that showed that 75% of people are already using artificial intelligence, or AI, at work, and 46% of them started using it less than six months ago, which

(01:37):
is a huge number. So I would love to dive into your thoughts around: okay, I'm a company, I'm thinking about putting in some sort of AI entity. Walk us through some things that we need to understand around this type of transformational technology, and where we should start.

Speaker 2 (01:56):
Yeah, I think the first point I want to differentiate is: a lot of people have started using AI at work, but not necessarily at an organizational level, correct? So think of the days before Salesforce.com. Every salesperson used some sort of software to track their leads and prospects

(02:19):
and convert them into sales, or an Excel spreadsheet, right. So right now, where organizations are is that employees are using their own tools to maximize their productivity, mostly ChatGPT. My friend just told me that she uses Gemini at her work. So employees are using these tools from a consumer perspective. Now, from an enterprise application perspective, there

(02:43):
are some success stories with large organizations who have moved generative AI, especially, to production, but fewer success stories, I would say, for smaller companies who have successfully taken generative AI, whether it's Hugging Face, as an open-source option, or OpenAI's APIs, into their

(03:09):
organization. We have not seen adoption at scale, so I just want to throw that caveat out there. But when you talk about AI, it's this ambiguous umbrella term, right? So there is machine learning, which started off a few years back, decades even, I want to say. Netflix did their recommendation engine

(03:30):
more than a decade ago: based on your past viewing history of movies, Netflix recommends a new show for you. That's traditional machine learning, which also falls under the umbrella of AI. Amazon recommending a product to you, that also is machine

(03:51):
learning that falls under AI. But what has changed now? The rate of change of large language models available to the public is really high. So after OpenAI's ChatGPT, we saw Gemini, we saw Mistral, we saw Perplexity, and one of our favorite tools these days is Claude. So all of these

(04:14):
have taken AI and put it in the hands of the consumers, basically, where you can easily ask a question and get responses. Now, large organizations implementing at scale are still struggling, because it's not easy, first of all, to identify a good use case that will give you a, what I want to say, a

(04:37):
quick win or return-on-investment type of situation. So for large organizations, it's still a challenge, and I would like to start with: always think of what goals you're trying to achieve, what problems you're trying to solve. And if you don't know what problems large language models can solve, you can look at industry examples.

(05:00):
So, for example, if you are in the public sector, because I was just telling you I came from the AWS public sector conference. If you're a public sector agency, there are so many use cases, whether you're the government of Australia or the US, for, you know, easing congestion of traffic, for easing congestion

(05:28):
of your air traffic control or roadways. There are some common, you know, use cases. If you look at the IRS, maybe you can think about how to detect fraud in tax filings and things like that.

(05:49):
Similar fraud detection you can also do for HHS, the Department of Health and Human Services, where, you know, the Medicare and Medicaid claims data can also be used for fraud detection. So there are very high-level use cases for every industry sector. For manufacturing, there is smart manufacturing, with inventory management and forecasting becoming better. So I would always think: begin with what problem you're

(06:11):
trying to solve and how you can leverage AI. And if you don't have an idea of what problems you can solve with AI, then look at your industry sector. There are a lot of published use cases out there, and this is where I come in to help: I call myself a technology storyteller, because I can connect the dots, and my strength

(06:35):
is understanding businesses and their business workflows, and what the best tool in the toolset is, whether it is traditional AI, supervised learning, unsupervised learning, or whether you want to use a large language model. So that would be the first step.

Speaker 1 (06:48):
It's interesting that you mention use cases. You mentioned fraud detection, intelligent searching, document processing. There's tons of data that a lot of entities are sitting on right now.

(07:11):
So let's say, for example, I'm an organization and I say: I have found our ideal use case. We've identified a specific use case we want to target, and let's just say, you know, it's maybe customer engagement, or sales and marketing analytics. From your perspective, you know, how does one get going with it,

(07:32):
kind of with eyes wide open, from getting things together to build a prototype, to moving that all the way through into production?

Speaker 2 (07:42):
Yeah, I think there are some considerations for the prototype and production which are similar. Number one: who is the one who's going to deliver that code and the algorithm? Do you have in-house experts, or do you have to bring in external entities, you know, consultants or contractors, to do

(08:04):
that work? You do an assessment of where you are; that's number one, right. And number two is that, if you already have a consulting company or a system integrator working with your organization, maybe you extend them. Or the third interesting option, which organizations most often might or might not consider, is that you can collaborate with a

(08:26):
university. Now, I'm in Washington DC and we are blessed with a lot of universities. We have American, Georgetown, my alma mater, George Washington, George Mason, so you name it, there's a lot. So you could collaborate with the universities to do your pilot.
And that's an important step, because too often organizations

(08:48):
are like, the minute they think of AI: first of all, there's a fear around it, but secondly: oh, we don't have a million dollars to invest in it. So maybe don't start with a million, but start with these collaborations, to make sure you understand what your pilot project is. And I would encourage doing a pilot as compared

(09:09):
to just a prototype or a proof of concept. Like I was saying in the other conference session, too often AI projects go to the graveyard without seeing the light at the end of the tunnel. So we would rather start a pilot. And the difference between a pilot and a prototype, in my mind, is that a prototype can be quick and dirty, saying: oh, we thought

(09:32):
this is a use case that might work, and it worked, right. Instead, a pilot is actual value to your business. So if you're taking your document processing, say in, um, in sales, maybe you are selling something in a physical store and you have paper documents, and you have, like, 10 stores.

(09:54):
Take one store, convert the physical documents into a digital format using a large language model, and do whatever automated processing you need: either you want to see the sales, or you want to see what products are selling the most, whatever the KPI and the metrics you're using it for, and do it successfully for that one location.

(10:16):
So that's a pilot; you've done it. It is still valuable, it's not throwaway work. And you said: okay, it works for this location, but what if we had 10, 15, 100 locations? How do we scale? Then we think of more infrastructure to deploy into production, which basically is not just throwing hardware or

(10:40):
cloud compute at it; it's also architecting the solution in a way that's scalable. Perhaps you have 10 locations today, but by the end of 2025 you're expanding to 100 locations. How would you have a scalable architecture? That's what you have to keep in mind when you're doing a production-ready deployment.
And the other new thing that's on the horizon is obviously

(11:02):
operationalizing it. It means you have MLOps now, LLMOps. Basically, once you move it to production, you have to keep monitoring it, because as your data changes, your output might change. Because, unlike traditional software, um, your large language model, whether it's generative AI or machine

(11:23):
learning, depends on data, and as your data changes, your outcomes might change. So you have to monitor it and make sure there's not too much of a drift from what you designed to what's in production.

Speaker 1 (11:34):
It's funny, because you mentioned the scalability of the AI entity, and kind of the pilot that you're putting together, and how that scales across an organization, right. We've kind of always thought in technology, right, that, like, the cloud is a great place for you to be able to consume resources, a consumption-based model, you can infinitely scale. But I would imagine that the problem might be, and of course

(12:00):
I'm leaning on your expertise here, around: how do you kind of maintain and monitor something that is continually growing and evolving? What is the talent and skill? Is there a talent and skills gap there? You mentioned MLOps and LLMOps. At the rate at which things are progressing so fast, it's very difficult, I would think, for most people who work in

(12:20):
operations day to day. They're now having to manage systems that are completely net new to them, and they're thinking: what? So kind of, where do you, where do you...?

Speaker 2 (12:30):
And this is where, I think, if you're a large organization, you can lean into, you know, cloud providers like AWS. Just coming back from the conference: they have introduced Bedrock and all these capabilities, but truly, it's too early to produce an end-to-end use case in the enterprise at scale

(12:52):
in generative AI. To be very frank, I know McKinsey has done an internal implementation of a chatbot across their however many thousands and thousands of employees. They use a chatbot called Lilli that uses generative AI capabilities. It's been a case study. They published the learnings, and folks have done articles on

(13:13):
it. But truly, at scale, we are still not yet there. One form of go-to-market strategy for that would be to, you know, augment your technology teams with AWS or Google Cloud, because they have the tools and are ready for a real use case to make it to production. And you're right,

(13:38):
as the LLMs expand...
So there are two ways of looking at it when it comes to LLM infrastructure. One is: most organizations will use the APIs of, say, OpenAI, so you don't need to build a large language model from scratch. You're leveraging the large language model and, like I said, you can pick something off the shelf, like an API provided by

(14:02):
OpenAI. There are APIs provided by Claude, or Perplexity, I'm not sure about Perplexity, but Claude definitely. And then you have your open-source options, like Hugging Face, that you can leverage.
So, essentially, once you decide which LLM API to use,

(14:26):
that is the best fit for your organization. On the other hand, if you are, like, a big pharmaceutical company, you want to build, maybe, your own language model, because you are building on top of your proprietary data. And Bloomberg has done that: I think Bloomberg has built their own large language model

(14:47):
using financial data. So essentially, if you're building your own large language model, then the infrastructure requirements are totally different. That's where you will have relationships with NVIDIA, get all your GPUs and things like that.
But I think, for all the rest of us, we just leverage the large

(15:08):
language model APIs, and then they also have scalability. Just like cloud providers, they are also billing on a consumption basis. It's very different the way they calculate the consumption, but it's still scaled according to your consumption.
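
As an illustration of the off-the-shelf route described above, here is a minimal sketch using the OpenAI Python client (openai>=1.0); the model name, prompt, and document text are placeholders, and the same call pattern applies to other hosted LLM APIs.

```python
# Sketch: leveraging a hosted LLM API instead of building a model from scratch.
# Assumes OPENAI_API_KEY is set in the environment; the model name and the
# document-processing prompt are illustrative, echoing the retail pilot above.
from openai import OpenAI

client = OpenAI()

def extract_sales_fields(document_text: str) -> str:
    """Ask the model to pull structured fields out of a digitized paper record."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap for whichever hosted model fits your use case
        messages=[
            {"role": "system",
             "content": "Extract store_id, date, and total_amount as JSON."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

print(extract_sales_fields("Store 7, 2024-06-30, total $412.50"))
```

Consumption here is typically metered in tokens rather than compute hours, which is the "very different" billing calculation mentioned above.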

Speaker 1 (15:28):
I think it's also important to keep in mind that some of the critical elements around building a generative AI model really involve thinking about the key components inside your AI governance framework. Right, and I know that you and I talked about this on the last podcast as well, you know, talking about the governance framework and how it's basically going to serve as

(15:50):
the guiding principles for how this thing is going to grow and scale. What do you think are some of the most critical components that everyone needs to consider inside their governance framework, regardless of where they're building it?

Speaker 2 (16:04):
Yeah, that's a great question, because governance is one of the things that, I observe, falls by the wayside; it's like an afterthought. But I think, in order for a better output and outcome, governance should be given more priority than it currently is. There are three key aspects when it comes to governance,

(16:26):
whether it is a large language model or traditional machine learning. The input of any AI solution is data. So the first thing is the whole set of governance principles around data. That includes metadata management, that includes having solid data stewardship, having data committees. And the second

(16:48):
important aspect is access control. That is so much more important. And also the data lineage, right: provenance of data is so important, especially for AI, because at some point, especially when it comes to fraud detection... So it was interesting: a few years back, I worked on a proof of concept taking the CMS (Centers for Medicare and Medicaid Services)

(17:14):
claims data to apply machine learning algorithms for fraud detection. But one of the interesting things is that, whenever CMS does fraud analysis right now, they use traditional statistical modeling, and they tell a provider, which could be a doctor, a doctor's office or even a large hospital: there

(17:37):
is fraudulent activity in your billing statements. Any of them, or all of them, can actually take CMS to court and say no. And in court you actually have to prove your statistical modeling, because if somebody is submitting 5 million bills per year and you found fraudulent

(18:00):
patterns, you're not going to manually review the 5 million bills or investigate; you're going to use some sort of statistical modeling, random sampling, as they call it. Now, with machine learning, if you're subject to a court order, how would you prove in court that, you know, using machine learning techniques,

(18:25):
this provider was deemed fraudulent? So it's very important for us, in these types of very critical situations, especially the ones where you're using it for criminal justice, or even to predict crime, like Minority Report, to be very, very careful, both as a business user and as a

(18:48):
technologist, to understand what the inputs to your model are, and what the weightage is in the model that is making this recommendation. Why is it saying this provider is fraudulent, and where did you take the data from? Did it come directly from the provider? Is it historical data? There are so many parameters you have to be cognizant about,

(19:09):
so that you're very, very, you know, careful before you give a prediction and say: okay, this is fraudulent. So, to my point, the second one is data lineage, and also how the data is being used by the machine learning model. Those governance principles are so important.
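
To make the "weightage" question concrete, here is a minimal sketch, on synthetic data, of inspecting which inputs drive a fraud classifier's decisions using scikit-learn's permutation importance; the feature names are invented for illustration.

```python
# Sketch: making a fraud model's "weightage" inspectable, so you can explain
# which inputs drove a recommendation. Synthetic data; features are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
feature_names = ["bills_per_year", "avg_bill_amount", "pct_high_cost_codes"]
X = rng.normal(size=(1000, 3))
# Synthetic "fraud" label driven mostly by the third feature.
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does the score drop when a feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```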
And the third one, I would say, is data security and privacy.

(19:31):
Obviously, anonymizing the data. When it comes to a lot of things like HR data, recruitment data, data in healthcare, they have to be anonymized. And healthcare is a very interesting case, because there are some use cases where you need to have at least the gender data, right, because of the

(19:54):
demographics and gender: there are some diseases that affect certain demographics and genders more than others. So only to that extent can you anonymize the data in healthcare instances. So I would say: the privacy and security considerations.
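
A minimal sketch of the selective anonymization described here: direct identifiers are dropped or hashed, while analytically necessary fields like gender and an age band are retained. The field names are hypothetical, and real healthcare de-identification (HIPAA Safe Harbor and the like) involves far more than this.

```python
# Sketch: pseudonymize records while keeping fields the analysis needs.
# Field names are hypothetical; real de-identification requires far more care.
import hashlib

def anonymize(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Drop direct identifiers, hash the ID, bucket age, keep gender."""
    return {
        # One-way pseudonym so records can still be linked across visits.
        "patient_ref": hashlib.sha256((salt + record["ssn"]).encode()).hexdigest()[:12],
        "gender": record["gender"],                    # kept: clinically relevant
        "age_band": f"{(record['age'] // 10) * 10}s",  # 47 -> "40s"
        "diagnosis_code": record["diagnosis_code"],
        # name, address, and exact birth date are deliberately not carried over
    }

print(anonymize({"ssn": "123-45-6789", "name": "Jane Doe", "gender": "F",
                 "age": 47, "diagnosis_code": "E11.9"}))
```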

Speaker 1 (20:11):
It's interesting, the security implications you mentioned, kind of around, you know, organizations looking at bringing in kind of the aspects of cybersecurity, you know, thinking about leveraging this. Well, look: I got a letter from my internet provider that basically said that I was subject to a data breach. I got an email from Ticketmaster just last week that says that my Ticketmaster account, with all my concert
(20:31):
says that my Ticketmasteraccount, by all my concert
tickets, was basically subject to a data breach, a hack, right. You would probably think that most organizations would jump at the chance to try to leverage things like generative AI for things like rapid security or forensic analysis, or things like that. Do we start to see this scene, with so much happening now with data

(20:53):
hacks and data leakage and things like that, of people wanting to rapidly use this without giving real thought and consideration to how certain models may or may not be suited, in terms of using Gen AI for cyber?

Speaker 2 (21:08):
I think it's the chicken-and-egg question. A lot of people are still not experimenting. There are, you know, a lot of strong use cases for using generative AI for preventing cyber attacks, right, but we don't see that. Where we are seeing it is product companies, like a

(21:30):
cybersecurity product company, incorporating Gen AI in their product. I think what needs to happen is both. Almost all product companies are trying to incorporate Gen AI in their products, because they are technology companies, they are future-thinking and they want to be competitive in the

(21:50):
marketplace. But what has to happen is, it's not enough if the product companies do it. Large organizations, like a Ticketmaster, should invest and investigate the use case. Again, back to production: maybe they are doing some prototypes and proofs of concept, but how would they take it to production and prevent this? Because this is a great example of using generative AI to

(22:13):
prevent security breaches, right. Again, I think it's a matter of time. From the consumer side, we are benefiting, getting Claude 3.5 Sonnet and all that. I think the enterprise side has to play catch-up, which usually takes longer.

Speaker 1 (22:31):
It's incredible to see this. I think there was a Gartner study that said AI investments are expected to reach $97 billion just by the end of this year alone, showing the growing importance and commitment of organizations toward AI initiatives. I found this study incredibly, incredibly funny, where it also said, at the same time, that 50% of companies have adopted AI in at least one business function, and only 20% of

(22:55):
them have been able to successfully scale it. But exactly right back to your point again: a lot of organizations are trying to figure out, what is our use case, what's our return on investment, what's the true business outcome we're looking to solve, versus just throwing something out there. But again, I can't stress it enough, as you said earlier, you know, kind of starting from the

(23:16):
beginning with building a real, true generative AI governance framework, understanding your data models, and then having, you know, transparency, with compliance and auditing capabilities as well. I think those things are key and essential. What else am I leaving out here?

Speaker 2 (23:36):
I think responsible AI is a big aspect of it, especially, like, when I talked about use cases in criminal justice. Yes, AI has the capacity to predict certain crime, maybe, based on how algorithms work, but is that the right use case?

(23:57):
And then bias is huge. If you think about use cases in recruitment, HR, even in healthcare, right, the bias can inherently seep in, because, like, for image processing, we know there have been enough research papers published where models are still not able to deal with

(24:18):
dark skin versus fair skin, and things like that. So, responsible AI: best practices of evaluating for bias. Secondly, evaluating for fairness. And then transparency of models, in that, if you're taken to a court of law, can you prove your algorithm is fair or not?

(24:39):
Without going into technical details: what are the inputs and parameters to the algorithm? What is the weightage given to those attributes and parameters with which the recommendation was made? To that extent, right. So definitely transparency, and there's a lot of technical study in the area of transparency, called interpretability of models, and so on.

(25:02):
So that's another important aspect.
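
As one concrete example of evaluating for bias and fairness, here is a minimal sketch of a demographic-parity check, comparing a model's positive-prediction rates across groups. The data, group labels, and four-fifths threshold are illustrative only; real fairness audits use several complementary metrics.

```python
# Sketch: a demographic-parity check comparing approval rates across groups.
# Pure NumPy; the decisions, groups, and disparity threshold are illustrative.
import numpy as np

def positive_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-prediction rate per group, e.g. hire/approve rates."""
    return {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}

preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # model decisions
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = positive_rates(preds, groups)
print(rates)  # roughly {'A': 0.75, 'B': 0.25}

# Flag if the ratio of rates falls below the four-fifths rule of thumb.
low, high = min(rates.values()), max(rates.values())
print("disparity flag:", high > 0 and low / high < 0.8)
```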

Speaker 1 (25:05):
Yeah, I completely agree. I think there's still a lot more to be done in this area. I know you mentioned the implications around CMS and claims denials. I think I was also reading that another provider was basically facing a class-action lawsuit. They were basically saying that claims were being denied in less than 2.3 seconds, or something like that, and a

(25:29):
lot of people were making the claim: well, listen, your AI, your algorithm, has got bias built into it, and these people are being denied valid claims that should be looked at, you know, individually, and examined that way, versus just a vast swath of them being basically automatically denied. And I think they were into the millions, in terms of the number of cases that were denied in less than a second or something like

(25:51):
that, which was incredible. So I do think that there needs to be more work and attention paid to, kind of, the algorithm, the governance, and how the bias is all built into it. Um, what are some of the cool things you're working on, or see coming up in the landscape, in the next six months around AI? What do you see as kind of the next sizzles?

Speaker 2 (26:14):
I think we are going to continue seeing all these newer versions. And there is all this generative AI; I call it the generative AI wars, but we are the benefactors. I mean, we are benefiting from all the available generative AI chatbots that are out there for consumers, and increasingly so.

(26:35):
I would say I'm a power user; there's not a day which goes by where I'm not spending multiple hours on them. Cool on the horizon are the image processing and video generation tools that are coming, especially as I played with Luma Labs AI and it was so cool. My son gave me a prompt

(26:59):
and I was able to show him a five-second video that it generated, and he really loved it. And I know Sora, which is OpenAI's video generation model, is still not publicly available, whereas Luma Labs is already publicly available. So there's going to be some very, very cool video and imagery generation.

(27:21):
And now, of course, on the flip side, there is a debate around copyrights of the data that's being used to generate these images and videos. But ultimately, it's a Pandora's box. It's already opened up; you can't put it back, but we stand to benefit. You know, I can't even imagine the kind of jobs my son

(27:45):
will have, who's only eight, because it's going to be a brave new world.

Speaker 1 (27:50):
It's so funny. Um, my twin four-year-old boys, and, uh, my youngest one of course, he's basically already overwhelmed Alexa to the point where she's basically got the red circle on it, and my wife and I have this conversation.

Speaker 2 (28:03):
We say what age?

Speaker 1 (28:04):
should we unleash him on ChatGPT? Right, because he's literally like a thousand miles an hour, right. But you mentioned adoption, and things like the wearables, and I wanted to get your opinion and thoughts, kind of as we start to round out the segment here. You know, wearables is a huge market. Recently I was talking about the Meta Ray-Ban sunglasses,

(28:25):
right, where you kind of had the AI camera that was already in there, and the commercial was like: hey, Meta, read this menu that's in a different language. Hey, Meta, what does this sign say? Because I'm a traveler in a foreign country. And to me it looked really, really cool. I thought this was a great combination of AI into

(28:46):
wearables that was affordable for the consumer; I think some of the sunglasses were around $299, somewhere around there. But then some people started kind of saying: well, listen, I tried these glasses. I went to certain restaurants that had different menus in different languages. It did not work for me. I tried to basically ask it what I was looking at. It told me a street sign.

(29:06):
I was actually looking over a bridge, so it wasn't, kind of, like it was as advertised. And I kind of wonder, you know, we talk about going from prototype into production: I mean, was this really ready for prime time? I don't know. Meta has got some pretty cool products, but what are your thoughts?

Speaker 2 (29:27):
Yeah, I think there are two aspects when it comes to wearables. I still feel that there's something to do with humans and the physical interaction with the device. There is still friction, friction that

(30:13):
would still be there, I think, for adoption of wearables, maybe something like a ring that beeps when your heart rate goes up. Those have been in the market, but again, adoption has not been so great. I think maybe the Meta glasses are not ready for prime time, but even over a longer period of time,

(30:33):
I think there has to be some bioengineering research done there about increasing adoption of wearables, and the friction there is that people experience this feeling of being uncomfortable, even though it's beautiful. It's like I'm in the movie and

(30:53):
all that, but for two hours, just so close to me, it was nerve-wracking, right. I couldn't do it; I couldn't even do it for one hour. So there is this friction part. But I think, I think

(31:17):
there is a good use case for wearables such as a ring, a heartbeat monitor, which the Apple Watch already has, but even enhanced versions for older demographics who are susceptible to falling, susceptible to heart attack, stroke, etc. So there is a market there, and I think the technology is there for such a use case. It's about prioritizing those use cases and flooding the market with a lot of different wearables.

(31:37):
Right, we only have, like, one or two right now for the glasses and others. But for glasses, personally, I think there is a frictional element; whereas for, like, a ring or a bracelet, there are a lot of other use cases.

Speaker 1 (31:52):
So your outlook on the future, when it comes to wearables, is it more positive, or is it still more skeptical, like: I'm still waiting to see how it looks?

Speaker 2 (31:59):
I'm still more skeptical. There are a lot of positive use cases that could be possible, and they have been there, but my question is that adoption is not there. I think, when it comes to the physicality of people, there is more friction to adoption. Like, you can think of, um, Elon Musk's Neuralink that's embedded in the brain, right?

(32:22):
I don't know how many will sign up for that. I, for one, will simply opt out. I'll, I'll wait.

Speaker 1 (32:28):
Uh, I'll wait a little bit longer. Again, prototype to production, right? It's like: how much has this been tested? How many case studies have you gone through? What's the rate of success? Will I die on implant, or something like that, something kind of crazy? Right, it's interesting, very exciting to see. Swathi,

(32:48):
I want to thank you again for coming on the show and sharing your insights. And again, this whole journey from prototype to production is indeed a very complex topic, but I think your approach and understanding of it really kind of helps us understand the transformational aspects of it. So again, I want to thank you so very much for joining us on the podcast.

(33:11):
Thank you, Steve. Always fun to talk to you. Awesome, and thanks everyone for listening. Until then, stay curious, stay informed and, most of all, happy travels. Thanks so much for listening to the Tech Travels Podcast with

(33:31):
Steve Woodard.
Please tune in next time, and be sure to follow us and subscribe on the Apple Podcasts and Spotify platforms.
We'll see you next time.