
June 24, 2025 26 mins


AI in home healthcare serves as a decision support system that helps professionals make better choices by analyzing data, surfacing key information, and providing suggestions while keeping final decisions in human hands.

• AI scribes can transcribe caregiver-client conversations and automatically populate required forms
• Digital assistants prepare caregivers for visits by providing quick client summaries and highlighting recent changes
• Predictive models identify patients at risk of adverse events, helping clinicians prioritize care
• AI tools must maintain the same security permissions as existing systems and comply with HIPAA regulations
• Explainable AI models help build clinician trust by showing why specific recommendations were made
• Voice technology represents the exciting future of AI in healthcare, moving from assistive to autonomous capabilities

Episode Resources:

If you liked this episode and want to learn more about all things home-based care, you can explore all our episodes at alayacare.com/homehealth360.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Naomi Goldapple (00:00):
I think, overall, using AI is just another decision support tool. So we always want the health professionals to use their best judgment and use this sometimes to do the heavy lifting, but more to pre-crunch, pre-chew a lot of the data for you, give suggestions, summarize things, make life easier, bubble

(00:21):
information up to your fingertips, but ultimately they make the decisions based on their training and their experience.

Erin Vallier (00:38):
Welcome to another episode of the Home Health 360 podcast, where we speak to home-based care professionals from around the globe. I'm your host, Erin Vallier, and today I am joined by Naomi Goldapple. Naomi has the unique experience of being both a startup entrepreneur and a seasoned business executive.

(00:58):
She is currently the SVP of Data and Intelligence at AlayaCare, where she leads a team of data scientists and engineers to leverage emerging technologies to build production-ready products that will drive present and future value for AlayaCare clients. Naomi is a data intelligence professional with a lot of

(01:18):
experience leading innovation teams, specifically in the field of artificial intelligence and machine learning, which is our topic for today. Having worked in healthcare tech and home care for the past six years, she has become recognized as a thought leader on leveraging data for better decision making, creating

(01:38):
operational efficiencies with the goal of improving health outcomes for those who wish to recover and age in the place they call home. Welcome to the show, Naomi. Thanks so much. Really happy to be here. Oh, I'm so excited to have you on. Every time I talk to you, I learn something new and I come away inspired in some way. I'm very excited for this conversation.

(01:59):
I want to start by just asking a general question about AI, because it feels like magic to some people and a little bit scary, especially as we're applying this to healthcare. So I'm wondering, how do you specifically explain AI's role in home care to customers who've never used it? They don't know anything about it.

(02:20):
Is there anything you can say to make them feel more comfortable when adopting these tools?

Naomi Goldapple (02:30):
Yeah, sure. ChatGPT has really blazed the trail for us, which has been really nice. I noticed a huge difference over the past couple of years talking to health professionals about AI compared to previous years, because almost everybody, unless you've been living under a rock, has at least asked ChatGPT one question, one prompt, over the past few years. So I feel like the general population is a lot more open to

(02:52):
using AI or using these large language models, probably more than they ever thought. It is definitely an imperfect technology. It does make mistakes, but it's amazing how people are okay with that. Now, having said that, in healthcare, we're not okay with that, and we need AI to give very reliable results.

(03:12):
So definitely one of the challenges is building the trust and making sure that everybody feels comfortable and that the security and the privacy concerns are taken care of. I think overall, using AI is just

(03:32):
another decision support tool. So we always want the health professionals to use their best judgment and use this sometimes to do the heavy lifting, but more to pre-crunch, pre-chew a lot of the data for you, give suggestions, summarize things, make life easier, bubble information up to your fingertips, but ultimately they make the decisions based on their training and their

(03:54):
experience.

Erin Vallier (03:58):
Gotcha, I like that perspective. It's not just a robot doing all the work for you. It's helping you, pulling all the information that would take you a lot of time to consolidate, and making your job a lot faster by making some suggestions and letting you make the final decision.

Naomi Goldapple (04:14):
Absolutely. In fact, we specify that it is a decision support tool.
If it makes decisions on its own, then it becomes a device, and if it's a device that makes decisions, then you actually need FDA approval, and so we stay very far from that. So it stays as a decision support tool, so that you can use it to augment the worker, but not necessarily make

(04:35):
decisions on its behalf. Then we get into a kind of messier territory.

Erin Vallier (04:39):
Gotcha. I want to expand on something that you just said about how we can use it. So are there some stories or examples of how AI is currently being used specifically in home care to improve the employee experience? And the folks I'm thinking of are like caregivers, administrators, schedulers, billers.

(04:59):
What have you seen?

Naomi Goldapple (05:01):
Yeah, all of the above. I'll go a little bit backwards in terms of what I see a lot of right now, which is very exciting, and I know it's something that we're actively building, which is really the AI scribe, so the ability to use ambient listening. So if a caregiver is going to a client's home, they can have the

(05:21):
device listen to their conversation, with an opt-in and making sure everybody's comfortable with that, and it can transcribe the visit, and then afterwards it can actually take that transcription and automatically fill in whatever reports need to be filled in. So whatever forms need to be filled in (not all, but it can fill in most of what's needed), and then the caregiver obviously

(05:44):
has the last say. They can go, they can edit, they can accept. But that is a huge time saver, instead of taking a lot of notes and having your nose in your tablet while you could be looking your client in the eye and having more interactions with them, knowing that the vital information that you want to capture during the visit is

(06:05):
being captured and can actually nicely be categorized into the proper forms at the same time. So that's a huge cost savings, and we can really see tools popping out of the woodwork offering different flavors of this in the home care space, for different forms especially.

Erin Vallier (06:21):
Cool, that's like the nurse's dream oh yeah,
absolutely.
And the quality management'sdream, because they're always
chasing the nurses to get theirnotes done.

Naomi Goldapple (06:29):
Yeah, and it's a huge time saver. So instead of having to go to your car afterwards and fill in a bunch of information, a lot of the heavy lifting, 80% of it, can be done for you, and then you can get onto your next visit really quickly. So while talking about getting onto the next visit, some other applications that could be really great: the caregivers are spending a lot of time on the road, right? So they're going

(06:52):
visit to visit, and sometimes to be able to prepare themselves for the next visit, they have to go find the information, see what happened since the last visit. Maybe it's not somebody they see all the time, so they have to go into their file and get a little more of their medical history, see what are the latest progress notes, so that they can be well prepared. And we've seen the ability to ask an assistant for a

(07:16):
client summary: give me a summary, or what changed since the last time I saw this client, and that can just bubble up to their fingertips. They can see right away a summary, read it very quickly, or have it read to them very quickly, and then they can be ready to go visit their next client. So, again, huge time savers to bring that important information up to their fingertips.

Erin Vallier (07:37):
Absolutely, and it seems like something like that
would help us deliver or producebetter outcomes, because things
that we may have missed aregoing to be bubbled up to the
service.

Naomi Goldapple (07:48):
Absolutely, absolutely. In terms of missing things, another thing is actually using predictive models, so being able to pre-chew all of the data. So for a clinician and a clinical supervisor, it's hard for them to remember everything about every client or every patient and know what happens at every visit.

(08:09):
Some are visited three times a day. There's a lot of information being collected at every single visit, so that data can be captured and fed into a predictive model that will predict things like who is at risk of an adverse event, who is at risk of a hospitalization, and those alerts can be sent out to clinical supervisors, to the

(08:31):
clinicians, to the caregivers, and they can mitigate those risks. Hopefully, sometimes not, but they can at least have that information to perhaps go see them right away, to perhaps make sure that they see their physician right away, make sure to call a family member, make sure to remove a carpet if they're prone to falls. All kinds of information that can be

(08:51):
almost like a companion that can tap you on the shoulder and say, hey, have you thought about this? Did you know that this just happened, and maybe you can do something to prevent an adverse event.
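The alert-and-reason flow described above can be sketched in a few lines. Everything here is a hypothetical illustration: real risk models are trained on historical visit data, whereas this uses made-up event weights and an arbitrary threshold just to show the pattern of scoring recent events and surfacing the reasons alongside the alert.

```python
# Hypothetical sketch: flag clients whose recent visit events push a
# simple risk score over an alert threshold, and report why.
# Event weights and the threshold are illustrative, not a trained model.

EVENT_WEIGHTS = {
    "fall": 0.35,
    "medication_change": 0.20,
    "missed_visit": 0.15,
    "wound_deterioration": 0.30,
}
ALERT_THRESHOLD = 0.5

def risk_score(events):
    """Sum (capped) weights for the events seen in a recent window."""
    return min(1.0, sum(EVENT_WEIGHTS.get(e, 0.0) for e in events))

def build_alerts(clients):
    """Return (client_id, score, reasons) for anyone over the threshold."""
    alerts = []
    for client_id, events in clients.items():
        score = risk_score(events)
        if score >= ALERT_THRESHOLD:
            reasons = [e for e in events if e in EVENT_WEIGHTS]
            alerts.append((client_id, score, reasons))
    return sorted(alerts, key=lambda a: a[1], reverse=True)

if __name__ == "__main__":
    recent = {
        "client-001": ["fall", "medication_change"],  # over threshold
        "client-002": ["missed_visit"],               # under threshold
    }
    for cid, score, reasons in build_alerts(recent):
        print(f"{cid}: risk {score:.2f}, because {', '.join(reasons)}")
```

A clinical supervisor's morning-triage view, mentioned later in the conversation, is essentially the sorted output of a function like `build_alerts`.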

Erin Vallier (09:01):
That's fantastic Real-time data fed up to
understaffed agency because,let's face it, there's a
shortage of nurses, there's ashortage of healthcare workers
in general, and we all have todo more with less now, and I
feel like this would be anexcellent tool to draw your
attention to the people whoreally need to pay attention to,

(09:22):
if you will, that they needimmediate assistance, and that's
just going to help deliverbetter care and streamline
workflows and save people abunch of time.
That's awesome.

Naomi Goldapple (09:31):
Yeah, and it doesn't even need to necessarily be real-time data. A lot of the clinical supervisors we work with, sometimes they'll just look in the morning, so it's a good way to start their day. Who is on my list, what is my cohort of patients, and let me filter by who is at highest risk, and that can help them allocate their resources for the day. So it can also help in that type of planning.

(09:53):
And maybe it's not something where you want to be poked every two minutes and have alert fatigue, but you want to use it to make better decisions and better resource allocation to those who need it the most.

Erin Vallier (10:11):
Still very exciting possibilities there.
I want to shift to talk alittle bit about security and
privacy.
You did mention that as one ofthe challenges and,
understandably, healthcareorganizations are really
cautious when it comes to all ofthese things.
It seems like healthcareorganizations always have a
target on their back, so I'mcurious how do we ensure that AI
tools are secure, private andcompliant?
Is there anything specific thatproviders should be asking or

(10:34):
looking for when they'reevaluating tools?

Naomi Goldapple (10:37):
When you're building these tools, you always
have to make sure that when youuse your EHR, not everybody has
access to all information,right?
So there's certain permissionsthat different roles have.
Those permissions should followin whatever AI tools that
you're using.
So, in the examples that I gavebefore, if I'm a caregiver and
I'm asking for a certainclient's giving me their health

(10:59):
summary, but I actually don'thave access to those patients
because they're not in my cohort, I should not be able to ask
that.
I should not be able to getthat information.
So it should really follow allof the permission rules that are
already in place.
That's definitely.
Also, you have to be carefulwith some of the tools that are
free, some of the open tools.

(11:19):
You don't want to be sharing any PHI, any personal health information, with third-party tools that are going to use that information for training their models or making their data better, because you don't want that to be in the public arena. So that's important to take care of. And you want to make sure that the HIPAA compliance is there in the vendors that you choose. You want to make sure that the data is housed securely within

(11:42):
your regions and that all of the privacy controls are there. A lot of it still applies, but I think the big difference is making sure that you're not passing information that you shouldn't be passing to these open source tools.
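The permission-following idea sketched above reduces to one rule: the assistant's data fetch must pass the same cohort check the EHR would apply, so nothing outside the caregiver's cohort can ever reach a model prompt. The cohort table, record layout, and names below are invented assumptions for illustration:

```python
# Hypothetical sketch: the assistant's data fetch goes through the same
# cohort permissions the EHR already enforces. Cohorts, records, and
# names below are invented for illustration.

COHORTS = {"caregiver-anna": {"client-001", "client-002"}}
RECORDS = {
    "client-001": ["2025-06-01 blood pressure elevated",
                   "2025-06-03 new medication started"],
    "client-002": ["2025-06-02 routine visit, no changes"],
}

def fetch_for_summary(user: str, client_id: str) -> list[str]:
    """Return a client's notes only if the client is in the user's cohort."""
    if client_id not in COHORTS.get(user, set()):
        raise PermissionError(f"{user} may not access {client_id}")
    return RECORDS[client_id]

def summarize(user: str, client_id: str) -> str:
    # Only permitted records ever reach the (hypothetical) model prompt.
    notes = fetch_for_summary(user, client_id)
    return f"Summary for {client_id}: " + "; ".join(notes)
```

The design point is that the check happens before any prompt is built, not inside the model: the model never sees data the caller could not have opened in the EHR itself.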

Erin Vallier (11:58):
That makes sense and it sounds like a lot to
consider when selecting theright tool.
The next question I want to askis more on the reliability,
because, as we know, ai istrained on historical data and
it's only as good as the databehind it.
What are some key factors thatdetermine accuracy in these
tools for home care, and how dowe make sure that our AI

(12:21):
solutions provide reallyreliable insights?

Naomi Goldapple (12:24):
If we split this into almost like
traditional AI and then reallylike these large language models
, there's a bit of a difference.
So in the traditional AI, whereyou are using historical data
to train your models, when youtrain to have some sort of
prediction or somerecommendation, there's many
things that are important inyour data sets.
You want to make sure thatyou're using enough data.

(12:45):
You want to make sure that youhave a balanced data set,

(13:09):
no-transcript, trained yourmodel for a certain level of
accuracy and you have to makesure you monitor to make sure
that, as it's ingesting new data, that the results aren't
starting to slide right, thatyou're not starting to have
decreased accuracy or precisionin your models because the data

(13:30):
may have really changed.
So you have to make sure thatyou're monitoring and also that
you use new data sets to retrainif the distribution of the data
has changed.
You might be going into acompletely new market and you
have very different datadistribution than you had in a
previous training and thereforeyour results are going to be
different.
So you have to make sure tomonitor them.

(13:51):
Now, in the world of large language models, we know there was a lot of funny stuff at the beginning when people were playing with ChatGPT and with Gemini and it was just giving some wrong answers. So that definitely can happen. You've heard the term hallucinate. They can tend to hallucinate. Sometimes, if they don't know the answer, they will just make it up, because these models, they aim

(14:12):
to please, which is great. But the models have gotten a lot better. There is a lot more reasoning that goes into the models and a lot more fact checking. So they've gotten better. One of the ways that you can make sure is that you can direct it to fetch the data from certain data sets. If you are asking it about certain patients, you're only

(14:32):
going to be basing it off of the data that is brought back from the APIs, from your particular system. So you're not asking it to go out into the interwebs and find this information. You're saying, go here to fetch that information. And you can also load up, for example, some of your standard operating procedures. So you can ask it in general, tell me how to use a Hoyer lift.

(14:54):
But if you would like it to pull back that information specifically from your organization's operating procedures, then you can load up your knowledge bases and direct it to only fetch the information from there. That's a technique called RAG, which is retrieval-augmented generation.

(15:14):
You don't just go out into the wild, but you actually go retrieve it from certain knowledge bases that you direct it to. So that's a good way to keep it in control.
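The RAG flow outlined here reduces to two steps: retrieve from a knowledge base you control, then constrain the model's prompt to that retrieved text. The sketch below is an assumption-heavy miniature: a real deployment would use embedding search and an actual LLM call, and the SOP snippets and document names are invented; it only illustrates the retrieve-then-ground shape.

```python
# Minimal RAG sketch: retrieve the most relevant passage from a
# knowledge base you control, then ground the prompt in that passage
# only. The word-overlap scorer stands in for embedding search.

KNOWLEDGE_BASE = {
    "hoyer-lift-sop": "Two caregivers must be present to operate a Hoyer lift "
                      "and the sling must be inspected before every transfer",
    "wound-care-sop": "Photograph the wound and record its measurements "
                      "at every visit",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query; keep the top k ids."""
    words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc_id: len(words & set(KNOWLEDGE_BASE[doc_id].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(query: str) -> str:
    """Tell the (hypothetical) model to answer only from retrieved text."""
    context = "\n".join(KNOWLEDGE_BASE[d] for d in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Asking `grounded_prompt("how do we use a hoyer lift")` pulls back the organization's own Hoyer-lift procedure rather than a generic web answer, which is exactly the control being described.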

Erin Vallier (15:22):
That's fascinating .
So it's a lot of ongoingmaintenance and some really good
prompt engineering.
I'm curious, though is it ashared responsibility between
the end user and the personwho's maintaining or providing
the service of that AI tool tomake sure that the models are
constantly trained Like?
How does that really work?

Naomi Goldapple (15:44):
If your provider is providing you with predictive models and they have trained them to a certain level of accuracy, absolutely, it's up to them to make sure that they're being monitored, and to continuously update them and retrain so that they stay within there. However, they usually make a data contract with the providers. So to the providers you say, okay, my model has this information

(16:08):
that it's been trained on. You have to make sure to keep capturing that information, because if you stop feeding that information, then the results aren't going to be as good. So we usually call it a data contract, and we say, okay, for a patient risk model, you have to make sure that you capture falls and hospitalizations and your diagnoses and your medical history and the visit details

(16:30):
and all that, because we've trained a model with all that data. So if you stop including that, it's going to have a negative outcome on the results of that model. So we usually have a data contract to make sure that everybody has a shared understanding of what the inputs and the outputs will be.
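The data contract idea can be made concrete with a small validation step: before records flow to the model, check that every field the model was trained on is still being captured. The field names below follow the example in the conversation (falls, hospitalizations, diagnoses, medical history, visit details); the record structure itself is a hypothetical sketch.

```python
# Hypothetical sketch of a data-contract check: before visit records
# feed the risk model, verify the fields the model was trained on are
# still captured. The record layout is an invented assumption.

REQUIRED_FIELDS = {"falls", "hospitalizations", "diagnoses",
                   "medical_history", "visit_details"}

def contract_violations(record: dict) -> set[str]:
    """Required fields this record is missing or has left empty."""
    return {field for field in REQUIRED_FIELDS
            if field not in record or record[field] in (None, "", [])}

if __name__ == "__main__":
    record = {"falls": 0, "hospitalizations": 1,
              "diagnoses": ["CHF"], "visit_details": "routine check"}
    print("contract violations:", contract_violations(record))
```

Running such a check at ingestion time is what makes the responsibility genuinely shared: the agency learns immediately when it stops capturing an input the vendor's model depends on, instead of discovering it later as degraded predictions.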

Erin Vallier (16:46):
Gotcha. So it's a shared responsibility. The provider continues to input, and the service provider of the tool continues to monitor and tweak and make sure that it's reliable. Okay, that makes sense. And I know you've played a real pivotal role in the development of AlayaCare's tools. So I'm curious, as you've been doing this development, what are

(17:09):
the biggest challenges and learnings from this process?

Naomi Goldapple (17:13):
So definitely, adoption and change management is huge. So when I think of nurses and clinicians, they're suspicious, and they should be, because these are people's lives. So a lot of times they want to know, why are they higher risk than they were yesterday? What happened? So they really want to be able to peek into these models.

(17:34):
You know, sometimes they say that AI models are like a black box: you don't know why it comes up with these answers. That doesn't really fly in this situation. You need to make your models very explainable so that you can gain their trust. That's something that's very important, and we do that in a few different ways. One is we give a lot of data to show the why.

(17:54):
So if you see that a patient is now at higher risk than they were the day before, we have little icons to show why. Maybe their medications were increased, maybe there was a fall: little icons of somebody falling, of this and that. So they can get little glimpses as to what happened. And we also give a view of what happened over the past 28

(18:15):
days, so they can really see the trend and see what were the key events that happened that actually triggered a change in risk.
We have another model that we worked on, which is really a visit optimizer, so for the scheduler to be able to have decision support to choose who is the best match for this visit, or who can fill in this call-off right away, and it

(18:38):
will then serve up: well, it should be Sally. And they're like, oh, why should it be Sally? I thought Mary would have been the best choice. But we try to give as much data as possible: it would be Sally because look how many miles she has to travel, look how many times she has seen this patient before, look, she has the skills that match. We give all that information so they can really

(18:58):
start to trust and go, oh yeah, okay, I see, that makes a lot more sense. And the last thing is really giving it the ability to even communicate. We have something that is called Notable that automatically reads all of the notes that are captured during the day by clinicians, by caregivers: overview notes, progress notes, ADL notes, visit notes. And sometimes clinical

(19:20):
supervisors don't have time to read all those notes, and sometimes there's gold in those notes, right? So we built something where the large language model will automatically read those notes and then pull out what's most important. And sometimes the clinical supervisor will be, I don't know if I agree with you. You said that was a situation of medication non-adherence.

(19:40):
I don't know if I agree with that. So we give them the ability to X that out and then say, actually, I think it was this. So they can actually participate and help us to make it better. And that gives them more sense of control, that they don't have to take 100% of what that model recommends, but they can actually choose their own.
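The "why Sally?" logic can be sketched as a transparent scoring function that returns its reasons alongside the score, which is the explainability pattern just described. The weights, field names, and the ten-mile cutoff are invented for illustration; a production optimizer would learn or tune these rather than hard-code them.

```python
# Hypothetical sketch: score each available caregiver for a visit and
# keep the reasons, so the scheduler sees why a match was suggested.
# Weights, fields, and the 10-mile cutoff are illustrative assumptions.

def score_caregiver(cg: dict, visit: dict) -> tuple[float, list[str]]:
    score, reasons = 0.0, []
    if visit["required_skill"] in cg["skills"]:
        score += 0.4
        reasons.append(f"has required skill: {visit['required_skill']}")
    if cg["miles_away"] <= 10:
        score += 0.3
        reasons.append(f"only {cg['miles_away']} miles away")
    if cg["prior_visits_to_client"] > 0:
        score += 0.3
        reasons.append(f"has seen this client {cg['prior_visits_to_client']} times")
    return score, reasons

def best_match(caregivers: list[dict], visit: dict):
    """Pick the top-scoring caregiver; return name, score, and reasons."""
    best = max(caregivers, key=lambda c: score_caregiver(c, visit)[0])
    score, reasons = score_caregiver(best, visit)
    return best["name"], score, reasons

if __name__ == "__main__":
    visit = {"required_skill": "wound care"}
    caregivers = [
        {"name": "Sally", "skills": {"wound care"}, "miles_away": 4,
         "prior_visits_to_client": 6},
        {"name": "Mary", "skills": {"wound care"}, "miles_away": 22,
         "prior_visits_to_client": 0},
    ]
    name, score, reasons = best_match(caregivers, visit)
    print(name, reasons)
```

Returning the reason list with the recommendation, rather than the name alone, is what lets a scheduler say "okay, I see, that makes sense" instead of being asked to trust a black box.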

Erin Vallier (19:59):
I like that. I think it's very important, in terms of being able to increase adoption, to give them the opportunity to help us train it, because, like you said, it doesn't always have the right answer. And then, given the visibility of where all of these answers are coming from, I think that's super important. I have another question along this line, because we did

(20:20):
develop our own in-house tools, but there are so many things that providers can choose from and sort of bolt on. I'm curious, from your perspective, how does building AI in-house compare to using a third-party solution, especially when it comes to security, accuracy and all the workflow

(20:40):
integrations?

Naomi Goldapple (20:42):
Yeah, that's a great question. Obviously, me and my team, we build things in-house to integrate directly into the platform, and I think the key is really you want to make those workflows as smooth as possible, and you want to avoid clinicians or back-office workers having to log into different systems. So you want to make sure it is

(21:03):
as smooth as possible so that it reduces the amount of time that they have to spend doing whatever it is that they do. The security is something that you have to take into consideration, especially with PHI data. With a third-party vendor, they're going to have access to that data. So you want to make sure that they are HIPAA compliant. You want to make sure that they're SOC 2 compliant.

(21:24):
You want to make sure that they have all the controls. Obviously, in-house, you can control that a bit better yourself. I think the other thing is, when it's in-house, you can really tailor it, or, as they call it, fine-tune it, to the home care domain, whereas the third-party tools can be used for many different industries and adapted. But when

(21:44):
you have it in-house, you can really tailor it to your specific needs. Then there's how quickly you want it: for somebody to build it, it could take longer; if you want to buy it off the shelf, you might be able to bolt it on a lot quicker. So it depends where you're at and how quickly you want to get going.

Erin Vallier (22:01):
Yeah, sounds like there are some pros and cons to each. But I feel like, ultimately, if it can be included in a solution that you already have, where it's embedded, fully integrated, it makes things a little bit easier and a little bit more secure, and maybe even provides some better information down the line in terms of accuracy and all that stuff. What do you think about that?

Naomi Goldapple (22:22):
Yeah, sure, and it's become so democratized with the advent of these LLMs that a lot of the models, a lot of the technology, is open source. People can play around with it. It's not as difficult as, let's say, deep learning from a few years ago, when you wanted to start integrating deep learning models, and it can be easier for people to learn in-house to be

(22:44):
able to do this. It's worth testing and worth trying out, because they are a lot of fun to work with and the costs are definitely coming down. The LLMs can be expensive. They take a lot of compute, so there are GPUs, the processors, that can be quite expensive to use. But obviously, when you're in-house, you can control things a bit more, and when it's third party, you just have to pay

(23:08):
whatever it is that they want you to pay. So there are definitely pros and cons. For sure, one thing that's a guarantee is that it is evolving extremely quickly, and the improvements are amazing, and it's a very exciting time to be integrating this technology. I think we're very fortunate to be at this period of time, when we can really take advantage of these advancements.

Erin Vallier (23:27):
And that's a nice segue to the last question I want to ask you, because I want you to give us something to be excited about. Where do you see AI going in the home care space and in healthcare in general?

Naomi Goldapple (23:38):
Oh my, it's going in so many directions. But what I'm really excited about is voice. I really find that there's so much that we can do with voice. I gave the example about the forms, being able to transcribe, but even being able to dictate progress notes. To be able to dictate a progress note and say, I want

(23:58):
it in this particular tone, and I want it in this particular language, and I want it sent to this family. This is all very exciting. And where the industry is going is not just these assistants, where you ask a question and it gives you an answer, but actually going into actions, where it can actually take actions when you tell it to, but then even

(24:18):
autonomously. So we've heard about real autonomous AI agents. To really be able to come and do a lot of those repetitive tasks with intelligence is a very exciting frontier that we're getting into: the way to use them with these large language models, where it can actually make some smart

(24:40):
decisions and execute autonomously, but making sure that it's staying within the guardrails. This is within reach now. So these are things that people are actively working on. We already see a lot of them coming out, but within the next year there's going to be a real paradigm shift in how we use software, and I think this is a very exciting time.

Erin Vallier (25:01):
I would agree. Let's do more with less. You know, and I love ChatGPT. It makes me sound much more professional and kind in my first draft of the email sometimes.

Naomi Goldapple (25:12):
Yeah, my favorite these days is Gemini 2.5. Really, really enjoying that. Yeah, awesome.

Erin Vallier (25:18):
Thank you so much for coming on to the show. This has been really informative, and I think that it's really an exciting time to be in the industry and just watch what AI is going to do to revolutionize it. So thank you for coming on and sharing some of the wisdom with us. Absolute pleasure, thank you.

Home Health 360 is presented by AlayaCare and hosted by Erin

(25:42):
Vallier. First, we want to thank our amazing guests and listeners. Second, new episodes air every month, so be sure to subscribe today so you don't miss an episode. And last but not least, if you liked this episode and want to learn more about all things home-based care, you can explore all of our episodes at alayacare.com/homehealth360 or visit us on your favorite podcast platform.