Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:21):
Welcome to another exciting episode of the Vanguards of Healthcare podcast,
where we chat with the leaders at the forefront of
change in the healthcare industry. My name is Jonathan Palmer,
and I'm a healthcare analyst at Bloomberg Intelligence, the in
house research arm of Bloomberg. We're very happy to welcome
Henry O'Connell for today's episode. He's the CEO of Canary Speech,
which he co-founded in twenty seventeen with Jeff Adams.
(00:45):
Henry crossed paths with Jeff when they were both working
at government agencies years ago. That encounter led them to
co-found Canary Speech after both had successful forays in industry.
Welcome to the podcast, Henry.
Speaker 2 (00:57):
Thank you very much, Jonathan, I appreciate the time with you.
Speaker 1 (01:00):
Well, I'm really excited to learn about Canary Speech and
what you all are doing. So why don't we start
with a thirty thousand foot view of what solutions Canary
Speech offers and where you fit into the healthcare system.
Speaker 2 (01:11):
Appreciate it very much. Let me give you a little
bit of background. As you mentioned, Jeff and I met
many years ago, nearly forty years ago. After working at the
National Institutes of Health for five years, I pursued a
career in industry. Jeff was working at another government
institution, but pursued a career
(01:33):
in speech and language. After that, he was responsible for
creating the first commercial natural language processing product, and
led the team that built Dragon NaturallySpeaking, and also
the team that built the Amazon Echo. So during his
career he contributed significantly to the foundational technologies of speech
(01:57):
and language. Seven years or so ago, when Jeff and
I got together in a bagel shop, we were talking
about what we might do together. While we have been
friends all this time, we had never actually engaged in
work together. We were at a point in our careers
where we could, and during that conversation I asked a
simple question of Jeff, who has had a storied career.
(02:22):
I asked him, what would you do if you could?
His response was, I've always wanted to apply speech and
language to the analysis of human condition and disease in
a practical way that could be used in healthcare. And
with that it started a two and a half three
(02:45):
hour conversation about what we could do if we could
apply conversational speech as a tool to augment information available
to clinical teams in the treatment of patients, and that's
how Canary got started. For the previous twenty
(03:08):
years, the application had been analyzing what words are being
spoken to provide information about disease, and
there had been some success, particularly with Alzheimer's disease, but
it was difficult to apply that in a practical way.
(03:30):
Most of us use different language based on how much
we read, what our education levels are. Language is specific
to a country and changes, so trying to get a consistent
application of it across different languages and across different
regions proved to be very difficult. However, NLP itself is
(03:55):
not based on the words we speak, but rather on
sub-language characteristics that are common across all populations. And when
you think about it, biomarkers in speech are common across populations.
Words are not. The word we use to describe a cat
is different from the word used in Mandarin to describe a cat.
(04:19):
It's describing the same animal, but it's distinctly and uniquely different.
Because of that, using language level indicators for the analysis
of disease or human condition doesn't apply. You have to
build a brand new model every time you go to
a different language. However, if you and I both have
(04:44):
the same alleles for blue eyes, we will have blue eyes.
And it doesn't matter where our ancestors were born. And
that's true across populations. What we found is that there are
elements in speech, in fact, twenty-five hundred and forty-eight
of them, that are driven by the central nervous
(05:06):
system to create language. Take the vibrational characteristics of our
vocal cords, when I get excited, they vibrate faster and
you'll get higher tones. That's true across all languages. And
so if you're analyzing things like excitement or stress or anxiety,
(05:27):
there are common elements in our speech that are independent
of the words we're speaking. And Canary Speech's approach
was to analyze those elements in speech. So,
in fact, from a five thousand foot level, every ten
milliseconds we take a slice of twenty-five hundred and
(05:49):
forty-eight different features of speech, and in a minute
we've gathered fifteen and a half million data elements,
and we analyze it in near real time,
and we correlate that to clinical diagnosis and create an
algorithm that then can independently identify a range of different
(06:10):
diseases in the human voice during conversation.
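For readers who want the arithmetic behind those figures spelled out, here is a minimal sketch. The constants come from Henry's description, a 2,548-feature slice every ten milliseconds; the function name and the simple multiplication are illustrative assumptions, not Canary Speech's actual pipeline.

```python
# Back-of-the-envelope arithmetic for the slicing described above:
# one slice of 2,548 speech features every 10 ms. Illustrative only.

FEATURES_PER_SLICE = 2_548  # "twenty-five hundred and forty-eight" features
SLICE_MS = 10               # one slice every ten milliseconds

def data_elements(duration_seconds: float) -> int:
    """Total feature values extracted from audio of the given duration."""
    slices = int(duration_seconds * 1_000 / SLICE_MS)  # 100 slices per second
    return slices * FEATURES_PER_SLICE

print(f"{data_elements(60):,}")  # 15,288,000 -- roughly the fifteen million
                                 # data elements per minute cited here
print(f"{data_elements(40):,}")  # 10,192,000 -- on the order of the twelve
                                 # million cited later for a 40-second clip
```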
Speaker 1 (06:15):
Now, for the layperson, how have you guys identified those
twenty-five hundred and forty-eight different elements? Or is
there some foundational work that it's based off of?
Speaker 2 (06:27):
There is. NLP is based on them, for instance;
natural language processing uses those elements. We've known about
them for twenty-five or thirty years. They've been reviewed
and studied, and they've been available for research forever. There
was an aha moment in this bagel shop where we
(06:48):
were having this conversation. Jeff was describing where the field was,
where it had been, the research that was being done.
I paused for a moment and said, Jeff, let me ask
you a question. He knows our children. My children, my
wife and I have five children. He knows them very well.
(07:10):
In fact, when my oldest daughter, Caitlin turned eight years old,
he wrote her a happy birthday song and played it
on the piano for her. And she's now an adult,
she's married, she has children. Not my best moment, she
moved out. I would have preferred they stayed forever. But
(07:31):
life is a process of change, and so our kids
get to grow up and they get to be independent.
When my daughter was in our house and coming home
during middle school and high school from soccer practice, when
she would walk across the room, I could tell from
how she was walking, her gait, whether it was a
(07:52):
good day or a bad day. She turned and looked
at me. Her facial expressions gave me insight into how
her day went. When she opened her mouth and started
to speak, maybe in response to how are you doing,
and her common response would be I'm fine, thank you.
I mean, this is common for all of us.
Speaker 1 (08:14):
I was thinking about that same example with my daughter.
Speaker 2 (08:17):
Yeah, and I would go, time out now, come on,
come back, so tell me how your day really was,
and we would talk. And those are the moments that
are just treasured moments, as you know. And when she
left the home and she went off to school, I
would call her a few times a week, and I
(08:39):
wouldn't have, in that conversation, any of
those physical cues anymore. I don't get to see her gait,
I don't see her facial expressions, her body language, all
the other things that contribute to an understanding that I
had in those moments when she would get home from
soccer practice. But within moments of the conversation starting on
(08:59):
the phone, and it's irritatingly accurate, I would know whether
it was a good day or a bad day, and
it had nothing to do with the words she was speaking.
And so, explaining that, I said
to Jeff, how am I doing that? And his answer
was I'm not sure. And so that caused us to
(09:23):
think about what other information was being offered up in
conversation that wasn't the words we were forming. Because I
don't have to tell you that I'm excited about what
we do with Canary. You knew that in the first
few moments of our conversation. All of that is being communicated.
(09:46):
And when you think about it, clinical team members, nurses, doctors,
nurse practitioners, the whole range, when they're sitting with a patient,
when they're speaking with a patient, they have a
broader understanding of what's going on with that patient than
simply the words that are being spoken. We call it
(10:07):
a wish, and we call it a number of other things.
But the truth is, those fifteen million features in
speech, biomarkers if you will, are communicating many things to
our machine learning tool, our brain, that are not spoken elements
(10:27):
or words that are being conveyed. It's possible that all
of the emotional words we use are really mimics of
those communications. And so from that concept, we began looking
at these features in speech, which have been known for
twenty years. They were used to create, through natural language processing,
(10:52):
the algorithms that identified what words were being spoken. We
use them instead to understand what emotions and potentially even
diseases were present. And what we have found is that
they're very rich in information. We're capable of identifying correlations
(11:13):
to a whole range of diseases. Today we analyze
for behavioral health conditions such as anxiety and depression,
stress, mood, and fatigue. We analyze for progressive neurological diseases
such as Huntington's and Parkinson's, and we're working on an MS model.
(11:36):
For cognitive health, we can tell the difference between normal
cognitive function, age-related cognitive decline, mild cognitive impairment,
and early-stage Alzheimer's. The accuracies, particularly in progressive
neurological diseases, are in the high nineties, and we do
(11:56):
this within forty seconds. Everything I mentioned can
be measured in that same forty seconds, and scores
are returned within two or three
seconds. Clinical team members can use
that as any other tool they have in their toolbox.
(12:18):
They can effectively understand a range of different human conditions
and emotions and diseases while talking to a patient. So
if we're having a conversation with an older patient and
we ask them how they're doing, I would characterize myself
as an older patient. It's our habit when asked, just
(12:41):
like our children when asked how are you doing today,
to thank the person and to explain that I'm having
a good day. Well, it may not accurately reflect my depression,
or it may not accurately reflect my anxiety level, and
it would have no indication of whether or not
(13:01):
I had early-stage Parkinson's, but our voice does, and
these biomarkers in speech that we analyze are very rich
in that information. And we found that to be true
in Japanese and in English and in Spanish, and we're
building other language models. We don't use the
(13:22):
language level and the word level, which means that in
translating a model built in English to that same model
in Japanese, we have an accelerated kind of head start:
we're able to create the same model in Japanese quicker,
with smaller data sets.
Speaker 1 (13:42):
That's fascinating. So how do you prove it out? You think
up the concept in the bagel shop, and I guess
you want to prove it out. How do you actually
do that? And how do you prove that what you're
seeing as a signal for any one of these conditions
isn't just erroneous, that the person
actually has depression or Parkinson's? Did you have to
(14:03):
do a clinical study?
Speaker 2 (14:05):
We do. We decided in the bagel shop, although
it's a longer road and a harder row to hoe,
that we would create something that would be
meaningful within the healthcare marketplace. So Canary, at its inception,
(14:25):
started down the path of being a healthcare company
and working with a range of different organizations for
the building of these models and these correlations. The algorithms
that we use for the assessment and analysis of various diseases,
(14:46):
we build those within clinical environments. So in the US,
we're working with organizations like the Harvard Beth Israel Group.
Last year we published with them. We work with them
within their neurology department, studying a range of progressive neurological diseases.
We have worked with organizations in Europe, Telehospital in Dublin,
(15:11):
Ulster Hospital in Belfast. We've worked with the National Institutes
of Health in Japan and several other clinics. We've hired
contract research organizations, CROs, to conduct clinical trials with us.
But this all comes back to what is considered the
ground truth with respect to a diagnosis, which is the
(15:34):
doctor and the clinical team and the gold standards. So
Canary compares it and builds its algorithms against the ground
truth or the current standard of healthcare. And what we
do is then gather both the diagnosis of a patient,
(15:55):
for instance, Huntington's or Parkinson's disease, and forty seconds
of their audio. And we might do that with one
hundred patients, and we would do that in different healthcare institutions,
so that we're spread out. We don't just believe, we have
validated, that accents and regional differences in speech don't interfere
(16:20):
with the accuracy of the models. But we didn't just
assume that, we proved it. We actually did controlled
studies across all fifty states with a very broad population
and a very large population. So those are the types
of things that I think when you're building a product
(16:40):
to be used in the healthcare environment, it requires a
higher level of discipline, a higher level of rigor, and
then peer review, peer-reviewed academic papers that then examine
the work and can be seen by the community at large.
So that's the kind of approach. It's one that we've
(17:03):
taken earlier in our careers, and it's one that I
think is necessary.
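As a concrete illustration of that kind of validation, here is a minimal sketch of scoring a model's calls against clinician ground truth. The cohort, labels, and binary calls are hypothetical, not Canary Speech's actual protocol.

```python
# Hedged sketch: comparing a voice model's binary calls (1 = disease present)
# against clinician diagnoses, the "ground truth" described above.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, tn, fn) over paired binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, tn, fn

# Hypothetical ten-patient cohort: clinician diagnoses vs. model calls.
diagnosis  = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
model_call = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]

tp, fp, tn, fn = confusion_counts(diagnosis, model_call)
print("accuracy:   ", (tp + tn) / len(diagnosis))
print("sensitivity:", tp / (tp + fn))  # true cases the model catches
print("specificity:", tn / (tn + fp))  # healthy patients it correctly clears
```

Sensitivity and specificity are the same quantities Henry returns to later when describing the Huntington's results.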
Speaker 1 (17:08):
So if you take a step back, you know, I
think you talked about the clinical workflows, and then I
think also on the research side. If I think about clinicians,
they're very tough to change their practice or their workflow.
Can you talk a little bit about the adoption of
your tools into the clinician's workflow. Has there been friction
(17:29):
or pushback?
Speaker 2 (17:30):
There's a curiosity and there's also a recognized need. Part
of the workflow is what we analyze in the work
that we're doing with these clinical team members. So we
will take our tools, which operate in a HIPAA-compliant,
(17:51):
HITRUST environment. We function at the same data security
level as the hospitals we're working with, so we're a
good partner for them. And then part of the development
is understanding their process.
The concept which is broadly understood now of ambient listening.
(18:16):
Companies like Nuance are ambient listening companies. They're
listening to the conversation between the doctor and the patient,
under appropriate informed consents, and then creating a textual record of
both the doctor's conversation and the patient's conversation, and they
(18:37):
use AI then to contribute to the completion of insurance forms. Well,
Canary Speech is an ambient listening partner. We don't impact
the time the clinician is spending with the patient, but simply,
very quietly under an informed consent, are listening to the
conversation consistent with data record requirements. We gather information from
(19:02):
that conversation and analyze it for a range of different diseases,
so it very nicely fits into that conversation. There also
are, because of the pandemic, adjustments that were made to
some of the CPT codes that allow for reimbursement of
that analysis using vocal biomarkers, so these institutions can not only
(19:29):
get an objective measure of a range of different diseases anxiety, depression,
cognitive health, behavioral health, but they can also apply for
and have reimbursement for that without expanding the time the
clinical team member is spending with the patient because it's
simply listening to the conversation. One of the other applications
(19:52):
of this is not just the patient's analysis, but also
for the clinical team member, because burnout is a significant
thing as well. We can also, at that same time,
measure anxiety and depression levels in a confidential way
for the clinical team member. So healthcare moves slowly,
(20:17):
as you're aware, it sure does, and they take
those cautious steps because of their commitment to
the welfare of the patient, which includes confidentiality, their health,
their understanding of the tech, and so on and
(20:37):
so forth. With the team at Beth Israel, they reported
to us that when they described to the patients they
were meeting with the ability to listen to their
voice and provide information about their disease, the
patients were intrigued with that. They thought that they were
(21:00):
on the cutting edge, you know, that what was
being applied was, you know, cool. And
we get that response often. I think we have, for
as long as we've been around, listened to people and
understood many things about them that were unspoken. I mean,
(21:25):
we do this all the time. I believe that what
we're really reading are these elements in speech that
augment the understanding that we get from words themselves.
Speaker 1 (21:37):
I have to imagine there were some maybe surprising or
unexpected discoveries when you looked at some of these biomarkers.
Do any come to mind?
Speaker 2 (21:46):
Well, you know, we try to stay in the
middle of the fairway. So I think, in progressive
neurological diseases, things like Huntington's disease, Parkinson's, MS,
they're very complex diseases that impact the central
nervous system. The central nervous system is the pipeline that
(22:10):
is the control center for how we form and articulate words.
While we expected that accuracies would be
high, we were surprised how accurate they are. I mean, in
Huntington's it's ninety six to ninety eight percent accurate. And
(22:32):
at the same time, we're looking at an individual's
presence of Huntington's disease. It's a dominant gene. Individuals with
that gene are going to get Huntington's, but the onset
of their symptoms is not well known, so it's called
(22:54):
pre-manifest of the disease. From pre-manifest to manifest disease,
we're able to see that transition with accuracies that are
astonishingly high. It's a wonderful tool for a physician to have.
It's validating, if you will, to their own understanding of
the progression of the disease. The specificity and sensitivity of
(23:16):
the tool also means that longitudinally, if a patient is
being treated, we can follow improvements in that patient over
time or a lack of improvements. So those are the
kind of things that were pleasant results of the work
that we did to see that speech could actually not
(23:39):
only identify the presence and severity of a disease, but
it could track a disease over time with an individual.
And we're doing this all on smart devices, so tablets
and phones. We capture audio within a compliant data secure environment.
But nonetheless, this can reach populations. We did a
(24:02):
study in Ireland, for instance, with Ulster Hospital where they
were serving a population of migrant people, gypsies if you will.
They're called a border population now; there were about ten thousand people.
Our technology was used to connect those individuals to the
Ulster Hospital even though they were hundreds of miles remote
(24:24):
from them, using their personal smartphones. So that allowed
the Ulster team to be connected over smartphones and
to be capturing information about anxiety and depression and other
illnesses and fatigue and things like that which they could
(24:46):
not do before. So it connects populations regardless of where
they are in the world. Most people have smartphones, most
people have access to that. Now that's exciting for us.
Speaker 1 (24:58):
There's a lot to unpack, Henry. I mean, one of
the things that I was just thinking about was, you know,
who owns the data in these examples? Does the data
go back to Canary or does it go back to
the provider.
Speaker 2 (25:11):
So generally speaking, the provider always owns the data.
When we're working with them, we establish a connection, which is,
as I mentioned before, a fully compliant connection. We function
at the same data security level as our partners do.
(25:33):
Small snips of audio, forty seconds of audio, will
come over to us. We're extracting on the order of
twelve million data elements out of that. Now, once we've
extracted that, those data elements can't be
put back together into a voice. They're features, part of
(25:56):
those twenty-five hundred plus different elements of speech. We
then analyze that for correlations to disease. Ultimately, we
build an algorithm that can then independently measure that
specific targeted disease without a baseline. So we're capable of
(26:19):
measuring anxiety and depression or Huntington's disease or Parkinson's or
whatever, from an individual the first time we meet them,
the first time we have an audio measurement of them.
Canary Speech uses that to perfect the
(26:40):
models and the technology we have. I haven't mentioned, but
we have twelve issued patents, six in the US. We've
just recently been told by the US Patent Office we
have two additional patents that will issue shortly. Congratulations, thank
you very much. We have patents in the EU, and we
(27:01):
have patents in Japan. What we found was that this
particular approach was unique, and the uniqueness of that allows
us to truly use vocal biomarkers in the analysis
of a broad range of human condition and disease. We
use those sub-language elements. Our algorithms are
(27:24):
not built from any characteristics of the words that have
been spoken. We don't use them at all. So then
from a healthcare standpoint, healthcare institutions will maintain,
according to their policies, and they differ from institution
to institution, healthcare audio records at their discretion.
Speaker 1 (27:47):
So speaking of the models, can you talk about the
evolution of them? You know, are they constantly changing as
you get more data and more signal in there?
Speaker 2 (27:57):
Sure, so we're constantly augmenting and improving models. There
are broadly speaking, probably two or three areas that we
work in. One is the obvious: additional information coming from
patients that are known to have a target disease or
(28:18):
illness can be used to improve the accuracy and even the specificity
of the models. Another form of improvement that we're constantly
doing and have rigorously done, is to ruggedize the models.
Not every environment audio is captured in is going to
(28:39):
be identical, so it'll have different density characteristics, so noise
moves around differently. It gets absorbed by ceiling tiles or rugs.
So if you're in a room with marble tile versus
a rug, noise will vibrate and move through the room differently.
(29:00):
So we actually train our models for different types of
noise characteristics as well. And then one of the
studies we routinely do is to test that across
multiple populations within the same language, so you validate
(29:22):
assumptions that accents or regional uses of language don't change
the model. So that kind of work is constant, and
we do that, and we release new versions
of models when there's a recognizable improvement in performance. And
(29:44):
then of course we do things like take a model
that we have in one area and develop it in
a new language. That's additional work. So I guess there are,
simply put, about four areas where we're constantly working. One
might be new diseases. For instance, we're building out PTSD and
(30:05):
multiple sclerosis at this time, so those models will launch
later this year. Where appropriate, we will continue to develop them.
Just like you have a blood profile that looks at
different chemistries of the human body, we see ourselves as
developing profiles for behavioral health, cognitive health. We're working with
(30:31):
the CDC right now on some protocols that would begin
to address child illnesses, things like autism and other things
like that.
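To make the noise "ruggedizing" Henry described earlier a bit more concrete, here is a minimal sketch of one standard technique, additive noise augmentation at a controlled signal-to-noise ratio. This is a generic approach and an assumption for illustration, not Canary Speech's actual training pipeline.

```python
# Hedged sketch: mixing room noise into clean training audio at a target
# signal-to-noise ratio (SNR), one common way to "ruggedize" speech models.
import numpy as np

def add_noise(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `clean` speech at the requested SNR in decibels."""
    noise = np.resize(noise, clean.shape)        # loop or trim noise to length
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12    # avoid division by zero
    # Scale noise so 10 * log10(clean_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Hypothetical usage: one clean clip trained under several acoustic
# conditions, e.g. a hard-tiled room vs. a carpeted one.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16_000)      # stand-in for 1 s of 16 kHz speech
room_noise = rng.standard_normal(8_000)   # stand-in for recorded room noise
augmented = [add_noise(speech, room_noise, snr) for snr in (20, 10, 5)]
```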
Speaker 1 (30:42):
That's fascinating. Maybe just shifting gears here a little bit.
I mean, we've spent a lot of time on the product.
Maybe we can talk a little bit about the landscape.
And I guess this is maybe a more US-focused question,
but can you chat a little bit about the regulatory
landscape from an FDA perspective, and then you touched on
it a little bit earlier, but also the lens of
reimbursement and how you actually get paid for providing these
(31:05):
models to providers.
Speaker 2 (31:07):
Sure, so you know, the pandemic was a reality
check for all of us. It forced us to think,
particularly, about the delivery of healthcare. And through that process, there
were adjustments made to the CPT codes where they expanded
(31:30):
the reach and use of them so that,
prior to FDA designation as a medical device, we could
have reimbursement for things that are assessments, things that are
providing information to help make decisions about where a patient
should next be sent. For instance, do they go to
(31:52):
a specialist? So in the area of cognitive assessments or
behavioral assessments, nonspecific, so it's not an anxiety
or depression test, but a cognitive or behavioral test for purposes
of referral, there are CPT codes that support that at
a range of different levels within healthcare and have reimbursements.
(32:17):
Canary Speech has been working with the FDA for a
number of months and has created a protocol. Just
two weeks ago, we completed a pilot on that protocol,
kind of a concept of measure twice, saw once, so
we actually ran the protocol through one hundred patients with
(32:41):
two psychiatrists evaluating the patients, and a GAD-7,
a PHQ-9, and an audio sample, and then we
will compare that information consistent with the protocol that we've developed.
We anticipate having a protocol for anxiety and depression in
(33:07):
the third quarter of this year and then submitting that
information to the FDA before December for their review
and potentially for approval as a medical device,
an algorithm as a medical device.
Speaker 1 (33:23):
Will that go through a PMA or a 510(k)
pathway? I apologize, I'm not that familiar with
how algorithms actually work their way through.
Speaker 2 (33:31):
It's a de novo track only because it's not been done before,
and so the FDA has been working with us in
the de novo track, and then once that has been established
and reviewed by the FDA, future applications can
be in the 510(k) track.
Speaker 1 (33:50):
Okay, So if we think about where you sit today,
you know, who else is working in this space? I'm sure
there's competition. I guess, what separates Canary from
the others who are doing something similar?
Speaker 2 (34:02):
I think, aside from Jeff, and Jeff is amazing,
he's not just a dear friend of mine. His contribution
to speech and language is difficult to imagine. I mean,
when one considers, you know, Dragon NaturallySpeaking
was the core product for Nuance, and
(34:27):
it's been evolving and expanding over time. But the creation
of that came when the thought that it could be
done was just a thought. And then far-field speech, which
is what the Amazon Echo is. These are just amazing accomplishments,
and there are many things on the way to that.
(34:48):
Probably the area where we differentiate most clearly is one
that I mentioned already, the patents that we have. Previous
approaches using word-based analysis did not prove to be
a practical solution in the healthcare environment. They require a much
larger sample, they are less specific, they're less accurate. The
(35:10):
approach we took was one that fit the process,
the medical processes that we were talking about before, the
flow of work. You don't have to ask someone to
read a script for five minutes. You just listen to
the conversation that the clinical team member is having with the patient,
(35:31):
which is a practical way of doing it, and it
fits into what already existed. That's what Nuance does every day,
millions of times a month, you know, and so it fit
into that process. The other thing, so the patents we
have are significant. I mean we have close to three
(35:51):
hundred issued claims. We have six pending patents. The space
was relatively wide open. Our initial patents received one hundred
percent of every claim we asked for. So that's a
barrier to others entering the space. We don't mean it
as a barrier, but we are proud of the fact
that we created something that didn't exist previously. The other
(36:14):
thing is that Canary did not look at this as
a behavioral health tool, which most of the competitors did.
We looked at this as a platform of technology that
could create the ability to analyze voice. So we have
a range. We have about twelve different commercialized models across
(36:39):
the range: behavioral health, progressive neurological diseases, cognitive health, the
childhood diseases. And when an organization and Canary begin working together,
within three months we'll have a model they're testing in
their facility. It's not three years. We're adding a half
(37:03):
a dozen new disease models every year. So we
created a platform that allows us to reach a range
of different conditions and diseases. That's really critical. We created
a team of individuals, at a scientific level, that are
from four different continents. We have individuals from Europe, from
(37:27):
the US, from Korea, with input from teams in Australia.
So we recognize that when you're building something new that's
not in the box, as broad a group of experiences
as you can get, as diverse as that group can be,
(37:51):
is probably going to benefit the creation of something brand new.
So that was a different approach. We weren't a spin-off
of a university, with a very qualified,
very capable, but generally pretty stern mindset. You know, we
(38:12):
come from a range of different experiences. A couple of
our individuals have as much as ten or more years
working with Nuance, and so we had that experience in
our genome as well. And then of course we had Jeff.
And I don't speak of Jeff like he's Hulk or something,
but he definitely brings it. His ability to think outside
(38:36):
the box has been a lifelong experience for him.
Speaker 1 (38:40):
You know, one of the things you touched on was
the platform, and I guess the commercialization. Can you chat
a little bit about your go-to-market strategy, and
what does that look like? You know, how do you
sell into providers and what does that look like from
a business perspective? I guess how do you get paid?
Speaker 2 (38:55):
So currently we have some neurological clinics and behavior clinics,
and healthcare institutions, that use our product and
receive CPT code reimbursements. We generally get paid between a
dollar and two dollars per assessment. Reimbursements range from forty
dollars to one hundred dollars per assessment.
Speaker 1 (39:19):
It's episodic as opposed to software as a service, that
sort of model.
Speaker 2 (39:24):
Yeah, you know, it would probably fall in
the area of software as a service, but we
charge for an assessment. So
if we capture audio and we process that, we return scores,
and those scores come back during the time they're having
a conversation with the individual. It helps direct the conversation,
(39:46):
if you will, prior to an FDA medical device approval.
But it's still going to be a tool in the toolbox.
Our intent is to provide information to clinical team members
in the decisions they're making concerning the health and care
of a patient. Those are the primary ones. We are
looking to roll out in other areas. We are in
(40:10):
the contact center business. In Europe, for instance, we've partnered
with NTT. There was a new regulation, from
a Consumer Protection Act, that required a risk
assessment of individuals buying products online. Well, Canary's ability
(40:32):
to assess behavioral and cognitive health was square in their toolbox.
So we have launched the consumer health applications
in Europe with that.
Speaker 1 (40:46):
You know, we've talked a lot about
where you've been. Where do you want to
be in five years?
Speaker 2 (40:53):
Well, the question I asked Jeff in that bagel shop,
when I said to him, what do you want to do?
I was asking a question of someone who I know
very well. You know, I wasn't going to hear from
him I want to go to Disney, which he in
fact really enjoys. I knew the answer was going to
be about what impact my life could have on people,
(41:19):
and I knew that that's the answer I was
going to get, and the specifics were around applying what
I've done with my life to make a difference in
the lives of others, not just you know, a product
that is convenient, but something that could make a lasting
(41:40):
change in the lives of individuals, which is what we all
see in healthcare. The other huge thing is that healthcare
should be ubiquitously available to anyone. Think of the talent of a
doctor at Mass General, or Harvard's Beth Israel, or
any of the hospitals we know.
(42:04):
If we can serve rural populations better, that's going to
be augmented by having objective information about that patient who
doesn't happen to be in your clinic with you personally.
That's what Canary can do. Canary can provide objective information
about the welfare of an individual, the health of an individual,
(42:27):
regardless of where they live in the world. And we
have been in conversations with WHO and their efforts in
Africa to bring technology solutions to populations that are underserved
from a practical standpoint, and all those are practical with
(42:47):
this type of technology. But from a practical standpoint, we
don't have enough clinically trained people, doctors, nurses, nurse practitioners,
everyone else, for the nature of the populations that we have.
We're becoming an older population, we're living longer, our needs
(43:08):
are changing, they're more demanding on healthcare. Having tools that
can help us reach broader populations with objective information rather
than subjective, where you have to sideline
the person to take a subjective test. Just do that
while we're talking with them, and you could do
that before they arrive at the clinic, or you could
(43:32):
do that in a telemedicine call. So Canary's hope was
to provide a tool, a technology that could augment both
the reach and the quality of the care that was
being provided.
Speaker 1 (43:49):
Well said. I was going to ask you where
you think voice will go over the
next five to ten years, but I think you answered
that in your vision there. One of the ways I
like to wrap up the conversation is to ask the
guests about their personal life or a lesson they've learned,
whether it's in business or something they learned as a child.
(44:09):
Maybe it's a book you've read. You know, what guides
you in your life, and maybe what principles do you
try to apply to what you're doing at Canary and
maybe what you tell your team.
Speaker 2 (44:21):
You know, whenever you're working in a company a startup
and it's a small group of individuals, you have both
the benefit and the challenge that everybody counts, that everybody
you're working with is making a difference. And a story
that I have told our team is one I heard told
(44:46):
one time. The speaker was relating a story about
horse pulls, and if I remember correctly, it was in
the New England States where you have a team of
four to six very large horses and they have a
load on them that they have to pull, and inertia
(45:10):
at rest is what they have to overcome. Of the
six horses, if five of them hit the yoke, the
load doesn't move; all six horses have to hit the
yoke as one. That alignment, that discipline, is
(45:31):
hard to have. In a smaller company, it's easier. And
what I tell our team is that what we're trying
to accomplish is like that load. And if you extrapolate
that same analogy, that's also true of our healthcare system
right now. And I said, if we're going to change
(45:56):
what's happening in the world in a positive way, every
one of us has to act in unison with the
other, and we need to hit that load like that. Now,
once we get that wagon, that load, moving, the horses
will pull it progressively faster. What I have said to
(46:18):
our team is they will always find me to the
right of them, yoked equally with them, and that the
greatest joy I have, truthfully is that I'm yoked with
them and they're yoked with me. I firmly believe that
(46:41):
when you're trying to accomplish something, like we spoke about with de novo,
when you're trying to accomplish something that hasn't been done before,
you have to believe you can. And I tell our
people believe you can get the next step done and
(47:02):
we'll finish the trip.
Speaker 1 (47:05):
That's a very powerful lesson. Thank you so much for
sharing that. I think we could all use some of
that in our lives, and particularly our healthcare system could
use us all pulling together for sure. So I think
we'll wrap up here. Henry, it's been a fantastic discussion.
I can't wait to see what you guys do next.
That's Henry O'Connell, CEO and co-founder of Canary Speech.
(47:25):
Thank you so much for joining us for our latest episode,
and please make sure to click the follow button on
your favorite podcast app or site so you never miss
a discussion with the leaders in healthcare innovation. I'm Jonathan Palmer,
and you've been listening to the Vanguards of Healthcare podcast
by Bloomberg Intelligence. Until next time, take care.