
August 18, 2021 • 18 mins

On this episode of the Informonster Podcast, Charlie is on the road talking about the relationship between clinical lab data and interoperability. He talks about LOINC, as well as the use of lab data in AI and machine learning, and some of the challenges faced in these applications.

Contact Clinical Architecture

• Tweet us at @ClinicalArch
• Follow us on LinkedIn and Facebook
• Email us at informonster@clinicalarchitecture.com

Thanks for listening!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Intro (00:10):
(inaudible)

Charlie Harp (00:10):
Hi, I'm Charlie Harp.
This is the Informonster Podcast. Now, on today's podcast, I'm doing something a little bit different. I'm recording this podcast as I drive across the lovely desert between Grand Junction, Colorado, and Los Angeles, California. So right now I believe I'm in Utah.

(00:30):
The view is breathtaking. I'm helping my middle son relocate to sunny Los Angeles from the heartland of, uh, Indianapolis, Indiana. Now, today's podcast, what I wanted to talk about is lab data interoperability, clinical interoperability specifically,

(00:53):
because there's a lot of activity right now with SHIELD, and LIVD, and all these things that are happening when it comes to the ability to get lab results and, uh, and utilize them in an interoperable fashion at a large public health level, which is similar to an enterprise level, but obviously on a much

(01:13):
grander scale. So first let's talk about, once again, what we mean by interoperable. Interoperability can mean a lot of things. It can mean semantic interoperability, where you're normalizing to a standard normative terminology like LOINC. It can mean, um, syntactic interoperability, where you're taking a message and being able to format it in a way that you can

(01:38):
unpackage it and make use of the data. Canonical interoperability, where the data is logically organized in a way that makes sense so you can use it, which is slightly different than syntactic. And of course, physical interoperability, which is getting the information from you to me, which should be the easy part, but when it comes to

(01:58):
faxing and things like that, obviously, you know, I can get it to you, but it's still not physically interoperable if it's not actually data, but I won't climb up on that particular high horse. So, um, now that we have kind of a rough, high-level definition of general interoperability, let's focus in on labs. Now let me start out by saying I'm not a clinician.

(02:19):
I'm not a subject matter expert when it comes to the clinical aspects of lab data. I am just a simple country programmer who's been working with lab data since 1987. And no, I didn't do that while I was in vitro. I, uh, I was a functioning human adult. I know, I know. I have... Despite my, uh, youthful exuberance, I've been doing this

(02:43):
for a while now. So, um, lab data can be lumped into buckets, and let's start by taking microbiology and things that are like microbiology out of the picture, and let's just focus on analytical lab. So you might think that analytical lab is pretty straightforward when it comes to interoperability. If you can normalize to a normative terminology, um, you

(03:07):
should be able to leverage that data. The challenge with analytical labs is a little more esoteric than that. And by the way, if anybody listening to this podcast wants to chime in or have another podcast to get into it in detail, or to debate certain topics with me, I love that kind of thing. I'd be happy to do it. And, you know, if I'm, if I'm incorrect or if I'm off base,

(03:29):
I'm the first person to admit that, if provided evidence to that point. So, um, when you think about clinical interoperability for labs, one of the things we have as an advantage is we have LOINC. LOINC has been around a while. It's very comprehensive. It has essentially, um, I think 14 axes that are...

(03:50):
Some axes have multiple axes kind of hidden within them. But the bottom line is there's a certain amount of specificity, um, and granularity there. They've put a lot of work into it, it's been around a long time, and it's a well-established standard for reporting and, uh, and providing kind of a reference anchor for

(04:11):
lab analytes. Especially if we stick with, like, general analytical lab; things like chemistry, hematology, things like that.
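To make the axis idea concrete: a LOINC term's fully specified name breaks down into a handful of major parts. Here's a minimal sketch in Python using a real LOINC code (2345-7, serum/plasma glucose); the dictionary layout is just an illustration, not a LOINC API.

```python
# A minimal sketch of how a LOINC term decomposes into its major axes.
# LOINC 2345-7 ("Glucose [Mass/volume] in Serum or Plasma") is a real code;
# the record layout itself is illustrative only.
loinc_2345_7 = {
    "code": "2345-7",
    "component": "Glucose",  # the analyte being measured
    "property": "MCnc",      # mass concentration (mass per volume)
    "time_aspect": "Pt",     # point in time
    "system": "Ser/Plas",    # specimen: serum or plasma
    "scale": "Qn",           # quantitative result
    "method": None,          # no specific method constraint
}
```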
So you'd think if we have that hurdle overcome, of having a common, um, normative terminology, that we'd be in great shape. And you're kind of right because LOINC provides a lot of value,

(04:33):
and we've put a lot of effort over the last decade into getting people to try to leverage and use LOINC. And don't get me wrong. There are some challenges because, you know, LOINC goes to great lengths to be comprehensive, and so sometimes the thing you're mapping to is very specific. And one of the challenges that people have struggled with is

(04:53):
kind of the ontological one around LOINC, and, you know, what if I want to roll things up clinically? And what if I want to group things and do things of that nature? And that's not what LOINC is, at least not today. It has an ontology, but they don't really recommend using it, at least last time I checked, and it's really just a way to kind of decompile the axes.

(05:15):
But even in that, in that unofficial ontology that Regenstrief provides with LOINC, it's still pretty cool. There's a lot of really cool things in there, and if you've never taken a look at it, um, it's worth looking at for the synonymy and all the other cool little nuggets that are hidden within that, and we could probably have a podcast where we

(05:35):
talk about that. If you'd like us to do that, let me know. So when it comes to lab data, there are different people with different opinions, and I'm just coming at this from a pragmatic, analytical, engineering point of view because, in the olden days, and maybe today, when I get lab data from somewhere else, my

(05:55):
number one concern is I want to be able to take that lab data and I want to just display it to the clinical person that is trying to make a decision about the patient. So a lot of the mapping that happens in EHRs today, when it comes to lab data, external lab data, is I want to get it on the chart so that the provider can see it and factor that into

(06:17):
their calculus when they're caring for the patient. But I think one of the things that's happening in public health, and in enterprise health for lack of a better term, is, um, we're trying to use that data to do analytics and look for patterns and to do things like AI and machine learning. And the challenge is, if you're super granular with some of that

(06:41):
data, well, you can't combine that data. You can't turn that data into information in motion, which, um, when you start looking at, especially, this quantitative data as a vector, or you start trying to use it for things like inferencing or other reasoning use cases, it can be challenging

(07:02):
because you have to kind of resort to value sets, or you resort to, "If it's this or this or this or this or this," because you don't have a way to reasonably combine it into a single compatible thing based upon the use case you're presented with. And there are a couple examples of this. So you might normalize a lab test, say for SARS-CoV-2

(07:26):
positive or negative, and you have that test, but one of the things we may not have is a standard way of resulting that test. In fact, I said in a previous podcast that, you know, we found there were 74 ways to say "positive." And of course that's a challenge, so the standardization of things that are not quantitative, that are ordinal, um, can be a challenge

(07:47):
because then you have to say, "Well, if they say this, this or this or this or this," and that makes building the rules and making sure they're going to function appropriately a challenge.
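To make that concrete, here's a minimal, hypothetical sketch of normalizing free-text qualitative results before any rule sees them; the synonym table is invented and far smaller than a real, governed one would be.

```python
# Hypothetical sketch: normalize free-text qualitative results to a
# standard ordinal value before any rule evaluates them. The mapping
# entries are illustrative; a real table would be far larger ("74 ways
# to say positive") and maintained like any other terminology asset.
RESULT_SYNONYMS = {
    "positive": "POS", "pos": "POS", "detected": "POS", "reactive": "POS",
    "negative": "NEG", "neg": "NEG", "not detected": "NEG", "nonreactive": "NEG",
}

def normalize_qualitative(raw: str) -> str:
    key = raw.strip().lower()
    # Fall back to an explicit "unmapped" bucket rather than guessing,
    # so downstream rules never silently misfire on an unknown string.
    return RESULT_SYNONYMS.get(key, "UNMAPPED")

assert normalize_qualitative("Detected") == "POS"
assert normalize_qualitative("faintly fuchsia") == "UNMAPPED"
```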
Now, when it comes to quantitative results, you have a similar problem.

The first problem is (08:03):
Are they the same result unit?
So, you know, you can have things that have properties where the property is, say, mass concentration, but the people doing the testing might use a different unit of mass concentration. Now, LOINC puts a lot of energy into this, and they've kind of established these properties, but most, uh, laboratories that do testing

(08:26):
don't really deal with the concept of property because it's inferred based upon the unit. So I have a lab test, and I'm gonna say that it's, you know, nanograms per deciliter. I'm not going to necessarily point out to you, "this is mass concentration." I'm saying, "here's a unit." So if I've got one mass concentration unit, mass over volume, and you've got

(08:50):
a different one, let's say (in) your test, you're doing it in pounds per gallon, which I know pounds is weight and not mass, but I'm giving a hyperbolic example. At some point you have to have a mechanism for taking the tests, which for all intents and purposes might be clinically equivalent, and bringing them together by converting the value

(09:12):
to a common base unit; in this case, let's say it's nanograms per deciliter. So the first stumbling block, when you go to combine quantitative lab results, is this stumbling block of getting them to a common base unit so that you can analyze them, trend them, and graph them using the same unit.

(09:33):
And that's pretty straightforward.
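As a rough sketch of that base-unit step, assuming mass-concentration results normalized to nanograms per deciliter: the conversion factors are straight unit math, but the table and function are illustrative, not any particular library's API.

```python
# Minimal sketch: convert mass-concentration results to a common base
# unit (here ng/dL) before trending or graphing them together.
TO_NG_PER_DL = {
    "ng/dL": 1.0,
    "ug/dL": 1.0e3,  # 1 µg = 1,000 ng
    "mg/dL": 1.0e6,  # 1 mg = 1,000,000 ng
    "g/L":   1.0e8,  # 1 g/L = 10^9 ng per 10 dL
}

def to_base_unit(value: float, unit: str) -> float:
    try:
        return value * TO_NG_PER_DL[unit]
    except KeyError:
        # An unknown unit is a mapping problem, not a math problem;
        # surface it instead of guessing.
        raise ValueError(f"no conversion for unit {unit!r}")

# Two "clinically equivalent" results arriving in different units:
print(to_base_unit(0.5, "mg/dL"))  # 500000.0 ng/dL
print(to_base_unit(5.0, "ug/dL"))  # 5000.0 ng/dL
```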
The other thing that's a little more esoteric is the method of the test because, you know, in some cases, the lab test itself, how it's performed, and maybe even with the same unit, you might interpret the result differently. That may not always be the case, but when it is the case, it's

(09:54):
important. So another thing you need to look at is, well, does the method matter in how I'm clinically evaluating this or how I want a reasoning algorithm to clinically evaluate this? And I'm really curious, I haven't spoken to a subject matter expert about this, but as an engineer, as somebody who,

(10:16):
you know, understands a fair amount of, uh, science, one of the things I run into every now and then is where people say, "Well, you can't combine these tests, the results, if they have different reference ranges." Now, I tend to believe that when it comes to a lab test, there are things that are contextual
(10:38):
and there are things that arenot.
Things like what were theyeating, what time of day was it,
what's the age of the patient,what's the weight of the
patient?
Those things are all relevant.
They're all relevant to thecontext of when the test was
performed, but the result itselflives outside of that.

(11:01):
Now, you might decide that, "I'm not going to combine things that have different contexts," but if I do a, um, I don't know, an albumin test on serum, and I get a result, regardless of the context, you could argue that the result is the result. If it's the same unit of measure, the result is the

(11:23):
result. The context might factor into how I'm interpreting it, but the result is the result. It's this number for this unit. Now, um, when it comes to reference ranges, my interpretation of a reference range is that reference ranges are used by the people performing the test to decide

(11:45):
how they're going to flag the test as normal or abnormal, or, you know, low (or) high panic values. And it might be based upon how they're looking at the tests relative to how they've calibrated the instrument that they're running it on. And of course, reference ranges often are also specific to the

(12:05):
age, and gender, and possibly, you know, comorbidities or other factors going on with the patient. But the result is the result. And so the question is, if I get two lab tests from two different places and they have different reference ranges for the same patient context, should I combine or not combine those

(12:27):
tests, the results, into a single, um, a single vector? Now, I'm not gonna, you know, put forth an answer to that question. I want, I'm really curious what people think... Well, maybe I will put forth an answer to that question. I'll say that, to me, I would think it's legitimate as long as

(12:48):
the context is the same. And let's say there are no special circumstances. There's no time aspect. There's no, you know, there's no post-dosage information. It's just a regular, run-of-the-mill test, same type of instrument, or same methodology, let's say, um, same

(13:11):
specimen type, but I'm getting it from two external labs, and let's say it's a week apart. And the labs, for the same age and gender, have different reference ranges. Now, this is kind of a thought problem. A hypothetical question.

(13:33):
If the one lab's reference range says the value is low, and the other lab's reference range would say that it's not low... So let's say one of the lab results is normal for both, in both labs' reference ranges, but it's low.

(13:53):
The other result is low for the first reference range and normal for the second reference range. I know I'm making this really complicated without a whiteboard, but my question is, can I combine those results? I would say the answer is yes, I would be able to combine those results if, you know, I believe that both

(14:15):
reference labs, both external sources, are valid and they're doing a good job testing. They might have different interpretations of that result, but the result is the result. Because if the answer is, "no, I can't combine those results," then it's almost as if you can't ever combine results, you can only combine...

(14:36):
You can almost evaluate everything as an interpretation of a result, and everything becomes an ordinal result at that point. We say it's low, we say it's high, which makes the whole concept of quantitative analysis and lab results kind of squishy, if you can't combine those results.
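One way to frame that distinction in code: keep the quantitative value as the thing you combine, and treat each lab's reference range as producing only an interpretation layered on top. A hypothetical sketch, with invented values and ranges:

```python
# Hypothetical sketch: the quantitative value is what gets combined into
# a single vector; each sending lab's reference range only produces an
# interpretation (flag) of that value. All values and ranges are invented.
def flag(value: float, low: float, high: float) -> str:
    return "L" if value < low else "H" if value > high else "N"

results = [
    # (value in a common base unit, sending lab's reference range)
    (3.4, (3.5, 5.0)),  # lab A's range flags this value low...
    (3.4, (3.2, 4.8)),  # ...while lab B's range calls the same value normal
]

vector = [v for v, _ in results]                       # combinable: [3.4, 3.4]
flags = [flag(v, lo, hi) for v, (lo, hi) in results]   # ['L', 'N']
print(vector, flags)
```

The result is the result; only the flags disagree, which is the interpretation layer, not the data.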

(14:56):
I think this is a relevant question because the more we move down towards this idea, when you look at initiatives like SHIELD and LIVD, of being able to combine things into a continuum, into a set of vectors relative to a patient or a collection of patients, there's this idea that I have to believe

(15:17):
that I can trust the result. I have to believe that I can trust the quantitative result in context and make decisions based upon those quantitative results in context. If I can't, then the whole foundation of public health is in question, I would think.

(15:40):
Or we have to more strictly mandate things like reference ranges and how we calibrate lab instruments. So, I think that if we can agree that we can assume that the results are right or correct, and we assume that we

(16:01):
can do base unit conversions, and we can decide which contexts cause things to fall in or out, then we could establish a meaningful way to combine data so that we can do analytics at scale. And I think those are some of the things we need to be able to do, to do some of these public health initiatives that I think,

(16:21):
um, as a, as a nation and as an enterprise, those are the things you got to kind of get your arms around so you know that you can trust, normalize, and meaningfully leverage this data that you're collecting from your partners out there doing work in the field. All right. So that's kind of what I wanted to talk about on this edition of

(16:45):
the Informonster Podcast. As I make my way through the desert, um, I appreciate you guys tolerating whatever road noise came through, if I can sneak this past the people that publish the podcast. Enjoy this little personal touch. Um, I look forward to your feedback. I would love to have a panel podcast where we talk about this

(17:07):
very topic. So maybe I'm kind of poking the bear with a stick a little bit to get some people to come out of the woodwork and talk with me about this. Because I think, you know, in public health, as we look in the rear view, or hopefully in the rear view, at COVID and look to the future at how we can do a better job of biosurveillance,

(17:28):
monitoring and understanding these patterns that we see, um, I think getting our arms around something that should be solvable is going to be important. And to do that, we're going to have to answer some of these pragmatic questions and decide, you know, where we're drawing the line. Anyways, I am Charlie Harp, and this has been your "in the wild"

(17:50):
edition of the Informonster Podcast.
Thank you, and take care.

Outro (17:54):
(inaudible)