Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Illuminated by IEEE.
Speaker 2 (00:04):
Photonics is a
podcast series that shines light
on the hot topics in Photonics and the subject matter experts
advancing technology forward.
Hi everyone and welcome to today's episode of Illuminated.
My name is Akhil and it's my pleasure to be your host today.
I'm a biomedical physicist working at the University of
Glasgow as a Leverhulme Early Career Fellow and a research
(00:28):
fellow.
In my role at the IEEE Photonics Society, I'm supporting and
promoting initiatives very much like this podcast to raise the
profile of valuable young professionals within various
sectors.
Now within the IEEE Photonics Society, the young professionals
initiative is for anyone up to 15 years post their first degree.
The affinity group within the IEEE Photonics Society is
(00:51):
committed to helping one pursue a career in Photonics.
We're here to help evaluate your career goals, better
understand technical pathways and subject matters, refine
skills and help grow your professional networks through
mentorship.
This podcast is one such avenue for information.
On to our podcast.
(01:12):
Now it's my pleasure to introduce the moderator for
today, Professor Peter Munro.
Peter is a professor of computational optics in the
Department of Medical Physics and Biomedical Engineering at
University College London and the Vice Dean of Research in the
Faculty of Engineering Sciences.
Peter's research focuses on the use of computational approaches
(01:35):
to study, optimize, interpret and design optical imaging
systems.
He has worked on a range of techniques, including confocal
microscopy, optical coherence tomography, X-ray phase imaging
and photoacoustic imaging.
He's recently released an open source software package, TDMS
(Time-Domain Maxwell Solver), which can be used as a platform
(01:59):
for simulating a range of optical imaging techniques.
He has served for the last three years as IEEE Photonics
Conference Biophotonics and Medical Optics Topic Chair, and
it was at the previous iteration of the conference where Peter
and I met for the first time.
Over to Peter.
Speaker 1 (02:20):
Thank you, Akhil,
and it's my pleasure to
introduce the guest speaker that we have today, Professor Aydogan
Ozcan.
Professor Ozcan is the Chancellor's Professor and the
Volgenau Chair of Engineering and Innovation at UCLA and an
HHMI Professor at the Howard Hughes Medical Institute.
(02:43):
He is also the Associate Director of the California
NanoSystems Institute.
I'm going to give a few of Professor Ozcan's key
achievements in a moment, of which there are many, but before
getting into that, I just wanted to highlight that he is
an expert in areas of our field such as computational imaging, deep
(03:04):
learning, optics, microscopy, holography, sensing and
biophotonics, and those are the topics that we will spend much
of our time talking about.
Now just a few highlights from Professor Ozcan's career.
So he was elected fellow of the National Academy of Inventors
(03:27):
and holds more than 70 issued patents in those research topics
that I just mentioned, and he's the author of one book and
co-author of more than 1,000 peer-reviewed publications in
leading journals and conferences.
(03:48):
He has received a long list of awards, including the
Presidential Early Career Award for Scientists and Engineers,
the International Commission for Optics (ICO) Prize, the Dennis Gabor
Award from SPIE, and the Joseph Fraunhofer Award and Robert M.
Burley Prize.
There is a very long list and I will, with Professor Ozcan's
(04:15):
permission, abridge that list, but I will direct you to his
website because you will be able to get a full rundown of that
list.
But I think it's worth mentioning that he has been
elected a fellow of Optica, AAAS, SPIE, IEEE, AIMBE, RSC, APS and
(04:36):
the Guggenheim Foundation, and is a lifetime fellow member of
Optica, NAI, AAAS and SPIE, which is a comprehensive
collection of fellowships.
So with that, I think we will now move on to the technical
(04:59):
part of this podcast, and I'm just going to start that by
asking Aydogan if he could start with some words about what
motivated you to get into the field of computational optical
biomedical imaging.
Speaker 3 (05:18):
Well, first of all,
thanks for having me.
It's great to see you, Peter, and so wonderful to be here and
spending this next hour talking about science and optics and
machine learning and how it can perhaps change the way that we
do microscopy and spectroscopy or, in general, imaging through
(05:39):
computation.
So, going back to your question, I think when I first
started my independent career I was bombarded with a lot of
problems to solve, most of which were around global health.
(05:59):
For example, how do you look at blood specimens or tissue
specimens, sputum taken from patients in resource-limited
settings?
How do you bring imaging and microscopy solutions for
diagnostics, sensing solutions for diagnostics, that could work
(06:23):
in a village where there is no real infrastructure?
So that was kind of like part of my training at Harvard Medical
School, where I was constantly looking around and
trying to understand what's a good direction to apply
optics to, and I realized sensing and diagnostics and the
(06:46):
intersection of that field with optics was a great avenue.
I soon realized computation was a wonderful way to enhance the
performance of poor devices: mobile devices, inexpensive
parts, inexpensive optics, plastic lenses or even no lenses,
and that's how I started, actually. And I was lucky at
(07:09):
that time in the sense that smartphones, or not-so-smart phones
then, this is back in 2006, when smartphones didn't really exist
as we understand them today.
Mobile phones, cell phones, were picking up, and penetration
of cell phones, even to remote parts of the world, was
(07:31):
rapidly increasing, and they had some very
interesting platforms to do computational imaging, and that's
how I started to bring advanced microscopy and sensing
solutions through mobile phones into global health settings
where diagnostics and imaging like microscopy, pathology and
(07:51):
services were not conducted as we understand in a modern
infrastructure.
Speaker 1 (08:00):
And those initial
approaches that you developed,
as far as I'm aware, the computational
side was principally physics-based.
So how did you go about transitioning to AI-based
approaches?
Speaker 3 (08:20):
So for quite a bit of
time, between 2006 and perhaps
2015 and 2016, we created all kinds of computational imaging and
microscopy systems, all of which were using physics as the core
principle: coherent wave propagation, solutions to
(08:45):
Maxwell's equations, coherent imaging, partially
coherent imaging, incoherent imaging and holography.
It was fascinating, and we created lots of chip-scale
microscopes where the microscope was as tall as a few
centimeters, benefiting from CMOS imagers from mobile phones,
(09:07):
5, 10 megapixels, creating diffraction-limited imaging
across a very large sample volume; needle-in-a-haystack
problems, finding pathogens across a large sample volume,
good for samples like blood smears or blood-based
diagnostics.
That was essentially a fantastic journey of a decade,
(09:29):
using physics and holography, creating on-chip microscopes
installed with mobile phones or standalone handheld systems.
But at that time, especially by 2015, machine learning and
specifically deep neural nets were conquering a lot of the
(09:56):
performance milestones in computer science that were
previously thought of as very difficult to achieve, surpassing
human-based decision making in some cases, for example in
image recognition, et cetera.
(10:16):
I was then very lucky in the sense that I had a lot of data
that we generated, literally terabytes, hundreds of
terabytes of data, of samples, with ground-truth
reconstruction methods in my pocket, driven by physics,
and we started to compare, in terms of the fidelity of the
(10:39):
reconstruction, whether machine learning could mimic what physics is
doing.
This was my baseline, basically, in this comparison.
We were fortunate to have a lot of high-quality data that led us
to train neural networks to mimic what our
physics-based reconstruction was performing,
(11:01):
and we understood the two major advantages of machine learning
in computational imaging, the first of which is non-iterative, extremely fast
reconstructions, because a lot of what I was doing for the
first decade of my independent career was trying to speed up
(11:22):
algorithms for image reconstruction.
A lot of them were iterative, and we were increasing the
dimensionality of our measurements to support
convergence, faster and better, achieving super-resolution of
one kind. And then we realized that, with that rich data, neural
networks could actually be trained.
Yes, it could take a couple of days at that time to train a
(11:46):
neural network, but it was a beautiful approximation of what
the physics-based solution was doing, faster, non-iterative. That
was my baseline.
That was very exciting.
And we applied this not just to holography.
We applied this to bright-field microscopy to improve its
resolution and
depth of field. With the lightest mobile phone-based microscopes we
(12:09):
created transformations that took care of the color aberrations,
distortions and resolution limitations of the inexpensive
plastic lenses of mobile phones for microscopy, and showed that
it can do diffraction-limited imaging and establish,
(12:29):
basically, transformations between one form of microscopy
and the next.
After the baseline, we soon realized there was an even bigger
set of opportunities that we could not reach with physics, and
that was, I think, approximating functions that don't
have physical forward models, and that opened up a
plethora of new opportunities for deep learning in microscopy,
(12:51):
in my opinion, which included, for example, virtual staining of
tissue, establishing, for example, transformations from the
autofluorescence contrast of samples into bright-field
contrast, where it also mimics the staining coming from a
histology lab, so kind of like functions where normally the
(13:14):
physical forward models are beyond our understanding.
And that still holds, I think, as one of the biggest advantages
of supervised learning in microscopy: achieving
transformations that are perhaps too difficult to understand how
they're working, but through image data you can, and if you
can establish those transformations, they're
(13:35):
transformative for various different applications,
including histology and cytology.
Speaker 1 (13:42):
And so we will come
on to talk a little bit more
about the challenges, but perhaps you could just talk a
little bit about some of the challenges you faced as being an
early adopter of AI approaches.
Speaker 3 (13:55):
Well, of course, if
you start with supervised
learning, which was wonderful. For example, you take a hologram
and you can now reconstruct a hologram with bright-field
contrast.
We call this bright-field holography, but it was
establishing a coherent-to-incoherent, single-frequency to
(14:19):
multi-color transformation.
This was enabled by, basically, data from two different
modalities that are cross-registered, that was fed to
a supervised learning model.
But that in itself is perhaps highlighting some of
the challenges of supervised learning approaches.
(14:40):
First of all, you need a lot of data, high-quality data, I think,
for AI in computational imaging. The image registration
workflow, cleaning of data and bringing domain expertise to
make sure that garbage does not leak into your training is a
(15:02):
tedious task and an expensive task.
Sometimes access to high-quality data is not
available, especially if the specimens are expensive and hard
to find.
On top of all of that, the potential opaqueness of the
model and what is learned is a double-edged sword.
(15:22):
I'm opening up a new set of transformations that I couldn't
understand before, and now I can perform them and validate their
accuracy against the ground truth, but at the same time, I
don't have a very good understanding of how they work.
What does that mean?
That means there can be potential hallucinations.
You've got to create watchdogs to constantly monitor your
(15:44):
experiments and your model.
This sounds scary to a lot of people at first, and I was one
of the early adopters, as you said.
If you look at seven years ago, eight years ago, when we were
presenting these ideas in conferences, yeah, there was a
lot of skepticism about hallucinations, about the
opaqueness of what we're learning.
(16:05):
But, to be fair, this opaqueness actually is
everywhere in the pipeline.
Think of, for example, pathology.
You give your biopsy and it goes to a lab for a diagnostic,
for the diagnosis, but within that clinical workflow, a lot of
things are actually stochastic and they're opaque.
(16:29):
We just don't know how opaque they are.
But if you work with the lab system and if you give, for
example, hundreds of specimens to a lab, you understand what
fraction of those are messed up.
The clinical workflow has its own checks and balances to
make sure mistakes in the workflow, which you could call
(16:50):
hallucinations in a different domain, are carefully filtered
out.
This is the same thing when AI in computational imaging
enters, for example, the pathology workflow.
It's still going to be working within the clinical workflow
with different kinds of checks and balances, the first and
foremost being the pathologist looking at those
(17:13):
images and saying, hey, this is not stained well, stain it for me
again.
So then you just directly take more tissue and stain it.
So hallucinations are concerns, but I think it's not a major
concern, it's not a show stopper for us, because there are
different ways of looking at it and regulating it and creating a
(17:36):
workflow that is rigorous to the standards of diagnosing
patients with the highest standard that you can imagine.
So those are some of the challenges, but nowadays I think
the community is less skeptical.
When we first started this line of research, yeah, the
discussions were at a different scale. I was doing some
(17:57):
polling when I was presenting these things to large crowds, and
half of them were very skeptical.
I was asking them, would you believe in an image that you see
generated by AI?
Literally half of them were saying no, I wouldn't.
The other half were looking around and raising their hands,
yeah, I would.
So that's now changed.
Speaker 1 (18:22):
So could you perhaps
say something about how you
mitigate the hallucinations?
What do you do to counteract them?
Speaker 3 (18:31):
So actually, you
have to kill the creativity of the AI
model. AI models are wonderful at creating things
that, from a distribution point of view,
would be believable.
(18:52):
This domain is very powerful because a lot of the
applications for these kinds of powerful generative
models are actually, for example, to create new images, new art,
new faces.
You would believe that, yes, this person must have lived, but
(19:13):
that's very dangerous.
That creativity is very dangerous.
The more physical insight, the more conditioning you bring to
the generative AI to kill that imagination space, the better
your models get.
For microscopy, we do that by basically putting structural
(19:35):
loss terms in during the training phase, where we regulate the
generator to be faithful to the micro- and nanostructure of the
input fields.
A lot of what we do is not garbage in, noise in, and a
beautiful image out.
It's actually diffraction-limited, high-quality and
(19:57):
beautiful input data in.
But I want something else at the output.
That input microstructure and nanostructure is actually used
to regulate the hallucination space, to be faithful to the
actual realities of the tissue.
In other words, I don't want tocreate a new patient that 100
(20:19):
different pathologists would say, yeah, this is actually a
patient's specimen, believable as it is.
Let's diagnose this imaginary patient.
Nobody wants to pay for an imaginary patient's diagnosis.
We must be recreating the image of that patient, and that
structural conditioning is very important.
We should realize that good-quality, diffraction-limited
(20:42):
images at the input are helping us to do this regulation.
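The structural loss term described here can be sketched in a few lines. This is an illustrative NumPy sketch, not the published training loss: the function name, the L1 stain term and the simple gradient-magnitude structural penalty, and the weight `alpha` are all assumptions made for the example.

```python
import numpy as np

def structural_loss(output, target_stain, input_field, alpha=0.5):
    """Illustrative combined loss for a virtual-staining generator:
    fidelity to the target stain plus a structural term tying the
    output back to the input's micro- and nanostructure."""
    # term 1: pixel-wise fidelity to the target (stained) image
    stain_term = np.mean(np.abs(output - target_stain))
    # term 2: penalize disagreement between the gradient (edge)
    # magnitudes of the output and of the label-free input field,
    # so the generator stays faithful to the measured structure
    gx_o, gy_o = np.gradient(output)
    gx_i, gy_i = np.gradient(input_field)
    struct_term = np.mean(np.abs(np.hypot(gx_o, gy_o) - np.hypot(gx_i, gy_i)))
    return stain_term + alpha * struct_term
```

In a real training loop this would be evaluated on the generator's output and back-propagated; here it only shows how the structural term constrains the hallucination space.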
Another way of bringing regulation to generative AI
models, to have them not hallucinate but stay bounded within the
limits of the physical world,
is actually to use physics as a
(21:02):
regularizer.
One of the recent works that we did in 2023 on this actually
used the wave
equation derived from Maxwell's equations to regulate the
(21:23):
learning of a model.
In this case, the gist of the idea was this: this was zero-shot
learning, meaning that it required no data, no experiments,
no experimental setup and no prior knowledge about the sample
that you will be imaging.
(21:44):
It's actually just the opposite of supervised learning.
It's self-supervised learning.
We call it GedankenNet, after Gedanken experiments,
the thought experiments popularized by Einstein, and the idea was
this: the wave equation is repeatable.
If you rely your learning on the repeatability of a physical
(22:07):
law, then it actually makes the generator model learn within the
bounds of wave propagation.
Basically, if I, for example, take a piece of stone in my hand
and release it and repeat this 10 times,
all of them exactly follow the same trajectory,
the same points in space and time.
(22:29):
It's repeatable.
The AI was actually performing Gedanken experiments based on
the wave equation and penalizing itself, self-supervising itself,
to be consistent in its reconstructions with the wave
equation, and that proved phenomenal.
You trained it with no data, no experiments, just hallucinations of
(22:50):
samples and Gedanken thought experiments.
Then, the first time it saw experimental data,
it was able to do quantitative phase imaging, holographic
reconstruction of samples.
If you actually tried to push it to hallucinate, it was actually
not hallucinating, or rather it was hallucinating within the bounds
(23:12):
of the wave equation and the physics.
That was a beautiful learning for us, to understand the power
of physics as a regularizer for the learning of models, in this
case without any experimental data.
It's the opposite of supervised learning, done in a
self-supervised manner with physics.
All in all, in one sentence, you've got to condition and
(23:38):
regulate generative models to be within the bounds of what you
believe is the physical world, whether it's the nanostructure
of tissue or the way that waves are propagating in
space.
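The physics-consistency idea can be sketched with a free-space propagation operator and a loss that penalizes disagreement with the wave equation's prediction. The sketch below uses the angular spectrum method, a standard solution of the scalar wave equation; it is illustrative only, not the published GedankenNet code, and the function names, wavelength and pixel size are assumptions.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength=532e-9, dx=1e-6):
    """Propagate a complex optical field by distance dz (meters) in free
    space using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def physics_consistency_loss(predicted_object, measured_holograms, distances):
    """Penalize disagreement between the hologram intensities that the
    wave equation predicts for the reconstructed object and the
    measured (or Gedanken) holograms at each propagation distance."""
    loss = 0.0
    for holo, dz in zip(measured_holograms, distances):
        propagated = angular_spectrum_propagate(predicted_object, dz)
        loss += np.mean(np.abs(np.abs(propagated) - holo) ** 2)
    return loss / len(distances)
```

A network's reconstruction that violates the wave equation produces holograms that disagree with the inputs, so this loss is what lets the model "learn from its mistakes" without any labeled data.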
Speaker 1 (23:56):
So I think we could
have an entire podcast on many
of these individual topics, but if I was just to summarize, I
guess, what you've been talking about, would it be fair to say
that you're both using physics to assist AI and AI to assist
physics? Like you're doing both of those?
Speaker 3 (24:19):
Absolutely, it's a
two-way communication.
Actually, you're absolutely right.
In some domains it opens up doors where our physical
understanding was limiting us to do things that we always wished
to do, but we didn't know how to establish a forward model to
solve.
That's where AI was really very powerful.
(24:41):
But then supervised learning has its own bag of problems, which
I summarized.
That's where physics can also be very powerful, to regulate it.
Every specific domain of the problem that we want to solve
must have some bounds dictated by the laws, like, for example,
(25:01):
for holography, coherent imaging, Maxwell's equations: they work
everywhere,
precisely the same. That's a very strong condition for AI to be penalized
against.
We call it the physics consistency law, which means physics is
consistent.
It means if the AI is going to repeat the same Gedanken
experiment a million times, it must be consistent.
(25:23):
So inconsistencies with physics are only because of the AI's mistakes.
Then learn from your mistakes.
That's the gist of Gedanken learning, self-supervised
learning.
(26:20):
I think this is a very rich topic and, depending on what you
want to do with AI, I think the answer is still floating around,
and how you do it, how you execute it, and who's going to use
it for what purpose is going to dictate the regulations.
(26:42):
And, of course, entities like the FDA are constantly looking at
this.
It's a push-pull mechanism.
Startups and companies, large-scale, mid-scale, small-scale
companies, they're constantly coming up with ways of using AI
for human health, and it's a push-pull.
(27:04):
What needs to be done really depends on how you're
going to use the technology, for what purpose and how the
patient is going to benefit from it.
What are the risk factors?
I'll give you an example from virtual staining.
This will tell you the landscape of the maze of
opportunities and how each one is treated differently.
(27:27):
Let's say we want to talk about the opportunity to
highlight it.
Virtual staining: you take samples of label-free tissue
from biopsies and you stain them without chemistry.
It's a very powerful technology, a very good use of AI,
(27:49):
because you take the native endogenous contrast of the
fluorescence of tissue, biopsy tissue, let's say, and you
transform it, using a generative model that's conditioned through
beautiful supervised learning, into an image that mimics
exactly what comes out of a histology lab the next day or
(28:10):
the next week, but with no chemistry involved, faster, more
consistent and cheaper. All the good things that you want to
have.
Well, how is this going to be used for impact in the world?
In the biomedical space, there are many opportunities.
If you're talking about primary diagnosis of a patient, let's
(28:30):
say a patient comes, a biopsy is taken and the pathologist is
going to diagnose for the first time what's happened, you need a
rigorous approval process for this, a Class III approval from
the FDA,
to use this in the United States.
That would mean you will have to work with multiple sites in
(28:51):
the world, each with maybe a few hundred cases, and evaluate the
equivalency of this as opposed to the clinical care for the
same patients under the hands of different pathologists,
different hospitals, et cetera.
It's really probably the most rigorous Class III approval
process that you would have to go through.
(29:12):
However, imagine another case where this technology is going
to be used for a second opinion, teleconsultation.
Let's say there's a case from Asia or Europe and you're
seeking a second opinion.
Well, it's not primary diagnosis, and it's not actually
as regulated as a Class III process.
(29:33):
It goes through a separate set of approval processes, which would
follow,
in the United States at least, the College of American
Pathologists' guidelines for a new technology, because it's
teleconsultation, it's secondary, there's already a diagnosis
made, which means the risk is lower.
(29:54):
Another application of this same technology is in animal studies,
for example, looking into toxicology studies for different
drugs.
You have these animal models.
Every pharma company has to get these reports submitted to the
FDA.
Because it's animal tissue and you apply virtual staining
(30:17):
on it to generate evidence for the FDA, it goes through a
separate regulatory process.
These risks, depending on who is using it for what purpose,
need to be well mitigated.
Of course, in the research domain it's an entirely
different game.
Let's say, for cancer research in a university using a product
(30:42):
of a virtual staining technology for their own research.
That's a different level of risk factor, that's a different level
of, essentially, approval processes that you have to go
through.
So it all depends on who's using it for what purpose and
what are the risks involved in the final outcomes that you
generate.
Speaker 1 (31:01):
Okay, and changing
topic a little bit, you've done
a lot of work throughout your career on solutions in low
resource settings, so I wondered if you could say something
about what the implications of AI-based computational imaging
are for health inequality.
Speaker 3 (31:26):
I think it's a great
question and it has multiple
facets to it.
So in terms of cost of equipment, access to advanced
measurement tools, I think it's going to revolutionize access to
advanced systems.
(31:46):
I'll give you an example, since we were talking about
pathology.
That's a very expensive domain.
Digital pathology in general is a very expensive piece of
technology: for a hospital to convert their samples from glass-based
biopsy storage of tissue to digital storage and a digital
(32:09):
scanning system, one of those microscopes that are about
$300,000.
These pathology scanners are like Ferraris in the sense that
they're rapid; they can have an enormous throughput, scanning
giant amounts of tissue with diffraction-limited resolution,
hey, but they are $300,000.
(32:30):
And for a digital pathology infrastructure for a hospital,
you're talking about $5 million to $6 million, and these numbers
probably increase because of inflation.
Right now these are older numbers.
So access to advanced devices, sensors, microscopes with AI is
going to be, I think, revolutionized, because we've
(32:53):
shown this repeatedly: you can take inexpensive, lower
throughput, lower resolution optics with aberrations and
transform them by training against the ground truth of the
expensive ones. By training models against them, you can
actually have beautiful models that, from device to device to
(33:16):
device, generalize very well.
We're taking inexpensive optical instrumentation and making it work
for you as if it's an expensive one, for research, for clinical
use, for microbiology labs, for histology labs.
Globally, I think this is going to be democratizing access to
advanced equipment, which means there will be new companies that
(33:39):
understand the market and try to produce inexpensive tools
that, of course, can be distributed and used in
different parts of the world.
I always mention commercialization for this,
because here the impact is not just creation of the IP or
(34:01):
creation of a prototype, because you can donate prototypes of
new technology, but if there's no real product there, there's
no service, and once something breaks, it's not replaced again.
That's why impact cannot really come with donated smart
(34:22):
equipment, because it will soon be broken.
If there's no infrastructure or a commercial model, then it
won't be sustainable.
That's why I'm always interested in commercialization
of technology, because that's an aspect of impact, for seeing a
technology really move the needle in terms of inequality,
(34:43):
for example, in access to health care and the advanced instruments
that come with advanced health care systems.
The same is true with sensing.
I was talking a lot about histology and microscopes, but
the same exact message applies to sensing, especially point of
care sensing.
Point of care sensing, especially for the developing world, is
(35:05):
very important.
A lot of times, when a health care professional sees a patient,
that's your only opportunity to diagnose and to administer a
treatment.
The chances of you seeing the patient again are very slim.
That's why you need technologies that are mobile, cost-effective,
portable, that can access the patient and immediately, within
(35:29):
15 to 20 minutes, diagnose the condition from saliva, from a
finger-prick blood sample, urine, et cetera.
That's where point of care diagnostics and solutions like
AI-powered point of care diagnostics will be very
important for basically reaching out to developing countries
(35:51):
where patient return is an issue.
How do you get that patient?
Unfortunately, the same problem also exists in a different
form in the United States and other parts of the world,
the Western world, especially for treating, for example,
homeless patients.
It's another problem.
(36:11):
You need mobile clinics where, when you see the problem, you
can actually start the treatment, because there's no guarantee
that you will be able to see the same patient again when you're
doing your surveillance.
Speaker 1 (36:26):
So that's really
interesting.
I mean, so you've gone into what I was about to ask you about,
which is what the future might hold for computational imaging,
and I think if I ask you that question, it may be very
difficult for you to give a concise answer.
So I'm going to help you a little bit.
Of course, you can add to it.
But supposing you met the president, or your president, and
(36:51):
your president was saying, right, I've got this money to give you,
why should I give it to you?
What would you say to him?
Speaker 3 (37:03):
Well, so currently,
whatever I told you is something
that I'm very passionate about, but there's something else that
I didn't tell, which is something that I'm trying to
scale up, and that's the push-pull between AI and optics
and physics, but this time from the AI side to the design of
(37:26):
optical systems.
This is a little bit different from what we've been discussing
so far, and if I were to meet President Biden or any rich
donor, I would seek funding to actually build optical
processors designed by AI.
(37:47):
So let me give you a little bit of context for what I mean by
this.
Today's computational imaging, by and large, of course there are
exceptions, is driven by human perception, and the
instrumentation that we use is mostly built for humans to
enjoy, humans to define the ground truth, humans to perceive and
(38:14):
understand.
So that's the language that we built, with these beautiful,
spatially invariant point spread functions that, at
different scales, we've been engineering for centuries.
But from the perspective of AI as the new kid on the block
(38:34):
next to humans, that's not the case, because AI doesn't come
with the eye that we have and the brain that we have and the
perception that we have.
As a result, there must be a new language between the scene,
the object, the analog world represented by waves at
different scales, at different parts of the spectrum, and AI,
(38:59):
and I think our research is pointing more and more to the
amazing richness of AI-designed optical instruments.
We call them all-optical processors, as a front end to a
digital system.
These are systems where AI can program in the physical domain
through its understanding of light-matter interactions.
(39:20):
AI can optimize a physical embodiment that can achieve an
arbitrary set of spatially varying point spread functions to
build computational cameras that actually establish a new
language between the analog wave world represented by photons
and the digital back end.
(39:40):
I call this a new dictionary, so to speak, through programmed
diffraction, and that, in my opinion, is very exciting
because it takes the human out and asks, hey, if I want to have
a robotic system with its own, let's say, back end, with the
power consumption requirement, with the speed requirement, with
(40:04):
the mobility requirement, with the cloud access requirement,
find me the dictionary that that AI model, with all these
engineering constraints, should establish with the world.
That's where those programmed light-matter interactions will
come in handy, because those are jointly optimized systems where
(40:24):
the optical AI will talk to the digital AI, and the AI will say, hey,
this is the only thing that I can have.
I need microwatt-scale inference.
I cannot have more than a few microwatts per inference, and
I don't have a megapixel.
I only have 0.5 megapixels and I don't have access to the cloud,
and my battery is this much.
(40:44):
Well, with that, what is the language that I should establish
with the surrounding world represented by photons?
That's where AI dictates its own dictionary that it likes to
speak, and that dictionary is dictated by basically programmed
matter, a material that has sub-wavelength features on it in
(41:05):
3D, and I call them diffractive networks, diffractive optical
networks or diffractive optical processors,
used interchangeably. That, in my opinion, is creating a
plethora of opportunities at different parts of the spectrum,
all the way from terahertz to infrared, to visible and maybe
even shorter wavelengths.
However, we weren't accustomed to thinking with the concept of
(41:29):
spatially varying point spread functions. That was actually a nuisance term for previous generations of optical design. Now, all of a sudden, I can ask the question: hey, this is the set of spatially varying point spread functions I want, for a reason; I know it is very good for me, or AI says that for a different reason. How do you approximate that, and what do I do with it?
(41:51):
Well, what you do with that is this: you take a certain computational task for a robotic system, you divide part of it to be computed all-optically and part of it to be computed digitally, and the two are jointly trained to recognize each other. The communication, or the currency, between them is AI-generated and has nothing to do with human perception or human understanding, which means humans would be just
(42:12):
looking at garbage-looking images, which would be wonderful for AI. What do you gain out of this? Faster speed of operation, compact modules that do not require many pixels or a lot of power and, ultimately, green technologies, because the optical front end is passive.
(42:32):
I'm talking about light-matter interactions: you can think of it as a transparent stamp that looks like frosted glass but is actually computing as the light penetrates through it, in a few picoseconds or tens of picoseconds. That is what I would raise funding for from my president, if I were given the opportunity.
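[Editor's note: the diffractive optical network idea described above can be sketched in code. The following is a minimal, hypothetical simulation, not the speaker's actual design tools: it models each passive diffractive layer as a thin phase mask and propagates coherent light between layers with the scalar angular spectrum method. The grid size, layer spacing, and the random (untrained) phase masks are illustrative assumptions; in a real diffractive network the masks would be jointly optimized with the digital back end against a task loss.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (all lengths in units of the wavelength).
wavelength = 1.0
dx = 4.0      # mask/detector pixel pitch
z = 200.0     # spacing between diffractive layers
n = 64        # grid is n x n pixels

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex scalar field a distance z (angular spectrum method)."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components are dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def forward(field, phase_masks):
    """All-optical front end: each passive layer only delays the phase locally;
    the 'computation' happens as light diffracts between layers."""
    for mask in phase_masks:
        field = field * np.exp(1j * mask)                  # thin phase element
        field = angular_spectrum_propagate(field, wavelength, dx, z)
    return np.abs(field) ** 2                              # detector intensity

# A smooth input scene (a Gaussian beam standing in for an object).
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
scene = np.exp(-(X ** 2 + Y ** 2) / (20.0 * dx) ** 2).astype(complex)

# Two layers with random phase masks; a trained network would optimize these.
masks = [rng.uniform(0.0, 2.0 * np.pi, (n, n)) for _ in range(2)]
intensity = forward(scene, masks)

# The digital back end reads only coarse super-pixels, not megapixels:
features = intensity.reshape(8, 8, 8, 8).sum(axis=(1, 3))  # 8x8 detector bins
```

With a pixel pitch of 4 wavelengths, every sampled spatial frequency is propagating, so free-space propagation conserves energy on this grid; the 8x8 `features` array is the kind of low-pixel-count, AI-defined "dictionary" that a compact digital model would consume.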
Speaker 1 (42:54):
That's very interesting, thank you. That was my final technical question, but I have one more question, and you might call this moderator's prerogative. I noticed in your bio that you are the recipient of a National Geographic Emerging Explorer Award, and that piqued my interest, because I already thought of you as an explorer in the field
(43:17):
of optical imaging. But I thought maybe you could just explain how you came to receive that award.
Speaker 3 (43:25):
Well, first of all, I'm grateful to the National Geographic Society, NGS, for recognizing some of my earlier work. At that time I was obsessed with mobile phones: mobile phones as the primary platform for bringing advanced optical measurements to resource-limited settings, and
(43:49):
it had huge implications for medicine in Africa and in other parts of the world where infrastructure is broken, and they recognized the power of this story.
At that time they had this program selecting young scholars, young, you know, explorers of different kinds,
(44:13):
and they also started to open their door and embrace scientists like myself. Perhaps I was among the first generation of engineers and optical physicists to be admitted to this cohort of, you know, explorers, to share how we
can actually use science and engineering, the STEM fields, for
(44:37):
broader impact, and explore scientific disciplines to kind of move the needle for our climate and our healthcare and, in general, democratize access to good things that unfortunately are still not democratized globally, for different reasons.
That's essentially the story of it, and I learned a lot from
(44:57):
NGS, the National Geographic Society, in terms of storytelling. You know, when we were first admitted to the program, they told us about NGS, and that was the first time that I understood the power of storytelling.
These are things that we scientists, during our PhDs,
(45:19):
were not accustomed to thinking about. The power of storytelling is amazing, and I think National Geographic is one of the pioneers in that domain. They take very difficult concepts rooted in, let's say, science (ecology, for example) and explain them to the masses
(45:40):
for, you know, shaping public opinion about important problems that we face as a global community of countries that suffer from them.
That was essentially the story, and I'm grateful, of course, to NGS.
Speaker 1 (45:59):
Well, thank you for explaining that, and thank you also for this very interesting discussion. I've thoroughly enjoyed it, and I'm sure the listeners will too. So with that, I would like to hand over to Akhil, who I know is very keen to ask you some further questions. Over to you, Akhil.
Speaker 2 (46:18):
I've just been sitting and enjoying the conversation. It's very, very interesting, and I think everything that's been discussed, everything AI and non-AI related, has been very, very interesting. I just thought I'd pitch in with a few questions, and obviously the questions are for you and for Peter, so we can
(46:39):
make this a sort of discussion between the three of us. My questions are quite general and more related to things like career transitions and experiences along the way: the journey that both of you have had through everything you've done up until this point, let's put it that way.
So, your career transitions were extremely interesting to me,
(47:01):
and I've seen through the conversation, and obviously from reading about you beforehand, that you've had quite a few transitions, quite a few different research interests along the way as well. How did you find that journey, both personally and professionally, and what do you think was the most important skill set that you developed along the way?
Speaker 3 (47:21):
So, you know, the key element that I believe helped me navigate the different stages of my career is that when I'm passionate about something, I get obsessed with it, and all of
(47:43):
a sudden it's no longer a research question or a research field for me; I'm just following things I like. I really dive deep into it and try to cover all aspects of it till, you know, my curiosity is satisfied, and that makes it enjoyable, because it transforms at least part of the
(48:06):
professional work that I do into my hobby. I'm just enjoying myself, you know, and I'm being paid to follow my curiosity; passion drives it.
So that makes it, I think, easier to navigate. It also gives me the fuel to insist on difficult problems
(48:27):
till I have a better grasp of solving them; it gives me patience and the motivation to stay with a problem. I think that's very important: we've got to stay with difficult problems for a long time, because there's a reason why they're difficult, and to surpass them, to create a new
(48:50):
field that overcomes the major limitations of previous decades, you've got to stay with the problem quite a bit. And I like that phase of, you know, having a bunch of things that I don't understand and being obsessed with trying to understand them. That's what I do.
Speaker 2 (49:07):
Excellent. And, Peter, I've seen, obviously, that the IEEE Photonics Society has in the past done a webinar series during COVID, and David Sampson was on it. I can see that you've worked with him before, and there's a bit about the time you spent in Australia as well,
(49:27):
so how have you found the journey, and what do you think is one of the key skill sets along the way?
Speaker 1 (49:33):
So, actually, much of what I would say is quite similar to Aydogan's. One of the things that I reflect on, especially in the early stages of my career and during my PhD, is giving myself the opportunity to really go deep into problems, to really understand the fundamentals of the tools that we use. And often
(49:59):
that felt indulgent, you know. It felt like maybe there wasn't, you know, an endpoint to it, but actually that was crucial in generating understanding, and in generating ideas for later in my career.
And what I think was then useful was that, after a period of
(50:20):
time, when I started working in a more applied role, I started to work with colleagues who were solving, let's say, real-world problems, and I found that I had the tools and the capability to actually solve some of these problems, and
(50:42):
then I became more aware of the need and was able to really, you know, focus in on some of those problems. But I think it's really important, especially these days, as we seem to be getting busier and busier, to allocate time for trying to solve problems, as Aydogan was also
(51:05):
saying, because that's how we generate ideas; that's how curiosity grows. And those ideas are really, in a way, the currency that we work with.
Speaker 2 (51:23):
That's quite interesting, isn't it? Because I'm fairly early in my career, and both of you have been in your fields for a while now. It's quite interesting to see that you get the ideas, hold onto them for a while, wait for the right opportunity, and make sure that by that point you've gained the skills to apply them,
(51:44):
having held onto the idea for much longer than it takes to actually apply it in the first place.
Speaker 1 (51:50):
What I find now, you know, working with and mentoring PhD students and postdocs, is that my PhD students are coming back to me with these ideas. I don't feel like I've got quite as much time as I
(52:10):
did, you know, for that sort of exploring and curiosity, but now, working with staff and PhD students, that cycle is continuing, and I think that's why it's very important for academics to invest in the people that they supervise, as
(52:34):
well as for those people themselves, because it supports this cycle of idea generation.
Speaker 2 (52:42):
That is quite interesting. I've supervised a few undergraduate students in recent times, and we've come across the same cycle, where they always go: oh, we've had another idea. And you sit back and think: this is obviously the same few papers you've read to come to the conclusion that I reached maybe two or three years ago.
(53:02):
But that quite nicely brings me to the next question: you have all of these ideas, and you're obviously working with somebody at every stage of your career. Have you had any interactions along the way with people you consider to be your mentors?
Speaker 1 (53:22):
Ah, why don't you have a first go at that?
Speaker 3 (53:29):
Of course, many, right? You know, along our undergraduate work, graduate work and postdoctoral work, many mentors here and there contributed enormously to our growth during that phase where we were acquiring the fundamentals. It was all the push-pull between our mentors and us as
(53:53):
students and trainees. If I could just single out one, it would be the transition that I had from a school of engineering, where I got my PhD, to a medical school, where I was literally doing biophotonics as
(54:15):
part of a hospital. It was a unique setting, in Boston. So that transition, for me, was my mentor going from being an engineer to being a physician-scientist, an MD-PhD.
That was striking, because that's the part where the
(54:36):
feedback that I got was so direct about the difference between engineering novelty and impact for the patient, if you're doing biomedical research. All of a sudden, ideas that were, in my opinion, most brilliant and very innovative from an engineer's perspective were facing the question
(54:58):
of: hey, what is the impact for the patient? I remember when I first got this question from my mentor at that time, Gary Tearney from Harvard. He asked me this question: what is the impact of your innovation for the patient? That was the first time that question had been asked of me so directly, and I wasn't good at answering it.
(55:19):
The answer I kind of liked was: hey, it's just different, it's so innovative, we've got to do this because it's so clever; engineering-wise it's beautiful and never done before. But he was unsatisfied, and I think it was just the question itself, nothing else. It was just a five-minute discussion.
That was an eye-opening experience for me, and it
(55:40):
helped me to position my research portfolio at the intersection of engineering novelty and impact. I understood that novelty in itself, if it doesn't meet the intersection with impact, whether your work is biomedical or something else, is not enough; that intersection is actually a very powerful place to operate in.
(56:02):
And that was just a five-minute lesson from my mentor, which was priceless.
Speaker 1 (56:09):
Similarly for me, I've had multiple mentors along the way and continue to have a mentor, but I think one thing that comes to mind immediately is when one of my mentors encouraged me to trust my
(56:33):
judgment. Now, I know some people are perhaps better at this than others, but I realized that I was perhaps spending a lot of time evaluating: am I making the right decision? I think there comes a point when you just have to say: you know what, I've thought about this, I've done the research, I've got
(56:55):
evidence here, this is my judgment, and I'm going to trust my judgment, take that decision, and move forward from there. Because for me, and I know not everyone does this, I was prior to that at times essentially stalling and wasting time because I wanted to be sure I'd made the right decision
(57:17):
. In actual fact, sometimes you just need to trust your judgment.
Speaker 2 (57:24):
I had one final question for both of you and, funnily enough, it's one of those questions you set students in examinations. Here it is: it has a part A and a part B, and each of the A and the B parts has its own part A and part B, so I'm going to break it down. The first part is technical. In your current fields, where you are right now, with
(57:45):
your experience and your overview of what's happening in the field, what do you think students, early career researchers, and people who are interested in the general field should be looking at and considering when they're deciding how they want their careers to build? That's the technical part. And if you could also factor in an element
(58:07):
of the fact that the two of you work on opposite sides of the pond, that would be great as well.
Speaker 3 (58:14):
So I guess this is advice to the students. First of all, the fields are getting fuzzier and fuzzier in terms of their boundaries, and the required set of information and skills that you need to have is constantly expanding. All in all, I think self-learning skills,
(58:38):
self-training skills, of students, of us, of everybody, will be very important going forward. We've got to constantly update ourselves, as a student, as a trainee, as a mentor, as a PI, and that's never going to stop. If it stops, we're falling behind, because the amount of
(58:58):
new fields, new subfields, new information being generated has never been this high in the history of mankind. We're seeing exponential growth in the amount of work that we're posting to arXiv, as an example. So that means self-learning skills will be very important, in my opinion.
And second, for students and everybody:
(59:21):
this is true for early-stage researchers, early career researchers, postdocs, junior faculty, and seniors. I think, in general, if everybody is looking right, perhaps some people should look left. And it's very important to understand game-theoretic
(59:42):
approaches, because if there is a popular thing that 99.9% of your colleagues in a given class are selecting, perhaps you should reflect on your strengths, interests, and passions, to see if you can look left, to see if there's
(01:00:03):
something there. It is going to protect you, because being lost in the crowd could be a reason why we sometimes lose our trail. So I always do the same thing: if I judge that something is getting too crowded, a little hyped at
(01:00:25):
times, I try to look left and follow a passion of mine, because to me that's a good way to secure the freshness of what I work on, the excitement of what I work on, and that's where a lot of my interests are. I want to see things at a very early stage, not a late stage. That's me personally, but the point is that not everybody should be
(01:00:49):
looking right; some should look left.
Speaker 1 (01:00:54):
And I think what I would say is somewhat similar to that. I agree that there is an expanding skill set that is needed, because increasingly what we do is interdisciplinary: not just multidisciplinary, it's actually crossing boundaries.
(01:01:21):
So I still think that the fundamental building blocks of what we do are still the classical subjects: maths, physics, various forms of biology, biochemistry, chemistry, neuroscience, that type of thing. That is not a definitive list,
(01:01:42):
by the way, but they are all great things to study.
But what I think is really important is that students, undergraduates, should be aiming to really understand what they're studying, and with that understanding they will then
(01:02:04):
be equipped for the self-learning that Aydogan was mentioning. Developing a really good understanding of something fundamental really aids that self-learning later on, and it means the knowledge will always be able to be applied, whereas surface learning will not. And I think this is really a challenge: as
(01:02:25):
we're asking younger people to learn more, it's harder for them to attain the depth. But that is the challenge: to really try to obtain a conceptual understanding, because that is going to pay great benefits later on.
Speaker 2 (01:02:50):
Brilliant, thank you very much. I'll try to end this on a relatively optimistic note, as we always like to do, and I'm going to ask you a question that I will answer first myself, making it harder for you in case you have the same answer. So, I've worked with people at Glasgow for a long time.
One of the most influential people in my career is my
(01:03:11):
current PI, the person that I work with the most, and I once
asked him a question: if you had to give a piece of advice to anyone at this stage, thinking about what to do, where to go, who to work with, what would that be? And his suggestion is something that I now try to follow as best as I can. The people that we work with are sometimes more important
(01:03:33):
than the actual topic that we're trying to solve. If we go to work every single day enjoying the company around us and the people that we actually work with, the solutions that we can come up with are incredible. So there was a prominence and a value given to the element of teamwork, but also to identifying what makes a good work
(01:03:55):
environment. So, if you had to leave us with one piece of advice to end today, what would that be? It could be something that somebody has already told you; that would be a very nice way of taking that advice forward as well.
Speaker 3 (01:04:09):
I would say anyone should follow their passion; passion is a very powerful fuel for success. As Peter was mentioning, with the amazing emergence of new information, going deep into the fundamentals at the different parts of your
(01:04:34):
research portfolio and interests can be really challenging. With a lot of distraction from social media, other things, politics, you can just spend time there without getting anything useful. With all of that, it's going to be challenging, and your defense against those distractions is your passion. If you're passionate
(01:04:55):
about something (it can be for various different reasons; I'm okay with any motivation), that's going to be your energy to overcome some of those hurdles, to work with your team, and to succeed together in whatever you want to achieve. And I think I
(01:05:16):
always try to do the same: I don't work on things that I'm not really passionate about. That helps motivate me and keeps me awake, so to speak, as opposed to spending time on social media and politics, which create a lot of problems and a lot of things to worry about.
(01:05:38):
Focus on what you believe will move the needle for science, for your research, for improving the world through science and engineering. That's what I would say.
Speaker 2 (01:05:50):
Excellent. And Peter?
Speaker 1 (01:05:52):
Yes, I think that's very good advice. What I would say is that, as humans, we are all different. It's very tempting to come into your institution, look at the leaders in your institution, and think: I need to be just like them. And there is some value in that.
(01:06:13):
However, what I think is really important is for every person to look at themselves, understand what their strengths are, understand where their areas for growth are, and understand what their passions are, and to really think, and talk to others, about how they can develop to be the best that they can be and get the most out of their career, both in terms of success and in terms of enjoyment. Because, yeah, everyone is different. Everyone has a different set of abilities and limitations, and I think, working in this type of field, it's really important, I
(01:07:02):
think, for people to be reflective about what their individual strengths and areas for improvement really are, and not to be afraid to really examine them.
Speaker 2 (01:07:17):
Excellent. Well, I couldn't think of a better note to end on. So thank you very much, Peter; thank you very much, Aydogan; thank you for joining us today for this podcast episode. It's been a pleasure talking about AI, a pleasure talking about biomedical imaging. We've talked about so much, and this has been a fantastic
(01:07:38):
conversation from start to end. So thank you very much, thank you for joining us, and see you in the next one.