
June 1, 2024 • 56 mins

How do cutting-edge optical technologies elevate the performance of AI systems? Tune in to our latest episode of Illuminated, where we feature Laurent Schares from IBM Research. Laurent walks us through the pivotal role optical interconnects play in modern data centers and AI clusters, illustrating how these advancements are transforming high-performance environments. Discover the intricate relationship between advanced optical networking, system integration, and the groundbreaking hardware innovations that are driving the future of AI.

In this episode, we also dive into the human side of tech innovation with insights on professional growth and mentorship. Laurent reveals his personal journey, underscoring the importance of adaptability, teamwork, and a supportive work environment. From peer reviews to mentoring the next generation of scientists, we unpack how contributing to the scientific community fosters both personal and professional development.

Host:
Akhil Kallepalli
Chancellor's Fellow and Leverhulme Early Career Fellow
University of Strathclyde, UK

Moderator:
Brandon Buscaino
Research Scientist
Ciena, USA

Expert:
Laurent Schares
Senior Scientist
IBM Research, USA

Have a topic you're interested in hearing about? Let us know!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Illuminated by IEEE Photonics is a podcast series that shines light on the hot topics in photonics and the subject matter experts advancing technology forward.
Hi everyone, and welcome to today's episode of Illuminated. I'm Akhil and, as the past Associate Vice President for the Young Professionals, it is my pleasure to be your host today.

(00:24):
I'm a biomedical physicist and engineer working at the University of Strathclyde as a Chancellor's Fellow and Leverhulme Early Career Fellow. However, in my role for the IEEE Photonics Society, I'm supporting and promoting initiatives very much like this podcast to raise the profile of valuable young professionals within various sectors.
Now, the Young Professionals Initiative is for graduate

(00:46):
students, postdoctoral candidates and early career professionals: basically, anyone up to 15 years after their first degree. This affinity group within the IEEE Photonics Society is committed to helping one pursue a career in photonics. We're here to help. We're here to evaluate your career goals, better understand technical pathways and subject matters, refine skills, grow

(01:09):
your communication and your professional networks through mentorship, and to help basically in any way we can. Now, on to our podcast.
In this podcast, we're going to discuss optics and AI systems with our special guest, Laurent Schares from IBM Research, and moderator Brandon Buscaino from Ciena.
In today's episode we will hear from Laurent as we discuss optics

(01:33):
and AI systems and his journey, and we also go beyond academia with career and research advice from Laurent. There is something for everyone today, so stay tuned. Now, over to your moderator, right after me.
Brandon Buscaino received a PhD in electrical engineering from Stanford in 2020, where his research focused on

(01:54):
electro-optic frequency comb generators and enabling efficient and resilient intra-data-center optical links using co-packaged optics. Brandon is now a research scientist at Ciena Corporation, where he helps develop the next generation of digital coherent optical modems operating at data rates of 1.6 terabits per

(02:15):
second.
That's really, really good for my Netflix subscription.
Brandon has co-authored over a dozen journal articles and conference papers, as well as several patents, and is an active technical reviewer. In 2021, Brandon was awarded the Camino Outstanding Early Career Professional Prize.

(02:35):
Now, Brandon is also an active volunteer in the optics community. As the president of the Stanford Optical Society, he organized conferences, outreach events and industry seminars.
After graduation, he decided to continue his professional involvement, participating with the society and across groups

(02:56):
and conference committees, including the Optical Fiber Communication Conference, as well as advocating for congressional funding for optics and photonics within the National Photonics Initiative.
He serves on the IEEE Photonics Society Young Professionals Advisory Committee, as well as the Industry Engagement Committee, whose focus is to support members in industry

(03:18):
with educational, entrepreneurial and standards-based resources. We could not have had a better moderator for today's episode. So, Brandon, take it away.

Speaker 2 (03:25):
Hi Akhil, thanks again for that introduction.
Yeah, I think we're going to have a really good discussion today.
So, as many of us know, the recent surge in popularity of large language models such as ChatGPT has brought artificial intelligence to the forefront of our culture. It is now seemingly impossible to avoid AI in our personal and professional lives.

(03:46):
What many do not know, however, is that research and development in AI has been ongoing for decades. While AI systems have become popularized due to their recent accessibility and impressive interfaces, artificial intelligence has powered a transformation in computing, finance, healthcare and many other fields. The performance and impact of these systems has been shepherded forward by hardware advances in semiconductor

(04:10):
manufacturing, chip design and optical technologies. Today, we'll focus on that last item. Optical interconnects were already crucial for connecting servers inside and between high-performance data centers, and their proliferation into AI clusters is ongoing and seemingly inevitable. We'll explore these trends and more with our expert speaker and guest, Laurent Schares.

(04:30):
So Laurent Schares, from IBM Research, is an elected member of the IEEE Photonics Society Board of Governors. He received his PhD in physics from ETH Zurich, Switzerland, in 2004. After graduating, he moved to the US, starting as a postdoc; he is currently a senior research scientist at the IBM T.J. Watson Research Center in Yorktown Heights, New York. His current research focuses on advanced optical networking for

(04:53):
AI supercomputers. He has led or contributed to numerous programs on optical technologies for computing networks, for which he has received an IBM Outstanding Technical Achievement Award and multiple Research Division Awards. Over his career he has worked across the stack, from devices such as high-speed lasers, amplifiers and switches, to optical interconnects and packaging and, more recently,

(05:14):
into networking and system integration. He has more than 150 publications and 20 issued patents, and is a senior member of the IEEE and Optica. Dr Schares has also been a longtime volunteer in the optical networking community. He's been an elected member of the IEEE Photonics Society Board of Governors since this year. He's currently the deputy editor-in-chief of the IEEE/Optica Journal of Optical Communications and Networking,

(05:36):
and for the Optical Fiber Communication Conference he has served as both technical program and general chair, as well as steering committee chair. He's been a frequent invited speaker, has been a journal guest editor on data center optics, and has served on program committees of leading conferences. Outside of work he has been a long-time youth soccer coach and referee, and he has widely volunteered to promote STEM

(05:57):
education in schools.

Speaker 3 (06:03):
So welcome Laurent.
We're happy to have you here.
Thanks so much for having me.
Thanks Brandon, thanks Akhil, and also thanks to the Photonics
Society for hosting this.

Speaker 2 (06:10):
So why don't we get started? So, Laurent, to start off, for the listeners, could you talk a little bit about what an AI system looks like, what are the basic building blocks, and how are they all connected?

Speaker 3 (06:21):
Yeah, AI systems. Yeah, that's a good question. It's really a full-stack play, right? So I think it's not only the hardware, it's not only the software; it starts, you know, at the very top level, with the applications. You know, ChatGPT, as you mentioned, you know, is more on the consumer side. Or, you know, we at IBM, we play in the enterprise space, so our

(06:43):
platform is watsonx. In that context, you might have even seen that on TV in recent advertisements or so. Now, under the applications, that's where it's getting interesting: there are the AI models and large language models, and that's really where the generative AI comes in. That's really what made ChatGPT and those

(07:05):
applications take off in the last few years. But, you know, AI models rely on a full software stack, you know, a data platform; there's a huge amount of data that needs to go in there. And then even under that, you know, you typically have a very large number of servers that need to be, you know, connected to a network. Together you have storage, where you store all your data and all

(07:27):
that stuff, and then at the very lowest layer you have the hardware. It's typically lots of servers in large data centers, especially for training, and what's slightly different in those servers than in cloud is that they are heavily GPU-based or accelerator-based. Then on the hardware side, of course, there's the connectivity

(07:49):
, which is the focus of this podcast. Now maybe just one more thought on that front. You know, with AI systems, you talk about training on one side, but then also there's, you know, the inference. You know, when you go on ChatGPT or whatever model, you're putting one request in there that essentially makes use of a

(08:10):
pre-trained model already, trained before. Now, training systems tend to be very, very large at some point: big data centers, high power consumption. But inference you want to do at the edge; you want to do that on your handheld device, or something like this. So the requirements there are typically totally different: very

(08:31):
low power, very small.
In that sense, maybe just one more thing, since we are talking about communications here. What's also different than in cloud generally is that it's estimated that generally about 60% of all communications is accelerator-to-accelerator traffic in these AI systems, and

(08:53):
if you take specifically training systems, that communication bandwidth goes up to 90, 95%. So we're talking a totally different ballpark than what we've been used to in past years.

Speaker 2 (09:07):
Wow, that's quite a lot of data, and quite a lot focused on the training aspect. So what is your role in this ecosystem? What is your research focused on? How did you become involved in this type of research?

Speaker 3 (09:22):
Yeah, so that's also a good question: how did I get involved in networking for AI supercomputers, as somebody who was essentially trained in device physics, device optics and so on? So it's been quite a journey. I've been working across the stack: from my PhD and early research years, more on photonic devices and interconnects,

(09:45):
high-speed lasers, amplifiers, optical amplifiers and switches, and then all kinds of interconnects and packaging, how you put that together, and then more recently I moved more into networking and systems integration.
Now, all this background, essentially, I feel has helped give me a really broad perspective, and it's kind of key

(10:07):
for all the system designs. As I mentioned before, you know, we have the full stack across it, and while nobody's an expert on everything, it really helps if you are able to see a little bit beyond your own expertise and you can bridge the different layers together. I think that's a broadly helpful thing.
So, in terms of the importance of optics here for these AI

(10:32):
supercomputers, I think there's a big focus, as mentioned before, on high bandwidth requirements, a big focus on data movement. Data movement we can essentially classify into two groups, the way I classify it. The first thing is to avoid data movement. If you can have models or applications such that you can

(10:55):
keep them as local as possible, say within the server, within the GPU, within a rack, you don't necessarily need to move them all across a big data center. That makes your network design a lot easier. You don't need to shuffle bandwidth across when you don't need it.
So that often requires, you know, a solid understanding of what

(11:15):
the workload is actually doing, what communication patterns you have in those workloads. One thing you often need is smart scheduling as well. So, you know, if this job is finished and I have another job in my pipeline, where do I place it best to minimize the communication requirements, stuff like this.
And then, of course, on the model side, you know, I think it's a relatively recent field, and it's fair to say that those

(11:38):
models will become more efficient over time, so as to minimize data movement when you don't need it. So I think that's the general thing on the avoiding-data-movement side. But then people take every bit of bandwidth they get, so there's a heavy focus generally on building faster networks: faster networks between servers, across your data centers and so

(12:02):
on.
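Laurent's point about smart, locality-aware scheduling can be sketched in code. The toy Python below is purely illustrative; the rack names, free-GPU counts and the greedy placement policy are assumptions for this sketch, not anything from a real scheduler:

```python
def place_job(racks, gpus_needed):
    """Toy locality-aware scheduler: prefer a single rack with enough
    free GPUs, so the job's traffic never crosses the data-center fabric."""
    # First choice: the smallest rack that can host the whole job locally.
    for name, free in sorted(racks.items(), key=lambda kv: kv[1]):
        if free >= gpus_needed:
            return [name]  # all traffic stays intra-rack
    # Fallback: spread across the racks with the most free GPUs,
    # accepting cross-rack communication.
    placement = []
    for name, free in sorted(racks.items(), key=lambda kv: -kv[1]):
        if gpus_needed <= 0:
            break
        if free > 0:
            placement.append(name)
            gpus_needed -= free
    return placement

racks = {"rack-a": 4, "rack-b": 8, "rack-c": 2}
print(place_job(racks, 6))   # a 6-GPU job fits entirely in rack-b
print(place_job(racks, 12))  # a 12-GPU job must span multiple racks
```

Real schedulers weigh far more (topology distance, fragmentation, job priorities), but the core idea is the same: keep communication as local as the workload allows.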

Speaker 2 (12:03):
That's amazing.
So I guess that leads into what I was just about to ask: what are the main requirements for these clusters? Is it latency? You mentioned bandwidth. What is the overall cost of these systems, maybe not in money, but in terms of the time and complexity that it takes to

(12:24):
actually scale these systems?
What's changed?

Speaker 3 (12:27):
Yeah, I think it's all of the above, right? It's always money, but money, you know, is hard to talk about without having something that works, right? So people don't want to overpay, and, you know, the businesses who run those need to be profitable at the end of the day as well. So let's really focus on the networking part here for the

(12:49):
supercomputers.
So I'd like to elaborate. I mentioned before that we need faster networks; that's probably something I would like to elaborate on a little bit here. So, compared to cloud or even HPC supercomputers of the past, often the interconnects have been 100 gig, 200 gigabit per

(13:10):
second, ethernet-based in recentyears, maybe InfiniBand for HPC
as well, and that's typicallythe path on the server.
You come from a CPU, you go toa network interface card, you go
to the top of the rack switchand then you go to all your
fabric in your data centers.
Now, what's different in AIservers is that those typically

(13:32):
have many GPUs on them, often, you know, four GPUs per CPU or even more, and each of these GPUs has a lot of high-speed interfaces: multiple 200 gig, 400 gig or even more interfaces.
So at the end of the day, if you, say, come from a CPU-based system,

(13:53):
you're coming out with 100 or 200 gig. On the GPU side, or the accelerator side, of an AI server, often you're coming out with 800 gig, 1,600 gig, and people are even moving into 3.2 terabit per second, which is easily an order of magnitude more in terms of bandwidth that you need to deal

(14:14):
with at the network level to move around. But it also has a lot of implications for power and packaging inside the systems: how do you get this bandwidth into the system, and once you have it in the network, how do you move it around?
So the industry, essentially, on that front is looking into

(14:35):
several concepts.
On one side, you merge the standard data center traffic with the GPU traffic. But there's also a concept in recent years where people say, okay, we do a front-end network, which is a standard data center network, storage and all that stuff, but then we also have a separate network for training, just to deal with this high-bandwidth communication between

(14:56):
servers.
That's often required.
So I think that's on the training side.
You asked about latency as well.
So I think in training we're often throughput-limited, so that's a bandwidth play: you want as much bandwidth in the system as you can get. But for inference, I think, it's often that you want to process as

(15:18):
many requests in as short a time as you can. So there the latency aspect often becomes dominant over the bandwidth aspect. So clearly it's not one-size-fits-all, but I think for network engineers it's a fantastic place, with lots of room for innovation here, all across the stack.
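Laurent's order-of-magnitude comparison (100 or 200 gig out of a CPU server versus 800 gig and up on the GPU side) can be checked with quick back-of-the-envelope arithmetic. The per-GPU figure and GPU count below are illustrative round numbers, not a specific product:

```python
# Back-of-the-envelope comparison of per-server network egress for a
# classic CPU server vs. an AI server, using round illustrative figures
# in gigabits per second.
cpu_server_gbps = 200                    # one 200G NIC per CPU server
gpus_per_server = 4                      # e.g. four accelerators per server
gbps_per_gpu = 800                       # e.g. two 400G interfaces per GPU
ai_server_gbps = gpus_per_server * gbps_per_gpu

print(ai_server_gbps)                    # 3200 Gb/s, i.e. 3.2 Tb/s
print(ai_server_gbps / cpu_server_gbps)  # 16.0: over an order of magnitude
```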

Speaker 2 (15:38):
So on that topic, could you give our listeners some rough sense of the size of these networks? I mean, are they in one large data center, are they in one room of a data center, are they between data centers, and is that going to change in the future?

Speaker 3 (15:56):
It's all of the above. So I think the big models that are out there have tens of billions, hundreds of billions of parameters, and people always speculate, you know, that exponential growth is not going to continue forever, but people are talking about trillions of parameters, and I saw recent press releases where people are talking about building single clusters of, you know, 100,000 GPUs.

(16:20):
So that's just enormous in terms of systems. Now, are these already built? I think most are probably a lot smaller; you can say, you know, thousands of GPUs is not a surprise. But then also, I think there's probably a market for some leadership systems, but not everybody can support those,

(16:42):
the very large ones.
Right, if you go into enterprise or smaller customers, right, people might want to train their own systems. But you don't necessarily want to have 100,000 GPUs, because you will never be able to recuperate that money. You want to be as efficient as you can.

Speaker 2 (16:59):
I see. Yeah, 100,000 GPUs consume quite a lot of power. I guess that leads me to something that you've worked on in the past, which is: could you talk a little bit more about what co-packaged optics is, and how it could potentially help scale these data center networks?

Speaker 3 (17:16):
Yeah, so I think you mentioned power. Power is generally a big problem there. Power, I see it in two things. One is the OPEX aspect: you know, power costs money, and if you need to operate all that, you pay for it. I think there are estimates of, you know, $1,000 per GPU per year or something like this.

(17:37):
Now, with 100,000 GPUs, that's a lot of money, and you need to recuperate that at some point.
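The operating-cost arithmetic here is simple enough to make concrete. The sketch below just restates the rough estimates from the conversation ($1,000 per GPU per year, a 100,000-GPU cluster):

```python
# Rough OPEX arithmetic: an estimated power bill of about $1,000 per
# GPU per year, scaled to a 100,000-GPU cluster.
cost_per_gpu_per_year = 1_000      # USD, rough estimate from the episode
gpus = 100_000
annual_power_opex = cost_per_gpu_per_year * gpus
print(f"${annual_power_opex:,} per year")  # $100,000,000 per year
```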
But the other side is also the thermal aspect. So all this power generates heat, and in the past we've generally been able to air-cool those servers. But there's a big trend right now toward moving into more liquid

(18:00):
cooling, especially as the power densities become higher. So there's a lot of room for innovation on the power delivery and cooling aspect on one side. But on the other side, wherever you can avoid power, it makes the cooling and the generation aspect a lot easier, and it makes it cheaper at the end of the day as well.
So one thing that people have been looking at on the

(18:22):
connectivity side is, if you look at how data move from a GPU or CPU into a network: typically, you know, we have a big printed circuit board, you have some sockets in the middle with GPUs or CPUs, and you have pluggable transceivers at the edge of this

(18:43):
box.
That's where the optics comes in and goes out with fiber. But to connect those pluggable transceiver modules, you need some copper trace in between, and that copper trace, especially at high data rates, becomes lossy. So to overcome those losses you need to drive it, and that consumes

(19:04):
power at the end of the day.
So I think that's one aspect where people are looking at: well, how can I shorten this distance between the optics and where the data is actually generated, on my GPUs or CPUs? Can I put those two actually together? Can I co-package my optical pluggable transceivers right on the module where my data are being generated, on the module

(19:28):
or right next to the model?
There are different flavors on that front, right. So I think that's essentially the paradigm, or one of the paradigms: you know, lowering power. The other thing is also, once you come out from, you know, a module, you go to, you know, a stack of electrical interconnects, you know, different electrical pins and

(19:50):
interfaces, and all those take size. Once you go into, you know, the copper traces, those take up space as well. So one play where co-packaged optics has potential benefits as well is: how do I get more bandwidth per area, or per linear dimension, right into my network, into my optics?

(20:13):
And co-packaging might help in that aspect as well, because now you don't necessarily need to fan out electrically, but you can put the optics right next to where the data are being generated. Now, of course, you know, there are a lot of considerations that need to be solved there in that context, right. So I'd say generally co-packaged optics is about extending the bandwidth

(20:37):
scaling roadmap for building my faster networks, I want to say, for, you know, a given envelope, a fixed thermal envelope or power delivery envelope. And here at IBM we've done funded co-packaged optics projects together with partners,

(20:59):
but there's also a bunch of large-scale industry projects that have come up in the last few years. Now, generally, hardware development cycles are different than software development cycles; often you need to incubate those technologies before they really become mature and usable at the large system scale.

Speaker 2 (21:20):
Right, an incredibly difficult integration challenge as well. So, I mean, on that front, are there issues with reliability and replaceability when we talk about sort of upending this model, this pluggable transceiver model, and going to co-packaged optics, which is, you know, higher data rates, highly integrated, right next to the switching chips?

Speaker 3 (21:40):
Yeah, absolutely. So I think one of the attractive parts of transceivers and switches being separate is essentially a disaggregated model. So now you can have companies who own the transceivers; if one doesn't work or whatever, you pull it out and plug another one in. I don't care about what's happening to my switch; that can just keep running.

(22:05):
Now, if I co-package everything together in a co-packaged platform, right, the first question that comes in there is: what if one transceiver fails? Who owns the problem? Actually, if I'm operating a data center myself, I want to be able to have this field replaceability, to just keep it going. But on the other side, a transceiver manufacturer doesn't necessarily need to worry about how chips work, how

(22:29):
co-packaging works and all that stuff.
They focus on doing really well what they've been doing well, and they keep doing that. Now, if you put everything together, the transceiver manufacturer, whoever it is, needs to know much more about the packaging than they did in the past, and we, as operators, may need to know much more about transceivers than we did in the past, right?

(22:49):
So then, if something goes wrong, you know, can I finger-point at you, Brandon, or something? Why did it go wrong? You know, am I owning the problem, or are you owning the problem?

Speaker 3 (22:59):
So I think people are looking at a lot of solutions around it. You know, it's not a new thing to integrate things more. So you often look at redundancy at the system level, you know, fail in place, do you have spare channels, on one side; but then on the other side, of course, you've also got to work on improving your device reliability to the degree

(23:21):
possible, right. So it's a whole, you know, can of worms, with potentially very big payoffs. But, you know, to develop this takes cycles.

Speaker 2 (23:31):
Yeah, it seems like a perennial problem between the consumer and the transceiver designers in terms of who wants to take responsibility for failures. Well, you know, so when I was working on some of this in grad school, you know, co-packaged optics was coming about and there was a lot of

(23:55):
talk about it.
But, you know, recently there have been some new types of technologies that have been entering this sort of power-saving space in, you know, intra-data center interconnects, especially for AI clusters. They're called LPO, linear pluggable optics, and LRO, linear retimed optics, and they've been proposed, again for power savings, for inside-the-data

(24:18):
center optical links.
Just, could you give us an overview of how those are different from CPO, co-packaged optics, and the traditional pluggable transceiver model?

Speaker 3 (24:31):
Yeah.
So I think they're both very promising avenues. I would look at it probably from two points of view. One is the systems point of view. So it's kind of to keep up with connectivity requirements: make sure that within my given power and thermal envelope and cost envelope, I can just keep scaling my cables and make them faster.

(24:54):
So I would consider that, for me as an operator, I would say that's cabling as a black box. It needs to work, not cost too much, within whatever constraints, and if that's the case I'm good, right. It just has to get faster over time. But then there's the technology side, right. So I think it's what it is: it's somewhere in between, you know, co-packaged optics and the

(25:16):
pluggable transceivers, to some extent. To simplify it, really at the 50,000-foot level, it's what I mentioned: if we have my GPU or my CPU here, I go over the board trace to the edge of the transceiver, I need to drive this electrical line, I need to potentially retime it, and everything of

(25:37):
this costs power, essentially. And how can I minimize this power? Can I get by with only partial retiming, say only on the transmit side, or only on the receive side, or things like this?
So yeah, I think it's possible to some extent with a clearer system design. Now you need to have an end-to-end design: really, you know, what do your electrical channel and your

(25:58):
transmit and your receive look like, both on the transceiver side and on the module side, to really make sure that this channel is going to work at high speeds. If you get this working, does it help? I'm pretty sure in the short term we can get a little bit more continued bandwidth scaling for another generation or two, at a little bit lower power than with fully retimed transceivers

(26:22):
and so on, and there's less packaging involved there than, say, with a full CPO, or co-packaged optics, solution. So in that sense, yes. But then again, if you zoom out in terms of how much power you're really saving, I think it probably solves a thermal problem to some extent.

(26:43):
But the total power needed at the system level typically is not moved that much by just the pluggable transceivers, because the vast majority of the power is being consumed by your processors, by your memory, by your GPUs and accelerators; it's not by the cables. So even with the cables, you have a single-digit percentage.

(27:06):
If you improve that by a factor of 100%, you're not saving double-digit percentages of power. So I think it's something that needs to happen to keep the bandwidth scaling, if it's technologically feasible, and if it's feasible and costs less, people will consume it. But I think it's just one of those puzzle pieces that fit

(27:30):
into the whole system at the end of the day.
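Laurent's zoom-out argument is a classic Amdahl-style bound, and it is easy to make concrete. In the sketch below, the 5% interconnect share is an assumed illustrative figure, consistent with the "single-digit percentage" he mentions:

```python
# Amdahl-style bound on system power savings from better interconnects:
# if cables are only a few percent of total system power, even a large
# relative improvement moves the total very little.
def system_savings(component_share, component_improvement):
    """Fraction of total system power saved when one component's own
    power is reduced by component_improvement (0..1)."""
    return component_share * component_improvement

# Interconnects at an assumed 5% of system power, cut in half:
print(system_savings(0.05, 0.5))   # 0.025: only a 2.5% system-level saving
# Even eliminating interconnect power entirely saves just that 5% share:
print(system_savings(0.05, 1.0))   # 0.05
```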

Speaker 2 (27:34):
So bandwidth is the most important, and power scaling is important, but isn't that the bottom line for a lot of these systems? So, okay, we talked about bandwidth and power. Are there other, perhaps non-technical, challenges that are hindering growth or promoting growth,

(27:54):
I guess in AI.

Speaker 3 (27:57):
There always are, right? There always are, you know, with any technology that's grown so fast, right? So I think you want to be able to scale up your workforce potentially very quickly with very skilled people. Now, those skills might not necessarily exist right away; people need to be trained to some extent, or learn on the

(28:20):
job, right, which it partially is. So I think what I hear is popular at the college, grad school level right now is really learning about the generative AI models, which is important, of course. But then, you know, if we're building whole systems, we need the rest of the stack as well. We do need networking skills. We need, you know, the lower-level system design and chip design skills, and

(28:43):
eventually, you know, if you really want to go to the next generation on a multi-year or five-year-plus time horizon, you know, it comes down to semiconductors and materials: you know, how do you make those faster, how do you scale those as well. So I think skills are a big, big factor here as well.
And maybe one additional thing, I think, that's been mentioned

(29:03):
very much: trust, trust generally in AI models. Obviously, technological changes come by fast, but there's often a societal aspect that gets discussed at the society level: is society ready to adopt this, at whatever level? I think there are two aspects we've heard generally. The current

(29:25):
generation of models, they are prone to hallucinations to some degree; they're getting better over time, and I'm pretty confident those go away over time. That's one aspect.
The other thing is then also copyright infringement. If you use ChatGPT or so, maybe it shouldn't do it, but it's less important than if you're a bank or a hospital or somebody who

(29:47):
might get sued for using data that are proprietary in that sense. So there's a need for known data sets, a need for governance all around, to make sure that the technology is ready to be adopted from a legal standpoint as well as, you know, a societal standpoint.

Speaker 2 (30:08):
So, lots of challenges all across the board. Yeah, yeah, further evidence that AI is permeating every part of our lives going forward, probably. And, you know, I just have one more question. You know, in recent years there have been some publications on optical circuit switches for hyperscale data centers and AI

(30:29):
networks.
Do you have any opinion or ideas related to this technology? Is it going to have a future impact on AI systems?

Speaker 3 (30:40):
Yeah, OCS, optical circuit switching. I've worked on this for a long time. So, how much time do we have?

Speaker 2 (30:46):
Yeah, maybe two minutes.

Speaker 3 (30:52):
So I think there have been recent high-leverage papers that have talked about significant potential in real systems at the hyperscale level, you know, in terms of power savings and better resource utilization using OCS, and there's also been a lot of academic research over a long

(31:13):
time.
There are various technologies that are, you know, being considered there. So I think, again, you know, if you want to adopt this technology at the system level, you need to have a really strong understanding of what your workloads are. What can you actually do with it? It's just not like a plug-and-play replacement for

(31:35):
existing networking technologies.
What can be accelerated withOCS to being smart about
connectivity at a high level?
With OCS to being smart aboutconnectivity at a high level.
So if you compare, you know, optical circuit switches to electrical switching, which is really the elephant in the room here: OCS at a very basic level is essentially just steering light.

(31:57):
You come in with light on one fiber and you go out on another fiber, and essentially it's, you know, a better patch panel, an automated patch panel that can switch faster. What it does not do is what electrical switches do. You know, these switch at the packet level, at the flit level, or even better, they have a lot of buffering, they have logic

(32:18):
inside the switch, and you know, we don't have optical memories or buffers in that sense. So that's not something that OCS can do right off the bat here.
The other thing is, electrical switching is also a huge market. I think it's on the order of $10 billion per year or something like this, and the OCS market is orders of magnitude smaller, with

(32:39):
a bunch of companies there, but it's not like there's a thousand-pound gorilla in there. In terms of the technology, there's a lot that people are looking at. It can switch very fast, but typically there's a trade-off. If you want to switch at the nanosecond or microsecond level, typically, you know, that's associated with a lot of

(33:02):
insertion loss, maybe with polarization dependence, and also with relatively small switch size.
So there could be one application for that. The other extreme is that you switch really slowly, maybe at the millisecond or even second level or something like this, and in that case you can build much higher-radix switch
(33:23):
matrices, also with low insertion loss.
But obviously, if you want to switch at the packet level, that's not something you're considering in that case, because it's just way, way, way too slow. So that's more being considered for integration with, say, a software-defined networking stack, where software

(33:43):
can look at, what's the resource utilization of my cluster? Are there some servers or some parts of the cluster that are not utilized perfectly well? Well, now I have my automated patch panel here. Can I just reconfigure that and really allocate resources, compute resources, on demand in that context?

(34:04):
So I think that's something that's very promising, and I clearly see potential in that area. On the faster side, you know, people work on faster technologies and try to make all of this stuff faster as well. So I think it's a very exciting field to keep following, and I'm positive that we'll see more innovation coming up there.
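The software-defined use case described above lends itself to a small illustration. The sketch below is purely hypothetical Python, not any real OCS or SDN controller API: it shows the division of labor Laurent describes, where the switch only provides circuits and the software above it decides, from utilization measurements, which endpoints get them.

```python
# Illustrative sketch of the slow-switching OCS use case discussed above:
# an SDN-style controller periodically ranks cluster endpoints by load and
# re-patches the optical circuit switch (the "automated patch panel") so
# its limited circuits serve the busiest traffic. All names are
# hypothetical; this is not a real OCS or SDN API.

def plan_cross_connects(utilization, num_ports, threshold=0.5):
    """Assign OCS ports to the most heavily loaded endpoints.

    utilization: dict of endpoint name -> load in [0.0, 1.0]
    num_ports:   number of circuits the switch can provide
    threshold:   minimum load that justifies a dedicated circuit
    Returns a list of (endpoint, port) pairs, busiest endpoint first.
    """
    # Busiest first; break ties by name so repeated runs are deterministic.
    ranked = sorted(utilization.items(), key=lambda kv: (-kv[1], kv[0]))
    hot = [name for name, load in ranked if load >= threshold]
    # A slow (millisecond-to-second) OCS can only offer a limited number
    # of circuits, so everything else stays on the packet network.
    return [(name, port) for port, name in enumerate(hot[:num_ports])]


if __name__ == "__main__":
    loads = {"gpu-rack-1": 0.9, "storage": 0.2, "gpu-rack-2": 0.7}
    print(plan_cross_connects(loads, num_ports=2))
    # -> [('gpu-rack-1', 0), ('gpu-rack-2', 1)]
```

The design point worth noting is that all the intelligence sits in software: the OCS itself just steers light between fibers, which is why slow, high-radix, low-loss switches pair naturally with an SDN stack that reconfigures them on coarse timescales.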

Speaker 2 (34:26):
Well, there's a lot to chew on there. We could probably have another entire podcast, but I think this is a good time to bring Akhil back and chat a little bit more about some professional development topics. So, yeah, welcome back.

Speaker 1 (34:42):
Akhil, that's been absolutely fantastic. It was really good to listen. Obviously, because I do something slightly different to the both of you, while you were talking I was sitting on Google, searching for every single term. Should have just gone to ChatGPT, it would have told me what was going on. I'm going to change the pace just a little, and I'm

(35:04):
going to have a few questions when it comes to professional development, career development and things like that. So this will not be like your PhD viva, this will not be like your postdoc interviews. This is more like the tell-us-why-you're-so-amazing segment of the podcast. We've talked about a lot of things, and I've noticed you've had a lot of experience, Laurent, in

(35:27):
everything you've done. I'll start with the first question, to which I actually have a follow-up. Could you give me and the audience of the podcast a background of where you've done your prior experience and studies, and a snapshot view of your career?

Speaker 3 (35:45):
That's a broad question, right? So I did my studies, you know, university studies, mostly in Switzerland, at ETH Zurich for undergrad. During that time, you know, there were exchange programs. I was always interested in broadening my horizons, so I did exchanges in France and Scotland as part of my studies, a master's project in France, but then I went back to ETH Zurich

(36:08):
for my PhD. That was around the time of the dot-com era, when I started. Fiber optics was really hot, yeah, we need this huge connectivity. But then when I graduated, right, I think the job market had dried up a little bit, at least for what I was looking for. But then finally, you know, through a colleague

(36:32):
at IBM Switzerland, I got in touch with the US. They said, well, we don't have jobs here, but are you interested in going to the US? There's a postdoc with exactly what you're looking for. I said, well, yeah, worst case, you know, I get to travel for free over to the US, even if they don't take me, I go to New York and have a fun week there.
But then, you know, things get rolling, and, you know, finally

(36:54):
start with a postdoc here, and then a permanent position later on.

Speaker 1 (36:58):
So that's how it started.

Speaker 3 (37:04):
Yeah, go ahead. No, no, go ahead, I'll let you finish. Yeah, so then, initially, you know, it was more on the physical layer, you know, all the connectivity side. But I was always interested in, you know, what can you actually do with the technology? So technology by itself, for me, is a means to solve, you know, broader problems, to have impact, and in that sense, you know, the question for optics specifically is, how can you make

(37:26):
your systems faster, how can you make them better? What's the application for all of this technology? So that's how I kept moving across the stack, broadening the horizon.

Speaker 1 (37:36):
That's fantastic. There's a diversity of geography, there's a diversity of experience, and the combination of all of that, effectively, is a culmination of everything that you're doing today. So, in all of that journey, what do you think was your biggest challenge, transitioning both geographically and in your

(37:57):
roles, and what do you think worked for you to overcome any of those challenges?

Speaker 3 (38:05):
Yeah, I don't think there's a single challenge in that sense. I think what you learn, you know, I've been living in so many places, working on very different areas, is that everything has essentially good aspects and bad aspects, and essentially it comes down to, you know, knowing yourself: what you like, what's really important for you, where do you feel comfortable?

(38:26):
Where are you, you know, both challenged in a sense, academically and mentally, and have an interesting job, but also, you know, feel at ease, or have a supportive group around you, both privately and in, you know, the workplace? What works for you individually, and I think that's a choice that everybody can only make by

(38:47):
themselves, right? And I think I've found something that's worked for me here. I'm very happy, this environment is fantastic.

Speaker 1 (38:56):
So I'm sure you've had conversations and met some very, very interesting people along the way. Any perspectives on how it is to actually have a mentor? If you'd like to, mention somebody who's helped you during your career as a mentor. And also, I'm sure by now you would have had a few roles where you were the mentor and somebody else was the mentee, a sort of situation of being on

(39:19):
both sides of the same table. Any reflections on that, if you'd like to share any?

Speaker 3 (39:26):
Yeah, okay, I think that's also a long discussion.

Speaker 2 (39:33):
Don't worry, we can.

Speaker 3 (39:33):
We can always bring you back for a part two and a part three. Yeah, no, I think generally it's important that you open up at some point, right? So after you come into a new place, you know, especially here at the IBM research center, you come in as a nobody. Maybe you're an expert in just the specific part you did during your research, but there have been people here that have

(39:55):
been working for 30, 40 years or even longer on technology, and at the end of the day, it's, how can you learn from them? So don't think you're perfect at everything across the board. No, it's a team play, especially at the systems level here. You've got to do your individual things and be really

(40:17):
good at what you're doing, but then be open to what others tell you, both on the technical level as well as on the way to get things done. So I think that's a lot of it: you've got to put in the effort, you've got to put in the time to learn, to be humble about it, but then, you know, get your

(40:38):
hands dirty, get into the trenches and work it out, right? So there's often no real shortcut to that. You know, at the end of the day, what's working best, I think, in this environment here, is that you have a bunch of experts who are really good at what they're doing, but also able to see across the horizon,

(40:58):
to work with others together, you know, to make this a whole team play all across the board. So I think that's important in terms of mentoring. And you know, when you come in as a fresh PhD grad, you don't necessarily know all that much, so that's a little bit of a learning experience. You know, feedback, candid feedback.

(41:18):
You know, generally you try to be as positive as you can, but, you know, you still need to be open: okay, here's an opportunity you have, you could maybe do that better in that aspect. And then, as we go with newer people as well, same thing,

(41:40):
trying to pass on the knowledge that we have and say, how do you get things done in teamwork? We expect that we can trust everybody individually to do a really good job on their individual front, but then you still have to put all these pieces together. So that's a very broad perspective on mentoring.

Speaker 1 (41:58):
Yeah, that's very interesting, because it quite directly connects onto aspects of what you look for in a person. So say somebody at an early career stage is listening to this and they would like to work with you, or somebody in your sort of career position, and they're trying to look around, network and try and meet more people to

(42:20):
find out the answer to a very simple question: what are you looking for in terms of a person at my career stage being able to work with you and work in this role? So, based on that, could I ask you what you would be looking for? If you were looking for somebody, let's say, a few years behind you in your career, a younger self, or maybe somebody

(42:41):
out there who wants a similar career path, what sort of qualities and attributes do you think are good to cultivate at an early career stage?

Speaker 3 (42:51):
Yeah, I think that's also a broad question. I think there's no one-size-fits-all there, and there I'm really on the front where I want to do these interviews myself, I want to have them evaluated by other people, not necessarily by AI. So I would not necessarily trust that my AI gives me a perfect first pass. Because often, you know, of course you've
(43:12):
got to have the qualifications,you've got to be, you know, in
a technical, quick, moving field, as always, you've got to be,
you know, top notch technicallyqualified and able to do that.
But then, on top of that,you've got to be open.
You've got to be open to learn.
I think all this technology ischanging so fast.
If you want to keep doing whatyou keep doing things move, the

(43:36):
market moves, the technology moves, you've got to be able to learn all the time. Something we're looking for is motivation. You know, a lot of people are motivated, but what do they really want to do? Motivated, but not in the sense of, okay, here's my way or the highway, right. But, you know, are you able to fit in here?

(43:59):
Are you able to learn? Are you humble about this, right? So it's kind of a play between motivation and also drive. You know, are you willing to take the initiative by yourself, or do you need a lot of hand-holding? Of course you need some hand-holding when you come in, but, you know, is the person willing to go this extra

(44:19):
step and say, okay, yeah, this is something maybe I can take on, I'm going to try to do this. You know, take the initiative, take a lead on something like this.

Speaker 1 (44:37):
So it's a mix of, you know, qualification and, you know, being humble about yourself, being willing to learn, being willing to play together and putting in the effort. Yeah, you're always looking for somebody who's ready to take the initiative, who's proactive, open to learn. These are the sort of things that we always hear, but it's good to sometimes listen to good advice twice, so it's good to hear all of that again.

(44:58):
I've got two more questions, and I'll pull Brandon into this as well. You've both volunteered for the society, and you've both volunteered externally as well. How do you think that has added value in terms of your career, and how would you, say, sell the idea to somebody? Because we all know that volunteering is interesting: it helps you meet some very interesting people, you gain skills.

(45:18):
There's a plethora of things you can achieve simply by sharing your time. What have your experiences been, and how would you convince somebody that volunteering is a good idea?

Speaker 3 (45:34):
Yeah, that's also a broad set of questions. I could talk about that for an hour, right? I think on one side there's, you know, an altruistic aspect to it, and there's a more, you know, self-driven development aspect to it. The altruistic

(45:55):
aspect: science generally relies on peer review. I'm putting time in to review your paper, you're putting time in to review my paper, whether it's blind or not blind, this type of thing. You need feedback to really drive the quality up, and things get better over time. So I think that's where the altruistic aspect comes in. At the end of the day, especially junior people, not only junior

(46:17):
people, but they benefit a lot from having experts look over their papers and point out what they can do better. And I think that's passing on the knowledge, whether it's paper review, whether it's driving a conference, whether it's mentoring. It's generally, you know, training the next generation, passing on what works and what doesn't work, right.

(46:39):
Then on the other side, for myself as well, I think I do get a lot out of it. On one side, certainly, working with top-notch people means that you have access to the leading edge of knowledge, right? So say I need to know something about LPO or so, and

(47:03):
Brandon knows more about this than I do, right? So we can't talk company secrets or things like this, but I could call up Brandon and say, hey, give me the lowdown in 15 minutes, instead of me spending three days reading and still not being at the same level, just to give an example here, right? So I think that's maybe a more selfish aspect.

(47:27):
But then, beyond that as well, it's often, I'd say, driving research agendas. You know, when you meet with new people, you see what the leading-edge people are working on, right? What's really leading edge, what people are working on, where the gaps are in industry roadmaps generally, right? So if we want to, say, push OCS or co-packaged optics or

(47:50):
whatever forward, right, okay, we might have this technology piece and this technology piece and this technology piece, but how do you put it together, right? And I think having a broad network in that sense, or, you know, knowing people, is really helpful in terms of driving a full agenda that, you know, may benefit the industry as

(48:10):
a whole, or maybe, you know, a virtual company or something like this, in terms of getting things done. So I think there are multiple aspects to this that are all important, right?

Speaker 2 (48:21):
Yeah, I think you covered quite a lot of the altruism and the networking parts. I think for me, my involvement with the optics community and these professional societies really started in grad school. I received a lot of support from professional societies like the IEEE Photonics Society, and that

(48:44):
really helped me build a lot of community that I didn't have in grad school, and that was, you know, I think, instrumental in me deciding to go into optics and optical communications. And I think that, when I look back on a lot of these professional development activities that I'm involved in today, that's sort

(49:06):
of the core driver: I was given a lot of opportunities when I, you know, didn't have that community or was just starting out in this field, and this is really an opportunity to give back. So that's what I'm really passionate about. And also, there are a lot of good things that come with giving back. Like, as Laurent said, you get to meet a lot of interesting

(49:27):
people, you get exposed to new ideas, and you really get to be a part of this community, the scientific community that's pushing progress forward. So it's very fulfilling on a personal level, and probably also on a professional level too.

Speaker 3 (49:43):
Yeah, maybe one additional thought there is that I wouldn't take it for granted. I think to some extent it's a privilege to be able to serve the broader community in that sense, right? Just to give back, you know, to pass on knowledge to others, to the next generation. I think it's an honor, it's a privilege to some extent as well.

Speaker 1 (50:01):
And it's very interesting, because the idea of giving back to a community that supported somebody at the very start of their career makes a very, very positive feedback loop work, where, effectively, somebody gets a lot of support at the early stages of their career and turns around to eventually become the leadership and the mentors of that particular organization,

(50:22):
wherever that may be. It's always a very good feedback loop, because you've been through the process, you've done everything that somebody after you wants to do. So I really appreciate that that's the cycle we are perpetuating in that way.
One final question, and after this I will basically summarize

(50:42):
everything that we've talked about in, what was it, now we're on podcast number four, five? I don't know which future version we're on. We're here, we're sitting, we're talking about technical aspects of the research and the output that the both of you specialize in. We're talking about careers, trajectories, developments, all of these aspects.

(51:03):
If you had to distill everything down, and this might be yours or it might be somebody else's in your career, a mentor or somebody who's helped you out: if you have to give one piece of advice to a young professional who wants to chart a similar course in academia, industry, big organizations, wherever they

(51:25):
want to go, what would that one piece of advice be?

Speaker 3 (51:32):
Is that for me or for Brandon? Brandon, do you want to go first?

Speaker 2 (51:33):
No, no, wow.
One piece of advice?
Well, I think if I had to distill it to one thing, I'd say be passionate: show your passion, show your enthusiasm, show your excitement. I think, and Laurent could talk about this, when you see someone at work who is passionate about what

(51:55):
they do, it inspires others, and I think that it's really only going to lead to good things in your professional career. So, okay, I've got many other pieces of advice that I could probably give, but that's, I think, the main one. Right? Be passionate and be engaged.

Speaker 3 (52:10):
Yeah, I think that's the first and foremost one. In addition to that, you know, be prepared to get your hands dirty, right? At the technical level, you know, get into the trenches, just do the work, really sweat the little details if you need to. I think that's super important. But then also later on, as you move a little bit up the chain.

(52:31):
So I think what you don't want to do at the later level is micromanage the technical details, that's a recipe for disaster in that sense. But that being said, I think there's a big difference between micromanagement and deep technical dives. You've got to trust your people, you've got to make sure that they all work perfectly well, but you've also got to hold them accountable.

(52:53):
That is to say, if you have a problem, well, we'll find solutions around it, we're easygoing about this, but you've got to come and explain to me in really excruciating detail what you're doing at the end of the day, so that we have confidence that the system is going to work, right. So it's this passion combined with,

(53:15):
you know, putting in the time and the effort, and the mentality, essentially, to, you know, pull things off the ground.

Speaker 1 (53:23):
That is extremely, extremely good advice. I'll add on to that for anybody who's listening. One of my closest colleagues and mentors in my career once said, when I asked him a similar question, that sometimes the people that you work with are almost more

(53:44):
important than the actual work that you do. So make sure that you're surrounding yourself with good people, and, whether you are at a level where you're building a team or you're part of a team, make sure that you perpetuate that positive culture around you, so that you enjoy going to work every single day. And don't we want to do that? Don't we all want to enjoy our time when we go to work?

(54:05):
So we've talked about a lot.

Speaker 3 (54:11):
Maybe on that front, just one last anecdote, a story right off the top of my head here. I remember at the very beginning, when I was here, I went skiing one Sunday, and I think I was like, yeah, it's a hard technical problem, yeah, gotta figure it out. So I was maybe a little bit tense on the ski lift, but then, you know, by coincidence, I was sitting next to a very senior person, and you've got to talk on the chairlift on
(54:33):
people person next to me, yougotta talk on the chairlift on
the way up. So, where do you work? You're working at IBM Research? That's what I said, right. Oh, great, so you don't need to go to work, you're actually getting paid to have fun. And I think a lot of that summarizes it, right?
So, you know, you always have the little things that you've got to

(54:54):
work through, right, but I think having this supportive and broad environment, that goes a long, long way.

Speaker 1 (55:02):
Yeah, I've now got this image in my mind of you on a ski slope, trying to do the math on the skis, going, that's it, I've solved the problem, now I can go down the slope. We've talked about a lot today. We've talked about networks and data centers. We've talked about the complexities of larger systems, the growth of

(55:23):
data centers, co-packaged optics, career challenges, mentorship. I mean, the list is so long that we've decided we're now on episode six. So thank you very much, Brandon and Laurent. This has been an absolutely fantastic chat. I really enjoyed the conversation, and I'm sure everybody listening has as well. I'd like to thank you very much. If you have any final thoughts, now's your opportunity.

(55:45):
Otherwise I will sign off for today.

Speaker 3 (55:50):
Be passionate. Thanks so much for the opportunity, I really appreciate being on here. It's been a pleasure. Excellent.

Speaker 1 (55:56):
Thank you very much, and we'll remember what Brandon said: let's be passionate. Thank you very much, everyone, for listening. It has been a pleasure talking to both Laurent and Brandon today. Join us again next time for the next episode. Thank you for today, and bye-bye.