
June 18, 2024 · 23 mins

In this episode of Dev Interrupted, Conor Bronsdon is joined by Sanghamitra Goswami, Senior Director of Data Science and Machine Learning at PagerDuty. Sanghamitra shares her expertise in AI and data science, including how engineering teams can effectively leverage both within their organizations. She also explores the history and significance of LLMs, strategies for measuring success and ROI, and the importance of foundational data work. The conversation ends with a discussion about practical applications of AI at PagerDuty, including features designed to reduce noise and improve incident resolution. 

Episode Highlights:
00:56 Why are LLMs important for engineering teams to understand?
03:17 How should engineering leaders think about using AI in their products?
07:57 What sort of plan should engineering leaders have to get buy-in for AI?
13:22 Are there ways to show ROI on an investment in AI?
15:08 How should we communicate with customers about AI in our products?
18:53 How can companies find a good use case for AI in their product?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sanghamitra Goswami (00:00):
Each phase should have a deliverable. It might not be a specific product or anything; it's okay to be imperfect, but let's have a goal. And when we have a goal and when we have a plan at each phase, then big foundational work might seem achievable. Although you cannot see an ROI at phase one and two, since I know I can give you that ROI back at phase six, you'll be more convinced than me going to you and saying, hey, I don't really know how to get to phase six, but we need to do phase one.
How can you drive developer productivity, lower costs, and deliver better products? On June 20th and 27th, LinearB is hosting a workshop that explores the software engineering intelligence category. We'll showcase how data-driven insights and innovative workflow automations can optimize your software delivery practices. At the end of the workshop, you'll leave with a complimentary Gartner market guide to software engineering intelligence, and other resources to help you get started. Head to the show notes or linearb.io/events to sign up today.

Conor Bronsdon (01:06):
Welcome back to Dev Interrupted, I'm your co-host Conor Bronsdon. Today I'm joined by Sanghamitra Goswami, Senior Director of Data Science and Machine Learning at PagerDuty. Sanghamitra, thank you so much for joining me.

Sanghamitra Goswami (01:18):
Thank you, Conor, for inviting me.

Conor Bronsdon (01:20):
Honestly, it's my pleasure. I think we could really benefit from your expertise as someone who has such a deep understanding of the approach that data scientists have taken to developing AI models. And, I mean, frankly, AI is all the rage right now, right? Everyone's talking about it. Everyone has opinions on what it is. And so it's important that we level set with the audience a bit, and have the opportunity here to pick the brain of a data science leader like you, and understand how engineering teams can translate this and leverage AI models, or LLMs in particular, within their org. Let's maybe unravel some of the strategies that champion the role of data science, machine learning, and AI teams, and help our audience understand how to navigate this emerging future and ever-expanding landscape. Why don't we talk a bit about the history of LLMs, what they are, and why they're so important.

Sanghamitra Goswami (02:12):
You know, Conor, it's been a crazy time. Nine months back, when ChatGPT came out, PagerDuty leadership told me, Mitra, we need to do something with LLMs. It's crazy the way the world is saying, hey, we want to do LLM, let's have a feature that uses it. So what are LLMs? Large language models leverage foundational machine learning, AI, and deep learning models to understand natural language and give answers, so they can talk to us like a bot. It's based on the NLP models that we have built out. If you look at the history of NLP, the first paper came out in 1948, from Shannon and Weaver. And during that time, we didn't have the computer storage that is possible now. We couldn't really have a lot of computational power, so it was not possible to run these large language models, because they require large amounts of text. However, from that start with Shannon and Weaver in 1948, if you fast forward, I would actually mention there is another milestone where the transformer architecture got introduced, and "Attention Is All You Need" is the paper. So if I look at these two milestones and how the landscape has changed, the computational power, the GPUs: with everything in mind, now is the perfect time that we can all reap the benefits of years of research, computation, and engineering. So this is the perfect time.

Conor Bronsdon (03:52):
It's, it's really interesting to think about how, even just a few years ago, before we realized that we could parallelize processes within AI model development using GPUs, there just wasn't this speed of development that we've seen around AI models, you know. So I'd love to talk about how teams can actionably leverage AI or LLMs within their tooling. What would be your advice to engineering leaders who are thinking, hey, I wanna start using AI to extend the capacity of our product? How should they kick off?

Sanghamitra Goswami (04:24):
I think, let's start with AI. Just AI. Someone wants to do AI with their teams. That is a huge, huge challenge, because if you look at all the leaders in the industry, how do we see if something is successful? We have to get some ROI from all our endeavors. And data science, AI, it's an experiment. So when we start building, it is not always clear how it is going to show up. For example, with AI models, we try to build a model, we experiment with historical data, and then we take the model out into the market. Then it starts getting all this new data from our customers, and our model is giving answers live at runtime. So it's very difficult. It's an experiment that we are running. Leaders should be very careful to know what the risks of running these experiments are. That's number one. Number two, they should be very careful about how they measure adoption, or how the results of these experiments are being used by end users. Once these two pillars are in place, I think it's easier for leaders to measure value and define ROI. Without these two, we can't do AI. So these are the two main pillars, I would say, if you want to do AI.

Now there are other, you know, organizational challenges as well. For example, democratization of data. Everybody talks about it, but it's a difficult job. Data is always in silos. There are different channels. If you look at marketing, there is social media, there are emails, there is YouTube and other social platforms. So many different channels, and you have to bring the data together. If you look at logistics, there is carrier data on the ocean, on road, and in the air. So if we want to measure any process, data is always in silos, and there is a huge effort needed on the side of the person who wants to do AI, to do AI successfully. That's number one. And we have the architecture, we have the solution; we know that we need a data lake, or a source of data where all the data can be consolidated. But it's a huge effort. It's a huge foundational effort that doesn't always show direct ROI. So as a leader in AI, you have to convince your leaders that, hey, I'm going to do this foundational effort. You might not see a direct ROI right now, but, you know, executives talk about this flywheel effect, and it's going to create that flywheel effect at some point. That's the discussion they need to drive.

Conor Bronsdon (07:28):
I think it's really important for us to understand both the potential and the risks of AI. And I don't mean the risks of AGI, you know, this world of, like, crazy cyber villains and creative AI. That's fun to think about. Yeah, sure, we can talk about that. But, like, more specifically to your business: there is a challenge where, if you don't put these foundations in place, there's major risk to how your business will present itself, whether that model will hallucinate. And this is where it comes down to these foundational data science concepts you're talking about. Like, is my data siloed? Do we have the right training data? Is that training data validated? Are there issues with that data set that are going to cause long-term issues? When I've talked to other data science leaders like yourself, that is one of the things people really hone in on. So I'm glad you bring up this foundational piece, because a lot of leaders are getting pushed by their board or their C-suite, like, oh, we need to get AI in the product. But if the data that you're feeding in to train the model isn't data that is actually validated, and maybe peer reviewed or checked, there are major risks that you put in play.

Sanghamitra Goswami (08:35):
Yes, absolutely.
You need to know what your data can do. Without that, you know, garbage in, garbage out.
You cannot save yourself evenwith an LLM.
So, yeah.
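Her "garbage in, garbage out" point can be made concrete. As an illustrative sketch only (not anything PagerDuty uses; the record shape and field names are invented for the example), a minimal pre-training data audit might count records with missing required fields and exact duplicates before any model sees them:

```python
def audit_records(records, required_fields):
    """Return counts of records that fail two basic quality checks:
    missing/empty required fields, and exact duplicate rows."""
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))  # canonical form for exact-duplicate check
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)
    return {"missing": missing, "duplicates": duplicates}

incidents = [
    {"id": "1", "service": "web", "severity": "high"},
    {"id": "2", "service": "", "severity": "low"},       # missing service
    {"id": "1", "service": "web", "severity": "high"},   # exact duplicate
]
print(audit_records(incidents, ["id", "service", "severity"]))
# {'missing': 1, 'duplicates': 1}
```

A report like this is the kind of cheap foundational check that surfaces "garbage" before it quietly degrades a model.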

Conor Bronsdon (08:46):
Very well said. And I think it's challenging, though, for a lot of leaders to get buy-in on that foundational work, as you point out, because there isn't an immediate ROI. So how should engineering leaders start to get that buy-in, to ensure they actually do all the steps needed to be successful?

Sanghamitra Goswami (09:06):
I think, we work in an agile world, so we should have a plan. We should have a plan with different phases. And I always say this to my team: each phase should have a deliverable. It might not be a specific product or anything, but it should have a goal. It should have a deliverable. I don't know if you have read about Wabi-Sabi. It's a Japanese guiding principle. It talks about continuous development, and that imperfection is good. And I say this to my team: while we say let's do something, it is okay to be imperfect, but let's have a goal. And when we have a goal and when we have a plan at each phase, then big foundational work might seem achievable. And that is very important. Say you are my boss, Conor, and I'm talking to you, and I'm giving you a plan, and I'm saying, hey, at phases one and two I have a goal, and I know I can get to phase six. Although you cannot see an ROI at one and two, I know I can give you that ROI back at phase six. If I give you that plan, you'll be more convinced than me going to you and saying, hey, I don't really know how to get to phase six, but we need to do phase one.

Conor Bronsdon (10:26):
So if I'm an engineering leader who hasn't had deep experience with data science or AI, and I'm thinking about how to build this phased approach, what would be the general steps you would advise? Or is there a resource where leaders can go and say, hey, let me look at, I don't know, a template to start applying to our specific use case?

Sanghamitra Goswami (10:45):
Yes, I think there are many, if you look at Google: how do you gather ROI for data science projects? But I would say, before we do that, it is very important that in any organization there is a product counterpart to the data science engineering manager. I believe there should be other people championing data science to get buy-in, rather than the data scientists themselves. So it is critical that you have a friend in the product organization, because they can look at the product holistically, from a top-level view, and they can help you. Go ahead.

Conor Bronsdon (11:20):
What would you say to people who are having trouble finding that champion, or picking the right champion?

Sanghamitra Goswami (11:26):
Convince your boss, convince your boss that you need a product partner. A single person or a group of data scientists can't always do it. You need someone else beyond that organization who can champion for you. Sometimes an executive can play that role too, but given some time and common goals, I think it's critical that data science organizations have a product partner.

Conor Bronsdon (11:50):
And I'm sure this creates some translation challenges across the company, as you're trying to bring in these other stakeholders and get people to buy in, because you know you need the support, but maybe the goals across those different organizations differ. How would you try to solve that cross-organizational translation challenge to get these champions?

Sanghamitra Goswami (12:11):
Well, I don't think the data scientists can solve it by themselves. What they can do is say, hey, I understand it's late in your roadmap; I can be ready on my part, and I can make it easier for you to understand and access what I'm developing. But I do think executives play a very important role here, because they need to drive alignment across different teams at the company. Let's say engineering team A and engineering team B both want to do data science, but they don't have time because their roadmaps are full of other projects. Whatever the data scientists do, it won't convince them, right? So the executives need to prioritize it, or a product partner who can look at it and say, hey, this data science feature, if we do it, will drive a bigger ROI than a small change or any other feature that we are taking out this year. So we need executive alignment on roadmaps across teams, and also some other champions. But what the data science organization leaders can do is think, okay, here are some ways of empowering data scientists and data engineers so that they can write code well. I'm going off on a tangent, because, you know, data scientists come from different backgrounds and they are not always the best software engineers. So they need support from data engineers, and they need to productionize their code, write production-level code. So what the leader in the data science organization does is make sure the organization is empowered to build something that is very easily accessible and can be taken up by the engineering team, so the engineering team doesn't spend a lot of time building or understanding it.

Conor Bronsdon (14:03):
It's interesting, because you're talking a lot about these change management concepts, frankly: getting organizational alignment, building up champions within the org, making sure you get buy-in so you can showcase that ROI, ensuring you have phased rollouts and a clear goal for each step of your process. What if you're having trouble getting that kind of buy-in? Are there ways that data science or engineering teams can leverage currently available AI models or tooling to showcase the ROI and then create that buy-in?

Sanghamitra Goswami (14:36):
Yes, there are also many tools in the market that measure ROI, but it's a little bit difficult. Once again, I think it's an experiment. That's how I see data science. It depends on how much data your customers have. You know, some customers might be new. Some customers might not be storing data very well. So the experiment that I have run on my end might not always look great when I run it with real-time data across all my customers. So those are risk factors. Figuring those out before release, or having a slow release so that you can talk to your customers and figure it out: Conor is a great customer because we have five years of data and our model is going to give very good results, whereas Mitra might not be a good customer because she has only six months of data. Figuring out what fraction of your customers will get good results, and thinking about those risks ahead of time, makes a huge difference.

Conor Bronsdon (15:41):
We've talked about this some on the show before: the importance for leaders of understanding the risks of even exciting opportunities. And I think you bring up a good one, which is that it's really easy for us to exaggerate the impact of AI on a particular customer or on, you know, a feature or product, where maybe the realistic truth is that it's going to take time to develop, because you need that data integrity to actually build up, because you need more customer data. How should you go about communicating with customers about what you're able to do with AI as you build your product?

Sanghamitra Goswami (16:12):
I think building trust with customers is very important. At PagerDuty we have a process called Early Access, where our product is not fully built out, but we have a prototype, and we can ask our customers to use it and give us feedback. I think that feedback is critical. They can tell us, hey, it's giving great results. They can tell us it's giving very bad results. Then we know, and we can improve. So this Early Access program is very useful.

Conor Bronsdon (16:44):
How are you leveraging AI at PagerDuty?

Sanghamitra Goswami (16:47):
We do a lot of AI. We have five features, and when I say features, these are features which have different models in the backend. So we have five AIOps features, and our AIOps, which used to be an add-on for our full IR product, now is a separate BU. So we have a lot of features in AIOps, noise reduction being one of them. I was just talking to someone who mentioned that it's always a problem when you have lots of alerts. We were talking about security cameras, like Google's. You keep on getting alerts and then you are lost, right? The same thing happens when people use PagerDuty. People use PagerDuty when there is an incident and you are getting alerts. And if you get a lot of alerts, if you're inundated by alerts, you don't know which one to go for. So we have very good noise reduction algorithms, and we use AI to build those noise reduction algorithms.
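The noise-reduction idea can be sketched in miniature. To be clear, this is not PagerDuty's algorithm, which uses learned models; it's a toy greedy grouping of alert titles by string similarity, with made-up alert titles, just to show the shape of the problem:

```python
from difflib import SequenceMatcher

def group_alerts(titles, threshold=0.6):
    """Greedy grouping: each alert title joins the first existing group whose
    representative (first) title is at least `threshold` similar; otherwise
    it starts a new group. Fewer groups means less noise for responders."""
    groups = []
    for title in titles:
        for group in groups:
            ratio = SequenceMatcher(None, title.lower(), group[0].lower()).ratio()
            if ratio >= threshold:
                group.append(title)
                break
        else:
            groups.append([title])
    return groups

alerts = [
    "CPU usage high on web-01",
    "CPU usage high on web-02",
    "Disk full on db-01",
    "CPU usage high on web-03",
]
print(group_alerts(alerts))  # the three CPU alerts collapse into one group
```

A real system would learn what "similar" means from incident history and responder behavior rather than hard-coding a string-similarity threshold, but the payoff is the same: fewer, denser signals during an incident.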
Conor Bronsdon:
That's, that's a super smart use case, 'cause that kind of cognitive load really makes us start to tune out. I mean, I'm sure we've all been guilty of this, maybe with our email. Sometimes it's, ah, so many... okay, I just gotta get through these things. And it's so easy to miss something that might be important if you're not really staying on it. That's a great example of how you can leverage the power of AI to assist your customers. Are there other ways that you see PagerDuty leveraging AI in the future?
Sanghamitra Goswami:
Yes, we have root cause. So these are, I'm talking about, you know, non-LLM AIOps features. We have root cause, we have probable origin. What we do with these features is, during the incident, during the triage process, we try to provide information to developers who are looking to figure out what has gone wrong and how we can resolve the incident faster. So we have a suite of features on that end. On the LLM side of things, we have three new features coming out. These are our first GenAI features. We have a summarization use case. I think this is a very good use case, and, once again going off on a little tangent, I always say that if you want to do LLM, find a good use case. And I think this is an awesome one. So during an incident, developers are trying to solve, you know, a problem, and say, okay, I'm resolving the incident. But even during that phase, they have to update their stakeholders, or external companies who are waiting for information about the incident that is going on. That's a very difficult job. Developers, who are always in the backend, need to write up an email and, you know, divide their attention between solving a problem and drafting an email. So you give it to the generative AI platform, because now it can do it for you. And those conversations are already there, in Slack, in Zoom, in Microsoft Teams. So why repeat it? Ask your generative AI model to write it for you.
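The summarization workflow she describes can be sketched as a prompt builder. Everything below, the message format, field names, and prompt wording, is an assumption for illustration, not PagerDuty's implementation; the resulting string would be sent to whatever LLM API the team uses:

```python
def build_status_update_prompt(messages, max_chars=4000):
    """Turn recent incident-channel chatter (oldest first) into a prompt
    asking an LLM for a stakeholder-facing status update. Only the most
    recent `max_chars` characters of transcript are kept, since LLM
    context windows are finite."""
    transcript = "\n".join(f"{m['author']}: {m['text']}" for m in messages)
    transcript = transcript[-max_chars:]  # keep the newest context
    return (
        "Summarize the incident discussion below as a brief, non-technical "
        "status update for stakeholders. State what is impacted, the current "
        "status, and the next steps.\n\n--- transcript ---\n" + transcript
    )

chat = [
    {"author": "alice", "text": "checkout latency spiking since 14:02"},
    {"author": "bob", "text": "rolling back deploy 1243 now"},
]
prompt = build_status_update_prompt(chat)
# `prompt` would then be passed to the team's chosen LLM client.
```

The appeal of the use case is exactly what she notes: the raw material (the Slack/Zoom/Teams conversation) already exists, so the model only has to reshape it for a different audience.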

Sanghamitra Goswami (19:43):
So I think this is a very, very good use case, and empowering to the developers who are using it.

Conor Bronsdon (19:51):
And you mentioned this idea of ensuring you find the right use cases, or good use cases. What's the approach you think people should take to that?

Sanghamitra Goswami (19:59):
What is the problem that you are solving? How would you provide relief to your customers or the end users? What would they find useful? That's the key. And that's true for LLMs too. You want to find the use case, but is your use case solving a problem that a lot of your customers are asking about?

Conor Bronsdon (20:19):
And that's just good business advice, period. Like, solve your customer problems. The phrase "make your beer taste better" has been popularized recently. It's a great example.

Sanghamitra Goswami (20:28):
We say in PagerDuty: champion the customer.

Conor Bronsdon (20:32):
Great way to put it. I'd love to get some more general thoughts from you about your viewpoint on AI, where the industry's going, the explosion of success now that we have parallelized GPU models and years of them working. Obviously there's been an explosion in the public consciousness of the ability to leverage AI, and as you pointed out, this all goes back to, like, old papers; this goes back to old sci-fi novels, frankly, where we talked about these ideas. And now we're seeing them come into reality. What are some of the things that you're excited about in the current AI revolution that's happening?

Sanghamitra Goswami (21:10):
I think I'm seeing very good use cases.
One use case that I really loved: I am big on Instagram. You know, I was looking at the photo editing capabilities, and you can just take out a person that you didn't like.

Conor Bronsdon (21:25):
So if you've got an ex-boyfriend or girlfriend or something like that... yeah.

Sanghamitra Goswami (21:31):
No, think about it. I want my picture in front of the Eiffel Tower, and I don't want anyone else in it. And I can do that now with AI. So I love it. I love some of the applications that are coming up. This is a fun one, but there are very useful ones too, if I look around. Recently, when I was going through the Chicago airport, they did a facial scan, and they didn't actually scan my boarding pass. Not LLM, but still, it's so cool, where I'm just walking and there is a machine scanning my face.

Conor Bronsdon (22:07):
I've got some privacy concerns, I have to admit, but it is very cool.

Sanghamitra Goswami (22:10):
Yeah, it's just so cool.

Conor Bronsdon (22:12):
Well, thank you so much for taking the time to chat with me today about this. It's been fascinating to dive into your thoughts on AI. Do you have any closing thoughts you'd like to share with the audience, about either how they should approach LLMs or what all of this change means?

Sanghamitra Goswami (22:25):
I would say: data science leaders, fight for space. You need to do more. Think of a good use case. Ask your executives for a product partner. Try to prove that the features you want to develop, the use case you are vouching for and want to build, is going to solve a customer problem, and that it is needed. Write it up; I think writing is very useful. And put it out for people to consider; take feedback. Be vocal. Yeah, I would say that.

Conor Bronsdon (22:58):
Well said, Sanghamitra. Thank you so much for coming on the show. It's been a distinct pleasure. If you're listening to this conversation, consider checking it out on YouTube. We're here in the midst of an incredible LeadDev conference here in Oakland, and I think it would be a ton of fun for you to see us having this conversation live. So that's Dev Interrupted on YouTube, check it out. And once again, thanks for coming on the show.

Sanghamitra Goswami (23:19):
Thank you, Conor. Thanks for the invitation.