
June 18, 2024 · 23 mins

In this episode of Dev Interrupted, Conor Bronsdon is joined by Sanghamitra Goswami, Senior Director of Data Science and Machine Learning at PagerDuty. Sanghamitra shares her expertise in AI and data science, including how engineering teams can effectively leverage both within their organizations. She also explores the history and significance of LLMs, strategies for measuring success and ROI, and the importance of foundational data work. The conversation ends with a discussion about practical applications of AI at PagerDuty, including features designed to reduce noise and improve incident resolution. 

Episode Highlights:
00:56 Why are LLMs important for engineering teams to understand?
03:17 How should engineering leaders think about using AI in their products?
07:57 What sort of plan should engineering leaders have to get buy in for AI?
13:22 Are there ways to show ROI on an investment in AI?
15:08 How should we communicate with customers about AI in our products?
18:53 How can companies find a good use case for AI in their product?


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sanghamitra Goswami (00:00):
Each phase should have a deliverable.

(00:02):
It might not be a specific product or anything, it is okay to be imperfect, but let's have a goal. And when we have a goal and when we have a plan at each phase, then big foundational work might seem achievable. Although you cannot see an ROI at phases one and two, since I know I

(00:23):
can give you that ROI back at phase six, that is very important. In that case, you'll be more convinced than me going to you and saying, hey, I don't really know how to get to phase six, but we need to do phase one.
How can you drive developer productivity, lower costs, and deliver better products? On June 20th and 27th, LinearB is hosting a workshop that explores

(00:45):
the software engineering intelligence category. We'll showcase how data-driven insights and innovative workflow automations can optimize your software delivery practices. At the end of the workshop, you'll leave with a complimentary Gartner Market Guide to Software Engineering Intelligence, and other resources to help you get started. Head to the show notes or linearb.io/events to

(01:05):
sign up today.

Conor Bronsdon (01:06):
Welcome back to Dev Interrupted, I'm your co-host Conor Bronsdon. Today I'm joined by Sanghamitra Goswami, Senior Director of Data Science and Machine Learning at PagerDuty. Sanghamitra, thank you so much for joining me.

Sanghamitra Goswami (01:18):
Thank you, Conor, for inviting me.

Conor Bronsdon (01:20):
Honestly, it's my pleasure. I think we could really benefit from your expertise as someone who has such a deep understanding of the approach that data scientists have taken to developing AI models. And, I mean, frankly, AI is all the rage right now, right? Everyone's talking about it. Everyone has opinions on what it is. And so it's important that we level set with the audience a bit, and have the opportunity here to pick the brain of a data

(01:43):
science leader like you, and understand how engineering teams can translate this and leverage AI models, or LLMs in particular, within their org. Let's maybe unravel some of the strategies that champion the role of data science, machine learning, and AI teams, and help our audience understand how to navigate this emerging future

(02:05):
and ever-expanding landscape. Why don't we talk a bit about the history of LLMs, what they are, and why they're so important.

Sanghamitra Goswami (02:12):
You know, Conor, it's been a crazy time now. Nine months back, when ChatGPT came out, PagerDuty leadership told me, Mitra, we need to do something with LLMs. And it's crazy the way the world is saying, hey, we want to do LLMs, let's have a feature that uses LLMs. So what are LLMs? Large language models, they leverage foundational machine

(02:35):
learning, AI, and deep learning models to understand natural language and to give answers, so they can talk to us like a bot. They're based on the NLP models that we have built out. If you look at the history of NLP, in 1948, with Shannon and Weaver, the first paper came out. And during that time, we didn't have the computer storage

(02:59):
that is possible now. We couldn't really have a lot of computational power. So it was not possible to run these large language models, because they require large amounts of text. However, from that start, where Shannon and Weaver were in 1948, if

(03:19):
you fast forward, I would actually mention there is another milestone, where the transformer architecture got introduced, and "Attention Is All You Need" is the paper. So if I look at these two milestones and how the landscape has changed, the computational power, GPUs. With everything in mind, now is the perfect time that we can

(03:43):
all reap the benefits of years of research and computational power, and engineering in that sense. So this is the perfect time.
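
For readers who want to connect the transformer milestone she mentions to something concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the "Attention Is All You Need" architecture. This is an illustrative aside, not code discussed in the episode:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation from 'Attention Is All You Need' (Vaswani et al., 2017).

    Q, K, V: (seq_len, d_k) arrays of query, key, and value vectors.
    Each output row is a weighted average of the value vectors, where the
    weights come from how well that position's query matches every key.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # attention-weighted values

# Tiny toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```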

Conor Bronsdon (03:52):
Absolutely. It's really interesting to think about how, even just a few years ago, before we realized that we could parallelize processes within AI model development using GPUs, there just wasn't this speed of development that we've seen around AI models today. So I'd love to talk about how teams can actionably leverage AI or LLMs within their tooling.

(04:15):
What would be your advice to engineering leaders who are thinking, hey, I want to start using AI to extend the capacity of our product? How should they kick off?

Sanghamitra Goswami (04:24):
I think, let's start with AI. Just AI. Someone wants to do AI with their teams. Okay. That is a huge challenge, because if you look at all the leaders in the industry, how do we see if something is successful in the industry? We have to get some ROI with all the endeavors, right?

(04:44):
And data science, AI, it's an experiment. So when we start building, it is not always clear how this is going to show up. For example, with AI models, we try to build a model. We experiment with historical data, and then we take the model out in the market. And then it starts getting all this new data from our customers

(05:07):
and our model is giving answers live at runtime. So it's very difficult. It's an experiment that we are running. Leaders should be very careful that they know what the risks of running these experiments are. That's number one. Number two, they should be very careful about how they measure

(05:28):
adoption, or how the results of these experiments are being used by end users. Once these two pillars are in place, I think it's easier for leaders to measure value and define ROI. Without these two, we can't do AI. So these are the two main, I would say, pillars if you want to do AI.

(05:52):
Now there are other, you know, organizational challenges as well. For example, democratization of data. Everybody talks about it, but it's a difficult job. The data is always in silos. There are different channels. If you look at marketing, there is social media,

(06:12):
there are emails, there is YouTube or other social networks. So many different channels, and you have to get the data together. If you look at logistics, there is carrier data on the ocean, on road, in air. So if we want to measure any process, data is

(06:33):
always in silos, and there is a huge effort that is needed on the side of the person who wants to do AI, to do AI successfully. That's number one. And we have the architecture, we have the solution. We know that we need a data lake, or a source of data where all the data can be consolidated.

(06:56):
But it's a huge effort. It's a huge foundational effort that doesn't always show direct ROI. So as a leader in AI, you have to convince your leaders that, hey, I'm going to do this foundational effort. However, you might not see a direct ROI right now, but, you know, executives talk about this flywheel effect, and it's going

(07:17):
to create that flywheel effect at some point. So that's the discussion they need to drive.
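
To make the foundational consolidation work she describes concrete, here is a minimal pandas sketch that unions siloed channel data into one shared-schema table, the basic move behind a data lake. The channels, columns, and rows are illustrative assumptions, not anything from PagerDuty:

```python
import pandas as pd

# Hypothetical stand-ins for siloed channel data; in a real data-lake effort
# these would come from separate warehouses, APIs, or object stores.
email = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-06-01 09:00", "2024-06-01 10:30"]),
    "user_id": [101, 102],
    "event": ["open", "click"],
})
social = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-06-01 09:45"]),
    "user_id": [101],
    "event": ["share"],
})

def consolidate(silos: dict[str, pd.DataFrame]) -> pd.DataFrame:
    """Normalize each silo to a shared schema, tag its source channel,
    and union everything into one table, as a data lake would."""
    frames = []
    for channel, df in silos.items():
        normalized = df[["timestamp", "user_id", "event"]].copy()
        normalized["channel"] = channel
        frames.append(normalized)
    return pd.concat(frames, ignore_index=True).sort_values("timestamp")

events = consolidate({"email": email, "social_media": social})
print(events)
```

The point of the sketch is the unglamorous part she emphasizes: most of the effort is agreeing on the shared schema and wiring every silo into it, not the final concat.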

Conor Bronsdon (07:28):
Absolutely. I think it's really important for us to understand both the potential and the risks of AI. And I don't mean, you know, the risks of AGI, this world of, like, a crazy cyber villain and creative AI like that. That's fun to think about. Yeah, sure, we can talk about that. But more specifically, the risks to your business.

(07:48):
There is a challenge where, if you don't put these foundations in place, there's major risk to how your business will present itself, whether that model will hallucinate. And this is where it comes down to these foundational data science concepts you're talking about, like: is my data siloed? Do we have the right training data? Is that training data validated? Are there issues with that dataset that are going to cause long-term issues?

(08:09):
And when I've talked to other data science leaders like yourself, that is one of the things people really hone in on. So I'm glad you bring up this foundational piece, because a lot of leaders are getting pushed by their board or pushed by their C-suite, like, oh, we need to get AI in the product. But if the data that you're feeding in to train the model isn't data that is actually validated

(08:31):
and maybe peer-reviewed or checked, there are major risks that you put in play.

Sanghamitra Goswami (08:35):
Yes.
Yes, absolutely.
You need to know what your data can do. Without that, you know, garbage in, garbage out. You cannot save yourself even with an LLM.
So, yeah.

Conor Bronsdon (08:46):
Very well said. And I think it's challenging, though, for a lot of leaders to get buy-in on that foundational work, as you point out, because there isn't an immediate ROI. So, how should engineering leaders start to get that buy-in, ensuring they actually do all the steps needed to be successful?

Sanghamitra Goswami (09:06):
Yes, I think we work in an agile world, so we should have a plan. We should have a plan with different phases. And I always say this to my team, that each phase should have a deliverable. It might not be a specific product or anything, but it should have a goal. It should have a deliverable. I don't know if you have read about wabi-sabi.

(09:28):
It's a Japanese guiding principle. It talks about continuous development, and that imperfection is good. And I say this to my team: while we say let's do something, it is okay to be imperfect, but let's have a goal. And when we have a goal and when we have a plan at each phase,

(09:50):
then big foundational work might seem achievable. And that is very important. Say you are my boss, Conor, and I'm talking to you, and I'm giving you a plan, and I'm saying that, hey, for phases one and two, I have a goal, and I know I can go to phase six. Although you cannot see an ROI at phases one and two, since I know I can give you that ROI back at

(10:14):
phase six, that is very important. If I give you that plan, you'll be more convinced than me going to you and saying, hey, I don't really know how to get to phase six, but we need to do phase one.

Conor Bronsdon (10:26):
So if I'm an engineering leader who hasn't had deep experience with data science or AI, and I'm thinking about how I build this phased approach, what would be the general steps you would advise? Or is there a resource where leaders can go and say, hey, let me look at, I don't know, a template to start applying to our specific use case?

Sanghamitra Goswami (10:45):
Yes, I think there are many, if you look at Google, like, how do you gather ROI for data science projects? But I would say, before we do that, it is very important that in any organization there is a product counterpart with a data science engineering manager. I believe there should be other people championing data science

(11:06):
to get buy-in, rather than the data scientists themselves. So it is critical that you have a friend in the product organization, because they can look at the product holistically from a top-level view and they can help you. Go ahead.

Conor Bronsdon (11:20):
What would you say to people who are having trouble finding that champion, or picking the right champion?

Sanghamitra Goswami (11:26):
Convince your boss, convince your boss that you need a product partner. A single person or a group of data scientists can't always do everything. You need to have someone else beyond that organization who could champion for you. Sometimes an executive can play that role too, but with having some time and common goals and everything, I think

(11:46):
it's critical that data science organizations have a product partner.

Conor Bronsdon (11:50):
And I'm sure this creates some translation challenges across the company, as you're trying to bring in these other stakeholders and get people to buy in, because you know you need the support, but maybe the goals across those different organizations can be different. How would you try to solve that cross-organizational translation challenge to get these champions?

Sanghamitra Goswami (12:11):
Well, I don't think the data scientists can solve it by themselves. What they can do is say, hey, I understand it's late in your roadmap, and I can be ready on my part, and I can make it easier for you to understand and access what I'm developing. But I do think executives play a very important role here, because

(12:33):
they need to drive alignment across different teams at the organization. Let's say engineering team A and engineering team B both want to do data science, but they don't have time because their roadmaps are full of other projects. Whatever the data scientists do, it won't convince them, right? So the executives need to prioritize that, or a product partner who can look at it and say, hey, this data

(12:57):
science feature, if we do it, will drive huge ROI, more than a small change or any other feature that we are taking out this year. So we need to have executive alignment on roadmap across teams, and also some other champions. But what the data science organization leaders can do is

(13:19):
think of, okay, here are some benefits of empowering data scientists and data engineers so that they can write code well. I'm going off on a tangent here because, you know, data scientists come from different backgrounds and they are not always the best software engineers. So they need support from data engineers, and they need to

(13:40):
productionize their code, write production-level code. So what the leader in the data science organization can do is make sure the organization is empowered to build something that is very easily accessible and can be taken up by the engineering team, so the engineering team doesn't spend a lot of time building or understanding it.

Conor Bronsdon (14:03):
It's interesting, because you're talking a lot about these change management concepts, frankly: getting organizational alignment, building up champions within the org, making sure you get buy-in so you can showcase that ROI, ensuring you have these phased rollouts and a clear goal for each step of your process. What if you're having trouble getting that kind of buy-in?

(14:24):
Are there ways that, you know, data science or engineering teams can leverage currently available AI models or tooling to showcase the ROI and then create that buy-in?

Sanghamitra Goswami (14:36):
Yes, there are also many tools in the market that can show ROI, but it's a little bit difficult. Once again, I think, it's an experiment. That's how I see data science. So it depends on how much data your customers have. You know, some customers might be new.

(14:56):
Some customers might not be storing data very well. So the experiment that I have run on my end might not always be great when I run it with real-time data with all my customers. So having those risk factors, figuring those out before release, or having a slow release

(15:16):
so that you can talk to your customers and figure it out. Conor is a great customer because we have five years of data, and our model is going to give very good results. Whereas Mitra might not be a good customer because she has only six months of data. Figuring those out, and what fraction of your customers will get good results. The risks, I think thinking about those ahead of time

(15:39):
makes a huge difference.

Conor Bronsdon (15:41):
We've talked about this some on the show before, about the importance for leaders of understanding the risks of even exciting opportunities. And I think you bring up a good one, which is that it's really easy for us to over-exaggerate the impact of AI on a particular customer, or on, you know, a feature or product, where maybe the realistic truth is that it's going to take time for it to develop, because

(16:01):
you need that data integrity to actually build up, because you need more customer data. How should you go about communicating with customers about what you're able to do with AI as you build your program?

Sanghamitra Goswami (16:12):
I think building trust with customers is key. At PagerDuty we have a process called Early Access, where our product is not fully built out, but when we are in the Early Access program, we have a prototype, and we can ask our customers to use it and give us feedback. I think that feedback is critical.

(16:33):
They can tell us that, hey, it's giving great results. They can tell us it's giving very bad results. So then we know, and we can improve. So this Early Access program is very useful.

Conor Bronsdon (16:44):
How are you leveraging AI at PagerDuty?

Sanghamitra Goswami (16:47):
We do a lot of AI. So we have five features, and when I say features, these are features which have different models in the backend. So we have five AIOps features, and our AIOps, which used to be an add-on for our full IR product, now is a separate BU. So we have a lot of features in AIOps, like noise

(17:09):
reduction. I was just talking to someone who mentioned that it's always a problem when you have lots of alerts. And we were talking about security cameras, like a Google camera. You keep on getting alerts, and then you are lost, right? So the same thing happens when people use PagerDuty. People use PagerDuty when there is an incident and you are getting alerts. And if you get a lot of alerts, if you're inundated by alerts,

(17:32):
you don't know which one to go for. So we have very good noise reduction algorithms, and we use AI to build those noise reduction algorithms.
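
To illustrate the general idea behind alert noise reduction (a toy sketch, not PagerDuty's actual algorithm, which the episode doesn't detail), here is a grouping of near-duplicate alerts by text similarity, so a responder sees a couple of grouped incidents instead of a flood of raw alerts. The alert strings and threshold are made up for the example:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Cheap text similarity in [0, 1]; production systems would typically
    use learned embeddings or structured alert metadata instead."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def group_alerts(alerts: list[str], threshold: float = 0.7) -> list[list[str]]:
    """Greedy clustering: attach each alert to the first group whose
    representative (first member) is similar enough, else start a new group."""
    groups: list[list[str]] = []
    for alert in alerts:
        for group in groups:
            if similarity(alert, group[0]) >= threshold:
                group.append(alert)
                break
        else:
            groups.append([alert])
    return groups

alerts = [
    "db-primary: connection pool exhausted",
    "db-primary: connection pool exhausted (retry 2)",
    "api-gateway: p99 latency above 2s",
    "db-primary: connection pool exhausted (retry 3)",
]
for g in group_alerts(alerts):
    print(f"{len(g)} alert(s): {g[0]}")
# Four raw alerts collapse into two grouped incidents.
```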
Conor Bronsdon:
That's a super smart use case, because that kind of cognitive load really makes us start to tune out. I mean, I'm sure we've all been guilty of this, maybe with our email. Sometimes it's, ah, so many, okay, I just gotta get through

(17:55):
these things. And it's so easy to miss something that might be important if you're not really staying on it. That's a great example of how you can leverage the power of AI to assist your customers. Are there other ways that you see PagerDuty leveraging AI in the future?

Sanghamitra Goswami:
Yes, we have root cause. So these are, I'm talking about, you know, non-LLM AIOps features.

(18:15):
We have root cause, we have probable origin. What we do with these features is, during the incident, during the triage process, we try to provide information to developers who are trying to figure out what has gone wrong and how we can resolve the incident faster. So we have a suite of features on that end.

(18:36):
On the LLM side of things, we have three new features that are coming out. These are our first GenAI features. We have a summarization use case. I think this is a very good use case, and, once again going off on a little tangent, I always say that if you want to do LLM, find a good use case.

(18:56):
And I think this is an awesome use case. So during an incident, developers are trying to solve, you know, a problem, saying, okay, I'm resolving the incident. But even during that phase, they have to update their stakeholders, or external companies who are waiting for information about the incident that is going on.

(19:17):
That's a very difficult job. Developers, who are always in the back end, need to write up an email and, you know, divide their attention between solving a problem and drafting an email. So you give it to the generative AI platform, because now it can do it for you. And those conversations are already there in Slack, in Zoom,

(19:37):
in Microsoft Teams. So why repeat it? Ask your generative AI model to write it for you.
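
As a rough sketch of how such a summarization flow could work, here is an illustrative example using the OpenAI Python client. The model name, prompt, and incident messages are assumptions for demonstration; PagerDuty's actual implementation is not described in the episode:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical excerpt of an incident channel (Slack/Teams/Zoom chat).
incident_messages = [
    "14:02 alice: checkout latency spiking, p99 at 8s",
    "14:05 bob: db-primary connection pool exhausted, failing over to replica",
    "14:11 alice: failover complete, latency recovering",
    "14:15 bob: root cause looks like a connection leak in the payments service",
]

def draft_stakeholder_update(messages: list[str]) -> str:
    """Ask an LLM to turn raw responder chatter into a stakeholder update,
    so developers can keep debugging instead of drafting emails."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this incident chat as a brief, non-technical "
                        "status update for stakeholders: impact, current status, "
                        "and next steps."},
            {"role": "user", "content": "\n".join(messages)},
        ],
    )
    return response.choices[0].message.content

print(draft_stakeholder_update(incident_messages))
```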

Conor Bronsdon (19:43):
Smart.

Sanghamitra Goswami (19:43):
So I think this is a very, very good use case, and empowering to the developers who are using PagerDuty.

Conor Bronsdon (19:51):
And you mentioned this idea of ensuring you find the right use cases, or good use cases. What approach do you think people should take to that?

Sanghamitra Goswami (19:59):
What is the problem that you are solving for? How would you provide relief to your customers or the end users? What would they find useful? That's the key. And that's true for LLMs too. You want to find the use case, but is your use case solving the problem that is being asked about by a lot of your customers?

Conor Bronsdon (20:19):
And that's just good business advice, period, right? Like, solve your customer problems. The phrase, like, "make your beer taste better," has been popularized recently. It's a great example.

Sanghamitra Goswami (20:28):
We say in PagerDuty: champion the customers.

Conor Bronsdon (20:32):
Great way to put it. I'd love to just get some more general thoughts from you about your viewpoint on AI in general, where the industry's going, the explosion of success here now that we have parallelized GPU models and we have years of them working. Obviously there's been an explosion in the public consciousness of the ability to leverage AI, and as you pointed out, this all goes back to

(20:55):
old papers, and frankly, this goes back to old sci-fi novels, where we talked about these ideas. And now we're seeing them come into reality. What are some of the things that you're excited about in the current AI revolution that's happening?

Sanghamitra Goswami (21:10):
I think I'm seeing very good use cases. One use case that I really loved: I am big on Instagram, you know. I was looking at the photo editing capabilities, and you can just take out a person that you didn't like.

Conor Bronsdon (21:25):
So if you've got an ex-boyfriend or girlfriend or something like that, like, yeah.

Sanghamitra Goswami (21:31):
No, think about it. I want my picture in front of the Eiffel Tower, and I don't want anyone else in it. And I can do that now with AI. So I love it. I love some of the applications that are coming up. This is a fun one, but there are very useful ones too, if I look around. Recently, when I was going to the Chicago airport,

(21:55):
they did facial scanning, and they didn't actually scan my boarding pass. Not LLM, but still, it's so cool. I'm just walking, and there is a machine that is scanning my face.

Conor Bronsdon (22:07):
I do have some privacy concerns, I have to admit, but it is very cool.

Sanghamitra Goswami (22:10):
Yeah.
Yeah, it's just so cool.
Yeah.

Conor Bronsdon (22:12):
Well, thank you so much for taking the time to chat with me today about this. It's been fascinating to dive into your thoughts about AI. Do you have any closing thoughts you'd like to share with the audience, about either how they should approach LLMs or what all of this change means?

Sanghamitra Goswami (22:25):
I would say: data science leaders, fight for space. You need to do more. Think of a good use case. Ask your executives for a product partner. Try to prove that the features you want to develop, the use case you are vouching for and want to build, is going to solve a customer problem,

(22:47):
and that it is needed. Write it up. I think writing is very useful. And give it away for people to consider; take feedback. Be vocal. Yeah, I would say that.

Conor Bronsdon (22:58):
Well said, Sanghamitra. Thank you so much for coming on the show. It's been a distinct pleasure. If you're listening to this conversation, consider checking it out on YouTube. We're here in the midst of an incredible LeadDev conference here in Oakland, and I think it would be a ton of fun for you to see us having this conversation live on the YouTube channel. So that's Dev Interrupted on YouTube, check it out. And once again, thanks for coming on the show.

Sanghamitra Goswami (23:19):
Thank you, Conor. Thanks for the invitation.