
August 24, 2022 • 48 mins

In this episode of Breaking Changes, Postman Chief Evangelist Kin Lane welcomes Tim Velasquez, Software QA Manager at Werner Enterprises. Tim sheds new light on the role that quality assurance plays in the API lifecycle, demonstrating how critical QA teams are to moving enterprise organizations forward.


Episode Transcript

(00:00):
Thank you for tuning in to today's full episode of the Breaking Changes podcast.

(00:11):
I'm your host and chief evangelist for Postman, Kin Lane.
With Breaking Changes, we explore specific topics from the world of APIs, but through
the lens of business and engineering leadership.
Joining me today, we have Tim Velasquez, software QA manager at Werner Enterprises.
Tim completely blew my mind when it comes to the role that QA or quality assurance plays

(00:35):
in the part of the overall API lifecycle, demonstrating how critical they are to moving
enterprise organizations forward.
Let's start with the basics.
Who are you?
What do you do?
My name is Tim Velasquez.
I'm a software QA manager.
I've been in QA for about 17 years.

(00:58):
My goal is to ensure quality across all of the applications QA touches before they're delivered to the user, making sure there are no issues present, what we all like to call bugs.
Where do you do this at?
I do this over at Werner Enterprises.
It's a logistics company.

(01:18):
As you know, if you are ever on the highway, you see those big blue trucks that say Werner. That's us.
We provide all those goods for you, making sure we can get them delivered.
But there's also a lot of demand right now because of the current economy.
How can we meet supply and demand given the strain of limitations on drivers

(01:39):
or on logistics, or whatever you have, or even just how the applications themselves are structured and used, to make sure all the information gets provided and delivered to whoever it needs to go to.
Talk me through what Q&A is.
How does Q&A in an API world differ from historically web applications, mobile applications, stuff

(02:04):
like that?
Yeah, that's a really good question.
Technically, it's QA, not Q&A.
Excuse me.
No, no, it's okay.
It's just part of my job.
Every day, it's about making sure that everybody understands what's being asked of us and how we determine that what's being delivered meets the business features.

(02:24):
So in the day-to-day life of QA, what we do is make sure that for any of the user stories presented to us, we go through them and make sure they meet the definition of ready.
Whatever the platform, we have to understand what the ask is and how to implement it.
And then also, are there dependencies in play where we have to be the

(02:48):
first or the second? Because if there's a second team going ahead of us, we need to stop that to protect the deliverables.
QA is everywhere.
Not just within the specific standalone component.
We're with Scrum Masters, we're with product owners, we're with architects, we're with
developers because we have to be able to make sure everybody can be successful once it comes

(03:13):
to us to mitigate any of those defects.
And how does that work within an API?
So if you remove those nice front-end UIs that have been given to the consumer, let's take that away.
APIs are really critical.
An API defines what is going to be pulled from the database, how it's going to be

(03:33):
manipulated, and then what the end response is, because you have a request and a response.
And what that gives us is an understanding of the ask: how does the API map to the database?
What's the ETL transformation?
What's the business intelligence?

(03:53):
APIs are incredibly critical because they're the bridge from the database to the front end.
You can have a phenomenal front end, but if your APIs are not structurally sound, it becomes volatile.
And do you understand what I mean by that, Kin?
Yeah, yeah.
Oh, yeah.
I mean, chaotic, unknown outcomes in your web and mobile applications, in your integrations.

(04:18):
No, it makes a lot of sense.
And so one of the great ways we've utilized Postman here is that it's such a robust application, it helps us use it for more than just standardized testing practices.

(04:39):
We can then go out and show that our definition of done has been met.
Let's expand on that.
How can we prevent duplication?
And also, how can we keep scalability from exceeding the weight of a user story, shrinking it down so we can have faster deliverables?
Interesting, right?
Yeah.
So when the developers make their endpoints, and they structure the need and

(05:03):
ask of a user story, they create that structure.
They create how the endpoint works, the parameters, client ID, secret ID, the JSON body or XML body, however they're doing the body, right?
And then what's the next step there?
It's QA.

(05:24):
What I've noticed over my years of doing API testing is that QA will have to duplicate that process: try to find out what the headers are, what's required, what the endpoint is, what the method is, client ID, secret ID. You know, keep listening to what I'm saying here.
I'm adding layers.

(05:46):
And then that becomes duplicated discovery for us of something that's already been created.
And so what we did is we made a shared collection, using Postman as our library.
We have it to where all those collections are created by the developers, and we give them those roles and responsibilities.
And that also helps us see the discovery of their unit testing and their baseline

(06:10):
functional testing, which is just negative scenarios.
And then for QA, we take that specific endpoint to see what the test coverage means within that user story.
We can then inherit that and run with it.
And we can say, okay, I want to create a collection of my APIs that are going to apply to, let's say, customer enrollment, just for the heck of it.

(06:33):
So for customer enrollment, we can take all those collections and put them into their own category.
So if I need to test, say, a specific feature's core functionality, does it work?
We have that test already, expanded on top of their single endpoint, because it's already structured and created.
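As a rough sketch of the kind of test layer QA might add on top of an inherited request (the endpoint, field names, and threshold here are hypothetical, not Werner's actual tests), a Postman test script could look like:

```javascript
// Hypothetical Postman test script layered onto an inherited
// "customer enrollment" request; names and thresholds are invented.
pm.test("Status code is 201 Created", () => {
    pm.response.to.have.status(201);
});

pm.test("Response time is acceptable", () => {
    pm.expect(pm.response.responseTime).to.be.below(1000);
});

pm.test("Enrollment response has the expected shape", () => {
    const body = pm.response.json();
    pm.expect(body).to.have.property("enrollmentId");
    pm.expect(body).to.have.property("status");
});
```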

(06:53):
Take that to scale across all the different layers I was telling you about earlier.
I've just drastically reduced it, down to a third of the time, because it's already provided.
So it's a requirement for developers to create the initial collection.
So they're codifying the surface area of this API and their understanding of what

(07:17):
they've built as a collection, and they hand it off to you.
But then you might derive specific business scenarios, or iterate on that collection to accommodate specific features, capabilities, and business scenarios.
You're correct.
So we talked about baseline functional testing and we also talked about unit testing.

(07:40):
But for QA, we have to expand on that.
We have to do end-to-end.
We have to do full functional testing as well, and we look at APIs as chain links, one to the next.
What has the predecessor API done that gets inherited, that certain collection or those certain response
headers, to apply to the next endpoint?

(08:01):
Because it might have certain requirements in the background.
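As a minimal sketch of that chaining (the variable and endpoint names are hypothetical), a test script on the predecessor request can capture a value that the next request in the chain reuses:

```javascript
// Hypothetical test script on the predecessor request: capture a value
// from its response so the next request in the chain can reuse it.
const body = pm.response.json();
pm.collectionVariables.set("customerId", body.customerId);

// A later request can then reference {{customerId}} in its URL or body,
// e.g. GET /customers/{{customerId}}/enrollments
```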
So what we wanted to do is figure out how to make that a smooth transition.
How can we make that strategy work from dev to QA?
And so that helped us come up with the strategy: how to put it into gear and see what the next step is after we do

(08:24):
the end-to-end, after we do the functional testing.
What's the most critical one?
That's regression testing, right?
Yeah.
So developers have their blinders on.
They don't have to understand all this context.
You're saying QA's role is to then bring in all this wider context of how it can actually

(08:46):
be applied in a business situation.
We apply it both positive and negative.
We want to find every possible way to break it.
That's our job.
So we want to be able to, over time, break the code.
It's critical, because the happy path is already being covered by the developers.
QA's responsibility is really to act as the competitive company or, in the case of a bad

(09:11):
customer, to make sure that bad customer experience doesn't happen.
We also want to make sure that the structure of what's been implemented on that end
is sound.
Because if we have a breakage at all in the UI, what's the next layer of testing?
That's the APIs.
So we have to give that assurance before we move to production.

(09:33):
That's why API testing is incredibly critical.
So we just made mention of regression testing, right?
Within regression testing, if you have something that's going to be deployed, what's our
strategy to make sure those functionalities are still being met?
If you make an update, and Kin, you're going to be the developer in this instance,

(09:54):
you make an update to customer, and that affects the customer being able to be enrolled.
Before it even comes into my space, my lab, I already have regression that was executed when it went from dev to QA.
And what that does is validate for me that all of the

(10:19):
core functionality that was not touched was not broken.
But everything that's customer, I can now directly work on, and I can see specifically what you did inside of our shared collection space.
So this helps me out with strategy awareness, as well as reusability.
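As a sketch of how a regression suite like that might run automatically on the dev-to-QA handoff (the file names and folder name are assumptions, using Newman's Node.js API rather than any specific Werner pipeline):

```javascript
// Hypothetical CI step: run only the "Regression" folder of a shared
// collection with Newman when code moves from dev to QA.
const newman = require("newman");

newman.run(
  {
    collection: require("./customer.postman_collection.json"), // invented file name
    environment: require("./qa.postman_environment.json"),     // invented file name
    folder: "Regression", // run only the regression tests
    reporters: "cli",
  },
  (err, summary) => {
    if (err || summary.run.failures.length > 0) {
      process.exit(1); // fail the pipeline so the handoff is blocked
    }
  }
);
```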
You're the one who worked on customer, and you decide to leave.

(10:40):
You went to another company, another team, wherever makes you happy.
What we see culturally is that, unfortunately, that knowledge goes with that person.
So I want to take a moment to think about that.
What happens with that knowledge?
It's gone.
So how can we capture and keep it?

(11:03):
And having those shared workspaces in Postman retains that knowledge for us.
But also, here's the cool thing we just love about how we utilize Postman here.
We can see how it's embedded within our company.
What are all the endpoints that come before and after?
What does that specific endpoint do? By adding those descriptions,

(11:25):
it's now becoming an overall library in good health.
And so when the next person comes in, they have that knowledge.
We've just reduced the time for them to understand what the ask was, what the legacy system is, how those APIs are done.

(11:45):
This isn't normally in the wheelhouse of QA.
This is a whole other dimension of wisdom.
Building wisdom, organizational wisdom, and making sure it's not blowing out the door every time the window or the door is opened and someone comes in and out.

(12:08):
A great approach to this is DevOps, right?
Operations and development.
Operations are the ones that have the need, what they require to be successful inside of the production world.
And then they start creating those epics and those features.
If it's something that's already been inherited, how do we determine what modifications are needed?

(12:28):
So that also helps us make our planning more strategic, more directed.
Because when you weigh stories, you have to be able to say: what's the weight of the work to make the modifications or add on to it?
And let's apply this to both dev and QA, because they're both in the same boat here.
They have to understand what APIs were used, how that customer

(12:53):
was created, the endpoints that apply to it, what the predecessor endpoint is, and what comes next.
What's the structure?
That's the common problem I see from both sides.
Another issue is when someone comes onto the team for onboarding: we'd also have to export the collection and send it over to them.

(13:14):
That makes it truly complex, because you don't know if that person has the most current version.
So then a misunderstanding could occur.
And unfortunately, that creates a nuisance.
We want to mitigate that.
And so we have everything in Postman as a library. I just brought someone onto my team not too long ago, and one of the great things about it is what he mentioned. He's like,

(13:36):
I have the API collection for this specific application.
I was like, well, great news for you.
You already have it.
When you signed on, we had everything configured for you.
You just download the desktop app and go to our domain.
Everything is there.
Because we targeted our workspaces not by team structure.

(13:58):
I know I'm going a little more in depth for you, but let's stay on track here for a moment.
What that showed him right there is: I have everything at hand.
I don't have to create my collection.
I don't have to figure out how it's structured.
It's already done.
I've just reduced my onboarding scale as well.

(14:21):
Yeah, I mean, with single sign-on, they have their employee account, they have their Postman access.
The topology of Werner is already organized and structured by workspaces, with the artifacts that are needed, the APIs, but also the tests, the collections, the documentation, all the

(14:41):
things that go with that, and that whole historical legacy.
It's all right there in the workspace, so they can immediately start searching and go where they need to be to be productive and successful.
That's true.
So let's talk about this another way, too.
Say I ask you: hey, I have another team that's working on, just for the

(15:03):
heck of it, Workday. Workday is going to be applied to what you're working on by an app, because you've got to look at your profile information and make sure it's accurate.
Well, in this case, you can use your workspaces, again, targeted by application, not by team, because teams come and go.
That makes it easy for you to select that dropdown, find that Workday workspace, go to it, see the

(15:24):
features created by collection name, and then dive deeper into the specific APIs that you might be dependent on.
Everything is right there for you.
So you're talking internal and external infrastructure, both or...
Yes.
This also helps out with strategy.
You have to think of global and also local environments.

(15:46):
So remember how we talked about unit testing and baseline functional testing earlier, and we have regression testing and pure functional testing as well.
Well, those also require certain parameters for those headers, client ID and secret ID.
They may not be the same across environments.

(16:06):
And so that's the great thing about using those different parameters and having them preset based upon the specific application you're working on, because some of them may be internal, some may be external. Having that overall architectural and infrastructural idea implemented as a housing library makes it much easier for us in terms of

(16:28):
understandability.
And that was the main target: make this a collaboration, not just a single team.
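As a minimal sketch of those per-environment parameters (the variable names are invented), the same request can read its credentials from whichever environment is selected:

```javascript
// Hypothetical pre-request script: pull credentials from the active
// Postman environment. A QA environment and a production environment
// would define different values for the same keys, e.g. base_url,
// client_id, client_secret, so the collection itself never changes.
pm.request.headers.add({
  key: "client_id",
  value: pm.environment.get("client_id"),
});
pm.request.headers.add({
  key: "client_secret",
  value: pm.environment.get("client_secret"),
});
```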
So you're using the word library pretty regularly here, and it feels like in several forms.
And then you mentioned workspaces and organizing your topology, not by team.

(16:52):
You have resource, capability, application. And it sounds like the naming and ordering of your collections and workspaces has some logic applied to it as well?
Yep.
So you have to think: each company has internal and external APIs, especially if they're using a vendor.
And then how do you identify those two, especially based upon the feature?

(17:16):
It makes it easier for us if we say customer, and then if it's going to be an internal API, whatever naming convention you provide, I always recommend something pretty small and easy to read.
And then for an external API, you can say the feature and then ext, or whatever you have.
This way you know the feature, whether it's internal, and what's external, because you can see how those structures are laid out for those endpoints.
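As a hypothetical illustration of that convention (these names are invented, not Werner's actual scheme):

```javascript
// Invented example of feature-based collection naming in a workspace:
//   customer-enrollment       -> internal API for the enrollment feature
//   customer-enrollment-ext   -> external, vendor-facing API, same feature
//   workday-profile-ext       -> vendor API consumed by an internal app
// The suffix flags internal vs. external at a glance; the prefix names
// the feature, independent of which team owns it today.
```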

(17:42):
That kind of awareness, just think about that. That kind of awareness makes it so I have all of the different endpoints in the library, and when I have to create my testing strategy, I can create my collection and then harvest all those different API endpoints into my own specific collection, however I want it to work,

(18:04):
just to validate that the true end-to-end in that chain is not broken.
And in building out that test scenario, I've reduced the work, because I don't have to rebuild those methods, rebuild the endpoint, rebuild anything.
It's already there.
This is a very different painting of the landscape than what I hear from many customers, where

(18:27):
they have a portal somewhere that people publish APIs or docs or anything to.
This feels more real time, keeping the actual real-world pace of what's going on at Werner and the pace of the needs of everyone working across the teams.
It's true.
So let's talk about this in longevity, okay?

(18:50):
Let's talk about that portal.
From what I've seen with portals, it's a great idea.
It starts off strong, it goes for X amount of time, but then it starts to degrade.
Then the portal starts to lose its value.
And then for me as a tester, I go in there hoping to find the information I need,

(19:10):
especially something like: what's the header name?
And if it doesn't match what's currently in production, I have a mismatch.
Is that because the library is not up to date?
Or did the business analyst not provide the correct header?
Or does what's in production not match the library?
I have so many different possible variables.
Do I create a bug on that?

(19:31):
Do I create a defect on it?
How do I want to structure that as an issue?
That comes down to the integrity of that library.
So using Postman keeps us reliable, keeping the library healthy, keeping it up to date.
And if there are any modifications, we can see them.
But that also comes down to accountability.

(19:51):
You have your responsibility to update the description of your Postman endpoint.
But QA also has a responsibility to discuss the strategy around it.
That's the awareness, right?
In my experience, from the time an API gets developed, code written,

(20:12):
and it's put into production and documentation is published to a portal,
any number of things can be lost.
Or developers, doc folks are busy.
They forget things.
They leave things out.
They don't have time.
Or they take shortcuts along the way.
What you're saying is bringing this closer because each developer is required to create

(20:32):
a collection and hand that off to QA.
You're bringing, I would say, the entire enterprise list of resources and capabilities,
the landscape of what an enterprise organization can do,
kind of closer to the reality of operations with QA as the vehicle for that.
Yeah, because QA is the backbone.

(20:54):
We have to make sure that what's coming out was part of operations' ask, as well as the business's, as well as what the developers built out for them.
Do they meet?
And then we can do the valid as well as the invalid testing, with no breakage.
This is interesting, because I'm working on other talks and conversations

(21:19):
around APIs as a product and API product management, and the notion that when you treat an API as a product, it's not about publicly, externally selling it, APIs for sale.
You have a feedback loop associated with it that allows you to iterate on your API.

(21:40):
In many organizations I talk to, QA is this externally facing filter, this thing that checks things as they go out the door.
It's kind of a one-way street.
But what you're describing is you are that feedback loop.
You are that in and out and that critical piece of the roadmap and legacy breaking

(22:01):
changes and we get bonus points for saying breaking changes here on the show.
So you guys are that mechanism, not just that outward facing quality as it goes out the door.
That's true.
What we want to do is take QA from an afterthought to the forefront.
We want them to be a part of everybody's definition of done, to help out and also

(22:23):
come up with a strategy, saying: we know that you're asking for this API, but you have dependencies here based upon this product that you're asking to bring in.
Having them at the forefront helps them advise on any possible concerns that come in.
Take the user story: we don't want to convert it to a spike to do analysis.
We want to prevent spikes being created on any user story, because that's scale.

(22:48):
We want to shrink that as much as possible and prevent spikes from being created.
And having QA's presence at the forefront has really helped with making sure we can get that quality ahead of time, instead of at the back end, where we would create a mass amount of defects.
So that's what we ended up getting as a reward.
If we have QA at the front, those defects start to shrink down a little more, because the

(23:12):
quality of the work coming through the channel is at a higher deliverable level.
And one of the cool things, I'm going to put this out here as a shout-out to my company: one of the things we did for an internship program is have kids stay with me for six months while I teach them different testing strategies. Because I'm going to ask you, Kin, and you're going to tell me the same story, or something similar.

(23:36):
But one of the things I want to hear from you is your pure thought of: what does QA do?
Kin, can you tell me?
Oh, what does QA do?
I mean, it ensures the quality of the products and services that this company puts out.

(23:56):
Nice.
So the statement you made is really true.
But it's at the back end of it.
So how can we make it forefront?
And so that's where we wanted to teach the kids how to work with the definition of ready, understand what was being asked of the work, and be able to take accountability and say:
no, I've got a question.

(24:16):
I'm concerned, because when future developers come in, they're a little bit timid about saying something doesn't truly make sense.
Unfortunately, that hurts the quality of the code being delivered, so we end up with high defects from them.
We don't want that.
So this helps those kids, within six months with me, get ready and up to speed. I show them how to do testing on APIs, because I'll tell you, the front end is great.

(24:41):
I love it.
But the API is really a great source to validate what's being done in between.
What's that bridge from database to UI?
This gives them more awareness of how to meet the definition of done and make sure their code quality is at a high level with low defects.
And once they're done with me and go to development, we start seeing that their

(25:04):
testing strategies are a lot better.
We also start seeing how they structure their use of the APIs and deliver higher-quality code, compared to someone who's never gone through a QA internship.
So you're exposing them to API literacy fundamentals.
You're bringing them closer to business needs, objectives, goals.

(25:28):
And then putting them into a developer role.
I would say those are some of the biggest deficiencies you see in development.
The old-school notion of: you lock your developers away in a basement, you slide pizza under the door, and they do what they do.
They don't have any idea of what's going on with the business.
They don't learn about the latest next thing, JSON APIs or whatever it may be.

(25:53):
Microservices.
And so what you're saying is, it's fundamental for them to get this exposure right out of the gate.
It's true.
And what this does is give them the awareness.
I keep telling you: awareness is critical. Collaboration, reducing scale, high-quality code.
And that's what QA does.
What we want to do is help bridge those gaps, get you there, and get you that awareness.

(26:15):
And having that structure makes it wonderful.
It makes it so that what you deliver doesn't have a mass amount of defects or brittleness.
That's common with development that's done at a fast pace.
If you can do it right, take the time and structure it appropriately, work on the iterations,

(26:38):
apply that by platform, by API, database, then UI.
That determines how you deliver high-quality code, using a central library of how those APIs are structured.
And how do you retain that knowledge?
That's critical.
How do you retain it?
That's why I can't express enough about Postman.

(26:59):
It does all that retention for me.
Yeah, that's interesting.
That retention.
I'm a big fan, and I work at Postman, but the concept of a collection as a unit of value for the business, for the enterprise, something that can describe a single resource or a single capability, or a suite or set of them, because of workflows and building sequences of API calls.

(27:27):
But codifying that as a collection, and then, as you mentioned, putting it into workspaces.
That ability of it to develop wisdom, I can't come up with a better word right now.
That's a good way to say it.
That accumulation of enterprise knowledge.
That's the opposite of legacy.

(27:52):
But that's a positive notion of the word legacy, rather than what you hear people complaining about, legacy always being a burden and a problem.
You're saying this approach, utilizing Postman and collections and workspaces with this QA approach, is making it a positive version of legacy.
Yes.

(28:13):
And so when new teams come up that have to work on certain products, this allows them to obtain those legacy APIs, how they're being utilized, how they're being applied by a team, because that's the common thought.
How can I figure out how the endpoints are done?
Who has that collection?
Hey, Kin, can you send that over to me?

(28:34):
Yeah, but I have last year's version.
Well, who has the latest?
Oh, he left.
Do you hear the strain?
That typically is the problem with legacy: whoever had that knowledge doesn't exist there anymore.
Provided in a central source location, this prevents that knowledge from going out the window.
But it's not just taking that knowledge, putting it in a workspace, and retaining it.

(28:58):
That knowledge has had that ongoing relationship with QA, testing the good and the bad, codifying it.
So QA in this function is shedding and filtering out a lot of what's known as the legacy problems.

(29:21):
Because it's leaving the errors, the bugs, the deficiencies behind, but also caring for the knowledge, retaining the knowledge needed to keep delivering that core service or whatever is being built.
Yeah.
You have to think of that as regression, right?
What you're talking about is regression testing.
How can we validate that core functionality did not break with the new feature that was added

(29:46):
or modified?
Yeah.
Yeah, but you tied it to a common theme.
We see successful strategies in this last five years, microservices, whatever you call it.
People are realizing there's a very human component to doing these APIs and technology.
It's not just about, hey, we designed the RESTful API perfectly, or level-four REST,

(30:08):
and it dials in.
You're saying: we can onboard people immediately.
As soon as they SSO, they have their Postman account.
Here's the landscape, and things are organized.
You're immediately productive.
You leave, and that knowledge isn't lost; it's retained within these workspaces.
Plus, with this QA process evolution marching everything forward,

(30:31):
yeah, that's powerful.
That's a powerful vision.
Let me expand on that for you real quick before we carry on.
The great thing is this: I'm looking for gaps.
What are the areas that you may not be very familiar with?
So how can I get everybody comfortable using this single structure and testing tool, as well as their collection library, through training?

(30:54):
So what we did was create training videos on how to use Postman, starting from the very beginner level.
What does each of these windows do?
Then to intermediate: okay, this is how to use authorization like OAuth, how to use pre-request scripts, how to use tests, how to apply that within here.
And then going more advanced, talking about using Newman.
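As a minimal sketch of the kind of pre-request script such an intermediate video might cover (the token endpoint and variable names are invented):

```javascript
// Hypothetical pre-request script: fetch a client-credentials token and
// store it so the request's Authorization header can use {{access_token}}.
pm.sendRequest(
  {
    url: pm.environment.get("token_url"), // invented variable
    method: "POST",
    header: { "Content-Type": "application/x-www-form-urlencoded" },
    body: {
      mode: "urlencoded",
      urlencoded: [
        { key: "grant_type", value: "client_credentials" },
        { key: "client_id", value: pm.environment.get("client_id") },
        { key: "client_secret", value: pm.environment.get("client_secret") },
      ],
    },
  },
  (err, res) => {
    if (!err) {
      pm.environment.set("access_token", res.json().access_token);
    }
  }
);
```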

(31:17):
And what this helps with is giving the team the same baseline and building them up.
So now everybody is at that new baseline of understanding for the tool.
To expand on it, we're now creating smaller, more containerized training video topics.
This way I can harvest based upon your team's need,

(31:41):
saying, you're never going to use JMeter, for example.
So I'll never have to worry about trying to hook JMeter in using Postman, using Newman.
Then I can say: all right, I want you to learn the fundamentals. How is Newman used, how do you do the installation, how do you share a collection, branching, and so forth.
I've now made containerized training videos.

(32:02):
And that's something that we're working on for the next three months.
This way we can create more understanding of how to use the tool, because the entire time we've been talking, Kin, it's been: how has this been beneficial?
How do we get everybody on that same baseline?
Because that's the roles and responsibilities and expectations.

(32:24):
Yeah. Wow.
You're my hero with all that.
You're basically building out what I'm calling blueprints internally.
We realized that creating white papers and large guides worked for an older audience, but I would say it isn't working as well now, and more snackable video content

(32:46):
and modular video content is where we're switching to right now.
I agree. And if we can make them smaller components, it makes them easier to understand, because, I mean, honestly,
watching an hour of video, sometimes it just becomes:
wait, I just missed this. What happened? Where am I?
Then you have to go back and try to find it within that whole length.

(33:08):
Let's make it more targeted. Let's make it more focused.
In fact, let's make it hyper-focused, so that it meets the needs of the team.
Well, stay tuned while we bake some of these libraries,
this snackable content, into the product, into the workspaces.
And then my goal is to make it so you can just maintain your own library

(33:31):
of this content and these features. We'll provide the baseline,
but you can automate and add JMeter in there, and add it for our competitors.
And then it's inline within the workspace as part of the training delivery mechanism.
So stay tuned for that.
Yeah. And one of the great things that I like is,

(33:51):
when we do automation for APIs, you know the complexity of what we're talking about:
creating that collection, having an understanding of those endpoints, putting it all in.
Unfortunately, even API automation ends up taking longer as a skill.
Yes, the person can automate. Yes, they can do the
structuring, the serialization, deserialization, whatever you have.

(34:13):
But here's the problem: how do you work with collections that you don't have awareness of?
How do you know your testing has the correct coverage needed?
So what's a really good strategy to me is that we can use both the analytical
as well as the engineering aspects of Postman to provide that force,

(34:36):
to come up with the strategy needed if we decide to go into doing API testing
in an automated fashion. And one of the cool things is that
Postman even provides that for us as well.
Yeah. Well, that's the thing. So at some point, maybe not on the podcast, in a different session:

(34:57):
Postman has an API. Your workspaces have an API, your collections have an API.
So how do you start automating, configuring, and linting?
So, linting more.
On those requirements.
Yeah. Yeah. So you start to see, it's kind of like Terraform and Ansible and other things.

(35:20):
You see topographical, declarative ways of describing your topology and working with it.
But you all have invested all this into your workspace structure.
This is your API factory floor, the virtual factory floor that you guys have put all this
energy into, where developers are able to SSO, get their Postman account the first day, and walk through

(35:43):
the door of this virtual factory that you've all built. Now, once you have workspaces,
ops-level collections can help you automate and configure those workspaces and those
collections, and help you see that layer of it. That's where we see some folks like you that are

(36:04):
further along in their journey getting to it.
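As a rough sketch of what that automation layer can look like against the public Postman API (the inventory logic here is an assumption for illustration; it requires a real API key):

```javascript
// Hypothetical Node.js (18+) script: inventory the workspaces and
// collections that make up the "factory floor" via the Postman API.
const headers = { "X-Api-Key": process.env.POSTMAN_API_KEY };

async function inventory() {
  const ws = await fetch("https://api.getpostman.com/workspaces", { headers })
    .then((r) => r.json());
  for (const w of ws.workspaces) {
    console.log(`Workspace: ${w.name}`);
  }

  const cols = await fetch("https://api.getpostman.com/collections", { headers })
    .then((r) => r.json());
  for (const c of cols.collections) {
    console.log(`Collection: ${c.name} (uid: ${c.uid})`);
  }
}

inventory().catch(console.error);
```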
Yeah. And then let's take this into another area. One of the strains that I've also
noticed within UI automation, the most common theme, is test data.
You constantly hear that: test data. Yes, there are other factors too,
but let's focus on that. Just the test data for right now.

(36:27):
So one of the cool things that I've noticed with using Postman is that I can actually
kick off a certain collection to make sure that my automation has the test data it needs,
by using Newman to execute it within my pipeline. And I like to call that creating my stage.
And then my UI automation will kick off afterwards, based upon the targeting of my tagging.

(36:52):
And then what this does is automatically guarantee that the automation has the test data it
needs, or the conditions applied to the user. So this way I can guarantee that what applies to
that specific automation will be satisfactory, ruling out the most common issue:
test data.
So you have the landscape tagged and defined, and then you have a data catalog that

(37:22):
is applicable across all of this?
Not all of it. There are always going to be
hiccups with some, based upon whether it's internal or external, based on who owns the data.
But how can we make it so our data is live at the time the automation is executed?
Because nobody likes false positives and false negatives.
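As a minimal sketch of that "create my stage" step (collection and environment file names are invented, again using Newman's Node.js API):

```javascript
// Hypothetical pipeline step: run a data-seeding collection with Newman
// so the UI automation that follows has the test data it needs.
const newman = require("newman");

newman.run(
  {
    collection: require("./create-stage.postman_collection.json"), // invented
    environment: require("./qa.postman_environment.json"),         // invented
    reporters: "cli",
  },
  (err, summary) => {
    if (err || summary.run.failures.length > 0) {
      console.error("Stage creation failed; skipping UI automation.");
      process.exit(1);
    }
    console.log("Test data staged; UI automation can start.");
  }
);
```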

(37:43):
Yeah, it's powerful. Well, maintaining that catalog of data, with folks injecting it at the pipeline level or the runner level into a collection, and then having a collection strategy, an environment strategy, a variable strategy, and a data strategy all coming together at that moment, that's where it's at.

(38:06):
Yeah, that's the target we'd like to get to this year.
And so we're making the effort and coming up with an architectural design,
so that within the pipelines, we can have that as a high deliverable. Our goal is to do
whatever we can to make sure that we have the appropriate setup
done in that environment for when automation is kicked off.

(38:27):
Yeah, yeah. And for any happy or unhappy path that you may want to take at that point.
Oh yeah, I plan to break it.
Technically, I mean, that's the job requirement. Seventy percent of the time, our job is to break it.
QA must break it. And then we validate that everything works on the happy path.

(38:48):
That's the test strategy it should always be.
I mean, a developer creates something, and I guess they have a limited scope for what they're
building it for. They may not have the full legacy or the full scope of that service.
But do you guys poke at it to make sure it's honest and true?

(39:09):
Like, you look beyond: can you add more parameters? Can you add headers?
Can you do things? So I'm guessing you guys really stretch that and push the
boundaries. We do. And I'm also noticing within the QA culture that there is actually
more of a precedence now to test the APIs, more than just the UI. And we're starting to

(39:30):
see that within the structures, just making sure that the mapping is done appropriately,
and that the APIs are meeting all the conditions asked for within that feature.
Because when a UI does break, what's the next layer we go to test? We go to the API.
We validate everything's there. So there might even be a structural difference in our own testing

(39:53):
strategy to go with APIs as well. Right now: let's create that collection. Let's create it
by application, and then break those down into the features of that application.
Because right there, that library helps create the stage for your testing.

(40:18):
And testing everything those apps are capable of doing,
and the things those apps aren't capable of doing. You have the more honest view of the landscape.
Yeah, the limitations. Yeah.
What you're talking about is having those limitations and understanding
where the areas are that we can even focus. That gives us the gap analysis.

(40:43):
That gives us the risk assessments.
One of my shticks for APIs: I go to a lot of enterprise organizations to talk to them
about consulting, helping them. And they'll be like, well, we don't have any public APIs.
It's just, you know, microservices, internal, a lot of legacy stuff. And before I go in, I'll
download their mobile app or go to their web app and then run it through Postman Interceptor or the

(41:07):
Postman proxy, generate a collection document, and then go in and say, well, here are all your APIs.
And so many of them just have a blind spot behind their mobile apps. They see the mobile app,
they see the web app, and they don't see those APIs as being public APIs. So what you're really
doing is pulling back the stage curtain, letting all the light in, and going,

(41:30):
all right, here are all the resources at this layer. Yeah. So that comes down to
understandability too. A lot of times we've seen that people are more comfortable with
a front-end system. And, you know, how can we say to them: it's fine, let me show you
how to create Postman collections based upon this specific workspace for your

(41:51):
testing needs. But that's also that training we talked about. You know, you've got to create
that stage. You've got to talk about roles and responsibilities for dev and QA, the expectations
within the user stories of the ask, and also have internal training,
reviewing the areas where you can help make sure everybody's on the same level,

(42:13):
that baseline. That helps out so much, because you take that human component of understandability
and everybody's at that same level. So when you get people to learn how to use APIs,
like what you just talked about, when you provide that collection, then they'll be like:
I know what to do. And it's okay to be in discomfort, because technically we live in that

(42:38):
every day in technology. We always have to be in discomfort, find that serenity, and be able to
find what the ask is and how to meet it, but get that understanding and also work as a
collaborative. That changes your view entirely, not just as a single person, not just as a team
member, but also as an enterprise. These are the expectations, and this is how we do it.

(43:08):
Well, impressive. I mean, I get a lot of customers who respond with: just tell us how to do
APIs so we can check the box and be done, because we're being asked to do APIs. And I'm like, wait,
that's not how it works. You're always going to be doing APIs. But because of PTSD, the great

(43:32):
vulnerability of 2007, the great breaking change of 2011, 12, 13, 14, they have this PTSD and they
don't want to do APIs; they just want stability. It's true. And we know that's their goal,
right? Stability. But what are the stepping stones to get there from where they're at? That's the

(43:58):
roadmap. And we just talked about that the entire time we've been on this call, which is
understandability, accountability, internal training, how to validate the
ask within the user stories. It always seems to come back to that.
Awareness. You keep using the word awareness.

(44:20):
You know how important that is, right? And so, everybody has that fear. And I get it.
I totally do. I get it because I had it as well. But giving the knowledge, getting everybody at
the same level, working in collaboration across dev and QA, that creates such a unique culture.

(44:42):
And we also start seeing that expand to the scrum masters, the product owners; they all have
more awareness of what's happening. And then we talk about UI as a safety net for everybody
to use, because they have something to see. Postman created that. Indirectly, you technically created

(45:03):
that. Those workspaces targeted by application give me everything I need to see. That helps
me out. Plus, how do we categorize it? Because of our roles and responsibilities in creating those
collections for internal and external, I can understand it within the feature. This gives the awareness.
Yeah, it reflects what we're seeing across the board. I have a large retailer, a US-based retailer,

(45:28):
that had a QA team. They were building collections, and they started building collections for
analysts. And the word got out, demand grew. Now the team's dedicated. All they do is build
collections and workspaces for analysts. And the analysts come in, here's the URL, land on it, or the Run
in Postman button embedded somewhere that they see. They click on it, and all they know is they

(45:50):
click the blue button and then download the data set. And that's all they do with Postman.
Another company, a shipping company, does the same with their sales department.
They can provide their sales teams with the numbers they need, ahead of the dashboards and
the other interfaces that their sales team depends on. That's a really good strategy, too.

(46:13):
One of the great things that we hear is that it's being utilized, being structured and collaborative,
and giving business analysts everything that they need to then provide better user stories
that prevent spikes. There's no longer an afterthought. Everything is front of the
stage, at the forefront, just because of awareness. So it's not just QA, not just dev, right?

(46:44):
Well, this stuff's hard to see. I mean, it's hard to see. Everyone wants a digital transformation,
but digital is difficult to see, right? And that's why you keep using awareness because
you can't achieve awareness unless you can see, find, discover and get what you need.

(47:04):
It's really important, but really the main baseline of everything that we talked about
is that collaboration. I will never stop saying it, because I have seen such an amazing
culture here at my company because of this. And I think we've only just scratched the surface. I can't
wait; if we can do another podcast in the next six months, I'll tell you the differences from

(47:28):
then to now. All right. I'm going to put it in the schedule. Let's do it.
Sounds great. All right, Kin. Well, I really appreciate this time. I know that
you're a busy guy, but thank you so much for this. Breaking Changes, this was an amazing time.
I mean, thanks for being here. Thanks for doing this, but you've got me thinking in some different

(47:50):
ways just as part of this conversation. So I definitely want to talk some more, and yeah,
let me just process and simmer, and we can talk some more. Thanks again to Tim for stopping by.
You can find more about Werner Enterprises at werner.com, and you can find Tim on LinkedIn.
You can also subscribe to the Breaking Changes podcast at postman.com

(48:13):
slash events slash breaking dash changes. I'm your host Kin Lane and until next time, cheers.