
April 17, 2025 • 7 mins


Ever wondered if you're wasting resources by setting unnecessarily high confidence levels for your reliability requirements? You're not alone. Many engineering teams default to 95% or 99% confidence without considering the downstream impact on testing timelines and resources.

This episode tackles a question that's been coming up frequently from listeners: how to choose appropriate confidence levels for reliability requirements and test methods. Rather than making arbitrary decisions, I share a practical approach using your existing Failure Mode and Effects Analysis (FMEA) as a guide. This risk-based method helps you match your confidence levels to the actual risks associated with potential failures.

When you connect your testing strategy to your risk analysis, you create a logical framework for deciding where to invest more testing resources and where you might reasonably accept lower confidence levels. I walk through exactly how to do this by examining the severity of potential failures, the number of possible effects, and what other controls might already be in place. The beauty of this approach is that it leverages work your cross-functional team has already done during FMEA development, providing an objective foundation for your test planning decisions.

For those new to the Quality During Design podcast, this episode exemplifies our philosophy that emphasizes using quality tools early in the development process to make better decisions. Whether you're struggling with reliability testing or just looking to optimize your design process, you'll find practical insights to help you create better products with fewer resources. Subscribe to the podcast, visit qualityduringdesign.com for additional resources, or sign up for our monthly newsletter to stay informed about the latest quality design methodologies.

Visit the podcast blog.

DISCOVER YOUR PRODUCT DEVELOPMENT FOCUS: UNLOCK YOUR IMPACT
Take this quick quiz to cut through the 'design fog' and discover where your greatest potential lies

BI-WEEKLY EPISODES
Subscribe to this show on your favorite provider and Give us a Rating & Review to help others find us!

NEWSLETTER
Subscribe to get updates: newsletter.deeneyenterprises.com

SELF-PACED COURSE
FMEA in Practice: from Plan to Risk-Based Decision Making is enrolling students now. Join over 300 students: Click Here.

ABOUT DIANNA
Dianna Deeney is a quality advocate for product development with over 25 years of experience in manufacturing. She is president of Deeney Enterprises, LLC, which helps organizations and people improve engineering design.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Quality During Design podcast.
I'm your host, Dianna Deeney.
Just a few weeks ago I pulled an episode back to the forefront from the archive having to do with reliability requirements, and since then I've gotten some feedback, input and messages about choosing a confidence level for our reliability requirement, and I covered that in another previous episode of

(00:24):
the Quality During Design podcast.
So to address these ponderings and questions, I'm pulling another episode back from the archive having to do with choosing a confidence level for our reliability requirement or for our test method, and what we're going to use is our FMEA, failure mode and effects analysis.

(00:44):
Instead of blindly setting whatever confidence level we want, we can base it off the risks of failure using our FMEA.
I'm going to share that episode with you in a minute, but if you are a repeat listener of the Quality During Design podcast, welcome back.
If you're new to the Quality During Design podcast, welcome.
We talk a lot about product development and engineers

(01:06):
working to create new products.
Quality During Design is a philosophy that emphasizes the benefits of cross-functional team involvement in design.
It's also a methodology.
We use quality tools to refine design concepts early.
So if you're involved in designing stuff and want or need to know how to do it better, if you want to avoid surprises

(01:29):
during tests, design what your customers really want and have shorter design cycles, and also if you feel like you just need to do more with less and still create the best, we have some resources for you.
I invite you to visit and bookmark the website qualityduringdesign.com.
On that website, you can access and search through the podcast

(01:50):
library.
There are also additional training links and other offerings available that you can access for free.
If you want to stay on top of what's the latest and greatest, then please sign up for our monthly newsletter.
All of this can be done at qualityduringdesign.com.
So, without further delay, I'll share this Quality During

(02:10):
Design Archive episode about choosing confidence levels for tests and requirements using FMEA. Enjoy.
We're developing requirements for our product, including setting reliability requirements and their confidence levels, or we're setting acceptance criteria for our test plans.

(02:31):
What confidence levels do we choose?
We don't have to blindly set them.
We can base it off the risks of failure.
I'll tell you how, after this brief introduction.
Hello and welcome to Quality During Design, the place to use quality thinking to create products others love, for less.
My name is Dianna.

(02:51):
I'm a senior-level quality professional and engineer with over 20 years of experience in manufacturing and design.
Listen in and then join the conversation at qualityduringdesign.com.
Before testing anything, we need to choose what confidence level we want to have in the results.
We need to do this because there's variation in everything

(03:14):
in the way that we measure and test.
The way that we manufacture introduces variation, including the raw materials that we're using and the tools we're using to make it.
Setting a confidence level accounts for the variability we're going to see in our test data.
A confidence level is used in determining the sample size to test.

(03:34):
If we want to make statements about how the design will perform in the field, then we need to test a sample size that's statistically relevant, where we can use statistics to help us predict the performance in the field from a few tested in the lab.
Usually, confidence levels are90%, 95% or 99%.

(03:56):
Why don't we take the most conservative approach and just pick a 99% confidence level, when that may save us time in having to think about it?
It wastes a lot of time and resources later.
The higher the confidence level, the more likely we'll need to

(04:17):
test lots of samples, and that means making units for test, testing them all in the lab and then having a more complex analysis.
Instead, one way we can choose a confidence level that we want for test is to correlate it with the risk of failure associated with it.
Our product requirement is likely a control for a potential failure.
What was the origin of our requirement?

(04:38):
Why did we set it in the first place?
What performance or characteristic of the final design is it controlling?
If our product doesn't meet this requirement, what are the ways that it can fail?
If we have an FMEA, we can find the place in the table where our requirement is a control or where it's associated with a

(04:59):
potential failure mode and cause.
When we find it, then we have a lot of metrics we can use to help us decide on a level of confidence to test based on risk, and if we've done our FMEA earlier, then we would have had it populated with information from our cross-functional team, in a time of cool heads, without the pressures of project

(05:23):
management.
It will be an objective input into what confidence level we
should require for our test.
What are the potential effects of this failure mode?
In other words, what type of harm to the user, environment or performance of the product is possible?
Are there many effects listed or just one?

(05:44):
If there are many effects, we may want a higher confidence level.
What is the severity ranking of the effect?
Is it high or is it low?
The higher the severity ranking, the more likely we should
choose a higher confidence level.
What other controls are in place besides our requirement and what is the detection ranking?

(06:05):
If this requirement is the only control or if it's the strongest control, then we may want to choose a higher confidence level.
We could also use this information to justify a lower confidence level.
If we have a requirement that's associated with a failure that has one effect, that effect is not severe and there are two

(06:27):
other controls associated with that same cause, then maybe we'll choose a lower confidence level.
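To make that reasoning concrete, here is a small, hypothetical Python sketch of the rule of thumb described above; the thresholds and the assumed 1-10 severity scale are illustrative, not from the episode or any standard, so tune them to your own FMEA rating scheme.

```python
def suggest_confidence_level(severity: int, num_effects: int,
                             is_only_or_strongest_control: bool) -> float:
    """Hypothetical mapping from FMEA metrics (1-10 severity scale assumed)
    for the failure mode our requirement controls to a test confidence level."""
    if severity >= 8 or is_only_or_strongest_control:
        return 0.99   # severe effects, or this requirement carries the risk control
    if severity >= 5 or num_effects > 1:
        return 0.95   # moderate severity, or several possible effects
    return 0.90       # low severity, single effect, other controls on the same cause

# The episode's example: one effect, not severe, two other controls on the
# same cause -> a lower confidence level is defensible.
print(suggest_confidence_level(severity=3, num_effects=1,
                               is_only_or_strongest_control=False))  # 0.9
```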
What's today's insight to action?
We should choose a confidence level for our requirements or their test plans.
We can associate that confidence level with the level of risk of our product.
FMEA is a great tool to refer to to help us choose a relevant

(06:49):
confidence level for our tests.
If this episode is helpful to you, I recommend two other previous Quality During Design episodes.
Episode 27, How Many Controls Do We Need to Reduce Risk?, talks more about the controls that we use in an FMEA to control a risk.
Episode 31, Five Aspects of Good Reliability Goals and

(07:13):
Requirements talks about why we want a confidence level associated with our requirement.
Please go to my website at qualityduringdesign.com.
You can visit me there and it also has a catalog of resources, including all the podcasts and their transcripts.
Use the subscribe forms to join the weekly newsletter where I

(07:33):
share more insights and links.
In your podcast app, make sure you subscribe or follow Quality During Design to get all the episodes and get notified when new ones are posted.
This has been a production of Deeney Enterprises.
Thanks for listening.