Martin Gibson
ISO/TS 16949:2009(E) and
AIAG MSA 4th edn. (2010)
Martin Gibson CStat, CSci, MSc, MBB
AQUIST Consulting
[email protected]
© M G Gibson 2010
RSS Destructive Testing MSA
1
Making sense of MSA
Do you know how accurate and precise your measurement and
test equipment are?
Do you suspect that good work is sometimes condemned as bad
simply because of uncertainty in the measurement system; is
bad work ever released as good?
Do you know the cost of non-capable measurement systems?
Do you realise how important it is to understand measurement
systems uncertainty?
Does your auditor share your understanding of measurement
systems?
What can you do about it?
ISO/TS 16949:2009(E)
7.6.1 Measurement System Analysis
Statistical studies shall be conducted to analyse the variation
present in the results of each type of measuring and test
equipment system.
... applies to measurement systems in the control plan.
... analytical methods and acceptance criteria used shall
conform to those in customer reference manuals on MSA.
Other analytical methods and acceptance criteria may be used
if approved by the customer.
Questions:
What is the operational definition of statistical studies?
Do organisations, auditors and quality managers understand statistical
studies?
Why do auditors ask, “Can you show me GR&R studies for each type of
measuring and test equipment system referenced in the control plan?”
ISO/TS 16949 Scheme Update
IF SMMT Webinar, 5 Nov. 2013
Common problems found in ISO/TS16949 audits
Calibration and MSA (7.6 and 7.6.1)
Definition of Laboratory scope
Control of external laboratories
Traceability to national or international standards
MSA not done for all types of measuring systems
MSA only considering gauge R and R
Questions:
1. Why is MSA regarded as GR&R?
Ford Motor Company MSA requirements (2009)
4.35 (ISO/TS 16949 cl. 7.6.1)
All gauges used for checking Ford components/parts per
the control plan shall have a gauge R&R performed in
accordance with the appropriate methods described by
the latest AIAG MSA to determine measurement
capability.
Variable gauge studies should utilize 10 parts, 3 operators & 3
trials
Attribute gauge studies should utilize 50 parts, 3 operators & 3
trials
Questions:
1. Are some Customers leading the thinking?
2. Why just limited to products?
3. What are your Customer expectations?
Measurement System Variation
Observed Variation = Process Variation + Measurement System Variation

Measurement System Variation comprises:
Accuracy (location): Bias, Linearity, Stability – addressed through Calibration
Precision (spread): Repeatability and Reproducibility – Gauge R&R
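The decomposition above is additive in variances, not in standard deviations. A minimal sketch, with illustrative figures only:

```python
import math

# Illustrative figures only: variances add, standard deviations do not
sigma_process = 1.0    # assumed true process standard deviation
sigma_ms = 0.5         # assumed measurement system standard deviation

sigma_observed = math.sqrt(sigma_process**2 + sigma_ms**2)
print(round(sigma_observed, 3))   # ≈ 1.118, not 1.5
```

The observed spread is inflated by measurement error, which is why the % Contribution and % Study Variation metrics later in this deck behave differently.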
AIAG MSA 4th edn. (2010)
Accuracy, Bias, Stability, Linearity, Precision, Repeatability,
Reproducibility, GR&R
Attributes, Variables, & non-replicable data considered
Variables GR&R study
10 parts, 3 operators, 3 measurements
Parts chosen from 80% of tolerance
Destructive testing requires 90 parts from a homogeneous batch
Three analytical methods:
1. Range – basic analysis, no estimates of R&R
2. Average & Range – provides estimates of R&R
3. ANOVA – preferred; estimates parts, appraisers, parts*operators
interaction, and replication error due to the gauge
Question:
1. Do organisations, auditors and quality managers understand MSA?
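The Average & Range method (method 2 above) can be sketched for the standard 10-part, 3-operator, 3-trial layout using the published AIAG constants; the summary statistics fed in below are hypothetical, for illustration only:

```python
import math

def average_and_range_grr(r_bar, x_diff, r_p, n_parts=10, n_trials=3):
    """AIAG Average & Range estimates for a 10 x 3 x 3 variables study.

    r_bar  : average of the operators' average ranges
    x_diff : range of the operator averages
    r_p    : range of the part averages
    """
    # AIAG constants for 3 trials (K1), 3 operators (K2), 10 parts (K3)
    K1, K2, K3 = 0.5908, 0.5231, 0.3146
    ev = r_bar * K1                                   # repeatability (equipment variation)
    av = math.sqrt(max((x_diff * K2) ** 2
                       - ev ** 2 / (n_parts * n_trials), 0.0))  # reproducibility
    grr = math.sqrt(ev ** 2 + av ** 2)
    pv = r_p * K3                                     # part variation
    tv = math.sqrt(grr ** 2 + pv ** 2)                # total variation
    return {"EV": ev, "AV": av, "GRR": grr, "PV": pv, "TV": tv,
            "%StudyVar": 100 * grr / tv,
            "ndc": int(1.41 * pv / grr)}

# hypothetical summary statistics from a completed study sheet
print(average_and_range_grr(r_bar=0.2, x_diff=0.1, r_p=2.0))
```

Unlike ANOVA, this method cannot separate the operator*part interaction, which is one reason the manual prefers ANOVA.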
AIAG ANOVA Models
Crossed vs. Nested
Crossed: Y_ijk = μ + Operator_i + Part_j + (Operator*Part)_ij + ε_k(ij)
Nested: Y_ijk = μ + Operator_i + Part_j(i) + ε_(ij)k
Crossed vs. Nested?
See Barrentine, Moen, Nolan & Provost, Bower, Burdick, Skrivanek
Fixed vs. mixed effects models?
Software?
Minitab V16+ includes fixed and mixed effects and enhanced models; the
pooled standard deviation approach is not included.
SPC for Excel – fixed effects
Other software packages?
Question:
Do organisations, auditors and quality managers understand ANOVA?
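As a sketch of what the ANOVA method computes for the crossed model above, the variance components can be solved from the mean squares via the expected-mean-square equations for random effects. The data below are simulated, and the true component values are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
p, o, r = 10, 3, 3                        # parts, operators, trials

# simulate a crossed study (assumed true standard deviations)
part = rng.normal(0, 2.0, p)              # part-to-part sd = 2.0
oper = rng.normal(0, 0.5, o)              # operator sd = 0.5
inter = rng.normal(0, 0.3, (o, p))        # operator*part sd = 0.3
y = (oper[:, None, None] + part[None, :, None]
     + inter[:, :, None] + rng.normal(0, 0.4, (o, p, r)))   # gauge sd = 0.4

# two-way crossed ANOVA mean squares
grand = y.mean()
om, pm, cm = y.mean(axis=(1, 2)), y.mean(axis=(0, 2)), y.mean(axis=2)
ms_o = p * r * ((om - grand) ** 2).sum() / (o - 1)
ms_p = o * r * ((pm - grand) ** 2).sum() / (p - 1)
ms_op = (r * ((cm - om[:, None] - pm[None, :] + grand) ** 2).sum()
         / ((o - 1) * (p - 1)))
ms_e = ((y - cm[:, :, None]) ** 2).sum() / (o * p * (r - 1))

# solve the expected-mean-square equations, clipping negatives at zero
var_e = ms_e                               # repeatability (gauge)
var_int = max((ms_op - ms_e) / r, 0)       # operator*part interaction
var_o = max((ms_o - ms_op) / (p * r), 0)   # reproducibility (operators)
var_p = max((ms_p - ms_op) / (o * r), 0)   # part-to-part

print({"repeatability": var_e, "interaction": var_int,
       "operators": var_o, "parts": var_p})
```

For a nested (destructive) design the same idea applies, but Part_j(i) is nested in Operator_i and no interaction term is estimable.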
GR&R Variables Data Acceptance Criteria
% Contribution
Measurement System Variation as a percentage of Total Observed
Process Variation using variances (additive)
% Study Variation
Measurement System Standard Deviation as a percentage of Total
observed process standard deviation (not additive)
% Tolerance
Measurement Error as a percentage of Tolerance
Number of Distinct Categories (ndc)
Measures the resolution of the scale
Metric              Good     Acceptable   Unacceptable
% Contribution      < 1%     2-9%         > 9%
% Study Variation   < 10%    11-30%       > 30%
% Tolerance         < 10%    11-30%       > 30%
ndc                 > 10     5-10         < 5

Do organisations, auditors and quality managers understand the metrics?
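As a sketch of how the metrics relate, all four can be derived from the GR&R and part variance components; the input values below are hypothetical, and %Tolerance here assumes the 4th edition's 6-sigma spread:

```python
import math

def grr_metrics(var_grr, var_part, tolerance):
    """Illustrative GR&R acceptance metrics from variance components."""
    var_total = var_grr + var_part
    return {
        "%Contribution": 100 * var_grr / var_total,         # variances: additive
        "%StudyVar": 100 * math.sqrt(var_grr / var_total),  # std devs: not additive
        "%Tolerance": 100 * 6 * math.sqrt(var_grr) / tolerance,
        "ndc": int(1.41 * math.sqrt(var_part / var_grr)),
    }

# hypothetical components: GR&R variance 1, part variance 99, tolerance 80
print(grr_metrics(1.0, 99.0, 80.0))
```

Note the non-additivity in action: a 1% contribution by variance corresponds to a 10% study variation by standard deviation, which is why the two columns carry different thresholds.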
Non-replicable GR&R case study (Anon, 2002)
Ensure that all conditions surrounding the measurement and
testing environment are:
defined, standardized and controlled
appraisers should be similarly qualified and trained
lighting should be adequate and consistently controlled
work instructions should be detailed and operationally
defined
environmental conditions should be controlled to an
adequate degree
equipment should be properly maintained and calibrated,
failure modes understood, etc.
Non-replicable GR&R case study (Anon, 2002)
If the overall process appears to be stable & capable, and all
the surrounding pre-requisites have been met, it may not
make sense to spend the effort to do a non-replicable study
since the overall capability includes measurement error – if
the total product variation and location is OK, the
measurement system may be considered acceptable.
Ironically, a high Cp/Cpk gives a poor ndc!
Question:
1. Do organisations, auditors and quality managers understand this concept?
AIAG FAQs response:
‘If your process is stable and capable, the spread of this acceptable
process distribution includes your measurement error. There may be no
need to study your measurement error from a purely "acceptability"
viewpoint.’
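The irony noted above can be sketched numerically: if study parts span only the observed process spread, a higher Cp leaves less part-to-part variation for the gauge to distinguish, so ndc falls. The tolerance and gauge figures below are assumptions for illustration, and Cp is taken against the observed (measurement-inflated) spread:

```python
import math

tol = 6.0            # total tolerance width (assumed)
sigma_grr = 0.2      # measurement system standard deviation (assumed)

results = {}
for cp in (1.0, 2.0):
    sigma_obs = tol / (6 * cp)       # observed process sd implied by Cp
    sigma_part = math.sqrt(max(sigma_obs**2 - sigma_grr**2, 0.0))
    results[cp] = int(1.41 * sigma_part / sigma_grr)

print(results)   # the more capable process yields fewer distinct categories
```

So a gauge judged "unacceptable" by ndc may still be entirely adequate for a highly capable process, which is the point of the AIAG FAQ response above.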
Questions for Making sense of MSA
What is the operational definition of statistical studies?
Do organisations, auditors and quality managers understand
statistical studies?
Why do auditors ask for GR&R studies?
Why is MSA regarded as GR&R?
Are (some) Customers leading the thinking?
Why is MSA limited to products?
Do you know your Customer expectations?
Do organisations, auditors and quality managers understand
MSA, ANOVA, crossed vs. nested, fixed vs. mixed models, the metrics, and
why high Cp/Cpk gives low ndc?
Is MSA seen just as a QMS requirement or a true part of
continuous improvement?
References
AIAG Measurement System Analysis, 4th edn., (2010)
Anon. Non-replicable GR&R case study, (circa 2002)
Barrentine, Concepts for R&R Studies, 2nd edn., ASQ, (2003)
Bower, A Comment on MSA with Destructive Testing, (2004); see also
keithbower.com
Gorman & Bower, Measurement Systems Analysis and Destructive Testing,
ASQ Six Sigma Forum Magazine, (August 2002, Vol. 1, No. 4)
Burdick, Borror & Montgomery, A review of methods for measurement
systems capability analysis; JQT, 35(4): 342-354, (2003)
Burdick, Borror & Montgomery, Design & Analysis of Gauge R&R Studies,
SIAM, ASA, (2005)
Moen, Nolan & Provost, “Using a Nested Design for Quantifying a Destructive
Test” in Improving Quality Through Planned Experimentation, McGraw-Hill,
1st edn., (1991)
Skrivanek, How to conduct an MSA when the part is destroyed during
measurement, moresteam.com/whitepapers/nested-gage-rr.pdf