Batch Measurements

ISO 13485, medical devices, medical device manufacturing, design of experiments

Question
I have a concentration measurement for an ingredient in a bulk final product. This measurement is close to one of the spec limits. Given each measurement has an inherent error (or range), how do I calculate the probability this measurement is actually within the specs? Or, in other words, how do I calculate the probability that I’m right or wrong in accepting or discarding the batch based on that measurement?

Response
The total variability is the combination of process variation and measurement variability. Measurement variability is further defined as the sum of reproducibility and repeatability. Unfortunately, in many situations the measurement and process variation cannot be separated from the total, especially when only a single value is collected for a measurement. Assuming a measurement system analysis (MSA) was conducted, the repeatability from that study can be used to estimate a confidence interval (or tolerance interval) around a measurement. This requires knowing the degrees of freedom behind the repeatability estimate in order to estimate the probability effectively.
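As a hedged illustration of that idea (the repeatability, degrees of freedom, and observed value below are hypothetical, not from the question), an interval around a single measurement could be sketched as follows:

```python
# Sketch: interval around a single observed value, using the repeatability
# estimate (and its degrees of freedom) from a prior measurement system study.
# All numbers here are hypothetical examples.
from scipy import stats

observed = 79.10     # single measurement of the batch
s_repeat = 0.25      # repeatability standard deviation from the MSA
df = 15              # degrees of freedom behind that repeatability estimate
confidence = 0.95

t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df)
half_width = t_crit * s_repeat
print(f"{confidence:.0%} interval: {observed - half_width:.2f} to {observed + half_width:.2f}")
```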

The reality is that you never know the probability of being right or wrong: the working assumption is that the measurement variability is acceptable and that any reading close to the specification is no different from any other reading. If the measurement system is capable, we must treat the value obtained from the measurement as the true value, without assigning a probability of making a wrong decision.

I will use the following example to clarify the point.

Assume we have two measurement systems: one with a repeatability standard deviation from a measurement study of S = 0.25 and one with S = 0.05. Assuming an upper specification of 79.25, the probability (using the standard normal distribution) that the true mean exceeds the specification when the observed value is below the specification is shown below.

Observed Value   S = 0.25   S = 0.05
79.00            15.9%      0.0%
79.10            27.4%      0.1%
79.20            42.1%      15.9%
79.25            50.0%      50.0%

For the same two measurement systems (S = 0.25 and S = 0.05) and the same upper specification of 79.25, the probability (using the standard normal distribution) that the true mean is below the specification when the observed value is above the specification is shown below.

Observed Value   S = 0.25   S = 0.05
79.30            42.1%      15.9%
79.40            27.4%      0.1%
79.50            15.9%      0.0%
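The probabilities in both tables come straight from the standard normal distribution. A short sketch that reproduces them for the same repeatability values and the upper specification of 79.25:

```python
# Reproduce the tables above: probability that the true mean is on the other
# side of the upper specification (79.25) from the observed value.
from scipy.stats import norm

usl = 79.25
for s in (0.25, 0.05):
    for observed in (79.00, 79.10, 79.20, 79.25, 79.30, 79.40, 79.50):
        if observed <= usl:
            p = 1 - norm.cdf((usl - observed) / s)   # P(true mean > USL)
        else:
            p = norm.cdf((usl - observed) / s)       # P(true mean < USL)
        print(f"s={s:.2f}  observed={observed:.2f}  probability={p:.1%}")
```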

Based on the criticality of the measurement, the measurement system should be improved to minimize the probability of making either a Type I or Type II error.

Steven Walfish
Secretary, U.S. TAG to ISO/TC 69
ASQ CQE
Principal Statistician, BD
http://statisticaloutsourcingservices.com

For more on this topic, please visit ASQ’s website.

Method of Using Gauge Pins

Question:

We recently received a complaint from a customer who claims a hole diameter is oversize. The method of gauging the diameter is with minus gauge pins. The part is a plastic molded part (the material used is PBT). The diameter is .150 +.004/-.002.

The question about method is this: we do not force the maximum pin into the part; we let the weight of the pin carry it into the opening, using no hand pressure except to guide the pin over the opening.

Our customer uses hand pressure to force the maximum pin into the opening. If the gauge pin begins to enter, they continue to force the pin and record the hole as oversize.

Are there any instructions on the proper method of using gauge pins with regard to hand pressure, forced entry, and gauge pin weight?

Thank you.

Response:

This is a question that comes up often.  To begin with, let me say that a gauge pin should never be forced into a machined hole.  The largest pin that can be fully inserted and extracted using only a light finger grip on the sides of the gauge is what determines the hole size.

Most gauge pins used in industry today are Class Z. These can be either “Plus” or “Minus” pins; those most commonly used are the Minus pins.  They are toleranced up to -.0002”, so a pin that actually measures .9998” may be at size and is still generally referred to as a 1.000” pin (the size shown on the pin).

It is common practice in American industry to use a GO/NOGO pin setup.  The size you mentioned, .150 +.004/-.002, would call for a GO pin of .148 and a NOGO pin of .154.  If the NOGO pin will not fit but the GO pin can be fully inserted without interference, the hole is within tolerance and the part is acceptable.  If the NOGO pin fits without interference, the hole is oversize and the part should be rejected.  To touch on that just a little further, keep in mind that if you have a 1.000 hole, a 1.000 pin cannot be inserted into it; that would be a size-on-size interference fit.  However, a 1.000 Minus pin might slip in without difficulty. (See MIL-STD-120, Gage Inspection, for background.)
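To illustrate the arithmetic (a minimal sketch, not part of the original answer): the GO pin is sized at the minimum allowed hole diameter and the NOGO pin at the maximum.

```python
# Sketch: GO/NOGO pin sizes for a toleranced hole such as .150 +.004/-.002.
# A Class Z "minus" pin is toleranced up to .0002" below its marked size,
# so the marked GO pin size equals the minimum hole diameter.
def go_nogo_pins(nominal, plus_tol, minus_tol):
    go = nominal - minus_tol    # smallest allowed hole: GO pin must enter fully
    nogo = nominal + plus_tol   # largest allowed hole: NOGO pin must not enter
    return go, nogo

go, nogo = go_nogo_pins(0.150, 0.004, 0.002)
print(f"GO pin: {go:.3f} in, NOGO pin: {nogo:.3f} in")   # 0.148 / 0.154
```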

One other thing to keep in mind is the surface finish of the holes.  A hole that is out-of-round could also introduce fit problems.

The Machinery’s Handbook shows the American National Standard Tolerances for Plain Cylindrical Gauges.  However, there really is no documented standard (that I am aware of) which tells you how tight or how loose a gauge pin should fit.  The common practices noted above should help you there.

You mentioned that “if the gauge pin begins to enter they continue to try and force the pin”.  It is not uncommon for the beginning of a machined hole, or a hole in an injection molded product, to be slightly larger near the surface.  Various machining and/or molding practices can eliminate that.  Still, it is the full insertion and extraction of a pin, without forcing, that determines the acceptance criteria.

Thank you for the good question.

Bud Salsbury, CQT, CQI

ISO 9001:2008 Requirement for Control and Monitoring of Measurement Equipment

Chemistry, micro testing, chemical analysis, sampling

Inquiry

I am trying to clearly understand the  ISO 9001:2008 requirement for Control and Monitoring of Measurement Equipment.  My question:

If a piece of measurement equipment such as a Karl Fischer titrator or pH meter is calibrated by the user with a known standard traceable to an international standard, does the unit itself need to be sent periodically to a third party for calibration?  It is not clear to me, and in the past I have received a finding for not doing so.  As I read the standard it is not clear.  Can you provide the exact clause and reference statement that would clarify its meaning?

Response:

Your question leads me to believe there was a valid reason for the finding you received. Calibration of a Karl Fischer electrometric titration unit is more of a validation and adjustment. That is, in one common practice, you use sodium tartrate dihydrate in a very fine powder form, along with other substances, and follow all the steps of calibration.  However (and this is why sending your unit to a third party becomes necessary), you cannot be certain your unit is reading accurately if it has not had a certified calibration by a third party. Example: is the water equivalent (WE) of the titrant (the Karl Fischer reagent or titrating solution) based on accurate calculations?
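For illustration only, the water-equivalent calculation behind such a standardization looks like the sketch below. The sample mass and titrant volume are hypothetical; the 15.66% figure is the theoretical water content of sodium tartrate dihydrate.

```python
# Sketch of the water-equivalent (WE) calculation for a Karl Fischer
# standardization with sodium tartrate dihydrate (15.66% water by mass).
# Sample mass and titrant volume below are hypothetical examples.
sample_mass_mg = 250.0      # mass of sodium tartrate dihydrate weighed out
water_fraction = 0.1566     # theoretical water content of the dihydrate
titrant_volume_ml = 7.83    # volume of Karl Fischer reagent consumed

we_mg_per_ml = (sample_mass_mg * water_fraction) / titrant_volume_ml
print(f"Water equivalent: {we_mg_per_ml:.3f} mg H2O per mL titrant")
```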

If you have a known standard, traceable to national standards, that you can use as a comparator, you might be able to set your recalibration periods fairly far apart. This would of course save you money. Nonetheless, unless you can show traceability of your Karl Fischer system, you are not compliant with the standard.

That was a good question and I hope this will help.

Bud Salsbury, CQT, CQI

For more on this topic, please visit ASQ’s website.

Measurement System Analysis

ISO/IEC 17025:2017 General requirements for the competence of testing and calibration laboratories

Question:

Is there ever an exception to the rule about needing full measurement system analysis for any instrument listed in the Evaluation/Measurement Technique column of the control plan?  If an instrument is listed on the control plan, does it HAVE to have GRRs done, in addition to having to prove stability?  Please base the answer on ISO 9001 and TS 16949 requirements, and note whether there is a difference between them for this requirement.

Answer:

Thank you for this interesting question. Clause 7.6 of ISO 9001:2008 makes most of this fairly clear. Any monitoring and measuring equipment used to verify conformity of product must “be calibrated or verified, or both, at specified intervals, or prior to use. . .” Note the phrase “at specified intervals”; I call it out to highlight the importance of calibration cycles. You and your organization can determine what those cycles will be based on the stability of the measuring tool, frequency of use, working conditions, and so on. For example, if you were using a micrometer to check close-tolerance parts and found it good practice to measure the parts frequently, that would be a contributing factor in the decision process. If the working conditions also included a lot of cutting fluids or a good deal of metal dust, another factor is added.

What I am driving at is this: once you have determined that the product conformity you are checking is good and/or consistent, and that your sample frequency is satisfactory, you have no definite requirement for GRRs on the measuring equipment. The calibrations and/or verifications you do must be with equipment that is traceable to international or national measurement standards. If you use working standards as gauges to check measuring equipment throughout production, and those standards are traceable, then you are doing fine. The processes you use to verify the tools, and any in-process measuring practices, should be documented in work instructions or even with photographs or flow charts.

In the second part of your question, you ask if there is a difference between 9001 and TS 16949. I reference section 7.6.1 of TS 16949, where it is put straightforwardly:

7.6.1  Measurement System Analysis 

Conduct statistical studies to analyze variation present in the results of each type of MMD that is referenced in the Control Plan.

Use analytical methods and acceptance criteria that:
- conform to the methods and acceptance criteria in customer reference (MSA) manuals, or
- use other methods, if approved by the customer.

This is an automotive sector-specific QMS standard, and within it you must consider safety and liability in everything you do. So Gage R&R studies are a common practice. Nonetheless, the necessity for them is dictated by individual processes; some may need them, some may not.
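As a rough illustration of what such a statistical study produces (the variance components below are made up), a %GRR figure is typically computed from the study and compared against the acceptance bands commonly cited from the AIAG MSA manual:

```python
import math

# Hypothetical variance components from a gage R&R study (same units as the gage).
sigma_repeatability = 0.8    # equipment variation (EV)
sigma_reproducibility = 0.4  # appraiser variation (AV)
sigma_part = 4.0             # part-to-part variation (PV)

sigma_grr = math.hypot(sigma_repeatability, sigma_reproducibility)
sigma_total = math.hypot(sigma_grr, sigma_part)
pct_grr = 100 * sigma_grr / sigma_total

# Acceptance bands commonly cited from the AIAG MSA manual.
if pct_grr < 10:
    verdict = "acceptable"
elif pct_grr <= 30:
    verdict = "may be acceptable, depending on application and cost"
else:
    verdict = "unacceptable"
print(f"%GRR = {pct_grr:.1f}% -> {verdict}")
```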

So, if an instrument is listed on YOUR control plan, GRRs become a requirement based on the criteria I have noted above. A gage that has proven stability is most often safe from that requirement under 9001, but TS 16949 has more extensive requirements.

Bud Salsbury, CQT, CQI

For more about this topic, please visit ASQ’s website.