Rounding Numbers and Specifications

Automotive inspection, TS 16949, IATF 16949

Question

If I have a data collection form and the data inputs are whole numbers, should the report then use rounded whole numbers rather than two decimal places?  I’d like to be accurate.

Answer

It depends on the specification.  If the specification is a whole number, then each reported value should be a whole number.  The issue arises because inspection, measurement, and test equipment (IM&TE) gives a read-out in decimals, as it should.  (Remember the requirement that the IM&TE should be more accurate than the desired result.)

For example: if the specification is XX, then the test result should be reported as XX.  If the specification is XX.X, then the result should be reported as XX.X (one decimal place).

Not doing so can lead to problems.  If the specification is, for example, 8–10, then a result of 8, 9, or 10 is passing.  If the IM&TE gives a result of 10.1 and you record 10.1, then you open the door to interpretation by others.  The best thing to do is define the recording of test results in the test procedure/method.

Once when I was being audited, the ISO auditor claimed that the company was accepting out-of-specification test results.  The specification was 0.01 max.  The test instrument gave a digital read-out to the fourth decimal place, and the technician recorded all four.  The test result was 0.0102.  The auditor claimed that was over the max of 0.01.  Boy, did I have an argument with him (I won).  After that, the technicians were trained to round the readings and record only two decimal places (i.e., the significant digits of the specification).  This was also added to the test instruction.
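This recording rule can be sketched in a few lines of code. The half-up rounding convention below is an assumption for illustration; the convention actually used should be stated in the test procedure.

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to_spec(reading, spec_decimals):
    """Round a raw IM&TE reading to the number of decimal places
    used by the specification. Half-up is assumed here; the test
    procedure should define which rounding convention applies."""
    quantum = Decimal(1).scaleb(-spec_decimals)  # 0 places -> 1, 2 places -> 0.01
    return Decimal(str(reading)).quantize(quantum, rounding=ROUND_HALF_UP)

# Spec of 0.01 max, recorded to two decimal places:
print(round_to_spec(0.0102, 2))  # 0.01 -- within spec
# Whole-number spec of 8-10:
print(round_to_spec(10.1, 0))    # 10 -- within spec
```

With this rule written into the test instruction, the 0.0102 reading from the anecdote above is recorded as 0.01 and is unambiguously passing.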

James Werner

ISO 17025 Calibration Requirements

ISO/IEC 17025:2017 General requirements for the competence of testing and calibration laboratories

Question:

I was recently in a discussion with someone about ISO 17025 calibration requirements. It was my understanding that all equipment associated with the tests within our ISO 17025 scope needed to have an uncertainty value reported with each calibration. However, my coworker said only tests that actually use the uncertainty value as part of their test results calculations require an uncertainty value. Meaning, we may have tests performed within our ISO 17025 scope, but the equipment doesn’t need an uncertainty value?

Could you please provide some clarity on this?

Response:

First, let us understand why measurement uncertainty is required: it supports the metrological traceability requirement for any measurement made.

The definition of metrological traceability per ISO Guide 99:2007 is:
“Property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty”.

Therefore, if you require your measurements to be traceable, then measurement uncertainty is required. Since this definition of metrological traceability was established in ISO Guide 99 in 2007, there has been confusion about the requirement for reporting measurement uncertainty. ISO/IEC 17025:2005 did not help by stating an “and/or” requirement in Section 5.6.2.1.1 when reporting measurement results with measurement uncertainty.

Whenever such ambiguity exists in the standards, the International Laboratory Accreditation Cooperation (ILAC) provides guides and policy documents for clarification for accrediting bodies and accredited laboratories. ILAC P14:2013 is one such document, providing policy guidance on reporting measurement uncertainties for laboratories and for accrediting bodies to enforce.

In short, if any equipment (and its associated measurements) requires metrological traceability, then measurement uncertainty must be estimated for that equipment, regardless of whether it is within the ISO/IEC 17025 scope of accreditation.
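As a rough illustration of how uncertainty accumulates along the traceability chain, here is a minimal GUM-style sketch; the component values are hypothetical, not from the text. Uncorrelated standard uncertainty components combine by root-sum-square, and a coverage factor gives the expanded uncertainty.

```python
import math

def combined_uncertainty(components):
    """Root-sum-square of uncorrelated standard uncertainty
    components, e.g. reference standard, resolution, repeatability."""
    return math.sqrt(sum(u ** 2 for u in components))

def expanded_uncertainty(u_c, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly
    95% coverage assuming a normal distribution."""
    return k * u_c

# Hypothetical uncertainty budget for a length calibration, in mm:
u_c = combined_uncertainty([0.010, 0.004, 0.006])
U = expanded_uncertainty(u_c)
print(f"u_c = {u_c:.4f} mm, U (k=2) = {U:.4f} mm")
```

Each calibration in the chain contributes such components, which is exactly why the traceability definition above requires “each contributing to the measurement uncertainty.”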

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
www.emc3solutions.com

For more information on this topic, please visit ASQ’s website.

Difference Between ISO/IEC 17025 and ISO 10012

ISO/IEC 17025:2017 General requirements for the competence of testing and calibration laboratories

Q: I am updating the instrumentation section of a product fabrication specification to replace a cancelled military specification (MIL-STD-45662) that specified calibration systems requirements.  I am looking for an industry standard that provides requirements/guidance for documentation of our established schedules and procedures for all of our measuring and test equipment and measurement standards.

I am looking into ANSI/ISO/ASQ Q10012-2003: Measurement management systems — Requirements for measurement processes and measuring equipment and ISO/IEC 17025:2005: General requirements for the competence of testing and calibration laboratories, and I would like guidance on the usage and application of these standards.

A: The two standards in question, ISO 10012 and ISO 17025, have different scopes.

While the scope of both documents includes language that can perhaps cause confusion, what follows is the salient text from both that illuminates the difference between the two.

From the scope of ISO 10012:

“It specifies the quality management requirements of a measurement management system that can be used by an organization performing measurements as part of the overall management system, and to ensure metrological requirements are met.”

From scope of ISO 17025:

“This International Standard is for use by laboratories in developing their management system for quality, administrative and technical operations.”

ISO 10012 focuses on the requirements of the measurement management system. You can consider it a system within the quality management system. It defines requirements relevant to the measurement management system in language that may illustrate interrelations to other parts of an overall quality management system.

ISO 10012 is a guidance document and is not intended for certification. An organization, for example, could have a quality management system that is certified to ISO 9001:2008. Even if the organization chooses to adhere to the requirements of ISO 10012, certification to ISO 9001 does not imply certification to the requirements of ISO 10012.

ISO 17025 describes the requirements for a quality management system that can be accredited (a process comparable to, but different from, certification). It encompasses all aspects of the laboratory.

The competence referred to in the title of the standard relates to the competence of the entire system – not just training of personnel. It addresses such factors as contracts with customers, purchasing, internal auditing, and management review of the entire quality management system – ISO 10012 does not.

In summary, ISO 10012 is a guidance document that addresses one element (namely, management of a measurement system) of a quality management system. ISO 17025 defines requirements for an entire quality management system that can be accredited.

Denise Robitaille
Vice Chair, U.S. TAG to ISO/TC 176 on Quality Management and Assurance
SC3 Expert – Supporting Technologies

Related Content:

Expert Answers: Metrology Program 101, Quality Progress

Measure for Measure: First Step Toward Disaster, Quality Progress

10 Quality Basics, Quality Progress

Standards Column: Using the Whole ISO 9000 Family of Quality Management System Standards, Quality Engineering

Gage R&R Study on a Torque Wrench

Gage R&R, Torque Wrench

Q: I need information on performing a Gage R&R on a torque wrench. We are using the wrench to check customer parts.

A: For reference on both variable and attribute Gage R&R techniques, a good source is the Automotive Industry Action Group (AIAG) Measurement Systems Analysis (MSA) publication.

The traditional torque wrench is a “generate” device in the sense that it generates a torque to tighten or loosen a fastener (a nut or a bolt, etc.). So, in a strict sense, it is not a “measurement” device. Both preset and settable torque wrenches are set to a torque value and then used to tighten or loosen a fastener; when loosening, the wrench indicates how much torque is required to break the fastener loose. Usually, clockwise motion is for tightening and counterclockwise motion is for loosening.

To conduct a variable Gage R&R study on a torque wrench, we would need a “measurement” device, which would be a torque checker capable of registering peak (or breaking) torque. Many such devices are commercially available, and if a facility uses torque wrenches, it is a good idea to have one to verify their performance. Such a device is usually calibrated (ensure traceable, accredited calibration) and provides a reference for the proper working of torque wrenches.

Now, one would conduct a Gage R&R study using the typical format:

  • Two or more appraisers.
  • 5 to 10 repeat measurements at a preset torque by each appraiser, replicated 2 to 3 or more times.

A word of caution on torque wrenches and setting up the Gage R&R:

  • The measurement is operator dependent, so operators need to be trained on proper torque wrench usage techniques.
  • Ensure that the torque is reset between every measurement on a settable torque wrench to simulate actual usage between repeated readings.
  • Ensure the number of repeated readings and replicated readings is the same for all appraisers.

Templates for data collection are available in spreadsheet format from commercial providers. Alternatively, one can design a template from the MSA publication referenced above. The data would be analyzed using the guidelines from the MSA publication.
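For illustration, the average-and-range arithmetic behind such templates can be sketched as follows. The layout (three appraisers, three trials), the K1/K2 constants, and the percent-of-tolerance reporting are assumptions for this sketch; consult the MSA publication for the constants matching your actual study design.

```python
import math

# AIAG-style average-and-range Gage R&R sketch, assuming
# 3 appraisers x 3 trials (K1, K2 are the published 1/d2*
# constants for that layout). For a torque wrench, the "parts"
# role is played by repeated exercises at one preset torque.
K1 = 0.5908  # 3 trials
K2 = 0.5231  # 3 appraisers

def grr_percent_of_tolerance(r_bar, appraiser_means, n, r, tolerance):
    """r_bar: mean within-appraiser range; n: measurements per
    appraiser per trial; r: trials; tolerance: spec width."""
    ev = r_bar * K1                                  # equipment variation (repeatability)
    x_diff = max(appraiser_means) - min(appraiser_means)
    av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n * r)
    av = math.sqrt(max(av_sq, 0.0))                  # appraiser variation (reproducibility)
    grr = math.sqrt(ev ** 2 + av ** 2)
    return 100.0 * 6.0 * grr / tolerance             # 6-sigma GRR spread vs tolerance
```

With hypothetical torque-checker readings (mean range 0.2 N·m, appraiser means of 50.00, 50.05, and 50.10 N·m, n = 10, r = 3, tolerance = 5 N·m), this works out to roughly a 15% GR&R.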

Good luck with the Gage R&R! It is a very useful and worthwhile exercise in understanding your measurement process.

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
http://www.emc3solutions.com/

Related Content:

To learn more about this topic, visit ASQ’s website.

Accuracy of Measurement Equipment

Automotive inspection, TS 16949, IATF 16949

Q: I work for an incoming quality assurance department. In our recent audits, the auditor claimed that high-precision machines such as coordinate measuring machines (CMM) and touchless measurement systems should have higher Gage Repeatability and Reproducibility (GR&R) values compared to less precise equipment such as hand-held calipers and gages. If this is the case, does Measurement System Analysis (MSA) cater to this by providing guidance on the recommended values for each type of measuring equipment in general? If not, should we still stick to the general MSA rules, regardless of the equipment’s precision?

A: When you noted “higher GR&R values,” that in itself can be a bit confusing because the GR&R value is a percentage of errors caused by repeatability and reproducibility variation. The higher the number, the more variation present — and the worse the measurement method is.

As far as I know, MSA doesn’t give specific guidance for recommended values depending on the measuring equipment. Also, I’m not sure of the validity of saying that a CMM is consistently more accurate than other equipment, such as calipers. Although the equipment may theoretically be more accurate, how you stage the part to be measured will also affect the amount of variability, as will the feature being measured.  Consequently, even though the CMM is theoretically more accurate, there may be 20 percent GR&R, mainly due to the holding fixture or the feature being measured. I’m sure you get the point here.

As far as I know, the MSA manuals do discuss what the major inputs should be when deciding the amount of acceptable variation. They strongly recommend looking at each application individually to verify what is required and how the measurement is going to be used.

Another thing to consider is whether you are looking at the GR&R based on total variation or on the specified tolerance. Tolerance-based is more commonly used than total variation, but that may depend on the type of industry.

One thing I would like to mention is that if you have three people take 10 measurements each, and then dump the information into one of the common software programs, it will not matter if they take the 10 measurements with a dial caliper or with a CMM. The instruments’ “accuracy” should not be the deciding factor, but the tolerance base should be.
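A hypothetical example of that distinction: the same measurement-system standard deviation can look quite different as a percent of total variation versus a percent of tolerance. All numbers below are invented for illustration.

```python
import math

grr_sigma = 0.05    # measurement-system standard deviation (hypothetical)
part_sigma = 0.20   # part-to-part standard deviation (hypothetical)
tolerance = 1.0     # spec width, USL - LSL (hypothetical)

total_sigma = math.sqrt(grr_sigma ** 2 + part_sigma ** 2)
pct_total = 100.0 * grr_sigma / total_sigma        # percent of total variation
pct_tol = 100.0 * 6.0 * grr_sigma / tolerance      # percent of tolerance (6-sigma)

print(f"%GRR vs total variation: {pct_total:.1f}%")  # ~24.3%
print(f"%GRR vs tolerance:       {pct_tol:.1f}%")    # 30.0%
```

The ratio you report depends on the denominator you choose, which is why it matters to know whether your software is using total variation or the specified tolerance.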

Also, ISO standards do not dictate GR&R values. If you do what your quality management system says you do, most auditors will not push such an issue. While some auditors may offer “opinions” and suggestions, such items are rarely cause for nonconformance findings.

I hope this helps answer your question.

Bud Salsbury
ASQ Senior Member, CQT, CQI

For more on this topic, please visit ASQ’s website.

Visual Fill Requirements

Pharmaceutical sampling

Q: I work for a consumer products company where more than 60% of our products have a visual fill requirement. This means, aside from meeting label claim, we must ensure the fill level meets a visual level.

What is the industry standard for visual fills?

We just launched Statistical Process Control (SPC), and we notice that our products requiring visual fills show significant variability.

A: This is an interesting question. The NIST SP 1020-2 Consumer Package Labeling Guide and the Fair Packaging and Labeling Act, along with any other industry standards, regulate how you must label a product “accurately.” However, it appears you have been burdened with a separate, and somewhat conflicting requirement — a visual fill requirement.

In most cases, you probably cannot satisfy both requirements without variability. The laws and standards will direct labeling requirements with regard to accuracy, and your company is liable for that. If you choose to use visual fill standards for “in-process” quality assurance, then you would need a fairly broad range between the upper and lower acceptance limits.

Personally, I would use weights and measures as needed to meet customer and legal requirements. These are the data I would use for SPC records.
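A minimal X-bar/R control-limit sketch for those weight data, assuming subgroups of five fills (A2, D3, and D4 are the standard control-chart constants for n = 5); the sample weights are hypothetical.

```python
A2, D3, D4 = 0.577, 0.0, 2.114  # control-chart constants for subgroup size 5

def xbar_r_limits(subgroups):
    """Compute X-bar and R chart control limits from a list of
    equal-size subgroups of net-fill weights."""
    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbar_bar = sum(xbars) / len(xbars)
    r_bar = sum(ranges) / len(ranges)
    return {
        "xbar": (xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar),
        "range": (D3 * r_bar, D4 * r_bar),
    }

# Hypothetical net weights in grams, two subgroups of five fills:
limits = xbar_r_limits([[100, 101, 99, 100, 100],
                        [101, 100, 100, 99, 101]])
print(limits)
```

Charting measured weights this way gives objective limits, something a visual fill level cannot provide.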

If your company has a need (or a desire) to use visual fill levels as a gage, then generating a work instruction telling employees where the caution levels are would be a way to start. In other words, “If the visual level is above point A or below point B, immediately notify management.” If you are to remain compliant with what you put on the label, visual levels will change from run to run. Using them as a guide for production personnel can be a helpful tool, but they are not a viable SPC input.

Bud Salsbury
ASQ Senior Member, CQT, CQI

For more on this topic, please visit ASQ’s website.