Gage R&R Study on a Torque Wrench

Gage R&R, Torque Wrench

Q: I need information on performing a Gage R&R on a torque wrench. We are using the wrench to check customer parts.

A: For reference on both variable and attribute Gage R&R techniques, a good source is the Automotive Industry Action Group (AIAG) Measurement Systems Analysis (MSA) publication.

The traditional torque wrench is a “generate” device in the sense that it generates a torque to tighten or loosen a fastener (a nut, a bolt, etc.), so in a strict sense it is not a “measurement” device. Both preset and settable torque wrenches are set to a torque value and then used to tighten or loosen a fastener. When loosening, the wrench indicates how much torque is required to break the fastener loose. In a torque wrench, clockwise motion is usually for tightening and counterclockwise motion for loosening.

To conduct a variable Gage R&R study on a torque wrench, we need a “measurement” device: a torque checker capable of registering peak (or breaking) torque. Many such devices are commercially available, and if a facility uses torque wrenches, it is a good idea to have one on hand to verify their performance. Such a device is usually calibrated (ensure traceable, accredited calibration) and provides a reference for the proper working of torque wrenches.

One would then conduct the Gage R&R study using the typical format (a data-collection sketch in Python follows the list):

  • Two or more appraisers.
  • Five to 10 repeat measurements at a preset torque by each appraiser, replicated two to three or more times.
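
As a rough illustration of that format, here is one way to lay out the recording grid. The appraiser names, the 20 N·m set point, and the counts below are hypothetical placeholders, not recommendations from the MSA publication:

    appraisers = ["Appraiser A", "Appraiser B"]  # two or more appraisers
    set_point_nm = 20.0                          # hypothetical preset torque
    repeats = 10                                 # readings per replicate
    replicates = 3                               # replications per appraiser

    print(f"Set point: {set_point_nm} N*m")
    for appraiser in appraisers:
        for rep in range(1, replicates + 1):
            for reading in range(1, repeats + 1):
                # Record the peak torque shown by the torque checker here.
                print(f"{appraiser}, replicate {rep}, reading {reading}: ______")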

A word of caution on torque wrenches and setting up the Gage R&R:

  • The measurement is operator dependent, so operators need to be trained in proper torque wrench usage techniques.
  • With a settable torque wrench, ensure that the torque is reset between every measurement to simulate actual usage between repeated readings.
  • Ensure the number of repeated readings and replicated readings is the same for all appraisers.

Templates for data collection are available in spreadsheet format from commercial providers. Alternatively, one can design a template from the MSA publication referenced above. The data would then be analyzed using the guidelines in the MSA publication.
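
As a rough illustration of what that analysis computes, here is a minimal sketch in Python. The readings are made-up placeholders, and the standard-deviation arithmetic below only illustrates the idea; the range-method worksheets or ANOVA procedure in the MSA publication remain the authoritative reference.

    import numpy as np

    # rows = appraisers, columns = repeated peak-torque readings in N*m
    # (all values are made-up placeholders for illustration)
    readings = np.array([
        [19.8, 20.1, 19.9, 20.0, 20.2, 19.9, 20.1, 20.0, 19.8, 20.1],  # Appraiser A
        [20.3, 20.0, 20.2, 20.1, 20.4, 20.2, 20.1, 20.3, 20.0, 20.2],  # Appraiser B
    ])

    # Repeatability (EV): pooled within-appraiser standard deviation
    ev = np.sqrt(np.mean(np.var(readings, axis=1, ddof=1)))

    # Reproducibility (AV): spread of the appraiser averages, with the
    # repeatability contribution to those averages backed out
    n = readings.shape[1]
    var_of_means = np.var(readings.mean(axis=1), ddof=1)
    av = np.sqrt(max(var_of_means - ev**2 / n, 0.0))

    # Combined gage R&R standard deviation
    grr = np.sqrt(ev**2 + av**2)
    print(f"EV (repeatability)   = {ev:.4f} N*m")
    print(f"AV (reproducibility) = {av:.4f} N*m")
    print(f"GRR (combined sigma) = {grr:.4f} N*m")

With a single preset set point there is no part-to-part variation term, so the resulting GRR sigma is typically judged against the tolerance for the torque being checked rather than against total variation.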

Good luck with the Gage R&R! It is a very useful and worthwhile exercise in understanding your measurement process.

Dilip A Shah
ASQ CQE, CQA, CCT
President, E = mc3 Solutions
Chair, ASQ Measurement Quality Division (2012-2013)
Secretary and Member of the A2LA Board of Directors (2006-2014)
Medina, Ohio
http://www.emc3solutions.com/

Accuracy of Measurement Equipment

Automotive inspection, TS 16949, IATF 16949

Q: I work in an incoming quality assurance department. In our recent audits, the auditor claimed that high-precision machines such as coordinate measuring machines (CMMs) and touchless measurement systems should have higher Gage Repeatability and Reproducibility (GR&R) values than less precise equipment such as hand-held calipers and gages. If this is the case, does Measurement System Analysis (MSA) address this by providing guidance on the recommended values for each type of measuring equipment in general? If not, should we still stick to the general MSA rules, regardless of the equipment’s precision?

A: When you noted “higher GR&R values,” that in itself can be a bit confusing, because the GR&R value is the percentage of variation caused by repeatability and reproducibility error. The higher the number, the more variation is present, and the worse the measurement method is.
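
As a purely hypothetical worked example of that arithmetic (the numbers are invented, not guidance):

    # Hypothetical numbers only, to show which direction is "worse".
    grr_sigma = 0.12    # combined R&R standard deviation
    total_sigma = 0.50  # total observed standard deviation
    percent_grr = 100 * grr_sigma / total_sigma
    print(f"%GR&R = {percent_grr:.1f}%")  # 24.0% -- a higher value is worse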

As far as I know, MSA doesn’t give specific guidance on recommended values for particular measuring equipment. Also, I’m not sure it is valid to say that a CMM is consistently more accurate than other equipment, such as calipers. Although the equipment may be theoretically more accurate, how you stage the part to be measured will also affect the amount of variability, as will the feature being measured. Consequently, even though the CMM is theoretically more accurate, a study may still show 20 percent GR&R, mainly due to the holding fixture or the feature being measured. I’m sure you get the point here.

As far as I know, the MSA manuals do discuss the major inputs to consider when deciding the amount of acceptable variation. They strongly recommend looking at each application individually to verify what is required and how the measurement is going to be used.

Another thing to consider is whether you are looking at the GR&R based on total variation or on the specified tolerance. Tolerance-based is more commonly used than total variation, but that may depend on the type of industry.
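
For illustration, a tolerance-based version of the hypothetical calculation above divides a spread of R&R standard deviations by the tolerance, rather than dividing sigma by the total spread:

    # Tolerance-based %GR&R, reusing the hypothetical grr_sigma from above.
    grr_sigma = 0.12  # combined R&R standard deviation
    tolerance = 2.0   # hypothetical: upper spec limit minus lower spec limit

    # The current MSA edition spreads the R&R sigma over 6 standard
    # deviations (older editions used 5.15) before dividing by the tolerance.
    percent_grr_tol = 100 * (6 * grr_sigma) / tolerance
    print(f"%GR&R (tolerance basis) = {percent_grr_tol:.1f}%")  # 36.0%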

One thing I would like to mention: if you have three people take 10 measurements each and then feed the data into one of the common software programs, it will not matter whether those measurements were taken with a dial caliper or with a CMM. The instruments’ “accuracy” should not be the deciding factor; the tolerance base should be.

Also, ISO standards do not dictate GR&R values. If you do what your quality management system says you do, most auditors will not push such an issue. While some auditors may offer “opinions” and suggestions, such items are rarely cause for nonconformance findings.

I hope this helps answer your question.

Bud Salsbury
ASQ Senior Member, CQT, CQI
