Terminology for Inspected Material (GMP, ISO 13485)

Pharmaceutical sampling

Q: There is often confusion about the labeling of purchased materials after they have been “inspected, tested and/or verified” according to good manufacturing practice (GMP) requirements. Once out of quarantine, are purchased materials labeled as accepted, approved or released? I’ve had auditors and inspectors tell me all three.

A: Any of the three terms (accepted, approved, or released) is appropriate and commonly used. It would appear that the auditors are voicing an opinion and shouldn’t be. Neither ISO 13485:2003: Medical devices — Quality management systems — Requirements for regulatory purposes nor FDA’s quality system regulation (QSR) specifies what language is to be used.

ISO 13485:2003, clause 7.5.3.3 status identification, states:

“The organization shall identify the product status with respect to monitoring and measurement requirements.  The identification of product status shall be maintained throughout production, storage, installation and servicing of the product to ensure that only product that has passed the required inspections and test … is dispatched, used or installed.”

FDA 21 CFR 820.86 acceptance status requires:

“Each manufacturer shall identify by suitable means the acceptance status of product, to indicate the conformance or nonconformance of product with acceptance criteria. The identification of acceptance status shall be maintained throughout manufacturing, packaging, labeling, installation, and servicing of the product to ensure that only product which has passed the required acceptance activities is distributed, used, or installed.”

The requirement for purchased materials should be clear: identify them so that only those materials that have passed acceptance activities are allowed to be used. Neither the standard nor the regulation states how the material is to be identified. That is up to the manufacturer to define in its operating procedure(s).

My personal recommendation is to use the terms “accept/reject” at receiving and during in-process inspection, then use the terms “release/hold” to indicate whether the final product is or is not to be released for distribution. But any similar terms are fine, as long as they are used consistently throughout the quality system and personnel understand that they may only use product that has passed its acceptance activities.

Jim Werner
Voting member to the U.S. TAG to ISO TC 176 Quality Management and Quality Assurance
Medical Device Quality Compliance (MDQC), LLC.
ASQ Senior Member
ASQ CQE, CQA, RABQSA Lead QMS Assessor

For more on this topic, please visit ASQ’s website.

ISO 9001 Electronic Records

Reviewing confidential files, training records, human resources files
Q: I have a few questions about employee training records. My company is certified to ISO 9001:2008 Quality management systems — Requirements, and we are considering transitioning to electronic records. However, we don’t know what the requirements are from an ISO perspective. Specifically, we want to know:

1. Do we need to retain hardcopy originals, or can we just keep the scanned electronic copies?

2. Does a record need to be in each individual’s file, or can there be a spreadsheet, cross reference-type matrix?

3. How long do they need to be retained?

4. Are there different requirements for environmental and safety type training records?

A: Thank you for contacting the ASQ Ask the Experts Program. Responses to your specific inquiries follow:

1. You may retain records in any format or media you desire. You do not need both hardcopy and electronic.

2. You may use a spreadsheet matrix.

3. Retention times are your determination. Consult with the corporate attorney as to any requirements from the U.S. Equal Employment Opportunity Commission to protect yourself if there is a lawsuit (assuming your organization is located in the United States).

4. Check with the U.S. Occupational Safety and Health Administration (OSHA) and the U.S. Environmental Protection Agency (EPA) regarding requirements for these records.  These are outside the scope of ISO 9001.

George Hummel
Voting member of the U.S. TAG to ISO/TC 176 – Quality Management and Quality Assurance
Managing Partner
Global Certification-USA
www.globalcert-usa.com/
Dayton, OH


Control Chart to Analyze Customer Satisfaction Data

Control chart, data, analysis

Q: Let’s assume we have a process that is under control and we want to monitor a number of key quality characteristics expressed through small subjective scales, such as: excellent, very good, good, acceptable, poor and awful. This kind of data is typically available from customer satisfaction surveys, peer reviews, or similar sources.

In my situation, I have full historical data available and the process volume average is approximately 200 deliveries per month, giving me enough data and plenty of freedom to design the control chart I want.

What control chart would you recommend?

I don’t want to reduce my small scale data to pass/fail, since I would lose insight in the underlying data. Ideally, I’d like a chart that both provides control limits for process monitoring and gives insight on the repartition of scale items (i.e., “poor,” “good,” “excellent”).

A: You can handle this analysis a couple of ways.  The most obvious choice and probably the one that would give you the most information is a Q-chart. This chart is sometimes called a quality score chart.

The Q-chart assigns a weight to each category. Using the criteria presented, values would be:

  • excellent = 6
  • very good = 5
  • good = 4
  • acceptable = 3
  • poor = 2
  • awful = 1

You calculate the subgroup score by multiplying the weight of each category by its count and then adding the totals.

If 100 surveys were returned with results of 20 excellent, 25 very good, 25 good, 15 acceptable, 12 poor, and 3 awful, the calculation is:

6(20) + 5(25) + 4(25) + 3(15) + 2(12) + 1(3) = 417

This is your score for this subgroup.   If you have more subgroups, you can calculate a grand mean by adding all the subgroup scores and dividing it by the number of subgroups.

If you had 10 subgroup scores of 417, 520, 395, 470, 250, 389, 530, 440, 420, and 405, the grand mean is simply:

((417+ 520+ 395+ 470+ 250+ 389+ 530+ 440+ 420+ 405)/10) = 4236/10 =423.6

The control limits would be the grand mean +/- 3√(grand mean). In this example, 423.6 +/- 3√423.6 = 423.6 +/- 3(20.58). The lower limit is 361.86 and the upper limit is 485.34. This gives you a chance to see whether things are stable. If there is an out-of-control situation, you need to investigate further to find the cause.
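As a quick sketch, the score and control-limit arithmetic above can be written in a few lines of Python (the weights, counts, and subgroup scores are the example values from the text):

```python
import math

# Category weights from the first example (excellent = 6 ... awful = 1).
weights = {"excellent": 6, "very good": 5, "good": 4,
           "acceptable": 3, "poor": 2, "awful": 1}

def subgroup_score(counts):
    """Weighted score for one subgroup: sum of weight x count."""
    return sum(weights[cat] * n for cat, n in counts.items())

# Example subgroup from the text: 100 surveys returned.
counts = {"excellent": 20, "very good": 25, "good": 25,
          "acceptable": 15, "poor": 12, "awful": 3}
score = subgroup_score(counts)  # 417

# Ten subgroup scores -> grand mean and grand mean +/- 3*sqrt(grand mean).
scores = [417, 520, 395, 470, 250, 389, 530, 440, 420, 405]
grand_mean = sum(scores) / len(scores)   # 423.6
half_width = 3 * math.sqrt(grand_mean)
lcl, ucl = grand_mean - half_width, grand_mean + half_width
```

The same function works for the normalized weights in the second method; only the `weights` dictionary changes.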

The other choice is similar, but the weights have to total to 1. Using the criteria presented, the values would be:

  • excellent = .3
  • very good = .28
  • good = .25
  • acceptable = .1
  • poor = .05
  • awful = .02

You would calculate the numbers the same way for each subgroup:

.3(20) + .28(25) + .25(25) + .1(15) + .05(12) + .02(3) = 6 + 7 + 6.25 + 1.5 + .6 + .06 = 21.41

If you had 10 subgroup scores of 21.41, 19.3, 20.22, 25.7, 21.3, 17.2, 23.3, 22, 19.23, and 22.45, the grand mean is simply ((21.41 + 19.3 + 20.22 + 25.7 + 21.3 + 17.2 + 23.3 + 22 + 19.23 + 22.45)/10) = 212.11/10 = 21.211.

The control limits would be the grand mean +/- 3√(grand mean). Therefore, the limits would be 21.211 +/- 3√21.211 = 21.211 +/- 3(4.606). The lower limit is 7.39 and the upper limit is 35.03.

The method is up to you.  The weights I used were simply arbitrary for this example. You would have to create your own weights for this analysis to be meaningful in your situation.  In the first example, I have it somewhat equally weighted. In the second example, it is biased to the high side.

I hope this helps.

Jim Bossert
SVP Process Design Manager, Process Optimization
Bank of America
ASQ Fellow, CQE, CQA, CMQ/OE, CSSBB, CSSMBB
Fort Worth, TX


ISO 9001 Certification to Meet Customer Requirements

Training, completed training, competence

Q: My company is a small manufacturer that makes one product that I designed and engineered. We have a contract to produce the part for a much larger company. The larger company wants us to become certified to ISO 9001:2008 Quality management systems — Requirements. The company has sent its auditor/Six Sigma black belt to our plant for the third time and stated that our operators (three employees, myself included) are not trained because the training matrix is not filled out.

The auditor also stated that our work instructions are not adequate, that our process flow charts are not good enough, and that our forms (all five forms we use in-house) are not compliant because they lack a form number printed on them. Is there a clear definition of what is required by ISO on any of these items?

We currently have a 2.5 percent nonconformance rate on our parts. These are identified at our 100 percent inspection points – at three, four, or five. Of the 2.5 percent nonconformance, 2 percent can be reworked and 0.5 percent is scrapped.

A: Your question has several layers so I will try to offer what answers I think will help.

To begin with, I have to assume that you have a copy of the ISO 9001 standard. If you do not have a copy, you must get one.

At the same time, it would benefit you to acquire the services of a consultant or you can purchase one of the many books that are available which would help you along the way.

Now, in ISO 9001:2008, clause 6.2.2 states that you “shall” do five things with regard to competence, training and awareness.

In ISO documentation, the word “shall” indicates a requirement.  Basically, you are required to identify (document) the training requirements of those whose work can affect conformity to product requirements. There is nothing in the standard that says you must have a “matrix.”

You must have records showing that training has been completed and that its effectiveness was evaluated. You must also verify each employee’s competence to do the job independently. Competence is important. Keep that in mind.

You mentioned in your inquiry that your customer states your work instructions are not adequate, that the process flow charts are not good enough, and that your forms are not compliant because they lack a form number printed on them.

To begin, the standard requires just six documented procedures.

  • Clause 4.2.3 Control of documents
  • Clause 4.2.4 Control of records
  • Clause 8.2.2 Internal quality audits
  • Clause 8.3 Control of nonconforming product
  • Clause 8.5.2 Corrective action
  • Clause 8.5.3 Preventive action

Your written procedures need to comply with the clauses of the standard they address. (By the way, most companies have more than six documented procedures, as this helps their quality management system operate more efficiently.)

As for process flow charts, I suspect you are referring to work instructions. ISO 9001:2008 says that work instructions should be available “as necessary.” If you have work instructions written and they are readily available, the auditor should have no cause for concern there.

In addressing your mention of “flow charts,” in all fairness, I cannot respond completely without actually seeing the flow charts in question.  If you mean the process flow charts which often accompany a documented procedure to show a “map” of the process, then you should read clause 4.1 of the standard. You would find that you are required to show “interactions” of the processes. There are no actual ISO requirements for flow charts, but many companies use that format to show the interactions, often in their quality manual. You would need to determine if flow charts are needed to ensure consistent quality.

Finally, let’s talk about forms. How you control your forms or the format should be mentioned in your document control procedure (4.2.3).

Each type of form would need a title, a revision number or letter, and a revision date.  Having a record of these makes it easy to identify which version of a document you are using and if it is the correct revision.

I know that approaching ISO compliance can seem like a bigger than life challenge at first. However, for every step you take, you will realize that standards are beneficial and not nearly as complicated as they might first appear to be.  As noted above, you might want to consider a consultant and/or acquire some reference material. Your customer’s auditor can become a friendly associate.

As a senior member of ASQ, I salute you for running a business dedicated to quality.

Bud Salsbury
ASQ Senior Member, CQT, CQI


Sampling in a Call Center

Q: I work as a quality assessor (QA) and I am assisting with a number of analyses in a call center. I need a little help with sampling. My questions are as follows:

1. How do I sample calls taken by an agent if there are six assessors and 20 call center agents that each make 100 calls per day?

2. I am assessing claims paid and I want to determine the error rate and the root cause. How many of those claims would have to be assessed by the same number of QAs if claims per day, per agent, exceed 100?

3. If there are 35 interventions made by an agent per day, with two QAs assessing 20 agents in this environment, the total completed would amount to between 300 and 500 per month. What would the sample size be in this situation?

A: I may be able to provide some ideas to help solve your problem.

The first question is about sampling calls per day by you and your fellow assessors. It is clear that the six assessors are not able to cover all of the calls handled by the 20 call center agents.

What is missing from the question is what are you measuring — customer satisfaction, correct resolution of issues, whether agents are appropriately following call protocols, or something else? Be very clear on what you are measuring.

For the sake of providing a response, let’s say you are able to judge whether or not the agents are appropriately addressing callers’ issues. This is a binary response: each call is judged either good or not (pass/fail). While this may oversimplify your situation, it is instructive for sampling.

Recalling some basic terms from statistics, remember that a sample is taken from some defined population in order to characterize or understand that population. Here, a sample of calls is assessed, and you are interested in what proportion of calls is handled adequately (pass). If you could measure all calls, that would provide the answer. However, limited resources require that we use sampling to estimate the population proportion of adequate calls.

Next, consider how sure you want the results of the sample to reflect the true and unknown population results. For example, if you don’t assess any calls and simply guess at the result, there would be little confidence in that result.

Confidence represents the likelihood that the sample result lies close to the true, unknown population value. A 90 percent confidence means that if we repeatedly drew samples from the population, the sample result would fall within the confidence bound (close to the actual, unknown result) 90 percent of the time. That also means the estimate will be wrong 10 percent of the time due to sampling error: there is a finite chance that the sample happens to contain disproportionately many “pass” or “fail” calls, so the sample does not accurately reflect the true population.

Setting the confidence is a reflection on how much risk one is willing to take related to the sample providing an inaccurate result. A higher confidence requires more samples.

Here is a simple sample size formula that may be useful in some situations:

n = ln(1 − C) / ln(pi)

where:

n is the sample size,

C is the confidence, where 90 percent is expressed as 0.9,

pi is the proportion considered passing (in this case, good calls), and

ln is the natural logarithm.

If we want 90 percent confidence that at least 90 percent of all calls are judged good (pass), then we need at least 22 monitored calls.

This formula is a special case of the binomial sample size calculation and assumes that there are no failed calls among the calls monitored. It means that if we assess 22 calls and none fail, we have at least 90 percent confidence that the population has at least 90 percent good calls. If there is a failed call among the 22 assessments, we have evidence that we have less than 90 percent confidence of at least 90 percent good calls. This doesn’t provide information to estimate the actual proportion, yet it is a way to detect whether the proportion falls below a set level.
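As a sketch, this zero-failure (“success-run”) calculation is easy to compute directly, using the formula n = ln(1 − C)/ln(pi) given above (the function name is illustrative):

```python
import math

def success_run_n(confidence, proportion):
    """Zero-failure sample size: n = ln(1 - C) / ln(pi), rounded up.
    Assumes no failures are observed among the monitored calls."""
    return math.ceil(math.log(1 - confidence) / math.log(proportion))

# 90% confidence that at least 90% of calls are good -> 22 monitored calls.
n = success_run_n(0.90, 0.90)  # 22
```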

If the intention is to estimate the population proportion of good vs. bad calls, then we use a slightly more complex formula:

n = z² pi(1 − pi) / E²

where:

pi is the same, the proportion of good calls vs. bad calls.

z is the standard normal critical value corresponding to alpha/2. For 90 percent confidence, 90 = 100(1 − alpha) percent, so alpha is 0.1 and z = 1.645.

E is related to the accuracy of the result. It defines the half-width of the range within which the true value should lie about the resulting estimate. A higher value of E reduces the number of samples needed, yet the result may be further from the true value than desired.

The value of E depends on the standard deviation of the population. If that is not known, use an estimate from previous measurements or run a short experiment to determine a reasonable estimate. If the proportion of bad calls is the same from day to day and from agent to agent, the standard deviation may be relatively small. If, on the other hand, there is agent-to-agent and day-to-day variation, the standard deviation may be relatively large and should be carefully estimated.

The z value is directly related to the confidence and affects the sample size as discussed above.

Notice that pi, the proportion of good calls, appears in the formula. Thus, if you are taking the sample in order to estimate an unknown pi, then to determine the sample size, assume pi is 0.5. This generates the largest possible sample size and permits an estimate of pi with confidence of 100(1 − alpha) percent and accuracy of E or better. If you know pi from previous estimates, use it to reduce the sample size slightly.

Let’s do an example and say we want 90 percent confidence. The alpha is 0.1 and the z alpha/2 is 1.645. Let’s assume we do not have an estimate for pi, so we will use 0.5 for pi in the equation. Lastly, we want the final estimate based on the sample to be within 0.1 (estimate of pi +/- 0.1), so E is 0.1.

Running the calculation, we find that we need to sample 68 calls to meet these constraints of confidence and accuracy. Increasing the allowable error or increasing the sampling risk (higher E or lower C) reduces the required sample size.
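The proportion-estimation formula above can also be sketched in a few lines. This uses the Python standard library’s normal distribution for the z value; the function name is illustrative:

```python
import math
from statistics import NormalDist

def proportion_sample_n(confidence, E, p=0.5):
    """Sample size to estimate a proportion within +/- E:
    n = z^2 * p * (1 - p) / E^2, rounded up.
    p = 0.5 is the conservative (largest-n) default when pi is unknown."""
    alpha = 1 - confidence
    z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{alpha/2}; about 1.645 at 90%
    return math.ceil(z ** 2 * p * (1 - p) / E ** 2)

# 90% confidence, estimate within +/- 0.1, pi unknown -> 68 calls.
n = proportion_sample_n(0.90, 0.10)
```

Supplying a prior estimate of pi (say 0.9) in place of the default 0.5 shrinks the required sample, as noted in the text.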

It may occur that obtaining a daily sample rate with an acceptable confidence and accuracy is not possible. In that case, sample as many as you can. The results over a few days may provide enough of a sample to provide an estimate.

One consideration with the normal approximation to the binomial distribution used in the second sample size formula is that it breaks down when either n·pi or n(1 − pi) is less than five. If either value is less than five, the confidence interval becomes too wide to be of much value. If you are in this situation, use the binomial distribution directly rather than the normal approximation.

One last note. In most sampling cases, the overall size of the population doesn’t really matter too much. A population of about 100 is close enough to infinite that we really do not consider the population size. A small population and a need to sample may require special treatment of sampling with or without replacement, plus adjustments to the basic sample size formulas.

Creating the right sample size depends, to a large degree, on what you want to know about the population. In part, you need to know the final result to calculate the “right” sample size, so it is often just an estimate. By using the above equations and concepts you can minimize the risk of an unclear result, yet determining the right sample size for each situation will always be an evolving process.

Fred Schenkelberg
Voting member of U.S. TAG to ISO/TC 56
Voting member of U.S. TAG to ISO/TC 69
Reliability Engineering and Management Consultant
FMS Reliability
http://www.fmsreliability.com


Establishing and Maintaining a CAPA System

CAPA process, CAPA requests

Q: We have a corrective and preventive action (CAPA) system, and we find that CAPAs are almost always completed late — even though we have an extension request form for CAPAs, and the system sends automated reminders to employees in advance.

What can we do to resolve this issue and avoid late CAPAs?

A: I will answer this question based on the information provided.

1. Does the CAPA system rank the CAPA based on risk? If not, each CAPA should be ranked either high, medium, or low.

High risk generally means that the problem behind the CAPA could have a negative effect on the business and put it at risk. For example, in the medical device industry, a high-risk CAPA could involve a regulation violation, something that can harm a device user or patient, or issues that could result in legal action against the company.

2. Does the CAPA system have a way to involve top management? If not, it should — especially if timely corrective action is not being taken in instances of high risk CAPAs.

3. Does the management review process include a statistical analysis of the time it takes to complete CAPAs?

Often, reports to management include the number of CAPAs greater than 90-days old and greater than 180-days old. In addition to reporting on the number of open CAPAs, also report on the number of CAPAs completed by the due date and the number of CAPAs that are overdue (past the original, assigned completion date).

It is also a good idea to convert these numbers into percentages, to make the data digestible and to allow comparisons.
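As a minimal illustration of the aging metrics suggested above, a monthly report might be assembled like this (all counts are invented for the example):

```python
# Hypothetical monthly CAPA counts -- every number here is made up.
open_capas = 40        # CAPAs currently open
over_90 = 12           # open more than 90 days
over_180 = 5           # open more than 180 days
closed_on_time = 28    # closed by the original, assigned due date
closed_late = 9        # closed past the original due date

def pct(part, whole):
    """Convert a count to a percentage of the whole, one decimal place."""
    return round(100 * part / whole, 1)

report = {
    "open > 90 days (%)":  pct(over_90, open_capas),
    "open > 180 days (%)": pct(over_180, open_capas),
    "closed on time (%)":  pct(closed_on_time, closed_on_time + closed_late),
}
```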

4. Next, if possible, discuss with management whether there should be consequences for employees when problems that result in a CAPA are not addressed in a timely manner.

With this approach, proceed with caution. You must make certain that the CAPA system is robust. Not every little problem is a CAPA. A good way to weed out the CAPAs from the non-CAPAs is to ask: is this an issue that requires an investigation into the root cause? And, does this problem require corrective action to fix it? If the answers are yes, then it is probably a CAPA.

5. You may want to consider benchmarking how other organizations structure their CAPA systems and look to guidance documents for help. The Global Harmonization Task Force published a guidance document to help establish CAPA systems. It is written for the medical device industry, but it can be applied elsewhere.

Jim Werner
Voting member to the U.S. TAG to ISO TC 176
Medical Device Quality Compliance (MDQC), LLC.
ASQ Senior Member
ASQ CQE, CQA, RABQSA Lead QMS Assessor
