Testing laboratories associated with manufacturing plants play a key role by assisting production personnel in monitoring the manufacturing process. The laboratories are often chartered with providing customers, both internal and external, with proof that the materials and products sold meet customer specifications. To ensure the data provide users with the appropriate level of reliability, the laboratories must also monitor their own processes: the testing procedures.
Process and Quality Control
The methods used to monitor processes, track conformance to specifications and evaluate the measurement process are collectively known as statistical process control (SPC). SPC enables an organization to track and reduce process variability by using tools such as control charts. Laboratories often refer to the use of SPC methods in their internal quality control program as statistical quality control (SQC).
Table 1. Subset of Sulfide Data

Precision and Accuracy
Ideally, customers of testing data want test results to be generated from a method that is both accurate and precise. In reality, achieving a high level of reliability may be cost prohibitive or overly time consuming. To determine if the data is useful, the laboratory and its customer must understand how the data will be used as well as the types of decisions that will be made. They must also be aware of the industry standards and regulatory guidelines. Control charts measure the level of method variability as well as provide ongoing feedback to laboratory staff. This article discusses how a laboratory can use the control charting capabilities of NWA Quality Analyst® software to monitor the performance of its testing methods.
Types of Errors
- Variable systematic errors
- Constant systematic errors
Uncertainty can also be introduced by gross errors. These errors occur when material is spilled or calculations are performed incorrectly. Usually, these errors are caught by the analyst, the mistake is noted and the test is repeated.
Table 2. Raw Spike Recovery Data

Monitoring the Precision of Test Methods
A more rigorous process to determine method uncertainty involves conducting a propagation of errors determination, which identifies all of the method's components that contribute to the uncertainty, and measures their combined variability. Differences in analysts, equipment and environment can all contribute to method variability and should be included in the measurement of precision.
To monitor precision over time, the laboratory needs a stable sample that contains the analyte in a matrix similar to production samples. This type of sample may be commercially available as a certified standard, or may be prepared in-house. Once this material has been obtained, testing can be conducted over a period of time and the standard deviation can be calculated.
The next step is to plot the results on a laboratory control chart. Each individual value is plotted on the chart rather than averaging repeat measurements to form an X-bar chart. The control chart graphically represents how the method performs over time: trends and cycles are illustrated, and the control limits offer a measure of method variability.
Calculating Control Limits
- Use the average moving range (the absolute difference between consecutive measurements) to estimate the standard deviation of the process.
- Calculate the standard deviation directly from the individual data points.
The first method calculates a more robust estimate of the standard deviation. The control limits will be narrower than those calculated based on the actual standard deviation. The second method is preferred by some regulatory agencies. On the control chart, it is important to indicate which method is used. All control limits discussed in this article have been calculated using individual measurements.
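The two calculations above can be sketched in a few lines of code. This is a minimal illustration of the arithmetic, not the software's implementation, and the measurement values are hypothetical rather than taken from Table 1.

```python
# Illustrative individual measurements (hypothetical data).
values = [1.02, 0.98, 1.05, 0.97, 1.01, 1.04, 0.99, 1.03, 0.96, 1.00]

mean = sum(values) / len(values)

# Method 1: estimate sigma from the average moving range |x[i] - x[i-1]|,
# divided by the d2 constant for subgroups of two (1.128).
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)
sigma_mr = avg_mr / 1.128

# Method 2: compute the sample standard deviation directly.
variance = sum((x - mean) ** 2 for x in values) / (len(values) - 1)
sigma_direct = variance ** 0.5

# Control limits sit three sigma either side of the mean for both methods.
ucl_mr, lcl_mr = mean + 3 * sigma_mr, mean - 3 * sigma_mr
ucl_sd, lcl_sd = mean + 3 * sigma_direct, mean - 3 * sigma_direct
```

With drifting or cycling data, the moving-range estimate reflects short-term variation only, which is why its limits are typically the narrower of the two.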
The key to establishing this type of control chart is to have a control sample that is both stable and representative of the process being monitored.
Occasionally, these two criteria cannot be met due to the nature of the material. In this case, precision can be tracked by conducting multiple measurements (usually two) on an actual sample. The difference between the two measurements (range) can be plotted on a control chart. This provides the laboratory staff with the standard deviation of duplicate measurements over time, as well as a graphical illustration of how the method is performing.

Duplicate Chart
The absolute difference between duplicate value one and two is easily computed using the calculation editor. The calculated values are shown in Table 1, Columns 5 and 6. A control chart can then be constructed from the difference data.
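The duplicate-difference calculation amounts to one subtraction per pair. The sketch below uses hypothetical duplicate pairs (not the Table 1 data) and also shows how the average range of duplicates yields a sigma estimate via the d2 constant for subgroups of two.

```python
# Hypothetical (duplicate 1, duplicate 2) measurement pairs.
duplicates = [(0.42, 0.45), (0.80, 0.91), (0.33, 0.30), (1.10, 1.25)]

# Absolute difference for each pair, as in the calculation-editor step.
abs_diffs = [abs(d1 - d2) for d1, d2 in duplicates]

# The average range of duplicates estimates the standard deviation of
# the method: sigma ~= average range / d2, with d2 = 1.128 for n = 2.
avg_range = sum(abs_diffs) / len(abs_diffs)
sigma_est = avg_range / 1.128
```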
The difference calculated between each pair of duplicates represents the precision associated with one analyst performing the test on a single day. The differences seen from day to day represent variability due to different analysts, environmental conditions and other changes that can affect the method.
None of the data is outside the control limits, even with the unit upset on 2/17/1999 (Table 1, Row 6). The plotted data represents differences between measurements, not the original measured values. Rather than tracking the unit, the absolute difference indicates the method variability. There are, however, a number of points that show an absolute difference greater than 0.1.
Looking at the original data, many of these points correspond to higher levels of sulfides in the sample. It is common for the imprecision of the test to increase with increasing values of the analyte. Two control charts can be kept, or the relative percent difference (RPD) can be calculated and plotted on the same chart.
RPD is equal to the difference between the two duplicates, divided by the average value of the duplicates, then multiplied by 100.
RPD = [abs(Dup1 - Dup2) / avg(Dup1, Dup2)] x 100
This calculation can be entered into Quality Analyst with the resulting chart shown in Figure 2.
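The RPD formula translates directly into code. This is a minimal sketch of the arithmetic with hypothetical values, not the calculation-editor syntax used by the software.

```python
def rpd(dup1: float, dup2: float) -> float:
    """Relative percent difference: |dup1 - dup2| / mean(dup1, dup2) * 100."""
    avg = (dup1 + dup2) / 2
    return abs(dup1 - dup2) / avg * 100

# Hypothetical duplicates of 0.90 and 1.10 give an RPD of about 20 percent.
example = rpd(0.90, 1.10)
```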
No out-of-control points are indicated on this chart, and the data seem to be randomly distributed around the mean. The average RPD of 11 percent may be high depending on how the data will be used and thus may warrant improvement.

Monitoring Method Accuracy
While certified reference materials are available from standards organizations, they are expensive and often do not represent the material being tested. When used, these materials can assure the laboratory that the method is being conducted properly with regard to the reference material. Sampling and matrix effects associated with actual samples make it difficult to translate the results obtained from a highly characterized standard to actual samples tested in the laboratory.
"Spike recovery" quality control samples can indicate the method bias under daily operating conditions. Because spike recovery is calculated from multiple tests conducted on the same sample, the random and variable systematic errors should be low. Usually, the sample is analyzed in duplicate and a known amount of the analyte is added to a third sample. Percent spike recovery (PSR) is calculated using the following equation:
If there were no interferences or matrix effects and the variability was low, the recovery would be expected to be close to 100 percent. Spike recoveries that are always low or always high indicate a method bias that should be investigated. If testing is conducted under regulatory guidelines, spike recovery limits may be supplied by the agency. Spike recovery limits of 75 to 125 percent or 80 to 120 percent are common.
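The recovery calculation can be sketched as follows, assuming the standard PSR definition (spiked result minus the average of the unspiked duplicates, divided by the amount spiked). The measurement values are hypothetical.

```python
def percent_spike_recovery(spiked: float, unspiked_avg: float,
                           spike_amount: float) -> float:
    """PSR = (spiked result - average unspiked result) / amount spiked * 100."""
    return (spiked - unspiked_avg) / spike_amount * 100

# Hypothetical example: 0.50 units of analyte spiked into a sample whose
# unspiked duplicates average 1.00; the spiked aliquot measures 1.46.
recovery = percent_spike_recovery(1.46, 1.00, 0.50)

# A recovery near 92 percent falls inside typical 80-120 percent limits.
in_limits = 80.0 <= recovery <= 120.0
```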
Spike Recovery Chart
Using the calculation editor, the average of the unspiked duplicates and the percent spike recovery can be calculated. The control chart of the percent spike recovery values is shown in Figure 3.
Figure 3 shows a section of data that has fallen below the lower control limit and shows some pattern rule violations. Reviewing the data, we can see that a newly hired analyst was performing the analysis. This person produced consistently low results, introducing constant systematic error. All the data generated by this analyst are suspect and should be recalled, and the samples rerun if possible. Continued training and closer supervision are warranted until improvements are seen.
Because there is an assignable cause associated with this data, it can be tagged or set aside and not used in subsequent data analysis.

Capability Analysis
The expected level of performance can be treated as specifications. Capability analysis compares the width of the measured distribution to the specification limits. Before performing this analysis, however, the process must be stable and in statistical control. Two common measures of capability are the Cp and Cpk indexes:

Cp = (USL - LSL) / (6 x s)

Cpk = min[(USL - Mean) / (3 x s), (Mean - LSL) / (3 x s)]

where USL and LSL are the upper and lower specification limits, Mean is the process average and s is the process standard deviation.
The higher the ratios, the more capable the method is of meeting specifications. Capability is represented graphically by plotting a histogram of the data. If the mean and specification limits are added to the chart, the user can see if method bias is present as well as how well the method is performing with regard to the specification. The calculated values of the indexes are also reported in Figure 4.
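The index calculations can be sketched in a few lines, using the usual Cp and Cpk definitions. The recovery values and the 80 to 120 percent limits below are hypothetical, not the Figure 4 data.

```python
def capability(values, lsl, usl):
    """Return (Cp, Cpk) for a list of measurements and spec limits."""
    n = len(values)
    mean = sum(values) / n
    s = (sum((x - mean) ** 2 for x in values) / (n - 1)) ** 0.5
    cp = (usl - lsl) / (6 * s)
    cpk = min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))
    return cp, cpk

# Hypothetical percent-recovery values against 80-120 percent limits.
recoveries = [98.0, 101.5, 95.2, 103.0, 99.8, 97.4, 100.6, 96.9]
cp, cpk = capability(recoveries, 80.0, 120.0)
```

Note that Cpk is always less than or equal to Cp; the gap between the two reflects how far the method's mean sits from the center of the specification range, i.e. its bias.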
In this example, both Cp and Cpk are greater than one, which means the process (in this case, the analytical method) is capable of consistently producing data within the spike recovery guidelines. However, there may be an opportunity for method improvement by investigating the points that fall below 90 percent recovery and above 105 percent recovery.

Conclusion
Monitoring the testing process at a predefined frequency assures the analyst and laboratory that the test method is in control and that results can be released to production personnel with confidence. Having control standards and charts may be a requirement of regulatory agencies and is an indication that the data produced by the laboratory are defensible.
Proper tools such as NWA Quality Analyst, which can store the data, perform calculations and produce a variety of control charts, simplify the monitoring program and reduce laboratory workload.