Particle counting is one of the most common oil analysis tests. It can be used to determine the cleanliness of new oil, identify dirt ingress, verify filter performance or indicate the onset of active machine wear. However, as with most oil analysis tests, a representative sample is paramount to the accuracy of the data. This is particularly true for particle counting: where cleanliness targets are tight, one may be trying to identify fewer than 100 particles, each about the size of a red blood cell, in every milliliter of fluid!
One of the most important, yet often overlooked, contributors to inaccurate particle count data is bottle cleanliness. Where target cleanliness levels are very tight, even removing the cap from the sample bottle in a dusty environment may render the bottle unusable for particle counting. Despite this, very few sample bottle suppliers provide a certificate of inspection verifying the cleanliness of new bottles. It is time for bottle producers, commercial laboratories and industrial end users to recognize the importance of sample bottle cleanliness and come together to develop an appropriate certification system.
The True Cost of Unclean Bottles
The cost of unclean bottles is far more than the cost of a wasted bottle, which typically ranges from fifty cents to several dollars. Industrial users of oil analysis who are serious about their programs use results of tests such as particle counting as a basis for maintenance decisions. Therefore, the cost of an unclean bottle can be quite high.
A bottle causing a one- or two-range ISO code jump in a particle count may signal an alarm that results in significant investigation into a production process problem that never existed - a false positive. Maintenance or engineering resources are therefore wasted on an investigation that has no hope of providing value to the organization. Perhaps another sample is taken to verify the result. Resources are wasted in sample collection and testing. Time is wasted as the sample is turned around. If bottle cleanliness is wildly out of control (widely varying), there is no reason to believe the second test will provide more reliable data than the initial test.
Maintenance decisions, then, such as a filtration upgrade, fluid change or even overhaul, are based on a false alarm. The corrective action itself may have consequences that could threaten the system’s reliability.
Bottle contamination can also result in false negatives and potentially missed opportunities to take early preemptive action. Bottle contamination with excessive amounts of dust and other nonferrous debris could result in underreporting important wear-screening measures such as percent ferrous particles. In effect, lack of bottle cleanliness short-circuits the predictive and proactive maintenance processes.
Bottle Cleanliness Definition
Bottle cleanliness definitions currently used in industrial oil analysis originated in the fluid power research setting. These definitions focus on particles greater than 10 µm (Table 1).1
Even though tables exist that attempt to translate ISO 4406:99 particle counts into the number of particles greater than this size, it is clear that the particle size distribution will vary according to the process that produces the particles. It has been noted that some particle distributions, such as dusts, may be roughly normal. Roll-milling of larger particles in mechanical systems, together with filtration, tends to skew this distribution toward the smaller sizes, producing a log-normal distribution. Over time, such a skewed distribution progresses until it approaches the Rosin-Rammler distribution (Figure 1).
Figure 1. Natural Evolution of Particle Size Distributions from Large to Small Sizes.1
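For reference, the Rosin-Rammler distribution describes the cumulative fraction of material retained above a size d as R(d) = exp(-(d/d0)^n). A minimal Python sketch, with purely illustrative values for the characteristic size d0 and spread exponent n (neither comes from the article):

```python
import math

def rosin_rammler_retained(d_um, d0_um=10.0, n=1.5):
    """Fraction of particle mass retained above size d_um under a
    Rosin-Rammler distribution R(d) = exp(-(d/d0)**n).
    d0_um (characteristic size) and n (spread) are illustrative values."""
    return math.exp(-(d_um / d0_um) ** n)

# Example: fraction of mass in particles larger than 4, 6 and 14 microns
for d in (4.0, 6.0, 14.0):
    print(f">{d:>4.0f} um: {rosin_rammler_retained(d):.3f}")
```

Note that at d = d0 the retained fraction is exp(-1), about 37 percent, which is what makes d0 a convenient characteristic size for comparing distributions.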
In the March-April 1999 issue of Practicing Oil Analysis magazine, Jim Fitch proposed that a minimum acceptable signal-to-noise ratio (SNR) of 5:1 be used when selecting bottles for industrial oil analysis use.2 When industrial particle counts are performed to the ISO 4406:99 standard, which counts particles at greater than 4 µm, greater than 6 µm and greater than 14 µm, how does one determine which bottles are clean enough for use based on cleanliness definitions focused on an entirely different size range?
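As a sketch of how these two notions interact, the snippet below converts counts per milliliter into ISO 4406:99 range codes (code R spans counts from 2^(R-1)/100 up to 2^R/100 per mL) and applies the 5:1 signal-to-noise criterion to hypothetical bottle background counts. The target counts are illustrative assumptions, not values from the article:

```python
import math

def iso4406_code(count_per_ml):
    """ISO 4406 range code: code R covers counts per mL in
    (2**(R-1)/100, 2**R/100]."""
    if count_per_ml <= 0:
        return 0
    return max(0, math.ceil(math.log2(count_per_ml * 100)))

def bottle_acceptable(bottle_counts, target_counts, snr=5.0):
    """True if the bottle's background counts (per mL, at each size range)
    are at most 1/snr of the target fluid counts -- the 5:1
    signal-to-noise criterion."""
    return all(b * snr <= t for b, t in zip(bottle_counts, target_counts))

# Hypothetical target near the upper bound of ISO 16/14/11 at >4/>6/>14 um
target = (640, 160, 20)                           # counts per mL
print(bottle_acceptable((200, 50, 8), target))    # False: too much background
print(bottle_acceptable((20, 5, 1), target))      # True
```

The point the sketch makes concrete: a bottle that looks "clean" in absolute terms can still consume most of the allowable count at a tight target, so acceptability depends on the target, not on the bottle alone.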
Anyone who has attempted to rigorously establish the actual cleanliness of commercially available bottles knows that, in practice, the question is moot. Mass-produced bottles of the type commonly used for routine particle counting are not certified to any particular cleanliness level. At best, typical cleanliness levels are quoted. These levels, however, are hardly sufficient for most process-critical applications.
Typical cleanliness levels are defined as the arithmetic mean of some sample population. Perhaps some analysis was done to establish a confidence interval for the true average cleanliness of the underlying population, assuming a roughly normal distribution (a single peak with rapidly decaying tails). Unfortunately, these assumptions may be entirely false. Data is distributed normally only when the process producing it is completely in control; that is, when the variation between bottles is due solely to random causes with no assignable cause.
Is the production process that produces the bottles used by your facility in control? The only way to determine if the bottle production process is in control is with rigorous statistical tests.
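One such rigorous test is an individuals control chart, which flags bottles whose background counts fall outside limits set three estimated standard deviations from the run average. A minimal sketch using hypothetical counts; the moving-range method and the d2 = 1.128 constant are standard SPC practice, not taken from the article:

```python
import statistics

def individuals_chart_limits(counts):
    """Individuals (X) control chart limits estimated from the average
    moving range, using the standard d2 = 1.128 for a moving range of 2."""
    moving_ranges = [abs(a - b) for a, b in zip(counts[1:], counts[:-1])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    center = statistics.fmean(counts)
    sigma_est = mr_bar / 1.128
    return center - 3 * sigma_est, center, center + 3 * sigma_est

def out_of_control(counts):
    """Indices of bottles whose counts fall outside the 3-sigma limits."""
    lo, _, hi = individuals_chart_limits(counts)
    return [i for i, c in enumerate(counts) if c < lo or c > hi]

# Hypothetical >10 um background counts from 12 bottles of one run
run = [14, 12, 15, 13, 11, 14, 95, 12, 13, 15, 12, 14]
print(out_of_control(run))   # [6] -- the 95-count bottle stands out
```

A run whose chart shows points outside the limits (or systematic patterns within them) is, by definition, not in statistical control, and quoting its average cleanliness is misleading.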
The Need for a New Guideline
ISO 3722, the primary standard that addresses methods for establishing bottle cleanliness, is extremely rigorous. While appropriate for research laboratory use, its testing requirements are too onerous for most industrial oil analysis applications. The number of samples that must be evaluated from a production run is so great that the expense of certification would make bottles too costly for routine analysis.
A new guideline for industrial used oil analysis particle counting bottle cleanliness classification is needed. The guideline should be robust and sensitive to variations in particle size distribution so it will apply equally to any production process. It should also be clear to the end user which bottles can be used effectively in each particular application.
Users, however, must be prepared to pay more for certified bottles. Testing each production run will always involve significant expense for bottle manufacturers, resulting in higher bottle prices, especially compared to the prices at which bottles are currently sold. Additional expense will be required to correct production deficiencies. Bottle producers must be able to justify the cost of producing a bottle of verifiable, improved quality, and customers will benefit if they recognize and demonstrate a willingness to pay for superior products.
Industrial users are primarily concerned with target particle counts as defined by ISO 4406:99. Because particle size distributions vary depending upon the process that produces them, it is logical to think that bottles should be certified at all three size ranges rather than just at 10 microns, which does not correspond to ISO 4406:99 range numbers.
Several factors must be considered for the portion of production runs that are in control.
Other questions that may come up include: How much of each production run is potentially defective? And, can the quality of bottles produced while the process was out of control be described? Studies conducted by Noria indicate that even production runs that achieve low average contamination levels produce a much higher proportion of defectives (outliers) than a strictly normal distribution would predict. This indicates that the production process is not consistent, and that the actual defect rate should be quantified.
By definition, it is impossible to predict with an economical sample size just how contaminated defective bottles may be; however, an attempt can be made to quantify what proportion of the overall production may be defective. Why is this important? If most bottles are clean enough to yield a signal-to-noise ratio that does not trigger false alarms for high fluid particle counts, then any defective bottle that is sufficiently deviant will produce a false alarm.
With high defect rates, even a retest could confirm an errant result. How should "sufficiently deviant" be defined? What is an acceptable false alarm rate? Are different classes of users likely to define these differently?
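The proportion of defectives can at least be bounded from a modest sample. A hedged sketch: with zero defects observed in n bottles, the exact binomial bound (the "rule of three") limits the defect rate to roughly 3/n at 95 percent confidence; the sample sizes below are illustrative, not from any actual testing protocol:

```python
import math

def defect_rate_upper_bound(n_tested, n_defective):
    """One-sided 95% upper confidence bound on the defect proportion.
    Zero observed defects gives the exact binomial 'rule of three'
    bound (about 3/n); otherwise a rough normal approximation is used."""
    if n_defective == 0:
        return -math.log(0.05) / n_tested
    p = n_defective / n_tested
    return p + 1.645 * math.sqrt(p * (1 - p) / n_tested)  # one-sided 95% z

# Zero defective bottles found in 60 tested still only bounds the
# defect rate at about 5 percent with 95 percent confidence
print(f"{defect_rate_upper_bound(60, 0):.3f}")   # 0.050
```

This is why tight bounds on the defect rate demand large, expensive samples: halving the bound roughly doubles the number of bottles that must be tested.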
Will tiered certification allow most users to buy relatively inexpensive bottles that meet their needs while still allowing the standard to drive production of better bottles at an appropriate incremental price? An investigation into the statistics required to assess bottle cleanliness indicates that the greater the level of confidence in bottle cleanliness, the more onerous (and expensive) the testing process becomes. Costs and benefits must be rationally weighed. The system need not be pass/fail. Production runs could be certified to the highest level the testing protocol supports, or to lower levels, depending upon the actual performance witnessed in testing.
Further discussion of this issue among all stakeholders is warranted. Several conversations among Noria consultants failed to reach consensus on what an appropriate certification system might look like. The author will present a paper at Lubrication Excellence 2003 to spark discussion among bottle producers, commercial laboratories and industrial end users. The technical paper will focus on the statistics involved in determining an appropriate testing protocol. The author also plans to facilitate a discussion among all interested stakeholders in an attempt to start moving the industry toward consensus. Hopefully such discussions will be echoed at the International Council for Machinery Lubrication (ICML) and other standards-writing organizations until this issue is resolved.
If you would like more information on how to certify bottles, be sure to attend Lubrication Excellence 2003. Noria looks forward to seeing you there and hearing your opinions.