Discussion of appropriate wear metal alarms and limits must occur within the context of the tests used for determining wear metal concentrations in used oil analysis. Tests commonly used to prescreen for abnormal wear metal production in a mechanical or hydraulic system include particle counts by ISO 4406:99, elemental analysis and ferrous density analysis. Each test is the most powerful and accurate when used in combination with the others.
Wear debris, the solid particles produced by the breakdown of machine surfaces, is particulate matter and therefore contributes to the overall system particle count. Except in extreme (and often dire) circumstances, its quantity is relatively low compared to the amount of particulate dirt in the system.
When monitoring for contamination, dirt levels are the signal of interest. In this case, however, dirt constitutes noise that makes it difficult to discern the magnitude of the actual signal of interest, wear particles. Unclean systems have very low, and perhaps even fractional, signal-to-noise ratios (Figure 1).
Figure 1. Clean Oil Makes it Possible to Detect Faults Earlier
Even relatively clean systems can have low signal-to-noise ratios if the data tends to scatter wildly. With particle counting, it is easiest to monitor wear debris levels at early stages in clean, stable systems. However, in many cases the filters used to establish cleanliness remove wear particles as well.
Particle counting, however, can be effective in this capacity if used properly. In low noise systems, a jump in particle count indicates that something has changed, be it the introduction of particulates or an increase in wear debris formation.
The ISO 4406:99 standard, with its three range numbers, is superior to the older two-number version of the standard for this use. In most systems, the environmental dirt is relatively consistent in its size distribution, meaning that the relationship between the range numbers tends to remain relatively constant.
Over time, a sealed system with tight clearances may tend to produce silt, gradually breaking larger particulates apart into smaller particles without new larger particles being introduced. In such an instance, the first range number may begin to creep upward with no change, or even a reduction, in the higher range numbers.
In other cases, the onset of more severe wear modes will tend to produce wear debris of 10 microns or greater (larger particles being indicative of more aggressive wear behavior). These particles would therefore cause a rise in the last two range numbers, or in all three, due to the cumulative nature of the reporting standard. Of course, maintenance action cannot be taken on the basis of this information alone, and subsequent testing of a more specific nature is warranted.
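The cumulative reporting described above can be sketched in code. The following Python sketch converts per-millilitre counts at the three standard sizes into an ISO 4406:99 code; the function names are illustrative, and the code uses the geometric series underlying the standard (the published table rounds its limits, e.g. 1300 to 2500 particles/mL for range number 18), so results near a boundary may differ slightly from the table.

```python
import math

def iso4406_code(count_per_ml: float) -> int:
    """Approximate ISO 4406:99 range number for a cumulative particle count.

    Uses the geometric series in which the upper limit of range number R
    is 0.01 * 2**R particles per mL (the standard's table rounds these).
    """
    if count_per_ml <= 0:
        raise ValueError("count must be positive")
    return max(1, math.ceil(math.log2(count_per_ml / 0.01)))

def iso4406(counts_4um: float, counts_6um: float, counts_14um: float) -> str:
    """Three-part cleanliness code, e.g. '18/16/13', from cumulative
    counts per mL at >4, >6 and >14 microns."""
    return "/".join(str(iso4406_code(c)) for c in (counts_4um, counts_6um, counts_14um))

# Because the counts are cumulative, an influx of large wear particles
# raises the third number and, with it, the first two as well.
print(iso4406(2000, 500, 80))  # → 18/16/13
```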
Elemental analysis can help distinguish silt production and changes in the dirt ingression rate from wear debris formation. By viewing the silicon (Si) and aluminum (Al) numbers (which should track together in many applications due to their proportion in common dust), the amount of smaller dirt particles can be confirmed.
Silt formation or dirt ingression will most likely cause these elements reported in spectroscopic analysis to increase. If they do not increase, it is further evidence that wear is causing the particle count change.
Care must also be taken when using elemental analysis as a primary wear debris indicator. Inductively coupled plasma (ICP) and rotating disc electrode (RDE) spectroscopy are two of the most commonly applied methods for routine laboratory use. Both methods measure small particles (less than three microns) efficiently, but measure larger particles with decreasing efficiency.
Wear metal concentrations can, therefore, be underreported as wear states become more severe and the particle distribution shifts toward the larger end. While wear metal particles will likely be underreported, regular monitoring and knowledge of machine components should still provide excellent warning of developing wear states in most machines. More information on the basics of elemental analysis can be found in the article titled “Oil Analysis 101: Elemental Analysis” in the January-February 2002 issue of Practicing Oil Analysis.
Ferrous density testers, such as direct read (DR) ferrography, magnetic-flux testers and ferrous particle counters are perhaps the best tools for determining wear particle concentrations under certain situations. However, these methods suffer from limitations that do not affect the two methods already discussed.
DR ferrography directly monitors the amount of magnetic ferrous debris in a sample and divides it into large and small particle sizes. This allows an assessment not only of the amount of wear debris, but also the severity of the wear mode. When there is no question that a machine is producing magnetic wear debris, this is obviously the most accurate test method.
Many machines, however, have components containing aluminum, babbitt, copper or bronze that could be wearing. Such debris is not magnetic, and these devices will not track it. Highly alloyed stainless steels also may not be magnetic, and thus may not be detectable. In addition, magnetic ferrous debris may lose its magnetic response if transformed to iron oxides (rust).
Other sources of bias, untrendability and error are commonly encountered, but can be avoided through careful program design and execution. They include:
- Sample contamination from external environmental sources, or cross-contamination from other samples or machines.
- Signal dilution from dead zone sampling, particle fly-by, post-filter sampling or insufficient sample agitation.
- Signal-to-noise ratio reduction from bottom sediment in the sample, making it difficult to differentiate old debris from new wear.
- Data corruption from a failure to normalize.
Data normalization can also be used when selecting alarms and limit values for wear debris parameters. The length of time a lubricant has been in service, the time that has elapsed since the previous sample and the make-up rate all affect the absolute value of wear debris indicators. Fortunately, with good record-keeping and simple mathematics, these parameters can be normalized and made trendable, repeatable and, therefore, alarmable.
Make-up oil is routinely added to many systems to compensate for lubricant that has been consumed; lost due to combustion, leakage, vaporization or misting; or drained off to remove contaminants such as water. Whatever the reason, the lost lubricant carried with it some portion of the signal (wear debris) intended to be measured; therefore, the make-up volume dilutes the remaining signal.
Similarly, even small variations in the period between oil samples can make trend plots meaningless. Wear debris is often produced on a continuous basis and may not by itself be indicative of a severe problem. The wear debris generation rate, however, is of great concern. It is therefore helpful to normalize results to a standard time interval so that generation rates can be compared fairly. Oil age in service has a slightly different effect and will be discussed separately.
The following formula (Formula 1) should be used to normalize each data point before it is trended:
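The original Formula 1 did not survive extraction. The following Python sketch is a plausible reconstruction consistent with the two corrections the text describes: scaling the reading back up by the dilution from make-up oil, and rescaling to a standard time interval. The parameter names and the 500-hour default interval are assumptions for illustration.

```python
def normalize(measured, makeup_volume, sump_volume,
              hours_since_last_sample, standard_interval_hours=500.0):
    """Normalize a wear debris reading for make-up dilution and
    sampling interval (reconstruction of the article's Formula 1;
    names and default interval are assumptions).
    """
    # Make-up oil dilutes the wear debris signal; scale it back up.
    dilution_correction = (sump_volume + makeup_volume) / sump_volume
    # Rescale to a standard interval so generation rates compare fairly.
    time_correction = standard_interval_hours / hours_since_last_sample
    return measured * dilution_correction * time_correction

# 40 ppm iron, 5 L of make-up added to a 50 L sump, sampled after 400 hours:
print(round(normalize(40.0, 5.0, 50.0, 400.0), 1))  # → 55.0
```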
While prescreening wear debris tests are extremely useful, and their application can be made much simpler through the use of alarms, limit values are not easily applied to these tests. All of these prescreening tests should be followed up with further analysis (exception tests), for rarely can the data from these tests be so conclusive that it warrants major maintenance intervention without supplemental data.
With that said, however, some organizations may opt to impose both high alarms and high-high alarms to trigger more rapid action for critical situations.
Alarm setting for wear debris generation is usually based on a percentage or statistically derived absolute change from a normal baseline for a given machine in a given application, or for a group of similar machines. The alarm can be set on the normalized parameter value itself; on its first derivative (slope), to monitor the rate of change; or on its second derivative, to monitor acceleration for stability analysis purposes.
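The slope and acceleration calculations can be sketched with simple finite differences (the readings and machine hours below are hypothetical):

```python
def rate_of_change(times, values):
    """First differences: slope between successive readings."""
    return [(v2 - v1) / (t2 - t1)
            for t1, t2, v1, v2 in zip(times, times[1:], values, values[1:])]

times = [0, 500, 1000, 1500]        # machine hours (hypothetical)
iron = [10.0, 14.0, 18.0, 30.0]     # normalized ppm Fe (hypothetical)

slopes = rate_of_change(times, iron)          # first derivative
accel = rate_of_change(times[1:], slopes)     # second derivative

print(slopes)  # → [0.008, 0.008, 0.024]  (the last interval's rate tripled)
print(accel)
```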
Here an understanding of the nature and history of equipment is important, because what is normal for one class of equipment, or even a specific device, may indicate trouble in another, or vice versa. Further, the analysis technique most appropriate will depend upon the system’s design and other factors. The specifics of these methods were discussed in “Alarms 101: Setting Viscosity Alarms and Limits”, which was published in the January-February 2003 issue of Practicing Oil Analysis.
When setting alarms for wear debris detection, it is important to remember that no meaningful baseline can be determined from the unused lubricant. Instead, a baseline period for a piece of equipment should be established during a period of proper function under normal operating conditions. For most applications, the random scatter of data will allow for the application of the Central Limit Theorem, and therefore the data’s actual average can be modeled effectively by 10 to 15 data points (though more are always better).
From this average, upper control limits can be set using an appropriate significance level (one for which the false alarm rate is acceptable) and the t-distribution to create statistically derived limits. Alternately, set percentage limits can be imposed, or an alarm value selected informally by choosing a value above the apparent scatter of the data. Figure 2 shows a rate of change control chart with statistically derived upper and lower control limits.
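A minimal sketch of the statistically derived limit, assuming a hypothetical iron baseline and a one-tailed Student's t critical value looked up from a table (roughly 2.821 for a significance level of 0.01 with nine degrees of freedom):

```python
import statistics

def upper_control_limit(baseline, t_critical):
    """Upper control limit from a baseline period: mean + t * s.

    t_critical is the one-tailed Student's t value for the chosen
    significance level and n-1 degrees of freedom, looked up from a
    table. A simple mean-plus-t-sigma form is assumed here.
    """
    mean = statistics.mean(baseline)
    s = statistics.stdev(baseline)   # sample standard deviation
    return mean + t_critical * s

baseline_fe = [12, 14, 11, 13, 15, 12, 14, 13, 12, 14]  # hypothetical ppm Fe
ucl = upper_control_limit(baseline_fe, t_critical=2.821)
print(round(ucl, 1))  # → 16.5
```

Readings above the limit would flag the machine for exception testing rather than trigger maintenance outright, consistent with the prescreening role described above.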
For more information on this topic, please consult the article titled “Statistically Derived Rate-of-Change Oil Analysis Limits and Alarms”, which was published in the January-February 2001 issue of Practicing Oil Analysis.
Rate of change alarms (first derivative or slope) are more effective for many wear debris tracking applications. Some devices will normally have the wear parameter trending upward without the wear generation rate being excessive. This is due to a lack of material balance between debris generation and removal.
During the baseline period, if the normalized parameter trends upward (for example, in unfiltered systems in which wear debris is allowed to accumulate), then the rate of wear debris production - the slope of the normalized parameter - is of primary concern for baselining.
Wear debris alarms should always be considered a “work in progress.” In the early stages of a program, trial limits should be set. As an analyst becomes more familiar with how equipment operates, what is normal for individual pieces of equipment and equipment classes, alarms will naturally evolve. Original equipment manufacturers (OEMs) can be a good source for initial trial alarm levels until a satisfactory baseline can be established.
Effective oil analysis programs continuously evolve with experience and added knowledge. Knowledge about machine design and operation is critical. For example, in the case of multiclad plain bearings, a drop in the production of one wear element may indicate that the layer has worn through. In such a case, it is important to monitor and alarm not only the surface material wear metal, but also the layers beneath it. Alarms should be set to trigger at low levels as well as high levels.
Wear particle count (WPC) and percent large particles (PLP) can be trended from DR ferrography and can have alarm levels set using any of the methods already discussed. The formulas for these measures can be seen in Formula 2.
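The original Formula 2 did not survive extraction. The following sketch uses the commonly published DR ferrography relations, where DL and DS are the large and small particle readings; any dilution correction applied by a particular instrument is ignored here.

```python
def wpc(dl: float, ds: float) -> float:
    """Wear particle concentration: sum of the large (DL) and small (DS)
    DR ferrography readings (dilution corrections ignored)."""
    return dl + ds

def plp(dl: float, ds: float) -> float:
    """Percent large particles: the share of the total reading
    attributable to large debris, as a percentage."""
    return 100.0 * (dl - ds) / (dl + ds)

print(wpc(30.0, 10.0))  # → 40.0
print(plp(30.0, 10.0))  # → 50.0
```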
When setting alarms for the PLP parameter, it is important to consider that while increasing percentages of large particles are indicative of advanced wear states, in some equipment, wear particles can be reduced in size over time as they get drawn into machine clearances. It is necessary to set alarms at both high and low levels and track these two parameters together.
Wear debris alarms should always be followed up with further analysis to confirm the alarm, determine its source and estimate the severity of the condition. Previous articles, such as Jim Fitch’s Practicing Oil Analysis article “Tricks to Classifying Wear Metals and Other Used Oil Suspensions,” which was published in the September-October 2000 issue, address the problem-solving process in detail.
With regular sampling and proper analysis, used oil analysis should provide an early warning of machine faults unless severe damage resulted from a transient condition. As such, there is usually sufficient time to perform proper analysis before aggressive action needs to be taken.
Resampling helps the analyst confirm that an alarm value was not a statistical fluke. Supplemental sampling at secondary locations can help confirm the source of the wear debris, and therefore the component that is failing.
Supplemental tests can be used to eliminate possible interferences and other causes for the alarm. Analytical ferrography can be employed to classify the source and wear mechanism that created the debris. Other predictive technologies can be used, such as infrared thermography, ultrasound and vibration to confirm a degraded machine condition.
Each of the most commonly applied wear screening tests is most powerful and accurate when used in combination with the others. All of these prescreening tests should be responded to with further analysis, for rarely can the data from these tests be so conclusive as to warrant major maintenance intervention without supplemental data. Properly selected alarms for all of these tests should be a powerful tool in limiting the quantity of analysis that must be performed while improving its overall effectiveness.