Think of oil analysis as being like the television game show Wheel of Fortune. There’s a message to be revealed, but you’ll only see and understand it if you expose enough letters in their proper order. Some of these letters are in the oil, but many are elsewhere: current knowledge of the machine’s operating environment, service history, inspection reports and condition monitoring data from companion technologies. Start by carefully listing the questions you want oil analysis to answer. Then work backwards to determine the test slate required to answer them. Optimize the data set; don’t minimize or maximize it.
Many labs are far better at analyzing the oil than at presenting and interpreting the data. Expertise in analytical chemistry does not always translate into effectiveness in machine condition monitoring. In fact, users are frequently better served by engaging a lab to provide timely and accurate data than by relying heavily on its interpretation and reporting services. With the right tools, they can easily customize and manage their own data using the Web-based products offered by most commercial labs or proprietary software sold independently.
Getting your data to talk to you has a lot to do with presentation, which is the subject of my column today. By leveraging the full resources of computer software, including multimedia, oil analysis can take on a whole new dimension. I’ve seen hundreds of attempts at formatting data for maximum effect and there are clearly some top-flight lab reports in use. The most important features of these reports are described in the list below:
Quick-Glance Condition Overview. Many reliability professionals have a full slate of daily activities, including routine review of oil analysis data on hundreds of machines. Report formats that quickly filter the data and identify machines with noncomplying conditions are essential. The order of presentation can be arranged by the severity of the offending data and the criticality of the machine to plant production. The vast majority of reports from oil labs place this “quick-glance” overview at the top.
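The ranking idea above can be sketched in a few lines of code. This is an illustrative sketch only; the machine names, severity codes and criticality scores are invented, not drawn from any lab's actual report format.

```python
# Hypothetical quick-glance ranking: filter out compliant machines, then
# sort worst-first by a simple severity-times-criticality score.
# Severity: 0 = normal, 1 = cautionary, 2 = critical (invented scale).

samples = [
    {"machine": "Gearbox A", "severity": 0, "criticality": 3},
    {"machine": "Pump B",    "severity": 2, "criticality": 5},
    {"machine": "Engine C",  "severity": 1, "criticality": 4},
]

# Keep only machines with a noncomplying condition
noncompliant = [s for s in samples if s["severity"] > 0]

# Present the most urgent machines first
overview = sorted(noncompliant,
                  key=lambda s: s["severity"] * s["criticality"],
                  reverse=True)

for s in overview:
    print(s["machine"], s["severity"] * s["criticality"])
```

A real implementation would pull severity from the lab's flagging logic and criticality from the plant's asset register, but the sort-and-filter structure stays the same.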
New Oil Baseline. It would seem obvious enough, but too often you don’t see new oil data on reports. Frequently this is because users have failed to provide samples of new lubricants for testing. In my opinion, the new lube should be the first oil tested, and its data should be a permanent fixture on the report. Baseline information on additives, neutralization numbers, viscosity, etc. must be available for quick comparison to routine test data. Never rely on data from product data sheets for reference purposes.
Flagged Data Tied to Comments. All data outside acceptable bounds should be visibly flagged on the oil analysis report. Most labs do this and even identify the magnitude of the breach. However, many oil analysts fail to tie the flagged data to their comments. For instance, sodium might be flagged in the data table while the comments state only “possible coolant leak.” Many users don’t realize that sodium is associated with a common additive used in antifreeze, so the connection needs to be made explicit.
Targets and Limits. Oil analysis is not simply about trending, although trend analysis is an important data interpretation technique. Most proactive maintenance alarms, for instance, have no trendable characteristic. These include particle counts, moisture, glycol, fuel dilution and soot. For such properties, hard limits or data maximums should be used. A good oil analysis report clearly shows predefined cautionary and critical alarm levels for every data type. This lets the user assess how close current data is to an alarm level. For instance, tin might be 7 ppm and unflagged; the user may want to know that it would have been flagged at 8 ppm. Likewise, with visible alarm levels, the magnitude of an offending condition is easier to judge. Users should take an active role in setting their data targets and limits since these are directly influenced by machine criticality and operating severity.
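The tin example above amounts to reporting each value's distance from its alarm levels, which is easy to sketch. The limits and readings below are invented for illustration and are not recommended alarm values.

```python
# Illustrative proximity-to-alarm check. Each element carries a
# (cautionary, critical) limit pair in ppm; both pairs are made up.

limits = {"tin": (8, 15), "iron": (50, 100)}
reading = {"tin": 7, "iron": 62}

statuses = {}
for element, value in reading.items():
    caution, critical = limits[element]
    if value >= critical:
        status = "CRITICAL"
    elif value >= caution:
        status = "CAUTION"
    else:
        # Unflagged, but show how close the value sits to its alarm
        status = f"ok ({caution - value} ppm below cautionary limit)"
    statuses[element] = status
    print(f"{element}: {value} ppm -> {status}")
```

Here tin at 7 ppm is unflagged but reported as sitting 1 ppm below its cautionary limit, exactly the context the column argues a good report should supply.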
Grouped Data Plots. Data related to common problems should be plotted together on a single graph. For instance, it’s important to quickly detect coolant leaks in diesel engines, and several oil analysis parameters can collectively point to one: sodium, boron, viscosity, BN and water contamination. Elemental families can also be grouped on plots using knowledge of machine metallurgy, such as diesel engine top-end wear (rings, piston and cylinder liner). In a compressor oil, you might group data related to oil oxidation, including viscosity, AN and antioxidants (LSV or FTIR). Grouping is a tool for pattern recognition. Very often, modest movements in individual data don’t get flagged, but when viewed collectively, a discernible, and perhaps serious, condition can be confirmed.
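The "modest movements viewed collectively" idea can be sketched as a group-level score. All thresholds and readings below are invented examples; a real coolant-leak rule would use statistically derived limits.

```python
# Hedged sketch of group-level pattern recognition: each parameter sits
# just below its own cautionary limit, so no individual flag fires, yet
# the group as a whole is suspiciously elevated.

coolant_group = {  # element: (cautionary, critical) ppm limits, invented
    "sodium":    (20, 40),
    "boron":     (10, 25),
    "potassium": (15, 30),
}
current = {"sodium": 18, "boron": 9, "potassium": 13}

# Fraction of its cautionary limit each parameter has reached
ratios = {k: current[k] / coolant_group[k][0] for k in coolant_group}

# Individual flags: none fire, since every ratio is below 1.0
flags = [k for k, r in ratios.items() if r >= 1.0]

# Group average ratio is high, which would warrant a coolant-leak check
group_score = sum(ratios.values()) / len(ratios)
print(flags, round(group_score, 2))
```

Each element is under its own limit, so `flags` is empty, but the group score near 0.89 is the collective movement that a grouped plot makes visible at a glance.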
Photography. Digital photography adds an important dimension to oil analysis and is a cheap and simple addition to a report. Some labs actually photograph every sample to trend oil color and clarity. The current sample can be visually compared to new oil and previous samples. Other uses of photography include membrane patch colorimetry (varnish potential), ultracentrifuge (varnish potential), blotter spot testing and analytical ferrography.
Normalizing Data. Oil analysis is influenced by the age of the lubricant (runtime hours). Without runtime information, rate of change is impossible to define. For instance, 50 ppm may be of little concern in a gearbox with 2,000 hours of runtime. Conversely, 50 ppm might trigger a critical alarm in the same gearbox at 250 hours of service. Some labs present “run rate” data instead of absolute values for wear metals; parts per million per 100 hours of service is an example. Likewise, data should be normalized for machines that require a continuous supply of makeup oil. Makeup oil replenishes depleting additive elements and dilutes wear metals and contaminants. By knowing the makeup volume added since the last sample, the dynamics of critical lubricants can be normalized for more effective interpretation.
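The arithmetic behind both normalizations is simple enough to show directly. The numbers are the column's own examples plus an invented sump/makeup scenario; the dilution correction assumes full mixing, which is a simplification.

```python
# Run-rate normalization: express wear metal as ppm per 100 hours.
def run_rate(ppm, hours):
    """Wear-metal concentration normalized to 100 hours of service."""
    return ppm / hours * 100

print(run_rate(50, 2000))  # 2.5 ppm/100 h, modest for a long-serviced gearbox
print(run_rate(50, 250))   # 20.0 ppm/100 h, same reading, eight times the rate

# Makeup-oil correction: estimate what the concentration would have been
# without dilution, under a simple full-mixing assumption (volumes invented).
def makeup_corrected(measured_ppm, sump_volume, makeup_volume):
    """Back out dilution from makeup oil added since the last sample."""
    return measured_ppm * (sump_volume + makeup_volume) / sump_volume

print(makeup_corrected(40, 100, 25))  # 50.0 ppm equivalent before dilution
```

The same 50 ppm reading yields an eightfold difference in run rate, which is the column's point: without hours and makeup volume, the raw number is ambiguous.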
Exception Tests. There are routine tests and exception tests. Certain oil analysis tests are expensive and hence their use should be based on need and importance. These exception tests (also known as confirming tests) are triggered by data from routine tests. Common examples include analytical ferrography, varnish potential, demulsibility, FTIR fingerprint analysis and oxidation stability. Data (and other related information) from exception tests should be integrated into an oil analysis report to enable a comprehensive view of current conditions.
Imagine how difficult it would be to recognize the hidden phrase in Wheel of Fortune if the exposed letters were shown in random order. This is not unlike an oil lab that presents test data in an undifferentiated block of numbers. For a gifted analyst this may not present a problem, but for the rest of us it can be a struggle to extract the hidden message within. Effective oil analysis may be as much about data presentation as it is about the data itself.