Used oil analysis is a tool, and like most tools, it can be properly used or misused, depending on the application, user, surrounding conditions, etc. A number of articles and publications explain how to interpret the information in an oil analysis report, but most fail to address one very important issue: statistical normalcy. What is “normal” in a data set represents the typical average values and expected variation within that group. Understanding normalcy is a matter of how you view a series of used oil analyses and how the results can shape your assessment of a healthy or ailing piece of equipment as well as the viability of continued lube service.
Most people have heard of Six Sigma and similar statistics-based approaches. These are as applicable to the world of lubricants as to any other topic. Statistical analysis can be applied at both small and large scales. Typically, these are referred to as micro-analysis and macro-analysis. Micro-analysis looks at one specific entity and lets data develop as inputs affect it. An example of this would be performing a series of used oil analysis tests on one engine with reasonably consistent usage patterns. All inputs (lubricant, fuel, filtration, sample cycle, etc.) are held constant or with minimal change so the natural development of information can be seen. This is done to establish ranges and to allow for any trends to develop. Over time, this methodology can be used to decide which product or process excels over another for a specific application.
It is important to note that even when experiencing extremely consistent conditional and resource inputs, there is variation, even when the process is in control. You need a considerable amount of data from this single source to define what is average and normal. This takes time, money and patience.
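As a sketch of how a “normal” range emerges from a single-source data stream, the snippet below computes the mean and standard deviation of a hypothetical series of iron readings (the values are illustrative, not from the article’s data set) and defines normal as the mean plus or minus two standard deviations:

```python
import statistics

# Hypothetical iron (Fe) readings in ppm from one engine,
# sampled at consistent oil change intervals (illustrative values)
iron_ppm = [9.8, 10.5, 10.1, 9.6, 10.9, 10.3, 9.9, 10.4]

mean = statistics.mean(iron_ppm)
stdev = statistics.stdev(iron_ppm)  # sample standard deviation

# "Normal" band: mean plus or minus two standard deviations
low, high = mean - 2 * stdev, mean + 2 * stdev
print(f"mean={mean:.2f} ppm, stdev={stdev:.2f} ppm, "
      f"normal range {low:.2f}-{high:.2f} ppm")
```

Even in this tightly controlled case the readings scatter; only after enough samples does the band stabilize, which is why the single-source approach takes time, money and patience.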
Macro-analysis does not look at just one entity but all those in a desired grouping. It predicts the behavior (results) of the mass population’s reaction to changing conditions (multiple inputs). With this method, you can look at a large group of data that represents a piece of equipment (engine, gearbox, differential, transmission, etc.) from different points of origin and determine what is “normal” across a broad base of applications. Macro-analysis comes much quicker because multiple sources are accepted. However, caution must be used to ensure that illogical conclusions are not drawn based upon false presumptions or by confusing correlation with causation.
Table 1 is a good example of micro-analysis for a V-6 gasoline engine. Oil changes were performed religiously, the inputs were consistent and the owner was dedicated to the testing parameter protocol. The vehicle saw very typical use in its life cycle and environment, including weather, driving cycles, etc.
In this example, the data created was consistent and could be used to make a sound decision for the stated operating conditions. No abnormalities were revealed. The standard deviations were all well below the means, which was as expected and desired in a controlled micro-data set.
The vehicle went from a steady diet of a synthetic oil with a premium filter to a quality conventional oil with an off-the-shelf filter. The data shows that the average wear metals shifted less than a point after this change. All shifts were well within one standard deviation for each distinct metal.
What can be surmised from these results is that there was no tangible benefit to using the high-end products for this maintenance plan and operational pattern. Conversely, the typical quality baseline products presented no additional risk of accelerated wear. It cannot be concluded that this result would be true in all potential circumstances, only that it is true when applied to a 5,000-mile oil change interval with the given operating conditions. Significantly longer oil change intervals likely would have shown a statistical difference between the two lube/filter choices, but that was not part of the test protocol.
The following examples of macro-analysis illustrate how mass-market data can be used. The first set of data is from a V-8 gasoline engine.
In Table 2, note the two columns for lead (Pb). One is the raw data, while the other is the same data stream with three data points removed because they were affecting the “normalcy” of the data. Most of the lead counts in all the other samples were well below 35 parts per million (ppm), but three samples had lead counts of 68 ppm, 204 ppm and 602 ppm. When the individual results were reviewed, there was no reasonable explanation as to why the lead was so high in these three reports. In Table 3, you can see how greatly those three data points were skewing the results.
Notice how the average lead count dropped more than 57 percent, and the standard deviation decreased by nearly a factor of 10. Only three samples of 548 were responsible for skewing the data this severely. This is where math and common sense come together to form a reasonable conclusion that some intervention in the data is warranted. By removing only 0.5 percent of the lead data population, the range shifted significantly. This indicates that those three samples were not “normal,” and the remaining 99.5 percent were.
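The effect of a few spoilers on the average and standard deviation can be demonstrated with a toy data set (the values below are illustrative, not the article’s actual 548 samples): most lead readings sit well below 35 ppm, then a handful of extreme spikes are mixed in:

```python
import statistics

# Mostly "normal" lead (Pb) readings, all well below 35 ppm (illustrative)
normal_pb = [4, 6, 3, 8, 5, 7, 2, 9, 6, 4] * 10  # 100 normal samples
outliers = [68, 204, 602]                         # three abnormal spikes

raw = normal_pb + outliers   # what the lab report stream looks like
cleaned = normal_pb          # same stream with the spikes removed

print(f"raw:     mean={statistics.mean(raw):.1f}  "
      f"stdev={statistics.stdev(raw):.1f}")
print(f"cleaned: mean={statistics.mean(cleaned):.1f}  "
      f"stdev={statistics.stdev(cleaned):.1f}")
```

Three points out of 103 are enough to multiply the mean and inflate the standard deviation by an order of magnitude, which is exactly the kind of skew Tables 2 and 3 illustrate.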
In macro-data, when the standard deviation is some multiple larger than the mean, there is cause to believe abnormalities are embedded in the data stream. When the deviation is smaller, it indicates the mass-market population is representing the variability of inputs as desired and not being affected by spoilers. Unfortunately, there is no hard and fast rule. Training, experience and knowledge of the subject matter will help define and delineate when and where to intervene.
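One way to operationalize this rule of thumb is a simple ratio check. The threshold below is an arbitrary illustrative choice, since, as noted, there is no hard and fast rule:

```python
def looks_abnormal(mean, stdev, ratio_threshold=3.0):
    """Flag a macro-data stream whose standard deviation is some
    multiple larger than its mean -- a hint that spoilers may be
    embedded.  The threshold is a judgment call, not a hard rule."""
    if mean <= 0:
        return False  # no meaningful ratio for a zero or negative mean
    return stdev / mean > ratio_threshold

print(looks_abnormal(mean=16.0, stdev=62.0))  # spiky data stream
print(looks_abnormal(mean=5.4, stdev=2.1))    # well-behaved stream
```

A flag from a check like this is a prompt to review individual reports, not a license to delete data automatically.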
In examining the results through the years, there clearly were not any significant changes over time. For example, the average iron wear rate was reasonably consistent and varied by less than 1 part per million over five years of data. However, if you look at the iron wear in detail, a great storyline develops. When the oil was run longer, the iron went up, and very predictably so. In 2007, the average oil sample was taken at 4,500 miles, and the iron average was 10.2 ppm. Five years later, the average oil sample was taken at 8,100 miles, and the iron average was 18.1 ppm. An 80-percent increase in mileage was mirrored by an 80-percent increase in iron. That is a very predictable response curve; the wear is consistent.
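The proportionality described above can be checked by normalizing the iron reading to miles on the oil. Using the averages quoted in the text, the per-mile wear rate comes out nearly identical for both intervals:

```python
# Averages quoted in the article: miles on the oil at sampling time
# and the corresponding average iron reading (ppm)
rate_2007 = 10.2 / 4500    # ppm of iron per mile, 2007 (4,500-mile samples)
rate_later = 18.1 / 8100   # ppm per mile, five years later (8,100-mile samples)

print(f"2007:       {rate_2007:.6f} ppm per mile")
print(f"five years: {rate_later:.6f} ppm per mile")
```

The two rates agree within about 1.5 percent, which is the numerical sense in which the wear is “consistent.”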
When oil is changed frequently, a higher iron wear metal count will be seen in the oil analysis results. There are two reasonable explanations for this phenomenon: residual oil and tribo-chemical interaction. Studies have shown that elevated wear levels after an oil change can be directly linked to chemical reactions of fresh additive packages. In addition, when you change oil, no matter how much you drain into the catch basin, there is always a moderate amount left in the engine. It is estimated that up to 20 percent of the old oil remains, depending on the piece of equipment. So when you begin your new oil change interval, you are not starting at zero ppm.
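The residual-oil effect can be put into numbers with a simple mixing calculation. The 20-percent carryover is the article’s estimate; the specific ppm figure below is illustrative:

```python
def starting_ppm(old_oil_ppm, residual_fraction=0.20):
    """Wear-metal concentration at the start of a new oil change
    interval, given the fraction of old oil left in the engine.
    Fresh oil is assumed to contribute zero ppm of wear metals."""
    return old_oil_ppm * residual_fraction

# If the drained oil carried 18 ppm of iron and 20 percent remains,
# the "fresh" fill does not start the interval at zero:
print(starting_ppm(18.0))
```

That carried-over concentration inflates the apparent per-mile wear rate of a short interval, which is one reason frequent changes show higher counts.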
The oil analysis results from this example showed that engine wear was generally unaffected by operational conditions and oil change intervals. It was also concluded that the filtration selection, oil brand and grade, as well as various service factors did not have much of an influence on the results. For this engine, it didn’t make much difference what oil was used or how it was driven.
The next set of data in Table 4 is from a V-8 diesel engine. These oil analysis samples represent fairly high-mileage vehicles, with 179 of the 527 samples from vehicles with more than 100,000 miles and many others from vehicles with more than 250,000 miles.
Once again, there is a need to manipulate the data to remove abnormalities. Forty-one samples had ultra-high copper (Cu) counts, with many readings more than 100 ppm and some more than 300 ppm. Therefore, a separate “copper prime” column was created to root out the high flyers. Although some might decry the removal of data, you can clearly see how these spikes can adversely affect what is deemed “normal.” While 41 samples may seem like a large amount of data to remove, they represent only 7.7 percent of the total population, and yet their removal resulted in nearly a 79-percent drop in the “average” copper magnitude (from 16 to 3.4 ppm).
To determine how the oil’s life cycle affected wear rates, three sub-groups were examined: 3,500 miles, 7,500 miles and 11,500 miles. Again, higher iron wear rates were revealed toward the front of the oil change interval (see Table 5).
In no way does this mean that an engine is being harmed, but it directly contradicts the mantra that more is better (“more” being more frequent oil changes and “better” being less wear). At some point the iron wear rate will begin an ascent and probably become parabolic, but that is farther down the road than most people think. What is clear is that you can change your oil early, but it will not reduce your wear rate. You can also put off your oil change for a long time (at least to 12,000 miles), and it generally will not affect your wear rate.
Table 6 illustrates how macro-analysis can be used to determine what is normal in separate cases. Two diesel-engine trucks were driven in very similar circumstances for the same length of time. Both trucks pulled heavy recreational vehicles into the mountains for roughly 6,500 miles and experienced heat and cold patterns that were comparable to each other. However, there was a significant difference: one vehicle was run on premium synthetic 15W-40 engine oil and utilized bypass filtration, while the other truck used conventional 10W-30 engine oil with a normal filter. Below are the oil analysis results in regard to wear for both trucks.
Did either truck perform better than the other? Without true micro-analysis, you could not make such a determination. Iron is the greatest indicator of cumulative wear, and these samples were right at average levels. At face value, one might claim the synthetic oil did better because the lead value was lower in truck A and higher in truck B, but they are both well within the typical variance. Ironically, the chromium, iron and copper levels were higher in the truck using synthetic oil and bypass filtration, but again these amounts were well within the normal variation.
It can be expected that wear metal counts will bounce up and down from one sample to the next. It is also normal for metals to vary in mass populations and in individual units. However, when you can see a single sample well within mass population “normalcy,” you can deduce that it is performing no better or worse than any other unit using any other fluid/filter combination.
The slight variation that occurred was the normal variation expected of any engine in this family. Two vastly different inputs (lubes and filters) did not result in any significant difference under nearly identical operating conditions over the same duration of exposure. So in these two examples with very similar operational circumstances and conditional limitations, there was no tangible benefit whatsoever to using the high-end products.
Unlike micro-analysis, macro-analysis does not allow for any conclusion to be drawn as to what product(s) might be better or worse than any other in the grouping. When a sample is within one or two standard deviations of average, thereby defining itself as normal, you can only conclude that the events and products that led to that unique data stream were also normal. Any variance is due not to one particular product or condition but to the natural variation of macro-inputs. Therefore, you cannot say that brand X was better than brand Y or brand Z because typical variation is in play. Only with micro-analysis, using long, well-detailed controlled studies, can you make specific determinations as to what might be better or best for an application.
With macro-analysis, if two separate samples are both within the standard deviation, the separate conditions and products did not manifest into uniquely different results. When viewed within an engine family, if engine A is compared and contrasted to engine B, and the two engines used different oils but resulted in similar wear metal counts and rates, you can conclude that neither oil was better than the other. When the results are within one standard deviation, there is no evidence that either product had an advantage over the other. Essentially, under these conditions, you cannot say that either choice is better, but you can say that neither is better.
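The one-to-two standard deviation test described above can be sketched as a simple z-score check. The population mean and standard deviation here are illustrative placeholders for the macro-data values:

```python
def within_normal(sample_ppm, pop_mean, pop_stdev, n_sigmas=2.0):
    """True if a single sample falls within n_sigmas standard
    deviations of the macro-population mean, i.e. it is "normal"
    and no product advantage can be claimed from it."""
    return abs(sample_ppm - pop_mean) <= n_sigmas * pop_stdev

# Illustrative macro-population: iron mean 15 ppm, stdev 6 ppm
print(within_normal(18.0, 15.0, 6.0))  # inside the normal band
print(within_normal(40.0, 15.0, 6.0))  # outside -> worth investigating
```

If both engines’ samples pass this check, the macro-data supports only the weaker claim that neither lube/filter choice distinguished itself.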
Keep in mind that standard deviation data can be large or small, depending on your definition of large and small. For a frame of reference, when the standard deviation is more than 50 percent of the average magnitude, many consider this to be large. However, this does not preclude it from being “normal,” as defined by happening with great regularity and having no adverse successive effects.
In conclusion, used oil analysis is a great tool, but you must understand how to properly manipulate the data and interpret the results. You must know not only the averages but also if there are any abnormalities embedded in those averages and how large the standard deviation is. Unfortunately, you’ll never know how many abnormalities are present, nor if they have been pre-screened for you, because most oil analysis services do not perform this extra filtering. You can take solace in the fact that if your results are near or less than “universal average,” you’re probably in good shape. You are, in essence, “normal.”