Note From Editor:
This paper was presented at the Practicing Oil Analysis 2000 Conference and Exhibition but not included in the proceedings. This version has been modified from the original paper. Due to its length, the paper has been divided into two parts. Part I described available systems and argued the case for an integrated lubrication and oil analysis software program; it was published in the last issue of Practicing Oil Analysis.
Part I of this series outlined the goals for a lubrication and oil analysis information management system. The strengths of standalone systems were compared to the strengths of a tightly integrated system. Part I concluded that the tightly integrated system is far superior. In Part II, the issues related to developing a tightly integrated lubrication and oil analysis information management software system are explored in detail.
To achieve the goal of a tightly integrated lubricant analysis and lubrication management system, the following design requirements had to be met:
1. A common plant hierarchy structure, so that the lubricant analyst and the lubricant planner would work on a common database. However, it was important to ensure that the two groups didn’t have to be exposed to the details of the other group’s data (unless they wanted such exposure). In other words, a data hiding mechanism was required.
2. Integration of lubrication specification data with lubricant purchasing/inventory data. Lubrication specification data tends to be entered into a system at startup and then allowed to go stale (it is not continuously updated). We felt that linking specification data to the much more active inventory management data would help ensure that lubrication specification data is kept up-to-date.
3. Allow data collected in the lubricant analysis database to be used to trigger lubrication tasks. However, we wanted the system to do this intelligently, rather than generating a task whenever a single alarm limit is crossed. An easy-to-use but powerful condition assessment capability was required.
4. Allow data collected in the lubricant analysis database to be used both for tracking the condition of the lubricant and usage/operation of the equipment. In other words, allow meter data to be collected and stored along with laboratory analysis results. Also, we wanted the meter data to be able to trigger lubrication tasks along with the condition data.
5. Allow data collected by onsite instruments to be matched with the data generated by laboratories, allowing both data sets to be used together to manage lubrication tasks.
6. Design the system so that multiple lubrication tasks could be issued to multiple lubrication technicians from a single screen, or sent to a lubrication technician's handheld work management device.
7. Simplify the management of scheduled lubrication tasks as much as possible. Allow the lubrication planner to close off multiple lubrication tasks from a single screen, preferably with a single button.
The structure of the system’s database is a hierarchy that reflects the plant equipment breakdown chart (Figure 1). There can be as many levels of facility as required (a facility can be a building, a geographical area, a ship, a hydraulic system, etc.). Equipment is defined as machine trains or driver-driven combinations (motor-pump, diesel-generators, turbo-compressors) broken down into their individual components.
Below the component level, the system hierarchy separates. Both the lubricant analyst and the lubrication planner are able to see the common plant structure. However, when the analyst opens the plant hierarchy tree to the bottom, he sees a list of sample ports and the various tests (spectrography, viscosity, FTIR, etc.) that are applied to samples taken from a sample port. The planner, on the other hand, sees a list of lubrication tasks that are to be applied to that motor, pump, gearbox, etc., and their associated schedules. The different views are part of the "data hiding" principle: don't show people detail that they are not interested in.
With this system structure, the system is able to “know” that a test point under a specific component can be associated with one or more lubrication tasks on that same component. The stage is set to allow data contained in that test point to alter the schedule for a lubrication task.
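The shared tree with role-specific views can be sketched as follows. This is a minimal illustration, not an actual implementation; all class names, node kinds and example labels are invented for the example.

```python
# Minimal sketch of a common plant hierarchy with "data hiding":
# one shared tree, two role-filtered views of its leaves.

class Node:
    def __init__(self, name, kind, children=None):
        self.name = name            # e.g. "Pump", "Drain port"
        self.kind = kind            # "facility", "equipment", "component",
                                    #   "sample_port" or "lube_task"
        self.children = children or []

def view(node, role, depth=0):
    """Walk the shared tree, hiding the leaf type the role doesn't need."""
    hidden = {"analyst": "lube_task", "planner": "sample_port"}[role]
    if node.kind == hidden:
        return []
    lines = ["  " * depth + node.name]
    for child in node.children:
        lines += view(child, role, depth + 1)
    return lines

plant = Node("Cooling Water Facility", "facility", [
    Node("Motor-Pump Train", "equipment", [
        Node("Pump", "component", [
            Node("Drain port (viscosity, spectrography)", "sample_port"),
            Node("Regrease bearing every 500 h", "lube_task"),
        ]),
    ]),
])

print("\n".join(view(plant, "analyst")))  # sees sample ports, not tasks
print("\n".join(view(plant, "planner")))  # sees tasks, not sample ports
```

Because both views walk the same tree, a test point and a lubrication task under the same component are automatically related, which is what makes condition-based task triggering possible later on.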
Our development efforts have convinced us that the common plant structure is the key element in determining the success or failure of maintenance management integration efforts. Without the common plant structure, the effort to build and configure "glue" components (both programmatically and by plant engineers) becomes so high that it may be deemed greater than the benefit of the integration. The common plant structure reduces the effort to a feasible level.
The dataflow of a system describes the steps undertaken by the system to move data from one module to the next (Figure 2). There are several key dataflow steps required to ensure a smooth interface between lubricant analysis and lubrication management.
1. Collect Data
The data collection process for lubricant analysis involves one of the following:
1. Direct data acquisition from an onsite instrument;
2. Retrieving data from an oil analysis laboratory, either through a Web site, an FTP site or e-mail;
3. Downloading data from a PDA-like device used for field inspections;
4. Manual data entry.
2. Apply Alarms
During the data collection process, preset alarm levels are applied to determine if any of the data going into the lubricant analysis database crosses an alarm threshold.
3. Generate Symptoms
Any data points that cross an alarm threshold automatically create a new object called a symptom. This symptom object contains the specific data and alarm levels that produced the symptom, along with statistical data (mean, standard deviation) and, if appropriate, a curve fit analysis of the data.
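Steps 2 and 3 can be sketched as a single pass over incoming readings. The alarm limits, test names and symptom fields below are illustrative assumptions, not values from any real system.

```python
# Hedged sketch of "Apply Alarms" and "Generate Symptoms": readings that
# cross a preset limit become symptom objects carrying the triggering
# value, the limit, and simple statistics over the sample history.
from statistics import mean, stdev

ALARMS = {"Fe": 50.0, "Si": 15.0}   # illustrative ppm thresholds

def make_symptoms(history):
    """history maps a test name to its readings, newest last."""
    symptoms = []
    for test, readings in history.items():
        limit = ALARMS.get(test)
        if limit is not None and readings[-1] > limit:
            symptoms.append({
                "test": test,
                "value": readings[-1],
                "limit": limit,
                "mean": mean(readings),
                "stdev": stdev(readings) if len(readings) > 1 else 0.0,
            })
    return symptoms

syms = make_symptoms({"Fe": [12, 18, 25, 64], "Si": [5, 6, 7, 8]})
print(syms)   # only Fe crossed its limit, so only one symptom is created
```

A production system would also attach the curve-fit analysis mentioned above; the point here is only that a symptom bundles the data and context needed by the downstream inference engine.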
4. Generate Condition
An inference engine collectively analyzes all symptoms for a particular component or equipment level object. The inference engine can use all of the data contained within the symptom objects to generate one or more diagnostics for the oil sample or inspection route.
5. Meter Data
Special data points in the lubricant analysis database are designated as meter points. They are used to update such operational data as operating hours and/or mileage for equipment or component objects. This is not an essential function for integration, but it certainly makes handling operational data more convenient.
Lubrication Scheduling Module
6. Generate PM Jobs
The lubricant planner is able to update the job backlog at any time by running a “Generate PM Jobs” process. This process compares all the tasks in the database to their schedule, and generates them as scheduled PM jobs if they are due.
The same process also checks for any new condition assessment diagnostics or meter data that may have been added since the last job generation. These new assessments are matched to any PM jobs that are triggered by the condition of the oil. If the oil condition matches the condition trigger for the PM job, the job is then scheduled, in exactly the same manner as if it were a calendar or meter-based job.
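The job-generation pass described above can be sketched as one loop that treats calendar, meter and condition triggers uniformly. All task names, field names and trigger values here are hypothetical.

```python
# Rough sketch of "Generate PM Jobs": each task carries a calendar
# interval, a meter interval, or a condition trigger; a task that is
# due by any of its triggers is emitted as a scheduled PM job.
def generate_pm_jobs(tasks, today, meters, conditions):
    jobs = []
    for t in tasks:
        due = False
        if "interval_days" in t:
            due = (today - t["last_done"]) >= t["interval_days"]
        if not due and "interval_hours" in t:
            due = meters.get(t["component"], 0) >= \
                t["last_hours"] + t["interval_hours"]
        if not due and "condition" in t:
            due = t["condition"] in conditions.get(t["component"], set())
        if due:
            jobs.append(t["name"])
    return jobs

jobs = generate_pm_jobs(
    tasks=[
        {"name": "Change oil", "component": "pump1",
         "interval_days": 180, "last_done": 10},
        {"name": "Regrease bearing", "component": "pump1",
         "interval_hours": 500, "last_hours": 1000},
        {"name": "Flush and refill", "component": "pump1",
         "condition": "abrasive wear"},
    ],
    today=200,                        # day number, for simplicity
    meters={"pump1": 1400},           # operating hours from meter points
    conditions={"pump1": {"abrasive wear"}},
)
print(jobs)   # → ['Change oil', 'Flush and refill']
```

Note that the condition-triggered job is scheduled by exactly the same mechanism as the calendar job, which is the behavior the article describes.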
7. Create and Manage Work Orders
The planner is given final control over which jobs get approved and submitted to the lubrication technicians. The approved jobs are given a work order number, and can be delivered to the technician as a work order list, a series of printed work order forms or electronically as e-mail or downloaded to a PDA. Lubrication work orders specify the task to be carried out, the type of lubricant to use and (sometimes) how much to use.
8. Post Work Orders to History
Once the work order has been carried out, the work order is closed. The closing procedure has the lubrication technician enter the actual type and amount of lubricant used, which can be compared to the budgeted amount. All data is posted to work order history, where it can be used to analyze weekly, monthly and annual lubricant consumption and other costs of operation.
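The closing and posting step might be summarized as below. The record fields and lubricant names are invented for illustration; only the idea of aggregating closed work orders into periodic consumption figures comes from the text.

```python
# Hedged sketch of posting closed work orders to history and rolling
# them up into monthly lubricant consumption totals.
from collections import defaultdict

history = [   # closed work orders, with actual (not budgeted) usage
    {"date": "2000-01-14", "lubricant": "ISO VG 68", "litres": 4.0},
    {"date": "2000-01-28", "lubricant": "ISO VG 68", "litres": 5.5},
    {"date": "2000-02-03", "lubricant": "EP 2 grease", "litres": 0.4},
]

def monthly_consumption(records):
    totals = defaultdict(float)
    for r in records:
        month = r["date"][:7]                      # "YYYY-MM"
        totals[(month, r["lubricant"])] += r["litres"]
    return dict(totals)

print(monthly_consumption(history))
```

Weekly or annual roll-ups, and actual-versus-budget comparisons, follow the same pattern with a different grouping key.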
Assessments as Lubrication Task Triggers
There are several issues that arise with using alarms only as the triggering mechanism for lubrication tasks. For example, say there is a lubrication task that is triggered by a simple alarm such as high iron, and there is a second, more important task that is triggered by high iron, high silicon and a decreasing viscosity trend. If tasks are triggered simply by alarms, how do you specify the trigger for the second task? And how do you make sure that the second task gets generated, but that the first task doesn’t?
The use of a simple set of rules (and an associated inference engine) increases the effectiveness of using oil analysis data as a trigger for lubrication tasks. Without the filtering capability of a rule base, the user could potentially be overwhelmed by spurious or conflicting PM job orders.
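The high-iron example above can be expressed as a tiny rule base in which rules are checked from most to least specific and the first match wins, so the simple high-iron task is suppressed whenever the more specific pattern also matches. The symptom and task names are illustrative only.

```python
# Sketch of a minimal rule base for condition-triggered tasks.
# Most specific rule first; the first rule whose required symptoms
# are all present decides which task is generated.
RULES = [
    ({"iron_high", "silicon_high", "viscosity_trending_down"},
     "Inspect for abrasive contamination and flush system"),
    ({"iron_high"},
     "Resample and check wear debris"),
]

def triggered_task(symptoms):
    for required, task in RULES:
        if required <= symptoms:    # fires only if all symptoms present
            return task
    return None                     # no rule matched: no task generated

print(triggered_task({"iron_high"}))
# → Resample and check wear debris
print(triggered_task({"iron_high", "silicon_high",
                      "viscosity_trending_down"}))
# → Inspect for abrasive contamination and flush system
```

Even this trivial first-match ordering answers both questions in the preceding paragraph: the compound trigger is just a rule with several required symptoms, and specificity ordering guarantees the simpler task is not generated alongside it.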
The data hiding mechanism prevents the lubrication planner or lubricant analyst from being overwhelmed by data that is not immediately relevant or desired. However, the lubricant analyst is likely to be very interested in the lubrication history for a component that is showing signs of lubricant-related failure. The lubricant analyst should be able to see specific data sets from the lubrication management system while being shielded from all the other PM job detail. Conversely, the lubrication management system user should be able to see condition assessment results without getting bogged down in all of the lubricant test data.
There are substantial benefits in integrating lubrication management and lubricant analysis tools. The conclusions we have reached in developing and implementing such an integrated system are:
1. The use of a common plant hierarchy is an essential step in ensuring that the benefits of integration are not overwhelmed by implementation complexity.
2. The effort involved in setting up a tightly integrated system is more than setting up an individual lubricant analysis or lubrication management database, but it is considerably less than setting up both systems independently. However, the real benefit (regarding labor requirements) is that the tightly integrated system requires considerably less maintenance over the long haul, as there aren’t multiple databases that can get out of sync.
3. Using a rule-based interface layer (even a very simple one) between lubricant analysis data and lubricant PM job triggers prevents a lot of unnecessary PM job generation, and helps to prevent the generation of conflicting PM jobs.
4. Both parts of the integrated system need to hide details from the other, but certain data sets need to be viewable across the system.