I was recently involved in a discussion regarding the value of Overall Equipment Effectiveness (OEE). Of course, I fully supported OEE and confirmed that it can bring tremendous value to any organization that is prepared to embrace it as a key metric. I also qualified my response by stating that OEE cannot be managed in isolation:
As a top-level metric, OEE does not describe or provide a sense of actual run-time performance. For example, when reviewing Availability, we have no sense of the duration or frequency of downtime events, only the net result. In other words, we can’t discern whether downtime was the result of a single event or the cumulative result of more frequent downtime events over the course of the run. Similarly, when reviewing Performance, we cannot accurately determine the actual cycle time or run rate, only the net result.
As shown in the accompanying graphic, two data sets (represented by Red and Blue) having the same average can present very different distributions, as depicted by the range of the data, the peakedness of the curve (kurtosis), the asymmetry of the curve (skewness), and significantly different standard deviations.
Clearly, any conclusions regarding the process simply based on averages would be very misleading. In this same context, it is also clear that we must exercise caution when attempting to compare or analyse OEE results without first considering a statistical analysis or representation of the raw process data itself.
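As a minimal sketch of this point, consider two hypothetical hourly throughput samples (the values below are illustrative, not from the graphic) that share the same average yet have completely different spreads:

```python
import statistics

# Two hypothetical hourly throughput samples (units/hour) with the
# same average but very different spreads -- illustrative data only.
red = [100, 100, 100, 100, 100, 100]
blue = [60, 140, 80, 120, 70, 130]

print(statistics.mean(red), statistics.mean(blue))   # both average 100
print(statistics.pstdev(red))                        # 0.0 -- perfectly stable
print(round(statistics.pstdev(blue), 1))             # 31.1 -- highly variable
```

Judged by the average alone, the two processes look identical; the standard deviation is what exposes the difference.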
The Missing Metrics
Fortunately, we can use statistical tools to analyse run-time performance and determine whether our process is capable of consistently producing parts, just as Quality Assurance personnel use the same tools to determine whether a process is capable of producing conforming parts.
One of the greatest opportunities for improving OEE is to use statistical tools to identify opportunities to reduce throughput variance during the production run.
Run-Time or throughput variance is OEE’s silent partner as it is an often overlooked aspect of production data analysis. Striving to achieve consistent part to part cycle times and consistent hour to hour throughput rates is the most fundamental strategy to successfully improve OEE. You will note that increasing throughput requires a focus on the same factors as OEE: Availability, Performance, and Quality. In essence, efforts to improve throughput will yield corresponding improvements in OEE.
Simple throughput variance can readily be measured using Planned versus Actual quantities produced, either over fixed periods of time (preferred) or cumulatively. Some of the benefits of using quantity-based measurement are as follows:
- Everyone on the shop floor understands quantity or units produced,
- This information is usually readily available at the work station,
- Everyone can understand or appreciate its value in tangible terms,
- Quantity measurements are less prone to error, and
- Quantities can be verified (Inventory) after the fact.
For the sake of simplicity, consider measuring hourly process throughput and calculating the average, range, and standard deviation of this hourly data. With reference to the graphic above, even this fundamental data can provide a much more comprehensive and improved perspective of process stability or capability than would otherwise be afforded by a simple OEE index.
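A sketch of that hourly calculation, using hypothetical counts for one shift:

```python
import statistics

# Hypothetical hourly throughput counts for one eight-hour shift (units/hour).
hourly = [118, 92, 110, 74, 121, 98, 105, 86]

avg = statistics.mean(hourly)          # average hourly throughput
rng = max(hourly) - min(hourly)        # range: best hour minus worst hour
sd = statistics.stdev(hourly)          # sample standard deviation

print(f"average: {avg:.1f}  range: {rng}  std dev: {sd:.2f}")
# average: 100.5  range: 47  std dev: 16.16
```

Even these three numbers tell a richer story than a single OEE index: an average of roughly 100 units/hour with a 47-unit swing between the best and worst hours signals an unstable process.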
Using this data, our objective is to identify those times where the greatest throughput changes occurred and to determine what improvements or changes can be implemented to achieve consistent throughput. We can then focus our efforts on improvements to achieve a more predictable and stable process, in turn improving our capability.
In OEE terms, we are focusing our efforts to eliminate or reduce variation in throughput by improving:
- Availability by eliminating or minimizing equipment downtime,
- Performance through consistent cycle to cycle task execution, and
- Quality by eliminating the potential for defects at the source.
To make sure we’re on the same page, let’s take a look at the basic formulas that may be used to calculate Process Capability. In the automotive industry, suppliers may be required to demonstrate process capability for certain customer designated product characteristics or features. When analyzing this data, two sets of capability formulas are commonly used:
- Preliminary (Pp) or Long Term (Cp) Capability: Determines whether the product can be produced within the required tolerance range,
- Pp or Cp = (Upper Specification Limit – Lower Specification Limit) / (6 x Standard Deviation)
- Preliminary (Ppk) or Long Term (Cpk) Capability: Determines whether product can be produced at the target dimension and within the required tolerance range:
- Capability Upper = (Upper Specification Limit – Average) / (3 x Standard Deviation)
- Capability Lower = (Average – Lower Specification Limit) / (3 x Standard Deviation)
- Capability (Ppk or Cpk) = Minimum of Capability Upper and Capability Lower
When Pp = Ppk or Cp = Cpk, we can conclude that the process is centered on the target or nominal dimension. Typically, the minimum acceptable Capability Index (Cpk) is 1.67, which implies that the process is capable of producing parts that conform to customer requirements.
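These formulas can be sketched in a few lines of Python (the specification limits and process data below are hypothetical, chosen only to illustrate the arithmetic):

```python
def cp(usl, lsl, sd):
    # Cp / Pp: can the 6-sigma spread fit within the tolerance range?
    return (usl - lsl) / (6 * sd)

def cpk(usl, lsl, mean, sd):
    # Cpk / Ppk: is the process capable AND centered on the target?
    upper = (usl - mean) / (3 * sd)
    lower = (mean - lsl) / (3 * sd)
    return min(upper, lower)

# Hypothetical dimension: target 10.00 mm, tolerance +/- 0.30 mm
usl, lsl = 10.30, 9.70
mean, sd = 10.05, 0.05     # process runs slightly above target

print(round(cp(usl, lsl, sd), 2))         # 2.0  -- spread fits easily
print(round(cpk(usl, lsl, mean, sd), 2))  # 1.67 -- penalized for off-center
```

Note how Cpk drops below Cp as soon as the average drifts away from the nominal dimension; the two are equal only when the process is perfectly centered.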
In our case we are measuring quantities or throughput data, not physical part dimensions, so we can calculate the standard deviation of the collected data to determine our own “natural” limits (6 x Standard Deviation). Regardless of how we choose to present the data, our primary concern is to improve or reduce the standard deviation over time and from run to run.
Once we have a statistical model of our process, control charts can be created that in turn are used to monitor future production runs. This provides the shop floor with a visual base line using historical data (average / limits) on which improvement targets can be made and measured in real-time.
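A minimal sketch of how such control limits could be derived and applied, assuming the common average ± 3-sigma convention for the natural limits (all data values are hypothetical):

```python
import statistics

# Historical hourly throughput from previous stable runs (hypothetical baseline).
baseline = [102, 98, 105, 95, 100, 101, 97, 103, 99, 100]

avg = statistics.mean(baseline)
sd = statistics.stdev(baseline)
ucl = avg + 3 * sd    # upper natural/control limit
lcl = avg - 3 * sd    # lower natural/control limit

# Flag hours in a new run that fall outside the natural limits.
new_run = [101, 99, 72, 104, 100]
out_of_control = [q for q in new_run if q < lcl or q > ucl]
print(out_of_control)  # [72] -- that hour warrants a documented note
```

Plotted on the shop floor, the baseline average and limits give operators an immediate visual target, and any hour flagged outside the limits becomes a prompt to record what happened.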
Run-Time Variance Review
I recall using this strategy to achieve monumental gains: a three-shift operation with considerable instability became an extremely capable and stable two-shift production operation coupled with a one-shift preventive maintenance / changeover team. Month-over-month improvements were evidenced by significantly improved capability data (a substantially reduced standard deviation) and marked increases in OEE.
Process run-time charts with statistical controls were implemented for quantities produced just as the Quality department maintains SPC charts on the floor for product data. The shop floor personnel understood the relationship between quantity of good parts produced and how this would ultimately affect the department OEE as well.
Monitoring quantities of good parts produced over shorter fixed time intervals is more effective than a cumulative counter that tracks performance over the course of the shift. In this specific case, the quantity was “reset” for each hour of production essentially creating hourly in lieu of shift targets or goals.
Recording / plotting production quantities at fixed time intervals combined with notes to document specific process events creates a running production story board that can be used to identify patterns and other process anomalies that would otherwise be obscured.
I am hopeful that this post has heightened your awareness of the data behind our chosen metrics. In the boardroom, metrics are too often viewed as absolute, sterile values.
Run-Time Variance also introduces a new perspective when attempting to compare OEE between shifts, departments, and factories. From the context of this post, having OEE indices of the same value does not imply equality. As we can see, metrics are not pure and perhaps even less so when managed in isolation.
Variance is indeed OEE’s Silent Partner but left unattended, Variance is also OEE’s Silent Killer.
Until Next Time – STAY lean!
Twitter: @Versalytics