Lean Execution

Integrated Waste: Lather, Rinse, Repeat


Admittedly, it has been a while since I checked a shampoo bottle for directions; however, I do recall a time in my life when the label read:  Lather, Rinse, Repeat.  Curiously, they don’t say when or how many times the process needs to be repeated.

Perhaps someone can educate me as to why it is necessary to repeat the process at all – other than “daily”.  I also note that this is the only domestic “washing” process that requires repeating the exact same steps.  Hands, bodies, dishes, cars, laundry, floors, and even pets are typically washed only once per occasion.

The intent of this post is not to debate the effectiveness of shampoo or to determine whether this is just a marketing scheme to sell more product.  The point of the example is this:  simply following the process as defined is, in my opinion, inherently wasteful of product, water, and time – literally, money down the drain.

Some shampoo companies may have changed the final step in the process to “repeat as necessary” but that still presents a degree of uncertainty and assures that exceptions to the new standard process of “Lather, Rinse, and Repeat as Necessary” are likely to occur.

In the spirit of continuous improvement, new 2-in-1 and even 3-in-1 products are available on the market today that serve as the complete “shower solution” in one bottle.  As these are also my products of choice, I can advise that these products do not include directions for use.

Scratching the Surface

As lean practitioners, we need to position ourselves to think outside of the box and challenge the status quo.  This includes the manner in which processes and tasks are executed.  In other words, we not only need to assess what is happening, we also need to understand why and how.

One of the reasons I am concerned with process audits is that conformance to the prescribed systems, procedures, or “Standard Work” somehow suggests that operations are efficient and effective.  In my opinion, nothing could be further from the truth.

To compound matters, in cases where non-conformances are identified, the team is often too eager to fix (“patch”) the immediate process without considering the implications for the system as a whole.  I present an example of this in the next section.

The only hint of encouragement that satisfactory audits offer is this: “People will perform the tasks as directed by the standard work – whether it is correct or not.”  Of course this assumes that procedures were based on people performing the work as designed or intended as opposed to documenting existing habits and behaviors to assure conformance.

Examining current systems and procedures at the process level only serves to scratch the surface.  First-hand process reviews are an absolute necessity to identify opportunities for improvement and must consider the system or process as a whole, as you will see in the following example.

Manufacturing – Another Example

On one occasion, I was facilitating a preparatory “process walk” with the management team of a parts manufacturer.  As we visited each step of the process, we observed the team members while they worked and listened intently as they described what they do.

As we were nearing the end of the walk-through, I noted that one of the last process steps was “Certification”, where parts are subject to 100% inspection and rework / repair as required.  After being certified, the parts were placed into a container marked “100% Certified” then sent to the warehouse – ready for shipping to the customer.

When I asked about the certification process, I was advised that:  “We’ve always had problems with these parts and, whenever the customer complained, we had to certify them all 100% … ‘technical debate and more process intensive discussions followed here’ … so we moved the inspection into the line to make sure everything was good before it went in the box.”

Sadly, when I asked how long they’ve been running like this, the answer was no different from the ones I’ve heard so many times before:  “Years”.  So, because of past customer problems and the failure to identify true root causes and implement permanent corrective actions to resolve the issues, this manufacturer decided to absorb the “waste” into the “normal” production process and make it an integral part of the “standard operating procedure.”

To be clear, just when you thought I picked an easy one, the real problem is not the certification process.  To the contrary, the real problem is in the “… ‘technical debate and more process intensive discussions followed here’ …” portion of the response.  Simply asking about the certification requirement was scratching the surface.  We need to …

Get Below the Surface

I have always said that the quality of a product is only as good as the process that makes it.  So, as expected, the process is usually where we find the real opportunities to improve.  From the manufacturing example above, we clearly had a bigger problem to contend with than simply “sorting and certifying” parts.  On a broader scale, the problems I personally faced were two-fold:

  1. The actual manufacturing processes with their inherent quality issues, and
  2. The Team’s seemingly firm stance that the processes couldn’t be improved.

After some discussion and more debate, we agreed to develop a process improvement strategy.  Working with the team, we created a detailed process flow and Value Stream Map of the current process.  We then developed a Value Stream Map of the Ideal State process.  Although we did identify other opportunities to improve, it is important to note that the ideal state did not include “certification”.

I worked with the team to facilitate a series of problem solving workshops where we identified and confirmed root causes, conducted experiments, performed statistical analyses, developed / verified solutions, implemented permanent corrective actions, completed detailed process reviews and conducted time studies.  Over the course of 6 months, progressive / incremental process improvements were made and ultimately the “certification” step was eliminated from the process.

We continued to review and improve other aspects of the process, supporting systems, and infrastructure as well, including but not limited to:  materials planning and logistics, purchasing, scheduling, inventory controls, part storage, preventive maintenance, and redefined and refined process controls, all supported by documented work instructions as required.  We also evaluated key performance indicators.  Some were eliminated while new ones, such as Overall Equipment Effectiveness, were introduced.

Summary

Some of the tooling changes to achieve the planned / desired results were extensive.  One new tool was required while major and minor changes were required on others.  The real tangible cost savings were very significant and offset the investment / expense many times over.  In this case, we were fortunate that new jobs being launched at the plant could absorb the displaced labor resulting from the improvements made.

Every aspect of the process demonstrated improved performance and ultimately increased throughput.  The final proof of success was also reflected on the bottom line.  In time, other key performance indicators reflected major improvements as well, including quality (low single digit defective parts per million, significantly reduced scrap and rework), increased Overall Equipment Effectiveness (Availability, Performance, and Quality), increased inventory turns, improved delivery performance (100% on time – in full), reduced overtime,  and more importantly – improved morale.

Conclusion

I have managed many successful turnarounds in manufacturing over the course of my career and, although the problems we face are often unique, the challenge remains the same:  to continually improve throughput by eliminating non-value added waste.  Of course, none of this is possible without the support of senior management and full cooperation of the team.

While it is great to see plants that are clean and organized, be forewarned that looks can be deceiving.  What we perceive may be far from efficient or effective.  In the end, the proof of wisdom is in the result.

Until Next Time – STAY lean!

Vergence Analytics
Twitter:  @Versalytics
Posted in Advanced Lean Manufacturing, Execution, Lean, Problem Solving

OEE – A Race Against Time


Background

If “Time is Money”, is it reasonable for us to consider that “Wasting Time is Wasting Money?”

Whether we are discussing customer service, health care, government services, or manufacturing – waste is often identified as one of the top concerns that must be addressed and ultimately eliminated.  As is often the case in most organizations, the next step is an attempt to define waste.  Although they are not the focus of our discussion, the commonly known “wastes” from a lean perspective are:

  • Over-Production
  • Inventory
  • Correction (Non-Conformance  – Quality)
  • Transportation
  • Motion
  • Over Processing
  • Waiting

Unused resourcefulness is another form of waste often added to this list and occurs when resources and talent are not utilized to their full potential.

Where did the Time go?

As a lean practitioner, I acknowledge these wastes exist but there must have been an underlying element of concern or thinking process that caused this list to be created.  In other words, lists don’t just appear, they are created for a reason.

As I pondered this list, I realized that the greatest single common denominator of each waste is TIME.  Again, from a lean perspective, TIME is the basis for measuring throughput.  As such, our Lean Journey is ultimately founded on our ability to reduce or eliminate the TIME required to produce a part or deliver a service.

As a non-renewable resource, we must learn to value time and use it effectively.  Again, as we review the list above, we can see that lost time is an inherent trait of each waste.  We can also see how this list extends beyond the realm of manufacturing.  TIME is a constant constraint that is indeed a challenge to manage even in our personal lives.

To efficiently do what is not required is NOT effective.

I consider Overall Equipment Effectiveness (OEE) to be a key metric in manufacturing.  While it is possible to consider the three factors Availability, Performance, and Quality separately, in the context of this discussion, we can see that any impediment to throughput can be directly correlated to lost time.

To extend the concept in a more general sense, our objective is to provide our customers with a quality product or service in the shortest amount of time.  Waste is any impediment or roadblock that prevents us from achieving this objective.
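Since every OEE factor can be traced back to time, a quick sketch makes the relationship concrete.  The shift numbers below are hypothetical, chosen purely for illustration:

```python
# Hypothetical shift data - illustrative numbers only, not from any real plant.
planned_time = 480.0        # minutes in the shift
downtime = 45.0             # unplanned stops (lost time)
ideal_cycle_time = 0.5      # minutes per part at rated speed
total_parts = 800
good_parts = 780

run_time = planned_time - downtime
availability = run_time / planned_time
performance = (ideal_cycle_time * total_parts) / run_time
quality = good_parts / total_parts
oee = availability * performance * quality

# Every factor reduces to time, which is the common denominator:
# OEE is also the fully productive time divided by the planned time.
fully_productive_time = ideal_cycle_time * good_parts
print(f"OEE = {oee:.1%}")
print(f"Fully productive time = {fully_productive_time:.0f} of {planned_time:.0f} min")
```

Note that every loss – a stoppage, a slow cycle, a scrapped part – shows up in the same currency: minutes that were planned but not fully productive.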

Indirect Waste and Effectiveness

Indirect Waste (time) is best explained by way of example.  How many times have we heard, “I don’t understand this – we just finished training everybody!”  It is common for companies to provide training to teach new skills.  Similarly, when a problem occurs, one of the – too often used – corrective actions is “re-trained employee(s).”  Unfortunately, the results are not always what we expect.

Many companies seem content to use class test scores and instructor feedback to determine whether the training was effective while little consideration is given to developing skill competency.  If an employee cannot execute or demonstrate the skill successfully or competently, how effective was the training?  Recognizing that a learning curve may exist, some companies are inclined to dismiss incompetence but only for a limited time.

The company must distinguish between employee capability and the quality of training.  In other words, the company must ensure that the quality of training provided will adequately prepare the employee to successfully perform the required tasks.  Either the training and / or method of delivery is not effective, or the employee may simply lack the capability.  Let me qualify this last statement by saying that “playing the piano is not for everyone.”

Training effectiveness can only be measured by an employee’s demonstrated ability to apply their new knowledge or skill.

Time – Friend or Foe?

Lean tools are without doubt very useful and play a significant role in helping to carve out a lean strategy.  However, I am concerned that the tendency of many lean initiatives is to follow a prescribed strategy or formula.  This approach essentially creates a new box that in time will not be much different from the one we are trying to break out of.

An extension of this is the classification of wastes.  As identified here, the true waste is time.  Efforts to reduce or eliminate the time element from any process will undoubtedly result in cost savings.  However, the immediate focus of lean is not on cost reduction alone.

Global sourcing has assured that “TIME” can be purchased at reduced rates from low-cost labour countries.  While this practice may result in a “cost savings”, it does nothing to promote the cause of lean – we have simply outsourced our inefficiencies at reduced prices.  Numerous Canadian and US facilities continue to close as workers witness the exodus of jobs to foreign countries due to lower labor and operating costs, the Electrolux facility in Webster City, Iowa, being one example.

I don’t know the origins of multi-tasking, but the very mention of it suggests that someone had “time on their hands.”  So remember, when you’re put on hold, driving to work, stuck in traffic, stopped at a light, sorting parts, waiting in line, sitting in the doctor’s office, watching commercials, or just looking for lost or misplaced items – your time is running out.

Is time a friend or foe?  I suggest the answer is both, as long as we spend it wisely (spelled effectively).  Be effective, be Lean, and stop wasting time.

Let the race begin:  Ready … Set … Go …

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics
Posted in Advanced Lean Manufacturing, Eliminate Waste, Lean Metrics, Lean Mindset, Theory of Constraints, Training

Lean – OEE and Pareto’s Law

Typical Application to Analyze Quality Defects

The Premise:  Pareto’s Law

The late Joseph Juran introduced the world to Pareto’s Law, aptly named after the Italian economist Vilfredo Pareto.  Many business and quality professionals alike are familiar with Pareto’s Law and often refer to it as the 80 / 20 rule.  In simple terms, Pareto’s Law is based on the premise that 80% of the effects stem from 20% of the causes.

As an example, consider that Pareto’s Law is often used by quality staff to determine the cause(s) responsible for the highest number of defects as depicted in the chart to the right.  From this analysis, teams will focus their efforts on the top 1 or 2 causes and resolve to eliminate or substantially reduce their effect.

In this case, the chart suggests that the highest number of defects is due to shrink, followed by porosity.  At this point a problem solving strategy is established using one of the many available tools (8 Discipline Report, 5 Why, A3) to resolve the root cause and eliminate the defect.  Over time, and with continued focus, the result is a robust process that yields 100% quality, defect-free products.
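For readers who prefer to see the arithmetic, a Pareto breakdown is simple to reproduce.  The defect counts below are invented for illustration; only the category names echo the chart described above:

```python
# Hypothetical defect tally - counts are made up for illustration.
defects = {"Shrink": 120, "Porosity": 80, "Flash": 35, "Short Shot": 15, "Other": 10}

total = sum(defects.values())
# Rank causes from most to least frequent, then accumulate their share.
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for cause, count in ranked:
    cumulative += count
    print(f"{cause:10s} {count:4d}  {count/total:6.1%}  cum {cumulative/total:6.1%}")
```

With these numbers, the top two causes account for roughly three quarters of all defects – the familiar 80 / 20 shape that directs the team’s attention.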

In practice, this approach seems logical and has proven to be effective in many instances.  However, we need to be cognizant of a potential side effect that may be one of the reasons why new initiatives quickly wane to become “the program of the day.”

The Side Effects:  Burnout and Apathy

Winning the team’s confidence is often one of the greatest challenges for any improvement initiative.  A common strategy is to select a project where success can be reasonably assured.  If we apply Pareto’s Law to project selection, we are inclined to select a project that is either relatively easy to solve, offers the greatest savings, or both.

In keeping with the example presented in the graphic, resolving the “shrink” concern presents the greatest opportunity.  However, we can readily see that, once resolved, the next project presents a significantly lower return and the same is true for each subsequent project thereafter.

Clearly, as each problem is resolved, the return is diminished.  To compound matters, problems with lower rates of recurrence are often more difficult to solve and the monies required to resolve them cannot be justified due to the reduced return on investment.  In other words, we approach a point where the solution is as elusive as “the needle in a haystack” and once found, it simply isn’t cost effective to resolve it.

The desire to resolve the concern is significantly reduced with each subsequent challenge as the return on investment in time and money diminishes while the team continues to expend more energy.  Over extended periods of time the continued pursuit of excellence leads to apathy and may even lead to burnout.  As alluded to earlier, adding to the frustration is the inability to achieve the same level of success offered by the preceding opportunities.

The Solution

One of the problems with the approach as presented here is the focus on resolving the concern or defect that is associated with the greatest cost savings.  To be clear, Pareto Analysis is a very effective tool to identify improvement opportunities and is not restricted to just quality defects.  A similar Pareto chart could be created just as easily to analyze process down time.

Perhaps the real problem is that we’re sending the wrong message:  Improvements must have an immediate and significant financial return.  In other words, team successes are typically recognized and rewarded in terms of absolute cost savings.  Not all improvements will have a measurable or immediate return on investment.  If a condition can be improved or a problem can be circumvented, employees should be empowered to take the required actions, regardless of where they fall on the Pareto chart.

To assure sustainability, we need to focus on the improvement opportunities that are before us with a different definition of success, one with less emphasis on cost savings alone.  Is it possible to make improvements for improvement’s sake?  We need to take care of the “low hanging fruit”, and that likely doesn’t require a Pareto analysis to find it.

Finally, not all improvement strategies require a formal infrastructure to assure improvements occur.  In this regard, the ability to solve problems at the employee level is one of the defining characteristics that distinguishes companies like Toyota from others that are trying to be like them.  Toyota and the principles of lean are not reliant on tools alone to identify opportunities to improve.

As suggested earlier, Pareto Analysis is useful to resolve availability, performance, and quality concerns that will most certainly improve Overall Equipment Effectiveness (OEE) and your bottom line.

Until Next Time – STAY lean!

Vergence Analytics
Posted in Advanced Lean Manufacturing, Pareto's Law, Quality

Scorecards and Dashboards


I recently published, Urgent -> The Cost of Things Gone Wrong, where I expressed concern for dashboards that are attempting to do too much.  In this regard, they become more of a distraction instead of serving the intended purpose of helping you manage your business or processes.  To be fair, there are at least two (2) levels of data management that are perhaps best differentiated by where and how they are used:  Scorecards and Dashboards.

I prefer to think of Dashboards as working with dynamic data:  data that changes in real time and influences our behaviors, similar to the way the dashboards in our cars communicate with us as we are driving.  The fuel gauge, odometer, two trip meters, tachometer, speedometer, digital fuel consumption (L/100 km), and km remaining are just a few examples of the instrumentation available to me in my Mazda 3.

While I appreciate the extra instrumentation, the two that matter first and foremost are the speedometer and the tachometer (since I have a 5 speed manual transmission).  The other bells and whistles do serve a purpose but they don’t necessarily cause me to change my driving behavior.  Of note here is that all of the gauges are dynamic – reporting data in real time – while I’m driving.

A Scorecard on the other hand is a periodic view of summary data and from our example may include Average Fuel Consumption, Average Speed, Maximum Speed, Average Trip, Maximum Trip, Total Miles Traveled and so on.  The scorecard may also include other items such as driving record / vehicle performance data such as Parking Tickets, Speeding Tickets, Oil Changes, Flat Tires, Emergency and Preventive Maintenance.
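In code, the distinction amounts to when the arithmetic happens:  a dashboard displays each reading as it arrives, while a scorecard summarizes the readings after the fact.  A minimal sketch, using made-up trip data:

```python
# Hypothetical trip log - each reading is what a dashboard would show live;
# the scorecard below is the periodic summary computed afterwards.
trips_km = [12.4, 38.0, 5.2, 12.4, 61.7]
fuel_l_per_100km = [7.8, 6.9, 9.1, 7.7, 6.5]

scorecard = {
    "total_km": sum(trips_km),
    "average_trip_km": sum(trips_km) / len(trips_km),
    "max_trip_km": max(trips_km),
    "avg_fuel_l_per_100km": sum(fuel_l_per_100km) / len(fuel_l_per_100km),
}
for metric, value in scorecard.items():
    print(f"{metric}: {value:.1f}")
```

The raw readings drive behavior in the moment; the aggregates support periodic review – two different decisions, two different tools.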

One of my twitter connections, Bob Champagne (@BobChampagne), published an article titled, Dashboards Versus Scorecards- Its all about the decisions it facilitates…, that provides some great insights into Scorecards and Dashboards.  This article doesn’t require any further embellishment on my part so I encourage you to click here or paste the following link into your browser:  http://wp.me/p1j0mz-6o.  I trust you will find the article both informative and engaging.

Next Steps:

Take some time to review your current metrics.  What metrics are truly influencing your behaviors and actions?  How are you using your metrics to manage your business?  Are you reacting to trends or setting them?

It’s been said that, “What gets measured gets managed.”  I would add – “to a point.”  It simply isn’t practical or even feasible to measure everything.  I say, “Measure to manage what matters most”.

Remember to get your free Excel Templates for OEE by visiting our downloads page or the orange widget in the sidebar.  You can follow us on twitter as well @Versalytics.

Until Next Time – STAY lean!

Vergence Analytics
Posted in Execution, Lean Metrics, Performance

Toyota’s Culture – Inside Out


As discussed on our Lean Roadmap page, the culture that exists inside your company will determine the success or failure of your lean initiatives in the long-term.  So, how do we cultivate and nurture this culture that we desire to achieve?

Fortunately, I found a great article,  How to implement “Lean Thinking” in a Business: Pathway to creating a “Lean Culture”, written by one of my recent twitter connections (lean practitioner and former Toyota employee) that briefly describes the process embraced by Toyota.

I will not paraphrase the content of the article if only to preserve the essence of the presentation and passion that is conveyed in its writing.

As an aside, it is interesting to note that Toyota does not typically refer to their methods as lean.  Lean is not a set of tools but rather a manner of thinking and focus on a seemingly elusive target to achieve one piece flow.

The spirit of Lean, like synergy, cannot be taught – only experienced.

An innate ability exists and continues to evolve where team members operate with a high level of synergy and are able to identify and respond to concerns in real-time.  Steven Spear also discusses various characteristics or attributes of high performance teams from a different perspective and much wider range of industries in his book “The High Velocity Edge“.

Toyota Recall – Update

Following the release of the NHTSA investigation, Bloomberg Businessweek published an article titled “Toyota, The Media Owe You an Apology“.  The article clarifies a number of allegations against Toyota, however, I am reminded that the government’s investigation did not completely exonerate Toyota from having any responsibility.

Whether the failure is mechanical or electronic is moot considering the tragic results that ensued for some.  I think the real concern is whether the problem itself has been identified and resolved regardless of fault.

Since we are on the topic of culture, consider the media’s role in reporting the events surrounding the recall.  What was your overall sense of the media’s reporting and perspective on this issue?

As you ponder this question, your answer will reveal how quickly events and people of influence can shape our culture.  On a much larger scale, consider the current events in Egypt or the last Presidential election in the United States.

As always, I appreciate your feedback – leave a comment or send us an e-mail.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics
Posted in Advanced Lean Manufacturing, Culture

Lean Is …


What is lean?  The following definition is from the Oregon Manufacturing Extension Partnership website, http://www.omep.org:

Lean Is

A systematic approach for delivering the highest quality, lowest cost products with the shortest lead-times through the relentless elimination of waste.

The eight wastes that accompanied this definition include:

  1. Overproduction
  2. Waiting
  3. Transportation
  4. Non-Value-Added Processing
  5. Excess Inventory
  6. Defects
  7. Excess Motion
  8. Underutilized People

It is very easy to become overwhelmed by the incredible amount of information on the subject of Lean.  I always like to refer back to the basic tenets of lean to keep things in perspective.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Posted in Lean, Lean Mindset

What are we Changing?


Our process improvement strategy is founded on the Theory of Constraints where improvement initiatives are supported by lean and six sigma tools.  Process disruptions affecting flow and task execution all contribute to variance and the efforts to eliminate or reduce them are evidenced by increased stability, increased throughput over time, and increased profits.

So, our main goal in production is to improve flow by focusing our efforts to reduce and eliminate variation in our processes.  This is also the message behind our previous two posts, OEE in an Imperfect World and Variation:  OEE’s Silent Partner.  The effects of our actions will be reflected by the metrics we have chosen to measure our performance.

The following videos further the cause for the Theory of Constraints and Improving Flow:

Standing on the Shoulders of Giants by Dr. Eliyahu M. Goldratt

http://www.youtube.com/watch?v=C3RPFUh3ePQ

The following video discusses “What to Change?”

http://www.youtube.com/watch?v=prrA-onO0Nc&NR=1

Stories can be the best teachers and when the topic is manufacturing, production, or operations, I highly recommend “The Goal”, an industry standard, and the recently released “Velocity”.  Both novels present an all too common manufacturing dilemma – resource capacity and scheduling constraints – to teach the Theory of Constraints.  Velocity is a continuation of The Goal and expands the discussion to include Lean and Six Sigma.

For additional resources and reading recommendations, visit our Book Page.

The message is simple:  Change drives Change.  What are your thoughts?

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Posted in Advanced Lean Manufacturing, Theory of Constraints

OEE in an imperfect world


Background: This is a more general presentation of “Variation:  OEE’s Silent Partner” published on January 31, 2011.

In a perfect world we can produce quality parts at rate, on time, every time.  In reality, however, all aspects of our processes are subject to variation that affects each factor of Overall Equipment Effectiveness:  Availability, Performance, and Quality.

Our ability to effectively implement Preventive Maintenance programs and Quality Management Systems is reflected in our ability to control and improve our processes, eliminate or reduce variation, and increase throughput.

The Variance Factor

Every process and measurement is subject to variation and error.  It is only reasonable to expect that metrics such as Overall Equipment Effectiveness and Labour Efficiency will also exhibit variance.  The normal distributions for four (4) different data sets are represented by the graphic that accompanies this post.  You will note that the average for 3 of the curves (Blue, Red, and Yellow) is common (μ = 0), yet the shapes of the curves are radically different.  The green curve shows a normal distribution that is shifted to the left, with an average (μ) of -2, although we can see that its standard deviation is smaller than that of the yellow and red curves.

The graphic also allows us to see the relationship between the standard deviation and the shape of the curve.  As the standard deviation increases, the height decreases and the width increases.  From these simple representations, we can see that our objective is to reduce the standard deviation.  The only way to do this is to reduce or eliminate variation in our processes.

We can use a variety of statistical measurements to help us determine or describe the amount of variation we may expect to see.  Although we are not expected to become experts in statistics, most of us should already be familiar with the normal distribution or “bell curve” and terms such as Average, Range, Standard Deviation, Variance, Skewness, and Kurtosis.  In the absence of an actual graphic, these terms help us to picture what the distribution of data may look like in our mind’s eye.
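These measures are easy to compute from raw run data using nothing more than the standard library.  A brief sketch, with hypothetical hourly good-part counts for one shift:

```python
import statistics as stats

# Hypothetical hourly throughput (good parts per hour) - invented for illustration.
hourly_output = [52, 48, 55, 47, 50, 31, 53, 49]

mean = stats.mean(hourly_output)
stdev = stats.stdev(hourly_output)          # sample standard deviation
rng = max(hourly_output) - min(hourly_output)

print(f"mean {mean:.1f}, std dev {stdev:.1f}, range {rng}")
# A single bad hour (31) widens the spread far more than it moves the mean -
# exactly the kind of instability a shift total alone would hide.
```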

Run Time Data

The simplest common denominator and most readily available measurement for production is the quantity of good parts produced.  Many companies have real-time displays that show quantity produced and in some cases go so far as to display Overall Equipment Effectiveness (OEE) and its factors – Availability, Performance, and Quality.  While the expense of live streaming data displays can be difficult to justify, there is no reason to abandon the intent that such systems bring to the shop floor.  Equivalent means of reporting can be achieved using “whiteboards” or other forms of data collection.

I am concerned with any system that is based solely on cumulative shift or run data that does not include run time history.  As such, an often overlooked opportunity for improvement is the lack of stability in productivity or throughput over the course of the run.  Systems with run time data allow us to identify production patterns, significant swings in throughput, and to correlate this data with down time history.  This production story board allows us to analyze sources of instability, identify root causes, and implement timely and effective corrective actions.  For processes where throughput is highly unstable, I recommend a direct hands-on review on the shop floor in lieu of post production data analysis.
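To illustrate the point, compare a cumulative shift total with its hour-by-hour history.  The counts and the target rate below are assumptions for the sake of the example:

```python
# Hypothetical hour-by-hour counts and an assumed target rate of 50 parts/hour.
hourly_output = [52, 48, 55, 47, 50, 31, 53, 49]
target = 50

shift_total = sum(hourly_output)            # 385 against a plan of 400 - looks mild
# Flag any hour that deviates from target by more than 10%.
flagged = [(hour, qty) for hour, qty in enumerate(hourly_output, start=1)
           if abs(qty - target) > 0.1 * target]

print(f"shift total: {shift_total}")
print(f"unstable hours: {flagged}")
```

The cumulative total suggests a modest shortfall spread across the shift; the run time history shows the entire loss concentrated in one hour – a very different problem to solve.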

Overall Equipment Effectiveness

Overall Equipment Effectiveness and the factors Availability, Performance, and Quality do not adequately or fully describe the capability of the production process.  Reporting on the change in standard deviation as well as OEE provides a more meaningful understanding of the process  and its inherent capability.

Improved capability also improves our ability to predict process throughput.  Your materials / production control team will certainly appreciate any improvements to stabilize process throughput as we strive to be more responsive to customer demand and reduce inventories.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Posted in Advanced Lean Manufacturing, Availability, Performance, Problem Solving, Process Control and OEE, Quality

Variance – OEE’s Silent Partner (Killer)


I was recently involved in a discussion regarding the value of Overall Equipment Effectiveness (OEE).  Of course, I fully supported OEE and confirmed that it can bring tremendous value to any organization that is prepared to embrace it as a key metric.  I also qualified my response by stating that OEE cannot be managed in isolation:

OEE and its intrinsic factors, Availability, Performance, and Quality, are summary-level indices and do not measure or provide any indication of process stability or capability.

As a top-level metric, OEE does not describe or provide a sense of actual run-time performance.  For example, when reviewing Availability, we have no sense of the duration or frequency of downtime events, only the net result.  In other words, we can’t discern whether downtime was the result of a single event or the cumulative result of more frequent downtime events over the course of the run.  Similarly, when reviewing Performance, we cannot accurately determine the actual cycle time or run rate, only the net result.

As shown in the accompanying graphic, two data sets (represented by Red and Blue) having the same average can present very different distributions, differing in the range of the data, the peakedness of the curve (kurtosis), the asymmetry of the curve (skewness), and the standard deviation.

Clearly, any conclusions regarding the process simply based on averages would be very misleading.  In this same context, it is also clear that we must exercise caution when attempting to compare or analyse OEE results without first considering a statistical analysis or representation of the raw process data itself.
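This point is easy to demonstrate. The two data sets below stand in for the Red and Blue distributions; the numbers are hypothetical, chosen so the averages match exactly:

```python
import statistics

# Two hypothetical hourly-throughput data sets with the same average.
blue = [100, 101, 99, 100, 102, 98, 100]   # stable process
red = [130, 70, 145, 55, 120, 80, 100]     # unstable process

# Both average 100 parts per hour...
assert statistics.mean(blue) == statistics.mean(red) == 100

# ...but the standard deviations tell very different stories.
print(f"Blue std dev: {statistics.stdev(blue):.1f}")
print(f"Red std dev:  {statistics.stdev(red):.1f}")
```

Any metric built solely on the averages (including OEE) would score these two processes identically, even though one is stable and the other is not.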

The Missing Metrics

Fortunately, just as Quality Assurance personnel use statistical tools to determine whether a process can produce parts consistently, we can use the same tools to analyse run-time performance and determine whether our process can produce parts at a consistent rate.

One of the greatest opportunities for improving OEE is to use statistical tools to identify opportunities to reduce throughput variance during the production run.

Run-time or throughput variance is OEE’s silent partner: it is an often overlooked aspect of production data analysis.  Striving to achieve consistent part-to-part cycle times and consistent hour-to-hour throughput rates is the most fundamental strategy for improving OEE.  You will note that increasing throughput requires a focus on the same factors as OEE: Availability, Performance, and Quality.  In essence, efforts to improve throughput will yield corresponding improvements in OEE.

Simple throughput variance can readily be measured using planned versus actual quantities produced, either over fixed periods of time (preferred) or cumulatively.  Some of the benefits of using quantity-based measurement are as follows:

  1. Everyone on the shop floor understands quantity or units produced,
  2. This information is usually readily available at the work station,
  3. Everyone can understand or appreciate its value in tangible terms,
  4. Quantity measurements are less prone to error, and
  5. Quantities can be verified (Inventory) after the fact.

For the sake of simplicity, consider measuring hourly process throughput and calculating the average, range, and standard deviation of this hourly data.  With reference to the graphic above, even this fundamental data provides a much more comprehensive perspective on process stability and capability than a simple OEE index.

Using this data, our objective is to identify those times where the greatest throughput changes occurred and to determine what improvements or changes can be implemented to achieve consistent throughput.  We can then focus our efforts on improvements to achieve a more predictable and stable process, in turn improving our capability.

In OEE terms, we are focusing our efforts to eliminate or reduce variation in throughput by improving:

  1. Availability by eliminating or minimizing equipment downtime,
  2. Performance through consistent cycle to cycle task execution, and
  3. Quality by eliminating the potential for defects at the source.

Measuring Capability

To make sure we’re on the same page, let’s take a look at the basic formulas that may be used to calculate Process Capability.  In the automotive industry, suppliers may be required to demonstrate process capability for certain customer designated product characteristics or features.  When analyzing this data, two sets of capability formulas are commonly used:

  1. Preliminary (Pp) or Long Term (Cp) Capability:  Determines whether the product can be produced within the required tolerance range,
    • Pp or Cp = (Upper Specification Limit – Lower Specification Limit) / (6 x Standard Deviation)
  2. Preliminary (Ppk) or Long Term (Cpk) Capability:  Determines whether the product can be produced at the target dimension and within the required tolerance range:
    • Capability = Minimum of Either:
      • Capability Upper = (Upper Specification Limit – Average) / (3 x Standard Deviation)
      • Capability Lower = (Average – Lower Specification Limit) / (3 x Standard Deviation)

When Pp = Ppk or Cp = Cpk, we can conclude that the process is centered on the target or nominal dimension.  Typically, the minimum acceptable capability index is 1.67 for a preliminary study (Ppk) and 1.33 for ongoing production (Cpk), implying that the process is capable of producing parts that conform to customer requirements.
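The formulas above translate directly into code. The specification limits and measurements below are hypothetical, chosen so the process is centered on nominal:

```python
import statistics


def pp_ppk(data, lsl, usl):
    """Pp = tolerance range / 6 sigma; Ppk = worst-case distance
    from the average to a specification limit, in units of 3 sigma."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    pp = (usl - lsl) / (6 * sd)
    ppk = min((usl - mean) / (3 * sd), (mean - lsl) / (3 * sd))
    return pp, ppk


# Hypothetical measurements centered on a nominal of 10.00.
measurements = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]
pp, ppk = pp_ppk(measurements, lsl=9.90, usl=10.10)

# Because this sample is centered on nominal, Pp and Ppk agree.
print(f"Pp = {pp:.2f}, Ppk = {ppk:.2f}")
```

Shifting the average toward either limit leaves Pp unchanged but pulls Ppk down, which is exactly why the pair of indices is reported together.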

In our case we are measuring quantities or throughput data, not physical part dimensions, so we can calculate the standard deviation of the collected data to determine our own “natural” limits (6 x Standard Deviation). Regardless of how we choose to present the data, our primary concern is to improve or reduce the standard deviation over time and from run to run.

Once we have a statistical model of our process, control charts can be created that in turn are used to monitor future production runs.  This provides the shop floor with a visual base line using historical data (average / limits) on which improvement targets can be made and measured in real-time.
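A minimal sketch of that baseline, computing control limits as the historical average plus or minus three standard deviations (the historical counts are hypothetical):

```python
import statistics

# Hypothetical hourly good-part counts from previous runs.
history = [118, 121, 115, 119, 122, 117, 120, 116]

# Centre line and control limits for the run chart.
centre = statistics.mean(history)
sd = statistics.stdev(history)
ucl = centre + 3 * sd   # upper control limit
lcl = centre - 3 * sd   # lower control limit

print(f"Centre: {centre:.1f}  LCL: {lcl:.1f}  UCL: {ucl:.1f}")

# Future hourly counts falling outside [lcl, ucl] warrant investigation.
```

Posted at the work station, these three numbers give the shop floor an immediate, visual test for each hour's count, with no calculation required in real time.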

Run-Time Variance Review

I recall using this strategy to achieve monumental gains – a three-shift operation with considerable instability became an extremely capable and stable two-shift production operation, coupled with a one-shift preventive maintenance / changeover team.  Month-over-month improvements were evidenced by significantly improved capability data (a substantially reduced standard deviation) and marked increases in OEE.

Process run-time charts with statistical controls were implemented for quantities produced just as the Quality department maintains SPC charts on the floor for product data.  The shop floor personnel understood the relationship between quantity of good parts produced and how this would ultimately affect the department OEE as well.

Monitoring quantities of good parts produced over shorter fixed time intervals is more effective than a cumulative counter that tracks performance over the course of the shift.  In this specific case, the quantity was “reset” for each hour of production, essentially creating hourly rather than shift-level targets or goals.

Recording / plotting production quantities at fixed time intervals combined with notes to document specific process events creates a running production story board that can be used to identify patterns and other process anomalies that would otherwise be obscured.

Conclusion

I am hopeful that this post has heightened your awareness of the data represented by our chosen metrics.  In the boardroom, metrics are often viewed as absolute, sterile values.

Run-Time Variance also introduces a new perspective when attempting to compare OEE between shifts, departments, and factories.  From the context of this post, having OEE indices of the same value does not imply equality.  As we can see, metrics are not pure and perhaps even less so when managed in isolation.

Variance is indeed OEE’s Silent Partner but left unattended, Variance is also OEE’s Silent Killer.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Posted in Advanced Lean Manufacturing, Lean Metrics, Performance, Problem Solving

Achieve Sustainability Through Integration


It’s no secret that lean is much more than a set of tools and best practices designed to eliminate waste and reduce variance in our operations.  I contend that lean is defined by a culture that embraces the principles on which lean is founded.  An engaged lean culture is evidenced by the continuing development and integration of improved systems, methods, technologies, best practices, and better practices.  When the principles of lean are clearly understood, the strategy and creative solutions that are deployed become a signature trait of the company itself.

Unfortunately, to offset the effects of the recession, many lean initiatives have either diminished or disappeared as companies downsized and restructured to reduce costs.  People who once entered data, prepared reports, or updated charts could no longer be supported and their positions were eliminated.  Eventually, other initiatives also lost momentum as further staffing cuts were made.  In my opinion, companies that adopted this approach simply attempted to implement lean by surrounding existing systems with lean tools.

Some companies have simply returned to a “back to basics” strategy that embraces the most fundamental principles of lean.  Is it enough to be driven by a mission, a few metrics, and simple policy statements or slogans such as “Zero Downtime”, “Zero Defects”, and “Eliminate Waste?”  How do we measure our ability to safely produce a quality part at rate, delivered on time and in full, at the lowest possible cost?  Regardless of what we measure internally, our stakeholders are only concerned with two simple metrics – Profit and Return on Investment.  The cold hard fact is that banks and investors really don’t care what tools you use to get the job done.  From their perspective the best thing you can do is make them money!  I agree that we are in business to make money.

What does it mean to be lean?  I ask this question on the premise that, in many cases, sustainability appears to be dependent on the resources that are available to support lean versus those who are actually running the process itself.  As such, “sustainability” is becoming a much greater concern today than perhaps most of us are likely willing to admit.  I have always encouraged companies to implement systems where events, data, and key metrics are managed in real-time at the source such that the data, events, and metrics form an integral part of the whole process.

Processing data for weekly or monthly reports may be necessary; however, such reports are only meaningful if they are an extension of ongoing efforts at the shop floor / process level itself.  To do otherwise is simply pretending to be lean.  It is imperative that the data being recorded, the metrics being measured, and the corrective actions taken are meaningful, effective, and influence our actions and behaviors.

To illustrate the difference between Culture and Tools consider this final thought:  A carpenter is still a carpenter with or without hammer and nails.

Until Next Time – STAY lean!

Vergence Analytics

Twitter:  @Versalytics

Posted in Advanced Lean Manufacturing, Culture, Execution, Lean, Lean Metrics, Lean Mindset