
Tuesday, May 28, 2013

26 Introduction to Variation

W. Edwards Deming, the famous quality improvement guru, claimed that the two most important things for managers to understand are:
1.     Variation and how to deal with it
2.     The forces that motivate and demotivate people
The subjects of the first 21 lectures, motivating, staffing and communicating, address the forces that motivate and demotivate people, i.e. the Theory Z portion of effective leadership. Forces here means the collection of perceptions, understandings and misunderstandings that influence people's attitudes and behavior. Lectures 23 through 25 introduced management of processes, part of the control function of managers, and treated the stand-alone topics of managing risk and the theory of constraints. Now we turn to variation and how to deal with it, the central theme of process improvement and process control. Managing in the presence of variation is also part of the control function of managers.
W. Edwards Deming claimed that the inability to interpret and use the information in variation is the main problem for managers and leaders (see the book Out of the Crisis by W. Edwards Deming). When there is a problem with any work process, both the manager and the employees must understand when the manager must act and when the employees must act. It is through an understanding of variation, and the measurement of variation, that they know when to take action, who should take it and, just as importantly, when not to take action. Thus variation is involved both in improving poor processes and in maintaining good processes.
Variation is just the reality that actual values of parameters, physical or financial, have some statistical spread rather than being exactly what we expect, specify or desire. For example, we may have a budget for supplies of $1000 per month. When we look at spending for each month it is typically close to but not exactly $1000. Over time the spending might look like that shown in figure 15.


Figure 15. An example of variation from planned budget by actual spending.
For our purposes the definition of variation is deviation from planned, expected or predicted values of any parameter. The parameter might be financial, as in the example shown in figure 15; it might be in units of production per day or minutes per service; or it might be a physical parameter, such as the dimension of a machined part. Thus variation occurs in all the work processes of any kind of organization. Therefore, as Deming implied, the effective leader must understand the information in variation and how to properly manage in the presence of variation.
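The definition above can be made concrete with a few lines of code. This is a minimal sketch using made-up monthly supply spending against the $1000 budget example; all the numbers are assumptions for illustration.

```python
# Variation as deviation from plan: made-up monthly supply spending
# measured against a $1000 planned budget (all numbers hypothetical).
budget = 1000.0
actual = [980.0, 1042.0, 995.0, 1015.0, 960.0, 1030.0]  # assumed data

deviations = [a - budget for a in actual]
mean_dev = sum(deviations) / len(deviations)

print(deviations)        # each month is close to, but not exactly, the plan
print(round(mean_dev, 2))
```

The point is simply that the individual deviations are never zero even when the average deviation is small, which is exactly the statistical spread the lecture calls variation.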
Let’s start by returning to the work process illustrated in figure 12, the SIPOC diagram.  Where might we expect to see variation in a work process? The answer is everywhere. Deviations from ideal inputs are variation. Deviations from ideal outputs are variation. Deviations from expectations in use are variation. Variation in use can be due to either hidden variation in outputs or unexpected variation in the use environment or the use process.
Let’s define an effective process from a customer’s point of view. It is a process that produces outputs that meet or exceed the customer’s expectations for quality and cost. Customers can be internal or external to the enterprise or the organization that owns the process. Customers have stated and unstated expectations. Specifications, requirements, standards, and contract items are examples of customers’ stated expectations. Customers’ unstated expectations are typically suitability for all conditions of use and affordability. Therefore, for the purposes of process improvement discussions, we can say that an organization’s effectiveness is determined by the effectiveness of its processes in satisfying its customers’ expectations. (In general the effective organization must satisfy all its stakeholders’ expectations, including managers, workers, owners and the community as well as the customers.)

Variation Drives Process Effectiveness

We can see the effects of variation by examining an ideal business process (figure 12; the ideal process is repeated in the top half of figure 16) and a typical process as shown in the bottom half of figure 16.

 

Figure 16. Comparison of a typical process to an ideal business process.

An ideal process converts all of the supplier’s inputs to outputs that satisfy the customer’s expectations. A typical process includes inspection steps to ensure that a defective input is not sent to the process and that a defective output is not sent to the customer. The customer also adds an inspection step because of receiving defective outputs in the past. If items fail any of these inspections they become scrap or must be reworked. It’s easy to see that the typical process is more expensive, and therefore less effective, than an ideal process because inspections cost money and scrap and rework cost money. In a typical chain of processes the cost of failing an inspection increases as the work progresses along the chain, because more rework is required when an inspection is failed near the end of the chain. Often the largest such cost to the organization is warranty cost from customer returns, which is the reason for inspecting outputs before they are sent to customers. The reason these inspection steps are added is the presence of variation. If there were no variation in the inputs or the outputs then there would be no need for inspection to find those items whose variation from ideal is larger than acceptable.
Notice that even the ideal process has inputs and outputs that exhibit variation, but for the ideal process this variation is within acceptable limits most of the time. We need to define what we mean by “most of the time”. If there is variation then sooner or later a product will fail to meet customer expectations if there is no inspection. (Actually it will happen even with inspection, since no inspection is perfect; inspection is itself a process that has variation.) If the variation is small enough that customer returns are rare, and the cost of correcting a return plus the cost of the disgruntled customer is less than the cost of inspection, then it makes business sense not to inspect.
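That trade-off can be written down as a simple expected-cost comparison. The sketch below is only an illustration of the reasoning, with assumed numbers; a real decision would need measured defect rates and costs.

```python
# Hedged sketch of the inspection trade-off: skip inspection only when the
# expected cost of escaped defects per unit is below the cost of inspecting
# a unit. All figures are assumptions for illustration.
def should_inspect(defect_rate, cost_per_escape, inspection_cost_per_unit):
    expected_escape_cost = defect_rate * cost_per_escape
    return expected_escape_cost > inspection_cost_per_unit

print(should_inspect(0.001, 500.0, 2.0))  # expected escape cost $0.50 < $2.00
print(should_inspect(0.02, 500.0, 2.0))   # expected escape cost $10.00 > $2.00
```

Note that the `defect_rate` input is exactly the kind of data about variation that the next paragraph says must be collected before such a decision is valid.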
Now I hope the student is thinking that a valid decision not to include inspection takes data establishing that the variation is sufficiently low. The astute student is also thinking that collecting such data costs money too, perhaps as much as the inspection. This is an example of what is meant by a manager needing to know how to manage in the presence of variation. Next we examine how a manager can achieve such understanding and make good decisions in the presence of variation.

Variation is a Statistical Phenomenon

To understand managing in the presence of variation we must answer three questions. How can the manager decide:
·       when to take action,
·       what action to take and
·       who should take the action?
Managing correctly in the presence of variation requires the use of methods based on statistics, since variation is a statistical phenomenon. The statistics needed for 85% or so of a manager’s work is relatively simple and easily learned. The effective leader and all workers must understand and use these simple methods. However, there are situations that require more elaborate statistics. Every organization should have access to at least one person well versed in statistical methods so that managers and process improvement teams have a resource to check their work and assist on complex problems. This statistical expert can be a consultant or a worker who is well trained in statistics.
Here we are going to briefly look at some of the most important simple methods. As an example, figure 17 illustrates the daily averages of phone expenses for an organization plotted for each month of a year.


Figure 17 A graph of an organization’s daily phone expenses averaged for each month of a year.
Should the manager take action in response to the March expenses? The June expenses? If action is necessary in response to the March expenses, whose action is it? The manager’s? The workers’? If the manager is expected to discuss unusual expenses in a weekly or monthly report, what should the manager say about the March and June expenses?
Control charts are a visual method of answering the questions posed about the phone bills. A control chart for the phone expenses data from figure 17 is shown in figure 18. You can learn how to generate control charts later. For now I only partially describe how to interpret the data in a control chart.


Figure 18 A control chart for the example phone expense data.
The line with diamond markers is the same data shown in figure 17. The line with the square markers results from averaging the data over a whole year. The line with the triangle markers shows the range of variation of daily expenses for a given month. The two lines labeled Upper CL and Lower CL are upper and lower control limits, which are statistically determined from the data set. For the purposes of this introduction it isn’t necessary to know how to calculate the control limits. The control chart tells us that, with the exception of the March data point, the phone expenses are stable; that is, they vary about a stable sample average that is neither steadily increasing nor decreasing. A stable process is predictable, e.g. its frequency of errors, efficiency, process capability and process cost are predictable. Deliberate changes to a stable process can be evaluated. Note that some process improvement literature refers to a stable process as being “in control”.
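To give a feel for where the control limits come from, here is a rough sketch using the simplest standard method, an individuals (XmR) chart, on made-up monthly averages standing in for the figure 18 data. The chart in the figure is an X-bar/R style chart, which uses tabulated constants and subgroup ranges; the version below is a simplified stand-in, not the exact calculation behind the figure.

```python
import statistics

# Hedged sketch: individuals (XmR) control limits on made-up monthly
# phone-expense averages (dollars/day). The d2 = 1.128 constant converts
# the average moving range into a sigma estimate for subgroups of size 2.
monthly_avg = [21.0, 22.5, 12.0, 20.5, 21.8, 23.0,
               19.5, 20.0, 22.0, 21.2, 20.8, 21.5]  # March = 12.0, the outlier

center = statistics.mean(monthly_avg)
moving_ranges = [abs(b - a) for a, b in zip(monthly_avg, monthly_avg[1:])]
sigma_est = statistics.mean(moving_ranges) / 1.128

ucl = center + 3 * sigma_est  # upper control limit
lcl = center - 3 * sigma_est  # lower control limit

# Points outside the limits signal special cause variation.
special = [m + 1 for m, x in enumerate(monthly_avg) if not lcl <= x <= ucl]
print(special)  # only month 3 (March) falls outside the limits
```

Run on these assumed numbers, only March is flagged, matching the interpretation of figure 18: eleven months of common cause variation and one special cause point.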
Variation exhibiting a stable statistical distribution is due to the summation of many small factors and is called common cause variation. Changing a stable process, i.e. one with only common cause variation, is typically the manager’s responsibility but can be the responsibility of trained and empowered workers. Knowledge workers should be responsible for common cause variation because they are usually more expert in their processes than their managers. However, as described in the next lecture, even knowledge workers should not be empowered to control their processes before they have been trained in statistical methods, because mistakes can make processes worse.
Only the data point for one month, March, falls above or below the two control limit lines. Variation that is outside the stable statistical distribution, i.e. above the upper control limit or below the lower control limit, is special cause variation. The point for March falls below the lower control limit. This means that the March data is special cause variation. Special cause variation is the workers’ responsibility; they typically know more about possible causes than the manager because they are closer to the process. But the workers need training in problem solving to fix special cause variation, and they need to be empowered to make fixes to their processes.
The workers should review the data for March and examine the phone system to see if they can determine the reason the daily averages were so low. For example, the phones may have been out of order for a week, which would have lowered the daily expenses but require no action other than getting the system operating again. Properly trained and motivated workers can handle special cause problems, usually without any management involvement.
A stable process is a good candidate for process improvement. The goal of process improvement for a stable process is to reduce the variation and/or change the mean. Process improvement should not be attempted on a process that is unstable until the process is brought to a stable condition because changes in data taken on an unstable process cannot be uniquely attributed to the action of the process improvement. The special cause variation that makes the process unstable must be removed before beginning process improvement.
Note that the control chart also provides the manager information useful in considering process improvement. In the example shown in figure 18 the yearly average phone expenses are about $21 per day. A manager can evaluate the cost-benefit of making a change to the phone service based on this data since it is stable over a year. If the manager can make a change without investment that promises a 10% reduction in phone expenses, the manager can see that data will have to be monitored for about four to six months to determine whether the mean daily expenses do indeed drop from $21 to $19, because the normal range of variation in monthly averages is larger than the expected change. However, if the change really works as promised then in about four to six months the monthly averages should begin to vary about a new long term average and the control chart will show this change.

Exercise

1.     Go to “Control Charts” in Wikipedia (http://en.wikipedia.org/wiki/Control_) and read the article. This material expands upon the introduction given in this lecture.
2.    Go to http://www.goalqpc.com/shop_products.cfm and buy yourself a copy of Memory Jogger II. This handy book teaches everything you need to know about problem identification and problem analysis. It is small enough to carry in your pocket and it is your guide to the details of process improvement. If you prefer a spiral bound version it is available from Amazon.com (Michael Brassard, and Diane Ritter, The Memory Jogger II: A Pocket Guide of Tools for Continuous Improvement and Effective Planning) There is also a Six Sigma Memory Jogger available.
The Memory Jogger book recommended here is so widely used and so effective for the practical user that there is no point in repeating the material in this course. The student is expected to study the Memory Jogger and put the techniques into practice. This means that the student and all the people reporting to the student are to have the Memory Jogger book, or an equivalent, be trained in the techniques summarized in the book and put these techniques into practice. This is essential if an effective organization is expected. The exception is if your organization is following the Six Sigma approach, where only selected people are highly trained.
If you prefer not having to learn statistical techniques yourself you can attend training if your budget and schedule permit. One example workshop in statistical process control is offered by the American Supplier Institute. See: http://www.amsup.com/spc/1.htm. This workshop focuses on manufacturing but the techniques work for any type of organization. A web search reveals many other training organizations offering similar programs. I have found it more cost effective, when training all workers, to bring the trainer to the organization rather than sending workers to outside training.

If you find that the pace of blog posts isn’t compatible with the pace you would like to maintain in studying this material you can buy the book “The Manager’s Guide for Effective Leadership” in hard copy or for Kindle at:
or hard copy or for nook at:
or hard copy or E-book at:



Wednesday, May 22, 2013

25 Overview of Theory of Constraints

The theory of constraints involves techniques for improving processes that have to be learned independently of the material we address in subsequent lectures. This theory should be applied to business processes before beginning the process improvement methods that are discussed in the following lectures. If the student understands the theory of constraints and if this theory is being applied to the business processes the student is concerned with then this lecture can be skipped. If not, this overview introduces the theory and gives the student some feeling for the necessity for learning and using this theory.
Theory of constraints deals with aspects of control often neglected or wrongly presented in standard texts. I suspect the likely reason is that theory of constraints as applied to business organizations was made popular outside of business schools by a physicist, Eliyahu M. Goldratt. Theory of constraints is described by Goldratt via his books The Goal, The Race, Critical Chain and other process-oriented management books. These books are “business novels” and enjoyable reads as well as being excellent self-training books. Theory of constraints is appropriate to processes associated with manufacturing operations, back and front office service operations and projects. I distinguish between back and front office service operations because although theory of constraints applies to front office service operations it shouldn’t be the main focus when dealing directly with customers. This is because it is better to be effective with customers than to be highly efficient at the expense of some effectiveness.
Theory of constraints is based on the fact that the throughput of a process can be no greater than the throughput of the slowest step in the process, i.e. the constraint. It is a simple and seemingly obvious concept but having seen many offices with desk after desk stacked with paper work waiting to be processed and many factories with work in process stacked around machine after machine I can tell you that it isn’t obvious to many managers in spite of the fact that violating this theory leads to inefficient operations and excessive costs.
A basic work process, applicable to any organization, is shown in figure 12.


Figure 12 A basic work process has suppliers, inputs, outputs and customers.
This chain is often called SIPOC after the initials of each element in the chain. Manufacturing, project and back office service processes are typically many-step processes, each with suppliers, inputs, outputs and customers. A simple example with ten steps is shown in figure 13. Each circle with an S is a SIPOC chain in which the preceding S is the supplier of inputs to the S and the following S is the customer for its outputs. Note that a process can have more than one supplier, as S4 is supplied by S3 and S8 in this figure. Similarly a process can have more than one customer. A more complex, but typical, process might have loop backs where material or paperwork not meeting standards is sent back to an earlier process for rework.


Figure 13 Typical business processes integrate many individual SIPOC processes.
If we assume that each of the steps shown in figure 13 has a different throughput then the theory of constraints states that the throughput of the overall process cannot be any larger than the throughput of the slowest step. If the manager in charge of an overall process like that illustrated in figure 13, with each step having a different throughput, expects the workers to stay busy you can imagine what results. Work in process (WIP) builds up in front of all steps that are slower than the previous step. This excess WIP can lead to several problems, including:
·       In manufacturing operations and in some project operations the WIP leads to excess inventory costs.
·       Associated with excess WIP is excess cycle time, i.e. the time from the first step to the final step in the overall process.
·       If a worker at one of the non-constraining steps begins to make errors in paperwork, or if a machine at a non-constraining step begins to produce defective parts, then excess costs result from the extra rework required on all the defective material produced before the problem is detected at some subsequent step.
·       Eventually expediters and/or overtime are added to ensure that time critical work is located and processed at the expense of other less critical work, leading to excess labor costs.
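The constraint principle and the WIP buildup described above can be demonstrated with a tiny simulation. This is a hedged sketch with made-up step capacities, not a model of any real factory: material is released at the first step's rate, each step processes up to its capacity per day, and the queues in front of the slower steps grow without bound while throughput stays pinned at the constraint's rate.

```python
# Hedged sketch: overall throughput equals the slowest step's rate, and
# releasing work faster than that only builds WIP. Capacities are assumed.
capacities = [12, 9, 5, 8, 11]   # units/day; step 3 (5/day) is the constraint
release_rate = 12                # releasing at step 1's capacity, a mistake
days = 20

wip = [0] * len(capacities)      # queue in front of each step
shipped = 0
for _ in range(days):
    wip[0] += release_rate
    for i, cap in enumerate(capacities):
        done = min(wip[i], cap)  # each step processes up to its capacity
        wip[i] -= done
        if i + 1 < len(capacities):
            wip[i + 1] += done   # output becomes the next step's input
        else:
            shipped += done

print(shipped / days)  # throughput settles at the constraint's 5 units/day
print(wip)             # WIP piles up in front of the steps slower than feed
```

After 20 simulated days the line has shipped exactly 5 units per day while 140 units of excess WIP sit queued in front of the slower steps, which is the excess inventory, cycle time and cost problem the bullets above describe.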
A second, and again often overlooked, result of the theory of constraints is that there are no additional costs incurred if workers at non-constraining steps are idle, as long as there is material available for the worker or machine at the next step. This means that if such workers are cross-trained then they can do other productive work when there is a buffer of output work after their step. The value of workers doing other work justifies paying premium wages to workers who are cross-trained and the cost of cross-training.
Most important is that workers at non-constraining processes have time to spend on process improvement and, since total productivity is not reduced, there is no additional cost for the process improvement labor. This is one reason theory of constraints should be applied to work processes before initiating other process improvement activities.
Figure 14 illustrates how to control processes with a constraining step.


Figure 14 Adding buffer inventories and controlling work material release controls work in process for processes with constraining steps.
In the example shown in figure 14 step 3 is assumed to be the constraining step. Buffer inventory is maintained in front of step 3, indicated by the small rectangle, so that it can never be idle due to lack of input. The size of the buffer in front of step 3 is controlled by the rate of work material released to the input of step 1, indicated by the dotted line from the input of step 1 to the buffer inventory at the input to step 3. It is also correct practice to add a buffer in front of step 4 and regulate the input to step 5 to control the size of this second buffer. The reason for the second buffer is to ensure that step 4 does not become the constraining step due to material not being available from step 8. Note that this process control approach applies to any type of business that involves material, i.e. paper, electronic media or parts, moving from step to step to accomplish an overall work objective.
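The release-control logic in figure 14 can be sketched as a small rule: release material at the constraint's rate, plus whatever is needed to restore the protective buffer in front of it. This is only an illustration of the idea (often called drum-buffer-rope in the theory of constraints literature); the rates and buffer size are assumptions.

```python
# Hedged sketch of figure 14's release control: the constraint sets the
# pace ("drum") and material release is tied to it ("rope") so the buffer
# in front of the constraint stays near a target size. Numbers assumed.
constraint_rate = 5    # step 3 capacity, units/day
target_buffer = 10     # protective inventory in front of the constraint

def release_today(current_buffer):
    """Release enough to replace what the constraint will consume today,
    plus any shortfall needed to restore the target buffer."""
    return max(0, constraint_rate + (target_buffer - current_buffer))

print(release_today(10))  # buffer at target: release exactly the drum beat
print(release_today(6))   # buffer low: release extra to rebuild it
print(release_today(20))  # buffer overfull: release nothing today
```

Because release never exceeds what the constraint can absorb plus the buffer shortfall, WIP in front of the constraint stays bounded instead of growing the way it does when every step is simply kept busy.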
A personal experience is a good illustration of the problems caused by not applying the theory of constraints. I was asked to consult for a factory that was in danger of being shut down and the work moved out of the country because the corporate office was not satisfied with the factory’s performance. A quick tour showed that there was excess WIP nearly everywhere. In fact a special material handling system had been installed just to deal with the partially finished goods throughout the factory. A few questions revealed that the constraining process was the final process before the products were boxed and shipped.
I held a Saturday training session for the managers. I asked them what the cycle time was for their products. They answered that it was about 35 days from first material release to shipping products made with that material. I then asked what the cycle time would be if material moved from process to process with no waiting time in front of each process. They thought awhile and answered that it would be 7 days. A few more leading questions and I could see light bulbs coming on in a few minds and excited expressions on faces. Incidentally, the first person who comprehended what they had been doing wrong was a woman doing administrative work in the front office. By Monday they had plans worked out to change their methods and were starting to implement the plans.
I called the general manager a couple of months later and asked if the cycle time had changed. They had two products going through the same production line. He said the cycle time for one product had been reduced to the ideal 7 days by applying theory of constraints. They began releasing material into the line at the rate of the final constraining process and maintained buffer work in process only in front of the constraining process. Unfortunately, he was not allowed to control the release of material for the second product and its cycle time was still about 35 days. Corporate marketing people controlled the release of material for the second product and they released it according to their sales instead of the factory capabilities. I never learned if the general manager was able to convince corporate management that marketing’s control of material release for the second product was the cause of the factory’s excess cycle time, excess WIP and associated excess costs.
This short introduction to the Theory of Constraints illustrates the principle. Managers of manufacturing or back office service operations should study Theory of Constraints, just in time (JIT) inventory control and Lean techniques and understand the value of small lot size in controlling the cost of poor quality. Project managers should study critical path scheduling as well as the theory of constraints. I recommend project managers read Goldratt’s book Critical Chain, which addresses scheduling for projects.

Exercise

Like lecture 23 this lecture is only an introduction and no exercises are required unless the student isn’t familiar with the theory of constraints and using it already. If the student isn’t knowledgeable in these techniques and isn’t already using them then additional self-study is necessary to learn how to put them into practice for real business processes, which tend to be more complex than the simple example used here to illustrate the principles involved. I recommend reading Goldratt’s books because they are fun reads as well as excellent for self-training.



Monday, May 13, 2013

24 C The Risk Burn Down Chart


A spreadsheet similar to the risk register can be developed to manage risks and manage the budgets associated with risk management on large and long duration projects. It isn’t possible to avoid all arbitrariness in forecasting the risk management budget but it is possible to provide good management visibility into the process. One approach is as follows: (This description is a bit tedious so look ahead at figures 10 and 11. If the process is obvious to you from the figures skip the text. If not then just wade through the description. It may help to draw out the spreadsheet as you read the description.)
•           Develop a spreadsheet with time, e.g. months, in the first column and the known risks in the first row of adjoining columns. The planned mitigation expense estimate as a function of time for each risk is added in the rows for each known risk. Summing the entries in each row across columns results in the estimated mitigation expense for all risk mitigation activities for that time period. As new risks are identified they are added in the first row of new columns and mitigation budgets are added in appropriate time rows in the new columns.
•           Develop a second spreadsheet with the following columns
o          Time line, e.g. month number from beginning of the project or actual dates
o          Planned Mitigation Expense per time period to be spent on risk mitigation for known risks, i.e. the sum across the row for that time period from the first spreadsheet
o          Cumulative Planned Mitigation Expenses, i.e. the cumulative cost estimates for mitigation activities for the risks known at the time the plan is developed.
•           Recognize that as the project progresses new risks will appear as decisions are made and additional risk management budget is needed to mitigate these new risks. Therefore add the following columns to the spreadsheet.
o          Adjusted Cumulative Mitigation Budget; the planned expenses plus an adjustment, e.g. an arbitrary percentage, to mitigate unknown risks that will arise during the project.
o          Actual Mitigation Expenses for each time period.
o          Cumulative Actual Mitigation Expense
It may be helpful at this point to show a chart resulting from an example of the process described so far. Figure 10 is a chart for a large project in which the mitigation budget is nearly $40 million. In this example the initially identified risks were planned to be mitigated with just over $30 million. The arbitrary adjustments for unknown risks increased the budget to nearly $40 million and the actual expenses at the end of a year were just below the adjusted budget. For situations where the budget for risk mitigation is released incrementally, or for a large project that continues for several more years, having data such as this chart provides the project managers sound arguments to defend their requests for risk mitigation budgets.

Figure 10 An example of risk mitigation budget and expense resulting from the example approach.
The mitigation budget and expense are only half of the story. Risk is the rest of the story so now let’s return to the example approach:
•           At the beginning of a project sum up the expected values of all risks on the risk register. This cumulative risk value is the amount of over budget expense that is likely if initially known risks are not mitigated before they impact the project. Add a column to the second spreadsheet for this Cumulative Risk Value before Mitigation.
•           Add a risk value adjustment factor for each time period to cover unknown risks that will arise and add a new column to the second spreadsheet for the Adjusted Cumulative Risk Value before Mitigation. These “adjusted” values represent the best estimate of how both identified and new risks will be mitigated throughout the project.
•           As the project continues, new risks are added and all risks are mitigated so that a Cumulative Risk Value after Mitigation can be added to the spreadsheet. Now there is sufficient data to construct a Risk Burn Down Chart which shows how the risk value is reduced over time by the risk mitigation work.
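The spreadsheet bookkeeping described in the bullets above is mostly running sums, which a few lines of code make concrete. This is a hedged sketch with made-up monthly figures and an assumed 25% allowance for unidentified risks; the column names follow the description above but the numbers are purely illustrative.

```python
# Hedged sketch of the burn-down spreadsheet columns, with assumed
# monthly figures in $K. Running sums give the cumulative columns.
from itertools import accumulate

planned_mitigation = [200, 300, 250, 400, 350, 300]  # per period, from sheet 1
cum_planned = list(accumulate(planned_mitigation))   # Cumulative Planned

adjust = 1.25  # arbitrary +25% allowance for risks not yet identified
adjusted_budget = [round(x * adjust) for x in cum_planned]  # Adjusted Cumulative

actual = [180, 310, 240, 390, 360, 280]              # Actual per period
cum_actual = list(accumulate(actual))                # Cumulative Actual

# Cumulative risk value before mitigation: sum of probability x impact
# over the risk register (register entries assumed, impact in $K).
register = [(0.3, 2000), (0.1, 5000), (0.5, 800)]
risk_value = sum(p * i for p, i in register)

print(cum_planned[-1], adjusted_budget[-1], cum_actual[-1], risk_value)
```

Plotting `adjusted_budget` against `cum_actual` over time gives a figure 10 style budget chart, and tracking `risk_value` as risks are retired each period gives the burn-down line of figure 11.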
An example of a risk burn down chart is shown in figure 11. In this example the adjusted and actual cumulative risk values track each other reasonably well. If the manager of this project needed additional risk management funding in the middle of the project then showing this chart to the funding authority would provide excellent justification for the needed funds. If the planned and actual risk mitigation expenses also tracked each other well, as in the example shown in figure 10, then the funding authority should have good confidence in the management team.

Figure 11 An example risk burn down chart for a large project with high initial risk.
The charts resulting from the approach outlined above are useful for showing those responsible for funding projects the most likely project expense if risk mitigation is effectively conducted and the likely budget impacts if risks are not proactively mitigated. In the example shown the likely budget impact if risks are not mitigated is over $400 million. This budget impact is reduced to about $23 million by an expenditure of about $38 million for a total impact of about $63 million compared to over $400 million.
The percentage adjustments for risks that will be identified during a project are necessarily arbitrary, but they can be revised during the project if the actual cumulative risk value line deviates substantially from the adjusted cumulative risk value line.
In summary, spending a small amount of money in proactively mitigating risks is far better than waiting until the undesirable event occurs and then having to spend a large amount of money fixing the consequences. Remember that risk management is proactive (problem prevention) and not reactive. Also risk management is NOT an action item list for current problems. Finally, risk management is an on-going activity. Do not prepare risk summary grids or risk registers and then put them in a file as though that completes the risk management process, a mistake inexperienced managers make too often.
Exercise
1. Spend some quiet time thinking about what the worst possible thing your competitors could do that would negatively impact your organization in the short and long terms. If you have already done this and have mitigation plans in place or on the shelf you are a mature risk manager. If not, you have some homework to do.
2. Handling anything your competitors do or responding to the loss of your most important customer are the easy ones. Now imagine that your organization is stable, progressing well on improving effectiveness, trust in management is growing, enthusiasm is growing and then your superiors tell you to lay off 10% of your people in order to increase enterprise profits for the year. You know this is going to demoralize the organization for some time and erode trust in the benefits of working to improve the organization. How do you respond to your people and to your superiors? There is no easy answer to this question but in today’s environment it is not an unlikely occurrence and you should be prepared for it.
3. Does your organization have a standard risk management process in place? If so then go on to the next lecture. If not then think through a plan to put a standard process in place and train workers to use it. This can be a commercial process or a process you or your workers develop. You can implement it via formal training or on an incremental basis. The important thing is having a process and using it religiously.
