Optimizing Manufacturing Processes by Applying Advanced Lean/Six Sigma Tools


by Ron Covelli

Understanding the Effects of Variability on a Value Stream’s Behavior, Then Managing the Tradeoffs

The Challenge


Striving for “one-piece flow” represents the ultimate vision of operational excellence, but acknowledging the existence of variability, both good and bad, defines the more practical endeavor of achieving “continuous flow.” While many organizations are active in value stream mapping, forming cellular work centers, and applying Lean and Six Sigma methodologies, they struggle to reach the ideal final result: increasing process velocity by creating continuous flow. A value stream is composed of two essential components: demand from the customer side and transformation on the supplier side. If demand and transformation are not perfectly aligned, an additional component in the form of a buffer (inventory, capacity, or time) will appear. The appropriate mix and tradeoffs of variability buffers depend on the behavioral attributes and organizational strategy of the process. Establishing a framework for how variation bounds process performance, and accurately predicting the results of changes to the value stream, accelerates the optimization of a process when the tools of Lean and Six Sigma are applied.

Initial steps in this journey typically include identifying part families that go through the same set of processes and dedicating a workcell that minimizes people and product movement. Rearranging equipment into a U-shape, reducing floor space, and cutting people/product travel by 50% do not, by themselves, define a continuous flow workcell. While these results are an improvement, they represent only half of the potential gains. Continuous flow is what propels a workcell from good to great!

In the manufacturing field, the two sides of the product fence often discussed when implementing flow are high volume/low mix and low volume/high mix environments. Diving deeper into these product flows uncovers many additional quantities that are variable: process times, setup times, mean time to failure and mean time to repair of a machine, yield rates, arrival rates, batch sizes, and routing sequences.

The fundamental activity of any manufacturing process centers on the flow of parts and inventory. Flows typically follow routings that define the sequences of processes and include the elements of capacity and time. Inventory is what separates flows; a form of inventory will exist wherever parts need to come together. In almost all processes, the following performance measures are significant: throughput, the rate at which parts are processed; work in process (WIP), the number of parts in the process; and cycle time, the time it takes a part to pass through the process, including any rework, restarts due to yield loss, or other delays. In practice, a process can reveal dramatic differences in throughput. Why? Variability! Variability within and between workstations is the reason that queues form at processes, and why queuing delays propagate to downstream workstations. Along with variability, batching has a profound impact on cycle time. Depending on the lot or batch size, the wait-in-batch-time (WIBT) and the wait-to-batch-time (WTBT) can be the largest values in the total line cycle time equation! Variability is ultimately what drives the behavior of a process away from the best case and toward the worst case.
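These three measures are bound together by Little's Law: WIP = throughput x cycle time, which holds for any stable process regardless of the variability inside it. Below is a minimal sketch of the relationship in Python, using illustrative numbers rather than data from any particular cell:

```python
# Little's Law: WIP = throughput (TH) x cycle time (CT).
# Given any two of the three measures, the third follows.
# All numbers below are illustrative, not from the article.

def cycle_time(wip_parts: float, throughput_per_hr: float) -> float:
    """Average cycle time (hours) implied by observed WIP and throughput."""
    return wip_parts / throughput_per_hr

def wip(throughput_per_hr: float, cycle_time_hr: float) -> float:
    """Average WIP implied by observed throughput and cycle time."""
    return throughput_per_hr * cycle_time_hr

# Example: a cell holding 120 parts while shipping 10 parts/hour
# implies a 12-hour average cycle time, before any improvement.
print(cycle_time(wip_parts=120, throughput_per_hr=10))  # 12.0 hours
```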

The Process

Improving process parameters, and improving performance given those parameters, are the two methods for enhancing the execution of a flow. Process parameters can be improved by either increasing the bottleneck rate or decreasing the raw process time. Process velocity can be amplified by adding capacity or by improving machine reliability, yield, or quality. The two primary means for improving performance given parameters are: 1.) reducing batching delays at, or between, processes by means of setup reduction, better scheduling, and lot-splitting, and 2.) reducing delays caused by variability through changes in products, processes, routings, and operators that permit smoother flows through and between workcells.
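To make the parameter language concrete, the sketch below derives the two governing parameters from hypothetical station data: the bottleneck rate (rb) is the rate of the slowest station, and the raw process time (T0) is the sum of the average process times with no waiting. The station names and numbers are assumptions for illustration only:

```python
# Hypothetical serial line: (station name, effective rate in parts/hr,
# average process time in hr). rb = slowest rate; T0 = sum of process times.
stations = [
    ("saw",    12.0, 1 / 12.0),
    ("mill",    8.0, 1 / 8.0),   # slowest station: the bottleneck
    ("deburr", 15.0, 1 / 15.0),
]

rb = min(rate for _, rate, _ in stations)   # bottleneck rate (parts/hr)
t0 = sum(t for _, _, t in stations)         # raw process time (hr)
w0 = rb * t0                                # critical WIP (parts)

print(f"rb = {rb} parts/hr, T0 = {t0:.3f} hr, critical WIP W0 = {w0:.2f}")
```

Their product, rb x T0, is the critical WIP: the smallest WIP level at which a zero-variability line could run at full throughput. It reappears in the performance boundaries of Step 1 below.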

The methodology defined below embraces the strategies of Lean and Six Sigma, allowing increased throughput velocity by continuous flow, resulting in reduced value stream cycle time.

Step 1: Determine the Boundaries of Process Performance

Variability is never zero! Process behavior analysis of individual stations and cumulative line performance is paramount in describing how variability affects the value stream. Complicated simulation software is not required: means, standard deviations, variances, and the coefficient of variation are the key inputs into the analysis. Along with applying statistical thinking, there are fundamental relationships between inventory, cycle time, and variability that govern how all processes operate; processes cannot operate differently than what these relationships permit. Often, the tools and methodologies of Lean and Six Sigma are applied without first understanding these underlying associations. Two common principles in understanding process behavior are: 1.) cycle time increases with utilization, and does so sharply as utilization approaches 100%, and 2.) in a batching environment, the smallest batch size that yields a stable system may be greater than one. Determine how well the process is performing versus how well it could be performing: what is the best possible performance, given the variation, that you should expect? Why isn't the process performing as well as it could? What changes are needed to reach the best possible performance?
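Those boundaries can be written down without simulation. Using the standard Factory Physics benchmark curves, a line with bottleneck rate rb, raw process time T0, and critical WIP W0 = rb x T0 has a best-case throughput of min(w/T0, rb) at WIP level w, a worst case of 1/T0, and a "practical worst case" of w/(W0 + w - 1) x rb. Where measured throughput falls between these curves shows how much variability is costing. A minimal sketch, reusing the hypothetical parameters from the earlier example:

```python
# Factory Physics benchmark throughput curves at WIP level w.
# rb: bottleneck rate, t0: raw process time, w0 = rb * t0: critical WIP.
# Parameters reuse the hypothetical line from the previous sketch.
rb, t0 = 8.0, 0.275
w0 = rb * t0  # 2.2 parts

def th_best(w: float) -> float:
    """Best case (zero variability): ramps linearly, then caps at rb."""
    return min(w / t0, rb)

def th_practical_worst(w: float) -> float:
    """Practical worst case: maximum 'benign' variability (CV = 1)."""
    return (w / (w0 + w - 1)) * rb

def th_worst(w: float) -> float:
    """Worst case (one giant batch): throughput is pinned at 1/T0."""
    return 1 / t0

for w in (1, 2, 5, 10):
    print(f"w={w:>2}: best={th_best(w):5.2f}  "
          f"practical worst={th_practical_worst(w):5.2f}  "
          f"worst={th_worst(w):5.2f} parts/hr")
```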

Step 2: Eliminate “Direct” Waste

Eradicate sources of waste using Lean concepts, such as removing redundant operations and decreasing downtime due to unreliable equipment. Feature the tools of standard work to eliminate operator errors, improve workplace organization through 5S, utilize visual controls for mistake-proofing, and lay out the work area to reduce people/product travel. Unfortunately, this step is both the start and end point of most Lean-based improvement projects.

Step 3: Substitute Capacity for Inventory Buffers

A major determinant of throughput, cycle time, and WIP is capacity. The fundamental principle of capacity states: “The processing rate at all workstations in a flow cell must be strictly greater than the arrival rate to each station.” While this statement appears obvious, it is frequently neglected in practice. Make sure there is a sufficient capacity buffer in the process to enable a significant reduction in inventory without sacrificing delivery performance. The capacity principle is not a mathematical oddity: if inventory is reduced without increasing capacity, then time (through degraded delivery performance) or capacity (through a reduction in demand) will become the buffer by default.
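A utilization check makes the principle operational: compute u = arrival rate / effective processing rate at every station, and flag any station left without a meaningful buffer. The sketch below uses hypothetical rates and an illustrative 90% threshold, not a prescribed limit:

```python
# Flag stations whose capacity buffer is too thin. The 90% threshold is
# an illustrative choice, not a rule from the article.
UTILIZATION_LIMIT = 0.90

# (station, arrival rate in parts/hr, effective processing rate in parts/hr)
flow_cell = [
    ("laser", 7.5, 10.0),
    ("brake", 7.5,  7.8),   # only a 4% buffer: queues will explode
    ("weld",  7.5,  9.0),
]

for name, ra, re in flow_cell:
    u = ra / re
    status = "OK" if u < UTILIZATION_LIMIT else "INSUFFICIENT BUFFER"
    print(f"{name:<6} utilization = {u:6.1%}  {status}")
```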

Step 4: Reduce Variability

Drive out variability by using the enhanced visibility made possible by the low-WIP environment, focusing on rework, scrap, downtime, and long setups. Problems can be traced to their source because less WIP inventory translates to reduced cycle times, which decreases the time between “defect creation” and “defect detection.” Opportunity has opened its doors to apply Six Sigma techniques. An easily calculated measure of variability frequently used is the coefficient of variation (CV), defined as the standard deviation divided by the mean. Because the mean and standard deviation have the same units, the coefficient of variation is unitless, making it a consistent measure of variability across a wide range of random variables. Random variables included in process analysis include setup and run times, and arrival and inter-arrival times to workstations within a flow cell.
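A minimal sketch of the CV calculation follows, with made-up run-time samples standing in for data that would normally come from time studies or machine logs:

```python
import statistics

def coefficient_of_variation(samples: list[float]) -> float:
    """CV = standard deviation / mean; unitless, so comparable across variables."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical run times (minutes) for the same operation.
run_times = [4.1, 3.8, 4.4, 9.5, 4.0, 3.9, 4.2, 8.7]  # two long outliers

cv = coefficient_of_variation(run_times)
print(f"CV = {cv:.2f}")  # values near or above 1 signal a highly variable process
```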

Apply Lean techniques to reduce variability in demand as seen by the process: install a pull or continuous flow system, level the production schedule, and fix the product mix (heijunka). The choice of how many parts of a certain family to process before switching to a different family is a lot (batch) sizing decision that involves a tradeoff between capacity and time. There is no inherent reason that the process batch must equal the move batch: partial lots can be transferred to the next workstation and processed before the entire batch has been completed at the previous workstation. Because parts do not have to wait for the remaining parts in the batch, total cycle time is reduced under this concept of lot-splitting.
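The capacity side of that lot-sizing tradeoff can be made explicit. With a setup time s, a per-part process time t, and an arrival rate ra, a station running batches of k parts has an effective rate of k/(s + k*t); stability requires that rate to exceed ra, so the smallest stable batch is the smallest integer k greater than ra*s/(1 - ra*t). This echoes the principle from Step 1 that the smallest stable batch size may be greater than one. A sketch with hypothetical numbers:

```python
import math

def min_stable_batch(setup_hr: float, unit_hr: float, arrival_rate: float) -> int:
    """Smallest batch size k for which effective capacity exceeds demand.

    Effective rate with batches of k parts: k / (setup + k * unit).
    Stability requires that rate > arrival_rate, i.e. k > ra*s / (1 - ra*t).
    """
    if arrival_rate * unit_hr >= 1:
        raise ValueError("Demand exceeds pure run-time capacity; no batch works.")
    return math.floor(arrival_rate * setup_hr / (1 - arrival_rate * unit_hr)) + 1

# Hypothetical station: 1.0 hr setup, 0.1 hr/part, demand of 8 parts/hr.
print(min_stable_batch(setup_hr=1.0, unit_hr=0.1, arrival_rate=8.0))  # -> 41
```

At k = 41 the station barely keeps up; in practice a larger batch, or a shorter setup, would be chosen to preserve the capacity buffer described in Step 3.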

Step 5: Reduce the Capacity Buffer

Finally, as variability is reduced, it becomes possible to operate resources closer to their capacity.
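The standard Kingman (VUT) queueing approximation shows why: queue time at a station is roughly CTq = [(ca^2 + ce^2)/2] x [u/(1 - u)] x te, the product of a variability term, a utilization term, and the mean effective process time te. Shrink the variability term, and utilization can rise without queue time exploding. A minimal sketch with hypothetical values:

```python
def queue_time(ca: float, ce: float, u: float, te: float) -> float:
    """Kingman/VUT approximation: CTq = V * U * T.

    ca, ce: coefficients of variation of inter-arrival and process times
    u:      utilization (must be < 1)
    te:     mean effective process time
    """
    v = (ca**2 + ce**2) / 2          # variability term
    return v * (u / (1 - u)) * te    # queue time, in the units of te

te = 0.5  # hypothetical 0.5 hr mean process time
# High variability (CV = 1) at 85% utilization...
print(f"{queue_time(1.0, 1.0, 0.85, te):.2f} hr in queue")    # ~2.83 hr
# ...versus low variability (CV = 0.25) at 95% utilization.
print(f"{queue_time(0.25, 0.25, 0.95, te):.2f} hr in queue")  # ~0.59 hr
```

Here the lower-variability station runs ten points hotter and still holds a fraction of the queue time, which is exactly the license Step 5 grants.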

The Results

Variability and batching degrade the performance of process flows and create boundaries for establishing continuous flow. The complicated performance measure of cycle time can be more easily understood after addressing the issues of utilization, batching, and variability. Average cycle time at a single work center is made up of move time, queue time, setup time, process time, wait-to-batch-time (WTBT), and wait-in-batch-time (WIBT). The process time in the equation is the only value-adding component; the remaining elements are delay time, or pure muda (non-value added). By applying the theory of lot-splitting, since a batch can be processed at more than one work center at a time, the average cycle time in a continuous flow workcell is equal to the sum of the cycle times at the individual workstations, less any time that overlaps two or more work centers.
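A minimal sketch of that accounting, with hypothetical component times; the overlap term stands for the hours a split lot spends being worked at two stations at once:

```python
# Average cycle time at one work center = move + queue + setup + process
# + wait-to-batch (WTBT) + wait-in-batch (WIBT). All numbers are hypothetical.
station_components = {
    "move": 0.2, "queue": 3.0, "setup": 0.5,
    "process": 1.0, "wtbt": 0.8, "wibt": 2.5,
}

station_ct = sum(station_components.values())
value_added = station_components["process"]  # the only value-adding element
print(f"station CT = {station_ct:.1f} hr, value-added = {value_added:.1f} hr "
      f"({value_added / station_ct:.0%})")

# Line CT with lot-splitting: sum of station CTs minus overlapped time.
station_cts = [station_ct, 5.0, 4.2]   # hypothetical three-station cell
overlap = 2.0                          # hours where split lots run in parallel
line_ct = sum(station_cts) - overlap
print(f"line CT = {line_ct:.1f} hr")
```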

As continuous improvement efforts progress in setup reduction, and the y = f(x) relationship for moderate-to-high coefficient of variation values is defined, the average process batch size can be reduced, resulting in additional gains.
