Developing Effective Sampling Plans for In-Process Control
During my Operations and Quality career, I have observed frequent confusion about how to establish a defensible sampling plan to support an in-process control strategy: one that effectively monitors and predicts lot quality without handicapping the production team with inefficiency. I recently came across this concern again in a Quality Board posting on LinkedIn. It is no wonder the issue is so intensely debated, as many stakeholders are impacted by the decision. Ultimately, the end user or customer is most affected by the sampling plan's ability to detect reduced quality before the product reaches them. Quality is, and should be, deeply concerned with the appropriateness of the plan, because they typically hold final authority for approving and releasing product and must verify that it meets specifications. Manufacturing is also rightly concerned with the plan's suitability, particularly if it requires manual measurements that may temporarily suspend operations, or if the testing is destructive and reduces yield.
For the purposes of this discussion, we will focus on process measurements that produce variable data, such as tablet thickness or hardness, particle size, fill volumes, optical density measurements, pH values, or similar. An analogous thought process exists for attribute data.
I have observed multiple approaches to this issue. The first inclination of many organizations is to collect 10 samples every 60 minutes from the production line (or some other arbitrary sample number at some other arbitrary time interval). This approach ignores available knowledge of process performance (from both development and validation efforts), and it gives the organization a false sense of security that they will be able to 'bracket' adversely impacted portions of the production run.
Another common approach is for a Quality Engineer, when proposing the sampling plan, to blindly consult an Acceptable Quality Limit (AQL) inspection table. The total number of samples required is read from the table based on the batch (population) size, to confirm acceptance relative to a pre-determined quality level. That sample number is then evenly dispersed across some arbitrary set of intervals, either time-based or unit/volume-based, spanning the anticipated production duration. Two main faults with this approach are as follows:
1. An uninformed quality engineer may look up a sampling plan designated for attributes, not realizing the plan is inappropriate for variable data.
2. AQL sampling plans, whether designed for attribute or variable data, are based on random sampling across an entire lot, and they provide meaningful information about the lot as a whole only if the process performs predictably over time. An AQL sampling plan will not reliably capture isolated or pocketed incidents of non-conformance, as the quick calculation below illustrates.
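To make the second fault concrete, consider the probability that a purely random sample even includes one unit from a localized pocket of non-conforming product. The sketch below uses illustrative numbers (the lot size, pocket size, and sample sizes are assumptions, not values from any AQL table) and is optimistic, since it assumes any sampled pocket unit would actually be flagged:

```python
from math import comb

def pocket_detection_probability(lot_size: int, pocket_size: int,
                                 sample_size: int) -> float:
    """Probability that a simple random sample contains at least one
    unit from a 'pocket' of affected units (hypergeometric model)."""
    p_miss = comb(lot_size - pocket_size, sample_size) / comb(lot_size, sample_size)
    return 1.0 - p_miss

# Illustrative: a 10,000-unit lot with a 200-unit pocket of
# non-conforming product (e.g., a brief upstream upset).
print(pocket_detection_probability(10_000, 200, 125))  # ~0.92
print(pocket_detection_probability(10_000, 200, 32))   # ~0.48
```

Even at n = 125, roughly one such run in twelve would slip through undetected, and smaller random samples fare far worse.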
Below I will outline an approach to developing a reliable in-process control strategy that meets the following objectives:
1. Provides a reliable method to detect real-time process shifts and to predict overall lot quality.
2. Is justifiable and can be supported during a regulatory or business audit scenario.
3. Will minimize the burden on the production line to the greatest extent possible.
Define the Process Boundaries
During process development, all of the "boundaries" of the process should have been defined for the critical performance indicators, process parameters, and/or quality attributes, as applicable. Those boundaries establish how far you can stretch the process and still anticipate acceptable clinical or functional results for the applicable product. I would also utilize process development data, along with relevant data generated during validation activities, to truly characterize where my process is centered (i.e., mean or median) and what my typical, long-term variation is expected to be (i.e., a reasonable estimate of process standard deviation).
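As a simple illustration of this characterization step, the sketch below pools hypothetical development and validation batch data to estimate the process center and long-term standard deviation (the tablet-hardness values and batch names are invented for illustration):

```python
import numpy as np

# Hypothetical tablet-hardness data (kp) from development and
# validation batches; in practice these come from your dev reports.
batches = {
    "DEV-01": np.array([11.8, 12.1, 12.4, 11.9, 12.2, 12.0]),
    "VAL-01": np.array([12.3, 12.0, 12.5, 12.2, 12.4, 12.1]),
    "VAL-02": np.array([11.9, 12.2, 12.0, 12.3, 12.1, 12.2]),
}

pooled = np.concatenate(list(batches.values()))
center = pooled.mean()             # estimate of process center
long_term_sd = pooled.std(ddof=1)  # long-term sigma, including batch-to-batch variation

print(f"center = {center:.2f} kp, long-term sd = {long_term_sd:.2f} kp")
```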
Determining Process Capability
Understanding both the process specifications and the critical distribution characteristics (i.e., process center and process variability) from the previous steps allows you to determine the following:
1. Process capability when the process is operating in a state of control. It is important to set realistic expectations for the process: what conformance or non-conformance rate you can reasonably expect, given the process variability and the relative 'distance' to the specification limits. This is critical: attempting to control a process tighter than it was designed to be capable of is a fruitless effort and a waste of time and resources.
2. Tolerance intervals for the process that allow you to determine how far the process can drift (i.e., process survivability), usually expressed in multiples of the process standard deviation, before an unacceptable number of non-conformances is expected. A brief sketch of both calculations follows this list.
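Here is that sketch, assuming a normally distributed process and using illustrative values throughout (the hardness specification limits of 10.0 to 14.0 kp, along with the center and standard deviation, are assumptions carried over from the example above):

```python
import numpy as np
from scipy import stats

def cpk(mean, sd, lsl, usl):
    """Process capability index for a process in statistical control."""
    return min(usl - mean, mean - lsl) / (3 * sd)

def expected_ppm_nonconforming(mean, sd, lsl, usl):
    """Expected non-conformance rate (ppm), assuming normality."""
    p = stats.norm.cdf(lsl, mean, sd) + stats.norm.sf(usl, mean, sd)
    return p * 1e6

def tolerance_k(n, coverage=0.99, confidence=0.95):
    """Two-sided normal tolerance-interval factor (Howe's approximation);
    for n = 30, 99% coverage, 95% confidence this gives ~3.35,
    matching published tables."""
    nu = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)
    return z * np.sqrt(nu * (1 + 1 / n) / chi2)

mean, sd, lsl, usl, n = 12.1, 0.35, 10.0, 14.0, 30
print(f"Cpk = {cpk(mean, sd, lsl, usl):.2f}")
print(f"expected ppm = {expected_ppm_nonconforming(mean, sd, lsl, usl):.3f}")
k = tolerance_k(n)
print(f"99%/95% tolerance interval: {mean - k*sd:.2f} to {mean + k*sd:.2f} kp")
```

The gap between the tolerance interval bounds and the specification limits gives one reasonable measure of the 'survivable' drift used in the next step.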
Determining Sample Size for the Control Strategy
Once you have defined the process center and variability over time, you can use the mean and standard deviation values to determine the minimum sample size that will reliably detect the 'survivable' drift, based on the tolerance interval calculation and the relative distance to your specifications, with the desired power and confidence (most often 80% and 95%, respectively). This applies both to the sub-group of samples pulled at a single time-point and to the total number of samples across the entire lot or batch used to determine batch quality.
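There are several valid ways to frame this calculation. A common one, sketched below, sizes the sample to detect a mean shift (drift) of a given magnitude using the normal approximation n = ((z_alpha + z_beta) * sigma / drift)^2; the sigma and drift values are illustrative assumptions:

```python
import math
from scipy import stats

def min_sample_size(sigma, drift, power=0.80, confidence=0.95):
    """Minimum n to detect a mean shift of 'drift' with a two-sided
    test at the given confidence and power (normal approximation)."""
    z_alpha = stats.norm.ppf(1 - (1 - confidence) / 2)  # 1.96 at 95% confidence
    z_beta = stats.norm.ppf(power)                      # 0.84 at 80% power
    n = ((z_alpha + z_beta) * sigma / drift) ** 2
    return math.ceil(n)

# Illustrative: sigma = 0.35 kp, and the drift to detect is assumed
# to be one standard deviation (0.35 kp).
print(min_sample_size(sigma=0.35, drift=0.35))  # -> 8 samples
```

Note that for any reasonably large lot this minimum sample size depends only on the process variability and the drift you need to detect; it does not scale with batch size. At small n, a t-distribution-based iteration gives a slightly larger answer.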
This is a key point to understand: it allows you to determine the minimum number of samples necessary to obtain the desired information, and it prevents oversampling. There are diminishing returns from increased sampling, because your ability to estimate the population characteristics does not improve proportionally with the number of samples taken, as the quick illustration below shows. A knee-jerk reaction by organizations looking to improve quality is often to increase sampling, but the additional samples do not always return more useful information. In fact, if measurements are manual, increased sampling opens the process up to more opportunities for measurement error, which, if not recognized, can result in process decisions based on false information.
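To quantify those diminishing returns: the precision of your estimate of the process mean improves only with the square root of the sample size, so quadrupling the sampling effort merely halves the uncertainty (sigma below is again an assumed value):

```python
import math

sigma = 0.35  # illustrative process standard deviation (kp)

# Standard error of the mean shrinks only as 1/sqrt(n).
for n in (5, 10, 20, 40, 80):
    print(f"n = {n:3d}  SE of mean = {sigma / math.sqrt(n):.3f}")
```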
Process Surveillance
Once you have developed your in-process control strategy through the previous steps, you can leverage the data over time and incorporate learnings into your organization's Process Surveillance program. Process surveillance techniques, such as statistical process control (SPC) and process capability indices (e.g., Cpk and Ppk), are used to evaluate how well the process continues (or fails to continue) to align with the baseline performance characterized during development and validation activities; a minimal example follows.
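As a minimal sketch of what this can look like, the example below computes X-bar control limits and contrasts Cpk (based on within-subgroup variation) with Ppk (based on overall variation), using simulated data in place of real in-process results:

```python
import numpy as np

# Simulated in-process subgroups (n = 4 each) of tablet hardness (kp),
# standing in for real measurements collected per the sampling plan.
rng = np.random.default_rng(7)
subgroups = rng.normal(12.1, 0.35, size=(25, 4))

xbar = subgroups.mean(axis=1)
rbar = np.ptp(subgroups, axis=1).mean()  # average subgroup range
d2 = 2.059                               # control-chart constant for n = 4
sigma_within = rbar / d2

# X-bar chart limits: center +/- 3 * sigma_within / sqrt(4)
center = xbar.mean()
ucl = center + 3 * sigma_within / 2
lcl = center - 3 * sigma_within / 2
print(f"X-bar chart: LCL = {lcl:.2f}, CL = {center:.2f}, UCL = {ucl:.2f}")

# Cpk uses within-subgroup variation; Ppk uses overall (long-term) variation.
lsl, usl = 10.0, 14.0
sigma_overall = subgroups.std(ddof=1)
cpk = min(usl - center, center - lsl) / (3 * sigma_within)
ppk = min(usl - center, center - lsl) / (3 * sigma_overall)
print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")
```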