Design of Experiments (DOE) is deeply entrenched in some R&D labs, and dismissed as overkill in others. A new paper shows you can use it both flexibly and frugally.

DOE is widely used in ingredient screening, formulation development, process optimization, and beyond. The toolkit ranges from screening designs that separate active factors from noise, to factorial designs that quantify interactions, to response surface methods that model nonlinear behavior near an optimum. Each flavor makes a mathematically explicit tradeoff between resolution and experimental cost, suited to a different stage of development.

In practice, I have seen teams pick a design without matching it to the question: full factorial "just to be safe" when a screening design would suffice. Even when the design type is right, it can often be adjusted based on domain knowledge, for example by weighting factors unequally or pooling dimensions known to matter less. The result is wasted effort and sometimes less clarity rather than more.

A recent paper captures several practical DOE examples in catalyst screening and cross-coupling optimization that showcase flexible, frugal design shaped by both chemistry and instrumentation constraints. The authors reduced experiments by 75% compared to full factorial and still identified the most promising catalytic systems and conditions.

Four lessons reinforced by this work:

🔹 Start by ranking your variables: which factors drive outcomes, which interact, and which are secondary. That ranking is a bet. Making it explicit lets you invest experimental budget where it matters most and accept reduced coverage where a directional trend is sufficient.

🔹 Match the design to that ranking. Some designs provide uniform coverage across all dimensions, ideal when factors are equally unknown. Others let you cut runs selectively on lower-impact dimensions. The right choice depends on what you must know precisely versus where a general trend is enough.

🔹 Think in stages, not one big design. A preliminary screen does not need to find the optimum. It needs to eliminate dead ends and surface promising directions. Save the higher-resolution designs for the follow-up; being strategic means matching resolution and objective to each stage.

🔹 Look beyond classical DOE when the problem calls for it. Approaches like Bayesian Optimization (BO) operate under different assumptions and yield different information. Understanding when each fits, and when to combine them, can unlock insights that no single method delivers alone.

Check out the detailed use cases in the paper (including the integration of DOE and BO for cost-aware discovery), and see how you might adapt them to your own designs.

📄 Frugal Sampling Strategies for Navigating Complex Reaction Spaces, Organic Process Research & Development, April 10, 2026
🔗 https://lnkd.in/eQZjvzvc
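To make the 75% reduction concrete: a quarter fraction of a 2-level factorial keeps one run in four while still resolving main effects. A minimal sketch with six hypothetical 2-level factors (generic placeholders, not the paper's actual variables):

```python
import itertools

# Full factorial over six 2-level factors, coded -1 / +1: 2^6 = 64 runs
full = list(itertools.product([-1, 1], repeat=6))

# 2^(6-2) quarter fraction: vary A-D freely and derive E and F from
# the generators E = ABC and F = BCD -> 16 runs, a 75% reduction
fraction = [(a, b, c, d, a * b * c, b * c * d)
            for a, b, c, d in itertools.product([-1, 1], repeat=4)]

print(len(full), "runs in the full factorial,",
      len(fraction), "in the quarter fraction")
```

The generators decide what gets confounded: resolution IV choices like E = ABC and F = BCD keep main effects clear of two-factor interactions, which is usually the right tradeoff at the screening stage.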
Design of Experiments (DOE) Techniques
Explore top LinkedIn content from expert professionals.
Summary
Design of Experiments (DOE) techniques are structured methods for planning and conducting experiments to understand how different factors influence outcomes and to find the best possible conditions with fewer tests. By systematically varying multiple factors at once, DOE helps researchers and teams make smarter decisions, save time, and uncover relationships that might be missed with traditional approaches.
- Rank your variables: Start by identifying which factors matter most for your goal, then focus your experimental resources where they’ll have the biggest impact.
- Choose the right design: Select a DOE approach that fits your stage of development and question, whether it’s for quick screening or detailed optimization, to avoid unnecessary work.
- Think in stages: Use sequential experiments to build knowledge step by step, refining your plan as you learn and preventing wasted resources on dead ends.
- Look beyond classical DOE: When the problem calls for it, complementary approaches such as Bayesian Optimization operate under different assumptions and can be combined with classical designs to unlock insights neither delivers alone.
-
If I had to give one tip to biotech startups, it would be to use Design of Experiments (DOE). It helps you save time and get more reliable results.

I first heard about DOE during my Master's in Industrial Biotechnology. It was introduced as a way to speed up experimental design. At the time, I was still convinced that optimizing a process meant changing one variable at a time: temperature, then pH, then nutrients.

I had the chance to apply DOE in my first job. That's when I saw the real difference. The sequential approach was slow, often misleading, and blind to how variables actually interact. With DOE, I could:
- Test multiple factors at once
- Detect hidden interactions
- Build predictive models without running every single experiment

That changes everything, especially in fermentation, where parameters are tightly interconnected.

I'll give you a concrete example. A team was optimizing enzyme production using 3 variables: temperature, nutrient concentration, and agitation speed. Sequential method: 27 experiments. DOE method: 9 well-designed tests. Not only did they save time, but they also discovered a key insight: agitation speed strongly influenced nutrient availability. That single piece of information drove faster, smarter decisions.

Obviously, when I founded Cultiply, I made sure DOE would be part of our DNA. It allows us (and our clients) to reduce uncertainty and make solid technical choices from the start.
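One way to read the 27-vs-9 arithmetic above: three factors at three levels each give a full 3^3 grid of 27 runs, while an L9 orthogonal array covers every pairwise level combination exactly once in 9 runs, enough to estimate all three main effects. A minimal sketch; the factor levels are hypothetical placeholders, not the actual study values:

```python
import itertools

# Hypothetical levels for the three fermentation factors (placeholders)
temperature = [28, 32, 36]     # degrees C
nutrient = [5, 10, 15]         # g/L
agitation = [200, 400, 600]    # rpm

# Full 3^3 grid: every combination -> 27 runs
full_grid = list(itertools.product(temperature, nutrient, agitation))

# L9 orthogonal array (level indices): every pair of columns contains
# each of the 9 level combinations exactly once, so all three main
# effects are estimable from just 9 runs
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]
runs = [(temperature[i], nutrient[j], agitation[k]) for i, j, k in L9]

print(f"{len(full_grid)} runs in the full grid vs {len(runs)} in the L9")
for run in runs:
    print(run)
```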
-
DoE, QbD and PAT

1. Introduction
- Evolution of pharmaceutical development: from empirical trial-and-error → risk-based scientific approaches.
- Regulatory drivers: ICH guidelines (Q8–Q14), FDA PAT initiative (2004).
- Importance of integrating design, knowledge, and real-time control.
- Positioning DoE, QbD, and PAT as a "triad" for robust, efficient, compliant development.

2. Historical Context and Regulatory Push
- Past reliance on end-product testing and its limitations.
- Shift to lifecycle management approaches.
- Role of FDA's Critical Path Initiative.
- QbD introduced into the regulatory lexicon in 2004; PAT guidance published.
- Global adoption: EMA, MHRA, WHO.

3. Understanding the Three Pillars

3.1 Quality by Design (QbD) – The Framework
- Definition & Philosophy: Proactive design vs reactive testing.
- Key Concepts: QTPP – Quality Target Product Profile; CQA – Critical Quality Attributes; CPP – Critical Process Parameters; CMA – Critical Material Attributes.
- Stages of Application: Early development → Technology transfer → Lifecycle management.
- Regulatory Basis: ICH Q8(R2), Q9, Q10, Q11, Q12, Q13, Q14.
- Tools: Risk assessments (FMEA, Ishikawa, Fault Tree Analysis), control strategy design.
- Case Study Example: QbD applied to controlled-release tablet development.

3.2 Design of Experiments (DoE) – The Optimizer
- Definition: Statistical framework for systematic factor–response exploration.
- Role in QbD: Tool to identify the design space.
- Types of DoE: Screening designs (Plackett-Burman, Fractional Factorial); Optimization designs (Central Composite, Box-Behnken); Robustness studies.
- Benefits: Identifies interactions, reduces experiments, builds knowledge quantitatively.
- Case Example: Optimizing binder level, granulation time, and impeller speed.

3.3 Process Analytical Technology (PAT) – The Real-Time Guardian
- Definition: Real-time monitoring and control toolkit.
- Role: Ensures processes remain within the validated design space.
- Techniques: NIR, Raman, FTIR, particle size analyzers, Focused Beam Reflectance Measurement (FBRM).
- Applications: Blend uniformity; moisture control; coating thickness; continuous manufacturing.
- Regulatory Context: FDA PAT Guidance (2004).
- Case Example: Inline NIR monitoring for RTRT (Real-Time Release Testing).

4. Interrelationship of the Three Pillars
- DoE as the engine of knowledge → defines the design space.
- QbD as the overarching framework → integrates knowledge, risks, and control strategy.
- PAT as the execution safeguard → ensures adherence in manufacturing.
- Lifecycle integration (development → validation → continuous verification).

5. Benefits of Integrated Use
- Regulatory alignment & faster approvals.
- Cost savings through fewer failed batches.
- Increased robustness and reproducibility.
- Knowledge management & data-driven decision-making.
- Example: Continuous manufacturing systems where DoE defines the design space, QbD integrates it, and PAT ensures execution.
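For reference, the design families named in 3.2 can be generated with open-source tooling. A minimal sketch, assuming the pyDOE2 Python package (run counts shown are the library defaults, in coded units):

```python
import pyDOE2 as doe

# Screening: Plackett-Burman design for up to 7 factors in 8 runs
pb = doe.pbdesign(7)

# Screening: 2^(4-1) fractional factorial; 'abc' aliases the fourth
# factor with the ABC interaction
ff = doe.fracfact('a b c abc')

# Optimization: Box-Behnken design for 3 factors (coded -1/0/+1)
bb = doe.bbdesign(3)

# Optimization: central composite design for 3 factors
cc = doe.ccdesign(3)

for name, design in [("Plackett-Burman", pb), ("2^(4-1) fractional", ff),
                     ("Box-Behnken", bb), ("Central composite", cc)]:
    print(f"{name}: {design.shape[0]} runs, {design.shape[1]} factors")
```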
-
One of the central value-drivers of #DesignOfExperiments is avoiding unnecessary work, but showcasing that is not so easy. Any decent experimental plan made with DOE will still have lots of individual tests (or runs, as we call them), and that makes it hard to see how much work was saved. I mean, I can't just point to the missing experiments and say: "Look how productive we were!", now can I? 😉

But the power of examples is strong, so as part of my preparation for an upcoming talk on how DOE has benefitted me in my work, I figured I might as well share examples here. The first is the case where DOE "clicked" for me, my first success story.

This was a project where we helped a company that makes wood coatings develop water-based products with a better environmental profile. For reasons I won't get into, our main task was the choice of potential candidates for the four main components of a formulation: a film-former (4 options), a surfactant (5 options), a filler (4 options) and a binder (4 options). You can combine these in 320 unique ways. Essentially, we wanted to answer: "Which combination of ingredients (among the 320) should we choose?"

To answer this, we used DOE to generate a multi-level categoric design with 30 total experiments, which you can see applied to oak boards below. The second figure shows their distribution in "ingredient space". The coatings clearly behave very differently (they're not meant to change the color of the wood)! This experiment allowed us to predict a combination of ingredients (that wasn't among the initial 30) that met the quality requirements! 👏 And this even without messing with the mixture ratios.

But is 30 mixtures a small amount of work in this kind of project? Well, before we decided to try this "new" DOE thing I had stumbled upon, we had spent a full year of the project using the one-factor-at-a-time approach and had tested more than 150 different formulations 😓 These 30 runs took us two months to complete, from first plan to final validation. I assume you can see why this really got me hooked on DOE… 😝
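Designs like this 30-of-320 categoric selection are commonly built by D-optimal search over the candidate set. A minimal sketch of a Fedorov-style exchange for a main-effects model; the 4/5/4/4 factor structure matches the post, but everything else is illustrative:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Candidate set: all 4 x 5 x 4 x 4 = 320 combinations of
# film-former, surfactant, filler, and binder options
levels = [4, 5, 4, 4]
candidates = list(itertools.product(*[range(n) for n in levels]))

def model_matrix(points):
    """Dummy-coded main-effects model (first level of each factor = reference)."""
    rows = []
    for p in points:
        row = [1.0]  # intercept
        for value, n in zip(p, levels):
            row += [1.0 if value == lvl else 0.0 for lvl in range(1, n)]
        rows.append(row)
    return np.array(rows)

def log_det(idx):
    X = model_matrix([candidates[k] for k in idx])
    sign, ld = np.linalg.slogdet(X.T @ X)
    return ld if sign > 0 else -np.inf

# Fedorov-style exchange: start from a random 30-run design and keep
# swapping design points for candidates while det(X'X) improves
design = list(rng.choice(len(candidates), size=30, replace=False))
improved = True
while improved:
    improved = False
    for i in range(len(design)):
        current = log_det(design)
        for c in range(len(candidates)):
            trial = design[:i] + [c] + design[i + 1:]
            if log_det(trial) > current + 1e-9:
                design, current = trial, log_det(trial)
                improved = True

print("30 selected runs:", sorted(candidates[k] for k in design))
```

The D-optimality criterion, maximizing det(X'X), spreads the 30 runs through the 320-point "ingredient space" so that all main effects are estimated as precisely as possible.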
-
💪🏻 𝐖𝐡𝐲 𝐒𝐞𝐪𝐮𝐞𝐧𝐭𝐢𝐚𝐥 𝐃𝐨𝐄 𝐁𝐞𝐚𝐭𝐬 "𝐁𝐢𝐠" 𝐄𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐬 ⚗️

Traditional experimental design often follows a "big DoE" approach: plan everything upfront, run all experiments at once, then analyze. But there's a smarter way. Sequential Design of Experiments builds knowledge iteratively:

• 𝐋𝐞𝐚𝐫𝐧 𝐚𝐬 𝐲𝐨𝐮 𝐠𝐨: Use early results to refine later experiments.
• 𝐑𝐞𝐝𝐮𝐜𝐞𝐝 𝐫𝐞𝐬𝐨𝐮𝐫𝐜𝐞 𝐰𝐚𝐬𝐭𝐞: Stop when you have enough information, or pivot when assumptions prove wrong.
• 𝐀𝐝𝐚𝐩𝐭𝐢𝐯𝐞 𝐨𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧: Focus experimental effort where uncertainty or benefit is highest.
• 𝐋𝐨𝐰𝐞𝐫 𝐫𝐢𝐬𝐤: Catch problems early rather than after completing hundreds of runs; smaller batches are easier to complete even when resources shift or priorities change.
• 𝐅𝐚𝐬𝐭𝐞𝐫 𝐢𝐧𝐬𝐢𝐠𝐡𝐭𝐬: Get preliminary answers sooner, refine as needed.

🤓 In one of the use cases I have contributed to, the advantages of sequential DoE became clearly visible: at each stage, we were able to maximize the response and quickly reduce its variation, while adapting the experimental space to new findings: modifying factor ranges, adding new factors, and so on.

🤝🏻 This setup also encourages discussion and collaboration between domain experts and design creators, since new knowledge can quickly be incorporated in the next augmentation phase.

🔄 Sequential DoE embraces uncertainty and turns it into an advantage. Why commit all resources upfront when you can learn, adapt, and optimize along the way?

⏯️ If you are interested in this topic, I also highly recommend watching the recorded presentation 𝑩𝒊𝒈 𝑫𝑶𝑬: 𝑺𝒆𝒒𝒖𝒆𝒏𝒕𝒊𝒂𝒍 𝒂𝒏𝒅 𝑺𝒕𝒆𝒂𝒅𝒚 𝑾𝒊𝒏𝒔 𝒕𝒉𝒆 𝑹𝒂𝒄𝒆? with David Wong-Pascua, Phil Kay, Ryan Lekivetz and Ben Francis, which highlights how and why sequential DoE makes the study of large experimental spaces (and high numbers of possible combinations) possible and efficient.

🔗 Link: https://lnkd.in/ercR9AQV

#DoE #Learning
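A minimal sketch of the learn-as-you-go loop: run a small factorial, keep the best region, shrink the factor ranges, and repeat. The response function is a toy stand-in for a real process; the factors and numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_experiment(x):
    """Toy stand-in for a real measurement (true optimum at (0.7, 0.3))."""
    return -(x[0] - 0.7) ** 2 - (x[1] - 0.3) ** 2 + rng.normal(0, 0.01)

# Start with the full coded range for two factors
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
for stage in range(3):
    # Small 2^2 factorial plus a center point: 5 runs per stage
    runs = [(a, b) for a in (lo[0], hi[0]) for b in (lo[1], hi[1])]
    runs.append(tuple((lo + hi) / 2))
    results = [(x, run_experiment(np.array(x))) for x in runs]
    best = np.array(max(results, key=lambda r: r[1])[0])
    # Augment: zoom the factor ranges in around the best point so far
    span = (hi - lo) / 4
    lo, hi = np.clip(best - span, 0, 1), np.clip(best + span, 0, 1)
    print(f"stage {stage}: best {tuple(np.round(best, 3))}, "
          f"next range {np.round(lo, 3)}..{np.round(hi, 3)}")
```

Fifteen runs total, and each stage's ranges are informed by the previous stage, which is exactly the adaptivity a single big upfront design gives up.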
-
📈 Econometric Corner: #34 📊

This week, let's explore a new working paper that develops a unified framework for designing experiments whose results will be combined with external estimates, allowing researchers to answer questions in complex scenarios that go beyond (local) experimental effects.

𝗠𝗼𝘁𝗶𝘃𝗮𝘁𝗶𝗼𝗻
Practical constraints often limit experiments to estimating localized effects, like effects at a specific site or for certain subpopulations. However, these local effects are often insufficient for answering broader questions about external validity, generalizability, or equilibrium impacts.

𝗖𝗼𝗺𝗺𝗼𝗻 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻
📌 Complement localized experiments with external evidence, such as reduced-form or structural observational estimates and trials in other settings, with the goal of estimating complex counterfactuals that no single experiment can fully identify.
🚩 A design question: Given experimental feasibility constraints, which experiments (i.e., which parameters/effects are the most valuable to learn) should be run, and how, when their results will be combined with external evidence to estimate counterfactuals?

𝗧𝗵𝗶𝘀 𝗰𝗶𝘁𝗲𝗱 𝗽𝗮𝗽𝗲𝗿
🎯 𝘖𝘣𝘫𝘦𝘤𝘵: Develop a framework for designing experiments to be used alongside external evidence, including reduced-form or structural estimates (potentially biased and unknown ex ante) and results from experiments in other settings.
📐 𝘊𝘳𝘪𝘵𝘦𝘳𝘪𝘰𝘯 𝘧𝘰𝘳 𝘤𝘩𝘰𝘰𝘴𝘪𝘯𝘨 𝘵𝘩𝘦 𝘦𝘴𝘵𝘪𝘮𝘢𝘵𝘰𝘳 𝘢𝘯𝘥 𝘵𝘩𝘦 𝘥𝘦𝘴𝘪𝘨𝘯: A minimax proportional regret criterion that compares the MSE of a candidate design to that of an oracle that knows the worst-case bias.
📢 𝘒𝘦𝘺 𝘳𝘦𝘴𝘶𝘭𝘵: The optimal design balances the design's variance normalized by the smallest achievable variance (variance gap) and its worst-case bias normalized by the smallest attainable bias (bias gap).
📌 𝘈 𝘱𝘳𝘰𝘤𝘦𝘥𝘶𝘳𝘦 𝘵𝘰 𝘥𝘦𝘵𝘦𝘳𝘮𝘪𝘯𝘦:
1️⃣ how to combine observational and experimental evidence
2️⃣ how to allocate precision across experiments given budget constraints
3️⃣ which treatment arm and/or sub-population to include in the experiment with fixed experimental costs

Check out the technical details, extensions (nonlinear and multi-valued estimands, CI-length as regret), the implementation workflow, and empirical applications in site selection and treatment-arm choice for structural estimation.

𝗥𝗲𝗳𝗲𝗿𝗲𝗻𝗰𝗲: Epanomeritakis, A., & Viviano, D. (2025). Choosing What to Learn: Experimental Design when Combining Experimental with Observational Evidence. arXiv preprint arXiv:2510.23434.
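A schematic of the stated criterion in my own notation (an interpretation of the description above, not the paper's exact formulation): the design is chosen to minimize the worst-case ratio of its MSE to the best MSE attainable if the bias of the external evidence were known.

```latex
% Schematic only -- notation is mine, not the paper's.
% d : candidate experimental design
% b : unknown bias of the external evidence, ranging over a set B
\[
  d^\star = \arg\min_{d} \; \sup_{b \in \mathcal{B}}
    \frac{\operatorname{MSE}(d;\, b)}{\inf_{d'} \operatorname{MSE}(d';\, b)}
\]
```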