Experimental Protocol Optimization


Summary

Experimental protocol optimization means tweaking and refining the steps in a scientific experiment to improve results, reduce time and cost, or simplify procedures. This process uses statistical and data-driven methods to identify which parts of an experiment have the greatest impact and to adjust them for better outcomes.

  • Map key variables: Start by identifying all factors that could influence your results, then focus on the few that matter most for further study.
  • Test methodically: Use approaches like design of experiments or machine learning tools to systematically explore combinations of variables rather than changing one thing at a time.
  • Analyze and refine: Regularly review your data to spot patterns, then adjust your protocol and confirm improvements through repeated testing.
Summarized by AI based on LinkedIn member posts
  • Angelo Lanzilotto

    Digital Chemistry Specialist at Merck Group | SaaS | AI in Drug Discovery | Workflow Automation

    “Synthesis of a Bispidine Derivative by Response Surface Methodology”

    Bispidines consist of two fused piperidine rings and are prepared by the condensation of ketones with ethyl cyanoacetate in the presence of ammonia. Intermediate (3) is then treated with sulfuric acid and cyclizes to a tetrone. Tetrone (8) is of particular interest, as it is involved in the synthesis of gabapentin. In an effort to establish the highest-yielding experimental protocol, the authors conduct a Design of Experiments (DoE) analysis. Within DoE, Response Surface Methodology is a method to model the relationship between independent variables and a dependent one while running the fewest possible experiments. Specifically, the Box−Behnken design is an experimental design that can be applied to a process with three factors (independent variables) and three levels (values the variables can assume), and it has the advantage of requiring fewer experiments than the 27 needed for a full-factorial design. The process development of bispidines falls into this category, as it studies the effects of three factors (acid concentration, temperature, reaction time) on yield.

    Before performing DoE, a preliminary investigation is needed to determine the range of values the independent variables can span, via what is called a one-factor-at-a-time (OFAT) study. In such a study, the effect of one variable is examined while the other two are kept constant, and so on. The OFAT study suggests that the best ranges are: acid concentration of 50−70% (v/v), temperature of 90−110 °C, and reaction time of 19−24 h.

    As part of the Box−Behnken design, 17 experiments are conducted, and the yields are fitted with a second-order (quadratic) polynomial model. Analysis of variance indicates that the linear coefficients of the model (the independent variables) are significant, and so are the second-order terms (their squares). Among the interaction terms, by contrast, only (concentration × temperature) appears significant, while (temperature × time) and (concentration × time) do not.

    Analysis of the residuals (the differences between the experimental values and those predicted by the model) shows that they follow a normal distribution, supporting the hypothesis that the quadratic regression model accurately describes the process under examination. Finally, the 3D surface plots and contour plots are further tools at one's disposal. A quick look at these indicates the region where the optimal conditions lie: a yield of 86.6% can be achieved by running the reaction for 22 h with a 55% (v/v) sulfuric acid concentration at a temperature of 103 °C. Notably, when the authors performed this experiment in triplicate, they found an experimental yield of 84.0 ± 0.6%.

    Post n. 135. Original publication: https://lnkd.in/dTUs9Wwk #chemistry #medicinalchemistry #drugdiscovery #drugdevelopment #pharma #science #research #synthesis
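The workflow described above (a Box−Behnken design feeding a quadratic response-surface fit) can be sketched in a few lines of Python. The design generator below is the standard three-factor layout (12 edge midpoints plus 5 replicated center points, 17 runs), but the yields are synthetic stand-ins, not the paper's data:

```python
import itertools
import numpy as np

def box_behnken_3():
    """Coded Box-Behnken design for 3 factors:
    12 edge midpoints (+/-1, +/-1, 0 in each factor pair) plus 5 center points."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            run = [0.0, 0.0, 0.0]
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0.0, 0.0, 0.0]] * 5          # replicated center points
    return np.array(runs)

def fit_quadratic(X, y):
    """Least-squares fit of the full second-order model:
    intercept, linear, squared, and two-factor interaction terms."""
    x1, x2, x3 = X.T
    A = np.column_stack([np.ones(len(X)), x1, x2, x3,
                         x1**2, x2**2, x3**2,
                         x1*x2, x1*x3, x2*x3])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

X = box_behnken_3()                        # 17 runs in coded units
rng = np.random.default_rng(0)
# synthetic yields with curvature in every factor, just to exercise the fit
y = 80 - 2*X[:, 0]**2 - 3*X[:, 1]**2 - X[:, 2]**2 + rng.normal(0, 0.1, len(X))
coef = fit_quadratic(X, y)                 # coef[4:7] recover the -2, -3, -1 curvatures
```

In a real campaign the coded −1/0/+1 levels would map onto the OFAT-derived ranges (e.g. 50−70% acid, 90−110 °C, 19−24 h), and the fitted surface would then be searched for its maximum.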

  • Jorge Bravo Abad

    AI/ML for Science & DeepTech | Prof. of Physics at UAM | Author of “IA y Física” & “Ciencia 5.0”

    When your experiment has stages, your optimizer should too

    Most real experiments in chemistry and materials science are not single black-box evaluations. They are cascades: you deposit a layer, measure its quality, then fabricate the device and measure efficiency. You synthesize a molecule, compute a proxy property, then run the expensive assay. Standard Bayesian optimization ignores this structure entirely, treating the whole pipeline as a monolithic input-output function. Torresi and Friederich introduce Multi-Stage Bayesian Optimisation (MSBO), a framework that explicitly models these cascade dependencies and makes decisions at each stage based on intermediate measurements — what they call proxy measurements.

    The architecture has three core ideas working together. First, a cascade of independent Gaussian processes, one per stage, propagates uncertainty through the full pipeline via Monte Carlo sampling. Second, a nested acquisition function evaluates the expected utility of a decision at stage i by integrating forward over the probabilistic outcomes of all downstream stages. Third, an inventory system enables resumable sampling: the algorithm can pause a candidate at any intermediate stage, continue it later, or discard it early if the proxy measurement signals low utility — without re-running preceding steps.

    The results are systematic. Across nine combinations of stage complexity in synthetic benchmarks, MSBO consistently outperforms both standard BO and BOFN. In real-world tasks — optimizing HOMO/LUMO levels from the QM9 dataset, aqueous solubility, and hydration free energy — MSBO identifies top-1% molecules using roughly half the budget of standard BO. For LUMO optimization, it reaches the top 0.01% of candidates, an order of magnitude deeper than the baseline. Crucially, even when proxy measurements are weakly correlated with the final objective, MSBO still outperforms standard approaches.
For R&D teams running multi-step experimental pipelines — whether in drug discovery, catalyst development, or materials synthesis — this directly addresses one of the most persistent inefficiencies: the tendency to commit expensive downstream resources to candidates that could have been screened out cheaply at an earlier stage. MSBO provides a principled, data-efficient way to build that funnel without requiring pre-defined selection thresholds, replacing human-designed filters with learned, adaptive decision-making across the full cascade. Paper: Torresi and Friederich, Digital Discovery (2026) — CC BY 3.0 | https://lnkd.in/eF9KuasW #BayesianOptimization #SelfDrivingLaboratory #MachineLearning #AIforScience #MaterialsDiscovery #DrugDiscovery #AutonomousLaboratory #GaussianProcesses #MaterialsScience #ComputationalChemistry #ActiveLearning #MolecularDesign #ScientificAI #ExperimentalDesign #MaterialsInformatics
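The cascade-of-surrogates idea can be illustrated with a toy two-stage pipeline. This is a sketch of the concept only, not the authors' MSBO implementation: the stage functions, length scales, and sample counts are all invented, and a minimal hand-rolled Gaussian process stands in for a real GP library.

```python
import numpy as np

def gp_posterior(Xtr, ytr, Xte, ls=0.4, noise=1e-4):
    """Minimal RBF Gaussian-process posterior mean and covariance."""
    k = lambda A, B: np.exp(-0.5 * ((A - B.T) / ls) ** 2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = k(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    cov = k(Xte, Xte) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

rng = np.random.default_rng(0)
stage1 = lambda x: np.sin(3 * x)       # cheap proxy measurement p(x)
stage2 = lambda p: -(p - 0.5) ** 2     # expensive final objective f(p)

X = rng.uniform(0, 2, (8, 1))          # stage-1 data (cheap, so more of it)
P = stage1(X)
F = stage2(P[:5])                      # stage-2 data (expensive, so less of it)

Xc = np.linspace(0, 2, 100).reshape(-1, 1)   # candidate designs
m1, c1 = gp_posterior(X, P.ravel(), Xc)
# Propagate uncertainty through the cascade by Monte Carlo:
# sample plausible proxy outcomes, push each through the stage-2 surrogate.
p_draws = rng.multivariate_normal(m1, c1 + 1e-8 * np.eye(len(Xc)), size=30)
f_means = np.array([gp_posterior(P[:5], F.ravel(), p.reshape(-1, 1))[0]
                    for p in p_draws])
expected_f = f_means.mean(axis=0)      # expected final objective per design
x_best = float(Xc[np.argmax(expected_f), 0])
```

A full multi-stage optimizer would wrap this forward-propagation inside an acquisition function and an inventory of partially-measured candidates; the sketch only shows the uncertainty-through-the-cascade step.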

  • Anurag Rathore

    2023 Tata Transformation Prize Winner, Professor, Department of Chemical Engineering, IIT Delhi

    In-vitro refolding of biotherapeutic inclusion bodies has long been recognized as a bottleneck in protein production in host systems such as Escherichia coli. Refolding optimization typically employs statistical approaches such as Design of Experiments (DoE), which, while effective, are labour-intensive. This paper demonstrates a knowledge-based refolding optimization for proinsulin, contrasted with the typical DoE-based protocol.

    The reaction is monitored and segmented into two parts (segment 1: 0–2 h and segment 2: 2–6 h) based on Fourier transform infrared (FTIR), oxidation-reduction potential (ORP), and reverse-phase high-performance liquid chromatography (RP-HPLC) analysis. The data are fed to a multi-objective optimization (MOO) method that utilizes XGBoost coupled with an NSGA-II (non-dominated sorting genetic algorithm II) optimizer. Based on the Pareto front, a linear correlation between parameters was observed in segments 1 and 2. An ensemble-coupled NSGA-II was then developed to optimize the reaction conditions beforehand.

    The proposed optimizer was compared with traditional DoE-based optimization. The developed framework increased the yield to 65% ± 1.78%, compared to 54% ± 2.62% for the traditional DoE-based approach (a relative increase of about 20%). The approach can combine screening and optimization in a single step, dramatically reducing the overall experimental effort by ~50%. https://lnkd.in/d9SE8c33
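The non-dominated sorting step at the core of NSGA-II can be sketched in plain Python. This is the generic textbook procedure, not the paper's ensemble-coupled variant; the objective tuples in the example are invented:

```python
def non_dominated_sort(points):
    """Fast non-dominated sorting (the core of NSGA-II).
    points: list of objective tuples, all to be minimized.
    Returns a list of Pareto fronts, each a list of indices into points."""
    n = len(points)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    dominated_by = [[] for _ in range(n)]   # indices each point dominates
    count = [0] * n                         # how many points dominate i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                count[j] -= 1               # peel off the current front
                if count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Two objectives to minimize, e.g. (refolding time, -yield):
pts = [(1, 2), (2, 1), (2, 2), (3, 3)]
fronts = non_dominated_sort(pts)
```

Here `fronts[0]` is the Pareto front; NSGA-II additionally ranks within fronts by crowding distance before selecting parents for the next generation.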

  • Omarah AbdAlqader

    HCPC License/ (Senior MRI & CT)/ Cardiac/ Radiation Safety Officer/ M.Sc. Degree of Functional Imaging Technology

    Protocol Optimization Spotlight: Mimicking Bone Contrast on Siemens 1.5T MRI with T1 VIBE

    We optimized a T1-weighted VIBE sequence on a 1.5T Siemens MRI scanner to better visualize scaphoid fractures, aiming to replicate the bone-like contrast typically achieved with zero-TE imaging (e.g., UTE/ZTE), even though those sequences are not available on standard systems. Here's how we effectively reduced the TE (echo time) to enhance cortical bone signal:

    Matrix: base resolution 320, phase resolution 100%, slice resolution 100%. Using full resolution ensures high spatial detail, which is essential for subtle fracture visualization, especially in small joints like the wrist.

    Bandwidth increased to 360 Hz/pixel. A higher readout bandwidth shortens the sampling time, allowing the echo to be collected sooner and thus reducing TE. This is critical for capturing fast-decaying signals from tissues with short T2*, like cortical bone.

    Strong asymmetric echo enabled. By shifting the echo sampling earlier in the gradient readout, we achieved a further TE reduction of up to ~2 ms. This setting is key for approaching ultrashort-TE imaging in conventional GRE sequences.

    Water excitation OFF. Disabling water excitation eliminates additional RF pulses, helping reduce sequence complexity and shorten TE. This also increases overall SNR, improving the visibility of bone-soft tissue interfaces.

    Result: TE dropped to the minimum achievable on 1.5T (~1.6 ms), and bone detail in the wrist appeared significantly sharper, aiding fracture detection, especially when UTE is unavailable. This approach shows how far conventional sequences can be pushed with smart parameter tuning: it's all about understanding how each knob affects signal timing, contrast, and image quality. Sometimes the most powerful innovation comes from reimagining what we already have.

    #MRI #T1VIBE #SiemensMRI #MSKImaging #ScaphoidFracture #RadiologyInnovation #ProtocolOptimization #ApplicationSpecialist #MRImaging #ZeroTE #UTE #ZTE

  • Karthikeyan S

    Automotive Quality Assurance professional | Interior & Exterior trims | I help organizations to exceed customer expectations through exceptional quality products by implementing innovative quality strategies!

    Design Of Experiments (DOE) Procedure

    Phase I: Screening Experiment
    🌟 An experiment that studies many key variables. The purpose is to identify which ones significantly affect the output and which do not.
    🌟 When many variables are included, the results cannot provide good information about interactions of factors with each other.

    Phase II: Optimization Study
    🌟 One or more experiments that study just a few key variables. These experiments provide better information about interactions.

    General Procedure
    1. Identify the process to be studied and the purpose of the study.
    2. Identify the output measurement(s) that you want to improve. (This is called the response.)
    3. Determine measurement precision and accuracy, using tools such as repeatability and reproducibility studies.
    4. Identify potential key variables that you can control and that might affect the output. (These are called factors.) Use tools such as brainstorming, flowcharts, and fishbone diagrams. Identify each factor as A, B, C, and so on.
    5. Choose the settings, or levels, for each factor. Usually two levels will be used for each. If the variable is quantitative, choose high and low levels. If the variable is qualitative, choose two different settings and arbitrarily call them high and low. Designate the high setting with + and the low one with –: A+, A–, B+, B–, and so on.
    6. Determine and document the experimental design. This includes:
       • All the different combinations of levels (called runs or treatments), specifying which variables will be at which settings
       • How many times each treatment will be done (called replication)
       • The sequence of all the trials, preferably chosen using a method that ensures random order (called randomization)
    7. Identify other variables that might interfere with the experiment. Plan how you will control or at least monitor them.
    8. Run the experiment, following the design exactly.
    9. Analyze the data and draw conclusions. Computer software or spreadsheets do the math for you. Graph the results and effects to better understand them. A Pareto chart of the effects can also help you compare effects and see visually which are most important.
    10. If the conclusions suggest changes to improve the process, verify those results and then standardize the new process.
    11. Determine what additional experiments should be run. Go back to step 5 to plan and carry them out.
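Steps 5, 6, and 9 above can be sketched for a two-level, three-factor study: generate the coded treatments, then estimate each factor's main effect as the difference between its + and – level averages. The responses below are hypothetical, purely to exercise the calculation:

```python
import itertools

def full_factorial(k):
    """All 2^k treatments for k two-level factors, coded -1/+1 (step 6)."""
    return list(itertools.product((-1, 1), repeat=k))

def main_effects(runs, response):
    """Main effect of each factor (step 9): mean response at the + level
    minus mean response at the - level."""
    k = len(runs[0])
    effects = []
    for f in range(k):
        hi = [y for r, y in zip(runs, response) if r[f] == 1]
        lo = [y for r, y in zip(runs, response) if r[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

runs = full_factorial(3)   # 8 treatments for factors A, B, C
# hypothetical responses: strong A effect, mild B, negligible C
y = [50 + 10 * a + 2 * b + 0.1 * c for a, b, c in runs]
effects = main_effects(runs, y)
```

Plotting the absolute effects as a Pareto chart (here they come out near 20, 4, and 0.2) makes the screening decision of Phase I visual: A clearly matters, C clearly does not.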

  • Amit Singh

    Regulatory Affairs Manager | Expert in US-FDA #Labeling Compliance, Global Labeling | Formerly worked with Sun Pharma, L&T, Endo, and Amneal Pharma

    ✍ 𝗗𝗲𝘀𝗶𝗴𝗻 𝗼𝗳 𝗘𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁 (𝗗𝗢𝗘) ✍ is a critical component within the Product Development Report (PDR) for an Abbreviated New Drug Application (ANDA). DOE is used to systematically plan, conduct, and analyze experiments to optimize the formulation and manufacturing process of the generic drug.

    𝗣𝘂𝗿𝗽𝗼𝘀𝗲 𝗼𝗳 𝗗𝗢𝗘 𝗶𝗻 𝗣𝗗𝗥
    𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻: DOE helps in identifying the optimal conditions for the formulation and manufacturing process by evaluating the effects of multiple variables simultaneously.
    𝗘𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆: It reduces the number of experiments needed by using statistical methods to design the experiments, thus saving time and resources.
    𝗤𝘂𝗮𝗹𝗶𝘁𝘆: Ensures that the product meets the required quality standards by understanding the relationship between input variables (e.g., excipients, process parameters) and output responses (e.g., drug stability, dissolution rate).

    𝗞𝗲𝘆 𝗘𝗹𝗲𝗺𝗲𝗻𝘁𝘀 𝗼𝗳 𝗗𝗢𝗘
    𝗙𝗮𝗰𝘁𝗼𝗿𝘀: These are the variables that are changed during the experiment. In drug development, factors could include temperature, pH, mixing speed, etc.
    𝗟𝗲𝘃𝗲𝗹𝘀: The different values or settings for each factor. For example, temperature might be tested at 25°C, 30°C, and 35°C.
    𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲𝘀: The outcomes measured to determine the effect of the factors. This could include drug potency, dissolution rate, and stability.
    𝗜𝗻𝘁𝗲𝗿𝗮𝗰𝘁𝗶𝗼𝗻𝘀: DOE helps in understanding how different factors interact with each other and their combined effect on the responses.

    𝗦𝘁𝗲𝗽𝘀 𝗶𝗻 𝗖𝗼𝗻𝗱𝘂𝗰𝘁𝗶𝗻𝗴 𝗗𝗢𝗘
    𝗗𝗲𝗳𝗶𝗻𝗲 𝗢𝗯𝗷𝗲𝗰𝘁𝗶𝘃𝗲𝘀: Clearly state what you aim to achieve with the experiment.
    𝗦𝗲𝗹𝗲𝗰𝘁 𝗙𝗮𝗰𝘁𝗼𝗿𝘀 𝗮𝗻𝗱 𝗟𝗲𝘃𝗲𝗹𝘀: Choose the factors to be studied and the levels at which they will be tested.
    𝗗𝗲𝘀𝗶𝗴𝗻 𝘁𝗵𝗲 𝗘𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁: Use statistical software to create an experimental design, such as a full factorial or fractional factorial design.
    𝗖𝗼𝗻𝗱𝘂𝗰𝘁 𝘁𝗵𝗲 𝗘𝘅𝗽𝗲𝗿𝗶𝗺𝗲𝗻𝘁: Perform the experiments as per the design.
    𝗔𝗻𝗮𝗹𝘆𝘇𝗲 𝗗𝗮𝘁𝗮: Use statistical methods to analyze the data and determine the significance of the factors and their interactions.
    𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗲: Identify the optimal conditions based on the analysis.

    𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝗼𝗳 𝗗𝗢𝗘 𝗶𝗻 𝗣𝗗𝗥
    𝗜𝗺𝗽𝗿𝗼𝘃𝗲𝗱 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴: Provides a deeper understanding of the process and formulation.
    𝗥𝗼𝗯𝘂𝘀𝘁 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝘀: Helps in developing robust products that are less sensitive to variations in manufacturing conditions.
    𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆 𝗖𝗼𝗺𝗽𝗹𝗶𝗮𝗻𝗰𝗲: Demonstrates a scientific approach to product development, which is crucial for regulatory approval.

    𝗗𝗢𝗘 𝗶𝘀 𝗮 𝗽𝗼𝘄𝗲𝗿𝗳𝘂𝗹 𝘁𝗼𝗼𝗹 𝗶𝗻 𝘁𝗵𝗲 𝗣𝗗𝗥 𝘁𝗵𝗮𝘁 𝗲𝗻𝘀𝘂𝗿𝗲𝘀 𝘁𝗵𝗲 𝗴𝗲𝗻𝗲𝗿𝗶𝗰 𝗱𝗿𝘂𝗴 𝗶𝘀 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗱 𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝘁𝗹𝘆 𝗮𝗻𝗱 𝗺𝗲𝗲𝘁𝘀 𝗮𝗹𝗹 𝗾𝘂𝗮𝗹𝗶𝘁𝘆 𝘀𝘁𝗮𝗻𝗱𝗮𝗿𝗱𝘀 #Pharmaceuticals #DrugDevelopment #DOE #Quality #Innovation #GenericDrugs

  • Kakasaheb Nandiwale, Ph.D.

    Principal Scientist at Pfizer | MIT Postdoc | AI Architect | Scientific Automation & Robotics | Multimodal GenAI | Continuous Manufacturing

    Delve into our collaborative publication between Pfizer and the Massachusetts Institute of Technology (Klavs Jensen). 📚 Dynamic Flow Experiments for Bayesian Optimization of a Single Process Objective, Reaction Chemistry & Engineering, 2025.

    🔬 Key insights:
    * 🔄 A new method, named dynamic experiment optimization (DynO), is developed for chemical reaction optimization, leveraging for the first time both #Bayesian optimization and data-rich dynamic experimentation in flow chemistry.
    * ⚙️ The algorithm guides the user from initialization (using steady or dynamic experiments) to the end of the optimization procedure thanks to useful convergence criteria, proposed here for the first time together with an estimate of the regret reached.
    * 🚀 DynO is readily implementable in #automated systems and is augmented with simple stopping criteria to guide non-expert users in fast and reagent-efficient optimization campaigns.
    * 📊 The developed algorithm is compared in silico with the Dragonfly algorithm (and a random optimizer), showing remarkable performance in terms of experiment time saved and reagent volume reduced.
    * 🧬 DynO is validated with an ester hydrolysis reaction at Pfizer on an automated platform, showing that it can be easily implemented experimentally and identifies optimal reaction conditions with a limited number of experiments.

    Thanks to all collaborators for this insightful research! Federico Florit, Dr. Kakasaheb Nandiwale, Cameron T. Armstrong, Katharina Grohowalski, Angel Diaz, Jason Mustakis, Steven Guinness, and Klavs Jensen. Congratulations to the authors! 🎉 #Bayesian #Optimization #Datarich #Dynamic #collaboration #PfizerProud 📚 Article link: https://lnkd.in/efic5nSc

  • Victor GUILLER

    Design of Experiments (DoE) Expert @L’Oréal | 💪 Empowering R&I Formulation labs with Data Science & Smart Experimentation | ⚫ Black Belt Lean Six Sigma | 🇫🇷 🇬🇧 🇩🇪

    🗺️ 𝐄𝐱𝐩𝐥𝐨𝐫𝐢𝐧𝐠 𝐄𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐚𝐥 𝐒𝐩𝐚𝐜𝐞: 𝐓𝐡𝐞 𝐏𝐨𝐰𝐞𝐫 𝐨𝐟 𝐒𝐢𝐦𝐩𝐥𝐞𝐱 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧

    🔎 In our recent exploration of smart experimentation methods, one approach stands out for its underappreciated potential: Simplex Optimization (#SOpt). Despite being mentioned far less frequently in publications than Design of Experiments (#DoE) and Bayesian Optimization, SOpt offers distinct advantages. Its simplicity and iterative exploration make it a formidable tool for navigating parameter spaces and uncovering optimal solutions.

    ⚙ 𝑯𝒐𝒘 𝒅𝒐𝒆𝒔 𝒊𝒕 𝒘𝒐𝒓𝒌?

    Imagine you run an organic chemical reaction where time and temperature need to be adjusted to provide the highest yield. To explore your experimental space and find the most promising conditions, the steps are:

    1️⃣ Start with an initial simplex, a geometric figure defined by a number of vertices equal to one more than the number of factors being optimized (here N+1 = 3).
    2️⃣ Evaluate the response (yield) at each vertex of the simplex.
    3️⃣ Determine the centroid of all points except the worst-performing one (lowest response value). Reflect this worst point through the centroid to obtain a new experimental point.
    4️⃣ Evaluate the response at the new point.
    5️⃣ Depending on the performance of the new point, update the simplex by replacing the worst point with the new one. If the new point is better than the best existing point, consider moving the simplex further in that direction. If the new point is worse, consider contracting the simplex towards the better-performing points.
    6️⃣ Continue this process iteratively until a termination criterion is met, such as reaching a specified number of iterations or achieving a desired level of convergence.

    👍🏼 𝑩𝒆𝒏𝒆𝒇𝒊𝒕𝒔
    Versatility: it can handle various constrained optimization problems.
    Light computation: simplex optimization does not require complex analysis, making it a user-friendly and very graphical tool for low-dimensional spaces.

    👎🏼 𝑫𝒓𝒂𝒘𝒃𝒂𝒄𝒌𝒔
    Local optima: simplex optimization may converge to a local optimum rather than the global one.
    Initialization & dimensionality sensitivity: performance can be sensitive to the choice of initial points and to problem dimensionality. Finding a good initial simplex may require some trial and error.
    Interpretability: no model is learned from previous iterations, so influential factors cannot be identified and no predictions can be made.

    💡 While Simplex Optimization may reside in the shadow of its more widely recognized counterparts, its efficacy in iterative exploration and simplicity of approach underscore its relevance in the pursuit of optimal solutions within complex parameter spaces.

    📖 Reference: "𝐴𝑛 𝑎𝑙𝑡𝑒𝑟𝑛𝑎𝑡𝑖𝑣𝑒 𝑚𝑒𝑡ℎ𝑜𝑑 𝑓𝑜𝑟 𝐷𝑒𝑠𝑖𝑔𝑛 𝑜𝑓 𝐸𝑥𝑝𝑒𝑟𝑖𝑚𝑒𝑛𝑡𝑠" by Mark L. Crossley: https://shorturl.at/EIPWZ
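Steps 1–6 translate almost directly into code. Below is a minimal sketch (reflection plus a simple halfway contraction, omitting the expansion and shrink moves of the full Nelder-Mead method), run on a hypothetical yield surface whose optimum sits at 22 h and 103 °C:

```python
def simplex_optimize(f, simplex, iters=60):
    """Sequential simplex search: reflect the worst vertex through the
    centroid of the others; if the reflection does not improve on the
    worst point, contract it halfway toward the centroid instead."""
    pts = [list(p) for p in simplex]
    for _ in range(iters):
        pts.sort(key=f)                          # best first, worst last
        worst = pts[-1]
        centroid = [sum(p[d] for p in pts[:-1]) / (len(pts) - 1)
                    for d in range(len(worst))]
        reflected = [2 * c - w for c, w in zip(centroid, worst)]
        if f(reflected) < f(worst):
            pts[-1] = reflected                  # step 5: accept the reflection
        else:
            pts[-1] = [(w + c) / 2 for w, c in zip(worst, centroid)]
    return min(pts, key=f)

# Hypothetical yield surface: maximizing yield = minimizing its negative.
neg_yield = lambda p: (p[0] - 22) ** 2 + 0.1 * (p[1] - 103) ** 2
best = simplex_optimize(neg_yield, [[18, 90], [20, 95], [19, 100]])
# best converges toward (22, 103), the optimum of this toy surface
```

Each iteration costs exactly one new "experiment" (the reflected or contracted point), which is what makes the method attractive when every run is a day in the lab.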

  • Gopal Gantepogu

    Junior Manager in Technical service department

    🎯 How to Use Design of Experiments (DOE) for Process Optimization 🧐

    Design of Experiments (DOE) is a powerful, structured approach to systematically study how key factors affect your process outcomes. Here's a simple step-by-step guide to get started:

    1️⃣ Define your objective: what process parameter or outcome do you want to improve or understand?
    2️⃣ Select factors and levels: identify variables that impact your process and decide on different settings (e.g., low/high temperature).
    3️⃣ Choose the experimental design: pick an approach like full factorial or fractional factorial to test combinations efficiently.
    4️⃣ Run the experiment: conduct trials changing multiple factors simultaneously to observe effects and interactions.
    5️⃣ Analyze results: use statistical tools to determine which factors influence the output and optimize accordingly.
    6️⃣ Validate findings: confirm your optimized settings deliver consistent improvements in real conditions.

    DOE helps reveal hidden interactions in processes, reduces trial-and-error, and accelerates development with fewer tests. Have you applied DOE in your projects? Share your experiences or challenges below! #ProcessEngineering #DesignOfExperiments #DOE #ProcessOptimization #ContinuousImprovement #QualityByDesign #LeanManufacturing

  • Prerit Saxena

    Making Copilot awesome at Microsoft AI | Machine Learning | Experimentation | Product Analytics

    ⚠️ Hard truth – more than 70% of experiments fail to improve key metrics. And yet, we spend countless hours designing, instrumenting, and analyzing them manually. Multiple dashboards, scattered SQL, endless checks—the friction is real.

    That's exactly where I've found Model Context Protocols (MCPs) to be game changers. They create a shared context layer that agents can plug into—removing silos and automating the mechanics of experimentation. Imagine this:
    ➡️ You define the product change
    ➡️ MCP auto-generates hypotheses, metrics, and sample sizes
    ➡️ As data flows in, it runs SRM checks, applies variance reduction, and analyzes results
    ➡️ You get a clean, contextual summary—ready for decision-making

    Experimentation shifts from patched-together workflows → agentic, self-orchestrating systems. The future isn't fewer experiments—it's smarter, faster, and more scalable ones. That's what I've been building these past few months. Have you explored MCPs for streamlining your digital experiments yet? #Experimentation #AI
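One of the mechanical checks mentioned above, the SRM (Sample Ratio Mismatch) check, is straightforward to automate. A minimal sketch with invented counts:

```python
def srm_check(control_n, treatment_n, expected_ratio=0.5, crit=3.841):
    """Chi-square goodness-of-fit test of assignment counts against the
    designed traffic split (df = 1; 3.841 is the 5% critical value).
    An SRM flags a broken randomization pipeline, so metric results
    should not be trusted until it is resolved."""
    total = control_n + treatment_n
    exp_c = total * expected_ratio
    exp_t = total - exp_c
    chi2 = ((control_n - exp_c) ** 2 / exp_c
            + (treatment_n - exp_t) ** 2 / exp_t)
    return chi2, chi2 > crit

# Even a 0.9% imbalance on a 50/50 design at this scale is a clear mismatch:
chi2, mismatch = srm_check(50_000, 50_900)
# whereas a small study with the same design passes:
ok_chi2, ok = srm_check(1_000, 1_010)
```

In practice this check would run continuously as assignments stream in, and the agent would halt metric analysis the moment `mismatch` fires.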
