Engineering Design Optimization

Explore top LinkedIn content from expert professionals.

Summary

Engineering design optimization is the process of finding the best possible design solutions by systematically analyzing and adjusting multiple factors to meet specific goals, such as performance, cost, or safety. Recent advancements have focused on using AI models, simulation techniques, and streamlined workflows to tackle complex challenges in design, manufacturing, and analysis.

  • Embrace AI tools: Consider integrating generative AI models to simplify high-dimensional design problems and boost creativity in engineering projects.
  • Focus computational resources: Use targeted mesh refinement and adaptive meshing strategies to reduce simulation times without sacrificing accuracy in critical areas.
  • Tune for scalability: Apply parameter autotuning and transfer learning to make optimization models efficient and scalable for both small and large systems.
Summarized by AI based on LinkedIn member posts
  • View profile for Jousef Murad

    CEO & Lead Engineer @ APEX 📈 Drive Business Growth With Intelligent AI Automations - for B2B Businesses & Agencies | Mechanical Engineer 🚀

    182,139 followers

    Traditional surrogate-based design optimization (SBDO) is hitting a wall, especially with high-dimensional, complex designs.

    In this new paper, Dr. Namwoo Kang presents a next-gen framework using generative AI, integrating three key models:
    - Generative model (design synthesis)
    - Predictive model (performance estimation)
    - Optimization model (iterative or generative)

    Rather than optimizing directly in a high-dimensional design space (x), the workflow introduces a low-dimensional latent space (z) learned via generative models.

    ➡️ z → x → y
    z = latent variables
    x = CAD geometry
    y = performance (drag, stress, etc.)

    This means we’re no longer hand-coding design parameters or doing trial-and-error with simplified surrogate models.

    🧠 Why this matters:
    - Parametric modeling is no longer a bottleneck
    - Complex shapes are learned directly from CAD
    - Dynamic and multimodal performance data (1D, 2D, 3D) can be used
    - Near real-time optimization is possible

    #AI #GenerativeDesign #CAE #DesignOptimization
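
    A minimal Python sketch of the z → x → y idea: optimize in a low-dimensional latent space and score candidates with a surrogate. The decoder and predictor below (decode_to_geometry, predict_performance) are hypothetical stand-ins, not the models from the paper.

```python
import numpy as np
from scipy.optimize import minimize

LATENT_DIM = 8  # assumed size of the low-dimensional latent space

def decode_to_geometry(z):
    """Hypothetical stand-in for a trained generative decoder: z -> design x."""
    return np.tanh(z)

def predict_performance(x):
    """Hypothetical stand-in for a surrogate predictor: x -> performance y."""
    return float(np.sum(x ** 2))

def objective(z):
    return predict_performance(decode_to_geometry(z))

z0 = np.zeros(LATENT_DIM)                      # starting point in latent space
result = minimize(objective, z0, method="L-BFGS-B")
print("optimized latent vector:", result.x)
print("predicted performance:", result.fun)
```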

  • View profile for Udit Bagdai

    Mechanical and CAE Engineer | Content Manager, FEA, CFD, AI driven engineering education, Industry 4.0, PLM, Agentic AI | India + UK work rights | Strathclyde Alumni

    3,796 followers

    Clients want speed. Models demand accuracy. That tension shows up in every FEA project.

    I learned this the hard way on a job where the client wanted results in 2 hours even though the fine mesh needed 8 hours to solve. So I built an 80/20 meshing strategy that delivers most of the accuracy with a fraction of the cost.

    I broke the model into refinement zones that match the actual physics instead of spreading elements everywhere.
    → Ultra-fine mesh at 0.5–1 mm in the exact stress hot spots.
    → Medium mesh at 2–5 mm along the secondary load paths.
    → Coarse mesh at 10–20 mm in the bulk material that barely carries load.
    This keeps the solver focused where it matters.

    Then I use a short list of time savers that always pay off.
    → Symmetry to cut solve time by 2–4x.
    → Submodeling to refine only the areas that need detail.
    → Adaptive meshing to let the solver chase the critical regions for me.
    → Remote computing to spread the job across more cores.

    A recent project shows how much this changes the outcome. The initial fine mesh had 12 million elements and needed 18 hours to solve. The optimized mesh had 2 million elements and finished in 3 hours. The accuracy shift in the critical results stayed under 3 percent. The client signed off and the deadline was met without stress.

    Now I use a simple decision matrix to move faster.
    → Tight deadline means coarse mesh with targeted refinement.
    → Critical design means fine mesh with a convergence study.
    → Rapid iteration means adaptive and parametric meshing.

    Start coarse when exploring the design. Refine only when locking in the final validation.

    How do you balance mesh quality with project timelines?

    #FEA #TimeManagement #MeshOptimization #Engineering
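
    For illustration, the zone sizes and decision matrix from the post can be captured in a few lines of Python; the helper below is a hypothetical pre-processing utility, not tied to any particular FEA package.

```python
# Refinement zones and element sizes (mm) as described in the post.
MESH_ZONES_MM = {
    "stress_hot_spots": (0.5, 1.0),       # ultra-fine
    "secondary_load_paths": (2.0, 5.0),   # medium
    "bulk_material": (10.0, 20.0),        # coarse
}

def choose_meshing_approach(tight_deadline: bool, critical_design: bool,
                            rapid_iteration: bool) -> str:
    """Hypothetical helper encoding the post's decision matrix."""
    if critical_design:
        return "fine mesh with a convergence study"
    if tight_deadline:
        return "coarse mesh with targeted refinement"
    if rapid_iteration:
        return "adaptive and parametric meshing"
    return "start coarse, refine only for final validation"

print(MESH_ZONES_MM["stress_hot_spots"])
print(choose_meshing_approach(tight_deadline=True, critical_design=False,
                              rapid_iteration=False))
```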

  • View profile for Moinuddin Syed, Ph.D., MBA, PMP®

    Head, Global Pharma R&D, Wockhardt | Leading UK R&D at Wrexham, Indian R&D at Aurangabad, Ireland R&D at Clonmel | Formulation Development | Analytical Development | PMO | Technology Transfer | US, EU & ROW

    21,256 followers

    DoE, QbD and PAT

    1. Introduction
    Evolution of pharmaceutical development: from empirical trial-and-error → risk-based scientific approaches. Regulatory drivers: ICH guidelines (Q8–Q14), FDA PAT initiative (2004). Importance of integrating design, knowledge, and real-time control. Positioning DoE, QbD, and PAT as a “triad” for robust, efficient, compliant development.

    2. Historical Context and Regulatory Push
    Past reliance on end-product testing and its limitations. Shift to lifecycle management approaches. Role of FDA’s Critical Path Initiative. QbD introduced into the regulatory lexicon in 2004; PAT guidance published. Global adoption: EMA, MHRA, WHO.

    3. Understanding the Three Pillars

    3.1 Quality by Design (QbD) – The Framework
    Definition & philosophy: proactive design vs. reactive testing.
    Key concepts: QTPP – Quality Target Product Profile; CQA – Critical Quality Attributes; CPP – Critical Process Parameters; CMA – Critical Material Attributes.
    Stages of application: early development → technology transfer → lifecycle management.
    Regulatory basis: ICH Q8(R2), Q9, Q10, Q11, Q12, Q13, Q14.
    Tools: risk assessments (FMEA, Ishikawa, Fault Tree Analysis), control strategy design.
    Case study example: QbD applied to controlled-release tablet development.

    3.2 Design of Experiments (DoE) – The Optimizer
    Definition: statistical framework for systematic factor–response exploration.
    Role in QbD: tool to identify the design space.
    Types of DoE: screening designs (Plackett-Burman, Fractional Factorial); optimization designs (Central Composite, Box-Behnken); robustness studies.
    Benefits: identifies interactions, reduces experiments, builds knowledge quantitatively.
    Case example: optimizing binder level, granulation time, and impeller speed.

    3.3 Process Analytical Technology (PAT) – The Real-Time Guardian
    Definition: real-time monitoring and control toolkit.
    Role: ensures processes remain within the validated design space.
    Techniques: NIR, Raman, FTIR, particle size analyzers, Focused Beam Reflectance Measurement (FBRM).
    Applications: blend uniformity, moisture control, coating thickness, continuous manufacturing.
    Regulatory context: FDA PAT Guidance (2004).
    Case example: inline NIR monitoring for RTRT (Real-Time Release Testing).

    4. Interrelationship of the Three Pillars
    DoE as the engine of knowledge → defines the design space.
    QbD as the overarching framework → integrates knowledge, risks, and control strategy.
    PAT as the execution safeguard → ensures adherence in manufacturing.
    Lifecycle integration (development → validation → continuous verification).

    5. Benefits of Integrated Use
    Regulatory alignment & faster approvals. Cost savings through fewer failed batches. Increased robustness and reproducibility. Knowledge management & data-driven decision-making.
    Example: continuous manufacturing systems where DoE defines the design space, QbD integrates it, and PAT ensures execution.
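
    As a small companion to the DoE case example above (binder level, granulation time, impeller speed), here is a minimal two-level full factorial screening design in Python; the factor ranges are illustrative placeholders, not validated process settings.

```python
from itertools import product

# Illustrative factor ranges (not validated process settings).
factors = {
    "binder_level_pct": (2.0, 5.0),
    "granulation_time_min": (5.0, 15.0),
    "impeller_speed_rpm": (150.0, 300.0),
}

# Two-level full factorial: every combination of low (-1) and high (+1) settings.
runs = []
for coded in product((-1, +1), repeat=len(factors)):
    run = {name: (low if c == -1 else high)
           for c, (name, (low, high)) in zip(coded, factors.items())}
    runs.append(run)

for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")
```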

  • View profile for Can Li

    Assistant Professor at Purdue University

    2,870 followers

    🎯 How can we use a low-fidelity optimization model to achieve similar performance to a high-fidelity model?

    Many decision-making algorithms can be viewed as tuning a low-fidelity model within a high-fidelity simulator to achieve improved performance. A great example comes from Cost Function Approximations (CFAs) by Warren Powell. CFAs embed tunable parameters, such as cost coefficients, into a simplified, deterministic model. These parameters are then refined by optimizing performance in a high-fidelity stochastic simulator, either via derivative-free or gradient-based methods. A similar philosophy appears in optimal control, where controllers are tuned using simulation optimization.

    ⚙️ Inspired by this paradigm, my student Asha Ramanujam recently developed the PAMSO algorithm. PAMSO (Parametric Autotuning for Multi-Timescale Optimization) tackles complex systems that operate across multiple timescales:
    High-level decision layer: makes strategic decisions (e.g., planning, design).
    Low-level decision layer: takes high-level inputs, makes detailed operating decisions (e.g., scheduling), applies detailed constraints and uncertainties, and computes the true objective.
    However, one-way top-down communication between layers often results in infeasibility or poor solutions due to mismatches between the high-level and the detailed low-level operating models.

    💡 PAMSO augments the high-level model with tunable parameters that serve as a proxy for the complex physics and uncertainties embedded in the low-level model. Instead of attempting to jointly solve both levels, we fix the hierarchical structure: the high-level layer makes planning or design decisions, and then passes them down to the low-level scheduling or operational layer, which acts as a high-fidelity simulator. We treat this top-down hierarchy as a black box:
    The inputs are the tunable parameters embedded in the high-level model.
    The output is the overall objective value after the low-level simulator evaluates feasibility and performance.
    By optimizing these parameters using derivative-free methods, PAMSO is able to steer the entire system toward high-quality, feasible solutions.

    🚀 Bonus: Transfer Learning! If these parameters are designed to be problem-size invariant, they can be tuned on smaller problem instances and transferred to solve larger-scale problems with minimal extra effort.

    ⚙️ Case studies demonstrate PAMSO’s scalability and effectiveness in generating good, feasible solutions:
    ✅ A MINLP model for integrated design and scheduling in a resource-task network with ~67,000 variables
    ✅ A massive MILP model for integrated planning and scheduling of electrified chemical plants and renewable energy with ~26 million variables
    Even solving the LP relaxation of these problems is beyond memory limits, and their structure is not easily decomposable for optimization techniques.

    https://lnkd.in/gDfcvDaZ
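
    A generic Python sketch of the parametric-autotuning idea (in the spirit of CFAs and PAMSO, not the actual implementation): tunable parameters enter a cheap high-level model, the resulting decisions are scored by a low-level simulator, and a derivative-free optimizer tunes the parameters. solve_high_level and simulate_low_level are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

def solve_high_level(theta):
    """Hypothetical cheap high-level model: parameters -> planning/design decisions."""
    return np.clip(theta * 10.0, 0.0, 100.0)

def simulate_low_level(decisions):
    """Hypothetical detailed simulator: decisions -> true objective (lower is better)."""
    return float(np.sum((decisions - 42.0) ** 2))

def black_box(theta):
    # Top-down hierarchy treated as a black box: theta in, objective out.
    return simulate_low_level(solve_high_level(np.asarray(theta)))

bounds = [(0.0, 10.0)] * 3                      # three tunable parameters
result = differential_evolution(black_box, bounds, maxiter=50, seed=0)
print("tuned parameters:", result.x)
print("objective after tuning:", result.fun)
```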

  • View profile for Sigrid Adriaenssens

    Professor at Princeton University & Director Keller Center for Innovation in Engineering Education --- The postings on this site are my own.

    9,878 followers

    New publication from the Form Finding Lab, now available in Computer Methods in Applied Mechanics and Engineering.

    Our paper presents an accelerated simulation and design optimization framework for multi‑stable elastic rod networks (ERNs), with applications in adaptive structures, aerospace engineering, and soft robotics. ERNs exhibit rich nonlinear and multi‑stable behavior, but their proximity to unstable equilibria makes conventional simulation and optimization approaches computationally challenging.

    To address this, we introduce a spline‑based least‑squares formulation for solving the Kirchhoff rod boundary value problem, enabling robust and efficient simulations. The framework is applied to networks composed of bistable bigons assembled into articulated bigon arms. Benchmarks demonstrate significant improvements in computational efficiency and robustness compared to traditional boundary value problem solvers. Building on this, we introduce a physics‑based shape optimization method that allows ERNs to be optimized to approximate target curves and end‑plane constraints. The approach is validated through numerical experiments and physical prototypes.

    Article link: https://lnkd.in/egNEhktw

    Reference: Larsson, A., Hayashi, K., Adriaenssens, S. (2026). Accelerated simulation and design optimization of elastic rod networks with a spline‑based least‑squares formulation. Computer Methods in Applied Mechanics and Engineering, 456, 118925. https://lnkd.in/ezV3VWH6

    Image credit: Axel Larsson

  • View profile for Jarith Fry

    Business Owner / Automation Expert

    2,011 followers

    Just wrapped up a fun engineering challenge: building a constraint-driven optimization engine for complex axis assignment problems — the kind we bump into all the time in industrial automation, controls, and high-precision motion systems.

    The idea sounds simple: match a set of movable axes to a set of target positions. The reality: every axis has its own travel limits, allowable region, spacing rules, and mechanical ordering… and every invalid combination needs to be avoided.

    To keep it clean and rock-solid, the engine does a two-stage approach:

    1️⃣ Smart assignment
    It evaluates feasible permutations, enforces mechanical monotonicity (no crossing), respects per-axis limits, honors pairwise spacing rules, and selects a contiguous block of axes — no “holes” allowed. The best solution wins based on a cost model that favors stable, centered, predictable motion.

    2️⃣ Intelligent parking
    Unused axes are placed safely outside the active region. Then a refinement step nudges those parked positions just enough to satisfy limits, clearances, and spacing rules without disturbing the optimized core.

    Along the way, the system reports every violation, cost component, and decision path — transparent, debuggable, deterministic.

    It’s designed with Inductive Automation’s Ignition, Python, real-world PLC constraints, and the kind of control-system edge cases you get from Allen-Bradley, motion rigs, and industrial equipment in mind. Perfect fit for heavy-duty production environments.

    Really proud of how this one came together — blending optimization theory, practical controls engineering, and real-world mechanical constraints into one clean engine.

    #IndustrialAutomation #ControlsEngineering #Ignition #InductiveAutomation #PLC #AllenBradley #Optimization #ManufacturingTech #MotionControl
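
    A much-simplified Python sketch of the "smart assignment" stage: enumerate permutations, filter out assignments that violate travel limits, ordering, or spacing, and keep the lowest-cost feasible one. The limits, spacing rule, and cost model below are illustrative, not taken from the author's engine.

```python
from itertools import permutations

targets = [120.0, 200.0, 310.0]          # target positions (mm), illustrative
axis_limits = {                          # per-axis travel limits (mm), illustrative
    "A1": (0.0, 250.0),
    "A2": (100.0, 400.0),
    "A3": (200.0, 600.0),
}
MIN_SPACING = 50.0                       # pairwise spacing rule (mm), illustrative

def feasible(assignment):
    # Per-axis travel limits.
    for axis, pos in assignment.items():
        lo, hi = axis_limits[axis]
        if not lo <= pos <= hi:
            return False
    # Mechanical ordering (no crossing) and pairwise spacing between neighbors.
    ordered = [assignment[a] for a in sorted(assignment)]
    return all(b - a >= MIN_SPACING for a, b in zip(ordered, ordered[1:]))

def cost(assignment):
    # Favor stable, centered motion: penalize distance from each axis's mid-travel.
    return sum(abs(pos - sum(axis_limits[a]) / 2.0) for a, pos in assignment.items())

candidates = []
for perm in permutations(targets):
    assignment = dict(zip(axis_limits, perm))
    if feasible(assignment):
        candidates.append(assignment)

best = min(candidates, key=cost) if candidates else None
print("best feasible assignment:", best)
```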

  • View profile for Santi Adavani

    AI Systems for the Physical World

    6,130 followers

    🔬 Engineering design synthesis is moving from manual iteration to automated, data-driven approaches. MIT researchers map out how deep generative models are enabling this shift, with important technical implications for how we develop products, in the reference paper below.

    The paper provides a systematic analysis across:

    🎯 Design Problems:
    • Topology optimization
    • Materials & microstructure design
    • 2D/3D shape synthesis
    • Multi-component product design

    💾 Data Representations:
    • Voxels & point clouds for 3D
    • Images for 2D designs
    • Parametric specs for manufacturing
    • Graphs for component relationships

    🧮 Model Architectures:
    • GANs with various conditioning approaches
    • VAEs for latent space exploration
    • RL for sequential design decisions
    • Integration with physics-based simulation

    ⚖️ Loss Functions:
    • Performance metrics from simulation
    • Manufacturability constraints
    • Style transfer for design aesthetics
    • Multi-objective optimization

    📊 Key Datasets:
    • UIUC airfoil database
    • ShapeNet/ModelNet for 3D shapes
    • BIKED bicycle design dataset
    • Material microstructure collections

    📝 Reference: "Deep Generative Models in Engineering Design: A Review" by Regenwetter et al. https://lnkd.in/g_mMR-8y

    S2 Labs #EngineeringDesign #MachineLearning #TechnicalResearch

  • View profile for Daniel Lalain, ARP-E, CMRP

    Senior Site Reliability Engineer / Inclusion Leader

    8,515 followers

    Optimized an ESP32 program for very fast video streaming.

    The example programs that come with a lot of these devices are good starting points, but for practical applications a lot more intelligence needs to be added to deal with bandwidth issues and channel issues, including co-channel and adjacent-channel interference.

    By default, the example program I was using began at 40 MHz bandwidth in access point (AP) mode, which created a lot of co-channel and adjacent-channel interference. Reducing the bandwidth to 20 MHz allowed channel placement within 1, 6, or 11 without overlapping other devices. Finally, I added an algorithm that checks the received signal strength (RSSI) of the other devices within range and chooses a channel with minimal interference; a sketch of that idea follows below.

    The video transfer was very fast, nearly real-time, which was awesome for such a small and inexpensive device.

    I think this is the fun and satisfying part of software engineering: taking an existing design/method and improving its performance and reliability to make it more useful for a broader range of applications.

    #engineering #softwareengineering #reliabilityengineering #embeddeddesign
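
    A MicroPython-style sketch of the channel-selection idea (the original firmware may be written differently): scan nearby access points and pick whichever of channels 1, 6, or 11 shows the least co-channel interference.

```python
import network

CANDIDATE_CHANNELS = (1, 6, 11)  # non-overlapping 2.4 GHz channels at 20 MHz

def pick_quietest_channel():
    sta = network.WLAN(network.STA_IF)
    sta.active(True)
    interference = {ch: 0.0 for ch in CANDIDATE_CHANNELS}
    for ssid, bssid, channel, rssi, security, hidden in sta.scan():
        if channel in interference:
            # Stronger (less negative) RSSI means more co-channel interference.
            interference[channel] += 100 + rssi
    # Adjacent-channel interference is ignored here for brevity.
    return min(interference, key=interference.get)

best_channel = pick_quietest_channel()
print("least-congested channel:", best_channel)
# The AP would then be started on best_channel with 20 MHz bandwidth.
```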

  • View profile for Victor GUILLER

    Design of Experiments (DoE) Expert @L’Oréal | 💪 Empowering R&I Formulation labs with Data Science & Smart Experimentation | ⚫ Black Belt Lean Six Sigma | 🇫🇷 🇬🇧 🇩🇪

    3,025 followers

    🎉 Continuing the 2025 series on the foundations of Design of Experiments (#DoE) and modern experimentation approaches, here’s Part 3: Optimization Methods in Experimentation (a more complete version will be published on Medium soon).

    🔎 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧 lies at the heart of experimental design, helping researchers and practitioners refine processes, improve performance, and uncover the best experimental conditions while minimizing resources. The evolution of optimization methods reflects a balance between leveraging models to guide experimentation and exploring unknown spaces without assumptions.

    🏛️ 𝐌𝐨𝐝𝐞𝐥-𝐁𝐚𝐬𝐞𝐝 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧
    Model-based approaches rely on predefined mathematical models to guide experiments. These include classical designs like Central Composite Designs (CCD) and Box-Behnken Designs, which assume polynomial models for response surfaces, as well as Bayesian Optimization, which combines surrogate models and acquisition functions to propose new experiments iteratively. These methods excel when a prior understanding of the system exists or when computational efficiency is key.

    🌌 𝐌𝐨𝐝𝐞𝐥-𝐀𝐠𝐧𝐨𝐬𝐭𝐢𝐜 𝐎𝐩𝐭𝐢𝐦𝐢𝐳𝐚𝐭𝐢𝐨𝐧
    In contrast, model-agnostic methods avoid assumptions about the underlying system, focusing instead on geometric or distance-based considerations. Space-filling designs, such as Latin Hypercube or Maximin designs, ensure even exploration of the experimental space, making them ideal for nonlinear responses. Simplex optimization methods, on the other hand, employ geometric steps to converge on optimal conditions, relying purely on iterative distance-based logic and results ranking.

    ⏳ 𝐒𝐞𝐪𝐮𝐞𝐧𝐭𝐢𝐚𝐥 𝐯𝐬. 𝐏𝐚𝐫𝐚𝐥𝐥𝐞𝐥 𝐄𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧
    Sequential methods, such as Bayesian Optimization or Simplex, prioritize small batches of experiments. These allow iterative learning and adaptation, particularly useful when resources are limited or when experiments are costly. Parallel approaches, favored in space-filling designs and model-based optimization like DoE, enable larger experiment batches to be conducted simultaneously, providing a more comprehensive understanding of the experimental space at the expense of iterative refinement.

    🎯 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲
    The choice of optimization method hinges on the balance between prior knowledge, resource availability, and the need for exploration versus exploitation. Sequential methods align with adaptive learning, while parallel methods accelerate discovery in larger spaces. Similarly, the decision between model-based and model-agnostic approaches depends on the complexity of the system and the availability of prior information.

    📢 What optimization approaches have you found most effective in your experimental work?

    #Optimization #ExperimentalDesign #DataScience #Innovation
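
    As a quick illustration of a space-filling design, here is a Latin Hypercube sample generated with SciPy's qmc module; the factor names and ranges are placeholders for a hypothetical three-factor study.

```python
from scipy.stats import qmc

# Hypothetical three-factor study; names and ranges are placeholders.
factor_bounds = {
    "temperature_C": (20.0, 80.0),
    "mixing_time_min": (5.0, 60.0),
    "surfactant_pct": (0.1, 2.0),
}

sampler = qmc.LatinHypercube(d=len(factor_bounds), seed=0)
unit_sample = sampler.random(n=12)                 # 12 runs in the unit cube
lows = [lo for lo, _ in factor_bounds.values()]
highs = [hi for _, hi in factor_bounds.values()]
design = qmc.scale(unit_sample, lows, highs)       # scale to the real factor ranges

for run in design:
    print(dict(zip(factor_bounds, run.round(2))))
```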

  • View profile for Bruce Ratner, PhD

    I’m on X @LetIt_BNoted, where I write long-form posts about statistics, data science, and AI with technical clarity, emotional depth, and poetic metaphors that embrace cartoon logic. Hope to see you there.

    22,658 followers

    *** Mathematical Optimization ***

    ~ Mathematical optimization is finding the best solution from a set of feasible solutions for a given problem. The goal is to either maximize or minimize a specific objective function.

    ~ Key components of mathematical optimization:
    * Objective Function: The function that needs to be maximized or minimized.
    * Variables: The decision variables that affect the objective function.
    * Constraints: The limitations or requirements that the solution must satisfy.

    ~ Types of Optimization Problems:
    1. Linear Optimization (Linear Programming): The objective function and constraints are linear.
    2. Nonlinear Optimization: The objective function or some of the constraints are nonlinear.
    3. Integer Optimization: The decision variables are restricted to integer values.
    4. Combinatorial Optimization: The problem involves optimizing an objective function over a finite set of possibilities.

    ~ Applications of Mathematical Optimization:
    * Operations Research: Optimizing the supply chain, logistics, scheduling, etc.
    * Economics: Maximizing profit, minimizing costs.
    * Engineering: Designing efficient systems and structures.
    * Machine Learning: Training models by optimizing loss functions.

    ~ Solving Optimization Problems
    To solve these problems, various algorithms and methods are used, such as:
    * Simplex Method: Used for linear programming problems.
    * Gradient Descent: Used for optimization of continuous differentiable functions.
    * Genetic Algorithms: Used for complex optimization problems involving large search spaces.
    * Branch and Bound: Used for integer programming and combinatorial problems.
    * Simulated Annealing: Used for global optimization problems.

    ~ Real-World Applications
    1. Supply Chain Management: Optimizing the production schedule, transportation routes, and inventory levels.
    2. Finance: Portfolio optimization to maximize returns and minimize risks.
    3. Energy Systems: Optimizing the operation of power plants and distribution networks.
    4. Telecommunications: Network design and bandwidth allocation.
    5. Manufacturing: Scheduling production processes and resource allocation.

    ~ Conclusion
    Optimization is a powerful tool that can be applied to many real-world problems, making processes more efficient and effective.

    --- B. Noted
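
    A small worked example of linear programming solved with SciPy's linprog (HiGHS backend); the coefficients are purely illustrative.

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 10, 2x + y <= 15, x >= 0, y >= 0.
# linprog minimizes, so negate the objective coefficients.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [2.0, 1.0]]
b_ub = [10.0, 15.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal (x, y):", res.x)            # expected: (5, 5)
print("maximum objective:", -res.fun)      # expected: 25
```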
