Numerical Methods in Science


Summary

Numerical methods in science are mathematical techniques used to approximate solutions for complex equations and simulate real-world phenomena, especially when exact answers are impossible to find by hand. These methods are foundational in fields like engineering, physics, finance, and artificial intelligence, ensuring calculations remain reliable and stable for practical applications.

  • Understand algorithm stability: Always check whether the numerical method maintains accuracy over time and avoids amplifying errors, especially in simulations or engineering scenarios.
  • Choose the right method: Select the numerical approach—such as finite element, finite volume, or spectral methods—that matches the physical problem and ensures trustworthy results.
  • Monitor real-world constraints: Stay aware of hardware limitations and error sources, such as round-off and truncation, to prevent unreliable outcomes and ensure your scientific computations remain credible.
Summarized by AI based on LinkedIn member posts
  • Yan Barros

    Building Physics AI Infrastructure for Engineering & Digital Twins | Advisor in Clinical AI & Lunar Systems | Creator of PINNeAPPle | Founder @ ChordIQ

    8,559 followers

    📌 Numerical Analysis and Stability of Computational Methods

    Numerical analysis is the theoretical foundation behind computational methods used to solve mathematical problems efficiently, stably, and accurately. It ensures that algorithms produce reliable results, especially in physical simulations, engineering, finance, and artificial intelligence.

    🔹 What is Numerical Analysis?
    Numerical analysis studies the behavior of mathematical algorithms, especially in terms of accuracy, stability, convergence, and computational complexity.
    ✔️ It answers key questions like:
    - Does the numerical method converge to the true solution?
    - How do errors propagate during simulation?
    - Is the algorithm stable over long periods or large time steps?

    🔹 Sources of Error in Numerical Methods
    ✅ Truncation Error: occurs when a continuous expression is replaced with a discrete approximation. Example: y′(x) ≈ (y[n+1] − y[n]) / h
    ✅ Round-off Error: results from the limited precision of floating-point arithmetic.
    ✅ Global Error: the cumulative effect of truncation and round-off errors throughout the simulation.

    🔹 Stability of Numerical Methods
    Stability determines whether errors decay or amplify over time. An unstable method can diverge numerically, even with correct inputs.
    ⚠️ Classic Example: Explicit Euler Method
    For the equation y′ = λy, Euler’s method with step size h is stable only if:
    |1 + h·λ| ≤ 1
    🔐 Implicit methods, such as Backward Euler, are generally more stable and suited for stiff problems.

    🔹 Convergence Criteria
    A method converges if, as the step size h decreases, the numerical solution approaches the exact solution.
    📌 Lax Equivalence Theorem: "Consistency + Stability ⇒ Convergence"
    ✔️ This is a central result in the analysis of PDE solvers and emphasizes the importance of choosing proper time steps and spatial grids.

    🔹 Example: Stability via the CFL Condition
    For hyperbolic equations such as transport or wave models, the Courant–Friedrichs–Lewy (CFL) condition is critical:
    c·Δt/Δx ≤ 1
    Violating this condition can lead to numerical instability, even with theoretically sound methods.

    🔹 Key Tools in Numerical Analysis
    🧮 Error Analysis: determines the order of accuracy of a method.
    📉 Stability Analysis: uses amplification factors and eigenvalue inspection.
    🌀 Consistency Analysis: verifies that the method replicates the original equation as h → 0.
    💡 Convergence Analysis: checks whether the numerical solution approaches the analytical one.

    🔹 Practical Applications
    ✔️ Fluid modeling (CFD)
    ✔️ Structural simulations (FEM)
    ✔️ Physics-Informed Neural Networks (PINNs)
    ✔️ Signal processing
    ✔️ Financial modeling

    📌 Conclusion
    Numerical analysis goes far beyond coding algorithms: it ensures that computational methods are trustworthy, stable, and efficient. It is the bridge between theoretical mathematics and the simulation of real-world phenomena.
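The explicit-versus-implicit stability contrast described in the post is easy to demonstrate numerically. A minimal sketch (my own illustration, not from the post) on the test equation y′ = λy with λ = −10:

```python
# Explicit vs. backward (implicit) Euler for y' = lam * y, y(0) = 1.
# Explicit Euler is stable only when |1 + h*lam| <= 1; backward Euler,
# y[n+1] = y[n] / (1 - h*lam), is stable for any h > 0 when lam < 0.

def explicit_euler(lam, h, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y * (1.0 + h * lam)   # amplification factor: 1 + h*lam
    return y

def backward_euler(lam, h, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y / (1.0 - h * lam)   # amplification factor: 1 / (1 - h*lam)
    return y

lam, h, steps = -10.0, 0.25, 40   # |1 + h*lam| = 1.5 > 1 -> explicit diverges
print(abs(explicit_euler(lam, h, steps)))   # grows without bound
print(abs(backward_euler(lam, h, steps)))   # decays toward 0, like exp(lam*t)
```

With the same step size, the explicit iterate blows up while the implicit one decays, exactly as the |1 + h·λ| ≤ 1 criterion predicts.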

  • Youngsoo Choi

    Computational Scientist at Lawrence Livermore National Laboratory

    29,686 followers

    💡 Are we over-investing in soft-constraint SciML?

    A large portion of today’s Scientific ML landscape is centered on approaches like PINNs and neural operators. These methods have brought valuable ideas and momentum to the field. But I think it is time to ask a deeper question:

    Should scientific machine learning continue treating physics mainly as something to be enforced through loss functions?

    In many cases, this means we are placing too much burden on optimization solvers to “discover” or “recover” physical consistency. That can lead to fragile training, scalability challenges, and limited trustworthiness when we move beyond the training regime. For computational science, physics is not just a regularizer. It is the structure of the problem.

    That is why I believe the next major step for SciML is not purely more expressive black-box models, but hybrid methods that tightly combine machine learning with classical numerical discretizations such as:
    + finite element methods (FEM)
    + finite volume methods (FVM)
    + finite difference methods (FDM)
    + spectral methods

    Traditional numerical methods have endured for a reason: they offer universality, modularity, mathematical structure, and strong physical grounding. In our recent work, we argue that future reusable scientific foundation models should learn from this tradition rather than bypass it.

    The opportunity is not “ML versus numerical methods.” The opportunity is ML inside numerical methods. That means architectures where learning enhances basis functions, closures, constitutive relations, subgrid models, or local representations, while the global solve still respects governing equations, assembly structure, and scientific consistency.

    Our paper presents DD-FEM as one example of this philosophy: replace classical polynomial bases with locally trained data-driven bases, while retaining finite-element-style local-to-global assembly and PDE enforcement. ( https://lnkd.in/gWSHPAqj )

    I suspect this hybrid direction may ultimately prove more scalable, more reusable, and more scientifically credible than approaches that rely primarily on soft physics constraints alone. It may also be a more realistic path toward true foundation models for computational science.

    SciML has achieved a lot already. But perhaps the field’s next leap will come from reconnecting modern ML with the best ideas from traditional computational mathematics. I would love to see more discussion on this.

    #SciML #ScientificMachineLearning #ComputationalScience #PDEs #FEM #FVM #FDM #NumericalMethods #MachineLearning #AI #FoundationModels #ReducedOrderModeling

  • Yassine BOUABID

    Mechanical Engineering Intern (PFE) | Mechanical Design & Simulation | Mechatronics Excellence Center | CAD • CAE • CAM • CFD • FEA • Crash • NVH | CSWA • CSWPA‑SM | Automotive & Aerospace

    9,530 followers

    Finite Element Method: What Really Happens Before the Mesh Finite Element Analysis does not start with a mesh. It starts with physics. Behind every FEM simulation lies a set of governing differential equations that describe how systems behave. The true power of the Finite Element Method isn’t just numerical approximation—it’s the ability to transform the strong form of these equations into a weak (variational) form using the Weighted Residual (Galerkin) principle. This shift from strong to weak form is what makes FEM a cornerstone of modern engineering. It allows us to: 🔹 Relax differentiability requirements, enabling complex geometries and materials 🔹 Naturally incorporate boundary conditions 🔹 Improve stability and ensure convergence 🔹 Translate complex physics into a solvable mathematical framework When an engineer understands this transition, something changes. They stop “running simulations.” They start formulating engineering problems. Software generates results. Formulation builds understanding, confidence, and judgment. That is the essence of real engineering. — Joël CAKPO #FiniteElementMethod #FiniteElementAnalysis #ComputationalMechanics #AppliedMathematics #StructuralEngineering #GeotechnicalEngineering #NumericalMethods #EngineeringScience #ContinuumMechanics #GalerkinMethod #VariationalFormulation
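As a concrete illustration of the Galerkin weak form the post describes (my own sketch, not the author's material), here is a minimal 1D finite element solve of −u″ = 1 on [0, 1] with u(0) = u(1) = 0 using linear hat functions on a uniform mesh; the exact solution is u(x) = x(1 − x)/2, and for this problem linear FEM reproduces it exactly at the nodes:

```python
# Minimal 1D linear FEM for -u'' = 1, u(0) = u(1) = 0.
# Galerkin weak form: find u_h such that sum_j u_j*(phi_i', phi_j') = (phi_i, f)
# for every interior hat function phi_i. On a uniform mesh this gives the
# tridiagonal stiffness matrix (1/h)*tridiag(-1, 2, -1) and load vector h*f.

def fem_1d_poisson(n_interior):
    h = 1.0 / (n_interior + 1)
    a = [-1.0 / h] * n_interior   # sub-diagonal
    b = [2.0 / h] * n_interior    # diagonal
    c = [-1.0 / h] * n_interior   # super-diagonal
    d = [h] * n_interior          # (phi_i, 1) = h for each interior node
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n_interior):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n_interior
    u[-1] = d[-1] / b[-1]
    for i in range(n_interior - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u                      # nodal values at x_i = (i + 1)*h

u = fem_1d_poisson(9)             # mesh spacing h = 0.1
print(u[4])                       # value at x = 0.5; exact is 0.125
```

The weak form is what lets piecewise-linear (only once weakly differentiable) functions solve a second-order equation, which is precisely the relaxation of differentiability the post highlights.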

  • John Sanders

    Program Director and Associate Professor of Engineering | Quincy University

    2,446 followers

    The Perils of Trusting AI

    Remember the quadratic formula? That pesky equation you had to memorize in high school so that you could “find x” when confronted with a quadratic equation? Some may look back on that more fondly than others. My favorite subject in high school was Spanish. In any case, “finding x” is a common task in mathematics, and nowadays one of my favorite courses to teach is numerical analysis. At The Citadel, we call it Computational Methods in Engineering, and the first topic we cover is root finding. In layman’s terms, root finding is being able to “find x,” not just for quadratic equations, but no matter how complicated the equation is.

    Root finding, as it turns out, is an essential skill in engineering. Most of us are familiar with the collapse of the Tacoma Narrows Bridge. If you haven’t seen the video, it’s well worth watching on YouTube. To prevent disasters like that, engineers must be able to calculate the resonant frequencies of structures. And to do that, we often have to solve equations that are impossible to solve by hand.

    Let’s look at an example. The first assignment I give in Computational Methods is to compute the first few resonant frequencies of a particular kind of beam. This involves solving the equation cos(x)*cosh(x) + 1 = 0, because it turns out that the beam’s resonant frequencies are proportional to x^2. There is no magic formula for this equation like our friend the quadratic formula. To solve this equation we must employ a computer. In Computational Methods, we learn how to write code for that.

    But lately, there is a new computational tool in town, one that is gaining traction with students despite some of our best efforts: AI. What happens when we ask ChatGPT to find the first few values of x that satisfy this equation? I tried this in class today. Prompt: Find the first seven positive roots of the function f(x) = cos(x)*cosh(x)+1. The first value ChatGPT gave was 1.5708. This is the most important from an engineering perspective, because it determines the beam’s fundamental frequency, the one that usually causes the most damage. There’s just one problem: it’s wrong! The correct value is 1.875, making ChatGPT off by 16.2%.

    Isn’t that close enough? (A few of my more sarcastic students said so.) But remember: the fundamental frequency is proportional to the square of x. In this case, when we square a number that is off by 16.2%, we get a frequency that is off by 29.8%. That might be acceptable error in some disciplines, but not in engineering. Not if you want your bridge to remain standing for very long, anyway.

    When my students tried this themselves, they got different values still, from me and from each other. As an educator, there is no greater reward than seeing the proverbial light bulb go on in a student’s head. Today I saw several light bulbs go on. This was a fun (if harrowing) way to illustrate the perils and pitfalls of AI in engineering. #ai #chatgpt #openai #engineering #math
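The root near 1.875 is exactly the kind of thing a classical bracketing method nails reliably. A minimal bisection sketch (mine, not the author's classroom code):

```python
import math

def f(x):
    # Cantilever beam frequency equation from the post
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(fn, lo, hi, tol=1e-10):
    """Bisection: repeatedly halve a sign-changing bracket [lo, hi]."""
    flo = fn(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * fn(mid) <= 0.0:   # sign change in left half
            hi = mid
        else:                      # sign change in right half
            lo, flo = mid, fn(mid)
    return 0.5 * (lo + hi)

root = bisect(f, 1.0, 2.0)   # f(1) > 0 and f(2) < 0 bracket the first root
print(round(root, 4))        # first positive root, approximately 1.8751
```

Bisection is slow but guaranteed once a sign change is bracketed, which is why it makes a good first root-finding assignment, and why a deterministic 20-line routine beats asking a language model for the answer.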

  • Rut Lineswala

    Founder & CTO | Innovating the Space of Simulations & Quantum Tech

    5,086 followers

    Space innovation just reached a new height, and this time the breakthrough happened inside a simulation! 🤯 Researchers in the United States have achieved a milestone that meaningfully shifts what we consider “feasible” in high-fidelity CFD. Using the El Capitan exascale supercomputer at Lawrence Livermore National Laboratory, the team executed one of the most detailed rocket-exhaust plume simulations ever performed, setting a new global record for CFD resolution.

    Why this matters
    CFD has long been limited by compute. Capturing shock interactions, acoustic loads, and turbulent structures at extremely fine scales pushes memory, bandwidth, and solver stability to their limits. This new work pushes those limits far outward. The simulation resolved ~500 quadrillion degrees of freedom, representing hundreds of trillions of spatial points, each updating the flow variables at every timestep. A decade ago, the memory footprint alone would have made this impossible. On El Capitan, the job completed in hours!

    How El Capitan made it possible
    With 11,136 nodes built on AMD’s accelerated processors (and more than 44,500 compute engines active), the system delivered:
    ⚡ ~80× faster time-to-solution
    🧠 ~25× lower memory use
    🌱 ~5× lower energy consumption
    But hardware wasn’t the only enabler.

    Algorithmic innovation
    The team introduced Information Geometric Regularization (IGR), a novel numerical technique designed to stabilize extreme flow fields without erasing critical physical detail (overcoming the shock-boundary-layer interaction challenges). Where conventional schemes often force either unstable oscillations or extremely small timesteps, IGR suppresses non-physical behavior while preserving sharp gradients in pressure and density. Integrated into the existing framework, it enabled the solver to operate steadily at quadrillion-scale resolution, previously considered unrealistic.

    What this means for the future of CFD
    Breakthroughs like this show how much progress occurs when advanced hardware meets smarter mathematics. Exascale machines demonstrate what is technically possible, but they also highlight a practical truth: most engineering teams still depend on commercial CPU/GPU infrastructure with strict memory, runtime, and cost constraints. This is the space we focus on at BQP. Rather than relying on larger supercomputers like El Capitan, we are exploring how quantum-inspired methods can accelerate the most computationally demanding parts of CFD on the hardware that industry already uses today. Our recent collaboration with NVIDIA and Classiq shows that hybrid quantum-classical techniques can deliver meaningful speedups even without exascale resources. (Full case study linked in the comments.)

    Spencer Bryngelson Florian Schaefer Abhishek Chopra Jash Minocha Aditya Singh Georgia Institute of Technology New York University #HPC #CFD #Quantum #DigitalTwin #SpaceX #Innovation https://lnkd.in/dNvFepaF

  • I recently read a textbook that defined Computational Fluid Dynamics (CFD) as the prediction of fluid motion and forces through computation using numerical analysis, often extended to include heat transfer, thermodynamics, chemistry, and solid interactions. The text emphasized that numerical methods are necessary to solve the governing equations, especially the Navier–Stokes equations, which are nonlinear partial differential equations, because closed-form analytical solutions exist only for very simple geometries and flow regimes; for realistic problems, discretization and substantial computational power become essential.

    This made me reflect on a common question often raised by reviewers or at conferences: “Why did you choose this numerical method?” However, as mathematicians, we know that numerical simulation is not the only pathway. There are also semi-analytical techniques developed to handle nonlinear ordinary and partial differential equations, including:
    • Homotopy Perturbation Method (HPM)
    • Variational Iteration Method (VIM)
    • Adomian Decomposition Method (ADM)
    • Homotopy Analysis Method (HAM), proposed by Liao in 1992

    These methods often generate algebraic or series-form solutions that provide analytical structure and parametric insight into nonlinear systems. The literature shows that these methods have been used to solve various forms of the Navier–Stokes equations by Liao, Professor A. M. Wazwaz, and many other researchers.

    From my understanding now, the choice is not about preferring semi-analytical methods over CFD, but about understanding the objective of the study. If the goal is high-fidelity simulation of complex geometries and realistic boundary conditions, CFD is often the appropriate tool. However, if the goal is to understand nonlinear mechanisms, derive reduced-order models, analyze parameter sensitivity, or build computationally efficient predictive tools, semi-analytical methods can be very powerful.

    In many engineering problems, especially in heat transfer and fluid flow, the most effective approach may actually be a combination of both, using analytical structure to guide modeling assumptions and numerical simulation to validate and extend results. #AppliedMathematics #CFD #NonlinearAnalysis #HeatTransfer #ComputationalScience #FluidDynamics
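For a flavor of how these series methods produce solutions, here is a toy sketch (my own illustration, not from the post): on the linear test problem y′ = y, y(0) = 1, the Adomian decomposition reduces to successive integrations y_k = ∫ y_{k−1} dt with y_0 carrying the initial condition, and the summed components reproduce the Taylor series of e^t:

```python
import math

# Toy Adomian-style decomposition for y' = y, y(0) = 1. For this linear
# problem the Adomian polynomials are trivial, and each component is just
# the integral of the previous one: y_k(t) = t^k / k!.
# Polynomials are stored as coefficient lists [c0, c1, ...] for c0 + c1*t + ...

def integrate_poly(coeffs):
    """Antiderivative with zero constant: c*t^k -> c/(k+1) * t^(k+1)."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

def adm_components(n_terms):
    comps = [[1.0]]                       # y_0 = 1 (initial condition)
    for _ in range(1, n_terms):
        comps.append(integrate_poly(comps[-1]))
    return comps

def eval_sum(comps, t):
    """Evaluate the partial sum of all components at time t."""
    return sum(c * t**k for poly in comps for k, c in enumerate(poly))

approx = eval_sum(adm_components(20), 1.0)
print(approx)                             # partial sum at t = 1 approaches e
```

Unlike a mesh-based solver, the output is a symbolic-style series in t, which is exactly the kind of parametric, closed-form insight the post attributes to HPM/VIM/ADM/HAM; for genuinely nonlinear terms, ADM replaces the trivial recursion above with Adomian polynomials.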

  • Zheyu Jiang

    Assistant Professor at Oklahoma State University

    1,706 followers

    Really happy to share that our latest work from my awesome student Zeyuan Song, "MP-FVM: Enhancing finite volume method for water infiltration modeling in unsaturated soils via message-passing encoder-decoder network", is now published in Computers and Geotechnics, one of the leading journals in geotechnics and civil/environmental engineering.

    In this work, we develop MP-FVM, a new data-driven numerical algorithm that holistically integrates an adaptive linearization scheme, an encoder-decoder neural network architecture, Sobolev training, and a message-passing mechanism in a finite volume discretization framework to solve nonlinear partial differential equations with high accuracy and stability.
    • We provide rigorous proof of the convergence behavior of our MP-FVM algorithm.
    • We introduce a "coarse-to-fine" approach to enhance the solution accuracy of our MP-FVM algorithm without requiring a large amount of high-accuracy, fine-mesh training data. We show that this coarse-to-fine approach maintains a good balance between computational efficiency and solution accuracy.
    • We investigate the use of MP-FVM in solving the n-dimensional Richards equation, a nonlinear PDE that characterizes fluid flow dynamics in porous media (e.g., soil) and has many applications in smart irrigation, soil sensing, enhanced oil recovery, and carbon sequestration.
    • We show that the MP-FVM algorithm 1) improves convergence and reduces numerical oscillations compared to conventional FVM; 2) better preserves physical laws compared to standard finite element and finite difference schemes; 3) achieves fine-scale accuracy using only coarse-grid training data, hence bypassing computationally expensive training; 4) leverages pre-trained models to reduce retraining time, enabling transfer learning across different boundary and/or initial conditions; 5) extends to different realistic settings (e.g., layered soil, actual precipitation).

    Paper: https://lnkd.in/gxvwfPqf Preprint: https://lnkd.in/gWrUcjuZ

  • Rajat Walia

    Senior Aerodynamics Engineer @ Mercedes-Benz | CFD | Thermal | Aero-Thermal | Computational Fluid Dynamics | Valeo | Formula Student

    118,446 followers

    In Computational Fluid Dynamics (CFD), three numerical methods dominate: Finite Difference Method (FDM), Finite Volume Method (FVM), and Finite Element Method (FEM). Each has its unique approach and application.

    FDM - Based on difference equations derived from the Taylor series expansion. It discretizes the domain into a grid of points and approximates derivatives by finite differences. FDM is straightforward and works well on structured grids but struggles with complex geometries.

    FVM - Uses the integral form of the governing equations. Through the divergence theorem, it converts volume integrals into surface integrals, ensuring conservation of quantities like mass and momentum. FVM is versatile, working on both structured and unstructured grids, making it ideal for capturing complex flow behaviors.

    FEM - Employs basis functions to approximate the solution over elements. It converts partial differential equations into a system of algebraic equations by integrating them against these basis functions. FEM is powerful in handling irregular geometries, complex boundary conditions, and material properties.

    FDM focuses on point-wise approximation, making it fast but geometry-limited. FVM emphasizes conservation laws, ensuring accuracy in flow calculations. FEM excels in adaptability, perfect for complex, curved domains and multi-physics problems. Each method serves a specific purpose in CFD, chosen based on the problem's geometry, accuracy needs, and computational resources.

    Image Source: https://lnkd.in/gJPHMxAg #mechanicalengineering #mechanical #aerodynamics #aerospace #automotive
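The Taylor-series basis of FDM mentioned above is easy to demonstrate. A small sketch (my example, not from the post) showing that the central second-difference stencil is second-order accurate, so halving h quarters the error:

```python
import math

def second_diff(f, x, h):
    """Central finite difference for f''(x), derived from Taylor expansion:
    f(x+h) + f(x-h) = 2 f(x) + h^2 f''(x) + O(h^4)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

x = 1.0
exact = -math.sin(x)                               # (sin x)'' = -sin x
err_h  = abs(second_diff(math.sin, x, 0.1)  - exact)
err_h2 = abs(second_diff(math.sin, x, 0.05) - exact)
print(err_h / err_h2)   # ~4: the leading error term scales as h^2
```

The same Taylor-expansion bookkeeping is how all FDM stencils and their orders of accuracy are derived.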

  • Mohamed Amine Abassi, PhD

    Postdoc Scholar Researcher

    3,580 followers

    Ever wondered how we solve differential equations when an analytical solution doesn’t exist? Let me introduce you to the Runge-Kutta method, particularly the classic 4th-order Runge-Kutta (RK4), a numerical gem widely used in physics, engineering, and CFD! When solving an ODE like:
    dy/dt = f(t, y), y(t0) = y0
    we want to compute the value of y at future points t1, t2, … without solving the equation exactly. The Euler method does this simply, but it's like taking blind steps. The RK4 method improves on this by sampling the slope at multiple points within each interval and taking a weighted average. Think of it as asking four friends for advice before making a decision, instead of one. The Runge-Kutta family includes lower-order versions too, like RK2 (midpoint or Heun’s method) and RK3, which sample the slope 2 or 3 times, respectively. They offer a balance between accuracy and computational cost. However, RK4 stands out because it achieves 4th-order accuracy while maintaining stability and efficiency, making it the go-to method in most practical applications. In fact, RK4 is often called the "workhorse" of time integration schemes due to its robustness across a wide range of problems. Whether you're simulating aircraft dynamics or planetary motion, RK4 is likely behind the scenes, giving you reliable time-stepping. Have you implemented RK4 in your projects? Share your experience or ask your questions below ⬇️ source: https://lnkd.in/gQ2JMc53 #NumericalMethods #RungeKutta #CFD #EngineeringMath #ODEs #Mathematics
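The classic RK4 scheme described above fits in a dozen lines. A generic textbook sketch (nothing here beyond the standard method), verified on y′ = y where the exact answer is e^t:

```python
import math

def rk4_step(f, t, y, h):
    """One classic 4th-order Runge-Kutta step for dy/dt = f(t, y):
    four slope samples, combined with weights 1/6, 2/6, 2/6, 1/6."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(f, t0, y0, t_end, n_steps):
    h = (t_end - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Test problem y' = y, y(0) = 1: exact solution at t = 1 is e.
y1 = solve(lambda t, y: y, 0.0, 1.0, 1.0, 100)
print(abs(y1 - math.e))   # tiny: 4th-order accuracy with h = 0.01
```

The four k-values are the "four friends" in the analogy: slope estimates at the start, two midpoints, and the end of the step, blended so that the per-step error is O(h⁵).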

  • Puneet Khandelwal

    JPMC | Quant Modelling Analyst | IIT KGP | CFA L1 | Masters in Financial Engineering

    21,406 followers

    𝗘𝘃𝗲𝗿𝘆𝗼𝗻𝗲 𝗪𝗮𝗻𝘁𝘀 𝘁𝗼 𝗖𝗼𝗱𝗲 𝗶𝗻 𝗣𝘆𝘁𝗵𝗼𝗻. 𝗙𝗲𝘄 𝗪𝗮𝗻𝘁 𝘁𝗼 𝗟𝗲𝗮𝗿𝗻 𝘁𝗵𝗲 𝗠𝗮𝘁𝗵 𝗕𝗲𝗵𝗶𝗻𝗱 𝗜𝘁. When people think of Quant Finance or Data Science, they imagine coding in Python, AI models, or advanced algorithms. But the real edge? It isn’t the code. It’s the math that powers the code. Without math, there’s no risk modelling, no derivatives pricing, no machine learning, no AI predictions. Just scripts without meaning. 𝗛𝗲𝗿𝗲’𝘀 𝘁𝗵𝗲 𝗺𝗮𝘁𝗵 𝗿𝗼𝗮𝗱𝗺𝗮𝗽 𝗲𝘃𝗲𝗿𝘆 𝗮𝘀𝗽𝗶𝗿𝗶𝗻𝗴 𝗾𝘂𝗮𝗻𝘁 & 𝗱𝗮𝘁𝗮 𝘀𝗰𝗶𝗲𝗻𝘁𝗶𝘀𝘁 𝘀𝗵𝗼𝘂𝗹𝗱 𝗸𝗻𝗼𝘄 👇 🔹 𝗣𝗿𝗼𝗯𝗮𝗯𝗶𝗹𝗶𝘁𝘆 & 𝗦𝘁𝗮𝘁𝗶𝘀𝘁𝗶𝗰𝘀 • Distributions (Normal, Poisson, Binomial) • Bayes’ theorem, conditional probability • Hypothesis testing, confidence intervals • Central Limit Theorem & Law of Large Numbers • Time-series foundations (stationarity, autocorrelation) 🔹 𝗟𝗶𝗻𝗲𝗮𝗿 𝗔𝗹𝗴𝗲𝗯𝗿𝗮 • Vectors, matrices, transformations • Eigenvalues & eigenvectors • Matrix decompositions (LU, QR, SVD) • Applications in PCA, factor models, portfolio optimization 🔹 𝗖𝗮𝗹𝗰𝘂𝗹𝘂𝘀 & 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 • Differentiation & integration basics • Multivariable calculus (gradients, Jacobians, Hessians) • Constrained optimization (Lagrange multipliers) • PDEs — the backbone of option pricing 🔹 𝗡𝘂𝗺𝗲𝗿𝗶𝗰𝗮𝗹 𝗠𝗲𝘁𝗵𝗼𝗱𝘀 • Monte Carlo simulation for pricing & risk • Root finding (Newton-Raphson) • Finite difference methods for PDEs 🔹 𝗦𝘁𝗼𝗰𝗵𝗮𝘀𝘁𝗶𝗰 𝗖𝗮𝗹𝗰𝘂𝗹𝘂𝘀 (𝗤𝘂𝗮𝗻𝘁-𝗦𝗽𝗲𝗰𝗶𝗳𝗶𝗰) • Brownian motion & stochastic processes • Itô’s Lemma • Martingales & measure theory • Black-Scholes & term-structure models 💡 𝗧𝗵𝗲 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆:  • 𝗗𝗮𝘁𝗮 𝗦𝗰𝗶𝗲𝗻𝗰𝗲 leans heavily on probability, statistics, linear algebra, and optimisation.  • 𝗤𝘂𝗮𝗻𝘁 𝗙𝗶𝗻𝗮𝗻𝗰𝗲 takes all that and adds stochastic calculus, PDEs, and advanced numerical methods. So if you’re preparing for either path → don’t just master Python. 𝗠𝗮𝘀𝘁𝗲𝗿 𝗺𝗮𝘁𝗵 𝗮𝗹𝘀𝗼. Because models change. The math behind them doesn’t. 👉 Follow Puneet Khandelwal for more quant & ML insights at the intersection of math, finance, and data. 💬 Which statistics topic do you struggle with the most? Let’s discuss this in the comments! 
#QuantFinance #Math #MachineLearning #DataScience #Finance #Statistics #Quant
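To make the numerical-methods bullet above concrete, here is a small illustrative sketch (mine, with made-up parameters) pricing a European call by Monte Carlo simulation of geometric Brownian motion and checking it against the Black-Scholes closed form:

```python
import math
import random

def bs_call(S, K, r, sigma, T):
    """Black-Scholes closed-form price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))  # std normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo: sample terminal prices under risk-neutral GBM,
    average the discounted payoff max(S_T - K, 0)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        ST = S * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
print(bs_call(S, K, r, sigma, T))            # about 10.45
print(mc_call(S, K, r, sigma, T, 200_000))   # converges to the same value
```

This is the roadmap in miniature: probability (the lognormal terminal distribution), stochastic calculus (the GBM solution with its −σ²/2 drift correction), and numerical methods (Monte Carlo estimation with O(1/√N) error).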
