Diverse Approaches in Climate Modeling

Explore top LinkedIn content from expert professionals.

Summary

Diverse approaches in climate modeling refer to the different methods scientists use to predict and understand climate changes, ranging from traditional physics-based models to advanced AI and machine learning techniques. These strategies help improve the accuracy and detail of climate forecasts, making them more useful for regional planning and risk assessment.

  • Explore hybrid solutions: Combining physics-based models with machine learning can offer both interpretability and improved prediction for climate and weather extremes.
  • Embrace computational advances: Using AI and generative modeling streamlines the creation of high-resolution, local climate projections, saving time and resources while improving detail.
  • Focus on explainability: Choosing transparent modeling techniques makes it easier to understand how climate variables interact, aiding in decision-making and public communication.
Summarized by AI based on LinkedIn member posts
  • View profile for Jorge Bravo Abad

    AI/ML for Science & DeepTech | Prof. of Physics at UAM | Author of “IA y Física” & “Ciencia 5.0”

    28,988 followers

    AI finds a missing equation for simulating atmospheric and oceanic turbulence

    Climate models and weather forecasts simulate turbulent flows spanning scales from thousands of kilometers down to meters. No computer can resolve all of them, so modelers approximate the effect of unresolved small scales on the large-scale dynamics. This approximation is known as a subgrid-scale closure—a model that "closes" the governing equations by filling in what the coarse grid cannot see. Getting closures right matters enormously: their shortcomings are a leading source of uncertainty in climate projections and extreme weather forecasts.

    For decades, the field has faced a trade-off. One family of closures faithfully reconstructs the small-scale stress patterns but makes simulations blow up. The other keeps simulations stable but oversimplifies the physics—removing too much energy, ignoring backscatter from small to large scales, and underestimating extreme events.

    Karan Jakhar, Yifei Guan, and Pedram Hassanzadeh break this impasse by changing what equation discovery optimizes for. Previous sparse regression searches consistently landed on the same second-order approximation known since the 1970s, which is accurate but unstable. The key insight: if you also require the discovered equation to reproduce how energy flows between scales—not just match local stress patterns—the algorithm finds something different.

    Searching 930 candidate terms with this physics-informed dual criterion, Bayesian sparse regression robustly identifies an additional fourth-order term in the Taylor expansion of the subgrid stress (NGM4). NGM4 achieves ~0.99 pattern correlation with reference data, produces stable simulations across four diverse 2D turbulence setups mimicking atmospheric and oceanic dynamics, and accurately captures both bulk statistics and rare extremes. Its coefficients depend only on grid resolution—no tuning for flow regime or Reynolds number—and it needs just 100 training snapshots.

    The most striking aspect: NGM4 could have been derived analytically decades ago, but because the source of the second-order instability was unclear, higher-order terms were never explored. It took sparse regression guided by the right physics to reveal that the missing piece had been hiding in plain sight.

    One takeaway that extends well beyond turbulence: the criterion you optimize for determines what you discover. Embedding the right physics into equation discovery can uncover interpretable, generalizable equations that purely data-driven approaches systematically miss.

    Paper: https://lnkd.in/exGQGaGc

    #MachineLearning #Turbulence #ClimateModeling #EquationDiscovery #AIforScience #LargeEddySimulation #GeophysicalFluidDynamics #SparseRegression #PhysicsInformedAI #SubgridModeling #DeepLearning #ComputationalPhysics #ExtremeEvents #WeatherPrediction #AIforClimate
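To make the dual-criterion idea concrete, here is a minimal sketch (not the authors' code) of physics-informed sparse regression: candidate closure terms are fit to the subgrid stress while the loss also penalizes mismatch in the implied interscale energy transfer. The library size, variable names, and the simple L1 penalty are stand-ins for the paper's 930-term Bayesian search.

```python
# Minimal sketch of physics-informed sparse equation discovery: fit the subgrid
# stress tau from a library of candidate terms while also requiring the implied
# interscale energy transfer to match the data. All names and sizes are toy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_samples, n_terms = 2000, 12                    # stand-in for the ~930-term library
Theta = rng.normal(size=(n_samples, n_terms))    # candidate terms on filtered fields
tau = Theta[:, [1, 4]] @ np.array([0.8, -0.3]) + 0.01 * rng.normal(size=n_samples)
grad_u = rng.normal(size=n_samples)              # filtered velocity gradient (toy)

def loss(c, lam_energy=1.0, lam_sparse=1e-3):
    tau_hat = Theta @ c
    structural = np.mean((tau_hat - tau) ** 2)   # match local stress patterns
    # also match the mean interscale energy transfer Pi = -tau * grad(u)
    energy = (np.mean(-tau_hat * grad_u) - np.mean(-tau * grad_u)) ** 2
    return structural + lam_energy * energy + lam_sparse * np.abs(c).sum()

coef = minimize(loss, np.zeros(n_terms), method="Powell").x
print("retained terms:", np.flatnonzero(np.abs(coef) > 0.05))   # expect terms 1 and 4
```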

  • View profile for Jozef Pecho

    Climate/NWP Model & Data Analyst at Floodar (Meratch), GOSPACE LABS | Predicting floods, protecting lives

    3,093 followers

    🌍 Climate scientists often face a trade-off: Global Climate Models (GCMs) are essential for long-term climate projections — but they operate at coarse spatial resolution, making them too crude for regional or local decision-making. To get fine-scale data, researchers use Regional Climate Models (RCMs). These add crucial spatial detail, but come at a very high computational cost, often requiring supercomputers to run for months.

    ➡️ A new paper introduces EnScale — a machine learning framework that offers an efficient and accurate alternative to running full RCM simulations. Instead of solving the complex physics from scratch, EnScale "learns" the relationship between GCMs and RCMs by training on existing paired datasets. It then generates high-resolution, realistic, and diverse regional climate fields directly from GCM inputs.

    What makes EnScale stand out?
    ✅ It uses a generative ML model trained with a statistically principled loss (energy score), enabling probabilistic outputs that reflect natural variability and uncertainty
    ✅ It is multivariate – it learns to generate temperature, precipitation, radiation, and wind jointly, preserving spatial and cross-variable coherence
    ✅ It is computationally lightweight – training and inference are up to 10–20× faster than state-of-the-art generative approaches
    ✅ It includes an extension (EnScale-t) for generating temporally consistent time series – a must for studying events like heatwaves or prolonged droughts

    This approach opens the door to faster, more flexible generation of regional climate scenarios, essential for risk assessment, infrastructure planning, and climate adaptation — especially where computational resources are limited.

    📄 Read the full paper: EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules ---> https://lnkd.in/dQr5rmWU (code: https://lnkd.in/dQk_Jv8g)

    👏 Congrats to the authors — a strong step forward for ML-based climate modeling!

    #climateAI #downscaling #generativeAI #machinelearning #climatescience #EnScale #RCM #GCM #ETHZurich #climatescenarios
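For readers unfamiliar with the loss mentioned above, here is a minimal sketch of the energy score, the proper scoring rule EnScale trains against. The ensemble and target arrays are toy stand-ins; the generator itself is not shown.

```python
# Minimal sketch of the energy score for a multivariate probabilistic prediction:
# ES = E||X - y|| - 0.5 * E||X - X'||, estimated from an m-member ensemble.
import numpy as np

def energy_score(samples: np.ndarray, obs: np.ndarray) -> float:
    """samples: (m, d) ensemble members flattened to d values; obs: (d,)."""
    m = samples.shape[0]
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1))         # E||X - y||
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.sum(np.linalg.norm(diffs, axis=2)) / (2 * m * m)    # 0.5 * E||X - X'||
    return term1 - term2  # lower is better; used as the training loss

# Toy usage: a 5-member ensemble over a flattened 16x16 temperature patch.
rng = np.random.default_rng(1)
print(energy_score(rng.normal(size=(5, 256)), rng.normal(size=256)))
```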

  • View profile for Greg Hakim

    Professor at University of Washington

    2,980 followers

    Coupled chemistry-climate models (CCMs) are essential tools for understanding chemical variability in the climate system, but they are extraordinarily expensive to run. Eric Mei's recent paper shows that linear inverse models (LIMs) can be used to emulate CCMs at a fraction of the computational cost (laptop vs HPC). This opens up new opportunities for strongly coupled chemistry-climate data assimilation, large ensembles, hypothesis testing, and cost/benefit analysis for nonlinear machine learning emulators of CCMs.

    In contrast to ML emulators, LIMs have transparent explainability, illustrated by the figure below showing the coupled time-evolving relationship between sea-surface temperature, ozone, and hydroxyl radical for the El Niño mode in the model.

    Link to the paper: https://lnkd.in/d4DcJHVZ

    Supported by Schmidt Futures
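As a rough illustration of how a LIM is constructed, the sketch below estimates a linear propagator from lagged covariances of a reduced climate state (for example, leading modes of SST, ozone, and OH) and uses it to forecast. The toy data and variable names are assumptions, not taken from the paper.

```python
# Minimal LIM sketch: fit G(lag) = C(lag) C(0)^-1 from a multivariate time series,
# recover the dynamics operator L, and forecast forward. Toy data throughout.
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)
n_time, n_modes = 5000, 6
x = rng.normal(size=(n_time, n_modes)).cumsum(axis=0) * 0.01   # stand-in state (time x modes)
x -= x.mean(axis=0)

lag = 1
C0   = x[:-lag].T @ x[:-lag] / (n_time - lag)   # contemporaneous covariance
Clag = x[lag:].T  @ x[:-lag] / (n_time - lag)   # lag covariance
G = Clag @ np.linalg.inv(C0)                    # one-lag propagator
L = np.real(logm(G)) / lag                      # dx/dt = L x + noise (real part: drop round-off)

tau = 10                                        # forecast tau lags ahead from the last state
forecast = np.linalg.matrix_power(G, tau) @ x[-1]
print(forecast)
```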

  • View profile for Reed Maxwell

    William and Edna Macaleer Professor at Princeton University

    2,228 followers

    Really proud to share this new paper in the American Geophysical Union journal Water Resources Research, led by Madeleine Burns, who completed this work as her undergraduate senior thesis at Princeton. Madeleine developed an LSTM neural network model to estimate snow water equivalent across the western US and compared it with two physics-based models: ParFlow-CLM and the University of Arizona SWE product.

    Key findings:
    → The LSTM outperformed both physics-based models on accuracy and snowmelt timing
    → ML handled erroneous forcing data differently than physics-based models—when temperature inputs were clearly wrong, ParFlow-CLM produced unreasonable results while the LSTM stayed closer to observations. But this cuts both ways: the LSTM may be ignoring forcing when it shouldn't
    → Without physical constraints, LSTMs can produce unrealistic behavior
    → Feature importance analysis confirmed the model relies on physically meaningful variables—not spurious correlations

    The path forward? Hybrid approaches that combine the process understanding of physics-based models with the flexibility of ML. This kind of detailed comparison helps clarify where each approach excels and where improvements are needed as the field works toward more reliable operational snow prediction.

    Madeleine is now a PhD student at Dartmouth working on western water and climate. Excited to see what she does next in her career.

    📄 https://lnkd.in/eMkusDqP

    Princeton Engineering | High Meadows Environmental Institute | Integrated GroundWater Modeling Center

    #maxwellgroupadventures
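For orientation, here is a minimal sketch, under assumed input variables and shapes, of an LSTM that maps daily meteorological forcings to snow water equivalent, in the spirit of the model described above; it is not the paper's code.

```python
# Minimal sketch: an LSTM regressor from daily forcings (precipitation,
# temperature, radiation, humidity, wind) to daily SWE. Shapes and the
# training loop are illustrative only.
import torch
import torch.nn as nn

class SWELSTM(nn.Module):
    def __init__(self, n_forcings: int = 5, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_forcings, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, forcings: torch.Tensor) -> torch.Tensor:
        # forcings: (batch, days, n_forcings) -> SWE: (batch, days)
        h, _ = self.lstm(forcings)
        return self.head(h).squeeze(-1)

model = SWELSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 365, 5)   # one toy water year of forcings for 8 sites
y = torch.rand(8, 365)       # toy SWE targets (normalized)
for _ in range(3):           # a few illustrative training steps
    opt.zero_grad()
    nn.functional.mse_loss(model(x), y).backward()
    opt.step()
```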

  • Every year, natural disasters hit harder and closer to home. But when city leaders ask, "How will rising heat or wildfire smoke impact my home in 5 years?"—our answers are often vague. Traditional climate models give sweeping predictions, but they fall short at the local level. It's like trying to navigate rush hour using a globe instead of a street map.

    That’s where generative AI comes in. This year, our team at Google Research built a new genAI method to project climate impacts—taking predictions from the size of a small state to the size of a small city. Our approach provides:
    - Unprecedented detail – delivers regional environmental risk assessments at a small fraction of the cost of existing techniques
    - Higher accuracy – reduces fine-scale errors by over 40% for critical weather variables, and cuts error in extreme heat and precipitation projections by over 20% and 10% respectively
    - Better estimates of complex risks – demonstrates remarkable skill in capturing complex environmental risks driven by regional phenomena, such as wildfire risk from Santa Ana winds, which statistical methods often miss

    The dynamical-generative downscaling process works in two steps:
    1) Physics-based first pass: a regional climate model downscales global Earth system data to an intermediate resolution (e.g., 50 km) – much cheaper computationally than going straight to very high resolution.
    2) AI adds the fine details: our AI-based Regional Residual Diffusion-based Downscaling model (“R2D2”) adds realistic, fine-scale details to bring it up to the target high resolution (typically less than 10 km), based on its training on high-resolution weather data.

    Why does this matter? Governments and utilities need these hyperlocal forecasts to prepare emergency response, invest in infrastructure, and protect vulnerable neighborhoods. And this is just one way AI is turbocharging climate resilience. Our teams at Google are already using AI to forecast floods, detect wildfires in real time, and help the UN respond faster after disasters. The next chapter of climate action means giving every city the tools to see—and shape—their own future.

    Congratulations Ignacio Lopez Gomez, Tyler Russell MBA, PMP, and teams on this important work!

    Discover the full details of this breakthrough: https://lnkd.in/g5u_WctW
    PNAS Paper: https://lnkd.in/gr7Acz25
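A minimal sketch of this two-step workflow, using placeholder functions (run_regional_model, r2d2_refine) rather than any published API: a physics-based first pass produces an intermediate-resolution field, and a generative stage then adds fine-scale detail as an ensemble of samples.

```python
# Minimal sketch of dynamical-generative downscaling. Both stages are stubs:
# the real first step is a full RCM run, and the real second step is a trained
# residual diffusion model; here they are simple upsampling operations.
import numpy as np

def run_regional_model(gcm_fields: np.ndarray) -> np.ndarray:
    """Physics-based first pass: downscale global fields to an intermediate grid (stub)."""
    return np.kron(gcm_fields, np.ones((2, 2)))

def r2d2_refine(coarse: np.ndarray, n_samples: int = 4) -> np.ndarray:
    """Generative second pass: add fine-scale detail and return an ensemble (stub).
    A trained model would sample learned residuals conditioned on the coarse
    field and static inputs such as terrain; here we upsample and add noise."""
    hi = np.kron(coarse, np.ones((8, 8)))
    return hi + 0.1 * np.random.randn(n_samples, *hi.shape)

gcm_tile = np.random.randn(16, 16)                      # toy global-model tile
members = r2d2_refine(run_regional_model(gcm_tile))
print(members.shape)                                    # (4, 256, 256) hyperlocal ensemble
```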

  • View profile for Robert Shibatani

    CEO & Hydrologist; The SHIBATANI GROUP Inc.; Expert Witness - Flood Litigation, Water Utility Advisor; New Dams; Reservoir Operations; Groundwater Safe Yield; Climate Change

    19,728 followers

    “How far can a ‘one-size-fits-all’ mindset accommodate future water resources?”

    While hydrologic modelers attempt to develop ubiquitous simulation platforms appropriate for every environmental condition, the reality is that no single model or simulation approach is best for all hydrologic conditions. In other words, hydrologic simulation models are by nature as varied and case-specific as the diverse hydrologic environments they attempt to represent. This reality, as difficult as it is for many contemporary practitioners to acknowledge, can and does impart biases in many hydrologic studies … arguably before many are even initiated. To be fair, such bias can be effectively offset by prudent discussion of the embedded uncertainties, inherent empirical errors, and generalized assumptions, but such mea culpas have only recently found their way into the published literature.

    Hydrologic models help predict floods and droughts, but how we calibrate them changes what they "get right". Testing objective functions across many model types and catchments suggests that each highlights different flow behaviors, such as floods, low flows, or water balances and their contributing elements.

    The paper cited here systematically investigates the effect of objective functions on the representation of various streamflow characteristics across 47 conceptual models and 10 hydroclimatically diverse catchments selected from the CARAVAN dataset. Eight objective functions are used for calibration, including the Kling–Gupta efficiency, Nash–Sutcliffe efficiency, and their respective logarithmic variants, as well as four more recently proposed metrics, and 15 hydrological signatures capturing a relevant selection of streamflow characteristics are evaluated.

    Results showed that the choice of objective function can significantly affect a model's capability to simulate different hydrological signatures such as runoff ratios, extreme flow percentiles, and certain baseflow characteristics. While certain signatures, particularly those related to flow variability, were found to be relatively insensitive to the choice of objective function, others exhibited large performance shifts across different objective functions. In conclusion, no single objective function simultaneously achieved high performance across all tested signatures, highlighting that single-objective calibration is unlikely to lead to an all-purpose model.

    Although objective functions are widely discussed in the literature, many modeling studies still default to a few common metrics. Matching the calibration method to a study's purpose, or combining several methods, can make models more applicable to many of today’s and tomorrow’s expected real-world water decisions.

    See Wagener et al. (2025) in HESS, EGUsphere, “Metrics that Matter: Objective Functions and Their Impact on Signature Representation in Conceptual Hydrological Models”.
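To show what two of these objective functions actually compute, here is a minimal sketch of the Nash–Sutcliffe efficiency (NSE) and the Kling–Gupta efficiency (KGE); a calibration loop would adjust model parameters to maximize whichever metric is chosen, which is exactly where the choice begins to matter. The flow series are toy data.

```python
# Minimal sketch of two common calibration metrics for simulated vs observed flow.
import numpy as np

def nse(sim: np.ndarray, obs: np.ndarray) -> float:
    """Nash–Sutcliffe efficiency: 1 is perfect, 0 matches the mean-flow benchmark."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim: np.ndarray, obs: np.ndarray) -> float:
    """Kling–Gupta efficiency: combines correlation, variability ratio, and bias ratio."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.random.gamma(2.0, 5.0, size=3650)                   # toy 10-year daily flow record
sim = 0.9 * obs + np.random.normal(0.0, 1.0, size=obs.size)  # toy model output
print(f"NSE={nse(sim, obs):.2f}  KGE={kge(sim, obs):.2f}")
```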

  • View profile for Gopal Erinjippurath

    AI for capital markets 🌎 | Founder and CTO | Angel Investor

    8,403 followers

    Climate models have long struggled with coarse resolution, limiting precise climate risk insights. But AI-driven methods are now changing this, unlocking more detailed intelligence than traditional physics-based approaches.

    I recently spoke with a research scientist at Google Research who highlighted a promising new hybrid approach. This method combines physics-based General Circulation Models (GCMs) with AI refinement, significantly improving resolution. The process starts with Regional Climate Models (RCMs) anchoring physical consistency at ~45 km resolution. Then, it uses a diffusion model, R2-D2, to enhance output resolution to 9 km, making estimates more suitable for projecting extreme climate events.

    🔥 About R2-D2
    R2-D2 (Regional Residual Diffusion-based Downscaling) is a diffusion model trained on residuals between RCM outputs and high-resolution targets. Conditioned on physical inputs like coarse climate fields and terrain, it rapidly generates high-res climate maps (~800 fields/hour on GPUs), complete with uncertainty estimates.

    ✅ Why this matters
    - Offers detailed projections of extreme climate events for precise risk quantification.
    - Delivers probabilistic forecasts, improving risk modeling and scenario planning.
    - Provides another high-resolution modeling approach, enriching ensemble strategies for climate risk projections.

    👉 Read the full paper: https://lnkd.in/gU6qmZTR
    👉 An excellent explainer blog: https://lnkd.in/gAEJFEV2

    If your work involves climate risk assessment, adaptation planning, or quantitative modeling, how are you leveraging high-resolution risk projections?
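The residual-learning setup described above can be sketched as follows; the function name, grid sizes, and bilinear upsampling are assumptions used only to show how a conditioning stack and a residual training target might be assembled.

```python
# Minimal sketch of building one training pair for a residual downscaling model:
# the target the diffusion model learns is the difference between the
# high-resolution field and the upsampled RCM field, with the coarse field and
# terrain supplied as conditioning inputs. Grids and names are illustrative.
import numpy as np
from scipy.ndimage import zoom

def make_training_pair(rcm_coarse: np.ndarray, target_fine: np.ndarray,
                       terrain_fine: np.ndarray):
    factor = target_fine.shape[0] / rcm_coarse.shape[0]
    upsampled = zoom(rcm_coarse, factor, order=1)       # coarse field on the fine grid
    residual = target_fine - upsampled                  # what the generative model must add
    conditioning = np.stack([upsampled, terrain_fine])  # inputs the model is conditioned on
    return conditioning, residual

cond, res = make_training_pair(np.random.randn(32, 32),
                               np.random.randn(160, 160),
                               np.random.randn(160, 160))
print(cond.shape, res.shape)   # (2, 160, 160) (160, 160)
```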
