Model selection for climate impact studies

Summary

Model selection for climate impact studies involves choosing the most suitable mathematical or AI-based tools to predict how climate changes will affect specific regions, helping researchers move from broad, global forecasts to more detailed, local insights. Recent innovations combine traditional physics-based approaches with advanced machine learning, making climate projections more detailed and accessible for local planning and risk assessment.

  • Prioritize resolution: Select models that offer high spatial detail, so your climate impact projections can inform decisions at the city or neighborhood level.
  • Balance resources: Consider computational demands and available technology, as some AI-driven models deliver precise results without requiring supercomputers.
  • Account for uncertainty: Use models capable of providing probabilistic outputs, which help reflect natural variability and support more robust scenario planning.
  • Jozef Pecho

    Climate/NWP Model & Data Analyst at Floodar (Meratch), GOSPACE LABS | Predicting floods, protecting lives

    🌍 Climate scientists often face a trade-off: Global Climate Models (GCMs) are essential for long-term climate projections, but they operate at coarse spatial resolution, making them too crude for regional or local decision-making. To get fine-scale data, researchers use Regional Climate Models (RCMs). These add crucial spatial detail, but at a very high computational cost, often requiring supercomputers to run for months.

    ➡️ A new paper introduces EnScale, a machine learning framework that offers an efficient and accurate alternative to running full RCM simulations. Instead of solving the complex physics from scratch, EnScale "learns" the relationship between GCMs and RCMs by training on existing paired datasets. It then generates high-resolution, realistic, and diverse regional climate fields directly from GCM inputs.

    What makes EnScale stand out?
    ✅ It uses a generative ML model trained with a statistically principled loss (the energy score), enabling probabilistic outputs that reflect natural variability and uncertainty.
    ✅ It is multivariate: it learns to generate temperature, precipitation, radiation, and wind jointly, preserving spatial and cross-variable coherence.
    ✅ It is computationally lightweight: training and inference are up to 10–20× faster than state-of-the-art generative approaches.
    ✅ It includes an extension (EnScale-t) for generating temporally consistent time series, a must for studying events like heatwaves or prolonged droughts.

    This approach opens the door to faster, more flexible generation of regional climate scenarios, essential for risk assessment, infrastructure planning, and climate adaptation, especially where computational resources are limited.

    📄 Read the full paper: EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules ---> https://lnkd.in/dQr5rmWU (code: https://lnkd.in/dQk_Jv8g)
    👏 Congrats to the authors: a strong step forward for ML-based climate modeling!
#climateAI #downscaling #generativeAI #machinelearning #climatescience #EnScale #RCM #GCM #ETHZurich #climatescenarios
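The energy score named in the post is a proper scoring rule for multivariate probabilistic forecasts. A minimal sample-based sketch in numpy (illustrative only, not code from the EnScale repository; the function name and ensemble layout are assumptions):

```python
import numpy as np

def energy_score(samples, obs):
    """Sample-based energy score ES = E||X - y|| - 0.5 * E||X - X'||.

    samples: (m, d) array of m ensemble draws of a d-dimensional field
    obs:     (d,) observed field
    Lower is better; in expectation the score is minimized when the
    draws come from the true distribution, which is what makes it a
    "proper" scoring rule for training generative models.
    """
    # E||X - y||: mean distance from each draw to the observation
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1))
    # 0.5 * E||X - X'||: mean pairwise distance between draws,
    # which rewards ensembles with realistic spread
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=2))
    return term1 - term2
```

Unlike a per-pixel MSE, this loss is computed over whole ensembles, so a model trained on it is pushed to produce diverse draws rather than a single blurred average.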

  • Every year, natural disasters hit harder and closer to home. But when city leaders ask, "How will rising heat or wildfire smoke impact my home in 5 years?", our answers are often vague. Traditional climate models give sweeping predictions, but they fall short at the local level. It's like trying to navigate rush hour using a globe instead of a street map. That's where generative AI comes in. This year, our team at Google Research built a new genAI method to project climate impacts, taking predictions from the size of a small state to the size of a small city.

    Our approach provides:
    - Unprecedented detail: regional environmental risk assessments at a small fraction of the cost of existing techniques.
    - Higher accuracy: fine-scale errors reduced by over 40% for critical weather variables, with errors in extreme heat and precipitation projections cut by over 20% and 10%, respectively.
    - Better estimates of complex risks: remarkable skill in capturing risks driven by regional phenomena, such as wildfire risk from Santa Ana winds, which statistical methods often miss.

    The dynamical-generative downscaling process works in two steps:
    1) Physics-based first pass: a regional climate model downscales global Earth system data to an intermediate resolution (e.g., 50 km), much cheaper computationally than going straight to very high resolution.
    2) AI adds the fine details: our AI-based Regional Residual Diffusion-based Downscaling model ("R2D2") adds realistic, fine-scale detail to reach the target high resolution (typically less than 10 km), based on its training on high-resolution weather data.

    Why does this matter? Governments and utilities need these hyperlocal forecasts to prepare emergency response, invest in infrastructure, and protect vulnerable neighborhoods. And this is just one way AI is turbocharging climate resilience.

    Our teams at Google are already using AI to forecast floods, detect wildfires in real time, and help the UN respond faster after disasters. The next chapter of climate action means giving every city the tools to see, and shape, their own future. Congratulations Ignacio Lopez Gomez, Tyler Russell MBA, PMP, and teams on this important work!

    Discover the full details of this breakthrough: https://lnkd.in/g5u_WctW
    PNAS Paper: https://lnkd.in/gr7Acz25
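The two-step recipe above can be sketched in a few lines of numpy. This is a structural illustration only: the upsampling here is nearest-neighbour, and `residual_model` is a placeholder for the actual R2D2 diffusion model, so every name and choice below is an assumption rather than the paper's implementation:

```python
import numpy as np

def upsample(field, factor):
    """Nearest-neighbour upsampling, standing in for interpolation of the
    intermediate-resolution (e.g., 50 km) physics-based field."""
    return np.kron(field, np.ones((factor, factor)))

def residual_downscale(coarse_field, residual_model, factor):
    """Two-step dynamical-generative downscaling:
    1) upsample the physics-based field to the target grid, then
    2) add ML-predicted fine-scale residuals (the role R2D2 plays)."""
    base = upsample(coarse_field, factor)
    return base + residual_model(base)
```

The key design choice this illustrates is that the ML model only has to learn the *residual* fine-scale structure, while large-scale physical consistency comes from the regional climate model's first pass.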

  • Gopal Erinjippurath

    AI for capital markets 🌎 | Founder and CTO | Angel Investor

    Climate models have long struggled with coarse resolution, limiting precise climate risk insights. But AI-driven methods are now changing this, unlocking more detailed intelligence than traditional physics-based approaches.

    I recently spoke with a research scientist at Google Research who highlighted a promising new hybrid approach. This method combines physics-based General Circulation Models (GCMs) with AI refinement, significantly improving resolution. The process starts with Regional Climate Models (RCMs) anchoring physical consistency at ~45 km resolution. Then it uses a diffusion model, R2-D2, to enhance the output resolution to 9 km, making estimates more suitable for projecting extreme climate events.

    🔥 About R2-D2
    R2‑D2 (Regional Residual Diffusion-based Downscaling) is a diffusion model trained on the residuals between RCM outputs and high-resolution targets. Conditioned on physical inputs like coarse climate fields and terrain, it rapidly generates high-resolution climate maps (~800 fields/hour on GPUs), complete with uncertainty estimates.

    ✅ Why this matters
    - Offers detailed projections of extreme climate events for precise risk quantification.
    - Delivers probabilistic forecasts, improving risk modeling and scenario planning.
    - Provides another high-resolution modeling approach, enriching ensemble strategies for climate risk projections.

    👉 Read the full paper: https://lnkd.in/gU6qmZTR
    👉 An excellent explainer blog: https://lnkd.in/gAEJFEV2

    If your work involves climate risk assessment, adaptation planning, or quantitative modeling, how are you leveraging high-resolution risk projections?
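The "uncertainty estimates" mentioned above come from the fact that a diffusion model can be sampled repeatedly. A hedged sketch of how an ensemble of draws could be summarized into a mean field and an uncertainty band; `sample_residual` is a hypothetical stand-in for one stochastic R2-D2 sample, not an actual API:

```python
import numpy as np

def downscale_ensemble(base_field, sample_residual, n_draws=50, seed=None):
    """Probabilistic downscaling summary: draw n_draws fine-scale residual
    samples (each standing in for one diffusion-model sample), then report
    the ensemble mean and a 5%-95% uncertainty band per grid cell."""
    rng = np.random.default_rng(seed)
    draws = np.stack([base_field + sample_residual(rng)
                      for _ in range(n_draws)])
    return {
        "mean": draws.mean(axis=0),
        "p05": np.quantile(draws, 0.05, axis=0),
        "p95": np.quantile(draws, 0.95, axis=0),
    }
```

For scenario planning, the p05-p95 band is often more useful than the mean: it gives a per-location range that can feed directly into risk thresholds (e.g., probability of exceeding a heat or precipitation limit).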
