Advancing regional climate variability simulations


Summary

Advancing regional climate variability simulations means using new methods, including artificial intelligence, to produce detailed, accurate projections of how the climate may vary in specific areas. This approach helps researchers and planners understand local risks and patterns, such as extreme weather or rainfall, that traditional broad-scale models often miss.

  • Embrace AI tools: Consider machine learning and generative models, which can quickly provide high-resolution regional forecasts and improve predictions for extreme events.
  • Combine strengths: Integrate physics-based climate models with AI-driven refinements to capture both large-scale processes and fine local details for better scenario planning.
  • Support decision-making: Use these advanced climate simulations to inform infrastructure planning, emergency response, and environmental policy at the regional or city level.
Summarized by AI based on LinkedIn member posts
  • Jozef Pecho

    Climate/NWP Model & Data Analyst at Floodar (Meratch), GOSPACE LABS | Predicting floods, protecting lives

    🌍 Climate scientists often face a trade-off: Global Climate Models (GCMs) are essential for long-term climate projections, but they operate at coarse spatial resolution, making them too crude for regional or local decision-making. To get fine-scale data, researchers use Regional Climate Models (RCMs). These add crucial spatial detail, but come at a very high computational cost, often requiring supercomputers to run for months.

    ➡️ A new paper introduces EnScale, a machine learning framework that offers an efficient and accurate alternative to running full RCM simulations. Instead of solving the complex physics from scratch, EnScale "learns" the relationship between GCMs and RCMs by training on existing paired datasets. It then generates high-resolution, realistic, and diverse regional climate fields directly from GCM inputs.

    What makes EnScale stand out?
    ✅ It uses a generative ML model trained with a statistically principled loss (the energy score), enabling probabilistic outputs that reflect natural variability and uncertainty
    ✅ It is multivariate: it learns to generate temperature, precipitation, radiation, and wind jointly, preserving spatial and cross-variable coherence
    ✅ It is computationally lightweight: training and inference are up to 10–20× faster than state-of-the-art generative approaches
    ✅ It includes an extension (EnScale-t) for generating temporally consistent time series, a must for studying events like heatwaves or prolonged droughts

    This approach opens the door to faster, more flexible generation of regional climate scenarios, essential for risk assessment, infrastructure planning, and climate adaptation, especially where computational resources are limited.

    📄 Read the full paper: EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules ---> https://lnkd.in/dQr5rmWU (code: https://lnkd.in/dQk_Jv8g)

    👏 Congrats to the authors: a strong step forward for ML-based climate modeling!
#climateAI #downscaling #generativeAI #machinelearning #climatescience #EnScale #RCM #GCM #ETHZurich #climatescenarios
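For readers curious about the "energy score" loss the post mentions: it is a proper scoring rule that rewards an ensemble for being both close to the observation and internally diverse. Below is a minimal sample-based estimator as a sketch; the function name and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def energy_score(samples, obs):
    """Sample-based energy score (lower is better).

    samples: (m, d) array of m ensemble draws of a d-dimensional field
    obs:     (d,) observed field

    ES = E||X - y|| - 0.5 * E||X - X'||, estimated from the ensemble.
    The first term rewards accuracy; the second rewards spread, so a
    collapsed (deterministic) ensemble is penalized.
    """
    m = samples.shape[0]
    # accuracy term: mean distance from each draw to the observation
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1))
    # spread term: mean pairwise distance between distinct draws
    # (diagonal entries are zero, so summing all pairs is harmless)
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.sum(np.linalg.norm(diffs, axis=-1)) / (m * (m - 1))
    return term1 - 0.5 * term2
```

In training, the score is averaged over many (ensemble, observation) pairs and minimized; an ensemble tightly centered on the truth scores lower than one that is biased or degenerate.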

  • Gopal Erinjippurath

    AI for capital markets 🌎 | Founder and CTO | Angel Investor

    Climate models have long struggled with coarse resolution, limiting precise climate risk insights. But AI-driven methods are now changing this, unlocking more detailed intelligence than traditional physics-based approaches.

    I recently spoke with a research scientist at Google Research who highlighted a promising new hybrid approach. This method combines physics-based General Circulation Models (GCMs) with AI refinement, significantly improving resolution. The process starts with Regional Climate Models (RCMs) anchoring physical consistency at ~45 km resolution. Then it uses a diffusion model, R2-D2, to enhance output resolution to 9 km, making estimates more suitable for projecting extreme climate events.

    🔥 About R2-D2
    R2-D2 (Regional Residual Diffusion-based Downscaling) is a diffusion model trained on residuals between RCM outputs and high-resolution targets. Conditioned on physical inputs like coarse climate fields and terrain, it rapidly generates high-res climate maps (~800 fields/hour on GPUs), complete with uncertainty estimates.

    ✅ Why this matters
    - Offers detailed projections of extreme climate events for precise risk quantification.
    - Delivers probabilistic forecasts, improving risk modeling and scenario planning.
    - Provides another high-resolution modeling approach, enriching ensemble strategies for climate risk projections.

    👉 Read the full paper: https://lnkd.in/gU6qmZTR
    👉 An excellent explainer blog: https://lnkd.in/gAEJFEV2

    If your work involves climate risk assessment, adaptation planning, or quantitative modeling, how are you leveraging high-resolution risk projections?
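The "trained on residuals" idea above can be made concrete with a small sketch: the diffusion model does not learn the high-resolution field itself, only the difference between the truth and a crudely upsampled RCM field, and it is trained DDPM-style to predict the noise added to that residual. All names, the nearest-neighbour upsampling, and the noise schedule here are illustrative assumptions, not the actual R2-D2 code.

```python
import numpy as np

def make_residual_target(rcm_coarse, hires_target, scale):
    """Residual the generative model learns: high-res truth minus a
    naively upsampled RCM field (nearest-neighbour, for illustration)."""
    upsampled = np.repeat(np.repeat(rcm_coarse, scale, axis=0),
                          scale, axis=1)
    return hires_target - upsampled

def noise_step(x0, t, betas, rng):
    """Forward DDPM noising q(x_t | x_0) applied to a residual field.
    During training, a network conditioned on the coarse fields and
    terrain would be fit to predict the returned noise `eps`."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return x_t, eps
```

At inference time the model runs the reverse process to sample a residual, which is added back onto the upsampled RCM field; drawing several samples is what yields the uncertainty estimates mentioned above.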

  • Every year, natural disasters hit harder and closer to home. But when city leaders ask, "How will rising heat or wildfire smoke impact my home in 5 years?", our answers are often vague. Traditional climate models give sweeping predictions, but they fall short at the local level. It's like trying to navigate rush hour using a globe instead of a street map.

    That's where generative AI comes in. This year, our team at Google Research built a new genAI method to project climate impacts, taking predictions from the size of a small state down to the size of a small city. Our approach provides:
    - Unprecedented detail: regional environmental risk assessments at a small fraction of the cost of existing techniques
    - Higher accuracy: reduces fine-scale errors by over 40% for critical weather variables, and cuts error in extreme heat and precipitation projections by over 20% and 10%, respectively
    - Better estimates of complex risks: demonstrates remarkable skill in capturing complex environmental risks driven by regional phenomena, such as wildfire risk from Santa Ana winds, which statistical methods often miss

    The dynamical-generative downscaling process works in two steps:
    1) Physics-based first pass: a regional climate model downscales global Earth system data to an intermediate resolution (e.g., 50 km), much cheaper computationally than going straight to very high resolution.
    2) AI adds the fine details: our AI-based Regional Residual Diffusion-based Downscaling model ("R2D2") adds realistic, fine-scale detail to reach the target high resolution (typically less than 10 km), based on its training on high-resolution weather data.

    Why does this matter? Governments and utilities need these hyperlocal forecasts to prepare emergency response, invest in infrastructure, and protect vulnerable neighborhoods. And this is just one way AI is turbocharging climate resilience.

    Our teams at Google are already using AI to forecast floods, detect wildfires in real time, and help the UN respond faster after disasters. The next chapter of climate action means giving every city the tools to see, and shape, their own future.

    Congratulations Ignacio Lopez Gomez, Tyler Russell MBA, PMP, and teams on this important work!

    Discover the full details of this breakthrough: https://lnkd.in/g5u_WctW
    PNAS Paper: https://lnkd.in/gr7Acz25
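The reason the two-step design is cheap can be seen with back-of-the-envelope arithmetic: for a physics model, grid points grow quadratically as resolution increases and the timestep shrinks linearly (a CFL-type constraint), so cost scales roughly as resolution to the minus third power. The numbers and cost model below are purely illustrative assumptions, not figures from the paper; the generative refinement step is treated as negligible by comparison.

```python
def rcm_cost(resolution_km, domain_km=2000.0, sim_years=30.0):
    """Toy RCM cost model: cost ~ (grid points) x (timesteps).

    Grid points scale as (domain/resolution)^2 and the timestep shrinks
    linearly with resolution (CFL), so total cost scales as res^-3.
    Illustrative only; real costs depend on many more factors.
    """
    grid_points = (domain_km / resolution_km) ** 2
    timesteps = sim_years / resolution_km  # dt proportional to dx
    return grid_points * timesteps

# Physics to an intermediate 50 km, with AI refinement to ~9 km treated
# as essentially free, versus running the physics straight to 9 km:
hybrid = rcm_cost(50.0)
direct = rcm_cost(9.0)
speedup = direct / hybrid  # (50/9)^3, roughly 170x under this toy model
```

Under these assumptions the hybrid route is two orders of magnitude cheaper, which is the intuition behind "a small fraction of the cost" in the post.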

  • Yunsoo Choi

    Professor at UH (Air Quality/Weather/Climate Forecasting, Deep (Machine) Learning, Digital Twin)

    Deveshwar Singh has successfully defended his dissertation, becoming the 21st doctoral student to earn a Ph.D. under my supervision at the University of Houston. His dissertation, titled "Utilization of Deep-Learning Algorithms for Bias Correction and Forecasting of Weather, Air Quality, and Climate Parameters over Regional Scales," develops and applies deep-learning frameworks to enhance prediction and bias correction of key environmental variables across regional scales.

    In the first study, he addresses systematic biases in Indian Summer Monsoon Rainfall projections from CORDEX-SA regional climate models using two super-resolution methods, an Autoencoder-Decoder and a Residual Neural Network (ResNet), which ingest eight meteorological variables plus elevation at 0.25° resolution and generate bias-corrected precipitation at 0.05° resolution; the ResNet model achieves a five-fold increase in spatial resolution and reduces extreme-rainfall bias from 21.18 mm to -7.86 mm.

    The second study proposes the Deep-BCSI framework, which combines CNN-based bias correction with Partial-CNN-based spatial imputation to improve 72-hour PM₂.₅ forecasts over South Korea, increasing the Index of Agreement from 0.65–0.68 (CMAQ) to 0.71–0.80 and lowering RMSE by 25%–41% in metropolitan regions, with SHAP analysis confirming behavior consistent with atmospheric chemistry and meteorological processes.

    The third study introduces the Spatial-Temporal Attention ResNet Transformer (START), which integrates 17 meteorological variables with spatial and temporal encodings to outperform the NASA GEOS baseline over the contiguous United States, reducing RMSE for temperature by 30.3% and MAE for relative humidity by 32.2%, while SHAP-based interpretation and Monte Carlo Dropout with calibrated uncertainty provide reliable, well-explained predictions that support environmental management and policy decisions.
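The Index of Agreement cited for the PM₂.₅ study (0.65–0.68 improving to 0.71–0.80) is Willmott's d, a standard verification metric in air-quality modeling. A minimal sketch of the standard formula, for readers unfamiliar with it (this is the textbook definition, not code from the dissertation):

```python
import numpy as np

def index_of_agreement(pred, obs):
    """Willmott's index of agreement d, in [0, 1]; 1 = perfect match.

    d = 1 - sum((p - o)^2) / sum((|p - o_mean| + |o - o_mean|)^2)

    Unlike a plain correlation, d penalizes both bias and amplitude
    errors, which is why it is favored for forecast verification.
    """
    pred = np.asarray(pred, dtype=float)
    obs = np.asarray(obs, dtype=float)
    obs_mean = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den
```

A bias-correction step that moves forecasts closer to observations raises d directly, which is what the reported jump from ~0.66 to ~0.75 reflects.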

  • Yossi Matias

    Vice President, Google. Head of Google Research.

    Precipitation is one of the most challenging variables to simulate accurately in global climate models, as it depends on small-scale physical processes. In our latest research published in 𝘚𝘤𝘪𝘦𝘯𝘤𝘦 𝘈𝘥𝘷𝘢𝘯𝘤𝘦𝘴, we describe an advancement in our hybrid atmospheric model, NeuralGCM, which now leverages AI trained directly on NASA satellite observations to improve global precipitation simulations.

    Key results of this work:
    👉 Physics-AI Integration: The model combines a traditional fluid dynamics solver for large-scale processes with neural networks that learn to account for the effects of small-scale physics, specifically precipitation.
    👉 Improved Extremes: NeuralGCM demonstrates significant improvements in capturing the intensity of the top 0.1% of extreme rainfall events, representing heavy precipitation better than many traditional models.
    👉 Long-Term Accuracy: In multi-year simulations, the model achieved a 40% average error reduction over land compared to leading atmospheric models used in the latest Intergovernmental Panel on Climate Change (IPCC) report.
    👉 Daily Patterns: It more accurately reproduces the timing of peak daily precipitation, which is critical for hydrology and agricultural planning.

    We are already seeing the value of this approach in the field. A partnership between the University of Chicago and the Indian Ministry of Agriculture recently used NeuralGCM in a pilot program to help predict the onset of the monsoon season.

    NeuralGCM is part of our Earth AI program to better understand the physical Earth in ways that benefit society. We have made the code and model checkpoints openly available to the community.

    Read the full details on the Google Research blog by Janni Yuval: goo.gle/4qH63sU
    Paper: https://lnkd.in/d7E4US4W
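The "physics-AI integration" pattern described above can be summarized in one line: each timestep adds the tendencies from a resolved dynamical core to tendencies from a learned parameterization of unresolved processes. The sketch below shows only that structural idea; the function names are hypothetical stand-ins and bear no relation to NeuralGCM's actual (differentiable, spectral) implementation.

```python
import numpy as np

def hybrid_step(state, dt, dynamics, learned_physics):
    """One explicit-Euler step of a hybrid model.

    dynamics:        resolved large-scale tendency (fluid solver stand-in)
    learned_physics: neural-network tendency for unresolved small-scale
                     processes such as precipitation (stand-in)
    Both are hypothetical callables mapping state -> tendency.
    """
    return state + dt * (dynamics(state) + learned_physics(state))
```

Because the whole step is a differentiable composition, a model built this way can be trained end-to-end against observations, which is how satellite data can correct the small-scale physics without replacing the solver.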
