🌍 Climate scientists often face a trade-off: Global Climate Models (GCMs) are essential for long-term climate projections, but they operate at coarse spatial resolution, making them too crude for regional or local decision-making. To get fine-scale data, researchers use Regional Climate Models (RCMs). These add crucial spatial detail but come at a very high computational cost, often requiring supercomputers to run for months.

➡️ A new paper introduces EnScale, a machine learning framework that offers an efficient and accurate alternative to running full RCM simulations. Instead of solving the complex physics from scratch, EnScale "learns" the relationship between GCMs and RCMs by training on existing paired datasets. It then generates high-resolution, realistic, and diverse regional climate fields directly from GCM inputs.

What makes EnScale stand out?
✅ It uses a generative ML model trained with a statistically principled loss (the energy score), enabling probabilistic outputs that reflect natural variability and uncertainty
✅ It is multivariate: it learns to generate temperature, precipitation, radiation, and wind jointly, preserving spatial and cross-variable coherence
✅ It is computationally lightweight: training and inference are up to 10–20× faster than state-of-the-art generative approaches
✅ It includes an extension (EnScale-t) for generating temporally consistent time series, a must for studying events like heatwaves or prolonged droughts

This approach opens the door to faster, more flexible generation of regional climate scenarios, essential for risk assessment, infrastructure planning, and climate adaptation, especially where computational resources are limited.

📄 Read the full paper: EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules ---> https://lnkd.in/dQr5rmWU (code: https://lnkd.in/dQk_Jv8g)

👏 Congrats to the authors on a strong step forward for ML-based climate modeling!
#climateAI #downscaling #generativeAI #machinelearning #climatescience #EnScale #RCM #GCM #ETHZurich #climatescenarios
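The energy score mentioned above is a multivariate proper scoring rule that compares an ensemble of generated fields against a single observed field. A minimal sketch of the standard sample-based estimator (not the paper's own code, just the textbook formula ES ≈ mean‖xᵢ − y‖ − ½·mean‖xᵢ − xⱼ‖) looks like:

```python
import numpy as np

def energy_score(samples: np.ndarray, obs: np.ndarray) -> float:
    """Sample-based energy score for an ensemble `samples` of shape (m, d)
    against one observed (flattened) field `obs` of shape (d,). Lower is better;
    the score is zero only when every ensemble member matches the observation."""
    # E||X - y||: mean distance from ensemble members to the observation
    term1 = np.mean(np.linalg.norm(samples - obs, axis=1))
    # E||X - X'||: mean pairwise distance between ensemble members
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=2))
    return term1 - 0.5 * term2
```

The second term rewards ensemble spread, which is why minimizing this loss yields diverse, probabilistic outputs rather than a single blurred mean.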
Testing climate solutions for regional models
Summary
Testing climate solutions for regional models involves evaluating innovative tools and approaches that predict local climate impacts with greater detail, accuracy, and speed than traditional global models. By combining artificial intelligence with physics-based models, researchers can generate high-resolution forecasts that are crucial for local planning, disaster preparedness, and climate adaptation.
- Embrace hybrid modeling: Combine regional climate models with AI-driven downscaling techniques to generate detailed, localized climate projections without the need for months of supercomputing time.
- Prioritize local relevance: Use these advanced regional solutions to improve risk assessment and infrastructure planning by providing community-specific forecasts for extreme weather, air quality, and long-term climate impacts.
- Integrate explainable AI: Leverage interpretable machine learning frameworks to ensure predictions are both reliable and transparent for decision-makers and stakeholders.
-
Every year, natural disasters hit harder and closer to home. But when city leaders ask, "How will rising heat or wildfire smoke impact my home in 5 years?", our answers are often vague. Traditional climate models give sweeping predictions, but they fall short at the local level. It's like trying to navigate rush hour using a globe instead of a street map.

That's where generative AI comes in. This year, our team at Google Research built a new genAI method to project climate impacts, taking predictions from the size of a small state down to the size of a small city. Our approach provides:
- Unprecedented detail: regional environmental risk assessments at a small fraction of the cost of existing techniques
- Higher accuracy: reduces fine-scale errors by over 40% for critical weather variables, and reduces error in extreme heat and precipitation projections by over 20% and 10%, respectively
- Better estimates of complex risks: demonstrates remarkable skill in capturing environmental risks driven by regional phenomena, such as wildfire risk from Santa Ana winds, which statistical methods often miss

The dynamical-generative downscaling process works in two steps:
1) Physics-based first pass: a regional climate model downscales global Earth system data to an intermediate resolution (e.g., 50 km), which is much cheaper computationally than going straight to very high resolution.
2) AI adds the fine details: our AI-based Regional Residual Diffusion-based Downscaling model ("R2D2") adds realistic, fine-scale detail to reach the target high resolution (typically less than 10 km), based on its training on high-resolution weather data.

Why does this matter? Governments and utilities need these hyperlocal forecasts to prepare emergency response, invest in infrastructure, and protect vulnerable neighborhoods. And this is just one way AI is turbocharging climate resilience.
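The residual idea behind step 2 can be sketched in a few lines. This is only an illustration of how such a training target is typically constructed (upsample the intermediate physics output, subtract it from the high-resolution truth, and let the generative model learn the difference); the helper names and the nearest-neighbour upsampling are my own simplifications, not the R2D2 pipeline:

```python
import numpy as np

def upsample(field: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling stand-in; a real regridding step
    would use bilinear or bicubic interpolation on the model grid."""
    return np.repeat(np.repeat(field, factor, axis=0), factor, axis=1)

def make_residual_target(intermediate: np.ndarray, high_res: np.ndarray,
                         factor: int):
    """Build the residual a generative model would be trained to predict:
    high-resolution truth minus the upsampled intermediate-resolution field.
    At inference time, the sampled residual is added back to the upsampled
    physics output to produce the final high-resolution map."""
    coarse_up = upsample(intermediate, factor)
    residual = high_res - coarse_up
    return coarse_up, residual
```

Learning only the residual keeps the large-scale physics anchored by the regional model, while the diffusion model supplies the fine-scale detail.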
Our teams at Google are already using AI to forecast floods, detect wildfires in real time, and help the UN respond faster after disasters. The next chapter of climate action means giving every city the tools to see—and shape—their own future. Congratulations Ignacio Lopez Gomez, Tyler Russell MBA, PMP, and teams on this important work! Discover the full details of this breakthrough: https://lnkd.in/g5u_WctW PNAS Paper: https://lnkd.in/gr7Acz25
-
Deveshwar Singh has successfully defended his Ph.D. and is my 21st doctoral student to earn the Ph.D. degree under my supervision at the University of Houston. His dissertation, titled "Utilization of Deep-Learning Algorithms for Bias Correction and Forecasting of Weather, Air Quality, and Climate Parameters over Regional Scales," develops and applies deep learning frameworks to enhance prediction and bias correction of key environmental variables across regional scales.

In the first study, he addresses systematic biases in Indian Summer Monsoon Rainfall projections from CORDEX-SA regional climate models using two super-resolution methods, an Autoencoder-Decoder and a Residual Neural Network (ResNet), which ingest eight meteorological variables plus elevation at 0.25° resolution and generate bias-corrected precipitation at 0.05° resolution; the ResNet model achieves a five-fold increase in spatial resolution and reduces extreme rainfall bias from 21.18 mm to -7.86 mm.

The second study proposes the Deep-BCSI framework, which combines CNN-based bias correction with Partial CNN-based spatial imputation to improve 72-hour PM₂.₅ forecasts over South Korea, increasing the Index of Agreement from 0.65–0.68 (CMAQ) to 0.71–0.80 and lowering RMSE by 25%–41% in metropolitan regions, with SHAP analysis confirming behavior consistent with atmospheric chemistry and meteorological processes.

The third study introduces the Spatial-Temporal Attention ResNet Transformer (START), which integrates 17 meteorological variables with spatial and temporal encodings to outperform the NASA GEOS baseline over the contiguous United States, reducing RMSE for temperature by 30.3% and MAE for relative humidity by 32.2%, while SHAP-based interpretation and Monte Carlo Dropout with calibrated uncertainty provide reliable, well-explained predictions that support environmental management and policy decisions.
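The Index of Agreement cited for the PM₂.₅ study is Willmott's d, a standard skill metric bounded on [0, 1]. A minimal implementation of the textbook formula (not code from the dissertation) is:

```python
import numpy as np

def index_of_agreement(pred: np.ndarray, obs: np.ndarray) -> float:
    """Willmott's index of agreement d:
    d = 1 - sum((P - O)^2) / sum((|P - Obar| + |O - Obar|)^2),
    where Obar is the observed mean. d = 1 means perfect agreement,
    d = 0 means no agreement."""
    obs_mean = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - num / den
```

Unlike RMSE, d is dimensionless and normalized against observed variability, which is why it is a common companion metric in air-quality model evaluation.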
-
Climate models have long struggled with coarse resolution, limiting precise climate risk insights. But AI-driven methods are now changing this, unlocking more detailed intelligence than traditional physics-based approaches.

I recently spoke with a research scientist at Google Research who highlighted a promising new hybrid approach. This method combines physics-based General Circulation Models (GCMs) with AI refinement, significantly improving resolution. The process starts with Regional Climate Models (RCMs) anchoring physical consistency at ~45 km resolution. Then, it uses a diffusion model, R2-D2, to enhance output resolution to 9 km, making estimates more suitable for projecting extreme climate events.

🔥 About R2-D2
R2-D2 (Regional Residual Diffusion-based Downscaling) is a diffusion model trained on residuals between RCM outputs and high-resolution targets. Conditioned on physical inputs like coarse climate fields and terrain, it rapidly generates high-res climate maps (~800 fields/hour on GPUs), complete with uncertainty estimates.

✅ Why this matters
- Offers detailed projections of extreme climate events for precise risk quantification.
- Delivers probabilistic forecasts, improving risk modeling and scenario planning.
- Provides another high-resolution modeling approach, enriching ensemble strategies for climate risk projections.

👉 Read the full paper: https://lnkd.in/gU6qmZTR
👉 An excellent explainer blog: https://lnkd.in/gAEJFEV2

If your work involves climate risk assessment, adaptation planning, or quantitative modeling, how are you leveraging high-resolution risk projections?
-
𝐅𝐫𝐨𝐦 𝐒𝐮𝐩𝐞𝐫𝐜𝐨𝐦𝐩𝐮𝐭𝐞𝐫𝐬 𝐭𝐨 𝐀𝐈: 𝐓𝐞𝐬𝐭𝐢𝐧𝐠 𝐆𝐫𝐚𝐩𝐡𝐂𝐚𝐬𝐭 𝐨𝐧 𝐈𝐧𝐝𝐢𝐚’𝐬 𝐄𝐱𝐭𝐫𝐞𝐦𝐞 𝐖𝐞𝐚𝐭𝐡𝐞𝐫

For decades, predicting the chaotic behaviour of our atmosphere has depended on massive supercomputers solving complex numerical equations. I recently decided to put Google DeepMind’s GraphCast to a real-world test, to see how a purely data-driven model compares against traditional physics-based forecasting.

🧪 What I tried
I developed an end-to-end pipeline using Earth2Studio and JAX to:
- Ingest historical global atmospheric data
- Run deterministic hindcasts
- Evaluate performance on India’s most extreme weather events

🌧️ The Results Were Striking
⛈️ Kerala Floods (2018): Captured orographic lifting over the Western Ghats and predicted >450 mm rainfall anomalies several days in advance.
🌀 Super Cyclone Amphan (2020): Accurately reproduced the counter-clockwise cyclonic circulation set by the Coriolis effect and the formation of a tight eyewall while tracking intensification over the Bay of Bengal.
🎯 Cyclone Biparjoy (2023): The toughest test. Despite its erratic trajectory, the model, run 3.5 days before landfall, predicted a track remarkably close to the IMD-observed landfall at Jakhau Port.

💬 Let’s Discuss
While global AI weather models like GraphCast and FourCastNet demonstrate high skill in predicting synoptic-scale patterns, they often suffer from spectral blurring, failing to capture localized, high-intensity events such as cloudbursts. I am exploring regional downscaling architectures to bridge this gap. Specifically, I’m interested in hybrid physics-AI frameworks, such as incorporating conservation laws or differentiable solvers, to ensure sub-grid scale reconstructions remain physically consistent.

#MachineLearning #AI #Meteorology #GraphCast #CFD #EarthScience #DeepLearning #WeatherForecasting #DataScience #JAX #ClimateTech
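Spectral blurring, mentioned in the post above, is usually diagnosed by comparing a forecast field's radially averaged power spectrum against the analysis: blurred AI forecasts lose power at high wavenumbers. A generic NumPy sketch of that diagnostic (my own illustration, not part of the described pipeline) is:

```python
import numpy as np

def radial_power_spectrum(field: np.ndarray) -> np.ndarray:
    """Radially averaged power spectrum of a 2-D field. Comparing a
    forecast's spectrum to the verifying analysis reveals spectral
    blurring as a steep power drop-off at high wavenumbers."""
    ny, nx = field.shape
    # 2-D FFT power, shifted so the zero wavenumber sits at the grid center
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    # Integer radial wavenumber of every grid point from the center
    ky, kx = np.indices((ny, nx))
    k = np.hypot(ky - ny // 2, kx - nx // 2).astype(int)
    # Average power within each radial bin
    counts = np.bincount(k.ravel())
    sums = np.bincount(k.ravel(), weights=power.ravel())
    return sums / np.maximum(counts, 1)
```

A constant field concentrates all power in the zero-wavenumber bin, while a sharp, detailed field retains power out to high k; plotting forecast vs. analysis spectra on log axes makes the blurring visible at a glance.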