Challenges in refining global climate model outputs


Summary

Refining global climate model outputs means improving the accuracy and usefulness of large-scale climate predictions for local or regional decision-making. One of the biggest challenges in this process is dealing with uncertainties, biases, and differences between what models simulate and what is observed in the real world.

  • Select trustworthy models: Evaluate climate models against real-world data to choose those that best reflect historical patterns and trends for your area of interest.
  • Apply bias corrections: Adjust model outputs with statistical techniques so their results align more closely with observed data, making them suitable for practical applications.
  • Embrace new tools: Consider using machine learning and multi-model ensembles to blend outputs and generate more accurate, regionally detailed climate scenarios without overwhelming computing demands.
  • Shahid Iqbal

    Senior Water & Climate Change Expert | Climate Modeling, Flood & Drought Risk Resilience Planning | Nature-based Solutions | R, Python and GEE | Extreme Event Analysis

    5,030 followers

    Climate Model Selection and Bias Correction for Impact Assessment

    Climate impact assessments rely heavily on climate model outputs. However, not all models perform equally well at regional scales, and raw model outputs often contain systematic biases. This video explains how climate models are selected and how bias correction is applied to ensure reliable impact assessments.

    Global Climate Models, or GCMs, are designed to simulate large-scale climate processes, but their performance can vary significantly across regions and variables such as temperature and precipitation. Using poorly performing models can introduce large uncertainties into impact studies, particularly for hydrology, agriculture, and risk assessments. Therefore, selecting appropriate models is a critical first step in climate impact analysis.

    Climate models are typically evaluated against observed or reanalysis data over a historical baseline period. Key performance metrics include correlation, standard deviation, and overall agreement with observations. Tools such as Taylor diagrams are commonly used to compare multiple models simultaneously. Models that best reproduce observed climate variability and trends are selected, while poorly performing models may be excluded. In many studies, a multi-model ensemble is preferred, as it reduces individual model uncertainty and provides more robust projections.

    Even well-performing climate models exhibit systematic biases due to limitations in model structure, resolution, and parameterization. These biases can significantly affect impact models, especially when simulating extremes such as floods, droughts, or heatwaves. Bias correction adjusts model outputs so that their statistical properties match observed climate data, making them suitable for downstream impact assessments.

    Several bias correction techniques are commonly used. Simple methods include mean bias correction and scaling, which adjust the average values of climate variables. More advanced approaches, such as quantile mapping, correct the entire distribution of the variable, including extremes. Importantly, bias correction is calibrated over the historical period and then transferred consistently to future projections, preserving the climate change signal.

    Once bias-corrected, climate model outputs can be confidently used as inputs for impact models, such as hydrological models, crop models, or risk assessments. This process improves the realism of simulated impacts while maintaining consistency with projected future climate change under different emission scenarios.

    In summary, careful climate model selection ensures that only reliable models inform impact studies, while bias correction bridges the gap between model simulations and observed climate. Together, these steps are essential for producing credible, policy-relevant climate impact assessments.
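The quantile mapping step described in the post can be sketched in a few lines. This is a minimal empirical version on synthetic data (the gamma-distributed "observations" and the invented bias are illustrative only); operational studies usually calibrate per month or season and treat dry days in precipitation separately.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: locate each future model value on the
    historical model CDF, then read off the observed value at that quantile."""
    ranks = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs_hist, ranks)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)        # synthetic "observed" daily precipitation
model = obs * 0.7 + 1.5                # historical model run with a systematic bias
future = model + 2.0                   # future run: same bias plus a change signal
corrected = quantile_map(model, obs, future)
```

Mapping the historical run through the same function reproduces the observed distribution, while the added change signal carries over into the corrected future series — the "preserving the climate change signal" property the video emphasizes.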

  • Erich Fischer

    Professor at ETH Zürich, climate scientist with interest in weather and climate extremes, lead author of the IPCC AR6 and upcoming AR7

    5,657 followers

    Can climate models reproduce observed trends? The answer can be challenging. Our new review paper in Science Advances, led by Isla Simpson and Tiffany Shaw, discusses challenges and ways forward in confronting climate models with observations.

    It's tricky. Climate models and observations may disagree (1) by chance, due to unforced internal variability, (2) due to error in the model response, (3) due to inaccurate prescribed external forcings, (4) due to incomplete or uncertain observations, or (5) due to inappropriate comparison methods.

    The paper discusses ways forward in disentangling the reasons for potential mismatches between observed and simulated trends. It provides a long catalogue of examples of successes, discrepancies, and unclear situations that require further attention. https://lnkd.in/dHrEJfDh

    Led by Isla Simpson and Tiffany Shaw with Paulo Ceppi, Amy Clement, Erich Fischer, Kevin Grise, Angeline Pendergrass, James Screen, Robert Jinglin Wills, Tim Woollings, Russell Blackport, Joonsuk Kang, and Stephen Po-Chedley, supported by US CLIVAR.
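A common check for reason (1), disagreement by chance due to internal variability, is to ask whether the observed trend falls inside the spread of trends across an ensemble run with identical forcing. A toy sketch with synthetic data (the forcing rate, noise level, and ensemble size are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(40)

def trend(series):
    return np.polyfit(years, series, 1)[0]   # least-squares slope per year

forced = 0.02 * years   # identical forced warming in every member, 0.02 K/yr
# 100 ensemble members differing only in (synthetic) internal variability
ensemble = [forced + rng.normal(0.0, 0.15, years.size) for _ in range(100)]
member_trends = np.array([trend(m) for m in ensemble])

obs = forced + rng.normal(0.0, 0.15, years.size)  # pseudo-observations
obs_trend = trend(obs)

# If the observed trend sits inside the 5-95% ensemble range, a mismatch with
# any single member could be explained by internal variability alone
lo, hi = np.percentile(member_trends, [5, 95])
consistent = lo <= obs_trend <= hi
```

When `consistent` is False, the other four explanations (model response error, forcing error, observational uncertainty, or an inappropriate comparison) come into play.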

  • Jozef Pecho

    Climate/NWP Model & Data Analyst at Floodar (Meratch), GOSPACE LABS | Predicting floods, protecting lives

    3,093 followers

    🌍 Climate scientists often face a trade-off: Global Climate Models (GCMs) are essential for long-term climate projections — but they operate at coarse spatial resolution, making them too crude for regional or local decision-making. To get fine-scale data, researchers use Regional Climate Models (RCMs). These add crucial spatial detail, but come at a very high computational cost, often requiring supercomputers to run for months.

    ➡️ A new paper introduces EnScale — a machine learning framework that offers an efficient and accurate alternative to running full RCM simulations. Instead of solving the complex physics from scratch, EnScale "learns" the relationship between GCMs and RCMs by training on existing paired datasets. It then generates high-resolution, realistic, and diverse regional climate fields directly from GCM inputs.

    What makes EnScale stand out?
    ✅ It uses a generative ML model trained with a statistically principled loss (energy score), enabling probabilistic outputs that reflect natural variability and uncertainty
    ✅ It is multivariate – it learns to generate temperature, precipitation, radiation, and wind jointly, preserving spatial and cross-variable coherence
    ✅ It is computationally lightweight – training and inference are up to 10–20× faster than state-of-the-art generative approaches
    ✅ It includes an extension (EnScale-t) for generating temporally consistent time series – a must for studying events like heatwaves or prolonged droughts

    This approach opens the door to faster, more flexible generation of regional climate scenarios, essential for risk assessment, infrastructure planning, and climate adaptation — especially where computational resources are limited.

    📄 Read the full paper: EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules ---> https://lnkd.in/dQr5rmWU (code: https://lnkd.in/dQk_Jv8g)

    👏 Congrats to the authors — a strong step forward for ML-based climate modeling!
#climateAI #downscaling #generativeAI #machinelearning #climatescience #EnScale #RCM #GCM #ETHZurich #climatescenarios
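The energy score mentioned above is a proper scoring rule for ensembles of multivariate fields: it rewards ensembles whose members are distributed like the verifying observation. A minimal sketch of the plain score on synthetic ensembles (EnScale's actual training loss is more elaborate; everything below is illustrative):

```python
import numpy as np

def energy_score(samples, obs):
    """Plain energy score of an ensemble `samples` (m x d) against one
    observed field `obs` (d,). Proper scoring rule; lower is better."""
    m = len(samples)
    term1 = np.mean([np.linalg.norm(x - obs) for x in samples])
    term2 = np.mean([np.linalg.norm(samples[i] - samples[j])
                     for i in range(m) for j in range(m)])
    return term1 - 0.5 * term2

rng = np.random.default_rng(1)
obs = rng.normal(0.0, 1.0, 16)            # one flattened "high-res" field
good = rng.normal(0.0, 1.0, (50, 16))     # ensemble from the right distribution
biased = rng.normal(2.0, 1.0, (50, 16))   # ensemble with a systematic shift
```

The first term penalizes distance from the observation; the second rewards ensemble spread, so a model cannot score well by collapsing all members onto one deterministic field.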

  • Afed Ullah Khan, PhD

    Hydrologist | Climate Change & Water Resources Researcher | Remote Sensing & AI for Sustainable Development | GIS, GEE, Python, R | Consultant GIZ, UNICEF & Adam Smith International

    2,970 followers

    🌍 Improving Climate Model Accuracy Using Machine Learning: A Multi-Model Ensemble Approach

    📢 Just wrapped up an exciting project where I used Bayesian Optimization + XGBoost to compute a Multi-Model Ensemble (MME) of Global Climate Models (GCMs).

    🧠 The Goal: Climate models vary widely. Instead of relying on a single GCM, I combined outputs from multiple models—CESM2-WACCM, INM-CM4-8, and EC-Earth3—to better match the observed record.

    🔧 The Process:
    ✅ Data Preprocessing
    ✔ Cleaned + normalized GCM & observed data
    ✔ Filled missing values and ensured time-consistent splits
    ✅ Bayesian Optimization
    Used scikit-optimize to find optimal hyperparameters for an XGBoost model, accelerating convergence with smart probabilistic search.
    ✅ Grid Search Refinement
    Fine-tuned the best Bayesian result using a local Grid Search for extra precision.
    ✅ Evaluation
    📊 Metrics: R², RMSE, and NSE
    📈 Visuals: Time series comparison + residual analysis

    🔍 Why It Matters: MMEs are crucial for reducing uncertainty in climate predictions. By integrating machine learning with GCM outputs, we can boost reliability for real-world decision-making—from water resource management to climate adaptation strategies.

    🚀 YouTube video link 🌱 https://lnkd.in/d5k3wFrx

    #ClimateChange #MachineLearning #XGBoost #BayesianOptimization #GCM #EnvironmentalScience #AI4Climate #Hydrology #DataScience #ClimateModeling #Python #TimeSeries #MME
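The core MME idea, learning how to weight several GCMs against observations, can be sketched with plain least squares as a simplified stand-in for the XGBoost + Bayesian optimization pipeline in the post. All series and model biases below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 240                                                   # 20 years, monthly
truth = 15 + 5 * np.sin(2 * np.pi * np.arange(n) / 12)    # synthetic observations

# Three invented "GCMs", each with its own bias and noise
gcms = np.column_stack([
    truth * 0.9 + 2.0 + rng.normal(0, 0.8, n),
    truth * 1.1 - 3.0 + rng.normal(0, 1.0, n),
    truth + 1.0 + rng.normal(0, 0.5, n),
])

# Least-squares MME weights; the intercept column removes shared bias
X = np.column_stack([gcms, np.ones(n)])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)
mme = X @ weights

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, one of the metrics listed in the post."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(sim):
    return np.sqrt(np.mean((sim - truth) ** 2))

best_single = min(rmse(gcms[:, k]) for k in range(3))
```

A gradient-boosted model replaces the linear weights with a nonlinear, possibly state-dependent combination, which is where the hyperparameter search in the post pays off.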

  • Greg Bronevetsky

    Software Engineer at X, the moonshot factory

    3,022 followers

    Modeling Talk: Oct 28, 2025
    Climate modeling in the era of AI
    Laure Zanna, New York University
    Video Recording: https://lnkd.in/gMxW5sgY
    Slides and Summary: https://lnkd.in/gPzWBSea

    Abstract: While AI has been disrupting conventional weather forecasting, we are only beginning to witness the impact of AI on long-term climate simulations. The fidelity and reliability of climate models have been limited by computing capabilities. These limitations lead to inaccurate representations of key processes such as convection, clouds, or mixing, or restrict the ensemble size of climate predictions, and are therefore a significant hurdle to enhancing climate simulations and their predictions. Here, I will discuss a new generation of climate models with AI representations of unresolved ocean physics, learned from high-fidelity simulations, and their impact on reducing biases in climate simulations. The simulations are performed with operational ocean model components. I will further demonstrate the potential of AI to accelerate climate predictions and increase their reliability through the generation of fully AI-driven emulators, which can reproduce decades of climate model output in seconds with high accuracy.

    Bio: Laure Zanna is a physical oceanographer and climate physicist in the Department of Mathematics at the Courant Institute and the Center for Data Science, NYU. She holds the Joseph B. Keller and Herbert B. Keller Professorship in Applied Mathematics. Her research focuses on understanding, simulating, and predicting the role of the ocean in climate on local and global scales. She combines theory, numerical simulations, statistics, and machine learning to tackle a wide range of problems in fluid dynamics and climate, including turbulence, multiscale modeling, ocean heat and carbon uptake, and sea level. Since 2020, she has led M²LInES, an international collaboration sponsored by Schmidt Sciences dedicated to improving climate models using scientific machine learning. In 2020, Prof. Zanna received the Nicholas P. Fofonoff Award from the American Meteorological Society "for exceptional creativity in the development and application of new concepts in ocean and climate dynamics", and was the 2022 WHOI Geophysical Fluid Dynamics principal lecturer.

    #modeling #simulation #ai #ml #research #datascience #climatechange #climatemodeling #oceanmodeling #weathermodels #neuralemulator
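The idea of learning unresolved physics from high-fidelity simulations can be illustrated with a toy closure problem. Here a polynomial regression stands in for the neural networks used in practice, and the "high-resolution" training data are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Pretend high-resolution output: for each coarse state we can diagnose the
# subgrid tendency the coarse model cannot resolve (invented cubic law + noise)
x_coarse = rng.uniform(-2.0, 2.0, 500)
subgrid = 0.3 * x_coarse ** 3 + rng.normal(0.0, 0.05, 500)

# Learn the closure; a cubic polynomial stands in for the neural network
coeffs = np.polyfit(x_coarse, subgrid, 3)
closure = np.poly1d(coeffs)

# The coarse model then adds closure(state) where the missing physics acted
predicted = closure(np.array([-1.5, 0.0, 1.5]))
```

The real systems described in the talk do this with neural networks mapping full coarse-grained ocean fields to subgrid tendencies, embedded inside operational model components.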

  • Yossi Matias

    Vice President, Google. Head of Google Research.

    54,267 followers

    Precipitation is one of the most challenging variables to accurately simulate in global climate models, as it depends on small-scale physical processes. In our latest research published in Science Advances, we describe an advancement in our hybrid atmospheric model, NeuralGCM, which now leverages AI trained directly on NASA satellite observations to improve global precipitation simulations.

    Key results of this work:
    👉 Physics-AI Integration: The model combines a traditional fluid dynamics solver for large-scale processes with AI neural networks that learn to account for the effects of small-scale physics, specifically precipitation.
    👉 Improved Extremes: NeuralGCM demonstrates significant improvements in capturing the intensity of the top 0.1% of extreme rainfall events, better representing heavy precipitation than many traditional models.
    👉 Long-Term Accuracy: In multi-year simulations, the model achieved a 40% average error reduction over land compared to leading atmospheric models used in the latest Intergovernmental Panel on Climate Change (IPCC) report.
    👉 Daily Patterns: It more accurately reproduces the timing of peak daily precipitation, which is critical for hydrology and agricultural planning.

    We are already seeing the value of this approach in the field. A partnership between the University of Chicago and the Indian Ministry of Agriculture recently used NeuralGCM in a pilot program to help predict the onset of the monsoon season. NeuralGCM is part of our Earth AI program to better understand the physical earth in ways that benefit society. We have made the code and model checkpoints openly available to the community.

    Read the full details on the Google Research blog by Janni Yuval: goo.gle/4qH63sU
    Paper: https://lnkd.in/d7E4US4W
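The hybrid pattern behind this work, a conventional solver for resolved dynamics plus a learned term for unresolved effects, can be sketched on a toy ODE. This is only an illustration of the pattern, not NeuralGCM's architecture; the equations and coefficients are invented:

```python
import numpy as np

DT = 0.1

def physics_step(state, dt=DT):
    return state - dt * 0.5 * state                        # resolved linear damping

def true_step(state, dt=DT):
    # reference system including a "small-scale" quadratic term
    return state - dt * (0.5 * state + 0.2 * state ** 2)

# Diagnose the missing tendency from reference data and fit a correction
states = np.linspace(0.0, 2.0, 300)
missing = (true_step(states) - physics_step(states)) / DT  # equals -0.2 * state**2
correction = np.poly1d(np.polyfit(states, missing, 2))

def hybrid_step(state, dt=DT):
    # cheap physics update plus the learned correction for unresolved effects
    return physics_step(state, dt) + dt * correction(state)
```

In the real model the learned component is a neural network trained on satellite observations rather than a polynomial, but the division of labor is the same: the solver handles what it resolves, the learned term supplies what it cannot.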
