Improving global climate data representation


Summary

Improving global climate data representation means making climate information clearer, more accurate, and easier to use—especially at local and regional levels—so communities and decision-makers can better plan for weather events, environmental risks, and climate change. This involves using advanced technologies like AI and machine learning, as well as innovative visualization techniques, to turn vast and complex climate data into practical insights.

  • Use advanced modeling: Tap into AI-powered tools and hybrid models to generate detailed regional climate projections, making complex weather patterns easier to understand and apply locally.
  • Apply clear visualization: Present long-term climate trends and emissions with creative charts and timelines that help audiences quickly grasp important changes over time.
  • Integrate local data: Combine global and local rainfall or climate records with geographic mapping to improve risk assessments and infrastructure planning at the community level.
Summarized by AI based on LinkedIn member posts
  • View profile for Jozef Pecho

    Climate/NWP Model & Data Analyst at Floodar (Meratch), GOSPACE LABS | Predicting floods, protecting lives

    3,093 followers

    🌍 Climate scientists often face a trade-off: Global Climate Models (GCMs) are essential for long-term climate projections — but they operate at coarse spatial resolution, making them too crude for regional or local decision-making. To get fine-scale data, researchers use Regional Climate Models (RCMs). These add crucial spatial detail, but come at a very high computational cost, often requiring supercomputers to run for months.

    ➡️ A new paper introduces EnScale — a machine learning framework that offers an efficient and accurate alternative to running full RCM simulations. Instead of solving the complex physics from scratch, EnScale "learns" the relationship between GCMs and RCMs by training on existing paired datasets. It then generates high-resolution, realistic, and diverse regional climate fields directly from GCM inputs.

    What makes EnScale stand out?
    ✅ It uses a generative ML model trained with a statistically principled loss (energy score), enabling probabilistic outputs that reflect natural variability and uncertainty
    ✅ It is multivariate – it learns to generate temperature, precipitation, radiation, and wind jointly, preserving spatial and cross-variable coherence
    ✅ It is computationally lightweight – training and inference are up to 10–20× faster than state-of-the-art generative approaches
    ✅ It includes an extension (EnScale-t) for generating temporally consistent time series – a must for studying events like heatwaves or prolonged droughts

    This approach opens the door to faster, more flexible generation of regional climate scenarios, essential for risk assessment, infrastructure planning, and climate adaptation — especially where computational resources are limited.

    📄 Read the full paper: EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules ---> https://lnkd.in/dQr5rmWU (code: https://lnkd.in/dQk_Jv8g)
    👏 Congrats to the authors — a strong step forward for ML-based climate modeling!

    #climateAI #downscaling #generativeAI #machinelearning #climatescience #EnScale #RCM #GCM #ETHZurich #climatescenarios
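
    For readers wondering what the "statistically principled loss (energy score)" refers to, below is a minimal NumPy sketch of the standard sample-based energy-score estimator. It is a generic illustration of the scoring rule, not code from the EnScale paper, and the toy data is invented.

    ```python
    # Sample-based estimator of the energy score for an ensemble of generated
    # fields x_1..x_m against an observed field y:
    #   ES = (1/m) * sum_i ||x_i - y||  -  1/(2 m^2) * sum_{i,j} ||x_i - x_j||
    # Lower is better; the score rewards both accuracy and realistic spread.
    import numpy as np

    def energy_score(ensemble: np.ndarray, observation: np.ndarray) -> float:
        """ensemble: (m, d) generated samples flattened to d values each;
        observation: (d,) target field."""
        # Mean distance between each ensemble member and the observation.
        term1 = np.mean(np.linalg.norm(ensemble - observation, axis=1))
        # Mean pairwise distance between ensemble members (spread term).
        diffs = ensemble[:, None, :] - ensemble[None, :, :]
        term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=2))
        return term1 - term2

    # Toy example: 8 samples of a 100-value field scattered around a target.
    rng = np.random.default_rng(0)
    obs = rng.normal(size=100)
    ens = obs + rng.normal(scale=0.5, size=(8, 100))
    print(round(energy_score(ens, obs), 3))
    ```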

  • View profile for Yossi Matias

    Vice President, Google. Head of Google Research.

    54,272 followers

    Precipitation is one of the most challenging variables to accurately simulate in global climate models as it depends on small-scale physical processes. In our latest research published in Science Advances, we describe an advancement in our hybrid atmospheric model, NeuralGCM, which now leverages AI trained directly on NASA satellite observations to improve global precipitation simulations.

    Key results of this work:
    👉 Physics-AI Integration: The model combines a traditional fluid dynamics solver for large-scale processes with AI neural networks that learn to account for the effects of small-scale physics, specifically precipitation.
    👉 Improved Extremes: NeuralGCM demonstrates significant improvements in capturing the intensity of the top 0.1% of extreme rainfall events, better representing heavy precipitation than many traditional models.
    👉 Long-Term Accuracy: In multi-year simulations, the model achieved a 40% average error reduction over land compared to leading atmospheric models used in the latest Intergovernmental Panel on Climate Change (IPCC) report.
    👉 Daily Patterns: It more accurately reproduces the timing of peak daily precipitation, which is critical for hydrology and agricultural planning.

    We are already seeing the value of this approach in the field. A partnership between the University of Chicago and the Indian Ministry of Agriculture recently used NeuralGCM in a pilot program to help predict the onset of the monsoon season.

    NeuralGCM is part of our Earth AI program to better understand the physical earth in ways that benefit society. We have made the code and model checkpoints openly available to the community.

    Read the full details on the Google Research blog by Janni Yuval: goo.gle/4qH63sU
    Paper: https://lnkd.in/d7E4US4W
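
    As an illustration of the "physics-AI integration" idea described above, here is a deliberately simplified Python toy: a cheap physics update stands in for the fluid-dynamics core, and an untrained random map stands in for the learned small-scale correction. It is not NeuralGCM's API or architecture, only the general hybrid time-stepping pattern.

    ```python
    # Toy hybrid step: physics solver for resolved dynamics + learned tendency
    # standing in for unresolved (e.g. precipitation-related) processes.
    import numpy as np

    rng = np.random.default_rng(42)

    def physics_step(state: np.ndarray, dt: float = 0.1) -> np.ndarray:
        """Resolved dynamics: simple 1-D diffusion standing in for the dynamical core."""
        laplacian = np.roll(state, 1) - 2.0 * state + np.roll(state, -1)
        return state + dt * laplacian

    # Placeholder "neural network": a fixed random linear map. In a real hybrid
    # model this component is trained; here it is random and purely illustrative.
    W = rng.normal(scale=0.01, size=(64, 64))

    def learned_tendency(state: np.ndarray) -> np.ndarray:
        return np.tanh(W @ state)

    state = rng.normal(size=64)
    for _ in range(100):
        state = physics_step(state)                      # large-scale physics
        state = state + 0.1 * learned_tendency(state)    # ML closure for small scales
    print(state[:5])
    ```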

  • Every year, natural disasters hit harder and closer to home. But when city leaders ask, "How will rising heat or wildfire smoke impact my home in 5 years?"—our answers are often vague. Traditional climate models give sweeping predictions, but they fall short at the local level. It's like trying to navigate rush hour using a globe instead of a street map.

    That’s where generative AI comes in. This year, our team at Google Research built a new genAI method to project climate impacts—taking predictions from the size of a small state to the size of a small city. Our approach provides:
    - Unprecedented detail: regional environmental risk assessments at a small fraction of the cost of existing techniques
    - Higher accuracy: fine-scale errors reduced by over 40% for critical weather variables, and errors in extreme heat and precipitation projections reduced by over 20% and 10% respectively
    - Better estimates of complex risks: remarkable skill in capturing complex environmental risks driven by regional phenomena, such as wildfire risk from Santa Ana winds, which statistical methods often miss

    The dynamical-generative downscaling process works in two steps:
    1) Physics-based first pass: a regional climate model downscales global Earth system data to an intermediate resolution (e.g., 50 km) – much cheaper computationally than going straight to very high resolution.
    2) AI adds the fine details: our AI-based Regional Residual Diffusion-based Downscaling model (“R2D2”) adds realistic, fine-scale details to bring it up to the target high resolution (typically less than 10 km), based on its training on high-resolution weather data.

    Why does this matter? Governments and utilities need these hyperlocal forecasts to prepare emergency response, invest in infrastructure, and protect vulnerable neighborhoods. And this is just one way AI is turbocharging climate resilience. Our teams at Google are already using AI to forecast floods, detect wildfires in real time, and help the UN respond faster after disasters. The next chapter of climate action means giving every city the tools to see—and shape—their own future.

    Congratulations Ignacio Lopez Gomez, Tyler Russell MBA, PMP, and teams on this important work!

    Discover the full details of this breakthrough: https://lnkd.in/g5u_WctW
    PNAS Paper: https://lnkd.in/gr7Acz25
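
    To make the two-step structure concrete, here is a schematic Python sketch that uses interpolation and random noise as stand-ins for the regional climate model and the generative residual model; the grid sizes and factors are illustrative only, and none of this is the actual Google Research code.

    ```python
    # Schematic dynamical-generative downscaling: coarse global field ->
    # intermediate-resolution "dynamical" pass -> fine grid plus generated residual.
    import numpy as np
    from scipy.ndimage import zoom

    rng = np.random.default_rng(1)

    # Coarse global-model field (toy 8x8 grid, think ~100 km spacing).
    coarse = rng.normal(size=(8, 8))

    # Step 1: downscale to an intermediate resolution (~50 km). A regional
    # climate model would do this; plain bilinear interpolation stands in here.
    intermediate = zoom(coarse, zoom=2, order=1)          # 16x16

    # Step 2: bring the field to the target resolution (<10 km) and add
    # fine-scale residual detail. A trained diffusion model would sample
    # realistic residuals; random noise stands in for that output here.
    upsampled = zoom(intermediate, zoom=5, order=1)       # 80x80
    residual = 0.1 * rng.normal(size=upsampled.shape)
    fine = upsampled + residual

    print(coarse.shape, intermediate.shape, fine.shape)
    ```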

  • View profile for Rayan S.

    Senior FP&A Analyst | Power BI • Vega • Financial Modeling

    1,904 followers

    274 Years of Global Emissions in One Visualization

    I created a serpentine timeline tracking CO₂ emissions across six world regions from 1750 to 2024. This visualization combines two techniques that are rarely used together:

    Serpentine Timeline Layout
    Instead of a traditional linear timeline, the layout follows a back-and-forth path, efficiently using vertical space while keeping chronological order. The timeline snakes down the canvas with smooth arc transitions at each turn, which is ideal for long-duration datasets.

    Packed Bubble Chart with Force Simulation
    Each data point is represented by a bubble positioned along the timeline using a force-directed layout with collision detection. This ensures that bubbles don’t overlap while remaining anchored to their temporal positions. Bubble size corresponds to emissions volume, showing growth from barely visible dots in 1750 to much larger circles in 2020.

    Technical Highlights
    - Custom transforms calculate serpentine coordinates using arc geometry (straight segments with semicircular turns).
    - Force layout with x/y anchoring and collision handling.
    - Interactive cross-filtering: highlighting a region in the legend emphasizes its 274-year trajectory.
    - Fully signal-based positioning with no hardcoded coordinates.

    Data Story
    The visualization shows the slow rise of emissions during the Industrial Revolution and the rapid growth after 1950. The 2020 pandemic year shows a temporary dip, followed by continued increase in 2024.

    Why It Matters
    Combining serpentine timelines with packed bubbles allows long-term datasets with large variations in magnitude to be visualized clearly. Traditional charts would require massive horizontal space or compromise temporal context. This approach delivers both density and clarity.

    Built entirely in Vega (not Vega-Lite), the specification includes manual force simulation calculations, arc geometry, data normalization, dynamic scaling, and responsive positioning.

    Data: Our World in Data
    Visualization: Vega, Deneb

    #DataVisualization #Vega #ClimateData #DataScience #InfoVis #CarbonEmissions #DataStorytelling #Deneb
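
    As a rough illustration of the serpentine layout arithmetic mentioned under "Technical Highlights", the Python sketch below maps an evenly spaced sequence of years onto a back-and-forth grid. The original work was built with Vega signals and also handles the semicircular arc transitions and the force-based collision handling, which are omitted here.

    ```python
    # Map n evenly spaced points onto a serpentine (boustrophedon) layout:
    # rows alternate direction so the path snakes down the canvas.
    from typing import List, Tuple

    def serpentine_positions(n_points: int, points_per_row: int,
                             col_width: float = 10.0,
                             row_height: float = 40.0) -> List[Tuple[float, float]]:
        positions = []
        for i in range(n_points):
            row, col = divmod(i, points_per_row)
            if row % 2 == 1:                      # odd rows run right-to-left
                col = points_per_row - 1 - col
            positions.append((col * col_width, row * row_height))
        return positions

    # Example: 275 yearly points (1750-2024) laid out 50 per row.
    coords = serpentine_positions(275, points_per_row=50)
    print(coords[0], coords[49], coords[50])  # row 0 ends where row 1 begins
    ```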

  • View profile for Soheyb Hassan

    Riyadh, Saudi Arabia

    1,898 followers

    🌧️ Rainfall data analysis as a fundamental input for advanced hydrological modelling

    Rainfall data is the governing variable in hydrological studies, as it directly affects the estimation of surface runoff, the hydrological response of basins, and the accuracy of mathematical model outputs used in flood risk assessment and water infrastructure design.

    📊 The hydrological importance of rainfall analysis
    Accurate analysis of rainfall data aims to:
    - Describe the statistical characteristics of rainfall (frequency, intensity, variability)
    - Represent the temporal and spatial distribution of precipitation
    - Identify design storms
    - Reduce uncertainty in hydrological models

    🧠 Advanced statistical analysis of rainfall
    The choice of statistical method depends on the nature of the data and the length of the time series. The most prominent methods are:

    🔹 Frequency Analysis
    Application of probability distributions such as:
    - Gumbel Extreme Value Type I
    - Log-Pearson Type III
    - Generalised Extreme Value (GEV)
    Goodness-of-fit testing using:
    - Kolmogorov–Smirnov
    - Chi-Square
    - Anderson–Darling

    🔹 Intensity-Duration-Frequency (IDF) Curves
    - Derivation of mathematical relationships between intensity (I), duration (D), and frequency (T)
    - These form the basis for the design of stormwater drainage networks and urban infrastructure

    ⏱️ Temporal Analysis
    Time series analysis to detect:
    - Long-term trends (trend analysis)
    - Climate changes and their impact on precipitation patterns
    Using tests such as:
    - Mann–Kendall
    - Sen’s Slope Estimator

    🌍 Spatial Rainfall Analysis
    Because precipitation is spatially heterogeneous, rainfall is represented spatially using:
    - Thiessen Polygons
    - Inverse Distance Weighting (IDW)
    - Kriging (geostatistical methods)
    Integration with geographic information systems (GIS) is an essential step in improving rainfall representation at the catchment level.

    💧 Linking rainfall and hydrological models
    Rainfall analysis results are used directly in:
    - The Rational Method (for small basins with rapid response)
    - The SCS Curve Number Method for estimating losses and surface runoff
    - Rainfall–runoff models such as HEC-HMS, WMS, and SWMM

    ⚠️ Technical challenges
    - Incomplete or irregular rainfall records
    - High spatial variability of storms
    - The impact of climate change on the stability of statistical assumptions (stationarity)

    Any hydrological model, regardless of its computational accuracy, remains dependent on the quality of the rainfall data analysis fed into it. Rainfall analysis is not a preliminary step, but rather the essence of the entire hydrological process.
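
    To ground the frequency-analysis step in code, here is a short, hedged example using SciPy: it fits a Generalised Extreme Value distribution to a synthetic series of annual maximum daily rainfall and reads off design rainfall for a few return periods. Real studies would use quality-controlled station or gridded records, test goodness of fit, and typically compare several candidate distributions.

    ```python
    # Fit a GEV distribution to annual maximum daily rainfall and estimate
    # T-year return levels (design rainfall depths).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Synthetic 40-year series of annual maximum daily rainfall (mm).
    annual_max = stats.genextreme.rvs(c=-0.1, loc=60, scale=15, size=40,
                                      random_state=rng)

    # Fit the GEV; note SciPy's shape parameter `c` follows its own sign
    # convention, which differs from some hydrology texts.
    shape, loc, scale = stats.genextreme.fit(annual_max)

    for T in (10, 50, 100):
        # Return level: quantile with annual non-exceedance probability 1 - 1/T.
        x_T = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
        print(f"{T:>3}-year design rainfall ≈ {x_T:.1f} mm")
    ```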

  • View profile for Milos Popovic, PhD

    Spatial Intelligence Product Strategist | Maps, 3D Visualization, Geospatial Data, AI-ready Workflows | Work with me 👉 milosgis.com

    49,203 followers

    Turkiye stretches across two continents. One map can't capture its climate. This one brings together two variables at once (three if you count the terrain).

    This is a bivariate map. Instead of showing temperature OR precipitation, it shows both simultaneously using a 2D color legend. Every color on the map encodes two variables at once. That's hard to do without making a mess, so the trick is the terrain. The 3D elevation gives your brain a second anchor, making sure you're not just reading color but also landscape. And suddenly the climate data makes physical sense: mountains are cold and wet, plateaus are hot and dry, coasts sit in between.

    Why bivariate maps matter:
    → Climate isn't one variable. Showing temperature without precipitation is like showing altitude without terrain: you lose half the story.
    → Bivariate maps force you to think about relationships, not just distributions.
    → They stand out. Most people have never seen one. That's your scroll-stopper.

    The hardest part isn't the code. It's choosing the right color palette so two variables stay readable on one map.

    I built this using:
    → TerraClimate data (1970–2020 means): free, global, monthly climate grids
    → R with ggplot2 for the bivariate color mapping
    → rayshader for the 3D terrain rendering

    Full step-by-step tutorial: link in comments 👇
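
    For those curious how a 2D (bivariate) colour legend is built, here is a small Python sketch of the classification step: bin each variable into terciles and look up one colour per (temperature, precipitation) pair. The original map was made in R with ggplot2 and rayshader; the grids and palette below are invented for illustration.

    ```python
    # Bivariate classification: combine temperature and precipitation terciles
    # into a 3x3 colour class per grid cell.
    import numpy as np

    rng = np.random.default_rng(3)
    temp = rng.normal(15, 8, size=(50, 50))        # mean annual temperature (°C)
    precip = rng.gamma(2.0, 300.0, size=(50, 50))  # annual precipitation (mm)

    def tercile_bin(x: np.ndarray) -> np.ndarray:
        """Return 0/1/2 for the lower/middle/upper tercile of each cell."""
        q1, q2 = np.quantile(x, [1 / 3, 2 / 3])
        return np.digitize(x, [q1, q2])

    # 3x3 palette: rows index temperature terciles, columns precipitation terciles
    # (the hex values are arbitrary placeholders, not the map's actual palette).
    palette = np.array([
        ["#e8e8e8", "#b5c0da", "#6c83b5"],   # cool & dry ... cool & wet
        ["#b8d6be", "#90b2b3", "#567994"],
        ["#73ae80", "#5a9178", "#2a5a5b"],   # hot & dry ... hot & wet
    ])

    colours = palette[tercile_bin(temp), tercile_bin(precip)]
    print(colours.shape, colours[0, 0])
    ```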

  • View profile for 💡Matteo De Felice

Lead Data Science Services at RaboResearch | Climate Data & Risk expert | Energy modelling

    3,609 followers

    CMIP7 climate model data will start to roll out soon and will ultimately feed into the next IPCC AR7 report expected around 2028. This new round is a big step forward in making data application‑ready for users across society, including financial institutions.

    A key shift is that CMIP7 is explicitly designed around Impacts & Adaptation (I&A) Opportunities: 60 variable groups that bundle what is needed to run impact models or derive climate indicators, for example for Agriculture and Food Systems Impacts or Energy System Impacts. Instead of treating bias‑correction and downscaling as second-class components, CMIP7 plans for them from the start, with more standardised, archive‑wide access to sub‑daily data, higher spatial resolution and the variables needed for bias‑adjusted and downscaled products. This should make it easier to build robust climate services, local risk assessments and asset‑level analyses on top of the same data.

    Compared with CMIP6, CMIP7 aims to reduce methodological fragmentation by standardising variables and by explicitly connecting Earth system outputs to real‑world decision needs, including those of regulators and financial institutions. In practice, that means better hazard inputs for heat, drought, flood and wind risk models, and more comparable products across providers. In general, we should see a smoother pipeline from global climate scenarios to portfolio‑ and asset‑level insights.

    CMIP7 remains a complex, protocol‑based global community effort, but with a much stronger focus on decision‑relevant variables for sectors such as energy, agriculture, water, ecosystems, cities, health and finance. For anyone interested in how this is being set up, this paper is an excellent overview of the initiative and its priorities: https://lnkd.in/evjrpxYz
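
    As a tiny, hedged illustration of the bias-adjustment idea referenced above, the sketch below applies a simple additive "delta change" to a monthly climatology with NumPy. The numbers are invented, and operational CMIP-based services would typically use quantile-based methods on full gridded datasets rather than this toy.

    ```python
    # Additive delta-change bias adjustment for a monthly temperature climatology:
    # apply the model's projected change to the observed baseline so the model's
    # mean bias cancels out.
    import numpy as np

    # Monthly climatologies (Jan..Dec) for one location, in °C (toy values).
    obs_hist   = np.array([4, 5, 8, 12, 16, 20, 23, 22, 18, 13, 8, 5], dtype=float)
    model_hist = obs_hist + 1.5                              # model biased warm
    model_fut  = model_hist + np.linspace(1.0, 2.0, 12)      # projected future

    delta = model_fut - model_hist          # projected change, month by month
    adjusted_future = obs_hist + delta      # bias-adjusted future climatology

    print(np.round(adjusted_future, 1))
    ```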

  • View profile for Steve Rosenbush

    Bureau Chief, Enterprise Technology at The Wall Street Journal Leadership Institute

    7,608 followers

    In this week's column, I look at NVIDIA's new generative foundation model that it says enables simulations of Earth's global climate with an unprecedented level of resolution. As is so often the case with powerful new technology, however, the question is what else humans will do with it.

    The company expects that climate researchers will build on top of its new AI-powered model to make climate predictions that focus on five-kilometer areas. Previous leading-edge global climate models typically don't drill below 25 to 100 kilometers. Researchers using the new model may be able to predict conditions decades into the future with a new level of precision, providing information that could help efforts to mitigate climate change or its effects. A 5-kilometer resolution may help capture vertical movements of air in the lower atmosphere that can lead to certain kinds of thunderstorms, for example, and that might be missed with other models. And to the extent that high-resolution near-term forecasts are more accurate, the accuracy of longer-term climate forecasts will improve in turn, because the accuracy of such predictions compounds over time.

    The model, branded by Nvidia as cBottle for “Climate in a Bottle,” compresses the scale of Earth observation data 3,000 times and transforms it into ultra-high-resolution, queryable and interactive climate simulations, according to Dion Harris, senior director of high-performance computing and AI factory solutions at Nvidia. It was trained on high-resolution physical climate simulations and estimates of observed atmospheric states over the past 50 years. It will take years, of course, to know just how accurate the model's long-term predictions turn out to be.

    The Alan Turing Institute and the Max Planck Institute for Meteorology are actively exploring the new model, Nvidia said Tuesday at the ISC 2025 computing conference in Hamburg. Bjorn Stevens, director of the Max Planck Institute, said it “represents a transformative leap in our ability to understand, predict and adapt to the world around us.”

    The Earth-2 platform is in various states of deployment at weather agencies, from NOAA (National Oceanic & Atmospheric Administration) in the U.S. to G42, an Abu Dhabi-based holding company focused on AI, and the National Science and Technology Center for Disaster Reduction in Taiwan. Spire Global, a provider of data analytics in areas such as climate and global security, has used Earth-2 to improve the speed and cost of its weather forecasts by three orders of magnitude over the last three or four years, according to Peter Platzer, co-founder and executive chairman.

  • View profile for Stephanie Buller BSc, MRes

    Disaster Science I Multi-Hazard Risk Management I Resilience

    4,374 followers

    My favourite news item this week: the UN has finally endorsed the Global Framework to Strengthen Disaster‑Related Statistics. This is a bigger deal than it sounds.

    For years, disaster science has been held back by a simple problem: countries collect hazard, exposure, vulnerability, and loss data in completely different ways. Definitions don’t line up. Methods don’t line up. Governance doesn’t line up. And when the data is inconsistent, the science built on it inherits those inconsistencies. This framework gives us, for the first time, a shared statistical backbone for disaster data — not glossy dashboards, but the standards that make real comparison and real science possible.

    Why this matters for disaster science:
    - It strengthens the “evidence” part of evidence‑based policy. Comparable, high‑quality data finally tackles the definitional and methodological messiness that has undermined DRR for years.
    - It supports scientific integrity. Harmonised methods reduce arbitrary assumptions, improve reproducibility, and stop apples‑to‑oranges modelling.
    - It enables integration across climate, socioeconomic, and disaster datasets — essential for understanding systemic risks.
    - It closes the gap between research and policy. When everyone uses the same definitions, the science becomes policy‑usable by default.

    It’s not flashy news, it won’t trend. But it’s an important step forward for evidence‑based DRR. And here’s why it matters for the core mission of disaster science — protecting lives and livelihoods. A statistical framework sounds technical, but it directly shapes real‑world outcomes:
    1. Better data → better risk understanding → fewer people caught off‑guard. Consistent definitions mean accurate risk profiles — a prerequisite for preventing deaths and losses.
    2. Comparable data strengthens early warnings and preparedness. Cross‑country patterns reveal which vulnerabilities drive mortality and what protective measures actually work.
    3. Stronger evidence leads to better decisions and investments. Better data underpins planning, infrastructure, health systems, and social protection — thereby reducing the number of destroyed homes and disrupted livelihoods.
    4. Integrated datasets reveal systemic risks. Linking climate, infrastructure, and socioeconomic data exposes poverty traps, exposure trends, and points of failure — and where resilience investments have the greatest return.
    5. Better statistics protect those usually invisible. Standardised methods make it harder for marginalised groups to disappear from datasets. Better representation → better planning → fewer preventable losses.

    Thanks to those who have been pushing and supporting this for many years. Virginia Murray, Kanza Ahmed, B. Burçak Başbuğ, PhD, SFHEA, MICPEM, CODATA, United Nations Office for Disaster Risk Reduction (UNDRR), UK Health Security Agency.

    #DisasterScience - Institute of Civil Protection & Emergency Management (ICPEM)
    https://lnkd.in/eRu9mTtH

  • View profile for Benny Istanto, GISP

    Exploring #climate with #GIS and #datascience, solving old problems in new ways.

    2,734 followers

    To support regional economic monitoring and risk assessments at work, I regularly process global climate datasets (#CHIRPS, #TerraClimate, #ERA5Land, #IMERG) to track extreme #dry and #wet periods. For years, I relied on existing tools to produce these indices, but as our scale grew, I often hit bottlenecks in error handling and processing efficiency. I needed a solution that was minimal, operationally ready, and capable of handling global-scale data without requiring a supercomputer.

    So, I built precip-index. It’s a specialized #Python implementation of #SPI (Standardized Precipitation Index) and #SPEI (Standardized Precipitation Evapotranspiration Index) designed for production workflows.

    Key features for the geospatial community:
    - Bidirectional Analysis: Monitors both #drought and wet (#flood) conditions using a unified framework.
    - Operational Mode: Calibrate once, save parameters, and apply them to new data instantly, perfect for periodic reporting.
    - Scalable: Benchmarked on CHIRPS v3 global data (17M+ grid cells) with memory-efficient tiling.
    - Multi-Distribution: Supports Gamma, Pearson III, and Log-Logistic fitting.

    This code stands on the shoulders of giants; it is built upon the foundation of the `climate-indices` library by James Adams, with a focus on optimizing it for operational speed, memory efficiency, and specific run-theory analysis. Built with a heavy dose of #Claude #VibeCoding, enabling a climate geographer like me to build robust, operational tools.

    I hope this implementation proves useful to others working on climate resilience and data analysis. Check out the documentation (built using #Quarto) and code: https://lnkd.in/gAkwE4fR
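
    For context on what an SPI calculation involves, here is a generic Python sketch of the core transform: fit a gamma distribution to accumulated precipitation, then map cumulative probabilities to a standard normal variable. It is not the precip-index API; the package described above adds zero-precipitation handling, SPEI, saved calibration parameters, and tiled processing for global grids.

    ```python
    # Core of the Standardized Precipitation Index: gamma fit + normal transform.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    # Synthetic 3-month accumulated precipitation for one location, 40 years (mm).
    precip = rng.gamma(shape=2.0, scale=60.0, size=40)

    # Calibration: fit a gamma distribution with the location fixed at zero.
    a, loc, scale = stats.gamma.fit(precip, floc=0)

    # Transform: cumulative probability under the fitted gamma -> standard normal.
    cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
    spi = stats.norm.ppf(cdf)

    print(f"SPI range: {spi.min():.2f} to {spi.max():.2f}")  # ~ -2 (dry) to +2 (wet)
    ```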
