Predictive errors in climate change studies


Summary

Predictive errors in climate change studies are the mismatches between climate model forecasts and actual environmental outcomes, often caused by the complexity of Earth’s systems and limitations in available data. These errors can lead to underestimating or misrepresenting the risks and impacts of climate change, affecting both scientific understanding and real-world decision-making.

  • Consider local differences: Recognize that climate impacts can vary widely across ecosystems and regions, so relying on uniform assumptions may introduce substantial inaccuracies in predictions.
  • Update risk assessments: Regularly revisit and refine damage models to account for unpredictable, non-linear climate events and cascading effects on economies, infrastructure, and natural resources.
  • Fuse multiple data sources: Combine information from satellites, ground networks, and models to improve accuracy and reduce uncertainty in climate forecasts and precipitation estimates.
Summarized by AI based on LinkedIn member posts
  • View profile for Ali Bin Shahid

    Climate Repair Architect | Trigger Point Strategist | I make restoration Smart | Ecohydrology & Biosphere | Cooling the Climate Advisory | Member Eco Restoration Alliance | Catalyst 2030

    4,525 followers

    Climate models apply the same greenhouse gas forcing coefficient to every ecosystem on Earth. A rainforest and a desert get identical treatment.

    Using 254 flux tower sites across 7 biomes and 4 continents, I measured what actually happens. The effective forcing varies by a factor of four. Forests attenuate it by up to 50%. Arid shrublands amplify it by up to 24%. The uniform assumption introduces 53.7% error.

    One takeaway: when a forest is cleared and replaced by degraded land, the effective radiative forcing at that location triples. This is independent of carbon emissions. It happens the day the trees fall.

    Code: https://lnkd.in/drwSsxuN
    Preprint: https://lnkd.in/dZshdF8k

    #ClimateScience #RadiativeForcing #FLUXNET #Ecosystems #LandAtmosphere
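The uniform-coefficient problem described in this post can be sketched in a few lines. The 50% attenuation and 24% amplification figures come from the post itself; the 3.7 W/m² baseline (a common CO2-doubling value) and the two-biome table are my own illustrative assumptions, not the author's published code.

```python
# Illustrative sketch only -- not the preprint's method. Factors are the
# post's headline numbers; the baseline forcing value is an assumption.

BASE_FORCING = 3.7  # W/m^2, the uniform coefficient a model applies everywhere

BIOME_FACTOR = {
    "forest": 0.50,          # forests attenuate the forcing by up to 50%
    "arid_shrubland": 1.24,  # arid shrublands amplify it by up to 24%
}

def effective_forcing(biome: str, base: float = BASE_FORCING) -> float:
    """Effective radiative forcing after applying the biome's factor."""
    return base * BIOME_FACTOR[biome]

forest = effective_forcing("forest")             # 1.85 W/m^2
shrubland = effective_forcing("arid_shrubland")  # 4.588 W/m^2
spread = shrubland / forest  # ~2.5x between these two biomes alone; the post
                             # reports a factor of four across all seven
```

Applying `BASE_FORCING` uniformly overstates the forcing over forests and understates it over shrublands, which is exactly the uniform-assumption error the post quantifies.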

  • The report "Recalibrating Climate Risk: Aligning Damage Functions with Scientific Understanding" argues that current economic models significantly underestimate the uncertainty of future climate impacts. The document focuses on the profound uncertainty inherent in "damage functions"—the mathematical tools used to predict how global warming will affect GDP—and highlights a dangerous disconnect between economic theory and scientific reality.

    The report emphasizes that the future will be defined by "extremes," not the "averages" currently used in most models. There is significant uncertainty regarding the frequency and intensity of "tail risks"—low-probability but catastrophic events like massive storms or heatwaves. Unlike steady economic growth, climate damage is expected to be "non-linear," meaning small increases in temperature could lead to sudden, disproportionate economic collapses that current models fail to predict. A major wildcard is the potential for "planetary tipping points" (e.g., the melting of permafrost), which introduce "bounded collapse probabilities" that are currently omitted from standard risk assessments.

    Future uncertainty is exacerbated by how damages interact across different sectors and geographies. Damages are described as "cascading and long-lasting," where a failure in one sector (like agriculture) can trigger unpredictable "capital destruction" and "labour productivity losses" across the entire economy. There is deep uncertainty about how damage "compounds across time, space, and sectors," making it difficult for financial regulators to assess the true level of systemic risk.

    The report identifies "direct and indirect" failures in how climate risk is currently quantified. Much of the future uncertainty stems from "arbitrary" functional forms and hidden assumptions in Integrated Assessment Models. While incorporating "expert knowledge" can help, the report notes that these judgments may be "biased" and that there is a lack of "expert confidence" when dealing with higher temperature levels. There is a "fundamental disconnect" between climate science and the "top-down macroeconomic perspective" used by financial regulators and investors, creating a "blind spot" for future climate-driven financial crises.

    The report suggests that the "greatest unknown" is the point at which climate damage exceeds the system's ability to adapt. To navigate this, researchers and regulators must move beyond "aggregate functions" and embrace "process-based approaches" that explicitly quantify the massive uncertainties of a warming world.
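The non-linearity point can be made concrete with a toy comparison of damage-function shapes. The quadratic coefficient, the 3 °C threshold, and the 10% jump below are invented for illustration; they are not values from the report.

```python
# Toy damage functions -- all coefficients are illustrative assumptions,
# not taken from the report discussed above.

def damage_quadratic(dT: float, a: float = 0.0023) -> float:
    """GDP loss fraction as a smooth quadratic in warming dT (degC),
    the functional form common in Integrated Assessment Models."""
    return a * dT ** 2

def damage_with_tipping(dT: float, a: float = 0.0023,
                        threshold: float = 3.0, jump: float = 0.10) -> float:
    """Same quadratic, plus a discontinuous extra loss past a threshold,
    mimicking a crossed tipping point."""
    extra = jump if dT >= threshold else 0.0
    return a * dT ** 2 + extra

# The two agree exactly below the threshold, then diverge abruptly:
# a small temperature increment produces a disproportionate loss.
smooth_at_3 = damage_quadratic(3.0)
tipped_at_3 = damage_with_tipping(3.0)
```

A model calibrated only on the smooth branch would see nothing alarming at 2.9 °C and be wrong by the full jump at 3.0 °C, which is the "averages vs. extremes" failure mode the report describes.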

  • View profile for Ali Sheridan

    Climate Policy, Fair Transition & Systems Transformation

    41,957 followers

    “Fifty years into the project of modeling Earth’s future climate, we still don’t really know what’s coming. Some places are warming with more ferocity than expected. Extreme events are taking scientists by surprise. Right now, as the bald reality of climate change bears down on human life, scientists are seeing more clearly the limits of our ability to predict the exact future we face. The coming decades may be far worse, and far weirder, than the best models anticipated…

    This is a problem. The world has warmed enough that city planners, public-health officials, insurance companies, farmers, and everyone else in the global economy want to know what’s coming next for their patch of the planet…

    Today’s climate models very accurately describe the broad strokes of Earth’s future. But warming has also now progressed enough that scientists are noticing unsettling mismatches between some of their predictions and real outcomes… Across places where a third of humanity lives, actual daily temperature records are outpacing model predictions… And a global jump in temperature that lasted from mid-2023 to this past June remains largely unexplained…

    Trees and land are major sinks for carbon emissions, and that this fact might change is not accounted for in climate models. But it is changing: Trees and land absorbed much less carbon than normal in 2023, according to research published last October…

    The interactions of the ice sheets with the oceans are also largely missing from models, Schmidt told me, despite the fact that melting ice could change ocean temperatures, which could have significant knock-on effects…

    The models may be underestimating future climate risks across several regions because of a yet-unclear limitation. And, Rohde said, underestimating risk is far more dangerous than overestimating it.”

    #ClimateRisk #TransitionRisk https://lnkd.in/eiSRvUeF

  • View profile for Allison F. Dolan

    Retired; following US politics, HR, IT and other topics

    7,191 followers

    The challenges of climate change modeling: "The Earth is an unfathomably complex place, a nesting doll of systems within systems. Feedback loops among temperature, land, air, and water are made even more complicated by the fact that every place on Earth is a little different. Natural variability and human-driven warming further alter the rules that govern each of those fundamental interactions."

    On every continent except Antarctica, certain regions showed up as mysterious hot spots, suffering repeated heat waves worse than what any model could predict or explain. Across places where a third of humanity lives, actual daily temperature records are outpacing model predictions. And a global jump in temperature that lasted from mid-2023 to this past June remains largely unexplained.

    Per one researcher: "We have to approximate cloud formation because we don’t have the small scales necessary to resolve individual water droplets coming together." Similarly, models approximate topography, because the scale at which mountain ranges undulate is smaller than the resolution of global climate models, which tend to represent Earth in, at best, 100-square-kilometer pixels. That resolution is good for understanding phenomena such as Arctic warming over decades, but "you can’t resolve a tornado worth anything." Models simply can’t function on the scale at which people live, because assessing the impact of current emissions on the future world requires hundreds of years of simulations.

    Some variables are missing from climate models entirely. Trees and land have been considered major sinks for carbon emissions, but that is changing: trees and land absorbed much less carbon than normal in 2023. In Finland, forests have stopped absorbing the majority of the carbon they once did, and recently became a net source of emissions, which swamped all gains the country has made in cutting emissions from all other sectors since the early 1990s.

    The interactions of the ice sheets with the oceans are also largely missing from models. Changing ocean-temperature patterns are currently making climate modelers at NOAA rethink their models of El Niño and La Niña; the agency initially predicted that La Niña’s cooling powers would kick in much sooner than it now appears they will. The models may be underestimating future climate risks across several regions because of a yet-unclear limitation. And underestimating risk is far more dangerous than overestimating it.

    Excerpts from The Atlantic article "Climate Models Can’t Explain What’s Happening to Earth: Global warming is moving faster than the best models can keep a handle on," by Zoë Schlanger.

  • View profile for Jorge Bravo Abad

    AI/ML for Science & DeepTech | Prof. of Physics at UAM | Author of “IA y Física” & “Ciencia 5.0”

    29,018 followers

    Generative models that fuse imperfect precipitation data into something better than any single source.

    Precipitation is notoriously difficult to measure. Rain gauges are accurate but sparse—even 2.5° grid cells average fewer than two gauges, and oceans have virtually none. Satellites provide global coverage but infer surface rainfall indirectly from cloud-top signatures. Numerical models offer physical consistency but accumulate errors through parameterized convection and microphysics. In tropical regions, different products can disagree by as much as the signal itself. Yet the complementary nature of these limitations hints that if you could fuse them intelligently—leveraging each source's strengths while compensating for its weaknesses—you might extract more information than any single system provides alone.

    Sun and coauthors take that idea and build a principled framework around it. They introduce PRIMER (Precipitation Records Infinite MERging), a coordinate-based diffusion model that learns from heterogeneous, imperfect data without requiring any single source to be trustworthy. The key innovation is representing precipitation as a continuous spatial function rather than a fixed grid—allowing dense satellite products and sparse gauge networks to be treated as different sampling patterns of the same underlying field.

    Through two-stage training—first learning climatological priors from ERA5 reanalysis and IMERG satellite retrievals, then fine-tuning with gauge observations under shared weights—PRIMER captures both large-scale structure and local precision. Once trained, it functions as a plug-and-play Bayesian prior: conditioning on biased fields yields bias correction, conditioning on coarse fields yields downscaling, conditioning on observations plus background yields optimal interpolation.

    Applied to 150 precipitation events across East Asia, PRIMER achieves statistically significant error reductions at most gauge sites, better captures heavy precipitation tails that existing products underestimate, and corrects spatial anisotropy. Remarkably, it generalizes without retraining—successfully bias-correcting ECMWF operational forecasts it never saw during training and downscaling CMIP6 future scenario fields while preserving large-scale climate signals.

    By treating imperfect Earth system observations not as limitations but as complementary constraints on a shared generative prior, it becomes possible to turn the fundamental challenge of heterogeneous climate data—where no single source is uniformly reliable—into a source of strength, enabling precipitation estimates that surpass any individual product in accuracy, resolution, and coverage.

    Paper: https://lnkd.in/ezQfytZK

    #MachineLearning #ClimateScience #GenerativeAI #DiffusionModels #Precipitation #DataFusion #WeatherForecasting #DeepLearning #EarthScience #Hydrology #RemoteSensing #AIforScience #ClimateData #BayesianInference
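PRIMER itself is a coordinate-based diffusion model, but the "optimal interpolation" behaviour described above reduces, in the simplest Gaussian case, to precision-weighted averaging of independent estimates. A toy sketch of that classical idea, with made-up error variances:

```python
# Classical inverse-variance fusion -- a much simpler analogue of the
# Bayesian conditioning PRIMER performs. All numbers are invented.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Precision-weighted combination of two independent unbiased estimates.
    The fused variance is always smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Sparse-but-accurate gauge vs dense-but-noisy satellite (mm/day, made up):
gauge, gauge_var = 12.0, 1.0
sat, sat_var = 9.0, 4.0
est, var = fuse(gauge, gauge_var, sat, sat_var)
# est is pulled toward the more trusted gauge; var < min(gauge_var, sat_var),
# so the fusion beats either source alone -- the core intuition of the paper.
```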

  • View profile for Robert Shibatani

    CEO & Hydrologist; The SHIBATANI GROUP Inc.; Expert Witness - Flood Litigation, Water Utility Advisor; New Dams; Reservoir Operations; Groundwater Safe Yield; Climate Change

    19,729 followers

    “Bias adjustment strategies … in hydroclimatic modeling”

    Future changes in hydrologic extremes can be assessed through hydroclimatic change studies that rely on hydrological projections generated by driving a hydrological model with “forced” climate model simulations. Doing so, however, leaves large uncertainties about the sign, frequency and magnitude of projected future changes in both floods and droughts because of the uncertainties within the modeling chain. Notable among these is the internal climate variability that affects both observations and future projections. Such internal climate variability causes large streamflow fluctuations on annual to decadal timescales that may mask change signals and influence the estimation of extremes and their return periods.

    Hydrologists acknowledge that climate simulations can exhibit systematic biases. Their statistical characteristics, such as the mean, variance, or extremes, can differ from those observed or those calculated for re-analyses. These biases affect hydrologically relevant variables, such as temperature and precipitation, and can lead to a misrepresentation of hydrological processes. Climate simulations, therefore, need to be bias adjusted before their use in hydrological models.

    When selecting a bias adjustment strategy for a given application, some methodological choices have to be made, including whether to correct variables individually or jointly (univariate vs. bi-/multivariate), and whether to choose a method that preserves the change signal of the original climate simulation(s) (change-preserving vs. non-change-preserving). Change-preserving methods retain the changes between the historical and projected distributions of a climate variable from the raw (i.e., unadjusted) simulations; non-change-preserving methods do not explicitly aim to preserve the raw climate change signal.

    In recent years, single-model initial-condition large ensembles (SMILEs) have emerged in climate impact research as a valuable and robust tool to quantify internal variability and account for its influence on climate change projections. The objective of the study cited below was to determine which bias adjustment strategies are best suited for studying future changes in hydrological extremes using a SMILE. The biases of a 50-member RCM-SMILE (CRCM5-LE) climate ensemble were adjusted for 87 catchments in Switzerland using five bias adjustment strategies and used to drive an HBV-type hydrological model.

    An excellent, comprehensive discussion of the investigation and its results is provided in Astagneau et al. (2025) in HESS, EGUsphere, “Impact of bias adjustment strategy on ensemble projections of hydrological extremes.”
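As a concrete example of the (non-change-preserving) family of methods discussed above, here is a minimal empirical quantile-mapping sketch on synthetic data. The gamma parameters and the 1.3x wet bias are invented for illustration; real studies use long daily series per catchment and more careful interpolation.

```python
# Minimal empirical quantile mapping -- a standard bias adjustment method.
# Synthetic data; not the code of the study cited above.
import random

def quantile_map(values, obs_ref, sim_ref):
    """Assign each value the observed value at the quantile it occupies
    in the simulated reference distribution."""
    sim_sorted = sorted(sim_ref)
    obs_sorted = sorted(obs_ref)
    n = len(sim_sorted)
    out = []
    for v in values:
        rank = sum(1 for s in sim_sorted if s <= v)  # rank in simulated climate
        q = (rank - 1) / (n - 1) if rank > 0 else 0.0
        out.append(obs_sorted[round(q * (len(obs_sorted) - 1))])
    return out

random.seed(0)
obs_ref = [random.gammavariate(2.0, 3.0) for _ in range(500)]  # "observed" precip
sim_ref = [x * 1.3 + 1.0 for x in obs_ref]                     # simulated wet bias
adjusted = quantile_map(sim_ref, obs_ref, sim_ref)
# The adjusted series recovers the observed distribution (mean and extremes),
# at the cost of altering the raw simulation's signal -- which is exactly why
# change-preserving variants exist for future projections.
```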

  • View profile for Stephen Bennett

    Head of Climate and Catastrophe Science at Mercury Insurance

    6,330 followers

    “All models are imperfect,” says Sankar Arumugam, corresponding author of the paper and a professor of civil, construction and environmental engineering at NC State. “Sometimes a model may underestimate rainfall, and/or overestimate temperature, or whatever. Model developers have a suite of tools that they can use to correct these so-called biases, improving a model’s accuracy.

    “However, the existing suite of tools has a key limitation: they are very good at correcting a flaw in a single parameter (like rainfall), but not very good at correcting flaws in multiple parameters (like rainfall and temperature),” Arumugam says. “This is important, because compound events can pose serious threats and – by definition – involve societal impacts from two physical variables, temperature and humidity. This is where our new method comes in.”

    The new method takes a novel approach to the problem and makes use of machine learning techniques to modify a climate model’s outputs in a way that moves the model’s projections closer to the patterns that can be observed in real-world data.
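A toy illustration of the univariate limitation Arumugam describes: correcting each variable's bias separately fixes the means but cannot repair a mis-modeled relationship between variables. All numbers are synthetic, and this is not the paper's method (which uses machine learning); it only shows the problem that method targets.

```python
# Synthetic demo: univariate bias correction leaves cross-variable
# dependence -- and hence compound-event risk -- unchanged.
import random

random.seed(1)
n = 2000
# "Observed" world: hot days tend to be dry (negative rain-temp coupling).
obs_temp = [random.gauss(25, 3) for _ in range(n)]
obs_rain = [max(0.0, 10 - 0.8 * (t - 25) + random.gauss(0, 2)) for t in obs_temp]

# Biased model: warm bias, wet bias, and *no* coupling between the two.
mod_temp = [random.gauss(27, 3) for _ in range(n)]
mod_rain = [max(0.0, 12 + random.gauss(0, 2)) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def shift_to(xs, target_mean):
    """Univariate bias correction: remove the mean bias only."""
    d = target_mean - mean(xs)
    return [x + d for x in xs]

def corr(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

adj_temp = shift_to(mod_temp, mean(obs_temp))
adj_rain = shift_to(mod_rain, mean(obs_rain))
# Means now match observations, but the rain-temperature correlation is still
# near zero instead of strongly negative: the univariate fix cannot see it.
```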

  • View profile for Hans van Boven

    Officer rtd Royal Netherlands Navy

    6,322 followers

    The North Atlantic Oscillation (NAO) could reach unprecedented magnitudes by the end of the century, leading to severe impacts such as increased flooding and storm damage in northern Europe.

    The NAO is a large-scale atmospheric pressure see-saw in the North Atlantic and a key driver of winter weather patterns in the U.K., western Europe and the eastern U.S. It is measured by the gradient between high pressure over the Azores and low pressure over Iceland, and it controls the strength of the prevailing winds.

    A new study, led by a team of climate scientists, identifies climatological water vapor as a significant factor governing differences in long-term fluctuations of the NAO across climate model simulations. The research shows that errors in current climate models relating to water vapor lead to uncertainty in predictions of the NAO's future behavior. Accounting for these errors reveals a substantial response of the NAO to volcanic eruptions and greenhouse gases. These findings have major implications for understanding and preparing for extreme weather events: taking model projections at face value could leave society unprepared for impending extremes.

    Under a scenario with very high concentrations of greenhouse gases by the end of the century, the NAO would increase to levels never before seen, posing severe risks of impacts from extreme weather such as flooding and storm damage. These impacts could be mitigated through efforts to reduce greenhouse gas emissions. The study shows that better understanding the response of atmospheric circulation to greenhouse gases is crucial for anticipating what climate change has in store for the U.K.

    Key findings from the study include:
      • Some of the model differences in NAO projections are due to climatological water vapor errors in the models.
      • The research reveals the NAO's significant response to external forcings such as volcanic eruptions and greenhouse gases.
      • The study takes into account the "Signal to Noise Paradox," which suggests that climate models may underestimate the magnitude of real-world NAO changes.
      • The results underscore the importance of mitigation efforts to avoid severe impacts from an unprecedented increase in the NAO.
      • The study highlights the need for improved climate models to better predict future changes in regional climate.
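The station-based NAO index described above is conventionally a normalized Azores-minus-Iceland pressure difference. A minimal sketch; the climatological means and standard deviations below are illustrative placeholders, not observed values:

```python
# Toy station-based NAO index. Climatology numbers are assumptions
# chosen only to make the sign convention clear.

def nao_index(azores_slp: float, iceland_slp: float,
              azores_clim=(1020.0, 4.0), iceland_clim=(1005.0, 8.0)) -> float:
    """Normalized Azores-minus-Iceland sea-level-pressure gradient.
    Each station is standardized against its (mean, std) climatology."""
    az = (azores_slp - azores_clim[0]) / azores_clim[1]
    ic = (iceland_slp - iceland_clim[0]) / iceland_clim[1]
    return az - ic

# Strong Azores high + deep Icelandic low -> positive NAO (strong westerlies,
# stormy and mild winters in northern Europe); the reverse gives negative NAO.
positive_phase = nao_index(1026.0, 997.0)
negative_phase = nao_index(1014.0, 1013.0)
```

An "unprecedented increase in the NAO" in the study's sense means this index trending to values outside the historical range, with correspondingly stronger prevailing winds and storm impacts.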

  • View profile for George Lawrence

    Writing about Climate, Energy, Epidemiology & the Grid

    4,577 followers

    AAAS: “High-resolution climate model forecasts a wet, turbulent future.” Let me start with a story I heard yesterday. One of my writing groups meets remotely, weekly, and includes two participants who live in eastern Florida and Barbados, about 1600 miles apart. Both were almost giddy in recounting stories of recent intense rain events and flooding; for example, Barbados recently experienced 17″ of rain over 2 days, with 1 drowning death.

    Now let’s discuss advanced climate modeling. “To simulate hundreds of years in a manageable amount of computer time, [conventional] models divide the atmosphere into the equivalent of coarse pixels, 100 kilometers across, before solving the equations of fluid dynamics for each one.” But this coarseness leads to inaccuracies in the predictions, especially when it comes to patchy phenomena such as heat waves and downpours, which are heavily influenced by what happens at a finer scale.

    “A new high-resolution modeling project called MESACLIP, run at great computational expense over the past 5 years, is putting Earth’s future into sharper focus by simulating the churning of the atmosphere and ocean at a level of detail similar to the scale of weather forecasts.” Unfortunately, the project reveals heightened risks for regions like the Gulf Coast and coastal California, where extreme rainfall could occur far more often than traditionally projected.

    “MESACLIP divides the atmosphere into 25-km boxes and the upper layer of the ocean into a 10-km grid…running global simulations that began in 1900 and looked ahead to 2100 for multiple greenhouse gas emission scenarios.” The results better match the historic records of ocean and air temperature, which many climate models have long struggled to do. “They also better capture cold tongues of upwelling water and swirling eddies in the ocean, which are thought to play an important role in modulating wind patterns…they mimic the extreme rainfall events observed today far more accurately.”

    After 900 days of computing time and 4,500 years of simulation, 6 petabytes of open data will prove to be a gold mine for climate scientists everywhere. (A petabyte is one quadrillion bytes, or roughly 1,000 terabytes.) So—keep your shoes dry.
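A back-of-the-envelope sketch of why the 25-km grid is so expensive relative to conventional 100-km models. Earth's surface area is rounded to 510 million km²; the cubic cost scaling is the standard rough rule of thumb when the timestep must shrink in proportion to the grid spacing (an assumption, not a figure from the article):

```python
# Rough scaling arithmetic only -- not MESACLIP's actual cost accounting.

EARTH_AREA_KM2 = 510e6  # approximate surface area of Earth

def horizontal_cells(grid_km: float) -> float:
    """Approximate number of horizontal grid cells at a given spacing."""
    return EARTH_AREA_KM2 / grid_km ** 2

coarse = horizontal_cells(100.0)  # conventional ~100 km model: ~51,000 columns
fine = horizontal_cells(25.0)     # MESACLIP's 25 km atmosphere: ~816,000 columns
cell_ratio = fine / coarse        # 16x more columns

# If the timestep also shrinks by the 4x refinement factor, naive cost grows
# roughly as the cube of the refinement: 16 * 4 = 64x per simulated year.
cost_ratio = cell_ratio * (100.0 / 25.0)
```

That factor, multiplied across 4,500 simulated years, is what "great computational expense" and the 900 days of computing time cash out to.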

  • View profile for Klaus Mager

    Farm to Table Food Systems Design and Support

    6,686 followers

    There are indications that climate models have failed to account for and integrate the hydrologic cycles, for example the relationship between soil and water. Focusing solely on carbon may result in the misallocation of resources toward silver-bullet solutions, with potentially catastrophic impacts.

    Soil and water: The health of the soil, particularly its microbiome, is a cornerstone of the hydrologic cycle. Healthy soils have a better capacity to absorb and retain water, regulating both drought and flood conditions. Depleted soils, often a result of industrial agricultural practices, can disrupt water cycles, leading to worsened droughts or floods.

    Potential misallocation of resources: When we fixate solely on carbon as the primary culprit in climate change, we may overlook other vital aspects of the ecosystem, like the hydrologic cycle. Such an oversight can lead to misguided solutions, potentially exacerbating problems rather than ameliorating them.

    There is a profound need for integrating holistic approaches into our climate strategies. Addressing the shortcomings of current models isn't just a scientific necessity but a moral imperative for the well-being of all life on Earth. https://lnkd.in/g7tCuBQi
