Challenges in static climate risk modeling

Summary

Static climate risk modeling refers to using fixed data and assumptions to predict the impact of climate change on assets, infrastructure, or financial systems. Recent posts highlight key challenges in relying on static models, such as their inability to capture rapidly changing climate conditions, the mismatch between predicted and actual extreme events, and the risk of underestimating future threats.

  • Question assumptions: Regularly review and challenge the foundational assumptions behind climate risk models to ensure that they reflect current scientific understanding and real-world outcomes.
  • Build data integration: Work toward integrating higher-resolution data and tracking adaptation efforts so models can better reflect geographic and operational realities.
  • Address probability gaps: Adjust risk frameworks to properly account for catastrophic climate scenarios, preventing the mispricing of high-impact events that are more likely than models suggest.
  • Ali Sheridan

    Climate Policy, Fair Transition & Systems Transformation

    41,947 followers

    “Fifty years into the project of modeling Earth’s future climate, we still don’t really know what’s coming. Some places are warming with more ferocity than expected. Extreme events are taking scientists by surprise. Right now, as the bald reality of climate change bears down on human life, scientists are seeing more clearly the limits of our ability to predict the exact future we face. The coming decades may be far worse, and far weirder, than the best models anticipated… This is a problem. The world has warmed enough that city planners, public-health officials, insurance companies, farmers, and everyone else in the global economy want to know what’s coming next for their patch of the planet… Today’s climate models very accurately describe the broad strokes of Earth’s future. But warming has also now progressed enough that scientists are noticing unsettling mismatches between some of their predictions and real outcomes… Across places where a third of humanity lives, actual daily temperature records are outpacing model predictions… And a global jump in temperature that lasted from mid-2023 to this past June remains largely unexplained… Trees and land are major sinks for carbon emissions, and that this fact might change is not accounted for in climate models. But it is changing: Trees and land absorbed much less carbon than normal in 2023, according to research published last October… The interactions of the ice sheets with the oceans are also largely missing from models, Schmidt told me, despite the fact that melting ice could change ocean temperatures, which could have significant knock-on effects… The models may be underestimating future climate risks across several regions because of a yet-unclear limitation. And, Rohde said, underestimating risk is far more dangerous than overestimating it.” #ClimateRisk #TransitionRisk https://lnkd.in/eiSRvUeF

  • Physical climate risk data: the more we learn, the less we know? Khalid Azizuddin's recent piece in Responsible Investor captures well what many practitioners are grappling with today:
    - asset-level data that remain incomplete or hard to interpret;
    - physical hazard exposure often disconnected from financial materiality;
    - little visibility into supply chains or customers;
    - adaptation and resilience efforts largely ignored;
    - and a risk of over-simplifying complex realities into a single “score.”
    Some three years ago, EDHEC Business School set out to address exactly these challenges, working to advance climate risk modelling and make it decision-useful for investors, companies, and public authorities. In this work, we have developed:
    🔹 a blueprint for a new generation of probabilistic climate scenarios (a toy illustration of the probabilistic idea follows this post);
    🔹 high-resolution geospatial modelling capabilities that allow for geographic and sectoral downscaling, consistent with each scenario;
    🔹 an open database of decarbonisation and resilience technologies through the #ClimaTech project, which officially launched this week.
    While the research is public, the new EDHEC Climate Institute has also been assisting a school-backed venture, Scientific Climate Ratings (SCR), which integrates this research to deliver forward-looking quantification of the #financialmateriality of climate risks for infrastructure companies and investors worldwide. While SCR provides a rating scale for comparability, it avoids the trap of over-simplification: each rating is backed by probabilistic scenario modelling, analysis of physical and transition risk exposures, and explicit accounting for adaptation measures. The result is a synthesis that remains transparent, interpretable, and anchored in scientific rigour. Together, these initiatives aim to move the discussion from data abundance to decision relevance, equipping practitioners with tools that connect climate science, finance, and strategy.
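    For readers less familiar with the distinction, the sketch below contrasts a fixed deterministic scenario with a probabilistic set that carries explicit weights and reports a loss distribution rather than a single score. It is purely illustrative and assumes nothing about EDHEC's actual methodology; every pathway label, weight, and loss rate is hypothetical.

    ```python
    # Toy probabilistic-scenario sketch: Monte Carlo over weighted pathways
    # instead of one fixed deterministic scenario. All values hypothetical.
    import random

    # (pathway label, probability weight, central asset-loss rate)
    pathways = [
        ("orderly transition", 0.35, 0.02),
        ("delayed transition", 0.40, 0.08),
        ("hot-house world",    0.25, 0.20),
    ]

    def sample_losses(asset_value: float, n: int = 10_000) -> list[float]:
        """Draw pathways by weight, then add idiosyncratic noise around
        each pathway's central loss rate; return sorted simulated losses."""
        _, weights, loss_rates = zip(*pathways)
        draws = random.choices(loss_rates, weights=weights, k=n)
        return sorted(asset_value * max(0.0, random.gauss(r, 0.01)) for r in draws)

    losses = sample_losses(asset_value=1_000_000)
    print(f"median loss: {losses[len(losses) // 2]:,.0f}; "
          f"95th percentile: {losses[int(0.95 * len(losses))]:,.0f}")
    ```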

  • Alexander Nevolin

    Consulting Partner | Risk Executive | Financial Services

    8,996 followers

    Here’s my take on the recent BIS climate credit risk model, “Incorporating physical climate risks into banks’ credit risk models” (July 2025, https://lnkd.in/e4fEezjZ).
    The approach is clearly designed for a specific application: incorporating physical climate risk into the IRB capital framework. It improves on scalar PD blends by introducing a theoretically motivated adjustment that remains compatible with the Basel formula. But it does so by relying on regulatory-acceptable simplifications that make it operationally feasible at the cost of structural modelling consistency.
    Conceptually, the model adjusts PD by shifting the implied terminal distribution via a weighted blend of original and stressed states (as in the q-climate adjustment). This is very similar in spirit to a classic Distance-to-Default stress test: rather than modelling asset paths or volatility changes explicitly, it imposes a distribution-level shift that increases the probability of default relative to a fixed debt threshold. (A toy rendering of this blend follows this post.)
    ➡️ Key strength: it is easy to implement in the IRB framework. Banks can move beyond simplistic scalar PD adjustments while keeping the regulatory capital formula intact.
    But there are important limitations:
    🔹 First, this is not a “jump model” in the classical sense. True jump-diffusion models embed stochastic jumps in the asset-value process itself, affecting path-dependent dynamics and producing fatter-tailed distributions, and they preserve structural consistency: market cap remains an option on assets. By contrast, the BIS approach imposes an exogenous terminal shift at the distribution level, changing the mean but not the shape.
    🔹 Second, this means the approach cannot be used for market-consistent pricing. Structural models calibrate asset value and volatility so that the option price matches observed market cap. Any exogenous shift in the terminal distribution would simply be offset in calibration to maintain this consistency, effectively cancelling the adjustment. Regulators ignore this option-price matching in capital frameworks, but the simplification makes the approach unsuitable for pricing applications.
    🔹 Third, the practical challenge is calibration. While the mechanics of the q and α parameters are straightforward, robust estimation remains an open question.
    ✅ In short: the BIS model is purpose-built for IRB capital calculations. It offers a clear, operationally feasible way to embed physical climate risk in regulatory capital requirements, accepting simplifications that make it incompatible with market-consistent pricing or full structural modelling. That is not a failure of the approach but a deliberate trade-off to meet regulatory needs. Importantly, the q and α parameters are central to its implementation, controlling the weight and magnitude of the stressed distribution shift, and so they require careful design and calibration in practice.
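    The q-blend mechanics are compact enough to sketch. Below is a minimal, illustrative rendering of the idea as this post describes it, not the BIS paper’s exact formulation: the functional form of the stress shift and all parameter values are assumptions.

    ```python
    # Toy sketch (assumed form, NOT the BIS paper's exact specification):
    # blend the baseline PD with a stressed PD obtained by shifting the
    # implied terminal asset-value distribution by `alpha` standard
    # deviations, with weight `q` on the stressed state.
    from math import sqrt
    from scipy.stats import norm

    def climate_adjusted_pd(pd_base: float, q: float, alpha: float) -> float:
        """Weighted blend of original and climate-stressed default states."""
        dd = -norm.ppf(pd_base)                # implied distance to default
        pd_stressed = norm.cdf(-(dd - alpha))  # stress shrinks the distance
        return (1.0 - q) * pd_base + q * pd_stressed

    def irb_capital(pd: float, lgd: float, rho: float) -> float:
        """Basel IRB capital charge K at 99.9% (maturity adjustment omitted);
        the adjusted PD plugs into the unchanged regulatory formula."""
        cond_pd = norm.cdf((norm.ppf(pd) + sqrt(rho) * norm.ppf(0.999)) / sqrt(1.0 - rho))
        return lgd * (cond_pd - pd)

    pd0 = 0.01
    pd_adj = climate_adjusted_pd(pd0, q=0.10, alpha=0.5)  # illustrative values
    print(f"PD {pd0:.4f} -> {pd_adj:.4f}; "
          f"K {irb_capital(pd0, 0.45, 0.12):.4f} -> {irb_capital(pd_adj, 0.45, 0.12):.4f}")
    ```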

  • Dr. Ron Dembo

    Founder & CEO at riskthinking.AI | Founder of Algorithmics | Author of “Risk Thinking” | Lifetime Fellow, Fields Institute | Former Yale Professor, with deep expertise in Mathematical Modelling/Climate Risk

    17,240 followers

    The Probability Gap: how the mispricing of climate tail risk threatens financial stability
    The financial sector relies on a simple yet increasingly risky misunderstanding of risk. We have traditionally regarded catastrophic climate scenarios as tail risks: unlikely events that occupy only a small corner of our models. But what if the mathematical foundation for that perspective is flawed? This isn't a philosophical question; it's a measurable problem of model risk that needs urgent attention from any fiduciary responsible for protecting capital. After thirty years of developing enterprise risk management systems, from founding Algorithmics to creating RiskThinking.AI, I’ve learned that the biggest vulnerabilities come from assumptions we refuse to question. The evidence now shows that our core assumption about the likelihood of climate-related disaster is flawed, and it's time to revisit our understanding of risk from the ground up.
    The gap between climate science and financial practice is stark. Recent analysis from Oxford Economics estimates a 57.5% chance of climate catastrophe scenarios, yet the standard Expected Credit Loss (ECL) models used by banks assign only a 5% likelihood to these same scenarios. This isn't a calibration error; it's a fundamental "Probability Gap" at the heart of our financial system, and it threatens systemic stability. We are misjudging highly probable outcomes as unlikely tail risks because our models, built for a stable world of mean reversion, no longer function correctly in our non-stationary climate reality. The result is a widespread mispricing of risk. When the data suggest that catastrophic outcomes are this probable, failing to account for them properly isn't just poor strategy; it is a breach of fiduciary duty. (A toy calculation of the weighting effect follows this post.)
    The only logical response is to update our framework. This requires a new technological infrastructure capable of modelling these complex, multifactor risks stochastically, and the tools to do this exist today. The challenge is no longer technical; it's about leadership. Institutions that revise their planning assumptions and acknowledge the real likelihood of these tail events will gain a crucial analytical edge in the coming decades. Is your risk framework designed for the world that is, or the world that was? #ClimateRisk #FinancialRisk #RiskManagement #ESG #Finance #Adaptation #SystemicRisk #Leadership
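    To make the gap concrete, the toy calculation below shows how scenario weights alone move a probability-weighted ECL. The 5% versus 57.5% weights come from the post; the exposure, the three-scenario split, and the loss rates are hypothetical.

    ```python
    # Illustrative only: scenario weights drive probability-weighted ECL.
    # The 5% vs 57.5% catastrophic weights are from the post; all other
    # numbers are made up for the example.
    EAD = 100_000_000  # exposure at default, $

    loss_rates = {          # loss rate conditional on each scenario
        "benign":       0.01,
        "adverse":      0.05,
        "catastrophic": 0.25,
    }

    def ecl(weights: dict[str, float]) -> float:
        """Probability-weighted expected credit loss across scenarios."""
        return sum(weights[s] * loss_rates[s] * EAD for s in loss_rates)

    model_weights   = {"benign": 0.70, "adverse": 0.25,  "catastrophic": 0.05}
    science_weights = {"benign": 0.10, "adverse": 0.325, "catastrophic": 0.575}

    print(f"ECL with model weights:   ${ecl(model_weights):,.0f}")
    print(f"ECL with science weights: ${ecl(science_weights):,.0f}")
    ```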

  • Charles Cozette

    CEO @ CarbonRisk Intelligence

    8,857 followers

    The spatial correlation of climate risks breaks standard insurance models. New research synthesizes how this and other market failures should guide adaptation policy.
    A new working paper identifies the key market failures preventing efficient adaptation: insurance markets cannot fully cover spatially correlated climate risks (e.g., hurricanes in Florida causing state-wide infrastructure damage), credit constraints block adaptation investments for low-income households, and positive and negative externalities distort private adaptation decisions.
    The analysis formalizes adaptation through two economic channels: ex-post responses to weather shocks and ex-ante investments based on climate expectations (written out formally in the sketch after this post). This framework shows how incomplete insurance markets and credit constraints create adaptation gaps. For instance, smallholder farmers often cannot access credit to invest in irrigation systems, while disaster insurance remains prohibitively expensive due to correlated risks.
    The chapter also highlights distributional concerns in adaptation markets. Low-income households invest less in adaptation despite higher marginal benefits, creating a feedback loop in which climate vulnerability concentrates among the poor. These distributional effects justify public intervention even when markets function well, particularly as historical emissions from wealthy regions drive adaptation needs in poorer areas.
    Great work from top scientists Tamma Carleton, Esther Duflo, Kelsey Jack, and Guglielmo Zappalà, published by the National Bureau of Economic Research. (https://lnkd.in/e3gcrvn8)
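    One standard way to write the two channels formally (the notation is mine, not necessarily the paper’s): ex-ante adaptation is chosen under climate expectations before the weather shock is realized, and the ex-post response is chosen after observing it.

    ```latex
    \[
      \max_{a}\; \mathbb{E}_{\omega}\!\left[\, U\!\big(y(\omega)
          - D\big(\omega,\, a,\, r^{*}(\omega,a)\big) - c(a)\big) \right],
      \qquad
      r^{*}(\omega,a) \in \arg\max_{r}\; U\!\big(y(\omega)
          - D(\omega, a, r) - c(a) - \kappa(r)\big),
    \]
    ```

    where a is the ex-ante investment with cost c, r the ex-post response with cost κ, ω the realized weather shock, and D damages. Incomplete insurance and credit constraints then enter as limits on smoothing y(ω) or on financing c(a), which is exactly where the adaptation gaps described above open up.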

  • The report "Recalibrating Climate Risk: Aligning Damage Functions with Scientific Understanding" argues that current economic models significantly underestimate the unknown of future climate impacts. The document focuses on the profound uncertainty inherent in "damage functions"—the mathematical tools used to predict how global warming will affect GDP—and highlights a dangerous disconnect between economic theory and scientific reality. The report emphasizes that the future will be defined by "extremes," not the "averages" currently used in most models. There is significant uncertainty regarding the frequency and intensity of "tail risks"—low-probability but catastrophic events like massive storms or heatwaves. Unlike steady economic growth, climate damage is expected to be "non-linear," meaning small increases in temperature could lead to sudden, disproportionate economic collapses that current models fail to predict. A major wildcard is the potential for "planetary tipping points" (e.g., the melting of permafrost), which introduce "bounded collapse probabilities" that are currently omitted from standard risk assessments. Future uncertainty is exacerbated by how damages interact across different sectors and geographies. Damages are described as "cascading and long-lasting," where a failure in one sector (like agriculture) can trigger unpredictable "capital destruction" and "labour productivity losses" across the entire economy. There is deep uncertainty about how damage "compounds across time, space, and sectors," making it difficult for financial regulators to assess the true level of systemic risk. The report identifies "direct and indirect" failures in how climate risk is currently quantified. Much of the current future uncertainty stems from "arbitrary" functional forms and hidden assumptions in Integrated Assessment Models. While incorporating "expert knowledge" can help, the report notes that these judgments may be "biased" and that there is a lack of "expert confidence" when dealing with higher temperature levels. There is a "fundamental disconnect" between climate science and the "top-down macroeconomic perspective" used by financial regulators and investors, creating a "blind spot" for future climate-driven financial crises. The report suggests that the "greatest unknown" is the point at which climate damage exceeds the system's ability to adapt. To navigate this, researchers and regulators must move beyond "aggregate functions" and embrace "process-based approaches" that explicitly quantify the massive uncertainties of a warming world.

  • Yeshwanth Vepachadu

    Helping Leaders, Founders & HRs Build Personal Brand on LinkedIn | AI Insurance Strategist

    10,315 followers

    𝗪𝗵𝘆 𝗶𝗻𝘀𝘂𝗿𝗲𝗿𝘀 𝗮𝗿𝗲 𝗻𝗼 𝗹𝗼𝗻𝗴𝗲𝗿 𝘁𝗿𝘂𝘀𝘁𝗶𝗻𝗴 𝗰𝗮𝘁𝗮𝘀𝘁𝗿𝗼𝗽𝗵𝗲 𝗺𝗼𝗱𝗲𝗹𝘀 𝘁𝗵𝗲 𝘄𝗮𝘆 𝘁𝗵𝗲𝘆 𝘂𝘀𝗲𝗱 𝘁𝗼
    Catastrophe models were never meant to answer one question that boards are asking today: “𝙄𝙨 𝙤𝙪𝙧 𝙚𝙭𝙥𝙤𝙨𝙪𝙧𝙚 𝙘𝙝𝙖𝙣𝙜𝙞𝙣𝙜 𝙛𝙖𝙨𝙩𝙚𝙧 𝙩𝙝𝙖𝙣 𝙤𝙪𝙧 𝙢𝙤𝙙𝙚𝙡𝙨?” In 2025, that question has become impossible to ignore. Here’s what many insurers and reinsurers are struggling with right now:
    • Return periods are less reliable for secondary perils.
    • Loss clustering is happening across regions that were once treated as independent.
    • Event frequency is shifting faster than annual model updates.
    • Exposure is accumulating silently through growth, not underwriting intent.
    • Post-event loss creep is consistently higher than modelled expectations.
    This is where AI is being used in efficient ways, alongside traditional CAT models.
    𝗪𝗵𝗮𝘁 𝗶𝗻𝘀𝘂𝗿𝗲𝗿𝘀 𝗮𝗿𝗲 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗱𝗼𝗶𝗻𝗴 𝘁𝗼𝗱𝗮𝘆
    Instead of running catastrophe models once or twice a year, teams are now:
    • Using AI to monitor exposure drift weekly, so portfolio changes don’t go unnoticed between renewals (a toy sketch follows this post).
    • Layering near-real-time climate signals on top of vendor models to detect when assumptions start to break down.
    • Running thousands of scenario variations, not to predict the next event, but to understand where loss amplification could occur.
    • Using AI to identify emerging accumulation risks, especially from secondary perils like convective storms, flooding, and wildfire spread.
    • Comparing modelled vs actual post-event development patterns to adjust expectations around loss creep and reserve adequacy.
    • Highlighting where diversification assumptions no longer hold, before those correlations fail during a real event.
    This is not about replacing RMS- or AIR-style models. It’s about challenging them earlier and more often.
    𝗪𝗵𝗮𝘁 𝘄𝗶𝗹𝗹 𝘁𝗵𝗶𝘀 𝘂𝗻𝗹𝗼𝗰𝗸 𝗻𝗲𝘅𝘁
    The most important shift is not better prediction. It’s earlier 𝗱𝗲𝗰𝗶𝘀𝗶𝗼𝗻-𝗺𝗮𝗸𝗶𝗻𝗴.
    • Reinsurance buying decisions that reflect current exposure, not last year’s view.
    • Capital buffers adjusted before volatility hits, not after.
    • Underwriting appetite changes informed by live accumulation signals.
    • Portfolio steering away from emerging hotspots months earlier.
    • Clearer conversations with boards about uncertainty, not false precision.
    Cat modelling is transitioning from a static output to a continuous risk conversation.
    𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀 𝗻𝗼𝘄
    In a stable climate, historical models were good enough. In a volatile one, delay becomes risk. The insurers who treat catastrophe risk as a living signal, not a periodic report, will make better capital, underwriting, and reinsurance decisions even when the models are wrong. In today’s environment, being early matters more than being precise. #InsuranceLeadership #CatastropheRisk #AIinInsurance #Reinsurance #RiskManagement
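    A hypothetical sketch of the weekly exposure-drift monitoring described above: compare the current book’s total insured value (TIV) per hazard zone against the snapshot used for the last full catastrophe-model run, and flag zones that moved beyond tolerance. The column names and the 10% threshold are assumptions, not any vendor’s API.

    ```python
    # Hypothetical exposure-drift monitor; schema and threshold are assumed.
    import pandas as pd

    def exposure_drift(snapshot: pd.DataFrame, current: pd.DataFrame,
                       tolerance: float = 0.10) -> pd.DataFrame:
        """Return hazard zones whose TIV drifted more than `tolerance`
        since the snapshot used for the last full model run."""
        base = snapshot.groupby("hazard_zone")["tiv"].sum()
        now = current.groupby("hazard_zone")["tiv"].sum()
        drift = (now - base) / base
        flagged = drift[drift.abs() > tolerance]
        return flagged.rename("tiv_drift").reset_index()

    # Example: a wildfire zone grew 25% through renewals since the snapshot.
    snap = pd.DataFrame({"hazard_zone": ["wildfire_A", "flood_B"], "tiv": [100.0, 200.0]})
    curr = pd.DataFrame({"hazard_zone": ["wildfire_A", "flood_B"], "tiv": [125.0, 204.0]})
    print(exposure_drift(snap, curr))  # flags wildfire_A at +25% drift
    ```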

  • Nate Wittasek

    Resilience-focused engineer and regulatory expert focused on environmental risks, the built environment, and how people and places adapt so they can thrive.

    4,175 followers

    This won’t win applause, but it’s true: when insurers say a state is “too hard to do business in,” it isn’t punishment. It’s physics. I’ve sat at kitchen tables after wildfires as families opened non-renewal letters. I’ve watched closings fall apart because wind coverage vanished. It feels like abandonment. It’s a signal: price is being held below risk, and capital is walking. Insurers don’t exit because weather got worse. They exit because the rules won’t let price, models, and mitigation line up with reality. When that alignment breaks, the math breaks. (A toy rate calculation follows this post.)
    What makes a state “too hard”?
    ✔️ Rate approvals that lag loss trends and reinsurance costs.
    ✔️ Bans or limits on forward-looking catastrophe models (wildfire, wind, flood, hail).
    ✔️ Legal friction that turns small claims into big volatility.
    ✔️ Underwriting with one hand tied: maps you can’t use, data you can’t price.
    ✔️ FAIR Plans swelling from last resort to first stop.
    Price must equal risk, or capital leaves. Unpopular, yes. Also fixable:
    ✔️ Allow credible, forward-looking cat models and recognize reinsurance costs.
    ✔️ Tie real discounts to verified mitigation: a 0–5 ft noncombustible zone, ember-resistant vents and eaves, fortified roofs, elevated utilities.
    ✔️ Use transparent hazard maps with consumer protections, not bans.
    ✔️ Fund community-scale risk reduction (fuels, drainage, roof upgrades, codes) so expected losses actually fall.
    ✔️ Keep residual markets small and temporary, with clear off-ramps back to private capacity.
    I don’t like the human cost of saying this. I’ve seen it. But pretending risk is cheap doesn’t protect people; it just delays the bill and shrinks options. We can choose applause now, or availability later. #Insurance #Resilience #Wildfire #ClimateRisk #RiskModeling #Infrastructure #PublicPolicy #DisasterMitigation
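    A toy illustration of “price held below risk,” with all numbers hypothetical: when expected losses and reinsurance costs rise faster than the approved rate, the combined ratio climbs past 100% and capital exits.

    ```python
    # Hypothetical numbers only: rate suppression shown via combined ratio.
    def combined_ratio(premium: float, expected_loss: float,
                       expenses: float, reinsurance_cost: float) -> float:
        """Losses + expenses + reinsurance as a share of premium."""
        return (expected_loss + expenses + reinsurance_cost) / premium

    # Before: the approved rate is roughly adequate.
    before = combined_ratio(premium=1000, expected_loss=600,
                            expenses=250, reinsurance_cost=100)
    # After: expected wildfire losses +30% and reinsurance +50%,
    # but the approved rate increase is capped at +5%.
    after = combined_ratio(premium=1050, expected_loss=780,
                           expenses=250, reinsurance_cost=150)
    print(f"before: {before:.0%}, after: {after:.0%}")  # 95% -> 112%
    ```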
