Super excited to share our new review led by Matthew Worden, just published in Global Change Biology: “Combining Observations and Models: A Review of the CARDAMOM Framework for Data-Constrained Terrestrial Ecosystem Modeling” https://lnkd.in/gWsJJy6w

Over the last two decades, the CARbon DAta MOdel fraMework (CARDAMOM) has grown into one of the leading approaches for bringing together diverse ecological observations (from flux towers to satellites) with process-based models of the terrestrial biosphere. This review, motivated by the 2024 CARDAMOM Community Workshop at Caltech, takes stock of where the framework stands and where it’s heading:

1) How Bayesian model–data fusion enables retrieval of spatially explicit ecosystem parameters and carbon cycle dynamics;
2) Key scientific advances made possible by CARDAMOM, from drought responses to global carbon–climate feedbacks;
3) Challenges ahead, including data quality, equifinality, and computational costs;
4) Community recommendations for incorporating new observations (e.g., hyperspectral, biomass, hydrology missions), engaging with remote sensing and field networks, and integrating machine learning.

At its core, CARDAMOM provides a unique bridge between observations and models, offering both mechanistic insight and actionable pathways to improve Earth system forecasts and ecosystem management.

Really nice to be part of this collaborative effort with colleagues across Stanford University, NASA Jet Propulsion Laboratory, Caltech, The University of Edinburgh, UC Santa Barbara, Columbia University, University of California, Davis, Berkeley Lab and beyond. 🚀

#EcosystemModeling #RemoteSensing #DataAssimilation #CarbonCycle #GlobalChangeBiology
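To make the "Bayesian model–data fusion" idea concrete: CARDAMOM-style frameworks run a process model forward, compare it to observations, and sample a posterior over model parameters. Here is a deliberately minimal toy sketch of that workflow, assuming nothing about CARDAMOM's actual model structure or sampler: a one-pool carbon model (constant input, first-order turnover) whose turnover rate `k` is retrieved from noisy synthetic "observations" with a Metropolis random walk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-pool carbon model: C[t+1] = C[t] + input - k * C[t] (Euler steps).
# This is a stand-in for a real process model, not CARDAMOM's.
def simulate(k, c0=100.0, inp=2.0, n=50):
    c = np.empty(n)
    c[0] = c0
    for t in range(1, n):
        c[t] = c[t - 1] + inp - k * c[t - 1]
    return c

# Synthetic "observations": known turnover rate plus Gaussian noise.
k_true = 0.05
obs = simulate(k_true) + rng.normal(0.0, 1.0, 50)

# Log-posterior: Gaussian likelihood (sigma = 1), uniform prior on k.
def log_post(k):
    if not (0.001 < k < 0.5):
        return -np.inf
    return -0.5 * np.sum((simulate(k) - obs) ** 2)

# Metropolis random walk over k.
k, lp = 0.2, log_post(0.2)
samples = []
for _ in range(5000):
    k_new = k + rng.normal(0.0, 0.01)
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:
        k, lp = k_new, lp_new
    samples.append(k)

k_est = np.mean(samples[2000:])  # posterior mean after burn-in
```

The retrieved posterior mean lands close to the true turnover rate; in a real application the state vector would contain dozens of pools and parameters per grid cell, which is where equifinality and computational cost become the challenges the review discusses.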
Scientific models and climate observations
Summary
Scientific models are computer simulations that help predict climate changes, while climate observations are real-world measurements like temperature, rainfall, and satellite data. These models and observations work together to improve our understanding of Earth's climate, forecast future conditions, and guide environmental decisions.
- Connect real data: Always integrate up-to-date climate observations, such as satellite or weather station data, to make your models more accurate and meaningful.
- Refine with technology: Utilize advances like high-resolution modeling and AI-based methods to capture complex processes and produce better climate projections.
- Address uncertainties: Regularly compare model predictions with actual outcomes, and update models to account for unexpected changes or gaps in understanding.
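The third point, comparing model predictions against actual outcomes, usually starts with simple skill metrics. As a generic illustration (the numbers below are hypothetical, not from any dataset mentioned here), bias and RMSE between a modeled and an observed series can be computed in a few lines:

```python
import numpy as np

# Hypothetical annual temperature anomalies (deg C): model vs. station record.
model = np.array([0.1, 0.3, 0.2, 0.5, 0.6, 0.8])
obs   = np.array([0.2, 0.3, 0.4, 0.6, 0.9, 1.1])

bias = float(np.mean(model - obs))                   # systematic offset
rmse = float(np.sqrt(np.mean((model - obs) ** 2)))   # overall error magnitude
```

A persistent negative bias like the one here would suggest the model systematically underestimates observed warming, which is exactly the kind of mismatch that should trigger a model update.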
🌍 NASA - National Aeronautics and Space Administration, in collaboration with data from the World Meteorological Organization, merges satellite observations, advanced models, and immense computing power to monitor aerosols in our atmosphere. These tiny, invisible solid or liquid particles — including black carbon (orange/red), sea salt (cyan), dust (magenta), and sulfates (green) — travel vast distances, affecting air quality, human health, climate, and visibility far from their source.

🔹 In South America, black carbon from wildfires burning in the Amazon rainforest drifts across the continent.
🔹 Over the Atlantic, massive plumes of dust from Northern Africa journey westward toward the Americas, influencing ecosystems, weather, and even hurricane formation.

This striking visualization, powered by NASA’s Goddard Earth Observing System (GEOS) model and informed by WMO’s authoritative climate data, delivers realistic, high-resolution weather and aerosol insights. These data streams fuel #AI innovation and help provide customized environmental predictions — critical tools for #climateresilience and disaster preparedness #EW4ALL.

➡ A reminder: Every particle tells a story about the planet’s interconnected systems — and our shared responsibility to protect them
-
“Fifty years into the project of modeling Earth’s future climate, we still don’t really know what’s coming. Some places are warming with more ferocity than expected. Extreme events are taking scientists by surprise. Right now, as the bald reality of climate change bears down on human life, scientists are seeing more clearly the limits of our ability to predict the exact future we face. The coming decades may be far worse, and far weirder, than the best models anticipated… This is a problem. The world has warmed enough that city planners, public-health officials, insurance companies, farmers, and everyone else in the global economy want to know what’s coming next for their patch of the planet… Today’s climate models very accurately describe the broad strokes of Earth’s future. But warming has also now progressed enough that scientists are noticing unsettling mismatches between some of their predictions and real outcomes… Across places where a third of humanity lives, actual daily temperature records are outpacing model predictions… And a global jump in temperature that lasted from mid-2023 to this past June remains largely unexplained… Trees and land are major sinks for carbon emissions, and that this fact might change is not accounted for in climate models. But it is changing: Trees and land absorbed much less carbon than normal in 2023, according to research published last October… The interactions of the ice sheets with the oceans are also largely missing from models, Schmidt told me, despite the fact that melting ice could change ocean temperatures, which could have significant knock-on effects… The models may be underestimating future climate risks across several regions because of a yet-unclear limitation. And, Rohde said, underestimating risk is far more dangerous than overestimating it.” #ClimateRisk #TransitionRisk https://lnkd.in/eiSRvUeF
-
Alpine Glaciation Modeling with Physics-Driven AI

A groundbreaking article, “A data-consistent model of the last glaciation in the Alps achieved with physics-driven AI”, has just been published, detailing a new, data-consistent model of the last glaciation in the European Alps. This achievement leverages the power of physics-informed machine learning, overcoming previous limitations in high-resolution glacier modeling.

For decades, numerical models of Alpine glaciation have struggled with significant discrepancies between simulated and observed ice thickness, often overestimating by hundreds of meters. This new model, the Instructed Glacier Model (IGM), uses a 3D model enhanced with AI to run 100 Alps-wide simulations at an unprecedented 300 m resolution, spanning 17,000 years. This level of detail was previously computationally impossible. The IGM reduces the ice thickness offset by 200-450% compared to previous studies. The model also produces ice margins that better match empirical reconstructions, and it provides better estimates of ice velocities, temperatures, basal conditions, and erosion processes.

This research demonstrates the transformative potential of physics-driven AI in tackling complex scientific challenges. The IGM’s efficiency opens doors for higher-resolution modeling of other paleo-ice fields and even ice-sheet-scale modeling, crucial for improving projections of future sea-level rise and climate change.

#glaciology #AI #machinelearning #climatechange #paleoclimate #Alps #scientificresearch #innovation #dataanalysis #modelling

Link for the paper: https://lnkd.in/dESBb9wd
Authors: Tancrède P.M. Leger, Guillaume Jouvet, Sarah Kamleitner, Jürgen Mey, Frédéric Herman, Brandon D. Finley, Susan Ivy-Ochs, Andreas Vieli, Andreas Henz, and Samuel U. Nussbaumer.
-
I am very excited about our new study that was just published in Nature Geoscience. It shows that future extreme precipitation will intensify far more than previously estimated, driven by stronger mesoscale moisture convergence. Using a global high-resolution Earth system model that simulates extreme-producing phenomena far better than its low-resolution counterpart (see image below), we find that daily extremes could rise by over 40% by 2100—nearly three times the dynamical contribution seen in standard low-resolution models. These results highlight the urgent need for high-resolution climate modeling to constrain risks better and support effective adaptation strategies.
-
What are two of the biggest uncertainties in climate? At the top of my list are aerosols and clouds. So I was very surprised to realize that climate models tend to agree with each other when it comes to trends in the amount of sunlight reaching the land surface -- since this is largely a function of reflection, absorption, or scattering related to atmospheric water vapor and aerosols*. In general, CMIP6 climate models brighten in the Eastern US and Europe (reduced aerosols), dim in India/China, and don't do much elsewhere (see CMIP6 panel in figure below).

*Caveat: most CMIP6 models use prescribed aerosol properties and the same historical inventory, so the model spread is unlikely to represent true uncertainty.

The observations (as measured by ERA5 -- we do our best to validate it in the paper, open access, linked below) look pretty different, including greater brightening in the central/west US than the eastern US, and substantial increases in downward shortwave throughout South America (see ERA5 panel below).

To understand whether the observed trends are distinct from the multimodel mean (which averages out internal variability) but consistent with the range of outcomes simulated by our climate models, we comprehensively compared the observations to 237 different CMIP6-era simulations. In multiple regions of the world, including the US Southwest, parts of South America, and eastern China, the trends in ERA5 are consistently at the edge of, or outside, the ensemble. This is shown in the righthand panel below, in which the ERA5 trends are ranked within the climate model ensemble. A rank of 238 means that the ERA5 trends exceed all the model trends.

Why? We don't totally know at the moment. In the GRL paper where Isla Simpson and I explore these trends (https://lnkd.in/gubYUrXH), we find that differences in cloud trends are the likely (and expected) driver.
But this raises the usual question: is the model-observational discrepancy in trends due to model errors in the forced response or deficiencies in simulation of internal variability? The role of errors in ERA5 cannot be discounted either, since sources of "ground truth" are limited before the 2000s. Intriguingly, many although not all of the regions with the newly identified model-observational discrepancy in downward shortwave radiation are also those with a discrepancy in humidity trends (https://lnkd.in/gSPhpdjP). As always, there is more to do, including a deeper dive into the trends in continental cloudiness. My personal hypothesis is that all of these discrepancies (clouds, radiation, humidity) are linked to model-observational differences in land-atmosphere interactions. Would love to hear thoughts in the comments or by email!
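The ensemble-ranking diagnostic described above is simple to reproduce in spirit. The sketch below uses entirely synthetic data (the trend magnitudes, noise levels, and 40-year record length are illustrative assumptions, not values from the paper): fit a linear trend to each of 237 model members and to one "observed" series, then find the rank of the observed trend within the pooled 238-value distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(40)

# Hypothetical downward-shortwave anomaly series: 237 model members with a
# weak forced trend, and one "observed" series with a much stronger trend.
ensemble = 0.02 * years + rng.normal(0.0, 1.0, (237, years.size))
observed = 0.08 * years + rng.normal(0.0, 1.0, years.size)

def trend(series):
    # Least-squares slope (units per year).
    return np.polyfit(years, series, 1)[0]

member_trends = np.array([trend(m) for m in ensemble])
obs_trend = trend(observed)

# Rank of the observed trend among all 238 trends; rank 238 would mean the
# observed trend exceeds every model member.
rank = int(np.sum(member_trends < obs_trend)) + 1
```

When the observed trend sits at rank 237 or 238 across a coherent region, as the post describes for the US Southwest and eastern China, that is the signal that observations fall at the edge of, or outside, what the ensemble can produce.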
-
Precipitation is one of the most challenging variables to accurately simulate in global climate models as it depends on small-scale physical processes. In our latest research published in 𝘚𝘤𝘪𝘦𝘯𝘤𝘦 𝘈𝘥𝘷𝘢𝘯𝘤𝘦𝘴, we describe an advancement in our hybrid atmospheric model, NeuralGCM, which now leverages AI trained directly on NASA satellite observations to improve global precipitation simulations.

Key results of this work:
👉 Physics-AI Integration: The model combines a traditional fluid dynamics solver for large-scale processes with AI neural networks that learn to account for the effects of small-scale physics, specifically precipitation.
👉 Improved Extremes: NeuralGCM demonstrates significant improvements in capturing the intensity of the top 0.1% of extreme rainfall events, better representing heavy precipitation than many traditional models.
👉 Long-Term Accuracy: In multi-year simulations, the model achieved a 40% average error reduction over land compared to leading atmospheric models used in the latest Intergovernmental Panel on Climate Change (IPCC) report.
👉 Daily Patterns: It more accurately reproduces the timing of peak daily precipitation, which is critical for hydrology and agricultural planning.

We are already seeing the value of this approach in the field. A partnership between the University of Chicago and the Indian Ministry of Agriculture recently used NeuralGCM in a pilot program to help predict the onset of the monsoon season.

NeuralGCM is part of our Earth AI program to better understand the physical earth in ways that benefit society. We have made the code and model checkpoints openly available to the community.

Read the full details on the Google Research blog by Janni Yuval: goo.gle/4qH63sU
Paper: https://lnkd.in/d7E4US4W
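The "physics-AI integration" pattern can be illustrated with a toy hybrid time-stepper. This is emphatically not NeuralGCM's architecture (which uses a differentiable spectral dynamical core and trained neural networks): here the "resolved dynamics" is a 1D upwind advection step, and the "learned" sub-grid correction is a fixed random linear map standing in for a trained network.

```python
import numpy as np

# Resolved dynamics: 1D periodic advection with a first-order upwind scheme.
def physics_step(u, c=0.5):
    return u - c * (u - np.roll(u, 1))

# Stand-in for a learned sub-grid tendency: a small fixed linear map.
# In a real hybrid model this would be a neural network trained on
# observations or high-resolution reference simulations.
rng = np.random.default_rng(0)
W = 0.01 * rng.normal(size=(64, 64))

def learned_correction(u):
    return W @ u

def hybrid_step(u):
    # One model step = resolved physics plus learned small-scale tendency.
    return physics_step(u) + learned_correction(u)

# Roll the hybrid model forward from a smooth initial condition.
u = np.sin(2 * np.pi * np.arange(64) / 64)
for _ in range(10):
    u = hybrid_step(u)
```

The design point this illustrates: the physics solver guarantees sensible large-scale behavior, while the additive correction only has to learn what the solver gets wrong, which is a much easier target than learning the full dynamics end to end.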
-
You might have seen news from our Google DeepMind colleagues lately on GenCast, which is changing the game of weather forecasting by building state-of-the-art weather models using AI. Some of our teams started to wonder – can we apply similar techniques to the notoriously compute-intensive challenge of climate modeling?

General circulation models (GCMs) are a critical part of climate modeling, focused on the physical aspects of the climate system, such as temperature, pressure, wind, and ocean currents. Traditional GCMs, while powerful, can struggle with precipitation – and our teams wanted to see if AI could help.

Our team released a paper and data on our AI-based GCM, building on our Nature paper from last year – specifically, now predicting precipitation with greater accuracy than the prior state of the art. The new paper on NeuralGCM introduces 𝗺𝗼𝗱𝗲𝗹𝘀 𝘁𝗵𝗮𝘁 𝗹𝗲𝗮𝗿𝗻 𝗳𝗿𝗼𝗺 𝘀𝗮𝘁𝗲𝗹𝗹𝗶𝘁𝗲 𝗱𝗮𝘁𝗮 𝘁𝗼 𝗽𝗿𝗼𝗱𝘂𝗰𝗲 𝗺𝗼𝗿𝗲 𝗿𝗲𝗮𝗹𝗶𝘀𝘁𝗶𝗰 𝗿𝗮𝗶𝗻 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝗶𝗼𝗻𝘀. Kudos to Janni Yuval, Ian Langmore, Dmitrii Kochkov, and Stephan Hoyer!

Here's why this is a big deal:

𝗟𝗲𝘀𝘀 𝗕𝗶𝗮𝘀, 𝗠𝗼𝗿𝗲 𝗔𝗰𝗰𝘂𝗿𝗮𝗰𝘆: These new models have less bias, meaning they align more closely with actual observations – and we see this both for forecasts up to 15 days, and also for 20-year projections (in which sea surface temperatures and sea ice were fixed at historical values, since we don’t yet have an ocean model). NeuralGCM forecasts are especially performant around extremes, which are especially important in understanding climate anomalies, and can predict rain patterns throughout the day with better precision.

𝗖𝗼𝗺𝗯𝗶𝗻𝗶𝗻𝗴 𝗔𝗜, 𝗦𝗮𝘁𝗲𝗹𝗹𝗶𝘁𝗲 𝗜𝗺𝗮𝗴𝗲𝗿𝘆, 𝗮𝗻𝗱 𝗣𝗵𝘆𝘀𝗶𝗰𝘀: The model combines a learned physics model with a dynamic differentiable core to leverage both physics and AI methods, with the model trained directly on satellite-based precipitation observations.

𝗢𝗽𝗲𝗻 𝗔𝗰𝗰𝗲𝘀𝘀 𝗳𝗼𝗿 𝗘𝘃𝗲𝗿𝘆𝗼𝗻𝗲! This is perhaps the most exciting news! The team has made their pre-trained NeuralGCM model checkpoints (including their awesome new precipitation models) available under a CC BY-SA 4.0 license. Anyone can use and build upon this cutting-edge technology! https://lnkd.in/gfmAx_Ju

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀: Accurate predictions of precipitation are crucial for everything from water resource management and flood mitigation to understanding the impacts of climate change on agriculture and ecosystems.

Check out the paper to learn more: https://lnkd.in/geqaNTRP
-
New paper alert! A fully coupled climate reanalysis by Vince Cooper covering 1850-2023. We used strongly coupled data assimilation on observations of sea surface temperature, land-based air temperature, sea-level pressure over the ocean, and satellite sea-ice concentration at monthly resolution. As far as we know, this is the first time that these fields have been simultaneously reconstructed over the historical period.

Results show significant low-frequency variance in ENSO, with a peak near the start of the 20th century, muted modern cooling trends in Southern Ocean SST (see figure below), a decline in Arctic sea-ice area since the 19th century, and relatively small changes in Antarctic sea-ice area.

Additional key points:
* Most reanalysis datasets consider each component of the climate system independently (i.e., separate atmospheric and oceanic reanalyses), leading to inconsistencies in coupled variability. Here, we use strongly coupled data assimilation, which means that all observations update every component of the climate system.
* Efficient emulators are used to propagate the memory of past observations forward in time. We use cyclostationary linear inverse models trained on 8 CMIP6 model simulations to include the role of model error in the reconstructions. These models are used to create 8 separate reanalyses, propagating the full error covariance matrix for all climate variables.
* A 1600-member ensemble is created by sampling the posterior distributions in a dynamically consistent process, providing a large sample of equally likely reanalyses of historical climate. This provides a rich dataset for exploring climate variability with uncertainty quantification.

The preprint can be found here: https://lnkd.in/gbEtR4Jw
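The defining feature of strongly coupled assimilation, one observation type updating every component of the state, comes from the cross-covariances in the Kalman update. The sketch below is a deliberately simplified two-variable illustration (synthetic covariances, a single SST observation, and a basic ensemble update without observation perturbations), not the paper's cyclostationary linear inverse model machinery:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy coupled state per ensemble member: [SST, surface air temperature],
# with correlated errors across a 1600-member ensemble.
n_ens = 1600
truth = np.array([1.0, 0.8])
ens = truth + rng.multivariate_normal([0.0, 0.0],
                                      [[0.5, 0.4], [0.4, 0.5]], n_ens)

# Observe SST only (first state component), with error variance r.
H = np.array([[1.0, 0.0]])
r = 0.1
y = 1.2

# Simplified ensemble Kalman update: the SST observation also shifts air
# temperature through the sample cross-covariance -- the essence of
# strongly coupled data assimilation.
P = np.cov(ens.T)                        # 2x2 sample covariance
K = P @ H.T / (H @ P @ H.T + r)          # Kalman gain, shape (2, 1)
innov = y - ens @ H.T.ravel()            # per-member innovation (SST misfit)
ens_updated = ens + np.outer(innov, K.ravel())

shift = ens_updated.mean(0) - ens.mean(0)  # mean update to [SST, air temp]
```

Even though only SST was observed, the air-temperature component of `shift` is nonzero: the observation's information propagates across the coupled state exactly as the post describes for the ocean, atmosphere, and sea-ice fields.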
-
The Nature Communications article "How to stop being surprised by unprecedented weather" outlines a comprehensive framework to anticipate and manage the risks of extreme, previously unobserved weather events. The article’s central thesis is that surprise should not be the default response to such events—and that science, policy, and disaster planning can work in concert to build resilience.

These methods help anticipate extreme weather events beyond what has occurred in the observational record:

a. Conventional Statistical Methods - Use historical weather data and extreme value theory to estimate probabilities of rare events. Limitations: short observational records, underestimation of extremes, and inability to simulate events beyond past climate conditions.

b. Past Events and Proxy Data - Extend the view of climate risk through historical documentation, oral history, and paleoclimate proxies (tree rings, sediments, etc.). Benefits: reveal long-term variability and past extremes that modern records miss. Limitations: coarse resolution, dating uncertainty, and difficulty aligning with present-day conditions.

c. Event-Based Storylines - Construct physically plausible scenarios of specific high-impact events using counterfactuals and modeling. Useful for local decision-making and public engagement. Limitations: focused on specific events, often non-probabilistic, and dependent on expert input.

d. Weather and Climate Model Data Exploration - Mine large ensembles of model outputs (e.g., UNSEEN, SMILEs, CORDEX) for unobserved but plausible extremes. Enables exploration of events outside the observational record using physical consistency. Limitations: computationally intensive, resolution trade-offs, and model biases.
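Method (a), extreme value theory, is the most mechanical of the four to demonstrate. The standard block-maxima approach fits a generalized extreme value (GEV) distribution to annual maxima and reads off return levels. The sketch below uses synthetic "annual maximum rainfall" data (the location, scale, and 60-year record length are illustrative assumptions), and also shows the core limitation the article flags: the estimate is an extrapolation from a short record.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Synthetic 60-year record of annual maximum daily rainfall (mm),
# drawn from a Gumbel distribution purely for illustration.
annual_max = rng.gumbel(loc=80.0, scale=15.0, size=60)

# Fit a GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(annual_max)

# 100-year return level: the value exceeded with probability 1/100
# in any given year, i.e. the 99th percentile of the fitted GEV.
rl_100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
```

Note that the 100-year return level sits well above anything in the 60-year record, which is the point of the method, and also why it inherits the limitations listed above: it assumes stationarity and cannot anticipate physically novel events outside the fitted distribution's reach.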