Remote Sensing Applications

Explore top LinkedIn content from expert professionals.

  • View profile for Matt Forrest (Influencer)

    🌎 I help GIS professionals break out of the technician trap, and build modern, high-impact geospatial careers · Scaling geospatial at Wherobots

    81,833 followers

    🧠 GPT changed language. Clay might change the way we understand Earth.

    Clay is an open-source foundation model for Earth, trained on massive amounts of satellite imagery across locations and time. It transforms the complexity of environmental data into powerful embeddings that can be used to:

    ✅ Identify land cover, crop types, or urban expansion
    ✅ Detect change like wildfires, floods, or deforestation
    ✅ Power downstream models for prediction, classification, and mapping
    ✅ Serve as a backbone for custom geospatial AI pipelines

    The result? A model that understands Earth the way LLMs understand language.

    Training models is tough, and you need access to massive amounts of data. As foundation models improve, the data backbone being built by the Cloud-Native Geospatial Forum (CNG), together with computing systems that can leverage these models (like those we are working on at Wherobots), can help bring them to global scale.

    This is bigger than just another geospatial model. It's a signal that foundation models are coming to remote sensing, and with them, a new paradigm:

    🧠 Pre-trained models that can be adapted everywhere
    📡 Models built with fewer labels
    🌱 Climate, agriculture, and environmental challenges tackled with speed

    If you're working in geospatial AI, Earth observation, or climate data, Clay is worth watching. And using. It's open source and live on Hugging Face and GitHub. The geospatial foundation model era is bound to be an exciting one.

    🌎 I'm Matt and I talk about modern GIS, geospatial data engineering, and how spatial thinking is changing.
    📬 Want more like this? Join 5k+ others learning from my newsletter → forrest.nyc
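Once patch embeddings are in hand, a downstream task like land-cover labeling can be as simple as nearest-centroid matching. A minimal sketch (the 4-D vectors below are toy stand-ins, not real Clay output, and the class centroids are hypothetical):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(embedding, centroids):
    """Assign the class whose centroid embedding is most similar."""
    return max(centroids, key=lambda label: cosine(embedding, centroids[label]))

# Toy 4-D embeddings standing in for real foundation-model outputs.
centroids = {
    "forest": [0.9, 0.1, 0.0, 0.1],
    "urban":  [0.1, 0.9, 0.2, 0.0],
    "water":  [0.0, 0.1, 0.9, 0.3],
}
patch = [0.85, 0.15, 0.05, 0.1]
print(classify(patch, centroids))  # -> forest
```

The point of a foundation model is that a downstream step this simple can work at all: the heavy lifting already happened during pretraining.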

  • View profile for Prayank Swaroop (Influencer)

    Partner at Accel

    37,837 followers

    🚀 AlphaEarth Foundations (AEF) - New from Google DeepMind

    I keep looking out for interesting use cases of AI. The DeepMind folks are at it again.

    📄 Paper: AlphaEarth Foundations on arXiv (https://lnkd.in/giHUwe2d)

    ---

    🌍 What is AlphaEarth Foundations?

    AEF is a foundation model for Earth observation that turns sparse and messy satellite, climate, LiDAR, and even text data into dense embeddings at 10 m resolution. These embeddings provide a universal feature space for mapping and monitoring the planet, outperforming all previous approaches and reducing mapping errors by ~24% on average.

    And the best part? The embeddings are already available as annual global datasets (2017–2024) for free:
    👉 Earth Engine Data Catalog: Google Satellite Embedding V1 Annual - https://lnkd.in/g6dcv4-M

    ---

    🛠 Why does this matter? (weekend project?)

    For places like Bengaluru, India (or any fast-changing city), AEF makes it possible to:
    - Track urban growth and land use change with very few ground samples.
    - Monitor lakes and wetlands for encroachment and seasonal changes.
    - Map flood risk by combining rainfall, elevation, and land cover.
    - Identify urban heat islands and vegetation loss.
    - Support peri-urban agriculture with low-shot crop type classification.
    - Study biodiversity shifts (tree species, invasive plants) by linking with GBIF/iNaturalist data.

    In short, it's like having a plug-and-play geospatial backbone, ready to support everything from city planning to climate adaptation.

    ---

    🔧 For the Geeks

    Want to try it out? You can get started in minutes using Earth Engine + Python:
    📘 Earth Engine Python Quickstart Docs - https://lnkd.in/g9zBBPJv

    🌐 This is a big step toward planetary-scale AI for environmental monitoring, making high-quality maps possible even when labels are scarce.

    ---

    Further reading:
    1. https://lnkd.in/gsXU2BqS
    2. https://lnkd.in/gxJpqS6b

    ---

    Authors: Christopher Brown, Michal Kazmierski, Valerie Pasquarella, William J. Rucklidge, Masha Samsikova, Chenhui Zhang, Evan Shelhamer, Estefania Lahera, Olivia Wiles, Simon Ilyushchenko, Noel Gorelick, Lihui Lydia Zhang, Sophia Alj, Emily Schechter, Sean Askay, Oliver Guinan, Rebecca Moore, Alexis Boukouvalas, Pushmeet Kohli.
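Because the dataset ships one embedding per pixel per year, tracking change (urban growth, lake encroachment) reduces to comparing vectors across years. A minimal sketch, using tiny 2-D toy vectors in place of the published 64-dimensional embeddings, with a hypothetical distance threshold:

```python
import math

def embedding_distance(e1, e2):
    """Euclidean distance between two per-pixel embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))

def changed_pixels(year_a, year_b, threshold=0.5):
    """Flag pixel ids whose embedding moved more than `threshold`."""
    return [pid for pid in year_a
            if embedding_distance(year_a[pid], year_b[pid]) > threshold]

# Toy embeddings for three pixels in 2017 and 2024.
e2017 = {"p1": [0.2, 0.8], "p2": [0.5, 0.5], "p3": [0.9, 0.1]}
e2024 = {"p1": [0.2, 0.8], "p2": [0.9, 0.0], "p3": [0.9, 0.15]}
print(changed_pixels(e2017, e2024))  # -> ['p2']
```

Only p2 moved far enough in embedding space to count as change; small drift like p3's is ignored by the threshold.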

  • View profile for Dr. Ayesha Khanna (Influencer)

    AI Entrepreneur. Board Member. Reuters Trailblazing Woman in Enterprise AI (2026). Forbes Groundbreaking Female Entrepreneur in Southeast Asia. LinkedIn Top Voice for AI.

    92,178 followers

    Pano AI, a #SanFrancisco-based startup specializing in AI-powered wildfire detection, has raised $44 million in Series B funding to expand its early detection platform.

    As climate change accelerates, wildfires are becoming more frequent, destructive, and harder to manage. In the 2023–24 season alone, fires scorched nearly 4 million square kilometers, an area larger than India. 😔

    Faster Detection, Smarter Response

    Pano AI is addressing this growing threat by providing emergency responders with cutting-edge AI tools for early detection and rapid response.

    ► The company's platform integrates ultra-high-definition, 360-degree cameras with proprietary AI models to monitor nearly 30 million acres across the U.S., Canada, and Australia.
    ► These cameras, placed on high vantage points, continuously scan for signs of smoke and fire.
    ► When the AI detects a potential incident, human analysts verify it before dispatching alerts with precise GPS coordinates to local emergency crews, enabling faster, more effective responses.

    Pano AI's system proved its value during Colorado's Bear Creek Fire, detecting smoke within minutes and helping contain the blaze to three acres. Today, over 250 agencies and 15 major utilities rely on the platform. The company says its tools have helped keep 95% of detected fires from growing beyond 10 acres.

    If done right, this kind of early detection could save lives, protect homes, and prevent billions in damage, all by buying first responders a little more time. #AI #Technology
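Pano AI's stack is proprietary, but the detect-verify-dispatch flow described above can be sketched generically. All names, thresholds, and payload fields below are hypothetical, purely to illustrate the control flow:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera_id: str
    confidence: float  # model score in [0, 1]
    lat: float
    lon: float
    verified: bool = False

def triage(detections, threshold=0.8):
    """Route only high-confidence detections to a human review queue."""
    return [d for d in detections if d.confidence >= threshold]

def dispatch(detection):
    """Build an alert payload once a human analyst confirms the detection."""
    return {"camera": detection.camera_id,
            "coords": (detection.lat, detection.lon),
            "action": "notify_local_crews"}

queue = triage([
    Detection("ridge-07", 0.93, 39.0, -105.3),
    Detection("ridge-12", 0.41, 39.1, -105.5),  # likely haze, filtered out
])
queue[0].verified = True  # analyst confirms smoke
print(dispatch(queue[0]))
```

The human-in-the-loop step matters: the model's threshold trades recall for false alarms, and analyst verification keeps spurious alerts from reaching crews.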

  • View profile for Lawrence X.

    NikaPlanet — Purpose built GeoAI Deployment Technology.

    25,319 followers

    AI intensity in EO & GIS is exploding! Over the past few weeks, the geospatial AI landscape has surged with transformative research breakthroughs, and I'm thrilled to witness this wave of innovation in real time. Here are some standout developments I'm particularly excited to share:

    1. AlphaEarth Foundations (Google DeepMind)
    This newly unveiled "virtual satellite" model extracts compact yearly embeddings for every 10 m pixel, from 2017 to 2024, by integrating diverse sources such as optical, radar, LiDAR, climate, and elevation data. These ready-to-use embeddings massively simplify environmental monitoring workflows and reduce storage needs by up to 16x.

    2. OmniGeo
    A cutting-edge multimodal large language model, OmniGeo merges satellite imagery, metadata, and spatial text to enhance geospatial task performance. It represents a major step toward richer, more context-aware geospatial AI.

    3. S2Vec
    This self-supervised framework learns geospatial embeddings through masked autoencoding of rasterized urban features. Its strength lies in socioeconomic inference, especially in areas where labeled data is scarce, making it a powerful tool for urban and social analytics.

    4. Meta DINOv3: A Leap in Geo-Benchmarking
    Meta has launched DINOv3, a 7-billion-parameter frozen vision encoder that delivers state-of-the-art performance across segmentation, classification, depth estimation, and unsupervised object discovery. Impressively, when evaluated on GEO-Bench using Sentinel-2 and Landsat images (RGB only), DINOv3 ranks first in 10 out of 12 tasks, without any satellite-specific pretraining or fine-tuning.

    5. ESA's new Cloud Optimized GeoZarr format
    Supported by ESA, this project introduces cloud-native, high-performance access to Sentinel data via GeoZarr. It enables efficient handling of ND-array time-series geospatial data, significantly improving performance for large-scale Earth observation analytics.

    6. Llama3-MS-CLIP (FAST-EO, ESA-Funded)
    Another exciting innovation from ESA's FAST-EO project, Llama3-MS-CLIP is a vision-language model designed for multispectral data, enabling powerful zero-shot classification and retrieval in Earth observation, going beyond standard RGB capabilities.

    Would you like to explore any of these topics in more detail, such as real-world use cases, technical deep dives, or how these tools compare with legacy geospatial frameworks? Comment below!

    #GeospatialAI #EarthObservation #GIS #MachineLearning #RemoteSensing #AI #SatelliteImagery #DeepLearning #FoundationModels #GoogleDeepMind #Meta #ESA #AlphaEarth #DINOv3 #ComputerVision #GeospatialData #Sentinel2 #Landsat #MultimodalAI #SelfSupervisedLearning #GeoZarr #CloudNative #SpatialAnalytics #UrbanAnalytics #EnvironmentalMonitoring #VisionLanguageModels #MultispectralData #GeospatialTechnology #SpatialIntelligence #DataScience #Innovation #TechTrends #Research #Breakthrough
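The zero-shot classification that CLIP-style models like Llama3-MS-CLIP enable works by scoring an image embedding against text-prompt embeddings in a shared space. A minimal sketch of that mechanism, with made-up 3-D vectors in place of real model outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def zero_shot(image_emb, text_embs, temperature=0.07):
    """CLIP-style zero-shot: softmax over image-text cosine similarities."""
    sims = {label: cosine(image_emb, t) / temperature
            for label, t in text_embs.items()}
    z = sum(math.exp(s) for s in sims.values())
    return {label: math.exp(s) / z for label, s in sims.items()}

# Hypothetical embeddings; a real model produces these from pixels and text.
text_embs = {"burned area": [1.0, 0.0, 0.2], "cropland": [0.1, 1.0, 0.0]}
image_emb = [0.9, 0.1, 0.3]
probs = zero_shot(image_emb, text_embs)
print(max(probs, key=probs.get))  # -> burned area
```

No labeled training data for "burned area" was needed; the class is defined entirely by its text prompt, which is what makes zero-shot retrieval attractive when EO labels are scarce.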

  • View profile for Jocelyn Chanussot

    Research Director, INRIA (on leave from Grenoble INP), AXA Chair in Remote Sensing, Chinese Academy of Sciences, Beijing (Cn)

    8,510 followers

    [new publication][open access] SpectralEarth: Training Hyperspectral Foundation Models at Scale, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Nassim Ait Ali Braham, Conrad Albrecht, Julien Mairal, Jocelyn Chanussot, Yi Wang and Xiaoxiang Zhu

    The dataset, pretrained models, and code are publicly available! https://lnkd.in/gFeZuzpp

    Foundation models have triggered a paradigm shift in computer vision and are increasingly being adopted in remote sensing, particularly for multispectral imagery. Yet, their potential in hyperspectral imaging (HSI) remains untapped due to the absence of comprehensive and globally representative hyperspectral datasets. To close this gap, we introduce SpectralEarth, a large-scale multi-temporal dataset designed to pretrain hyperspectral foundation models leveraging data from the Environmental Mapping and Analysis Program (EnMAP).

    SpectralEarth comprises 538,974 image patches covering 415,153 unique locations from 11,636 globally distributed EnMAP scenes spanning two years of archive. Additionally, 17.5% of these locations include multiple timestamps, enabling multi-temporal HSI analysis.

    Utilizing state-of-the-art self-supervised learning (SSL) algorithms, we pretrain a series of foundation models on SpectralEarth, integrating a spectral adapter into classical vision backbones to accommodate the unique characteristics of HSI. In tandem, we construct nine downstream datasets for land-cover, crop-type mapping, and tree-species classification, providing benchmarks for model evaluation. Experimental results support the versatility of our models and their generalizability across different tasks and sensors. We also highlight computational efficiency during model fine-tuning.

    IEEE Geoscience and Remote Sensing Society (GRSS) #artificialintelligence #foundationmodels #remotesensing #hyperspectral Technical University of Munich Inria German Aerospace Center (DLR)
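The spectral adapter mentioned above lets a fixed vision backbone consume inputs with different band counts. The paper's adapter is learned during pretraining; the sketch below uses a seeded random linear projection purely to illustrate the shape-bridging role (band counts and output width are illustrative):

```python
import random

def spectral_adapter(pixel_spectrum, out_dim=4, seed=0):
    """Linearly project a spectrum with any number of bands down to
    `out_dim` channels so a fixed-width backbone can consume it.
    A trained adapter would learn these weights; here they are random."""
    rng = random.Random(seed)
    n_bands = len(pixel_spectrum)
    weights = [[rng.uniform(-1, 1) for _ in range(n_bands)]
               for _ in range(out_dim)]
    return [sum(w * x for w, x in zip(row, pixel_spectrum))
            for row in weights]

# A hyperspectral pixel with ~224 bands vs. a 13-band multispectral pixel:
hyper = [0.3] * 224
multi = [0.3] * 13
print(len(spectral_adapter(hyper)), len(spectral_adapter(multi)))  # -> 4 4
```

Whatever the sensor's band count, the backbone downstream always sees the same fixed-width representation, which is what makes one pretrained model reusable across sensors.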

  • View profile for Xiaoxiang Zhu

    TUM Professor for Data Science in Earth Observation

    9,077 followers

    🌍 New Research Alert 🌍

    In geospatial AI, we often assume that #FoundationModels must be trained with self-supervised learning on huge unlabeled corpora. But for tasks like land use & land cover (#LULC) mapping, we already have global LULC products. Yes, they're noisy. But they're also massive, global, and free. Why not use them as weak labels to train task-specific Geo FMs?

    🚀 Introducing: #LandSegmenter
    A task-specific LULC foundation model built from weak supervision. Developed by my PhD student Chenying Liu, with @Wei Huang and myself.

    Highlights:
    🔹 Trains on LAS, a 150k-sample global dataset (≈80% weak labels)
    🔹 Handles RGB + multispectral inputs, from 5 cm to 30 m
    🔹 Supports user-defined classes via text prompts
    🔹 Zero-shot mapping across sensors & regions
    🔹 New confidence-guided fusion to recover unseen categories
    🔹 Strong performance across six benchmarks, especially at medium/low resolution

    🧠 Takeaway: Weak supervision at global scale can rival, and sometimes outperform, heavy SSL pipelines for task-specific Geo FMs.

    📄 Paper: https://lnkd.in/dj6bNQAC
    💻 Code & data: https://lnkd.in/d9gKG5d3

    This work is supported by the Munich Center for Machine Learning.
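The weak-supervision idea can be illustrated with a toy fusion rule: trust the model where it is confident, and fall back to the noisy-but-free global LULC label where it is not. LandSegmenter's actual confidence-guided fusion is more involved; this sketch (with a hypothetical threshold) only shows the core trade-off:

```python
def fuse(model_pred, model_conf, weak_label, conf_threshold=0.6):
    """Keep the model's prediction when it is confident; otherwise
    fall back to the weak label from a global LULC product."""
    return model_pred if model_conf >= conf_threshold else weak_label

# (model_pred, model_conf, weak_label) per pixel:
pixels = [
    ("water",  0.95, "water"),     # confident, agrees with weak label
    ("urban",  0.40, "cropland"),  # uncertain -> trust the weak label
    ("forest", 0.80, "cropland"),  # confident -> may fix a noisy label
]
fused = [fuse(p, c, w) for p, c, w in pixels]
print(fused)  # -> ['water', 'cropland', 'forest']
```

The third pixel shows why this is not just label-copying: a confident model can overrule the weak product and recover categories the noisy source missed.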

  • View profile for Greg Cocks

    Applied (Spatial) Researcher | Engineering Geologist (Licensed) || Individual professional LinkedIn account, hence NOT affiliated with my employer in ANY sense || Info/orgs shared should not be seen as an endorsement

    35,260 followers

    Glacial Lake Mapping Using Remote Sensing Geo-Foundation Model

    -- https://lnkd.in/gfKTw4BQ <-- shared paper
    -- https://lnkd.in/gFD5XCtZ <-- shared technical article - "How The Greenland Ice Sheet Fared In 2025"

    "HIGHLIGHTS:
    • Proposed U-ViT model based on Prithvi GFM for multi-sensor glacial lake mapping.
    • Achieved an F1 score of 0.894 on Sentinel-1&2, surpassing CNNs scoring below 0.8.
    • Maintains strong performance with 50% less training data, proving efficiency.
    • Excels in detecting small lakes (<0.01 km²) and handling clouds and complex terrains.

    ABSTRACT: Glacial lakes are vital indicators of climate change, offering insights into glacier dynamics, mass balance, and sea-level rise. However, accurate mapping remains challenging due to the detection of small lakes, shadow interference, and complex terrain conditions. This study introduces the U-ViT model, a novel deep learning framework leveraging the IBM-NASA Prithvi Geo-Foundation Model (GFM) to address these issues. U-ViT employs a U-shaped encoder–decoder architecture featuring enhanced multi-channel data fusion and global-local feature extraction. It integrates an Enhanced Squeeze-Excitation block for flexible fine-tuning across various input dimensions and combines Inverted Bottleneck Blocks to improve local feature representation. The model was trained on two datasets: a Sentinel-1&2 fusion dataset from North Pakistan (NPK) and a Gaofen-3 SAR dataset from West Greenland (WGL). Experimental results highlight the U-ViT model's effectiveness, achieving an F1 score of 0.894 on the NPK dataset, significantly outperforming traditional CNN-based models with scores below 0.8. It excelled in detecting small lakes, segmenting boundaries precisely, and handling cloud-shadowed features compared to public datasets. Notably, the U-ViT demonstrated robust performance with a 50% reduction in training data, underscoring its potential for efficient learning in data-scarce tasks. However, its performance on the WGL dataset did not surpass that of DeepLabV3+, revealing limitations stemming from differences between pre-training and input data modalities. The code supporting this study is available online. This research sets the stage for advancing large-scale glacial lake mapping through the application of GFMs…"

    #GIS #spatial #mapping #glaciallake #GeospatialFoundationModel #satellite #Sentinel #GaoFen #remotesensing #earthobservation #model #modeling #climatechange #glacial #glacier #melt #melting #UViT #deeplearning #AI #framework #performance #metrics #opensource
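The F1 score cited above (0.894) is the standard segmentation metric combining precision and recall. For binary lake/no-lake masks it reduces to counting pixel-level true positives, false positives, and false negatives (the masks below are toy examples):

```python
def f1_score(pred_mask, true_mask):
    """F1 = 2*TP / (2*TP + FP + FN) for flat binary masks."""
    tp = sum(1 for p, t in zip(pred_mask, true_mask) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred_mask, true_mask) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred_mask, true_mask) if p == 0 and t == 1)
    return 2 * tp / (2 * tp + fp + fn)

# Toy 8-pixel prediction vs. ground truth (1 = lake, 0 = background).
pred = [1, 1, 0, 1, 0, 0, 1, 0]
true = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(f1_score(pred, true), 3))  # -> 0.75
```

F1 is preferred over plain accuracy here because glacial lakes, especially small ones, occupy a tiny fraction of each scene: a model predicting "no lake" everywhere would score high accuracy but zero F1.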

  • View profile for Josh Gilbert

    CEO @ Sust Global. Founder solving orbital scale problems

    6,648 followers

    NASA just trained a 3 billion parameter model on 100 million MODIS satellite images. Google released foundation models that reason across geospatial datasets. Yet most institutional investors still use Excel to assess physical climate risk.

    I met with a CRO of a $200B AUM fund last week. They were proud of their "advanced" climate risk system. It was a spreadsheet with color-coded cells. This gap between new technology and the status quo is where revenue opportunity lives.

    Today's geospatial foundation models don't just find patterns. They understand causality across space and time. SatVision-TOA can predict the shape of objects in cloud-obscured images with 93% accuracy while spotting features for deeper analysis.

    Let's explore what this means for institutional investors:

    1. Risk assessment is becoming multi-dimensional - models understand how risks compound across variables: demographic shifts, infrastructure resilience, economic activity, and climate patterns.
    2. The speed of insight has accelerated exponentially - what used to take months of analysis can now be generated in minutes.
    3. Power is now the only constraint, and space infra investment is now viable - space solar power, orbital data centers, in-orbit manufacturing: geospatial AI can model the terrestrial economic impacts of these technologies years before deployment. (I've watched portfolio managers' eyes widen when we discussed how we can project the value of space-based solar transmission to specific grid-constrained regions.)

    At Sust Global, we're embedding these foundation models into our geospatial AI platform. Not just layering data, but enabling true cross-domain reasoning. Last quarter, a client used our platform to identify real estate assets with both high climate resilience and proximity to emerging demographic booms. They executed a $300M allocation based on insights that didn't exist in any conventional dataset.

    That's the real breakthrough: finding opportunities others can't see by connecting domains others don't combine. Climate risk data can't exist in isolation. Neither can space technology. The future belongs to those who can reason across all these domains simultaneously.

    Curious how geospatial foundation models can unlock insights for your portfolio? Let's connect.

  • View profile for Heather Couture, PhD

    Fractional Principal CV/ML Scientist | Making Vision AI Work in the Real World | Solving Distribution Shift, Bias & Batch Effects in Pathology & Earth Observation

    16,982 followers

    𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗔𝗜 𝗧𝗵𝗮𝘁 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝘀 𝗔𝗻𝘆 𝗦𝗮𝘁𝗲𝗹𝗹𝗶𝘁𝗲

    For years, Earth Observation has been trapped in a loop: new sensor, new model, new training cycle. But a fundamental shift is happening. We are moving toward "any-sensor" foundation models: universal systems that process arbitrary combinations of spectral bands and resolutions without skipping a beat. Five recent breakthroughs show us how this future is being built, but they are taking very different paths to get there:

    𝗧𝗵𝗲 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗮𝗹 𝗕𝗮𝘁𝘁𝗹𝗲: 𝗛𝘆𝗽𝗲𝗿𝗻𝗲𝘁𝘄𝗼𝗿𝗸𝘀 𝘃𝘀. 𝗠𝗶𝘅𝘁𝘂𝗿𝗲 𝗼𝗳 𝗘𝘅𝗽𝗲𝗿𝘁𝘀

    How do you build one "brain" for 20+ different sensors? 𝗖𝗼𝗽𝗲𝗿𝗻𝗶𝗰𝘂𝘀-𝗙𝗠 uses dynamic hypernetworks and flexible metadata encoding to adapt its internal weights to any spectral or non-spectral modality, spanning from the Earth's surface to the atmosphere. Meanwhile, 𝗦𝗸𝘆𝗦𝗲𝗻𝘀𝗲 𝗩𝟮 pushes for parameter efficiency, using a Mixture of Experts module and learnable modality prompt tokens to handle vast resolution differences and limited feature diversity across sensor types.

    𝗧𝗵𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗟𝗼𝗴𝗶𝗰: 𝗡𝗮𝘁𝘂𝗿𝗮𝗹 𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝘃𝘀. 𝗧𝗼𝗸𝗲𝗻 𝗠𝗶𝘅𝘂𝗽

    How do these models learn a "universal language" for Earth? 𝗣𝗮𝗻𝗼𝗽𝘁𝗶𝗰𝗼𝗻 treats images of the same geolocation across different sensors as "natural augmentations," forcing the model to learn features that remain constant regardless of the platform. In contrast, 𝗦𝗠𝗔𝗥𝗧𝗜𝗘𝗦 projects heterogeneous data into a shared spectrum-aware space, using cross-sensor token mixup to train a single transformer capable of reconstructing masked data from any combination of bands.

    𝗠𝗼𝘃𝗶𝗻𝗴 𝗕𝗲𝘆𝗼𝗻𝗱 𝗣𝗶𝘅𝗲𝗹𝘀: 𝗧𝗵𝗲 𝗖𝗹𝘂𝘀𝘁𝗲𝗿 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵

    While most models focus on reconstructing pixels, 𝗣𝘆𝗩𝗶𝗧-𝗙𝗨𝗦𝗘 takes a different route by adapting the SwAV algorithm. By focusing on cluster assignments (prototypes) rather than pixel-space reconstruction, it creates embeddings that are independent of specific band combinations, making it uniquely robust for downstream tasks where certain sensors might fail or be obscured by clouds.

    These papers prove that the diversity of training data and the flexibility of metadata encoding are now more critical than sensor-specific tuning.

    PyViT-FUSE: https://lnkd.in/eHxhpbfr
    Panopticon: https://lnkd.in/eKBmvp8u
    Copernicus-FM: https://lnkd.in/ezfrqgQ2
    SMARTIES: https://lnkd.in/eJdnUTmA
    SkySense V2: https://lnkd.in/euqEUUPD

    ---

    Keeping up with the literature is increasingly a team sport. This analysis was supported by NotebookLM and grounded in my own review and experience. If you found this useful, let me know in the comments. If it missed the mark, I want that feedback too.

    Weekly briefings on making vision AI work in the real world → https://lnkd.in/guekaSPf
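The "natural augmentation" idea behind Panopticon can be reduced to a toy objective: embeddings of the same location from two different sensors should agree, while embeddings of different locations should not. A minimal sketch with hypothetical 3-D embeddings and a simple negative-cosine alignment loss (real pretraining uses far richer contrastive objectives):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def alignment_loss(emb_a, emb_b):
    """Negative cosine similarity: lower when two sensors' embeddings
    of the same location agree, which cross-sensor pretraining rewards."""
    a, b = l2_normalize(emb_a), l2_normalize(emb_b)
    return -sum(x * y for x, y in zip(a, b))

sentinel2_view = [0.8, 0.2, 0.1]  # hypothetical optical embedding
sar_view       = [0.7, 0.3, 0.1]  # same place, seen by radar
other_place    = [0.0, 0.1, 0.9]  # a different location

# The positive pair (same place, two sensors) should score a lower loss.
print(alignment_loss(sentinel2_view, sar_view) <
      alignment_loss(sentinel2_view, other_place))  # -> True
```

Minimizing this kind of loss over millions of co-located sensor pairs is what pushes a model toward platform-invariant features.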

  • View profile for David Adedamola Adejumo

    President, NISGS FUTA | Leadership-Centric • Nation Building | ’26 Alumnus, Harvard Aspire Leaders’ Program.

    14,088 followers

    In a bid to further promote the Sustainable Development Goals #SDGs, with the target being the seventh core goal [Affordable and Clean Energy], I recently carried out a geospatial assessment of solar suitability in the Federal University of Technology Akure environment.

    The project reveals the feasibility of solar energy and its siting criteria in the study area by integrating diverse #GIS tools. This can help bridge the energy gap between industrial and commercial firms around the Federal University of Technology Akure community and the Ondo State Government, Nigeria at large.

    From the second slide is the elevation analysis of the study area, which interconnects with the solar suitability assessment. At higher elevations, the atmosphere is thinner, meaning there is less air and particulate matter (such as dust) for sunlight to pass through. Locations at higher altitudes therefore often receive stronger solar radiation, fostering solar energy production.

    This geospatial project again demonstrates one of the many integral functions of #GIS in continually shaping our environment for sustainability.

    #GIS #MAPPING #SDGs #ENERGY #ARCGIS #ESRI - David Adedamola Adejumo
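Suitability assessments like this are typically built as a weighted overlay: each criterion (irradiance, slope, elevation, and so on) is normalized to a common scale, then combined with analyst-chosen weights. A minimal sketch; the criteria names, scores, and weights below are hypothetical, not taken from this project:

```python
def suitability(criteria, weights):
    """Weighted overlay: each criterion is a normalized score in [0, 1]
    per cell; the result is the weighted sum (also in [0, 1] when the
    weights sum to 1)."""
    return {cell: sum(weights[k] * scores[k] for k in weights)
            for cell, scores in criteria.items()}

# Hypothetical normalized scores for two candidate cells.
criteria = {
    "cell_a": {"irradiance": 0.9, "slope": 0.8, "elevation": 0.7},
    "cell_b": {"irradiance": 0.6, "slope": 0.9, "elevation": 0.5},
}
weights = {"irradiance": 0.5, "slope": 0.3, "elevation": 0.2}
scores = suitability(criteria, weights)
print(max(scores, key=scores.get))  # -> cell_a
```

In a GIS, the same arithmetic runs per raster cell (e.g. ArcGIS's Weighted Overlay tool); the weights encode how much each criterion matters to the siting decision.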
