Applications of Artificial Intelligence in Remote Sensing


Summary

Artificial intelligence in remote sensing uses computer models to analyze images and data captured from satellites, drones, or sensors, helping people make sense of complex landscapes and environmental changes more quickly and accurately. This technology is transforming fields like disaster response, agriculture, and urban planning by automating tasks that were previously slow or impossible.

  • Streamline disaster prediction: AI models can combine signals from multiple sources—like satellites, weather sensors, and historical data—to warn communities about floods, earthquakes, and wildfires days before they happen.
  • Advance precision agriculture: By processing drone imagery with AI, farmers gain detailed maps of crop health, soil moisture, and early disease detection, allowing for smarter decisions about planting, irrigation, and harvesting.
  • Improve urban and environmental monitoring: AI-powered remote sensing tools help identify changes in cities and natural spaces, making it easier to track pollution, plan new infrastructure, or measure the impact of climate change.
Summarized by AI based on LinkedIn member posts
  • View profile for Heather Couture, PhD

    Fractional Principal CV/ML Scientist | Making Vision AI Work in the Real World | Solving Distribution Shift, Bias & Batch Effects in Pathology & Earth Observation

    16,989 followers

    Pixel-Level Understanding for Satellite Imagery

    While large multimodal models excel at understanding natural images, they struggle with satellite and aerial imagery. The unique overhead perspective, scale variation, and small objects in high-resolution remote sensing data present distinct challenges that current models can't handle effectively. Akashah Shabbir et al. introduced GeoPixel, the first large multimodal model designed specifically for high-resolution remote sensing image analysis with precise pixel-level grounding capabilities, meaning it can identify exactly which pixels in an image correspond to the objects it discusses.

    Why Remote Sensing is Different: Remote sensing imagery requires specialized understanding that general vision-language models lack:
    • Overhead viewpoints create spatial relationships unlike natural photography
    • Extreme scale variations, from individual vehicles to entire city blocks in one image
    • Small objects distributed across vast areas require precise localization
    • Limited training data with conversations where text references are linked to specific image regions

    The GeoPixel Approach: The system combines three key components to handle high-resolution imagery up to 4K:
    • Adaptive image partitioning: divides images into local and global regions for efficient processing
    • Pixel-level grounding: generates precise segmentation masks that show exactly which pixels belong to each object mentioned
    • Interleaved mask generation: produces detailed responses with corresponding visual masks in conversation

    The GeoPixelD Dataset: To enable grounded conversations, the researchers created a specialized dataset using a multi-tier annotation strategy:
    • 54k grounded phrases linked to 600k objects
    • Descriptions averaging 740 characters with rich spatial context
    • Hierarchical annotations from scene-level context to individual object details
    • 5k validated referring expression-mask pairs for evaluation

    Key Applications: This capability enables more precise analysis for:
    • Urban planning and infrastructure mapping
    • Environmental monitoring and change detection
    • Disaster response and damage assessment
    • Agricultural monitoring and precision farming
    • Defense and security applications

    Impact: GeoPixel addresses a critical gap in AI applications for Earth observation (EO). By enabling natural language conversations about satellite imagery with pixel-accurate grounding, it could accelerate decision-making in fields from urban planning to climate research. The model and dataset are publicly available to advance the field. https://lnkd.in/ejJs6aFc

    #RemoteSensing #ComputerVision #MachineLearning #SatelliteImagery #EarthObservation
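Grounding quality on data like GeoPixelD's validated referring expression-mask pairs is typically scored with mask intersection-over-union (IoU). A minimal numpy sketch of that metric (an illustration of the standard measure, not GeoPixel's actual evaluation code):

```python
import numpy as np

def mask_iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter) / union if union else 1.0

# Toy 8x8 masks: the predicted object is shifted one row from the reference.
ref = np.zeros((8, 8), dtype=bool)
ref[2:6, 2:6] = True   # 16-pixel reference square
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True  # prediction shifted down by one row

print(round(mask_iou(pred, ref), 3))  # 12 px overlap / 20 px union -> 0.6
```

A common reporting convention is the mean IoU over all expression-mask pairs, plus the fraction of pairs above a threshold such as 0.5.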

  • View profile for Kanchan B.

    Head of AI | Former Chief Product Officer | GenAI • RAG • AI Agents | GeoAI & Drone Data Intelligence | AI Product Leader | 15K+ Followers | 2M+ Impressions | Tech Creator

    15,644 followers

    Drone + AI in Agriculture: Multispectral vs. Hyperspectral Imaging

    #Drones are no longer just flying cameras—they’re data collection machines. Paired with #AI, they unlock powerful insights for farmers.

    #Multispectral Imaging (#Drone + #AI)
    -- 4–10 broad bands (Blue ~450 nm, Green ~550 nm, Red ~650 nm, Red Edge ~720 nm, NIR ~850 nm)
    -- Light data → easy to process with AI for vegetation indices (#NDVI, #NDRE, #SAVI)
    -- Applications: crop vigor maps, irrigation stress, yield prediction
    -- Works best for large-scale, routine monitoring

    #Hyperspectral Imaging (#Drone + #AI)
    -- 100–400+ narrow bands (400–2500 nm, ~5–10 nm each)
    -- Early nutrient deficiency detection
    -- Identifying diseases before symptoms appear
    -- Soil nutrient & moisture mapping
    -- Differentiating crop varieties
    -- Best suited for precision farming, crop breeding, high-value crops

    Trade-offs
    -- #Multispectral + #AI = affordable, scalable insights.
    -- #Hyperspectral + #AI = advanced, research-grade diagnostics.

    Agriculture in Action
    -- #Drone + #AI + #Multispectral → weekly monitoring, yield forecasts, irrigation management.
    -- #Drone + #AI + #Hyperspectral → deep diagnostics, stress detection in wheat, disease monitoring in vineyards, soil health analysis.

    Bottom line:
    -- #Multispectral is your farm health monitor.
    -- #Hyperspectral is your farm lab in the sky.

    Both, when powered by #Drone + #AI, redefine #precision #agriculture.
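The vegetation indices mentioned in this post are simple band arithmetic over reflectance values. A minimal sketch of NDVI, NDRE, and SAVI; the reflectance numbers are illustrative, and the SAVI soil factor L=0.5 follows common convention rather than any specific product:

```python
# Illustrative reflectance values for a healthy crop pixel.
nir, red, red_edge = 0.50, 0.05, 0.30

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); roughly 0.6-0.9 for dense green canopy."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """NDRE swaps red for the red-edge band (~720 nm); tracks canopy chlorophyll."""
    return (nir - red_edge) / (nir + red_edge)

def savi(nir, red, L=0.5):
    """SAVI adds a soil-adjustment factor L to damp soil-brightness effects."""
    return (1 + L) * (nir - red) / (nir + red + L)

print(round(ndvi(nir, red), 3))       # 0.818
print(round(ndre(nir, red_edge), 3))  # 0.25
print(round(savi(nir, red), 3))       # 0.643
```

In practice the scalars would be whole drone-image bands (numpy arrays), and the same expressions apply element-wise to produce per-pixel index maps.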

  • View profile for Thilosha Nipunajith

    Geospatial Developer | Spatial Data Nerd 🤓 | Data Scientist | Volunteer | Artist

    1,920 followers

    Sentinel-2 Deep Resolution 3.0 (S2DR3) marks a major leap forward in enhancing Earth observation imagery, achieving effective 12-band, 10× single-image super-resolution on Sentinel-2 L2A data. Optimized to preserve subtle spectral variations in soil and vegetation, the model reconstructs spatial features down to 3 m with remarkable spectral fidelity. This enables next-generation insights for precision agriculture, environmental monitoring, and carbon verification (MRV) applications.

    What stands out most is not just the architecture, but the balance between spectral integrity and spatial precision, achieved through meticulous attention to model design and optimization. It should be said, however, that in my experience four key factors have the greatest impact on the attainable performance of such models, in decreasing order of importance:
    1️⃣ Training data
    2️⃣ Loss function
    3️⃣ Hyperparameters
    4️⃣ Network architecture

    🛰️ An S2DR3 Google Colab notebook is available for testing and validation. A remarkable step forward in how AI continues to reshape remote sensing, moving from visualization to truly analytical applications.

    #RemoteSensing #EarthObservation #AIinGeospatial #Sentinel2 #DeepLearning #GeospatialAI #PrecisionAgriculture #EnvironmentalMonitoring #MRV #DataScience #MachineLearning #SatelliteImagery #SuperResolution #ClimateTech #Geoinformatics #SpatialAnalytics #OpenData #AIResearch #SustainableInnovation #GIS
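One common sanity check for super-resolution products that claim spectral fidelity is consistency: block-averaging the enhanced band back to the native grid should approximately reproduce the original pixel values. A toy numpy sketch of that check (a generic test, not the S2DR3 validation pipeline; a 5× factor and a zero-detail upsample are used here for compactness):

```python
import numpy as np

def block_mean_downsample(band: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks back to the native grid."""
    h, w = band.shape
    band = band[: h - h % factor, : w - w % factor]
    return band.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(42)
native = rng.uniform(0.0, 0.4, size=(6, 6))  # toy native-resolution band
# A perfectly consistent SR output: each native pixel expanded 5x with no
# added detail, so averaging back must reproduce the native band exactly.
sr = np.kron(native, np.ones((5, 5)))
rmse = np.sqrt(((block_mean_downsample(sr, 5) - native) ** 2).mean())
print(rmse < 1e-12)  # True
```

A real SR model adds sub-pixel detail, so the RMSE would be small but nonzero; a large value signals spectral drift introduced by the network.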

  • View profile for Maryam Miradi, PhD

    Chief AI Scientist | 20+ Yrs in AI | 400+ Production AI Agents Built | AI Agents Instructor | Teaching 2,300+ students Agentic Python Systems (LangGraph, CrewAI, PydanticAI, MCP, OpenAI Swarm) | 46k+ Newsletter

    109,763 followers

    I spent 20 hours analyzing 5 breakthrough Earth Disasters AI Agents from Stanford, MIT, and NASA's Jet Propulsion Lab. Here's the life-saving architecture that's changing disaster response forever ⬇️ Most AI systems clean up after disasters.

    》The Breakthrough: Predictive Geo-Agents
    These research teams built geo-agents that triangulate risk by combining three weak signals most systems ignore. Individually these signals mean nothing. Combined, they predicted the 2023 Turkey earthquake 72 hours early in simulations.

    》How They Built This: Multi-Agent Architecture
    ✸ Data Sources & Agent System:
    ☆ Seismic Agent: monitors ground movement with LSTM + Transformer models
    ☆ Satellite Agent: processes visual changes using computer vision
    ☆ Weather Agent: tracks rainfall & temperature via APIs
    ☆ Historical Pattern Agent: analyzes past disaster data
    ☆ Prediction Agent: combines conflicting signals for ensemble prediction
    ✸ The Key Insight:
    ☆ When satellite imagery shows dry land BUT weather predicts heavy rain AND historical data flags flood season = 72-hour warning
    ☆ Weak-signal detection through contradiction analysis
    ☆ Multi-agent orchestration beats single-model approaches
    ✸ Tech Stack:
    ☆ Reasoning LLMs for causal analysis
    ☆ Groq for real-time processing
    ☆ LangGraph for agent orchestration
    ☆ ChromaDB for geospatial embeddings

    》5 Geo AI Agent Papers You Should Know
    ✸ 1. GeoChat: Grounded Large Vision-Language Model for Remote Sensing
    ☆ Key feature: conversational querying for geospatial data
    ☆ Benefit: non-experts extract insights with natural language prompts
    ✸ 2. GEOBench-VLM: Benchmarking Vision-Language Models for Geospatial Tasks
    ☆ Key feature: standardized benchmarking for geospatial VLMs
    ☆ Benefit: robust model evaluation with consistent metrics
    ✸ 3. RS5M: A Large-Scale Vision-Language Dataset for Remote Sensing
    ☆ Key feature: massive dataset of image-text pairs
    ☆ Benefit: fine-tunes models for disaster monitoring tasks
    ✸ 4. VHM: Versatile and Honest Vision Language Model for Remote Sensing
    ☆ Key feature: high interpretability for sensitive applications
    ☆ Benefit: builds trust in AI for disaster response and policymaking
    ✸ 5. EarthGPT: Universal Multi-modal LLM for Multi-sensor Image Comprehension
    ☆ Key feature: multimodal analysis combining multisensor data
    ☆ Benefit: integrates diverse datasets for richer insights

    ⫸ꆛ Join my Hands-on AI Agent 5-in-1 Training, trusted by 1,500+ worldwide!
    ➠ Build Geo, Audio, Video & Vision Agents
    ➠ Master 5 modules: MCP · LangGraph · PydanticAI · CrewAI · OpenAI Swarm
    ➠ Deploy for Healthcare, Finance, Smart Cities & More
    👉 Enroll now (56% off): https://lnkd.in/eGuWr4CH
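The "key insight" rule described in this post (dry ground, plus heavy forecast rain, plus a historical flood-season flag, equals a 72-hour warning) can be sketched as a toy contradiction check. The thresholds and field names below are illustrative assumptions, not taken from the cited papers:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    soil_moisture: float      # satellite-derived, 0.0 (dry) .. 1.0 (saturated)
    forecast_rain_mm: float   # weather-agent 72 h rainfall forecast
    flood_season: bool        # historical-pattern agent flag

def flood_warning(s: Signals) -> bool:
    """Toy contradiction rule: each signal alone is weak, but dry ground
    contradicting a heavy-rain forecast during flood season triggers a warning.
    Thresholds (0.2, 100 mm) are illustrative only."""
    dry = s.soil_moisture < 0.2
    heavy_rain = s.forecast_rain_mm > 100
    return dry and heavy_rain and s.flood_season

print(flood_warning(Signals(0.1, 150, True)))   # True  -> issue 72 h warning
print(flood_warning(Signals(0.1, 150, False)))  # False -> signals don't align
```

In the multi-agent framing, each field would be produced by a separate agent and a prediction agent would apply ensemble logic like this over many such rules.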

  • View profile for Gopal Erinjippurath

    AI for capital markets 🌎 | Founder and CTO | Angel Investor

    8,403 followers

    Remote Sensing Foundation Models are here - but how effectively are they delivering value to the application layer?

    The past two years have been a blizzard of innovation in foundation models. This momentum has firmly reached remote sensing, with significant implications for how we build, deploy, and scale geospatial AI. The real opportunity that I see:
    ❇️ Lowering the cost of high-precision inference for specific tasks, even when training data is scarce or expensive.
    ❇️ Reducing time to market for AI models on geospatial data, from research concept to production.

    To dive deeper on foundation models for remote sensing: today, we're looking at 75+ vision foundation models tailored for remote sensing. So, how do you move from hype to validation? How do you evaluate which models are right for your use case? Here are some dimensions I look at when evaluating remote sensing FMs:
    ➡️ Do they hold up across modalities such as optical, multispectral, and SAR?
    ➡️ Can they generalize across specific inference tasks such as (scene) classification, (feature) tracking, depth (estimation), and (change) detection?
    ➡️ How do they scale across compute scenarios, from multi-billion-parameter models for cloud-native inference to edge-deployable or payload-native lightweight variants?

    Foundation models are powerful - but true value comes when they're context-aware, task-optimized, and deployment-ready. So we need to craft model #evals specific to those tasks and operating environments.

    An interesting example of a remote sensing FM published earlier this month, from a Chinese research group, is RingMoE. This FM uses a mixture-of-experts architecture trained on multimodal input (optical, multispectral, and SAR) with 14B parameters (prunable to 1B for efficiency), using data from 9 distinct satellite sensors, including WorldView, SPOT, and the Chinese missions GF-1/2/3. The paper documents outperformance over existing foundation models and sets new SOTA benchmarks, demonstrating adaptability from single-modal to multi-modal scenarios. The image shows the performance characterization of the proposed RingMoE on 25 benchmarks across 6 key remote sensing inference categories.

    My thoughts on the Prithvi EO FM from IBM Research: https://lnkd.in/gybdDQ_M

    If you're exploring RS FMs and their applications - either as a builder, product lead, or investor - I want to hear from you on what's promising, what's noise, and what's next.

    #GeospatialAI #remotesensing #FoundationModels #SpatialComputing #GeospatialIntelligence #AIProductStrategy #ComputerVision
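The evaluation dimensions in this post amount to a score matrix over modalities and tasks. A minimal sketch with made-up benchmark numbers, showing how per-modality and per-task means expose where a foundation model fails to generalize:

```python
# scores[(modality, task)] = accuracy on a held-out benchmark (illustrative numbers)
scores = {
    ("optical", "classification"):        0.91,
    ("optical", "change_detection"):      0.84,
    ("sar", "classification"):            0.78,
    ("sar", "change_detection"):          0.72,
    ("multispectral", "classification"):  0.88,
}

def mean_by(axis_index: int) -> dict:
    """Average scores grouped by modality (axis_index=0) or task (axis_index=1)."""
    groups: dict = {}
    for key, value in scores.items():
        groups.setdefault(key[axis_index], []).append(value)
    return {k: round(sum(v) / len(v), 3) for k, v in groups.items()}

print(mean_by(0))  # per-modality means: does the FM hold up on SAR?
print(mean_by(1))  # per-task means: does it generalize across inference tasks?
```

With real numbers, a sharp drop in one row or column (e.g. SAR, or change detection) tells you the model needs task-specific fine-tuning before it is deployment-ready for that use case.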

  • View profile for Dr. Prasad S. Thenkabail, PhD

    Senior Scientist (ST), United States Geological Survey (USGS)

    9,549 followers

    "The Artificial Intelligence (AI) Revolution in Remote Sensing: A New Era of Earth Observation Science"

    Special Issue in the journal Frontiers in Remote Sensing. Link to special issue: https://lnkd.in/g5JDXich

    Guest co-editors:
    1. Dr. Prasad S. Thenkabail (pthenkabail@usgs.gov), Senior Scientist, U.S. Geological Survey (USGS), USA
    2. Dr. Qian Yu (qyu@umass.edu), Professor in GIS and Remote Sensing, Department of Earth, Geographic, and Climate Sciences, University of Massachusetts Amherst, USA
    3. Dr. Sudhanshu S. Panda (Sudhanshu.Panda@ung.edu), Professor, GIS/Environmental Science, Institute for Environmental Spatial Analysis, University of North Georgia, USA

    Remote sensing is undergoing one of the most profound transformations in its history, driven by rapid advances in artificial intelligence (AI). The convergence of exponentially growing Earth observation (EO) data streams from spaceborne and airborne sensors, cloud-scale computing infrastructures, and modern AI methodologies has fundamentally redefined how the Earth system is observed, analyzed, and understood. These developments are enabling natural resources and environmental processes to be mapped, modeled, and monitored with unprecedented spatial detail, temporal frequency, and computational efficiency.

    Over the past decade, AI has progressed from a complementary analytical approach to a core engine of next-generation EO science. Deep learning architectures, transformer-based models, and emerging EO foundation models, such as Prithvi 2.0 developed through collaborations between NASA and IBM, now support automated analysis of petabyte-scale satellite archives. These models facilitate multi-sensor and multi-resolution data fusion across optical, thermal, radar, and LiDAR platforms, enabling discovery of complex spatiotemporal patterns that were previously inaccessible using conventional methods. When combined with cloud-native geospatial processing, AI-driven EO workflows are reshaping scientific and operational approaches to monitoring agriculture, forests, water resources, urbanization, biodiversity, extreme weather events, and natural hazards.

    This special issue invites contributions that define, expand, and exemplify the AI revolution in geospatial science. Contributions addressing methodological innovation, scalable architectures, multi-sensor integration, uncertainty quantification, and real-world environmental applications are particularly encouraged, highlighting how AI is ushering in a new era of planetary-scale observation and decision support.

    Read full details and submit papers at: https://lnkd.in/g5JDXich
    Submissions open until August 31, 2027.

  • Satellite achieves autonomous decision-making in space using onboard AI in 90 seconds

    A briefcase-sized satellite successfully used onboard AI to autonomously decide where and when to capture scientific images, completing the entire decision cycle in under 90 seconds without human input. NASA's Jet Propulsion Laboratory tested the "Dynamic Targeting" technology aboard a satellite built by UK startup Open Cosmos, equipped with machine learning processors from Dublin-based Ubotica.

    The system scans 500 kilometers ahead of the satellite's orbit, captures preview images, and analyzes cloud cover in real time. Clear skies trigger detailed surface photography, while cloudy conditions prompt the satellite to skip shots entirely. This intelligent filtering saves bandwidth, storage capacity, and processing time while dramatically improving data quality for scientists.

    Traditional satellites function as passive data collectors, imaging whatever passes beneath them and transmitting everything back to Earth for later analysis. The AI-powered approach enables immediate disaster response capabilities, potentially detecting wildfires, volcanic eruptions, and severe storms within minutes rather than days after post-processing.

    The breakthrough builds on previous International Space Station demonstrations and represents a fundamental shift toward autonomous space-based intelligence that could transform Earth observation, climate monitoring, and emergency response systems. 🛰️ https://lnkd.in/e-b_f-Xw
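The capture-or-skip decision described here can be sketched as a simple threshold rule on a preview frame. The brightness-based cloud proxy and the 30% threshold below are illustrative assumptions, not the mission's actual onboard algorithm:

```python
import numpy as np

CLOUD_THRESHOLD = 0.3  # illustrative; the real mission threshold is not public

def cloud_fraction(preview: np.ndarray, brightness_cutoff: float = 0.8) -> float:
    """Crude cloud proxy: fraction of very bright pixels in a preview frame
    (values normalized to 0..1). Real systems use a trained cloud classifier."""
    return float((preview > brightness_cutoff).mean())

def plan_capture(preview: np.ndarray) -> str:
    """Mimic the look-ahead loop: image clear scenes, skip cloudy ones."""
    return "capture" if cloud_fraction(preview) < CLOUD_THRESHOLD else "skip"

clear = np.full((64, 64), 0.2)   # dark preview: mostly land/sea
cloudy = np.full((64, 64), 0.9)  # bright preview: cloud deck
print(plan_capture(clear), plan_capture(cloudy))  # capture skip
```

The value of running this onboard, as the post notes, is that the skip decision happens before any full-resolution image is taken, saving bandwidth and storage rather than filtering after downlink.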

  • View profile for Mike Spaeth

    Global Vice President - US Artificial Intelligence Institute

    18,222 followers

    NASA's Data Science Group has introduced SatVision-TOA, a cutting-edge geospatial foundation model. Here's why it's groundbreaking:

    🚀 Scalable Machine Learning
    Trained on 100 million satellite images with 3 billion parameters, SatVision-TOA leverages NASA's MODIS TOA radiance imagery. It excels at "all-sky" modeling, processing diverse atmospheric and land conditions, paving the way for new levels of accuracy in environmental monitoring.

    🔍 Advanced Applications
    In 3D cloud retrieval tasks, it achieved 50% better accuracy than baseline models, enabling more precise atmospheric and climate research. Its masked-image-modeling framework enables robust, label-free learning for multiple downstream tasks.

    💡 Impact
    By addressing the limitations of cloud-free models, SatVision-TOA enhances monitoring of atmospheric variables and land surfaces. Its open-source availability ensures accessibility to researchers worldwide, fostering innovation in Earth observation. The future of remote sensing is brighter—and smarter—than ever!

    #RemoteSensing #EarthObservation #AI #MachineLearning #Sustainability

    Acknowledgments: Computing resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Center for Climate Simulation (NCCS) at Goddard Space Flight Center. This research used resources of the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. Downstream task example testing is partially supported by NASA GSFC internal seed funding.

    https://lnkd.in/eXceHA4Y
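Masked-image modeling, the pretraining framework mentioned above, hides random image patches and trains the model to reconstruct them from the visible remainder, so no labels are needed. A toy numpy sketch of the masking step only (the 4-pixel patch size and 60% mask ratio are illustrative, not SatVision-TOA's settings):

```python
import numpy as np

def mask_patches(image: np.ndarray, patch: int = 4, mask_ratio: float = 0.6,
                 seed: int = 0):
    """Zero out a random fraction of non-overlapping patches; the pretraining
    objective would then be to reconstruct the hidden patches."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ph, pw = h // patch, w // patch
    n_mask = int(ph * pw * mask_ratio)
    mask = np.zeros(ph * pw, dtype=bool)
    mask[rng.choice(ph * pw, size=n_mask, replace=False)] = True
    masked = image.copy()
    for k in np.flatnonzero(mask):
        r, c = divmod(k, pw)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, mask.reshape(ph, pw)

img = np.ones((16, 16))  # stand-in for one radiance band
masked, mask = mask_patches(img)
print(int(mask.sum()), int(masked.sum()))  # 9 of 16 patches hidden, 112 px remain
```

For "all-sky" data the appeal is that clouds need not be screened out: cloudy and clear patches alike become reconstruction targets, which is what lets the model learn atmospheric structure without labels.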

  • View profile for Milan Janosov

    The New Science of Maps · Geospatial AI Consultant & Educator · Forbes 30U30 · TEDx Speaker · Bestselling Author

    94,456 followers

    Geospatial Machine Learning Applications

    Machine learning is no longer just an analytical tool - it’s becoming the backbone of geospatial intelligence. From wildfire prediction and groundwater mapping to disease forecasting, carbon estimation, and urban sprawl detection, these papers show how spatial data + ML are reshaping environmental risk assessment, urban analytics, agriculture, and climate science. If you're working at the intersection of GIS, remote sensing, and AI, this collection of recent research papers is worth bookmarking.

    My tutorials: https://lnkd.in/dXpjUz3K

    ⬇️ Full list in the end ⬇️
    1. Exploration of Geo-Spatial Data and Machine Learning Algorithms for Robust Wildfire Occurrence Prediction https://lnkd.in/dtTW_iau
    2. Enhancement of Groundwater Resources Quality Prediction Using an Improved DRASTIC Method and Machine Learning https://lnkd.in/dNhTsieN
    3. Remote Sensing-Based Forest Cover Classification Using Machine Learning https://lnkd.in/dZfAUZs4
    4. Forest Age Estimation Based on a Machine Learning Pipeline Using Sentinel-2 and Auxiliary Data https://lnkd.in/dr3c79-P
    5. Factors of Acute Respiratory Infection Among Under-Five Children Using Machine Learning Approaches https://lnkd.in/d6DUxbAh
    6. SAR Image Integration for Multi-Temporal Wetland Dynamics Analysis Using Machine Learning https://lnkd.in/dY4--gep
    7. Effects of Non-Landslide Sampling Strategies in Landslide Susceptibility Mapping https://lnkd.in/d7mRkFWv
    8. Enhancing Co-Seismic Landslide Susceptibility and Risk Analysis Through Machine Learning https://lnkd.in/dtsigmG8
    9. 10-m Scale Chemical Industrial Parks Map Along the Yangtze River Based on Machine Learning https://lnkd.in/dS9qGi68
    10. Geospatial Distribution and Machine Learning Algorithms for Assessing Surface Water Quality in Morocco https://lnkd.in/dwxSamAt
    ...
    20. Wheat Crop Genotype Identification Using Multispectral Radiometer Data and Machine Learning
    21. Geospatial Data for Peer-to-Peer Communication Among Autonomous Vehicles Using Optimized ML Algorithms
    ...

    Full list: https://lnkd.in/dNC9VA7E

  • View profile for Vani Kola

    MD @ Kalaari Capital | I’m passionate and motivated to work with founders building long-term scalable businesses

    1,523,754 followers

    The world is changing. 2024 was the first calendar year in which the global average temperature exceeded 1.5 degrees Celsius above pre-industrial levels. Climate change, deforestation, pollution—the challenges aren’t new. We have been hearing about them for years. But can AI become a true game-changer in addressing them?

    In 2024, natural disasters caused $368 billion in economic losses worldwide, with 60% of these damages uninsured. Despite this, AI-powered tools are beginning to shift how we respond.

    ➡️ AI-powered tools, like Google Earth’s Cloud Score+, are stepping up to fill critical gaps. By providing clearer images of ecosystems obscured by clouds, such innovations make monitoring the environment faster and more accurate.
    ➡️ AI algorithms now track polar ice melt, analyze deforestation trends, and even alert authorities to illegal logging within hours.
    ➡️ In Brazil, AI-driven deforestation monitoring cut illegal activities by 20% last year, saving millions of hectares of rainforest. These advancements highlight how AI turns raw satellite data into tools for immediate action.
    ➡️ Researchers are deploying AI-powered drones to track marine species, improving conservation efforts. Smart fishing systems, driven by AI, help reduce bycatch by distinguishing between target fish and other marine life.
    ➡️ Air quality monitoring is being transformed by AI. Google’s Air View+ system in India has improved air quality in cities like Aurangabad by 50% over three years, proving how AI can drive cleaner urban environments.

    The possibilities are limitless, from personalized climate action plans to autonomous drones monitoring remote ecosystems. But technology alone isn't enough. AI gives us the tools to combat environmental crises, but the question remains: how will you contribute? Whether adopting eco-friendly habits, supporting AI initiatives, or staying informed, every action counts. What do you think?

    #AI #climatechange #technology
