How to Explore Spatial Computing Innovations

Explore top LinkedIn content from expert professionals.

Summary

Spatial computing innovations bring digital and physical worlds together, allowing computers to understand and interact with space in new ways—such as turning regular photos or videos into immersive 3D environments and using advanced data tools to build, analyze, and share spatial information. This fast-evolving field makes it easier for businesses and creators to design, test, and collaborate using digital models of real-world places.

  • Explore new tools: Try out the latest software and platforms that transform simple images, videos, or text into interactive 3D spaces for simulation, virtual walkthroughs, and collaboration.
  • Adopt cloud data formats: Use cloud-based spatial data formats like GeoParquet or Cloud-Optimized GeoTIFF to efficiently store and share large-scale maps, scans, or spatial datasets without relying on traditional files.
  • Experiment with real-world captures: Capture real locations or objects using 3D scanning and volumetric video to create reusable assets for virtual production, property marketing, or remote inspections.
Summarized by AI based on LinkedIn member posts
  • View profile for Matt Forrest
    Matt Forrest is an Influencer

    🌎 I help GIS professionals break out of the technician trap, and build modern, high-impact geospatial careers · Scaling geospatial at Wherobots

    81,862 followers

    🚨 Finding the right tools to learn is hard - here is how I would start. If you're still relying solely on desktop tools, it's time to level up. Modern GIS is no longer just about clicking through menus in ArcGIS or QGIS. Today, spatial data lives in the cloud, runs at scale, and powers real-time decisions. That means the skills we need are evolving too. Here are three core areas to focus on if you want to grow in this new era — and what to learn to get there:

    🔁 Automation & Transformation
    Traditional workflows often depend on ModelBuilder, ArcPy, or tools like FME to automate geoprocessing. But modern data pipelines are built differently: they're scriptable, scalable, and modular. Learn tools like Airflow, dbt, or Apache Sedona, and get comfortable writing spatial SQL in PostGIS or DuckDB. These tools are how modern teams build repeatable, automated workflows that run in production, not just on someone's laptop.

    ⚙️ Processing
    Processing used to mean opening a big shapefile or raster in ArcGIS Pro and hoping your machine didn't crash. That doesn't scale. Modern GIS runs on distributed compute engines like Dask and Spark, or platforms like Wherobots that are designed for large-scale spatial data. These let you process billions of rows or terabytes of raster data without breaking a sweat.

    💾 Data Storage
    Say goodbye to shapefiles and file geodatabases. The modern stack runs on cloud-native formats that are fast, open, and built for interoperability. Learn about GeoParquet, Cloud-Optimized GeoTIFF (COG), and Zarr. These aren't just buzzwords; they're the backbone of scalable, shareable spatial data in the cloud (and you often don't need a cloud account to use these files).

    More to come in the next post, but for now: what are you focusing on learning, or what questions do you have? #gis #moderngis #geospatial #geospatialdataengineering #dataengineering #spatialanalytics
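The spatial SQL skill mentioned above can be practiced without any cloud account. The sketch below uses Python's built-in sqlite3 to run a plain-SQL bounding-box filter over a few made-up points; PostGIS and DuckDB's spatial extension layer real geometry types and predicates (e.g. ST_Contains) over this same query pattern. The table, columns, and data are illustrative only.

```python
import sqlite3

# Hypothetical sample data: (name, lon, lat) points.
POINTS = [
    ("Cafe A", -73.99, 40.73),
    ("Cafe B", -73.95, 40.78),
    ("Cafe C", -118.24, 34.05),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (name TEXT, lon REAL, lat REAL)")
conn.executemany("INSERT INTO places VALUES (?, ?, ?)", POINTS)

# A bounding-box filter written as plain SQL. In PostGIS or DuckDB's
# spatial extension you would use real geometry types and predicates,
# e.g. ST_Contains(ST_MakeEnvelope(...), geom), instead of raw lon/lat columns.
rows = conn.execute(
    """
    SELECT name FROM places
    WHERE lon BETWEEN -74.0 AND -73.9
      AND lat BETWEEN 40.7 AND 40.8
    ORDER BY name
    """
).fetchall()

print([r[0] for r in rows])  # the two New York points: ['Cafe A', 'Cafe B']
```

In DuckDB, a very similar SELECT can read a GeoParquet file directly, which is part of what makes the cloud-native formats mentioned below practical.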

  • View profile for Tom Emrich 🏳️‍🌈
    Tom Emrich 🏳️‍🌈 is an Influencer

    Building the platform for physical AI at Springcraft | Hiring founding engineers | 17+ years in spatial computing | Ex-Meta, Niantic

    72,943 followers

    This week's defining shift for me is that creating 3D data is getting much simpler. New tools are turning everyday inputs like smartphone video, single photos, and text prompts into usable 3D environments and assets. This lowers the barrier to building the scenes, objects, and spaces that robotics, simulation, and immersive content rely on. It also shifts 3D creation from a specialized skill to something all teams can generate quickly and at the scale modern spatial systems require. This week's news surfaced signals like these:

    🤖 Parallax Worlds raised $4.9 million to turn standard video into digital twins for robotics testing. The platform turns basic walkthrough videos into interactive 3D spaces that teams can use to run their robot software and see how it performs before sending anything into the field.

    🪑 Meta introduced SAM 3D to reconstruct objects and people from single images, producing fully textured meshes even when subjects are partly hidden or shot from difficult angles. The models were trained using real-world data and a staged process to improve accuracy.

    🌏 Meta unveiled WorldGen, a research tool that generates full 3D worlds from text prompts. It produces complete, navigable spaces that can be used in Unity or Unreal and shows how AI can create environments without manual modeling.

    Why this matters: Faster 3D pipelines expand who can build, test, and refine spatial ideas. They turn 3D creation from a bottleneck into a regular part of development, which opens the door to more experimentation and better decisions earlier in the process. #robotics #digitaltwins #simulation #VR #AR #virtualreality #spatialcomputing #physicalAI #AI #3D

  • View profile for Evan Helda

    Physical AI @ Nebius | Writing @ Dream Machines | MBA

    9,551 followers

    Spatial computing just had its most important month in a while. Hardware's heating back up. Enterprise blockers are falling. And memory itself? It just became... a place to be explored. Here are the 5 signals that matter most 🧵:

    1/ Meta + Anduril: AR goes to war (and then to work). Meta is partnering with Anduril to deliver EagleEye, a rugged XR headset for the U.S. military—blending Meta's optics + AI with Anduril's battlefield OS. This isn't about combat. It's a blueprint for industrial AR—built for field work, logistics, energy, and more. The frontline interface for digital twins just got real.

    2/ Apple Vision Pro quietly filled major enterprise gaps. With visionOS 26, Vision Pro now supports Team Device Sharing. Your calibration & settings follow your iPhone—making shared headsets practical. More importantly: Shared Spatial Experiences means real-time, multi-user collaboration is now native. Apple didn't shout about it—but this should accelerate enterprise adoption.

    3/ Snap + Niantic are building the AR Cloud. Snap's new Specs (2026) will run Snap OS, feature OpenAI/Gemini AI, and support real-time contextual experiences. They also announced a multi-year partnership with Niantic Spatial to map the real world with centimeter precision. The result: persistent AR that everyone can see, everywhere.

    4/ Real-time avatars just took a major leap. Apple's new Personas look… eerily human. Google's Project Beam? It uses Gemini AI to animate your likeness—so it can speak as you. Soon, your avatar could pitch, train, or meet… without you. This isn't about avatars. It's about the performance of self.

    5/ 4D GSplats: Your memories just became places. A breakthrough from 4DV.ai turns regular videos into 3D scenes you can explore—with time as a dimension. Pause. Scrub. Walk around. All in your browser. It's like stepping into a memory, not just watching it. And it hints at a future where our past is spatial, explorable, and alive.

    So what does all this mean?
Check out the latest Dream Machines post here: https://lnkd.in/gUyDpRZe And if you enjoy, be sure to subscribe! Much more to come on spatial, AI, and the future of being human.

  • View profile for Davide Zhang

    Design Prototyping at Microsoft Mixed Reality | XR, Wearables, AI

    5,209 followers

    Incredible work from Gracia AI streaming 4D Gaussian splats directly in the headset browser (video 1). A high-fidelity volumetric video in your space without apps or downloads. In 2023, I had to rely on an entire workflow (image 2) just to convert and put heavy Gaussian splats in mixed reality. And those were static.

    This maps closely to something I explored in my ACM Interactions article on situatedness (image 3). One of the core gaps I pointed out is that spatial computing today treats space as visual output, while interaction remains task-based and detached from context. Situatedness reframes this, not as rendering space, but as collocating people, objects, and environments across a spectrum (video 4):
    - from small, anchored objects
    - to partially blended spaces
    - to fully immersive environments

    Right now, splats behave like volumetric video. You look, maybe navigate, but rarely act. If spatial computing is to move beyond "3D UI in a box," the question becomes: what are the interaction primitives for situated environments?
    - selecting and editing regions of a splat
    - traversing splats
    - attaching annotations or memory to places in space
    - querying a scene and getting contextual responses
    - co-presence and collaboration inside streamed scenes

    Designing interactions across the situatedness spectrum is the next frontier to me. What does it mean to interact with a streamed world, not just view it? #gaussiansplat #spatialcomputing
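One of the primitives listed above, attaching annotations or memory to places in space, can be sketched as a tiny data structure. Everything here (class names, coordinates, the radius query) is a hypothetical illustration, not an API from Gracia AI or any headset runtime:

```python
from dataclasses import dataclass, field
import math

@dataclass
class SpatialAnchor:
    """An annotation pinned to a 3D position inside a streamed scene."""
    position: tuple  # (x, y, z) in scene coordinates, meters (assumed)
    note: str

@dataclass
class SceneAnnotations:
    anchors: list = field(default_factory=list)

    def attach(self, position, note):
        self.anchors.append(SpatialAnchor(position, note))

    def query_near(self, point, radius):
        """Return notes anchored within `radius` meters of `point`,
        the kind of contextual query the post imagines."""
        return [
            a.note for a in self.anchors
            if math.dist(a.position, point) <= radius
        ]

scene = SceneAnnotations()
scene.attach((0.0, 1.2, -2.0), "water damage on ceiling beam")
scene.attach((4.0, 0.0, 3.0), "exit door")
print(scene.query_near((0.0, 1.0, -2.0), radius=1.0))
# ['water damage on ceiling beam']
```

A real system would also need persistence, co-registration across devices, and permissions, which is exactly where the "interaction primitives" question gets hard.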

  • View profile for Philippe Lewicki

    Co-Founder of AfterNow, Artificial Intelligence, Spatial Computing UX, AR/VR/XR, Speaker

    3,594 followers

    I've started building a library of high-quality 3D Gaussian Splat scans across Paris, Los Angeles, and private properties. These are not just "cool 3D captures." I'm exploring how they can become useful business assets for:
    - Virtual production — real locations captured for digital sets / stage workflows
    - Real estate — videos, stills, and interactive property marketing from a single scan
    - VR visits — immersive remote property walkthroughs
    - Remote inspections — documentation for insurance, security, or expert review without always sending someone on site

    What interests me most is the gap between capture technology and practical business use. If a location can be captured quickly and turned into a reusable visual asset, it opens up a lot of possibilities:
    - scouting remotely
    - marketing properties differently
    - reviewing spaces without travel
    - preserving places digitally

    This montage includes scans from public landmarks, urban spaces, and residential properties. I'm currently testing which use cases create the most real demand. If you work in virtual production, real estate, VR, insurance, security, or location-based media, I'd love to compare notes. #3DGS #GaussianSplatting #VirtualProduction #RealEstateTech #PropTech #VR #DigitalTwins #SpatialComputing #ImmersiveTech

  • View profile for Matt Sheehan

    Director, AI Strategy & Innovation | Architecting the ‘Decision Layer’ - Simulation Models & Reasoning Engines

    13,134 followers

    GIS might be late to the real frontier of Geospatial AI. General Intuition just raised $133.7M - not to make better maps, but to train AI agents that move through 3D environments and make decisions in motion. That's a bigger signal than it looks. While most of the geospatial industry is still debating dashboards, layers, and map UIs, the real frontier is emerging in synthetic worlds where AI learns how to act, not just visualize. This is the Decision Twin era forming in real time:
    - Not digital models of assets — but decision models of environments.
    - Not maps to interpret, but environments to train machine judgment.
    - Not "Where is the flood?" — but "Given this evolving flood state, which action offers the best outcome based on intent and consequence?"
    - World models + spatial-temporal reasoning = the training grounds for Decision Twins.

    And here's the key: this wasn't built by GIS vendors. It came from gaming, simulation, and AI researchers who never cared about map-making - only decision-making. This is the clearest market signal yet that Geospatial 2.0 will not emerge from within GIS tooling; it will be built by those training AI agents to move, react, and choose inside live spatial-temporal environments. We are leaving the "map era" and entering the "decision rehearsal" era.

    📬 Follow the evolution of Geospatial 2.0 in the Spatial-Next Newsletter: https://shorturl.at/SG9tU
    Read the article: https://lnkd.in/dGXcNUFg

    #Geospatial2_0 #AI #SpatialComputing #WorldModels #DecisionTwins #Geospatial

  • View profile for Manas Bhatia

    Design Technology Specialist @ HLW | Columbia GSAPP MS CDP’25 | Featured across CNN · BBC · PBS · Forbes 30U30 Asia Nominee

    11,680 followers

    Before asking AI to reason with space, we must ask: what even is space?

    This question has guided my work this year, from developing my capstone, "Co-Design Canvas", an AI-driven participatory design tool, to our team's 'Spatial AI' research at Columbia University Graduate School of Architecture, Planning & Preservation with Prof. William Martin. Space isn't just a measurable container. It embodies both void and presence. It's both smooth and striated. It's not static but negotiated, shaped by its agents, behaviors, and context. This raises a critical question for AI in design: if today's models are trained on images and object labels, how do we teach them to grasp these spatial logics, hierarchies, and behaviors?

    As part of the course's exploration into spatial reasoning, semantic modeling, and agent-based simulations, we built a 2D simulation of our GSAPP design studio and constructed a detailed semantic ontology:
    📦 Entities like desks, lockers, ducts, and studio divisions
    🧑‍🎓 Agents like students and professors
    🛣️ Rules and actions like "one-way entry," "one student per chair," and "corridor-only movement"

    We then used LLM-based reasoning (via Funkify by Spatial Pixel and OpenAI APIs) to query spatial behaviors and test scenarios like: "How does circulation efficiency shift when desks are clustered?", "What happens if agents gain the agency to move their chairs?", and "How is movement impacted if spatial divisions or entrances are reconfigured?"

    On the technical side, we developed a computational model featuring:
    ⚙️ Agent-based pathfinding algorithms (BFS, Dijkstra, heuristic routing)
    🔥 Congestion heatmaps and circulation trails
    📊 Layout scoring functions for efficiency, comfort, and space utilization

    This formed a closed-loop system where the AI (LLM) interprets the space, simulates behaviors, reads outcomes, and recommends new spatial configurations. We'd adjust parameters like the number of divisions, desk orientations, or entry/exit points, and then run simulations to evaluate their effects on movement and congestion.

    The larger question this raises for cities: what kinds of datasets and ontologies do we need for AI to meaningfully understand urban space? Not just 3D models or images, but operational, annotated data. Maybe a pothole dataset for Indian roads to guide autonomous vehicles, or culturally grounded definitions of public, private, and negotiated spaces. If you're working on AI in physical space, I would love to connect.

    🔗 Explore our GSAPP Spatial AI simulation here: https://lnkd.in/gfFHikdW
    🔗 Project information: https://lnkd.in/gXfENEBm

    Team: Manas Bhatia, Assoc. AIA, Vaibhav Jain
    Course Instructor: William Martin
    Course: Arch A6956-1 Spatial AI, Spring 2025
    TA: Sebastian Schloesser

    #SpatialAI #SpatialIntelligence #ComputationalDesign #agentbasedmodeling #AIinArchitecture #ColumbiaGSAPP #SpatialReasoning #geospatialAI #physicalAI
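The pathfinding-plus-heatmap loop described above can be sketched in a few lines of standard-library Python. The grid layout and agent routes below are invented for illustration; they are not the team's actual GSAPP studio model:

```python
from collections import deque, Counter

# Toy studio layout (made up): '.' is walkable, '#' is a desk/obstacle.
GRID = [
    "....#",
    ".##.#",
    ".....",
    "#.##.",
    ".....",
]
ROWS, COLS = len(GRID), len(GRID[0])

def bfs_path(start, goal):
    """Shortest walkable path between two cells via breadth-first search."""
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < ROWS and 0 <= nc < COLS
                    and GRID[nr][nc] == "." and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # unreachable

# Congestion heatmap: count how often each cell appears on agents' routes.
heatmap = Counter()
for start, goal in [((0, 0), (4, 4)), ((2, 0), (0, 3)), ((4, 0), (2, 4))]:
    for cell in bfs_path(start, goal):
        heatmap[cell] += 1

hottest, visits = heatmap.most_common(1)[0]
print(hottest, visits)  # the most congested cell and its visit count
```

Swapping BFS for Dijkstra or A* only changes the queue discipline and cost model; the congestion-counting loop and layout-scoring step stay the same.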

  • View profile for Anne-Liese Prem

    Head of Cultural Insights & Trends @LOOP | AI & Emerging Tech | Luxury, Digital Fashion, Beauty | Speaker

    18,909 followers

    The missing piece for AI and immersive tech? It's all about 3D content 👾

    With Apple Vision Pro and other 3D headsets, it's clear that immersive technologies are rapidly advancing. But the real game-changer isn't just the devices—it's the content behind them. As immersive tech continues to evolve, it needs rich 3D training data to deliver truly immersive and interactive experiences. 🤖 The use of AI and 3D rendering in interior design and architecture fascinates me, because it's poised to also influence how we digitally represent products in fashion, luxury, and e-commerce in general. 👀 That's why this video caught my eye: earlier this year at AWE, IKEA's Innovation Manager, Martin Enthed, shared how the company is transforming the way we interact with spaces using 3D visualization and spatial computing.

    ➡ IKEA was an early adopter of AR and 3D modeling, and today they have a catalog of over 60,000 high-res 3D images of their products—including even the plants they sell. Materials, surface appearances, and textures are stored separately in meticulous detail, for maximum flexibility.
    ➡ But IKEA isn't just building a digital catalog—they're advocating for metaverse industry standards to ensure these 3D models can be seamlessly integrated into other applications.
    ➡ The potential here is enormous: imagine training AI with this wealth of 3D data. Future devices could understand and interact with three-dimensional spaces in ways we're only beginning to imagine. This will redefine how we shop, decorate, and interact with the digital world.

    What are your thoughts on the future of spatial computing? How are you handling 3D data for your products? https://lnkd.in/dZzFCbcJ #spatialcomputing #emergingtech #design #3D

  • View profile for Gerald Mvumi

    Geospatial Engineer | GIS Developer | 3D Reality Capture

    4,444 followers

    🌍 Just Launched into the Future of Geospatial Intelligence with GeoGPT 🚀

    Thrilled to announce that I've started exploring GeoGPT, an open-source, non-profit research initiative at the cutting edge of GeoAI innovation. GeoGPT blends the strengths of geospatial science with the capabilities of large language models to deliver smarter, faster, and more scalable spatial insights across domains.

    💬 My first prompt: "What are the essential Python packages for geospatial work?" Here's the stack GeoGPT recommended. Each one is a cornerstone of modern spatial analysis:

    📌 Core Spatial Stack
    • GDAL/OGR – The geospatial data backbone
    • GeoPandas – Streamlined spatial operations
    • Fiona – Efficient vector data I/O
    • Shapely – Easy geometry manipulations
    • Rasterio – Pythonic raster processing
    • PyProj – Coordinate system transformations

    🗺️ Mapping & Visualization
    • Folium – Interactive web mapping
    • Cartopy – Scientific visualization
    • Kepler.gl – High-scale browser-based mapping

    📊 Spatial Intelligence Tools
    • WhiteboxTools – Advanced geoprocessing
    • PySAL – Spatial econometrics and clustering
    • RasterStats – Fast zonal statistics

    🌐 Why this matters: GeoAI is revolutionizing how we approach urban planning, climate science, disaster response, and more. This isn't just a tech upgrade. It's a paradigm shift in geospatial thinking.

    📌 Takeaway: The geospatial landscape is rapidly evolving, and AI is at the center of that transformation. GeoGPT is just one example of how accessible and impactful GeoAI can be for professionals across disciplines.

    👥 Your turn: Are you experimenting with GeoAI tools in your workflows? Tried GeoGPT yet? Share your insights or drop a comment. I'd love to hear how others are innovating in this space. Let's shape the future of spatial intelligence together.

    #GeoAI #GeoGPT #GIS #SpatialDataScience #GeospatialTech #OpenSourceGIS #AIinGIS #PythonForGIS #FutureOfGIS #MachineLearning

    Image credit: Manuel Cortés Núñez from Pixabay
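For a taste of what sits underneath libraries like Shapely, here is a standard-library sketch of the even-odd ray-casting test that Shapely exposes as `Polygon(...).contains(Point(...))`. The parcel coordinates are made up:

```python
def point_in_polygon(pt, polygon):
    """Even-odd ray casting: count how many polygon edges a horizontal
    ray from `pt` crosses; an odd count means the point is inside.
    Shapely provides this (and much more, robustly) via its geometry
    predicates; this stdlib version only illustrates the idea."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) cross the rightward ray from (x,y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A made-up square "parcel" in projected coordinates.
parcel = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon((5, 5), parcel))   # True
print(point_in_polygon((15, 5), parcel))  # False
```

In practice you would reach for Shapely directly; production point-in-polygon code must also handle edge and vertex cases that this sketch glosses over.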

  • View profile for Arkadiusz Szadkowski

    🧭 Shaping the Reality Mapping, Digital Twins, GIS, Imagery and Remote sensing sectors.

    51,819 followers

    Innovation rarely comes from following the workflow. It comes from people who are willing to break it. One example I've been following closely for a while is Gabriel Ortiz and his team. After achieving impressive results with GeoAI at scale for feature extraction (including generating 1:5000 topographic maps from Reality Mapping layers), he's now pushing further. Not on accuracy. On perception.

    Using AI, he's experimenting with enhancing maps by combining:
    • graphical encoding
    • depth information
    • visual emphasis

    The goal: maps that are not just correct, but intuitive, readable, and that "pop." This is interesting because for decades, cartography has been about simplifying, symbolizing, and abstracting. Now we're starting to ask: what if AI helps us enhance understanding without losing the richness of the data?

    This is where things get exciting. Not just automation of production, but evolution of how we communicate spatial information. More of this kind of thinking and leveraging of AI is needed. Pushing boundaries. Questioning defaults. Exploring what's possible. 👏 Great work, Gabriel.
