Shared Spatial Reasoning is a Big Deal
I am working on representation formats for future single synthetic environments (SSEs), and I keep getting reminded of a potential giant in this area: Apple. Stick with me a bit while I explain why.
Let's get one thing out of the way first. Apple is in the consumer electronics business, with the notable exception of collaborative gaming. They surely will not enter the SSE space as I address it. Though we'll see an Apple XR headset soon, their market pathways are just too far from what will work for what I need. But they absolutely own a key expertise: spatial modelling.
The world sees Apple as a device supplier with a growing service business. The iPhone is the most successful product in history; Macs, iPads and watches are big business. If you just segmented out what they call ‘services’, it would be one of the world’s largest firms.
But look at some of the things they’ve done, almost invisibly.
Navigating Space
We may never see an Apple car, but they put billions into developing one. The central problem they were working on was self-driving, and that is a process of spatial reasoning.
Real Time Spatial Acoustic Reasoning
Some of you will recall the original HomePod. This now-discontinued device had six microphones whose job was to listen to the acoustic space; it would adjust the volume and, using machine learning, create synthetic echoes so that the broadcast sound cancelled the spatial interference and presented a more spatially open acoustic environment. Over time it could adapt to furniture changes or people moving around. Something like this had been attempted as far back as the 1980s, with Bose working out of MIT, but here it was finally realised with real-time dynamic spatial AI. The difficulty is compounded by having to decompose the music into components that can be individually manipulated for the environment.
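To make the "listen and adapt" loop concrete, here is a toy sketch of room adaptation as a simple adaptive filter. This is my stand-in for illustration only; the HomePod uses machine learning and far more sophisticated decomposition, not a plain LMS update.

```swift
// Toy sketch: estimate how the room colours the signal we played by
// comparing it with what the microphones heard, and nudge a correction
// filter in the direction that reduces the residual (an LMS-style update).
struct RoomAdapter {
    var correction = [Float](repeating: 0, count: 64)   // correction filter taps
    let stepSize: Float = 1e-3

    mutating func adapt(played: [Float], heard: [Float]) {
        let order = correction.count
        guard played.count > order, heard.count > order else { return }
        for n in order..<min(played.count, heard.count) {
            // Predict what the mics should hear through the current filter.
            var predicted: Float = 0
            for k in 0..<order {
                predicted += correction[k] * played[n - k]
            }
            // Residual room colouration we have not yet accounted for.
            let error = heard[n] - predicted
            // Nudge each tap toward cancelling that residual.
            for k in 0..<order {
                correction[k] += stepSize * error * played[n - k]
            }
        }
    }
}
```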
With 'Spatial Audio' (for instance Dolby Atmos) and dynamic head tracking on their newer devices, they can keep the spatial registration of the music and its 'virtual walls' constant: you can turn your head and sonically face the instruments, or the environment, on one side or the other.
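The geometry behind keeping the sound stage fixed is simple to state, even if the signal processing is not. A minimal sketch of the counter-rotation idea, assuming head orientation is read from AirPods via CMHeadphoneMotionManager; this is my illustration, not Apple's renderer.

```swift
import CoreMotion
import simd

// Keep a virtual source fixed in the room by counter-rotating it with
// the listener's head orientation reported by the headphones.
final class HeadTrackedSource {
    private let motion = CMHeadphoneMotionManager()
    private let sourceInRoom = SIMD3<Float>(0, 0, -2)   // 2 m in front of the listener

    func start(onUpdate: @escaping (SIMD3<Float>) -> Void) {
        guard motion.isDeviceMotionAvailable else { return }
        motion.startDeviceMotionUpdates(to: .main) { [weak self] deviceMotion, _ in
            guard let self = self,
                  let q = deviceMotion?.attitude.quaternion else { return }
            let head = simd_quatf(ix: Float(q.x), iy: Float(q.y),
                                  iz: Float(q.z), r: Float(q.w))
            // Counter-rotate so the source stays put while the head turns,
            // then hand the position to whatever spatial mixer is in use.
            onUpdate(head.inverse.act(self.sourceInRoom))
        }
    }
}
```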
Object and Environment Spatial Mapping
Apple was the first to master facial recognition on portable devices. They do this with a remarkable infrared dot projector and sensor coupled to a bespoke 'bionic' chip of their own design. (There are over a billion of these in the wild.)
They use this same technology to map environments for their impressive extended reality framework. It can recognise features that virtual objects can pass behind and be occluded by. This is far ahead of the other big players, but goes unnoticed, because they don't yet make much visible use of it.
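The developer-facing side of this capability is visible in ARKit and RealityKit. A minimal sketch of asking the framework to mesh the room and occlude virtual content behind real geometry, assuming a LiDAR-equipped device; this is the public API surface, not whatever the internal pipeline does.

```swift
import ARKit
import RealityKit

// Configure an ARView so virtual objects are hidden behind real-world
// geometry and behind people standing in front of them.
func configureOcclusion(for arView: ARView) {
    let config = ARWorldTrackingConfiguration()

    // LiDAR-equipped devices can build a live triangle mesh of the room.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    // Depth-aware person segmentation lets people pass in front of virtual objects.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }

    // Ask RealityKit to use the reconstructed mesh to occlude virtual content.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(config)
}
```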
This technology has been applied to video conferencing in their recent 'Center Stage' iPad/iPhone cameras. Even though the camera remains stationary, the framing will follow you around and zoom to keep you in the centre of the frame.
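Apps can opt their own capture sessions into that behaviour through AVFoundation. A small hedged sketch; the choice of front ultra-wide camera here is just one plausible setup, and availability varies by device.

```swift
import AVFoundation

// Opt the app's capture pipeline into Center Stage framing where supported.
func enableCenterStageIfAvailable() {
    // Let the app (not only the user via Control Center) control the feature.
    AVCaptureDevice.centerStageControlMode = .app
    AVCaptureDevice.isCenterStageEnabled = true

    if let camera = AVCaptureDevice.default(.builtInUltraWideCamera,
                                            for: .video, position: .front) {
        // Read-only flag reporting whether the framing is currently running.
        print("Center Stage active:", camera.isCenterStageActive)
    }
}
```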
Mesh Geolocation
Apple now sells AirTags, which rely on a notable computing paradigm. When one of these moves through the world, it speaks to nearby iPhones, which relay the information back so that spatial reasoning can precisely locate its path. This is all the more impressive because, unlike everything Google does, Apple does this without tracking users.
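The privacy part is worth pausing on. Here is a conceptual sketch of how a crowd-sourced relay can work without identifying anyone, using ephemeral key agreement: the finder phone encrypts its own location to a public key broadcast by the tag, so relaying devices and servers cannot read the reports, and only the tag's owner can decrypt them. This is my illustration of the general idea, not Apple's actual Find My protocol.

```swift
import CryptoKit
import Foundation

// A report a finder phone could upload: nothing in it identifies the finder.
struct LocationReport {
    let finderEphemeralKey: Data   // lets the owner complete the key agreement
    let sealedLocation: Data       // nonce + ciphertext + authentication tag
}

func makeReport(tagPublicKey: P256.KeyAgreement.PublicKey,
                locationFix: String) throws -> LocationReport {
    // Fresh ephemeral key per report: reports cannot be linked to the finder.
    let ephemeral = P256.KeyAgreement.PrivateKey()
    let shared = try ephemeral.sharedSecretFromKeyAgreement(with: tagPublicKey)
    let key = shared.hkdfDerivedSymmetricKey(using: SHA256.self,
                                             salt: Data(), sharedInfo: Data(),
                                             outputByteCount: 32)
    // Only the holder of the tag's private key can recover the location.
    let sealed = try AES.GCM.seal(Data(locationFix.utf8), using: key)
    return LocationReport(finderEphemeralKey: ephemeral.publicKey.rawRepresentation,
                          sealedLocation: sealed.combined!)
}
```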
Maps
To my knowledge, they haven't merged the above technologies with their maps effort. But Apple was the first to go with vector models of buildings in urban displays. What this means is that their discovery vehicles have learned how to recognise the edges of buildings and features like windows and doors. When they draw them in their 3D flyovers, these are rendered the way Pixar renders objects. And that brings us to Pixar, and a strange confluence.
Agile Supply Chain Management and RenderMan
I was at DARPA helping to reinvent enterprise modelling. Central to that was creating an advance in how we modelled supply chains, and a lot of that had to do with the simple process of getting stuff spatially around the world. It turns out that integrating physics (physical movement), business processes and control structures was hard: ARPA-hard.
As it happened, our group in the intelligence community was buying a comparatively large number of NeXT machines, so we encountered Steve Jobs a fair bit, and he was very interested in this spatial/process modelling problem. I do know that when he went back to Apple, he incorporated our DARPA enterprise work into Apple University, and it provided a competitive advantage still visible today. It is possible that there is a connection between that and the systems listed above. Jobs wasn't just a salesman; he understood what mattered.
More likely is a direct logical connection, though I like to think they all blend together. At this same time, Jobs bought an animation group from George Lucas and started to make animated films. The key competitive trick was 3D rendering software that he subsidised when it was still a big risk. Rendering is all about a model of space and how light interacts with objects. "Toy Story" revolutionised filmmaking, and behind it was a system called RenderMan, the first scalable spatial modelling environment that allowed creative affordance. Jobs fundamentally changed the world well before he returned to Apple.
Spatial reasoning.
Why this History?
Well, it is pretty cool to think about what’s going on in the most profitable firm on the planet (if you don’t count tyrants and oil). And interesting to speculate on what they have up their sleeve with their XR headset and where they could go with health wearables — which can be seen as a spatial modelling challenge.
But mostly it is because I'm thinking about the next-generation internet, and what that means for single synthetic environments, or whatever we end up calling them. The challenge is in how we bring the world to and through the network in all its complexity, rather than pushing existing network paradigms into the world. The sad fact is that back in the day, defence research led the rest of the world; today, defence research is almost irrelevant compared to what happens at places like Apple. By itself, Apple research is an order of magnitude larger than DARPA today, and by my measure far more like the old DARPA in attitude.
I think the future of SSEs is a matter of carrying narrative in space to others in their spaces. All the excitement in machine learning helps not a whit. And meanwhile folks are getting things done by putting the physical environment first. We probably need a bit more of Jobs right now in thinking about the future, and less of Musk.