Self Photogrammetry
It’s no secret that Google has done a lot for our digital world, staking its name on pioneering concepts and workflows that help us visualize it. But what about that digital world has us so enamoured with it?
For me, it’s the realism of this digital world and how accurately it feels like our own! How did things get so accurate, and what goes into making the digital world feel like our own?
Google acquired Keyhole, the company responsible for developing what became Google Earth, in 2004, and afterwards relied on the Landsat 7 and 8 satellites to produce textures to overlay on a sphere, which helped us get from A to B. Once 3D modeling was introduced to Google Earth, things really went vertical. Buildings were modeled and added to Earth, adding to the immersion. Street View took that even one step further by giving us a realistic ground-level view, something paramount if you navigate by landmarks like I do!
Photogrammetry is the link between worlds here. Thousands upon thousands of pictures have been taken on the ground by Google Street View cars, and just as many have been submitted by users across the world, all analyzed together to help define Earth. If something is so achievable on such a large scale, what can it do in a more local setting? Getting some practice with this technology is a good idea for new developers or designers who want to define a more realistic setting in their projects. I’m lucky enough to have two good friends with the same curiosity, so we tried this out at home!
My friends Dylan and Adrian helped me with a small passion project to expand our knowledge of photogrammetry, design, and the application of the process. Dylan is a photographer who took the many pictures of me, and Adrian works in graphic design and put those pictures to work.
Dylan took well over three hundred photos of me, sitting and rotating, at three different vertical angles. When all of these pictures are combined, map points are drawn, and the distance between those points helps determine the depth of features such as my eyes, mouth, and cheeks. These distinct key points act like signatures: when the program looks at the other pictures of the model, it can match these points and define their spatial relationships with each other. Once every feature point is mapped, those measurements produce the rough outline of a 3D model.
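If you’re curious what that matching step looks like in practice, here’s a minimal sketch using OpenCV’s SIFT detector in Python. The filenames are hypothetical placeholders, standing in for any two overlapping photos from a session like ours.

```python
import cv2

# Load two photos of the subject taken from slightly different angles.
# "photo_001.jpg" / "photo_002.jpg" are hypothetical filenames.
img_a = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive key points and compute their descriptor "signatures".
sift = cv2.SIFT_create()
kp_a, desc_a = sift.detectAndCompute(img_a, None)
kp_b, desc_b = sift.detectAndCompute(img_b, None)

# Match signatures between the two photos; each match ties the same physical
# point (an eye corner, the tip of the nose) to its position in both images.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
print(f"{len(matches)} candidate point correspondences found")
```

A full pipeline repeats this across every pair of overlapping photos and then triangulates each matched point into 3D space.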
The second picture shows how accurately the program matches key feature points across these three hundred pictures. The produced mesh looks a little bumpy, a byproduct of some textures being difficult to measure depth on. Long, spindly things like threads, tree branches, a pot of ivy, or my hair are all harder to find feature points on because they lack contrast against their backgrounds. Even with a proper backdrop, my curly hair was difficult to capture because of this texture and contrast problem, as shown below.
The point data gets confused about where to go because feature points end up lost in the depth measurements. What looks deep from one angle looks shallow from another. Even at the top of my hair you can tell where feature points have been misplaced, because that same spot is defined differently in another reference picture.
This is called Uncertain Spatial Relationships. Essentially what’s happening here is that the computer starts to generalize the space between the feature points. Without that contrast and accuracy, it fills in the information with the next closest approximation, and, as we can see here, that doesn’t always work out. This is also why trees on Google Earth look like cotton candy; it’s simply too difficult to capture that precision from a distance. To help remedy this, a bald cap is usually worn. We used a knit hat, and it ended up working just as well; the flatter a surface is, the easier it is to measure its depth.
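Most photogrammetry pipelines guard against these ambiguous points with something like Lowe’s ratio test: for each key point, take its two best candidate matches, and if the best isn’t clearly better than the runner-up, drop the point rather than guess. Here’s a sketch, continuing from the descriptors computed above.

```python
import cv2

# For each key point in photo A, find its two best candidates in photo B.
matcher = cv2.BFMatcher(cv2.NORM_L2)
candidates = matcher.knnMatch(desc_a, desc_b, k=2)

good = []
for best, runner_up in candidates:
    # Repetitive, low-contrast textures (curly hair, leaves) produce
    # near-ties between the two candidates and get filtered out here.
    if best.distance < 0.75 * runner_up.distance:
        good.append(best)

print(f"kept {len(good)} of {len(candidates)} matches after the ratio test")
```

Points that fail the test simply never make it into the point cloud, which is one reason hair and foliage come out sparse and lumpy rather than sharp.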
Photogrammetry cuts a lot of the initial legwork out of the design and development process, but it’s not an end-all solution. Smoothing those feature points out and bringing the model to life takes just as much work, especially with curly hair, but that was no match for Adrian! The end product of this venture was me as my favourite class from the Final Fantasy series, a Blue Mage!
This model could now be used in production, and it was ready much faster than it would have been starting from scratch! The process can be repeated iteratively to achieve whatever scale of environment you’re trying to develop. Just as Google Earth has its 3D buildings, it shows the actual depth of places like the Grand Canyon and even the height of Mt. Everest. You can even see the Tulsa city skyline and Niagara Falls!
As for me, I’ll be interested in using photogrammetry to capture realistic settings for the games I want to develop. Big titles like Red Dead Redemption and Resident Evil have used photogrammetry in this fashion, from satellite images à la Google for interesting large-scale in-game locations, like a mountain or a mansion, down to a good number of the objects in the game: fences, barrels, small structures, and so on.
Beyond realistic design and mapping, photogrammetry can even be used in fields like archaeology. Archaeologists can use it at their excavation sites to build models for study, helping preserve the sites in the process. One super interesting use swaps pictures for sonar: underwater, a map developed through echolocation brings information to the surface, so underwater archaeologists never have to get their feet wet!
Another form of detection is lidar, which determines depth by the time it takes a laser’s light to travel to and from a surface. This is advantageous because lidar is not skewed by things such as darkness or weather the way photogrammetry is. With pictures, the more contrast there is, the more detail and information can be found and used in measurements. By replacing uncertain spatial relationships with precisely timed measurements, lidar provides much more reliable depth. The tradeoff is that lidar can be very expensive and expertise can be hard to find, and although lidar is highly accurate, photogrammetry captures a broader range of information, like colour and texture, than lidar can. When used in conjunction, lidar and photogrammetry can produce hyper-realistic models, especially if the cameras used are high-resolution.
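The distance math behind both lidar and sonar is the same time-of-flight idea, just with different wave speeds. A back-of-the-envelope sketch, using illustrative echo times rather than real sensor readings:

```python
# Time-of-flight ranging: the pulse travels out and back, so the one-way
# distance is speed * round_trip_time / 2.

SPEED_OF_LIGHT = 299_792_458.0   # m/s, a lidar pulse (vacuum; close enough in air)
SPEED_OF_SOUND_WATER = 1_480.0   # m/s, a sonar ping in seawater (approximate)

def range_from_echo(round_trip_seconds: float, speed: float) -> float:
    """Distance to the surface that reflected the pulse."""
    return speed * round_trip_seconds / 2

# A lidar return after ~66.7 nanoseconds puts the surface about 10 m away.
print(f"lidar: {range_from_echo(66.7e-9, SPEED_OF_LIGHT):.2f} m")
# A sonar echo after 0.27 seconds puts the seabed about 200 m down.
print(f"sonar: {range_from_echo(0.27, SPEED_OF_SOUND_WATER):.2f} m")
```

Because the timing is measured directly instead of inferred from pixel contrast, darkness or a busy texture doesn’t throw off the result.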
Thank you again, Dylan and Adrian, for your time and effort in helping me explore this! Hopefully in the near future we’ll be able to work at larger scales. I’m eager to try applying these models in some of my studies at Holberton, and I’m especially excited for the opportunity to put together a full character model with this process. I think developers who are just starting out in these environments can make great use of the apps out there to facilitate their projects, especially if you’re like me and still working on your own modeling skills in the meantime!
Resources