The Future of Making Animations
One Piece - Japanese manga series written and illustrated by Eiichiro Oda

Making animations is storytelling.

I have always loved animation. When I was a kid, I read many comic books and watched numerous cartoons. Even now, I love and enjoy watching animated movies. What I have always admired is the feeling you get from watching facial expressions and body motion; a single motion can convey so much.

The transition from 2D to 3D has been dramatic; adding one more dimension created many more technical challenges. Animators are required to have the artistic talent to create expressive motion as storytellers, as well as the technical skills to understand what is happening underneath. Yet after 20 years of 3D animation, we are still using much the same methods to animate characters.

Traditionally, you animate a character by moving many individual controls. Here is an example of an animation rig in Blender.
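
To give a sense of what this control-based workflow involves, here is a minimal sketch using Blender's Python API to keyframe a single rig control. The armature name "Armature" and bone name "hand_ctrl_L" are placeholders for whatever a real rig uses; a real walk cycle repeats this for dozens of controls across dozens of frames.

```python
import bpy

# Placeholder names: swap in your own armature and control bone.
rig = bpy.data.objects["Armature"]
ctrl = rig.pose.bones["hand_ctrl_L"]
ctrl.rotation_mode = 'XYZ'

# Key the control's rotation on three frames of a one-second, 30 fps clip.
for frame, angle in [(1, 0.0), (15, 0.6), (30, 0.0)]:
    ctrl.rotation_euler = (angle, 0.0, 0.0)
    ctrl.keyframe_insert(data_path="rotation_euler", frame=frame)
```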

On the surface, it seems like advanced computer power, rendering, and simulation techniques have allowed us to take a huge leap forward in graphics. But when it comes to making animations, that’s not the case. And for the last five years, people have been looking for a better way to create motions.

Toy Story - The different stages of animation are highlighted by rendering technologies

Having worked as an animation engineer at Epic Games for 12 years, I have had a very in-depth look at this problem. While developing various tools for animators to keyframe and rig animations, I realized it is never easy to create them. I don't even have the patience to create a single 30-frame walk animation, which is one second of animation at 30 fps. Granted, I'm not an animator, but I found it so difficult. I enjoy watching animations, but not making them.

That is my dilemma. Even though I love watching animations, I don't want to make one. I can make tools for animators, but I am not patient enough to make the animations myself. For me, making animations should be a fun way to create art. It should be like drawing a picture on a piece of paper, but this is far from reality. So how do we get there?

Evolution of Making Animations

Though there have been interesting attempts to solve this problem, here are the important conditions I think will help address the challenges mentioned above: 

  1. An easy way to create motion
  2. Having artistic control over the fine details
  3. Being able to animate any creature, like a dragon

The first two are difficult to satisfy at the same time, as they can be seen as contradictory. It's like physics: we love physics when it works the way we want, and we don't when it doesn't. If it's easy to create motion, it's harder to control the fine detail. If the tool does things for you, you lose control. You need some balance between the two.

The third one is also important for storytelling. Stories are not just about humans; they include animals, insects, shapes, and all the imaginary creatures you want to tell stories about.

I love ‘Larva’ - they tell so much of the story without any arms or legs


https://mokastudio.com/mosketch/

One approach (Mosketch) is to draw a curve on the 2D plane of your camera, and the tool automatically picks out body parts and moves them along the curve. This looks more fun than just moving controls, but the mapping from 2D to 3D creates ambiguity, and you still need fine-grained controls to add more detail.
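
To make that ambiguity concrete, here is a small illustrative sketch (not Mosketch's actual algorithm, and the camera intrinsics are made up): a 2D point on the screen only pins down a ray from the camera, so the 3D position of the joint it drives still has a free depth parameter.

```python
import numpy as np

def unproject(u, v, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project a pixel (u, v) to camera space at a chosen depth (pinhole model)."""
    return np.array([(u - cx) / fx * depth, (v - cy) / fy * depth, depth])

# The same 2D stroke point maps to very different 3D joint positions,
# depending on a depth value the 2D drawing never specified.
candidates = [unproject(350.0, 200.0, d) for d in (0.5, 1.0, 2.0)]
```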

https://cascadeur.com/

Cascadeur utilizes physics in conjunction with traditional and new tools. You can edit the basic pose or the trajectory of the motion over time, and it can tell you if a pose is physically incorrect. Similarly, there has been research on auto-interpolating between poses: you create poses on a timeline, and the tool fills in the empty spaces between them.
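
As a rough illustration of in-betweening, here is a naive sketch that spherically interpolates each joint's rotation between two keyed poses; this is only the baseline that learned interpolation research tries to improve on, not Cascadeur's or any particular paper's method, and the joint names and quaternions below are placeholders.

```python
from scipy.spatial.transform import Rotation, Slerp

def inbetween(pose_a, pose_b, t):
    """Naive in-between: pose_a/pose_b map joint name -> quaternion (x, y, z, w), 0 <= t <= 1."""
    result = {}
    for joint, qa in pose_a.items():
        slerp = Slerp([0.0, 1.0], Rotation.from_quat([qa, pose_b[joint]]))
        result[joint] = slerp(t).as_quat()
    return result

# Halfway pose between an arm-down key and an arm-raised key (placeholder values).
mid_pose = inbetween({"shoulder_L": [0, 0, 0, 1]}, {"shoulder_L": [0.707, 0, 0, 0.707]}, 0.5)
```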

https://developer.nvidia.com/blog/creating-a-human-pose-estimation-application-with-deepstream-sdk/

What about video? We have so many videos on the internet; how can we utilize them? There has been a lot of research around this recently as AI has been getting better at it. But this hits a similar issue to Mosketch: the ambiguity of mapping a 2D plane to 3D. It works great from the front, but not from the side, for example. The foot doesn't stay on the ground because it's unclear whether contact has to be made or not. In the end, these results are not yet easy to use on a 3D avatar, but the work is ongoing.
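
A common band-aid for that foot-sliding problem is a contact heuristic along these lines; this is an illustrative sketch rather than any specific paper's method, and the speed threshold is arbitrary.

```python
import numpy as np

def lock_foot(foot_positions, speed_threshold=0.02):
    """foot_positions: (num_frames, 3) estimated foot trajectory; pins the foot while it is 'planted'."""
    locked = foot_positions.copy()
    for i in range(1, len(locked)):
        # Treat the foot as in contact when it is barely moving, and freeze it there.
        if np.linalg.norm(foot_positions[i] - foot_positions[i - 1]) < speed_threshold:
            locked[i] = locked[i - 1]
    return locked
```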

VR animation is another approach. There are two aspects to this: one is using VR controllers to capture motion, where you only have three points of information to animate from; the other is using VR controllers to animate an avatar directly. The latter seems more intuitive, since artists can see the scene and simply pick and move the limbs.

https://twitter.com/Maxi4Maci/status/1309590685750833152

Also, what happens to the dragon I mentioned earlier? Retargeting is the process of transferring animation from one character (e.g. a biped) to another (e.g. an elephant), and it has often been neglected because it can usually be solved case by case. I think we can take a better look at this problem. It's not just moving a point to a point, or a limb to a limb; we want to transfer the intention of the move, like this elephant wanting to follow this man.
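
To show why it tends to be handled case by case, here is a minimal joint-mapping sketch; the joint names and the bare rotation copy are hypothetical simplifications, and a real retargeter also has to deal with different proportions, joint counts, and rest poses.

```python
# Hand-built, per-case mapping from a biped rig to a quadruped rig (hypothetical names).
BIPED_TO_QUADRUPED = {
    "hips": "pelvis",
    "left_leg": "front_left_leg",
    "right_leg": "front_right_leg",
}

def retarget_pose(source_pose, joint_map=BIPED_TO_QUADRUPED):
    """source_pose: dict of joint name -> rotation; copies rotations onto mapped target joints."""
    return {target: source_pose[source]
            for source, target in joint_map.items()
            if source in source_pose}
```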

Here is how I dream animation could be: creating motion should be easy, like pulling from a video source (or camera) or a VR capture. Once you have this, we extract key points from the captured motion. We identify and transfer those key points to the target key points of a 3D avatar, which could be a humanoid or an elephant. After the transfer, the target avatar follows those key points in the way it thinks it should. This requires the target avatar to have physical information about its limbs and body parts, so it knows how to follow the movements. For example, if I wave my hand, the elephant won't move its foot the way I move my hand; it will lift and wave its trunk.
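
Here is a hypothetical sketch of the key-point transfer step in that dream workflow. Every name is a placeholder, and the "follow" step is reduced to a trivial offset so the example stays self-contained; a real avatar would solve for its own limbs physically, within its joint limits.

```python
import numpy as np

# Semantic mapping, not a geometric one: my waving hand drives the elephant's trunk tip.
SEMANTIC_MAP = {"right_hand": "trunk_tip"}

def transfer_keypoints(captured, semantic_map=SEMANTIC_MAP):
    """captured: dict of source key point -> (num_frames, 3) trajectory from video or VR."""
    return {semantic_map[name]: traj for name, traj in captured.items() if name in semantic_map}

def follow(avatar_rest, targets):
    """Apply each captured trajectory's motion, relative to its first frame, at the avatar's own key point.
    avatar_rest: dict of target key point -> (3,) rest position as a NumPy array."""
    return {name: avatar_rest[name] + (traj - traj[0]) for name, traj in targets.items()}
```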

The Future is Coming

NVIDIA Omniverse is a 3D platform that is looking at these problems in different ways. We want to make this platform easy for everyone to use. Check out Kaolin, for example, a library built specifically for accelerating 3D deep learning research.

We also want the industry to look at animation problems as a whole. At the GPU Technology Conference, leading artists and animators will gather to share insights into design workflows and how the future of content creation is evolving. Join us — registration is free — and learn how we can all move forward towards being more creative, so we can be more expressive in the 3D world. 

Hopefully someday, making animation will be like drawing on a piece of paper, and you’ll be able to use animation to express your ideas, your feelings, and tell your stories.
