Since we need to give artists very flexible room for animation, we create a customizable blend space.
To reduce computational complexity, barycentric coordinates are used so that a sample point blends only the 3 nearest animation clips.
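A minimal sketch of how those blend weights fall out (the clip parameter points and names here are illustrative, not any engine's API):

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p inside triangle (a, b, c).

    a, b, c are the 2D parameter points (e.g. speed/direction) tagged
    on three animation clips; p is the current input from game logic.
    """
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    return u, v, w  # weights for the three nearest clips

# Example: blend walk / jog / strafe clips at the current input point.
w_walk, w_jog, w_strafe = barycentric_weights(
    np.array([1.5, 0.2]),    # current (speed, direction) input
    np.array([1.0, 0.0]),    # walk clip's parameter point
    np.array([3.0, 0.0]),    # jog clip's parameter point
    np.array([1.0, 1.0]))    # strafe clip's parameter point
```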
To make an animation affect only part of the body (an animation component), a skeleton mask is used.
For each joint in the mask, we assign a weight to each animation.
Animations like head shaking usually store coordinate deltas rather than exact local coordinates. This way, the animated figure can shake its head in any pose.
Additive blending is dangerous in the sense that vertices can move out of the intended area. We may need to set a range for vertex movement and always clamp it to a physically possible range.
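A hedged sketch of the idea, simplified to per-joint angle arrays (real engines blend rotations as quaternions):

```python
import numpy as np

def apply_additive(base_pose, additive_delta, weight, lo, hi):
    """Add a weighted delta pose on top of a base pose, then clamp.

    All arguments are per-joint angle arrays (a simplification); the
    clamp keeps joints inside a physically possible range, since
    unclamped additive blending can push them out of the intended area.
    """
    return np.clip(base_pose + weight * additive_delta, lo, hi)
```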
We could package a blend space as a node in an Animation State Machine (ASM).
With circular (looping) animations, interpolation can produce natural results. However, for switching between non-looping animations in a state machine, ease-in and ease-out are needed.
There are two major ways of doing easing:
Smooth Transition: both clips keep playing and are blended together; mathematically pleasing
Frozen Transition: the source pose is frozen at the switch moment and blended toward the target; looks more natural
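A small sketch of eased cross-fading (the cubic smoothstep here is one common easing choice, not necessarily what any particular engine uses):

```python
def smoothstep(t):
    """Cubic ease-in/ease-out weight for t in [0, 1]."""
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def transition_pose(src_pose, dst_pose, elapsed, duration):
    """Cross-fade between two poses with an eased blend weight.

    For a Smooth Transition, src_pose is re-sampled from the still-
    playing source clip each frame; for a Frozen Transition, src_pose
    is the pose frozen at the moment the transition started.
    """
    w = smoothstep(elapsed / duration)
    return (1.0 - w) * src_pose + w * dst_pose
```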
Layered ASM: multiple state machines, each in charge of a different part of the body, taking multi-modal input.
Like an expression tree, we can blend animation clips using a tree (a Blend Tree).
Terminal Node can be:
Animation Clip
Blend Space
ASM
Non-Terminal Node can be:
Binary LERP blend node
Ternary (Rectangular) LERP blend node
Binary Additive blend node
Game logic can control the parameters in the Blend Tree.
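A toy evaluation of such a tree, assuming poses are plain numpy arrays (ClipNode/LerpNode and the "speed" parameter are illustrative names, not an engine API):

```python
import numpy as np

def lerp_pose(a, b, w):
    """Linear blend of two poses stored as per-joint arrays."""
    return (1.0 - w) * a + w * b

class ClipNode:
    """Terminal node: samples an animation clip (here, a pose array)."""
    def __init__(self, poses):
        self.poses = poses                       # shape: (frames, joints, 3)
    def evaluate(self, frame, params):
        return self.poses[frame % len(self.poses)]

class LerpNode:
    """Non-terminal node: binary LERP blend driven by a named parameter."""
    def __init__(self, left, right, param_name):
        self.left, self.right, self.param_name = left, right, param_name
    def evaluate(self, frame, params):
        w = params[self.param_name]              # set by game logic
        return lerp_pose(self.left.evaluate(frame, params),
                         self.right.evaluate(frame, params), w)

# Game logic drives the 'speed' parameter each frame.
walk = ClipNode(np.zeros((30, 20, 3)))
run  = ClipNode(np.ones((30, 20, 3)))
tree = LerpNode(walk, run, "speed")
pose = tree.evaluate(frame=12, params={"speed": 0.3})
```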
Forward Kinematics (FK): using the kinematic equations of the skeleton (or robot) to compute the position of the end-effector from specified values of the joint parameters.
Inverse Kinematics (IK): instead of artists telling each joint how to move, the artist specifies the desired result and the computer calculates how the joints should move.
We want the animated foot to always land on non-flat ground. Since bone lengths are fixed, we can solve the triangle for the angle each joint should rotate.
In 2D there are 2 solutions and only 1 is correct. In 3D there are infinitely many solutions. Therefore, we use the walking direction to choose which solution to take.
First, we can approach it with some reachability sanity checks.
Second, even if the target is reachable, we may also be limited by human joint constraints.
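A minimal two-bone (e.g. hip-knee-ankle) solve using the law of cosines, with the reachability check folded in as a clamp; picking between the two mirrored 2D solutions is left to the caller:

```python
import numpy as np

def two_bone_ik(l1, l2, target_dist):
    """Solve the knee/elbow triangle with the law of cosines.

    l1, l2 are the fixed bone lengths; target_dist is the distance from
    the root joint to the IK target. Returns (root_angle, mid_angle) in
    radians, measured inside the triangle.
    """
    # Reachability sanity check: clamp the target distance into the
    # annulus [|l1 - l2|, l1 + l2] that the chain can actually reach.
    d = np.clip(target_dist, abs(l1 - l2) + 1e-6, l1 + l2 - 1e-6)

    # Law of cosines: cos(mid) = (l1^2 + l2^2 - d^2) / (2 l1 l2), etc.
    cos_mid = (l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2)
    cos_root = (l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d)
    mid = np.arccos(np.clip(cos_mid, -1.0, 1.0))
    root = np.arccos(np.clip(cos_root, -1.0, 1.0))
    return root, mid
```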
The analytical method is too complex to solve in general, so we use heuristic algorithms:
approximation
global optimality is not guaranteed
the number of iterations has a maximum limit
sacrifices optimality, accuracy, precision, and completeness for speed
CCD (Cyclic Coordinate Descent). Principle: going joint by joint, rotate each joint so that the end-effector gets as close as possible to the target; this solves the IK problem in orientation space.
Reachability: the algorithm stops after a certain number of iterations, so an unreachable target does not loop forever.
Constraints: angular limits are allowed, checked after each iteration.
To optimize the result, we use a tolerance region in earlier iterations. This results in more evenly distributed angles across the joints.
It has a physical analogy: bending a spring.
Under-damped Angle Scaling: each joint moves only a small amount toward the goal, distributing the movement across multiple bones. This produces less abrupt joint changes and smoother, more natural poses for character movement.
We force root joints to have angular constraints smaller than those of the leaf joints.
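A bare-bones 2D CCD sketch (no angular limits or damping; those would be clamped where the comment indicates):

```python
import numpy as np

def ccd_ik(joints, target, max_iters=10, tol=1e-3):
    """Cyclic Coordinate Descent on a 2D chain of joint positions.

    joints is a list of np.array([x, y]) from root to end-effector.
    Each pass rotates every joint so the end-effector swings as close
    as possible to the target; iteration stops after max_iters so an
    unreachable target does not hang the solver.
    """
    for _ in range(max_iters):
        for i in range(len(joints) - 2, -1, -1):   # leaf-ward to root
            to_end = joints[-1] - joints[i]
            to_tgt = target - joints[i]
            # Angle that aligns the end-effector with the target.
            ang = (np.arctan2(to_tgt[1], to_tgt[0])
                   - np.arctan2(to_end[1], to_end[0]))
            # (Angular limits / under-damped scaling would clamp ang here.)
            c, s = np.cos(ang), np.sin(ang)
            rot = np.array([[c, -s], [s, c]])
            for j in range(i + 1, len(joints)):
                joints[j] = joints[i] + rot @ (joints[j] - joints[i])
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints

chain = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])]
ccd_ik(chain, target=np.array([1.0, 1.2]))
```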
FABRIK (Forward And Backward Reaching Inverse Kinematics): instead of orientation space, it solves the IK problem in position space.
Reachability: stop after a certain number of iterations.
We can also add constraints to FABRIK.
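A minimal unconstrained FABRIK sketch on a point chain:

```python
import numpy as np

def fabrik(joints, target, max_iters=10, tol=1e-3):
    """FABRIK on a chain of joint positions (position space, no angles).

    joints is a list of np.array points from root to end-effector; bone
    lengths are kept fixed while the points themselves are moved.
    """
    lengths = [np.linalg.norm(joints[i + 1] - joints[i])
               for i in range(len(joints) - 1)]
    root = joints[0].copy()
    for _ in range(max_iters):
        # Backward pass: drag the end-effector onto the target.
        joints[-1] = target.copy()
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
        # Forward pass: re-pin the root and restore bone lengths.
        joints[0] = root.copy()
        for i in range(len(joints) - 1):
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + d / np.linalg.norm(d) * lengths[i]
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints
```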
But in reality, we need to satisfy multiple constraints in animation at once, and the computation is still huge. This is because meeting one constraint may ruin the result of another.
We need the Jacobian matrix method.
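A sketch of the idea: let $x = f(\theta)$ map all joint angles to the end-effector position; each iteration linearizes $f$ and steps every joint at once through the pseudo-inverse, which is what lets many constraints be traded off together:

$$J = \frac{\partial f}{\partial \theta}, \qquad \Delta\theta = J^{+}\,\Delta x, \qquad J^{+} = J^{T}\left(J J^{T}\right)^{-1}$$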
Physics-based:
More natural
needs lots of computation
Position Based Dynamics (PBD):
different from traditional physics-based methods
better visual performance
lower computational cost
Full-Body IK (FBIK) in UE5:
Challenges:
self-mesh collision avoidance
IK with prediction while moving
natural human behavior (via deep learning) so that the character always stays in balance
Interestingly, humans are among the rare animals on Earth with a complex facial expression system, so most animals can't interpret human expressions. Dogs, however, can interpret human expressions.
To express facial animation, we use vertex animation with 28 key poses and blend among them.
Therefore, we should store vertex positions as offsets from the neutral face.
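A minimal morph-target blend under that storage scheme (array shapes are illustrative):

```python
import numpy as np

def blend_morph_targets(neutral, deltas, weights):
    """Blend facial key poses stored as offsets from the neutral face.

    neutral: (V, 3) vertex positions of the neutral face.
    deltas:  list of (V, 3) offsets, one per key pose (key pose - neutral).
    weights: one blend weight per key pose, driven by the facial rig.
    """
    result = neutral.copy()
    for w, d in zip(weights, deltas):
        result += w * d
    return result
```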
In real engines, we still use some joint animation combined with vertex animation, because mouth opening/closing and eye movements are harder to do with vertex animation.
UV textures (especially the normal map) should change to reflect the folding of facial skin.
This is commonly used in offline film settings.
Unreal's MetaHuman is the state of the art for facial animation.
Problem: we need to re-target animations across skeletons with different bone structures and different initial poses.
Instead of applying re-targeting in local space, we need to do re-targeting relative to a standard (reference) pose.
Handling Animation Tracks:
Rotation track: from the source animation
Translation track: from the target skeleton
Scale track: from the source animation
When body sizes are different, we need to pay special attention to the character's walking speed and displacement. The displacement curve needs to be scaled by the pelvis proportion (the ratio of target to source pelvis height).
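A sketch tying the track rules and the pelvis scaling together (the joint names, dict layout, and pelvis_ratio are assumptions for illustration, not an engine API):

```python
def retarget_pose(src_pose, tgt_bind_translations, pelvis_ratio):
    """Per-track handling for re-targeting, matching the list above.

    src_pose: joint name -> (rotation, translation, scale) sampled from
    the source animation. tgt_bind_translations: joint name -> bind-pose
    translation of the target skeleton. pelvis_ratio is
    target_pelvis_height / source_pelvis_height, used to scale the root
    displacement curve so walking speed fits the new body size.
    """
    out = {}
    for joint, (rot, trans, scl) in src_pose.items():
        if joint == "root":
            # Root displacement keeps the source curve, scaled by pelvis size.
            new_trans = trans * pelvis_ratio
        else:
            # Other joints take translation from the target skeleton,
            # preserving the target's bone lengths.
            new_trans = tgt_bind_translations[joint]
        out[joint] = (rot, new_trans, scl)  # rotation & scale from source
    return out
```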
Moreover, re-targeting needs some IK when the bone chains differ between skeletons.
We set anchors on the source bone chain and divide it into points, then try to match the target bones to those points.
Unsolved Problems:
self-mesh penetration
self-contact constraints (e.g., hands can no longer clap together after re-targeting)
balance of the character
We need to insert constraint points and apply the Laplacian operator.