:::info
The [Animation & Rigging module][module] had a workshop two weeks ago, 26-30 June 2023, at Blender HQ. It was part of the *Animation 2025* project, and a continuation of [last year's workshop in October][october].
The goal of the workshop: **to speed up animators** (and others working with animation data) **by reducing manual data management**.
:::
[module]: https://projects.blender.org/blender/blender/wiki/Module:%20Animation%20&%20Rigging
[october]: https://code.blender.org/2022/11/the-future-of-character-animation-rigging/
The Action has been Blender's main animation data container for over two decades. In this blog post we propose **a new data-block called 'Animation'**, which will eventually replace the Action as the main container of animation data.
This blog post is divided into a few sections:
- Sketching out our desires for the new system
- A closer look at the proposed model
- Adding non-linear editing capabilities
- Technical details of the data model
- Things yet to be designed
- Grease Pencil collaboration & Ghosting
- Operations, possibilities, and ideas for the future
- Rough timeline
- Conclusion
:::danger
When moving from HackMD to the blog, the above ToC should be turned into links to the actual sections.
:::
## Desires
Whatever data model we come up with, it has to serve as a basis for the bigger ideas from the [October 2022 workshop][october].
At the start of the workshop we set out some *desires* for the new system. We want animators to be able to:
- gradually build up animations.
- try different alternative takes.
- adjust someone else’s animation (or your own a week later).
- have a procedural animation system.
- stream in animation from other sources like USD files or a game controller.
- and manually animate on top of all of that.
From this it was clear we wanted to introduce **animation layers**.
Furthermore, we want to keep animation of related things together, which means that it should be possible to **animate multiple data-blocks with one Animation**. With this you could, for example, animate an adult holding a baby and put the animation of both of them in one Animation data-block.
Finally, a desire was to **make all animation linkable** between blend files. This is already possible with Action data-blocks, but the NLA data sits directly on top of the animated object, and cannot be linked without linking the object itself. Furthermore, each object has independent NLA data, so doing non-linear animation with multiple objects simultaneously is tedious. We concluded that we should **move the NLA functionality into the Animation**.
### Animation Layers
Currently it is *technically* possible to work with layered animation in Blender: use the NLA, create an Action for each layer, and place those Actions on different tracks. *Technically* possible, but not a pleasure to work with. Not only does this create plenty of separate Actions, it also makes it very hard to perform edits across layers, for example when retiming.
To exacerbate the situation, as already described above, NLA data cannot be linked as a separate 'thing' like Actions can, so taking these layered animations further through the production pipeline can be problematic.
![](https://hackmd.io/_uploads/r1IutHlKh.png)
The goals for a new animation layering system are:
- **One data-block** that contains all the layers.
- **Tools for cross-layer editing**, much like the Dope Sheet can edit keys in multiple F-Curves across different data-blocks.
- Open up the possibility to put **non-F-Curve data** on layers. These could filter / manipulate animation from lower layers, like modifiers, or introduce new animation data, like simulations.
- Make it possible to **animate on top** of those non-F-Curve layers.
### Multi Data-Block Animation
Wouldn't it be great if 'one character' could just have 'one animation'? The reality is that 'one character' consists of many data-blocks, all of which get their own Action when they get animated. This is often worked around by driving everything from bones and custom properties on the Armature. That works, but it is clumsy and can be hard to set up.
The proposed Animation data-block can keep its animation grouped per "user":
<!-- ![](https://hackmd.io/_uploads/S1rj9SxFh.png) -->
![](https://hackmd.io/_uploads/B1f69HeY3.png)
Through the same system, animating multiple characters is then also possible:
![](https://hackmd.io/_uploads/HJ0CqSxth.png)
## Closer Look
This section takes a closer look at the ideas described above.
### Animation Layers
In a sense, each layer can be seen as an Action data-block (but more powerful, more on that later). Layers can **contain F-Curves** and other kinds of animation data, and are evaluated **bottom to top**.
Animation Layers can, of course, have different contents, and be keyed on different frames.
![Two layers, with "Layer 1: Blocking" at the bottom, and "Layer 2: Finessing" at the top](https://hackmd.io/_uploads/Skm4jrgt3.png)
How each layer **blends** with the layers underneath it can be **configured per layer**.
![Example of a base layer, a 'combine' layer, and a 'replace' layer](https://hackmd.io/_uploads/SyzGAreYn.png)
This is different from Blender's current NLA system, where the blend mode can be set per strip. Having this setting per layer makes it simpler to manage, and more straightforward to **merge layers together**. Each layer has a **mix mode** (replace, combine, add, subtract, multiply) and an **influence** (0-100%).
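As a rough sketch, per-layer mixing could work as below. This is a minimal Python illustration under our own assumptions: the mode names, the `evaluate` callables, and the single-float values are hypothetical stand-ins, not Blender's actual implementation, which evaluates F-Curves and more.

```python
# Hypothetical sketch of bottom-to-top layer evaluation. Mix mode
# names and the layer dict layout are illustrative assumptions.

def blend(base, value, mix_mode, influence):
    """Blend one layer's value on top of the result so far."""
    if mix_mode == 'REPLACE':
        mixed = value
    elif mix_mode == 'ADD':
        mixed = base + value
    elif mix_mode == 'SUBTRACT':
        mixed = base - value
    elif mix_mode == 'MULTIPLY':
        mixed = base * value
    else:  # 'COMBINE' and anything else: treat as replace for this sketch
        mixed = value
    # Influence (0-100%) fades between the lower layers and the mix.
    return base + (mixed - base) * influence

def evaluate_layers(layers, frame):
    """Layers are evaluated bottom to top."""
    result = 0.0
    for layer in layers:
        value = layer['evaluate'](frame)
        result = blend(result, value, layer['mix_mode'], layer['influence'])
    return result

layers = [
    {'evaluate': lambda f: 1.0, 'mix_mode': 'REPLACE', 'influence': 1.0},
    {'evaluate': lambda f: 0.5, 'mix_mode': 'ADD', 'influence': 0.5},
]
print(evaluate_layers(layers, frame=1))  # 1.0 + 0.5 * 0.5 = 1.25
```

Note how the influence is applied to the *blended* result, so fading a 'replace' layer to 0% smoothly reveals the layers underneath it.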
Layers can also be **nested**:
![](https://hackmd.io/_uploads/Syy7CreF2.png)
How these nested layers behave is determined by the parent layer's '**child mode**' setting:
Mix
: Combine the children just like top-level layers.
Choice
: Only allow a single child layer to be active at a time. This can be used to switch between different alternatives.
:::warning
Whether a layer can have both data of itself *and* child layers is still an open question. For now it's likely that we'll try and keep things simpler, and only allow one or the other.
:::
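To make the 'child mode' concrete, here is a small sketch of how a parent layer could evaluate its children. The `'MIX'`/`'CHOICE'` names follow the description above but are not final, and the plain sum used for mixing is a simplification of the real per-layer mix modes.

```python
# Hypothetical sketch of the parent layer's 'child mode'.

def evaluate_children(parent, frame):
    if parent['child_mode'] == 'CHOICE':
        # Only a single child layer is active at a time; this is how
        # you could switch between different alternative takes.
        active = parent['children'][parent['active_child']]
        return active['evaluate'](frame)
    # 'MIX': combine the children just like top-level layers
    # (simplified here to a plain sum; real mixing would honour each
    # child's own mix mode and influence).
    return sum(child['evaluate'](frame) for child in parent['children'])

takes = {
    'child_mode': 'CHOICE',
    'active_child': 1,
    'children': [
        {'evaluate': lambda f: 10.0},  # take A
        {'evaluate': lambda f: 20.0},  # take B
    ],
}
print(evaluate_children(takes, frame=1))  # only take B contributes
```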
### Multi-target Animation
Unlike Actions, layers can contain **animation for multiple data-blocks**. This is done via a system of **outputs**; the animation data has various outputs, and you can plug a data-block into each of those.
![Two-layer animation, where Layer 1 has animation for Einar, and Layer 2 has animation for both Einar and Theo](https://hackmd.io/_uploads/S1-spreY3.png)
The example above has **two outputs**, one for *Einar* and one for *Theo*. Each of these has a set of independent F-Curves.
:::warning
How exactly this will be represented in the user interface is still being designed. A connected output will likely just show the name of the connected data-block.
:::
## Non-Linear Editing
When we wrote "*Layers can contain F-Curves and other kinds of animation data*", we simplified things a bit. Let's dive in deeper.
The animation data of a layer is actually contained in a **strip** on that layer. By default there is only **one strip**, and it extends to infinity (and beyond). This is why we didn't show that strip in the images above.
![Two layers, one that has the implicit infinite strip, and the other has a strip that's repeated once.](https://hackmd.io/_uploads/HycURreFh.png)
When desired, you can **give the strip bounds** and it will behave more like an NLA strip. You can move it left/right in time, repeat it, reference it from the same or from other layers, etc.
By having the animation data always sit in a strip, your data is handled in a uniform way; there is no big switch to make, like there is now when moving to the NLA. Tooling will become more uniform as well, and add-on writers will have an easier time too.
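The infinite-by-default strip could look something like this sketch. The method names mirror the `isInfinite()`/`makeFinite()` entries in the data model diagram later in this post, but the actual API may well differ.

```python
# Illustrative sketch of the default infinite strip; not actual
# Blender code.
import math

class Strip:
    def __init__(self):
        # By default a strip extends to infinity (and beyond).
        self.frame_start = -math.inf
        self.frame_end = math.inf

    def is_infinite(self):
        return math.isinf(self.frame_start) and math.isinf(self.frame_end)

    def make_finite(self, frame_start, frame_end):
        """Give the strip bounds, so it behaves more like an NLA strip."""
        self.frame_start = frame_start
        self.frame_end = frame_end

    def contains(self, frame):
        return self.frame_start <= frame <= self.frame_end

strip = Strip()
assert strip.contains(-1_000_000)  # the implicit strip covers everything
strip.make_finite(1, 250)
assert not strip.contains(500)     # now it has bounds, like an NLA strip
```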
### Strip Types
All layers are the same. The strips on them can be of different types, though.
:::info
These strip type names may change over the course of further design work.
:::
Keyframe Strip
: is just like Actions, but with the multi data-block animation capabilities.
Reference Strip
: references another strip in the same Animation data-block. It can also remap the data to another *output* (i.e. use Cube.001 animation for Cube.002) or to another *data path* (after a bone got renamed, remap the FCurves to the new name).
Anim Strip
: is similar to the *reference strip*, except that it doesn't point to another strip but to a different Animation data-block in its entirety.
Meta Strip
: contains a nested set of layers, with their own strips, which can also be meta strips, making it possible to have arbitrary nesting. Effectively it's an embedded Animation data-block.
We have ideas for other strip types too; these need more work to properly flesh out. For now these are rough ideas, but still a core part of the overall design.
Simulation Strip
: simulates muscles, cloth, or other things, and of course you can animate on top of that in another layer. This needs more work in Blender than just the animation system, though, as simulation time and animation time may be using different clocks.
Procedural Animation Strip
: has a node system to process the result of the underlying animation layers. This would split up the evaluation of the data into several parts (animation layers, then process by other code, then further animation layers), which needs support in other areas of Blender.
Streaming Data
: coming in from other sources, like Universal Scene Description (USD), Alembic files, and motion capture files. This also needs changes to the current approach of working with such files, possibly in combination with [#68933: Collections for Import/Export](https://projects.blender.org/blender/blender/issues/68933).
## Data Model
Since this is the Developer Blog, of course we have data model diagrams. The green nodes are all embedded inside the Animation data-block itself.
`Animation` is an `ID` so that it can be linked or appended from other blend files.
Other `ID`s can point to the `Animation` they are influenced by, similar to how they currently point to an `Action`.
<style>
pre.mermaid {
outline: thin dashed grey;
}
.node.insideAnim rect {
stroke: #569f3c !important;
fill: #d9ead3 !important;
}
.node.insideAnim line.divider {
stroke: #bbcab6 !important;
}
</style>
```mermaid
classDiagram
direction LR
ID --> "0-1" Animation
Animation "1" --> "*" Layer
class ID {
string name
Animation* anim
}
class Animation:::insideAnim {
ID anim_is_an_id
list layers
Output outputs[]
}
Animation "1" --> "*" Output
Output --> ID
class Output:::insideAnim {
ID *animated_id
string label
int id_type
}
class Layer:::insideAnim {
string name
enum mix_mode
float mix_influence
list~Strip~ strips
enum child_mode
list~Layer~ child_layers
}
Layer "1" --> "*" Strip
class Strip:::insideAnim {
enum type
float frame_start
float frame_end
float frame_offset
bool isInfinite()
void makeFinite()
}
KeyframeStrip "is" --|> Strip
class KeyframeStrip:::insideAnim {
map[output → array~AnimChannel~] channels
}
```
Each Animation has a list of *layers*, and a list of *outputs*.
Each `Output` by default has a single ID pointer, which determines what data-block that output animates. The `label` is automatically set to the name of the data-block, so if the pointer gets lost, there's still the label to identify and remap the output. The `id_type` makes sure that different data-block types cannot be mixed, just like you cannot assign a Material Action to a Mesh data-block.
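A sketch of that label fallback and type check, with hypothetical names; the real `Output` will be C/DNA data, and here data-blocks are stood in for by plain dicts.

```python
# Illustrative sketch of an Animation output. Field names follow the
# diagram above; the behaviour shown is our reading of the design,
# not actual Blender code.

class Output:
    def __init__(self, animated_id, id_type):
        self.animated_id = animated_id  # the data-block this output animates
        self.id_type = id_type          # e.g. 'OBJECT', 'MATERIAL'
        # The label is automatically set to the data-block's name.
        self.label = animated_id['name']

    def remap(self, new_id, new_id_type):
        # Different data-block types cannot be mixed, just like you
        # cannot assign a Material Action to a Mesh data-block.
        if new_id_type != self.id_type:
            raise TypeError(
                f"cannot assign {new_id_type} to an {self.id_type} output")
        self.animated_id = new_id
        self.label = new_id['name']

einar = {'name': 'Einar'}
out = Output(einar, 'OBJECT')
out.animated_id = None       # the pointer got lost, e.g. a broken link
print(out.label)             # the label still identifies the output
```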
:::warning
An idea we are exploring is the possibility of 'shared' outputs, i.e. making it possible to animate multiple data-blocks with one output, the same way you can currently have multiple Objects using the same Action.
It is not yet known whether this would actually be a desirable feature, as it would complicate working with the new system.
:::
The diagram shows that an `ID` points to its `Animation`, and an output points back to the `ID`. This seems peculiar at first, and earlier designs did not have that second pointer. Instead, each output had a name, and the `ID` would declare the name of the output it would be animated by. Although that appears straightforward at first, we found that such a name-based approach would likely be fragile and hard to manage. Because of this, we chose to use pointers instead, which Blender already has tools for managing.
### Strip Types
Like the diagram above, green nodes are all contained inside the Animation data-block itself.
```mermaid
classDiagram
direction LR
class KeyframeStrip:::insideAnim {
map[output → array~AnimChannel~] channels
}
class ReferenceStrip:::insideAnim {
Strip *reference;
map[output → output] output_mapping
map[rna prefix → rna prefix] rna_mapping
}
class AnimStrip:::insideAnim {
Animation *reference;
map[output → output] output_mapping
map[rna prefix → rna prefix] rna_mapping
}
class MetaStrip:::insideAnim {
list~Layer~ layers
}
AnimStrip --> Animation
class Animation {
...
}
ReferenceStrip --> KeyframeStrip
ReferenceStrip --> AnimStrip
ReferenceStrip --> MetaStrip
```
Keyframe strips define an array of animation channels (more on those below) for each output.
:::warning
How exactly outputs are referenced by the strips is still an open design question. We are considering simply using the output index, but that has some fragility. Pointers could work, but they'd need remapping every time a copy is made of the Animation. This also happens for the undo system, so it's not as rare as it might seem at first.
:::
The **reference types** `ReferenceStrip` and `AnimStrip` can do two kinds of remapping:
Remapping Outputs
: An animation for some data-block gets applied to another data-block. For example, the animation of `Cube.001` gets applied to `Cube.002`.
Remapping Data Paths
: An animation for some property gets applied to another property. For example, all animation for `pose.bones["clavicle_left"].…` gets mapped to `pose.bones["clavicle_L"].…`. This would be done on a prefix basis, so any data path (called an 'RNA path' internally in Blender) that starts with the remapped prefix is subject to this change.
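The prefix-based data path remapping can be sketched in a few lines. The mapping format here is our own assumption for illustration, not the actual `rna_mapping` storage.

```python
# Minimal sketch of prefix-based RNA path remapping, as a reference
# strip could apply it after a bone rename.

def remap_rna_path(rna_path, rna_mapping):
    """Rewrite a data path if it starts with a remapped prefix."""
    for old_prefix, new_prefix in rna_mapping.items():
        if rna_path.startswith(old_prefix):
            return new_prefix + rna_path[len(old_prefix):]
    return rna_path  # not remapped: used as-is

mapping = {'pose.bones["clavicle_left"]': 'pose.bones["clavicle_L"]'}
path = 'pose.bones["clavicle_left"].rotation_quaternion'
print(remap_rna_path(path, mapping))
# pose.bones["clavicle_L"].rotation_quaternion
```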
### Channel Types
The new animation model should be applicable to **more than F-Curve** keyframes. This would allow for tighter integration with Grease Pencil animation, to name one concrete example. Furthermore, the current system of using **camera markers** to set the active scene camera is a bit of a hack, in the sense that the system is limited to only this specific use. It would be better to have animations of the form '*from frame X onward use thing Y*' more generalised. For these reasons, a `KeyframeStrip` can contain **different animation channel types**:
```mermaid
classDiagram
direction LR
KeyframeStrip "1" --> "*" AnimChannel
class KeyframeStrip:::insideAnim {
map[output → array~AnimChannel~] channels
}
class AnimChannel:::insideAnim {
enum type
pointer data
}
AnimChannel --> FCurve
class FCurve:::insideAnim {
string rna_path
int array_index
array bezier_keys
}
AnimChannel --> FMap
class FMap:::insideAnim {
some identifier
map[frame range → index]
}
AnimChannel --> IDChooser
class IDChooser:::insideAnim {
string rna_path
short id_type
map[frame time → ID*] id_pointers
}
```
`FMap` is a mapping from a frame number to an index into some array. Unlike an `FCurve`, which has a value for every point in time, an `FMap` can have 'holes'. It is intended for Grease Pencil, to define which drawing is shown for which frames.
`IDChooser` is a generalisation of the current camera markers. Basically it says '*from frame X forward, choose thing Y*'. It is unlikely that this can be applied to animate *all* data-block relations, as it could be very difficult to create a system to support all of that. We'll likely pick a few individual properties that can be animated this way first, to gain experience with how it's used and what the impact is. Later this can be expanded upon.
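To make the two non-F-Curve channel types concrete, here is a sketch of their lookup behaviour. The data layouts (a dict of frame ranges for `FMap`, a sorted list of `(frame, id)` pairs for `IDChooser`) are illustrative assumptions, not the actual storage.

```python
# Sketch of FMap and IDChooser lookups, as described above.

def fmap_lookup(fmap, frame):
    """Return the drawing index for a frame, or None inside a 'hole'."""
    for (start, end), index in fmap.items():
        if start <= frame <= end:
            return index
    # Unlike an F-Curve, an FMap need not have a value for every frame.
    return None

def id_chooser_lookup(choices, frame):
    """'From frame X forward, choose thing Y': pick the last entry at
    or before the given frame. `choices` is sorted by frame."""
    chosen = None
    for key_frame, id_pointer in choices:
        if key_frame <= frame:
            chosen = id_pointer
    return chosen

fmap = {(1, 10): 0, (20, 30): 1}   # frames 11-19 are a hole
print(fmap_lookup(fmap, 5))        # drawing 0
print(fmap_lookup(fmap, 15))       # None: no drawing shown

cameras = [(1, 'Camera.A'), (50, 'Camera.B')]
print(id_chooser_lookup(cameras, 75))  # Camera.B, from frame 50 onward
```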
## To Be Designed
There is a lot still to be designed to make this a practical system. Here we list some of the open topics, so that you know these have not been forgotten:
Linking Behaviour & Tooling
: Linking animation data from one file into another is a common occurrence. The basic flow should work well, and new tools should make more complex workflows possible.
Simulation Layers
: Animation and simulation *could* use the same time source, but using different clocks for these should also be possible.
Procedural Animation
: A lot of different things fall under the umbrella of 'procedural animation'. This could be a full-blown node-based animation generation and mixing system, or as simple as F-Curve modifiers at the layer level.
Animating the Animation System
: It should be possible to animate layer influence, various strip parameters, etc. Where does that animation get stored?
Rig Nodes
: One of the big outcomes of [the October 2022 workshop][october] was 'slappable rigs': a control rig system that can be activated at different frame ranges. Rig Nodes is a [prototype](https://projects.blender.org/dr.sybren/rignodes) for this. How such a system would integrate with the bigger animation system needs to be designed.
## Grease Pencil Collab - Ghosting
In collaboration with the Grease Pencil team, represented by Falk David and Amélie Fondevilla, we discussed **ghosting**. This is also known as 'onion skinning' in the 2D animation world; we chose 'ghosting' as the term for Blender as this seems to be more widely used in 3D animation, games, and other fields.
Christoph and Falk collaborated on a prototype, which is shown in the video below:
<iframe src="https://drive.google.com/file/d/1tAJCuOLmwTMa8yt9VmHfeDhpmG7enkEC/preview" width="640" height="360" allow="autoplay"></iframe>
### Goals
The goals of the ghosting system are:
- To show data of **different points in time**, overlaid on the current view.
- A **unified system** that works for all object types.
- **Editable** ghosts, so that you do not necessarily have to move the scene frame in order to edit a ghost.
- **Movable** ghosts, to shift & trace, or the opposite, to space them apart to get a good overview of the motion.
- **Non-blocking** to the rest of Blender. Playback and scrubbing should not be slowed down by the ghosting system.
### Features
The following features are considered important for the system:
- Define an object to ghost, or a subset of it, like only the hand you're animating at that moment in time.
- Define the time of Ghosts, either relative to the current time or as absolute frames.
- Clicking on a ghost to jump to that frame.
- Offset Ghosts in Screen and World Space.
- Ghosts can be locked, so they don’t update. This can be useful to keep a reference for exploring different animation takes.
### Technical Challenges
Of course there are various technical challenges to solve.
Currently **selection** can already be tricky. For example, when two objects share the same armature and both are in pose mode, the selection is synced between them. Selection across different points in time would likely require more copies of the data, which needs to be managed such that Blender doesn't explode.
The **dependency graph** will have to be updated to account for multiple points in time being evaluated in parallel. This will likely also cause 'completion' states to be per frame, so that the current scene frame can be marked as 'evaluated completely' before the ghosts are.
Finally there is the question of how to **ensure the speed** of the system. If we use caching for this, there is always the question of how to invalidate that cache.
## Operations / Possibilities / Future Ideas
The workshop focused on the data model, and less on operations to support this model. The following operations were briefly discussed, and considered for inclusion. This is not an exhaustive list.
Split by Selection
: Selected FCurves go to another layer, the rest stays in the current layer.
Configurable 'property set' per layer
: that can work as a keying set. When inserting a key, these properties are always keyed on that layer. Multiple layers can each have a 'property set'. Example: have a 'body animation' layer and a 'face animation' layer, where Blender automatically puts the keys in the right place.
Frequency Separation
: F-Curves are split between low frequency (i.e. the broad motions) and high frequency (finely detailed motions), which are then blended together to result in the same animation. Such workflows are already common in photo and music editing, and could be very powerful as well for animation.
Streaming & Recording
: Stream animation in from some system, and have it recorded as F-Curves, for example from a live motion capture system, or even just a game pad used for puppeteering.
Combined Animation Channels
: Ensuring that quaternions (always 4 values) or maybe even entire transforms (location + rotation + scale) are always fully keyed can open up interesting new ways of working. When Blender 'knows' that every frame has all the keys, we can bring more powerful, easier to use animation techniques to the viewport.
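The frequency separation idea above can be sketched on sampled values: a low-pass filter extracts the broad motion, the residual is the fine detail, and blending them back reproduces the original. The moving average here is a stand-in for whatever filtering the real feature would use, and the real feature would of course operate on F-Curves, not sample lists.

```python
# Illustrative sketch of frequency separation, not an actual design.

def low_frequency(samples, radius=1):
    """Moving average as a simple stand-in for a low-pass filter."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

samples = [0.0, 1.0, 0.0, 1.0, 0.0]           # jittery motion
low = low_frequency(samples)                   # the broad motion
high = [s - l for s, l in zip(samples, low)]   # the fine detail
recombined = [l + h for l, h in zip(low, high)]
# Blending the two back together reproduces the original motion
# (up to floating point rounding), so the split is non-destructive.
assert all(abs(r - s) < 1e-12 for r, s in zip(recombined, samples))
```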
## Timeline
It is a bit too early to come up with a detailed timeline, so here is our goal in broad terms:
- 4.0: might already see some of this functionality as an experimental feature.
- 4.x: expanding the functionality & creating tooling to convert existing animations to the new system.
- 4.y: change the default animation storage to the new system, retaining the old one.
- 5.0: automatic conversion of the old-style animation data to the new system, and removal of the old animation system from the UI (some datastructures will have to be retained for versioning purposes).
## Conclusion
This blog post has given an overview of the current state of design of a new animation data model. Although this broad direction is pretty well decided on, things are still in flux as we create prototypes and try to figure out how to integrate other ideas.
If you want to track the Animation & Rigging module, there are the weekly module meetings ([meeting notes archive][notes]) and of course the discussions on the [#animation-module Blender Chat channel][chat].
[notes]: https://devtalk.blender.org/tags/c/meetings/28/animation-rigging
[chat]: https://blender.chat/channel/animation-module