---
tags: workshop
---
# Animation Workshop June 2023
## Travel schedule
- Brad: arrives Sunday 25 June 10:25 at Schiphol, departs 1 July 12:50
- Christoph: arrives Sunday 25 June 23:25 at Amsterdam CS, departs 30 June
## Links
- [Final presentation for Blender HQ](https://docs.google.com/presentation/d/1Q_5fGTDaFcBuzDPTsu79uzt3xWjarb3bOmCBPy0nSKc/edit)
- [Google Meet](https://meet.google.com/ora-nbxv-evf)
- [Presentation for Ton & the Admins](https://docs.google.com/presentation/d/1LcxgzT6gBWeMjq8Eruwe_3E1h5Vy9E4M9oQLUJvfuaM/edit#slide=id.g2558df0e402_0_284)
- [Brecht's proposal](https://docs.google.com/document/d/1CEWFwqhAAimO0lj06QxGekLaC1LeP_krUU_6zSZgGcs/edit?usp=sharing)
- [MohammadHossein's presentation on physics based rigging](https://docs.google.com/presentation/d/1lKNIRutGAsSENC_sh0DzG2APVVlKbPPkvMqShbHSjoM/edit?pli=1#slide=id.g2514ddedd37_0_5)
## Topics
Not yet sorted / pruned. Topics listed here may not be covered by the workshop at all.
## Questions
- Where do keys go?
- A linked-in Animation datablock, and a local object that needs to be animated by it.
- If the Animation ID brings in the animated IDs as well, those may not be enough. For example, it brings in the Armature, but not the rest of the collection that makes up the character.
- Slots: defined at the Animation datablock level and referenced from clips? Or defined per clip?
- How does RigNodes fit into this? How does 'slapping on' rigs work?
- What are animation nodes going to look like?
- As a game animator, how would you build up a library of animations?
- What happens when the animator keys the blend value on the layer?
- Where does Grease Pencil data live?
- Where does time remapping (e.g. via a time curve) live?
- Undo performance of big datablocks
- How do we handle summaries of heterogeneous channels? (like FCurves and GP frames)
-------------------------------
## Current Discussion Point
- How do layer blend modes work?
## Conclusions / Decisions
### Ghosting (aka Onion Skinning)
**Ghosting uses the Depsgraph** to evaluate the state of an object on a different frame. Currently this requires a separate Depsgraph for each frame, but the Depsgraph could be extended to handle multiple frames on its own. These depsgraphs can live on the scene level, since the scene already has a hash map of depsgraphs.
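A minimal sketch of that per-frame evaluation cache, assuming a hypothetical `GhostCache` stored on the scene; none of these names are Blender's actual API.
```cpp
#include <map>
#include <memory>

struct EvaluatedObject { /* evaluated transforms, geometry, ... */ };

// Stand-in for a depsgraph that evaluates the scene at one specific frame.
struct FrameDepsgraph {
  explicit FrameDepsgraph(float frame) : frame_(frame) {}
  EvaluatedObject evaluate_object() {
    // Run the regular evaluation pipeline, but pinned to `frame_` instead of
    // the current scene time.
    return EvaluatedObject{};
  }
  float frame_;
};

struct GhostCache {
  // One depsgraph per ghost frame, keyed by frame number; mirrors the hash map
  // of depsgraphs that the scene already keeps.
  std::map<float, std::unique_ptr<FrameDepsgraph>> per_frame;

  FrameDepsgraph &ensure(float frame) {
    std::unique_ptr<FrameDepsgraph> &slot = per_frame[frame];
    if (!slot) {
      slot = std::make_unique<FrameDepsgraph>(frame);
    }
    return *slot;
  }

  // Locked ghosts only need their GPU draw data, so the evaluation state can
  // be freed.
  void free_evaluation_data(float frame) { per_frame.erase(frame); }
};
```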
**Objects:** Which objects are going to be displayed will be defined by the user, either directly or in bulk on the collection level. It can be a subset of an object, e.g. a Hand.
**Time:** A Ghost can be relative to the current scene time, or absolute meaning it will always be displayed regardless of the scene time.
For this there are filtering options:
* Only display frames with keys on them
* Every nth frame
* Only include certain types of keys (e.g. breakdown, ...)
**Ghosts can be locked**, meaning they will not update until they are unlocked or manually updated. This needs to be reflected in the 3D Viewport in a way that tells the user that this Ghost will not update. This feature can also be used to free memory (only the GPU draw data needs storage; the depsgraph + CoW copies can be discarded).
**The N-panel** will show the data of the *ghost frame* that is currently being edited.
**Operations**
* Jump to the frame of the Ghost under the cursor.
* Offset Ghosts in Screen Space and/or World Space. Both manual placement and 'automated' ones (i.e. "I want to see it riiiiight here" vs. "just move them apart")
**Things that need to be clarified**
How multi-frame editing works, e.g. selecting the hand on 3 Ghosts and rotating it; the depsgraph needs to handle that somehow. Selection state would also have to be stored on a per-frame / per-ghost basis.
### Layer Blend Modes & Output
- Each layer outputs channel data that is then merged with the previously evaluated data, based on the blend mode defined on the layer (see the sketch after this list).
- The final output is a list of channels of potentially different types.
- The channel types are defined by Blender, and include a function that knows how to apply its values.
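A minimal sketch of that merge step, assuming scalar channel values and just two blend modes ('replace' and 'add'); names and signatures are illustrative only.
```cpp
#include <string>
#include <unordered_map>

enum class MixMode { Replace, Add };

// One layer's evaluated output: channel identifier (e.g. RNA path + array
// index) -> value. Real channels can be of different, non-scalar types; those
// would bring their own 'apply' function.
using LayerOutput = std::unordered_map<std::string, float>;

// Merge one layer's output into the accumulated result of the layers below it.
void blend_layer(LayerOutput &accumulated, const LayerOutput &layer,
                 const MixMode mix_mode, const float mix_influence)
{
  for (const auto &[channel, value] : layer) {
    float &dst = accumulated[channel];  // channels not seen before start at 0.0f
    switch (mix_mode) {
      case MixMode::Replace:
        // Cross-fade towards the layer's value by its influence.
        dst += (value - dst) * mix_influence;
        break;
      case MixMode::Add:
        dst += value * mix_influence;
        break;
    }
  }
}
```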
### Layer Model
- `Action` data-block will be renamed to `LegacyAction`.
- new `Animation` data-block for the new system.
- Consists of layers
- Layers always contain at least one clip (infinite by default)
- Output of a layer is "animation channels"
- Layers are generic, clips have specific types.
- Keys are stored in a clip.
- Clips can be infinite.
### Channel Types
There is a need for more than FCurve channels. Open questions are highlighted in **bold**.
- Grease Pencil layers basically store "at frame `X` show drawing `N`", where '`N`' can also be 'no drawing'.
- This can technically be stored as FCurve, but that has downsides:
- The numbers have no meaning (just an index into an unordered list)
- Interpolation has no meaning (can interpolate over unrelated drawings)
- Values can be dragged out of valid bounds
- Graph Editor: Drawing as curve is meaningless
- Dope Sheet: Representing as 'diamond shape' does not convey the 'from `X` to `Y`' nature of the data
- Other uses of FCurves hit the same problems:
- Enum properties
- Boolean properties
Apart from the above issues, there is also a need for non-numerical animation:
- *ID Pointer* animation, for example the active scene camera.
- Must be compatible with the type of the animated property.
- **Probably should be weak pointers?** Or link in those cameras as well, when linking the Animation datablock?
- **Reference counting** of the pointed-to datablock?
- Is use in an Animation datablock also a 'use'?
- Is having the camera in the scene enough?
- What about IDs that are not part of the scene, how are they protected if they happen to be unused at this frame, but still referenced later?
- What about **non-camera ID types**? Animating material assignments? Entire armatures?
- **What does the UI for editing the keys look like?**
- Dope Sheet can show bars with just the ID type icon & ID name, as it changes over time.
- Boolean channels can show bars where the property is ON, and nothing where it is OFF.
- **How are such channels integrated into a summary view?**
- **How are such channels mixed?** Likely on a binary basis: 'add' just does 'replace', and 'replace' always does a 100% replace when the mix influence is non-zero (see the sketch below).
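A hedged sketch of two of the non-FCurve channel types above (a Grease Pencil cel channel and an ID-pointer channel), plus the binary mixing rule; all names are illustrative, not the actual data model.
```cpp
#include <iterator>
#include <map>
#include <optional>

struct ID;  // opaque stand-in for a Blender ID

// "At frame X, show drawing N"; values step, there is nothing to interpolate.
struct CelChannel {
  std::map<float, int> keys;  // frame -> drawing index, -1 meaning 'no drawing'

  std::optional<int> evaluate(const float frame) const {
    auto it = keys.upper_bound(frame);
    if (it == keys.begin()) {
      return std::nullopt;  // before the first key
    }
    return std::prev(it)->second;
  }
};

// "At frame X, point at ID Y", e.g. the active scene camera.
struct IDChooserChannel {
  short id_type = 0;           // must match the animated property's ID type
  std::map<float, ID *> keys;  // frame -> (possibly weak) ID pointer

  ID *evaluate(const float frame) const {
    auto it = keys.upper_bound(frame);
    if (it == keys.begin()) {
      return nullptr;
    }
    return std::prev(it)->second;
  }
};

// Binary mixing: 'add' behaves like 'replace', and any non-zero influence
// results in a 100% replacement of the underlying value.
template<typename T> T mix_binary(const T &base, const T &layered, const float mix_influence)
{
  return (mix_influence > 0.0f) ? layered : base;
}
```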
**Channel or Clip Type?**
These (so Grease Pencil layer channels, enum property channels, ID pointer channels) should really be *channel* types, as Grease Pencil and object-level animation are often tightly coupled and need to be handled as a single unit (i.e. be containable in a single clip).
**Moving forward:**
- New animation system will be in C++, but not ready for 4.0
- Current system of animation filtering is troublesome ([analysis](https://hackmd.io/@anim-rigging/ByypmW8As))
- First step: extract animation filtering calls (mostly the construction of filter flags) into semantic functions (illustrated below).
- When adding new channel types, these functions can be adjusted to make the rest of the code behave.
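A hypothetical illustration of such a semantic function: the flag names, types, and signatures below are made up for the sketch and are not Blender's actual filtering API.
```cpp
#include <cstdint>
#include <vector>

struct AnimChannelRef {};  // stand-in for one filtered animation channel
struct AnimContext {};     // stand-in for the editor's filtering context

using FilterFlags = uint32_t;
constexpr FilterFlags FILTER_VISIBLE = 1 << 0;
constexpr FilterFlags FILTER_SELECTED = 1 << 1;
constexpr FilterFlags FILTER_EDITABLE = 1 << 2;

// Stub for the underlying flag-driven filter.
static std::vector<AnimChannelRef> filter_channels(const AnimContext & /*ac*/,
                                                   const FilterFlags /*flags*/)
{
  return {};
}

// Semantic wrappers: callers express intent, and the flag construction lives
// in one place. New channel types can then be supported here without touching
// every caller.
std::vector<AnimChannelRef> visible_channels(const AnimContext &ac)
{
  return filter_channels(ac, FILTER_VISIBLE);
}

std::vector<AnimChannelRef> selected_editable_channels(const AnimContext &ac)
{
  return filter_channels(ac, FILTER_VISIBLE | FILTER_SELECTED | FILTER_EDITABLE);
}
```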
### USD / Alembic
- Collections for Import/Export, for managing the existence of objects in the scene
- Can create an Action for these objects, with a USD/Alembic clip in a layer
- Keyframe animation can happen in a layer on top (see the sketch after this list).
- USD/Alembic already have a mapping from 'animated ID' to 'animation data', so they implicitly handle 'slots' as well.
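A rough sketch of how such a layer stack could look, assuming a hypothetical `CacheStrip` type next to the `KeyframeStrip` from the diagrams below; this is an assumption, not the agreed design.
```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct Strip {
  virtual ~Strip() = default;
};

// Plays back cached animation from a .usd / .abc file. The file's own mapping
// from 'animated ID' to 'animation data' stands in for the slots.
struct CacheStrip : Strip {
  std::string filepath;
};

// Hand-keyed overrides on top of the cache.
struct KeyframeStrip : Strip {
};

struct Layer {
  std::string name;
  std::vector<std::unique_ptr<Strip>> strips;
};

struct Animation {
  std::vector<Layer> layers;
};

Animation make_cached_character_animation(const std::string &cache_path)
{
  Animation anim;

  // Layer 0: cache playback.
  Layer cache_layer;
  cache_layer.name = "USD/Alembic cache";
  auto cache = std::make_unique<CacheStrip>();
  cache->filepath = cache_path;
  cache_layer.strips.push_back(std::move(cache));
  anim.layers.push_back(std::move(cache_layer));

  // Layer 1: keyframe animation on top.
  Layer override_layer;
  override_layer.name = "Keyframe overrides";
  override_layer.strips.push_back(std::make_unique<KeyframeStrip>());
  anim.layers.push_back(std::move(override_layer));

  return anim;
}
```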
### In-progress diagram
Simple, up to the `KeyframeStrip`, excluding channel types:
```mermaid
classDiagram
direction LR
ID <|-- "is" Animation
ID --> "has" Animation
Animation "1" --> "*" Layer
class ID{
string name
Animation* anim
}
class Animation {
list layers
Output outputs[]
}
Animation "1" --> "*" Output
class Output {
list~ID *~ id_pointers
string label
bool is_shared
int id_type
}
class Layer {
string name
enum mix_mode
float mix_influence
list~Strip~ strips
enum child_mode
list~Layer~ child_layers
}
Layer "1" --> "*" Strip
class Strip {
enum type
float frame_start
float frame_end
float frame_offset
bool isInfinite()
void makeFinite()
}
KeyframeStrip "is" --|> Strip
class KeyframeStrip {
map[output_idx → array~AnimChannel~] channels
}
```
Channel types:
```mermaid
classDiagram
direction LR
KeyframeStrip "1" --> "*" AnimChannel
class KeyframeStrip {
map[output_idx → array~AnimChannel~] channels
}
class AnimChannel {
enum type
pointer data
}
AnimChannel --> FCurve
class FCurve {
string rna_path
int array_index
list bezier_keys
list sample_points
}
AnimChannel --> Cel
class Cel {
int cel_index
}
AnimChannel --> IDChooser
class IDChooser {
short id_type
map[frame_nr → ID] id_pointers
}
```
Other strip types:
```mermaid
classDiagram
direction LR
class KeyframeStrip {
map[output_idx → array~AnimChannel~] channels
}
class ReferenceStrip {
Strip *reference;
map[output_idx → output_idx] output_mapping
map[rna prefix → rna prefix] rna_mapping
}
class AnimStrip {
Animation *reference;
map[output_idx → output_idx] output_mapping
map[rna prefix → rna prefix] rna_mapping
}
class MetaStrip {
list~Layer~ layers
}
```
-------------------------------
### Animation Data Model
See [Nathan's slides](https://perm.cessen.com/2023/animation_module/2023_06_19_animation_data_model/) about the current state of exploration.
- **Where do keys go?**
- Slotted Actions / multi-data-block Actions -> no 'nil' slot, only named ones. Slots are not named after the object; the default name is just 'Slot'.
- Animation Layers
- Layer controls blend modes (replace, additive, etc.)
- Nested layers need that too.
- Takes -> nope, that is for an add-on to build on top of the layer system.
- Nestable Actions
- Named Time Ranges
- Named & Instanceable Key Groups
- NLA
- Dynamic Overrides
- Data-block linking
- Spatial offsets for reused animation (NLA clips, nested Actions, whatever we come up with)
- Collection-level Actions?
### Terminology
- Layers (not tracks)
- Action -> Animation data-block -> no, still Actions -> no, Animation data-block
- Slots -> 'channel sets'? 'actors'?
- Ghosts (not onion skinning)
### Explicit Combinations
- Combined Channel Types (or 'tracks' or whatnot), for example 'always key all quaternion channels in unison', or just a 'rotation' channel instead of `rotation[0]`, `rotation[1]`, etc.
- Property Sheets, for overview, manipulation, keying sets
### Mastering Time
- Selecting & defining time ranges
- Proportional editing over time
- Layering lower-frequency changes on top of higher-frequency ones
- Frequency decomposition workflows
- Ghosting (including GP)
- Editable Motion Trails
### RigNodes & Constraints
- Flow Control (looping, conditionals, switches)
- Offsets & delta-transforms
- Data Model (own ID? All controls as part of the RN ID?)
- Transferring / baking animation back & forth
- RN on top of multiple data-blocks
- How to switch between RN rigs?
- Declarative constraints ('*hands should be together*' rather than '*copy transform of hand A to hand B*')
### Non-pose, non-transform animation
- Grease pencil
- Non-transform scalar (FCurve'able) properties (float IDProps, material props, scene/world props, etc.)
- Non-scalar properties (enums without relying on underlying integer, active camera, active control rig, space switching)
- Reusable / instantiable simulations
### Geometry Nodes support
How can we get armatures to support geometry nodes (or vice versa)?
- Adjustments to the pose?
- Generating the armature from geometry?
### Other 'Givens'
Things we know we want, but have to be planned in at some point. Not necessarily to be solved at the workshop, but the designs we come up with should work towards a solution for these.
- Animation snippet support for the pose library
- Improved auto-keying ([wiki](https://wiki.blender.org/wiki/Modules/Animation-Rigging/Weak_Areas#Auto-keying))
- Selection syncing between animation channels and the things they animate ([wiki](https://wiki.blender.org/wiki/Modules/Animation-Rigging/Weak_Areas#Selection_synchronisation_between_pose_bones_and_animation_channels))
- Bone Picker interface