# Geometry Nodes Workshop November 2023

One week after this year's Blender Conference, the geometry nodes team came together in Amsterdam to discuss many design topics that affect the future of geometry nodes. This post gives a general overview of the topics that were discussed. You can also read all the [notes](https://devtalk.blender.org/t/2023-11-06-geometry-nodes-workshop-notes/32007) we took during the meetings.

## Volumes

One of the main priorities for this workshop was to come to a conclusion on how we want to integrate volumes better into geometry nodes. Working with volumetric data in geometry nodes opens up many new opportunities. While there are already a few volume nodes available, they only offer very limited functionality. Over the past years we have discussed this topic many times already, but it recently became more important because we want to use volumes for physics simulations in geometry nodes.

The main difficulty with this topic comes from the fact that individual volume grids in a volume object are a lot like attributes, but they are also quite different in that they are not attached to the geometry like e.g. attributes on a mesh. A lot of the discussion went into the trade-off between extending what a field is, so that it can be used for volume grid processing without new mental models, and introducing a different workflow that is more tailored to volumetric data.

We ended up with the decision to introduce a new, more volume-grid-specific workflow. A key argument for the new workflow is that the same approach can work in the future with other data types like lists and images. The proposal is described in more detail on [devtalk](https://devtalk.blender.org/t/volumes-in-geometry-nodes-proposal/31917). For reference, [this](https://hackmd.io/@lukas-tonne/ry1vZFblp) is a proposal that attempts to fit grids into fields.

## Gizmos

The goal with gizmos is modifying inputs that feed into geometry nodes directly in the viewport. We want node groups to be able to specify gizmos for their inputs in a simple but powerful way.

A large design challenge here is the inter-dependence between the transform of a gizmo and the value that it controls. Changing the gizmo transform affects the controlled value, while changing the value affects the gizmo transform. Furthermore, we want to support so-called "crazyspace" with gizmos, i.e. if the gizmo controls a geometry which is transformed later in the node tree, we want the gizmo to be transformed in the same way so that it stays attached to what it controls in the viewport.

Design work for this started in another [workshop](https://code.blender.org/2023/06/node-tools-interface-and-baking/#gizmos) earlier this year, was then made more concrete in a follow-up [proposal](https://devtalk.blender.org/t/gizmos-for-geometry-nodes-proposal/29776), and the latest version is available in a work-in-progress [patch](https://projects.blender.org/blender/blender/pulls/112677).

During the workshop, we went over the general design implemented in the prototype and tried to use it for a few different simple projects. That worked fairly well. We concluded that the general design in the prototype is the right way forward and discussed more UI details.
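To make the inter-dependence described above more concrete, here is a minimal conceptual sketch in plain Python (not the actual Blender implementation; the function names and the simple linear mapping are hypothetical): a gizmo controlling a single value needs a forward mapping from the value to the gizmo placement, an inverse mapping from a dragged gizmo back to the value, and both have to pass through the downstream geometry transform for the "crazyspace" behavior.

```python
import numpy as np

# Conceptual sketch only: these names and the linear mapping are hypothetical,
# not part of the Blender API.

def value_to_gizmo(value, origin, direction, crazyspace):
    """Forward mapping: place a linear gizmo (e.g. for a "radius" input) along
    its axis, then apply the downstream transform so that it stays attached to
    the controlled geometry in the viewport."""
    local = np.append(origin + direction * value, 1.0)
    return crazyspace @ local

def gizmo_to_value(dragged, origin, direction, crazyspace):
    """Inverse mapping: bring the dragged gizmo position back into local space
    and project it onto the gizmo axis to recover the controlled value."""
    local = (np.linalg.inv(crazyspace) @ dragged)[:3]
    return float(np.dot(local - origin, direction) / np.dot(direction, direction))

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([1.0, 0.0, 0.0])
crazyspace = np.eye(4)  # transform applied to the geometry later in the node tree

position = value_to_gizmo(2.5, origin, direction, crazyspace)
print(gizmo_to_value(position, origin, direction, crazyspace))  # -> 2.5
```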
## Baking

Baking is currently only supported in simulation zones. The original plan was to introduce a new "Bake" node in Blender 4.0, but that didn't make it, mainly because the design needed more work. For 4.0 we added support for baking individual simulations instead, which is preliminary work for adding the new "Bake" node, which can then also be baked individually.

The general design of the Bake node has been clear for a while already. The main thing that was discussed in this workshop was a scene-wide overview of everything that can be baked. This is important in a production environment where there may be many things that need to be baked, which may also depend on each other. The main idea here is to integrate this overview, as well as batch editing for bakes, into the outliner.

We have a relatively good idea of what data we want to see in the outliner and what operators are needed. However, we couldn't conclusively decide whether it should be a new view mode in the outliner or whether it can be integrated into an existing mode. It's not obvious, because while the "View Layer" mode has many elements that would also be needed by the bakes overview, we also need some new things that don't necessarily fit into the existing mode. This decision needs some help from the UI team, because it depends on the longer-term plan for the outliner and maybe on the "editor tabs" proposal (see e.g. an [initial attempt](https://projects.blender.org/blender/blender/pulls/111068), which still needs much more discussion).

## Subframes

This is about subframes for simulation evaluation as well as baking. In earlier discussions we already concluded that we want the ability to control the number of subframes on a per-simulation basis. Furthermore, we generally want subframes of separate simulations to align. For example, having one simulation with 10 steps per frame and another with 9 is problematic, because then objects that are used by both simulations (e.g. a collider object) have to be evaluated 19 times, whereas if both used 10 steps, only 10 evaluations of the collider would be necessary. We want to guide the user towards aligned subframes by having a dropdown that contains `1, 2, 4, 8, ..., Custom` instead of just a subframes integer property.

Additional problems arise when a simulation depends on another simulation with fewer subframes. That's because in this case it is sometimes necessary to evaluate an earlier frame, and some frames even more than once. While technically possible, it's unfortunate that this could make things quite a bit slower. Unintuitively, the fix here would be to actually increase the number of subframes for the first simulation. We couldn't think of a way to avoid the duplicate evaluation yet without breaking the principle that playback or baking should behave identically whether everything is evaluated at once or all simulations are baked one after another from left to right. We ended up discussing how we could tell the user when frames are evaluated more often than probably intended.
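To make the 10-versus-9 example above concrete, here is a minimal sketch, assuming substeps are evaluated at uniform fractions of a frame including both frame boundaries (the exact placement of substeps in Blender may differ):

```python
from fractions import Fraction

def substep_times(frame, steps):
    # Evaluation times within [frame, frame + 1], assuming uniformly spaced substeps.
    return {frame + Fraction(k, steps) for k in range(steps + 1)}

# A collider used by two simulations has to be evaluated at the union of both
# simulations' substep times.
misaligned = substep_times(0, 10) | substep_times(0, 9)
aligned = substep_times(0, 10) | substep_times(0, 10)

print(len(misaligned))  # 19 distinct evaluation times within the frame
print(len(aligned))     # 11, i.e. only 10 new evaluations per frame, since the
                        # frame boundary is shared with the next frame
```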
## Menu Switch

The general design for this has been worked on previously in a [proposal](https://devtalk.blender.org/t/enum-sockets-proposal/21318) and later in a [workshop](https://code.blender.org/2022/11/geometry-nodes-workshop-2022/#menu-switch) last year. There is now a working implementation in a [WIP patch](https://projects.blender.org/blender/blender/pulls/113445) that can already be tested. In this workshop we confirmed the design and discussed some more details like the socket shape and how it works with attributes.

The general conclusion was that menu sockets are enough like other data types that they can just use the circle shape for single values and the diamond shape for fields. The color still has to be defined though. We won't support attributes of this type for the time being, since supporting this outside of geometry nodes has many implications, also for I/O. If users need this, they have to map menu values to e.g. integers manually.

Note, the original design for the menu switch node assumed that we already have dynamically typed sockets. This would allow the user to switch between different types based on the same menu value. We found that the node is already useful enough without dynamic socket types, so we removed that dependency. Once we have proper dynamic socket types, the Menu Switch node can be extended with them.

## Dynamic Socket Visibility

Some built-in nodes dynamically hide and show sockets based on node properties. The first goal here is to allow node groups to do the same so that they can behave more like built-in nodes. Furthermore, the usage status of a socket should also depend on the value of other input sockets and not only on node properties that are not exposed as sockets.

The discussion started out by going over the different options in [this](https://devtalk.blender.org/t/dynamic-socket-visibility/31874) document on the topic. We concluded that the automatic approach to detecting socket usage is the way to go. This way, the user cannot claim that an input affects the output when it actually does not, and it generally works out of the box with Switch nodes. We also found that it would be better to gray out unused inputs by default instead of hiding them. There are still cases where hiding is the better behavior though, e.g. in the cases where built-in nodes do this already.
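A minimal sketch of the automatic approach (the graph representation and names below are hypothetical, not Blender's actual data structures): usage is determined by walking backwards from the group outputs, and a Switch node whose condition is a known constant only exposes the branch that is actually taken, so inputs feeding only the untaken branch end up grayed out.

```python
# Hypothetical node-graph representation, purely illustrative.

def dependencies(node):
    """Inputs that can influence this node's output. A Switch node whose
    condition is a known constant only depends on the branch that is taken."""
    if node["type"] == "SWITCH" and node.get("condition") is not None:
        return [node["true_input"] if node["condition"] else node["false_input"]]
    return node["inputs"]

def used_group_inputs(graph, group_output_sources):
    """Walk backwards from the group outputs and collect every group input
    that can still affect them; everything else can be grayed out."""
    used, stack = set(), list(group_output_sources)
    while stack:
        socket = stack.pop()
        if socket.startswith("GroupInput."):
            used.add(socket)
        elif socket in graph:
            stack.extend(dependencies(graph[socket]))
    return used

graph = {
    "Switch.Output": {
        "type": "SWITCH",
        "condition": True,  # e.g. an unconnected boolean input with a fixed value
        "true_input": "GroupInput.A",
        "false_input": "GroupInput.B",
        "inputs": ["GroupInput.A", "GroupInput.B"],
    },
}

print(used_group_inputs(graph, ["Switch.Output"]))  # {'GroupInput.A'}
```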
## Asset Menu Path

Currently, the catalog path of an asset determines where it is shown in e.g. the node Add menu. This was quite useful originally, because it avoided the need to set up a secondary hierarchy for the whole asset library just so that it can be integrated with menus. The problem is that the catalog path just does not work well as a menu path in many cases.

We discussed different approaches, but couldn't really decide on a real long-term solution yet. We did find that a single "Menu Level" checkbox per catalog would already solve many common cases. For example, the checkbox could be turned off for the `Nodes` catalog that contains an asset like `MyAssetLib/Nodes/RadialArray`. Then the `RadialArray` node group would show up directly in the `MyAssetLib` menu instead of in a nested `Nodes` menu.

## Replacement-Based Procedural Modelling

The goal here is to make procedural modelling in geometry nodes significantly more powerful by roughly following the idea in the [original proposal](https://devtalk.blender.org/t/replacement-based-procedural-modelling-proposal/31851). The core idea with replacement-based procedural modelling is that one does not perform many small edits on a potentially large mesh, which is how traditional modelling generally works, but instead works locally on a smaller change that is then inserted into the large geometry.

The benefits of this approach are that it is easier to work on mesh edits on a small scale first, before applying the edit to a large mesh. It leads to building blocks for common face replacements that are easier to reuse, and it can also have significantly better performance, because many edits on a large mesh can be batched together and run in parallel.

The proposed `Replace Faces` node can also be used to build nodes for other common operations like inserting edge loops. It also brings many use cases that currently rely on the [Tissue](https://docs.blender.org/manual/en/latest/addons/mesh/tissue.html) addon into geometry nodes. We went over a few examples of how this can work and how the deformation works more specifically. Overall, the feedback was positive.

## Modal Node Tools

Blender 4.0 introduces Node Tools, which are a way to build custom operators for e.g. mesh edit mode with geometry nodes. In later versions we intend to extend this concept to support modal operators, i.e. operators that are interactive and take additional user input after their first invocation. In this workshop we discussed how to represent these kinds of operators with geometry nodes. The main problems to be solved are how the geometry nodes group remembers data from the initial invocation and from previous events, how user-generated events (e.g. key presses) are passed to geometry nodes, and how the operator finishes.

After some back and forth about using a new kind of zone for modal operators, we concluded that we can just use the existing simulation zones to remember data from previous modal updates of the operator. The `Delta Time` in the simulation zone would then just be the actual real-world time delta.

For some kinds of modal operators it would be useful if they ran continuously without additional user input. This can be achieved with a simple "Is Interactive" checkbox on the node group. If it's turned off, geometry nodes only runs when there are new input events.

Event information is passed into node groups through normal (boolean) group inputs. The node group can then map some of its inputs to specific keymap events. The node group defines a default keymap, but that can be overridden by the user with a custom keymap. The modal operator can always be cancelled with the escape key, so this key cannot be used by the operator itself. For normal termination of the operator, a "Finished" boolean group output is added. Once the node group returns true for that output, the operator finishes.
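A rough sketch of how such an operator's modal loop could drive the node group (the `evaluate_node_group` callback and the keymap format below are hypothetical stand-ins, not actual Blender API): every incoming event sets the mapped boolean inputs, the group is evaluated with the state carried over by its simulation zone, and the operator ends when the "Finished" output becomes true or the user presses Escape.

```python
import time

# Purely illustrative sketch; not the actual operator implementation.

def run_modal_operator(evaluate_node_group, events, keymap):
    state = None                      # remembered between updates by the simulation zone
    last_time = time.monotonic()
    for event in events:
        if event == "ESC":            # Escape always cancels, so the node group
            return "CANCELLED"        # cannot use it itself
        now = time.monotonic()
        # Map the raw event onto the boolean group inputs declared in the keymap.
        inputs = {name: (event == key) for name, key in keymap.items()}
        outputs, state = evaluate_node_group(inputs, state, delta_time=now - last_time)
        last_time = now
        if outputs["Finished"]:       # normal termination via the boolean output
            return "FINISHED"
    return "RUNNING_MODAL"            # keep waiting for more events

# Example: a node group that finishes on a left mouse click.
def example_group(inputs, state, delta_time):
    return {"Finished": inputs["Confirm"]}, state

print(run_modal_operator(example_group, ["MOUSEMOVE", "LEFTMOUSE"],
                         {"Confirm": "LEFTMOUSE"}))  # -> FINISHED
```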
## Grease Pencil Integration

One of the main open topics for the grease pencil integration into geometry nodes was the handling of grease pencil layers in nodes that convert to different geometry types. That isn't straightforward, because other geometry types do not have the concept of layers. There are two main approaches to dealing with layers: one can either join the geometries generated from each layer into one, or keep the geometries separate by outputting them as instances.

It's nice to keep layers separate since they were separate before, but the tricky thing with turning them into instances is that the word "instances" kind of suggests that it is just used to save memory. That's not so much the case when every instance references a different geometry. This problem also exists with the new `Split to Instances` node, which outputs instances that are all different. We considered changing the name "instances" but couldn't really come up with a better fitting alternative.

In the end we concluded that keeping the existing name is fine, but that we can improve the tooling to make it more obvious what data one is working with. For example, the spreadsheet and socket inspection should show more details about how many unique geometries are contained in the instances. Furthermore, it would help if we could name geometries, and therefore also instance references. With that, the instance created from each grease pencil layer could have the same name as the original layer, making it much more obvious what the data is.

We also talked about a new "Grease Pencil to Curves" node, how curve sampling nodes should work, and the different things "Apply Modifier" could do for grease pencil.

## Dynamic Socket

For existing nodes, but especially for the volume integration, we need new socket designs that capture the ideas of a "dynamic socket category" (single value, field or grid) and a "dynamic socket type" (float, int, etc.). For the dynamic socket category we confirmed that using a [wildcard/asterisk socket shape](https://projects.blender.org/blender/blender/pulls/113608) works quite well, since it is semantically meaningful and does not look as busy as originally feared. For dynamic socket types the situation is a bit more difficult. Some kind of "rainbow socket" could work, where instead of a single color, a socket contains a color wheel. We don't know yet whether that looks too busy. The more difficult part here is also the link color.

## Interactivity Option

In the past, something like an "Is Edit Mode" node was requested that is similar in spirit to the "Is Viewport" node. The intended use-case is that geometry nodes can skip some heavy work while in edit mode, so that the editing experience is more interactive. The problem with this proposal is that it is quite specific to edit mode, while the same thing can make sense for many other kinds of edits, like also just transforming objects in object mode. Maybe this functionality can somehow be combined with the Is Viewport node into a more general "Interactivity Level" node.

Furthermore, input nodes like these have the problem that they can't be controlled by the user of a node group. This can be an issue if one actually wants to use a specific node group in "render" mode even if the scene is currently not rendered. This also applies to input nodes for node tools. A potential solution that we discussed for node tools before is to automatically add inputs to a group node that uses these special nodes. All of these inputs could be in a subpanel. This makes the functionality easy to use, but it can also be overridden from the outside.

## Realtime Mode

The real-time mode has been in the design phase for quite a while already. A general goal here is to put Blender into an "interactive mode" where simulations just run in real-time and the user can interact with the scene naturally. To achieve that, the scene time will likely have to be decoupled from a "real-time clock" that just keeps running independently of the scene time. One difficulty with the design here is finding the right set of use-cases that need the interactive mode but cannot be handled by the simpler and less intrusive design for node-based modal operators.

## Asset Deduplication

The "Append and Reuse" functionality for assets works fine as long as one is working on a single file, but it becomes problematic when working in a production setting that also uses linking. The issue is that many .blend files end up with appended copies of the same data-blocks, which currently can't be automatically deduplicated when they are linked into a separate file. This leads to a large number of duplicate data-blocks. A solution to that is not local to geometry nodes but affects core functionality in Blender related to linking and appending. Fixing this might require automatically detecting whether two data-blocks are the same and/or adding version numbers to all data-blocks. Both approaches are fairly involved and require their own project. While there is a proposal for a solution, it is not generally agreed on and requires more discussion.
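As a toy illustration of the detection idea only (not a concrete design, and ignoring all the real-world complexity of data-blocks): copies that were appended from the same source would produce the same content fingerprint, so they could be collapsed into a single data-block when linked into another file.

```python
import hashlib
import json

# Toy sketch only: real data-blocks are far more complex, and the actual
# deduplication design is still under discussion.

def content_fingerprint(datablock: dict) -> str:
    """Hash the serialized content of a data-block, ignoring its local name."""
    payload = json.dumps({k: v for k, v in datablock.items() if k != "name"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def deduplicate(datablocks):
    """Keep one data-block per unique content fingerprint."""
    unique = {}
    for block in datablocks:
        unique.setdefault(content_fingerprint(block), block)
    return list(unique.values())

appended_a = {"name": "RadialArray",     "nodes": ["Math", "Instance on Points"]}
appended_b = {"name": "RadialArray.001", "nodes": ["Math", "Instance on Points"]}
print(len(deduplicate([appended_a, appended_b])))  # -> 1
```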
## Next Steps

In this workshop we managed to make quite a few design decisions, so the main next step is to implement those. For example, the volume, gizmo and grease pencil integration topics don't have immediate blocking design tasks anymore. Other topics like the real-time mode and asset deduplication still require more discussion.