# A vision for Bevy UI
:::warning
Important note! This is written by Alice (one of Bevy's maintainers), but does not yet have maintainer consensus. I've noted the confidence I have in the answers given in each section in parentheticals, to communicate the level of consensus with more nuance.
:::
Much of this work is based on Cart's design laid out in [Bevy's Next Generation Scene/UI System](https://github.com/bevyengine/bevy/discussions/14437): I'm trying to fill in the gaps and give concrete answers to concrete questions so we can align on a shiny-future vision for "what should `bevy_ui` be"!
While preparing this document, I've referenced a huge number of existing GUI frameworks for Bevy:
- [Quill](https://github.com/viridia/quill)
- [sickle_ui](https://github.com/UmbraLuminosa/sickle_ui)
- [haalka](https://crates.io/crates/haalka)
- [bevy_lunex](https://github.com/bytestring-net/bevy_lunex)
- [kayak_ui](https://github.com/StarArawn/kayak_ui)
- [bevy_egui](https://docs.rs/bevy_egui/latest/bevy_egui/)
This work has been instrumental in helping explore the space of possibilities, and identify the patterns and foundational crates that work well for Bevy's needs.
## Motivation and overview
### Who is `bevy_ui` for? (High confidence)
Bevy UI is intended for people making games, tools for games, and weird game-like things that need solid performance, complex domain logic and sophisticated rendering integration (like CAD software).
It probably won't compile on embedded platforms (or fit their performance requirements), and for standard CRUD apps, it would be an unusual choice.
### What are `bevy_ui`'s core values? (High confidence)
1. **ECS-powered:** `bevy_ui` uses the same data structures and tools for logic as the rest of your game.
2. **Data-driven:** UI can be generated from assets on the disk, supporting first-class tooling for non-programmers.
3. **Flexible:** peel back or add layers of abstraction to match the needs of your project and task.
### Why are we making another Rust GUI crate? (High confidence)
While good options exist within the Rust ecosystem, none of them are incredible. Moreover, none of them are tightly integrated with Bevy.
That tight integration makes it easier for users to learn `bevy_ui`, reduces surprising bugs, and eases maintenance.
### What UI framework will Bevy's editor be built with? (High confidence)
Bevy's editor will be built with `bevy_ui`. Dogfooding is essential to ensure `bevy_ui` gets the care and attention from engine devs that it needs to be great.
Using a UI stack that we control and maintain is also important as a risk management strategy: Rust GUI frameworks do not have a particularly long half-life, and a sudden full rewrite into a new stack is an existential risk to both the editor and Bevy itself.
### Will Bevy ship its own standard collection of widgets? (High confidence)
Yes. In order to make the experience for new and dependency-averse users pleasant, Bevy needs to ship its own customizable collection of widgets.
All provided widgets should live in their own crate, `bevy_widgets`, while core functionality and building blocks are found in `bevy_ui`.
This both makes it easy to swap out the widget collection your project is using and ensures that `bevy_widgets` never relies on non-public functionality.
### What's the overall workflow for authoring UIs? (Moderate confidence)
Fundamentally, there are two authoring strategies: assets and code. Generally, style and layout will be easier to author as assets, while logic will be easier to author in code.
Each widget library comes with its own plugin, which must be added to initialize any standard observers, resources or systems needed to make its widget-internal logic work.
For small projects, or those with logic-heavy UI needs:
1. Widgets are spawned and customized as needed in code, using the `bsn!` macro and related builders.
2. App-specific behavior like "what does this button do" is defined with callbacks, typically by passing in an anonymous function defined at the callsite.
3. Layout and styling are tweaked and iterated on using a reflection-powered inspector.
For projects with design-heavy UI needs:
1. Widgets are defined as relatively small scene assets. These are procedurally generated by widget libraries by calling the code-based constructors and serializing the output.
2. Larger scenes are designed in a GUI tool by composing these smaller scenes, and are then spawned and despawned for each screen.
3. App-specific behavior is defined using named callback functions, which are referenced within the scene file and can be selected using a GUI.
4. Particularly simple logic may use an embedded programming language within the BSN file format.
Like always though, there will be escape hatches throughout and users will be free to mix-and-match on a given project.
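To make the code-first workflow above concrete, here's a purely illustrative sketch. The `bsn!` syntax, the `spawn_scene` call, the `Button` widget and its `on_click` hook are all hypothetical stand-ins: the real APIs are still being designed as part of the linked scene/UI work.

```rust
// Hypothetical sketch only: none of these names are finalized APIs.
fn open_settings_menu(mut commands: Commands) {
    commands.spawn_scene(bsn! {
        Node { flex_direction: FlexDirection::Column } [
            Button { label: "Apply" }
                // App-specific behavior is an anonymous callback at the callsite.
                .on_click(|mut settings: ResMut<GameSettings>| settings.apply()),
            Button { label: "Back" }
                .on_click(|mut next: ResMut<NextState<Menu>>| next.set(Menu::Main)),
        ]
    });
}
```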
## Widget basics
### What is the fundamental data structure? (High confidence)
Nodes in the UI tree are represented as entities in the ECS.
Their data is stored in components, which are operated on by systems to define common behavior for widgets.
Like most UI frameworks, they are organized into multiple trees, in our case defined by the `Parent` and `Children` components.
Eventually, those links will be replaced by true relations.
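As a grounded example, here's what a tiny UI tree looks like with today's (Bevy 0.14-era) API: plain entities, plain components, linked via parent/child links. Future `bsn!`-based APIs should make the same structure terser.

```rust
use bevy::prelude::*;

fn spawn_hud(mut commands: Commands) {
    commands
        .spawn(NodeBundle {
            style: Style {
                width: Val::Percent(100.0),
                justify_content: JustifyContent::SpaceBetween,
                ..default()
            },
            ..default()
        })
        .with_children(|parent| {
            // Each child is just another entity in the same tree.
            parent.spawn(TextBundle::from_section("Score: 0", TextStyle::default()));
            parent.spawn(TextBundle::from_section("Health: 100", TextStyle::default()));
        });
}
```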
### What is a widget? (High confidence)
A widget is a collection of entities, arranged into a subtree.
These use the same abstraction as any other collection of entities in a subtree (e.g. a player with many mesh children): scenes.
### How are UI elements declared? (Moderate confidence)
At the simplest level, you can simply spawn the entities associated with your UI, like any other part of Bevy. Groups of entities are spawned as scenes, knit together with parent-child relationships.
Required components and the `Construct` trait make this convenient, and the `Patch` and `EntityPatch` traits offer opt-in ways to inherit from and customize existing scenes.
More complex widgets are best defined using a higher level of abstraction to make working with hierarchies easier.
We're not fully sure what that will look like yet.
In mixed teams or on particularly dynamic projects, many UI layouts will be data-driven, loaded from BSN files like any other scene and then spawned into the world.
To reduce boilerplate, the `bsn!` macro matches the terse syntax of these scene files.
### How do users make new widgets? (Moderate confidence)
They define a new scene. You can inherit behavior from other scenes using the [`Patch` trait](https://github.com/bevyengine/bevy/discussions/14437).
### How are widgets composed? (High confidence)
If you want to nest widgets inside of other widgets, add one widget-scene as a child of the other.
To combine their *behavior* (e.g. a clickable image), create your own widget, generally by combining their components and sub-entities.
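As a minimal sketch with today's components (the `IconButton` marker is just for illustration): a clickable image combines `Button`/`Interaction` behavior with an image child under a widget-specific marker.

```rust
use bevy::prelude::*;

/// Marker for our composed "icon button" widget.
#[derive(Component)]
struct IconButton;

fn spawn_icon_button(commands: &mut Commands, icon: Handle<Image>) -> Entity {
    commands
        // `ButtonBundle` brings along `Button` and `Interaction` for behavior.
        .spawn((IconButton, ButtonBundle::default()))
        .with_children(|parent| {
            // The image is a child entity, so it can be swapped or styled independently.
            parent.spawn(ImageBundle {
                image: UiImage::new(icon),
                ..default()
            });
        })
        .id()
}
```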
### How do users add behavior to their widgets? (Moderate confidence)
Interactive widgets will provide their own callbacks, in the form of observers, which are run when various events occur on them.
In this shiny future, multiple observers of the same kind can exist for a given entity, and they can be ordered just like any other system.
Observers use Bevy's standard system abstraction (with the addition of some context from the event) to read and write the data they require from the world with the power of dependency injection.
While observers run with exclusive world access (as part of command application), the diversity of tasks needed as part of UI handling (and relative infrequency of them) generally makes this a better performance tradeoff than trying to parallelize the work at a system level.
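A small sketch of the pattern using today's entity-targeted observers; the `Activate` event and `ClickCounter` resource are stand-ins for whatever a real widget library would expose.

```rust
use bevy::prelude::*;

/// An app-specific event fired at a widget entity.
#[derive(Event)]
struct Activate;

#[derive(Resource, Default)]
struct ClickCounter(u32);

fn setup(mut commands: Commands) {
    commands
        .spawn(ButtonBundle::default())
        // The observer is an ordinary system plus triggering context:
        // it can request any world data through its parameters.
        .observe(|trigger: Trigger<Activate>, mut counter: ResMut<ClickCounter>| {
            info!("widget {:?} activated", trigger.entity());
            counter.0 += 1;
        });
}
```

The event itself would be fired at the widget with `commands.trigger_targets(Activate, entity)`, e.g. from a picking or focus system.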
### How can users select specific widgets or entities within widgets? (Low confidence)
By default, using marker components and queries.
This may not be adequate for more complex use cases, and a more elaborate system may need to be devised.
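For example (the `HealthLabel` marker and `PlayerHealth` resource are hypothetical), selecting "the health text, wherever it lives" is just a filtered query:

```rust
use bevy::prelude::*;

/// Marker identifying the health readout inside a larger widget.
#[derive(Component)]
struct HealthLabel;

#[derive(Resource)]
struct PlayerHealth(u32);

fn update_health_label(
    player: Res<PlayerHealth>,
    mut labels: Query<&mut Text, With<HealthLabel>>,
) {
    for mut text in &mut labels {
        text.sections[0].value = format!("HP: {}", player.0);
    }
}
```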
### How do users theme widgets and applications? (Low confidence)
To be determined: this is an active area of controversy.
Components will likely have their value updated based on a user-provided theme, but beyond that the details are contentious.
### How is the app's state machine managed? (High confidence)
Over the course of a game, you might need to load assets, transition through menus, pause the game and so on. As part of this, you'll want to spawn and despawn UI, and set up and tear down the supporting state.
`bevy_ui` is designed to be used with `bevy_state`: while no direct integration exists, the `OnEnter` / `OnExit` schedules, state-scoped entities and state-linked run conditions work great to model high-level state machines.
The dividing line between "app state" and "widget state" is pretty clear: if the same setup / cleanup / mutation logic is required for every use of the widget, it needs to be baked into the widget. Otherwise it should be part of the wholly user-defined app state machine.
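A sketch of that division using today's `bevy_state` API: the menu UI is spawned on `OnEnter`, and cleaned up automatically via state-scoped entities when the state is left.

```rust
use bevy::prelude::*;

#[derive(States, Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
enum AppState {
    #[default]
    MainMenu,
    InGame,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .init_state::<AppState>()
        // Entities tagged with `StateScoped(AppState::MainMenu)` are despawned
        // automatically when we leave that state.
        .enable_state_scoped_entities::<AppState>()
        .add_systems(OnEnter(AppState::MainMenu), spawn_main_menu)
        .run();
}

fn spawn_main_menu(mut commands: Commands) {
    commands.spawn((StateScoped(AppState::MainMenu), NodeBundle::default()));
}
```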
## Incrementalization and reactivity
### Does `bevy_ui` need incrementalization? (High confidence)
Incrementalization is a fancy term that describes a simple concept: don't recompute things that haven't changed. This has two main benefits:
1. Performance. If the cost of doing the work is high relative to the cost of checking whether we should do it, and the work only needs to be done relatively infrequently, we can save a ton of cycles!
2. Cleaner downstream change detection. If we only update values when they actually *need* to be changed, any mechanism (such as Bevy's change ticks or a screen reader) that's listening for when things change will have fewer false positives.
Immediate mode UI, which simply throws away all of the UI information every frame and spawns it in new, obviously isn't incremental!
But even retained-mode UI can be non-incremental: resetting state every frame or recomputing it without any consideration as to whether or not it *needs* to be changed will cause the same problems.
To understand why this is so important, consider a list with a million entries.
While we can store a million little strings, we can't reasonably store all of the supporting data needed to render them: layout, textures and so on.
Simply hiding the extra nodes won't work!
Instead, we need to dynamically spawn and despawn objects as the parent widget state changes.
Without a strategy for incrementalization, we're forced to locally fall back to an immediate mode approach: despawning then spawning all of the visible list items every frame.
We agree with [Raph Levien's position](https://raphlinus.github.io/rust/gui/2022/05/07/ui-architecture.html): incrementalization is essential to create performant UI at the scale needed by real applications.
While Bevy may not need *special* tools for incrementalization, these needs have to be taken into account, and `bevy_ui` should promote a standardized way to tackle these problems.
### Does incrementalization imply reactivity? (High confidence)
While incrementalization and reactivity pair nicely together (reactivity makes it easier to track what needs to be updated), they aren't the same thing and we don't need to ship both, especially not at the same time.
### What will `bevy_ui`'s incrementalization strategy look like? (Low confidence)
Bevy's ECS already ships with a powerful tool for incrementalization: change detection!
As long as users are careful to only update data when it genuinely needs to change, we can use change detection to incrementalize relatively simple updates that modify existing data.
The other primary tool in our toolbox is events: both observer-driven and batched.
When computing updates using systems, we'll check to see if work needs to be done using change detection on the input values.
In cases where the computation is particularly cheap, `set_if_neq` is an easy way to keep things clean, even if you don't want to go through the work of checking whether the value needs to be recomputed.
With the help of [better tools to track where values are mutated](https://github.com/bevyengine/bevy/pull/14034) and [archetype-level change detection](https://github.com/bevyengine/bevy/issues/5097), this can be a powerful, performant and developer-friendly approach.
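A small example of both tools together, using hypothetical `Health` and `HealthBarWidth` components: the `Changed` filter skips rows whose input didn't change, and `set_if_neq` avoids dirtying the output when the recomputed value is identical.

```rust
use bevy::prelude::*;

#[derive(Component, PartialEq)]
struct Health(u32);

#[derive(Component, PartialEq)]
struct HealthBarWidth(f32);

// Only entities whose `Health` changed this frame are visited.
fn sync_health_bars(mut bars: Query<(&Health, &mut HealthBarWidth), Changed<Health>>) {
    for (health, mut width) in &mut bars {
        // Downstream change detection only fires if the value actually differs.
        width.set_if_neq(HealthBarWidth(health.0 as f32));
    }
}
```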
Change detection struggles when working to summarize or mirror the state of multiple entities though: objects can be added or removed from the list without triggering our mutation-focused change detection.
To make this performant and ergonomic, we need [query change detection](https://github.com/bevyengine/bevy/issues/14510).
Similarly, figuring out when an asset changes is currently very hard: the underlying asset can be changed, or the handle stored on your entity can be swapped. [Change detection for assets](https://github.com/bevyengine/bevy/issues/14444) is needed to resolve this.
### Will `bevy_ui` have a VDOM? (Moderate confidence)
One of the most common approaches to managing widget state is to maintain a [Virtual Document Object Model](https://legacy.reactjs.org/docs/faq-internals.html), [like in Quill](https://hackmd.io/@dreamertalin/Hkfj3TWFR).
The basic idea is to store a representation of what the UI *should* look like, and then update the simple, data-only nodes to reflect that state.
This can make working with widgets that have a dynamic entity composition (like a list that users can add items to) much more comfortable, as we can operate at a higher level of abstraction and persist state even when the ephemeral entities are despawned.
Bevy needs a way to handle these use cases elegantly, but it's not clear that a VDOM is the best or only solution.
If we have one, it will almost certainly be stored as a hierarchy in the ECS. It may be a fully separate tree, or some entities within the main UI tree may serve as a template for multiple entities.
This is an open research question!
### How is widget state stored? (High confidence)
Simple widget state (which radio button is selected) is stored as simple components on the parent entity, and replicated downwards if and only if it is needed for display purposes.
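As a sketch with a hypothetical radio-group widget: the selection lives on the group's root entity, and is only pushed down to the option children (which need it for display) when it actually changes.

```rust
use bevy::prelude::*;

/// All of the radio group's state lives on its root entity.
#[derive(Component, PartialEq)]
struct RadioGroup {
    selected: usize,
}

/// Per-option display data.
#[derive(Component)]
struct RadioOption {
    index: usize,
}

fn replicate_selection(
    groups: Query<(&RadioGroup, &Children), Changed<RadioGroup>>,
    mut options: Query<(&RadioOption, &mut BackgroundColor)>,
) {
    for (group, children) in &groups {
        for &child in children.iter() {
            if let Ok((option, mut color)) = options.get_mut(child) {
                // Highlight the selected option; display-only state lives on the child.
                *color = if option.index == group.selected {
                    BackgroundColor(Color::WHITE)
                } else {
                    BackgroundColor(Color::BLACK)
                };
            }
        }
    }
}
```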
### Does Bevy need a first-party reactivity solution? (Moderate confidence)
[**Reactivity**](https://github.com/bevyengine/bevy/discussions/10978) is a method of programming in which an output is computed from input values, where the mere act of reading those inputs creates an implicit dependency, such that the output is recomputed when those input values change.
Spreadsheets are a great, familiar example of a reactive programming framework. Rather than explicitly saying "watch for changes to this cell, and if it's changed recompute the sum", simply calling `=SUM(A:A)` will subscribe to all of the cells in column A.
Critically, the logic needed to *create* a new value is the same as the logic needed to *update* the value. By having a single source of truth for "what should this value be", it's easier to avoid mistakes and we can halve the amount of work.
Immediate mode UI solves this problem in a different way: by simply despawning all of the UI every frame and not worrying about update logic!
The benefits of reactive programming are valuable to `bevy_ui`:
- it makes efficient incrementalization easier
- it reduces the prevalence of spaghetti-style mutation
- it de-duplicates spawning and updating logic
- its updates are synchronous, reducing the risk of surprising bugs due to inconsistent state
To make sure Bevy is both performant and up to the ergonomic standards expected of a modern UI framework, we're likely to ship some sort of solution in this space, although it's not clear that it will be truly reactive.
To reduce project management risk, this is likely to come in a later phase of the project: reactivity is *hard*. There's a lot to do just to get the foundations of the rest of UI right, and no reason for that work to wait on reactivity.
### Will UI use a special reactivity framework? (High confidence)
No: any tools and patterns introduced to handle reactivity in the context of Bevy's UI will be standardized, deeply integrated into the ECS and usable across the engine.
As a result, we will not be shipping existing Rust reactivity frameworks as part of `bevy_ui`, although we will certainly learn from them!
### Will *everything* in UI be reactive? (High confidence)
With a reactivity solution in hand, it's easy to say that reactivity should be the only way to handle updating UI state.
This has its benefits! There's only one way to reason about things, and it makes incrementalization easier to design.
However, preventing users from simply reaching in and mutating the state of a reactive value is likely to be *very* hard if it's all just plain-old-data stored in the ECS.
Moreover, reactivity is a tracking-intensive, poorly batched strategy for computation. Layout, transform and visibility propagation, and similar bulk operations are likely to be faster (and simpler) to handle in an imperative, systems-first approach.
As a result, Bevy's UI will use a hybrid approach, with systems for bulk updates and to generate the simple data needed to process user interactions, and reactivity to respond to user interaction in complex ways.
### What will a Bevy-native reactivity solution look like? (Moderate confidence)
Reactivity requires you to store information about how to compute the correct value of an object *on the object itself*.
In Bevy, the best way to do this is to add observers to entities.
When a reactive widget is spawned, an observer is attached to it, which defines how and when its reactive value changes. Usually, these observers will respond to `OnMutate` triggers, which are sent whenever the watched value is added or changed.
See <https://github.com/bevyengine/bevy/pull/14520> for a prototype of these ideas.
This pattern has several key properties:
1. Observers and change detection are general purpose ECS tools.
2. Observers attached to entities stop listening when the entity is despawned.
3. Observers can create event cascades, ensuring all of the required logic is cleaned up inside of a single frame.
4. Observers can read and write arbitrary data in an ergonomic, powerful and familiar way.
While this pattern does give us most of the benefits of reactivity, there are a few caveats:
- this is closer to an observer-based than a true reactivity-based design: the data dependency is not created implicitly, and there's no need to resubscribe
- widget state is not defined purely via observers: there's no way to stop users from just mutating the data anyways
- non-trivial work remains to make this performant and feature-complete
Ultimately, I think that this observer-based pattern is powerful and flexible: creating a familiar-feeling and ergonomic user experience without adding a complex UI-specific layer on top to learn.
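A rough sketch of the shape of this pattern. The `OnMutate` trigger only exists in the prototype PR, so the existing `OnInsert` lifecycle trigger stands in for it here, and the `Counter` widget is hypothetical.

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Counter(u32);

fn spawn_counter(mut commands: Commands) {
    commands
        .spawn((Counter(0), TextBundle::default()))
        // The "how do I recompute my display?" logic lives on the entity itself.
        .observe(
            |trigger: Trigger<OnInsert, Counter>,
             mut query: Query<(&Counter, &mut Text)>| {
                if let Ok((counter, mut text)) = query.get_mut(trigger.entity()) {
                    *text = Text::from_section(counter.0.to_string(), TextStyle::default());
                }
            },
        );
}
```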
### How do widgets update their own state? (Low confidence)
When widgets need to reactively update their own state or that of their children (e.g. folding panels or huge, obstructed lists), they are spawned with their own built-in observers which manage this.
### How are UI elements updated? (Moderate confidence)
Updates that can be handled in batch (like transform propagation or picking) are run once a frame using Bevy systems.
Batched updates like this need to be relatively unopinionated: they should generally provide information to the widgets they touch, which then decide how to act on it.
When data needs to be updated more frequently than that, hooks and observers are used to allow for unlimited within-frame event propagation.
## Data-driven design
### How are assets loaded from disk? (High confidence)
Using `bevy_asset`. The core pieces are in place today, but this needs some love to make sure the APIs are well-rounded and documented.
### How are UI hierarchies loaded from the disk? (High confidence)
Using `bevy_scene`. This is slated for major overhauls to reduce boilerplate and improve expressiveness, focused on a custom Bevy Scene Notation (.bsn) file format.
### How will behavior be stored on the disk and deserialized? (Low confidence)
Right now, we're not sure. With hooks, marker components and systems you can add logic after scene loading, but that separates data from behavior.
## The stack
### What rendering stack will it use? (Moderate confidence)
Like the rest of Bevy, `wgpu` + `bevy_render`.
`vello` is a promising option, but it's still immature and a proper exploration of the implications of adopting it needs to be performed.
### What text backend will it use? (High confidence)
`bevy_ui` is built on `cosmic-text`.
### How is input and windowing managed? (High confidence)
Like the rest of Bevy, with `winit`, `gilrs` and `bevy_input`.
### How do you work with colors? (High confidence)
Using `bevy_color`.
### How do you debug `bevy_ui` apps? (High confidence)
Using the standard tools for debugging any Bevy application: custom systems, logging, `bevy_inspector_egui`, system stepping, dev consoles, gizmos. In this shiny future, we'll have great first-party visual tools to make this easy.
If the standard tools to manage state (systems, observers and so on) are challenging to debug and scale in the context of UI, these problems will arise in game logic as well.
Building better tools across a unified stack will help all users.
### What's your plan for screen readers? (High confidence)
Screen-reader integration will be provided via `accesskit`, and will integrate with our focus-based navigation system.
### What localization crate will you use? (High confidence)
Localization will be handled using `fluent`, a high-quality Rust-native localization crate.
## Use cases
### How will world-space UI be rendered? (High confidence)
World-space UI is incredibly useful for both games and tools: providing labels, health bars, diegetic VR UIs and more.
In this shiny future, UI will use camera-driven rendering, and screen-space will just be another render target. This should unlock world-space UI, split-screen UI and more.
### Can users write custom UI shaders? (High confidence)
Yes! Just like the rest of Bevy, UI can be rendered with custom shaders.
## Subsystems
### How is text modelled within the entity hierarchy? (High confidence)
Each section of text is its own entity in the hierarchy. This entity has a single font, font family, font size and font color, which are stored together as a component on that entity.
### What's the plan for keyboard navigation? (Moderate confidence)
The concept of "focused" UI elements is core to keyboard, gamepad and screen reader navigation. For the purpose of both accessibility and out-of-the-box functionality, focus-based navigation is a core piece of the architecture.
The details of how this works still need to be designed.
### How should it support picking operations? (High confidence)
Picking operations (clicking, dragging, hovering) are shockingly complex and important to get right.
We'll be using an upstreamed `bevy_mod_picking`, coupled with some form of bubbling observers.
### What layout algorithm does it use? (Moderate confidence)
While game-engines of yore could get away with a fixed layout per screen resolution, this is too much fragile work for developers and isn't compatible with the needs of tools for making games.
Users are able to pick one of several competing layout algorithms (flexbox, grid, absolute) based on their needs at the time, which can happily co-exist in the same tree. The layout algorithm used for a given entity should be determined based on which component(s) it has.
Over time, we may add more layout algorithms to the default set, or even move some to opt-in status.
### Can users add their own layout algorithms? (Moderate confidence)
While flexbox and grid are powerful and familiar to web developers, they can be quite unintuitive and challenging to learn.
Many users yearn for alternate layout approaches, such as `morphorm` or simple anchor-based approaches.
With public enough internals and enough gumption, anything is possible.
However, we should make this fairly easy.
Taffy is working on exposing a trait-based API to make this viable: Bevy users should be able to add their own custom layout components and integrate nicely with Taffy's internals.
### How should it support keybindings? (High confidence)
Keybindings are essential for both gameplay and tools.
We will upstream something like `leafwing-input-manager` to handle this for us.
Key-bindable actions are represented as entries in an `InputMap`, which listens to raw input events and dispatches them.
When a button with a keybinding is pressed, it presses the corresponding action in the `ActionState`, rather than duplicating the logic.
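A deliberately simplified sketch of the pattern (not `leafwing-input-manager`'s actual API): raw keyboard input and UI buttons both funnel into a shared action state, and the rest of the app only ever reads actions.

```rust
use bevy::prelude::*;

/// Simplified stand-in for an `ActionState`-style resource; the real thing
/// would come from the upstreamed input-manager crate.
#[derive(Resource, Default)]
struct MenuActions {
    confirm: bool,
}

// Raw keyboard input presses the action...
fn keyboard_to_actions(keys: Res<ButtonInput<KeyCode>>, mut actions: ResMut<MenuActions>) {
    if keys.just_pressed(KeyCode::Enter) {
        actions.confirm = true;
    }
}

// ...a "Confirm" UI button's observer would set the same flag, so downstream
// logic never needs to know which input source was used.
fn handle_confirm(mut actions: ResMut<MenuActions>) {
    if actions.confirm {
        info!("confirmed!");
        actions.confirm = false;
    }
}
```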
### How does localization work? (Low confidence)
Beyond using `fluent`, we don't know!
There's a ton of complexity here: localization is not limited to text, or to languages. A flexible, asset-driven approach is likely to be the best fit: compile-time flags don't allow users to control which locale they're using.
### How can widgets be animated? (Moderate confidence)
Like everything else, with `bevy_animation`.
Beyond that, details are fuzzy.
### How are sound effects played in the UI? (High confidence)
Observers and systems can simply play sounds using `bevy_audio`, like the rest of `bevy`.
## Opinionated tradeoffs
### Will `bevy_ui` be immediate or retained? (High confidence)
`bevy_ui` will be a retained mode UI. Performance is a partial motivator for this, but ease of managing widget state in complex applications is the larger motivation.
You can of course operate it in an immediate mode by simply despawning and respawning widgets every frame, but generally that won't be idiomatic or performant.
### Should `bevy_ui` have a native look-and-feel? (High confidence)
No. Ultimately, a consistent cross-platform experience is more valuable in our domain: much easier to test and quality control.
And within the context of games (and game tools), users don't expect widgets to match the platform's native look and feel anyways.
### How much do we care about performance? (Moderate confidence)
Performance is great! If we can get it for free, we'll take it.
Serious performance problems will prompt a redesign, but squeezing out every last drop isn't the goal, as performance will generally be dominated by app logic in our core domains.
The same answer applies to binary size.
Like the rest of Bevy, we care much more about performance in realistic use cases than benchmark or minimal applications.
### How's the support for mobile (Android and iOS)? (Moderate confidence)
Mobile support is a nice-to-have: we should be sure it basically works on mobile, but full-fledged touch and gesture support is a goal for later work.
### How are structs initialized in code? (Moderate confidence)
By default, using simple struct initialization syntax over public fields.
Constants and sensible constructors are welcome in moderation.
To ease the boilerplate, you can use the `bsn!` macro, which mirrors the syntax of the `.bsn` file format. This is the preferred way to initialize complex hierarchies and create reusable widgets.
Builder methods and especially macros should otherwise be avoided.
### Will `bevy_ui` generate an HTML DOM on the web? (High confidence)
No: this is not one of our goals. Use a web-native framework if this is important to you.
### Is `bevy_ui` based on CSS? (High confidence)
While we will learn from CSS (like other UI frameworks!), `bevy_ui` is not intended to be a direct mirror of it. Though CSS is feature-rich, its API design and architecture reflect its history: built up over years without the ability to go back and clean up old systems. As a result, while *feature* parity with CSS is highly desirable, mirroring the API and architecture is explicitly not a goal.
The only exception is in our support for specific layout algorithms: `flexbox` and `grid`. In those cases, we will follow the existing spec (and naming conventions) quite closely, to make sure that existing knowledge and tools can be used effectively.
### Will Bevy ever directly adopt one of the ecosystem UI crates? (Moderate confidence)
While the core of `bevy_ui` is quite solid and unlikely to change dramatically, there's room for something higher level on top of it, akin to `bevy_widgets` or `bevy_reactive_ui`.
Will Bevy pick a winner from the ecosystem and make it official?
Different ecosystem crates are more or less suitable to upstreaming.
These tend to fall into one of a few categories:
- Third-party wrappers (like `bevy_egui`): these will not be adopted, except as a secondary UI framework in the unlikely case where we find that the tradeoffs made are different enough to be worth the maintenance burden.
- Experimental and divergent (like `quill` or `bevy_lunex`): if we find that there's unique value in the approach, we're likely to learn from their lessons and incorporate the core patterns into Bevy's code base.
- Incremental (like `kayak_ui`): these crates may make a solid base for a `bevy_widgets`, although APIs and architecture may change dramatically after they are merged.
The core considerations here are complexity and alignment with the long-term vision for Bevy's future.
Third-party UI crates can afford a much steeper learning curve than Bevy's native solution: users who don't want or need those patterns can simply not pick up an ecosystem crate, but if a new Bevy user is met with a wall of specialized jargon when trying to get started with UI they are likely to bounce off entirely.
Smaller reusable systems (like a focus system, localization framework or an opt-in reactivity layer) are much more likely to get upstreamed directly.