# Meeting on 05-10-2024

## Attendees

* Atul
* Arndt
* Justin
* Joe
* George

## Agenda

* [Token Buckets](https://datatracker.ietf.org/doc/html/draft-richer-wimse-token-container-00)

## Notes

* (Atul) Explore the "~" separated token bucket
* (Justin) The concept is really sound:
  * It's not a single cryptographic entity
  * Because of that, JOSE is not an option
  * Extending that concept into different protocol stacks makes a ton of sense
* (Justin) Translation and type model for non-JSON formats
* (Justin) You define types, and define how they translate to a specific format
* (Justin) If you augment that with "this specific field is so-and-so in this target format"
* (Justin) Then you have a specific mapping without data loss
* (Atul) This possibly needs to be elevated to a BoF level
* (Justin) There are reasonable arguments for doing it in WIMSE or OAuth, but it is out of charter for both of those
* (Atul) WIMSE is a more natural fit
* (Joe) Question I have about translation: is this about JSON or something else?
* (Justin) Both approaches are valid if you are careful. What I mentioned earlier was JSON as a source format, because it has some basic types and structure
* (Joe) Are you proposing to have a schema for the JSON or not?
* (Justin) If you define it formally and have tooling around it, no one is going to use the tooling. People would rather use the JSON and the schema definition
* (Joe) If there isn't a master description...
* (Justin) Requiring tools to understand JSON-LD is going to be a heavy lift
* (Atul) We could use protobufs
* (Joe) Agree
* (Justin) protobufs and CDDL are good candidates. Fundamentally you are expressing the types and their interpretation in specific formats
* (Justin) The token bucket thing is a little trickier because you want to represent a graph, and none of these systems can express a graph
* (Justin) So we need something extra to express a graph. We might define a type system and an encoder for each target
* (Justin) We tried this in DID documents
* (Justin) DID docs aren't graphs, but they are objects strung together with links
* (Atul) If you need to express a graph, you need the linkages to also be cryptographically sound
* (Justin) I should be able to attest to the state of a node even if I've not added it
* (Justin) You can then establish the integrity of the system by traversing each node
* (Joe) There are a lot of graph formats. What are you proposing? What is the complexity that you are trying to address?
* (Justin) Just expressing it as a "call chain" is an optimization that may not capture the complexity
* (Justin) Think of a map-reduce setup: something dispatches work to each node, and the reducer needs to attest to all the results from each node
* (Joe) In that case it is a DAG
* (Justin) Structured fields are very limited in what they can express, so they won't be useful for expressing token buckets in HTTP headers
* (Justin) So we need some way to encode it such that it can be expressed in a header
* (Atul) Can we still define it as a tilde-separated list?
* (Justin) But then you still need to define the node format. Also, you need to Base64-decode and parse the whole set of tokens before you can make sense of it
* (Joe) Are there global rules that apply to the graph?
* (Justin) The rules may be application-specific
* (Justin) The graph should only define the nodes and how they connect with each other
* (Joe) Those are the rules that need to be incorporated in the graph
* (Justin) In my proposal the linkages are established using hashes of the parent nodes, which are included in the child nodes (see the sketch after these notes)
* (Justin) The only restriction is that the parent needs to be in the graph when you add the child
* (Justin) Play with the Java implementation [here](https://github.com/bspk/token-bucket)
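Below is a minimal sketch of the hash-linked, tilde-separated bucket discussed above. It assumes a made-up node layout (an opaque payload plus base64url-encoded SHA-256 hashes of the parent nodes); it is not the draft's wire format or the bspk/token-bucket API, and it omits signing entirely. It only illustrates the two points from the notes: child nodes embed hashes of their parents, and a parent must already be in the bucket before a child that references it can be added.

```java
// Illustrative sketch only (requires Java 16+ for records); the node format and
// field layout here are assumptions, not the draft's or the reference implementation's.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

public class TokenBucketSketch {

    // A node: an opaque payload plus the hashes of the nodes it descends from.
    record Node(String payload, List<String> parentHashes) {

        // Serialize the node as "payload|parentHash1,parentHash2,..." and base64url-encode it.
        String encode() {
            String joined = payload + "|" + String.join(",", parentHashes);
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(joined.getBytes(StandardCharsets.UTF_8));
        }

        // SHA-256 hash of the encoded node; child nodes embed this value to link back to it.
        String hash() throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(encode().getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
        }
    }

    public static void main(String[] args) throws Exception {
        // A small DAG, like the map-reduce example: a dispatcher fans out to two
        // workers, and a reducer attests to both workers' results.
        Node dispatcher = new Node("dispatcher-token", List.of());
        Node workerA = new Node("worker-a-token", List.of(dispatcher.hash()));
        Node workerB = new Node("worker-b-token", List.of(dispatcher.hash()));
        Node reducer = new Node("reducer-token", List.of(workerA.hash(), workerB.hash()));

        // The bucket goes on the wire as one tilde-separated string (e.g. in a header).
        List<Node> bucket = List.of(dispatcher, workerA, workerB, reducer);
        List<String> encoded = new ArrayList<>();
        for (Node n : bucket) encoded.add(n.encode());
        System.out.println("Wire form: " + String.join("~", encoded));

        // Integrity check: every parent hash named by a node must match a node
        // that appears earlier in the bucket (parents must precede children).
        List<String> seen = new ArrayList<>();
        for (Node n : bucket) {
            for (String p : n.parentHashes()) {
                if (!seen.contains(p)) throw new IllegalStateException("dangling parent link");
            }
            seen.add(n.hash());
        }
        System.out.println("All parent links resolve.");
    }
}
```

Note that the receiver has to base64url-decode and parse every entry before the graph structure is visible, which is exactly the parsing cost raised in the notes about the tilde-separated form.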