# Observable and Configurable Resolution Pipelines

## Introduction

An important lesson learned from the legacy OLM implementation is that a resolver deeply integrated into the operator creates several challenges for users and support. To name a few:

- Modifying resolver behavior, e.g. for new types of constraints, is an adventure that takes us deep into the OLM operator code, which increases both complexity and risk
- Resolver behavior is opaque and hard for users and support to debug (at present it is hard to determine the input that led to a particular resolution outcome)
- It is impossible to query the resolver, or even tune it, to a user's or cluster's particularities

This document describes a possible way to structure the resolver so that it is configurable, extensible, auditable, and observable. By thinking of the input to the solver as a pipeline through which variables are produced, transformed, and handed to the solver, we can surface the pipeline structure to the user, thereby making the process of translating bundles into variables/constraints transparent and configurable. Each node in the pipeline has a single specific responsibility. This architecture therefore makes it easy to modify resolver behavior in a way that is additive and independent of the other nodes/components. Furthermore, by leveraging message passing between these nodes/components, the entire pipeline becomes observable (and replayable) by collecting these events.

## Definitions

* **Variable**: The unit of input into the solver. It has a unique ID and a collection of constraints relating it to other variables. For instance, a variable representing a bundle might have an ID (e.g. the bundle's package and version) and dependency constraints against other bundle variables.
* **VariableProducer**: A pipeline component that generates variables
* **VariableProcessor**: A pipeline component that takes variables as input and outputs variables
* **VariableConsumer**: A pipeline component that only consumes variables and produces nothing
* **Pipeline**: A directed acyclic graph (DAG) of pipeline components whose roots are producers and whose leaves are consumers

## Resolver as a Pipeline

The current OLM resolver works roughly in the following way:

1. Collects `Subscriptions` and translates them into `required package` variables that are mandatory and depend on the bundles that can fulfill the required package (e.g. fit the given channel and version range)
2. Collects `ClusterServiceVersions` and translates them into `installed package` variables that are mandatory and depend on the variable representing the specific bundle that is installed
3. Collects `bundle` variables that represent each of the bundles in the repository. These bundles can have dependencies on other bundle variables (e.g. to fulfill their package or GVK dependencies)
4. Adds `global uniqueness` constraints to ensure that at most one bundle per package and at most one bundle per GVK is selected

These steps are articulated in code and hidden behind the resolver interface; modifying them is not a trivial exercise. The pipeline representation of this process can be visualized as follows:

![](https://i.imgur.com/wOrk8x1.png)

1. The `Installed Packages` and `Required Packages` producers create and send their variables to the `Solver` and to the `Bundles and Dependencies` processor
2. The `Bundles and Dependencies` processor examines the incoming variables, deduces the bundles attached to those variables, and produces the bundle variables for their dependencies. It sends these variables to the `Global Constraints` and `Solver` processors
3. The `Global Constraints` processor keeps track of the bundles for a particular package and GVK by examining the bundle variables given to it. Once it has examined all bundle variables, it produces the global constraint variables and sends them to the `Solver`
4. The `Solver` processor keeps track of all the variables it is given. Once it has them all, it hands them to the solver for resolution and outputs the selection to the `Output Collector` consumer
5. The `Output Collector` keeps track of all variables given to it. Once it has finished collecting, the pipeline is complete and the output can be examined

### Pipeline Nodes

As previously mentioned, the pipeline is a DAG rooted at `Producers`, with `Consumers` at the leaves. There are three types of nodes:

- **Producers**: generate data events for `Consumers` or `Processors`. Once a producer has produced its last item, it concludes
- **Processors**: consume data events to produce different data events. Once all of a processor's data sources have concluded, it concludes
- **Consumers**: consume data events but produce nothing new; they have no output edges. A consumer concludes once all of its input sources have concluded

Nodes can be in one of (at least) the following states:

- **Inactive**: ready to start its process, but not yet started
- **Active**: its process is ongoing
- **Successful**: it has completed its process without errors
- **Failed**: it has completed its process with an error
- **Aborted**: it has aborted its process due to upstream or downstream errors, or due to context expiry

### Events

The nodes in the pipeline communicate through events. Each event contains a header with information that would allow us to reconstruct the execution of the pipeline, such as:

- The time the event was created
- Who created the event
- Who was the sender and who was the receiver
- Custom metadata (string->string map): could include things like a pipeline ID and an execution ID
- A unique event ID

Events can be of different types, which describe the kind of data the event carries. We'd need at least two types of events: data and error. Data events carry the variables, while error events surface processing errors.

### Error Handling

Any of the steps can fail at any time during processing. A pipeline can take different postures towards failure. For a start, it would be simple and sufficient to abort execution on error. If a node encounters an error, it can broadcast an error event to all pipeline nodes, which in turn abort their execution. If a pipeline fails, the state of each node can be examined to find the culprit(s) and their reasons.

### Debugging

A debug channel can be given to a pipeline such that every event generated by the pipeline is also sent down the debug channel. This gives a complete overview of the pipeline execution. It could even be possible to replay source events (in the order they were created) through the pipeline to reproduce executions.

### Pipeline Modelling and Extensibility

With a library of node types (e.g. `required-package-producer`, `global-variable-processor`, etc.), it is easy to imagine a declarative pipeline configuration defined in YAML, reconciled to ensure it meets certain conditions (e.g. it is a DAG), and used to fire off executions against that model. Pipeline configuration could be immutable, in the sense that any change to a pipeline results in a new pipeline ID that can be included in the event header metadata. Furthermore, every execution of the same pipeline can have its own unique ID, also included in the event header metadata. This could further facilitate support and debugging efforts.

By adding different types of nodes, the pipeline can be further configured and extended. For instance:

- **static-variable-producer**: produces static variables (which can come from declarative sources). This becomes an easy way to gently nudge the resolver in one direction or another
- **online-variable-processor/producer/consumer**: a named on-cluster process that provides a standard API for event handling
- **declarative-variable-processor**: a declaratively configurable processor that covers a certain range of possibilities, e.g. filtering or mutating specific variables

## Downsides

This approach completely exposes the way the solver gets its variables. There is therefore a significant blast radius here, which would need to be contained: a small mistake could have large effects on the cluster. Because this approach exposes how resolution works, it also demands more of the user's understanding of the system (at least in cases where they might want to change it).
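The producer/processor/consumer nodes, the event envelope, and the debug channel described above can be sketched minimally. The following is an illustrative Python sketch only (a real implementation would likely live in Go alongside OLM); all names here (`Event`, `Producer`, `run_pipeline`, the `bundles-and-dependencies` transform) are hypothetical, and the debug "channel" is modelled as a plain list:

```python
import itertools
import queue
from dataclasses import dataclass, field
from datetime import datetime, timezone

_ids = itertools.count()

@dataclass
class Event:
    """Event envelope: header fields sufficient to reconstruct an execution."""
    sender: str
    kind: str                      # "data", "error", or "conclude"
    payload: object = None
    event_id: int = field(default_factory=lambda: next(_ids))
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    metadata: dict = field(default_factory=dict)   # e.g. pipeline ID, execution ID

class Node:
    def __init__(self, name, debug=None):
        self.name, self.state = name, "inactive"
        self.inbox = queue.Queue()
        self.outputs = []          # inboxes of downstream nodes (DAG edges)
        self.debug = debug         # optional debug channel (a list here)

    def send(self, event):
        if self.debug is not None:
            self.debug.append(event)   # every event is also observable/replayable
        for out in self.outputs:
            out.put(event)

class Producer(Node):
    def run(self, items):
        self.state = "active"
        for item in items:
            self.send(Event(self.name, "data", item))
        self.send(Event(self.name, "conclude"))    # signal downstream nodes
        self.state = "successful"

class Processor(Node):
    def run(self, transform, n_sources=1):
        self.state = "active"
        concluded = 0
        while concluded < n_sources:
            ev = self.inbox.get()
            if ev.kind == "conclude":
                concluded += 1
            elif ev.kind == "error":               # abort on upstream error
                self.state = "aborted"
                self.send(ev)
                return
            else:
                self.send(Event(self.name, "data", transform(ev.payload)))
        self.send(Event(self.name, "conclude"))
        self.state = "successful"

class Consumer(Node):
    def run(self, n_sources=1):
        self.state, self.collected, concluded = "active", [], 0
        while concluded < n_sources:
            ev = self.inbox.get()
            if ev.kind == "conclude":
                concluded += 1
            elif ev.kind == "data":
                self.collected.append(ev.payload)
        self.state = "successful"

def run_pipeline(variables):
    """Wire producer -> processor -> consumer and run them in sequence."""
    debug = []
    prod = Producer("required-packages", debug)
    proc = Processor("bundles-and-dependencies", debug)
    sink = Consumer("output-collector", debug)
    prod.outputs.append(proc.inbox)
    proc.outputs.append(sink.inbox)
    prod.run(variables)
    proc.run(lambda v: f"bundle:{v}")   # stand-in for variable translation
    sink.run()
    return sink.collected, debug
```

After a run, the `debug` list holds every event in creation order, which is exactly what the debugging section requires for reconstructing or replaying an execution, and each node's `state` can be inspected post-failure as described under error handling.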
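A declarative pipeline configuration would need to be reconciled to verify that it actually forms a DAG before executions are fired against it. A minimal acyclicity check (Kahn's algorithm) over a hypothetical node/edge declaration could look like the following sketch; the node names mirror the pipeline figure above but are illustrative:

```python
from collections import deque

def is_dag(nodes, edges):
    """Kahn's algorithm: True iff the declared pipeline graph is acyclic."""
    indegree = {n: 0 for n in nodes}
    adjacency = {n: [] for n in nodes}
    for src, dst in edges:
        adjacency[src].append(dst)
        indegree[dst] += 1
    # Start from the roots (producers): nodes with no incoming edges.
    ready = deque(n for n, d in indegree.items() if d == 0)
    visited = 0
    while ready:
        node = ready.popleft()
        visited += 1
        for nxt in adjacency[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    # If a cycle exists, the nodes on it never reach indegree 0.
    return visited == len(nodes)
```

A reconciler could run this check on every configuration change and, since configurations are immutable, mint the new pipeline ID only when the check passes.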
