# TODO
## Step 1 ~> goal is to do this by Jun 20th, rajas -> DONE https://github.com/kubernetes/enhancements/pull/2094/commits/6aff7fc2e988204fb8e5db65ec3d2bc364a95105
We are migrating this old KEP to the new format... We need to do these steps...
- look at the new kep format https://github.com/kubernetes/enhancements/blob/be9404fa75ca21d96d54fddf124bf852667a6612/keps/sig-storage/2268-non-graceful-shutdown/README.md
- Grab a prod-readiness reviewer file https://github.com/kubernetes/enhancements/blob/master/keps/prod-readiness/sig-network/2079.yaml
- map the contents in this file to that format
- add a YAML file similar to the one in the above KEP (title, KEP number, author, ...)
- make a PRR-reviewer file; it needs to be in a new directory
- template: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template
## Step 2 ~> parallelized across kpng folks; rajas, jay, nabarun, anush coordinate
- propose breaking code up into separate repos ...
- CONTENT CHANGES
- adding a section for each type of proxy
- windows kernel, iptables: use the old incremental (sink) logic for backward compatibility, due to their universal adoption https://github.com/kubernetes-sigs/kpng/blob/master/backends/iptables/sink.go#L61
- ipvs, nft: use the new fullstate model https://github.com/kubernetes-sigs/kpng/blob/master/backends/nft/nft.go#L92 (see the sketch after this list)
- Add code logistics and a 1-year timeline for splitting out separate backends
- for each backend (`iptables`, `ipvs`, `win-kernel`, `nft`, ...):
- file issue in k/org to make new repo
- ping nabarun if you need help
- copy the code into the above repo
- output:
- kubernetes/kube-proxy-iptables
- kubernetes/kube-proxy-ipvs
- kubernetes/kube-proxy-win-kernel
- kubernetes/kube-proxy-userspace-win
- kubernetes/kube-proxy-userspace-linux
- kubernetes/kube-proxy-nft
- kubernetes/kube-proxy-ebpf
- kubernetes/kube-proxy (diffstore, protobufs, API, kpng client, global model)? Or keep it in core? A monorepo that behaves the same as the old kube-proxy and vendors in all the backends...
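
To make the two integration styles above concrete, here is a minimal, hypothetical sketch of the
difference between a sink-style backend (incremental operations, internal bookkeeping) and a
fullstate backend (whole node-local state on every change). The interface and type names below are
illustrative only; they are not the actual kpng APIs.

```golang
package main

import "fmt"

// ServiceEndpoints stands in for the node-local view of a Service and its
// endpoints (illustrative type, not the real kpng model).
type ServiceEndpoints struct {
	Name      string
	Endpoints []string
}

// SinkBackend is the incremental style (iptables, windows kernel): the backend
// is notified of each individual change and must keep its own state in sync.
type SinkBackend interface {
	SetService(s *ServiceEndpoints)
	DeleteService(name string)
	Sync() // flush pending changes to the kernel
}

// FullStateBackend is the fullstate style (ipvs, nft): the backend receives the
// complete node-local state on every change and rebuilds its ruleset from it,
// typically via an atomic-replace kernel API, with no internal bookkeeping.
type FullStateBackend interface {
	Callback(fullState []*ServiceEndpoints)
}

// printBackend is a trivial fullstate backend used only to show the shape of a call.
type printBackend struct{}

func (printBackend) Callback(fullState []*ServiceEndpoints) {
	for _, se := range fullState {
		fmt.Println(se.Name, se.Endpoints)
	}
}

func main() {
	var b FullStateBackend = printBackend{}
	b.Callback([]*ServiceEndpoints{{Name: "default/kubernetes", Endpoints: []string{"10.0.0.1:6443"}}})
}
```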
## Step 3
- announce deprecation of kube-proxy
- ... rajas, mikael --> poll folks to add their names as AUTHORS if they want to contribute ...
- reach out to dan about KEP as it evolves
## Summary
At the beginning, `kube-proxy` was designed to handle the translation of Service objects to OS-level
resources. Implementations have gone from userland to iptables and now ipvs. With the growth of the
Kubernetes project, more implementations came to life, for instance with eBPF, and often in relation
to other goals (Calico to manage the network overlay, Cilium to manage app-level security, MetalLB
to provide an external LB for bare-metal clusters, etc.).

Alongside this Cambrian explosion of third-party software, the Service object itself received new
concepts to improve the abstraction, for instance to express topology. Thus, third-party
implementations are expected to keep up and become more complex over time, even if their core
doesn't change (i.e., the eBPF translation layer itself is not affected).

This KEP is born from the conviction that more decoupling between the Service object and the actual
implementations is required, achieved by introducing an intermediate, node-level abstraction
provider. This abstraction is expected to be the result of applying Kubernetes' `Service` semantics
and business logic to a simpler, more stable API.
############ OLD KEP
# WORK IN PROGRESS
<!--
**Note:** When your KEP is complete, all of these comment blocks should be removed.
To get started with this template:
- [ ] **Pick a hosting SIG.**
Make sure that the problem space is something the SIG is interested in taking
up. KEPs should not be checked in without a sponsoring SIG.
- [ ] **Create an issue in kubernetes/enhancements**
When filing an enhancement tracking issue, please make sure to complete all
fields in that template. One of the fields asks for a link to the KEP. You
can leave that blank until this KEP is filed, and then go back to the
enhancement and add the link.
- [ ] **Make a copy of this template directory.**
Copy this template into the owning SIG's directory and name it
`NNNN-short-descriptive-title`, where `NNNN` is the issue number (with no
leading-zero padding) assigned to your enhancement above.
- [ ] **Fill out as much of the kep.yaml file as you can.**
At minimum, you should fill in the "Title", "Authors", "Owning-sig",
"Status", and date-related fields.
- [ ] **Fill out this file as best you can.**
At minimum, you should fill in the "Summary" and "Motivation" sections.
These should be easy if you've preflighted the idea of the KEP with the
appropriate SIG(s).
- [ ] **Create a PR for this KEP.**
Assign it to people in the SIG who are sponsoring this process.
- [ ] **Merge early and iterate.**
Avoid getting hung up on specific details and instead aim to get the goals of
the KEP clarified and merged quickly. The best way to do this is to just
start with the high-level sections and fill out details incrementally in
subsequent PRs.
Just because a KEP is merged does not mean it is complete or approved. Any KEP
marked as `provisional` is a working document and subject to change. You can
denote sections that are under active debate as follows:
```
<<[UNRESOLVED optional short context or usernames ]>>
Stuff that is being argued.
<<[/UNRESOLVED]>>
```
When editing KEPS, aim for tightly-scoped, single-topic PRs to keep discussions
focused. If you disagree with what is already in a document, open a new PR
with suggested changes.
One KEP corresponds to one "feature" or "enhancement" for its whole lifecycle.
You do not need a new KEP to move from beta to GA, for example. If
new details emerge that belong in the KEP, edit the KEP. Once a feature has become
"implemented", major changes should get new KEPs.
The canonical place for the latest set of instructions (and the likely source
of this file) is [here](/keps/NNNN-kep-template/README.md).
**Note:** Any PRs to move a KEP to `implementable`, or significant changes once
it is marked `implementable`, must be approved by each of the KEP approvers.
If none of those approvers are still appropriate, then changes to that list
should be approved by the remaining approvers and/or the owning SIG (or
SIG Architecture for cross-cutting KEPs).
-->
# KEP-20201010: rework kube-proxy architecture
# UPDATE
As of 4/16/2022
- the KPNG subproject has a working implementation of much of this proposal: https://github.com/kubernetes-sigs/kpng
- this implementation includes Windows, ipvs, iptables, nft, and userspace-based Linux proxies
- third parties have also published external KPNG backends validating this implementation, such as https://kubernetes.io/blog/2021/10/18/use-kpng-to-write-specialized-kube-proxiers/
- the KPNG project meetings are published at https://github.com/kubernetes/community/tree/master/sig-network
- For the implemented backends we have conformance and sig-network tests which run and are generally in good health, with the exception of a few corner cases and bugs that the community is actively working on
- The status of this KEP hasn't been fully maintained because our efforts have been diverted to getting a concrete prototype working; we encourage others to help us complete this KEP as well as the other broad work associated with rearchitecting kube-proxy
<!--
A table of contents is helpful for quickly jumping to sections of a KEP and for
highlighting any additional information provided beyond the standard KEP
template.
Ensure the TOC is wrapped with
<code>&lt;!-- toc --&gt;&lt;!-- /toc --&gt;</code>
tags, and then generate with `hack/update-toc.sh`.
-->
<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals](#non-goals)
- [Proposal](#proposal)
- [User Stories (Optional)](#user-stories-optional)
- [Story 1](#story-1)
- [Story 2](#story-2)
- [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
- [Risks and Mitigations](#risks-and-mitigations)
- [Design Details](#design-details)
- [Test Plan](#test-plan)
- [Graduation Criteria](#graduation-criteria)
- [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
- [Version Skew Strategy](#version-skew-strategy)
- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
- [Feature Enablement and Rollback](#feature-enablement-and-rollback)
- [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
- [Monitoring Requirements](#monitoring-requirements)
- [Dependencies](#dependencies)
- [Scalability](#scalability)
- [Troubleshooting](#troubleshooting)
- [Implementation History](#implementation-history)
- [Drawbacks](#drawbacks)
- [Alternatives](#alternatives)
- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
<!-- /toc -->
## Release Signoff Checklist
<!--
**ACTION REQUIRED:** In order to merge code into a release, there must be an
issue in [kubernetes/enhancements] referencing this KEP and targeting a release
milestone **before the [Enhancement Freeze](https://git.k8s.io/sig-release/releases)
of the targeted release**.
For enhancements that make changes to code or processes/procedures in core
Kubernetes—i.e., [kubernetes/kubernetes], we require the following Release
Signoff checklist to be completed.
Check these off as they are completed for the Release Team to track. These
checklist items _must_ be updated for the enhancement to be released.
-->
Items marked with (R) are required *prior to targeting to a milestone / release*.
- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [ ] (R) KEP approvers have approved the KEP status as `implementable`
- [ ] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
- [ ] (R) Graduation criteria is in place
- [ ] (R) Production readiness review completed
- [ ] Production readiness review approved
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
<!--
**Note:** This checklist is iterative and should be reviewed and updated every time this enhancement is being considered for a milestone.
-->
[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://git.k8s.io/enhancements
[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
[kubernetes/website]: https://git.k8s.io/website
## Summary
At the beginning, `kube-proxy` was designed to handle the translation of Service objects to OS-level
resources. Implementations have gone from userland to iptables and now ipvs. With the growth of the
Kubernetes project, more implementations came to life, for instance with eBPF, and often in relation
to other goals (Calico to manage the network overlay, Cilium to manage app-level security, MetalLB
to provide an external LB for bare-metal clusters, etc.).

Alongside this Cambrian explosion of third-party software, the Service object itself received new
concepts to improve the abstraction, for instance to express topology. Thus, third-party
implementations are expected to keep up and become more complex over time, even if their core
doesn't change (i.e., the eBPF translation layer itself is not affected).

This KEP is born from the conviction that more decoupling between the Service object and the actual
implementations is required, achieved by introducing an intermediate, node-level abstraction
provider. This abstraction is expected to be the result of applying Kubernetes' `Service` semantics
and business logic to a simpler, more stable API.
## Motivation
<!--
This section is for explicitly listing the motivation, goals and non-goals of
this KEP. Describe why the change is important and the benefits to users. The
motivation section can optionally provide links to [experience reports] to
demonstrate the interest in a KEP within the wider Kubernetes community.
[experience reports]: https://github.com/golang/go/wiki/ExperienceReports
-->
### Goals
- provide a node-level abstraction of the cluster-wide `Service` semantics through an API
- allow easier, more stable proxy implementations that don't need updates when `Service` business
logic changes
- provide a client library with minimal dependencies
- include equivalent implementations of in-project ones (userland, iptables and ipvs)
- (optional) help proxy implementations using the same subsystem (ie iptables) to cooperate more
easily
<!--
List the specific goals of the KEP. What is it trying to achieve? How will we
know that this has succeeded?
-->
### Non-Goals
- provide equivalent implementations of third-party ones
<!--
What is out of scope for this KEP? Listing non-goals helps to focus discussion
and make progress.
-->
## Proposal
<!--
This is where we get down to the specifics of what the proposal actually is.
This should have enough detail that reviewers can understand exactly what
you're proposing, but should not include things like API designs or
implementation. The "Design Details" section below is for the real
nitty-gritty.
-->
Rewrite the kube-proxy to be a "localhost" gRPC API provider that will be accessible as usual via
TCP (`127.0.0.1:12345`) and/or via a socket (`unix:///path/to/proxy.sock`).
- it will connect to the API server and watch resources, like the current proxy;
- then, it will process them, applying Kubernetes specific business logic like topology computation
relative to the local host;
- finally, it will provide the result of this computation to clients via a watchable gRPC API.
This decoupling allows kube-proxy and the implementations to evolve on their own timeframes. For
instance, introducing optimizations like EndpointSlice or new business semantics like Topology
does not trigger a rebuild/release of any proxy implementation.
The idea is to send the full state to the client, so implementations don't have to do
diff-processing or maintain any internal state. This should yield simple implementations and
reliable results while still performing well, since many kernel network-level objects are updated
via atomic replace APIs. It also protects against slow readers, since no stream has to be buffered.
Since the node-local state computed by the new proxy will be simpler and node-specific, it will
only change when the result for the current node actually changes. Since there's less data in the
local state, its change frequency is reduced compared to the cluster state. Testing on actual
clusters showed a reduction in change-event frequency of two orders of magnitude.
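
As a minimal sketch of what the full-state model means for a backend, the function below rebuilds
an entire nft ruleset from the received state and applies it in a single atomic transaction; there
is no diffing and no state carried between calls. This is illustrative only: it assumes the
`localnetv1` types from the draft implementation described below, an `nft` binary on the node, and
a hypothetical table name.

```golang
package main

import (
	"bytes"
	"fmt"
	"os/exec"

	"github.com/mcluseau/kube-proxy2/pkg/api/localnetv1"
)

// applyFullState renders the whole ruleset from the full node-local state and
// applies it atomically with `nft -f -`. A crash or restart needs no recovery
// logic: the next full state is simply rendered and applied again.
// (Hypothetical sketch; the table layout and rule rendering are placeholders.)
func applyFullState(items []*localnetv1.ServiceEndpoints) error {
	var rules bytes.Buffer
	// Re-declare and flush the table, then redefine its full contents; nft
	// applies the whole script as a single atomic transaction.
	rules.WriteString("add table ip k8s_proxy\n")
	rules.WriteString("flush table ip k8s_proxy\n")
	rules.WriteString("table ip k8s_proxy {\n")
	for range items {
		// a real backend would render chains and map elements for each item here
	}
	rules.WriteString("}\n")

	cmd := exec.Command("nft", "-f", "-")
	cmd.Stdin = &rules
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("nft apply failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := applyFullState(nil); err != nil {
		fmt.Println(err)
	}
}
```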
### User Stories (Optional)
<!--
Detail the things that people will be able to do if this KEP is implemented.
Include as much detail as possible so that people can understand the "how" of
the system. The goal here is to make this feel real for users without getting
bogged down.
-->
#### Story 1
TBD (Calico eBPF)
#### Story 2
TBD (node-local cluster DNS provider)
### Notes/Constraints/Caveats (Optional)
<!--
What are the caveats to the proposal?
What are some important details that didn't come across above?
Go in to as much detail as necessary here.
This might be a good place to talk about core concepts and how they relate.
-->
- sending the full state could be resource-consuming on big clusters, but it should still be O(1)
  relative to the actual kernel definitions (the complexity of what the node has to handle cannot
  be reduced without losing functionality or correctness).
### Risks and Mitigations
<!--
What are the risks of this proposal, and how do we mitigate? Think broadly.
For example, consider both security and how this will impact the larger
Kubernetes ecosystem.
How will security be reviewed, and by whom?
How will UX be reviewed, and by whom?
Consider including folks who also work outside the SIG or subproject.
-->
## Design Details
<!--
This section should contain enough information that the specifics of your
change are understandable. This may include API specs (though not always
required) or even code snippets. If there's any ambiguity about HOW your
proposal will be implemented, this is the place to discuss them.
-->
A [draft implementation] exists and some [performance testing] has been done.
[draft implementation]: https://github.com/mcluseau/kube-proxy2/
[performance testing]: https://github.com/mcluseau/kube-proxy2/blob/master/doc/proposal.md
The watchable API will be long-polling based, taking the last known state info and returning a
stream of objects. Proposed definition:
```proto
service Endpoints {
    // Returns all the endpoints for this node.
    rpc Next (NextFilter) returns (stream NextItem);
}

message NextFilter {
    // Unique instance ID to manage proxy restarts
    uint64 InstanceID = 1;
    // The latest revision we're aware of (0 at first)
    uint64 Rev = 2;
}

message NextItem {
    // Filter to use to get the next notification (first item in stream)
    NextFilter Next = 1;
    // A service endpoints item (any item after the first)
    ServiceEndpoints Endpoints = 2;
}
```
When the proxy starts, it will generate a random InstanceID, and have Rev at 0. So, a client
(re)connecting will get the new state either after a proxy restart or when an actual change occurs.
The proxy will never send a partial state, only full states. This means it waits for all its
Kubernetes watchers to be synced before going to Rev 1.
The first NextItem in the stream will be the state info required for the next polling call, and any
subsequent item will be an actual state object. The stream is closed when the full state has been
sent.
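
For illustration, a bare-bones consumer could drive this protocol directly, as sketched below. The
generated stub (`localnetv1.NewEndpointsClient`) and the field names are assumed from the proto
definition above, and the dial target matches the TCP example from the proposal; real consumers
would normally use the client library described next.

```golang
package main

import (
	"context"
	"io"
	"log"

	"google.golang.org/grpc"

	"github.com/mcluseau/kube-proxy2/pkg/api/localnetv1"
)

func main() {
	// Dial the node-local proxy API (TCP here; a unix:// socket works too).
	conn, err := grpc.Dial("127.0.0.1:12345", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	epc := localnetv1.NewEndpointsClient(conn)

	// Zero InstanceID/Rev: ask for the current full state.
	filter := &localnetv1.NextFilter{}

	for {
		stream, err := epc.Next(context.Background(), filter)
		if err != nil {
			log.Fatal(err)
		}

		state := []*localnetv1.ServiceEndpoints{}
		for {
			item, err := stream.Recv()
			if err == io.EOF {
				break // the full state has been sent
			}
			if err != nil {
				log.Fatal(err)
			}
			if item.Next != nil {
				// First item in the stream: the filter to use for the next poll.
				filter = item.Next
				continue
			}
			state = append(state, item.Endpoints)
		}

		log.Printf("received full state with %d services", len(state))
		// hand `state` to the backend here
	}
}
```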
The client library abstracts those details away, providing the full state after each change. It
also includes a default Run function that sets up the default flags, parses them and runs the
client, allowing very simple clients like this one:
```golang
package main

import (
	"fmt"
	"os"
	"time"

	"github.com/mcluseau/kube-proxy2/pkg/api/localnetv1"
	"github.com/mcluseau/kube-proxy2/pkg/client"
)

func main() {
	client.Run(printState)
}

func printState(items []*localnetv1.ServiceEndpoints) {
	fmt.Fprintln(os.Stdout, "#", time.Now())
	for _, item := range items {
		fmt.Fprintln(os.Stdout, item)
	}
}
```
The currently proposed interface for the lower-level client is as follows:
```godoc
package client // import "github.com/mcluseau/kube-proxy2/pkg/client"
type EndpointsClient struct {
// Target is the gRPC dial target
Target string
// InstanceID and Rev are the latest known state (used to resume a watch)
InstanceID uint64
Rev uint64
// ErrorDelay is the delay before retrying after an error.
ErrorDelay time.Duration
// Has unexported fields.
}
EndpointsClient is a simple client to kube-proxy's Endpoints API.
func New(flags FlagSet) (epc *EndpointsClient)
New returns a new EndpointsClient with values bound to the given flag-set
for command-line tools. Other needs can use `&EndpointsClient{...}`
directly.
func (epc *EndpointsClient) Cancel()
Cancel will cancel this client, quickly closing any call to Next.
func (epc *EndpointsClient) CancelOn(signals ...os.Signal)
CancelOn make the given signals to cancel this client.
func (epc *EndpointsClient) CancelOnSignals()
CancelOnSignals make the default termination signals to cancel this client.
func (epc *EndpointsClient) DefaultFlags(flags FlagSet)
DefaultFlags registers this client's values to the standard flags.
func (epc *EndpointsClient) Next() (items []*localnetv1.ServiceEndpoints, canceled bool)
Next returns the next set of ServiceEndpoints, waiting for a new revision as
needed. It's designed to never fail and will always return latest items,
unless canceled.
```
A good example of how to use this low-level API is `client.Run` itself:
```golang
// Run the client with the standard options
func Run(handlers ...HandlerFunc) {
	once := flag.Bool("once", false, "only one fetch loop")

	epc := New(flag.CommandLine)
	flag.Parse()

	epc.CancelOnSignals()

	for {
		items, canceled := epc.Next()
		if canceled {
			return
		}

		for _, handler := range handlers {
			handler(items)
		}

		if *once {
			klog.Infof("to resume this watch, use --instance-id %d --rev %d", epc.InstanceID, epc.Rev)
			return
		}
	}
}
```
- use the Docker syntax to express binding, allowing Unix sockets with
  `unix:///run/kubernetes/proxy.sock`
- may save some syscalls for in-process implementations by using
  `google.golang.org/grpc/test/bufconn`, but that sounds like premature optimization
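
A sketch of how a client could honor that binding syntax when dialing (purely illustrative; the
helper below is not part of the proposal, and recent grpc-go versions can also resolve `unix://`
targets natively):

```golang
package main

import (
	"fmt"
	"net"
	"strings"
	"time"

	"google.golang.org/grpc"
)

// dialTarget connects to the proxy API given a Docker-style target: either
// "unix:///run/kubernetes/proxy.sock" or a plain "host:port" TCP address.
// (Illustrative helper, not part of the proposal.)
func dialTarget(target string) (*grpc.ClientConn, error) {
	if path := strings.TrimPrefix(target, "unix://"); path != target {
		return grpc.Dial(path, grpc.WithInsecure(),
			grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
				return net.DialTimeout("unix", addr, timeout)
			}))
	}
	return grpc.Dial(target, grpc.WithInsecure())
}

func main() {
	conn, err := dialTarget("unix:///run/kubernetes/proxy.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("dialing", conn.Target())
}
```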
### Test Plan
<!--
**Note:** *Not required until targeted at a release.*
Consider the following in developing a test plan for this enhancement:
- Will there be e2e and integration tests, in addition to unit tests?
- How will it be tested in isolation vs with other components?
No need to outline all of the test cases, just the general strategy. Anything
that would count as tricky in the implementation, and anything particularly
challenging to test, should be called out.
All code is expected to have adequate tests (eventually with coverage
expectations). Please adhere to the [Kubernetes testing guidelines][testing-guidelines]
when drafting this test plan.
[testing-guidelines]: https://git.k8s.io/community/contributors/devel/sig-testing/testing.md
-->
### Graduation Criteria
<!--
**Note:** *Not required until targeted at a release.*
Define graduation milestones.
These may be defined in terms of API maturity, or as something else. The KEP
should keep this high-level with a focus on what signals will be looked at to
determine graduation.
Consider the following in developing the graduation criteria for this enhancement:
- [Maturity levels (`alpha`, `beta`, `stable`)][maturity-levels]
- [Deprecation policy][deprecation-policy]
Clearly define what graduation means by either linking to the [API doc
definition](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning)
or by redefining what graduation means.
In general we try to use the same stages (alpha, beta, GA), regardless of how the
functionality is accessed.
[maturity-levels]: https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions
[deprecation-policy]: https://kubernetes.io/docs/reference/using-api/deprecation-policy/
Below are some examples to consider, in addition to the aforementioned [maturity levels][maturity-levels].
#### Alpha -> Beta Graduation
- Gather feedback from developers and surveys
- Complete features A, B, C
- Tests are in Testgrid and linked in KEP
#### Beta -> GA Graduation
- N examples of real-world usage
- N installs
- More rigorous forms of testing—e.g., downgrade tests and scalability tests
- Allowing time for feedback
**Note:** Generally we also wait at least two releases between beta and
GA/stable, because there's no opportunity for user feedback, or even bug reports,
in back-to-back releases.
#### Removing a Deprecated Flag
- Announce deprecation and support policy of the existing flag
- Two versions passed since introducing the functionality that deprecates the flag (to address version skew)
- Address feedback on usage/changed behavior, provided on GitHub issues
- Deprecate the flag
**For non-optional features moving to GA, the graduation criteria must include
[conformance tests].**
[conformance tests]: https://git.k8s.io/community/contributors/devel/sig-architecture/conformance-tests.md
-->
### Upgrade / Downgrade Strategy
<!--
If applicable, how will the component be upgraded and downgraded? Make sure
this is in the test plan.
Consider the following in developing an upgrade/downgrade strategy for this
enhancement:
- What changes (in invocations, configurations, API use, etc.) is an existing
cluster required to make on upgrade, in order to maintain previous behavior?
- What changes (in invocations, configurations, API use, etc.) is an existing
cluster required to make on upgrade, in order to make use of the enhancement?
-->
### Version Skew Strategy
<!--
If applicable, how will the component handle version skew with other
components? What are the guarantees? Make sure this is in the test plan.
Consider the following in developing a version skew strategy for this
enhancement:
- Does this enhancement involve coordinating behavior in the control plane and
in the kubelet? How does an n-2 kubelet without this feature available behave
when this feature is used?
- Will any other components on the node change? For example, changes to CSI,
CRI or CNI may require updating that component before the kubelet.
-->
## Production Readiness Review Questionnaire
<!--
Production readiness reviews are intended to ensure that features merging into
Kubernetes are observable, scalable and supportable; can be safely operated in
production environments, and can be disabled or rolled back in the event they
cause increased failures in production. See more in the PRR KEP at
https://git.k8s.io/enhancements/keps/sig-architecture/20190731-production-readiness-review-process.md.
The production readiness review questionnaire must be completed and approved
for the KEP to move to `implementable` status and be included in the release.
In some cases, the questions below should also have answers in `kep.yaml`. This
is to enable automation to verify the presence of the review, and to reduce review
burden and latency.
The KEP must have an approver from the
[`prod-readiness-approvers`](http://git.k8s.io/enhancements/OWNERS_ALIASES)
team. Please reach out on the
[#prod-readiness](https://kubernetes.slack.com/archives/CPNHUMN74) channel if
you need any help or guidance.
-->
### Feature Enablement and Rollback
_This section must be completed when targeting alpha to a release._
* **How can this feature be enabled / disabled in a live cluster?**
- [ ] Feature gate (also fill in values in `kep.yaml`)
- Feature gate name:
- Components depending on the feature gate:
- [ ] Other
- Describe the mechanism:
- Will enabling / disabling the feature require downtime of the control
plane?
- Will enabling / disabling the feature require downtime or reprovisioning
of a node? (Do not assume `Dynamic Kubelet Config` feature is enabled).
* **Does enabling the feature change any default behavior?**
Any change of default behavior may be surprising to users or break existing
automations, so be extremely careful here.
* **Can the feature be disabled once it has been enabled (i.e. can we roll back
the enablement)?**
Also set `disable-supported` to `true` or `false` in `kep.yaml`.
Describe the consequences on existing workloads (e.g., if this is a runtime
feature, can it break the existing applications?).
* **What happens if we reenable the feature if it was previously rolled back?**
* **Are there any tests for feature enablement/disablement?**
The e2e framework does not currently support enabling or disabling feature
gates. However, unit tests in each component dealing with managing data, created
with and without the feature, are necessary. At the very least, think about
conversion tests if API types are being modified.
### Rollout, Upgrade and Rollback Planning
_This section must be completed when targeting beta graduation to a release._
* **How can a rollout fail? Can it impact already running workloads?**
Try to be as paranoid as possible - e.g., what if some components will restart
mid-rollout?
* **What specific metrics should inform a rollback?**
* **Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?**
Describe manual testing that was done and the outcomes.
Longer term, we may want to require automated upgrade/rollback tests, but we
are missing a bunch of machinery and tooling and can't do that now.
* **Is the rollout accompanied by any deprecations and/or removals of features, APIs,
fields of API types, flags, etc.?**
Even if applying deprecation policies, they may still surprise some users.
### Monitoring Requirements
_This section must be completed when targeting beta graduation to a release._
* **How can an operator determine if the feature is in use by workloads?**
Ideally, this should be a metric. Operations against the Kubernetes API (e.g.,
checking if there are objects with field X set) may be a last resort. Avoid
logs or events for this purpose.
* **What are the SLIs (Service Level Indicators) an operator can use to determine
the health of the service?**
- [ ] Metrics
- Metric name:
- [Optional] Aggregation method:
- Components exposing the metric:
- [ ] Other (treat as last resort)
- Details:
* **What are the reasonable SLOs (Service Level Objectives) for the above SLIs?**
At a high level, this usually will be in the form of "high percentile of SLI
per day <= X". It's impossible to provide comprehensive guidance, but at the very
high level (needs more precise definitions) those may be things like:
- per-day percentage of API calls finishing with 5XX errors <= 1%
- 99% percentile over day of absolute value from (job creation time minus expected
job creation time) for cron job <= 10%
- 99.9% of /health requests per day finish with 200 code
* **Are there any missing metrics that would be useful to have to improve observability
of this feature?**
Describe the metrics themselves and the reasons why they weren't added (e.g., cost,
implementation difficulties, etc.).
### Dependencies
_This section must be completed when targeting beta graduation to a release._
* **Does this feature depend on any specific services running in the cluster?**
Think about both cluster-level services (e.g. metrics-server) as well
as node-level agents (e.g. specific version of CRI). Focus on external or
optional services that are needed. For example, if this feature depends on
a cloud provider API, or upon an external software-defined storage or network
control plane.
For each of these, fill in the following—thinking about running existing user workloads
and creating new ones, as well as about cluster-level services (e.g. DNS):
- [Dependency name]
- Usage description:
- Impact of its outage on the feature:
- Impact of its degraded performance or high-error rates on the feature:
### Scalability
_For alpha, this section is encouraged: reviewers should consider these questions
and attempt to answer them._
_For beta, this section is required: reviewers must answer these questions._
_For GA, this section is required: approvers should be able to confirm the
previous answers based on experience in the field._
* **Will enabling / using this feature result in any new API calls?**
Describe them, providing:
- API call type (e.g. PATCH pods)
- estimated throughput
- originating component(s) (e.g. Kubelet, Feature-X-controller)
focusing mostly on:
- components listing and/or watching resources they didn't before
- API calls that may be triggered by changes of some Kubernetes resources
(e.g. update of object X triggers new updates of object Y)
- periodic API calls to reconcile state (e.g. periodic fetching state,
heartbeats, leader election, etc.)
* **Will enabling / using this feature result in introducing new API types?**
Describe them, providing:
- API type
- Supported number of objects per cluster
- Supported number of objects per namespace (for namespace-scoped objects)
* **Will enabling / using this feature result in any new calls to the cloud
provider?**
* **Will enabling / using this feature result in increasing size or count of
the existing API objects?**
Describe them, providing:
- API type(s):
- Estimated increase in size: (e.g., new annotation of size 32B)
- Estimated amount of new objects: (e.g., new Object X for every existing Pod)
* **Will enabling / using this feature result in increasing time taken by any
operations covered by [existing SLIs/SLOs]?**
Think about adding additional work or introducing new steps in between
(e.g. need to do X to start a container), etc. Please describe the details.
* **Will enabling / using this feature result in non-negligible increase of
resource usage (CPU, RAM, disk, IO, ...) in any components?**
Things to keep in mind include: additional in-memory state, additional
non-trivial computations, excessive access to disks (including increased log
volume), significant amount of data sent and/or received over network, etc.
Think through this both in small and large cases, again with respect to the
[supported limits].
### Troubleshooting
The Troubleshooting section currently serves the `Playbook` role. We may consider
splitting it into a dedicated `Playbook` document (potentially with some monitoring
details). For now, we leave it here.
_This section must be completed when targeting beta graduation to a release._
* **How does this feature react if the API server and/or etcd is unavailable?**
* **What are other known failure modes?**
For each of them, fill in the following information by copying the below template:
- [Failure mode brief description]
- Detection: How can it be detected via metrics? Stated another way:
how can an operator troubleshoot without logging into a master or worker node?
- Mitigations: What can be done to stop the bleeding, especially for already
running user workloads?
- Diagnostics: What are the useful log messages and their required logging
levels that could help debug the issue?
Not required until feature graduated to beta.
- Testing: Are there any tests for failure mode? If not, describe why.
* **What steps should be taken if SLOs are not being met to determine the problem?**
[supported limits]: https://git.k8s.io/community//sig-scalability/configs-and-limits/thresholds.md
[existing SLIs/SLOs]: https://git.k8s.io/community/sig-scalability/slos/slos.md#kubernetes-slisslos
## Implementation History
<!--
Major milestones in the lifecycle of a KEP should be tracked in this section.
Major milestones might include:
- the `Summary` and `Motivation` sections being merged, signaling SIG acceptance
- the `Proposal` section being merged, signaling agreement on a proposed design
- the date implementation started
- the first Kubernetes release where an initial version of the KEP was available
- the version of Kubernetes where the KEP graduated to general availability
- when the KEP was retired or superseded
-->
## Drawbacks
<!--
Why should this KEP _not_ be implemented?
-->
## Alternatives
- kube-proxy as a library.
  If implementations consume kube-proxy's logic as a library, each one has to be re-released with
  every change to that library. This KEP decouples implementation releases from the core: they only
  need to happen when improving the implementation itself, or when reacting to a change in the
  simplified model (which should change much less frequently than the Kubernetes business logic).
## Infrastructure Needed (Optional)
<!--
Use this section if you need things from the project/SIG. Examples include a
new subproject, repos requested, or GitHub details. Listing these here allows a
SIG to get the process for these resources started right away.
-->