# Securing your JupyterHub on Kubernetes
*(Thanks to Prem Mishra, Jacob Matuskey & the octraine team at the Space Telescope Science Institute)*
Security is about tradeoffs.
Kubernetes security links & resources:
- https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview
## API Access
Many cloud providers and Kubernetes itself expose powerful APIs to running pods; access to both needs to be restricted.
### Kubernetes API access
Kubernetes is the abstraction layer we use over many machines in a cloud provider (or your own machines). Access to the Kubernetes API is governed by Role Based Access Control (RBAC) policies. Unrestricted access to the Kubernetes API is equivalent to granting users root on your entire cluster. Users generally do not need any elevated access, which can be enforced by setting the pod's `serviceAccountName` to `null`:
```yaml
singleuser:
  serviceAccountName: null
```
If you want to give users the ability to create pods, you must use PodSecurityPolicy to lock down exactly what they can create; otherwise it is equivalent to giving them root on the cluster. In Kubernetes, this is typically handled by:
- Creating a `ServiceAccount`
- Creating a `Role` with just enough permissions to create new pods
- Creating a `RoleBinding` to connect the `ServiceAccount` to the `Role`
- Setting `serviceAccountName` on the pod to the new `ServiceAccount`, as shown after the manifest below
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: name
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: name
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods", "persistentvolumeclaims"]
    verbs: ["get", "watch", "list", "create", "delete"]
  - apiGroups: [""] # "" indicates the core API group
    resources: ["events"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: name
subjects:
  - kind: ServiceAccount
    name: name
    namespace: namespace
roleRef:
  kind: Role
  name: name
  apiGroup: rbac.authorization.k8s.io
```
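With the zero-to-jupyterhub chart, user pods can then be pointed at the new `ServiceAccount`. A minimal sketch, where `name` is the placeholder from the manifest above:
```yaml
singleuser:
  serviceAccountName: name  # the ServiceAccount created above
```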
### Cloud metadata access
Cloud providers expose an instance metadata endpoint (usually at the link-local address 169.254.169.254) that can hand out credentials for the node itself; user pods should not be able to reach it. The chart can block access to it:
```yaml
singleuser:
  cloudMetadata:
    enabled: false
```
## Container security
### Don't allow users to be root
A user who is root inside the container is far more dangerous if they find a way to break out, so run user servers as a non-root uid:
```yaml
hub:
  extraConfig:
    01-no-root: |
      c.KubeSpawner.extra_container_config = {
          'securityContext': {
              'runAsUser': 1000,
              'privileged': False,
              'allowPrivilegeEscalation': False
          }
      }
```
### Giving each user a separate uid
This is a big pain: you can't use any off-the-shelf images, and have to build your own so they work when run as an arbitrary uid. This needs a lot more documentation. The OpenShift image guidelines cover the core concept:
https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
### seccomp
seccomp restricts which syscalls a container is allowed to make. First-class Kubernetes support is still being worked out:
https://github.com/kubernetes/enhancements/pull/1148
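In the meantime, seccomp profiles are applied via pod annotations. A minimal sketch, assuming your cluster still honors the alpha seccomp annotations:
```yaml
singleuser:
  extraAnnotations:
    # apply the container runtime's default seccomp profile to user pods
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
```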
### AppArmor
*Note: SELinux is deliberately out of scope here.*
https://kubernetes.io/docs/tutorials/clusters/apparmor/
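AppArmor profiles are also applied via annotations, keyed by container name. A sketch, assuming the user container is named `notebook` (the name the chart uses) and that AppArmor is enabled on your nodes:
```yaml
singleuser:
  extraAnnotations:
    # run the notebook container under the runtime's default AppArmor profile
    container.apparmor.security.beta.kubernetes.io/notebook: runtime/default
```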
## Storage security
### Block vs File storage
#### Block storage
- Much better isolation between users
- Much better performance for users
- Much better quota enforcement per user
- More expensive, depending on your usage pattern
- Higher start / stop times
- Can be mounted on only one container at a time
- Higher attach / detach failure rates, depending on your cloud provider
- Constrains the size of each node, since most cloud providers limit the number of disks that can be attached to a node
#### File storage
- Much cheaper, because you can overcommit!
- Faster attach / detach
- Can be mounted in multiple places at once
- Managing your own NFS server is real work
- Managed NFS providers have issues depending on the kind of work you are doing
- Worse performance, magnified depending on your workload
- Quotas are hard
### Secure NFS mounting
Currently, people use:
- One PV referring to the NFS share
- One PVC that attaches to this PV
- subPaths in each pod's volume mount, so that only their own home directory is visible to the user

This mounts the entire NFS share on the host node, and then bind mounts the subPath for each user's home directory. If the user can break out of the container, they can now read *every other user's* home directories, especially if the uid is the same for all users.
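With the zero-to-jupyterhub chart, this pattern is usually configured like the following sketch, where `nfs-home` is a hypothetical PVC bound to the NFS share:
```yaml
singleuser:
  storage:
    type: static
    static:
      pvcName: nfs-home          # hypothetical PVC backed by the NFS share
      subPath: "home/{username}" # only this subdirectory is mounted into the pod
```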
An alternative is to use something like nfs-client-provisioner, and create one PVC per user automatically. This binds to a new PV that is *just* the home directory of the user, so the entire NFS share isn't mounted on all nodes. If there's a container breakout, the user can only see the home directories of other users on the same host, not everywhere.
You can make this a little more secure by giving each of your users their own uid.
## Network Security
### HTTPS
Have HTTPS between your users and the hub. Pretty simple these days with our Let's Encrypt integration.
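A minimal sketch of the chart's built-in Let's Encrypt support; the host and email are placeholders:
```yaml
proxy:
  https:
    enabled: true
    hosts:
      - hub.example.com                # placeholder domain pointing at the proxy
    letsencrypt:
      contactEmail: admin@example.com  # placeholder contact address
```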
### Internal TLS?
Depending on your threat model, you might want TLS between internal components as well. JupyterHub itself supports internal TLS, but I think KubeSpawner doesn't yet, so it can't be used on Kubernetes. Consider something like Linkerd or Istio in the meantime, although they are probably extremely heavyweight for what you need and add a lot of extra complexity.
### NetworkPolicy between components
Easy win!
```yaml
hub:
  networkPolicy:
    enabled: true
proxy:
  networkPolicy:
    enabled: true
singleuser:
  networkPolicy:
    enabled: true
```
### Restrict outbound network access
This example allows user pods to make only DNS, HTTP, and HTTPS connections:
```yaml
singleuser:
  networkPolicy:
    enabled: true
    egress:
      - ports:
          - port: 53
            protocol: UDP
      - ports:
          - port: 80
            protocol: TCP
      - ports:
          - port: 443
            protocol: TCP
```
### Network bandwidth limitation
Depending on how your Kubernetes cluster is set up (specifically, which network plugin it uses), you can limit bandwidth via annotations:
```yaml
singleuser:
  extraAnnotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
```
If your network plugin doesn't support these annotations, you can use minrk's tc-init to limit outbound traffic from inside the pod.
## Resource exhaustion
A user can use up far more resources than they should, denying other users resources they legitimately have access to. Things to think about:
- CPU / RAM requests & limits (a config sketch follows this list)
  - limits vs guarantees
  - overcommit
- Filesystem usage
  - Block storage: quota is automatically set by your request size. Can grow if your provider supports it.
  - File storage: no bueno! Unless you run your own NFS server with XFS, in which case you can use project quotas.
- Other resources
  - Temporary disk space
  - PID limits: https://kubernetes.io/blog/2019/04/15/process-id-limiting-for-stability-improvements-in-kubernetes-1.14/
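For CPU and RAM, the chart exposes guarantees (requests) and limits directly. A minimal sketch; the numbers are placeholders to adapt to your workload:
```yaml
singleuser:
  cpu:
    guarantee: 0.5  # CPU reserved (requested) per user
    limit: 2        # hard CPU cap per user
  memory:
    guarantee: 1G   # RAM reserved (requested) per user
    limit: 2G       # the user server is killed if it exceeds this
```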