## Kubernetes Crash Course for C# Devs (No Linux Background)
### Course Overview
**Audience:** C# developers with Windows background, no Linux or Kubernetes experience.
**Goal:** Understand Kubernetes deeply and deploy real apps step-by-step using a shared environment.
### Requirements (Local Setup)
* Git (to clone starter repos)
* The .NET SDK (to build and publish the sample app)
* Docker (for local container builds)
* kubectl (to interact with the Kubernetes cluster)
> A remote Docker host and a shared Kubernetes cluster are also available for environments where local installs are restricted.
---
## Chapter 1 – Introduction to Kubernetes
### What is Kubernetes?
Kubernetes is a container orchestration platform – like an advanced version of Windows Services or IIS, but cloud-native. It helps you:
* Deploy multiple app instances
* Distribute traffic
* Restart crashed apps automatically
* Manage rolling updates safely
### Why Use Kubernetes?
* Declarative configuration via YAML
* Built-in self-healing
* Portability across cloud and on-prem
* Ideal for microservices and CI/CD pipelines
---
## Chapter 2 – Connecting to a Remote Cluster
### What is a kubeconfig?
A kubeconfig file tells `kubectl` how to talk to the cluster:
* API endpoint (URL of your cluster)
* Your credentials (token or cert)
* Default namespace and context
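A minimal kubeconfig looks roughly like this (the server URL, token, and names are placeholders – your actual file will come from the cluster admin):
```yaml
apiVersion: v1
kind: Config
clusters:
  - name: company-cluster
    cluster:
      server: https://k8s.example.com:6443   # API endpoint
      certificate-authority-data: <base64-ca-cert>
users:
  - name: you
    user:
      token: <your-token>
contexts:
  - name: company-dev
    context:
      cluster: company-cluster
      user: you
      namespace: dev
current-context: company-dev
```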
### How to Use It:
```bash
export KUBECONFIG=/path/to/kubeconfig.yaml # Linux/macOS
$env:KUBECONFIG = "C:\Users\you\Downloads\kubeconfig.yaml" # PowerShell
```
Check connection:
```bash
kubectl get nodes
```
Or pass per command:
```bash
kubectl get pods --kubeconfig kubeconfig.yaml
```
---
## Chapter 3 – Namespaces
Namespaces help isolate workloads by project or environment.
Create and target one:
```bash
kubectl create namespace dev
kubectl config set-context --current --namespace=dev
```
Always specify it:
```bash
kubectl get pods -n dev
kubectl apply -f app.yaml -n dev
```
---
## Chapter 4 – Kubernetes Core Components
A quick overview:
* **Pod**: Basic unit – runs one or more containers
* **Deployment**: Manages Pods (replicas, upgrades)
* **Service**: Stable network address for Pods
* **Ingress**: Routes HTTP requests to Services
* **ConfigMap/Secret**: Inject config or credentials
* **PVC/Volume**: Persistent storage
Each will get its own chapter with hands-on examples.
---
## Chapter 5 – Building and Publishing Your App
We'll start with a basic .NET Minimal API.
### Create the App
```bash
dotnet new web -n HelloK8s
cd HelloK8s
```
Edit `Program.cs`:
```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/", () => "Hello from .NET on K8s!");
app.Run();
```
### Add a Dockerfile
```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY . .
# .NET 8 base images listen on port 8080 by default; bind to 80 to match the manifests below
ENV ASPNETCORE_HTTP_PORTS=80
EXPOSE 80
ENTRYPOINT ["dotnet", "HelloK8s.dll"]
```
### Build and Push
```bash
dotnet publish -c Release -o out
# build from the project root: the Dockerfile is here, the publish output (out/) is the context
docker build -t registry.company.com/hello-k8s-dotnet:1.0 -f Dockerfile out
docker push registry.company.com/hello-k8s-dotnet:1.0
```
Now we're ready to deploy this to Kubernetes.
---
## Chapter 6 – Deploying with Deployments and Pods
A **Deployment** ensures your app is running with the desired number of replicas and can handle updates or restarts gracefully.
Under the hood, a Deployment manages a **ReplicaSet**, which in turn ensures the specified number of **Pods** are running at any time. If a Pod crashes or is deleted, the ReplicaSet detects this and launches a new one to maintain the desired state.
Deployments also support **rolling updates**. When you change the container image (or any other part of the pod template), Kubernetes updates your Pods in batches – gradually terminating old Pods and starting new ones, minimizing downtime.
You can configure this behavior (e.g., max unavailable or surge) for more control.
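For example, a sketch of an update strategy tuned for zero downtime (the values are illustrative):
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # allow one extra Pod while new ones start up
```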
### Create a deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-k8s-dotnet
namespace: dev
spec:
replicas: 2
selector:
matchLabels:
app: hello-k8s-dotnet
template:
metadata:
labels:
app: hello-k8s-dotnet
spec:
containers:
- name: hello-k8s-dotnet
image: registry.company.com/hello-k8s-dotnet:1.0
ports:
- containerPort: 80
```
### Apply and verify:
```bash
kubectl apply -f deployment.yaml -n dev
kubectl get pods -n dev
kubectl describe deployment hello-k8s-dotnet -n dev
```
### Access locally via port-forward:
```bash
kubectl port-forward deployment/hello-k8s-dotnet 8080:80 -n dev
```
Visit: [http://localhost:8080](http://localhost:8080)
---
## Chapter 7 – Services
Pods change IPs often. A **Service** provides a stable endpoint to ensure your application is always reachable, even if the underlying Pods are recreated or rescheduled.
### How Selectors Work
A Service uses a **selector** to find matching Pods based on their labels. For example:
```yaml
selector:
app: hello-k8s-dotnet
```
This will match any Pod with the label `app=hello-k8s-dotnet`. This decouples the Service from specific Pod names or IPs.
### Types of Services
Kubernetes offers several types of Services, each designed for different use cases:
* **ClusterIP** (default): Accessible only within the cluster. Ideal for internal microservice communication.
* **NodePort**: Exposes the Service on a static port on each node's IP address. Useful for simple external access without a load balancer.
* **LoadBalancer**: Provisions a cloud load balancer (if supported by the environment) to expose the service externally.
* **ExternalName**: Maps the Service to a DNS name. Used to access services outside the cluster via DNS.
### service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
name: hello-k8s-svc
namespace: dev
spec:
selector:
app: hello-k8s-dotnet
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
```
### Apply and test:
```bash
kubectl apply -f service.yaml -n dev
kubectl get svc -n dev
```
You can now access the application from inside the cluster through its DNS name: `hello-k8s-svc.dev.svc.cluster.local`.
---
## Chapter 8 – Ingress
To expose the app outside the cluster via URL, we use an **Ingress**. It routes HTTP(S) traffic to different services in the cluster based on hostnames and paths.
### What is an Ingress Controller?
An **Ingress Controller** is a web server (or proxy) that runs inside your cluster and is responsible for implementing the Ingress rules. It watches the Kubernetes API for Ingress resources and configures its internal routing (typically using software like NGINX, HAProxy, or Traefik) to forward traffic to the right service.
> ⚠️ Without an Ingress Controller installed in your cluster, your Ingress rules will not be processed.
### How It Works
* You create an Ingress resource with rules mapping paths/hosts to services
* The Ingress Controller picks it up and applies it to its internal proxy configuration
* External traffic is routed accordingly
Example: requests to `http://hello.local/` are routed to the `hello-k8s-svc` service on port 80
### ingress.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-ingress
namespace: dev
spec:
rules:
- host: hello.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hello-k8s-svc
port:
number: 80
```
### Enable ingress and test:
* Install an ingress controller (e.g., nginx)
* Edit your hosts file (`/etc/hosts` on Linux/macOS, `C:\Windows\System32\drivers\etc\hosts` on Windows) to point `hello.local` at the ingress controller's external IP
```bash
kubectl apply -f ingress.yaml -n dev
kubectl get ingress -n dev
```
---
## Chapter 9 – ConfigMaps and Secrets
Kubernetes separates configuration and secrets from the application code. This improves security, flexibility, and portability across environments.
### When to Use ConfigMap vs Secret
* Use **ConfigMap** for non-sensitive configuration values, like feature flags or messages.
* Use **Secret** for sensitive data, like passwords, tokens, or certificates.
Secrets are base64-encoded by default, but **not encrypted at rest unless explicitly configured**. For strong security, enable encryption at the cluster level and avoid including sensitive info in plain YAML or version control.
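You can see for yourself that base64 is an encoding, not encryption – anyone can reverse it:
```shell
echo -n "supersecret" | base64
# c3VwZXJzZWNyZXQ=
echo -n "c3VwZXJzZWNyZXQ=" | base64 -d
# supersecret
```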
### ConfigMap Example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: hello-config
namespace: dev
data:
MESSAGE: "Hello from ConfigMap!"
```
### Secret Example:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: hello-secret
namespace: dev
stringData:
password: supersecret
```
### Inject into Deployment
You can inject ConfigMaps in multiple ways:
#### Option 1: Single `env` variable
```yaml
env:
- name: MESSAGE
valueFrom:
configMapKeyRef:
name: hello-config
key: MESSAGE
```
#### Option 2: All variables with `envFrom`
```yaml
envFrom:
- configMapRef:
name: hello-config
```
#### Option 3: Mount as files in a volume
```yaml
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: hello-config
```
Each key in the ConfigMap becomes a file inside `/etc/config/`.
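On the C# side, both forms are plain reads. A minimal sketch – the `GetSetting` helper is hypothetical, and the `/etc/config` path assumes the volume mount shown above:
```csharp
using System;
using System.IO;

// Resolve a setting: environment variable first (Options 1/2),
// then a file mounted from the ConfigMap (Option 3), then a fallback.
static string GetSetting(string key, string fallback = "")
{
    var fromEnv = Environment.GetEnvironmentVariable(key);
    if (!string.IsNullOrEmpty(fromEnv))
        return fromEnv;

    var mounted = Path.Combine("/etc/config", key);
    return File.Exists(mounted) ? File.ReadAllText(mounted).Trim() : fallback;
}

Console.WriteLine(GetSetting("MESSAGE", "Hello from local dev"));
```
The fallback keeps the app runnable on a dev machine where neither the env var nor the mount exists.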
#### Injecting the Secret
Secrets are injected the same way, using `secretKeyRef`:
```yaml
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: hello-secret
        key: password
```
### Apply to Cluster:
```bash
kubectl apply -f config.yaml -n dev
kubectl apply -f secret.yaml -n dev
```
---
## Chapter 10 – Jobs and CronJobs
Kubernetes typically runs long-lived workloads (like APIs or workers) using Deployments and Pods. But for tasks that should run only once or on a schedule, Kubernetes provides two special types of controllers:
### What is a Job?
A **Job** is a resource used to run one or more Pods until a specific task is completed. It's useful for batch processing, initialization routines, database migrations, etc.
* It will retry failed Pods up to a defined limit.
* Once the job completes successfully, it won't run again unless re-created.
### What is a CronJob?
A **CronJob** is like a Linux `cron` job. It schedules Jobs to run at specific times or intervals.
* It wraps a Job and adds a cron-like schedule syntax.
* It ensures that Jobs run at the specified times and handles missed schedules, concurrency policies, etc.
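The `schedule` field uses standard five-field cron syntax; `*/5 * * * *` means "every five minutes":
```
┌───────── minute (0-59)
│ ┌─────── hour (0-23)
│ │ ┌───── day of month (1-31)
│ │ │ ┌─── month (1-12)
│ │ │ │ ┌─ day of week (0-6, Sunday = 0)
│ │ │ │ │
*/5 * * * *    # every 5 minutes
0 2 * * *      # every day at 02:00
```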
### Differences from Deployments and Pods
* A **Deployment** keeps your app running continuously. It restarts Pods if needed.
* A **Job** runs Pods once to completion.
* A **CronJob** creates Jobs on a repeating schedule.
### Job Example:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: one-time
namespace: dev
spec:
template:
spec:
containers:
- name: hello
image: busybox
command: ["echo", "Hello job"]
restartPolicy: Never
```
### CronJob Example:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: hello-cron
namespace: dev
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: ping
image: busybox
command: ["echo", "ping"]
restartPolicy: OnFailure
```
---
## Chapter 11 – Persistent Volumes and PVCs
By default, Pods are ephemeral – they lose all data when restarted or rescheduled. Kubernetes provides a solution for persistent data using:
* **PersistentVolume (PV)**: A piece of storage provisioned in the cluster
* **PersistentVolumeClaim (PVC)**: A request for storage by a user
* **StorageClass**: Describes how volumes are provisioned (e.g., SSD, network, or cloud-based)
### What is a StorageClass?
A **StorageClass** defines how Kubernetes should dynamically provision storage. It allows clusters to create volumes on demand instead of requiring manual admin setup. Each class may point to different backends (like AWS EBS, Ceph, NFS).
If no `storageClassName` is specified in a PVC, Kubernetes may fall back to the default one if it exists.
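As a sketch, a StorageClass defined by a cluster admin might look like this (the provisioner shown is the AWS EBS CSI driver – your cluster's will differ; list the available classes with `kubectl get storageclass`):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com        # environment-specific
parameters:
  type: gp3                         # provisioner-specific options
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```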
### pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: hello-pvc
namespace: dev
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard # Optional, uses default if omitted
```
### Mount into deployment:
```yaml
volumeMounts:
- mountPath: /data
name: hello-volume
volumes:
- name: hello-volume
persistentVolumeClaim:
claimName: hello-pvc
```
This makes a persistent folder `/data` available inside your container. Any data written here will be retained between Pod restarts.
```bash
kubectl apply -f pvc.yaml -n dev
```
---
## Chapter 12 – DaemonSets and StatefulSets
### What is a DaemonSet?
A **DaemonSet** ensures that a copy of a Pod runs on **every node** in the cluster – or on a selected set of nodes.
Use cases include:
* Running agents for monitoring/logging (e.g., Prometheus Node Exporter)
* Running node-level system utilities (e.g., disk managers)
Example:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-logger
namespace: dev
spec:
selector:
matchLabels:
app: node-logger
template:
metadata:
labels:
app: node-logger
spec:
containers:
- name: logger
image: busybox
command: ["sh", "-c", "while true; do echo hello; sleep 60; done"]
```
### What is a StatefulSet?
A **StatefulSet** manages the deployment of **stateful applications** – apps that need **stable identities and persistent storage**.
Key features:
* Stable network identities (e.g., `pod-0`, `pod-1`, ...)
* Ordered deployment, scaling, and deletion
* PersistentVolumeClaim per replica
Use cases:
* Databases (PostgreSQL, Cassandra)
* Kafka, Zookeeper
Example:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
namespace: dev
spec:
selector:
matchLabels:
app: web
serviceName: "web"
replicas: 3
template:
metadata:
labels:
app: web
spec:
containers:
- name: web
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
```
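Note the `serviceName: "web"` field above: a StatefulSet expects a matching **headless Service** (one with `clusterIP: None`) so each replica gets a stable DNS name like `web-0.web.dev.svc.cluster.local`. A minimal sketch:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: dev
spec:
  clusterIP: None        # headless: DNS resolves directly to the Pod IPs
  selector:
    app: web
  ports:
    - port: 80
      name: web
```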
---
## Chapter 13 – Final Project
Build a C# app that:
* Reads a config value from a ConfigMap
* Uses a volume to save uploaded files
* Exposes itself via Ingress
* Is cleaned by a CronJob every hour
---
## Chapter 14 – Traditional vs Kubernetes: Hello World in C#
### Traditional (Windows Server + IIS)
* Build the C# web app in Visual Studio
* Deploy the binaries to a Windows Server
* Configure the IIS site manually:
  * Set port bindings
  * Configure application pool
* Deploy via WebDeploy, FTP, or manually copy files
* Manually restart IIS if things crash
* Scale by cloning the server or setting up a load balancer
### Kubernetes Approach
* Build the app with `dotnet publish`
* Package it in a Docker image
* Push image to registry
* Declare a Deployment (with replica count)
* Expose the app using a Service or Ingress
* Kubernetes automatically:
  * Restarts the app if it crashes
  * Distributes traffic between replicas
  * Handles rolling updates
### Comparison Table
| Feature | Traditional (IIS) | Kubernetes |
| ------------------------ | ----------------------------- | ---------------------------------- |
| Setup | Manual via GUI | Declarative via YAML |
| Port Binding | Manual | Handled by Service & Ingress |
| App Crash Recovery | Manual restart or script | Automatic |
| Load Balancing | External (e.g. NGINX/ELB) | Built-in with Services |
| Updates | Downtime-prone redeploy | Rolling updates with zero downtime |
| Configuration Management | web.config, registry, scripts | ConfigMap and Secret |
| Scalability | Manual server cloning | One line in YAML (`replicas`) |
---
## Chapter 15 – Internal DNS in Kubernetes
Kubernetes comes with an internal DNS service that automatically assigns DNS names to services and pods. This allows you to refer to them by name instead of IP address.
### How It Works
* Each **Service** gets a DNS name: `service-name.namespace.svc.cluster.local`
* Each **Pod** can resolve other services in the same namespace just by their short name
* Kubernetes runs a DNS server (CoreDNS) inside the cluster that watches the API for changes and keeps DNS records up to date
### Example:
Assuming a service called `hello-k8s-svc` in namespace `dev`:
* Inside another Pod in `dev`: `curl http://hello-k8s-svc`
* From a different namespace: `curl http://hello-k8s-svc.dev`
* Fully qualified: `curl http://hello-k8s-svc.dev.svc.cluster.local`
### Pod DNS (advanced)
Pods can also resolve each other if they're using a **Headless Service** (a service without a cluster IP), commonly used with StatefulSets.
---
## One-Pager Cheat Sheet
| Command | Description |
| ------------------------------------ | ----------------------------- |
| `kubectl get pods -n dev` | List Pods |
| `kubectl logs <pod> -n dev` | Show logs |
| `kubectl apply -f <file> -n dev` | Apply YAML |
| `kubectl delete -f <file> -n dev` | Delete resource |
| `kubectl exec -it <pod> -n dev -- sh` | Shell into container |
| `kubectl get svc -n dev` | List services |
| `kubectl get ingress -n dev` | List ingress rules |
| `kubectl get pvc -n dev` | List persistent volume claims |
| `kubectl describe <resource> -n dev` | Detailed info on resource |