Welcome everyone to today's Kubernetes Office Hours, where we answer your user questions live on the air with our esteemed panel of experts. You can find us in #office-hours on Slack, and check the channel topic for the URL with more information.
The hack.md notes document will have a list of who has asked questions; roll a die to see who wins the shirts. On occasion, if someone from the audience has been helpful, feel free to give them a shirt as well: we want to reward people for helping others. Note: Multi-sided dice not included.
#1 Name:
Question:
Answer:
#2 Name:
Question:
Answer:
#3 Name:
Question:
Answer:
#4 Name:
Question:
Answer:
#5 Name:
Question:
Answer:
#6 Name:
Question:
Answer:
#7 Name:
Question:
Answer:
#8 Name:
Question:
Answer:
#9 Name:
Question:
Answer:
Links:
#1 Name: klap
Question: https://discuss.kubernetes.io/t/yaml-config-for-multiarch-support/13387
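For context, one common approach (a hedged sketch, not taken from the linked thread): publish the image as a multi-arch manifest list under a single tag, and constrain scheduling with node affinity on the built-in kubernetes.io/arch label. All names and the image below are placeholders.
```yaml
# Sketch: pin a workload to amd64/arm64 nodes via the kubernetes.io/arch
# node label. Deployment name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-multiarch
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-multiarch
  template:
    metadata:
      labels:
        app: demo-multiarch
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values: ["amd64", "arm64"]
      containers:
        - name: app
          # Assumes this tag is a multi-arch manifest list, so each node
          # pulls the image variant matching its own architecture.
          image: example.org/demo/app:1.0.0
```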
#2 Name: fc
Question: Hi from Italy. We have just started the process of migrating our production clusters from GKE to EKS, and we are really just at the investigation stage. For anyone who has already done this: are there any pitfalls you hit that you think might be useful for us to know?
#3 Name: Artemis
Question: I have a question since we are creating a new k8s infrastructure and we are struggling to choose the right size of the machines. We have 3 HA masters + 4 workers.
Links: Sizing of master nodes: https://kubernetes.io/docs/setup/best-practices/cluster-large/#size-of-master-and-master-components and https://learnk8s.io/kubernetes-node-size
#4 Name: Victor Dzikovsky
Question: What are the plans for mixed (Linux+Windows) clusters? Are there plans to unify the standards for that?
Links: https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/windows-clusters/
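For reference, mixed clusters already work by steering workloads with the built-in kubernetes.io/os node label; a minimal sketch (the names are placeholders, not from the question):
```yaml
# Sketch: schedule a workload onto Windows nodes in a mixed cluster
# using the kubernetes.io/os label. Names and image tag are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      # Keep Linux-only DaemonSets/pods off Windows nodes and vice versa.
      nodeSelector:
        kubernetes.io/os: windows
      containers:
        - name: web
          # Windows-based container image (placeholder).
          image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
```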
#5 Name: Steve Yackey
Question: If you notice a node has been in a NotReady state for a while and the kubelet has stopped responding (as seen when describing the node), what would be your first troubleshooting steps? This happened to me recently and I ended up terminating the node, but I was curious about best practices for investigating it.
#6 Name: Andrei
Question: Hi from Tokyo. So far the answer we have ended up with to my question below is "no reason for Allocatable". What might be the reason to use QoS cgroups and the option named --cgroups-per-qos for enforcing Allocatable? Sorry for being stubborn (I've already communicated enough with David Ashpole, who seems to be an expert in it); maybe we can find more details about it : )
Links (details): https://kubernetes.slack.com/archives/C0BP8PW9G/p1602830389221300 https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/ https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/node-allocatable.md https://github.com/kubernetes/kubernetes/blob/323f34858de18b862d43c40b2cced65ad8e24052/pkg/kubelet/cm/qos_container_manager_linux.go#L100
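For context, a hedged sketch of the kubelet settings involved: Node Allocatable is what remains of node capacity after kube-reserved, system-reserved, and eviction thresholds are subtracted, and --cgroups-per-qos (cgroupsPerQOS in the config file) is what lets the kubelet create the top-level pods cgroup and per-QoS cgroups under which that Allocatable limit can actually be enforced. The reservation values below are illustrative only, not a recommendation.
```yaml
# Sketch of a KubeletConfiguration illustrating Node Allocatable
# enforcement; the reservation sizes are made-up examples.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Create per-QoS cgroups (Guaranteed/Burstable/BestEffort) under a
# top-level pods cgroup; needed to enforce Allocatable on pods.
cgroupsPerQOS: true
# Enforce the pod-level limit; "system-reserved" and "kube-reserved"
# can also be listed here if matching cgroups exist.
enforceNodeAllocatable:
  - pods
# Resources set aside for Kubernetes daemons and the OS, respectively.
kubeReserved:
  cpu: 500m
  memory: 512Mi
systemReserved:
  cpu: 500m
  memory: 512Mi
```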
#7 Name: Agustín Houlgrave
Question: I'm looking to run a service locally against some services in my k8s cluster for debugging purposes. Port forwarding doesn't seem to be enough, as I need to hit lots of endpoints within the cluster network. I've seen that Telepresence could be a solution, but it seems bloated. Have you had a similar issue? How did you tackle it? What was your experience with Telepresence? Anyway, I will give it a shot.
Links:
#8 Name: Cloudgrimm
Question: I am running k3s on Raspberry Pis and would like to do some automated infrastructure tests and chaos engineering. Any recommendations for tools that might accommodate the ARM architecture?
Links:
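Not an endorsement from the panel, but as one example of the kind of tool to evaluate: Chaos Mesh describes experiments as CRDs, e.g. a pod-kill experiment like the sketch below. The namespace and labels are placeholders, and you should verify that arm64/armv7 images are published for whichever tool and version you pick before relying on it.
```yaml
# Sketch: a Chaos Mesh pod-kill experiment (chaos-mesh.org/v1alpha1).
# Namespace and labels are placeholders; confirm ARM image availability
# for your Chaos Mesh version before depending on this.
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: kill-one-demo-pod
  namespace: default
spec:
  action: pod-kill        # terminate a pod and let Kubernetes reschedule it
  mode: one               # pick a single pod from the selection below
  selector:
    namespaces:
      - default
    labelSelectors:
      app: demo
```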
#9 Name: Long
Question: Why would one see an error like this: Error: Error reloading NGINX: exit status 1 2020/10/19 02:32:29 [notice] 71#71: signal process started 2020/10/19 02:32:29 [error] 71#71: invalid PID number "" in "/tmp/nginx.pid" nginx: [error] invalid PID number "" in "/tmp/nginx.pid"
#10 Name: Janelle Archer
Question: We've been looking at different scaling tools, such as KEDA, for the ability to quickly autoscale. Do you guys have any alternatives that you'd recommend?
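For reference (not an answer from the panel), KEDA itself works by declaring a ScaledObject per workload; a minimal hedged sketch assuming the KEDA v2 API and a Prometheus trigger, where all names, the server address, and the query are placeholders:
```yaml
# Sketch: KEDA ScaledObject (keda.sh/v1alpha1, i.e. KEDA v2) scaling a
# Deployment on a Prometheus metric. Names, address, and query are
# placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: demo-scaledobject
spec:
  scaleTargetRef:
    name: demo-deployment   # Deployment in the same namespace
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        metricName: http_requests_per_second
        threshold: "100"
        query: sum(rate(http_requests_total[2m]))
```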
#11 Name:
Question:
#12 Name:
Question:
#13 Name:
Question:
Helm stable/incubator repos shutting down
Docker Hub pull limits (https://docs.docker.com/docker-hub/download-rate-limit/) starting November 1st
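One mitigation worth knowing about (a sketch, not official guidance from the page above): anonymous pulls get the lowest limits, so authenticating pulls via imagePullSecrets raises the per-account allowance. The secret name below is a placeholder for a kubernetes.io/dockerconfigjson secret created separately from a Docker Hub username and token.
```yaml
# Sketch: reference Docker Hub credentials so image pulls are
# authenticated (higher rate limit than anonymous pulls).
# "dockerhub-creds" is a placeholder secret created beforehand.
apiVersion: v1
kind: Pod
metadata:
  name: demo-authenticated-pull
spec:
  imagePullSecrets:
    - name: dockerhub-creds
  containers:
    - name: app
      image: nginx:1.19   # pulled from Docker Hub using the credentials above
```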
(Note, the companies will change over time depending on the hosts)
Thanks to the following companies for supporting the community with developer volunteers: Giant Swarm, StockX, VMware, Red Hat, Utility Warehouse, Spectrm, and Sysdig
Special thanks to CNCF for sponsoring the t-shirt giveaway.
And lastly, feel free to hang out in #office-hours afterwards. If the other channels are too busy for you and you're looking for a friendly home, you're more than welcome to pull up a chair and hang out.