Date: 23 August 2020
Authors: Jorge Castro (VMware), Josh Berkus (Red Hat), Chris Carty (Google), Dan "POP" Papandrea (Sysdig), David McKay (Equinix Metal)

Today we celebrate three years of the Kubernetes Office Hours. This is a monthly event where we take a panel of volunteers, stick them on a live stream, and then see how many questions we can field from the community.

I started the show for one reason: I had recently started my Kubernetes journey and was learning all these new concepts while rewiring my traditional sysadmin brain to be more cloud native. The idea was that if I was going to dig into this stuff and bother my new coworkers with silly questions, we might as well do it together and on the air, sharing our experiences in a way that is fun and useful for others. Give away some t-shirts, and fame and fortune would surely follow.

After 65 episodes we've decided to take a look at some of the more common problem areas we've been tackling and put together a quick summary of where you might want to invest your attention. You will find many articles on "Top X things to know about Kubernetes". We've specifically avoided those and gone back into our archives, because what people think you need to know and what you actually need to know can be different.
3/18/2021
EU Edition

Panelists

Person: Andrew
Question: We're writing a controller with controller-runtime and trying to use the Generation/ObservedGeneration pattern to avoid reconciling when nothing has changed (we're not yet using the predicate controller-runtime provides for that purpose). My question is how this can work given the possibility of a stale cache. When we write the ObservedGeneration to the Status of our CR, it triggers another reconcile immediately, but in some cases the cache is stale and the CR it "Get"s still has the old Status, and therefore the old ObservedGeneration. What is the recommended strategy for dealing with this? Thanks! [see the sketch below]

Person: Simone Baracchi
Question: I'd like to configure my small cluster as "highly available", with no single master / single point of failure, and make the best use of all the cluster's resources. My current plan is to run 3 nodes as masters and allow pods to be scheduled on the masters. From my research, the issues in doing so are 1) security concerns about sensitive data on the masters, which could be read by malicious pods, and 2) pods competing for resources (especially in the case of a node failure). I'm not too concerned about security at the moment, and I can limit the maximum number of pods / resources used. Is there any other red flag in doing so?

Person: Jesper Berg Axelsen
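Re: Andrew's Generation/ObservedGeneration question above — here is a minimal sketch of the pattern in a controller-runtime reconciler, assuming a hypothetical Widget CRD with a Status.ObservedGeneration field (the type names and API import are placeholders, not Andrew's actual code). It only illustrates where the stale cache bites and why idempotent reconciles make a spurious pass harmless; it is not the panel's answer.

```go
// Hypothetical sketch of the Generation/ObservedGeneration pattern with
// controller-runtime. "Widget" and its API package are stand-ins.
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	widgetsv1 "example.com/widget-operator/api/v1" // hypothetical CRD package
)

type WidgetReconciler struct {
	client.Client
}

func (r *WidgetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var widget widgetsv1.Widget
	if err := r.Get(ctx, req.NamespacedName, &widget); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Skip work when the spec generation we last handled is the one we see now.
	// Because the cached object can lag behind our own status write, this check
	// can occasionally let a reconcile through with a stale ObservedGeneration,
	// so the work below should be idempotent: a spurious pass is then only
	// wasted effort, never an error.
	if widget.Status.ObservedGeneration == widget.Generation {
		return ctrl.Result{}, nil
	}

	// ... idempotent reconciliation of spec vs. the real world goes here ...

	// Record the generation we just handled. If our cached copy was stale, the
	// status update fails with a conflict and the request is requeued, so the
	// next pass sees a fresher object.
	widget.Status.ObservedGeneration = widget.Generation
	if err := r.Status().Update(ctx, &widget); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
```

One design note: pairing this with controller-runtime's GenerationChangedPredicate (the predicate Andrew mentions) filters out the update event caused by the status-only write in the first place, which is often enough as long as the reconcile itself stays idempotent.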
2/17/2021
EU Edition

Panelists: Rachel Leekin, Chris Carty, Dan "POP" Papandrea, Saiyam Pathak

Person: Mostafa Elmenbawy (https://kubernetes.slack.com/archives/C6RFQ3T5H/p1609991530274100?thread_ts=1607960423.257700&cid=C6RFQ3T5H)
Question: What is recommended for an on-premises production cluster spanning multiple hosts?
Answer:
1/20/2021

(TODO: We need a derived version appropriate to send to the cncf-maintainers list)

Subject: You're invited to the Kubernetes Contributor Celebration in one week!

TL;DR: We would love fellow cloud native contributors to join in the fun. Register now to be the first to join our Discord, and we'll see you next week! https://forms.gle/51tqQgxuHxLaeU1P8

The Kubernetes community would normally celebrate this year by meeting in person, but that's not in the cards for us, so we decided to throw something fun online. With a change of venue, and without physical room limitations, there's no reason we can't grow this to include all our friends and family! All we ask is that you register for logistical purposes. Here's the overview:
12/4/2020