[10:31:09] <brouberol>	 Hi! FYI I've added a namespace template variable to https://grafana-rw.wikimedia.org/d/pz5A-vASz/kubernetes-resources, so you can see the request/limits per namespace over time 
[10:47:24] <akosiaris>	 next SIG is on Earth Day, I'll try to reschedule by a week
[10:48:33] <akosiaris>	 brouberol: This is just for the Total RAM and Total CPUs panels?
[11:00:00] <akosiaris>	 looks like it, I'll update the description to make it clear and switch the metric to kube_namespace_created as its cardinality is way lower and thus it's a ton faster
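
[editor's note: for context, a Grafana namespace template variable backed by kube_namespace_created could be defined with a query along these lines; this is a sketch, not the dashboard's actual variable definition, and the panel query below assumes the kube-state-metrics metric kube_pod_container_resource_requests is available:]

```promql
# Grafana template variable query (Prometheus data source):
# enumerate namespaces from a low-cardinality kube-state-metrics metric
label_values(kube_namespace_created, namespace)

# Panels can then filter on the selected value, e.g. total CPU requests:
sum(kube_pod_container_resource_requests{resource="cpu", namespace=~"$namespace"})
```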
[12:19:26] <brouberol>	 akosiaris: I tried using `kubernetes_namespace`, but IIRC all I saw was `kube-metrics` 
[12:19:47] <brouberol>	 and correct, it was just for the total ram and total cpus, as they are the only ones in which we see requests
[15:00:08] <akosiaris>	 brouberol: kube-system probably? not kube-metrics? 
[15:00:31] <akosiaris>	 but yes, the kubernetes_namespace label is IIRC populated by helm and it's the namespace the workload is deployed in
[15:00:46] <akosiaris>	 e.g. kube_namespace_created is populated by kube-state-metrics
[15:00:57] <akosiaris>	 every metric has a kubernetes_namespace and a namespace label
[15:01:10] <akosiaris>	 the former being part of the helm deployment, the latter being the actual value we care for
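
[editor's note: concretely, a single scraped series carries both labels; the label values below are hypothetical examples, not real data from the cluster:]

```promql
kube_namespace_created{kubernetes_namespace="kube-system", namespace="citoid"}
# kubernetes_namespace: where the exporter (kube-state-metrics) is deployed,
#   injected at helm deployment time
# namespace: the namespace the sample actually describes, i.e. the value
#   to filter and group on in dashboard queries
```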
[15:01:23] <brouberol>	 that's right, it was kube-system
[15:01:24] <brouberol>	 my bad
[15:08:27] <brouberol>	 understood, thanks for the clarification!
[15:21:18] <elukey>	 hey folks, I've added a procedure to move live services to Istio ingress in https://phabricator.wikimedia.org/T391457
[15:21:26] <elukey>	 at least, this is what I've used for citoid
[15:21:50] <elukey>	 there are a couple of question marks about the cleanup steps, lemme know if you have opinions