[07:50:21] klausman elukey: would either of you be available today to pair on deploying https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/1052701? Thanks!
[07:50:42] Sure. Even though we have a sprint week in the ML team, I can make time.
[07:57:48] appreciated! I'm available whenever is convenient for you
[08:01:32] brouberol: o/ there are nice docs at https://wikitech.wikimedia.org/wiki/Kubernetes/Ingress#Istio_setup_and_configuration
[08:01:46] looking, thank you!
[08:01:58] in theory it should just be a matter of running `kube_env admin $cluster` + istioctl etc.
[08:04:00] gotcha. I've +1ed the CR
[08:07:37] brouberol: I've got until the top of the next hour, or after 11:30
[08:08:00] thanks klausman. Could you +2 https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/1052701 ?
[08:08:42] ack, will make a copy of the file beforehand, this comes in handy for diffing
[08:08:55] after that I think (as per e.lukey's comment) I only need to `istioctl apply -f config.yaml` it from `/srv/deployment-charts/custom-deploy.d/dse-k8s/`
[08:10:32] brouberol: I also recommend cd'ing to /srv/deployment-charts/custom_deploy.d/istio/dse-k8s and running `istioctl-1.15.7 manifest diff ~klausman/config-before-1052701.yaml config.yaml`
[08:10:51] This is as close as you can get to something like `kubectl diff`
[08:11:22] ack, thanks, and indeed, the diff looks sane, and is only about k8s.securityContext
[08:11:25] It also verifies that the config.yaml syntax is valid etc.
[08:11:41] You can also run apply with --dry-run
[08:12:46] I have `watch kubectl get pods -n istio-system` running as well, to spot any sudden crashloops or the like
[08:13:14] `istioctl-1.15.7 apply -f dse-k8s/config.yaml --dry-run` went fine
[08:13:43] right, time to pull the trigger, as it were
[08:15:00] Ok, all the istio pods have restarted cleanly, let's see if they keep living for a while
[08:15:14] well, all the IGs (ingress gateways)
[08:15:27] I can see the added security context on the pods
[08:19:03] I usually wait 5m to see if there are explosions. I've also run a watch for `get pods -A | grep -v Running`, and all looks fine
[08:19:35] I'd say all good.
[08:20:02] thanks!
[08:21:25] np :)
[08:22:56] the other thing is also to actually test the ingress
[08:23:34] I guess that trying Superset should be enough (IIRC it is now on k8s behind ingress, right?)
[10:11:55] Yep, I did. I connected to a bunch of services under ingress w/o issues
[10:12:08] {superset,datahub,mpic-next}.wikimedia.org
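
A condensed sketch of the deploy flow from the session above, assuming the dse-k8s paths and the istioctl-1.15.7 invocation shown in the log (the backup filename is illustrative; WMF may wrap or alias istioctl differently):

```bash
# On the deployment server, target the dse-k8s cluster with admin credentials
kube_env admin dse-k8s

cd /srv/deployment-charts/custom_deploy.d/istio/dse-k8s

# Keep a copy of the current config for diffing (filename is illustrative)
cp config.yaml ~/config-before-1052701.yaml

# After the change is merged and pulled: diff old vs. new config.
# This is the closest thing to `kubectl diff` and also validates the YAML.
istioctl-1.15.7 manifest diff ~/config-before-1052701.yaml config.yaml

# Dry-run first, then apply for real (subcommand as used in the session)
istioctl-1.15.7 apply -f config.yaml --dry-run
istioctl-1.15.7 apply -f config.yaml
```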
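
Post-apply verification along the lines discussed above; the label selector in the last command is an assumption, since the ingress gateway pods may be labelled differently on WMF clusters:

```bash
# Watch the istio-system pods for restarts or crashloops
watch kubectl get pods -n istio-system

# Cluster-wide: list anything not in the Running state
kubectl get pods -A --no-headers | grep -v Running

# Confirm the new securityContext landed on the ingress gateway pods
# (label selector is an assumption)
kubectl get pods -n istio-system -l app=istio-ingressgateway \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.securityContext}{"\n"}{end}'
```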
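
Finally, a quick smoke test of ingress itself against the hostnames mentioned at the end; what counts as a healthy status code is an assumption (a login redirect is fine, a timeout is not):

```bash
# Hit each service behind ingress and print the HTTP status
for host in superset datahub mpic-next; do
  curl -sS -o /dev/null -w "%{http_code}  https://${host}.wikimedia.org/\n" \
    "https://${host}.wikimedia.org/"
done
```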