[13:57:40] interesting https://github.com/kubeflow/kfserving/blob/master/config/rbac/kustomization.yaml#L6-8
[13:57:57] kube-rbac-proxy seems to be used to protect the /metrics endpoint for the kfserving manager
[14:21:57] istio/knative/kfserving images pushed to our docker registry! \o/
[16:02:28] elukey: that is awesome!
[16:03:39] now i wanna try running our services on our own images :)
[16:05:09] accraze: morning :)
[16:05:40] I didn't add the kube-rbac-proxy one, that seems to be related to protecting the /metrics endpoint of the kfserving controller
[16:05:48] in theory we should be able to avoid it
[16:06:14] ahh yeah that should be fine
[16:17:21] I am trying to create a simple helm chart for kfserving similar to https://github.com/softonic/knative-serving-chart
[16:17:30] I am testing it now on minikube
[16:17:43] for kfserving I think that we should create something similar, it shouldn't be difficult
[16:23:57] accraze: did you see https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Tutorial ?
[16:24:12] yeah i was looking through that a bit yesterday
[16:24:24] this is what I was trying to explain during the meeting :D
[16:25:25] haha yeah i think it makes sense now
[16:26:17] so the generic revscoring image will be built by blubber, but then the individual inference services are configured & deployed via helm
[16:28:43] exactly yes
[16:29:00] it should give us a lot of flexibility
[16:29:21] we'll have to update the docs for our use cases (how to deploy etc..) but it shouldn't be too different from the other k8s clusters
[16:32:27] the istio/knative/kfserving parts will be handled differently
[16:33:32] for the upgrades I was thinking that since we'll probably be active/active (eqiad/codfw) we could just depool one dc, apply any procedure, repool once we are good
[16:33:42] the other dc should automatically take care of the traffic
[16:53:50] going afk, ttyl!
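
(Sketch for the kube-rbac-proxy discussion above: in upstream kubebuilder-style manifests the proxy usually runs as a sidecar in front of the controller's metrics port. The image tag, ports and names below are illustrative assumptions, not the exact kfserving values.)

# Hypothetical kube-rbac-proxy sidecar protecting the manager's /metrics.
# Image tag, ports and names are assumptions for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kfserving-controller-manager
  namespace: kfserving-system
spec:
  selector:
    matchLabels: {control-plane: controller-manager}
  template:
    metadata:
      labels: {control-plane: controller-manager}
    spec:
      containers:
        - name: kube-rbac-proxy
          image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
          args:
            - "--secure-listen-address=0.0.0.0:8443"   # TLS endpoint scraped by Prometheus
            - "--upstream=http://127.0.0.1:8080/"      # plain-HTTP metrics of the manager
            - "--logtostderr=true"
          ports:
            - containerPort: 8443
              name: https
        - name: manager
          image: gcr.io/kfserving/kfserving-controller:latest
          args:
            - "--metrics-addr=127.0.0.1:8080"          # metrics bound to localhost only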
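
(On the kfserving chart idea: the main thing such a chart needs is a way to point every component image at the local registry, similar to what the knative-serving-chart does. A hypothetical values.yaml fragment; repository paths, tags and keys are assumptions only.)

# Hypothetical values.yaml fragment for a kfserving chart; image locations
# and tags are placeholders, not the real registry contents.
kfserving:
  controller:
    image: docker-registry.wikimedia.org/kfserving-controller
    tag: v0.5.1
  storageInitializer:
    image: docker-registry.wikimedia.org/kfserving-storage-initializer
    tag: v0.5.1
kubeRbacProxy:
  enabled: false   # see the /metrics discussion above; likely not needed here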
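
(For the "generic revscoring image built by blubber" part, a hypothetical .pipeline/blubber.yaml sketch; base image, variant layout and file names are assumptions.)

# Hypothetical Blubber config for a generic revscoring inference image.
# Base image, variant names, requirements file and entrypoint are assumptions.
version: v4
base: docker-registry.wikimedia.org/python3-buster
lives:
  in: /srv/rev-inference
variants:
  build:
    python:
      version: python3
      requirements: [requirements.txt]
  production:
    copies: [build]                         # reuse the installed dependencies
    entrypoint: [python3, model_server.py]  # hypothetical entrypoint script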
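
(And for "individual inference services configured & deployed via helm", a hypothetical helm-templated InferenceService; the apiVersion, image path and values keys are illustrative assumptions, not the actual chart.)

# Hypothetical helm template: one generic image, per-model config via values.
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: "{{ .Values.model.name }}"
  namespace: "{{ .Release.Namespace }}"
spec:
  predictor:
    containers:
      - name: kfserving-container
        # assumed image name; the tag would come from the deployment pipeline
        image: "docker-registry.wikimedia.org/revscoring-inference:{{ .Values.image.tag }}"
        env:
          - name: MODEL_NAME
            value: "{{ .Values.model.name }}"
          - name: WIKI
            value: "{{ .Values.model.wiki }}"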