[06:13:20] Machine-Learning-Team, ORES, Platform Engineering: IndexPager::buildQueryInfo (contributions page unfiltered) LEFT JOIN ores_classification needs tuning - https://phabricator.wikimedia.org/T284888 (Marostegui)
[13:54:34] the problem that I mentioned last week with knative serving is related to helm, I did some tests and it seems related to RBAC
[13:56:39] something like https://github.com/knative/serving/issues/2514
[14:01:03] more precisely:
[14:01:14] 1) install istio via istioctl - all good
[14:01:28] 2) deploy knative serving via helm, all good (apparently), all pods come up etc..
[14:01:41] 3) deploy kfserving via kubectl apply -f - all good
[14:01:58] 4) deploy a simple inference service - the queue-proxy pod logs a lot of (all different) errors, e.g.:
[14:02:12] reflector.go:178] knative.dev/pkg/controller/controller.go:618: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:default:default" cannot list resource "endpoints" in API group "" at the cluster scope
[14:03:48] Failed to list *v1alpha1.Image: images.caching.internal.knative.dev is forbidden: User "system:serviceaccount:default:default" cannot list resource "images" in API group "caching.internal.knative.dev" at the cluster scope
[14:03:52] etc..
[14:04:12] now the controller should be the one in the kfserving namespace
[14:18:40] but I can't really decode the above msg completely
[14:19:22] is it the knative controller (that should be the one) answering back to the kfserving queue-proxy that it cannot list xyz?
[14:19:44] (the inference service runs in the default namespace now, so it may make sense)
[14:39:59] AIUI, cross-namespace communication is not (easily, by default) allowed
[14:40:47] but it works if I use kubectl apply, as the quick install guide from upstream outlines
[14:41:27] Hum.
[14:42:48] so it must be helm doing something different with the knative serving configs
[14:49:02] Have you tried running the inference service in the knative namespace, just to see what happens and whether the error changes?
[14:49:44] I haven't, but we shouldn't deploy in there
[14:54:48] in theory we should be able to deploy inference services in our own namespaces
[14:55:03] (default, batman, etc..)
[14:56:04] Maybe just create a non-default one
[14:56:18] I wouldn't be surprised if the default NS is walled off (sort of)
[14:58:12] tried that last week but nothing changed
[15:01:06] my impression is that it would need a dedicated RBAC cluster role binding or similar
[15:19:05] https://github.com/kubeflow/kfserving/blob/master/docs/KFSERVING_DEBUG_GUIDE.md#debug-kfserving-request-flow is informative
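
One hedged way to read the reflector errors above (an interpretation, not something stated in the log): the message is the Kubernetes API server rejecting a cluster-scoped list request made by whichever pod hosts that reflector, and that pod is authenticating as the default serviceaccount of the default namespace rather than as a knative/kfserving controller serviceaccount - it is not the knative controller "answering back". A few standard kubectl checks can confirm what identity the failing pod runs under and what it is actually allowed to do; the pod name below is a placeholder:

# check whether the identity quoted in the error can list the resources it complains about
kubectl auth can-i list endpoints --as=system:serviceaccount:default:default
kubectl auth can-i list images.caching.internal.knative.dev --as=system:serviceaccount:default:default

# check which serviceaccount the failing pod actually uses (pod name is a placeholder)
kubectl -n default get pod <inference-service-pod> -o jsonpath='{.spec.serviceAccountName}'

# list the cluster role bindings and their subjects, filtering for knative
kubectl get clusterrolebindings -o wide | grep -i knative

Comparing this output between the helm-based install and the upstream kubectl-apply install should show whether helm dropped (or mis-rendered) some of the RBAC objects or serviceAccountName fields.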
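
If the comparison does show missing permissions, a throwaway ClusterRole/ClusterRoleBinding granting exactly the two accesses from the logged errors to default:default would confirm the theory. This is a diagnosis-only sketch under that assumption - the object names are made up, binding cluster-wide list rights to the default serviceaccount is far too broad to keep, and the proper fix would still be to make the helm chart install the same RBAC manifests the upstream kubectl-apply guide does:

# hypothetical, diagnosis-only RBAC objects; names are invented, not the real fix
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: knative-rbac-debug
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["caching.internal.knative.dev"]
  resources: ["images"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: knative-rbac-debug
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: knative-rbac-debug
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOF

If the reflector errors disappear after applying this, the problem is almost certainly RBAC objects (or serviceAccountName assignments) lost in the helm packaging; the debug objects should then be deleted again.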