[12:28:23] jhathaway: I fear I don't follow. The cpuset patch adds API endpoints (in the form of sysfs files) that allow sequestering a set of CPUs for whatever use. And in fact that was already a thing. It did require changing the entire cgroup hierarchy down to the end "leaf", which was cumbersome and had issues, but it was a thing already. It just became easier and (arguably) more functional with that patch.
[12:28:32] 6.7 btw was released <1 month ago. It will be quite a while before we see that patch in our fleet.
[12:28:43] The cri-o config part however is already 4 years old, so I assume the kernel patch is just tangentially related? Perhaps eventually allowing cri-o to implement that same functionality a bit more easily and without the possible side effects of modifying the entire cgroup hierarchy?
[12:29:08] In any case, containerd does have cpuset support already, e.g. ctr run has the flag --cpuset-cpu, see https://github.com/containerd/containerd/blob/main/cmd/ctr/commands/run/run_unix.go#L75. nerdctl (the actually user-friendly CLI) has the same flag. What it doesn't have is the concept of infra containers that cri-o apparently has and which is supposed to work in cooperation with kubelet's --reserved-cpus flag (to the point where they need to be the same value). We had no need for this yet. We did have a need to reserve some resources for system and kubernetes components, but not in the form of an entire CPU (or more).
[12:29:54] As for the overall systemd integration, containerd is pretty well integrated already, I'd argue. It supports (and defaults to) the Type=notify type of execution under systemd, effectively letting systemd know that it finished starting up; the io.containerd.grpc.v1.cri containerd plugin allows utilizing systemd_cgroup (defaults to False right now), and it allows the same thing for the runc options (same default).
[12:31:17] I can see an argument that these values should default to true for even better integration with systemd, but I am failing to see the relation of the linked patches to systemd, to be honest.
[12:55:55] CRE replacement criteria and grading matrix published under https://wikitech.wikimedia.org/wiki/Kubernetes/CRE/criteria
[12:55:58] mamu: ^
[14:03:08] meeting notes published at https://www.mediawiki.org/wiki/Kubernetes_SIG/Meetings/2024-01-30
[16:14:21] akosiaris: my apologies, you are correct, those config options are not related to the remote partition cpu isolation feature; I should have dug into how they were used a bit more.
[16:15:11] They are actually turned on through custom cri-o annotations, as can be seen in the pull request, https://github.com/cri-o/cri-o/pull/7485
[16:15:42] It looks like the original purpose was running low latency jobs in k8s, https://github.com/openshift-kni/performance-addon-operators/blob/master/docs/cpu-load-balancing.md
[16:16:31] cri-o has a number of scheduling domains in their high perf addon, https://docs.openshift.com/container-platform/4.9/scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.html#cnf-cpu-infra-container_cnf-master
[16:21:46] I agree the relation between those patches and systemd is weak; what I was trying to get at is a better understanding of the goals of containerd vs cri-o, i.e. why there are separate projects. For cri-o, one of their expressed goals is tight integration with systemd, https://news.ycombinator.com/item?id=17033461.
[16:21:48] More relevant to the patch is Red Hat engineers collaborating on both the cri-o and kernel side. That seems attractive, though with the downside of project control being dominated by Red Hat, as you mention.
[16:22:42] All that said, containerd is used by many big shops and cloud providers, and it is the runtime we already use, so I'm sure it will work fine!!
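
For reference, flipping the defaults discussed above (so containerd's CRI plugin drives cgroups through systemd rather than cgroupfs) is a small config change. A minimal sketch, assuming containerd's version 2 config format and the runc v2 shim; exact option names are worth verifying against the deployed containerd release:

    # /etc/containerd/config.toml (sketch only)
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      # Hand cgroup management for runc-created containers to systemd
      # instead of the default cgroupfs driver.
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

For this to be consistent end to end, kubelet needs its cgroup driver set to systemd as well (cgroupDriver: systemd in the kubelet config), so both components agree on who owns the cgroup tree.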