[08:22:14] ottomata: https://wikitech.wikimedia.org/wiki/Kubernetes/Resource_requests_and_limits - we have to run more tests with envoy as of now. Is it a "real" problem (like a latency increase) or is it "just about the metrics"?
[09:54:57] it is being throttled heavily (tens of ms), so latency is likely affected, but so far I don't think we see holes in graphs etc.
[09:55:08] https://phabricator.wikimedia.org/T347477#9299688
[14:28:02] jayme: good q, not sure, but mostly i'm wondering why there's throttling at all if we're not anywhere close to the limits?
[14:28:45] please check on wikitech if that maybe answers that question
[15:27:22] okay, i sorta understand that if there are a buncha threads, scheduling can get divvied up poorly and cause throttling
[15:27:38] i think we may be affected, trying something...
[15:40:18] yay, success :D If you want more, you may want to read all/some of the external resources. But it's not exactly fun
[15:52:02] i have skimmed some; understanding it deeply would take ummmmm more time than i have right now :D
[17:24:30] fyi, we set service-runner num_workers to 0, meaning there is only one nodejs process spawned, running master+worker in the same process. This means 1/2 the number of threads created (UV_THREADPOOL_SIZE is set to 128 in the docker image).
[17:24:35] since then: no throttling!
[17:24:47] https://grafana.wikimedia.org/goto/6KmODK4Sk?orgId=1
[17:25:06] https://phabricator.wikimedia.org/T347477#9302664
[17:45:50] {◕ ◡ ◕}
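
For context on the [15:27:22] point (many threads can trigger CFS throttling even when average usage is well below the limit), here is a minimal back-of-the-envelope sketch. All numbers are illustrative assumptions, not from the conversation: a 2-CPU limit, ~130 runnable threads (roughly a service-runner process with UV_THREADPOOL_SIZE=128 plus a few libuv/V8 threads), and a 48-core host.

```python
# Back-of-the-envelope sketch (illustrative numbers only) of why a
# multi-threaded container can be throttled by CFS bandwidth control
# even though its average CPU usage looks far below the limit.

period_ms = 100.0   # default CFS period (cpu.cfs_period_us = 100ms)
cpu_limit = 2.0     # Kubernetes CPU limit -> 200ms of CPU time allowed per period
quota_ms = cpu_limit * period_ms

host_cores = 48     # cores the threads can spread across (assumption)
threads = 130       # runnable threads in a burst (assumption)
parallelism = min(threads, host_cores)

# If a burst makes many threads runnable at once, they run in parallel on
# many host cores and can consume the whole per-period quota in a few
# wall-clock milliseconds; the cgroup is then throttled for the remainder
# of the 100ms period.
ms_to_exhaust_quota = quota_ms / parallelism
throttled_ms = period_ms - ms_to_exhaust_quota

print(f"quota exhausted after ~{ms_to_exhaust_quota:.1f}ms of wall clock; "
      f"throttled for ~{throttled_ms:.1f}ms of this {period_ms:.0f}ms period")
```

This only bites when the combined demand inside a single 100ms period exceeds the quota, which short, parallel thread wakeups can do repeatedly while the per-minute average on a dashboard still looks modest; that matches the "tens of ms" of throttling seen at [09:54:57].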
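And for the [17:24:30] fix, a sketch of what the change looks like in a service-runner config.yaml. Only `num_workers: 0` is the setting actually discussed; the surrounding keys and values are hypothetical, and UV_THREADPOOL_SIZE stays at 128 via the docker image environment as noted in the log.

```yaml
# Sketch of a service-runner config.yaml (only num_workers is the setting
# from the conversation; other keys/values are illustrative).
#
# num_workers: 0 runs master and worker in a single nodejs process instead
# of forking a separate worker, roughly halving the number of threads
# created, since the 128-slot libuv threadpool (UV_THREADPOOL_SIZE=128,
# set in the docker image) is no longer duplicated across two processes.
num_workers: 0
worker_heap_limit_mb: 300      # hypothetical value, shown for context
services:
  - name: my-service           # hypothetical service entry
    module: ./app.js
```

Fewer threads means bursts exhaust the per-period CFS quota less easily, which is consistent with the throttling disappearing at [17:24:35].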