[18:10:09] AaronSchulz: who's next for T401855?
[18:10:09] T401855: ☂ PHP 8.3 issues found during WMF rollout - https://phabricator.wikimedia.org/T401855
[18:12:34] Krinkle: bpirkle again
[20:55:14] the other day _joe_ was ranting about memcached usage in core, which got me thinking..
[20:55:59] is it indeed the case that a lot of memcached usage has negative value, because the cost of computing a value locally is less than the fetch overhead? (network plus deserialization)
[20:58:35] if so, could that be detected and logged automatically by BagOStuff? The way I imagine this working: for a getWithSetCallback that misses in the cache, (a) measure the execution time of the callback; (b) measure the size of the serialized value before writing it to the cache
[20:59:58] if callback_execution_time > expected_latency_of_fetching_value_of_size(n), log a message
[21:12:28] uhh, *less than* rather.
[21:26:17] ori: There is one caveat there, which is that memcached isn't always about speed but also about availability. Three of the most popular keygroups ("page" for LinkCache, and "SqlBlobStore-blob" / "revision-row" when fetching wikitext) are very fast, well-indexed single-row SELECTs by primary key.
[21:26:39] We don't want that extra ~1M qps on MySQL, where it's much more expensive than memc
[21:27:16] I wrote about this a year or two ago when re-writing this guide: https://wikitech.wikimedia.org/wiki/MediaWiki_Engineering/Guides/Backend_performance_practices
[21:27:23] anyway, putting that rare case aside...
[21:27:29] We have metrics on this in WANObjectCache
[21:29:37] Pick a popular key at https://grafana.wikimedia.org/d/2Zx07tGZz/wanobjectcache then check the by-keygroup dash. Then look at "Regeneration callback time". If that's <5ms, it might be on par with a memcached fetch.
[21:30:10] Since the migration from Graphite to Prometheus we lost meaningful distinction below 5ms. Ideally we'd know if it was <0.3ms, which is closer to actual memcached latency.
[21:34:50] Prometheus is slow and unusable as it is, but if we manage to fix that, it might be worth adding a 0-1ms bucket. https://codesearch.wmcloud.org/operations/?q=histogram_buckets&files=%28webperf%7Cmediawiki%7Cstatsd_exporter%29.*pp&excludeFiles=&repos=
[21:34:55] ref T371102
[21:34:56] T371102: Include long-retention Prometheus data from Thanos into Grafana queries - https://phabricator.wikimedia.org/T371102
[22:32:31] oh nice
[22:32:35] thanks!
[23:15:54] Does anyone know which wikis should be in categories-rdf.dblist and which not? The dblist hasn't been touched for 5 years and seems to include most wikis. Presumably new wikis should still be added, even though nobody has touched it in that time?
[23:28:15] zabe: Or whether it's even used anymore...
[23:29:12] For some dumping, apparently: https://codesearch.wmcloud.org/search/?q=categories-rdf
[23:29:34] oh it's gone out of puppet...
[23:29:52] I'd kinda presume old wikis were selectively added... and new wikis probably should be added
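
A minimal sketch of the idea discussed at 20:58-21:12: on a cache miss, time the regeneration callback and measure the serialized value, then log when recomputing locally appears cheaper than a memcached fetch would be. This is not MediaWiki's actual BagOStuff/WANObjectCache code; the helper names, the latency model, and its constants are assumptions for illustration only.

    <?php
    /**
     * Estimate how long fetching a cached value of $bytes bytes would take,
     * in seconds. Assumed model: a fixed network round trip plus a per-byte
     * transfer/deserialization cost. The constants are placeholders.
     */
    function estimatedFetchLatency( int $bytes ): float {
        $roundTripSeconds = 0.0005; // ~0.5 ms network RTT (assumption)
        $secondsPerByte = 1e-8;     // transfer + unserialize cost (assumption)
        return $roundTripSeconds + $bytes * $secondsPerByte;
    }

    /**
     * Wrapper emulating getWithSetCallback() over a plain key/value store.
     * $cache can be any object with get( $key ) and set( $key, $value )
     * methods where get() returns false on a miss.
     */
    function getWithSetCallbackInstrumented( $cache, string $key, callable $callback ) {
        $value = $cache->get( $key );
        if ( $value !== false ) {
            return $value; // cache hit; nothing to measure
        }

        // Cache miss: (a) time the callback, (b) size the serialized value.
        $start = microtime( true );
        $value = $callback();
        $callbackSeconds = microtime( true ) - $start;
        $serializedBytes = strlen( serialize( $value ) );

        // Log when recomputing locally looks cheaper than a cache fetch.
        if ( $callbackSeconds < estimatedFetchLatency( $serializedBytes ) ) {
            error_log( sprintf(
                'Possibly negative-value cache key %s: callback %.3f ms, %d bytes serialized',
                $key, $callbackSeconds * 1000, $serializedBytes
            ) );
        }

        $cache->set( $key, $value );
        return $value;
    }

As noted at 21:26, a key that fails this comparison is not automatically useless: even a sub-millisecond callback may be worth caching if it shields MySQL from a large query volume.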
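
For the categories-rdf.dblist question at 23:15, a rough sketch of how one might list wikis that appear in all.dblist but not in categories-rdf.dblist, i.e. candidates created after the list was last updated. The paths assume a checkout of operations/mediawiki-config, and the dblist format is assumed to be one database name per line with optional comment/expression lines.

    <?php
    /** Read a dblist file into an array of wiki db names, skipping comments. */
    function readDblist( string $path ): array {
        $names = [];
        $lines = file( $path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );
        foreach ( $lines ?: [] as $line ) {
            $line = trim( $line );
            if ( $line === '' || $line[0] === '#' || $line[0] === '%' ) {
                continue; // skip comments and expression lines
            }
            $names[] = $line;
        }
        return $names;
    }

    $all = readDblist( 'dblists/all.dblist' );
    $categoriesRdf = readDblist( 'dblists/categories-rdf.dblist' );

    // Wikis present in all.dblist but missing from categories-rdf.dblist.
    foreach ( array_diff( $all, $categoriesRdf ) as $dbname ) {
        echo "$dbname\n";
    }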