[01:11:37] that's awesome - how's the rest of the infrastructure stuff?
[01:12:05] I’ve been trying to tune the db (specifically the open table cache thingy)
[01:13:24] Found that the open file limit is now set in the systemd unit, so we’re using way under what we set in my.cnf. Without the larger increase, mariadb would force the open table cache to be much smaller than what we set (it also limits how many innodb files can be open)
[01:13:42] Fixed it now, although for it to take effect, mariadb has to be restarted
[01:13:47] how are the swift object servers? it seems like there was a major improvement around june 22nd
[01:13:58] Oh they’ve been going brilliantly
[01:14:04] Better load balance
[01:14:40] was there something explicit that happened around june 22?
[01:14:54] literally all 5 of them dramatically improved their resource utilization at that time
[01:15:25] Yeh, I had changed the weight (which took swift a while to work through and rebalance properly).
[01:15:41] huh, i'm curious how changing the weight would make ALL of them better
[01:15:52] Better load balancing
[01:15:59] I had presumed it was 0 to 100
[01:16:07] But no, it’s based on disk size
[01:16:22] You can do it to the hundred
[01:16:49] So 9tb would have a weight of 9000
[01:17:01] 925g would be 900, or 950 would be 900
[01:17:45] I’ve added something to try and help prevent a full disk (leaving 1% free)
[01:18:29] https://github.com/miraheze/puppet/blob/master/modules/swift/files/disable_rsync.py
[01:20:44] i still don't understand why better load balancing would improve ALL of them
[01:20:49] unless they were literally out of disk
[01:20:58] Because of rsync
[01:21:08] And yeh, we did run out of space on most of the nodes
[01:21:25] Apparently rsync causes this in an out-of-disk situation with Swift
[01:23:09] oh does it constantly try to shuffle stuff around?
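The weight scheme described above (disk size in GB, rounded down to the nearest hundred) and the "leave 1% free" guard can be sketched in a few lines of Python. This is a minimal illustration, not the actual disable_rsync.py script; the function names are hypothetical, and only `shutil.disk_usage` is a real stdlib call.

```python
import shutil

def ring_weight(disk_gb: float) -> int:
    # Swift device weight proportional to disk size in GB, rounded
    # down to the nearest hundred: 9 TB (9000 GB) -> 9000, 925 GB -> 900.
    return int(disk_gb // 100) * 100

def disk_nearly_full(path: str, reserve: float = 0.01) -> bool:
    # Guard in the spirit of disable_rsync.py (not the real script):
    # True once less than `reserve` (1%) of the filesystem is free,
    # so replication can be paused before the device fills completely.
    usage = shutil.disk_usage(path)
    return usage.free < usage.total * reserve
```

Rounding down (rather than to the nearest hundred) keeps the weight conservative for disks just over a boundary, which matches the 950 → 900 example in the log.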
[01:23:14] i guess that makes sense
[01:23:31] but yeah i'm looking over your grafana stuff and it's all looking way better
[01:31:40] @paladox one thing i would maybe think about is more memcached. do you use it for parser cache or just standard object caching?
[01:31:53] We use it for both
[01:32:21] We have it fall back to the db when it cannot find it in memcached
[01:40:51] alright, do you know if mem141 and mem131 are used for different things, or just sharded access to the same thing?
[01:41:36] Used for different things. I don’t remember which one is used for what, but the one using the highest amount of ram (used) is the parser cache
[01:41:42] The other one is sessions I think
[01:42:01] what about object cache?
[01:42:11] like the WANCache
[01:42:55] https://github.com/miraheze/mw-config/blob/master/GlobalCache.php
[01:49:19] interesting, so object cache is on mem141 but smw cache + sessions are on mem131?
[01:49:26] am i interpreting that right?
[01:50:34] Yeh
[01:51:15] https://grafana.miraheze.org/d/LqXeQmsMk/memcached-pods-monitoring?orgId=1&refresh=1m is this the sum of both memcached?
[02:18:04] Oh that’s a dashboard we don’t really use
[02:18:06] https://grafana.miraheze.org/d/0uBBwmsMk/memcached?orgId=1&refresh=10s
[02:18:13] Is the one we use
[03:49:57] 9 million objects in cache on 141 and 30k queries per second. Wow
[20:10:50] [1/2] should I create a phab ticket for adding the footer link setting of [[mw:Extension:ContactPage]]?
[20:10:50] [2/2] because it doesn't seem to be present
[20:11:50] according to the documentation, it requires additional code in LocalSettings.php
[20:14:33] Yes
[20:17:20] does it fit into "configuration change"?
[20:17:38] or "anything else"?
[20:33:13] aight, created
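The memcached-with-db-fallback behaviour mentioned at [01:32:21] is the classic cache-aside pattern. Here is a minimal Python sketch of the idea, with a dict-backed stand-in for a memcached client; the names are hypothetical and this is not Miraheze's actual MediaWiki configuration (that lives in the linked GlobalCache.php).

```python
class DictCache:
    """In-memory stand-in for a memcached client (get/set subset only)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value, ttl=0):
        self._data[key] = value

def get_with_db_fallback(cache, db_fetch, key):
    # Cache-aside: try memcached first; on a miss, fall back to the
    # database and repopulate the cache so the next read is served hot.
    value = cache.get(key)
    if value is None:
        value = db_fetch(key)
        cache.set(key, value)
    return value
```

Repopulating on a miss is what makes the two pools in the log (mem141 for parser/object cache, mem131 for sessions and the SMW cache) mostly absorb read traffic, with the database only touched for cold keys.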