[10:50:24] decommissioning ms-fe200[5-8]
[10:54:27] Emperor: I guess those alerts on -operations are expected?
[10:56:00] I wasn't expecting them, I think related to having moved the codfw stats node. I'll talk to go.dog (in -operations)
[10:57:17] ...has recovered by itself
[11:27:23] I'm going to take a break, yesterday I stay a bit more after the late meeting and need some fresh air
[11:27:31] *stayed
[12:37:25] Amir1: do you know if this is true? https://phabricator.wikimedia.org/T300394#7696623
[12:39:22] marostegui: never heard of the extension before
[12:39:33] 🤦
[12:39:41] from what I can see it is not on s1, s2 or s3
[12:39:46] Let me check and I will get back to you
[12:39:50] so I assume it is only going to be in x1
[12:39:58] Amir1: No worries, I will check the rest of the sections
[12:40:46] it should be set by wgBounceHandlerSharedDB, let me look it up in production
[12:41:09] wmf-config/CommonSettings.php: $wgBounceHandlerSharedDB = 'wikishared';
[12:41:14] riiight
[12:41:17] so that's x1
[12:41:25] that makes that schema change easy, I will get it done now
[12:41:33] awesome
[12:41:46] thank you
[12:43:54] I remember some weird s3 dbs that had x1 tables locally, but cannot find the ticket right now to see which db or tables were affected, or if it was fixed
[12:44:16] yeah, I know the one you mean
[12:44:26] I think it was https://phabricator.wikimedia.org/T119154
[12:44:34] only for echo
[12:44:43] yeah, I check s3 for that reason, but it was empty
[12:44:49] *checked
[12:45:16] yeah, bringing it up for more db weirdness for Amir1 to know :-)
[12:56:53] PROBLEM - MariaDB sustained replica lag on s4 on db1143 is CRITICAL: 3 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1143&var-port=9104
[12:58:03] RECOVERY - MariaDB sustained replica lag on s4 on db1143 is OK: (C)2 ge (W)1 ge 0 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1143&var-port=9104
[13:20:30] nice, templatelinks on cebwiki is 178G
[18:11:40] marostegui: that's why we are normalizing it, fun fact: botpedias get much much smaller (due to higher duplication ratio)
[21:28:59] for the templatelinks change I am going to need to downtime each host for around 24h, as that is what it takes for it to finish on dewiki+cebwiki+catchup after all that. heh
[21:29:10] I will do that tomorrow
[21:29:19] I am leaving db1100 depooled for now
[21:29:24] And killed the script
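
A minimal SQL sketch of the per-section check discussed around 12:39-12:41: it assumes the BounceHandler extension's table is named bounce_records (not confirmed in the log); running it against one replica per section (s1-s8, x1) would show whether the table exists anywhere outside x1/wikishared before applying the schema change only on x1.

    -- Sketch: find which databases on this host contain the assumed
    -- BounceHandler table. Adjust the table name if the extension differs.
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_name = 'bounce_records';
    -- Expected: rows only on x1 (database 'wikishared'), empty on s1-s8,
    -- meaning the schema change only needs to run against x1 hosts.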