[01:06:38] PROBLEM - MariaDB sustained replica lag on m1 on db1117 is CRITICAL: 4 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1117&var-port=13321
[01:06:48] PROBLEM - MariaDB sustained replica lag on m1 on db2160 is CRITICAL: 4 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db2160&var-port=13321
[01:07:48] RECOVERY - MariaDB sustained replica lag on m1 on db1117 is OK: (C)2 ge (W)1 ge 0 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1117&var-port=13321
[01:07:56] RECOVERY - MariaDB sustained replica lag on m1 on db2160 is OK: (C)2 ge (W)1 ge 0 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db2160&var-port=13321
[08:35:22] the db maintenance map isn't working?
[08:35:25] or is it me?
[08:35:57] let me try with s3@eqiad which I didn't do
[08:38:43] that was it yeah
[13:16:45] in 2021 we did 31 schema changes. This year 89: https://phabricator.wikimedia.org/maniphest/query/C9bPxtbDvb7p/#R vs https://phabricator.wikimedia.org/maniphest/query/Dz3ptsJh4vEn/#R
[13:17:05] and technically it doesn't include around ten schema changes that are almost finished
[13:18:36] nice work, both of you!
[13:37:52] wow
[13:52:13] sigh, the installer wiped a drive of thanos-be1001
[17:09:13] ...and still rebalancing, so no more reimages for me today.