[00:04:57] PROBLEM - MariaDB sustained replica lag on es5 on es1025 is CRITICAL: 3.8 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=es1025&var-port=9104
[00:05:59] RECOVERY - MariaDB sustained replica lag on es5 on es1025 is OK: (C)2 ge (W)1 ge 0 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=es1025&var-port=9104
[01:09:47] PROBLEM - MariaDB sustained replica lag on m1 on db1117 is CRITICAL: 16.4 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1117&var-port=13321
[01:10:19] PROBLEM - MariaDB sustained replica lag on m1 on db2160 is CRITICAL: 14 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db2160&var-port=13321
[01:11:37] RECOVERY - MariaDB sustained replica lag on m1 on db1117 is OK: (C)2 ge (W)1 ge 0 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1117&var-port=13321
[01:12:07] RECOVERY - MariaDB sustained replica lag on m1 on db2160 is OK: (C)2 ge (W)1 ge 0 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting%23Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db2160&var-port=13321
[06:42:32] (MysqlReplicationLag) firing: MySQL instance db2181:9104 has too large replication lag (14h 5m 49s) - https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting#Depooling_a_replica - https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&refresh=1m&var-job=All&var-server=db2181&var-port=9104 - https://alerts.wikimedia.org/?q=alertname%3DMysqlReplicationLag
[06:42:54] ^ me
[06:58:25] Amir1: db2094 replication broken with alter table module_deps ADD PRIMARY KEY(md_module, md_skin), drop index if exists md_module_skin
[06:58:26] that you?
[07:57:32] (MysqlReplicationLag) resolved: MySQL instance db2181:9104 has too large replication lag (6m 38s) - https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting#Depooling_a_replica - https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&refresh=1m&var-job=All&var-server=db2181&var-port=9104 - https://alerts.wikimedia.org/?q=alertname%3DMysqlReplicationLag
[11:30:29] marostegui: that is me (good morning) :D
[11:30:33] let me fix it
[11:35:38] it should be fixed now
[17:17:01] hi all, what's the current status of the mariadb104-test cloud vps project? it's unclaimed in this year's edition of the cloud vps purge (https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2022_Purge) and I see several instances failing to run Puppet (https://prometheus-alerts.wmcloud.org/?q=%40state%3Dactive&q=project%3Dmariadb104-test)
[17:24:50] taavi: unsure, but my guess is it is no longer in use (was used to test the 10.4 migration) - but I don't know much about it
[17:32:17] Emperor shows as admin, maybe he uses it? ^
[17:57:53] * Emperor doesn't
[17:58:10] I think maybe k.ormat showed me some stuff on it when I arrived?
[17:58:22] dunno if she would still want it
[19:08:18] taavi: Honestly, I want to request a new one that we could use for testing rewrite of wmfmariadbpy and other puppet changes
[19:08:27] so I think it's okay to purge it
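
A note on the lag alerts above: the "sustained replica lag" check fires when measured lag stays at or above the critical threshold (2, against a warning threshold of 1, per the "(C)2 ge (W)1" text), and the production alert is driven by the Prometheus metrics behind the linked Grafana dashboards. As a rough manual spot check only, assuming direct client access to the replica in question (hostnames here are illustrative, and this is not the same measurement the alert uses), one could run:

  -- On the suspect replica (e.g. es1025), inspect the replication threads
  -- and the server's own lag estimate.
  SHOW SLAVE STATUS\G
  -- Key fields: Slave_IO_Running, Slave_SQL_Running, Seconds_Behind_Master,
  -- Last_IO_Error, Last_SQL_Error.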
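
On the 06:58 db2094 report: replication on that host stopped while applying the ALTER TABLE against module_deps. The log does not show how the fix was applied by 11:35; the sketch below is just one common recovery path for a replica broken by a schema-change conflict, assuming direct access to db2094 and that the failed DDL is the root cause (whether re-applying the change is appropriate depends on the actual error).

  -- Identify the failing event and error on the broken replica.
  SHOW SLAVE STATUS\G                    -- check Last_SQL_Errno / Last_SQL_Error
  -- Reconcile the table definition by hand (in the affected wiki database)
  -- so replication can continue; this mirrors the DDL quoted in the log:
  ALTER TABLE module_deps
    ADD PRIMARY KEY (md_module, md_skin),
    DROP INDEX IF EXISTS md_module_skin;
  -- Resume replication and confirm it catches up.
  START SLAVE;
  SHOW SLAVE STATUS\G                    -- Slave_SQL_Running should be Yes again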