[05:10:12] Curious about the enwiktionary pages-articles.xml dump, if this is the right place to ask. Its status is still "waiting". I don't recall it taking this long before. Is there a technical problem that's holding them up (an overload caused by the DynamicPageList catastrophe)?
[07:39:04] !log tools.dannys712-bot stop running and queued GlobalRollbacker-unreviewer grid jobs per private tool maintainer request
[07:39:06] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.dannys712-bot/SAL
[11:49:29] is there a way I can see the ToolsDB replication status?
[11:49:42] (so I know how long to sleep when deleting large batches of rows without causing excessive replication lag)
[11:49:56] SHOW MASTER STATUS tells me I don’t have the required privileges ^^
[11:55:18] (relatedly, can I actually connect to the replica or use it in my tools, or is it purely a hot standby?)
[11:55:38] (if it’s only a hot standby, then I guess it’s not as bad if replication lag goes up for a while as it is in production)
[12:00:55] https://wikitech.wikimedia.org/wiki/Help:Toolforge/Database#Identifying_lag mentions a heartbeat table, but it looks like that’s only for the wiki replicas, not ToolsDB
[12:01:10] (no results in `SELECT * FROM information_schema.tables WHERE table_name LIKE '%heartbeat%'`)
[12:46:58] lucaswerkmeister: https://replag.toolforge.org/ for prod databases, I think toolsdb is not replicated?
[12:47:29] err, scratch that
[12:48:41] You could create your own heartbeat table, i.e. just update an entry in a table and check how long it takes for it to pop out on the other side
[12:50:01] does that mean it *is* possible to connect to the replica?
[12:50:21] (or how else would I check when the entry shows up on the other side?)
[12:57:32] Hmm, seems it's not exposed officially. https://wikitech.wikimedia.org/wiki/Portal:Data_Services/Admin/Toolsdb suggests clouddb1002 is the replica
[12:58:16] `echo "SELECT COUNT(*) FROM information_schema.SCHEMATA;" | mysql -h clouddb1001.clouddb-services.eqiad1.wikimedia.cloud` and `echo "SELECT COUNT(*) FROM information_schema.SCHEMATA;" | mysql -h clouddb1002.clouddb-services.eqiad1.wikimedia.cloud` both return 710
[13:03:59] hm, good to know
[13:04:21] but I think building my own heartbeat table is a bit too much effort for what I’m doing here ^^
[13:04:30] I’ll just sleep for 1s per 1000 rows deleted and hope that’s good enough
[13:11:54] mlucaswerkmeister: https://grafana-labs.wikimedia.org/d/000000273/tools-mariadb?viewPanel=6&orgId=1&var-dc=Tools%20Prometheus&var-server=clouddb1002.clouddb-services.eqiad.wmflabs&var-port=9104 maybe?
[13:12:04] lucaswerkmeister: ^
[13:13:27] hm, that says lag went from 0 to 24 minutes behind master within one and a half minutes?
[13:14:03] but I’ll keep an eye on that dashboard and kill my script if it seems necessary
[13:14:05] thanks
[13:16:31] !log tools.quickcategories START - purge querytime rows older than 30 days, in batches of 1000 sleeping for 1s between batches
[13:16:34] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.quickcategories/SAL
[14:20:57] o_O the table I’m purging suddenly seems to have some 8 million rows instead of 2.8 million?
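The DIY heartbeat suggested at 12:48:41 could look roughly like the following. This is only a sketch, assuming writes go to the tool's own database on clouddb1001 and reads to clouddb1002; the `heartbeat` table name and columns are made up for illustration. It gives an upper bound on the lag, since the measurement also includes the time between the write on the primary and the read on the replica.

```sql
-- Hypothetical heartbeat table; create it once on the primary (clouddb1001).
CREATE TABLE IF NOT EXISTS heartbeat (
  id INT PRIMARY KEY,
  ts TIMESTAMP(6) NOT NULL
);

-- Refresh the heartbeat on the primary (clouddb1001).
REPLACE INTO heartbeat (id, ts) VALUES (1, NOW(6));

-- On the replica (clouddb1002): seconds since the last heartbeat that made
-- it across, i.e. an upper bound on the current replication lag.
SELECT TIMESTAMPDIFF(MICROSECOND, ts, NOW(6)) / 1e6 AS lag_seconds
FROM heartbeat
WHERE id = 1;
```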
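The batched purge logged at 13:16:31 boils down to repeating a capped DELETE with a pause in between. A sketch of one iteration follows, with `querytime_date` standing in for whatever timestamp column the real querytime table uses (the actual quickcategories schema may differ); the client keeps looping until a batch deletes no more rows.

```sql
-- One batch: delete at most 1000 rows older than 30 days, then sleep for a
-- second so the ToolsDB replica has a chance to catch up. Repeat from the
-- client until ROW_COUNT() reports 0 deleted rows.
DELETE FROM querytime
WHERE querytime_date < NOW() - INTERVAL 30 DAY
LIMIT 1000;

SELECT ROW_COUNT();  -- rows deleted by the DELETE above

DO SLEEP(1);
```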
[14:25:26] or maybe the statistics in information_schema were just very outdated and it actually had all those rows all along…
[14:31:38] yeah, there’s 10 million rows in the local backup I took earlier today, information_schema was just outdated
[14:32:01] I guess that just means this’ll take rather longer than expected, but should still be done before the end of the day
[22:02:09] !log tools.quickcategories END - purge querytime rows older than 30 days, in batches of 1000 sleeping for 1s between batches (deleted 10257486 rows)
[22:02:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.quickcategories/SAL
[22:08:43] !log tools.quickcategories START - optimize querytime table
[22:08:47] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.quickcategories/SAL
[22:08:48] !log tools.quickcategories END - optimize querytime table
[22:08:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.quickcategories/SAL
[22:08:59] ok that didn’t take so long ^^
[22:09:40] but it still reduced the size pretty significantly, yay (though it looks like mysql already reclaimed *some* storage space while rows were being deleted)
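One way to see how much space the final optimize step reclaimed is to compare the table's footprint in information_schema before and after the rebuild; a sketch, with the caveat already seen above that these statistics can be stale.

```sql
-- On-disk footprint of the querytime table; data_free is space the storage
-- engine holds internally but hasn't returned to the filesystem yet.
SELECT ROUND(data_length / 1024 / 1024)  AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb,
       ROUND(data_free / 1024 / 1024)    AS free_mb
FROM information_schema.TABLES
WHERE table_schema = DATABASE()
  AND table_name = 'querytime';

-- Rebuild the table to give the freed space back (for InnoDB tables,
-- OPTIMIZE TABLE is carried out as a full table rebuild).
OPTIMIZE TABLE querytime;
```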