[10:01:36] lunch
[12:39:21] dcausse: would you have a minute to discuss T361935 when you are back?
[12:39:22] T361935: Adapt the WDQS Streaming Updater to update multiple WDQS subgraphs - https://phabricator.wikimedia.org/T361935
[12:40:06] pfischer: sure, I'm around
[12:41:18] dcausse: https://meet.google.com/kfc-jayd-tun?authuser=0
[12:57:09] Hello. Could someone look into something for me please? Does discovery-analytics still need to be deployed with scap to a specific stat host, or can I ship this change as-is? https://gerrit.wikimedia.org/r/c/operations/puppet/+/1038288
[12:58:46] dcausse if you have time today, I might need some help with the maxlag unit tests per https://gerrit.wikimedia.org/r/1037602 . Having trouble getting them to trigger
[13:04:42] inflatador: sure
[13:06:34] inflatador: you mean https://gerrit.wikimedia.org/r/c/operations/alerts/+/1037850 ?
[13:09:21] o/
[13:16:42] dcausse oops! Yes that's the one
[13:18:35] inflatador: I have 30 mins now if you want to look at this
[13:22:01] dcausse cool, I have a hard stop at the same time but I'm up at https://meet.google.com/iqf-wsop-qab
[13:45:21] dropping off my son, back in ~15
[14:04:52] back
[14:38:10] https://gerrit.wikimedia.org/r/c/operations/alerts/+/1037850 is ready for review after d-causse's help
[14:46:57] \o
[14:47:18] eqiad and codfw claim to be done reindexing. Although i guess i gotta run the private wikis the old way
[14:48:12] o/
[14:55:00] took 689 backfills in eqiad, 637 in codfw. For 987 wikis that suggests batching didn't really make a huge difference
[14:55:28] i guess what made the difference was running smaller wikis in parallel, and running backfill and reindexing in parallel?
[14:57:54] took 5.5 days for eqiad, 3.5 for codfw (the retry on commonswiki is basically the whole extra time)
[15:04:01] 3.5 days is nice!
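A quick back-of-envelope on the backfill counts mentioned above (a sketch; the numbers 689, 637, and 987 are taken directly from the log, nothing else is assumed):

```python
# Back-of-envelope check of the batching observation from the log:
# 689 backfills in eqiad, 637 in codfw, for 987 wikis total.
wikis = 987
backfills = {"eqiad": 689, "codfw": 637}

for dc, count in backfills.items():
    # If batching had grouped many wikis into one backfill run, this
    # ratio would be well below 1.0; it stays around 0.65-0.70, which
    # is why the log concludes batching didn't make a huge difference.
    print(f"{dc}: {count / wikis:.2f} backfills per wiki")
```

This supports the reading in the log that the speedup came from parallelism (small wikis in parallel, backfill overlapping reindexing) rather than from batching itself.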
[15:11:20] indeed, rolling back a number of years in how long it took
[15:46:25] now to finish the per-index refactor...it's almost there
[16:06:01] if someone has time: https://gerrit.wikimedia.org/r/c/wikidata/query/rdf/+/1037825 (would like to finish small details about how rdf dumps are generated before testing cookbooks) (cc dr0ptp4kt)
[16:31:07] * ebernhardson is finding python generic collection typing quite tedious compared to java.util.Collection
[16:31:30] might as well just accept List or Set as appropriate and keep it simple
[16:41:55] it has Iterable IIRC?
[16:42:11] i was, but you can't len an iterable
[16:42:36] which is only in this context for logging the size, but it turns out there is a typing.Collection which is what i wanted. Not sure how i've never seen it before
[16:44:12] I love how Cirrus is telling me that my config is wrong: "Error: Typed property MediaWiki\Extension\Elastica\ElasticaConnection::$client must not be accessed before initialization"
[16:45:41] lol, sadly yes.
[17:04:40] dinner
[18:01:13] lunch, back in ~40
[18:44:44] they're decomming stat1007, do we need to update discovery-analytics scap to use a different host or is it OK to just remove the config completely? ref: https://gerrit.wikimedia.org/r/c/operations/puppet/+/1038288
[18:47:02] inflatador: that is probably something we failed to clean up from the pre-airflow days. I can't think of anything we deploy there
[18:47:45] ebernhardson cool, just wanted to make sure. I'll go ahead and +2/merge
[18:55:42] dcausse: where are we at with the current patchset of the data-reload cookbook https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/1031933 ? is it in a good place for me to try out a data reload with it?
[19:01:06] ebernhardson: 1:1?
[19:01:23] gehel: 1 sec
[20:47:06] picking up my son...back in ~15
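On the typing.Collection point discussed above: `Collection` combines `Sized`, `Iterable`, and `Container`, so `len()` is valid on it, whereas a plain `Iterable` carries no `__len__` guarantee. A minimal sketch (the `log_size` helper name is hypothetical, for illustration only):

```python
from typing import Collection

def log_size(items: Collection[str]) -> int:
    # Hypothetical helper: Collection guarantees __len__, __iter__,
    # and __contains__, so len() type-checks here. Annotating the
    # parameter as Iterable[str] instead would reject len(items).
    return len(items)

# Accepts any sized container without committing to List or Set:
print(log_size(["a", "b", "c"]))  # 3
print(log_size({"x", "y"}))       # 2
```

This is why `Collection` fits the use case in the log (logging the size) better than narrowing the signature to `List` or `Set`.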