[08:13:33] errand, back in a few
[10:10:59] Lunch
[10:36:13] lunch
[10:37:09] Likely missing triage this morning, just finished getting tortured by a buggy smoke alarm for the last few hours :x
[12:03:49] ouch. I hope you still got some sleep!
[12:04:04] * gehel wonders why smoke alarms are not that much of a thing in Switzerland...
[12:05:59] ryankemper (for when you're back): I might have to cancel our 1:1 this evening. Evening is already busy with meetings, and kids have not fully re-adapted to CEST...
[12:53:39] greetings
[13:42:15] o/
[13:50:09] o/
[15:03:44] ejoseph: triage meeting: https://meet.google.com/eki-rafx-cxi
[16:00:32] re: reindexing all wikis, i can kick it off this week but highly likely it fails wikidata/commonswiki/maybe others as instances are restarted for the bullseye upgrade. I guess my overall idea is run the bulk of it, then try and schedule the remaining ones w/SRE for some time period where we don't restart any instances
[16:11:12] reindexing is to finally resolve the elasticsearch transition to all indexes having a mapping type of _doc
[16:11:47] (which they started years ago, but we couldn't actually create indices with _doc as the mapping type until 6.8)
[16:14:26] hope we can't stop talking about elastic types after this :)
[16:15:47] i also wonder if we could reindex commonswiki and wikidata in parallel. We always ran them one at a time, maybe should continue. I dunno.
[16:16:11] indeed, hopefully we can finally stop thinking about this oddity :)
[16:19:26] we could try, I think most of the time the failure was due to the underlying scroll request being killed by a node restart, I don't it failed because of resources constraints (putting aside cloudelastic)
[16:19:54] s/I don't it failed/I don't *think* it failed/
[16:20:12] yea i can't remember reindexing ever putting a particularly noticable load on the clusters, it just takes time.
[16:20:44] probably can't hurt, and can hopefully only stop us from restarting servers for a couple days instead of a week+
[16:20:56] makes sense
[16:34:15] mpham: is there an appropriate phab ticket i can tag on the patch to undeploy ApiFeatureUsage?
[16:37:29] heh, apifeatureusage is only mentioned in 25 files across the puppet repository (maybe some are false positives). What could go wrong? :P
[16:39:18] :)
[17:00:00] going to lunch, back in ~1 hr
[17:02:33] \o
[17:02:50] gehel: ack RE 1:1
[18:57:36] sorry, been back
[18:57:50] But now have to go to doctor appt, back in ~1hr
[19:12:39] ebernhardson: sorry, just saw your message. but saw that you created a new ticket as well. do you still need me to track something down? (I think the new ticket is fine)
[19:18:05] mpham: i think we're good, i thought there might have been something that i just wasn't finding
[19:26:00] apparently aws goes as far as 24TB of memory in a single instance these days, with a price of "call us". 6TB is ~400k per year, how bad could 4TB be? :)
[19:26:31] 2/4TB/24TB/
[19:27:05] anyways, lunch time
[19:57:00] back
[20:00:11] back
[20:33:52] ebernhardson as long as it's opex instead of capex, spend all the $$$ you want!
[20:34:57] (or at least, that seems to be the company line for spending $$$ to spend more $$$ in the cloud)
[20:54:31] sounds like a startup idea, capex to opex as a service
[20:54:42] /s
[20:58:37] started reindexing for all wikis except commonswiki/wikidatawiki in eqiad and codfw
[20:58:50] ACK
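
The reindexing discussed around 16:11 is about finishing Elasticsearch's move away from custom mapping types to the single _doc type, which could not be used at index-creation time before 6.8. Below is a minimal sketch of that shape of operation against the cluster's REST API, assuming a local 6.8 cluster and placeholder index names; the actual CirrusSearch tooling drives this through its own maintenance scripts, not this code.

```python
# Sketch only: recreate an index with `_doc` as its single mapping type, then
# copy documents across with the Reindex API. Cluster URL and index names are
# placeholders, not the real production values.
import requests

ES = "http://localhost:9200"  # assumed local Elasticsearch 6.8 cluster

# 1. Create the new index with `_doc` as its only mapping type.
requests.put(
    f"{ES}/enwiki_content_new",
    json={
        "mappings": {
            "_doc": {  # the mapping type the 16:11 messages refer to
                "properties": {
                    "title": {"type": "text"},
                    "text": {"type": "text"},
                }
            }
        }
    },
).raise_for_status()

# 2. Copy everything from the old index (which may still use a legacy mapping
#    type) into the new one. Internally this runs a scroll over the source
#    index, which is why a node restart mid-reindex can kill the job, as noted
#    at 16:19.
requests.post(
    f"{ES}/_reindex?wait_for_completion=false",
    json={
        "source": {"index": "enwiki_content"},
        "dest": {"index": "enwiki_content_new", "type": "_doc"},
    },
).raise_for_status()
```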
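On the 16:15 question of reindexing commonswiki and wikidatawiki in parallel rather than one at a time: a rough sketch of driving two per-wiki reindexes concurrently from a client. The mwscript/UpdateSearchIndexConfig.php command line and its flags are assumptions for illustration, not a confirmed invocation.

```python
# Sketch: kick off per-wiki reindexes concurrently instead of serially.
# The command below is a hypothetical placeholder; substitute whatever
# actually drives a CirrusSearch reindex in your environment.
import subprocess
from concurrent.futures import ThreadPoolExecutor

WIKIS = ["commonswiki", "wikidatawiki"]

def reindex(wiki: str) -> int:
    # Hypothetical invocation; the real script name and flags may differ.
    cmd = [
        "mwscript",
        "extensions/CirrusSearch/maintenance/UpdateSearchIndexConfig.php",
        f"--wiki={wiki}",
        "--reindexAndRemoveOk",
        "--indexIdentifier", "now",
    ]
    return subprocess.run(cmd).returncode

# Each reindex is mostly cluster-side work, so client-side concurrency is cheap;
# the open question in the chat is whether the cluster tolerates both at once.
with ThreadPoolExecutor(max_workers=len(WIKIS)) as pool:
    results = dict(zip(WIKIS, pool.map(reindex, WIKIS)))

print(results)
```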