[07:09:14] mpham: I closed the ticket regarding scary transclusion
[07:12:11] mpham: regarding Glent, AFAIK Erik did refactor the instrumentation code to fix the A/B test issues, but no new tests have been scheduled or started yet
[10:11:02] dcausse: what is the plan with those two: https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/704389 and the next one in the chain?
[10:12:27] the plan is to finish the switch-over: when we moved from eqiad to codfw we kept morelike queries served from eqiad, because the cirrus cache disappears during the switch-over
[10:13:19] since we want to make sure that we can serve 100% of the search traffic from one DC, we need to serve morelike from codfw now
[10:14:17] it's done in 2 steps, enwiki first and then the rest, to avoid overloading codfw too much, as again the cirrus cache will be invalidated IIRC
[10:15:07] or not, I can't remember if the target cluster is part of the cache key
[10:22:49] ok, I understand, so the second in the chain should be merged after some time
[10:23:31] yes
[10:24:11] break until 2:30 UTC
[10:27:20] lunch
[12:55:50] thanks dcausse!
[13:27:32] zpapierski: interview
[14:50:32] dcausse: catching up on the #wikimedia-operations context, I gather that the readahead mitigation wasn't being applied on 2054 / 2045?
[14:50:59] it should be running on all the hosts (except maybe cloudelastic), so that would surprise me
[14:51:20] ryankemper: yes, just realised that there's a cron running
[14:51:26] so no clue what happened :/
[14:53:11] yeah, I'll need to take a look
[16:35:17] https://gerrit.wikimedia.org/r/c/operations/puppet/+/704567 This should fix the issue with the readahead mitigation not working. On all hosts the timer ran once when we deployed the mitigation - July 2 - and then hasn't run since (it was supposed to run every 30 minutes).
[16:35:36] TL;DR: had to switch `OnActiveSec` to `OnUnitActiveSec` to get the behavior we expected
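
[editor's note] A minimal sketch of the OnActiveSec vs OnUnitActiveSec behaviour described in the last two messages. The real change is in the puppet patch linked at 16:35:17; the unit name and description below are illustrative, not the actual ones from that patch.

  # readahead-mitigation.timer (illustrative name; a matching .service unit is assumed)
  [Unit]
  Description=Periodically re-apply the readahead mitigation

  [Timer]
  # Broken variant: OnActiveSec= fires once, 30 minutes after the timer unit
  # itself is activated, and never re-arms -- matching the "ran once on July 2
  # and then never again" symptom.
  #OnActiveSec=30min

  # Fixed variant: OnUnitActiveSec= schedules each run relative to the last
  # activation of the service the timer triggers, so it keeps firing roughly
  # every 30 minutes.
  OnUnitActiveSec=30min

  [Install]
  WantedBy=timers.target

In general OnUnitActiveSec= is often paired with OnActiveSec= or OnBootSec= so the first run after a (re)start happens promptly; here the service had already run once at deploy time, so switching the directive alone restores the periodic behaviour.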