[10:24:48] Let me check
[18:24:39] Reedy: could use a second opinion on https://phabricator.wikimedia.org/T48098#8602507 - am I seeing right that this config does nothing to whether and how frequently pages are updated?
[18:29:36] MiserMode mostly seems to affect API queries
[19:06:42] Reedy: by "this config" I meant wgDisableQueryPageUpdate, not wgMiserMode.
[19:11:53] I guess, technically it does, but not by the variable in itself...
[19:12:16] We run updateSpecialPages at various different times, with various different configs
[19:14:36] Reedy: right, but puppet seems to be running all non-enwiki jobs the same way (half-monthly). There isn't one where I found something being updated more often, or e.g. allowed to run on-demand, or with no caching, or in any other way less than half-monthly.
[19:14:54] so the fact is that `small => []` seems to just hide the interface message that talks about half-monthly
[19:15:03] and in puppet we don't mirror that, do we?
[19:15:54] I mean, what would the alternative to half-monthly even be, given it's marked isExpensive, and with wgMiserMode that means serve from cache or do nothing.
[19:19:30] >Disable all query pages if $wgMiserMode is on, not just some.
[19:22:35] that's wgDisableQueryPages
[19:22:42] yeah
[19:22:43] maybe that's worth setting differently for small/medium
[19:22:46] we don't now
[19:23:04] but we already don't use that in prod
[19:23:06] I think there's an assumption (and I've no idea if it's true) that if the wiki is "small enough", the query should/could basically be generated at runtime (with some caching at that point)
[19:23:08] so I guess that's the same
[19:23:24] vs the ones that take an age, so are run in batch via CLI
[19:23:31] yeah..
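For readers following along: the three settings being juggled above do different things. A rough sketch of how they might appear in a MediaWiki config file (values and page names are illustrative, not the actual WMF configuration):

```php
// Illustrative sketch only -- not the real wmf-config values.
$wgMiserMode = true;            // "expensive" query pages serve from cache, or nothing
$wgDisableQueryPages = false;   // if true, disables ALL query pages when miser mode is on
$wgDisableQueryPageUpdate = [   // query pages that updateSpecialPages.php should skip
    'Deadendpages',             // hypothetical examples of expensive query pages
    'Lonelypages',
];
```

So wgDisableQueryPageUpdate on its own only tells the maintenance script to skip a page; whether anything actually updates still depends on when (and how) updateSpecialPages is run.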
[19:23:47] So there might be somewhat of a disconnect between reality and expectation
[19:23:57] Which I guess is kinda the question you're actually asking
[19:24:21] It's also possible this edge case was broken/fixed at some point
[19:24:33] Right, is there anything at all today that's different about special pages on large vs small/medium wikis in prod, with regards to whether or how long even one special page is cached for?
[19:24:57] it appears the answer is no, but I'm not sure.
[19:25:44] i.e. if we remove:
[19:25:45] 'small' => [], // T45668
[19:25:45] 'medium' => [], // T48094
[19:25:46] T45668: Re-enable disabled Special pages on small wikis (wikis in small.dblist) - https://phabricator.wikimedia.org/T45668
[19:25:48] T48094: Re-enable disabled Special pages on medium wikis (wikis in medium.dblist) - https://phabricator.wikimedia.org/T48094
[19:26:01] will that change anything other than make a (truthful) interface message re-appear?
[19:26:04] you do wonder if the placebo effect is in play
[19:26:14] we remove the message, so people think they're updated more frequently than they are
[19:26:18] xD
[19:26:30] But then again... With our communities, if that was the case, and they weren't being updated... They'd notice, eventually
[19:26:32] Back in ~30 mins
[19:26:44] you'd think so, yes.
[19:26:57] the interface does accurately state when the data was last updated and that it was cached
[19:27:09] it's just (potentially) lying about when it will be updated next
[19:28:50] If we're talking "nice", I'd say it'd be nice if this wasn't managed in puppet but within MW. E.g. "just" run the same maintenance script on all wikis and let the runtime decide what, if anything, needs updating. Run it every hour in a loop, and anything fresh enough will no-op. If the previous run is still going, systemd will ensure it doesn't start another. Move it all out of puppet. Then MW could more accurately state what will happen when. And we could tweak it more easily for small wikis.
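The "run everywhere hourly, let MW decide" idea could be sketched as a systemd timer plus a oneshot service. Unit names and the script path are made up for illustration; `foreachwiki` is assumed here to be a wrapper that runs a maintenance script on every wiki:

```ini
# mw-update-special-pages.timer (hypothetical)
[Unit]
Description=Hourly updateSpecialPages run across all wikis

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

# mw-update-special-pages.service (hypothetical)
[Unit]
Description=Run updateSpecialPages on all wikis; fresh pages no-op

[Service]
Type=oneshot
ExecStart=/usr/local/bin/foreachwiki updateSpecialPages.php
```

A timer will not re-activate a oneshot service that is still running, which gives the "don't start another if the previous one is still going" behaviour without any extra lock files.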