[00:02:23] ori, oh, oof... do you know where I can ask?
[00:27:59] you could ask the wiki owners, alternatively you can make use of the export APIs to download data in XML format. Really depends on what you need
[00:29:52] moonmoon, I just need the wiki to work in xowa
[00:30:03] I have no idea what that is
[00:31:02] It's an offline wiki reader that uses the XML files to generate the wiki
[00:31:11] I have en.wikipedia.org done up like that
[00:36:12] Hawker: if it uses the XML dumps and not the .sql dumps, then script a bot to hit the export API on the target wiki. Chances are they don't provide all of the pages in one convenient dump file, so you'll need to grab and collect them individually
[00:36:51] oh no... can you point to some resources for that please?
[00:38:04] https://www.mediawiki.org/wiki/API:Query for obtaining a page list as well as exporting pages to XML. pywikibot may or may not already have some pre-made scripts to do exactly that, so I'd check there first
[00:44:38] ok thanks man!
[22:18:08] Krinkle: how does it break cross-wiki cache purge?
[22:18:24] Every wiki in the cluster should have the same key
[22:18:47] TimStarling: I ended up with https://gerrit.wikimedia.org/r/c/mediawiki/core/+/754909/10
[22:19:00] But you can't call services that early
[22:19:08] So should a global do?
[22:21:54] RhinosF1: makeGlobalKey() is for data accessed from multiple wikis. For example, if I upload or overwrite a file on Commons, then a global key is touched so that enwiki's cached view of the local file description page is re-generated.
[22:22:05] This doesn't work if some wikis have a different "global" than others.
[22:22:21] global keys are based on wiki farm, not based on CA cluster.
[22:22:35] RhinosF1: I replied on-task, hope that makes more sense :)
[22:23:09] You can have multiple memcached processes on the same server.
[22:26:54] Krinkle: I suppose multi-memcached could work
[22:27:04] And work around not needing new hardware
[22:27:14] But also steal memory
[22:28:31] I don't know if one of them is "deprecated" or if there are plans to migrate/consolidate, but they could have different memory allocations indeed
[22:28:48] it'll work itself out automatically in terms of keeping the most needed data
[22:29:06] I mean nothing on beta is that needed
[22:29:28] right
[22:29:41] well, it could be pretty tiny in that case.
[22:29:41] It's supposed to be part of my plan to stop upgrades to the testwiki from breaking production
[22:29:50] As it did on many occasions
[22:30:01] From incompatible cross-wiki stuff
[22:30:07] And bad caches
[22:30:19] right
[22:30:46] yeah, I'd say separate it that way. That'd be more stable and supported in the long run.
[22:30:52] otherwise you'll keep fighting edge cases
[22:31:51] My plan was to find fewer edge cases
[23:17:06] MatmaRex: could you also take a look at https://gerrit.wikimedia.org/r/c/mediawiki/extensions/AbuseFilter/+/755037?
[23:19:26] yes. i was also going to look for other issues, but i was away for a moment
[23:19:33] oh, it's already +2'd
[23:22:20] and i don't see any more issues in the code
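
Editor's note: a minimal sketch of the export approach suggested at [00:36:12] and [00:38:04], walking list=allpages on the Action API and fetching batches of pages as export XML. The API_URL, batch size of 50, and output file names are assumptions for illustration only; the target wiki may rate-limit requests or cap titles per export, and pywikibot may already ship a ready-made script for this.

import requests

API_URL = "https://example-wiki.org/w/api.php"  # assumption: replace with the target wiki's api.php
session = requests.Session()

def list_all_pages():
    """Yield every page title via list=allpages, following the continue token."""
    params = {"action": "query", "list": "allpages", "aplimit": "max", "format": "json"}
    while True:
        data = session.get(API_URL, params=params).json()
        for page in data["query"]["allpages"]:
            yield page["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # carry apcontinue into the next request

def export_batch(titles, path):
    """Write one batch of pages as raw <mediawiki> export XML (the dump format)."""
    resp = session.get(API_URL, params={
        "action": "query",
        "export": 1,
        "exportnowrap": 1,   # return the XML directly rather than wrapped in JSON
        "titles": "|".join(titles),
        "format": "json",
    })
    with open(path, "w", encoding="utf-8") as f:
        f.write(resp.text)

if __name__ == "__main__":
    batch, n = [], 0
    for title in list_all_pages():
        batch.append(title)
        if len(batch) == 50:             # stay under per-request title limits
            export_batch(batch, f"export-{n:05d}.xml")
            batch, n = [], n + 1
    if batch:
        export_batch(batch, f"export-{n:05d}.xml")

Each batch goes to its own file because concatenating several <mediawiki> documents would not be well-formed XML; merge or import the files in whatever way the offline reader expects.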
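
Editor's note: the makeGlobalKey() point at [22:21:54]–[22:22:21] boils down to this: a "global" key only invalidates cross-wiki data if every wiki in the farm derives exactly the same key. The sketch below is a conceptual Python analogy, not MediaWiki's actual PHP WANObjectCache API; the function names and key layout are invented for illustration.

def make_key(wiki_id: str, collection: str, *components: str) -> str:
    """Per-wiki key: each wiki caches and purges its own copy of the value."""
    return ":".join([wiki_id, collection, *components])

def make_global_key(collection: str, *components: str) -> str:
    """Farm-wide key: every wiki must derive the identical key, or a purge
    issued on one wiki cannot invalidate the others' cached data."""
    return ":".join(["global", collection, *components])

# Example mirroring the chat: overwriting a file on Commons touches the global
# key, so enwiki's cached view of its local file description page regenerates.
purge_key = make_global_key("filerepo", "file", "Example.jpg")
assert purge_key == make_global_key("filerepo", "file", "Example.jpg")  # identical on every wiki

# If some wikis (e.g. a beta cluster) computed a different "global" prefix than
# production, the two sides would purge different keys and cross-wiki cache
# invalidation would silently break — hence the suggestion to give beta its own
# separate cache entirely.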