[03:33:38] Is it possible for CreateWiki to recognise language variants? e.g. domain.com for English and domain.com/fr for French etc., which would operate how Fandom operates? I tried it by changing the domain to domain.com/fr, but it just redirects to a 404
[03:36:15] You would need to change how the hostname is extracted from the URL
[03:37:50] interesting,
[03:38:10] If you're using WikiInitialize, you'd probably need to make changes around about here
[03:38:37] I'm using the default setup, same as MH, but I'll dive more into this
[03:39:41] I've previously got a setup working where it was localhost//w/index.php, but it required some custom extraction and rewrite rules.
[03:41:06] It would probably be easier to use a subdomain
[03:42:25] yeah, but it's nice to have all language variants on one domain
[03:45:17] It's an interesting challenge though, and one I think is completely possible.
[03:47:42] do you know off the top of your head where the database is detected when not using WikiInit? I presume it's MHFunctions
[03:48:13] https://github.com/miraheze/mw-config/blob/master/initialise/MirahezeFunctions.php#L294 seems to be about here
[14:09:43] Experiencing 502/503 errors trying to access wiki requests
[14:20:22] it's resolved
[14:20:30] seemingly
[15:16:20] @paladox how is the rollout going?
[15:16:26] I see a change for commonswiki
[15:16:47] job queue is backing up
[15:16:57] there's https://github.com/wikimedia/mediawiki/commit/b424671726a6a3e42dbcb9ad15652f9f1baded57
[15:17:10] but it seems to require https://github.com/wikimedia/mediawiki/commit/8108767e16ea9ddab7a5a785a6fff07cf33a9372
[15:17:39] Okay
[15:18:27] @paladox the rise in jobs coincides with https://github.com/miraheze/mw-config/compare/d677be6b989e...9865f2f3d3d7
[15:45:09] that's unrelated
[15:45:11] [1/2] > > var_dump($enableWarmup);
[15:45:11] [2/2] > bool(false)
[15:45:17] for a wiki that is s
[15:53:16] we're at 34k jobs
[16:03:51] That's not good
[16:04:06] It's going to be about 2 hours before I can look
[16:10:51] This is just not sustainable. We can't have this tbh. We've tried everything we can. We don't have the resources to expand. Running the job is also slow.
[16:11:05] i've deployed the deduplicate thing above
[16:11:08] it's not working
[16:11:19] the backlog is just growing and growing
[16:11:19] Job levels should decrease once it's been done the first time
[16:11:28] But the jump at 12 doesn't make any sense
[16:11:35] It was near 0 before that
[16:11:50] We can raise it upstream though for advice
[16:12:11] i restarted redis and we're already back at 2k+ jobs
[16:12:15] and growing very quickly
[16:13:01] I have no idea why this is performing so shit
[16:13:25] But we're going to struggle with rolling Parsoid out for reads if we can't even get content in cache
[16:14:43] @paladox do you know how long the job is taking to run?
[16:15:02] some are fast, others are just so slow. Like really, really slow
[16:15:29] Are these jobs one-time things or do they have to constantly be rerun?
[16:16:07] @pixldev every edit and first read in the last 10 days
[16:16:28] Jesus Christ
[16:16:41] The really slow ones are worth investigating in #mediawiki-parsoid
[16:16:46] So it's a constant issue
[16:17:02] Because stashing content in the parser cache should take seconds
[16:17:31] And these are taking hours?
[16:18:06] I doubt hours
[16:18:15] I'm pretty sure PHP would have long killed them
[16:18:28] But tens of times longer than they should
[16:18:33] @paladox do we know what really really slow is
[16:18:58] no? Just the job takes ages to complete for some wikis.
[16:19:08] and i mean per job, not in its entirety.
[16:19:20] @paladox eh, but what is "ages"? In seconds?
[16:19:25] Could it be like lots of nested transclusions and functions on some pages?
[16:19:28] i don't know
[16:19:29] Like a minute? 10?
[16:19:50] Quite possibly but it shouldn't happen
[16:20:20] Hm
[16:20:29] I've asked the people who developed it what ideas they have
[16:22:11] Smart
[16:22:55] The wikis it stalls on are primarily the large ones right? Seems proportional to size
[16:26:21] ok yeah, it just gets stuck, waits a long time and then completes.
[16:26:25] that's not sustainable
[16:26:30] think we won't be able to use this
[16:27:37] @paladox we will very much struggle with the rollout of 1.42 onwards if we don't
[16:27:51] well there's not much we can do.
[16:27:56] If you know what it is getting stuck on, that would be good @paladox
[16:28:00] we don't have the resources and it's just a clusterfuck
[16:28:05] we've tried everything
[16:28:08] @paladox you're going to have to debug why it's broken
[16:28:10] it's just not working
[16:28:15] And help me relay it upstream
[16:28:51] i don't know how to debug it.
[16:29:17] [1/2] maybe Cook knows?
[16:29:17] [2/2] although RS wikis are on 1.39 rn
[16:29:20] @paladox is there anything we can use to see what the process is doing when it gets stuck? The equivalent of a profile but for jobs?
[16:29:38] not that i'm aware of
[16:29:52] Can you ask in #mediawiki-parsoid @paladox
[16:30:02] I've already pinged subbu
[16:30:05] you've already asked.
[16:30:15] @paladox not specifically about profiling
[16:30:25] Say you suspect the jobs are stalling but aren't sure why
[16:33:40] I'm off to prepare dinner
[16:41:35] @paladox subbu asked for a task, can you create one as you've actually got the access to debug
[16:42:56] Maybe @orduin has some ideas
[16:43:09] I don't have a log tho
[16:55:23] filed https://phabricator.wikimedia.org/T350600
[19:40:46] @paladox @orduin when one of you has the time there are numerous requests (https://phabricator.miraheze.org/project/view/12/) that require DB server access (mostly wiki renames, but some drop and recreate as well)
[19:51:05] Might help for debugging, but I'm seeing a few thousand messages in Graylog with "Parsoid does not support content model " and "The ID of the new revision does not match the page's current revision ID"
[19:51:22] Won't account for all the excess jobs we've been having but may be somewhat related
[20:18:13] Graylog?
[20:23:34] @pixldev centralized location where all error logs are sent to make debugging easier (otherwise you're checking logs on multiple individual servers)
[20:24:02] https://meta.miraheze.org/wiki/Tech:Graylog
[20:24:16] Ah thanks Max
[20:37:05] hmm, ImportImages seemingly cannot handle having spaces in a file path
[20:37:27] also mwscript isn't running via run.php
[20:47:10] Isn't it supposed to?
[20:47:28] So that's why the command was being run so much..
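For the spaces-in-path problem reported at 20:37:05, one common culprit is shell quoting when a wrapper assembles the command line. The PHP sketch below is purely illustrative and is not Miraheze's actual mwscript wrapper; the script path, wiki ID, and image directory are made-up examples. It just shows how escapeshellarg() keeps a path with spaces intact as a single argument:

```php
<?php
// Illustrative only: build a maintenance-script command line so that a file
// path containing spaces is passed as a single argument. The script path,
// wiki ID and image directory below are hypothetical, not real Miraheze values.
$script = '/srv/mediawiki/w/maintenance/importImages.php';
$wiki   = 'examplewiki';
$dir    = '/home/example/My Wiki Images'; // note the spaces

$cmd = implode( ' ', [
	'sudo -u www-data php',
	escapeshellarg( $script ),
	'--wiki=' . escapeshellarg( $wiki ),
	escapeshellarg( $dir ),
] );

// Without escapeshellarg(), the shell would split "My Wiki Images" into three
// separate arguments and the import script would look in a directory that
// does not exist, finding no images.
echo $cmd . "\n";
```

Renaming the directory to remove the space, as done in the next message, sidesteps the same problem without touching the wrapper.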
[20:51:41] Yeah, it kept not finding any images until I renamed the directory to not have a space
[20:53:39] If there is not a PR to make that default, make one
[20:53:44] I'm pretty sure it has the option
[20:55:47] It is behind --140 / --use-runner as a parameter
[20:56:28] Either force args.runner to true at the top of the run method or wait for me to do an actual PR if you want a cleanup
[20:56:33] The heck does run.php do anyway? I've checked the MW page but it just says it's used for all scripts, not why
[20:57:19] It's a way to run maintenance scripts built by paratroop
[20:57:39] That was a ridiculous autocorrect for IRC people
[20:57:57] What was wrong with just running the script itself?
[20:58:02] I can find the email from ladsgroup as to why
[20:59:11] I always forget mailing lists exist, I mainly use chats like Discord and talk pages. But thank you Rhinos ^^
[20:59:50] Not ladsgroup but Daniel
[20:59:53] https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists.wikimedia.org/message/UFANXHGOD3FYEGQQFHECSO55UMXX4AMD/
[21:00:11] I'm too used to ladsgroup breaking stuff
[21:00:22] @pixldev Miraheze has its own wrapper too
[21:00:32] But our wrapper does the old way by default
[21:01:03] It does some magic around making the commands shorter and snappier
[21:01:34] Hmm, in my experience run.php may be incompatible with maintenance scripts that call getArg with a numeric parameter. E.g. with `run.php someMaintScript someArg`, the call `getArg(0)` in someMaintScript appears to return `someMaintScript` instead of `someArg` as expected.
[21:01:41] Mainly because a certain former sysadmin is lazy, so wrote a script to shorten stuff by a few words
[21:01:59] @orduin I will be including a use-the-old-way option
[21:02:26] If you have examples, please file bugs though
[21:02:47] Yeah, I'll confirm this issue, and file a bug if I can
[21:03:59] So a version of run.php that works closer to the original method of directly running scripts
[21:04:03] <:ThinkerMH:912930078646730792>
[21:04:16] It's nothing like run.php, no
[21:04:27] It just shortens the command
[21:04:47] Ah mb
[21:05:20] So not lazy, efficient
[21:05:42] So instead of typing 'sudo -u www-data /srv/mediawiki/w/maintenance/test.php --wiki=examplewiki' it became 'mwscript test.php examplewiki'
[21:06:38] There is extra magic for if it's an extension script or if you want to run on all wikis / all wikis in a dblist (it can generate the dblist for extensions/skins)
[21:07:06] The certain former sysadmin is me so I'm calling myself lazy 🙂
[21:07:31] Mwscript & deploy-mediawiki were the children of my SRE automation work
[21:08:21] Don't think I've seen deploy-mediawiki, will prob stumble across it considering my idea of fun is looking at the Miraheze GitHub and infrastructure
[21:08:26] <:nomChocoStrawberry:938647184973365318>
[21:08:42] Deploy-mediawiki completely overhauled the way we deploy
[21:09:01] It changed from having to run every command on each server
[21:09:14] to having to run 1 command on 1 server
[21:09:39] And a single deployment instance pushed it out to the rest
[21:09:56] Efficiency is laziness' more productive cousin
[21:10:07] so like how puppet manages configs
[21:10:26] It's not a config management tool
[21:10:46] It's basically a wrapper around git, rsync & a few maintenance scripts
[21:10:59] It's also completely stateless
[21:11:59] I meant in the sense of being able to do something on a lot of servers from one server
[21:14:00] my brain 404s with single-server MediaWiki so comprehending infrastructure like Miraheze's is a pipe dream :p but hey I try
[21:15:03] Not really, Puppet deploys a manifest of stuff to each server. You could have 10 servers with all different stuff just sharing some common config.
[21:15:38] The deploy tool can only put what's on the deployment machine onto all the VMs
[21:15:53] Its current code can't even deploy only certain stuff to certain machines
[21:16:23] With Puppet you don't need the actual files for everything to be on the puppetmaster
[21:18:48] @pixldev
[21:18:56] <:ThinkerMH:912930078646730792>
[21:19:01] my brain expands
[21:19:13] muchas gracias
[21:19:48] ty rhinos you are very cool
[21:20:04] Eh, I'm just a weird tech person
[21:20:18] We all have areas we understand
[21:20:19] Aren't we all
[21:20:35] but hey, those are the people who keep the internet as we know it running
[21:20:59] true, it's all a matter of what you've worked with and how much
[21:21:05] I might know a lot about software but as I said to my new starters today, the work they are doing is electronics so it means nothing to me
[21:21:33] I still have my experts I ask when I'm baffled by stuff
[23:33:11] MacFan is on the import grind dang
[23:51:18] What does "import grind dang" mean?
[23:54:40] I just mean he's doing a lot of wiki imports, sorry
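To make the run.php behaviour reported at 21:01:34 easier to reproduce and attach to a bug, here is a minimal MediaWiki maintenance-script sketch. The class name SomeMaintScript and argument name someArg are hypothetical, taken only from the example given in that message; only the addArg()/getArg(0) pattern is the part being exercised:

```php
<?php
// Minimal sketch of a maintenance script that reads a positional argument via
// getArg(). The class and argument names are hypothetical examples; only the
// getArg( 0 ) call matches the behaviour reported at 21:01:34.
$IP = getenv( 'MW_INSTALL_PATH' );
if ( $IP === false ) {
	$IP = __DIR__ . '/../../..';
}
require_once "$IP/maintenance/Maintenance.php";

class SomeMaintScript extends Maintenance {
	public function __construct() {
		parent::__construct();
		$this->addDescription( 'Prints its first positional argument.' );
		$this->addArg( 'someArg', 'A positional argument', true );
	}

	public function execute() {
		// Expected output: the value passed for someArg. The report above is
		// that under `run.php someMaintScript someArg` this printed the
		// script name rather than someArg.
		$this->output( $this->getArg( 0 ) . "\n" );
	}
}

$maintClass = SomeMaintScript::class;
require_once RUN_MAINTENANCE_IF_MAIN;
```

Comparing the output of invoking the script directly with invoking it through run.php would confirm the off-by-one in positional arguments described above and give something concrete to file.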