[05:07:09] Hi there, I would like to know if it's safe to use Expires headers for JS and CSS files?
[05:10:13] Hi there, I'm running MediaWiki 1.37.2 and some pages are giving an LCP issue (7.0), so I would like to know if it's safe to use Expires headers for JS and CSS files?
[05:25:26] Can someone please answer?
[08:40:28] Hi, I just ran showJobs.php and it's showing some jobs, including "cirrusSearchElasticaWrite: 902648 queued; 1 claimed (1 active, 0 abandoned); 0 delayed", and I want to know what that means. Is it a problem that the number of queued jobs is so large? Thank you
[08:43:17] Guest54: jobs in cirrusSearchElasticaWrite are waiting to be indexed in elasticsearch, is the backlog decreasing or increasing?
[08:44:12] 902648 pending jobs seems quite big even for a large wiki
[08:45:33] I'm not sure if they are decreasing or increasing because I just checked today.
[08:46:09] Guest54: I'd check the logs of the php application servers running the jobs
[08:46:56] I have around 300000 pages on my site, so could that be why it has 902648 pending jobs?
[08:47:31] no I don't think that's normal
[08:47:35] And is it harmful to have this huge list of pending jobs?
[08:48:21] well it probably means that your index is out of date, and if the backlog is increasing indefinitely then something will blow up at some point
[08:48:53] Ok so what is the solution to fix this list?
[08:49:38] you have to track down the root cause, so first I'd check the logs
[08:50:27] Ok so which log should I check to identify the root cause?
[08:50:41] Sorry, I've never faced this issue before so I'm unaware of it.
[08:50:55] the mediawiki logs of the application servers running the jobs
[08:51:07] it all depends on how you've set up mediawiki
[08:52:50] I'd also check the kind of jobqueue you've set up (https://www.mediawiki.org/wiki/Manual:Job_queue)
[08:52:57] Ok I'll try to figure it out. I'm actually running it on OpenLiteSpeed and it's a bit complicated.
[08:53:07] Thank you for the link.
[08:55:50] I have no settings related to the jobqueue in my LocalSettings.php
[08:57:26] And the number has increased to 902737 while I've been chatting here.
[08:57:44] I'm concerned about this issue now. :|
[08:58:06] Guest54: how are your jobs run then?
[08:58:24] is cirrusSearchElasticaWrite the only one with a backlog?
[08:58:45] Yes, cirrusSearchElasticaWrite is the only one with a backlog so far.
[08:59:00] cirrusSearchIncomingLinkCount: 0 queued; 1 claimed (1 active, 0 abandoned); 0 delayed
[08:59:00] cirrusSearchLinksUpdate: 0 queued; 3 claimed (3 active, 0 abandoned); 0 delayed
[08:59:01] cirrusSearchElasticaWrite: 902737 queued; 1 claimed (1 active, 0 abandoned); 0 delayed
[08:59:01] refreshLinks: 713 queued; 35 claimed (21 active, 14 abandoned); 0 delayed
[08:59:02] refreshLinksDynamic: 0 queued; 7 claimed (5 active, 2 abandoned); 0 delayed
[08:59:51] I have no idea how these jobs are running because I have no setup in LocalSettings.
[09:00:14] Do you think I should add something to LocalSettings under $wgJobRunRate?
[09:00:39] no I don't think so
[09:01:07] if cirrusSearchElasticaWrite is the only one causing trouble it means that your jobs are run somehow
[09:01:26] so I'd check the mediawiki logs and the status of elasticsearch
[09:02:01] I tried to look at the logs but can't find anything related to elasticsearch; maybe I'm looking at the wrong log file.
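A minimal LocalSettings.php sketch of the kind of logging that would make the CirrusSearch job failures above visible; the 'CirrusSearch' channel name and the file paths are assumptions, not something confirmed in this conversation.

```php
// Route CirrusSearch's log channel, plus general errors and exceptions, to files
// so that failures from the cirrusSearchElasticaWrite jobs show up somewhere readable.
// Paths are placeholders: pick a directory the web server / job runner can write to.
$wgDebugLogGroups['CirrusSearch'] = '/var/log/mediawiki/cirrussearch.log';
$wgDebugLogGroups['exception']    = '/var/log/mediawiki/exception.log';
$wgDebugLogGroups['error']        = '/var/log/mediawiki/error.log';
```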
[09:03:06] grep for "CirrusSearch" and possibly "send_data_write"
[09:03:55] looking at the state of your indices and the elasticsearch logs might help as well
[09:04:54] Ok, I'm trying to look for them; if I can't find anything I shall ask my server provider to look, and I'll update you with the status.
[09:28:56] @dcausse I just noticed that I didn't run updateSearchIndexConfig.php at the time of installation and also didn't add $wgSearchType = 'CirrusSearch'; to LocalSettings.php, so is it possible that these mistakes are causing the issue?
[09:29:47] I disabled the extension temporarily because the numbers keep increasing.
[09:30:41] After disabling it I can only see the following now:
[09:30:41] refreshLinks: 1357 queued; 61 claimed (47 active, 14 abandoned); 0 delayed
[09:30:42] refreshLinksDynamic: 0 queued; 6 claimed (4 active, 2 abandoned); 0 delayed
[09:33:11] Guest54: so CirrusSearch was never really in use?
[09:33:39] :|
[09:34:19] Unfortunately no, and I didn't notice this earlier because it was not mentioned clearly on the extension page on MediaWiki.
[09:44:31] So if I re-activate the extension, is it safe to run updateSearchIndexConfig.php? Because it says that the script must be run before using CirrusSearch.
[09:46:44] And I guess I was not using CirrusSearch because $wgSearchType = 'CirrusSearch'; was not present in my LocalSettings.php
[09:53:00] Guest54: I would not enable CirrusSearch without knowing why the jobs failed
[09:53:27] Got it, thank you so much for your help.
[09:53:42] I will first find out the root cause and then take action accordingly. :)
[09:54:02] Thanks and stay blessed.
[09:58:12] I have a question related to Wikidata and Item/Property federation. Where is the best place to post a question about this?
[10:03:43] I've found this page: https://www.mediawiki.org/wiki/Communication
[10:05:33] but most of the things listed there are related to installation and bug reports
[10:11:20] VeniVidiVicipedi: You can try the #wikidata irc channel
[10:15:17] Thank you! I will try there as well =)
[12:39:53] Hi! I'm trying to import Wikipedia's infoboxes to my wiki, but I can't get it to work. I've been following the guide at https://www.mediawiki.org/wiki/Manual:Importing_Wikipedia_infoboxes_tutorial but it seems that those instructions are not working. I've installed all the extensions and followed the instructions, but my wiki's import page hangs for a while and then the server shows an "error 500" page. I did get an import to run
[12:39:54] successfully at one point but now the wiki says "Script error: No such module "Infobox"." Am I doing something wrong?
[13:47:39] No
[13:49:31] LorenDB[m]: most likely, the PHP execution of the import timed out, and the import was incomplete. Apparently, the page "Module:Infobox" was not imported
[13:50:09] Try importing the same file again (it should detect the already imported revisions and skip them)
[14:00:54] OK, I'll give that a whirl.
[15:17:50] I tried importing the same file 3 times in a row and it gave me a 500 every time. What now?
[15:18:07] Also, I should mention that this is running on Windows Server 2019 using IIS.
[15:18:47] Well, that's not the most optimal setup for MediaWiki, for sure :-)
[15:19:30] Try enabling some debugging, at least to see a meaningful error instead of an HTTP 500 error... https://www.mediawiki.org/wiki/Manual:How_to_debug
[15:20:02] Recommended: error_reporting( -1 ); ini_set( 'display_errors', 1 ); $wgShowExceptionDetails = true;
[15:20:39] Yeah, well, we have a bunch of Windows servers and I thought it would be best to run on IIS as the path of least resistance :)
[15:20:43] And $wgDebugLogGroups pointing to a file writable by the webserver, for the exception and error groups
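A consolidated LocalSettings.php sketch of the debug settings recommended just above, useful while diagnosing the import 500s; the Windows log paths are placeholders, and all of it should be removed once the real error is found.

```php
// Show errors and exception details instead of a bare HTTP 500 (diagnosis only).
error_reporting( -1 );
ini_set( 'display_errors', 1 );
$wgShowExceptionDetails = true;

// Log exceptions and errors to files the IIS worker can write to (placeholder paths).
$wgDebugLogGroups['exception'] = 'C:\\mediawiki-logs\\exception.log';
$wgDebugLogGroups['error']     = 'C:\\mediawiki-logs\\error.log';
```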
[15:21:31] Timeout and/or memory limits seem likely
[15:22:06] I gave the machine 16 GB of RAM, so that shouldn't be the problem.
[15:22:51] And the timeout is at 2 minutes. I can try bumping it, I suppose.
[15:23:46] Not the machine, the PHP process
[15:23:53] Which will have (probably low) memory limits
[15:30:57] I get this output in the log without changing any timeouts or PHP memory: https://gist.github.com/LorenDB/b42531020147afeec766f64f7fe66d7b
[15:35:34] which mediawiki version are you using?
[15:36:35] 1.38
[15:40:08] Ok, that [GlobalTitleFail] message is not an error, just a warning about deprecated stuff. The PHP engine may be halting the execution and the actual error doesn't get logged
[15:40:27] Most likely an out of memory error
[15:41:33] PHP usually outputs a descriptive message when this happens, but IIS probably doesn't play nice when presenting cgi errors
[15:41:34] I knocked PHP from 128K to 4G, but I'm still getting a 500. Let's go crazy and give it 12G because that's what's free.
[15:41:57] Maybe the Windows Event Log has more details
[15:42:05] That error is not memory limit exhaustion
[15:42:15] Oh wait, do I have to restart PHP for that to take effect?
[15:42:16] That was just a guess based on common usual failures of imports
[15:42:59] Yeah, more than 500MB for PHP is not reasonable
[15:43:26] :)
[15:44:52] Hi, my server provider was trying to install LuaSandbox on my MediaWiki 1.37.2 running on OpenLiteSpeed but they are getting the following error, so can someone please explain if LuaSandbox can be used on OpenLiteSpeed?
[15:44:53] Invalid configuration value: failovermethod=priority in /etc/yum.repos.d/litespeed.repo; Configuration: OptionBinding with id "failovermethod" does not exist
[15:45:52] I'm pretty sure that's not a MediaWiki error Guest76...
[15:46:46] Does anyone know where I can get some help with restbase/hyperswitch?
[15:46:58] Thank you Reedy. That sounds like I have to contact either OpenLiteSpeed or the developer of LuaSandbox.
[15:47:43] I'm not sure it's even a LuaSandbox error
[15:47:43] Vulpix: I am not seeing anything in the Windows event viewer
[15:48:00] Guest76: that is a yum/yum repo error
[15:48:14] Guest76: https://communicode.io/how-to-fix-failovermethod-error-fedora/
[15:49:12] hexmode: hyperswitch?
[15:49:34] I just noticed that commenting out failovermethod from /etc/yum.repos.d/litespeed.repo might work.
[15:49:38] hyperswitch: https://www.mediawiki.org/wiki/HyperSwitch
[15:50:43] Reedy: I'm seeing that when I POST to the math/check/tex endpoint without the q param, I get errors saying to specify q, but when I specify q, I get a 404
[15:51:01] Thank you for the link @Reedy, it's also suggesting the same method.
[15:51:03] Cheers
[15:51:36] and the 404 is coming from hyperswitch, which can't resolve the restbase endpoint
[15:52:07] Looking at https://github.com/wikimedia/hyperswitch/graphs/contributors
[15:52:15] Most of the top contributors aren't here anymore...
[15:52:24] :(
[15:52:47] cscott isn't?
[15:53:01] I said most, not all ;)
[15:53:12] Ah... maybe tgr will have a clue.
[15:53:36] Looking at the history... They haven't made 10 commits between them in that repo
[15:53:38] And not for 6+ years
[15:53:38] but, yes, the heavy hitters on this project are ... ?
[15:54:01] not here
[15:54:34] Ah... I may have to go back to the tried and true code spelunking
[15:54:54] I think restbase is generally considered deprecated too, AIUI
[15:55:11] I know. I've heard rumors of a replacement
[15:55:28] but I'm working on deploying mathoid for 1.35
[15:56:22] ty anyway, Reedy :)
[15:59:57] Hmm... is there anywhere I can just download a MediaWiki installer package that is pre-bundled with everything that Wikipedia uses?
[16:00:33] No
[16:00:47] Unfortunately
[16:00:50] Phooey.
[16:01:04] Some wmf stuff is really a pita to set up
[16:01:52] Although most of the core stuff is in the normal installer
[17:53:50] hello, MW actor vs user table question:
[17:54:32] Do I understand correctly that if there is a user table entry, actor.actor_user == user.user_id, and actor.actor_name == user.user_name?
[17:56:26] and, if a real user changes their user.user_name, is the actor.actor_name updated too?
[17:56:27] "name": "actor_user",
[17:56:27] "comment": "Key to user.user_id, or NULL for anonymous edits",
[17:56:35] "name": "actor_name",
[17:56:35] "comment": "Text username or IP address",
[17:56:48] ya, am reading https://www.mediawiki.org/wiki/Manual:Actor_table and https://www.mediawiki.org/wiki/Manual:User_table
[18:01:34] so, Reedy, actor_name == user.user_name then? in the case where actor_user is user.user_id?
[18:01:54] Yup
[18:05:12] ok great.
[18:05:33] i'm thinking about how to model users outside of mediawiki, and it seems that actor is mostly an internal implementation detail, eh?
[18:07:24] It looks like we expose actor_user (like we expose user_id), but we don't seem to expose actor_id
[18:07:49] makes sense.
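A minimal sketch of the actor/user relationship confirmed above, runnable from a MediaWiki context such as maintenance/eval.php; the Database::selectRow() join API is long-standing, but 'ExampleUser' is a placeholder username and error handling is omitted.

```php
// For registered users, actor.actor_user points at user.user_id and
// actor.actor_name mirrors user.user_name; anonymous edits have actor_user = NULL.
$dbr = wfGetDB( DB_REPLICA );
$row = $dbr->selectRow(
    [ 'actor', 'user' ],
    [ 'actor_id', 'actor_user', 'actor_name', 'user_id', 'user_name' ],
    [ 'user_name' => 'ExampleUser' ], // placeholder username
    __METHOD__,
    [],
    [ 'user' => [ 'JOIN', 'actor_user = user_id' ] ]
);
// For such a row, $row->actor_user matches $row->user_id
// and $row->actor_name matches $row->user_name.
```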
[18:19:43] hah. Um. what are 'bogo-bytes'? :p
[18:19:46] https://doc.wikimedia.org/mediawiki-core/master/php/classMediaWiki_1_1Revision_1_1RevisionRecord.html#a23b86172e78d3db5e6e9c09889b7f5c6
[18:19:53] "Returns the nominal size of this revision, in bogo-bytes."
[18:22:41] I think it's because strlen()
[18:23:13] Number of characters, rather than actual bytes
[18:24:48] hm, interesting, okay
[18:25:23] some assumption that 1 char is 1 byte etc
[18:27:22] hm, https://www.php.net/manual/en/function.strlen.php says strlen() returns the number of bytes rather than the number of characters in a string.
[18:27:22] but I imagine it is weird.
[18:29:09] mb_strlen
[19:13:23] Hello!
[19:13:24] I am Somya from India.
[19:13:24] I am interested in contributing to MediaWiki. Do we have any React.js projects?
[19:59:39] ottomata: I believe it's along the same lines as https://en.wikipedia.org/wiki/Bogosort "bogus bytes". For some content models, they're internally represented as JSON, so the length is not really a representation of the contents, but of the contents wrapped in an arbitrary serialization format, which often makes it useless to compare
[20:20:30] bogosort isn't bogus. It's O(1) in space complexity!
[23:30:27] "Does anyone know where I can get..." <- The [API Platform team](https://www.mediawiki.org/wiki/Platform_Engineering_Team/API_Value_Stream) owns it in theory. Daniel Kinzler is leading the work on reimplementing Parsoid-related RESTBase functionality in MediaWiki, not sure how much hyperswitch is involved in that.
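A tiny sketch of the strlen()/mb_strlen() distinction behind the "bogo-bytes" exchange earlier in the log, assuming the mbstring extension is loaded; the sample string is arbitrary.

```php
<?php
// strlen() counts bytes, mb_strlen() counts characters in the given encoding,
// so the two diverge as soon as the text contains multi-byte UTF-8 characters.
$text = "naïve 哲学";
var_dump( strlen( $text ) );             // byte length (larger for this string)
var_dump( mb_strlen( $text, 'UTF-8' ) ); // character length
```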