[14:33:11] Hi, what are the benefits of adding pages to a custom namespace? Does a namespace have any special properties, or can I apply some action/limitation at the namespace level?
[14:33:59] https://www.mediawiki.org/wiki/Manual:Using_custom_namespaces#Why_you_would_want_a_custom_namespace
[14:35:23] Found it; I'm only unclear about the "searchable" point, ty
[14:35:56] What is the meaning of this one please?
[14:35:57] A uniform prefix for specific content(s), which is searchable for that namespace only
[14:36:10] The namespace name itself
[14:36:22] So you can search for MyStuff:.... as an autocomplete prefix
[14:37:19] Is there a way to show the autocomplete results if the user types the page name without the namespace prefix?
[14:37:56] The more advanced search forms can do filtering by NS
[14:38:09] I use namespaces for organization and to avoid duplicate names, for example "Movie:blabla" and "Book:blabla". I want users searching "blabla" to see both pages in the autocomplete
[16:17:32] Hello!
[16:18:11] I was wondering what is the correct way to get a dump of images and descriptions to train a machine vision AI model
[16:20:13] Ramiroaisen: images from where?
[16:22:59] From the Wikimedia dumps
[16:25:20] AFAIK there is no such thing as a dump of images -- you would have to get them directly
[16:29:15] I would suggest looking at https://www.mediawiki.org/wiki/InstantCommons
[16:29:59] That would be a way to get the images and the text for any page in Commons
[16:30:14] (any file page)
[16:30:22] that isn't particularly useful or usable for training an ML model
[16:30:52] (it's not particularly usable in MediaWiki itself either, but that's another matter entirely)
[16:31:17] woffle: I wouldn't know about any ML model... I'm just thinking about the process of matching images with text
[16:31:51] you'd ideally want all of the training data to be local, or at least accessible with minimal latency. Which means not doing an API request per image
[16:32:22] sure, but you have to get the images somehow
[16:32:42] if it's supervised ML then the dataset would need to be labelled as well, which would require augmenting the data returned from the API
[16:33:09] dunno what this person is attempting to accomplish so I can't get more specific than that. Also I'm not an AI expert :P
[16:33:43] if you don't have TBs of storage space, downloading the dumps for image description pages and just grabbing the images over HTTPS is probably a decent compromise
[16:33:52] just need to re-implement the hashing scheme, which is easy enough to do
[16:41:34] Thank you all for your insights! I will continue investigating this
[16:42:28] noting not all images will have structured data, descriptions or tags
[18:08:56] Hi there, how do I make MediaWiki:Common.js load another JavaScript page, MediaWiki:Dummy.js, that is on the same wiki?
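The "hashing scheme" mentioned at 16:33 presumably refers to MediaWiki's hashed upload layout: a file is stored under the first one and first two hex characters of the MD5 of its name (with spaces stored as underscores). A minimal PHP sketch, assuming the standard Commons path layout; the function name and the example file name are illustrative only:

    <?php
    // Sketch: build the upload.wikimedia.org URL for a Commons file name,
    // assuming the standard hashed upload layout (md5 of the underscored name).
    function commonsImageUrl( string $fileName ): string {
        $name = str_replace( ' ', '_', $fileName ); // titles store spaces as underscores
        $hash = md5( $name );                       // hash of the name, not the file contents
        return 'https://upload.wikimedia.org/wikipedia/commons/'
            . $hash[0] . '/' . substr( $hash, 0, 2 ) . '/' . rawurlencode( $name );
    }

    // Example with an illustrative file name:
    echo commonsImageUrl( 'Example.jpg' ), "\n";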
[18:20:05] adfeno: https://www.mediawiki.org/wiki/ResourceLoader/Core_modules#mw.loader.load
[18:25:42] Thanks hexmode[m] :)
[18:49:37] Any chance https://www.mediawiki.org/wiki/Skin:BlueSky could be marked for translation?
[18:50:23] ashley: ^
[18:51:26] if you/someone adds the appropriate tags I'd be happy to review and mark it for translation
[18:51:36] I've no clue how to do that (but as usual/per T2001, it seems that the documentation is outdated and could do w/ a thorough review first)
[18:51:37] T2001: [DO NOT USE] Documentation is out of date, incomplete (tracking) [superseded by #Documentation] - https://phabricator.wikimedia.org/T2001
[18:53:13] woffle: I can see about it
[18:53:15] ashley: ok
[19:15:25] How do I copy a page to the same wiki under a different title, keeping the revision history, while allowing the original page to continue/branch/receive new revisions?
[19:16:01] (Preferably I would like to keep all past history information intact)
[19:17:02] adfeno: you want to have two pages with identical histories?
[19:17:11] Yes
[19:19:15] I first thought about "Export pages" then "Import pages", but I don't know if I'll be able to change the target page name.
[19:19:35] export the page, edit the title in the XML export file by hand to the new page name and then import it again
[19:19:55] look for the <title> element
[19:20:57] <adfeno> OK, hm… I see.
[19:21:41] <hexmode[m]> Have you tried editing the export file before?
[19:24:09] <adfeno> No, will do it now.
[19:56:47] <woffle> alternatively export the page, then rename it on-wiki (without leaving a redirect), then import to restore the old name
[19:57:51] <woffle> alternatively export the page, import it under a different namespace or as a subpage of some other page, then rename the imported page (without leaving a redirect)
[20:06:04] <adfeno> Thanks woffle ;)
[21:41:35] <jfolv> To anyone familiar with AbuseFilter: when we recently upgraded from 1.29.1 to 1.35.2, one of our filters seems to be subject to the emergency throttle. It has apparently also been throttled for 2 years, and some of the spambots are bypassing it. Where should I begin looking to diagnose and fix this issue?
[21:42:46] <RhinosF1> jfolv: make sure it's not being throttled for good reason (it's broken and has a high false positive rate), and if not, there's a config value to change the throttle limit
[21:44:11] <Reedy> You need to verify the logic you're using, the regexes etc
[21:45:40] <Reedy> You're either getting a looooot of spam, or the false positive rate is too high as per RhinosF1
[21:46:01] <RhinosF1> Pretty much
[21:46:25] <Reedy> And if it's the latter, your rule probably has an issue or two
[21:48:52] <jfolv> The rule looks alright. https://pastebin.com/DZwkAb8c is the source, in case it's me being an idiot.
[21:49:11] <jfolv> It's supposed to restrict new accounts from posting links
[21:49:33] <jfolv> We tend to get a lot of spambots joining and posting external links to their sites, so this helps to stop that
[21:50:48] <Reedy> fwiw, you can simplify that too
[21:50:57] <Reedy> (rmwhitespace(new_wikitext) irlike "http://") | (rmwhitespace(new_wikitext) irlike "https://")
[21:51:01] <Reedy> can just be
[21:51:28] <Reedy> (rmwhitespace(new_wikitext) irlike "https?://")
[21:51:31] <woffle> or length(added_links) > 0
[21:51:41] <woffle> ;)
[21:52:07] <woffle> (that's what I use on my wiki as an anti-spambot filter)
[21:52:44] <jfolv> Ah, nice. I didn't set this up personally, so I'm not as familiar as I probably should be
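Pulling together Reedy's and woffle's suggestions above, a sketch of an anti-link-spam condition in AbuseFilter rule syntax; the "autoconfirmed" check and the edit-count threshold are assumptions to tune for your own wiki (woffle's actual filter is the dpaste link just below):

    /* Sketch: flag external links added by brand-new accounts.
       Adjust the group and the edit-count threshold to your wiki's needs. */
    action == "edit" &
    !("autoconfirmed" in user_groups) &
    user_editcount < 10 &
    length(added_links) > 0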
[21:53:06] <woffle> specifically https://dpaste.org/KZ3k is what I use
[21:53:18] <woffle> your spam may be different so it may not be 100% applicable to you
[21:53:21] <jfolv> That aside, if I'm 100% sure it's not false positives, what can I do to immediately remove the throttle?
[21:53:39] <jfolv> There doesn't seem to be an option in the configs
[21:53:47] <jfolv> (on-site configs, I mean)
[21:54:31] <Reedy> https://www.mediawiki.org/wiki/Extension:AbuseFilter#Emergency_throttling
[21:55:16] <Reedy> (yes, it's not on-wiki config)
[21:56:16] <jfolv> So I basically have to set some LocalSettings variables to get it to fix itself?
[21:56:43] <Reedy> Well, it's not fixing it. It's increasing the threshold etc
[21:56:58] <Reedy> If it's causing that many false positives, it might not be high enough
[21:57:02] <jfolv> Ah, so there's no way to manually remove the throttle.
[21:58:36] <Reedy> you can set the values really high, but that doesn't necessarily help you
[22:00:07] <woffle> edit the notes and re-save to unthrottle temporarily. $wgAbuseFilterEmergencyDisableAge is probably your best bet, since that disables throttling of "old enough" filters. Set it to like a few days or a week so that legitimately broken filters still get throttled, but then after they've proven themselves they will be exempt
[22:01:47] <woffle> hmm, default for that is 1 day, I wonder if that config isn't working as it should
[22:02:17] <jfolv> One of our head admins reported to me that it was throttled for 2 years
[22:02:39] <jfolv> Take that with a grain of salt though, because I haven't been able to find where she got that from
[22:03:11] <jfolv> Maybe I'm lacking permissions, but I have every privileged group we have on my account, so I'm a little unsure.
[22:03:24] <Reedy> What does it say for the hits and statistics?
[22:03:31] <Reedy> Filter hits: 49,338 hits
[22:03:31] <Reedy> Statistics: Of the last 7,554 actions, this filter has matched 101 (1.34%). On average, its run time is 0.87 ms, and it consumes 3.7 conditions of the condition limit.
[22:04:01] <jfolv> Filter hits: 6,279 hits
[22:04:01] <jfolv> Statistics: Of the last 1,176 actions, this filter has matched 20 (1.7%). On average, its run time is 4.37 ms, and it consumes 3.1 conditions of the condition limit.
[22:05:04] <Reedy> Are you sure it's not just the case that your rule isn't right to actually stop the spam?
[22:12:26] <jfolv> I would like to say yes, but I'll double check with my guy. It's entirely possible that they bypassed it because it was insufficient; I don't have enough info yet to say definitively.
[22:17:17] <Reedy> If you've some examples of edits you think should've been prevented, you can test them against the AF rule
[22:22:44] <jfolv> Yeah, the edits match the rule. Seems like it's definitely the throttle.
[22:22:58] <jfolv> I'll try editing it slightly to remove the throttle
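For reference, a sketch of the LocalSettings.php settings behind the emergency throttling Reedy and woffle describe above. The variable names are the real AbuseFilter settings; the values are purely illustrative, and the array-keyed-by-filter-group form assumes a recent AbuseFilter version (check the extension documentation for your version's defaults):

    // LocalSettings.php -- illustrative values only
    $wgAbuseFilterEmergencyDisableThreshold = [ 'default' => 0.10 ];      // tolerated match rate before throttling
    $wgAbuseFilterEmergencyDisableCount     = [ 'default' => 50 ];        // minimum matches before throttling applies
    $wgAbuseFilterEmergencyDisableAge       = [ 'default' => 7 * 86400 ]; // filters older than this (seconds) are exempt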
[22:24:19] <jfolv> Wait a minute... why can I not edit this filter? I have literally every privileged group we have.
[22:24:36] <jfolv> There's no "save" button
[22:24:51] <jfolv> Additionally, all the options are greyed out
[22:24:55] <jfolv> And I can't change them
[22:26:14] <Reedy> abusefilter-modify and abusefilter-modify-restricted are probably the rights you need
[22:31:06] <jfolv> I should have both of those as an admin
[22:31:15] <jfolv> (sysop)
[22:35:47] <woffle> check Special:ListGroupRights on your wiki
[22:36:00] <woffle> abusefilter-modify-restricted iirc isn't given to anyone by default
[22:36:59] <jfolv> I did check it; according to that, admins have that right
[22:37:10] <jfolv> Yet I'm still not allowed to touch this filter
[22:37:28] <jfolv> Could this be related to the upgrade? Maybe the permissions completely crapped out?
[22:41:31] <jfolv> If it helps, I tried to grant that right to our "tech staff" role, but it refuses to show up on the group rights page
[22:41:51] <jfolv> (Despite multiple cache clears)
[22:42:21] <jfolv> I was able to get other permissions to show up/disappear, additionally.
[22:47:39] <Reedy> sysops have abusefilter-modify-restricted by default
[22:57:20] <jfolv> According to our group rights list, they are allowed to do it. I wonder if there's a conflict somewhere.
[22:59:25] <Guest90> Hi, is there a way to search for something just by the page title?
[22:59:52] <Guest90> Using the search box, not extensions
[23:10:14] <Guest90> When using the "WhatLinksHere" special page, is there a way to differentiate between a direct/inline link someone wrote, like [[myPage]], and a dynamic one like [[{{my_var}}]]?
[23:17:11] <jfolv> Is there some special right for editing global abuse filters?
[23:18:09] <lens0021> yes, and you can see all rights in Special:ListGroupRights
[23:18:46] <lens0021> @jfolv
[23:21:22] <jfolv> No offense, but I've already stated multiple times that I'm viewing that special page. Thankfully, I did find the right I needed, which was abusefilter-modify-global. For some reason, it wasn't granted to the appropriate group.
[23:31:28] <Reedy> There is abusefilter-modify-globa
[23:31:51] <Reedy> stupid irc client
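A sketch of granting the rights discussed in this thread explicitly in LocalSettings.php; the 'sysop' group is just an example, and abusefilter-modify-global only matters if the wiki actually uses global filters:

    // LocalSettings.php -- grant AbuseFilter editing rights to administrators
    $wgGroupPermissions['sysop']['abusefilter-modify']            = true; // edit ordinary filters
    $wgGroupPermissions['sysop']['abusefilter-modify-restricted'] = true; // filters using restricted actions
    $wgGroupPermissions['sysop']['abusefilter-modify-global']     = true; // global filters, if used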