[04:22:33] "I was a MediaWiki consultant for..." <- If I wanted to do that, I'd probably make a short survey about MediaWiki use at one's organization (with a "you can contact me for a follow-up interview" option) and seek community approval to advertise it at places where people get MediaWiki news or MediaWiki itself (the download page, the upgrade notes, maybe the entire mediawiki.org as a sitenotice).
[04:29:42] "If I wanted to do that, I'd..." <- Thanks for the suggestion. Great idea.
[04:50:01] hello
[04:53:28] im having an issue: VisualEditor does not work. when i click the edit button, it simply does nothing. it redirects me to veaction=edit but doesn't actually open VisualEditor; it just keeps me on the same page. im using the latest version of MediaWiki, have JavaScript enabled, and have VisualEditor enabled in LocalSettings.php. if you need more info, let me know.
[05:13:46] you do realize that VisualEditor is not a separate window, right?
[05:13:46] It simply places the cursor in the page and you can edit the page as it looks visually.
[05:14:10] Is there a toolbar?
[05:14:56] i know it isn't a separate window. no, there's no toolbar, nothing pops up at all, and there isn't any cursor either. i can't edit anything.
[05:42:41] any error message? anything in the Chrome/Firefox DevTools console?
[05:43:37] i get "VisualEditor failed to load: Error: Dependency ext.visualEditor.data failed to load" in my browser's console
[05:44:28] I'm not familiar with that... sorry, have to go to bed now.
[05:47:59] good night
[07:19:19] "i get a "VisualEditor failed..." <- You need to find out what the error was for that dependency. You could check the relevant `load.php` request, or the `resourceloader` log channel, I believe.
[07:22:07] tgr_: sorry, im a newb, how do i check for this error?
im looking at load.php but there's nothing mentioning VisualEditor
[07:36:18] hm, maybe the simplest is to try issuing `mw.loader.using('ext.visualEditor.data').catch(console.log)` in the JS console
[07:39:56] or maybe even simpler, just open `/w/load.php?lang=en&modules=ext.visualEditor.data` in a browser
[07:45:33] tgr_: going to that URL, i get this text: https://paste.rs/9i6
[08:03:32] Try setting `$wgDebugLogGroups['exception'] = '/path/to/exception.log';` in your LocalSettings.php and then check the exception details in that file
[09:51:19] tgr_: i dont understand, is there meant to be stuff written to that file? i did as you said, nothing has really happened, and there is nothing in the file i set
[14:08:32] today I learnt that MW's CdnCacheUpdate stuff somehow uses CONNECT for PURGE of URLs generated from an https wgUploadPath
[14:10:21] I wonder if this is because of Guzzle
[14:13:35] Request { method: CONNECT, uri: hostname.tld:443, version: HTTP/1.1, headers: {"host": "hostname.tld:443", "proxy-connection": "Keep-Alive"}, body: Body(Empty) }
[14:14:14] pretty sure Squid and Varnish would not understand these :\
[14:14:20] Remilia: check https://phabricator.wikimedia.org/T285504
[14:14:34] oh
[14:14:45] Remilia: You probably need to set $wgInternalServer
[14:15:15] Vulpix: it is set; the problem is that wgUploadPath is different
[14:15:43] I don't see how wgUploadPath is relevant for PURGE requests
[14:16:06] when I purge cache for a File: page, it emits CONNECT requests to wgUploadPath's host
[14:16:13] for the thumbnails
[14:16:17] is this a bug?
[14:16:53] I mean, is it actually not supposed to purge thumbnails as well?
[14:17:03] hmmm, I have never seen MediaWiki sending PURGE requests for thumbnails or media in general. Is that provided by an extension?
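A hedged sketch of the logging setup suggested above, for finding out why the `ext.visualEditor.data` module fails. The log file paths are placeholders, and `$wgShowExceptionDetails` is an extra debugging aid, not something proposed in the discussion:

```php
// Hypothetical LocalSettings.php fragment: route the 'exception' and
// 'resourceloader' log channels to files you can inspect, and show
// full exception details in responses while debugging.
$wgDebugLogGroups['exception'] = '/var/log/mediawiki/exception.log';
$wgDebugLogGroups['resourceloader'] = '/var/log/mediawiki/resourceloader.log';
$wgShowExceptionDetails = true; // include stack traces in error output
```

After reproducing the failing `load.php` request, the exception log should name the underlying error for the dependency.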
[14:17:34] I am actually unsure if this is for thumbnails, need to debug further, but I have no such extensions
[14:18:26] yes, these are definitely for thumbnails, just checked the debug log
[14:20:56] Vulpix: [2022-11-18T14:04:17.960353+00:00] http.WARNING: Error fetching URL "https://…/images/thumb/0/01/Z16SportWithoutBG.png/146px-Z16SportWithoutBG.png": (curl error: 56) Failure when receiving data from the peer [] {"url":"/w/api.php","ip":"2a05:3580:d200:1e01:75ab:7f9:52f9:68d9","http_method":"POST","server":"…","referrer":"https://…/wiki/File:Z16SportWithoutBG.png","uid":"187a835","process_id":39894,"host":"wiki.koumakan.jp","wiki":"azurlane_wiki","mwversion":"1.38.4","reqId":"136a01df018726ebce930f39"}
[14:21:51] as an example
[14:23:36] but that says it's trying to do a POST request. Is that really a PURGE attempt?
[14:23:53] I assume the POST is from me clicking the Purge cache link in the menu
[14:25:51] Vulpix: I am writing a service that accepts PURGE requests, and when I click Purge cache for a page, the service immediately receives a batch of requests; when I do it for a File: page, these include CONNECTs that, judging from the debug log, seem to be for the thumbnails
[14:27:07] lemme remove wgUploadPath to bypass the CDN
[14:28:48] Ok, well, it makes sense to try to purge thumbnails and media. It just won't work for me, because my media files are served from a separate domain, and one shouldn't expect the media server to be the same as wgCdnServers
[14:29:12] with wgUploadPath set to http:// these become PURGE and are all for thumbnails
[14:30:00] That sounds like a bug: not respecting the $wgInternalServer logic for such purges
[14:30:04] Vulpix: it does not seem like the CDN client cares about different domains, since I have the same situation (I use a CF-backed domain for the upload path)
[14:31:14] wgInternalServer is not the same as wgUploadPath when I had it set to https
[14:31:58] Note that wgUploadPath is a path; it shouldn't include a server or protocol!
[14:32:46] Vulpix, are you sure?
[14:32:56] the documentation needs updating then D:
[14:32:57] Yes, you should use $wgUploadBaseUrl instead
[14:32:59] https://www.mediawiki.org/wiki/Manual:%24wgUploadPath
[14:33:13] "If uploaded files are served from a different domain, this can be a fully-qualified URL with hostname, such as "http://upload.wikimedia.org/wikipedia/en"."
[14:33:52] my apologies then, I had no idea
[14:34:20] was relying on the page above :(
[14:34:37] https://www.mediawiki.org/wiki/Manual:%24wgUploadBaseUrl says
[14:34:38] Well, it's not your fault, the documentation seems to imply that you can use a full URL there
[14:34:46] "This can be used to set the server to use with $wgUploadPath if it is not the current server. However most people just put the full url in $wgUploadPath instead."
[14:35:17] Try using wgUploadBaseUrl for the domain and wgUploadPath for the path, and see if that makes any difference
[14:35:19] which seems to be a documentation bug, because it means the latter approach is fine
[14:36:01] Maybe those assumptions changed recently and the docs weren't updated
[14:37:46] I set wgUploadBaseUrl and wgUploadPath correctly according to your advice and CdnCacheUpdate still attempts CONNECT (as I expected)
[14:38:31] I think I will just write a local patch for MediaWiki that fixes this
[14:38:56] I already have like five local patches for cdncache :D
[14:39:42] fun
[14:40:18] if you use Squid/Varnish I am pretty sure your MW also emits these CONNECTs, you just did not notice haha
[14:42:00] (my project is basically a load 'unbalancer' that will convert purges for wgUploadBaseUrl to Cloudflare API calls and pass everything else to my Varnish)
[14:42:59] I could maybe write an extension, but PHP scares me, so it is async Rust instead
[14:43:19] This may be possible, yes. When I wrote that phab task, it was because purges weren't being sent, and then I discovered the CONNECT by doing a tcpdump.
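The split configuration suggested above can be sketched as follows; all hostnames and paths here are placeholders, not values from the discussion:

```php
// Hypothetical LocalSettings.php fragment: keep the media host in
// $wgUploadBaseUrl and only the path in $wgUploadPath, and point
// $wgInternalServer at the server that should receive PURGE traffic.
$wgUploadBaseUrl = 'https://media.example.org';
$wgUploadPath = '/images';
$wgInternalServer = 'http://127.0.0.1';
$wgCdnServers = [ '127.0.0.1:80' ]; // cache servers to send purges to
```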
Of course, chances are I haven't tested it when purging file pages
[15:06:16] Indeed, MediaWiki does not have thumbnail purging. This is a commonly reported issue as well: if you upload a new revision of an image, some users might still see the old revision for up to half a day or so.
[15:07:24] * or so. (or worse, depending on how you configured your caching)
[15:15:12] My extension approach was to modify the URLs and add the timestamp of the file to them. It fixes all the caching problems, and saves a lot of bandwidth by allowing long cache expiration in both the CDN and the browser cache
[15:16:10] You can also tell your frontend to force e-tag validation based on a file checksum or something. that also works.
[15:16:41] * force e-tag generation and validation based
[15:17:14] depends a little bit on your file size.
[15:17:18] TheDJ[m]: my experiments above sort of prove that it does have thumbnail purging
[15:18:11] DEBUG purgeproxy::runner: New queue entry: http://…/images/thumb/archive/0/01/20221118094401%21….png/120px-….png
[15:18:16] ^ this came from MW
[15:18:44] when I set the wgUpload stuff to http
[15:21:47] normal If-Modified-Since headers work already, but I wanted to avoid useless polling to see whether the file is stale
[15:22:20] when the URL changes, that automatically "invalidates" the cache (by never requesting the old URL again)
[15:23:11] And it works perfectly fine for Cloudflare
[15:24:38] Vulpix: my issue with CF is thumbnails etc.; when a user uploads a new version of an image, the URL stays the same
[15:24:38] WMF has a task to implement a similar approach, but I don't know why they haven't made any progress.
I imagine with Wikimedia Commons they'd save a lot of bandwidth
[15:25:16] Remilia: I know, this is what I solved with the extension I wrote
[15:25:28] and CF has some weird issues where their stated 4-hour TTL for the free tier sometimes does not purge anything
[15:26:11] I had cases where an image would stay unchanged in CF's cluster in Oceania for two months
[15:27:15] It's easy to implement in MediaWiki. There's more work to do in the URL rewrites, though: https://github.com/ciencia/mediawiki-extensions-WikiDexFileRepository
[15:28:11] It could be simplified if the variable part of the URL were a query string instead of the path itself, but I suspected a query string might give some caches the false impression that it's a dynamic asset and artificially reduce the cache TTL
[15:32:09] versioned URLs are a tradeoff between media caching and text caching. There are complications in WMF prod that make it more complex. https://phabricator.wikimedia.org/T19577 is the most current task
[15:33:36] Ah, yes, propagating a change to a file from the central repository (Wikimedia Commons) to the wikis using that file is challenging. I forgot about that
[15:39:17] aye, I remember first touching reverse proxying in 2000 and coming to the conclusion that caching is really darn hard
[15:40:22] though I already knew that from implementing a hardware cache on my hand-made Zilog Z80 PC, I did not expect it to propagate to high-level software stuff
[19:12:24] "caching is really darn hard" <- This, 100% of the way. It's one of the first things I teach fresh-out-of-school employees where I work.
[19:23:40] obligatory "two hard problems in computer science"
[19:45:04] Is the second one naming?
[19:45:38] omg I guessed it right
[20:07:41] the two hard problems are naming things, cache invalidation, and off-by-one errors
[20:52:34] yes
[20:52:41] and the third one is off-by-one errors 🙂
[20:56:37] is there a suggested order for loading extensions and skins in LocalSettings.php?
[20:58:03] it's usually not supposed to matter
[20:59:36] if there's guidance in the documentation for a specific extension, follow that guidance; otherwise, don't worry about it
[21:00:25] though from what I recall, extensions with guidance like that are generally more worried about configuration than other extensions are
[21:00:42] e.g. SMW extensions have (or at least had? I haven't looked in a while) to be loaded before the `EnableSemantics()` call
[21:00:50] yeah, it's usually a good idea to load the extension before trying to configure it
[21:06:02] i already have, like, a chunk of my LocalSettings containing wfLoadExtension calls and then extension settings, same with skins
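The "load first, then configure" layout described above can be sketched as follows; the particular extension, skin, and settings are illustrative examples, not from the discussion:

```php
// Hypothetical LocalSettings.php layout: load each extension or skin
// before setting its configuration, so anything it registers (defaults,
// user options, namespaces) exists before you override it.
wfLoadExtension( 'VisualEditor' );
$wgDefaultUserOptions['visualeditor-enable'] = 1;

wfLoadSkin( 'Vector' );
$wgDefaultSkin = 'vector';
```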