[01:20:46] I had an idea for an XSS protection extension that I think mostly does not break compatibility with other extensions [that follow coding conventions] while still preventing 95% of XSS vulns
[01:21:02] Anyway, it's experimental and hacky, but it's at https://www.mediawiki.org/wiki/Extension:XSSProtector if anyone wants to test it out
[07:25:07] bd808: you know what really sucks about running this stuff on FreeBSD, though: you get no profiling
[07:25:25] PHP people are like: lol, no profiler if you aren't on Linux
[16:05:24] Remilia: I wonder if https://www.mediawiki.org/wiki/Excimer would help you get useful profiling data. Traces from that extension power https://performance.wikimedia.org/php-profiling/
[17:05:47] bd808: oh nice, this is in ports
[17:07:16] thank you for the suggestion
[17:23:50] Remilia: out of curiosity, which profiler would you have used on Linux? (I'm guessing something was advertised somewhere on php.net or elsewhere that you tried but wasn't available for FreeBSD?)
[17:24:20] Krinkle: I cannot recall exactly what it was, but yes, something like that; Linux/macOS only, if I recall right
[17:25:54] it is similar to how on FreeBSD you cannot rely on `make test` for PHP extensions like wikidiff2 or luasandbox: the first one fails every test and the second hangs, because the code is mostly Linux-specific and does not use BSD timers
[17:30:12] think I will also try switching to PHP 8.4 and see if that improves performance (I know, grasping at straws)
[17:36:23] I do suspect the Postgres support has questionable performance if your wiki is large. Almost nobody tests it, so if it were missing an index somewhere, basically nobody would notice
[17:36:45] I'm not sure I would call it large haha
[17:38:15] bawolff: https://i.koumakan.jp/2025-07-29/1753810669.png I think this can be considered relatively small?
[17:40:59] I mean, it could be a lot of things
[17:41:25] Lots of wikis are reporting problems with crawlers, where AI bots hit Special:RecentChangesLinked really fast with different URL parameters (among other things)
[17:42:10] it's a horrible special page
[17:42:11] I suppose, though, if it was a DB problem you'd notice, since then the DB would be maxed out and the PHP workers would be sitting idle, so it's probably not that
[17:42:33] it's the PHP workers at 99% haha
[17:42:39] thankfully there are solutions like Extension:DisableSpecialPages or Extension:SpecialPageCaptcha (full disclaimer: I'm the maintainer of both and the original and only author of the latter ;)
[17:42:52] Am I the only one who likes Special:RecentChangesLinked? It's like the multi-watchlist feature that literally everyone asks for, except with a bad interface, so nobody uses it ;)
[17:43:29] ashley: in my case it's handled well before PHP: http-request return status 403 content-type text/plain lf-string "Anonymous access denied. Ref. IP: %[src]. Thank you!" if wikis mw_blockpages !mw_loggedin
[17:43:52] ah...
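For context, the rule quoted at 17:43:29 is a HAProxy `http-request return` directive (available in HAProxy 2.2 and later). A minimal sketch of how the referenced ACLs might be defined; the hostname, paths and cookie name below are assumptions, not the actual configuration:

    frontend https-in
        # Hypothetical ACL definitions matching the rule quoted above.
        acl wikis         hdr(host)  -i wiki.example.org
        acl mw_blockpages path_beg   -i /wiki/Special:RecentChangesLinked /wiki/Special:WhatLinksHere
        acl mw_loggedin   req.cook(examplewiki_session) -m found
        # Reject anonymous hits on the listed special pages before they ever reach PHP.
        http-request return status 403 content-type text/plain lf-string "Anonymous access denied. Ref. IP: %[src]. Thank you!" if wikis mw_blockpages !mw_loggedin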
[17:44:30] the wiki does not allow anonymous editing, so we're fine with locking down special pages like that
[17:46:35] Without profiling we're all just randomly speculating, but my go-to thing to check first would be whether the parser cache is indeed active and pages are being cached for a reasonable period of time (there should be an HTML comment in the page source that says whether or not that happened)
[17:52:43] yeah, "cachereport":{"timestamp":"20250729105707","ttl":86400,"transientcontent":false} suggests it's fine (there's memcached)
[17:53:11] but I'll look into Excimer later today or tomorrow, now that I'm on holidays
[17:54:58] I managed to reduce load somewhat, to the point where no one gets backend timeouts any more, by optimising reverse caching (using Vary: Content-Encoding for anonymous users)
[19:34:04] Remilia: once you have Excimer up, you can also use Speedscope to view the data. There are copy-paste snippets for capturing a profile at https://www.mediawiki.org/wiki/Excimer#Per-request_flame_graph_from_MediaWiki
[19:34:21] These can then be dragged and dropped into https://performance.wikimedia.org/excimer/speedscope/
[19:34:35] Speedscope is a static web app (nothing uploaded, all in your browser)
[19:35:05] you may need to scp or otherwise download the file from your server to your local machine to view it, etc.
[19:38:11] aye, thank you for the hints
[19:38:55] I upgraded PHP to 8.4 and for now it looks fine; will need to wait for another 10x jump
[20:00:04] I wonder if one can magick something up that would create those Speedscope files for requests that take more than, say, 5 seconds
[20:00:20] I think this should be possible
[20:02:27] Krinkle: right, that profiler you asked about is Tideways
[20:02:58] they have 'Windows or FreeBSD are not supported.' on their compatibility and requirements web page
[20:40:18] Remilia: interesting. I wonder if they ended up removing that, because XHProf (before the tideways_xhprof fork) did support BSD afaik.
[20:40:39] Note that Tideways abandoned their fork in 2023. https://github.com/tideways/php-xhprof-extension/
[20:41:46] The original XHProf was unmaintained for a while, but that's since been resolved.
[20:41:59] https://pecl.php.net/package/xhprof works for me on macOS/Darwin, so probably on FreeBSD as well.
[20:42:13] But it has a lot more overhead, so in terms of "time spent" it isn't very useful.
[20:42:26] but if you'd want to measure function call counts or memory, xhprof should work :)
[20:42:39] https://techblog.wikimedia.org/2021/03/03/profiling-php-in-production-at-scale/
[21:13:18] https://www.mediawiki.org/wiki/MediaWiki-Docker/Configuration_recipes/Profiling so this might be a bit out of date then, since it refers to Tideways?
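Putting the Excimer pointers and the slow-request idea from 20:00:04 together: a minimal sketch for the end of LocalSettings.php, assuming the excimer extension is loaded. The 10 ms sampling period, the 5-second threshold and the /tmp output path are assumptions; the copy-paste snippets on the Excimer wiki page linked above are the canonical starting point.

    // Sketch: sample the whole request with Excimer and only keep a
    // Speedscope-compatible trace when the request turned out to be slow.
    if ( extension_loaded( 'excimer' ) ) {
        $wgExcimerRequestStart = microtime( true );
        $wgExcimerProfiler = new ExcimerProfiler();
        $wgExcimerProfiler->setEventType( EXCIMER_REAL ); // wall-clock sampling
        $wgExcimerProfiler->setPeriod( 0.01 );            // one sample every 10 ms
        $wgExcimerProfiler->start();

        register_shutdown_function( static function () {
            global $wgExcimerRequestStart, $wgExcimerProfiler;
            $wgExcimerProfiler->stop();
            $elapsed = microtime( true ) - $wgExcimerRequestStart;
            if ( $elapsed > 5.0 ) { // only keep traces for requests slower than 5 s
                $data = $wgExcimerProfiler->getLog()->getSpeedscopeData();
                $file = sprintf( '/tmp/excimer-%s-%d.speedscope.json', gmdate( 'Ymd-His' ), getmypid() );
                file_put_contents( $file, json_encode( $data ) );
            }
        } );
    }

The resulting .speedscope.json files can then be copied off the server and dropped into https://performance.wikimedia.org/excimer/speedscope/ as described above.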
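For the call-count and memory angle mentioned at 20:42:26, a minimal sketch of using the PECL xhprof extension directly; the output path is an assumption, and MediaWiki core's own ProfilerXhprof/$wgProfiler integration is not shown here.

    // Sketch: capture function call counts and memory usage with xhprof.
    if ( extension_loaded( 'xhprof' ) ) {
        xhprof_enable( XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY );

        register_shutdown_function( static function () {
            // Array keyed by "parent==>child" call pairs; values include ct (call count),
            // wt (wall time), plus cpu and mu/pmu (memory) per the flags passed above.
            $data = xhprof_disable();
            file_put_contents( '/tmp/xhprof-' . getmypid() . '.json', json_encode( $data ) );
        } );
    }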