[01:25:58] The Hungarian flag on this page is broken: https://leunissendoranproductionswiki.miraheze.org/wiki/Category:Hungarian_characters
[01:48:09] Same with the American flag on this page: https://leunissendoranproductionswiki.miraheze.org/wiki/Category:American_characters
[01:58:44] Seems it's supposed to be loading from Instant Commons, which is enabled on the wiki, but it's not loading for some reason.
[01:59:18] Can you fix it, though?
[01:59:28] I will take a look shortly.
[01:59:33] Okay then!
[01:59:35] I am not sure why it isn't loading.
[01:59:53] Are you a bureaucrat on the wiki?
[02:00:09] Well, I did co-found it.
[02:01:36] Can you try to disable "Enable Wikimedia Commons Files ($wgUseQuickInstantCommons)" in Special:ManageWiki/settings and then re-enable it, if you have the correct permissions?
[02:04:17] No, I don't have permission to do that.
[02:04:33] It might be an upstream issue with Wikimedia Commons
[02:05:52] Although it's very odd that it just acts like Instant Commons is disabled.
[02:06:00] Tell me about it!
[02:10:14] It's weird, since I've been getting the dreaded ``[[Category:Pages with broken file links]]`` category after crosschecking on my wiki as of late.
[02:10:36] Yeah, Wikimedia Commons seems to be down globally(?)
[02:10:50] I guess it's an upstream issue.
[02:11:47] I guess so.
[05:13:32] commons doesn't load images for me either..
[05:14:19] at least negative caching is only for 5m
[07:04:28] We used Instant Commons on UT/DR up until recently, and it happened really frequently that the images just turned into redlinks because the WMF server died
[07:05:41] So we switched away from it
[07:51:11] I'm not sure whether this is related, but we will need to upgrade QuickInstantCommons soon, because WMF will require a UA for requests like this and block requests without one
[07:53:00] https://phabricator.wikimedia.org/T400881 and https://phabricator.wikimedia.org/T400119
[07:56:01] I think it might be that
[08:04:20] It very well might be that
[08:05:32] try updating QuickInstantCommons then, a change was made there that adds a link to the main page of the wiki it is being used on
[08:05:42] Will do
[08:06:11] unless I'm logged into bots171 trying to run mwdeploy lol
[08:06:40] thought I was on mwtask181
[08:10:47] doesn't seem to have made a difference, but done.
[08:21:28] maybe our IPs are still under the effect of the rate limit then?
[08:21:39] may fix itself magically later then
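[Editor's note: a minimal sketch of what the T400881/T400119 change means in practice. The API URL and the "blocked" outcome are assumptions; per the Wikimedia User-Agent policy, clients should identify themselves with contact details, as the manual curl test later in this log does.]

```sh
# Compare an anonymous request with an identified one against Commons.
URL='https://commons.wikimedia.org/w/api.php?action=query&meta=siteinfo&format=json'

# -H 'User-Agent:' makes curl omit its default User-Agent header entirely;
# once WMF enforces the requirement, this request should be rejected.
curl -s -o /dev/null -w 'no UA:   %{http_code}\n' -H 'User-Agent:' "$URL"

# A compliant UA names the client and gives a contact address (values here
# are placeholders).
curl -s -o /dev/null -w 'with UA: %{http_code}\n' \
  -A 'MyWikiFarm/1.0 (https://example.org; ops@example.org)' "$URL"
```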
[10:18:37] @cosmicalpha is there anything in the logs by any chance?
[10:19:19] the extension does log all HTTP requests and failures according to the source code, should tell us what the deal is
[10:22:24] @abaddriverlol or @blankeclair able to look ^
[10:22:37] graylog won't load for me on my laptop for some reason
[10:22:39] god damn it, okay
[10:25:07] we only log up to error
[10:25:23] pump it up to warning
[10:25:36] what if i do a manual curl on bast161?
[10:25:41] that's when it logs HTTP failures: https://github.com/wikimedia/mediawiki-extensions-QuickInstantCommons/blob/master/src/MultiHttpClient.php#L334C6-L335C35
[10:27:08]
```
[blankeclair@bast161:~]$ curl -v 'https://commons.wikimedia.org/wiki/File:Zumenon_(estradiol_hemihydrate)_tablets_in_Australia,_with_one_obverse_and_one_reverse_blister_packs.jpg' --user-agent 'Miraheze, Claire manually using curl to check responses (https://miraheze.org; blankeclair@wikitide.org)'
* Trying [2606:4700::6812:6be]:443...
* Connected to commons.wikimedia.org (2606:4700::6812:6be) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS alert, handshake failure (552):
* OpenSSL/3.0.17: error:0A000410:SSL routines::sslv3 alert handshake failure
* Closing connection 0
curl: (35) OpenSSL/3.0.17: error:0A000410:SSL routines::sslv3 alert handshake failure
```
[10:27:12] ?
[10:27:26] works well on my laptop...
[10:27:32] Do -vvv @blankeclair
[10:27:46] ineffective for curl, but okay
[10:27:57] same thing, i'll try the proxy even though it probably wouldn't help
[10:29:24]
```
[blankeclair@bast161:~]$ curl -v 'http://commons.wikimedia.org/wiki/File:Zumenon_(estradiol_hemihydrate)_tablets_in_Australia,_with_one_obverse_and_one_reverse_blister_packs.jpg' --user-agent 'Miraheze, Claire manually using curl to check responses (https://miraheze.org; blankeclair@wikitide.org)' -vvv
* Trying [2606:4700::6812:6be]:80...
* Connected to commons.wikimedia.org (2606:4700::6812:6be) port 80 (#0)
> GET /wiki/File:Zumenon_(estradiol_hemihydrate)_tablets_in_Australia,_with_one_obverse_and_one_reverse_blister_packs.jpg HTTP/1.1
> Host: commons.wikimedia.org
> User-Agent: Miraheze, Claire manually using curl to check responses (https://miraheze.org; blankeclair@wikitide.org)
> Accept: */*
>
< HTTP/1.1 409 Conflict
< Date: Fri, 22 Aug 2025 10:29:09 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 16
< Connection: close
< X-Frame-Options: SAMEORIGIN
< Referrer-Policy: same-origin
< Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Expires: Thu, 01 Jan 1970 00:00:01 GMT
< Server: cloudflare
< CF-RAY: 9731b001482cd96a-SLC
<
* Closing connection 0
error code: 1001[blankeclair@bast161:~]$
```
[10:30:26] ugh, what kinda fucking dns resolver are we using
[10:30:32] * Connected to commons.wikimedia.org.invalid (2606:4700::6812:7be) port 80 (#0)
[10:31:09] we do recursive resolving on the servers with powerdns I believe?
[10:31:30] how did we convert NXDOMAIN to Cloudflare
[10:32:21] I don't know, I never really touched DNS, only NTP and databases 🤷
[10:32:27] OH
[10:32:31]
```
[blankeclair@bast161:~]$ cat /etc/resolv.conf
search wikitide.net miraheze.org
```
[10:32:34] god fucking hell
[10:32:38] no wonder why
[10:33:42] search directives
[10:34:07] better link: https://github.com/ArchiveTeam/ArchiveBot/issues/318
[10:35:23] https://github.com/miraheze/puppet/blob/main/modules/base/templates/dns/resolv.conf.erb
[10:36:31]
```
[blankeclair@bast161:~]$ wget https://commons.wikimedia.org
--2025-08-22 10:35:49--  https://commons.wikimedia.org/
Resolving commons.wikimedia.org (commons.wikimedia.org)... 2606:4700::6812:7be, 2606:4700::6812:6be, 104.18.6.190, ...
Connecting to commons.wikimedia.org (commons.wikimedia.org)|2606:4700::6812:7be|:443... connected.
GnuTLS: A TLS fatal alert has been received.
GnuTLS: received alert [40]: Handshake failed
Unable to establish SSL connection.
[blankeclair@bast161:~]$ dig +short commons.wikimedia.org.miraheze.org AAAA
2606:4700::6812:6be
2606:4700::6812:7be
[blankeclair@bast161:~]$ dig +short commons.wikimedia.org.miraheze.org A
104.18.6.190
104.18.7.190
```
[10:36:37] mmh, how annoying
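[Editor's note: a minimal sketch of the resolv.conf search-list trap uncovered above, runnable on any affected host with dig and getent; it assumes the *.miraheze.org wildcard record shown in the dig output.]

```sh
# glibc retries a name against the "search" domains when the literal lookup
# fails, so with broken recursors commons.wikimedia.org gets re-queried as
# commons.wikimedia.org.miraheze.org - and the *.miraheze.org wildcard
# answers with Cloudflare IPs (hence the 409 / "error code: 1001" above).

# What the search-list fallback effectively asked for:
dig +short commons.wikimedia.org.miraheze.org A      # wildcard -> Cloudflare

# getent resolves through glibc/nsswitch, i.e. what curl and wget actually see:
getent ahosts commons.wikimedia.org

# A trailing dot marks the name fully qualified and disables search-list
# expansion, a quick way to rule this class of bug in or out:
getent ahosts commons.wikimedia.org.
```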
[10:42:31] hmm... it only manifested since all the resolvers failed
[10:43:31] wot
[10:43:51]
```
[blankeclair@bast161:~]$ grep nameserver /etc/resolv.conf | cut -d' ' -f2 | while read -r i; do echo "$i:"; dig @"$i" commons.wikimedia.org AAAA | grep HEADER; done
::1:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36562
10.0.17.136:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 32118
2602:294:0:b23::111:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 63498
2001:41d0:801:2000::4089:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 1923
38.46.223.204:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 47670
51.75.170.66:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18967
```
[10:44:02] lmao
[10:44:18] uh
[10:44:22] everything's not resolving?
[10:44:31] why tf
[10:44:44] this is like a super-critical, everyone-gets-out-of-bed-and-comes-to-help priority task on Phorge
[10:45:19] okay but like girl wtf why
[10:45:23] why is this happeninggggg
[10:46:53] could be lots of things
[10:47:08] I assume those external IPs aren't ours, maybe they blocked us for spamming requests
[10:47:28] maybe something happened with our host and they're blocking DNS traffic somehow
[10:47:48] huh
[10:48:19] what servers have powerdns again?
[10:48:24] we should look at the pdns-recursor logs at least; the first step is figuring out why our own recursive resolvers aren't working
[10:49:01] everyone runs their own instance
[10:49:06] https://github.com/miraheze/puppet/blob/main/modules/base/manifests/dns.pp
[10:49:20] ah, each server then?
[10:49:51] I believe, although I'm not sure: https://github.com/miraheze/puppet/blob/main/modules/base/manifests/init.pp#L20C4-L26C6
[10:50:05] idk when this is true
[10:50:19] mwtask181 has pdns-recursor, neat
[10:50:29] uh
[10:50:35] it should be firewalled off...
[10:55:17] Hello, I'm here to chat with you! ;)
[10:55:20] IP [[m:User:2.73.144.214]] Tiny create [[m:Translations:Wikimedia chapters/5/gn]] (+2) URL: https://meta.wikimedia.org/w/index.php?oldid=29163648&rcid=36579023 "Created page with "н""
[10:55:40] kicked
[10:55:47] who even is this
[10:56:00] some spammer, it was also on #miraheze-tech-ops earlier today
[11:05:57]
```
[1] commons.wikimedia.org: Failed to get IP for NS ns0.wikimedia.org, trying next if available
[1] commons.wikimedia.org: Failed to resolve via any of the 3 offered NS at level 'wikimedia.org'
[1] commons.wikimedia.org: Ageing nameservers for level 'wikimedia.org', next query might succeed
[1] commons.wikimedia.org: failed (res=-1)
```
[11:05:59] hmm...
[11:07:09] i'm not really in the mood to digest pdns logs tho ^^;
[11:07:20] i'll be a blunt pig and become the dns resolver
[11:10:32] yeah i have no idea what i'm doing
[11:19:49] hmm, typical pdns log: https://cdn.discordapp.com/attachments/1006789349498699827/1408410328043356190/1755861588829.txt?ex=68a9a3d5&is=68a85255&hm=0f0e5faceee354f3b5338c79195ebaa8b51ff1ac816946df114036f25714cfda&
[11:19:59]
```
[1] wikimedia.org: Trying to resolve NS 'a0.org.afilias-nst.info' (1/6)
[1] Nameserver a0.org.afilias-nst.info IPs: 2001:500:e::1(0.00ms)
[1] wikimedia.org: Resolved 'org' NS a0.org.afilias-nst.info to: 2001:500:e::1
[1] wikimedia.org: Trying IP [2001:500:e::1]:53, asking 'wikimedia.org|A'
[1] wikimedia.org: Got 7 answers from a0.org.afilias-nst.info (2001:500:e::1), rcode=0 (No Error), aa=0, in 121ms
[1] wikimedia.org: accept answer 'wikimedia.org|NS|ns2.wikimedia.org.' from 'org' nameservers? ttl=3600, place=2 YES!
[1] wikimedia.org: accept answer 'wikimedia.org|NS|ns1.wikimedia.org.' from 'org' nameservers? ttl=3600, place=2 YES!
[1] wikimedia.org: accept answer 'wikimedia.org|NS|ns0.wikimedia.org.' from 'org' nameservers? ttl=3600, place=2 YES!
[1] wikimedia.org: accept answer 'ns0.wikimedia.org|A|208.80.154.238' from 'org' nameservers? ttl=3600, place=3 YES!
[1] wikimedia.org: accept answer 'ns1.wikimedia.org|A|208.80.153.231' from 'org' nameservers? ttl=3600, place=3 YES!
[1] wikimedia.org: accept answer 'ns2.wikimedia.org|A|198.35.27.27' from 'org' nameservers? ttl=3600, place=3 YES!
[1] wikimedia.org: OPT answer '.' from 'org' nameservers
[1] wikimedia.org: determining status after receiving this packet
[1] wikimedia.org: got NS record 'wikimedia.org' -> 'ns2.wikimedia.org.'
[1] wikimedia.org: got NS record 'wikimedia.org' -> 'ns1.wikimedia.org.'
[1] wikimedia.org: got NS record 'wikimedia.org' -> 'ns0.wikimedia.org.'
[1] wikimedia.org: status=did not resolve, got 3 NS, looping to them
[1] wikimedia.org.: Nameservers: ns1.wikimedia.org(0.00ms), ns0.wikimedia.org(0.00ms), ns2.wikimedia.org(0.00ms)
[1] wikimedia.org: Trying to resolve NS 'ns1.wikimedia.org' (1/3)
[1] : no TA found for 'ns1.wikimedia.org' among 1
[1] : no TA found for 'wikimedia.org' among 1
[1] : no TA found for 'org' among 1
[1] : got TA for '.'
[1] QM ns1.wikimedia.org.|AAAA child=(empty): doResolve
```
[11:21:59] hypothesis: does pdns fail to resolve domains if the glue records for the nameservers do not have AAAA records?
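[Editor's note: a small check for the hypothesis above; the trace only ever shows A (IPv4) glue for ns0-ns2.wikimedia.org, which matters once the recursor can only send queries over IPv6.]

```sh
# If no wikimedia.org nameserver publishes an AAAA record, an IPv6-only
# recursor has no address to query - matching "Failed to get IP for NS".
for ns in ns0.wikimedia.org ns1.wikimedia.org ns2.wikimedia.org; do
  echo "$ns A:    $(dig +short "$ns" A    | tr '\n' ' ')"
  echo "$ns AAAA: $(dig +short "$ns" AAAA | tr '\n' ' ')"   # empty => v4-only
done
```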
[11:26:57] hmm, tried on compass.education
[11:27:17] https://cdn.discordapp.com/attachments/1006789349498699827/1408412208383131719/1755862035538.txt?ex=68a9a595&is=68a85415&hm=84e75a4dcb11cf227a84f094a228421314f12a12aaa8198e8f863e71de4a4eee&
[11:27:46] did wikimedia change anything regarding ipv6?
[11:28:00] but also, why is our resolver so ipv6 short-sighted
[11:31:50] rip my laundry btw
[11:42:02]
```
msg="NOT using IPv4 for outgoing queries - add an IPv4 address (like '0.0.0.0') to query-local-address to enable" subsystem="config" level="0" prio="Warning" tid="0" ts="1755861830.842"
msg="Enabling IPv6 transport for outgoing queries" subsystem="config" level="0" prio="Notice" tid="0" ts="1755861830.843"
```
[11:43:36] need ipv4 on https://github.com/miraheze/puppet/blob/042d36a57ba33c9c6e201f1b35a12d6d18a8cc7e/modules/base/templates/dns/recursor.conf.erb#L21
[12:07:48] https://github.com/miraheze/puppet/pull/4478
[12:14:20] wow, ain't that a headache
[12:14:29] @paladox ^
[12:14:44] two hours of debugging culminates in a one-line diff
[12:14:56] proportionally, three days of debugging culminates in a five-line diff
[12:18:05] This is safe to merge, right? I'm mobile rn
[12:18:15] afaik, yes
[12:18:18] In Scotland
[12:18:42] Merged
[12:18:42] watch as i introduce config-breaking syntax that fully takes down dns resolution
[12:18:52] Not sure if the service restarts on config change
[12:18:53] ty ^_^
[12:19:05] i'm gonna be watching lesbian memes to relax after this
[12:49:28] https://github.com/miraheze/puppet/blob/63dd30230ee362660e6a8eb6f0caea0dea9e46bb/modules/base/manifests/dns.pp#L17
[12:49:31] Seems it does
[13:49:33] nice, images are coming back, though it will take time for the logged-out cache to get evicted
[13:51:27] so what happened is: for all this time powerdns has been unable to resolve the commons domain, we've been relying on external nameservers for that, and when those stopped working for some reason, we were no longer able to pull stuff from commons
[13:53:13] wait
[13:53:19] those weren't external nameservers
[13:53:46] I just checked, and the rest of the IPs are just those of ns1 and ns2, as well as the private IP of ns1
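[Editor's note: the shape of the one-line fix in the PR above, reconstructed from the "NOT using IPv4 for outgoing queries" warning; the config path is an assumption, since in production the line lives in the puppet template recursor.conf.erb.]

```sh
# pdns-recursor only sends outgoing queries from address families listed in
# query-local-address; it had been IPv6-only here. Adding the IPv4 wildcard
# lets it reach the v4-only wikimedia.org nameservers:
grep query-local-address /etc/powerdns/recursor.conf
# query-local-address=0.0.0.0, ::

# Puppet restarts the service on config change (see dns.pp#L17); by hand:
systemctl restart pdns-recursor
```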
[14:07:46] What does it mean when a wiki (e.g. in this case) shows this error when visited through the .miraheze.org subdomain? https://cdn.discordapp.com/attachments/1006789349498699827/1408452593306304674/image.png?ex=68a9cb32&is=68a879b2&hm=e2cc6212f5dcb550e0981e2f5979bcd309c87545b99c5298f0a05f2eb246c974&
[14:08:26] All config overrides for `wgGalleryOptions` except for the one I added yesterday/today are for wikis that show this error
[14:11:42] what the fuck?
[14:12:23] darkangel seems to use wikitide instead
[14:12:37] https://darkangel.wikitide.org/wiki/Dark_Angel_Wiki
[14:12:46] ohh
[14:12:49] so all of these probably do
[14:13:50] seems like it
[16:38:00] There were squid security updates last night
[16:38:06] I wonder if the restart purged the DNS cache
[19:14:15] have any major breaking changes to managewiki or createwiki been made since 2bd0390 and f73038a respectively, or are they safe to update?
[19:20:19] probably not, since one sleepless man has been doing quite a few changes; cc @cosmicalpha
[19:21:27] Will check
[19:22:43] uh yes, a lot of them
[19:22:57] okay, I should probably make a full diff then
[19:23:10] ManageWiki and CreateWiki were completely split, so ManageWiki doesn't require CreateWiki anymore.
[19:23:12] I need to eventually update wikioasis to 1.44, and while I'm there I may as well update everything
[19:23:21] ah
[19:24:24] that does ring a bell lol
[19:24:50] So hooks like CreateWikiDataFactoryBuilder were completely removed; a new one exists in ManageWiki, but it's not needed, at least for Miraheze, since it's all baked into ManageWiki now. Also you need to set $wgManageWikiCacheType and $wgManageWikiCacheDirectory, similar to the CreateWiki equivalent configs.
[19:25:35] I'll make a full diff tonight and review it all; is it mostly on the ManageWiki side, or has CreateWiki had some changes too?
[19:25:42] Both
[19:25:53] A lot of things were removed from CreateWiki and moved to ManageWiki.
[19:26:37] probably, but I have questions regarding how we were ever able to resolve commons
[19:27:01] I was thinking of making a changelog once I finish the final split (moving private wiki handling fully to ManageWiki; it's already handled there but just kept in both places to be absolutely sure it works first)
[19:27:14] have we just been carrying a very old response all this time and avoiding this through sheer luck?
[19:27:31] My assumption is something unrelated to squid.
[19:27:32] private wikis are done by createwiki then?
[19:27:41] I always assumed it was a permission thing
[19:27:45] by managewiki
[19:29:01] Technically by both now. Historically by CreateWiki. But now CreateWiki just provides the functionality to ManageWiki, and ManageWiki handles it. CreateWiki will eventually just be for creating and requesting wikis. For legacy reasons cw_wikis will be kept, but RemoteWikiFactory will be removed and just used as the provider for ManageWiki core via the hook.
[19:30:05] squid makes no sense as the underlying cause because squid does use IPv4 properly. Unless something caused powerdns on bast to stop forwarding proxy requests to ipv4. But it's passed through IPv4 there anyway.
[19:30:55] when were they made?
[19:31:10] ManageWiki and CreateWiki?
[19:31:26] I mean when were the changes made
[19:31:32] to run managewiki standalone
[19:31:42] Written on the front lines of the Battle of Hastings, 1066
[19:32:01] I'm not wording it well lmao
[19:32:07] like when was the split made
[19:32:22] last week.
[19:32:34] ahh ic
[19:33:06] https://github.com/miraheze/ManageWiki/commit/ad220f0d032e13efd0204c9d164fc1cf220a03f5 was the primary patch for it, but other migration patches have been done since.
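[Editor's note: one way to build the "full diff" mentioned above, assuming the extensions are plain git clones; 2bd0390 and f73038a are the pinned commits from the 19:14:15 question.]

```sh
# List everything that landed after the pinned commits, to spot the
# ManageWiki/CreateWiki split and other breaking changes before upgrading.
git -C extensions/ManageWiki fetch origin
git -C extensions/ManageWiki log --oneline --no-merges 2bd0390..origin/main

git -C extensions/CreateWiki fetch origin
git -C extensions/CreateWiki log --oneline --no-merges f73038a..origin/main
```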
[19:34:03] and the last question is: have changes been made to cw that make it difficult to update cw first and then mw, or is it just that mw doesn't require cw as a dependency anymore?
[19:34:33] or is it that cw maintains its own cache now
[19:34:52] Changes have been made to CW also. Upgrading might not be 100% straightforward, especially if you have something similar to our MirahezeMagic and MirahezeFunctions; then other changes may be needed as well.
[19:35:26] iirc all wikioasismagic does is generate sitemaps and provide some localisation
[19:35:36] but the mf probably needs updating in turn too
[19:36:01] previously CW had both the ManageWiki (settings) cache and the CreateWiki (database) cache; now ManageWiki has its own cache and CreateWiki just has the database list cache.
[19:36:21] ah, but afaik it does avoid private wikis, to not leak them through sitemaps
[19:36:33] so might need to check how it gathers that and modify it to use the managewiki cache then
[19:37:31] Actually I think it may not be needed. All that was changed for that here was removing a hook, which could technically be done afterwards; it didn't need any replacement. The eventual goal is removal of MirahezeFunctions here, making ManageWiki a drop-in extension handling everything dynamically itself.
[19:38:26] Will probably be done with composer autoloading or similar. But I haven't fully worked it out yet.
[19:38:26] I assume the reason for not combining cw/mw is to allow people who don't operate a farm to use managewiki?
[19:38:36] Yep.
[19:38:50] easier to maintain too.
[19:39:02] tbh I might just wait a bit and see how it plays out; I'll see if anything interesting happens with 1.45 and then decide whether it's better just to skip 1.44
[19:39:14] One massive extension is harder to maintain and test than 2 smaller ones. (both massive in their own right also)
[19:40:24] That sounds difficult since extensions aren't loaded until LocalSettings is read - unless you mean moving MirahezeFunctions into ManageWiki, but even so it would still just be reading from a cache file?
[19:41:26] You can use an onRegistration callback and SettingsBuilder technically, but I'm more thinking a composer autoloader might be better, to just load and read a setup file immediately also.
[19:42:19] I'm not sure that works, because Setup.php would bail before the queue is read from?
[19:42:25] SettingsBuilder is a bit slower, and sometimes it would rely on load order. It also doesn't support loading extensions like Wikibase without a massive hack I did once and abandoned: basically extending ExtensionRegistry and making it support that.
[19:42:37] I had it working for the most part but abandoned it just because of that.
[19:44:30] oh btw, what are the implications for requestssl with the managewiki changes
[19:44:32] Yeah, I personally just went with reading from redis, which saves messing with cache files
[19:44:45] with regards to changing wgServer
[19:45:03] RequestSSL was renamed to RequestCustomDomain and technically no longer relies on either, but it won't set wgServer without ManageWiki.
[19:46:05] I thought about that, but it was much slower when I tried it. Using PHP caches allows using the opcode cache, which is extremely fast if tuned right.
[19:46:27] ah, I'll need to look into that then as well
[19:46:46] although I'm assuming if I update them all at the same time they'll probably work together
[19:47:36] Also thought about using MapCacheLRU, which may be a decent option to add an extra layer of processing and improve reliability, but I haven't really dived into it too much yet.
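[Editor's note: a quick way to put numbers on the redis-versus-opcache comparison in this exchange; host and port are assumptions.]

```sh
# Round-trip time per PING against the cache backend - compare this against
# the near-zero cost of an opcache-cached PHP settings file.
redis-cli -h 127.0.0.1 -p 6379 --latency           # min/avg/max in ms; Ctrl-C to stop
redis-cli -h 127.0.0.1 -p 6379 --latency-history   # same, sampled in 15s windows
```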
[19:49:02] Works fine in my testing; it's about 5ms latency for the initial connection, but then almost immediate for future fetches
[19:49:29] yep, they should. Almost all of our extensions have had massive changes as of late. My goal was to make them easier to install externally as well as locally, to make it easier to contribute. But it had the side effect of some pretty large breaking changes in some cases.
[19:50:55] what does wgManageWikiCacheType do btw?
[19:51:01] is it redis vs filesystem
[19:51:02] it had more latency when I tried it; it also worked fine for a bit, then slowed down massively again. In my opinion though, some of what I've done has made 1ms performance improvements, which in my book is a gain. Every ms matters when you have so many wikis reading from things consistently.
[19:51:15] I can't find any real definitions for it looking around
[19:51:44] lol, should've read extension.json
[19:51:55] github's searching for definitions isn't the best
[19:52:27] set it to something from $wgObjectCaches; if not set, it uses $wgMainCacheType by default. But yeah, the description for it is there.
[19:53:34] we probably need to switch to redis eventually
[19:57:33] Strange
[20:43:22] Those tech issues with the flags I recently reported are still unresolved. How long do you think it'll take for the upstream issue to be resolved?
[20:43:53] If it's upstream then we have no control
[20:44:10] What extension is it?
[20:44:18] Extension?
[20:44:19] Or part of core?
[20:44:28] @picholasstripes2000 which task is it?
[20:45:26] We have very little control over upstream, but I might be able to tell you who to ask if we know what part of the upstream code base it is
[20:45:49] Sorry, I don't understand what you're talking about.
[20:46:17] if you mean pages having broken images, those should be fixed now; you may need to purge your pages to see them
[20:46:17] I don't know how else to word "which issue are you talking about"
[20:46:52] (i hope you mean these: https://discord.com/channels/407504499280707585/1006789349498699827/1408266461989966005)
[20:47:06] Oh, don't worry! I clicked on purge, and it worked!
[20:47:15] yippee!
[20:47:24] Thanks for the help!
[20:47:35] Nice
[20:49:29] Btw, there are 55,000 open & stalled tasks on Wikimedia Phab. Around 114 raised by us.
[20:54:39] only 114? damn
[21:00:51] we have ::1 in there
[21:01:07] and pdns does its own recursive resolution, starting at the root
[21:01:45] was there an ipv6 change on wmf's side?
[21:12:18] wmf parsoid team ignoring my questions about portable infobox 💔
[23:09:26] Oh, they brought that dumb as fuck pipe operator to PHP 8.5 😶‍🌫️