[02:07:45] So if it's shared somewhere accessible, if you create a ticket at phabricator.miraheze.org after your wiki's been approved, our SRE team can load it on to servers, unpack it, and import it.
[02:53:28] oh shit, i should get the rest lined up first, then
[02:53:36] okay, this weekend i will devote to the project
[18:40:41] We're really not doing so hot on the documentation side of things huh
[18:41:29] Orange_Star: what's documentation
[18:41:30] indeed...
[18:41:51] I tried keeping track in https://meta.miraheze.org/wiki/User:Reception123/docs
[18:41:56] but was ultimately not very successful
[18:42:54] Documentation is a thing? I thought we just asked LLMs for those
[18:43:11] chatgpt how do i install an extension in miraheze?
[18:44:25] [1/11] ```
[18:44:25] [2/11] To install an extension on a Miraheze wiki, you typically need administrative privileges or access to the wiki's configuration settings. Here are the general steps to install an extension on Miraheze:
[18:44:26] [3/11] 1. Access Wiki Configuration: Log in to your Miraheze wiki with administrative privileges.
[18:44:26] [4/11] 2. Navigate to Special:ManageWiki: Go to the Special page named "ManageWiki" on your wiki. You can usually find it by appending "Special:ManageWiki" to the end of your wiki's URL.
[18:44:26] [5/11] 3. Extension Management: Look for a section or tab related to "Extension management" or similar wording. It should allow you to manage extensions installed on your wiki.
[18:44:27] [6/11] ```
[18:44:27] [7/11] 4. Find and Install the Extension: Locate the extension you want to install from the list provided or by using the search functionality if available. Once you find the extension, there should be an option to install it.
[18:44:27] [8/11] 5. Follow Installation Instructions: Sometimes, installing an extension might require additional steps like editing configuration files or running maintenance scripts. Follow any instructions provided to complete the installation process.
[18:44:28] [9/11] 6. Verify Installation: Once installed, verify that the extension is working correctly by checking its functionality on your wiki.
[18:44:28] [10/11] 7. Maintenance and Updates: Regularly check for updates and maintain the extensions installed on your wiki to ensure they are up-to-date and compatible with your wiki's software version.
[18:44:28] [11/11] If you encounter any difficulties during the installation process, you can refer to Miraheze's documentation or seek assistance from the Miraheze community through their forums or chat channels.
[18:45:01] yeah, not quite!
[18:45:08] a good try though
[18:45:09] almost, almost. It actually knows what ManageWiki is, which is a miracle in and of itself
[18:45:22] it gets a B-
[18:45:26] I was surprised by the ManageWiki reference, yeah.
[18:45:55] in a few years, we'll have MirahezeHelpGPT as a Discord bot 😄
[18:46:17] Orange_Star: I mean that's not a huge shock
[18:46:28] It seems to know ManageWiki is a thing but then gets confused
[18:46:38] We could teach Miraheze GPT for a few million
[18:46:50] too bad I wanted to know how to install a _new_ extension
[18:47:17] it kinda is surprising for me RhinosF1, I didn't know ManageWiki made it into the training data
[18:51:46] Orange_Star: anything on Google did
[18:51:52] It knows a fair bit about MH
[18:52:02] But we aren't notable enough for it to know things properly
[18:52:29] It puts a jigsaw together using half pieces on Miraheze and half from MediaWiki-esque and possibly related stuff
[18:54:49] even https://meta.miraheze.org/wiki/User:Reception123/docs is outdated it seems
[18:54:52] technically we could use https://openai.com/blog/introducing-gpts and create one based on MH-specific data
[18:55:08] yeah, unfortunately I didn't get to finish it, but even if I did, things have changed a lot since then
[18:55:09] Divine intervention
[18:55:13] we don't maintain a fork of MediaWiki anymore (good riddance)
[18:56:04] we now have a better system to update and install extensions
[18:58:56] Orange_Star: by the way, not sure what your Python/DNS knowledge is like, but if you've got time I wanted to ask you if you have any idea why this script isn't working right
[18:58:56] https://phabricator.miraheze.org/P501$45
[18:59:21] [1/4] and got
[18:59:21] [2/4] File "/home/reception/.local/lib/python3.11/site-packages/dns/resolver.py", line 763, in next_nameserver
[18:59:22] [3/4] raise NoNameservers(request=self.request, errors=self.errors)
[18:59:22] [4/4] dns.resolver.NoNameservers: All nameservers failed to answer the query holocron.net. IN NS: Server Do53:2606:4700:4700::1111@53 answered SERVFAIL
[18:59:35] i mean, doesn't look like your fault
[18:59:42] wait
[18:59:54] Do53:2606:4700:4700::1111???
[19:00:14] is that a typo or?
[19:00:54] yeah, I don't know why it's saying that
[19:00:59] you can see in the script itself that there's no typo
[19:01:08] and the domain was pointed to MH's NS, I checked
[19:02:09] What happens if you query 1.1.1.1 using dig
[19:02:17] Or 2606:4700:4700::1111
[19:03:48] that exception also doesn't indicate that it is the fault of the script
[19:03:51] I can't check with puppet since it's on cloud18...
[19:04:03] I guess I can try on test151, but not sure if we'd get the same results
[19:04:25] oh hmm, 151 is also inaccessible
[19:05:06] I see that same exception on #miraheze-sre all the time
[19:05:27] maybe you're just spamming Cloudflare's recursive DNS a bit too much?
[19:06:03] if we were, wouldn't there be a more specific error?
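The "All nameservers failed" exception in that traceback comes out of dnspython's nameserver-rotation loop: the resolver tries each configured server in turn, records every per-server failure, and only raises `NoNameservers` once the whole list is exhausted. With a single server configured (here, Cloudflare's 2606:4700:4700::1111), one SERVFAIL is enough to hit that path. A minimal stdlib-only sketch of that behaviour, where `resolve_with_fallback`, `AllNameserversFailed`, and the pluggable `query_fn` are illustrative names, not dnspython's actual API:

```python
class AllNameserversFailed(Exception):
    """Raised when every configured nameserver has failed.

    Mirrors the shape of dns.resolver.NoNameservers: it carries the
    per-server errors so the caller can see *why* each server failed.
    """

    def __init__(self, errors):
        self.errors = errors
        detail = "; ".join(f"{ns} answered {err}" for ns, err in errors)
        super().__init__(f"All nameservers failed: {detail}")


def resolve_with_fallback(qname, nameservers, query_fn):
    """Try each nameserver in turn; return the first successful answer.

    query_fn(nameserver, qname) stands in for the actual UDP/TCP query;
    it should return an answer or raise on SERVFAIL, timeout, etc.
    """
    errors = []
    for ns in nameservers:
        try:
            return query_fn(ns, qname)
        except Exception as err:  # a real resolver catches specific errors
            errors.append((ns, err))
    # Only after every server has failed do we give up, which is why a
    # single-resolver config turns one SERVFAIL into "all ... failed".
    raise AllNameserversFailed(errors)
```

This also matches the chat's read of the situation: the exception doesn't implicate the script itself, it just reports that the one upstream it was given returned SERVFAIL.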
[19:06:45] but yeah, I guess if puppet181 is down we'll have to wait until tomorrow (hopefully) to try to debug more
[19:07:41] I think so, since SERVFAIL indicates a problem with either Cloudflare or some server asked by Cloudflare, like that domain's authoritative nameserver
[19:07:57] Inclined to think it is with Cloudflare though
[19:08:34] I do remember this always being an issue, even with previous versions of the script, that was never resolved
[19:08:39] but if we want requestssl we'll have to figure out a way
[19:09:02] host your own recursive dns resolver, easy****
[19:09:41] but if that's the issue, how does rDNS work for icinga alerts? or at least it did at some point
[19:10:28] source code for that?
[19:10:42] I don't know my way around the puppet repo yet
[19:10:48] that's the thing, it's the same code
[19:11:02] copied it from https://github.com/miraheze/puppet/blob/fec5c1dfa8dd4592a727c41bc4e29155c229feca/modules/monitoring/files/check_reverse_dns.py#L79
[19:11:11] I doubt Cloudflare is rate limiting
[19:11:13] We could ask them
[19:11:54] I do remember trying other resolvers before with my old version and it never worked. I thought first it could be because of bast* but then again, how does the icinga script work?
[19:14:34] Ye
[19:16:12] Works the same as the other checks?
[19:16:22] just that instead of telling you SERVFAIL it says this:
[19:16:31] PROBLEM - zhacg.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for zhacg.wiki could not be found
[19:16:49] yeah, but it says that for wikis that are actually not pointing to our DNS
[19:17:07] the one I checked I'm almost sure was, at least according to a whois tool online
[19:18:02] I kind of have a hunch
[19:18:08] so, I checked that domain and got this:
[19:18:23] ;; AUTHORITY SECTION:
[19:18:24] wiki. 600 IN SOA a.nic.wiki. admin.tldns.godaddy. 1706900885 1800 300 604800 1800
[19:18:47] since it doesn't have an IP it can check the reverse DNS records of, it returns that
[19:18:47] which is technically true, it couldn't find an rDNS entry
[19:19:11] I just used https://mxtoolbox.com/SuperTool.aspx?action=whois%3aholocron.net&run=toolpage
[19:19:25] oh, interesting
[19:19:34] any idea what solution there would be to that?
[19:19:40] since clearly whois tools can do it!
[19:22:07] well, idk how they do it, but the way we do it would depend on how that DNS library we use works
[19:22:38] should remark that's just a theory, not sure that is what is actually going on, but it makes sense from my reading of the code
[19:23:27] holocron.net is pointed at our nameservers
[19:24:19] Orange_Star: what do you think the error is
[19:24:46] not sure I would call it an error tbh
[19:25:23] I guess it would be that the message could tell us more about what happened
[19:26:24] since it outputs the same for both NXDOMAIN and SERVFAIL, it could be more specific
[19:27:12] for example, treat NXDOMAIN as a CRITICAL instead of just saying WARNING, since that means the domain is not even registered
[19:28:43] actually, now that I reread my theory regarding zhacg.wiki, why does it even hit the exception handler?
[19:29:46] i'm testing reception's script
[19:30:22] https://www.irccloud.com/pastebin/bSFXpfc0/
[19:30:58] @reception123
[19:34:24] re: exception handler: I think it may just be hiding another SERVFAIL actually
[19:35:07] Orange_Star: The DNS response does not contain an answer to the question: holocron.net. IN AAAA
[19:35:32] https://www.irccloud.com/pastebin/6701Jitb/
[19:35:54] oh, so I'm right :)
[19:37:07] zhacg.wiki is utterly dead
[19:37:29] ummm, is that what you're getting regarding holocron.net?
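The hunch above can be made concrete. A reverse-DNS check like the linked check_reverse_dns.py has to resolve the hostname to an address before it can even form a PTR query, so a domain with no A/AAAA answer falls through to the same "could not be found" warning as a genuinely missing PTR record. A hedged, stdlib-only sketch of the two pieces involved; `ptr_query_name` uses the real `ipaddress.reverse_pointer` attribute, while `classify_rcode` and its severity mapping illustrate the "treat NXDOMAIN as CRITICAL" suggestion from the chat, not the current script's behaviour:

```python
import ipaddress


def ptr_query_name(address: str) -> str:
    """Build the reverse-DNS (PTR) query name for an IP address.

    Works for both IPv4 (.in-addr.arpa) and IPv6 (.ip6.arpa).
    """
    return ipaddress.ip_address(address).reverse_pointer + "."


def classify_rcode(rcode: str) -> str:
    """Map a DNS response code to a monitoring severity.

    NXDOMAIN means the name does not exist at all (e.g. the domain is
    unregistered, or suspended via clientHold), so it is arguably
    CRITICAL; SERVFAIL usually points at the resolver or an upstream
    authoritative server, so WARNING fits better there.
    """
    severities = {"NXDOMAIN": "CRITICAL", "SERVFAIL": "WARNING"}
    return severities.get(rcode, "UNKNOWN")
```

For example, `ptr_query_name("159.255.146.183")` yields the `in-addr.arpa` name the check would look up; distinguishing the two rcodes in the alert text would have separated "zhacg.wiki no longer exists" from "Cloudflare answered SERVFAIL" immediately.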
[19:37:56] kdig holocron.net AAAA through my DNS resolver is returning AAAA records
[19:39:05] RhinosF1: it is still registered tho, and pointed to our nameservers at that
[19:39:09] Orange_Star: why is that a warning for zhacg
[19:39:15] it should not be a warning
[19:39:31] correct
[19:39:36] ye, that's a bug
[19:39:45] file a task under sre-automation
[19:39:56] it's what I tried to say before, I don't think I explained myself well there
[19:40:05] i'll fix that when i do some work sunday
[19:41:39] Orange_Star: ye, holocron.net makes no sense
[19:41:45] wait, something is up with zhacg.wiki
[19:42:06] I'm getting contradictory information from whois and the authoritative nameservers for .wiki
[19:42:14] turns out it's because it is on clientHold
[19:42:18] Orange_Star: it has no NS record
[19:43:04] idk why you're not getting records from holocron.net honestly
[19:43:12] neither do I
[19:43:13] everything looks okay from my recursive DNS
[19:43:45] Orange_Star: I can browse the wiki so it can't not have records
[19:44:10] well, I can get a TLS error
[19:46:25] my dig does give me NOANSWER though for AAAA
[19:47:26] You should call your recursive DNS provider honestly, something is up with their server
[19:48:16] wait
[19:48:19] Orange_Star: i found the error
[19:48:25] cp28 doesn't have AAAA records
[19:48:26] 159.255.146.183
[19:48:50] yep, looks like you're right
[19:49:09] now wtf
[19:49:11] why the fuck
[19:49:23] IPv6 is overrated
[19:49:26] the future is IPv4
[19:49:28] why would an org that doesn't even have ipv4 records on all its servers
[19:49:35] not have ipv6 on its outgoing
[19:49:45] Orange_Star: yes, but we're internally screwing ourselves
[19:50:22] I think Agent discovered something wrong with cp28's IPv6
[19:50:32] https://phabricator.miraheze.org/rDNS30c64affa47afb2405178e566cde8450423c30b6
[19:50:36] https://github.com/miraheze/dns/commit/30c64affa47afb2405178e566cde8450423c30b6
[19:50:37] ye
[19:50:51] @agentisai: no, no and no
[19:50:53] That url hurts my eyes
[19:50:56] depool the whole cp
[19:51:09] that causes wonderfuck
[19:51:38] because our entire own infra makes the assumption that ipv6 will exist
[19:54:52] geoip/generic-map/cp-v6 => DOWN
[19:54:56] may work
[19:55:01] per wikimedia docs
[19:55:09] what is that??
[19:55:36] not cp
[19:56:12] geoip/generic-map-v6/lu => DOWN
[19:56:17] that will work
[19:56:18] i think
[19:56:30] Orange_Star: how to depool stuff
[19:56:39] not by removing stuff from config like agent did
[19:59:33] Orange_Star: I think https://github.com/miraheze/dns/pull/487/files would make it serve ipv6 from another DC
[19:59:59] if not, we need to depool the entire DC
[20:00:09] because that's gonna go weird
[20:00:32] https://phabricator.miraheze.org/T11768 filed btw
[20:04:31] ty
[20:07:18] cc @paladox
[20:07:30] re cp28 ipv6
[23:34:28] ugh, why does the cvtbot not load automatically in a systemd service, only if I manually start it...
[23:36:05] On the bright side, I finally removed the mono dependency from it... but seriously I don't get why it won't run in systemd...
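On the depool question: the `geoip/generic-map-v6/lu => DOWN` line quoted above refers to gdnsd's admin_state mechanism (the approach in the Wikimedia docs mentioned in the chat). Forcing a resource down in the state file makes the geoip plugin answer from the next-best datacenter in the map's fallback order, instead of editing records out of the zone files the way the linked commit did. A sketch, assuming the IPv6 map is really named `generic-map-v6` and the affected datacenter key is `lu` (both taken from the chat, not verified against Miraheze's config):

```
# gdnsd admin_state file (path and exact key are assumptions)
# Force the v6 geoip map's "lu" datacenter down; v6 queries are then
# answered from the next datacenter in the map's fallback order.
geoip/generic-map-v6/lu => DOWN
```

The advantage over deleting AAAA records from config is that it is a runtime state change: removing the line (or marking it UP) repools the datacenter without a DNS deploy.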
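On the cvtbot question at the end: a service that works with a manual `systemctl start` but never comes up on its own is often either not enabled (no symlink from the `[Install]` target) or is starting before the network it needs is up. A hedged sketch of a unit file; the unit name, user, and paths are all assumptions for illustration, not the actual Miraheze deployment:

```
# /etc/systemd/system/cvtbot.service (illustrative paths/names)
[Unit]
Description=CVT bot
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=cvtbot
WorkingDirectory=/srv/cvtbot
ExecStart=/srv/cvtbot/cvtbot
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After writing or changing the unit, `systemctl daemon-reload` followed by `systemctl enable --now cvtbot` is what actually wires it into boot; a unit missing its `[Install]` section, or one that was started but never enabled, shows exactly the "only runs when I start it manually" symptom described.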