[04:10:11] Is there a technical limitation to transcluding from the dev wiki? Caching aside, it may require some special treatment for transcluding the page itself vs transcluding the page as a template?
[05:29:48] Scary transclusion is awful
[05:34:21] Hmmm. I guess the alternative would be making dev wiki exports/imports easier. The current way of doing it doesn't work well. We need a gadget that reads dependency data from the page and then exports these (and not all the Template:Documentation junk). Something to add to the TODO list I guess.
[08:22:33] On Fandom Dev I kept asking Staff to add Dev to $wgCrossSiteAJAXdomains so we can make a script that easily installs and configures things without having to ask the user "copy this here" 5 times
[08:23:13] They never responded, but for you it might be an option worth considering in the long term
[08:23:49] It won't work for custom domains, but at least it's something
[08:26:52] User clicks a button, selects their wiki, optionally goes through a configuration process, then the script copies/imports things into their wiki and it's done
[08:43:41] There is already an import button gadget which I made a few months ago. I already had automatic imports in mind back then, but I didn't implement it
[08:44:15] So the wiki selection already exists; the main thing that would need to be added is the cross-wiki API call
[08:58:05] And also the configuration required to make that call, I suppose
[09:13:28] Yep
[13:58:43] [1/2] Oh this config is really nice. Thanks for bringing it up. It sounds like a huge security problem, but if it's limited to the dev wiki it should be fine.
[13:58:43] [2/2] I had thought of showcasing gadgets that require custom JS on dev at some point, but this could discourage that, since an XSS vulnerability found on dev can affect every other Miraheze wiki, so the risk is disproportionately large.
[13:59:28] Hmmm. It could go to the original domain, but I don't know how that will interact with wikis on 301 redirects.
[14:00:52] Ah so it was you. The button is very nice to have. The only problem is MediaWiki's dependency resolution system, which includes a bunch of useless documentation templates but not modules.
[14:04:37] The main problem with those cross-site requests is that dev interface admins would need to be trusted as much as people with global roles, since the JS they write can affect other wikis
[14:08:50] or do it like fandom, where dev wiki scripts can be edited by anyone but have to be manually submitted for review by staff. probably not manageable when it's just volunteers working, though
[14:09:48] i mean with 1.44 the option for marking revs as reviewed is there
[14:10:05] not sure if that works with non-main content tho
[14:11:25] nvm i think that's just approved revs lmfao
[14:14:21] wait actually fandom does the review process for all wikis, so if it's just the dev wiki it might be possible
[14:29:58] Or we could use [a GitHub repo](https://github.com/lihaohong6/MirahezeDevScripts). I'm thinking of transferring some of my other scripts to dev, and this repo would make my life a lot easier. Still need to write the automated syncing code, though.
[14:33:09] [1/2] Yeah. The alternative I thought of was to fetch all manually-marked dependencies and then export them in an XML file so that it can be imported on another wiki. It's not as efficient, which is unfortunate.
[14:33:09] [2/2] It's still a "one-click" solution in the sense that users don't have to manually import every file (such as CSS pages).
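
For reference, the one-click install flow described in the 08:26 and 08:44 messages could look roughly like the sketch below. It assumes the dev wiki's domain were added to the farm-wide `$wgCrossSiteAJAXdomains`, so that `mw.ForeignApi` can make authenticated CORS requests to the user's wiki. The target API URL, page list, and function name are placeholders, not an existing gadget.

```typescript
// Gadget-style sketch running on the dev wiki: read each page from dev,
// then write it to the user's wiki via an authenticated cross-site request.
declare const mw: any; // MediaWiki JS environment (provided by ResourceLoader)

async function installGadget(targetApiUrl: string, pages: string[]): Promise<void> {
  await mw.loader.using(['mediawiki.api', 'mediawiki.ForeignApi']);

  const devApi = new mw.Api();                        // local (dev wiki) API
  const targetApi = new mw.ForeignApi(targetApiUrl);  // user's wiki; origin/CORS handled

  for (const title of pages) {
    // Read the page source from dev.
    const res = await devApi.get({
      action: 'query',
      prop: 'revisions',
      rvprop: 'content',
      rvslots: 'main',
      titles: title,
      formatversion: 2,
    });
    const text = res.query.pages[0].revisions[0].slots.main.content;

    // Write it to the user's wiki (needs a CSRF token from that wiki).
    await targetApi.postWithToken('csrf', {
      action: 'edit',
      title,
      text,
      summary: `Installing ${title} from dev`,
    });
  }
}

// Hypothetical usage after the user picked their wiki in the import-button gadget:
// installGadget('https://example.miraheze.org/w/api.php',
//   ['MediaWiki:Gadget-Example.js', 'MediaWiki:Gadget-Example.css']);
```

This is also where the 14:04 concern comes from: the same whitelisting that lets this script edit the user's wiki lets any JS served from dev act on other wikis on the user's behalf.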
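The XML-export alternative from the 14:33 messages might look something like the following. The `<page>/dependencies.json` subpage convention is a hypothetical assumption; the `export`/`exportnowrap` parameters of `action=query` are standard MediaWiki API features that return a Special:Export-style dump.

```typescript
// Sketch: collect a gadget's manually marked dependencies and hand the user
// one XML file that Special:Import on the destination wiki will accept.
declare const mw: any;

async function exportWithDependencies(rootPage: string): Promise<void> {
  await mw.loader.using(['mediawiki.api', 'mediawiki.util']);
  const api = new mw.Api();

  // Hypothetical convention: dependencies are a JSON array on a subpage.
  const depRes = await api.get({
    action: 'query', prop: 'revisions', rvprop: 'content', rvslots: 'main',
    titles: `${rootPage}/dependencies.json`, formatversion: 2,
  });
  const deps: string[] = JSON.parse(depRes.query.pages[0].revisions[0].slots.main.content);

  // Ask the API for raw export XML of the page plus its dependencies.
  const params = new URLSearchParams({
    action: 'query',
    titles: [rootPage, ...deps].join('|'),
    export: '1',
    exportnowrap: '1', // response is the bare XML, not a JSON wrapper
  });
  const xml = await fetch(`${mw.util.wikiScript('api')}?${params}`).then(r => r.text());

  // Offer the XML as a single download.
  const blob = new Blob([xml], { type: 'application/xml' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = `${rootPage.replace(/[\/:]/g, '_')}.xml`;
  link.click();
}
```

The user still has to run Special:Import themselves and hold the relevant rights on their wiki, which is why this is the less convenient of the two approaches, but it needs no cross-site configuration at all.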
[21:29:07] [1/3] is there anyone that can run
[21:29:07] [2/3] `find . -type f > everyimage.txt`
[21:29:08] [3/3] on the images folder for the pcb wiki and get me that text file? 2x2 has been taking far too long to actually get the images out, and the images are still accessible, as i've been able to see static content hosted there. ex. https://static.miraheze.org/polcompballwiki/1/1a/0q0gwywfq2s41.png
[21:34:26] the gzip archive of the images would not decompress, so i think the wget method is the next best thing
[21:36:41] [1/2] ^ pertains to the shut-down polcompball wiki. this is being asked in place of the advice given to the main known representative, 2x2, who was advised to make a phorge task for the xml/images during the period they had to retrieve that data while the wiki was up.
[21:36:42] [2/2] the purpose is to use wget to scrape the images off the wiki, based on files assumed to still be public and available from the wiki
[21:37:41] as for who i am, i own the subreddit and am part of the gc to restore the wiki
[21:50:10] We don’t use a classic file system for file hosting; we use OpenStack Swift. However, if you need an image dump, the bureaucrat can request that from us stewards
[21:51:05] alright, so you cannot get a list of every image file link in the wiki
[21:51:11] no
[21:51:33] alright, well the dump 2x2 downloaded had something break in it
[21:51:45] We can make a new one if needed
[21:51:53] that would be golden
[21:52:29] man, linux hosting is so much more convenient, ain't it (except for getting worldwide fast first contentful paints)
[22:02:05] cloudflare is your friend for that 😛
[22:02:13] sorry, didn't mean to ping; accidentally hit send before toggling it off
[22:03:16] it does help out
[22:03:40] but non-cached pages end up having slow times
[22:40:17] yo, any updates on thechurchofthestatuewiki?
[22:40:23] not sure if it's safe to import
[22:47:04] and remembering last time....
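
On the earlier image-listing question: Swift has no directory tree to run `find` over, but an equivalent listing is exposed through the MediaWiki API's `list=allimages`, as long as the wiki's API is still reachable (which may not be the case for a shut-down wiki). A rough sketch, with the wiki's API URL as a placeholder:

```typescript
// Sketch: build the "everyimage.txt" list from the API instead of the
// filesystem. Each entry is a direct static.miraheze.org URL suitable for wget.
const API = 'https://example.miraheze.org/w/api.php'; // placeholder URL

async function listAllImageUrls(): Promise<string[]> {
  const urls: string[] = [];
  let aicontinue: string | undefined;

  do {
    const params = new URLSearchParams({
      action: 'query',
      list: 'allimages',
      aiprop: 'url',
      ailimit: 'max',
      format: 'json',
      origin: '*', // anonymous CORS request
    });
    if (aicontinue) params.set('aicontinue', aicontinue);

    const data = await fetch(`${API}?${params}`).then(r => r.json());
    for (const img of data.query.allimages) urls.push(img.url);
    aicontinue = data.continue?.aicontinue; // follow API continuation
  } while (aicontinue);

  return urls;
}
```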