[00:38:55] [1/2] Question, I'm planning on transferring my wiki from Fandom to Miraheze. I got the XML file dump, but I'm having to download the images all separately, manually.
[00:38:55] [2/2] If I were to do the XML import so the pages and everything transfer, will I be able to mass upload all the files with their same file names at a later time and they'll just fill into their spaces on the pages? Because the "[[File:" stuff is still in the source?
[00:42:35] Yes
[00:42:51] You may need to purge the page after uploading for it to show
[00:43:11] How many images is it?
[00:45:02] Looking like 547 right now
[00:45:09] It's been a real pain to try to extract them all
[00:45:34] Wait, so I should upload images first?
[00:45:47] I can do that
[00:45:59] pats @WikiBot
[00:46:20] I didn't know this website existed until today, so please fill me in on what that bot does
[00:46:51] Fair
[00:47:14] The bot runs a thingy to be able to mass download both contents and images from a wiki
[00:47:27] Even if the wiki has been closed by Fandom?
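The purge step mentioned above can be batched through MediaWiki's action API (`action=purge`). A minimal stdlib sketch, assuming anonymous purging is allowed on the target wiki (some configs require a logged-in session); the wiki URL and titles are placeholders:

```python
import json
import urllib.parse
import urllib.request

def build_purge_request(api_url: str, titles: list[str]) -> urllib.request.Request:
    """Build a POST to the MediaWiki action API that purges the given pages.

    forcelinkupdate also refreshes the link tables, which is what makes
    newly uploaded files show up on pages that embed them.
    """
    data = urllib.parse.urlencode({
        "action": "purge",
        "titles": "|".join(titles),  # the API caps this at 50 titles for normal users
        "forcelinkupdate": "1",
        "format": "json",
    }).encode("utf-8")
    return urllib.request.Request(api_url, data=data, method="POST")

def purge(api_url: str, titles: list[str]) -> dict:
    """Send the purge request and return the parsed JSON response."""
    with urllib.request.urlopen(build_purge_request(api_url, titles)) as resp:
        return json.load(resp)

# Hypothetical usage after the mass upload (URL is a placeholder):
# purge("https://examplewiki.miraheze.org/w/api.php", ["Main Page", "Some Article"])
```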
[00:47:35] I can confirm the images still exist because the links to them exist
[00:47:39] That's what I've been doing
[00:47:47] Just going to each link and downloading
[00:48:01] No, if it's closed the public can't access it
[00:48:18] Welp, back to my version of this grind then
[00:48:43] Nobody seems to know how long after a closure Fandom starts wiping the files
[00:48:47] But for now they're all here
[00:48:49] the bot exists to archive wikis on the [[Internet Archive]] specifically, so if/when they go down, the wiki may be preserved
[00:49:06] By "closed", can you access the wiki?
[00:49:26] Not at all
[00:49:39] But the static wiki image links work if you put in the MD5 dash code and the filename
[00:50:00] So I made this Excel script that recreates the links for me
[00:50:13] That's pretty smart
[00:50:29] I requested from Fandom to just give me the files, but they said they wouldn't for "licensing", which just cemented that I don't really want to make wikis for them anymore
[00:50:37] Crazy that I can't access my own files
[00:51:01] @WikiBot isn't equipped to do that, but you could absolutely write a quick and dirty Python script to mass download those files
[00:51:13] That's the plan
[00:51:23] Glad to know just dumping them onto the new wiki will fill them in, though
[00:51:36] But I should do this right
[00:51:41] When the new wiki actually exists
[00:53:59] Uuuuuuuh.
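The "MD5 dash code" above is MediaWiki's hashed upload directory: files live under `/<x>/<xy>/`, where `x` and `xy` are the first one and two hex characters of the MD5 of the filename (spaces replaced with underscores). A quick and dirty sketch of the kind of Python script suggested here; the `static.wikia.nocookie.net` URL pattern is an assumption based on how Fandom currently serves files, and the wiki name and filenames are placeholders:

```python
import hashlib
import urllib.request
from pathlib import Path

def fandom_image_url(wiki: str, filename: str) -> str:
    """Rebuild a Fandom static image URL from a bare filename.

    MediaWiki stores uploads in a hashed directory layout derived from
    the MD5 of the filename (with spaces converted to underscores).
    """
    name = filename.replace(" ", "_")
    md5 = hashlib.md5(name.encode("utf-8")).hexdigest()
    return (f"https://static.wikia.nocookie.net/{wiki}/images/"
            f"{md5[0]}/{md5[:2]}/{name}")

def download_all(wiki: str, filenames: list[str], dest: str = "images") -> None:
    """Fetch every file in the list into a local directory."""
    Path(dest).mkdir(exist_ok=True)
    for filename in filenames:
        url = fandom_image_url(wiki, filename)
        target = Path(dest) / filename.replace(" ", "_")
        try:
            # may 404 once Fandom wipes the files after closure
            urllib.request.urlretrieve(url, target)
            print("ok", filename)
        except OSError as err:
            print("failed", filename, err)

# Placeholder usage; feed it the real filename list extracted from the XML dump:
# download_all("examplewiki", ["Example 1.png", "Example 2.jpg"])
```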
[00:54:05] not sure actually
[00:54:11] how many pages?
[00:55:39] 301
[00:56:07] If you're saying I should purge pages after file upload to make them show up
[00:56:13] It makes more sense to me to do images first then
[03:15:35] [1/2] BWAAAAAAAAAAHHH
[03:15:35] [2/2] https://cdn.discordapp.com/attachments/407537962553966603/1357191775990841404/document_5388564530471137800.mp4?ex=67ef4ed6&is=67edfd56&hm=ef9f25d08367acc85c0948642be6c53df42a525db9125b7a94854134f96e560b&
[03:18:44] Its smile is vicious 😮
[03:51:13] [1/2] is anyone else's Miraheze not loading images?
[03:51:13] [2/2] https://cdn.discordapp.com/attachments/407537962553966603/1357200744838725863/Screenshot_2025-04-02_215052.png?ex=67ef5731&is=67ee05b1&hm=d4e2c3023f2e0453129d0748c00a9426445af4c9aef88c1fe2f8f7c1dc1a7501&
[03:51:21] like all over the site
[05:00:28] you are the second person reporting this in the last 24 hours
[08:41:02] [1/2] Is there a better way to do this?
[08:41:02] [2/2] https://cdn.discordapp.com/attachments/407537962553966603/1357273678282231930/IMG_2666.png?ex=67ef9b1d&is=67ee499d&hm=3dd0c472574a83aca85d6e110dede59b962d43d2d4358bf50e2c33894395cdf1&
[08:41:14] The onlyinclude + includeonly
[08:41:31] I don't want a broken template transcluding on the template's page
[09:08:53] huh
[09:09:04] not sure what you're trying to achieve here
[09:16:30] I want the template to only transclude what's between the tags, and I don't want it to appear when not being transcluded
[09:16:42] I was just wondering if there's like a combined tag or something that could be used instead
[10:16:08] why do you wrap them in each other?
[10:16:57] you mean like, the template should be completely invisible on its own Template: page?
[10:55:14] Yeah
[13:40:57] Is there a way that I can protect all of the templates and modules without protecting the site?
[13:41:51] namespace settings, pick Template and Module and limit editing to the editinterface permission
[13:45:49] Please explain what those are.
[13:49:06] [1/7] namespaces are types of pages on wikis
[13:49:06] [2/7] templates are in the Template namespace, have the `Template:` prefix in their names, and so on
[13:49:06] [3/7] look at the admin sidebar - you should see a "Manage this wiki's namespaces" link - click on it
[13:49:07] [4/7] a new page will open with a dropdown menu - choose Template and load it
[13:49:07] [5/7] there should be a dropdown setting, something like "who can edit this namespace" - you need to pick `editinterface`
[13:49:07] [6/7] save changes
[13:49:07] [7/7] same process for Module
[13:51:15] So, a user right?
[13:52:07] probably, I can't check rn
[14:00:56] I'm new here, I want to know how to attach CSS and JavaScript to a template?
[14:04:32] [1/2] JavaScript will work only through the `MediaWiki:Common.js` page
[14:04:32] [2/2] CSS can be defined at the `MediaWiki:Common.css` page or subpages created w/ [[mw:Extension:TemplateStyles]]
[14:05:05] thx
[14:41:00] Is there any reason to use the og DPL over DPL3?
[14:42:09] DPL3 is more advanced, but it's up to you to enable which version you'd like
[14:47:25] DPL isn't supported anymore iirc?
[14:47:46] didn't Cosmic take over DPL3?
[15:03:01] you only need the includeonly tags
[17:01:44] Do templates carry over to Miraheze from Fandom in the XML dump file?
[17:01:49] I won't have to create them all again?
[17:05:42] they do, the only hiccups would be making small edits to infoboxes to start working along w/ the PortableInfoboxes extension
[17:06:19] Can you elaborate a bit more on that? What is that extension?
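To illustrate the TemplateStyles route mentioned above (the page names here are made up for the example): you create a CSS subpage of the template with the "sanitized CSS" content model, then pull it in from the template with the `<templatestyles>` tag the extension provides.

```
/* Template:Example/styles.css — a subpage with the "sanitized CSS" content model */
.example-box {
    border: 1px solid #a2a9b1;
    padding: 0.5em;
}
```

```wikitext
<!-- At the top of Template:Example -->
<templatestyles src="Example/styles.css" />
<div class="example-box">{{{content|}}}</div>
```

Unlike `MediaWiki:Common.css`, the styles are only loaded on pages that actually use the template, and sanitized CSS can be edited without interface-admin rights.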
[17:08:21] [1/4] PortableInfobox is a method of coding/making infobox templates; it was developed by Wikia/Fandom, and most of the templates over there are like that
[17:08:22] [2/4] it's pretty neat
[17:08:22] [3/4] Miraheze has this extension too, you gotta enable it in admin settings first
[17:08:22] [4/4] but for some reason, PI templates look like raw code after import and need "a kick" w/ a small edit to start working/looking nice
[17:18:25] Oh ok
[17:18:40] I don't think I made any of those on the Fandom wiki, so I shouldn't have to worry about them, right?
[17:25:38] there might be default ones, but if you haven't used them before, sure
[17:35:48] Also saw a user report it on our wiki an hour ago.
[17:38:11] better make a Phorge task
[20:22:51] If the onlyinclude is removed, then it includes the documentation and everything else on the page, not just the template
[20:41:59] wrap everything you don't want to display, like the documentation, in noinclude tags
[20:41:59] Then that's just doing the same thing but with different tags
[20:41:59] a:CerberLoading1:
[20:41:59] So there was no point changing it in the first place
[20:41:59] Since before it accomplished the same thing
[20:41:59] every time I get an everyone ping here my heart beats about 50% faster
[20:41:59] I'm glad it's a good thing 🙂
[20:41:59] yoooooo
[20:41:59] What do you think I'm like as tech
[20:41:59] hi yall
[20:41:59] Yet somehow, we already have a thumbs down on that announcement xD
[20:41:59] And it says "are we down"
[20:43:31] Wiki trauma…
[20:48:37] hello, I also asked this on support entries but maybe it would be good to write here too. Is there a way to make dark mode the default on my wiki for everyone?
[20:52:23] Miraheze shutdown scare 2.0: April Fools edition
[21:00:49] in additional settings I think
[21:09:23] it didn't work, will it take some time to show for unlogged visitors?
[21:09:43] because it also doesn't update some changes I've made recently
[21:16:03] probably?
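A sketch of the template layout being debated above (the template name and content are made up): `<onlyinclude>` limits transclusion to what is inside it, `<includeonly>` hides its contents on the template's own page, and `<noinclude>` is the usual way to keep documentation off transclusions. Nesting the first two is what makes the template body both the only thing transcluded and invisible on the `Template:` page itself.

```wikitext
<!-- Template:Example -->
<onlyinclude><includeonly>{| class="infobox"
! {{{title|}}}
|}</includeonly></onlyinclude>
<noinclude>
== Documentation ==
This part renders on Template:Example itself but is never transcluded.
</noinclude>
```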
[21:16:44] okay, I'll wait and look again later. Thank you!
[22:13:12] is there a class in MediaWiki or the Vector skin for sans-serif fonts?
[22:39:59] Heads up @Meta Administrators and @Stewards, I'm planning on adding https://meta.miraheze.org/wiki/Template:Autoarchive/config (probably a slower config though) to the Steward request pages to have [[User:BeeBot]] take care of that jazz. Anyone have issues or objections to that?
[22:50:09] I'd prefer it over my manual archiving when I have time
[22:50:14] Actually, the default of 28d is probably way too long for RC and UD
[22:50:31] Anything that's been resolved for 7+ days can be archived
[22:50:55] I'd say 14 days, 7 if we can get it to respect pending/on hold and not archive them
[22:51:15] That may need to be a fork/PR to Pywikibot or a second bot, soooo
[22:51:40] I'm not sure how Agent's bot was doing it before
[22:52:16] It was the same Pywikibot instance
[22:52:27] BeeBot is that same instance but renamed without DSRE authorization
[22:53:49] Oh? lmao
[22:54:02] No one brought that up for
[22:54:30] 8 months :p
[22:54:42] CosmicAlpha didn't know until 5 months after the fact
[22:54:45] So it does have checks for status?
[22:54:53] it never did, no
[22:55:10] Ah
[22:55:18] Would be neat to add if possible
[22:55:29] Idk if we could upstream it
[22:55:37] project fork maybe?
[22:55:51] That's messy
[22:55:55] Huh
[22:56:02] perhaps just forking the archivebot script
[22:56:11] yeah
[22:56:15] for now
[22:56:19] 14d it is, ig
[22:56:52] still waiting for that deletion request 2 go thru
[22:57:24] time for an AI bot to clerk SR
[22:58:32] Nah, I'll just make a sock account to run for Steward
[22:59:14] It's all fun and games until you get checkusered
[22:59:31] a:waaa:
[22:59:55] i reeeeeally don't want 2 sound impatient but like. the wiki in question is undoubtedly almost entirely copy-and-pasted.
it has been like 10 days since i first brought it up here, and the deletion request queue still has it marked as "pending" despite more recent requests being taken care of. there are people waiting on this 2 go through
[22:59:56] Agent, do you know the story of Croatian Wikipedia?
[23:00:48] I do
[23:01:04] It's crazy that such a wiki can exist within the Wikipedia ecosystem
[23:01:26] on Miraheze, we would've intervened like the U.S. intervenes in Latin American elections
[23:01:31] The Kubera story was insane
[23:01:46] [1/2] like???????
[23:01:46] [2/2] https://cdn.discordapp.com/attachments/407537962553966603/1357490288846700625/image.png?ex=67f064d9&is=67ef1359&hm=a623b20da16e2d64f580db5535ead8c0b4c6ecca1e8e0ee5d426db2010a812eb&
[23:02:17] oh well then why don't you just say so
[23:03:07] I think they did
[23:03:51] yeah, but it's the learningblocks wiki case
[23:03:56] as i said, i did not want 2 sound impatient, as i had the request in the queue. seeing it still marked "pending" despite other requests made after it being dealt with is what made me bring it up here
[23:05:51] if something requires more thought than just a simple click of a button, then it'll be delayed
[23:06:03] this was more than just rubber-stamping a request like the rest are
[23:06:13] I think Wikimedia just never expected something of this scale to actually happen, + not caring much about other language projects maybe.
He also had socks (or cronies, I think) with CheckUser locally, so a global CU never went in to flag it
[23:06:30] anyway, the wiki has been deleted
[23:06:47] If I recall, the only reason he got caught was because he socked… on Wikimedia Meta-Wiki
[23:06:47] no yeah, that wiki seems to have had rot to the core
[23:07:03] for sure, no one thought that a wiki could be compromised so institutionally
[23:07:04] thank you
[23:07:06] Nearly the entire editorial base was Kubera's laundry basket
[23:07:16] Or his friends
[23:07:37] I was talking about this a bit last month at WikiWednesday
[23:07:43] I wouldn't be surprised if Wikimedia primarily focuses on their biggest wiki, enwiki
[23:07:54] anything that's not a Wikipedia is basically irrelevant either way lol
[23:08:07] Commons is something
[23:08:11] Wikinews
[23:08:12] that too
[23:08:17] Rip
[23:08:17] Wikinews is irrelevant
[23:08:31] poor old Wikivoyage is also basically not really used by the public
[23:08:39] Wiktionary we give a pass because of good old Google
[23:08:48] Wiktionary is my go-to
[23:08:58] some folks from WikiNYC have been trying to work with Wikinews
[23:09:11] Wiktionary is my go-to for etymologies
[23:09:20] Wikisource, -books, -versity
[23:09:21] I was knee-deep in that when I began to study French
[23:09:30] Wikisource is sort of for nerds
[23:09:34] Mediawikiwiki doesn't really count
[23:09:36] Wikibooks is irrelevant
[23:09:40] as well as Wikiversity
[23:09:46] They exist just to exist
[23:09:51] It's an archive
[23:09:56] exactly
[23:09:59] nerds
[23:10:11] I do use it to read executive orders though
[23:10:11] I'd love to see them grow and find a use though
[23:10:15] Wikispecies
[23:10:16] it's much better organized
[23:10:26] oh yeah no, that's peak nerd
[23:10:41] Wikifunctions is a whoooole other can of worms
[23:11:07] I forgot about Wikidata
[23:11:13] That's a pretty important backbone
[23:11:19] even though it is pretty nerdy
[23:11:31] and let's not start on Wikiquote
[23:12:14] Yeah
[23:12:24] Makes lots of cool stuff
[23:12:28] like WikiShootMe
[23:12:32] /srs
[23:12:39] I wonder if it's easy to do a hat-collection speedrun on some of those smaller wikis
[23:12:52] I know simplewiki had a big problem where they'd promote anything that breathes
[23:13:04] a:sussywussy:
[23:13:16] I made a few edits on simple, I think
[23:13:38] [[w:simple:collateral (finance)]]
[23:14:01] Longest simplewiki article
[23:14:15] Shame
[23:14:22] It has real potential
[23:17:01] where were we again
[23:17:39] oh yes
[23:18:33] all fun and games until Raidarr suspects you're BZPN and checks you \:p
[23:19:01] 😂
[23:20:53] Use my main to unblock my open proxy :Troll:
[23:33:16] Slightly less joking: maybe using the shiny new Steward appointment policy to grant WM / deputize GAs to clerk RC and maybe permissions (the right would need to be added) would be a better solution to lowering Steward workload while folks are busy
[23:33:43] 🧐
[23:34:29] Its scope fits, I think, at least for the button-pressing ones that don't require as much judgement
[23:44:47] @notaracham @agentisai added autoarchive for 14d on [[SR/G]]
[23:44:56] let's see how that works next time Bee runs
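Assuming the Autoarchive template follows the standard Pywikibot archivebot (MiszaBot-style) parameters, the 14-day config added above would look roughly like the sketch below; the exact parameter names and archive page format on Miraheze's Template:Autoarchive/config may differ:

```wikitext
{{Autoarchive/config
|archive = Steward requests/Global/Archive %(counter)d
|algo = old(14d)
|counter = 1
|maxarchivesize = 250K
|minthreadsleft = 2
}}
```

`algo = old(14d)` archives any thread whose newest timestamp is older than 14 days, which matches the discussion above about the bot having no notion of pending/on-hold status: age is the only check.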