[00:05:56] why does wiki editing look like this? https://cdn.discordapp.com/attachments/407537962553966603/1189720856331571300/image.png?ex=659f30e4&is=658cbbe4&hm=85a5ab047431596c4bd9679e9690cf8de82418d40640a993ea3f294fef8bab51&
[00:08:43] from my personal experience, I'm 99% sure it is cocopuff
[00:09:25] I had a wiki; cocopuff and dorkboy were on the same IP
[00:10:55] plus the conversational patterns are similar
[00:10:59] Identified as such already, dealt with accordingly
[00:11:03] ah
[00:11:29] Thanks for the additional insight, though. 🙂
[01:34:49] is it just me that always has problems importing Wikipedia articles?
[01:34:54] or is it a common issue?
[01:35:08] not asking in support because it's not a specific case I'm talking about right now, just curious
[01:35:45] Not just you; a former user even wrote an essay on it, which I archived: https://meta.miraheze.org/wiki/User:NotAracham/RaidarrArchive/Don%27t_import_Wikipedia
[01:36:44] I meant like it doesn't work lol
[01:37:11] Wikipedia articles are very helpful for the scope and theme of my wiki, which is ultra specific
[01:37:20] because they reduce the workload substantially
[01:37:28] I don't do it for templates though
[01:37:52] Do you mean doesn't work as in you get a 503 error, or something else?
[01:38:07] usually either can't import from file or, more commonly, a 504
[01:40:20] Might be worth requesting an import then, especially if you can get all the articles you want in only one or a few exports.
[01:41:29] Do I need to request templates separately, or will it automatically do the ones required for the pages?
[01:41:29] also
[01:41:32] this sounds really silly
[01:42:04] but I didn't want to do that because I didn't want to create work for the volunteers 😅
[02:00:35] Hey, I've been worried recently about the state of Miraheze and whether it's going to be able to survive much longer. The site, donations, and leadership don't seem to be in the greatest state right now. Will Miraheze be alright?
[02:02:52] yes
[02:06:32] so there's no chance of Miraheze shutting down in the near future?
[02:33:59] Both teams are well positioned, should B or C pass, to keep things going and restore momentum on some delayed improvements and backlogged requests. I won't say never, but a shutdown seems very unlikely at present.
[02:45:34] If you mean the wiki farm, sure.
[02:45:53] Miraheze Limited? Likely gone.
[02:47:33] ^Miraheze Limited is for the legal stuff, not the wiki farm
[02:55:29] More specifically, Miraheze Limited is the company that currently runs Miraheze
[10:34:23] Miraheze Limited is only the company in the UK. So, with Miraheze being incorporated in the US, maybe this new life is like a phoenix: MH Limited has to die, and from its ashes a new Miraheze will be born, incorporated as a new company.
[10:40:05] haha, I like that analogy 😄
[10:40:39] Maybe Miraheze Unlimited will rise from the ashes ...
[10:44:59] With an unlimited supply of volunteers to work on the issues 😄
[10:56:40] Unlimited may be a little ambitious, but hopefully a lot more than what we have now
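On the Wikipedia-import question above: Wikipedia's Special:Export has an "Include templates" option that bundles the templates a page transcludes into the same XML dump, so they don't need to be requested separately. A rough sketch of the export URL follows; the page name is a placeholder, and the query parameters mirror the form's checkboxes, so they are worth double-checking against the Special:Export form itself:

```
# Export one page plus the templates it transcludes as XML,
# for later upload through Special:Import on the destination wiki.
# "Example_page" is a placeholder; multiple pages can be listed.
https://en.wikipedia.org/wiki/Special:Export?pages=Example_page&templates=1&curonly=1
```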
[14:27:22] <.tounae_official> But what I'm curious about now is whether Miraheze would be temporarily inaccessible until the new company takes over and completes the relevant registration process
[14:28:37] <.tounae_official> Or would the volunteer team still be working during the transition
[14:29:26] <.tounae_official> <:squint:755498402346827946>
[14:34:23] there's no reason for Miraheze to pause operations
[14:34:32] I think
[14:35:11] it still did when Zppix and Labster were working on their transition
[14:35:17] that's option B
[14:35:23] Likely some downtime if we are going to be migrating to our lord and savior the cliff
[14:35:28] Wait no
[14:35:30] The cloud
[14:35:33] ah, right
[14:35:35] fuck
[14:39:15] <.tounae_official> 🗿
[14:42:22] hi, ages ago I requested an import to my wiki, but I didn't realise at the time that it was just the current pages. This was May 2023, I think. I have a full dump from the same day, but there have been loads of edits since then on the new wiki (not the old!). If I requested an import on Phabricator, would it wipe all the edits made since then, or just backfill the old ones?
[14:42:32] whoa, big message, sorry
[15:42:38] Probably less than a day, I would imagine. Isn't the cloud supposed to be easy to set up, where all you have to do is create an account, choose a plan, and upload files?
[15:43:24] No matter how easy the cloud platform is, Miraheze's infrastructure is still super complex
[15:43:53] I got a cloud server and managed to get my Discord bot running in a few minutes
[15:43:58] but for this?
[15:44:00] God knows
[15:45:02] They may have to move one or a few servers to the cloud at a time, causing intermittent downtime or outages on some wikis until the migration is complete.
[15:46:31] Sorry, I don't know much about cloud servers. I use MongoDB in the cloud for an app I am working on, but that's about it.
[15:46:36] Yup
[15:47:18] Same lol, I have a RackNerd VPS running my Discord bot and have used Supabase's Postgres server, but beyond that :p
[15:47:27] wanna try other stuff though
[15:48:51] I plan on using either Firebase or Azure. I think both of them have free plans for when you're just getting started, and you pay just for what you need.
[16:08:11] Haven't used Firebase but hear good things
[16:08:20] Vendor lock-in is a bitch
[16:09:25] there are lots of open-source backends as well. If I made a web app I might use Supabase, or a VPS on something like EC2 or GCP running a PocketBase backend
[16:09:38] imo I don't think the host matters too much
[16:09:53] some are static VPSes and some scale with use, I think
[16:33:48] Azure is deadly expensive.
[16:45:26] How do I change my wiki's logo? I've looked everywhere the manual says: https://cdn.discordapp.com/attachments/407537962553966603/1189972389417734214/image.png?ex=65a01b26&is=658da626&hm=203acc408b15fc20fd142a567937c6fae1bc72841d4f543b62bf67f3e6091a51&
[16:49:31] Special:ManageWiki/settings > Styling > paste the raw URL of the image you want as your logo in the relevant logo setting
[16:52:37] I can't find the Styling section
[16:53:46] It's one of the tabs in the ManageWiki page I mentioned; make sure you're in ManageWiki/settings and look for a tab that says "Styling"
[17:02:36] oh, I was on the wrong page
[17:02:39] thanks
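For reference, that Styling option corresponds to MediaWiki's logo setting, which ManageWiki writes for you. A sketch of the equivalent LocalSettings.php entry, with a placeholder image URL (on Miraheze this is managed through ManageWiki, not edited directly):

```php
// Equivalent LocalSettings.php setting (MediaWiki 1.35+); the URL is a placeholder.
$wgLogos = [
    '1x' => 'https://static.example.org/mywiki/logo.png',
];
```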
[18:36:40] We've done a DC switch with basically zero downtime
[18:36:52] it was a bit rocky
[18:37:21] But we probably would only need to pause wiki creation for a day and go read-only for like half an hour
[18:37:40] The WMF runs in multi-DC mode 11 months a year
[18:38:51] I am more than open to discussing a zero-downtime switchover with the WikiTide team
[18:38:56] (Cc @cosmicalpha )
[18:42:59] I think we can even potentially do some sort of DB replication for the migration; WT has planned to do a multi-DC setup for a while too, but I'm not sure we'll go that direction yet. There might be some intermediate downtime when servers are actually switched over, or latency during migration if we don't do a replication setup. We would definitely aim to have as little interruption as possible with any migration, though. But I've learned that during migrations there is a lot that's unpredictable, so every precaution would have to be taken.
[18:44:34] @cosmicalpha not sure how partial replication would work if we used existing Tide DBs, or whether new ones would be better
[18:44:45] But we definitely want some form of replication
[18:45:00] Yeah, that's something we need to figure out a concrete plan for.
[18:45:14] Latency will always go up with a DC switchover. It's how much that would be my worry.
[18:45:34] It was a lot during ovlon -> scsvg
[18:46:02] But then Wikimedia's multi-DC work is much further ahead than when we did that
[18:46:12] So we might be able to warm some caches more easily
[18:46:26] Yep. Regardless, there would probably be planned downtime... however much actually happens, I hope we could minimize it a lot...
[18:46:30] By switching GET requests for everything except loginwiki over earlier than POSTs
[18:46:41] ovlon -> scsvg had basically none
[18:47:03] I'm more worried about merging CentralAuth, in all honesty @cosmicalpha
[18:58:53] Yeah, that's going to be tricky.
[21:20:07] actually, the full dump is only 300 KB more than the 400 KB "current" dump. Could I import it myself? Would that delete all the history made since that dump?
[21:32:03] importing will not overwrite; entries will be added to the article history. You can request Extension:ChangeAuthor and manually assign each revision to the relevant users.
[21:58:04] Don't you have VMs in South Asia @cosmicalpha
[21:59:46] A cp in Singapore
[21:59:52] Nice
[22:00:04] That would benefit @gelato_affogato
[22:01:35] Much better than US or EU, ofc
[22:02:00] (While still a few thousand km away)
[22:02:17] well, probably more than a few thousand tho
[22:02:46] It would be nice to see a cp return to Asia
[22:03:41] Well, at some stage I considered paying the damn bill again lol
[22:04:24] You should have asked Wikimedia to keep funding it
[22:05:02] We have cps in Australia, GB, US, CA, Poland, and Singapore IIRC
[22:05:23] That's decent
[22:06:05] I'm facing a new problem with my wiki, one I knew I would sooner or later have to deal with: some pages have too much content on them. It's a wall of text, which is especially noticeable on mobile, so I want to reduce the amount of text being displayed while keeping it on the same page. I understand there are two methods of doing this: one is the collapsible box, which I'm already using for something else. The second method I'm trying to get to work, but I can't. I'm referring to this:
<tabber>
|-|First Tab Title=
First tab sample text.
|-|Second Tab Title=
Second tab content goes here.
|-|Third Tab Title=
Third tab content goes here.
</tabber>
[22:06:09] I'd be intrigued to see what the benefit of each is
[22:06:24] WMID? I don't really have much contact there
[22:06:29] At what point does it become cheaper to just use Cloudflare Workers and benefit everyone?
[22:06:46] When Cloudflare offers PHP execution /joke
[22:06:49] Would be nice to see something like we had for fossbots with Cloudflare, where we had metrics from somewhere near each cp to each cp
[22:06:58] We tried Cloudflare. It was worse, not better, and we had to stop using it.
[22:07:08] @originalauthority moving Miraheze behind Cloudflare is something I considered
[22:07:11] Cloudflare or Cloudflare Workers
[22:07:16] @cosmicalpha with Pro or Free?
[22:07:16] What exactly was worse about it?
[22:07:25] Pro is free for FOSS
[22:07:57] I'm very intrigued at it being worse
[22:08:25] Well, if I wanted to pay the bill it'd be Tokyo, not Singapore
[22:08:39] I can't remember exactly what was done. But I do know MediaWiki doesn't like CF caching, and it also caused some caching issues, as CF doesn't like to purge when the cache is supposed to purge, etc. Then there is having to maintain a massive list of CF IPs in the CDN configs.
[22:08:40] Yes, WMID
[22:08:53] Hmm
[22:09:10] I don't have many contacts in the SEA chapters :-p
[22:09:17] Heh, technically cp3 is still online, I just don't think anyone can access it.
[22:09:27] (Mostly the F or WMTW or WMKR)
[22:09:29] The massive list of IPs I can kinda see, but it should be workable tbh
[22:09:42] We might be able to make it work.
[22:09:49] It definitely has its advantages
[22:09:54] The purge not working is interesting
[22:09:54] And disadvantages also
[22:10:34] You need an extension like MultiPurge to purge the cache in CF. And the IP thing can be worked around, IIRC, because you can configure Cloudflare to send the IP in a different header, then configure nginx to use that header as the XFF, and then MediaWiki will just read the XFF like usual.
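A sketch of that workaround, using nginx's stock realip module directly rather than rewriting the XFF header; CF-Connecting-IP is Cloudflare's real client-IP header, and the two ranges are examples only (the full, current list is published at https://www.cloudflare.com/ips/ and every range must be enumerated):

```nginx
# Trust Cloudflare as the connecting proxy and recover the real client IP.
set_real_ip_from 173.245.48.0/20;    # example Cloudflare range
set_real_ip_from 103.21.244.0/22;    # example Cloudflare range
real_ip_header CF-Connecting-IP;     # read CF's header instead of X-Forwarded-For
```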
[22:10:47] if you haven't yet, you should enable the TabberNeue extension to use those tags
[22:11:00] That's an interesting idea... I kinda like that plan.
[22:11:07] I'd advise you to make support threads; in the middle of ongoing convos, stuff gets lost quickly. Have you enabled the TabberNeue extension?
[22:11:28] The only problem with CF is that load balancing can get expensive
[22:11:36] If you want it configured properly
[22:11:37] ah, Chime beat me
[22:11:47] Maybe that was the issue we had....
[22:12:06] I can't remember exactly what the issue was though...
[22:12:15] It won't load balance if you just add servers as A/AAAA records
[22:12:19] In that regard I would reprovision a server to run HAProxy and load balance that way. It would possibly still work out cheaper, but that would obviously have to be investigated.
[22:12:41] HAProxy is definitely something I've actually been looking into as of late.
[22:13:08] It was gonna be used for Thumbor, but I want to expand it to help MediaWiki also.
[22:13:08] I've been using HAProxy with no issues for about 2-3 months; it works quite well and is a breeze to configure with Puppet.
[22:13:41] The WMF use HAProxy for some databases
[22:13:46] And I use it to take the load off nginx, so I have HAProxy doing the SSL termination, and then nginx just has to do the PHP stuff.
[22:13:57] That is nice!
[22:13:58] Yup, https://developers.cloudflare.com/load-balancing/
[22:14:01] Or is that something else
[22:14:10] I used it at fossbots
[22:14:13] It was great
[22:14:16] But expensive
[22:14:21] Even at just 4 servers
[22:14:26] Lol
[22:14:44] And I didn't have it with everything turned on
[22:14:52] There are a lot of options to consider. But we want to do whatever is both affordable and the best for performance...
[22:14:57] > we sometimes use it in different parts of our infrastructure as either a TLS terminator, load balancer, or automatic switchover handler
[22:15:03] Seems to be what the WMF is using it for
[22:15:12] Although I suppose it's a basic pool/depool for WikiTide
[22:15:20] Because I assume you don't have multi-DC
[22:15:29] Although you will when Miraheze comes on
[22:15:57] Probably. There is a ton to figure out in the infrastructure, for sure.
[22:16:16] We want to do things as cleanly as possible and not make a mess of the migration...
[22:17:51] Yeah, I didn't have that enabled yet. I'm now making a test page; it doesn't seem to work so far.
[22:17:59] @cosmicalpha how many mw* do you have for Tide?
[22:18:09] Do you use Varnish
[22:18:16] Nope, CF.
[22:18:22] 2 active with high resources
[22:18:23] Oh
[22:18:25] Up until last week I used Fastly
[22:18:32] Which is basically Varnish in the cloud.
[22:18:39] Oh, you'd be fine for load balancing then
[22:18:40] So right now we do nginx -> varnish -> nginx -> nginx
[22:18:50] give it a couple of min, and check if the syntax is correct
[22:18:55] But they have a minimum spend of $50, and they charge you even if you don't use the $50
[22:19:29] We'd need more eventually, especially with MH traffic, I believe...
[22:19:59] Yeh
[22:20:08] We have 4 CPUs and 4 GB of RAM
[22:20:26] it's working, thank you! Now I have a way to cut down that wall of text 🙂
[22:20:28] It's $5/month/origin above 2
[22:20:43] what for?
[22:20:47] You only have 2 pools though
[22:20:54] Load balancing
[22:21:04] The issue with Cloudflare, I think, is also custom domains. But maybe with Pro that would work?
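A minimal sketch of the HAProxy-terminates-TLS setup described above; the server names, addresses, and certificate path are placeholders, not anyone's actual config:

```
# haproxy.cfg (sketch): HAProxy terminates TLS; nginx/PHP behind it speaks plain HTTP.
frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    mode http
    default_backend mediawiki

backend mediawiki
    mode http
    balance roundrobin
    server mw1 10.0.0.11:8080 check   # nginx + php-fpm
    server mw2 10.0.0.12:8080 check
```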
[22:21:07] Monitoring is fun to configure
[22:21:17] With Pro it should work, if I understand the docs
[22:21:21] Wonder if it would be beneficial to switch to HAProxy on the cps
[22:22:13] Worth applying via https://www.cloudflare.com/en-gb/lp/oss-sponsorship/ early
[22:22:25] It normally takes a very long time
[22:22:36] Ask about load balancing, they might give you a discount
[22:22:49] I've never used that tbf; with Workers, you write all of the config in JavaScript and don't use the web controls, but I think there are different ways you can configure the domains
[22:23:08] Heh, just realized there was an RfC
[22:23:19] (/me ignores all -at-everyone)
[22:23:52] Custom domains can be done, I think
[22:24:18] @cosmicalpha https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-for-saas/plans/
[22:25:14] No custom certs makes me sad
[22:25:28] _(not really)_
[22:25:39] Well yeah, it would mean no custom certs
[22:25:46] As they'd be free anyway
[22:25:52] But that's not a bad thing for most
[22:26:04] But what's their definition of a hostname for billing purposes?
[22:26:29] some people also raised privacy concerns with CF
[22:26:38] that's primarily what pushed us over the edge for returning to Varnish
[22:27:01] If each wiki is its own hostname, MH would be f-ed up because… the size
[22:27:06] The comparison shows no difference between Free and Pro?
[22:27:22] Pro is not really that much of an improvement tbh
[22:27:23] Oh yes, this too. That is why MH never did CF before also, IIRC
[22:27:28] When I used Cloudflare Pro there was nothing more except analytics
[22:27:32] yep
[22:27:40] We tried Pro for WF and it wasn't worth it
[22:28:09] It's a given that there are going to be some privacy issues, since CF must decrypt the traffic for DDoS protection, but I'd argue it's worth it (that's just my view)
[22:28:30] Cloudflare Pro allows a lot more caching rules
[22:28:48] If you're using Workers, though, you don't need rules.
[22:28:55] never really looked into Workers
[22:28:58] My usual way of abusing CF is either 1. a free CDN when I'm out of East Asia, or 2. automatic HTTPS renewals (yeah, I'm lazy and don't want to even deal with `certbot renew`)
[22:29:06] If you're using Workers, Free is fine
[22:29:23] Not sure how WikiTide/Forge did it, but one reason I could never use CF without Workers is that their bypass-cache-on-cookie is an enterprise-only feature.
[22:29:32] is it?
[22:29:35] Yeah
[22:29:49] I believe I saw something similar in page rules
[22:30:18] https://community.cloudflare.com/t/bypass-cache-on-cookie/231541 for reference
[22:30:47] what I'm remembering is the option in Configuration Rules where, if a cookie matches X, then execute Y
[22:31:00] I used that to serve all logged-out users from cache and bypass it for logged-in users
[22:31:17] but it didn't really help out on big wikis
[22:31:29] Yeah, because then you're not caching load.php, are you
[22:32:03] we did, iirc
[22:32:07] I don't think those options were available when I tried. I recall they did release more rules recently.
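A sketch of that bypass-cache-on-cookie pattern as a Cloudflare Worker, which sidesteps the enterprise-only rule; the cookie substring is a placeholder, since MediaWiki session cookie names vary per wiki:

```js
// Cloudflare Worker (sketch): cache anonymous page views, bypass for logged-in users.
export default {
  async fetch(request) {
    const cookies = request.headers.get('Cookie') || '';
    if (cookies.includes('session')) {
      // Session cookie present: go straight to origin, no edge caching.
      return fetch(request);
    }
    // Anonymous traffic: ask Cloudflare to cache the response for 5 minutes.
    return fetch(request, { cf: { cacheEverything: true, cacheTtl: 300 } });
  },
};
```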
[22:32:18] Workers & Pages are pretty decent
[22:33:04] using CF would be nice, but privacy issues + complex cache rules + funkiness with custom domains would be a no-go for us
[22:33:12] custom domains were always hit or miss
[22:33:25] The latter doesn't exist anymore, to my knowledge
[22:33:33] The middle one is kinda fair
[22:33:51] I'd argue they're not that complex, but
[22:33:51] Privacy issues? Like half the web uses CF in some way
[22:33:59] Including the WMF
[22:34:19] All caching can get complex if you make it
[22:34:22] that's true, for Magic Transit
[22:35:09] that's the thing: if you make it 😉
[22:35:26] Correct, ever since the infamous DDoS
[22:35:43] That somehow feels like both forever ago and yesterday
[22:38:14] For me, Orain is
[22:38:37] Wait, it's been over 10 years since I joined Orain lol
[22:41:00] And back to CF… I don't recall the name, but their captcha solution
[22:41:00] good times
[22:41:19] Turnstile
[22:41:23] Also using that, it's pretty neat
[22:41:43] I kinda think… 'if you're using Google for captcha, what's the diff with using CF, huh' but YMMV
[22:41:55] (But that would probably not work for MH because you only get 10 hostnames free)
[22:42:13] Oh yeah, Cloudflare.
[22:42:16] Didn't expect that.
[22:43:02] It is pretty neat though, and no annoying puzzles
[22:44:36] Yeah, who would not love no puzzles