[02:30:51] [1/2] interesting
[02:30:52] [2/2] https://cdn.discordapp.com/attachments/1006789349498699827/1461185839441903808/image.png?ex=6969a2db&is=6968515b&hm=3e216877c087c7be8f92dd2f5d0a574ad4683a4740035c8836dee38fdf56fdc7&
[02:31:03] while trying to delete a page
[02:50:12] The site props just had a blip at the same time, based off it being a 503 error
[03:18:16] I tried purging my cache and opened a rift in time with a black hole forming.
[04:00:33] farting
[04:33:17] Not really? It works normally, for me, at least
[08:00:03] @Infrastructure Specialists icinga is saying some lovely things
[08:00:18] I'm about to arrive at the office so can I have help
[08:01:47] We need to figure out why this keeps happening.
[08:02:43] Yes we do
[08:02:45] That was very brief but it's not good
[08:04:06] There are very few possibilities. It can't be db or basically anything other than networking in some way, because test151 also goes down. I don't understand why only mw and test go down and not phorge, which is also on cp/cf/etc... so I don't know why this can happen for sure.
[08:04:31] It's not db
[08:04:34] That's for sure
[08:04:37] test151 and mw servers keep going down at the same time but everything else is fine.
[08:04:57] Fun
[08:05:04] ^
[08:05:10] why not Phorge is a good question
[08:05:20] this started like a week ago and there aren't too many puppet changes in that timespan
[08:05:31] phorge doesn't go down so I don't see how cp would be the issue either.
[08:05:55] well what else would be the issue
[08:06:26] I investigated this the first time it happened and there were no errors on the appservers
[08:06:41] Wait, do we not use healthchecks for phorge? Maybe that pool will never go down and something else is triggering mw pools to go down.
[08:27:26] Just to note, db drops are still happening with SLEEP
[08:27:40] Though if this started a week ago it can't be that, since I only started on the 10th
[08:36:42] I first noticed it on the 10th.
[08:37:10] But I don't think it's related.
[09:29:53] there's a miraheze office?
[09:32:53] no, his actual work has one
[09:33:06] thought so
[09:33:07] shame
[12:09:45] No, my day job
[12:10:30] @dgox2 we do have an address though
[12:10:36] It's just a registered agent though
[12:10:44] There probably is something there
[12:12:58] Worksop
[12:28:01] not anymore, that was during the MH Limited days
[12:28:06] the registered agent is in Idaho, US now
[14:40:49] [1/4] This technical channel includes asking about Pywikibot?
[14:40:49] [2/4] Someone on my wiki said:
[14:40:49] [3/4] > In Windows, you cannot replace a line break character `\n` while writing.
[14:40:50] [4/4] Do you know a solution for this on Pywikibot while writing?
[14:52:48] wouldn't \r\n work then
[15:01:13] Use Linux 😝
[15:01:21] _/s_
[15:59:42] In my experience Python allows you to use `\n` regardless of whether you're on Windows or not
[19:33:05] [1/2] https://meta.miraheze.org/wiki/Special:RequestWikiQueue/72672
[19:33:06] [2/2] was marked as approved, but I don’t see any indication the wiki creation actually occurred
[20:00:05] Probably a nondescript office building with a singular person whose job is to accept legal process
[21:24:31] Not really actually
[21:57:48] @paladox you or a different outage?
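On the Pywikibot newline question from 14:40 above: replacing `\n` inside a Python string behaves the same on every platform; what differs on Windows is the text-mode newline translation applied when writing to a local file. A minimal sketch, plain CPython and not Pywikibot-specific (the filename is just an example):

```python
# Sketch: "\n" handling on Windows is about file writing, not string replace.
# By default, text-mode writes translate "\n" to the OS line separator
# ("\r\n" on Windows); pass newline="\n" (or "") to keep bare "\n".

text = "line one\nline two\n"

# String replacement itself is identical on every OS:
print(text.replace("\n", " | "))  # -> "line one | line two | "

# Writing a local file: control the translation explicitly.
with open("out.txt", "w", encoding="utf-8", newline="\n") as f:
    f.write(text)  # written with bare "\n", even on Windows

# Reading back in binary mode shows exactly which bytes were written:
with open("out.txt", "rb") as f:
    print(f.read())  # b'line one\nline two\n'
```

If the wiki text is being manipulated as an ordinary Python string before saving (as in typical Pywikibot workflows), the `replace` call above applies unchanged; only local file output needs the explicit `newline=` argument.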
[21:58:17] it could be related (but not sure). We've hit a snag and @cosmicalpha is looking at the servers
[21:58:34] @paladox are you in a safe place to hold while we assess
[21:58:37] @Infrastructure Specialists
[21:58:40] no we aren't
[21:58:53] @paladox okay, try not to destabilise anything too much
[21:59:01] i'll stop the swift-proxy
[21:59:05] maybe that'll fix it
[21:59:05] And just talk out loud
[21:59:10] with the object servers being down
[22:00:03] mw is very sad
[22:00:07] unless this is a problem we've been experiencing occasionally recently
[22:00:10] But it doesn't make sense that test is sad too
[22:00:12] this happens like once a day
[22:00:14] It might be
[22:00:33] i've stopped swift-proxy
[22:00:34] knowing where you are though is good for ruling stuff in and out
[22:00:55] cp171 says all backends are healthy again
[22:01:33] @abaddriverlol a fair few mw's were out of fpm workers
[22:01:49] So it's something slowing php down on both mw and test at the same time
[22:01:54] And that doesn't hurt a db
[22:02:18] [1/2] Might want to change the link as 502 points to error 500
[22:02:19] [2/2] https://cdn.discordapp.com/attachments/1006789349498699827/1461480641961918770/image.png?ex=696ab56a&is=696963ea&hm=b33d69cf8f5d6758c019bd2466442d2a2885082ac3f1be384e5227a8c9e063c4&
[22:02:29] Overall we had more than enough workers
[22:02:48] But individual servers shat themselves
[22:03:33] But why would test be affected? Surely there's no reason for it to run out of FPM workers, nor would it really be used enough to be affected by a php slowdown?
[22:04:20] It's a cf thing I think
[22:04:44] I don't know if we can do it further beyond error class
[22:04:54] Ye I agree
[22:04:59] It makes zero utter sense
[22:05:47] Only reason I could assume is maybe FPM workers aren't being killed/recycled properly?
[22:05:47] It self recovers
[22:05:53] Affects both prod and test
[22:06:01] And seemingly kills mediawiki rendering
[22:06:09] And isn't database related
[22:06:41] @paladox we seem to be recovering and I think it was the same as the other recent incidents so you can proceed as normal
[22:08:21] I don't see how that would take beta down if there's barely any traffic to it
[22:08:48] Scraper bots exist, but I never noticed a high amount of requests to beta
[22:09:00] True, I can’t think of anything that would cover both prod and test
[22:09:59] [1/2] looks like this was actually not caused by the cache proxies
[22:09:59] [2/2] https://cdn.discordapp.com/attachments/1006789349498699827/1461482574424571914/image.png?ex=696ab736&is=696965b6&hm=aa99e198683b7ca2d67a7835465761bf69403c2923b0b7faed6293c305f10771&
[22:11:16] [1/2] are those logs actually sent/saved anywhere?
[22:11:16] [2/2] https://cdn.discordapp.com/attachments/1006789349498699827/1461482899298844834/image.png?ex=696ab784&is=69696604&hm=f70fc0815f82f1c4ab83130d48abfeef1bdfc68fbc5fe00f929854731bc9c812&
[22:11:40] Why the fuck would it slow down that much
[22:11:44] Seemingly randomly
[22:11:53] Not sure
[22:23:09] it happened on test151 per the log I sent to cosmic yesterday
[22:56:34] Ah, by any chance, is what you are doing affecting the images? Just curiosity @rhinosf1 @cosmicalpha
[22:56:53] Yes it is
[22:56:57] Swift (our file storage) is under maint
[22:58:29] Oh, I missed it. hehe 😅 thanks
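On the "out of fpm workers" observation at 22:01:33: one generic way to see whether a pool is hitting its worker limit is PHP-FPM's built-in status page (`pm.status_path`). A minimal sketch, assuming the status page is exposed at a local `/fpm-status` endpoint returning JSON; the URL and that exposure are assumptions for illustration, not Miraheze's actual setup:

```python
# Sketch: poll a PHP-FPM status page and report worker-exhaustion signals.
# Assumes pm.status_path is enabled and reachable at the URL below (hypothetical).
import json
import urllib.request

STATUS_URL = "http://127.0.0.1/fpm-status?json"  # hypothetical endpoint


def check_pool(url: str = STATUS_URL) -> None:
    with urllib.request.urlopen(url, timeout=5) as resp:
        status = json.load(resp)
    # "max children reached" counts how often the pool hit pm.max_children,
    # which matches the "out of fpm workers" symptom described above.
    print(
        f"pool={status['pool']} "
        f"active={status['active processes']}/{status['total processes']} "
        f"listen queue={status['listen queue']} "
        f"max children reached={status['max children reached']}"
    )


if __name__ == "__main__":
    check_pool()
```

If "max children reached" climbs during these windows, the host really is exhausting workers; if it stays flat while requests stall, the slowdown is more likely upstream of FPM.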