[01:56:08] [RequestSSL] dependabot[bot] created dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 (+1 new commit) https://github.com/miraheze/RequestSSL/commit/fe6262f36612
[01:56:10] RequestSSL/dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 dependabot[bot] fe6262f Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0…
[01:56:12] [RequestSSL] dependabot[bot] added the label 'dependencies' to pull request #62 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) https://github.com/miraheze/RequestSSL/pull/62
[01:56:14] [RequestSSL] dependabot[bot] added the label 'javascript' to pull request #62 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) https://github.com/miraheze/RequestSSL/pull/62
[01:56:16] [RequestSSL] dependabot[bot] opened pull request #62: Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0 (master...dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0) https://github.com/miraheze/RequestSSL/pull/62
[01:56:17] [RequestSSL] coderabbitai[bot] commented on pull request #62: --- […] https://github.com/miraheze/RequestSSL/pull/62#issuecomment-2552611703
[02:07:24] miraheze/RequestSSL - dependabot[bot] the build passed.
[03:19:59] PROBLEM - wizardia.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wizardia.wiki' expires in 14 day(s) (Fri 03 Jan 2025 02:58:23 AM GMT +0000).
[03:20:11] [ssl] WikiTideSSLBot pushed 1 new commit to master https://github.com/miraheze/ssl/commit/8a419d6cc780dd426a4c9cbaaf413e3cfe9e3a5e
[03:20:11] ssl/master WikiTideSSLBot 8a419d6 Bot: Update SSL cert for wizardia.wiki
[03:49:57] RECOVERY - wizardia.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'wizardia.wiki' will expire on Wed 19 Mar 2025 02:21:35 AM GMT +0000.
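The LetsEncrypt check above warns with 14 days left and clears once WikiTideSSLBot commits a renewed certificate. A minimal sketch of that expiry classification in Python; the 15-day warning threshold and the timestamp format are illustrative assumptions, not the actual Icinga configuration:

```python
from datetime import datetime, timezone

# Hypothetical warning threshold: warn when fewer than this many days remain.
WARN_DAYS = 15

def cert_state(not_after: str, now: datetime) -> str:
    """Classify a certificate notAfter timestamp as OK/WARNING/CRITICAL."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    days_left = (expires - now).days
    if days_left < 0:
        return "CRITICAL"
    if days_left < WARN_DAYS:
        return "WARNING"
    return "OK"

# The two states seen in the log for wizardia.wiki:
now = datetime(2024, 12, 19, 3, 19, 59, tzinfo=timezone.utc)
print(cert_state("Jan 3 02:58:23 2025 GMT", now))   # WARNING (14 days out)
print(cert_state("Mar 19 02:21:35 2025 GMT", now))  # OK (after renewal)
```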
[05:16:12] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 90%, RTA = 31.38 ms
[05:20:21] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.41 ms
[05:27:07] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[05:27:14] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.010 second response time
[05:27:17] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[05:27:17] PROBLEM - cp37 HTTPS on cp37 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[05:27:30] PROBLEM - cp36 Varnish Backends on cp36 is CRITICAL: 17 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw183 mw184 mediawiki
[05:27:33] PROBLEM - mw173 HTTPS on mw173 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[05:27:34] PROBLEM - mw184 MediaWiki Rendering on mw184 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:27:49] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[05:27:51] PROBLEM - mw171 HTTPS on mw171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10002 milliseconds with 0 bytes received
[05:27:53] PROBLEM - mw174 HTTPS on mw174 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10001 milliseconds with 0 bytes received
[05:27:54] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[05:27:59] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[05:28:01] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.010 second response time
[05:28:03] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:28:31] PROBLEM - cp36 HTTPS on cp36 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[05:28:32] PROBLEM - db161 Current Load on db161 is CRITICAL: LOAD CRITICAL - total load average: 525.90, 217.27, 83.38
[05:28:33] PROBLEM - cp37 Varnish Backends on cp37 is CRITICAL: 17 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw183 mw184 mediawiki
[05:28:55] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:29:07] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.046 second response time
[05:29:14] RECOVERY - cp37 HTTPS on cp37 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4141 bytes in 0.059 second response time
[05:29:14] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.183 second response time
[05:29:16] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.071 second response time
[05:29:27] RECOVERY - mw173 HTTPS on mw173 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.053 second response time
[05:29:28] RECOVERY - mw184 MediaWiki Rendering on mw184 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.182 second response time
[05:29:48] RECOVERY - mw171 HTTPS on mw171 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.051 second response time
[05:29:49] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.163 second response time
[05:29:50] RECOVERY - mw174 HTTPS on mw174 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.051 second response time
[05:29:51] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.056 second response time
[05:29:56] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.054 second response time
[05:29:56] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.177 second response time
[05:30:00] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.178 second response time
[05:30:35] RECOVERY - cp36 HTTPS on cp36 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4141 bytes in 0.060 second response time
[05:30:36] RECOVERY - cp37 Varnish Backends on cp37 is OK: All 29 backends are healthy
[05:30:38] PROBLEM - db161 MariaDB Connections on db161 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db161.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_conne
[05:30:38] on line 66Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db161.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66
[05:30:51] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.147 second response time
[05:30:57] babe, how the fuck are we doing?
[05:31:19] PROBLEM - db161 Puppet on db161 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[05:31:29] PROBLEM - db161 MariaDB on db161 is CRITICAL: Too many connections
[05:31:30] RECOVERY - cp36 Varnish Backends on cp36 is OK: All 29 backends are healthy
[05:32:56] PROBLEM - db161 APT on db161 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[05:33:16] [1/2] db161 😍
[05:33:17] [2/2] https://cdn.discordapp.com/attachments/808001911868489748/1319175702834184203/image0.jpg?ex=6765019c&is=6763b01c&hm=60cb7a3a6d089c12dbf0985db5a0d92dbe747c6e8f797c87e89911f9da969157&
[05:35:14] yippee
[05:35:45] I'm no genius but this doesn't seem good
[05:35:51] yeah nope
[05:36:00] db161 is at 100% cpu: https://grafana.wikitide.net/d/W9MIkA7iz/wikitide-cluster?orgId=1&from=now-6h&to=now&timezone=browser&var-job=node&var-node=db161.wikitide.net&var-port=9100
[05:36:07] i think someone put a brick in the washing machine
[05:36:47] sorry that was me. like actually i really think this could have been entirely my fault, i opened like a bajillion tabs at once to bulk-add stuff to a private wiki
[05:37:19] it'd be kinda funny if one person could take down a part of the whole farm
[05:37:36] 😭
[05:38:05] in my defense i have done this before and nothing happened??
[05:38:14] Please DO NOT open 500 tabs to do stuff on your private wiki. This has been a recurring issue.
[05:38:15] not this time ❌
[05:38:48] > This has been a recurring issue
[05:38:50] wait really?
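The `db161 MariaDB Connections` check above reports UNKNOWN because the PHP plugin dies on an uncaught `mysqli_sql_exception` rather than reporting the saturated server. A hedged Python sketch of the usual fix, mapping connect failures to Nagios exit codes (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN); `run_check` and its thresholds are hypothetical, not the real `check_mysql_connections` logic:

```python
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def run_check(connect):
    """Call connect() and translate its outcome into (exit_code, message).

    connect is assumed to return connection usage as a percentage, or raise
    ConnectionError when the server refuses or is saturated.
    """
    try:
        usage = connect()
    except ConnectionError as err:
        # A refused or saturated server is a monitored failure, so report
        # CRITICAL instead of crashing the plugin (which yields UNKNOWN).
        return CRITICAL, f"CRITICAL - {err}"
    except Exception as err:
        return UNKNOWN, f"UNKNOWN - plugin error: {err}"
    if usage >= 90:
        return CRITICAL, f"CRITICAL connection usage: {usage:.1f}%"
    if usage >= 80:
        return WARNING, f"WARNING connection usage: {usage:.1f}%"
    return OK, f"OK connection usage: {usage:.1f}%"

def saturated():
    raise ConnectionError("Too many connections")

print(run_check(saturated))       # (2, 'CRITICAL - Too many connections')
print(run_check(lambda: 56.4))    # (0, 'OK connection usage: 56.4%')
```

With this mapping, the "Too many connections" condition pages as CRITICAL rather than surfacing as a fatal error in the check output.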
[05:38:55] just use massedit 😭
[05:38:57] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100%
[05:39:13] :squint:
[05:39:23] No I'm referring to the copy pasta
[05:39:27] oh
[05:39:35] guess i should try to ping the rest of tech
[05:40:10] !tech db161 is shitting the bed with too many connections and 100% cpu usage, along with partially missing data in grafana
[05:40:45] Well I was editing the protect and survive lore when it crashed
[05:40:53] cammy killed db161 :kek:
[05:40:59] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.42 ms
[05:41:48] I WONT DO IT AGAIN PROMISE
[05:42:01] i don't think it was you
[05:42:03] i hope
[05:42:11] if so, then we need to figure out why that'd happen :p
[05:42:53] the ice spice effect
[05:42:56] 500 tabs
[05:43:04] created a task: https://issue-tracker.miraheze.org/T13020
[05:43:05] https://tenor.com/view/five-hundred-cigarettes-bortus-orville-cigarettes-500-gif-7580742793072271551
[05:43:18] I wish !tech worked
[05:43:30] unbreak now! priority
[05:43:30] you're welcome
[05:43:34] 😭
[05:43:46] oh wow, i never noticed the exclamation mark
[05:43:48] db161, I order you to UNBREAK NOW
[05:43:52] !
[05:44:06] db171, i order you to BREAK NOW!!
[05:44:31] 😱😱😱😱
[05:44:34] \db151 breaks\ /j
[05:44:44] db141 when
[05:45:05] ohey, the rainverse wiki is on db171
[05:45:20] [1/2] Is this good
[05:45:20] [2/2] https://cdn.discordapp.com/attachments/808001911868489748/1319178737337499688/Screenshot_2024-12-19-10-44-54-261_org.mozilla.firefox-edit.jpg?ex=67650470&is=6763b2f0&hm=9e29850c6e7c563be9eb6eeaaf948c47225c4965c0d646164544719ae796cda8&
[05:45:27] not very
[05:45:45] absolutely
[05:46:54] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.42/maintenance/run.php /srv/mediawiki/1.42/maintenance/rebuildall.php --wiki=fairytailwiki (END - exit=256)
[05:46:55] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.42/maintenance/run.php /srv/mediawiki/1.42/maintenance/initSiteStats.php --wiki=fairytailwiki --update (END - exit=256)
[05:47:23] [1/2] ooh is phorge down
[05:47:23] [2/2] https://cdn.discordapp.com/attachments/808001911868489748/1319179253228634122/image0.jpg?ex=676504eb&is=6763b36b&hm=d114028b47d8b027d534b6a7fc21b75cd51bf484a957ffb8a0d570f8169c782d&
[05:47:30] db151?!
[05:48:03] from fairytailwiki: (Cannot access the database: Cannot access the database: Connection refused (db151))
[05:48:16] not just me?
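The `(END - exit=256)` in the `!log` entries above looks like a raw 16-bit wait status rather than an exit code: in the POSIX wait() encoding the child's exit code lives in the high byte, so 256 would mean the maintenance script exited with status 1. That reading of the logger's output is an assumption; a sketch of the decoding:

```python
import os

def decode_wait_status(status: int):
    """Decode a raw wait() status into ('exited', code) or ('signaled', sig)."""
    if os.WIFEXITED(status):
        return ("exited", os.WEXITSTATUS(status))  # exit code = high byte
    if os.WIFSIGNALED(status):
        return ("signaled", os.WTERMSIG(status))   # killed by this signal
    return ("unknown", status)

print(decode_wait_status(256))  # ('exited', 1)
print(decode_wait_status(0))    # ('exited', 0)
```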
[05:48:32] so far, db151 and db161 are down
[05:48:39] oops, db171 is now down
[05:48:48] probably everything is down now
[05:49:01] my wiki is fine
[05:49:05] oh okay
[05:49:08] maybe
[05:49:11] now it won't load
[05:49:13] rainversewiki: (Cannot access the database: Cannot access the database: Connection refused (db171))
[05:49:22] PROBLEM - mw183 MediaWiki Rendering on mw183 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:49:24] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:49:25] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:49:27] i think someone opened 256 pages
[05:49:29] PROBLEM - mw184 MediaWiki Rendering on mw184 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:49:35] PROBLEM - mw154 MediaWiki Rendering on mw154 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:49:39] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.019 second response time
[05:49:43] i swear to christ it was only like 30
[05:49:45] i blame cammy
[05:49:47] db181 has been on and off for me
[05:49:48] interesting
[05:49:49] did i just do this shit at the EXACT wrong time
[05:49:49] ah yeah now i'm getting 502s
[05:49:50] LMAO
[05:49:58] i blame Cammy /j
[05:50:34] phorge is back
[05:50:39] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[05:50:40] PROBLEM - mw174 MediaWiki Rendering on mw174 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.016 second response time
[05:50:40] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.010 second response time
[05:50:43] db151 now has a Giant Spike
[05:51:00] wait there is actually a 151? 😭
[05:51:09] [1/2] what the
[05:51:10] [2/2] https://cdn.discordapp.com/attachments/808001911868489748/1319180203171381319/J7JTMhl.png?ex=676505cd&is=6763b44d&hm=5b491afd731d212f22276218966682f6fdfcb6322f4e4c0969160a439d398541&
[05:51:10] https://files.catbox.moe/bqvca8.png
[05:51:12] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[05:51:15] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[05:51:17] PROBLEM - cp37 HTTPS on cp37 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[05:51:23] ellieknowsyou: it's called Magic
[05:51:29] PROBLEM - db161 PowerDNS Recursor on db161 is CRITICAL: connect to address 10.0.16.128 port 5666: Connection refusedconnect to host 10.0.16.128 port 5666: Connection refused
[05:51:33] PROBLEM - mw173 HTTPS on mw173 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[05:51:33] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[05:51:41] bringyourmittens: 151, 161, 171, 181
[05:51:41] PROBLEM - mw171 HTTPS on mw171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received
[05:51:46] PROBLEM - mw174 HTTPS on mw174 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[05:51:51] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[05:51:54] hmm
[05:51:54] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[05:52:14] PROBLEM - db151 Current Load on db151 is CRITICAL: LOAD CRITICAL - total load average: 797.20, 326.12, 123.20
[05:52:24] huh
[05:52:27] ????
[05:52:27] welp there goes the rest of Miraheze
[05:52:31] PROBLEM - db161 Disk Space on db161 is CRITICAL: connect to address 10.0.16.128 port 5666: Connection refusedconnect to host 10.0.16.128 port 5666: Connection refused
[05:52:31] PROBLEM - cp36 HTTPS on cp36 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[05:52:33] PROBLEM - cp37 Varnish Backends on cp37 is CRITICAL: 15 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw183
[05:52:35] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.214 second response time
[05:52:36] oh I know why
[05:52:36] PROBLEM - db161 ferm_active on db161 is CRITICAL: connect to address 10.0.16.128 port 5666: Connection refusedconnect to host 10.0.16.128 port 5666: Connection refused
[05:52:39] PROBLEM - db161 NTP time on db161 is CRITICAL: connect to address 10.0.16.128 port 5666: Connection refusedconnect to host 10.0.16.128 port 5666: Connection refused
[05:52:40] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.162 second response time
[05:52:40] RECOVERY - mw174 MediaWiki Rendering on mw174 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.200 second response time
[05:52:55] PROBLEM - db161 conntrack_table_size on db161 is CRITICAL: connect to address 10.0.16.128 port 5666: Connection refusedconnect to host 10.0.16.128 port 5666: Connection refused
[05:53:07] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.051 second response time
[05:53:12] RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.053 second response time
[05:53:14] RECOVERY - cp37 HTTPS on cp37 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4141 bytes in 0.060 second response time
[05:53:17] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.163 second response time
[05:53:19] RECOVERY - mw183 MediaWiki Rendering on mw183 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.188 second response time
[05:53:27] PROBLEM - db161 Backups SQL on db161 is CRITICAL: connect to address 10.0.16.128 port 5666: Connection refusedconnect to host 10.0.16.128 port 5666: Connection refused
[05:53:28] RECOVERY - mw173 HTTPS on mw173 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.053 second response time
[05:53:28] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.153 second response time
[05:53:28] RECOVERY - mw184 MediaWiki Rendering on mw184 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.173 second response time
[05:53:31] RECOVERY - mw154 MediaWiki Rendering on mw154 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.163 second response time
[05:53:31] PROBLEM - db161 SSH on db161 is CRITICAL: connect to address 10.0.16.128 and port 22: Connection refused
[05:53:32] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.053 second response time
[05:53:38] RECOVERY - mw171 HTTPS on mw171 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.050 second response time
[05:53:42] PROBLEM - db151 MariaDB Connections on db151 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db151.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_conne
[05:53:42] on line 66Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db151.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66
[05:53:43] RECOVERY - mw174 HTTPS on mw174 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.053 second response time
[05:53:44] mhglobal is on db171 right? as long as that is not down, hopefully the entire farm won't shit itself
[05:53:46] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.170 second response time
[05:53:51] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.053 second response time
[05:53:51] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.052 second response time
[05:54:16] huh, didn't know there were nagios plugins in php
[05:54:31] RECOVERY - cp36 HTTPS on cp36 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4141 bytes in 0.061 second response time
[05:54:33] RECOVERY - cp37 Varnish Backends on cp37 is OK: All 29 backends are healthy
[05:55:23] What happened to battle cat wiki
[05:55:38] PROBLEM - db151 MariaDB on db151 is CRITICAL: Received error packet before completion of TLS handshake. The authenticity of the following error cannot be verified: 1040 - Too many connections
[05:55:40] down
[05:55:45] kento4043: at least two databases are uh... being a little bit unresponsive
[05:55:47] Ran off into the sunset
[05:55:52] capacity issues
[05:56:02] it's undergoing maintenance
[05:56:03] it's great to see we have a lot of success but man
[05:56:14] wdym capacity issues
[05:56:26] too many wikis
[05:56:29] not enough servers
[05:56:33] oh
[05:57:04] guess we should reshuffle them huh
[05:57:15] PROBLEM - mwtask181 MediaWiki Rendering on mwtask181 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.087 second response time
[05:57:23] if by reshuffle you mean buy more servers then yes
[05:57:37] RECOVERY - db151 MariaDB on db151 is OK: Uptime: 535 Threads: 667 Questions: 3459 Slow queries: 9 Opens: 622 Open tables: 625 Queries per second avg: 6.465
[05:57:42] RECOVERY - db151 MariaDB Connections on db151 is OK: OK connection usage: 56.4%Current connections: 564
[05:58:01] trying to find the task rn
[05:58:04] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.069 second response time
[05:58:22] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.069 second response time
[05:58:25] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.085 second response time
[05:58:35] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.070 second response time
[05:58:37] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.078 second response time
[05:58:40] PROBLEM - mw174 MediaWiki Rendering on mw174 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.071 second response time
[05:58:44] PROBLEM - mwtask161 MediaWiki Rendering on mwtask161 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.068 second response time
[05:58:45] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.071 second response time
[05:58:46] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.070 second response time
[05:58:51] PROBLEM - mw164 MediaWiki Rendering on mw164 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.070 second response time
[05:58:53] PROBLEM - mw162 MediaWiki Rendering on mw162 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.068 second response time
[05:59:01] PROBLEM - mwtask151 MediaWiki Rendering on mwtask151 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.080 second response time
[05:59:07] PROBLEM - mwtask171 MediaWiki Rendering on mwtask171 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.074 second response time
[05:59:12] PROBLEM - mw183 MediaWiki Rendering on mw183 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.076 second response time
[05:59:13] basically, i meant balancing the wikis on the databases
[05:59:14] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.068 second response time
[05:59:15] like uhh
[05:59:19] PROBLEM - mw184 MediaWiki Rendering on mw184 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.077 second response time
[05:59:20] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.069 second response time
[05:59:20] database osmosis
[05:59:23] PROBLEM - mw154 MediaWiki Rendering on mw154 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.069 second response time
[05:59:33] I want our wiki on the database that works
[05:59:39] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Service Unavailable - 8191 bytes in 0.076 second response time
[05:59:53] ideally, all of them should work
[06:00:03] me too
[06:00:04] Don't forget my reservation...
[06:15:35] PROBLEM - cp36 HTTP 4xx/5xx ERROR Rate on cp36 is WARNING: WARNING - NGINX Error Rate is 43%
[06:17:44] [puppet] AgentIsai pushed 1 new commit to master https://github.com/miraheze/puppet/commit/1c107f0d08e1a67e518d33584c63eaf8720df57e
[06:17:44] puppet/master Agent Isai 1c107f0 Add CloudFlare support for Varnish
[06:18:08] PROBLEM - db161 MariaDB on db161 is UNKNOWN:
[06:18:26] PROBLEM - cp37 HTTP 4xx/5xx ERROR Rate on cp37 is WARNING: WARNING - NGINX Error Rate is 47%
[06:19:15] PROBLEM - ping on db161 is CRITICAL: CRITICAL - Host Unreachable (10.0.16.128)
[06:19:25] RECOVERY - cp36 HTTP 4xx/5xx ERROR Rate on cp36 is OK: OK - NGINX Error Rate is 37%
[06:19:50] PROBLEM - Host db161 is DOWN: CRITICAL - Host Unreachable (10.0.16.128)
[06:20:23] PROBLEM - cp37 HTTP 4xx/5xx ERROR Rate on cp37 is CRITICAL: CRITICAL - NGINX Error Rate is 60%
[06:22:54] RECOVERY - mwtask181 MediaWiki Rendering on mwtask181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.230 second response time
[06:23:00] RECOVERY - mw154 MediaWiki Rendering on mw154 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.165 second response time
[06:23:07] RECOVERY - mwtask171 MediaWiki Rendering on mwtask171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.200 second response time
[06:23:12] RECOVERY - mw183 MediaWiki Rendering on mw183 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.180 second response time
[06:23:14] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.152 second response time
[06:23:15] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.149 second response time
[06:23:19] RECOVERY - mw184 MediaWiki Rendering on mw184 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.182 second response time
[06:23:39] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.187 second response time
[06:23:44] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.163 second response time
[06:23:49] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.173 second response time
[06:24:04] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.170 second response time
[06:24:17] RECOVERY - cp37 HTTP 4xx/5xx ERROR Rate on cp37 is OK: OK - NGINX Error Rate is 6%
[06:24:20] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.187 second response time
[06:24:24] RECOVERY - mwtask161 MediaWiki Rendering on mwtask161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.192 second response time
[06:24:30] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.150 second response time
[06:24:40] RECOVERY - mw174 MediaWiki Rendering on mw174 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.167 second response time
[06:24:41] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.181 second response time
[06:24:44] RECOVERY - mw162 MediaWiki Rendering on mw162 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.169 second response time
[06:24:45] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.156 second response time
[06:24:51] RECOVERY - mw164 MediaWiki Rendering on mw164 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.176 second response time
[06:25:01] RECOVERY - mwtask151 MediaWiki Rendering on mwtask151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.201 second response time
[06:25:52] RECOVERY - Host db161 is UP: PING OK - Packet loss = 0%, RTA = 0.23 ms
[06:25:56] RECOVERY - db161 MariaDB on db161 is OK: Uptime: 36 Threads: 322 Questions: 1083 Slow queries: 0 Opens: 84 Open tables: 78 Queries per second avg: 30.083
[06:26:28] RECOVERY - db161 PowerDNS Recursor on db161 is OK: DNS OK: 2.082 seconds response time. wikitide.net returns 2602:294:0:b13::110,2602:294:0:b23::112,38.46.223.205,38.46.223.206
[06:26:29] RECOVERY - db161 ferm_active on db161 is OK: OK ferm input default policy is set
[06:26:32] RECOVERY - db161 Disk Space on db161 is OK: DISK OK - free space: / 345410MiB (38% inode=98%);
[06:26:35] RECOVERY - db161 conntrack_table_size on db161 is OK: OK: nf_conntrack is 0 % full
[06:26:38] RECOVERY - db161 MariaDB Connections on db161 is OK: OK connection usage: 33.5%Current connections: 335
[06:26:45] RECOVERY - db161 NTP time on db161 is OK: NTP OK: Offset -0.009972810745 secs
[06:27:18] RECOVERY - ping on db161 is OK: PING OK - Packet loss = 0%, RTA = 0.26 ms
[06:27:24] RECOVERY - db161 Backups SQL on db161 is OK: FILE_AGE OK: /var/log/db-backups/db-backups/db-backups.log is 2294 seconds old and 1434620 bytes
[06:27:37] RECOVERY - db161 SSH on db161 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u3 (protocol 2.0)
[06:29:39] RECOVERY - db161 Puppet on db161 is OK: OK: Puppet is currently enabled, last run 35 seconds ago with 0 failures
[06:30:07] RECOVERY - db161 APT on db161 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
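The db161 recovery line above reports "connection usage: 33.5%" alongside 335 current connections, and the earlier db151 recovery reported 56.4% at 564 connections; both figures are consistent with a usage computed against a limit near 1000. That limit is an inference from the numbers, not a confirmed server setting. A sketch of the computation:

```python
# Sketch of the connection-usage figure in the check output above.
# max_connections=1000 is inferred from 335 connections reading as 33.5%;
# it is not a documented Miraheze/WikiTide setting.
def connection_usage(current: int, max_connections: int = 1000) -> str:
    pct = current * 100 / max_connections
    # The real plugin's output runs the two fields together the same way.
    return f"OK connection usage: {pct:.1f}%Current connections: {current}"

print(connection_usage(335))  # matches the 06:26:38 db161 recovery line
print(connection_usage(564))  # matches the 05:57:42 db151 recovery line
```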
[06:30:59] [02mw-config] 07AgentIsai pushed 1 new commit to 03master 13https://github.com/miraheze/mw-config/commit/ecd9d1b0b3f301c92e568b3e8842e404d649afbb
[06:30:59] 02mw-config/03master 07Agent Isai 03ecd9d1b Add localhost to CdnNoPurge
[06:31:29] !log [agent@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all
[06:31:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[06:31:55] !log [agent@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 25s
[06:32:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[06:32:06] miraheze/mw-config - AgentIsai the build passed.
[06:33:22] [02dns] 07AgentIsai pushed 1 new commit to 03master 13https://github.com/miraheze/dns/commit/eaaad47eee744610f571348d8e76637cbc6ce5f6
[06:33:22] 02dns/03master 07Agent Isai 03eaaad47 Depool cp37
[06:34:17] miraheze/dns - AgentIsai the build passed.
[06:52:08] PROBLEM - db151 Current Load on db151 is WARNING: LOAD WARNING - total load average: 0.59, 0.51, 11.58
[06:52:30] PROBLEM - db161 Current Load on db161 is WARNING: LOAD WARNING - total load average: 0.44, 1.59, 11.97
[06:56:08] RECOVERY - db151 Current Load on db151 is OK: LOAD OK - total load average: 0.16, 0.36, 9.04
[06:56:30] RECOVERY - db161 Current Load on db161 is OK: LOAD OK - total load average: 0.52, 1.06, 9.38
[07:31:46] PROBLEM - cp37 Current Load on cp37 is CRITICAL: LOAD CRITICAL - total load average: 8.27, 6.79, 4.86
[07:33:45] RECOVERY - cp37 Current Load on cp37 is OK: LOAD OK - total load average: 5.89, 6.20, 4.87
[07:39:57] PROBLEM - cp37 Nginx Backend for mon181 on cp37 is CRITICAL: connect to address localhost and port 8201: Connection refused
[07:40:05] PROBLEM - cp37 Nginx Backend for phorge171 on cp37 is CRITICAL: connect to address localhost and port 8202: Connection refused
[07:40:06] PROBLEM - cp37 Nginx Backend for puppet181 on cp37 is CRITICAL: connect to address localhost and port 8204: Connection refused
[07:40:08] PROBLEM - cp37 Nginx Backend for mw162 on cp37 is CRITICAL: connect to address localhost and port 8116: Connection refused
[07:40:13] PROBLEM - cp37 Nginx Backend for mw151 on cp37 is CRITICAL: connect to address localhost and port 8113: Connection refused
[07:40:18] PROBLEM - cp37 Nginx Backend for mw171 on cp37 is CRITICAL: connect to address localhost and port 8117: Connection refused
[07:40:25] PROBLEM - cp37 Nginx Backend for mw181 on cp37 is CRITICAL: connect to address localhost and port 8119: Connection refused
[07:40:26] PROBLEM - cp37 Nginx Backend for mwtask171 on cp37 is CRITICAL: connect to address localhost and port 8161: Connection refused
[07:40:26] PROBLEM - cp37 Nginx Backend for swiftproxy161 on cp37 is CRITICAL: connect to address localhost and port 8206: Connection refused
[07:40:29] PROBLEM - cp37 Nginx Backend for mw173 on cp37 is CRITICAL: connect to address localhost and port 8125: Connection refused
[07:40:30] PROBLEM - cp37 Nginx Backend for mwtask181 on cp37 is CRITICAL: connect to address localhost and port 8160: Connection refused
[07:40:34] PROBLEM - cp37 Varnish Backends on cp37 is CRITICAL: 17 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw183 mw184 mediawiki
[07:40:38] PROBLEM - cp37 Nginx Backend for test151 on cp37 is CRITICAL: connect to address localhost and port 8181: Connection refused
[07:40:53] PROBLEM - cp37 Nginx Backend for mw154 on cp37 is CRITICAL: connect to address localhost and port 8122: Connection refused
[07:40:56] PROBLEM - cp37 Nginx Backend for mw174 on cp37 is CRITICAL: connect to address localhost and port 8126: Connection refused
[07:41:00] PROBLEM - cp37 Nginx Backend for reports171 on cp37 is CRITICAL: connect to address localhost and port 8205: Connection refused
[07:41:03] PROBLEM - cp37 Nginx Backend for mw152 on cp37 is CRITICAL: connect to address localhost and port 8114: Connection refused
[07:41:06] PROBLEM - cp37 Nginx Backend for mw172 on cp37 is CRITICAL: connect to address localhost and port 8118: Connection refused
[07:41:06] PROBLEM - cp37 Nginx Backend for mwtask151 on cp37 is CRITICAL: connect to address localhost and port 8162: Connection refused
[07:41:09] PROBLEM - cp37 HTTPS on cp37 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to cp37.wikitide.net port 443 after 0 ms: Couldn't connect to server
[07:41:14] PROBLEM - cp37 Nginx Backend for mw184 on cp37 is CRITICAL: connect to address localhost and port 8128: Connection refused
[07:41:20] PROBLEM - cp37 Nginx Backend for mwtask161 on cp37 is CRITICAL: connect to address localhost and port 8163: Connection refused
[07:41:30] PROBLEM - cp37 Nginx Backend for mw161 on cp37 is CRITICAL: connect to address localhost and port 8115: Connection refused
[07:41:30] PROBLEM - cp37 Nginx Backend for matomo151 on cp37 is CRITICAL: connect to address localhost and port 8203: Connection refused
[07:41:30] PROBLEM - cp37 Nginx Backend for swiftproxy171 on cp37 is CRITICAL: connect to address localhost and port 8207: Connection refused
[07:41:40] PROBLEM - cp37 Nginx Backend for mw182 on cp37 is CRITICAL: connect to address localhost and port 8120: Connection refused
[07:41:42] PROBLEM - cp37 Nginx Backend for mw183 on cp37 is CRITICAL: connect to address localhost and port 8127: Connection refused
[07:41:50] PROBLEM - cp37 Nginx Backend for mw163 on cp37 is CRITICAL: connect to address localhost and port 8123: Connection refused
[07:41:50] PROBLEM - cp37 Nginx Backend for mw164 on cp37 is CRITICAL: connect to address localhost and port 8124: Connection refused
[07:41:50] PROBLEM - cp37 Nginx Backend for mw153 on cp37 is CRITICAL: connect to address localhost and port 8121: Connection refused
[07:43:40] RECOVERY - cp37 Nginx Backend for mw182 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8120
[07:43:42] RECOVERY - cp37 Nginx Backend for mw183 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8127
[07:43:50] RECOVERY - cp37 Nginx Backend for mw163 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8123
[07:43:50] RECOVERY - cp37 Nginx Backend for mw164 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8124
[07:43:50] RECOVERY - cp37 Nginx Backend for mw153 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8121
[07:43:57] RECOVERY - cp37 Nginx Backend for mon181 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8201
[07:44:05] RECOVERY - cp37 Nginx Backend for phorge171 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8202
[07:44:06] RECOVERY - cp37 Nginx Backend for puppet181 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8204
[07:44:08] RECOVERY - cp37 Nginx Backend for mw162 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8116
[07:44:13] RECOVERY - cp37 Nginx Backend for mw151 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8113
[07:44:18] RECOVERY - cp37 Nginx Backend for mw171 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8117
[07:44:25] RECOVERY - cp37 Nginx Backend for mwtask171 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8161
[07:44:26] RECOVERY - cp37 Nginx Backend for mw181 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8119
[07:44:26] RECOVERY - cp37 Nginx Backend for swiftproxy161 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8206
[07:44:29] RECOVERY - cp37 Nginx Backend for mw173 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8125
[07:44:30] RECOVERY - cp37 Nginx Backend for mwtask181 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8160
[07:44:33] RECOVERY - cp37 Varnish Backends on cp37 is OK: All 29 backends are healthy
[07:44:38] RECOVERY - cp37 Nginx Backend for test151 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8181
[07:44:53] RECOVERY - cp37 Nginx Backend for mw154 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8122
[07:44:56] RECOVERY - cp37 Nginx Backend for mw174 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8126
[07:45:00] RECOVERY - cp37 Nginx Backend for reports171 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8205
[07:45:03] RECOVERY - cp37 Nginx Backend for mw152 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8114
[07:45:06] RECOVERY - cp37 Nginx Backend for mw172 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8118
[07:45:06] RECOVERY - cp37 Nginx Backend for mwtask151 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8162
[07:45:15] RECOVERY - cp37 HTTPS on cp37 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4140 bytes in 0.061 second response time
[07:45:22] RECOVERY - cp37 Nginx Backend for mwtask161 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8163
[07:45:26] RECOVERY - cp37 Nginx Backend for mw184 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8128
[07:45:30] RECOVERY - cp37 Nginx Backend for mw161 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8115
[07:45:30] RECOVERY - cp37 Nginx Backend for matomo151 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8203
[07:45:30] RECOVERY - cp37 Nginx Backend for swiftproxy171 on cp37 is OK: TCP OK - 0.000 second response time on localhost port 8207
[08:07:11] PROBLEM - ping6 on ns2 is CRITICAL: PING CRITICAL - Packet loss = 16%, RTA = 141.81 ms
[08:09:12] RECOVERY - ping6 on ns2 is OK: PING OK - Packet loss = 0%, RTA = 141.71 ms
[11:37:43] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100%
[11:39:45] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.36 ms
[12:20:23] [02puppet] 07paladox created 03paladox-patch-1 (+1 new commit) 13https://github.com/miraheze/puppet/commit/a2805375483e
[12:20:24] 02puppet/03paladox-patch-1 07paladox 03a280537 varnish: allow restricting firewall to only cloudflare
[12:20:28] [02puppet] 07paladox opened pull request #4123: varnish: allow restricting firewall to only cloudflare (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4123
[12:20:37] [02puppet] 07coderabbitai[bot] commented on pull request #4123: --- […] 13https://github.com/miraheze/puppet/pull/4123#issuecomment-2553658293
[12:20:58] [02puppet] 07paladox pushed 1 new commit to 03paladox-patch-1 13https://github.com/miraheze/puppet/commit/f6960a59ba9a131fd6af94a89dee63f976f30c5c
[12:20:58] 02puppet/03paladox-patch-1 07paladox 03f6960a5 Update cp37.yaml
[12:21:36] [02puppet] 07github-actions[bot] pushed 1 new commit to 03paladox-patch-1 13https://github.com/miraheze/puppet/commit/90956588225b1eff0d4f81aa117a90482aae70e1
[12:21:36] 02puppet/03paladox-patch-1 07github-actions 039095658 CI: lint puppet code to standards…
[12:21:49] [02puppet] 07paladox merged pull request #4123: varnish: allow restricting firewall to only cloudflare (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4123
[12:21:49] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/a02b927854915f16ee5cce10229f70af0ff45165
[12:21:49] 02puppet/03master 07paladox 03a02b927 varnish: allow restricting firewall to only cloudflare (#4123)…
[12:21:50] [02puppet] 07paladox 04deleted 03paladox-patch-1 at 039095658 13https://github.com/miraheze/puppet/commit/9095658
[12:24:18] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/e814ce70acfb56a043406c44c1cfdfbe1335dd4d
[12:24:18] 02puppet/03master 07paladox 03e814ce7 Fix
[12:26:17] PROBLEM - cp37 Puppet on cp37 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[12:30:20] PROBLEM - cp37 Puppet on cp37 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 1 minute ago with 0 failures
[12:30:43] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/b8ce310092a45d64b8a6363850a5f944b74477c6
[12:30:43] 02puppet/03master 07paladox 03b8ce310 Update varnish.pp
[12:32:16] RECOVERY - cp37 Puppet on cp37 is OK: OK: Puppet is currently enabled, last run 42 seconds ago with 0 failures
[12:35:37] PROBLEM - cp37 HTTPS on cp37 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Connection timed out after 10002 milliseconds
[12:47:54] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/c5e415967ec7f83c27e825298bd165f9fa080e15
[12:47:54] 02puppet/03master 07paladox 03c5e4159 Allow icinga2 to access varnish
[12:50:12] RECOVERY - cp37 HTTPS on cp37 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4140 bytes in 0.049 second response time
[13:11:03] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/a50d17d5b5c961d52b580a4aab4631fa1034ea27
[13:11:04] 02puppet/03master 07paladox 03a50d17d varnish: Remove unset of cache-control for static.wikitide.net
[13:25:23] [02ImportDump] 07dependabot[bot] created 03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 (+1 new commit) 13https://github.com/miraheze/ImportDump/commit/9ede806257f9
[13:25:24] 02ImportDump/03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 07dependabot[bot] 039ede806 Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0…
[13:25:25] [02ImportDump] 07dependabot[bot] added the label 'dependencies' to pull request #125 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) 13https://github.com/miraheze/ImportDump/pull/125
[13:25:25] [02ImportDump] 07dependabot[bot] added the label 'javascript' to pull request #125 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) 13https://github.com/miraheze/ImportDump/pull/125
[13:25:27] [02ImportDump] 07dependabot[bot] opened pull request #125: Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0 (03master...03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0) 13https://github.com/miraheze/ImportDump/pull/125
[13:25:35] [02ImportDump] 07coderabbitai[bot] commented on pull request #125: --- […] 13https://github.com/miraheze/ImportDump/pull/125#issuecomment-2553962067
[13:36:04] miraheze/ImportDump - dependabot[bot] the build passed.
[13:47:18] [02puppet] 07paladox created 03paladox-patch-1 (+1 new commit) 13https://github.com/miraheze/puppet/commit/f77d03833fc8
[13:47:19] 02puppet/03paladox-patch-1 07paladox 03f77d038 varnish: Don't set Cache-Control for static.wikitide.net
[13:47:23] [02puppet] 07paladox opened pull request #4124: varnish: Don't set Cache-Control for static.wikitide.net (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4124
[13:47:32] [02puppet] 07coderabbitai[bot] commented on pull request #4124: --- […] 13https://github.com/miraheze/puppet/pull/4124#issuecomment-2554081917
[13:48:06] [02puppet] 07paladox merged pull request #4124: varnish: Don't set Cache-Control for static.wikitide.net (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4124
[13:48:07] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/a2f872d48f2bdf8c79dc2044d2b8ed3cd19f9993
[13:48:07] 02puppet/03master 07paladox 03a2f872d varnish: Don't set Cache-Control for static.wikitide.net (#4124)
[13:48:08] [02puppet] 07paladox 04deleted 03paladox-patch-1 at 03f77d038 13https://github.com/miraheze/puppet/commit/f77d038
[14:24:44] [02puppet] 07paladox created 03paladox-patch-1 (+1 new commit) 13https://github.com/miraheze/puppet/commit/c47e3fcf1f4e
[14:24:44] 02puppet/03paladox-patch-1 07paladox 03c47e3fc cp37: Set nginx::use_varnish_directly to false
[14:24:49] [02puppet] 07paladox opened pull request #4125: cp37: Set nginx::use_varnish_directly to false (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4125
[14:24:56] [02puppet] 07coderabbitai[bot] commented on pull request #4125: --- […] 13https://github.com/miraheze/puppet/pull/4125#issuecomment-2554282716
[14:26:24] [02puppet] 07paladox merged pull request #4125: cp37: Set nginx::use_varnish_directly to false (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4125
[14:26:25] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/11ed7f192267dffdf1a4dd102dfa8051a6b23e06
[14:26:25] 02puppet/03master 07paladox 0311ed7f1 cp37: Set nginx::use_varnish_directly to false (#4125)
[14:26:27] [02puppet] 07paladox 04deleted 03paladox-patch-1 at 03c47e3fc 13https://github.com/miraheze/puppet/commit/c47e3fc
[14:32:23] PROBLEM - puppet181 Puppet on puppet181 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[git_pull_puppet]
[15:06:26] PROBLEM - cp37 Disk Space on cp37 is WARNING: DISK WARNING - free space: / 9685MiB (10% inode=98%);
[15:08:40] [02mw-config] 07paladox created 03paladox-patch-2 (+1 new commit) 13https://github.com/miraheze/mw-config/commit/b9782d314883
[15:08:40] 02mw-config/03paladox-patch-2 07paladox 03b9782d3 Fix bast161 and bast181 ipv6 in wgCdnServersNoPurge…
[15:08:44] [02mw-config] 07paladox opened pull request #5771: Fix bast161 and bast181 ipv6 in wgCdnServersNoPurge (03master...03paladox-patch-2) 13https://github.com/miraheze/mw-config/pull/5771
[15:08:54] [02mw-config] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/mw-config/commit/faf9d1c4b10ed06445bf12eacd07ec0bb8f22d0f
[15:08:55] 02mw-config/03master 07paladox 03faf9d1c Fix bast161 and bast181 ipv6 in wgCdnServersNoPurge (#5771)…
[15:08:57] [02mw-config] 07paladox merged pull request #5771: Fix bast161 and bast181 ipv6 in wgCdnServersNoPurge (03master...03paladox-patch-2) 13https://github.com/miraheze/mw-config/pull/5771
[15:08:59] [02mw-config] 07paladox 04deleted 03paladox-patch-2 at 03b9782d3 13https://github.com/miraheze/mw-config/commit/b9782d3
[15:09:01] !log [paladox@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all
[15:09:06] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[15:09:29] !log [paladox@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 27s
[15:09:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[15:09:44] miraheze/mw-config - paladox the build passed.
[15:09:52] miraheze/mw-config - paladox the build passed.
[15:18:37] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/454fb1ccffc57b8709090081782988d9f54861d3
[15:18:37] 02puppet/03master 07paladox 03454fb1c Revert "varnish: Remove unset of cache-control for static.wikitide.net"…
[15:30:23] RECOVERY - puppet181 Puppet on puppet181 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[16:06:30] [02MatomoAnalytics] 07dependabot[bot] created 03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 (+1 new commit) 13https://github.com/miraheze/MatomoAnalytics/commit/47f5fcd3e3d4
[16:06:30] 02MatomoAnalytics/03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 07dependabot[bot] 0347f5fcd Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0…
[16:06:31] [02MatomoAnalytics] 07dependabot[bot] added the label 'dependencies' to pull request #153 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) 13https://github.com/miraheze/MatomoAnalytics/pull/153
[16:06:33] [02MatomoAnalytics] 07dependabot[bot] added the label 'javascript' to pull request #153 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) 13https://github.com/miraheze/MatomoAnalytics/pull/153
[16:06:35] [02MatomoAnalytics] 07dependabot[bot] opened pull request #153: Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0 (03master...03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0) 13https://github.com/miraheze/MatomoAnalytics/pull/153
[16:06:38] [02MatomoAnalytics] 07coderabbitai[bot] commented on pull request #153: --- […] 13https://github.com/miraheze/MatomoAnalytics/pull/153#issuecomment-2554826580
[16:11:32] miraheze/MatomoAnalytics - dependabot[bot] the build passed.
[17:06:21] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.42/maintenance/run.php /srv/mediawiki/1.42/maintenance/importDump.php --wiki=fairytailwiki dump.xml --no-updates (START)
[17:06:22] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.42/maintenance/run.php /srv/mediawiki/1.42/maintenance/importDump.php --wiki=fairytailwiki dump.xml --no-updates (END - exit=2)
[17:06:23] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.42/maintenance/run.php /srv/mediawiki/1.42/maintenance/rebuildall.php --wiki=fairytailwiki (START)
[17:06:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[17:06:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[17:06:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[17:17:37] [02dns] 07MacFan4000 pushed 1 new commit to 03master 13https://github.com/miraheze/dns/commit/7ee8d24f83b3d1bca0708e0a91420dcd140ae0ec
[17:17:38] 02dns/03master 07MacFan4000 037ee8d24 Create raidrush.wiki
[17:18:28] [02ssl] 07WikiTideSSLBot pushed 1 new commit to 03master 13https://github.com/miraheze/ssl/commit/b0cd5e2b7df2cdaee13332db0fb88d27d77f477f
[17:18:28] 02ssl/03master 07WikiTideSSLBot 03b0cd5e2 Bot: Add SSL cert for raidrush.wiki…
[17:18:41] miraheze/dns - MacFan4000 the build passed.
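(The !log entries above record MediaWiki's usual bulk-import maintenance sequence on mwtask181: importDump.php with --no-updates, then rebuildall.php and initSiteStats.php to regenerate what the skipped updates would have produced. A minimal sketch of that sequence as a script, using the paths and wiki ID taken from the log; the variables and the set -e wrapper are illustrative, not part of the original commands. Note the logged importDump run itself ended with exit=2, i.e. it failed, while the later rebuild steps succeeded.)

```shell
#!/bin/bash
# Sketch of the logged import sequence; requires a MediaWiki 1.42
# install at $MW and a user permitted to sudo as www-data.
set -euo pipefail

MW=/srv/mediawiki/1.42/maintenance   # path from the log
WIKI=fairytailwiki                   # wiki ID from the log

# Import the XML dump, skipping deferred updates for speed; the link
# tables and site stats are rebuilt explicitly afterwards.
sudo -u www-data php "$MW/run.php" "$MW/importDump.php" \
    --wiki="$WIKI" dump.xml --no-updates

# Rebuild link tables and related derived data after the raw import.
sudo -u www-data php "$MW/run.php" "$MW/rebuildall.php" --wiki="$WIKI"

# Recompute page/edit counts so Special:Statistics reflects the import.
sudo -u www-data php "$MW/run.php" "$MW/initSiteStats.php" --wiki="$WIKI" --update
```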
[17:44:25] [02ManageWiki] 07dependabot[bot] created 03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 (+1 new commit) 13https://github.com/miraheze/ManageWiki/commit/15dad99ab9b5
[17:44:25] 02ManageWiki/03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0 07dependabot[bot] 0315dad99 Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0…
[17:44:26] [02ManageWiki] 07dependabot[bot] opened pull request #524: Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0 (03master...03dependabot/npm_and_yarn/stylelint-config-wikimedia-0.18.0) 13https://github.com/miraheze/ManageWiki/pull/524
[17:44:28] [02ManageWiki] 07dependabot[bot] added the label 'javascript' to pull request #524 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) 13https://github.com/miraheze/ManageWiki/pull/524
[17:44:30] [02ManageWiki] 07dependabot[bot] added the label 'dependencies' to pull request #524 (Bump stylelint-config-wikimedia from 0.17.2 to 0.18.0) 13https://github.com/miraheze/ManageWiki/pull/524
[17:44:33] [02ManageWiki] 07coderabbitai[bot] commented on pull request #524: --- […] 13https://github.com/miraheze/ManageWiki/pull/524#issuecomment-2555360645
[17:47:51] two robots yelling at each other??
[17:49:34] miraheze/ManageWiki - dependabot[bot] the build passed.
[18:28:51] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100%
[18:29:12] shut up
[18:29:31] icinga-miraheze: please go on annual leave
[18:30:55] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.39 ms
[18:34:11] they said OK!
[18:44:32] PROBLEM - cp37 Disk Space on cp37 is CRITICAL: DISK CRITICAL - free space: / 5288MiB (5% inode=98%);
[19:16:32] PROBLEM - cp37 Disk Space on cp37 is WARNING: DISK WARNING - free space: / 5486MiB (6% inode=98%);
[19:20:32] RECOVERY - cp37 Disk Space on cp37 is OK: DISK OK - free space: / 23503MiB (26% inode=98%);
[21:03:53] PROBLEM - cp36 Varnish Backends on cp36 is CRITICAL: 1 backends are down. mw152
[21:05:51] RECOVERY - cp36 Varnish Backends on cp36 is OK: All 29 backends are healthy
[21:12:32] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.42/maintenance/run.php /srv/mediawiki/1.42/maintenance/rebuildall.php --wiki=fairytailwiki (END - exit=0)
[21:12:34] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.42/maintenance/run.php /srv/mediawiki/1.42/maintenance/initSiteStats.php --wiki=fairytailwiki --update (END - exit=0)
[21:12:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:12:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[23:14:02] [02puppet] 07paladox created 03paladox-patch-1 (+1 new commit) 13https://github.com/miraheze/puppet/commit/12f518434b86
[23:14:02] 02puppet/03paladox-patch-1 07paladox 0312f5184 mediawiki: remove Cloudflare from firewall…
[23:14:08] [02puppet] 07paladox opened pull request #4126: mediawiki: remove Cloudflare from firewall (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4126
[23:14:15] [02puppet] 07coderabbitai[bot] commented on pull request #4126: --- […] 13https://github.com/miraheze/puppet/pull/4126#issuecomment-2555935330
[23:14:52] [02puppet] 07paladox pushed 1 new commit to 03paladox-patch-1 13https://github.com/miraheze/puppet/commit/1a5a1e0973c96da2b473e7ddafd51aae4cc2773d
[23:14:52] 02puppet/03paladox-patch-1 07paladox 031a5a1e0 Update mediawiki_beta.pp
[23:15:23] [02puppet] 07paladox pushed 1 new commit to 03paladox-patch-1 13https://github.com/miraheze/puppet/commit/904a643fdfde6e88976ad53bc9f8370027df0391
[23:15:23] 02puppet/03paladox-patch-1 07paladox 03904a643 Update mediawiki_task.pp
[23:27:01] [02puppet] 07paladox pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/4e7d8932077bcd957d151bb26a8d6f9779c37e23
[23:27:01] 02puppet/03master 07paladox 034e7d893 mediawiki: remove Cloudflare from firewall (#4126)…
[23:27:03] [02puppet] 07paladox merged pull request #4126: mediawiki: remove Cloudflare from firewall (03master...03paladox-patch-1) 13https://github.com/miraheze/puppet/pull/4126
[23:27:05] [02puppet] 07paladox 04deleted 03paladox-patch-1 at 03904a643 13https://github.com/miraheze/puppet/commit/904a643
[23:37:06] PROBLEM - lostmediawiki.ru - LetsEncrypt on sslhost is CRITICAL: connect to address lostmediawiki.ru and port 443: Connection refused HTTP CRITICAL - Unable to open TCP socket