[00:02:36] [MirahezeMagic] paladox created paladox-patch-3 (+1 new commit) https://github.com/miraheze/MirahezeMagic/commit/1cd92f0fd445
[00:02:38] MirahezeMagic/paladox-patch-3 paladox 1cd92f0 Add missing "mirahezemagic-extensionname" to i18n
[00:02:40] [MirahezeMagic] paladox opened pull request #548: Add missing "mirahezemagic-extensionname" to i18n (master...paladox-patch-3) https://github.com/miraheze/MirahezeMagic/pull/548
[00:02:49] [MirahezeMagic] coderabbitai[bot] commented on pull request #548: --- […] https://github.com/miraheze/MirahezeMagic/pull/548#issuecomment-2608521116
[00:03:31] [MirahezeMagic] paladox pushed 1 new commit to paladox-patch-3 https://github.com/miraheze/MirahezeMagic/commit/6573655f45f19c9c53eba355540cc4707499bf40
[00:03:31] MirahezeMagic/paladox-patch-3 paladox 6573655 Update qqq.json
[00:03:51] [MirahezeMagic] paladox merged dependabot[bot]'s pull request #547: Update mediawiki/mediawiki-phan-config requirement from 0.14.0 to 0.15.1 (master...dependabot/composer/mediawiki/mediawiki-phan-config-0.15.1) https://github.com/miraheze/MirahezeMagic/pull/547
[00:03:53] [MirahezeMagic] paladox pushed 1 new commit to master https://github.com/miraheze/MirahezeMagic/commit/36517b54ed3d4504edabe0686d01f3b9fb21abc9
[00:03:55] [MirahezeMagic] paladox deleted dependabot/composer/mediawiki/mediawiki-phan-config-0.15.1 at 06fe03f https://github.com/miraheze/MirahezeMagic/commit/06fe03f
[00:03:57] MirahezeMagic/master dependabot[bot] 36517b5 Update mediawiki/mediawiki-phan-config requirement from 0.14.0 to 0.15.1 (#547)…
[00:05:04] [MirahezeMagic] paladox merged pull request #548: Add missing "mirahezemagic-extensionname" to i18n (master...paladox-patch-3) https://github.com/miraheze/MirahezeMagic/pull/548
[00:05:04] [MirahezeMagic] paladox pushed 1 new commit to master https://github.com/miraheze/MirahezeMagic/commit/6953a5cdb04e7d042b2d6faf6846f62bf79428cc
[00:05:04] MirahezeMagic/master paladox 6953a5c Add missing "mirahezemagic-extensionname" to i18n (#548)
[00:05:05] [MirahezeMagic] paladox deleted paladox-patch-3 at 6573655 https://github.com/miraheze/MirahezeMagic/commit/6573655
[00:05:40] !log [paladox@mwtask181] starting deploy of {'l10n': True, 'versions': ['1.42', '1.43'], 'upgrade_extensions': 'MirahezeMagic'} to all
[00:05:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[00:05:53] !log [paladox@test151] starting deploy of {'l10n': True, 'versions': ['1.42', '1.43', '1.44'], 'upgrade_extensions': 'MirahezeMagic'} to test151
[00:05:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[00:06:07] [MirahezeMagic] paladox pushed 1 new commit to paladox-patch-2 https://github.com/miraheze/MirahezeMagic/commit/9be72ce6fc0686c2e621c5cd9db99c0b4e9ea39e
[00:06:07] MirahezeMagic/paladox-patch-2 paladox 9be72ce Merge branch 'master' into paladox-patch-2
[00:14:26] miraheze/MirahezeMagic - paladox the build passed.
[00:15:59] PROBLEM - cp37 Disk Space on cp37 is WARNING: DISK WARNING - free space: / 40922MiB (9% inode=99%);
[00:16:40] miraheze/MirahezeMagic - paladox the build passed.
[00:16:53] miraheze/MirahezeMagic - paladox the build passed.
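The "mirahezemagic-extensionname" message added in PR #548 above is the i18n message a MediaWiki extension uses for its human-readable name; by convention it lives in the extension's i18n/en.json, with translator documentation in i18n/qqq.json (hence the follow-up "Update qqq.json" commit). A minimal sketch of what the two added entries plausibly look like, assuming the standard i18n layout; the exact wording below is illustrative, not copied from the commit:

    i18n/en.json:
        "mirahezemagic-extensionname": "MirahezeMagic"

    i18n/qqq.json:
        "mirahezemagic-extensionname": "{{optional}} Name of the MirahezeMagic extension, shown on Special:Version."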
[00:21:38] !log [paladox@test151] finished deploy of {'l10n': True, 'versions': ['1.42', '1.43', '1.44'], 'upgrade_extensions': 'MirahezeMagic'} to test151 - SUCCESS in 945s
[00:21:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[00:23:01] !log [paladox@mwtask181] finished deploy of {'l10n': True, 'versions': ['1.42', '1.43'], 'upgrade_extensions': 'MirahezeMagic'} to all - SUCCESS in 1040s
[00:23:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[00:28:02] PROBLEM - matomo151 Current Load on matomo151 is WARNING: LOAD WARNING - total load average: 6.71, 7.67, 6.09
[00:30:00] RECOVERY - matomo151 Current Load on matomo151 is OK: LOAD OK - total load average: 5.07, 6.61, 5.89
[02:45:58] PROBLEM - changeprop151 Current Load on changeprop151 is CRITICAL: LOAD CRITICAL - total load average: 16.97, 11.45, 5.81
[02:47:58] RECOVERY - changeprop151 Current Load on changeprop151 is OK: LOAD OK - total load average: 3.76, 8.61, 5.48
[03:28:42] PROBLEM - matomo151 Current Load on matomo151 is WARNING: LOAD WARNING - total load average: 6.86, 6.85, 5.81
[03:30:40] RECOVERY - matomo151 Current Load on matomo151 is OK: LOAD OK - total load average: 5.35, 6.30, 5.73
[04:02:04] PROBLEM - mw181 HTTPS on mw181 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:05] PROBLEM - mw182 HTTPS on mw182 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:11] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.009 second response time
[04:02:11] PROBLEM - mw171 HTTPS on mw171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:18] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[04:02:19] PROBLEM - mw184 MediaWiki Rendering on mw184 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.010 second response time
[04:02:19] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.012 second response time
[04:02:21] PROBLEM - mw174 HTTPS on mw174 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:25] PROBLEM - mw164 MediaWiki Rendering on mw164 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[04:02:31] PROBLEM - mw164 HTTPS on mw164 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:34] PROBLEM - cp37 Varnish Backends on cp37 is CRITICAL: 17 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw183 mw184 mediawiki
[04:02:34] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[04:02:37] PROBLEM - cp36 Varnish Backends on cp36 is CRITICAL: 17 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw183 mw184 mediawiki
[04:02:38] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:38] PROBLEM - mw173 HTTPS on mw173 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:47] PROBLEM - mw162 MediaWiki Rendering on mw162 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.014 second response time
[04:02:51] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:02:52] PROBLEM - mw183 MediaWiki Rendering on mw183 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.010 second response time
[04:02:52] PROBLEM - cp36 HTTPS on cp36 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[04:03:01] PROBLEM - cp37 HTTPS on cp37 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503
[04:03:02] PROBLEM - mw183 HTTPS on mw183 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 502
[04:03:06] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.011 second response time
[04:03:16] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.014 second response time
[04:03:17] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 8191 bytes in 0.016 second response time
[04:03:17] PROBLEM - db181 Current Load on db181 is CRITICAL: LOAD CRITICAL - total load average: 70.60, 36.06, 15.08
[04:03:23] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:03:31] PROBLEM - mw184 HTTPS on mw184 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[04:03:37] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10001 milliseconds with 0 bytes received
[04:03:38] PROBLEM - mw153 HTTPS on mw153 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[04:03:41] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:03:50] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[04:03:50] PROBLEM - mw174 MediaWiki Rendering on mw174 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:03:50] PROBLEM - mw161 HTTPS on mw161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10001 milliseconds with 0 bytes received
[04:03:51] PROBLEM - mw154 MediaWiki Rendering on mw154 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:03:53] PROBLEM - mw151 HTTPS on mw151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[04:03:54] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:04:04] PROBLEM - mw154 HTTPS on mw154 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received
[04:05:09] @Infrastructure Specialists its back
[04:05:36] I want to say this is an attack
[04:05:42] do we have any idea of root cause, I’ve seen agent say reboot cp, I saw paladox bump down connection proxies
[04:05:57] Honestly that’s what I was wondering this morning
[04:06:34] RECOVERY - cp37 Varnish Backends on cp37 is OK: All 29 backends are healthy
[04:06:38] The mw* servers were showing a spike in network traffic at the same time as the resource spike and outage
[04:06:51] grah
[04:06:54] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 9.222 second response time
[04:06:57] RECOVERY - mw162 MediaWiki Rendering on mw162 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.156 second response time
[04:06:58] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.050 second response time
[04:07:00] RECOVERY - mw183 MediaWiki Rendering on mw183 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.184 second response time
[04:07:06] tell em icinga
[04:07:08] gag them
[04:07:09] RECOVERY - mw183 HTTPS on mw183 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.063 second response time
[04:07:10] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.158 second response time
[04:07:14] Could it be someone DDoSing a varnish backed domain like with fisch again @agentisai?
[04:07:22] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.180 second response time
[04:07:24] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.160 second response time
[04:07:25] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.147 second response time
[04:07:28] ooh web server is returning an unknown error
[04:07:34] RECOVERY - mw184 HTTPS on mw184 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.163 second response time
[04:07:40] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.046 second response time
[04:07:41] miraback
[04:07:42] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.195 second response time
[04:07:44] RECOVERY - mw153 HTTPS on mw153 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 1.841 second response time
[04:07:48] PROBLEM - db181 MariaDB Connections on db181 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66 Stack trace: #0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db181.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true) #1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_conne
[04:07:48] on line 66 Fatal error: Uncaught mysqli_sql_exception: Too many connections in /usr/lib/nagios/plugins/check_mysql_connections.php:66 Stack trace: #0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db181.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true) #1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66
[04:07:52] RECOVERY - mw151 HTTPS on mw151 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.429 second response time
[04:07:53] Uh
[04:07:54] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.057 second response time
[04:07:57] RECOVERY - mw161 HTTPS on mw161 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.050 second response time
[04:07:58] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.317 second response time
[04:08:00] RECOVERY - mw174 MediaWiki Rendering on mw174 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 3.771 second response time
[04:08:00] RECOVERY - mw154 MediaWiki Rendering on mw154 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.178 second response time
[04:08:02] that’s it
[04:08:09] that’s not good
[04:08:12] RECOVERY - mw154 HTTPS on mw154 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.086 second response time
[04:08:17] RECOVERY - mw181 HTTPS on mw181 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.227 second response time
[04:08:18] PROBLEM - db181 MariaDB on db181 is CRITICAL: Too many connections
[04:08:19] The DB or DDoS?
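The db181 UNKNOWN above shows the connection check itself crashing: the refused connection makes mysqli_real_connect() throw a mysqli_sql_exception (the default mysqli error mode since PHP 8.1), and because the plugin does not catch it, it exits with a PHP fatal error instead of a Nagios status line. A minimal sketch of a check that traps the connect failure and still reports connection usage in the same spirit; this is not the actual check_mysql_connections.php, and the host, credentials, environment variable and thresholds are placeholders:

    <?php
    // Hypothetical, simplified connection-usage check (not the real Miraheze plugin).
    // Nagios/Icinga exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
    mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

    $host = $argv[1] ?? 'db181.example.org';      // placeholder host
    $pass = getenv('ICINGA_DB_PASS') ?: '';       // placeholder credential source
    $warn = 80;                                   // warn at 80% of max_connections
    $crit = 90;                                   // critical at 90%

    try {
        $db = new mysqli($host, 'icinga', $pass);
    } catch (mysqli_sql_exception $e) {
        // A refused connection ("Too many connections" included) is a CRITICAL
        // result in its own right, not an unhandled fatal error.
        echo 'CRITICAL: cannot connect to MariaDB: ' . $e->getMessage() . "\n";
        exit(2);
    }

    $current = (int) $db->query("SHOW GLOBAL STATUS LIKE 'Threads_connected'")->fetch_row()[1];
    $max     = (int) $db->query("SHOW GLOBAL VARIABLES LIKE 'max_connections'")->fetch_row()[1];
    $usage   = $max > 0 ? 100 * $current / $max : 0.0;

    $state = $usage >= $crit ? 'CRITICAL' : ($usage >= $warn ? 'WARNING' : 'OK');
    printf("%s connection usage: %.1f%% Current connections: %d\n", $state, $usage, $current);
    exit($usage >= $crit ? 2 : ($usage >= $warn ? 1 : 0));

Catching the exception keeps Icinga reporting a clean CRITICAL rather than flipping the service to UNKNOWN, as happened at 04:07:48.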
[04:08:19] RECOVERY - mw182 HTTPS on mw182 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.062 second response time
[04:08:25] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.179 second response time
[04:08:26] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.200 second response time
[04:08:26] RECOVERY - mw171 HTTPS on mw171 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.061 second response time
[04:08:27] db is at 400% load
[04:08:28] RECOVERY - mw184 MediaWiki Rendering on mw184 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.200 second response time
[04:08:29] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.257 second response time
[04:08:31] RECOVERY - mw164 HTTPS on mw164 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.045 second response time
[04:08:34] RECOVERY - mw164 MediaWiki Rendering on mw164 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.161 second response time
[04:08:37] RECOVERY - cp36 Varnish Backends on cp36 is OK: All 29 backends are healthy
[04:08:41] RECOVERY - mw173 HTTPS on mw173 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 0.053 second response time
[04:08:41] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.261 second response time
[04:08:43] RECOVERY - mw174 HTTPS on mw174 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3945 bytes in 7.488 second response time
[04:08:52] Is it
[04:08:52] RECOVERY - cp36 HTTPS on cp36 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4108 bytes in 0.050 second response time
[04:08:57] the fucking backups
[04:08:58] again
[04:09:05] PROBLEM - db181 Puppet on db181 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[04:09:07] RECOVERY - cp37 HTTPS on cp37 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4108 bytes in 5.685 second response time
[04:09:48] RECOVERY - db181 MariaDB Connections on db181 is OK: OK connection usage: 59.4% Current connections: 594
[04:09:50] also, how is that physically possible
[04:10:18] RECOVERY - db181 MariaDB on db181 is OK: Uptime: 342521 Threads: 16 Questions: 838331828 Slow queries: 2021 Opens: 2584670 Open tables: 100000 Queries per second avg: 2447.534
[04:11:02] RECOVERY - db181 Puppet on db181 is OK: OK: Puppet is currently enabled, last run 11 minutes ago with 0 failures
[04:21:47] > [23/01/2025 15:09] also, how is that physically possible
[04:21:50] pixldev: multiple CPUs?
[04:51:30] PROBLEM - db181 Current Load on db181 is WARNING: LOAD WARNING - total load average: 1.77, 2.10, 11.00
[05:01:59] RECOVERY - db181 Current Load on db181 is OK: LOAD OK - total load average: 1.16, 1.73, 9.78
[07:09:28] PROBLEM - mwtask171 NTP time on mwtask171 is CRITICAL: NTP CRITICAL: No response from NTP server
[07:10:28] PROBLEM - db172 NTP time on db172 is UNKNOWN: check_ntp_time: Invalid hostname/address - time.cloudflare.com Usage: check_ntp_time -H [-4|-6] [-w ] [-c ] [-v verbose] [-o