[00:02:28] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 15.81, 13.80, 6.54
[00:02:32] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.08, 4.49, 3.60
[00:04:32] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.51, 3.55, 3.37
[00:06:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.27, 4.13, 3.81
[00:06:31] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.47, 2.96, 3.18
[00:10:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.81, 3.38, 3.62
[00:14:01] PROBLEM - cp22 PowerDNS Recursor on cp22 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[00:14:06] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 19.70, 9.45, 4.78
[00:15:12] PROBLEM - cp22 Puppet on cp22 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[00:16:19] PROBLEM - cp22 SSH on cp22 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[00:17:07] RECOVERY - cp22 Puppet on cp22 is OK: OK: Puppet is currently enabled, last run 31 minutes ago with 0 failures
[00:18:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.37, 4.21, 3.79
[00:18:17] RECOVERY - cp22 SSH on cp22 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[00:18:18] RECOVERY - cp22 PowerDNS Recursor on cp22 is OK: DNS OK: 9.207 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[00:22:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 0.87, 2.97, 3.45
[00:24:08] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.80, 2.55, 3.23
[00:25:55] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.19, 3.59, 3.21
[00:28:01] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.15, 3.20, 3.88
[00:29:46] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.13, 3.87, 3.47
[00:33:37] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.62, 2.75, 3.11
[00:34:01] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.90, 2.21, 3.25
[00:36:58] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.83, 3.45, 3.18
[00:38:54] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.36, 4.04, 3.42
[00:44:42] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.62, 3.56, 3.45
[00:46:38] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 3.00, 3.34, 3.38
[00:50:12] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 1.39, 1.87, 3.82
[00:54:12] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 2.17, 1.82, 3.32
[00:55:46] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.38, 4.00, 3.14
[00:57:41] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.15, 3.73, 3.16
[00:59:36] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.41, 4.04, 3.32
[01:01:32] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.94, 3.23, 3.11
[01:06:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.24, 4.48, 3.49
[01:16:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.73, 3.34, 3.68
[01:18:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.21, 4.10, 3.92
[01:24:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.68, 3.72, 3.97
[01:26:35] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 7.57, 4.46, 3.25
[01:30:01] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.93, 3.66, 2.36
[01:30:31] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.51, 3.30, 3.09
[01:32:01] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.60, 2.93, 2.27
[01:32:06] [miraheze/mw-config] Universal-Omega pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/mw-config/compare/9005b74b4ffb...c93afb5d9e48
[01:32:07] [miraheze/mw-config] Universal-Omega c93afb5 - T9884: disable sidebar cache for nonciclopediawiki
[01:33:20] miraheze/mw-config - Universal-Omega the build passed.
[01:39:32] !log [@mwtask141] starting deploy of {'config': True} to all
[01:39:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[01:39:39] !log [@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 6s
[01:39:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[01:40:08] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.11, 2.90, 3.31
[01:44:13] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.74, 3.56, 2.32
[01:48:12] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 3.10, 3.89, 2.78
[01:50:12] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.68, 2.74, 2.50
[01:54:25] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[01:56:40] !log [@test131] starting deploy of {'config': True} to all
[01:56:41] !log [@test131] DEPLOY ABORTED: Canary check failed for beta.betaheze.org@localhost
[01:56:46] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[01:56:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[01:56:55] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[01:58:05] [miraheze/MirahezeMagic] Universal-Omega pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/MirahezeMagic/compare/6be00e4862f9...2abafaa08c98
[01:58:08] [miraheze/MirahezeMagic] Universal-Omega 2abafaa - Add miraheze-sitenotice-learnmore message
[01:59:34] PROBLEM - test131 Puppet on test131 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[MediaWiki Config Sync]
[02:00:01] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 6.46, 3.77, 2.46
[02:04:01] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 1.41, 3.86, 2.94
[02:06:01] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.34, 2.66, 2.61
[02:07:33] miraheze/MirahezeMagic - Universal-Omega the build passed.
[02:08:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.79, 4.50, 3.49
[02:15:48] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 11.81, 12.50, 8.60
[02:15:56] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:16:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.46, 3.88, 3.70
[02:17:04] PROBLEM - mw141 nutcracker process on mw141 is CRITICAL: PROCS CRITICAL: 0 processes with UID = 115 (nutcracker), command name 'nutcracker'
[02:17:23] PROBLEM - graylog121 Current Load on graylog121 is CRITICAL: CRITICAL - load average: 5.72, 3.86, 2.64
[02:17:42] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 5.41, 9.92, 8.11
[02:20:01] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.15, 3.41, 2.68
[02:20:08] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.43, 2.94, 3.34
[02:20:15] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[02:22:01] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.00, 2.42, 2.40
[02:22:01] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:22:13] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:23:32] PROBLEM - mw131 nutcracker process on mw131 is CRITICAL: PROCS CRITICAL: 0 processes with UID = 115 (nutcracker), command name 'nutcracker'
[02:24:14] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.615 second response time
[02:25:08] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.19, 5.44, 3.70
[02:25:15] PROBLEM - cp23 Varnish Backends on cp23 is CRITICAL: 1 backends are down. mw131
[02:25:58] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 1 backends are down. mw131
[02:26:12] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[02:27:03] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.44, 4.00, 3.38
[02:27:16] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 14 backends are healthy
[02:27:53] RECOVERY - test131 Puppet on test131 is OK: OK: Puppet is currently enabled, last run 5 seconds ago with 0 failures
[02:27:56] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 14 backends are healthy
[02:27:58] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.91, 3.25, 1.99
[02:28:59] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.05, 3.02, 3.10
[02:29:11] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[02:31:11] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 6.02, 4.25, 2.70
[02:31:58] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[02:32:51] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 6.55, 4.59, 3.20
[02:33:11] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.71, 3.73, 2.71
[02:33:58] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.50, 3.25, 2.52
[02:34:46] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.14, 3.24, 2.87
[02:35:11] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 0.86, 2.72, 2.46
[02:36:07] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:36:27] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.245 second response time
[02:36:32] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:37:04] RECOVERY - mw141 nutcracker process on mw141 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker'
[02:37:21] PROBLEM - cp23 Varnish Backends on cp23 is CRITICAL: 1 backends are down. mw141
[02:37:42] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 2 backends are down. mw131 mw141
[02:37:43] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[02:37:46] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 1 backends are down. mw141
[02:38:09] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.36, 4.26, 3.48
[02:38:13] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 2 backends are down. mw132 mw141
[02:39:44] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 14 backends are healthy
[02:40:07] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[02:40:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.72, 4.00, 3.48
[02:40:08] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.023 second response time
[02:40:13] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 7.340 second response time
[02:40:15] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 14 backends are healthy
[02:41:15] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 14 backends are healthy
[02:41:31] RECOVERY - mw131 nutcracker process on mw131 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker'
[02:42:40] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.954 second response time
[02:43:07] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[02:43:39] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 14 backends are healthy
[02:45:36] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[02:46:06] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 3.85, 4.10, 3.17
[02:46:08] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.07, 2.92, 3.21
[02:46:12] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.09, 2.66, 1.80
[02:46:48] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[02:46:52] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.07, 3.61, 3.02
[02:47:08] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:48:04] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 1.94, 3.31, 2.99
[02:48:12] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.81, 1.93, 1.64
[02:48:18] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:48:33] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:48:48] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 0.875 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[02:48:50] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.92, 3.16, 2.93
[02:49:01] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:49:02] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[02:51:14] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[02:52:39] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.290 second response time
[02:53:02] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.64, 3.82, 3.47
[02:53:04] PROBLEM - graylog121 Current Load on graylog121 is WARNING: WARNING - load average: 1.29, 1.81, 3.70
[02:53:07] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.542 second response time
[02:53:10] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.254 second response time
[02:53:31] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.677 second response time
[02:53:54] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[02:54:22] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.531 second response time
[02:54:58] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.02, 3.74, 3.49
[02:55:03] RECOVERY - graylog121 Current Load on graylog121 is OK: OK - load average: 1.35, 1.58, 3.38
[02:56:53] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.25, 3.25, 3.34
[02:57:24] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.85, 3.63, 3.31
[02:57:54] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.95, 3.47, 3.08
[02:58:37] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.90, 3.43, 3.15
[02:59:19] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.01, 4.03, 3.50
[02:59:53] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 2.71, 3.11, 2.98
[03:00:35] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.65, 2.82, 2.95
[03:04:39] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.74, 4.49, 3.75
[03:10:06] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[03:10:28] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.08, 3.93, 3.74
[03:12:32] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[03:14:20] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.12, 2.39, 3.14
[03:15:16] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.22, 3.76, 3.11
[03:16:00] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 5.73, 4.62, 3.23
[03:16:14] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.04, 2.98, 1.90
[03:17:14] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.38, 3.39, 3.08
[03:18:12] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 2.41, 3.08, 2.09
[03:18:35] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.34, 2.66, 3.88
[03:20:00] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 2.30, 3.72, 3.20
[03:22:00] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 1.30, 2.88, 2.95
[03:28:32] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.71, 2.37, 3.22
[03:30:11] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[03:32:32] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[03:34:58] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.52, 4.95, 3.54
[03:37:50] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[03:43:36] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[03:44:38] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.79, 3.75, 3.89
[03:48:29] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.18, 3.36, 3.65
[03:50:25] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.21, 2.76, 3.38
[03:53:20] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[03:54:24] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[03:55:03] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[03:55:14] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.333 second response time
[03:59:11] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 0.141 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[04:06:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.96, 3.95, 3.32
[04:08:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.79, 3.94, 3.40
[04:10:08] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.92, 3.40, 3.29
[04:11:55] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[04:18:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 11.54, 5.81, 4.09
[04:22:33] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.44, 3.50, 3.04
[04:25:41] PROBLEM - swiftobject115 SSH on swiftobject115 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[04:30:28] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.05, 3.34, 3.87
[04:30:46] PROBLEM - swiftobject115 Puppet on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[04:33:58] RECOVERY - swiftobject115 SSH on swiftobject115 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[04:34:24] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.47, 4.82, 4.28
[04:35:43] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 44 minutes ago with 0 failures
[04:39:37] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[04:42:14] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.62, 3.59, 3.99
[04:47:55] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[04:48:06] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.46, 3.13, 3.57
[04:50:03] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.63, 3.20, 3.54
[04:50:48] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[04:51:38] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 9.673 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[04:51:56] PROBLEM - swiftobject115 APT on swiftobject115 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected.
[04:52:01] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.29, 2.41, 3.20
[04:54:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.42, 2.96, 2.56
[04:54:53] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet last ran 1 hour ago
[04:54:56] PROBLEM - swiftobject115 APT on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[04:56:10] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[04:58:06] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 1.691 second response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[04:58:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.25, 2.37, 2.43
[04:59:54] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.39, 3.74, 2.49
[05:03:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.73, 3.34, 3.15
[05:04:23] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[05:05:41] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 0.40, 3.42, 3.10
[05:05:43] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.05, 3.95, 3.40
[05:06:57] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[05:07:36] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.27, 2.43, 2.77
[05:09:38] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.20, 3.82, 3.47
[05:13:33] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.34, 2.77, 3.15
[05:22:10] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[05:22:24] PROBLEM - swiftobject114 PowerDNS Recursor on swiftobject114 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[05:24:25] RECOVERY - swiftobject114 PowerDNS Recursor on swiftobject114 is OK: DNS OK: 1.903 second response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[05:24:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.37, 4.08, 2.63
[05:24:48] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[05:26:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.58, 3.71, 2.67
[05:28:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.82, 4.23, 2.99
[05:29:08] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.14, 3.44, 2.94
[05:30:37] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.08, 3.33, 2.81
[05:31:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.44, 2.75, 2.75
[05:33:02] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[05:35:29] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[05:45:10] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.48, 3.80, 2.31
[05:49:07] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 1.92, 3.45, 2.56
[05:51:05] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.64, 2.53, 2.33
[06:01:08] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 8.12, 4.97, 3.06
[06:06:42] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[06:08:38] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[06:09:08] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 1.18, 3.85, 3.48
[06:11:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.31, 2.63, 3.08
[06:14:54] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 9.153 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[06:16:56] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 4.18, 3.35, 2.45
[06:18:56] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 2.54, 3.03, 2.44
[06:19:42] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 3.27, 3.67, 2.80
[06:21:40] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.71, 2.57, 2.50
[06:23:32] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[06:26:01] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[06:28:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 2.27, 4.15, 3.29
[06:29:11] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 9.00, 11.60, 6.50
[06:29:26] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.63, 3.63, 3.00
[06:29:33] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.13, 3.70, 2.91
[06:30:35] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 4.58, 4.50, 2.84
[06:30:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.38, 3.32, 3.10
[06:31:21] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.16, 2.84, 2.78
[06:31:31] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.20, 2.72, 2.65
[06:31:40] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[06:32:33] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 3.52, 3.87, 2.78
[06:34:30] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 1.47, 2.96, 2.58
[06:36:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.09, 3.53, 3.15
[06:39:08] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 0.47, 2.19, 3.83
[06:40:22] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[06:40:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.44, 2.84, 2.98
[06:43:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.92, 1.69, 3.21
[06:43:21] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[06:47:08] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.36, 3.40, 3.57
[06:48:59] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[06:49:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.77, 2.74, 3.30
[06:51:57] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[06:52:13] PROBLEM - cp22 NTP time on cp22 is WARNING: NTP WARNING: Offset 0.1027930379 secs
[06:56:39] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.81, 3.43, 2.52
[06:58:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.49, 3.29, 2.60
[07:00:35] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[07:00:37] PROBLEM - swiftobject114 Puppet on swiftobject114 is WARNING: WARNING: Puppet last ran 1 hour ago
[07:01:10] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 9.87, 5.59, 3.71
[07:03:33] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:05:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.00, 3.14, 3.14
[07:06:18] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[07:09:17] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:14:13] RECOVERY - cp22 NTP time on cp22 is OK: NTP OK: Offset 0.08792665601 secs
[07:15:07] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[07:30:55] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:33:48] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[07:34:13] PROBLEM - cp22 NTP time on cp22 is WARNING: NTP WARNING: Offset 0.1043170989 secs
[07:35:49] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.97, 4.09, 2.82
[07:36:13] RECOVERY - cp22 NTP time on cp22 is OK: NTP OK: Offset 0.09991565347 secs
[07:36:46] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:37:47] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.77, 3.92, 2.93
[07:38:03] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[07:39:42] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[07:41:42] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.31, 2.76, 2.69
[07:44:16] RECOVERY - swiftobject114 Puppet on swiftobject114 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[07:44:59] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[07:45:39] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.96, 3.57, 2.39
[07:47:37] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.09, 2.64, 2.20
[08:02:56] PROBLEM - swiftobject115 Puppet on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:05:10] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.36, 3.44, 2.70
[08:09:05] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.93, 3.76, 3.07
[08:09:25] PROBLEM - swiftobject115 ferm_active on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:11:02] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.83, 3.00, 2.86
[08:11:44] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[08:12:35] PROBLEM - swiftobject115 SSH on swiftobject115 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[08:14:10] PROBLEM - swiftobject115 conntrack_table_size on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:14:42] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:15:16] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.68, 4.16, 2.56
[08:17:15] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 2.59, 3.59, 2.55
[08:19:14] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.57, 2.52, 2.29
[08:19:17] PROBLEM - swiftobject115 NTP time on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:20:09] PROBLEM - db121 MariaDB on db121 is CRITICAL: Can't connect to MySQL server on 'db121.miraheze.org' (115)
[08:20:32] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[08:20:48] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.13, 3.68, 3.07
[08:21:06] PROBLEM - test131 MediaWiki Rendering on test131 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 951 bytes in 0.333 second response time
[08:22:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.92, 3.26, 3.02
[08:23:10] PROBLEM - graylog121 Current Load on graylog121 is CRITICAL: CRITICAL - load average: 4.57, 3.23, 2.34
[08:25:10] RECOVERY - graylog121 Current Load on graylog121 is OK: OK - load average: 2.36, 2.87, 2.32
[08:28:38] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:28:45] PROBLEM - swiftobject115 Disk Space on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:29:10] PROBLEM - graylog121 Current Load on graylog121 is CRITICAL: CRITICAL - load average: 4.02, 3.49, 2.69
[08:31:04] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[08:31:10] RECOVERY - graylog121 Current Load on graylog121 is OK: OK - load average: 3.05, 3.33, 2.73
[08:34:02] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:36:20] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[08:42:08] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[08:54:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.92, 3.09, 1.97
[08:56:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.93, 2.99, 2.07
[08:56:43] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[08:59:41] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:01:07] RECOVERY - test131 MediaWiki Rendering on test131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.479 second response time
[09:02:09] RECOVERY - db121 MariaDB on db121 is OK: Uptime: 158 Threads: 12 Questions: 46007 Slow queries: 5159 Opens: 2279 Open tables: 2273 Queries per second avg: 291.183
[09:02:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.00, 3.65, 2.78
[09:04:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.69, 2.60, 2.49
[09:04:53] PROBLEM - swiftobject114 PowerDNS Recursor on swiftobject114 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[09:07:29] [miraheze/puppet] paladox pushed 1 commit to revert-2981-revert-2967-revert-2965-revert-2961-paladox-patch-5 [+0/-0/±2] https://github.com/miraheze/puppet/commit/5107dedef14d
[09:07:32] [miraheze/puppet] paladox 5107ded - Revert "Revert "swiftobject114/5: reduce workers" (#2981)"
[09:07:34] [puppet] paladox created branch revert-2981-revert-2967-revert-2965-revert-2961-paladox-patch-5 - https://github.com/miraheze/puppet
[09:07:41] [puppet] paladox opened pull request #2982: swiftobject114/5: reduce workers - https://github.com/miraheze/puppet/pull/2982
[09:07:48] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±2] https://github.com/miraheze/puppet/compare/691dd53dc2c9...7ac5b9625e29
[09:07:51] [miraheze/puppet] paladox 7ac5b96 - swiftobject114/5: reduce workers (#2982)
[09:07:52] [miraheze/puppet] paladox deleted branch revert-2981-revert-2967-revert-2965-revert-2961-paladox-patch-5
[09:07:55] [puppet] paladox closed pull request #2982: swiftobject114/5: reduce workers - https://github.com/miraheze/puppet/pull/2982
[09:07:56] [puppet] paladox deleted branch revert-2981-revert-2967-revert-2965-revert-2961-paladox-patch-5 - https://github.com/miraheze/puppet
[09:11:03] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:11:29] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:13:02] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.251 second response time
[09:13:29] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.295 second response time
[09:15:31] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 3.82, 2.96, 2.18
[09:15:42] RECOVERY - swiftobject115 conntrack_table_size on swiftobject115 is OK: OK: nf_conntrack is 0 % full
[09:15:55] RECOVERY - swiftobject115 Disk Space on swiftobject115 is OK: DISK OK - free space: / 653848 MB (73% inode=95%);
[09:16:07] RECOVERY - swiftobject115 ferm_active on swiftobject115 is OK: OK ferm input default policy is set
[09:16:47] RECOVERY - swiftobject115 SSH on swiftobject115 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[09:16:57] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:17:09] RECOVERY - swiftobject115 NTP time on swiftobject115 is OK: NTP OK: Offset -0.0002038776875 secs
[09:17:29] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.55, 2.58, 2.15
[09:18:21] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:20:21] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.206 second response time
[09:21:05] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.283 second response time
[09:26:21] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet last ran 1 hour ago
[09:26:36] RECOVERY - swiftobject114 PowerDNS Recursor on swiftobject114 is OK: DNS OK: 6.048 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[09:31:08] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.10, 3.49, 2.44
[09:33:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.52, 2.83, 2.33
[09:42:49] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 1.328 second response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[09:43:47] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[09:46:45] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:48:24] PROBLEM - swiftobject115 APT on swiftobject115 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected.
[09:49:08] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[09:50:27] PROBLEM - swiftobject115 APT on swiftobject115 is CRITICAL: APT CRITICAL: 16 packages available for upgrade (7 critical updates).
[09:52:06] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:52:47] PROBLEM - swiftobject114 PowerDNS Recursor on swiftobject114 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[09:53:10] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:53:24] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:53:28] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:53:28] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:55:10] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.280 second response time
[09:55:23] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.378 second response time
[09:55:26] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.30, 3.89, 2.76
[09:55:27] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.202 second response time
[09:55:28] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.216 second response time
[09:57:22] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.09, 4.38, 3.07
[10:00:38] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[10:01:13] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.25, 3.43, 3.03
[10:03:08] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.29, 2.38, 2.70
[10:06:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.52, 3.72, 2.58
[10:08:11] !log removed entry from renameuser_status (TallAutism2006)
[10:08:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[10:08:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.66, 3.17, 2.52
[10:09:51] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[10:12:49] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[10:16:42] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 2.90, 3.68, 2.62
[10:18:37] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[10:18:41] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.80, 2.60, 2.35
[10:31:26] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.20, 4.69, 3.15
[10:32:38] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[10:33:21] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.17, 3.35, 2.84
[10:44:04] PROBLEM - swiftobject114 Puppet on swiftobject114 is WARNING: WARNING: Puppet last ran 1 hour ago
[10:45:15] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:45:27] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:45:43] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:47:19] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.867 second response time
[10:47:26] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.192 second response time
[10:47:42] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.192 second response time
[10:49:35] PROBLEM - cp23 Varnish Backends on cp23 is CRITICAL: 1 backends are down. mw121
[10:51:34] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 14 backends are healthy
[10:51:37] PROBLEM - swiftobject114 Puppet on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[10:56:23] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[10:57:23] PROBLEM - swiftobject114 Puppet on swiftobject114 is WARNING: WARNING: Puppet last ran 1 hour ago
[10:59:21] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:01:57] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[11:03:26] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK
[11:07:26] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 7 = Critical]
[11:10:22] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:10:50] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.84, 3.69, 2.41
[11:11:26] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 3.58, 2.59, 2.05
[11:12:48] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.43, 2.80, 2.24
[11:13:26] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 1.85, 2.21, 1.97
[11:19:14] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[11:22:24] PROBLEM - swiftobject114 Puppet on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:24:29] PROBLEM - swiftobject114 Puppet on swiftobject114 is WARNING: WARNING: Puppet last ran 1 hour ago
[11:25:02] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:27:28] PROBLEM - swiftobject114 Puppet on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:38:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.79, 3.61, 2.88
[11:39:19] PROBLEM - swiftobject114 SSH on swiftobject114 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[11:40:13] PROBLEM - swiftobject114 Disk Space on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:40:55] PROBLEM - swiftobject114 ferm_active on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:41:33] PROBLEM - swiftobject114 NTP time on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:42:14] PROBLEM - swiftobject114 conntrack_table_size on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:42:22] RECOVERY - swiftobject114 Disk Space on swiftobject114 is OK: DISK OK - free space: / 649995 MB (73% inode=95%);
[11:42:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.45, 2.90, 2.81
[11:43:57] RECOVERY - swiftobject114 NTP time on swiftobject114 is OK: NTP OK: Offset 5.862116814e-05 secs
[11:50:52] PROBLEM - swiftobject114 NTP time on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:51:21] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[11:53:48] PROBLEM - swiftobject114 Disk Space on swiftobject114 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:54:19] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[11:56:34] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean|
[11:57:26] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.85, 3.32, 2.46
[11:59:22] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 7.96, 4.88, 3.12
[12:07:03] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.90, 3.36, 3.32
[12:09:21] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.69, 3.70, 2.98
[12:11:18] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.18, 2.74, 2.71
[12:15:15] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[12:18:09] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [12:22:00] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.28, 3.54, 2.94 [12:23:58] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.24, 2.67, 2.69 [12:26:04] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [12:28:56] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [12:33:33] RECOVERY - swiftobject114 Disk Space on swiftobject114 is OK: DISK OK - free space: / 649950 MB (73% inode=95%); [12:33:34] PROBLEM - swiftobject114 Puppet on swiftobject114 is WARNING: WARNING: Puppet last ran 2 hours ago [12:33:43] RECOVERY - swiftobject114 SSH on swiftobject114 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [12:34:02] RECOVERY - swiftobject114 NTP time on swiftobject114 is OK: NTP OK: Offset -0.0004469454288 secs [12:34:56] RECOVERY - swiftobject114 conntrack_table_size on swiftobject114 is OK: OK: nf_conntrack is 0 % full [12:36:41] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [12:37:05] RECOVERY - swiftobject114 ferm_active on swiftobject114 is OK: OK ferm input default policy is set [12:42:50] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [12:44:49] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.286 second response time [12:54:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.48, 3.59, 2.50 [12:56:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.49, 3.13, 2.46 [13:03:23] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [13:06:22] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [13:06:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.93, 4.17, 2.88 [13:08:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.32, 3.56, 2.83 [13:09:03] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [13:10:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.10, 2.76, 2.62 [13:13:26] !log SET GLOBAL read_only=0; on db121 [13:13:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [13:14:04] RECOVERY - swiftobject114 PowerDNS Recursor on swiftobject114 is OK: DNS OK: 0.291 seconds response time. 
miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [13:16:08] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 17.51, 12.53, 8.97 [13:18:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.73, 10.43, 8.59 [13:19:10] RECOVERY - swiftobject114 Puppet on swiftobject114 is OK: OK: Puppet is currently enabled, last run 42 seconds ago with 0 failures [13:20:07] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 5.70, 8.81, 8.23 [13:24:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.74, 4.05, 2.58 [13:26:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.74, 3.95, 2.73 [13:28:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.91, 4.91, 3.22 [13:31:57] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 3.66, 5.36, 3.80 [13:32:03] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 2.56, 4.02, 3.10 [13:34:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.40, 3.78, 3.14 [13:34:13] PROBLEM - cp22 NTP time on cp22 is WARNING: NTP WARNING: Offset 0.1100403965 secs [13:34:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.91, 2.96, 2.97 [13:36:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.27, 3.77, 2.71 [13:38:03] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.11, 3.61, 3.22 [13:40:03] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.85, 3.03, 3.06 [13:41:57] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 1.37, 3.30, 3.64 [13:43:57] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 7.60, 4.30, 3.92 [13:44:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.21, 3.58, 3.20 [13:45:57] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.09, 3.33, 3.62 [13:46:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.56, 3.41, 3.18 [13:46:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.11, 3.30, 3.15 [13:47:57] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 0.80, 2.39, 3.23 [13:48:03] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.50, 2.70, 2.95 [13:54:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.73, 3.77, 3.47 [13:56:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.98, 2.89, 3.19 [14:00:24] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:00:35] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:00:47] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:01:04] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:01:06] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:02:31] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 7.353 second response time [14:02:34] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.702 second response time 
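
The "!log SET GLOBAL read_only=0; on db121" entry at 13:13 records the database server being switched back to accepting writes. In outline the step looks like the sketch below; the hostname and credentials are placeholders, and mysql-connector-python is assumed here rather than whatever client was actually used on the host.

    # Illustrative only: flip the read_only flag on a MariaDB/MySQL server.
    # Placeholder connection details; requires SUPER / READ_ONLY ADMIN privileges.
    import mysql.connector

    conn = mysql.connector.connect(host="db121.example", user="admin", password="...")
    cur = conn.cursor()
    cur.execute("SET GLOBAL read_only = 0")      # allow writes again
    cur.execute("SELECT @@global.read_only")
    print("read_only is now", cur.fetchone()[0])
    conn.close()
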
[14:02:50] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 6.304 second response time [14:02:53] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:03:04] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.205 second response time [14:03:06] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.211 second response time [14:04:13] RECOVERY - cp22 NTP time on cp22 is OK: NTP OK: Offset 0.09463429451 secs [14:07:00] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.371 second response time [14:07:39] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 3.70, 4.73, 3.84 [14:11:34] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.64, 3.85, 3.70 [14:13:31] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.40, 2.91, 3.37 [14:22:54] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [14:24:54] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.278 second response time [14:28:57] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call [14:30:02] PROBLEM - cp32 NTP time on cp32 is WARNING: NTP WARNING: Offset 0.3106764853 secs [14:30:56] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 3.943 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [14:31:58] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.73, 3.79, 3.27 [14:33:57] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 1.47, 2.93, 3.02 [14:34:02] RECOVERY - cp32 NTP time on cp32 is OK: NTP OK: Offset -0.02885085344 secs [14:36:56] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.18, 4.36, 3.26 [14:45:07] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet last ran 1 hour ago [14:46:04] PROBLEM - cp32 NTP time on cp32 is WARNING: NTP WARNING: Offset -0.148260951 secs [14:49:22] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.15, 3.28, 2.51 [14:50:04] RECOVERY - cp32 NTP time on cp32 is OK: NTP OK: Offset -0.06394609809 secs [14:51:17] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 2.28, 2.67, 2.37 [14:52:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.49, 2.97, 3.89 [14:58:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.06, 2.31, 3.29 [15:00:38] PROBLEM - www.dlfm-wiki.top - LetsEncrypt on sslhost is CRITICAL: connect to address www.dlfm-wiki.top and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [15:04:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.67, 3.77, 3.55 [15:06:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.23, 3.19, 3.36 [15:13:02] PROBLEM - cp23 NTP time on cp23 is WARNING: NTP WARNING: Offset 0.1920045018 secs [15:13:41] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call [15:15:45] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 8.579 seconds response time. 
miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [15:19:24] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 6.48, 4.44, 2.97 [15:21:20] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.86, 3.39, 2.76 [15:27:25] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 22 seconds ago with 0 failures [15:29:44] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call [15:30:37] PROBLEM - www.dlfm-wiki.top - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'dlfm-wiki.top' expires in 15 day(s) (Sat 19 Nov 2022 16:02:27 GMT +0000). [15:33:01] RECOVERY - cp23 NTP time on cp23 is OK: NTP OK: Offset 0.05921655893 secs [15:38:01] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 5.639 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [15:58:40] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.55, 4.26, 3.08 [16:03:58] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [16:04:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.13, 3.06, 3.14 [16:06:23] PROBLEM - www.project-patterns.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'project-patterns.com' expires in 7 day(s) (Sat 12 Nov 2022 15:41:27 GMT +0000). [16:11:37] PROBLEM - swiftobject115 Puppet on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [16:16:39] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 14 minutes ago with 0 failures [16:21:04] PROBLEM - db121 Current Load on db121 is CRITICAL: CRITICAL - load average: 8.31, 5.96, 3.63 [16:23:04] RECOVERY - db121 Current Load on db121 is OK: OK - load average: 2.71, 4.55, 3.39 [16:30:29] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [16:38:33] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call [16:46:56] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 3.676 seconds response time. 
miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [17:02:35] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 3.92, 4.52, 3.25 [17:03:08] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.95, 3.38, 2.37 [17:04:32] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.08, 3.65, 3.08 [17:05:53] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet last ran 1 hour ago [17:07:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 2.41, 3.05, 2.50 [17:08:03] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 8.03, 5.75, 3.79 [17:08:27] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 2.02, 2.89, 2.91 [17:19:48] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.08, 3.36, 3.89 [17:21:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.42, 3.42, 3.82 [17:23:43] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.46, 3.03, 3.63 [17:27:57] PROBLEM - swiftobject115 Puppet on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [17:29:35] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.67, 2.41, 3.20 [17:30:13] PROBLEM - swiftobject115 PowerDNS Recursor on swiftobject115 is CRITICAL: CRITICAL - Plugin timed out while executing system call [17:31:59] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [17:32:10] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 16.63, 11.75, 6.31 [17:33:20] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet last ran 1 hour ago [17:37:24] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 8.65, 5.80, 4.28 [17:38:11] PROBLEM - swiftobject115 Puppet on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [17:38:20] PROBLEM - swiftobject115 SSH on swiftobject115 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [17:40:23] RECOVERY - swiftobject115 SSH on swiftobject115 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [17:43:16] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.77, 3.65, 3.83 [17:45:37] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.93, 2.97, 3.91 [17:45:43] PROBLEM - biblestrength.net - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'biblestrength.net' expires in 15 day(s) (Sun 20 Nov 2022 17:39:25 GMT +0000). [17:46:45] PROBLEM - swiftobject115 SSH on swiftobject115 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [17:47:21] [02miraheze/ssl] 07MirahezeSSLBot pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/ssl/compare/2683786814e9...07bf3c30bd75 [17:47:21] PROBLEM - swiftobject115 ferm_active on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [17:47:24] [02miraheze/ssl] 07MirahezeSSLBot 0307bf3c3 - Bot: Update SSL cert for biblestrength.net [17:53:04] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.57, 2.87, 3.36 [17:53:19] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.84, 3.22, 3.55 [17:54:17] PROBLEM - swiftobject115 conntrack_table_size on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
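
The sslhost LetsEncrypt checks above (www.dlfm-wiki.top, www.project-patterns.com, biblestrength.net) are certificate-expiry probes: judging by the output, roughly 15 days remaining yields WARNING and about a week yields CRITICAL, and a renewed certificate pushed by MirahezeSSLBot (17:47) clears the alert. A rough sketch of such a probe, assuming direct TLS access to port 443 and thresholds inferred from the log rather than taken from the real check:

    # Certificate-expiry probe sketch: connect over TLS, read notAfter,
    # and report based on days remaining. Thresholds are inferred, not exact.
    import ssl, socket, time

    def cert_days_left(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    days = cert_days_left("biblestrength.net")
    state = "CRITICAL" if days < 8 else "WARNING" if days < 16 else "OK"
    print(f"{state} - certificate expires in {days} day(s)")
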
[17:55:14] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.43, 3.32, 3.55 [17:56:57] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.21, 3.49, 3.53 [17:59:05] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.98, 3.00, 3.39 [18:00:52] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.96, 4.09, 3.71 [18:05:50] PROBLEM - swiftobject115 Disk Space on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:07:01] PROBLEM - swiftobject115 NTP time on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:07:03] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [18:12:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.24, 3.41, 3.82 [18:14:25] RECOVERY - biblestrength.net - LetsEncrypt on sslhost is OK: OK - Certificate 'biblestrength.net' will expire on Thu 02 Feb 2023 16:47:15 GMT +0000. [18:15:35] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:16:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.24, 2.07, 3.15 [18:18:01] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [18:26:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.60, 3.16, 2.69 [18:28:34] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:29:08] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.63, 2.57, 2.00 [18:30:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 12.14, 6.00, 3.79 [18:31:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.05, 1.93, 1.83 [18:33:57] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [18:34:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.53, 3.72, 3.33 [18:36:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.12, 2.75, 3.01 [18:36:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.16, 4.96, 3.60 [18:39:28] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
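
The cloud11 SMART check that keeps flapping between WARNING and a socket timeout is a per-device sweep over the cciss RAID ports: ports 0 through 5 report clean, while [cciss,6] returns no health status line and a self-test log containing errors, which pins the overall result at WARNING. A simplified sketch of that loop, assuming smartmontools' smartctl and an illustrative device path:

    # Simplified per-port SMART sweep over a cciss controller.
    # Device path and port count are illustrative; real checks wrap smartctl.
    import subprocess

    def smart_status(port, device="/dev/sda"):
        out = subprocess.run(
            ["smartctl", "-H", "-d", f"cciss,{port}", device],
            capture_output=True, text=True).stdout
        if "PASSED" in out or "SMART Health Status: OK" in out:
            return "Device is clean"
        return "No health status line found"

    results = [f"[cciss,{p}] - {smart_status(p)}" for p in range(7)]
    severity = "WARNING" if any("No health" in r for r in results) else "OK"
    print(f"{severity}: " + " --- ".join(results))
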
[18:40:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.21, 3.71, 3.44 [18:42:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.73, 3.24, 3.32 [18:45:23] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [18:50:08] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 2.19, 4.12, 3.21 [18:52:05] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 1.10, 3.02, 2.92 [18:54:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.46, 2.82, 2.56 [18:56:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.52, 2.61, 2.51 [18:56:49] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:59:08] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.73, 2.74, 2.03 [18:59:46] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [19:03:08] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.29, 2.45, 2.12 [19:06:00] RECOVERY - swiftobject115 NTP time on swiftobject115 is OK: NTP OK: Offset -0.0003811120987 secs [19:06:14] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet last ran 3 hours ago [19:06:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 13.87, 7.67, 4.67 [19:06:50] RECOVERY - swiftobject115 conntrack_table_size on swiftobject115 is OK: OK: nf_conntrack is 0 % full [19:07:16] RECOVERY - swiftobject115 Disk Space on swiftobject115 is OK: DISK OK - free space: / 651807 MB (73% inode=95%); [19:07:41] RECOVERY - swiftobject115 SSH on swiftobject115 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:08:28] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:10:13] RECOVERY - swiftobject115 ferm_active on swiftobject115 is OK: OK ferm input default policy is set [19:15:09] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.30, 3.55, 2.91 [19:16:12] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.86, 3.78, 2.64 [19:17:04] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.80, 2.77, 2.69 [19:18:11] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 2.84, 3.46, 2.66 [19:19:26] RECOVERY - swiftobject115 PowerDNS Recursor on swiftobject115 is OK: DNS OK: 0.036 seconds response time. 
miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [19:20:09] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.59, 2.43, 2.38 [19:25:45] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.60, 3.69, 3.15 [19:27:41] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.03, 4.05, 3.35 [19:30:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.66, 2.88, 3.98 [19:31:32] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.23, 3.99, 3.50 [19:33:27] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.50, 3.13, 3.24 [19:35:09] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,6] - No health status line found, Self-test log contains errors --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,2] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean| [19:36:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.24, 3.61, 3.82 [19:38:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.42, 3.27, 3.69 [19:40:20] PROBLEM - swiftproxy111 Puppet on swiftproxy111 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 28 minutes ago with 0 failures [19:41:45] PROBLEM - wiki.nj.cn.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.nj.cn.eu.org All nameservers failed to answer the query. [19:43:18] PROBLEM - swiftproxy131 Puppet on swiftproxy131 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 6 minutes ago with 0 failures [19:44:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.59, 2.43, 3.24 [19:45:49] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 3.41, 2.89, 2.28 [19:45:58] PROBLEM - swiftobject114 Puppet on swiftobject114 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 24 minutes ago with 0 failures [19:47:00] [Grafana] FIRING: Some MediaWiki Appservers are running out of PHP-FPM workers. 
https://grafana.miraheze.org/d/GtxbP1Xnk [19:47:47] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:47:48] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.05, 2.17, 2.09 [19:49:46] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.685 second response time [19:50:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.63, 9.30, 7.82 [19:52:07] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.21, 7.72, 7.40 [19:54:39] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.37, 3.46, 2.84 [19:58:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.64, 9.17, 7.80 [19:58:07] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.05, 10.34, 8.58 [19:58:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.77, 2.73, 2.70 [20:00:07] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.03, 8.70, 7.80 [20:00:37] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [20:02:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.02, 11.18, 9.33 [20:04:07] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.04, 9.62, 9.01 [20:08:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.15, 10.52, 9.55 [20:08:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.38, 9.89, 8.63 [20:10:07] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.21, 9.78, 9.40 [20:10:07] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.23, 9.06, 8.50 [20:11:07] RECOVERY - wiki.nj.cn.eu.org - reverse DNS on sslhost is OK: SSL OK - wiki.nj.cn.eu.org reverse DNS resolves to cp23.miraheze.org - CNAME OK [20:11:38] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 3.54, 3.03, 2.51 [20:12:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 4.00, 3.96, 3.12 [20:12:59] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:13:07] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 3.41, 2.94, 2.53 [20:14:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.69, 4.75, 3.51 [20:14:57] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.613 second response time [20:15:01] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 2.41, 2.76, 2.52 [20:15:38] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 1.93, 2.72, 2.53 [20:22:00] [Grafana] RESOLVED: PHP-FPM Worker Usage High https://grafana.miraheze.org/d/GtxbP1Xnk [20:24:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.24, 3.43, 2.79 [20:24:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.85, 3.60, 3.51 [20:26:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.30, 2.58, 2.56 [20:26:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.36, 3.02, 3.30 [20:34:18] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.66, 11.46, 9.70 [20:34:56] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 4.89, 4.04, 3.12 
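
The Grafana alert that fires at 19:47 and resolves at 20:22 tracks PHP-FPM worker saturation on the mw* appservers; when a pool runs out of free workers, new requests queue and the MediaWiki Rendering checks start hitting their 10-second socket timeout, which matches the mw121/mw122 load warnings around the same time. A hedged sketch of polling a pool's status endpoint; the URL and the 85% threshold are assumptions, not Miraheze's actual configuration:

    # Poll a php-fpm status endpoint and flag the pool when it is close
    # to exhausting its workers. Endpoint URL and threshold are assumptions.
    import json, urllib.request

    STATUS_URL = "http://mw121.example/fpm-status?json"

    with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
        status = json.load(resp)

    busy = status["active processes"]
    total = status["total processes"]
    if busy / total > 0.85:
        print(f"FIRING: {busy}/{total} PHP-FPM workers busy")
    else:
        print(f"OK: {busy}/{total} PHP-FPM workers busy")
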
[20:35:36] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 8.96, 4.95, 3.93 [20:36:45] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.80, 10.08, 9.59 [20:38:40] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.35, 10.49, 9.78 [20:40:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.15, 11.00, 10.26 [20:40:34] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.08, 10.28, 9.78 [20:40:36] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 3.91, 3.38, 2.92 [20:41:28] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.99, 3.76, 3.79 [20:42:07] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.54, 9.90, 9.95 [20:42:29] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.21, 9.55, 9.57 [20:42:56] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 2.82, 3.77, 3.47 [20:45:23] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.94, 2.11, 3.10 [20:46:36] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 4.17, 3.80, 3.25 [20:48:36] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 1.73, 2.97, 3.01 [20:48:56] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 1.98, 2.94, 3.25 [20:54:43] !log set weight for swift object rings to 0 for swiftobject114/5 and have its data rebalanced to other swift data servers [20:54:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [20:56:36] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 9.48, 5.07, 3.65 [20:57:05] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 10.82, 6.34, 3.90 [20:57:38] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 12.15, 6.63, 3.91 [20:58:03] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.58, 3.46, 2.36 [20:58:56] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 19.91, 12.16, 6.77 [21:00:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.98, 9.44, 8.85 [21:00:21] RECOVERY - swiftproxy111 Puppet on swiftproxy111 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures [21:01:18] RECOVERY - swiftproxy131 Puppet on swiftproxy131 is OK: OK: Puppet is currently enabled, last run 5 seconds ago with 0 failures [21:02:02] RECOVERY - swiftobject114 Puppet on swiftobject114 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures [21:02:07] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.06, 11.02, 9.50 [21:02:32] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.75, 3.77, 2.89 [21:04:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.35, 10.97, 9.60 [21:04:29] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 2.14, 3.28, 2.81 [21:05:10] PROBLEM - graylog121 Current Load on graylog121 is CRITICAL: CRITICAL - load average: 4.52, 3.50, 2.70 [21:06:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.46, 11.41, 10.16 [21:06:54] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 9.79, 5.35, 3.60 [21:08:03] PROBLEM - 
gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.66, 3.88, 3.50 [21:08:07] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.92, 11.68, 10.18 [21:08:21] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.21, 3.51, 3.06 [21:09:10] PROBLEM - graylog121 Current Load on graylog121 is WARNING: WARNING - load average: 3.74, 3.81, 3.01 [21:10:07] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.40, 12.03, 10.67 [21:10:18] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 4.53, 3.72, 3.18 [21:12:15] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 3.09, 3.39, 3.11 [21:12:38] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet is currently disabled, message: paladoxz, last run 6 minutes ago with 0 failures [21:12:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.59, 3.92, 3.54 [21:13:10] RECOVERY - graylog121 Current Load on graylog121 is OK: OK - load average: 2.58, 3.10, 2.91 [21:14:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.30, 4.20, 3.66 [21:16:00] PROBLEM - swiftobject114 Puppet on swiftobject114 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 13 minutes ago with 0 failures [21:16:03] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.06, 3.13, 3.32 [21:18:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.73, 11.59, 11.12 [21:18:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.72, 3.64, 3.57 [21:20:07] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.43, 12.43, 11.49 [21:20:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.15, 3.11, 3.38 [21:22:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 8.97, 11.15, 11.22 [21:25:10] PROBLEM - graylog121 Current Load on graylog121 is WARNING: WARNING - load average: 3.51, 3.06, 2.88 [21:25:57] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 3.19, 3.65, 3.44 [21:26:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.10, 4.10, 3.11 [21:27:10] PROBLEM - graylog121 Current Load on graylog121 is CRITICAL: CRITICAL - load average: 4.13, 3.37, 3.02 [21:27:33] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://github.com/miraheze/puppet/commit/94ccb19fbdb3 [21:27:36] [02miraheze/puppet] 07paladox 0394ccb19 - swift: increase memory limit to 40% for swift-object-replicator [21:27:37] [02puppet] 07paladox created branch 03paladox-patch-6 - 13https://github.com/miraheze/puppet [21:27:39] [02puppet] 07paladox opened pull request 03#2983: swift: increase memory limit to 40% for swift-object-replicator - 13https://github.com/miraheze/puppet/pull/2983 [21:28:01] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 5.19, 4.57, 3.82 [21:28:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.54, 3.07, 2.85 [21:29:05] [02puppet] 07paladox closed pull request 03#2983: swift: increase memory limit to 40% for swift-object-replicator - 13https://github.com/miraheze/puppet/pull/2983 [21:29:07] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/puppet/compare/7ac5b9625e29...51054d98c44d [21:29:08] [02miraheze/puppet] 
07paladox 0351054d9 - swift: increase memory limit to 40% for swift-object-replicator (#2983) [21:29:10] RECOVERY - graylog121 Current Load on graylog121 is OK: OK - load average: 2.43, 3.12, 2.98 [21:29:11] [02miraheze/puppet] 07paladox deleted branch 03paladox-patch-6 [21:29:12] [02puppet] 07paladox deleted branch 03paladox-patch-6 - 13https://github.com/miraheze/puppet [21:30:03] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.63, 5.62, 4.13 [21:36:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.32, 4.36, 3.59 [21:38:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.92, 11.20, 11.58 [21:40:07] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.60, 11.00, 10.72 [21:42:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.47, 3.98, 3.77 [21:43:57] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 3.05, 3.27, 3.95 [21:44:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.88, 11.50, 11.00 [21:46:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.52, 3.49, 2.92 [21:46:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.14, 2.65, 3.29 [21:48:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.58, 3.33, 3.94 [21:48:07] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.36, 12.07, 11.33 [21:48:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.41, 2.62, 2.66 [21:50:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.33, 11.33, 11.15 [21:51:57] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 2.10, 2.36, 3.26 [21:52:03] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.52, 2.21, 3.33 [21:52:07] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.85, 11.51, 11.23 [21:54:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 8.16, 4.73, 3.40 [21:56:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.94, 10.93, 11.11 [21:58:07] PROBLEM - swiftac111 Current Load on swiftac111 is CRITICAL: CRITICAL - load average: 9.44, 6.40, 4.05 [21:58:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.26, 3.35, 3.14 [22:00:06] RECOVERY - swiftac111 Current Load on swiftac111 is OK: OK - load average: 6.66, 5.97, 4.16 [22:02:07] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.13, 8.88, 10.09 [22:02:07] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.55, 8.35, 9.91 [22:04:23] PROBLEM - swiftac111 Current Load on swiftac111 is CRITICAL: CRITICAL - load average: 21.95, 14.67, 8.10 [22:06:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 9.53, 5.67, 3.94 [22:09:06] PROBLEM - swiftac111 SSH on swiftac111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:10:22] RECOVERY - swiftobject114 Puppet on swiftobject114 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures [22:10:37] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures [22:10:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.13, 3.08, 3.27 [22:10:51] PROBLEM - swiftac111 PowerDNS Recursor on swiftac111 is CRITICAL: CRITICAL - Plugin timed out while 
executing system call [22:11:33] PROBLEM - swiftac111 Puppet on swiftac111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:11:42] PROBLEM - swiftac111 conntrack_table_size on swiftac111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:12:30] PROBLEM - swiftac111 ferm_active on swiftac111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:12:33] PROBLEM - swiftac111 NTP time on swiftac111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:14:25] PROBLEM - swiftproxy131 HTTP on swiftproxy131 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host: HTTP/1.1 401 Unauthorized [22:14:59] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/1.1 401 Unauthorized [22:15:09] PROBLEM - swiftproxy111 HTTP on swiftproxy111 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host: HTTP/1.1 401 Unauthorized [22:15:11] PROBLEM - swiftproxy131 HTTPS on swiftproxy131 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/1.1 401 Unauthorized [22:17:38] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 1.27, 1.88, 3.83 [22:18:24] PROBLEM - swiftac111 Swift Account Service on swiftac111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:20:36] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 2.09, 1.90, 3.85 [22:21:26] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 1.66, 2.07, 3.87 [22:22:01] PROBLEM - swiftac111 Disk Space on swiftac111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
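
The 20:54 !log entry describes draining swiftobject114 and swiftobject115: their devices are set to weight 0 in the object ring so a rebalance moves their partitions onto the remaining object servers, and the replication traffic that follows is a plausible explanation for the swiftobject/swiftac load spikes, the connection-refused Swift Object Service checks, and the proxy 401s seen over the next hour. In outline the ring operation looks like the sketch below; the builder path and search options are placeholders, and the exact swift-ring-builder syntax varies by Swift release.

    # Outline of draining two object servers from a Swift ring.
    # Builder file and --ip values are placeholders; the real ring layout
    # and search syntax on the Miraheze hosts may differ.
    import subprocess

    BUILDER = "object.builder"
    drained = ["10.0.0.114", "10.0.0.115"]   # placeholder addresses

    for ip in drained:
        # Weight 0 means the ring assigns this device no partitions.
        subprocess.run(["swift-ring-builder", BUILDER,
                        "set_weight", "--ip", ip, "0"], check=True)

    # Recompute partition placement; the new ring then gets distributed to all nodes.
    subprocess.run(["swift-ring-builder", BUILDER, "rebalance"], check=True)
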
[22:23:38] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 0.96, 1.61, 3.10 [22:24:28] PROBLEM - swiftac111 Swift Container Service on swiftac111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:24:36] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 1.20, 1.63, 3.30 [22:25:08] PROBLEM - swiftobject112 Swift Object Service on swiftobject112 is CRITICAL: connect to address 2a10:6740::6:204 and port 6000: Connection refused [22:25:26] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 0.81, 1.48, 3.22 [22:25:48] PROBLEM - swiftobject115 Swift Object Service on swiftobject115 is CRITICAL: connect to address 2a10:6740::6:207 and port 6000: Connection refused [22:25:52] PROBLEM - swiftobject113 Swift Object Service on swiftobject113 is CRITICAL: connect to address 2a10:6740::6:205 and port 6000: Connection refused [22:26:13] PROBLEM - swiftobject114 Swift Object Service on swiftobject114 is CRITICAL: connect to address 2a10:6740::6:206 and port 6000: Connection refused [22:26:26] PROBLEM - swiftobject111 Swift Object Service on swiftobject111 is CRITICAL: connect to address 2a10:6740::6:203 and port 6000: Connection refused [22:28:54] RECOVERY - swiftac111 Swift Account Service on swiftac111 is OK: TCP OK - 7.259 second response time on 2a10:6740::6:202 port 6002 [22:28:55] RECOVERY - swiftac111 conntrack_table_size on swiftac111 is OK: OK: nf_conntrack is 0 % full [22:28:56] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.21, 1.54, 3.89 [22:29:04] PROBLEM - swiftac111 Current Load on swiftac111 is WARNING: WARNING - load average: 7.23, 1.64, 0.54 [22:29:20] RECOVERY - swiftac111 ferm_active on swiftac111 is OK: OK ferm input default policy is set [22:29:48] RECOVERY - swiftobject115 Swift Object Service on swiftobject115 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:207 port 6000 [22:29:49] RECOVERY - swiftproxy131 HTTP on swiftproxy131 is OK: HTTP OK: Status line output matched "HTTP/1.1 404" - 400 bytes in 0.661 second response time [22:29:52] RECOVERY - swiftobject113 Swift Object Service on swiftobject113 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:205 port 6000 [22:29:53] RECOVERY - swiftac111 Disk Space on swiftac111 is OK: DISK OK - free space: / 108600 MB (88% inode=96%); [22:30:05] RECOVERY - swiftac111 PowerDNS Recursor on swiftac111 is OK: DNS OK: 0.072 seconds response time. 
miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [22:30:11] PROBLEM - graylog121 Current Load on graylog121 is CRITICAL: CRITICAL - load average: 4.14, 3.32, 2.84 [22:30:13] RECOVERY - swiftobject114 Swift Object Service on swiftobject114 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:206 port 6000 [22:30:15] RECOVERY - swiftac111 SSH on swiftac111 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [22:30:26] RECOVERY - swiftobject111 Swift Object Service on swiftobject111 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:203 port 6000 [22:30:34] RECOVERY - swiftac111 Swift Container Service on swiftac111 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:202 port 6001 [22:30:47] RECOVERY - swiftproxy111 HTTPS on swiftproxy111 is OK: HTTP OK: Status line output matched "HTTP/1.1 404" - 367 bytes in 5.407 second response time [22:30:58] RECOVERY - swiftac111 Puppet on swiftac111 is OK: OK: Puppet is currently enabled, last run 49 seconds ago with 0 failures [22:31:03] RECOVERY - swiftac111 Current Load on swiftac111 is OK: OK - load average: 2.63, 1.72, 0.70 [22:31:08] RECOVERY - swiftproxy111 HTTP on swiftproxy111 is OK: HTTP OK: Status line output matched "HTTP/1.1 404" - 400 bytes in 0.205 second response time [22:31:08] RECOVERY - swiftobject112 Swift Object Service on swiftobject112 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:204 port 6000 [22:31:12] RECOVERY - swiftproxy131 HTTPS on swiftproxy131 is OK: HTTP OK: Status line output matched "HTTP/1.1 404" - 367 bytes in 0.337 second response time [22:31:32] RECOVERY - swiftac111 NTP time on swiftac111 is OK: NTP OK: Offset 0.0001059472561 secs [22:32:08] RECOVERY - graylog121 Current Load on graylog121 is OK: OK - load average: 2.68, 3.16, 2.84 [22:35:26] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 4.02, 3.05, 3.01 [22:36:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.97, 4.12, 3.03 [22:37:38] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 3.76, 3.24, 2.90 [22:38:36] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 3.89, 3.54, 3.15 [22:39:38] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 3.38, 3.29, 2.97 [22:40:36] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 3.06, 3.31, 3.11 [22:40:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.31, 3.50, 3.03 [22:40:56] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 6.19, 4.37, 4.02 [22:42:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.70, 3.80, 3.18 [22:42:48] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.41, 10.75, 9.31 [22:43:59] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 13https://github.com/miraheze/puppet/commit/99bd6d85db5d [22:44:02] [02miraheze/puppet] 07paladox 0399bd6d8 - swift: reduce account/container workers to 12 [22:44:04] [02puppet] 07paladox created branch 03paladox-patch-6 - 13https://github.com/miraheze/puppet [22:44:05] [02puppet] 07paladox opened pull request 03#2984: swift: reduce account/container workers to 12 - 13https://github.com/miraheze/puppet/pull/2984 [22:44:14] [02miraheze/puppet] 07paladox pushed 031 commit to 03paladox-patch-6 [+0/-0/±1] 
13https://github.com/miraheze/puppet/compare/99bd6d85db5d...6ea89b8acd33 [22:44:17] [02miraheze/puppet] 07paladox 036ea89b8 - Update account-server.conf.erb [22:44:20] [02puppet] 07paladox synchronize pull request 03#2984: swift: reduce account/container workers to 12 - 13https://github.com/miraheze/puppet/pull/2984 [22:44:26] [02miraheze/puppet] 07paladox deleted branch 03paladox-patch-6 [22:44:29] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±2] 13https://github.com/miraheze/puppet/compare/51054d98c44d...58af7604ff54 [22:44:31] [02miraheze/puppet] 07paladox 0358af760 - swift: reduce account/container workers to 12 (#2984) [22:44:34] [02puppet] 07paladox closed pull request 03#2984: swift: reduce account/container workers to 12 - 13https://github.com/miraheze/puppet/pull/2984 [22:44:35] [02puppet] 07paladox deleted branch 03paladox-patch-6 - 13https://github.com/miraheze/puppet [22:44:42] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.91, 9.86, 9.15 [22:44:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.52, 3.34, 3.07 [22:46:36] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 3.51, 3.53, 3.28 [22:48:36] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 2.95, 3.23, 3.19 [22:50:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.40, 3.11, 3.10 [22:52:40] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 4.25, 3.13, 2.23 [22:55:24] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 4.57, 4.43, 3.70 [22:56:22] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 3.34, 3.54, 3.13 [22:56:34] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 3.25, 3.56, 2.63 [22:58:17] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 4.54, 3.77, 3.25 [22:58:31] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 3.11, 3.25, 2.62 [23:02:09] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 3.42, 3.57, 3.26 [23:02:23] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 4.37, 3.94, 3.07 [23:04:04] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 9.36, 5.45, 3.97 [23:04:20] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 2.51, 3.37, 2.96 [23:09:22] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 3.57, 4.20, 3.56 [23:10:09] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 3.57, 3.46, 3.12 [23:11:20] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.39, 3.18, 3.26 [23:12:06] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 4.61, 4.07, 3.39 [23:14:03] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 3.68, 3.74, 3.34 [23:17:50] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.55, 3.42, 3.08 [23:17:58] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 1.87, 3.06, 3.20 [23:19:47] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.91, 3.00, 2.97 [23:20:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.93, 10.60, 9.46 [23:22:07] RECOVERY 
- mw121 Current Load on mw121 is OK: OK - load average: 8.44, 9.61, 9.22 [23:24:38] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.13, 3.21, 2.49 [23:26:38] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.60, 2.64, 2.38 [23:27:09] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:28:07] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.42, 11.28, 10.09 [23:29:08] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.514 second response time [23:32:07] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.88, 9.89, 9.82 [23:39:37] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.64, 6.38, 4.36 [23:44:45] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.27, 10.22, 9.74 [23:46:40] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.41, 10.72, 9.99 [23:46:59] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.67, 3.20, 2.47 [23:48:56] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.22, 2.79, 2.41 [23:50:07] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.73, 10.91, 9.83 [23:52:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.60, 10.87, 9.93 [23:52:23] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 9.67, 10.12, 9.96 [23:53:20] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.21, 3.38, 3.92 [23:57:14] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 12.92, 5.25, 4.39 [23:59:41] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.49, 3.15, 2.25
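
The MediaWiki Rendering probes that appear throughout (for example the mw121 CRITICAL at 23:27 and its recovery at 23:29) are plain HTTP checks: CRITICAL when the request hits its 10-second socket timeout, OK when a 200 response of the expected size comes back promptly. A minimal sketch with a placeholder URL:

    # Minimal rendering probe: fetch a page, report size and elapsed time,
    # treat any connection/timeout error as CRITICAL. URL is a placeholder.
    import time, urllib.request

    URL = "http://mw121.example/wiki/Main_Page"
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            body = resp.read()
        elapsed = time.monotonic() - start
        print(f"HTTP OK: {resp.status} - {len(body)} bytes in {elapsed:.3f} second response time")
    except OSError as exc:
        print(f"CRITICAL - {exc}")
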