[00:04:11] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 8.92, 6.41, 4.02 [00:09:56] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.14, 3.86, 3.58 [00:11:51] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.84, 3.15, 3.35 [00:12:29] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.53, 3.24, 2.85 [00:15:42] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.62, 3.33, 3.41 [00:16:26] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 3.32, 3.35, 3.00 [00:17:37] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.41, 3.08, 3.31 [00:25:51] RECOVERY - reports121 NTP time on reports121 is OK: NTP OK: Offset -0.0003573000431 secs [00:28:01] RECOVERY - mon141 NTP time on mon141 is OK: NTP OK: Offset 6.81579113e-05 secs [00:31:12] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 6.68, 3.98, 2.28 [00:34:14] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 12.66, 7.80, 4.57 [00:37:07] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 8.42, 5.06, 3.60 [00:39:12] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.59, 3.38, 3.04 [00:40:10] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.62, 3.53, 3.64 [00:40:57] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.86, 3.94, 3.56 [00:42:05] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.60, 2.82, 3.36 [00:42:52] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.88, 3.35, 3.39 [00:44:01] PROBLEM - cp23 NTP time on cp23 is WARNING: NTP WARNING: Offset 0.1025840044 secs [00:46:01] RECOVERY - cp23 NTP time on cp23 is OK: NTP OK: Offset 0.09984993935 secs [00:50:02] PROBLEM - cp23 NTP time on cp23 is WARNING: NTP WARNING: Offset 0.1022971272 secs [00:53:49] PROBLEM - cp33 Current Load on cp33 
is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [00:54:01] RECOVERY - cp23 NTP time on cp23 is OK: NTP OK: Offset 0.09876558185 secs [00:58:49] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 8.58, 4.45, 3.31 [01:04:09] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.58, 3.59, 3.21 [01:09:55] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.49, 3.96, 3.47 [01:11:50] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.40, 3.46, 3.35 [01:15:41] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.13, 3.75, 3.51 [01:19:38] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.55, 2.09, 3.85 [01:23:21] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.58, 3.80, 3.78 [01:23:35] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.85, 3.30, 3.83 [01:25:33] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.39, 3.35, 3.80 [01:27:19] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.07, 3.70, 3.75 [01:27:31] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.10, 3.81, 3.93 [01:29:30] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.57, 3.19, 3.67 [01:31:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.84, 3.02, 3.48 [01:33:12] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.72, 4.02, 2.57 [01:33:21] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.46, 4.33, 3.91 [01:33:28] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.43, 2.38, 3.24 [01:39:12] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.46, 2.74, 2.62 [01:43:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.46, 3.21, 3.88 [01:47:22] PROBLEM - cp32 
Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.37, 3.33, 3.74 [01:49:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.68, 2.74, 3.48 [01:53:19] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.33, 2.59, 3.31 [01:59:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.69, 3.41, 3.48 [02:03:19] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.67, 3.77, 3.60 [02:03:47] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.68, 2.77, 3.85 [02:05:45] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.37, 3.38, 3.93 [02:07:42] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.61, 3.10, 3.76 [02:11:38] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.21, 3.69, 3.77 [02:13:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.46, 3.45, 3.90 [02:13:36] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.95, 3.38, 3.63 [02:17:19] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.05, 2.96, 3.53 [02:17:32] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.99, 2.68, 3.30 [02:19:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.85, 3.09, 3.53 [02:21:19] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.30, 2.86, 3.39 [02:21:25] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.87, 3.93, 3.71 [02:29:16] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 3.18, 3.02, 3.35 [02:31:19] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.25, 4.00, 3.67 [02:35:10] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.80, 4.91, 4.07 [02:39:10] PROBLEM - gluster101 Current 
Load on gluster101 is WARNING: WARNING - load average: 1.90, 3.35, 3.63 [02:45:10] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.00, 2.81, 3.33 [02:55:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.98, 2.79, 3.97 [03:01:53] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.63, 3.44, 3.32 [03:03:19] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.61, 3.16, 3.59 [03:03:50] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.34, 2.62, 3.02 [03:03:52] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 12.95, 7.56, 4.06 [03:09:20] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.39, 3.67, 3.75 [03:09:47] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.85, 3.69, 3.35 [03:11:19] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.57, 4.23, 3.94 [03:11:45] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.70, 3.21, 3.21 [03:13:19] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.03, 3.88, 3.86 [03:13:37] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.36, 4.06, 3.31 [03:15:35] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.20, 3.36, 3.15 [03:16:03] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 3.82, 2.86, 1.75 [03:17:59] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.02, 2.17, 1.63 [03:21:19] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.39, 2.41, 3.16 [03:31:14] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.66, 3.20, 3.09 [03:33:12] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.57, 2.69, 2.92 [03:36:00] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.20, 5.29, 
3.94 [03:41:46] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.09, 3.41, 3.55 [03:43:41] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.46, 4.16, 3.82 [03:45:35] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.77, 3.25, 3.52 [03:46:01] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.01, 3.78, 3.32 [03:47:30] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.87, 2.51, 3.22 [04:13:27] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 3.63, 3.12, 1.92 [04:15:27] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 3.35, 3.08, 2.05 [04:22:22] PROBLEM - ns1 NTP time on ns1 is WARNING: NTP WARNING: Offset 0.1015983522 secs [04:33:17] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.18, 2.89, 3.78 [04:35:14] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.54, 4.13, 4.13 [04:37:19] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 2.90, 4.93, 3.61 [04:41:10] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.66, 3.67, 3.99 [04:41:19] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.73, 3.02, 3.17 [04:46:22] RECOVERY - ns1 NTP time on ns1 is OK: NTP OK: Offset 0.089599967 secs [04:51:10] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.30, 3.98, 3.78 [04:57:10] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.15, 3.55, 3.72 [05:05:10] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.37, 2.98, 3.35 [05:17:27] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.21, 3.04, 1.94 [05:19:27] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 2.79, 2.75, 1.96 [05:25:00] [miraheze/ssl] Reception123 pushed 1 commit to
master [+1/-0/±1] https://github.com/miraheze/ssl/compare/f3aac0e2045c...dfb45b35684a [05:25:01] [url] Comparing f3aac0e2045c...dfb45b35684a · miraheze/ssl · GitHub | github.com [05:25:01] [miraheze/ssl] Reception123 dfb45b3 - re-add en.clockupwiki.org cert [05:25:59] [miraheze/dns] Reception123 pushed 1 commit to master [+1/-0/±0] https://github.com/miraheze/dns/compare/fc3690e2259d...e0061ff51696 [05:26:00] [url] Comparing fc3690e2259d...e0061ff51696 · miraheze/dns · GitHub | github.com [05:26:02] [miraheze/dns] Reception123 e0061ff - add fotnswiki.com zone [05:26:11] miraheze/ssl - Reception123 the build has errored. [05:28:18] [miraheze/ssl] Reception123 pushed 1 commit to master [+1/-0/±1] https://github.com/miraheze/ssl/compare/dfb45b35684a...455eb72048c3 [05:28:19] [url] Comparing dfb45b35684a...455eb72048c3 · miraheze/ssl · GitHub | github.com [05:28:20] [miraheze/ssl] Reception123 455eb72 - add fotnswiki.com cert [05:34:25] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.22, 3.41, 3.03 [05:36:20] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.59, 2.90, 2.89 [05:41:46] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.63, 3.30, 3.14 [05:43:44] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.83, 2.75, 2.96 [06:19:12] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.44, 3.40, 2.71 [06:21:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.48, 2.66, 2.52 [06:56:48] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.10, 3.49, 3.00 [06:58:47] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.05, 3.01, 2.88 [07:03:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.87, 3.45, 2.84 [07:05:02] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.93, 3.16,
2.82 [07:19:30] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.06, 4.03, 3.31 [07:21:29] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.40, 3.49, 3.20 [07:23:29] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.84, 2.96, 3.04 [07:36:22] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.90, 3.40, 2.99 [07:38:21] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.88, 5.10, 3.66 [07:44:17] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.28, 3.41, 3.41 [07:46:15] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.91, 2.52, 3.08 [07:49:57] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.41, 2.51, 1.80 [07:51:55] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.83, 2.47, 1.89 [08:05:03] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.90, 3.60, 2.93 [08:09:01] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.88, 3.11, 2.92 [08:22:21] PROBLEM - cloud13 Puppet on cloud13 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. 
Failed resources (up to 3 shown): Service[ulogd2] [08:25:51] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.12, 3.68, 2.83 [08:27:49] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.12, 3.72, 2.98 [08:29:22] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.70, 3.26, 2.19 [08:29:48] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.32, 3.93, 3.15 [08:31:46] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.21, 3.26, 2.99 [08:34:14] PROBLEM - cp32 NTP time on cp32 is WARNING: NTP WARNING: Offset 0.2591681778 secs [08:35:23] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.40, 3.04, 2.58 [08:35:50] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.94, 6.16, 4.28 [08:43:44] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.57, 3.58, 3.88 [08:47:44] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.52, 1.77, 3.06 [08:50:20] RECOVERY - cloud13 Puppet on cloud13 is OK: OK: Puppet is currently enabled, last run 9 seconds ago with 0 failures [08:56:12] RECOVERY - cp32 NTP time on cp32 is OK: NTP OK: Offset -0.02149307728 secs [09:02:15] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.29, 3.42, 2.65 [09:04:14] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.06, 2.88, 2.54 [10:36:30] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.96, 2.43, 1.69 [10:38:29] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.63, 2.59, 1.85 [10:54:17] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.73, 2.76, 1.97 [10:56:15] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.40, 2.66, 2.04 [11:27:06] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.65, 3.74, 2.37 [11:29:07] RECOVERY - cp33 Current Load on cp33 is OK: OK - 
load average: 2.13, 3.04, 2.28 [12:06:23] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.26, 4.03, 2.61 [12:10:20] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.06, 3.71, 2.85 [12:12:18] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.56, 2.61, 2.55 [12:52:45] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.10, 2.78, 2.13 [12:54:43] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 3.39, 3.03, 2.30 [12:59:45] RECOVERY - Host cloud11 is UP: PING OK - Packet loss = 0%, RTA = 0.48 ms [13:00:23] PROBLEM - cloud11 ferm_active on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:00:28] RECOVERY - ping6 on cloud11 is OK: PING OK - Packet loss = 0%, RTA = 0.68 ms [13:00:33] RECOVERY - cloud11 SSH on cloud11 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [13:00:33] PROBLEM - cloud11 PowerDNS Recursor on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:00:33] PROBLEM - cloud11 Puppet on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:00:33] PROBLEM - cloud11 conntrack_table_size on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:00:43] PROBLEM - cloud11 Disk Space on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:00:43] PROBLEM - cloud11 NTP time on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:00:43] PROBLEM - cloud11 Current 
Load on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:01:08] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:06:33] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.32, 2.81, 2.25 [13:08:32] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.66, 2.93, 2.38 [13:09:24] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,2] - Reallocated_Sector_Ct is non-zero (3) --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean --- [cciss,6] - Device is clean| [13:09:29] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK [13:10:00] PROBLEM - cloud11 conntrack_table_size on cloud11 is UNKNOWN: NRPE: Unable to read output [13:10:07] PROBLEM - cloud11 Puppet on cloud11 is UNKNOWN: NRPE: Unable to read output [13:10:08] RECOVERY - cloud11 PowerDNS Recursor on cloud11 is OK: DNS OK: 0.042 seconds response time. 
miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [13:11:13] PROBLEM - cloud11 ferm_active on cloud11 is UNKNOWN: NRPE: Unable to read output [13:11:25] RECOVERY - cloud11 Disk Space on cloud11 is OK: DISK OK - free space: / 24911 MB (87% inode=95%); [13:11:31] RECOVERY - cloud11 Current Load on cloud11 is OK: OK - load average: 0.27, 0.49, 0.32 [13:11:41] RECOVERY - cloud11 NTP time on cloud11 is OK: NTP OK: Offset 0.0003867447376 secs [13:13:48] RECOVERY - cloud11 conntrack_table_size on cloud11 is OK: OK: nf_conntrack is 0 % full [13:14:07] RECOVERY - cloud11 Puppet on cloud11 is OK: OK: Puppet is currently enabled, last run 34 seconds ago with 0 failures [13:15:03] RECOVERY - cloud11 ferm_active on cloud11 is OK: OK ferm input default policy is set [13:15:25] RECOVERY - cloud11 APT on cloud11 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [13:17:34] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused [13:20:50] PROBLEM - cloud11 SSH on cloud11 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [13:21:11] PROBLEM - ping6 on cloud11 is CRITICAL: PING CRITICAL - Packet loss = 100% [13:21:35] PROBLEM - cloud11 conntrack_table_size on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[13:22:14] PROBLEM - cloud11 NTP time on cloud11 is WARNING: NTP WARNING: Offset -0.2245141268 secs [13:22:33] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,2] - Reallocated_Sector_Ct is non-zero (3) --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean --- [cciss,6] - Device is clean| [13:22:49] RECOVERY - cloud11 SSH on cloud11 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [13:23:13] RECOVERY - ping6 on cloud11 is OK: PING OK - Packet loss = 0%, RTA = 0.58 ms [13:23:30] RECOVERY - cloud11 conntrack_table_size on cloud11 is OK: OK: nf_conntrack is 0 % full [13:24:19] RECOVERY - cloud11 NTP time on cloud11 is OK: NTP OK: Offset 0.0003160834312 secs [13:36:20] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.70, 3.72, 2.76 [13:38:18] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.70, 3.25, 2.70 [13:45:21] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.32, 4.00, 2.64 [13:47:21] RECOVERY - Host swiftobject111 is UP: PING OK - Packet loss = 0%, RTA = 2.28 ms [13:47:39] RECOVERY - ping6 on swiftobject111 is OK: PING OK - Packet loss = 0%, RTA = 0.95 ms [13:48:39] RECOVERY - swiftobject111 Swift Object Service on swiftobject111 is OK: TCP OK - 0.002 second response time on 2a10:6740::6:203 port 6000 [13:49:09] RECOVERY - swiftobject111 SSH on swiftobject111 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [13:49:21] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.69, 3.30, 2.73 [13:59:26] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.41, 4.45, 3.18 [14:04:10] RECOVERY - Host swiftac111 is UP: PING OK - Packet loss = 0%, RTA = 0.72 ms [14:04:29] PROBLEM - swiftac111 ferm_active on swiftac111 is CRITICAL: connect to address 2a10:6740::6:202 port 5666: Connection refusedconnect to host 
2a10:6740::6:202 port 5666: Connection refused [14:04:39] PROBLEM - swiftac111 Disk Space on swiftac111 is CRITICAL: connect to address 2a10:6740::6:202 port 5666: Connection refusedconnect to host 2a10:6740::6:202 port 5666: Connection refused [14:05:24] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.47, 3.17, 3.08 [14:05:27] RECOVERY - ping6 on swiftac111 is OK: PING OK - Packet loss = 0%, RTA = 0.71 ms [14:05:29] PROBLEM - swiftac111 Puppet on swiftac111 is CRITICAL: connect to address 2a10:6740::6:202 port 5666: Connection refusedconnect to host 2a10:6740::6:202 port 5666: Connection refused [14:05:54] PROBLEM - swiftac111 PowerDNS Recursor on swiftac111 is CRITICAL: connect to address 2a10:6740::6:202 port 5666: Connection refusedconnect to host 2a10:6740::6:202 port 5666: Connection refused [14:05:57] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.42, 3.51, 2.58 [14:05:59] PROBLEM - swiftac111 NTP time on swiftac111 is CRITICAL: connect to address 2a10:6740::6:202 port 5666: Connection refusedconnect to host 2a10:6740::6:202 port 5666: Connection refused [14:06:04] PROBLEM - swiftac111 Current Load on swiftac111 is CRITICAL: connect to address 2a10:6740::6:202 port 5666: Connection refusedconnect to host 2a10:6740::6:202 port 5666: Connection refused [14:06:35] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.43, 3.97, 2.85 [14:07:56] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.39, 2.68, 2.39 [14:08:33] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.99, 3.28, 2.74 [14:09:15] RECOVERY - Host swiftproxy111 is UP: PING OK - Packet loss = 0%, RTA = 1.10 ms [14:09:44] PROBLEM - swiftproxy111 ferm_active on swiftproxy111 is CRITICAL: connect to address 2a10:6740::6:201 port 5666: Connection refusedconnect to host 2a10:6740::6:201 port 5666: Connection refused [14:10:04] PROBLEM - swiftproxy111 conntrack_table_size on swiftproxy111 is 
CRITICAL: connect to address 2a10:6740::6:201 port 5666: Connection refusedconnect to host 2a10:6740::6:201 port 5666: Connection refused [14:10:04] PROBLEM - swiftproxy111 Disk Space on swiftproxy111 is CRITICAL: connect to address 2a10:6740::6:201 port 5666: Connection refusedconnect to host 2a10:6740::6:201 port 5666: Connection refused [14:10:04] RECOVERY - ping6 on swiftproxy111 is OK: PING OK - Packet loss = 0%, RTA = 0.76 ms [14:11:04] PROBLEM - swiftproxy111 PowerDNS Recursor on swiftproxy111 is CRITICAL: connect to address 2a10:6740::6:201 port 5666: Connection refusedconnect to host 2a10:6740::6:201 port 5666: Connection refused [14:11:14] PROBLEM - swiftproxy111 Puppet on swiftproxy111 is CRITICAL: connect to address 2a10:6740::6:201 port 5666: Connection refusedconnect to host 2a10:6740::6:201 port 5666: Connection refused [14:11:20] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.83, 3.55, 3.23 [14:13:20] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.96, 2.99, 3.06 [14:17:32] RECOVERY - Host swiftobject112 is UP: PING OK - Packet loss = 0%, RTA = 1.39 ms [14:18:17] RECOVERY - Host swiftobject114 is UP: PING OK - Packet loss = 0%, RTA = 0.88 ms [14:18:29] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 6.01, 1.52, 0.51 [14:18:31] RECOVERY - swiftobject114 SSH on swiftobject114 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [14:18:49] RECOVERY - swiftobject112 Swift Object Service on swiftobject112 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:204 port 6000 [14:18:54] PROBLEM - swiftobject114 Puppet on swiftobject114 is CRITICAL: CRITICAL: Puppet last ran 1 day ago [14:19:25] RECOVERY - Host swiftobject113 is UP: PING OK - Packet loss = 0%, RTA = 0.77 ms [14:19:29] RECOVERY - swiftobject112 SSH on swiftobject112 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [14:19:30] RECOVERY - Host swiftobject115 
is UP: PING OK - Packet loss = 0%, RTA = 0.77 ms [14:19:34] RECOVERY - ping6 on swiftobject112 is OK: PING OK - Packet loss = 0%, RTA = 0.77 ms [14:19:54] RECOVERY - ping6 on swiftobject113 is OK: PING OK - Packet loss = 0%, RTA = 0.80 ms [14:20:04] RECOVERY - swiftobject113 Swift Object Service on swiftobject113 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:205 port 6000 [14:20:21] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 1.81, 1.62, 0.68 [14:20:38] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 2.92, 0.65, 0.21 [14:20:39] PROBLEM - swiftobject115 Puppet on swiftobject115 is CRITICAL: connect to address 2a10:6740::6:207 port 5666: Connection refusedconnect to host 2a10:6740::6:207 port 5666: Connection refused [14:20:47] RECOVERY - swiftobject114 Puppet on swiftobject114 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [14:21:09] RECOVERY - swiftobject115 Swift Object Service on swiftobject115 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:207 port 6000 [14:21:19] RECOVERY - swiftobject113 SSH on swiftobject113 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [14:21:19] RECOVERY - ping6 on swiftobject115 is OK: PING OK - Packet loss = 0%, RTA = 0.74 ms [14:22:35] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [14:29:51] PROBLEM - Host swiftac111 is DOWN: PING CRITICAL - Packet loss = 100% [14:30:32] PROBLEM - ping6 on swiftproxy111 is CRITICAL: PING CRITICAL - Packet loss = 100% [14:31:00] PROBLEM - Host swiftproxy111 is DOWN: PING CRITICAL - Packet loss = 100% [14:34:20] RECOVERY - Host swiftac111 is UP: PING OK - Packet loss = 0%, RTA = 0.71 ms [14:35:30] RECOVERY - Host swiftproxy111 is UP: PING OK - Packet loss = 0%, RTA = 0.84 ms [14:36:44] RECOVERY - ping6 on swiftproxy111 is OK: PING OK - Packet loss = 0%, RTA = 0.78 ms [14:38:31] 
PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.79, 4.16, 3.01 [14:39:07] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.86, 2.86, 2.29 [14:39:11] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.40, 3.10, 2.79 [14:40:29] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.38, 3.80, 3.02 [14:41:06] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.70, 2.59, 2.27 [14:41:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.72, 2.99, 2.80 [14:42:28] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.03, 3.31, 2.94 [14:44:00] RECOVERY - swiftac111 SSH on swiftac111 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [14:45:32] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.17, 3.81, 2.59 [14:45:42] RECOVERY - swiftproxy111 SSH on swiftproxy111 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [14:47:29] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.93, 3.23, 2.54 [14:57:19] RECOVERY - swiftac111 Swift Account Service on swiftac111 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:202 port 6002 [14:57:19] RECOVERY - swiftac111 Swift Container Service on swiftac111 is OK: TCP OK - 7.158 second response time on 2a10:6740::6:202 port 6001 [14:57:44] PROBLEM - swiftac111 Puppet on swiftac111 is UNKNOWN: NRPE: Unable to read output [14:57:47] RECOVERY - swiftproxy111 PowerDNS Recursor on swiftproxy111 is OK: DNS OK: 0.076 seconds response time. 
miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[14:57:49] RECOVERY - swiftproxy111 Current Load on swiftproxy111 is OK: OK - load average: 0.52, 0.76, 0.40
[14:57:53] PROBLEM - swiftac111 ferm_active on swiftac111 is UNKNOWN: NRPE: Unable to read output
[14:58:09] RECOVERY - swiftac111 Disk Space on swiftac111 is OK: DISK OK - free space: / 134403 MB (98% inode=99%);
[14:58:15] RECOVERY - swiftac111 APT on swiftac111 is OK: APT OK: 0 packages available for upgrade (0 critical updates).
[14:58:26] RECOVERY - swiftproxy111 APT on swiftproxy111 is OK: APT OK: 0 packages available for upgrade (0 critical updates).
[14:58:34] PROBLEM - swiftproxy111 Puppet on swiftproxy111 is UNKNOWN: NRPE: Unable to read output
[14:58:36] RECOVERY - swiftproxy111 NTP time on swiftproxy111 is OK: NTP OK: Offset 0.003419607878 secs
[14:58:41] RECOVERY - swiftac111 Current Load on swiftac111 is OK: OK - load average: 0.78, 0.89, 0.45
[14:58:42] PROBLEM - swiftproxy111 conntrack_table_size on swiftproxy111 is UNKNOWN: NRPE: Unable to read output
[14:59:11] RECOVERY - swiftac111 PowerDNS Recursor on swiftac111 is OK: DNS OK: 0.082 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[14:59:13] PROBLEM - swiftac111 conntrack_table_size on swiftac111 is UNKNOWN: NRPE: Unable to read output
[14:59:14] RECOVERY - swiftac111 NTP time on swiftac111 is OK: NTP OK: Offset 0.004531145096 secs
[14:59:16] RECOVERY - swiftproxy111 Disk Space on swiftproxy111 is OK: DISK OK - free space: / 26196 MB (91% inode=95%);
[14:59:24] RECOVERY - swiftproxy111 ferm_active on swiftproxy111 is OK: OK ferm input default policy is set
[14:59:39] RECOVERY - swiftac111 Puppet on swiftac111 is OK: OK: Puppet is currently enabled, last run 20 seconds ago with 0 failures
[14:59:49] RECOVERY - swiftac111 ferm_active on swiftac111 is OK: OK ferm input default policy is set
[15:00:33] RECOVERY - swiftproxy111 Puppet on swiftproxy111 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[15:00:36] RECOVERY - swiftproxy111 conntrack_table_size on swiftproxy111 is OK: OK: nf_conntrack is 0 % full
[15:01:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.83, 3.58, 3.13
[15:01:16] RECOVERY - swiftac111 conntrack_table_size on swiftac111 is OK: OK: nf_conntrack is 0 % full
[15:01:41] RECOVERY - swiftproxy111 memcached on swiftproxy111 is OK: TCP OK - 0.006 second response time on 2a10:6740::6:201 port 11211
[15:03:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.52, 3.23, 3.06
[15:03:28] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is WARNING: HTTP WARNING: HTTP/1.1 401 Unauthorized - 473 bytes in 0.035 second response time
[15:07:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.06, 3.46, 2.99
[15:07:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.81, 3.63, 3.31
[15:09:06] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.53, 3.30, 2.99
[15:11:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.86, 3.00, 3.17
[15:32:05] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.70, 3.17, 2.94
[15:34:04] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 3.21, 3.14, 2.95
[15:34:35] PROBLEM - cp22 NTP time on cp22 is WARNING: NTP WARNING: Offset 0.1189526618 secs
[15:36:18] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-4 [+0/-0/±1] https://github.com/miraheze/puppet/commit/3bd284e975d1
[15:36:20] [miraheze/puppet] paladox 3bd284e - swift: stop using swift-drive-audit for now
[15:36:23] [puppet] paladox created branch paladox-patch-4 - https://github.com/miraheze/puppet
[15:36:24] [url] Page not found · GitHub · GitHub | github.com
[15:36:26] [puppet] paladox opened pull request #2938: swift: stop using swift-drive-audit for now - https://github.com/miraheze/puppet/pull/2938
[15:36:26] [url] Page not found · GitHub · GitHub | github.com
[15:36:31] [puppet] paladox closed pull request #2938: swift: stop using swift-drive-audit for now - https://github.com/miraheze/puppet/pull/2938
[15:36:31] [url] Page not found · GitHub · GitHub | github.com
[15:36:33] [miraheze/puppet] paladox deleted branch paladox-patch-4
[15:36:36] [puppet] paladox deleted branch paladox-patch-4 - https://github.com/miraheze/puppet
[15:36:36] [url] Page not found · GitHub · GitHub | github.com
[15:36:38] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/8c10b0f97152...d77848f4b7ca
[15:36:38] [url] Comparing 8c10b0f97152...d77848f4b7ca · miraheze/puppet · GitHub | github.com
[15:36:41] [miraheze/puppet] paladox d77848f - swift: stop using swift-drive-audit for now (#2938)
[15:36:42] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.96, 3.85, 3.18
[15:37:00] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/d77848f4b7ca...0643bd3f40fe
[15:37:03] [miraheze/puppet] paladox 0643bd3 - swift: stop using swift-drive-audit
[15:37:05] [url] Comparing d77848f4b7ca...0643bd3f40fe · miraheze/puppet · GitHub | github.com
[15:38:40] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.06, 3.40, 3.11
[15:42:00] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.64, 3.62, 3.25
[15:46:26] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 3.17, 3.43, 2.46
[15:48:23] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 2.09, 2.81, 2.34
[15:51:56] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.46, 4.58, 3.77
[15:54:35] RECOVERY - cp22 NTP time on cp22 is OK: NTP OK: Offset 0.09419709444 secs
[15:55:54] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.38, 3.90, 3.66
[15:59:53] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.35, 3.02, 3.34
[16:06:17] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.65, 3.60, 2.83
[16:10:14] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.50, 3.24, 2.87
[16:25:06] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.09, 3.62, 2.56
[16:27:06] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.34, 3.07, 2.49
[16:28:39] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.04, 3.69, 3.30
[16:30:38] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.20, 4.76, 3.77
[16:34:37] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.63, 3.65, 3.54
[16:36:36] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.91, 2.92, 3.28
[16:37:52] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.98, 4.16, 3.03
[16:41:49] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.80, 3.53, 3.06
[16:43:47] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.19, 2.65, 2.79
[16:44:32] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.01, 4.62, 3.80
[16:48:30] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.20, 3.47, 3.53
[16:52:29] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.70, 3.58, 3.52
[16:55:07] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.13, 4.14, 2.69
[16:56:27] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.06, 3.53, 3.58
[16:58:26] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.15, 2.75, 3.29
[16:59:06] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.63, 3.63, 2.85
[16:59:22] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 7.08, 3.29, 1.89
[17:00:47] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 7.46, 11.12, 8.83
[17:01:06] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.89, 2.69, 2.60
[17:01:22] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 2.01, 2.58, 1.80
[17:02:24] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.33, 4.16, 3.77
[17:02:47] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 4.87, 9.11, 8.38
[17:06:23] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.11, 3.99, 3.84
[17:08:22] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.51, 4.18, 3.91
[17:08:24] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.70, 3.90, 3.17
[17:09:57] PROBLEM - wiki.autocountsoft.com - reverse DNS on sslhost is WARNING: Timeout: The DNS operation timed out after 5.406017065048218 seconds
[17:12:21] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.32, 3.30, 3.11
[17:22:13] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.65, 4.04, 3.30
[17:23:07] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.42, 3.13, 2.52
[17:26:09] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.75, 3.45, 3.29
[17:27:07] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.57, 3.78, 2.92
[17:28:08] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.62, 3.12, 3.18
[17:29:06] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.92, 3.06, 2.77
[17:29:22] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.66, 3.33, 2.74
[17:30:40] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 28.99, 13.57, 6.70
[17:31:22] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.84, 3.37, 2.82
[17:33:22] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.40, 3.86, 3.05
[17:35:22] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.58, 3.01, 2.85
[17:37:59] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.24, 3.67, 3.23
[17:38:39] RECOVERY - wiki.autocountsoft.com - reverse DNS on sslhost is OK: SSL OK - wiki.autocountsoft.com reverse DNS resolves to cp22.miraheze.org - CNAME OK
[17:43:55] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.87, 3.40, 3.34
[17:44:40] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 3.63, 6.85, 6.79
[17:45:45] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.39, 3.24, 2.36
[17:46:40] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 12.02, 7.94, 7.14
[17:46:42] PROBLEM - swiftobject115 Current Load on swiftobject115 is WARNING: WARNING - load average: 3.59, 3.13, 2.50
[17:47:42] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 2.13, 2.85, 2.33
[17:48:40] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 3.12, 3.03, 2.53
[17:48:55] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 4.19, 3.64, 2.65
[17:50:51] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 3.58, 3.58, 2.74
[17:52:04] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.39, 2.59, 3.71
[17:52:46] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 0.94, 2.63, 2.49
[17:52:46] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 3.76, 4.38, 3.71
[17:54:40] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 4.54, 6.95, 7.25
[17:55:52] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:00] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:56:15] PROBLEM - swiftproxy111 SSH on swiftproxy111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:24] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.07, 2.29, 3.29
[17:56:28] PROBLEM - swiftobject111 SSH on swiftobject111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:29] PROBLEM - swiftobject114 Swift Object Service on swiftobject114 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:37] PROBLEM - swiftac111 SSH on swiftac111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:46] PROBLEM - swiftobject114 SSH on swiftobject114 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:57] PROBLEM - cloud11 SSH on cloud11 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:56:59] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 20.47, 9.89, 8.14
[17:57:00] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.01, 3.30, 3.46
[17:57:09] PROBLEM - ping6 on swiftobject112 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:57:11] PROBLEM - swiftobject115 SSH on swiftobject115 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:11] PROBLEM - swiftproxy111 memcached on swiftproxy111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:16] PROBLEM - swiftac111 Swift Container Service on swiftac111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:19] PROBLEM - swiftobject111 Swift Object Service on swiftobject111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:21] PROBLEM - swiftobject113 Swift Object Service on swiftobject113 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:24] PROBLEM - swiftobject112 Swift Object Service on swiftobject112 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:34] PROBLEM - ping6 on cloud11 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:57:42] PROBLEM - ping6 on swiftproxy111 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:57:51] PROBLEM - swiftobject112 SSH on swiftobject112 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:51] PROBLEM - swiftobject115 Swift Object Service on swiftobject115 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:57:59] PROBLEM - ping6 on swiftobject114 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:58:00] PROBLEM - Host swiftobject114 is DOWN: PING CRITICAL - Packet loss = 100%
[17:58:03] PROBLEM - swiftac111 Swift Account Service on swiftac111 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:58:12] PROBLEM - ping6 on swiftobject113 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:58:14] PROBLEM - swiftobject113 SSH on swiftobject113 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[17:58:17] PROBLEM - Host cloud11 is DOWN: PING CRITICAL - Packet loss = 100%
[17:58:32] PROBLEM - ping6 on swiftobject111 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:58:33] PROBLEM - ping6 on swiftobject115 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:58:42] PROBLEM - ping6 on swiftac111 is CRITICAL: PING CRITICAL - Packet loss = 100%
[17:59:10] PROBLEM - Host swiftobject111 is DOWN: PING CRITICAL - Packet loss = 100%
[17:59:13] PROBLEM - swiftac111 PowerDNS Recursor on swiftac111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:14] PROBLEM - swiftproxy111 PowerDNS Recursor on swiftproxy111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:17] PROBLEM - swiftobject115 NTP time on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:17] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.46, 2.91, 3.28
[17:59:17] PROBLEM - swiftproxy111 conntrack_table_size on swiftproxy111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:24] PROBLEM - swiftac111 APT on swiftac111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:25] PROBLEM - swiftproxy111 Disk Space on swiftproxy111 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:31] PROBLEM - Host swiftac111 is DOWN: PING CRITICAL - Packet loss = 100%
[17:59:34] PROBLEM - swiftobject113 ferm_active on swiftobject113 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:35] PROBLEM - Host swiftobject112 is DOWN: PING CRITICAL - Packet loss = 100%
[17:59:38] PROBLEM - swiftobject115 Disk Space on swiftobject115 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[17:59:44] PROBLEM - Host swiftobject115 is DOWN: PING CRITICAL - Packet loss = 100%
[17:59:54] PROBLEM - Host swiftproxy111 is DOWN: PING CRITICAL - Packet loss = 100%
[17:59:59] PROBLEM - swiftobject113 PowerDNS Recursor on swiftobject113 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:00:00] PROBLEM - swiftobject113 NTP time on swiftobject113 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:00:06] PROBLEM - Host swiftobject113 is DOWN: PING CRITICAL - Packet loss = 100%
[18:02:01] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.48, 3.46, 2.55
[18:03:59] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.85, 3.89, 2.84
[18:04:34] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.50, 3.87, 3.57
[18:05:13] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 4.16, 6.81, 7.83
[18:05:58] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.08, 2.84, 2.58
[18:06:34] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.84, 3.28, 3.37
[18:07:11] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 18.45, 8.86, 8.33
[18:09:14] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.85, 4.40, 3.66
[18:12:30] RECOVERY - Host cloud11 is UP: PING OK - Packet loss = 0%, RTA = 1.16 ms
[18:12:42] RECOVERY - ping6 on cloud11 is OK: PING OK - Packet loss = 0%, RTA = 0.62 ms
[18:12:59] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,2] - Reallocated_Sector_Ct is non-zero (3) --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean --- [cciss,6] - Device is clean|
[18:13:12] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.42, 3.26, 3.41
[18:14:11] RECOVERY - cloud11 SSH on cloud11 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[18:14:19] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-4 [+0/-0/±1] https://github.com/miraheze/puppet/commit/1440e6a8ff08
[18:14:20] [miraheze/puppet] paladox 1440e6a - Phabricator: change to static php
[18:14:23] [puppet] paladox created branch paladox-patch-4 - https://github.com/miraheze/puppet
[18:14:24] [url] Page not found · GitHub · GitHub | github.com
[18:14:25] [puppet] paladox opened pull request #2939: Phabricator: change to static php - https://github.com/miraheze/puppet/pull/2939
[18:14:26] ...
[18:14:54] [puppet] paladox edited pull request #2939: Phabricator: change to static php - https://github.com/miraheze/puppet/pull/2939
[18:14:55] ...
[18:15:07] [puppet] paladox closed pull request #2939: Phabricator: change to static php - https://github.com/miraheze/puppet/pull/2939
[18:15:07] [url] Page not found · GitHub · GitHub | github.com
[18:15:10] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/0643bd3f40fe...698d4702c9fd
[18:15:11] [url] Comparing 0643bd3f40fe...698d4702c9fd · miraheze/puppet · GitHub | github.com
[18:15:11] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.82, 2.97, 3.29
[18:15:12] [miraheze/puppet] paladox 698d470 - Phabricator: change to static php (#2939)
[18:15:13] [miraheze/puppet] paladox deleted branch paladox-patch-4
[18:15:14] [puppet] paladox deleted branch paladox-patch-4 - https://github.com/miraheze/puppet
[18:15:15] [url] Page not found · GitHub · GitHub | github.com
[18:15:45] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.13, 3.63, 2.58
[18:17:42] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.37, 2.74, 2.37
[18:31:33] PROBLEM - db101 Current Load on db101 is WARNING: WARNING - load average: 7.21, 6.97, 5.70
[18:33:33] RECOVERY - db101 Current Load on db101 is OK: OK - load average: 4.83, 6.05, 5.51
[18:35:54] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 11.77, 6.33, 3.90
[18:38:48] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 1.20, 5.30, 7.60
[18:42:44] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 0.75, 2.83, 6.06
[18:43:48] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.19, 3.60, 3.74
[18:46:08] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 4.37, 3.81, 2.68
[18:48:04] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 2.61, 3.47, 2.70
[18:49:44] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.02, 3.35, 3.49
[18:50:01] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.39, 2.63, 2.47
[18:50:19] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.52, 3.39, 2.89
[18:51:42] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.07, 2.86, 3.30
[18:52:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.96, 2.86, 2.76
[18:55:22] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.55, 3.00, 2.56
[18:57:16] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.28, 2.86, 2.57
[19:05:12] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.70, 4.17, 3.31
[19:05:31] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.54, 3.71, 3.17
[19:07:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.37, 3.64, 3.20
[19:11:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.92, 3.34, 3.19
[19:11:29] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.20, 3.41, 3.25
[19:13:29] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.81, 3.34, 3.25
[19:17:11] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 3.17, 4.92, 4.08
[19:21:13] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.46, 3.81, 3.83
[19:27:09] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:27:14] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.51, 2.40, 3.19
[19:28:28] PROBLEM - cloud11 SSH on cloud11 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:29:30] PROBLEM - cloud11 conntrack_table_size on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:29:37] PROBLEM - Host cloud11 is DOWN: PING CRITICAL - Packet loss = 100%
[19:31:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.48, 3.25, 3.36
[19:31:36] RECOVERY - Host cloud11 is UP: PING OK - Packet loss = 0%, RTA = 1.09 ms
[19:32:08] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,2] - Reallocated_Sector_Ct is non-zero (3) --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean --- [cciss,6] - Device is clean|
[19:32:35] RECOVERY - cloud11 SSH on cloud11 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:33:17] PROBLEM - cloud11 ferm_active on cloud11 is CRITICAL: connect to address 2a10:6740::6:200 port 5666: Connection refusedconnect to host 2a10:6740::6:200 port 5666: Connection refused
[19:34:02] RECOVERY - cloud11 conntrack_table_size on cloud11 is OK: OK: nf_conntrack is 0 % full
[19:35:05] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:35:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.62, 3.13, 3.29
[19:36:11] PROBLEM - ping6 on cloud11 is CRITICAL: PING CRITICAL - Packet loss = 100%
[19:36:15] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 12.86, 5.89, 3.62
[19:36:55] PROBLEM - Host cloud11 is DOWN: PING CRITICAL - Packet loss = 100%
[19:39:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.69, 3.67, 3.47
[19:41:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 3.02, 3.38, 3.39
[19:42:47] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.61, 9.95, 8.60
[19:44:10] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 0.91, 3.33, 3.43
[19:46:10] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.60, 2.76, 3.21
[19:46:47] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.61, 9.35, 8.72
[19:47:11] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.70, 5.04, 4.12
[19:50:56] RECOVERY - Host cloud11 is UP: PING OK - Packet loss = 0%, RTA = 0.51 ms
[19:51:07] RECOVERY - ping6 on cloud11 is OK: PING OK - Packet loss = 0%, RTA = 0.63 ms
[19:51:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.76, 3.37, 3.65
[19:51:35] RECOVERY - cloud11 ferm_active on cloud11 is OK: OK ferm input default policy is set
[19:51:50] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,2] - Reallocated_Sector_Ct is non-zero (3) --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean --- [cciss,6] - Device is clean|
[19:54:47] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:55:32] PROBLEM - ping6 on cloud11 is CRITICAL: PING CRITICAL - Packet loss = 100%
[19:56:15] PROBLEM - Host cloud11 is DOWN: PING CRITICAL - Packet loss = 100%
[19:57:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.47, 2.65, 3.29
[19:58:49] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.58, 2.95, 2.20
[20:00:47] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.54, 3.54, 2.51
[20:02:45] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.47, 3.53, 2.63
[20:06:41] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.27, 2.59, 2.49
[20:10:47] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 15.51, 11.86, 10.05
[20:12:47] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.50, 11.71, 10.23
[20:14:09] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 8.18, 5.04, 3.90
[20:16:47] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.45, 11.23, 9.52
[20:17:31] RECOVERY - Host cloud11 is UP: PING OK - Packet loss = 0%, RTA = 0.49 ms
[20:18:47] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 9.15, 10.16, 10.00
[20:18:47] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.49, 10.67, 9.54
[20:18:57] RECOVERY - ping6 on cloud11 is OK: PING OK - Packet loss = 0%, RTA = 1.15 ms
[20:19:07] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,2] - Reallocated_Sector_Ct is non-zero (3) --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean --- [cciss,6] - Device is clean|
[20:22:05] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.10, 3.34, 3.65
[20:22:23] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.68, 3.67, 2.77
[20:22:47] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.62, 9.60, 9.40
[20:24:21] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.16, 2.70, 2.52
[20:24:48] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.02, 3.63, 2.60
[20:26:04] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.76, 3.60, 3.66
[20:26:43] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.57, 3.56, 2.70
[20:28:37] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.55, 3.20, 2.67
[20:32:30] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.67, 4.39, 3.32
[20:34:00] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.37, 3.51, 3.70
[20:36:00] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.33, 2.76, 3.40
[20:44:21] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.74, 3.29, 3.67
[20:44:55] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.05, 3.48, 3.43
[20:45:21] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 6.24, 4.17, 2.76
[20:46:20] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.12, 3.31, 3.61
[20:46:54] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.61, 3.10, 3.29
[20:52:16] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.59, 3.90, 3.85
[20:52:29] PROBLEM - cloud11 APT on cloud11 is CRITICAL: APT CRITICAL: 1 packages available for upgrade (1 critical updates).
[20:55:21] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 0.84, 3.45, 3.46
[20:56:13] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.70, 2.14, 3.13
[20:57:21] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.55, 2.46, 3.09
[21:04:06] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.69, 3.06, 3.16
[21:04:42] PROBLEM - cloud11 SMART on cloud11 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:05:51] PROBLEM - cloud11 SSH on cloud11 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:06:05] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.80, 3.70, 3.39
[21:06:06] PROBLEM - ping6 on cloud11 is CRITICAL: PING CRITICAL - Packet loss = 100%
[21:06:16] PROBLEM - Host cloud11 is DOWN: PING CRITICAL - Packet loss = 100%
[21:10:02] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.59, 3.31, 3.38
[21:10:41] RECOVERY - Host cloud11 is UP: PING OK - Packet loss = 0%, RTA = 0.71 ms
[21:11:38] RECOVERY - Host swiftobject113 is UP: PING OK - Packet loss = 16%, RTA = 0.75 ms
[21:11:45] RECOVERY - swiftobject113 Swift Object Service on swiftobject113 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:205 port 6000
[21:11:46] RECOVERY - Host swiftobject114 is UP: PING OK - Packet loss = 79%, RTA = 20.06 ms
[21:11:46] RECOVERY - Host swiftobject115 is UP: PING WARNING - Packet loss = 82%, RTA = 1.63 ms
[21:11:53] PROBLEM - cloud11 SMART on cloud11 is WARNING: WARNING: [cciss,2] - Reallocated_Sector_Ct is non-zero (3) --- [cciss,0] - Device is clean --- [cciss,1] - Device is clean --- [cciss,3] - Device is clean --- [cciss,4] - Device is clean --- [cciss,5] - Device is clean --- [cciss,6] - Device is clean|
[21:12:03] RECOVERY - cloud11 SSH on cloud11 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:12:13] RECOVERY - swiftobject113 PowerDNS Recursor on swiftobject113 is OK: DNS OK: 0.527 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[21:12:17] RECOVERY - Host swiftobject111 is UP: PING OK - Packet loss = 0%, RTA = 0.74 ms
[21:12:18] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 3.58, 1.02, 0.35
[21:12:23] RECOVERY - ping6 on cloud11 is OK: PING OK - Packet loss = 0%, RTA = 2.44 ms
[21:12:28] RECOVERY - ping6 on swiftobject113 is OK: PING OK - Packet loss = 0%, RTA = 1.30 ms
[21:12:28] RECOVERY - Host swiftproxy111 is UP: PING OK - Packet loss = 0%, RTA = 2.15 ms
[21:12:33] PROBLEM - swiftobject113 Puppet on swiftobject113 is WARNING: WARNING: Puppet last ran 3 hours ago
[21:12:33] RECOVERY - swiftobject113 NTP time on swiftobject113 is OK: NTP OK: Offset 0.02069523931 secs
[21:12:35] RECOVERY - swiftproxy111 PowerDNS Recursor on swiftproxy111 is OK: DNS OK: 0.070 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[21:12:43] PROBLEM - swiftobject115 Puppet on swiftobject115 is WARNING: WARNING: Puppet last ran 3 hours ago
[21:12:45] RECOVERY - swiftobject113 ferm_active on swiftobject113 is OK: OK ferm input default policy is set
[21:12:53] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is WARNING: HTTP WARNING: HTTP/1.1 401 Unauthorized - 473 bytes in 0.021 second response time
[21:12:53] PROBLEM - swiftproxy111 APT on swiftproxy111 is CRITICAL: APT CRITICAL: 1 packages available for upgrade (1 critical updates).
[21:13:01] RECOVERY - swiftproxy111 conntrack_table_size on swiftproxy111 is OK: OK: nf_conntrack is 0 % full
[21:13:06] RECOVERY - ping6 on swiftobject111 is OK: PING OK - Packet loss = 0%, RTA = 0.76 ms
[21:13:09] RECOVERY - swiftobject113 SSH on swiftobject113 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:13:13] RECOVERY - swiftobject115 NTP time on swiftobject115 is OK: NTP OK: Offset 0.0005616545677 secs
[21:13:13] RECOVERY - swiftobject114 SSH on swiftobject114 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:13:15] RECOVERY - swiftobject111 SSH on swiftobject111 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:13:15] RECOVERY - Host swiftobject112 is UP: PING OK - Packet loss = 0%, RTA = 0.77 ms
[21:13:20] RECOVERY - ping6 on swiftobject114 is OK: PING OK - Packet loss = 0%, RTA = 0.85 ms
[21:13:24] RECOVERY - swiftobject115 SSH on swiftobject115 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:13:24] RECOVERY - swiftproxy111 Disk Space on swiftproxy111 is OK: DISK OK - free space: / 26173 MB (91% inode=95%);
[21:13:25] RECOVERY - swiftobject114 Swift Object Service on swiftobject114 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:206 port 6000
[21:13:33] RECOVERY - swiftobject112 SSH on swiftobject112 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:13:33] RECOVERY - swiftobject115 Disk Space on swiftobject115 is OK: DISK OK - free space: / 885879 MB (99% inode=99%);
[21:13:39] RECOVERY - ping6 on swiftobject115 is OK: PING OK - Packet loss = 0%, RTA = 1.34 ms
[21:13:39] RECOVERY - Host swiftac111 is UP: PING OK - Packet loss = 0%, RTA = 1.51 ms
[21:13:43] RECOVERY - swiftobject111 Swift Object Service on swiftobject111 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:203 port 6000
[21:13:48] RECOVERY - swiftobject115 Swift Object Service on swiftobject115 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:207 port 6000
[21:13:53] RECOVERY - swiftobject112 Swift Object Service on swiftobject112 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:204 port 6000
[21:14:03] RECOVERY - swiftac111 SSH on swiftac111 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:14:08] RECOVERY - ping6 on swiftobject112 is OK: PING OK - Packet loss = 0%, RTA = 0.67 ms
[21:14:11] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 1.80, 1.29, 0.54
[21:14:13] RECOVERY - ping6 on swiftproxy111 is OK: PING OK - Packet loss = 0%, RTA = 1.38 ms
[21:14:13] RECOVERY - swiftproxy111 memcached on swiftproxy111 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:201 port 11211
[21:14:13] RECOVERY - swiftproxy111 SSH on swiftproxy111 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:14:25] RECOVERY - swiftobject113 Puppet on swiftobject113 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:14:33] RECOVERY - swiftac111 PowerDNS Recursor on swiftac111 is OK: DNS OK: 0.030 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[21:14:37] RECOVERY - swiftobject115 Puppet on swiftobject115 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[21:15:23] RECOVERY - swiftac111 Swift Account Service on swiftac111 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:202 port 6002
[21:15:23] RECOVERY - swiftac111 Swift Container Service on swiftac111 is OK: TCP OK - 0.001 second response time on 2a10:6740::6:202 port 6001
[21:15:23] RECOVERY - ping6 on swiftac111 is OK: PING OK - Packet loss = 0%, RTA = 0.76 ms
[21:15:56] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.13, 3.59, 3.48
[21:17:41] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.96, 3.67, 3.02
[21:17:55] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.60, 2.78, 3.19
[21:19:40] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.57, 3.27, 2.95
[21:30:34] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.31, 3.73, 3.24
[21:32:33] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.79, 3.45, 3.18
[21:34:39] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.18, 3.80, 3.30
[21:38:30] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.43, 3.10, 3.16
[21:40:35] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.06, 3.29, 3.43
[21:42:28] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.96, 4.23, 3.59
[21:44:32] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.72, 2.94, 3.26
[21:55:23] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.94, 3.37, 2.49
[21:57:18] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 3.08, 3.13, 2.49
[22:06:15] PROBLEM - cp32 Current Load on cp32 is
CRITICAL: CRITICAL - load average: 5.97, 4.23, 3.29 [22:10:13] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.29, 3.72, 3.35 [22:10:17] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.23, 3.64, 3.94 [22:11:46] PROBLEM - cp23 NTP time on cp23 is WARNING: NTP WARNING: Offset 0.1199686527 secs [22:12:11] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.87, 2.86, 3.08 [22:16:15] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.38, 4.44, 4.15 [22:22:02] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.98, 3.52, 3.21 [22:24:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.35, 3.52, 3.90 [22:25:59] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.05, 3.75, 3.37 [22:27:07] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.33, 4.40, 3.00 [22:27:58] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.17, 2.79, 3.07 [22:29:06] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.90, 3.87, 2.98 [22:30:09] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.33, 3.47, 3.72 [22:31:06] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.43, 4.27, 3.25 [22:32:08] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.97, 3.54, 3.74 [22:33:07] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.03, 3.08, 2.94 [22:34:07] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.45, 3.90, 3.85 [22:36:07] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.02, 3.72, 3.80 [22:37:49] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.50, 4.28, 3.51 [22:39:46] RECOVERY - cp23 NTP time on cp23 is OK: NTP OK: Offset 0.08961382508 
secs [22:42:04] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.62, 3.66, 3.72 [22:44:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.81, 3.27, 3.56 [22:46:03] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.23, 2.89, 3.38 [22:47:44] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 0.62, 3.18, 3.87 [22:50:00] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.19, 3.32, 3.49 [22:51:59] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.22, 4.08, 3.78 [22:53:39] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.38, 2.40, 3.34 [22:53:59] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.77, 3.88, 3.74 [22:56:17] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 3.52, 2.67, 1.94 [22:58:12] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 2.72, 2.69, 2.03 [22:59:45] PROBLEM - swiftobject115 Current Load on swiftobject115 is CRITICAL: CRITICAL - load average: 4.03, 3.32, 2.28 [22:59:56] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.09, 5.33, 4.30 [23:01:45] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 2.82, 3.15, 2.34 [23:02:09] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 3.46, 3.27, 2.43 [23:03:55] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.07, 3.99, 3.97 [23:04:07] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 4.65, 3.80, 2.73 [23:05:54] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 3.86, 4.12, 4.03 [23:06:30] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.41, 
4.17, 3.52 [23:07:53] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.79, 3.25, 3.72 [23:08:07] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 3.88, 3.77, 2.96 [23:09:32] PROBLEM - swiftobject115 Current Load on swiftobject115 is WARNING: WARNING - load average: 3.91, 3.79, 3.01 [23:10:07] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 4.02, 3.94, 3.12 [23:10:27] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.38, 3.48, 3.44 [23:11:52] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.91, 3.69, 3.68 [23:12:26] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.11, 3.16, 3.33 [23:14:07] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 3.87, 3.84, 3.26 [23:15:25] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 2.63, 3.30, 3.05 [23:15:50] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.68, 3.43, 3.59 [23:16:07] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 4.21, 3.89, 3.34 [23:19:49] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.93, 2.75, 3.28 [23:23:14] PROBLEM - swiftobject115 Current Load on swiftobject115 is CRITICAL: CRITICAL - load average: 5.57, 3.88, 3.32 [23:24:08] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 2.88, 3.96, 3.65 [23:25:07] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 7.37, 4.37, 3.03 [23:25:12] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 2.79, 3.35, 3.19 [23:27:16] PROBLEM - wiki.recaptime.tk - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'wiki.recaptime.tk' expires in 7 day(s) (Wed 26 Oct 2022 23:10:27 GMT +0000). 
[23:29:44] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.86, 3.51, 3.31 [23:31:07] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.37, 3.43, 3.10 [23:32:08] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 2.36, 3.11, 3.40 [23:33:06] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.48, 2.81, 2.91 [23:33:43] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.76, 3.85, 3.49 [23:37:07] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.99, 4.16, 3.06 [23:37:41] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.75, 3.82, 3.61 [23:43:04] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.18, 3.90, 3.45 [23:43:39] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.43, 4.32, 3.83 [23:47:01] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.08, 3.29, 3.31 [23:51:35] PROBLEM - wiki.rtapp.tk - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'wiki.rtapp.tk' expires in 7 day(s) (Wed 26 Oct 2022 23:42:41 GMT +0000). [23:51:36] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.70, 3.84, 3.87 [23:53:54] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.51, 3.74, 3.53 [23:55:34] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.13, 2.76, 3.40 [23:57:51] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.57, 2.84, 3.25