[00:03:36] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.32, 4.46, 3.52
[00:03:43] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:05:35] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.86, 3.52, 3.29
[00:05:58] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.93, 5.99, 4.06
[00:07:36] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.78, 3.11, 3.15
[00:10:06] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:12:01] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:12:38] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:13:59] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.14, 3.52, 3.79
[00:14:15] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:14:50] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.06, 3.63, 2.68
[00:16:00] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.23, 3.60, 3.76
[00:16:05] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:16:13] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:16:50] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.92, 2.59, 2.41
[00:16:54] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 2.94, 3.63, 2.49
[00:18:19] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.02, 3.41, 4.00
[00:18:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.94, 2.63, 2.26
[00:20:01] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.79, 2.93, 3.47
[00:22:01] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.01, 2.71, 3.32
[00:24:02] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.73, 3.49, 3.76
[00:25:56] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.82, 3.18, 3.61
[00:27:54] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.48, 2.59, 3.34
[00:30:18] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:32:14] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:32:52] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 7.97, 6.61, 3.91
[00:36:50] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.65, 3.25, 3.13
[00:44:05] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.61, 3.49, 3.16
[00:44:24] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:46:21] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:48:05] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.01, 3.00, 3.07
[00:49:56] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.45, 3.44, 3.27
[00:50:45] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:51:18] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:51:54] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.80, 3.40, 3.29
[00:53:33] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[00:54:48] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:55:28] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:57:43] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[00:59:12] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:00:18] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.94, 4.40, 3.10
[01:01:10] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:04:09] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.92, 3.16, 2.95
[01:07:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.18, 3.46, 3.13
[01:09:08] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.64, 3.82, 3.32
[01:09:28] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:13:09] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.90, 3.02, 3.13
[01:13:31] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:17:55] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:19:56] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:24:17] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.93, 3.81, 2.96
[01:29:13] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:31:54] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.13, 3.46, 2.98
[01:33:21] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:33:54] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.27, 3.03, 2.89
[01:39:47] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:41:23] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:41:53] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:42:13] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.79, 2.91, 3.85
[01:43:56] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:44:02] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:45:19] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 9.35, 5.78, 3.24
[01:47:32] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:48:28] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[01:50:10] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.15, 2.55, 3.36
[01:50:28] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[01:54:56] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 0.64, 3.21, 3.88
[01:56:52] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.63, 3.36, 3.82
[01:58:55] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:00:51] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:04:50] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 0.39, 2.92, 3.82
[02:04:54] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.05, 3.57, 3.12
[02:06:18] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.52, 4.71, 3.38
[02:08:49] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.30, 1.63, 3.08
[02:10:18] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.93, 3.51, 3.23
[02:10:37] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.53, 3.82, 3.42
[02:12:18] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:12:18] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.20, 2.66, 2.94
[02:12:31] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.38, 3.01, 3.16
[02:14:01] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.45, 2.85, 2.46
[02:18:00] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 7.64, 4.45, 3.16
[02:18:36] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:19:59] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.67, 3.57, 2.99
[02:20:47] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:21:58] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.47, 3.76, 3.11
[02:22:50] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 6.53, 4.16, 3.06
[02:23:02] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:23:57] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.42, 3.29, 3.02
[02:24:50] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:26:54] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 9.00, 4.50, 2.27
[02:27:56] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 9.20, 5.70, 4.01
[02:28:54] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 2.63, 3.90, 2.35
[02:28:54] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:29:18] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:30:23] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.79, 4.67, 3.70
[02:30:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.73, 2.75, 2.11
[02:31:10] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:31:55] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.87, 3.95, 3.63
[02:32:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.67, 3.56, 3.25
[02:34:23] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.26, 3.86, 3.38
[02:36:17] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.49, 3.50, 3.29
[02:37:53] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 3.24, 3.11, 3.36
[02:38:01] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:38:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.41, 3.05, 3.15
[02:38:24] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.11, 3.72, 3.71
[02:40:50] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 0.44, 1.84, 3.92
[02:42:24] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.19, 2.81, 3.32
[02:43:50] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.32, 4.21, 3.73
[02:44:14] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:44:23] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:45:08] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:45:49] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.99, 3.37, 3.48
[02:46:50] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.76, 1.41, 3.12
[02:46:54] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 7.02, 4.38, 2.69
[02:47:37] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:47:49] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.44, 3.16, 3.39
[02:48:23] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:49:34] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:51:25] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:55:47] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 8.49, 5.43, 4.12
[02:55:58] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:57:04] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[02:58:00] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[02:58:54] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 0.34, 2.78, 3.95
[03:00:43] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.56, 3.96, 3.22
[03:01:07] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:02:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.15, 1.38, 3.11
[03:03:44] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.07, 3.62, 3.83
[03:04:31] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.69, 3.25, 3.13
[03:06:43] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:07:39] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:07:43] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 9.54, 4.90, 4.17
[03:08:45] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:08:54] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 6.20, 3.88, 3.55
[03:09:36] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:09:42] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.06, 3.90, 3.88
[03:10:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.92, 3.31, 3.40
[03:13:41] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.99, 2.66, 3.36
[03:14:02] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:15:14] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:15:56] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.42, 3.50, 3.20
[03:16:54] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.89, 4.36, 3.75
[03:17:14] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:19:54] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.83, 2.88, 3.03
[03:24:32] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:24:37] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 7.59, 4.72, 3.86
[03:28:35] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.17, 3.34, 3.56
[03:32:34] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.89, 2.81, 3.30
[03:36:03] miraheze/ErrorPages - dependabot[bot] the build passed.
[03:37:14] miraheze/ErrorPages - Universal-Omega the build passed.
[03:40:54] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 0.52, 1.90, 3.84
[03:43:28] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:43:33] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:44:29] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.13, 3.74, 3.14
[03:44:37] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.41, 3.72, 3.20
[03:45:33] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:46:29] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.79, 2.85, 2.89
[03:46:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.75, 1.80, 3.22
[03:47:51] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:48:25] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.48, 3.12, 3.07
[03:49:51] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:49:59] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:50:50] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.43, 3.43, 1.88
[03:51:57] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:52:04] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:52:26] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.00, 3.72, 3.20
[03:52:49] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.75, 2.35, 1.67
[03:54:27] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.45, 3.22, 3.06
[03:56:26] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.28, 4.44, 3.53
[03:56:30] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:57:40] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 12.89, 5.28, 3.15
[03:58:18] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:58:19] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[03:58:25] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.80, 3.45, 3.29
[03:58:29] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[03:59:07] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:00:16] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:00:25] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.15, 2.65, 3.01
[04:01:06] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:01:54] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.76, 3.34, 3.01
[04:03:54] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.04, 2.80, 2.85
[04:07:29] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:09:00] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:09:42] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.70, 3.54, 3.41
[04:11:33] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:11:42] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 9.04, 4.88, 3.87
[04:13:04] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:13:43] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.91, 3.96, 3.65
[04:16:37] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.70, 3.41, 3.07
[04:18:31] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.81, 3.32, 3.08
[04:19:20] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:19:43] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.87, 3.11, 3.37
[04:25:50] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:26:15] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.91, 4.61, 3.58
[04:27:49] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:28:14] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.94, 3.54, 3.31
[04:30:13] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.06, 2.71, 3.03
[04:36:19] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:37:33] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:39:33] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:40:32] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:42:09] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:44:15] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:46:12] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:50:24] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:53:04] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[04:55:04] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.68, 3.54, 2.79
[04:57:03] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.03, 5.14, 3.51
[04:57:11] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[04:59:49] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.32, 3.50, 2.73
[05:01:02] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.27, 3.02, 3.00
[05:01:50] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.68, 2.74, 2.55
[05:07:52] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.91, 5.91, 3.95
[05:08:38] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:11:52] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.72, 3.23, 3.31
[05:15:48] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:16:19] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:16:50] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 6.69, 3.56, 2.15
[05:16:56] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:17:44] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:18:49] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.99, 2.76, 2.02
[05:24:05] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:26:02] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:26:50] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 3.71, 4.59, 3.06
[05:28:50] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.85, 3.24, 2.74
[05:29:01] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:31:09] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:33:09] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:36:35] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:37:42] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:38:35] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:41:50] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:43:01] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:43:46] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 13.39, 5.38, 3.27
[05:48:19] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:49:19] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:49:25] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 7.12, 4.14, 2.75
[05:52:21] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[05:55:12] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 0.90, 3.90, 3.34
[05:58:41] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[05:59:03] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.46, 4.34, 3.63
[06:00:38] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:00:58] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 1.36, 3.41, 3.39
[06:02:53] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.78, 2.55, 3.07
[06:03:39] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:05:36] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:05:52] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.79, 2.22, 3.76
[06:08:28] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.33, 1.74, 1.35
[06:09:50] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.53, 2.06, 3.36
[06:14:26] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.78, 1.90, 1.57
[06:15:36] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:15:50] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 13.83, 7.07, 4.78
[06:17:36] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:18:25] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.12, 2.01, 1.69
[06:19:08] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:19:48] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.42, 3.84, 3.97
[06:21:08] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:22:05] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.04, 3.61, 2.77
[06:23:47] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.81, 2.15, 3.23
[06:24:05] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.83, 2.56, 2.49
[06:32:25] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:34:22] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:37:02] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:37:07] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.15, 4.00, 3.01
[06:37:09] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:38:49] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:39:45] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 11.26, 6.68, 4.21
[06:40:46] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:41:11] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:43:27] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:43:48] miraheze/landing - dependabot[bot] the build passed.
[06:44:56] miraheze/landing - Universal-Omega the build passed.
[06:45:08] miraheze/phabricator-extensions - dependabot[bot] the build passed.
[06:47:11] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 0.73, 2.96, 3.46
[06:47:38] miraheze/phabricator-extensions - Universal-Omega the build passed.
[06:49:11] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.35, 2.07, 3.07
[06:51:41] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 0.96, 2.49, 3.69
[06:55:10] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:55:40] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.36, 3.48, 3.77
[06:57:10] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[06:57:23] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[06:57:39] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.16, 2.49, 3.37
[06:58:57] PROBLEM - hoodwiki.xyz - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'hoodwiki.xyz' expires in 15 day(s) (Wed 19 Oct 2022 06:37:38 GMT +0000).
[06:59:19] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:00:10] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:02:08] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:06:34] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:06:42] miraheze/ssl - MirahezeSSLBot the build has errored.
[07:07:11] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:07:34] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:07:36] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:08:36] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:09:16] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.89, 3.90, 2.80
[07:09:32] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:09:34] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:09:37] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.76, 5.10, 3.78
[07:11:16] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.50, 3.64, 2.87
[07:11:24] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:11:36] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.52, 3.62, 3.39
[07:12:12] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.05, 1.42, 1.92
[07:13:02] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:13:16] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.85, 2.67, 2.61
[07:13:35] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.95, 2.71, 3.08
[07:14:59] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:16:11] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 0.95, 1.13, 1.68
[07:19:58] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:23:13] PROBLEM - pokemonwiki.info - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'pokemonwiki.info' expires in 15 day(s) (Wed 19 Oct 2022 06:57:42 GMT +0000).
[07:26:01] miraheze/ssl - MirahezeSSLBot the build has errored.
[07:28:25] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:28:36] RECOVERY - hoodwiki.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'hoodwiki.xyz' will expire on Sun 01 Jan 2023 06:05:29 GMT +0000.
[07:36:50] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:38:39] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[07:38:49] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:40:36] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[07:52:28] RECOVERY - pokemonwiki.info - LetsEncrypt on sslhost is OK: OK - Certificate 'pokemonwiki.info' will expire on Sun 01 Jan 2023 06:23:41 GMT +0000.
[08:04:16] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:12:42] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[08:14:19] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:16:18] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[08:16:54] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.71, 4.63, 2.72
[08:18:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.51, 3.35, 2.48
[08:22:31] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:24:27] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[08:26:09] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:29:37] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 9.21, 5.51, 2.98
[08:30:17] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[08:33:36] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.74, 3.18, 2.64
[08:42:10] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:44:08] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[08:52:18] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:53:41] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:55:32] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.89, 4.27, 2.42
[08:55:48] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[08:56:26] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[08:57:32] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.01, 2.97, 2.16
[08:59:36] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.81, 3.92, 2.51
[08:59:52] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:02:52] PROBLEM - cp33 PowerDNS Recursor on cp33 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[09:02:54] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:05:17] PROBLEM - cp33 Puppet on cp33 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:05:33] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.23, 4.00, 2.88
[09:05:35] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK
[09:07:10] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:07:34] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.06, 3.46, 2.83
[09:08:36] PROBLEM - cp33 SSH on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:09:34] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.75, 2.49, 2.55
[09:09:35] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 7 = Critical]
[09:10:31] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:13:18] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:13:44] PROBLEM - cp33 conntrack_table_size on cp33 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:14:34] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb
[09:14:37] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb
[09:14:51] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:14:54] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 6.13, 3.63, 2.52
[09:16:20] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:16:48] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:16:54] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 2.42, 3.44, 2.61
[09:18:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 0.89, 2.52, 2.37
[09:23:08] RECOVERY - cp33 Puppet on cp33 is OK: OK: Puppet is currently enabled, last run 51 minutes ago with 0 failures
[09:24:17] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[09:24:34] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[09:26:57] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3362 bytes in 1.417 second response time
[09:27:07] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:27:11] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:28:00] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is WARNING: WARNING - NGINX Error Rate is 58%
[09:28:31] RECOVERY - cp33 conntrack_table_size on cp33 is OK: OK: nf_conntrack is 1 % full
[09:29:05] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:30:08] RECOVERY - cp33 SSH on cp33 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[09:30:08] PROBLEM - cp33 Puppet on cp33 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[09:30:23] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 15%
[09:33:23] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:33:29] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:33:30] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb
[09:34:36] PROBLEM - cp33 SSH on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:35:02] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb
[09:35:23] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3362 bytes in 0.558 second response time
[09:35:30] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[09:36:51] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 14 backends are healthy
[09:36:59] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[09:37:31] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:37:48] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:38:07] PROBLEM - cp33 Puppet on cp33 is WARNING: WARNING: Puppet last ran 1 hour ago
[09:38:14] RECOVERY - cp33 PowerDNS Recursor on cp33 is OK: DNS OK: 0.085 seconds response time. miraheze.org returns 108.175.15.182,2607:f1c0:1800:26f::1,2607:f1c0:1800:8100::1,74.208.203.152
[09:38:45] RECOVERY - cp33 SSH on cp33 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[09:41:50] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:42:05] RECOVERY - cp33 Puppet on cp33 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:45:44] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:47:41] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:52:13] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:54:40] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.42, 3.39, 2.40
[09:58:31] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[09:58:37] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[09:59:39] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 4.11, 3.27, 2.31
[10:00:34] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:00:41] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.62, 2.87, 2.72
[10:01:35] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.46, 2.51, 2.14
[10:08:55] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[10:10:57] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:14:54] PROBLEM - cp22 Current Load on cp22 is CRITICAL: CRITICAL - load average: 5.67, 3.19, 2.13
[10:15:10] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[10:16:37] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[10:18:34] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:20:15] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.10, 1.95, 3.91
[10:21:23] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:22:54] PROBLEM - cp22 Current Load on cp22 is WARNING: WARNING - load average: 1.04, 3.80, 3.43
[10:24:15] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 8.37, 4.34, 4.34
[10:24:54] RECOVERY - cp22 Current Load on cp22 is OK: OK - load average: 1.35, 3.04, 3.19
[10:24:55] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[10:26:54] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:28:14] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.01, 3.16, 3.94
[10:32:12] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.45, 1.61, 3.12
[10:37:43] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[10:38:50] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.89, 3.92, 2.45
[10:39:42] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:40:18] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[10:40:50] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.34, 2.91, 2.26
[10:42:49] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 8.04, 4.54, 2.71
[10:44:21] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:44:50] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.33, 3.16, 2.43
[10:54:08] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[10:54:52] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 6.68, 3.55, 2.34
[10:56:09] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[10:56:53] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.59, 2.59, 2.13
[10:58:56] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:00:52] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:03:09] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:04:54] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.33, 3.48, 2.57
[11:05:10] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:06:55] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.85, 3.43, 2.68
[11:06:57] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:08:55] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.27, 3.70, 2.87
[11:11:06] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:12:55] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.81, 3.50, 3.08
[11:14:56] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.98, 2.72, 2.85
[11:17:16] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:21:26] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:25:50] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:28:51] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 6.75, 3.93, 2.73
[11:30:54] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:31:00] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 8.09, 4.38, 2.92
[11:32:03] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:32:49] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 2.14, 3.68, 2.94
[11:33:00] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.89, 3.19, 2.65
[11:34:49] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.50, 3.04, 2.80
[11:37:01] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.33, 5.25, 3.56
[11:37:12] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:39:18] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:40:48] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 10.73, 5.21, 3.08
[11:41:01] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.75, 3.49, 3.21
[11:41:16] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:42:47] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.07, 3.92, 2.87
[11:43:02] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.81, 2.55, 2.90
[11:43:26] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:44:46] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.53, 2.70, 2.55
[11:49:35] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[11:54:47] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 8.90, 4.58, 3.06
[11:56:46] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.33, 3.94, 3.01
[11:58:13] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[11:58:45] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 3.56, 4.10, 3.19
[12:00:45] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.07, 2.96, 2.88
[12:06:26] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[12:07:08] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 10.09, 4.60, 3.13
[12:08:34] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[12:08:42] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 7.90, 5.20, 3.63
[12:10:41] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.08, 3.98, 3.38
[12:12:40] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.18, 2.96, 3.07
[12:13:08] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[12:17:13] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.36, 3.69, 3.85
[12:21:14] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 0.72, 2.21, 3.21
[12:38:17] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.25, 3.89, 3.23
[12:40:17] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.22, 2.87, 2.93
[12:40:56] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[12:42:17] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[12:42:53] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[12:43:19] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.68, 3.85, 2.35
[12:45:15] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.62, 2.93, 2.18
[12:46:21] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[12:47:02] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[12:49:01] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[12:54:18] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[12:56:50] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 5.61, 3.58, 2.55
[12:58:50] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 3.22, 3.35, 2.59
[13:00:27] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[13:07:39] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:08:21] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.51, 3.54, 2.85
[13:08:44] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:09:36] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[13:10:22] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.25, 2.68, 2.61
[13:12:53] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[13:24:32] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:26:23] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:26:42] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 22.77, 12.17, 6.38
[13:28:19] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[13:33:00] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[13:41:42] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:42:52] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 22.81, 8.18, 3.82
[13:43:39] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[13:46:36] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 0.49, 2.10, 3.89
[13:48:35] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.59, 2.98, 3.98
[13:48:50] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 0.59, 3.58, 3.22
[13:49:09] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:50:49] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.88, 2.72, 2.95
[13:54:52] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:55:29] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[13:56:51] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[13:59:47] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:01:47] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:06:29] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.44, 2.30, 3.84
[14:07:36] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 3.60, 4.10, 3.21
[14:09:36] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.70, 4.00, 3.28
[14:10:34] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[14:11:38] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 9.03, 6.22, 4.20
[14:12:27] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.81, 1.60, 3.06
[14:12:34] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:15:38] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.02, 3.94, 3.79
[14:19:39] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.16, 2.53, 3.24
[14:21:32] miraheze/mw-config - dependabot[bot] the build passed.
[14:24:56] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[14:26:53] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[14:26:55] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:27:18] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[14:28:49] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:28:54] PROBLEM - swiftproxy111 Puppet on swiftproxy111 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 9 minutes ago with 0 failures
[14:29:40] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.81, 3.16, 3.09
[14:31:27] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:31:41] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.53, 2.77, 2.95
[14:35:53] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[14:39:42] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.02, 4.00, 3.34
[14:44:29] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[14:46:30] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:48:31] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:51:20] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[14:52:13] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.91, 3.64, 2.89
[14:53:20] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[14:54:12] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 2.36, 3.23, 2.83
[14:57:05] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.93, 3.02, 2.44
[14:59:00] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.48, 2.58, 2.36
[15:00:58] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 7.34, 3.56, 1.69
[15:01:02] PROBLEM - swiftobject115 Current Load on swiftobject115 is CRITICAL: CRITICAL - load average: 6.30, 3.61, 1.71
[15:08:02] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:08:19] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 12.15, 5.53, 2.73
[15:09:16] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:10:07] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:10:59] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 8.45, 5.39, 3.64
[15:12:58] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 0.65, 3.10, 3.09
[15:13:02] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 1.27, 3.28, 3.17
[15:13:23] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 6.56, 5.17, 3.40
[15:14:34] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:15:04] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 5.79, 3.89, 2.90
[15:15:23] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 3.21, 4.58, 3.42
[15:17:03] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.91, 2.94, 2.66
[15:17:46] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:18:42] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:22:54] PROBLEM - cp22 APT on cp22 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:22:56] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:22:58] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 7.44, 4.48, 3.40
[15:23:03] PROBLEM - swiftobject115 Current Load on swiftobject115 is CRITICAL: CRITICAL - load average: 5.92, 3.97, 3.25
[15:23:09] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:24:13] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:24:59] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:25:15] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 26.25, 12.28, 6.16
[15:25:23] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 6.34, 5.76, 4.58
[15:25:31] PROBLEM - cp33 PowerDNS Recursor on cp33 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[15:26:13] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:26:18] PROBLEM - cp33 SSH on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:27:23] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 3.30, 4.69, 4.32
[15:27:45] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 8.37, 5.52, 3.33
[15:28:31] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[15:28:35] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2607:f1c0:1800:26f::1/cpweb
[15:29:05] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:29:19] PROBLEM - cp33 Puppet on cp33 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[15:29:47] RECOVERY - cp33 PowerDNS Recursor on cp33 is OK: DNS OK: 4.807 seconds response time. miraheze.org returns 108.175.15.182,2607:f1c0:1800:26f::1,2607:f1c0:1800:8100::1,74.208.203.152
[15:30:26] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 14 backends are healthy
[15:30:27] RECOVERY - cp33 SSH on cp33 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[15:30:32] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:31:18] RECOVERY - cp33 Puppet on cp33 is OK: OK: Puppet is currently enabled, last run 35 minutes ago with 0 failures
[15:31:23] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 4.71, 5.52, 4.80
[15:31:36] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 1.75, 3.33, 2.91
[15:31:40] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:33:23] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 2.12, 4.24, 4.42
[15:37:00] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:38:56] RECOVERY - swiftproxy111 Puppet on swiftproxy111 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[15:40:05] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds
[15:43:19] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:44:13] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[15:44:58] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 1.62, 2.93, 3.68 [15:45:34] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [15:46:58] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 6.49, 4.58, 4.21 [15:47:32] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [15:53:05] miraheze/mw-config - Universal-Omega the build passed. [15:57:13] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 1.07, 2.01, 3.84 [15:58:12] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 1.51, 2.07, 3.76 [15:59:53] !log [@test131] starting deploy of {'config': True} to all [15:59:55] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 3s [16:00:04] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:00:18] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 6.02, 5.72, 4.87 [16:00:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:01:12] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 0.96, 1.37, 3.15 [16:02:13] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.05, 1.85, 3.29 [16:02:59] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.54, 3.53, 3.00 [16:04:59] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.15, 2.91, 2.83 [16:07:37] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:08:16] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 7.53, 4.98, 4.10 [16:09:37] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [16:10:03] !log [@mwtask141] starting deploy of {'config': True} to all [16:10:10] !log [@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 6s [16:10:10] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 3.67, 5.42, 5.51 [16:10:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:10:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:10:16] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.51, 3.99, 3.84 [16:13:02] PROBLEM - swiftobject115 Current Load on swiftobject115 is WARNING: WARNING - load average: 0.61, 1.68, 3.80 [16:14:07] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 12.91, 8.37, 6.57 [16:14:16] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.21, 2.85, 3.40 [16:15:02] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 0.47, 1.24, 3.37 [16:15:37] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:17:35] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [16:20:58] PROBLEM - swiftobject114 Current Load on swiftobject114 is WARNING: WARNING - load average: 0.58, 1.75, 3.68 [16:22:58] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 0.66, 1.33, 3.29 [16:23:14] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:25:14] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates). 
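Many of the APT checks above fail with "Plugin timed out after 10 seconds" whenever a cp* host is under load, then recover a couple of minutes later. The sketch below shows the general pattern of wrapping a slow check command in a hard timeout; the `apt-get -s upgrade` command line is illustrative only, not the exact plugin invocation used on these hosts.

```python
import subprocess

def run_check(cmd, timeout=10):
    """Run a check command and give up after `timeout` seconds, mirroring the
    'Plugin timed out after 10 seconds' CRITICALs seen above."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return 2, "CRITICAL - Plugin timed out after %d seconds" % timeout
    return proc.returncode, proc.stdout.strip()

# Illustrative: simulate an apt upgrade to count pending packages.
code, output = run_check(["apt-get", "-s", "upgrade"])
print(code, output[:80])
```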
[16:28:18] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:30:15] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [16:33:20] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 2.88, 4.45, 3.68 [16:34:51] PROBLEM - cp33 APT on cp33 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:36:54] RECOVERY - cp33 APT on cp33 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [16:37:20] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.92, 3.66, 3.56 [16:38:32] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:39:21] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.63, 4.08, 3.72 [16:40:29] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [16:41:21] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.83, 3.49, 3.54 [16:43:21] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 2.72, 3.14, 3.39 [16:47:55] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.04, 1.10, 1.90 [16:51:39] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 1.91, 4.38, 5.99 [16:51:54] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 0.91, 1.02, 1.68 [16:53:02] PROBLEM - cp23 APT on cp23 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:54:20] PROBLEM - cp23 Current Load on cp23 is CRITICAL: CRITICAL - load average: 9.52, 5.24, 3.07 [16:54:59] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [16:55:56] PROBLEM - cp32 APT on cp32 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [16:56:16] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 2.16, 3.89, 2.83 [16:56:24] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.35, 3.78, 3.19 [16:56:40] RECOVERY - cp32 APT on cp32 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [16:56:44] PROBLEM - puppet141 Current Load on puppet141 is CRITICAL: CRITICAL - load average: 4.21, 3.19, 2.12 [16:58:11] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 0.59, 2.75, 2.53 [16:58:24] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.71, 2.99, 2.98 [16:59:33] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 3.04, 3.54, 4.94 [17:00:44] PROBLEM - puppet141 Current Load on puppet141 is WARNING: WARNING - load average: 3.15, 3.65, 2.59 [17:02:44] RECOVERY - puppet141 Current Load on puppet141 is OK: OK - load average: 1.57, 2.89, 2.44 [17:04:50] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 13.24, 6.73, 3.56 [17:05:27] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 4.96, 5.61, 5.47 [17:07:26] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 6.48, 5.93, 5.60 [17:09:24] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 4.21, 5.49, 5.49 [17:15:23] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 6.45, 5.58, 5.45 [17:24:12] paladox: hi, what would the status be about swift by the way? 
Haven't heard much about it in a while [17:24:29] I've got it working [17:24:44] oh wow, that's great :) [17:25:23] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 5.16, 5.62, 5.72 [17:33:23] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 7.73, 5.58, 5.46 [17:37:23] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 1.76, 4.67, 5.24 [17:37:24] PROBLEM - swiftobject115 Current Load on swiftobject115 is CRITICAL: CRITICAL - load average: 7.40, 4.25, 2.30 [17:37:32] PROBLEM - swiftobject114 Current Load on swiftobject114 is CRITICAL: CRITICAL - load average: 6.04, 4.15, 2.31 [17:39:23] PROBLEM - gluster102 Current Load on gluster102 is CRITICAL: CRITICAL - load average: 8.11, 5.84, 5.57 [17:46:25] !log [reception@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=ucroniaswiki --update (END - exit=0) [17:46:29] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:49:02] PROBLEM - swiftobject115 Current Load on swiftobject115 is WARNING: WARNING - load average: 0.61, 3.40, 3.46 [17:49:19] RECOVERY - swiftobject114 Current Load on swiftobject114 is OK: OK - load average: 0.59, 3.07, 3.28 [17:49:23] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 2.49, 5.12, 5.77 [17:51:02] RECOVERY - swiftobject115 Current Load on swiftobject115 is OK: OK - load average: 0.61, 2.49, 3.12 [17:56:34] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 1.80, 3.15, 4.62 [17:57:11] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=equinewikiwiki --no-updates /mnt/mediawiki-static/metawiki/ImportDump/equinewikiwiki-20221003065654.xml (START) [17:57:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:00:11] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.02, 1.28, 2.00 [18:00:55] PROBLEM - test131 Puppet on test131 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. 
Failed resources (up to 3 shown): Service[nginx] [18:06:44] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 3.84, 2.85, 2.10 [18:08:04] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.25, 1.66, 1.84 [18:10:35] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.36, 2.43, 2.13 [18:12:13] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.32, 3.04, 3.93 [18:14:07] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=equinewikiwiki --no-updates /mnt/mediawiki-static/metawiki/ImportDump/equinewikiwiki-20221003065654.xml (END - exit=0) [18:14:08] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=equinewikiwiki (START) [18:14:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:14:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:16:07] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.18, 3.54, 3.91 [18:18:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.03, 3.21, 3.74 [18:29:44] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.95, 3.99, 3.70 [18:31:01] RECOVERY - test131 Puppet on test131 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:31:52] PROBLEM - gluster102 Current Load on gluster102 is WARNING: WARNING - load average: 5.54, 5.20, 3.94 [18:33:38] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.93, 3.87, 3.77 [18:33:52] RECOVERY - gluster102 Current Load on gluster102 is OK: OK - load average: 3.76, 4.51, 3.83 [18:37:32] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.83, 4.83, 4.09 [18:41:25] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.16, 3.91, 3.88 [18:45:19] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.61, 4.06, 3.93 [18:47:16] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.13, 3.42, 3.72 [18:53:15] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.87, 2.58, 3.25 [19:02:18] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 195 bytes in 0.010 second response time [19:02:55] PROBLEM - test131 Puppet on test131 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 3 minutes ago with 0 failures [19:12:18] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is WARNING: HTTP WARNING: HTTP/1.1 401 Unauthorized - 473 bytes in 0.031 second response time [19:13:20] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=equinewikiwiki (END - exit=0) [19:13:22] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=equinewikiwiki --active --update (END - exit=0) [19:13:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:13:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:13:58] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.02, 1.56, 1.91 [19:17:58] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 1.15, 1.26, 1.70 [19:29:55] PROBLEM - swiftproxy111 Puppet on swiftproxy111 is WARNING: 
WARNING: Puppet is currently disabled, message: paladox, last run 17 minutes ago with 0 failures [19:32:18] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 195 bytes in 0.011 second response time [19:37:20] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 4.33, 3.04, 2.37 [19:42:01] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.19, 3.70, 2.80 [19:42:21] PROBLEM - swiftproxy111 HTTPS on swiftproxy111 is WARNING: HTTP WARNING: HTTP/1.1 401 Unauthorized - 473 bytes in 3.229 second response time [19:43:20] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.38, 3.04, 2.71 [19:45:55] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.08, 3.42, 2.88 [19:47:51] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.38, 3.04, 2.81 [19:54:31] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 6.08, 3.80, 2.57 [19:57:05] PROBLEM - cp23 APT on cp23 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:57:52] PROBLEM - cp23 Current Load on cp23 is WARNING: WARNING - load average: 3.48, 3.01, 2.06 [19:59:06] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [19:59:49] RECOVERY - cp23 Current Load on cp23 is OK: OK - load average: 2.20, 2.86, 2.13 [20:02:55] RECOVERY - test131 Puppet on test131 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures [20:03:41] RECOVERY - swiftproxy111 Puppet on swiftproxy111 is OK: OK: Puppet is currently enabled, last run 12 seconds ago with 0 failures [20:06:04] PROBLEM - cp23 APT on cp23 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [20:08:05] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [20:08:32] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 2.45, 3.74, 3.66 [20:09:03] PROBLEM - gluster122 Current Load on gluster122 is CRITICAL: CRITICAL - load average: 5.90, 3.41, 2.12 [20:10:28] miraheze/YouTube - Universal-Omega the build passed. [20:10:32] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.53, 3.01, 3.40 [20:11:19] PROBLEM - gluster122 APT on gluster122 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected. [20:13:11] PROBLEM - mw142 HTTPS on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:13:53] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:14:53] PROBLEM - mw132 HTTPS on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:14:56] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:15:10] PROBLEM - mw141 HTTPS on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:15:20] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:15:53] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 3 backends are down. mw132 mw141 mw142 [20:16:16] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 3 backends are down. mw132 mw141 mw142 [20:16:35] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 3 backends are down. mw132 mw141 mw142 [20:17:30] PROBLEM - cp23 Varnish Backends on cp23 is CRITICAL: 3 backends are down. 
mw132 mw141 mw142 [20:17:51] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:18:12] PROBLEM - mw131 HTTPS on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:18:23] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:18:38] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 88% [20:19:02] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb [20:19:09] PROBLEM - cp32 HTTPS on cp32 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:19:18] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 8 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb [20:19:23] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 72% [20:19:49] PROBLEM - mw122 HTTPS on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:19:59] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:20:18] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 82% [20:20:19] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:21:06] RECOVERY - cp32 HTTPS on cp32 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3362 bytes in 0.415 second response time [20:21:25] PROBLEM - mw121 HTTPS on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:21:55] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3362 bytes in 0.438 second response time [20:23:37] Um... it looks like we may be down? [20:23:51] completely... 
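The cp* "HTTP 4xx/5xx ERROR Rate" checks above report what share of recent responses were 4xx/5xx. A rough sketch of that calculation from an NGINX access log follows; the log path, sampling window, combined-log-format regex and the 40%/60% thresholds are all assumptions inferred from the alerts, not the real plugin.

```python
import re
from collections import deque

STATUS_RE = re.compile(r'"\s+(\d{3})\s')   # status code right after the quoted request

def error_rate(access_log_path, window=1000):
    """Percentage of the last `window` requests that returned 4xx/5xx.
    Illustrative only -- the real check on the cp* hosts may sample differently."""
    recent = deque(maxlen=window)
    with open(access_log_path, errors="replace") as fh:
        for line in fh:
            m = STATUS_RE.search(line)
            if m:
                recent.append(int(m.group(1)))
    if not recent:
        return 0.0
    return 100.0 * sum(1 for s in recent if s >= 400) / len(recent)

rate = error_rate("/var/log/nginx/access.log")   # path is an assumption
status = "CRITICAL" if rate >= 60 else "WARNING" if rate >= 40 else "OK"
print(f"{status} - NGINX Error Rate is {rate:.0f}%")
```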
[20:24:42] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [20:25:09] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [20:25:20] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is WARNING: WARNING - NGINX Error Rate is 43% [20:25:31] we do appear to be down [20:25:34] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 46% [20:26:39] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 16% [20:27:20] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 70% [20:27:32] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 66% [20:28:11] no idea what the cause would be [20:29:29] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 54% [20:30:12] hmm [20:31:19] PROBLEM - cp23 HTTPS on cp23 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 6075 bytes in 0.038 second response time [20:31:24] there's no issues cp22/23 connecting to mw* [20:31:27] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 28% [20:31:31] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.269 second response time [20:31:50] RECOVERY - mw142 HTTPS on mw142 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.008 second response time [20:32:18] miraheze/ManageWiki - Universal-Omega the build passed. [20:32:52] miraheze/MirahezeMagic - Universal-Omega the build passed. [20:32:59] miraheze/DataDump - Universal-Omega the build passed. [20:33:02] I'm getting 503s across all of Miraheze [20:33:02] RECOVERY - mw121 HTTPS on mw121 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.010 second response time [20:33:16] musikanimal: we're aware [20:33:31] RECOVERY - mw122 HTTPS on mw122 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.011 second response time [20:33:37] RECOVERY - mw141 HTTPS on mw141 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.008 second response time [20:33:37] RECOVERY - mw131 HTTPS on mw131 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.008 second response time [20:33:38] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 2.513 second response time [20:33:41] miraheze/ManageWiki - Universal-Omega the build passed. [20:33:41] RECOVERY - mw132 HTTPS on mw132 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.011 second response time [20:33:43] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 1.108 second response time [20:33:55] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 1.723 second response time [20:34:01] !log reboot mw141 and mw142 [20:34:51] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is WARNING: WARNING - NGINX Error Rate is 50% [20:35:21] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 44% [20:35:58] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:35:58] !log reboot mw122 [20:36:09] PROBLEM - mw141 Check Gluster Clients on mw141 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [20:36:25] miraheze/WikiDiscover - Universal-Omega the build passed. 
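The "Varnish Backends" alerts in this window ("3 backends are down. mw132 mw141 mw142" / "All 14 backends are healthy") summarise backend probe state on the cache proxies. A sketch of how such a summary could be produced from `varnishadm backend.list` is below; the output parsing is approximate, since the column layout varies between Varnish versions, and this is not the actual check used here.

```python
import subprocess

def varnish_backend_health():
    """Summarise backend health in the style of the 'Varnish Backends' check.
    Assumes `varnishadm backend.list` lines contain 'Healthy' or 'Sick';
    treat the name extraction as a sketch, not the real plugin."""
    out = subprocess.run(["varnishadm", "backend.list"],
                         capture_output=True, text=True, check=True).stdout
    sick, healthy = [], []
    for line in out.splitlines():
        if "Sick" in line:
            sick.append(line.split(".", 1)[-1].split()[0])   # approximate backend name
        elif "Healthy" in line:
            healthy.append(line)
    if sick:
        return f"CRITICAL: {len(sick)} backends are down. {' '.join(sick)}"
    return f"OK: All {len(healthy)} backends are healthy"

print(varnish_backend_health())
```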
[20:36:27] PROBLEM - mw142 Check Gluster Clients on mw142 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [20:36:38] miraheze/MatomoAnalytics - Universal-Omega the build passed. [20:36:51] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 11% [20:36:57] miraheze/RemovePII - Universal-Omega the build passed. [20:36:58] miraheze/MirahezeMagic - Universal-Omega the build passed. [20:37:05] miraheze/SpriteSheet - Universal-Omega the build passed. [20:37:09] PROBLEM - mw142 JobRunner Service on mw142 is CRITICAL: PROCS CRITICAL: 0 processes with args 'redisJobRunnerService' [20:37:12] miraheze/GlobalNewFiles - Universal-Omega the build passed. [20:37:14] RECOVERY - cp23 HTTPS on cp23 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3343 bytes in 0.081 second response time [20:37:18] miraheze/WikiDiscover - Universal-Omega the build passed. [20:37:19] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 11% [20:37:21] miraheze/CustomHeader - Universal-Omega the build passed. [20:37:21] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 15% [20:37:30] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 1.991 second response time [20:37:35] miraheze/IncidentReporting - Universal-Omega the build passed. [20:37:51] miraheze/MatomoAnalytics - Universal-Omega the build passed. [20:37:56] PROBLEM - mw142 HTTPS on mw142 is CRITICAL: connect to address 2a10:6740::6:503 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [20:37:59] miraheze/GlobalNewFiles - Universal-Omega the build passed. [20:38:00] PROBLEM - mw122 Disk Space on mw122 is CRITICAL: connect to address 2a10:6740::6:310 port 5666: Connection refusedconnect to host 2a10:6740::6:310 port 5666: Connection refused [20:38:07] PROBLEM - mw122 Puppet on mw122 is CRITICAL: connect to address 2a10:6740::6:310 port 5666: Connection refusedconnect to host 2a10:6740::6:310 port 5666: Connection refused [20:38:12] miraheze/CustomHeader - Universal-Omega the build passed. [20:38:13] PROBLEM - mw122 ferm_active on mw122 is CRITICAL: connect to address 2a10:6740::6:310 port 5666: Connection refusedconnect to host 2a10:6740::6:310 port 5666: Connection refused [20:38:14] miraheze/RottenLinks - Universal-Omega the build passed. [20:38:23] miraheze/RemovePII - Universal-Omega the build passed. [20:38:34] PROBLEM - mw122 Check Gluster Clients on mw122 is CRITICAL: connect to address 2a10:6740::6:310 port 5666: Connection refusedconnect to host 2a10:6740::6:310 port 5666: Connection refused [20:38:35] miraheze/SpriteSheet - Universal-Omega the build passed. [20:38:39] miraheze/PDFEmbed - Universal-Omega the build passed. [20:38:42] PROBLEM - mw122 PowerDNS Recursor on mw122 is CRITICAL: connect to address 2a10:6740::6:310 port 5666: Connection refusedconnect to host 2a10:6740::6:310 port 5666: Connection refused [20:38:42] PROBLEM - mw122 nutcracker process on mw122 is CRITICAL: connect to address 2a10:6740::6:310 port 5666: Connection refusedconnect to host 2a10:6740::6:310 port 5666: Connection refused [20:38:48] PROBLEM - mw122 php-fpm on mw122 is CRITICAL: connect to address 2a10:6740::6:310 port 5666: Connection refusedconnect to host 2a10:6740::6:310 port 5666: Connection refused [20:39:00] miraheze/YouTube - Universal-Omega the build passed. 
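The "Check Gluster Clients" failures above ("PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'") mean no GlusterFS FUSE client process is running on that appserver, i.e. the mediawiki-static mount is gone after the reboot. A small process-count sketch in the same spirit follows; the use of the third-party psutil package is an assumption, the real check being a Nagios PROCS plugin.

```python
import psutil   # third-party; pip install psutil

def count_procs_with_arg(arg="/usr/sbin/glusterfs"):
    """Count processes whose command line mentions `arg`, like the
    'Check Gluster Clients' PROCS check (a sketch, not the exact plugin)."""
    n = 0
    for proc in psutil.process_iter(attrs=["cmdline"]):
        try:
            if any(arg in part for part in (proc.info["cmdline"] or [])):
                n += 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return n

n = count_procs_with_arg()
if n == 0:
    print("PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'")
else:
    print(f"PROCS OK: {n} process(es) with args '/usr/sbin/glusterfs'")
```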
[20:39:08] RECOVERY - mw142 JobRunner Service on mw142 is OK: PROCS OK: 1 process with args 'redisJobRunnerService' [20:39:10] PROBLEM - mw122 SSH on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:39:48] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:39:54] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 19.20, 12.39, 8.64 [20:39:56] RECOVERY - mw142 HTTPS on mw142 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.017 second response time [20:39:57] RECOVERY - mw141 Check Gluster Clients on mw141 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [20:39:58] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 1.840 second response time [20:40:01] RECOVERY - mw122 Disk Space on mw122 is OK: DISK OK - free space: / 10191 MB (46% inode=81%); [20:40:13] RECOVERY - mw122 ferm_active on mw122 is OK: OK ferm input default policy is set [20:40:36] RECOVERY - mw122 PowerDNS Recursor on mw122 is OK: DNS OK: 0.304 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [20:40:40] RECOVERY - mw122 nutcracker process on mw122 is OK: PROCS OK: 1 process with UID = 116 (nutcracker), command name 'nutcracker' [20:40:47] RECOVERY - mw122 php-fpm on mw122 is OK: PROCS OK: 21 processes with command name 'php-fpm7.4' [20:40:49] RECOVERY - hi.famepedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'hi.famepedia.org' will expire on Sun 01 Jan 2023 19:14:02 GMT +0000. [20:40:50] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 13.49, 9.81, 7.01 [20:41:08] RECOVERY - mw122 SSH on mw122 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [20:41:11] miraheze/CreateWiki - Universal-Omega the build passed. [20:41:11] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 4.139 second response time [20:41:58] RECOVERY - mw122 Puppet on mw122 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [20:42:15] RECOVERY - mw142 Check Gluster Clients on mw142 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [20:42:48] PROBLEM - mw132 Current Load on mw132 is WARNING: WARNING - load average: 11.24, 10.07, 7.43 [20:42:59] PROBLEM - gluster122 Puppet on gluster122 is WARNING: WARNING: Puppet last ran 1 hour ago [20:43:49] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.234 second response time [20:44:34] !log reboot mon141 [20:45:46] miraheze/ImportDump - Universal-Omega the build passed. 
[20:49:06] !log [20::44:34 UTC] reboot mon141 [20:49:31] !log [20:35:59 UTC] reboot mw122 [20:49:49] !log [20:34:02 UTC] reboot mw141 and mw142 [20:49:54] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.36, 11.32, 10.23 [20:50:09] PROBLEM - mw121 HTTPS on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:50:29] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:51:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [20:51:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [20:51:23] PROBLEM - gluster122 Puppet on gluster122 is WARNING: WARNING: Puppet last ran 1 hour ago [20:51:53] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.421 second response time [20:52:14] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:52:16] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 13.11, 11.16, 9.01 [20:52:42] PROBLEM - mw141 HTTPS on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:53:42] !log reboot mw132 [20:54:16] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 12.09, 10.39, 8.34 [20:55:38] PROBLEM - mw132 APT on mw132 is CRITICAL: connect to address 2a10:6740::6:404 port 5666: Connection refusedconnect to host 2a10:6740::6:404 port 5666: Connection refused [20:55:46] PROBLEM - mw132 PowerDNS Recursor on mw132 is CRITICAL: connect to address 2a10:6740::6:404 port 5666: Connection refusedconnect to host 2a10:6740::6:404 port 5666: Connection refused [20:55:47] PROBLEM - mw132 php-fpm on mw132 is CRITICAL: connect to address 2a10:6740::6:404 port 5666: Connection refusedconnect to host 2a10:6740::6:404 port 5666: Connection refused [20:55:49] PROBLEM - mw132 nutcracker process on mw132 is CRITICAL: connect to address 2a10:6740::6:404 port 5666: Connection refusedconnect to host 2a10:6740::6:404 port 5666: Connection refused [20:55:52] PROBLEM - mw132 NTP time on mw132 is CRITICAL: connect to address 2a10:6740::6:404 port 5666: Connection refusedconnect to host 2a10:6740::6:404 port 5666: Connection refused [20:55:55] PROBLEM - mw132 conntrack_table_size on mw132 is CRITICAL: connect to address 2a10:6740::6:404 port 5666: Connection refusedconnect to host 2a10:6740::6:404 port 5666: Connection refused [20:56:05] PROBLEM - mw132 Disk Space on mw132 is CRITICAL: connect to address 2a10:6740::6:404 port 5666: Connection refusedconnect to host 2a10:6740::6:404 port 5666: Connection refused [20:56:10] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 10.52, 10.29, 8.54 [20:56:25] PROBLEM - cp33 Current Load on cp33 is CRITICAL: CRITICAL - load average: 4.21, 4.00, 2.55 [20:56:29] PROBLEM - mw132 Check Gluster Clients on mw132 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [20:56:30] PROBLEM - mw132 HTTPS on mw132 is CRITICAL: connect to address 2a10:6740::6:404 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [20:56:33] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: connect to address 2a10:6740::6:404 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [20:56:36] PROBLEM - mw132 JobRunner Service on mw132 is CRITICAL: PROCS CRITICAL: 0 processes with args 'redisJobRunnerService' [20:56:44] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 0.80, 0.19, 0.06 
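The bursts of "Connection refused ... port 5666" CRITICALs above appear because every NRPE-based check on a host fails at once while that host is rebooting (mw122, mw132 and mw141/mw142 were all rebooted in this window). A minimal TCP probe of the NRPE port illustrates the failure mode; the hostname used below is a placeholder, not taken from the log.

```python
import socket

def nrpe_reachable(host, port=5666, timeout=10):
    """Quick TCP probe of the NRPE port. While a host is rebooting this fails
    with 'Connection refused' or a timeout, which is why every check on it
    flips to CRITICAL at once (sketch, not the real check_nrpe)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"connect to host {host} port {port}: {exc}")
        return False

nrpe_reachable("mw132.example.org")   # placeholder hostname
```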
[20:56:48] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:57:37] RECOVERY - mw132 APT on mw132 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [20:57:40] RECOVERY - mw132 PowerDNS Recursor on mw132 is OK: DNS OK: 0.069 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1 [20:57:42] RECOVERY - mw132 php-fpm on mw132 is OK: PROCS OK: 21 processes with command name 'php-fpm7.4' [20:57:47] RECOVERY - mw132 nutcracker process on mw132 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker' [20:57:51] RECOVERY - mw132 NTP time on mw132 is OK: NTP OK: Offset 0.0008691847324 secs [20:57:52] RECOVERY - mw132 conntrack_table_size on mw132 is OK: OK: nf_conntrack is 1 % full [20:58:01] RECOVERY - mw132 Disk Space on mw132 is OK: DISK OK - free space: / 12369 MB (52% inode=82%); [20:58:21] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:58:25] RECOVERY - mw132 Check Gluster Clients on mw132 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [20:58:25] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.86, 3.22, 2.45 [20:58:27] RECOVERY - mw132 HTTPS on mw132 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.010 second response time [20:58:31] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.445 second response time [20:58:34] PROBLEM - mw122 HTTPS on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:58:36] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 14 backends are healthy [20:58:44] PROBLEM - mw131 HTTPS on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [20:59:47] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb [20:59:54] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb [21:00:10] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:00:23] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is WARNING: WARNING - NGINX Error Rate is 55% [21:00:34] RECOVERY - mw132 JobRunner Service on mw132 is OK: PROCS OK: 1 process with args 'redisJobRunnerService' [21:01:51] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 15.32, 12.56, 9.98 [21:02:23] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 86% [21:02:35] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 7 backends are down. mw121 mw122 mw131 mw132 mw141 mw142 mediawiki [21:02:44] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:02:46] PROBLEM - mw142 HTTPS on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:03:12] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.15, 11.35, 10.65 [21:03:43] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [21:03:44] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [21:04:12] PROBLEM - mw142 Puppet on mw142 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. 
Failed resources (up to 3 shown): Service[nginx] [21:04:49] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 4.125 second response time [21:04:55] RECOVERY - mw141 HTTPS on mw141 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.014 second response time [21:05:06] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 0.73, 0.23, 0.08 [21:05:28] PROBLEM - mw141 Check Gluster Clients on mw141 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:05:39] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 0.36, 0.10, 0.03 [21:05:47] PROBLEM - mw122 Puppet on mw122 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[nginx] [21:06:05] PROBLEM - mw121 Check Gluster Clients on mw121 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:06:31] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 13.16, 8.44, 3.86 [21:06:37] PROBLEM - mw122 Check Gluster Clients on mw122 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:06:41] RECOVERY - mw121 HTTPS on mw121 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.012 second response time [21:06:49] RECOVERY - mw122 HTTPS on mw122 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.032 second response time [21:06:50] RECOVERY - mw142 HTTPS on mw142 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.018 second response time [21:06:53] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 5.176 second response time [21:06:54] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 1.623 second response time [21:07:25] PROBLEM - mw142 Check Gluster Clients on mw142 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:07:31] PROBLEM - mw131 Check Gluster Clients on mw131 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:07:45] RECOVERY - mw122 Puppet on mw122 is OK: OK: Puppet is currently enabled, last run 59 seconds ago with 0 failures [21:08:05] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3342 bytes in 0.589 second response time [21:08:08] RECOVERY - mw142 Puppet on mw142 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:08:23] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 25% [21:08:40] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 3.983 second response time [21:08:43] RECOVERY - mw131 HTTPS on mw131 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.007 second response time [21:08:45] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 1.101 second response time [21:09:24] RECOVERY - mw141 Check Gluster Clients on mw141 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:09:53] RECOVERY - mw121 Check Gluster Clients on mw121 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:10:27] PROBLEM - mw132 Current Load on mw132 is WARNING: WARNING - load average: 9.99, 10.33, 5.74 [21:10:36] RECOVERY - mw122 Check Gluster Clients on mw122 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:10:46] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:11:16] 
RECOVERY - mw142 Check Gluster Clients on mw142 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:11:28] RECOVERY - mw131 Check Gluster Clients on mw131 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:11:39] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.56, 2.71, 1.55 [21:12:25] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 4.81, 8.23, 5.53 [21:12:43] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.538 second response time [21:12:46] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:12:50] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 1595 bytes in 0.014 second response time [21:13:52] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.45, 3.52, 1.97 [21:14:40] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.391 second response time [21:15:28] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.95, 3.12, 1.99 [21:15:50] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.97, 3.08, 2.01 [21:16:40] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.278 second response time [21:16:46] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:17:00] [Grafana] FIRING: Some MediaWiki Appservers are running out of PHP-FPM workers. https://grafana.miraheze.org/d/GtxbP1Xnk [21:18:38] PROBLEM - mw141 HTTPS on mw141 is CRITICAL: connect to address 2a10:6740::6:502 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket [21:19:21] RECOVERY - mw141 HTTPS on mw141 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.012 second response time [21:20:27] PROBLEM - mw141 Puppet on mw141 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. 
Failed resources (up to 3 shown): Service[nginx] [21:21:22] PROBLEM - mw141 Check Gluster Clients on mw141 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:22:24] RECOVERY - mw141 Puppet on mw141 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:22:35] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:23:22] RECOVERY - mw141 Check Gluster Clients on mw141 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:25:13] PROBLEM - cp23 NTP time on cp23 is WARNING: NTP WARNING: Offset 0.1361805797 secs [21:26:09] PROBLEM - test131 Puppet on test131 is WARNING: WARNING: Puppet is currently disabled, message: paladox, last run 6 minutes ago with 0 failures [21:26:40] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 4.810 second response time [21:26:44] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 1.319 second response time [21:27:14] PROBLEM - mw131 Check Gluster Clients on mw131 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:28:13] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 1595 bytes in 0.010 second response time [21:29:13] RECOVERY - mw131 Check Gluster Clients on mw131 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:29:36] PROBLEM - mw121 Check Gluster Clients on mw121 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs' [21:31:36] RECOVERY - mw121 Check Gluster Clients on mw121 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [21:33:53] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.554 second response time [21:34:51] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:35:03] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:35:06] PROBLEM - mw141 HTTPS on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:36:41] PROBLEM - mw121 HTTPS on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:37:00] RECOVERY - mw141 HTTPS on mw141 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.014 second response time [21:38:38] PROBLEM - mw142 HTTPS on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:38:56] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:39:50] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:40:20] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:40:21] PROBLEM - mw132 HTTPS on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:40:59] PROBLEM - mw122 HTTPS on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [21:41:07] PROBLEM - gluster122 APT on gluster122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
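The cp23 "NTP time" warning above (offset ~0.136 s) and its later recovery at ~0.088 s suggest a warning threshold somewhere around 0.1 s. A sketch of such an offset check using the third-party ntplib package follows; the server, the thresholds and the use of ntplib are assumptions, not the actual plugin or its configuration.

```python
import ntplib   # third-party; pip install ntplib

def check_ntp_offset(server="pool.ntp.org", warn=0.1, crit=0.5):
    """Compare the local clock offset against thresholds, like the 'NTP time'
    check on cp23. Thresholds are guesses inferred from the log, not the
    real configuration."""
    offset = ntplib.NTPClient().request(server, version=3).offset
    if abs(offset) >= crit:
        return f"NTP CRITICAL: Offset {offset} secs"
    if abs(offset) >= warn:
        return f"NTP WARNING: Offset {offset} secs"
    return f"NTP OK: Offset {offset} secs"

print(check_ntp_offset())
```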
[21:41:25] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 4 datacenters are down: 108.175.15.182/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb [21:42:43] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 7.66, 4.12, 2.65 [21:43:03] PROBLEM - gluster122 APT on gluster122 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected. [21:43:20] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [21:43:34] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.59, 2.97, 2.37 [21:44:23] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 15.01, 10.60, 6.16 [21:47:00] [Grafana] RESOLVED: PHP-FPM Worker Usage High https://grafana.miraheze.org/d/GtxbP1Xnk [21:48:31] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is WARNING: WARNING - NGINX Error Rate is 47% [21:48:40] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 1595 bytes in 7.635 second response time [21:50:04] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 5.96, 9.81, 7.40 [21:50:31] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 73% [21:51:52] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 63% [21:52:33] RECOVERY - cp23 NTP time on cp23 is OK: NTP OK: Offset 0.08825817704 secs [21:53:28] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 68% [21:55:27] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 15% [21:55:49] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is WARNING: WARNING - NGINX Error Rate is 46% [21:56:41] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29513 bytes in 2.363 second response time [21:57:48] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 85% [21:59:53] !log fsck.ext4 -f /dev/sda1 [22:00:01] !log on gluster122 [22:00:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:00:14] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.221 second response time [22:00:32] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 32% [22:01:45] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 3% [22:02:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:07:36] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 10.54, 8.55, 4.41 [22:11:36] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 17.44, 12.60, 6.97 [22:12:09] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:13:36] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 10.29, 11.03, 7.06 [22:15:36] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 12.87, 11.49, 7.69 [22:16:13] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.475 second response time [22:16:37] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 60% [22:17:44] PROBLEM - cp22 APT on cp22 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[22:18:37] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 11% [22:19:36] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 8.44, 10.91, 8.41 [22:19:43] RECOVERY - cp22 APT on cp22 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [22:22:20] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:25:36] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 12.27, 11.52, 9.49 [22:25:41] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 40% [22:26:57] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:27:36] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 10.08, 11.12, 9.60 [22:29:38] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 18% [22:30:11] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is WARNING: WARNING - NGINX Error Rate is 50% [22:30:30] [Grafana] FIRING: Some MediaWiki Appservers are running out of PHP-FPM workers. https://grafana.miraheze.org/d/GtxbP1Xnk [22:31:36] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 12.60, 11.37, 9.98 [22:32:10] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 10% [22:33:36] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 4.60, 9.03, 9.32 [22:35:32] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 74% [22:35:41] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 81% [22:36:07] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 61% [22:37:30] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 45% [22:38:41] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.194 second response time [22:39:05] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.488 second response time [22:39:17] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is WARNING: WARNING - NGINX Error Rate is 45% [22:39:34] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.20, 3.38, 3.94 [22:40:30] [Grafana] RESOLVED: PHP-FPM Worker Usage High https://grafana.miraheze.org/d/GtxbP1Xnk [22:40:59] .op [22:40:59] Attempting to OP... [22:41:16] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 8% [22:41:42] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 10% [22:42:00] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 42.04, 21.72, 11.83 [22:42:05] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 16% [22:42:29] PROBLEM - mw122 SSH on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:42:36] PROBLEM - mw122 APT on mw122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
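The Grafana alert "Some MediaWiki Appservers are running out of PHP-FPM workers" fires while the mw* appservers are saturated. One common way to measure that is the standard php-fpm status page, whose JSON output exposes "active processes" and "total processes"; the sketch below computes worker utilisation from it. The status URL and the 90% threshold are assumptions; the real alert is a Grafana rule not shown in this log.

```python
import json
import urllib.request

def fpm_worker_usage(status_url="http://127.0.0.1/fpm-status?json"):
    """Fraction of PHP-FPM workers currently busy, in the spirit of the
    'running out of PHP-FPM workers' alert. Only the status-page JSON keys
    are standard; the URL is an assumption."""
    with urllib.request.urlopen(status_url, timeout=5) as resp:
        status = json.load(resp)
    return status["active processes"] / status["total processes"]

usage = fpm_worker_usage()
print(f"{usage:.0%} of PHP-FPM workers busy" + (" -- alert!" if usage > 0.9 else ""))
```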
[22:42:38] PROBLEM - mw122 PowerDNS Recursor on mw122 is CRITICAL: CRITICAL - Plugin timed out while executing system call [22:43:15] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:43:25] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 8% [22:43:38] PROBLEM - mw122 php-fpm on mw122 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [22:43:47] PROBLEM - mw122 Check Gluster Clients on mw122 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [22:44:28] PROBLEM - graylog121 Current Load on graylog121 is WARNING: WARNING - load average: 3.58, 2.84, 1.71 [22:44:29] PROBLEM - mw122 NTP time on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:44:32] PROBLEM - mw122 conntrack_table_size on mw122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:44:52] PROBLEM - mw122 nutcracker process on mw122 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [22:44:56] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 93% [22:45:12] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.516 second response time [22:46:03] RECOVERY - mw122 php-fpm on mw122 is OK: PROCS OK: 21 processes with command name 'php-fpm7.4' [22:46:04] RECOVERY - mw122 Check Gluster Clients on mw122 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs' [22:46:27] PROBLEM - mw122 ferm_active on mw122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:46:28] PROBLEM - mw122 Puppet on mw122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:47:10] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 85% [22:47:56] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 29% [22:48:57] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:49:09] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 5% [22:49:33] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:50:26] PROBLEM - graylog121 Current Load on graylog121 is CRITICAL: CRITICAL - load average: 4.34, 3.68, 2.44 [22:52:07] PROBLEM - mw122 Check Gluster Clients on mw122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:52:36] RECOVERY - mw142 HTTPS on mw142 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.007 second response time [22:52:55] PROBLEM - mw122 php-fpm on mw122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:52:56] PROBLEM - mw132 SSH on mw132 is CRITICAL: connect to address 2a10:6740::6:404 and port 22: Connection refused [22:53:23] PROBLEM - cp23 APT on cp23 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[22:54:12] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 2.614 second response time
[22:54:14] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 2.305 second response time
[22:54:18] RECOVERY - mw132 HTTPS on mw132 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.007 second response time
[22:54:52] RECOVERY - mw132 SSH on mw132 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[22:55:23] RECOVERY - cp23 APT on cp23 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[22:55:25] PROBLEM - mw141 Check Gluster Clients on mw141 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'
[22:55:27] PROBLEM - mw141 JobRunner Service on mw141 is CRITICAL: PROCS CRITICAL: 0 processes with args 'redisJobRunnerService'
[22:55:28] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 4.529 second response time
[22:55:31] PROBLEM - mw142 Check Gluster Clients on mw142 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'
[22:55:34] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.37, 2.42, 3.13
[22:55:53] PROBLEM - mwtask141 Check Gluster Clients on mwtask141 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'
[22:56:05] PROBLEM - mw121 Check Gluster Clients on mw121 is CRITICAL: PROCS CRITICAL: 0 processes with args '/usr/sbin/glusterfs'
[22:56:40] RECOVERY - mw121 HTTPS on mw121 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.026 second response time
[22:56:43] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.99, 3.10, 3.96
[22:57:03] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.197 second response time
[22:57:06] PROBLEM - mw122 Disk Space on mw122 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[22:57:20] RECOVERY - mw122 ferm_active on mw122 is OK: OK ferm input default policy is set
[22:57:23] RECOVERY - mw141 Check Gluster Clients on mw141 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs'
[22:57:25] RECOVERY - mw122 SSH on mw122 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[22:57:25] RECOVERY - mw141 JobRunner Service on mw141 is OK: PROCS OK: 1 process with args 'redisJobRunnerService'
[22:57:26] RECOVERY - mw142 Check Gluster Clients on mw142 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs'
[22:57:27] RECOVERY - mw122 Puppet on mw122 is OK: OK: Puppet is currently enabled, last run 22 minutes ago with 0 failures
[22:57:34] RECOVERY - mw122 APT on mw122 is OK: APT OK: 1 packages available for upgrade (0 critical updates).
[22:58:01] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.351 second response time
[22:58:04] RECOVERY - mw122 php-fpm on mw122 is OK: PROCS OK: 21 processes with command name 'php-fpm7.4'
[22:58:05] RECOVERY - mw122 nutcracker process on mw122 is OK: PROCS OK: 1 process with UID = 116 (nutcracker), command name 'nutcracker'
[22:58:06] RECOVERY - mw122 PowerDNS Recursor on mw122 is OK: DNS OK: 0.174 seconds response time. miraheze.org returns 109.228.51.216,217.174.247.33,2a00:da00:1800:326::1,2a00:da00:1800:328::1
[22:58:07] RECOVERY - mw122 HTTPS on mw122 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.016 second response time
[22:58:26] PROBLEM - graylog121 Current Load on graylog121 is WARNING: WARNING - load average: 2.28, 3.60, 3.02
[22:58:36] RECOVERY - mw122 conntrack_table_size on mw122 is OK: OK: nf_conntrack is 0 % full
[22:59:02] RECOVERY - mw122 Disk Space on mw122 is OK: DISK OK - free space: / 10096 MB (45% inode=81%);
[22:59:04] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 0.98, 0.32, 0.11
[22:59:06] RECOVERY - mw122 NTP time on mw122 is OK: NTP OK: Offset 0.001212477684 secs
[22:59:39] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 3.381 second response time
[22:59:53] RECOVERY - mw121 Check Gluster Clients on mw121 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs'
[23:00:26] RECOVERY - graylog121 Current Load on graylog121 is OK: OK - load average: 0.77, 2.60, 2.72
[23:00:33] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 14 backends are healthy
[23:00:42] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 14 backends are healthy
[23:00:47] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 14 backends are healthy
[23:00:52] RECOVERY - mw122 Check Gluster Clients on mw122 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs'
[23:01:30] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 14 backends are healthy
[23:01:36] !log reboot mw* and mwtask141
[23:01:41] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[23:02:43] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.41, 3.74, 3.94
[23:07:22] !log mount /mnt/mediawiki-static on mwtask141
[23:07:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[23:07:35] RECOVERY - mwtask141 Check Gluster Clients on mwtask141 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs'
[23:10:34] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.45, 4.02, 3.46
[23:12:33] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.51, 3.73, 3.42
[23:14:31] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 8.38, 4.57, 3.71
[23:14:50] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:15:02] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 1 backends are down. mw131
[23:16:20] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 1 backends are down. mw131
[23:16:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.46, 3.85, 3.54
[23:17:10] PROBLEM - mw131 HTTPS on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:17:25] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 1 backends are down. mw131
[23:17:31] PROBLEM - mw131 ferm_active on mw131 is CRITICAL: connect to address 2a10:6740::6:403 port 5666: Connection refusedconnect to host 2a10:6740::6:403 port 5666: Connection refused
[23:17:36] PROBLEM - mw131 Current Load on mw131 is CRITICAL: connect to address 2a10:6740::6:403 port 5666: Connection refusedconnect to host 2a10:6740::6:403 port 5666: Connection refused
[23:17:38] PROBLEM - mw131 conntrack_table_size on mw131 is CRITICAL: connect to address 2a10:6740::6:403 port 5666: Connection refusedconnect to host 2a10:6740::6:403 port 5666: Connection refused
[23:17:59] PROBLEM - mw131 nutcracker process on mw131 is CRITICAL: connect to address 2a10:6740::6:403 port 5666: Connection refusedconnect to host 2a10:6740::6:403 port 5666: Connection refused
[23:18:00] PROBLEM - mw131 Check Gluster Clients on mw131 is CRITICAL: connect to address 2a10:6740::6:403 port 5666: Connection refusedconnect to host 2a10:6740::6:403 port 5666: Connection refused
[23:18:10] PROBLEM - cp23 Varnish Backends on cp23 is CRITICAL: 1 backends are down. mw131
[23:18:14] PROBLEM - mw131 SSH on mw131 is CRITICAL: connect to address 2a10:6740::6:403 and port 22: Connection refused
[23:18:21] PROBLEM - mw131 Puppet on mw131 is CRITICAL: connect to address 2a10:6740::6:403 port 5666: Connection refusedconnect to host 2a10:6740::6:403 port 5666: Connection refused
[23:18:27] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.39, 3.30, 3.38
[23:18:50] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29521 bytes in 2.310 second response time
[23:18:53] PROBLEM - mw131 JobRunner Service on mw131 is CRITICAL: PROCS CRITICAL: 0 processes with args 'redisJobRunnerService'
[23:19:04] RECOVERY - mw131 HTTPS on mw131 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 0.011 second response time
[23:19:17] RECOVERY - mw131 ferm_active on mw131 is OK: OK ferm input default policy is set
[23:19:22] PROBLEM - mw131 NTP time on mw131 is CRITICAL: NTP CRITICAL: Offset -1.393164665 secs
[23:19:36] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 0.34, 0.10, 0.03
[23:19:37] RECOVERY - mw131 conntrack_table_size on mw131 is OK: OK: nf_conntrack is 1 % full
[23:19:59] RECOVERY - mw131 nutcracker process on mw131 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker'
[23:20:06] RECOVERY - mw131 SSH on mw131 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[23:20:10] RECOVERY - mw131 Puppet on mw131 is OK: OK: Puppet is currently enabled, last run 9 minutes ago with 0 failures
[23:20:12] RECOVERY - mw131 JobRunner Service on mw131 is OK: PROCS OK: 1 process with args 'redisJobRunnerService'
[23:21:27] RECOVERY - mw131 NTP time on mw131 is OK: NTP OK: Offset 0.00243011117 secs
[23:22:19] PROBLEM - test131 MediaWiki Rendering on test131 is CRITICAL: connect to address 2a10:6740::6:406 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket
[23:23:16] RECOVERY - test131 MediaWiki Rendering on test131 is OK: HTTP OK: HTTP/1.1 200 OK - 35090 bytes in 1.294 second response time
[23:23:47] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:23:55] RECOVERY - mw131 Check Gluster Clients on mw131 is OK: PROCS OK: 1 process with args '/usr/sbin/glusterfs'
[23:24:21] PROBLEM - mw132 HTTPS on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:28:00] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 1595 bytes in 2.803 second response time
[23:29:00] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:30:46] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.39, 10.10, 7.22
[23:32:46] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.76, 11.02, 7.90
[23:32:51] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:33:04] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.202 second response time
[23:34:27] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:34:46] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.91, 9.83, 7.85
[23:35:15] PROBLEM - cp32 Current Load on cp32 is CRITICAL: CRITICAL - load average: 5.48, 3.75, 2.84
[23:36:06] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.227 second response time
[23:36:28] RECOVERY - mw132 HTTPS on mw132 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 545 bytes in 2.542 second response time
[23:37:00] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.344 second response time
[23:37:15] PROBLEM - cp32 Current Load on cp32 is WARNING: WARNING - load average: 2.72, 3.44, 2.85
[23:38:27] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.255 second response time
[23:39:12] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:39:15] RECOVERY - cp32 Current Load on cp32 is OK: OK - load average: 1.28, 2.64, 2.62
[23:41:09] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.419 second response time
[23:41:39] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.86, 11.19, 9.07
[23:42:40] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.03, 10.66, 9.02
[23:45:03] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.230 second response time
[23:46:35] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.75, 11.80, 9.82
[23:47:04] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 14 backends are healthy
[23:48:33] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.20, 11.60, 10.00
[23:48:39] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 14 backends are healthy
[23:48:55] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 14 backends are healthy
[23:49:02] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 14 backends are healthy
[23:49:12] PROBLEM - agentisai.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'www.agentisai.com' expired on Wed 28 Sep 2022 11:24:06 GMT +0000.
[23:49:17] PROBLEM - www.agentisai.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'www.agentisai.com' expired on Wed 28 Sep 2022 11:24:06 GMT +0000.
[23:49:22] PROBLEM - vault.agentisai.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'vault.agentisai.com' expired on Thu 29 Sep 2022 08:34:24 GMT +0000.
[23:49:39] PROBLEM - cp22 Puppet on cp22 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/ssl/private/www.agentisai.com.key]
[23:50:51] ^ will be fixed next puppet run
[23:51:16] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:52:28] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.43, 11.66, 10.34
[23:55:45] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 9.959 second response time
[23:57:03] PROBLEM - cp33 Current Load on cp33 is WARNING: WARNING - load average: 3.85, 3.85, 2.94
[23:57:52] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:59:01] RECOVERY - cp33 Current Load on cp33 is OK: OK - load average: 1.45, 2.92, 2.70
[23:59:48] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 1595 bytes in 0.007 second response time
[23:59:51] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.218 second response time