[00:00:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 3.43, 2.33, 2.05
[00:02:23] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.33, 9.75, 9.49
[00:04:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.83, 2.23, 1.91
[00:06:07] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.03, 3.48, 2.98
[00:07:00] RECOVERY - cp31 Disk Space on cp31 is OK: DISK OK - free space: / 10905 MB (28% inode=96%);
[00:07:38] RECOVERY - cp30 Disk Space on cp30 is OK: DISK OK - free space: / 10773 MB (27% inode=96%);
[00:08:11] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.55, 11.81, 10.41
[00:08:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.33, 1.95, 1.89
[00:10:07] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.49, 11.65, 10.53
[00:11:44] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.06, 10.05, 9.18
[00:12:00] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.56, 3.69, 3.28
[00:12:05] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.12, 3.44, 3.11
[00:13:42] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 9.94, 9.27, 8.96
[00:13:59] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.25, 11.55, 10.62
[00:14:00] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.42, 3.27, 3.18
[00:14:05] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.96, 3.89, 3.30
[00:15:55] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.14, 11.96, 10.94
[00:16:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.16, 2.21, 1.95
[00:18:04] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.51, 3.91, 3.45
[00:21:42] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.70, 8.77, 9.86
[00:22:02] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.87, 3.36, 3.36
[00:27:24] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 16.96, 10.65, 8.97
[00:27:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.61, 10.92, 10.23
[00:27:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.97, 4.59, 3.63
[00:28:00] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.84, 5.57, 4.22
[00:28:24] PROBLEM - wiki.mastodon.kr - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wiki.mastodon.kr' expires in 15 day(s) (Mon 05 Sep 2022 00:02:52 GMT +0000).
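The "Current Load" alerts above come from a Nagios-style check_load plugin: the three numbers are the 1-, 5- and 15-minute load averages, compared against per-host WARNING and CRITICAL thresholds. A minimal Python sketch of parsing and classifying such an output line; the thresholds are illustrative assumptions, not Miraheze's actual per-host configuration:

import re

# Assumed 1-minute thresholds for a cp* proxy (illustrative only).
WARN, CRIT = 2.0, 3.0

def classify(output: str) -> str:
    # The three numbers are the 1-, 5- and 15-minute load averages.
    m = re.search(r"load average: ([\d.]+), ([\d.]+), ([\d.]+)", output)
    if m is None:
        raise ValueError("no load average in check output")
    one_minute = float(m.group(1))
    # Simplification: the real plugin compares all three averages against
    # separate thresholds; this sketch only looks at the 1-minute value.
    if one_minute >= CRIT:
        return "CRITICAL"
    if one_minute >= WARN:
        return "WARNING"
    return "OK"

print(classify("CRITICAL - load average: 3.43, 2.33, 2.05"))  # -> CRITICAL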
[00:29:22] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.06, 10.47, 9.10
[00:31:19] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.55, 10.04, 9.14
[00:31:27] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/570912d015f2...a9fd9ed1e111
[00:31:29] [miraheze/ssl] MirahezeSSLBot a9fd9ed - Bot: Update SSL cert for wiki.mastodon.kr
[00:31:29] [url] Comparing 570912d015f2...a9fd9ed1e111 · miraheze/ssl · GitHub | github.com
[00:31:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.92, 3.68, 3.48
[00:32:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.12, 1.89, 1.97
[00:34:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.39, 2.45, 2.17
[00:35:13] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.08, 10.31, 9.47
[00:35:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.91, 11.20, 10.81
[00:35:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.31, 3.67, 3.50
[00:37:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.68, 3.75, 3.56
[00:39:08] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.97, 11.19, 9.98
[00:41:06] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.58, 11.59, 10.29
[00:41:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.47, 4.10, 3.73
[00:43:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.45, 3.77, 3.68
[00:45:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.24, 4.13, 3.80
[00:46:59] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.86, 9.71, 9.91
[00:47:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.79, 11.61, 11.06
[00:48:50] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.92, 1.68, 1.48
[00:49:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.76, 3.84, 3.74
[00:50:49] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.08, 1.80, 1.55
[00:51:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.21, 11.29, 11.05
[00:52:48] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.83, 1.80, 1.58
[00:54:47] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 1.49, 1.67, 1.55
[00:56:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.06, 1.58, 1.91
[00:57:51] RECOVERY - wiki.mastodon.kr - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.mastodon.kr' will expire on Thu 17 Nov 2022 23:31:22 GMT +0000.
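The wiki.mastodon.kr entries above show the whole renewal loop: the sslhost check warns when a certificate is within ~15 days of expiry, MirahezeSSLBot commits a fresh Let's Encrypt certificate to the ssl repo, and the check recovers. A minimal sketch of the expiry side of that check using only the Python standard library; the 14-day warning window is an assumption for illustration:

import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> float:
    # Fetch the server's certificate over a normal TLS handshake.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Nov 17 23:31:22 2022 GMT'.
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - datetime.now(timezone.utc).timestamp()) / 86400

days = days_until_expiry("wiki.mastodon.kr")
if days < 14:  # assumed warning window, not necessarily the real one
    print(f"WARNING - Certificate expires in {days:.0f} day(s)")
else:
    print(f"OK - Certificate valid for another {days:.0f} day(s)")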
[00:57:52] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.92, 3.22, 3.99
[00:58:44] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.69, 1.82, 1.65
[00:59:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.50, 3.50, 3.52
[01:02:41] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 1.31, 1.55, 1.58
[01:03:51] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.98, 3.67, 3.89
[01:05:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.51, 9.47, 10.18
[01:05:50] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.69, 3.59, 3.83
[01:07:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.39, 3.47, 3.60
[01:09:49] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.16, 4.37, 4.09
[01:11:48] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.25, 3.86, 3.93
[01:12:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.04, 1.45, 1.66
[01:12:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.80, 10.44, 9.54
[01:13:48] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.12, 4.13, 4.00
[01:14:58] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.24, 9.36, 9.26
[01:17:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.47, 4.40, 3.98
[01:18:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 8.23, 3.72, 2.45
[01:24:58] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.58, 11.74, 10.18
[01:27:18] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 18.87, 13.06, 11.00
[01:28:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.34, 1.71, 1.95
[01:28:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.96, 11.41, 10.39
[01:33:06] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.52, 11.93, 11.18
[01:34:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.06, 1.68, 1.84
[01:35:01] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 10.48, 12.05, 11.35
[01:36:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.70, 1.67, 1.82
[01:36:57] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.85, 11.50, 11.22
[01:40:49] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.99, 12.66, 11.70
[01:42:45] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.50, 11.99, 11.59
[01:42:58] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.29, 9.66, 10.16
[01:46:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 0.76, 1.35, 1.90
[01:48:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.27, 1.80, 1.80
[01:54:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.30, 1.98, 1.91
[01:56:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.71, 1.82, 1.86
[02:00:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.39, 1.88, 1.85
[02:04:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.08, 1.58, 1.74
[02:05:55] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.29, 9.26, 10.20
[02:07:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.14, 3.49, 3.93
[02:08:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.09, 1.79, 1.79
[02:11:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.79, 4.29, 4.09
[02:13:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.25, 3.78, 3.92
[02:19:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.83, 2.63, 3.34
[02:20:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.71, 1.74, 1.99
[02:23:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.70, 2.70, 3.76
[02:27:28] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.06, 2.32, 3.34
[02:28:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.22, 1.83, 1.90
[02:30:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.89, 1.78, 1.86
[02:32:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.39, 1.80, 1.98
[02:32:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.08, 1.83, 1.87
[02:40:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.06, 1.57, 1.75
[02:42:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.96, 1.81, 1.82
[02:46:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.05, 2.59, 2.12
[02:52:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.56, 1.99, 2.00
[02:54:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.78, 2.31, 2.09
[02:59:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.83, 3.29, 2.60
[03:00:34] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 6.30, 4.34, 3.34
[03:06:18] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.94, 3.97, 3.55
[03:06:56] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.38, 11.56, 9.65
[03:08:12] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.87, 3.36, 3.37
[03:08:19] PROBLEM - roblox-wiki.tk - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - roblox-wiki.tk All nameservers failed to answer the query.
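The reverse-DNS check that fails at 03:08 (and recovers at 03:37 below) verifies that a custom domain still points at a Miraheze cache proxy: resolve the hostname forward to an address, resolve the address back, and expect a name under miraheze.org. A stdlib-only sketch; the expected suffix is an assumption, and the real check also validates NS records, which this omits:

import socket

# Assumed invariant: a Miraheze-served domain should reverse-resolve
# to one of the cp* cache proxies under .miraheze.org.
EXPECTED_SUFFIX = ".miraheze.org"

def check_rdns(hostname: str) -> str:
    try:
        addr = socket.gethostbyname(hostname)                    # forward lookup
        ptr_name, _aliases, _addrs = socket.gethostbyaddr(addr)  # reverse lookup
    except socket.herror:
        return f"rDNS WARNING - reverse DNS entry for {hostname} could not be found"
    except socket.gaierror:
        return f"rDNS CRITICAL - {hostname} All nameservers failed to answer the query."
    if ptr_name.endswith(EXPECTED_SUFFIX):
        return f"SSL OK - {hostname} reverse DNS resolves to {ptr_name}"
    return f"rDNS WARNING - {hostname} reverse-resolves to unexpected host {ptr_name}"

print(check_rdns("roblox-wiki.tk"))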
[03:08:48] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.03, 10.85, 8.91
[03:08:52] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.72, 12.05, 10.04
[03:09:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.37, 3.82, 3.50
[03:10:47] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.42, 11.57, 10.08
[03:12:02] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.34, 4.17, 3.71
[03:12:43] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.05, 11.51, 9.62
[03:13:57] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.01, 3.56, 3.52
[03:14:39] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.41, 11.96, 10.58
[03:15:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.91, 4.76, 3.99
[03:15:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.55, 4.43, 3.88
[03:16:35] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.15, 11.22, 10.47
[03:19:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.27, 3.53, 3.66
[03:20:34] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.66, 9.51, 9.52
[03:21:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.07, 3.31, 3.64
[03:22:23] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.28, 11.28, 10.68
[03:23:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.37, 3.60, 3.69
[03:25:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.84, 3.29, 3.57
[03:26:14] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.87, 11.24, 10.89
[03:27:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.91, 2.69, 3.30
[03:28:10] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 19.02, 14.13, 11.98
[03:30:20] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.79, 11.27, 10.08
[03:31:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.50, 3.84, 3.71
[03:32:18] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.52, 11.19, 10.20
[03:33:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.24, 3.29, 3.53
[03:35:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.73, 2.77, 3.31
[03:36:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 0.95, 1.42, 1.94
[03:37:29] RECOVERY - roblox-wiki.tk - reverse DNS on sslhost is OK: SSL OK - roblox-wiki.tk reverse DNS resolves to cp31.miraheze.org - NS RECORDS OK
[03:38:11] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.41, 10.91, 10.28
[03:40:09] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.95, 10.23, 10.09
[03:42:07] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.09, 11.36, 10.55
[03:44:04] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.40, 10.68, 10.41
[03:48:12] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.19, 4.44, 3.47
[03:49:58] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.78, 10.64, 10.42
[03:50:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.68, 1.89, 1.81
[03:51:55] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.66, 11.01, 10.58
[03:52:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.58, 3.53, 3.32
[03:54:10] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.60, 2.96, 3.14
[03:57:49] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.05, 9.58, 10.13
[03:58:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.67, 1.96, 1.92
[04:01:42] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.44, 10.57, 10.46
[04:06:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.09, 1.70, 1.78
[04:07:35] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 9.42, 9.71, 10.11
[04:07:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.91, 10.50, 11.90
[04:10:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.84, 1.47, 1.69
[04:10:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 0.89, 1.50, 1.98
[04:17:30] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.13, 2.01, 1.87
[04:18:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 0.90, 1.22, 1.66
[04:21:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.15, 8.69, 10.19
[04:22:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 4.50, 2.86, 2.23
[04:27:08] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.33, 1.84, 1.96
[04:38:41] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.72, 2.19, 1.98
[04:40:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.47, 1.82, 1.87
[04:41:18] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.85, 9.91, 9.60
[04:42:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.02, 9.88, 9.17
[04:45:10] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.40, 9.81, 9.72
[04:46:58] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.95, 9.99, 9.40
[04:48:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.09, 2.16, 1.95
[04:52:55] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.40, 11.07, 10.24
[04:58:42] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.25, 11.22, 10.65
[05:00:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.36, 1.74, 1.99
[05:01:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[05:06:26] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.19, 10.91, 10.59
[05:08:22] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.05, 10.16, 10.35
[05:10:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.35, 2.04, 1.94
[05:12:13] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.25, 9.49, 10.07
[05:12:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.16, 1.73, 1.84
[05:16:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.02, 1.80, 1.83
[05:17:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
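The flapping "ns2 GDNSD Datacenters" alerts track whether the gdnsd authoritative nameserver still considers each cpweb backend address healthy; here the two IPv6 cp* addresses drop in and out of the pool. A simplified stand-in sketch that probes each monitored address directly and emits output in the same shape; the address list, port, and probe method are assumptions, since the real check reads gdnsd's own health state rather than raw reachability:

import socket

# The two cpweb addresses named in the alerts above.
CPWEB = ["2001:41d0:801:2000::4c25", "2001:41d0:801:2000::1b80"]

def gdnsd_datacenters() -> str:
    down = []
    for addr in CPWEB:
        try:
            # Simplification: a TCP connect to port 443 stands in for
            # gdnsd's internal health monitoring of the proxy.
            socket.create_connection((addr, 443), timeout=5).close()
        except OSError:
            down.append(f"{addr}/cpweb")
    if not down:
        return "OK - all datacenters are online"
    plural = "datacenters are" if len(down) > 1 else "datacenter is"
    return f"CRITICAL - {len(down)} {plural} down: {', '.join(down)}"

print(gdnsd_datacenters())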
[05:18:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.68, 1.83, 1.84
[05:19:45] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.34, 3.58, 2.66
[05:19:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.02, 3.59, 2.85
[05:19:59] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.45, 11.36, 10.53
[05:21:44] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 3.17, 3.29, 2.66
[05:21:54] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.42, 11.49, 10.66
[05:21:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.39, 3.66, 2.95
[05:23:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.97, 3.05, 2.82
[05:25:46] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.27, 11.20, 10.72
[05:26:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.99, 1.45, 1.68
[05:27:42] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 19.77, 13.26, 11.47
[05:30:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.18, 2.16, 1.95
[05:31:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.52, 11.33, 11.12
[05:32:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 0.89, 1.67, 1.79
[05:33:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.92, 11.74, 11.28
[05:34:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 4.11, 2.44, 2.04
[05:35:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.36, 11.50, 11.25
[05:37:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.80, 11.45, 11.22
[05:39:21] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.81, 3.35, 2.98
[05:39:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.21, 10.81, 11.00
[05:41:15] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.52, 3.06, 2.93
[05:43:35] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[05:43:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.73, 8.89, 10.13
[05:54:32] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.31, 11.35, 10.51
[05:55:29] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.23, 10.98, 10.17
[05:57:27] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.44, 10.60, 10.14
[05:58:23] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.03, 9.47, 9.96
[06:03:20] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.40, 9.30, 9.78
[06:04:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.89, 1.81, 1.97
[06:08:35] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.17, 1.72, 1.34
[06:11:21] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[06:12:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.18, 1.81, 1.84
[06:14:35] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.92, 1.98, 1.59
[06:15:28] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.52, 3.91, 2.91
[06:16:35] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.24, 2.08, 1.68
[06:17:28] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.54, 3.35, 2.82
[06:18:35] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.65, 1.93, 1.67
[06:24:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.22, 1.74, 1.92
[06:26:35] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.40, 2.01, 1.79
[06:26:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.25, 1.94, 1.97
[06:28:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.85, 1.92, 1.96
[06:30:35] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.80, 1.97, 1.84
[06:30:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.06, 1.81, 1.91
[06:32:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.18, 1.53, 1.79
[06:33:07] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[06:34:35] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.02, 1.92, 1.84
[06:36:35] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.53, 1.77, 1.80
[06:36:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.09, 1.40, 1.98
[06:38:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.23, 1.38, 1.65
[06:44:35] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 1.46, 1.56, 1.68
[06:45:38] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 19.55, 19.45, 12.50
[06:46:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.21, 1.27, 1.67
[06:51:31] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 4.95, 9.64, 10.24
[06:51:32] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.99, 1.74, 1.72
[06:53:29] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 5.97, 8.53, 9.77
[06:53:31] PROBLEM - test131 Current Load on test131 is CRITICAL: CRITICAL - load average: 2.09, 1.91, 1.79
[06:55:18] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.60, 1.90, 1.70
[06:58:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.57, 1.72, 1.70
[07:00:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.13, 1.80, 1.73
[07:01:05] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.06, 1.97, 1.79
[07:01:26] PROBLEM - test131 Current Load on test131 is WARNING: WARNING - load average: 1.21, 1.95, 1.92
[07:02:52] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[07:03:00] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.11, 1.67, 1.70
[07:05:24] RECOVERY - test131 Current Load on test131 is OK: OK - load average: 0.62, 1.21, 1.62
[07:06:49] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[07:10:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[07:12:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.80, 1.95, 1.95
[07:21:20] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.31, 1.68, 1.46
[07:22:39] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[07:22:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.16, 1.71, 1.76
[07:23:16] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.91, 1.84, 1.55
[07:24:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.61, 1.63, 1.72
[07:25:11] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.93, 1.52, 1.47
[07:26:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.17, 1.52, 1.67
[07:29:11] PROBLEM - fanon.polandballwiki.com - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['dns2.registrar-servers.com.', 'dns101.registrar-servers.com.', 'dns1.registrar-servers.com.', 'dns102.registrar-servers.com.'], 'CNAME': None}
[07:35:45] PROBLEM - polandballwiki.com - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['dns1.registrar-servers.com.', 'dns101.registrar-servers.com.', 'dns2.registrar-servers.com.', 'dns102.registrar-servers.com.'], 'CNAME': None}
[07:37:44] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.46, 1.85, 1.58
[07:38:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.45, 1.88, 1.74
[07:39:40] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.92, 1.83, 1.61
[07:40:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.41, 1.68, 1.68
[07:41:35] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.51, 1.66, 1.56
[07:51:15] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.85, 1.97, 1.64
[07:52:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.99, 1.84, 1.73
[07:53:37] PROBLEM - www.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.polandballwiki.com could not be found
[07:54:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.30, 1.58, 1.64
[07:58:20] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[07:58:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.39, 1.95, 1.78
[08:00:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.92, 1.92, 1.79
[08:01:10] PROBLEM - www.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[08:02:17] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[08:02:48] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.48, 1.99, 1.89
[08:06:15] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[08:06:39] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.03, 2.16, 1.97
[08:06:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.53, 2.04, 1.85
[08:08:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.17, 1.78, 1.86
[08:08:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 0.83, 1.57, 1.70
[08:10:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.34, 2.51, 2.12
[08:14:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.70, 1.78, 1.75
[08:16:08] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
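The polandballwiki.com warnings above are a different failure mode from a missing PTR record: reverse DNS is fine, but the check also compares the domain's NS/CNAME records against what it expects for a Miraheze-hosted domain, and the registrar's default nameservers conflict. A sketch of that comparison using dnspython, which is an assumption; the source does not say what backs the real check:

import dns.resolver  # third-party: dnspython

def records_conflict(name: str) -> str:
    # Collect the domain's NS and CNAME records, mirroring the dict
    # printed in the warnings above.
    found = {"NS": None, "CNAME": None}
    for rtype in found:
        try:
            answers = dns.resolver.resolve(name, rtype)
            found[rtype] = [r.to_text() for r in answers]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            pass  # record type absent: leave as None
    # Assumed rule: a Miraheze custom domain is expected to CNAME to the
    # cache proxies, so registrar NS records with no CNAME get flagged.
    if found["NS"] and not found["CNAME"]:
        return f"SSL WARNING - rDNS OK but records conflict. {found}"
    return "SSL OK - records consistent"

print(records_conflict("fanon.polandballwiki.com"))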
[08:18:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.36, 1.63, 1.70
[08:20:45] PROBLEM - pt.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address pt.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[08:22:42] RECOVERY - www.polandballwiki.com - reverse DNS on sslhost is OK: SSL OK - www.polandballwiki.com reverse DNS resolves to cp30.miraheze.org - CNAME FLAT
[08:22:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.68, 2.05, 1.84
[08:24:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.65, 1.93, 2.00
[08:24:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.18, 1.72, 1.75
[08:25:18] PROBLEM - commons.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for commons.polandballwiki.com could not be found
[08:26:53] PROBLEM - small.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address small.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[08:34:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.19, 1.34, 1.69
[08:34:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.02, 1.54, 1.68
[08:40:23] PROBLEM - pt.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for pt.polandballwiki.com could not be found
[08:42:30] PROBLEM - staff.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address staff.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[08:44:41] PROBLEM - staff.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for staff.polandballwiki.com could not be found
[08:46:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.84, 1.69, 1.65
[08:47:52] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[08:48:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.23, 1.58, 1.61
[08:50:38] PROBLEM - polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[08:51:10] PROBLEM - fanon.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address fanon.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[08:52:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 3.34, 2.51, 1.98
[08:54:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.26, 1.72, 1.58
[08:55:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.54, 10.13, 8.20
[08:55:44] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.51, 3.08, 2.43
[08:56:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.32, 1.55, 1.53
[08:57:10] RECOVERY - fanon.polandballwiki.com - reverse DNS on sslhost is OK: SSL OK - fanon.polandballwiki.com reverse DNS resolves to cp30.miraheze.org - CNAME FLAT
[08:57:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.67, 9.19, 8.09
[08:57:43] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.78, 2.46, 2.28
[09:02:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.45, 1.88, 1.98
[09:04:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.49, 4.18, 2.79
[09:07:40] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[09:09:59] PROBLEM - small.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for small.polandballwiki.com could not be found
[09:10:44] PROBLEM - pwsc.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for pwsc.polandballwiki.com could not be found
[09:10:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.12, 1.27, 1.65
[09:18:15] RECOVERY - pt.polandballwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'pt.polandballwiki.com' will expire on Thu 29 Sep 2022 08:59:53 GMT +0000.
[09:20:34] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.99, 1.74, 1.69
[09:23:05] PROBLEM - commons.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address commons.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[09:24:46] PROBLEM - www.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.polandballwiki.com could not be found
[09:26:17] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.12, 1.58, 1.69
[09:28:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 0.70, 1.41, 1.86
[09:29:10] PROBLEM - fanon.polandballwiki.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for fanon.polandballwiki.com could not be found
[09:34:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.56, 1.11, 1.61
[09:39:23] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[09:39:36] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.16, 1.90, 1.74
[09:40:22] PROBLEM - pwsc.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address pwsc.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[09:41:31] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.35, 1.71, 1.69
[09:43:25] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.30, 1.60, 1.65
[09:47:13] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.41, 1.74, 1.71
[09:48:17] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[09:48:53] PROBLEM - pt.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address pt.polandballwiki.com and port 443: Network is unreachable. HTTP CRITICAL - Unable to open TCP socket
[09:49:08] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.54, 1.67, 1.69
[09:52:15] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[09:54:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.38, 1.75, 1.73
[09:56:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.00, 1.52, 1.65
[10:00:09] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[10:06:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.55, 1.72, 1.65
[10:08:54] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.46, 1.65, 1.63
[10:20:47] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 7.05, 3.10, 1.88
[10:22:29] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.14, 2.16, 1.88
[10:24:23] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.62, 1.94, 1.83
[10:26:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.16, 1.85, 1.72
[10:28:11] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.45, 1.57, 1.70
[10:28:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.96, 1.53, 1.61
[10:32:00] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 4.55, 2.59, 2.05
[10:37:50] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[10:43:25] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.92, 1.98, 1.99
[10:49:08] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 5.17, 2.61, 1.87
[10:51:02] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 4.86, 2.95, 2.28
[10:51:41] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[10:56:50] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.04, 1.86, 1.86
[11:00:41] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.26, 1.48, 1.70
[11:00:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.34, 1.68, 1.96
[11:02:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 1.93, 1.94, 2.03
[11:10:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.78, 1.84, 1.98
[11:16:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.07, 1.80, 1.90
[11:27:23] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[11:30:11] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 11.07, 5.41, 2.87
[11:32:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.59, 1.87, 1.97
[11:34:18] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[11:36:17] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[11:36:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.40, 1.85, 1.91
[11:38:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.59, 1.81, 1.90
[11:40:14] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[11:41:12] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.88, 4.84, 3.21
[11:42:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.04, 2.00, 1.96
[11:44:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.76, 1.98, 1.96
[11:48:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.81, 2.00, 1.95
[11:51:09] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.80, 3.23, 3.27
[11:54:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.52, 1.98, 2.00
[11:55:13] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 0.84, 1.51, 1.93
[11:56:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 7.81, 3.44, 2.48
[11:57:07] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.48, 3.63, 3.44
[11:59:06] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.10, 3.01, 3.23
[12:00:59] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.52, 1.89, 1.92
[12:01:44] RECOVERY - wiki.minebox.fr - reverse DNS on sslhost is OK: SSL OK - wiki.minebox.fr reverse DNS resolves to cp31.miraheze.org - CNAME OK
[12:02:55] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.14, 1.59, 1.81
[12:04:50] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.12, 1.67, 1.80
[12:06:46] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.75, 1.73, 1.81
[12:12:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.75, 1.73, 1.98
[12:12:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.86, 10.78, 8.65
[12:13:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.13, 3.84, 3.04
[12:14:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.97, 1.45, 1.68
[12:14:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.52, 2.06, 2.08
[12:14:58] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.69, 9.46, 8.44
[12:15:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.30, 10.41, 8.52
[12:15:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.27, 3.29, 2.94
[12:17:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.98, 10.17, 8.64
[12:17:54] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[12:21:30] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.10, 2.63, 2.11
[12:21:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.52, 9.97, 8.97
[12:26:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.38, 9.83, 8.95
[12:27:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.55, 1.95, 1.97
[12:27:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.06, 11.84, 10.11
[12:27:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[12:28:57] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.39, 4.35, 3.44
[12:28:58] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 16.77, 12.23, 9.91
[12:29:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.04, 3.55, 3.14
[12:30:56] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.11, 3.79, 3.34
[12:31:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.45, 3.11, 3.03
[12:32:56] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 3.74, 4.05, 3.51
[12:32:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.82, 11.57, 10.19
[12:34:55] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.91, 3.63, 3.41
[12:34:58] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 15.17, 12.92, 10.85
[12:34:59] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.04, 2.04, 1.93
[12:35:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[12:37:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.85, 11.86, 11.08
[12:38:54] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.14, 3.18, 3.31
[12:39:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 11.46, 12.08, 11.27
[12:39:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[12:41:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.99, 11.55, 11.16
[12:42:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.45, 11.26, 11.04
[12:44:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.67, 1.89, 1.96
[12:47:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.19, 11.13, 10.97
[12:47:51] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 14.83, 7.60, 4.82
[12:48:28] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 6.87, 5.27, 3.89
[12:48:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.02, 1.64, 1.82
[12:48:58] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.11, 11.79, 11.24
[12:49:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.26, 10.03, 10.59
[12:50:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.51, 10.72, 10.90
[12:52:58] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 16.25, 12.83, 11.63
[12:58:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.99, 11.71, 11.58
[13:03:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.15, 11.02, 10.85
[13:05:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.05, 10.65, 10.74
[13:09:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.00, 8.42, 9.83
[13:09:45] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.94, 3.37, 3.88
[13:09:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[13:09:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.93, 3.49, 3.92
[13:13:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[13:15:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.05, 2.58, 3.35
[13:16:58] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.44, 8.17, 9.72
[13:17:43] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.15, 2.73, 3.37
[13:19:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[13:19:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 6.39, 3.74, 3.59
[13:23:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[13:23:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.09, 3.90, 3.71
[13:29:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.11, 2.95, 3.34
[13:31:39] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.53, 3.58, 3.35
[13:35:37] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.81, 2.80, 3.10
[13:36:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.19, 1.65, 1.95
[13:41:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.21, 3.36, 3.17
[13:43:35] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.38, 4.20, 3.58
[13:44:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.10, 1.68, 1.81
[13:45:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.70, 3.52, 3.27
[13:46:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.35, 1.55, 1.75
[13:47:33] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.10, 3.32, 3.41
[13:47:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.48, 3.09, 3.14
[13:48:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.09, 1.91, 1.84
[13:49:33] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.97, 3.30, 3.39
[13:52:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.17, 1.63, 1.75
[13:54:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.08, 1.71, 1.76
[14:00:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.12, 1.57, 1.71
[14:04:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.34, 1.44, 1.62
[14:07:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[14:07:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.64, 3.67, 3.34
[14:08:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.89, 1.93, 1.79
[14:09:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.46, 3.40, 3.29
[14:10:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 4.44, 2.57, 2.03
[14:16:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.53, 1.95, 1.91
[14:18:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.93, 2.88, 2.27
[14:22:40] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[14:26:20] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.65, 3.55, 3.26
[14:26:34] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.82, 4.04, 3.43
[14:27:38] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.51, 11.08, 8.95
[14:28:20] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.28, 3.78, 3.38
[14:32:31] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.66, 10.90, 9.13
[14:32:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.55, 1.67, 1.98
[14:33:43] PROBLEM - wiki.minebox.fr - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for wiki.minebox.fr could not be found
[14:34:28] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.85, 10.63, 9.24
[14:35:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.31, 11.25, 10.15
[14:36:07] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.55, 4.00, 3.88
[14:36:26] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 9.67, 10.20, 9.24
[14:38:02] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 8.16, 5.41, 4.39
[14:40:19] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.64, 12.00, 10.24
[14:41:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.01, 9.41, 9.69
[14:42:17] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.48, 11.39, 10.22
[14:42:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.17, 2.03, 1.93
[14:44:15] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.63, 10.06, 9.88
[14:44:27] PROBLEM - www.dovearchives.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'www.dovearchives.wiki' expires in 15 day(s) (Mon 05 Sep 2022 14:22:04 GMT +0000).
[14:44:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.16, 1.72, 1.83
[14:45:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.26, 3.39, 3.88
[14:46:03] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/a9fd9ed1e111...0d7840c56dbe
[14:46:03] [url] Comparing a9fd9ed1e111...0d7840c56dbe · miraheze/ssl · GitHub | github.com
[14:46:04] [miraheze/ssl] MirahezeSSLBot 0d7840c - Bot: Update SSL cert for www.dovearchives.wiki
[14:47:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.22, 3.80, 3.97
[14:49:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.89, 3.44, 3.82
[14:54:12] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.45, 3.49, 3.90
[14:55:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.72, 2.72, 3.36
[14:56:12] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.01, 3.59, 3.88
[14:56:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.51, 2.23, 2.00
[14:58:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.19, 3.00, 3.62
[15:00:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.76, 2.47, 3.34
[15:00:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.05, 1.93, 1.95
[15:04:19] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[15:08:08] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.73, 3.23, 3.28
[15:08:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.31, 1.36, 1.64
[15:10:07] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.51, 2.87, 3.13
[15:13:56] RECOVERY - www.dovearchives.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'www.dovearchives.wiki' will expire on Fri 18 Nov 2022 13:45:57 GMT +0000.
[15:14:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.03, 1.96, 1.77
[15:16:10] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[15:32:01] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.63, 3.09, 2.90
[15:34:00] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.40, 2.78, 2.81
[15:39:58] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.19, 4.50, 3.41
[15:41:57] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.56, 3.91, 3.32
[15:44:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.29, 1.52, 1.93
[15:45:56] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.35, 3.16, 3.15
[15:48:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.24, 1.67, 1.88
[15:49:52] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[15:52:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.68, 1.84, 1.91
[15:54:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 4.21, 3.08, 2.39
[15:55:48] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[16:01:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[16:05:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[16:06:32] [miraheze/dns] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-0/±1] https://github.com/miraheze/dns/commit/68ebf67de483
[16:06:33] [miraheze/dns] Universal-Omega 68ebf67 - Remove old acme challenges for m.miraheze.org
[16:06:35] [dns] Universal-Omega created branch Universal-Omega-patch-1 - https://github.com/miraheze/dns
[16:06:35] [url] Page not found · GitHub · GitHub | github.com
[16:06:36] [dns] Universal-Omega opened pull request #333: Remove old acme challenges for m.miraheze.org - https://github.com/miraheze/dns/pull/333
[16:06:37] [url] Page not found · GitHub · GitHub | github.com
[16:07:31] [dns] Universal-Omega closed pull request #333: Remove old acme challenges for m.miraheze.org - https://github.com/miraheze/dns/pull/333
[16:07:31] [url] Page not found · GitHub · GitHub | github.com
[16:07:33] [miraheze/dns] Universal-Omega pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/dns/compare/4c8a9a98e458...6e88f458cfa5
[16:07:33] [url] Comparing 4c8a9a98e458...6e88f458cfa5 · miraheze/dns · GitHub | github.com
[16:07:34] [miraheze/dns] Universal-Omega 6e88f45 - Remove old acme challenges for m.miraheze.org (#333)
[16:07:36] [miraheze/dns] Universal-Omega deleted branch Universal-Omega-patch-1
[16:07:37] [dns] Universal-Omega deleted branch Universal-Omega-patch-1 - https://github.com/miraheze/dns
[16:07:37] [url] Page not found · GitHub · GitHub | github.com
[16:08:19] [miraheze/dns] Universal-Omega deleted branch paladox-patch-2
[16:08:21] [dns] Universal-Omega deleted branch paladox-patch-2 - https://github.com/miraheze/dns
[16:08:21] [url] Page not found · GitHub · GitHub | github.com
[16:08:27] [miraheze/dns] Universal-Omega deleted branch paladox-patch-1
[16:08:28] [dns] Universal-Omega deleted branch paladox-patch-1 - https://github.com/miraheze/dns
[16:08:29] ... [16:09:33] [miraheze/dns] Universal-Omega deleted branch gdnsd3 [16:09:34] [dns] Universal-Omega deleted branch gdnsd3 - https://github.com/miraheze/dns [16:09:35] ... [16:11:15] [miraheze/mw-config] Universal-Omega deleted branch RhinosF1-patch-2 [16:11:17] [mw-config] Universal-Omega deleted branch RhinosF1-patch-2 - https://github.com/miraheze/mw-config [16:11:17] ... [16:11:33] [mw-config] Universal-Omega deleted branch paladox-patch-1 - https://github.com/miraheze/mw-config [16:11:34] [url] Page not found · GitHub · GitHub | github.com [16:11:35] [miraheze/mw-config] Universal-Omega deleted branch paladox-patch-1 [16:12:46] [miraheze/MirahezeMagic] Universal-Omega deleted branch Universal-Omega-patch-2 [16:12:48] [MirahezeMagic] Universal-Omega deleted branch Universal-Omega-patch-2 - https://github.com/miraheze/MirahezeMagic [16:12:48] [url] Page not found · GitHub · GitHub | github.com [16:12:49] [miraheze/MirahezeMagic] Universal-Omega deleted branch revert-342-Universal-Omega-patch-1 [16:12:51] [MirahezeMagic] Universal-Omega deleted branch revert-342-Universal-Omega-patch-1 - https://github.com/miraheze/MirahezeMagic [16:12:51] [url] Page not found · GitHub · GitHub | github.com [16:12:53] [miraheze/MirahezeMagic] Universal-Omega deleted branch RhinosF1-patch-2 [16:12:54] [MirahezeMagic] Universal-Omega deleted branch RhinosF1-patch-2 - https://github.com/miraheze/MirahezeMagic [16:15:47] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.39, 4.81, 3.52 [16:17:00] [miraheze/ErrorPages] Universal-Omega deleted branch Universal-Omega-patch-1 [16:17:01] [ErrorPages] Universal-Omega deleted branch Universal-Omega-patch-1 - https://github.com/miraheze/ErrorPages [16:17:02] [url] Page not found · GitHub · GitHub | github.com [16:17:29] [miraheze/MirahezeDebug] Universal-Omega deleted branch paladox-patch-1 [16:17:30] [MirahezeDebug] Universal-Omega deleted branch paladox-patch-1 - https://github.com/miraheze/MirahezeDebug [16:17:47] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.35, 3.99, 3.39 [16:19:46] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.17, 3.29, 3.19 [16:19:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [16:23:45] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.74, 6.05, 4.33 [16:23:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 6.52, 5.84, 3.72 [16:26:44] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [16:29:15] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.36, 10.70, 9.33 [16:31:12] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 14.35, 12.10, 10.01 [16:32:32] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.53, 10.71, 9.44 [16:33:10] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.92, 11.75, 10.13 [16:36:23] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.78, 9.39, 9.26 [16:38:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.13, 1.74, 1.98 [16:39:03] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.55, 9.91, 9.85 [16:40:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL:
CRITICAL - load average: 2.84, 2.20, 2.12 [16:45:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.18, 3.32, 3.91 [16:46:33] miraheze/MirahezeMagic - translatewiki the build passed. [16:47:38] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.21, 3.18, 3.97 [16:48:09] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/0d7840c56dbe...e91623785f9d [16:48:09] [url] Comparing 0d7840c56dbe...e91623785f9d · miraheze/ssl · GitHub | github.com [16:48:10] [miraheze/ssl] MirahezeSSLBot e916237 - Bot: Update SSL cert for wiki.mostly.vet [16:52:52] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/e91623785f9d...44bac71cd7d8 [16:52:52] [url] Comparing e91623785f9d...44bac71cd7d8 · miraheze/ssl · GitHub | github.com [16:52:53] [miraheze/ssl] MirahezeSSLBot 44bac71 - Bot: Update SSL cert for smashloop.wiki [16:53:12] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/44bac71cd7d8...989c6817d4d3 [16:53:12] [url] Comparing 44bac71cd7d8...989c6817d4d3 · miraheze/ssl · GitHub | github.com [16:53:13] [miraheze/ssl] MirahezeSSLBot 989c681 - Bot: Update SSL cert for podpedia.org [16:53:36] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.72, 2.42, 3.37 [16:55:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.91, 2.63, 3.33 [16:58:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.05, 1.56, 1.99 [17:00:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 3.48, 2.18, 2.15 [17:04:25] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [17:04:29] [miraheze/puppet] Universal-Omega pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/59358b743e90...bad521690f65 [17:04:30] [url] Comparing 59358b743e90...bad521690f65 · miraheze/puppet · GitHub | github.com [17:04:31] [miraheze/puppet] Universal-Omega bad5216 - jobrunner-hi: don't set server twice in redis.queues [17:04:44] RECOVERY - wiki.mostly.vet - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.mostly.vet' will expire on Fri 18 Nov 2022 15:48:03 GMT +0000.
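The load alerts that dominate this log all report the kernel's 1-, 5- and 15-minute load averages, with thresholds that clearly differ per role (cp hosts go CRITICAL near 2, mw hosts near 12). That is the behaviour of the stock monitoring-plugins check_load run over NRPE, which is presumably what sits underneath; a minimal sketch, with illustrative thresholds only:

    # check_load compares each of the three load averages against a
    # warning triple and a critical triple; these numbers are made up.
    /usr/lib/nagios/plugins/check_load -w 10,10,10 -c 12,12,12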
[17:04:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.23, 1.76, 1.99 [17:06:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.14, 1.89, 2.01 [17:08:22] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [17:08:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.58, 1.78, 1.95 [17:09:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.77, 3.04, 3.02 [17:10:21] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [17:10:43] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=androidwiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/androidwiki-20220818211225.xml (START) [17:10:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:10:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.48, 1.94, 1.98 [17:11:16] [miraheze/phabricator-extensions] Universal-Omega deleted branch paladox-patch-1 [17:11:17] [phabricator-extensions] Universal-Omega deleted branch paladox-patch-1 - https://github.com/miraheze/phabricator-extensions [17:11:18] [url] Page not found · GitHub · GitHub | github.com [17:11:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.80, 2.99, 3.01 [17:14:18] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb [17:15:34] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=androidwiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/androidwiki-20220818211225.xml (END - exit=0) [17:15:35] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=androidwiki (START) [17:15:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:15:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:16:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.27, 1.69, 1.98 [17:20:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.75, 2.15, 2.07 [17:22:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.65, 1.88, 1.98 [17:24:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.83, 2.17, 2.07 [17:24:45] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.50, 3.82, 3.27 [17:25:43] RECOVERY - smashloop.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'smashloop.wiki' will expire on Fri 18 Nov 2022 15:52:46 GMT +0000. [17:26:26] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.39, 3.85, 3.31 [17:26:39] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.43, 3.62, 3.26 [17:29:46] PROBLEM - roblox-wiki.tk - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - roblox-wiki.tk All nameservers failed to answer the query.
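The !log entries above record the standard dump-import sequence on mwtask141: importDump.php runs with --no-updates (skipping per-revision link-table maintenance) and a --username-prefix for imported usernames, then rebuildall.php and initSiteStats.php repair the derived tables and counters the import skipped. Condensed from the log, with paths and wiki name exactly as logged:

    # Import the uploaded XML dump without secondary updates...
    sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php \
        --wiki=androidwiki --no-updates --username-prefix=w \
        /mnt/mediawiki-static/metawiki/ImportDump/androidwiki-20220818211225.xml
    # ...then rebuild link tables and recount site statistics.
    sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=androidwiki
    sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php \
        --wiki=androidwiki --active --update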
[17:30:25] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.57, 4.55, 3.72 [17:30:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.69, 1.87, 1.98 [17:34:18] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.27, 3.13, 3.22 [17:34:24] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.71, 3.60, 3.52 [17:36:23] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.75, 3.19, 3.37 [17:37:35] RECOVERY - podpedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'podpedia.org' will expire on Fri 18 Nov 2022 15:53:06 GMT +0000. [17:38:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.12, 1.30, 1.66 [17:42:36] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=androidwiki (END - exit=0) [17:42:37] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=androidwiki --active --update (END - exit=0) [17:42:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.60, 4.18, 2.84 [17:42:44] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:42:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:43:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.62, 3.26, 3.13 [17:45:20] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.45, 3.68, 3.50 [17:45:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 3.23, 3.20, 3.12 [17:47:19] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.26, 3.92, 3.60 [17:49:19] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.36, 3.37, 3.45 [17:51:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.53, 3.24, 3.40 [17:51:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.95, 4.01, 3.44 [17:53:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.98, 3.50, 3.32 [17:53:58] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [17:55:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.34, 3.01, 3.16 [17:58:34] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=nightfallmcwiki --no-updates --username-prefix=wikia:nightfallmc /mnt/mediawiki-static/metawiki/ImportDump/nightfallmcwiki-20220819020604.xml (START) [17:58:41] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:58:56] RECOVERY - roblox-wiki.tk - reverse DNS on sslhost is OK: SSL OK - roblox-wiki.tk reverse DNS resolves to cp30.miraheze.org - NS RECORDS OK [17:59:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.44, 3.50, 3.34 [18:00:54] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=nightfallmcwiki --no-updates --username-prefix=wikia:nightfallmc /mnt/mediawiki-static/metawiki/ImportDump/nightfallmcwiki-20220819020604.xml (END - exit=0) [18:00:55] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=nightfallmcwiki (START) [18:01:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:01:11] Logged the message at 
https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:01:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.08, 3.66, 3.28 [18:01:55] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=nightfallmcwiki (END - exit=0) [18:01:56] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=nightfallmcwiki --active --update (END - exit=0) [18:02:02] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:02:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:03:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.69, 3.56, 3.29 [18:03:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 3.35, 3.38, 3.32 [18:04:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.06, 1.31, 1.90 [18:07:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.92, 3.87, 3.42 [18:08:46] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.56, 9.95, 8.54 [18:08:48] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [18:09:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.21, 3.54, 3.36 [18:10:44] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.11, 10.86, 9.06 [18:11:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.88, 2.96, 3.16 [18:12:42] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.65, 11.09, 9.37 [18:14:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.43, 1.77, 2.00 [18:18:35] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.21, 9.77, 9.37 [18:18:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.45, 1.87, 1.83 [18:18:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.10, 1.85, 1.97 [18:22:39] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=recaptimewiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/recaptimewiki-20220819063252.xml (START) [18:22:44] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:23:08] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=recaptimewiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/recaptimewiki-20220819063252.xml (END - exit=0) [18:23:09] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=recaptimewiki (START) [18:23:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:23:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:23:57] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=recaptimewiki (END - exit=0) [18:23:58] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=recaptimewiki --active --update (END - exit=0) [18:24:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:24:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:24:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.31, 11.38, 10.25 
[18:25:28] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=sadisticdreamerwiki --no-updates --username-prefix=wikia:sadistic-dreamer /mnt/mediawiki-static/metawiki/ImportDump/sadisticdreamerwiki-20220820113757.xml (START) [18:25:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:26:21] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=sadisticdreamerwiki --no-updates --username-prefix=wikia:sadistic-dreamer /mnt/mediawiki-static/metawiki/ImportDump/sadisticdreamerwiki-20220820113757.xml (END - exit=0) [18:26:22] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=sadisticdreamerwiki (START) [18:26:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:26:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:26:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.21, 1.85, 1.94 [18:26:44] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=sadisticdreamerwiki (END - exit=35584) [18:26:45] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=sadisticdreamerwiki --active --update (END - exit=0) [18:26:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:26:56] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:30:19] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.47, 9.52, 9.81 [18:30:43] PROBLEM - wiki.beergeeks.co.il - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.beergeeks.co.il All nameservers failed to answer the query. [18:31:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.31, 5.60, 3.98 [18:33:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.36, 3.79, 3.18 [18:34:24] PROBLEM - cp20 APT on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:34:25] PROBLEM - cp20 Varnish Backends on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:34:25] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [18:34:38] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 27% [18:34:38] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35770 bytes in 0.029 second response time [18:34:38] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 2.025 second response time [18:34:38] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.019 second response time [18:34:39] PROBLEM - cp21 APT on cp21 is CRITICAL: APT CRITICAL: 4 packages available for upgrade (3 critical updates). [18:34:40] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 5 backends are down. mw121 mw122 mw131 mw132 mw142 [18:34:43] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.00, 0.03, 0.00 [18:34:53] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:34:58] PROBLEM - cp20 ferm_active on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
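One detail worth flagging in the sadisticdreamerwiki import above: rebuildall.php ended with exit=35584 rather than 0, and initSiteStats.php was run anyway. That value looks like a raw wait() status rather than an 8-bit exit code: decoded, it is 139, the shell convention for a process killed by signal 11 (SIGSEGV). The arithmetic, as a one-liner:

    # 35584 >> 8 = 139 = 128 + 11, i.e. terminated by SIGSEGV.
    echo $(( 35584 >> 8 ))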
[18:35:10] PROBLEM - cp21 APT on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:35:10] PROBLEM - cp20 Varnish Backends on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:35:10] PROBLEM - cp20 ferm_active on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:35:10] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:35:10] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 5 backends are down. mw121 mw122 mw131 mw132 mw142 [18:35:10] PROBLEM - cp20 APT on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:35:10] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [18:35:13] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 5.135 second response time [18:35:13] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.010 second response time [18:35:13] RECOVERY - cp21 ferm_active on cp21 is OK: OK ferm input default policy is set [18:35:15] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:35:15] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:35:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.09, 3.82, 3.63 [18:35:18] RECOVERY - cp21 Puppet on cp21 is OK: OK: Puppet is currently enabled, last run 9 minutes ago with 0 failures [18:35:23] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 2.024 second response time [18:35:25] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:35:53] PROBLEM - cp21 NTP time on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:35:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.60, 3.38, 3.11 [18:36:00] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:36:18] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:36:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 4.29, 2.24, 1.98 [18:36:48] PROBLEM - cp20 Current Load on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:36:50] RECOVERY - cp20 Stunnel HTTP for mail121 on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 0.011 second response time [18:37:03] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:37:14] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [18:37:18] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:37:23] RECOVERY - cp20 ferm_active on cp20 is OK: OK ferm input default policy is set [18:37:46] RECOVERY - cp21 NTP time on cp21 is OK: NTP OK: Offset -0.0006031095982 secs [18:37:48] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[18:37:50] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3344 bytes in 0.015 second response time [18:38:13] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.017 second response time [18:38:16] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.016 second response time [18:38:59] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:39:00] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 7.229 second response time [18:39:04] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:39:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.87, 3.14, 3.37 [18:39:32] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3344 bytes in 3.046 second response time [18:39:49] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:39:57] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:40:55] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:41:01] PROBLEM - cp20 conntrack_table_size on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:41:06] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:41:55] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:42:00] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:42:13] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:42:51] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 2.281 second response time [18:42:56] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:43:02] RECOVERY - cp20 conntrack_table_size on cp20 is OK: OK: nf_conntrack is 0 % full [18:43:04] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 34% [18:43:25] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.029 second response time [18:43:35] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:43:40] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 1.254 second response time [18:44:16] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3343 bytes in 7.209 second response time [18:44:17] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 7.849 second response time [18:44:26] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 1.383 second response time [18:44:54] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 35% [18:45:06] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
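From 18:34 onward nearly every check on cp20 and cp21 flaps with "CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds", and the direct SSH and HTTPS probes time out as well. When unrelated checks on one host all time out together like this, the usual reading is that the host or its network path is saturated, not that each service failed independently. Each of these results comes from the monitoring host invoking the NRPE agent remotely, roughly like the following (host and command names assumed for illustration):

    # Ask the NRPE daemon on cp20 to run a named local check; the
    # 60-second figure in the alerts matches this kind of timeout.
    /usr/lib/nagios/plugins/check_nrpe -H cp20.miraheze.org -c check_load -t 60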
[18:45:08] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:45:14] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:45:15] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.04, 12.35, 9.81 [18:45:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.68, 6.13, 4.53 [18:45:34] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14742 bytes in 0.017 second response time [18:45:37] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:45:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 8.08, 11.01, 10.31 [18:46:42] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:47:03] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35770 bytes in 0.034 second response time [18:47:10] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 7.33, 10.60, 9.48 [18:47:27] RECOVERY - cp20 Current Load on cp20 is OK: OK - load average: 0.07, 0.04, 0.01 [18:47:41] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 1.025 second response time [18:47:41] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.048 second response time [18:47:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.58, 3.84, 3.38 [18:47:56] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.23, 9.42, 9.82 [18:48:25] PROBLEM - cp21 Stunnel HTTP for mw132 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:48:46] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [18:48:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.86, 1.70, 2.00 [18:49:06] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.79, 9.93, 9.36 [18:49:15] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 1.082 second response time [18:50:31] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:50:40] PROBLEM - cp21 Stunnel HTTP for mw141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:50:47] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35724 bytes in 7.165 second response time [18:50:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.37, 1.98, 2.07 [18:51:03] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 2.063 second response time [18:51:05] PROBLEM - cp21 NTP time on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:51:14] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:51:45] PROBLEM - cp20 conntrack_table_size on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[18:51:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.20, 3.81, 3.49 [18:52:28] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.021 second response time [18:53:01] RECOVERY - cp21 NTP time on cp21 is OK: NTP OK: Offset 0.0004547834396 secs [18:53:39] RECOVERY - cp20 conntrack_table_size on cp20 is OK: OK: nf_conntrack is 0 % full [18:53:59] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:55:31] RECOVERY - cp21 Stunnel HTTP for mw141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 1.027 second response time [18:55:53] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 2.022 second response time [18:55:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.28, 3.87, 3.56 [18:56:18] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:56:23] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [18:56:47] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:57:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.77, 3.35, 3.99 [18:57:40] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:57:54] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:57:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.90, 3.46, 3.44 [18:58:00] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:58:44] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 0.014 second response time [18:58:46] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 3.097 second response time [18:58:48] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.014 second response time [18:58:50] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3344 bytes in 7.181 second response time [18:58:59] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:58:59] PROBLEM - cp20 PowerDNS Recursor on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:59:04] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [18:59:04] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [18:59:16] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 4 minutes ago with 0 failures [18:59:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.10, 3.72, 4.04 [18:59:27] RECOVERY - wiki.beergeeks.co.il - reverse DNS on sslhost is OK: SSL OK - wiki.beergeeks.co.il reverse DNS resolves to cp30.miraheze.org - CNAME OK [18:59:50] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[18:59:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 3.02, 3.30, 3.38 [18:59:57] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:00:03] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:00:10] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35770 bytes in 0.037 second response time [19:00:20] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb [19:00:39] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:00:54] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.59, 1.86, 1.99 [19:01:04] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:01:14] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:01:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.98, 3.39, 3.88 [19:02:02] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.029 second response time [19:02:02] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:02:08] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 2.062 second response time [19:02:11] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:02:14] RECOVERY - cp21 Stunnel HTTP for mw132 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.018 second response time [19:02:18] PROBLEM - cp20 NTP time on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:02:19] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [19:02:27] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.020 second response time [19:02:44] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:02:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.38, 2.06, 2.04 [19:03:03] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:03:07] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:03:11] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 1.072 second response time [19:03:11] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:03:14] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 22% [19:03:18] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.82, 4.58, 4.30 [19:03:21] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 1.180 second response time [19:03:22] PROBLEM - cp21 PowerDNS Recursor on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[19:03:32] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14726 bytes in 5.288 second response time [19:03:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.69, 10.23, 9.59 [19:03:52] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:03:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.84, 3.71, 3.57 [19:03:57] PROBLEM - cp21 conntrack_table_size on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:04:05] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 3.105 second response time [19:04:12] RECOVERY - cp20 NTP time on cp20 is OK: NTP OK: Offset 0.001754045486 secs [19:04:48] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3344 bytes in 1.033 second response time [19:04:50] PROBLEM - cp20 conntrack_table_size on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:04:57] PROBLEM - cp20 ferm_active on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:04:57] PROBLEM - cp20 Disk Space on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:05:05] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 2.037 second response time [19:05:06] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:05:42] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:05:59] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.085 second response time [19:06:00] PROBLEM - cp21 Puppet on cp21 is CRITICAL: CRITICAL: Puppet has 3 failures. Last run 4 minutes ago with 3 failures. Failed resources (up to 3 shown): File[/etc/apt/trusted.gpg.d/puppetlabs.gpg],File[/usr/local/bin/puppet-enabled],File[/etc/rsyslog.d] [19:06:16] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [19:06:52] RECOVERY - cp20 conntrack_table_size on cp20 is OK: OK: nf_conntrack is 0 % full [19:07:43] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.104 second response time [19:07:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.63, 2.81, 3.24 [19:08:00] PROBLEM - cp21 Disk Space on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
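The cp21 Puppet failure above lists only File resources (puppetlabs.gpg, puppet-enabled, rsyslog.d), consistent with agent runs that could not fetch file content while the host was struggling rather than a genuinely broken catalog; the later "0 failures" recoveries support that reading. Once load settles, a foreground run is the quickest confirmation:

    # Trigger an immediate agent run and watch it; a clean run
    # should clear the Puppet check on the next monitoring cycle.
    sudo puppet agent --test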
[19:08:10] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:09:11] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:09:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.45, 3.66, 3.93 [19:09:19] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 3.057 second response time [19:09:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.76, 8.83, 9.35 [19:09:51] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:09:54] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:09:58] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:10:00] RECOVERY - cp21 Disk Space on cp21 is OK: DISK OK - free space: / 13126 MB (34% inode=96%); [19:10:08] RECOVERY - cp20 PowerDNS Recursor on cp20 is OK: DNS OK: 0.133 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a [19:10:09] RECOVERY - cp20 Disk Space on cp20 is OK: DISK OK - free space: / 13781 MB (35% inode=97%); [19:10:31] PROBLEM - cp21 Current Load on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:10:54] PROBLEM - cp20 Current Load on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:11:09] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 0.017 second response time [19:11:19] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:11:24] RECOVERY - cp21 PowerDNS Recursor on cp21 is OK: DNS OK: 0.031 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a [19:11:45] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:11:56] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[19:12:04] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 3.129 second response time [19:12:49] RECOVERY - cp20 ferm_active on cp20 is OK: OK ferm input default policy is set [19:12:50] RECOVERY - cp20 Current Load on cp20 is OK: OK - load average: 0.00, 0.00, 0.00 [19:13:14] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3345 bytes in 0.016 second response time [19:13:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.01, 10.37, 9.88 [19:13:50] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:14:47] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.025 second response time [19:14:56] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 21% [19:15:00] RECOVERY - cp21 conntrack_table_size on cp21 is OK: OK: nf_conntrack is 0 % full [19:15:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.12, 2.59, 3.33 [19:15:25] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.21, 0.09, 0.02 [19:15:27] PROBLEM - cp21 Stunnel HTTP for mw132 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:15:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.46, 9.71, 9.71 [19:15:52] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 5.062 second response time [19:16:03] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:16:53] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:17:15] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 23% [19:17:28] RECOVERY - cp21 Stunnel HTTP for mw132 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.017 second response time [19:17:31] PROBLEM - cp21 ferm_active on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:17:37] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:17:57] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:18:11] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:18:16] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 2.071 second response time [19:18:18] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 3.062 second response time [19:18:26] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:18:48] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:18:57] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 3.124 second response time [19:19:00] PROBLEM - cp21 Disk Space on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:19:06] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35770 bytes in 0.043 second response time [19:19:14] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[19:19:28] RECOVERY - cp21 ferm_active on cp21 is OK: OK ferm input default policy is set [19:20:10] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:20:27] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.063 second response time [19:20:54] RECOVERY - cp21 Disk Space on cp21 is OK: DISK OK - free space: / 13125 MB (34% inode=96%); [19:21:24] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:21:35] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:21:41] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:22:30] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:22:32] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:22:35] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 0.146 second response time [19:22:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 0.78, 1.57, 1.90 [19:23:04] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:23:24] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.012 second response time [19:23:31] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 100182 bytes in 0.164 second response time [19:23:57] PROBLEM - cp21 conntrack_table_size on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:24:32] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.067 second response time [19:24:39] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:25:27] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 1.042 second response time [19:25:49] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3345 bytes in 0.014 second response time [19:25:50] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:25:50] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3345 bytes in 0.016 second response time [19:25:58] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:26:28] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:26:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.11, 1.77, 1.91 [19:26:41] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 7.247 second response time [19:27:24] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:27:29] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 0.039 second response time [19:27:35] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[19:27:53] RECOVERY - cp20 Stunnel HTTP for mail121 on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 1.028 second response time [19:29:00] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.018 second response time [19:29:32] PROBLEM - cp20 PowerDNS Recursor on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:29:34] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.036 second response time [19:30:14] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:30:15] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:30:17] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:30:45] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:31:23] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:31:23] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 2.048 second response time [19:31:24] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is WARNING: WARNING - NGINX Error Rate is 41% [19:31:29] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35762 bytes in 5.073 second response time [19:32:09] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:32:17] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3343 bytes in 1.050 second response time [19:32:19] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CRITICAL: Puppet has 4 failures. Last run 2 minutes ago with 4 failures. Failed resources (up to 3 shown): File[/etc/apt/trusted.gpg.d/puppetlabs.gpg],File[/usr/local/bin/puppet-enabled],File[/etc/rsyslog.d],File[/etc/rsyslog.conf] [19:33:23] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 35% [19:33:32] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:34:10] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.031 second response time [19:34:16] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3344 bytes in 1.054 second response time [19:34:32] !log UPDATE user_profile SET blurb = '' WHERE blurb LIKE '%in Delhi%'; [19:34:35] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 1.076 second response time [19:34:38] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:34:39] !log UPDATE user_profile SET blurb = '' WHERE blurb LIKE '%Healthy Life Human%'; [19:34:47] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:34:56] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:35:12] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.056 second response time [19:35:16] RECOVERY - cp21 conntrack_table_size on cp21 is OK: OK: nf_conntrack is 0 % full [19:35:43] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
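The two !log UPDATE statements at 19:34 blank spam blurbs out of the user_profile table by LIKE pattern. For a destructive pattern cleanup like this, counting the matches first is a cheap sanity check; a minimal sketch, with the database host and schema left as placeholders since the log does not say where user_profile lives:

    # Dry run: count the rows the pattern would touch before updating.
    mysql -h DB_HOST SCHEMA_NAME \
        -e "SELECT COUNT(*) FROM user_profile WHERE blurb LIKE '%in Delhi%';"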
[19:36:31] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:36:49] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 22% [19:37:42] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.018 second response time [19:38:02] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:38:13] PROBLEM - cp21 PowerDNS Recursor on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:38:28] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 100182 bytes in 1.154 second response time [19:38:43] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.93, 3.44, 3.09 [19:38:57] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:39:22] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:39:34] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:39:43] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:40:02] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 7.099 second response time [19:40:36] PROBLEM - cp21 Stunnel HTTP for mw141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:40:36] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 3.341 second response time [19:40:37] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.58, 3.48, 3.12 [19:40:40] RECOVERY - cp21 PowerDNS Recursor on cp21 is OK: DNS OK: 0.215 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a [19:41:22] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.031 second response time [19:41:23] PROBLEM - cp21 NTP time on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:42:04] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.023 second response time [19:42:32] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.36, 2.97, 2.98 [19:42:34] RECOVERY - cp21 Stunnel HTTP for mw141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.040 second response time [19:43:29] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:43:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [19:44:20] PROBLEM - cp20 NTP time on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:44:38] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[19:45:42] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.061 second response time [19:46:19] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.068 second response time [19:46:22] RECOVERY - cp20 NTP time on cp20 is OK: NTP OK: Offset 0.0001935064793 secs [19:46:51] RECOVERY - cp20 PowerDNS Recursor on cp20 is OK: DNS OK: 0.029 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a [19:46:52] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.018 second response time [19:47:21] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.025 second response time [19:47:48] PROBLEM - cp21 Stunnel HTTP for mw141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:47:54] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb [19:48:27] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:48:37] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:49:50] RECOVERY - cp21 Stunnel HTTP for mw141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 0.019 second response time [19:50:01] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:50:29] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:51:20] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0) [19:51:45] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:52:24] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:52:29] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 5.107 second response time [19:53:20] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.017 second response time [19:53:36] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:53:36] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.027 second response time [19:53:51] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [19:53:54] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [19:54:54] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35724 bytes in 0.038 second response time [19:55:15] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:55:28] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:55:33] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.024 second response time [19:55:42] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [19:55:48] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[19:56:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.85, 3.81, 3.24
[19:57:06] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.09, 3.82, 3.22
[19:57:14] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 2 minutes ago with 0 failures
[19:57:27] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.026 second response time
[19:57:37] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:57:43] RECOVERY - cp21 Puppet on cp21 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[19:57:43] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.041 second response time
[19:57:48] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[19:58:24] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:58:37] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:59:00] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:59:41] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:00:11] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[20:00:23] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 11%
[20:00:25] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.31, 10.38, 9.18
[20:00:54] RECOVERY - cp21 NTP time on cp21 is OK: NTP OK: Offset 6.696581841e-05 secs
[20:01:09] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:01:21] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 7.083 second response time
[20:01:24] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:01:56] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[20:02:08] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 15%
[20:02:12] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:02:23] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.77, 9.67, 9.06
[20:02:26] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.015 second response time
[20:02:38] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:02:48] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.081 second response time
[20:03:15] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:03:24] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14728 bytes in 0.017 second response time
[20:03:43] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.023 second response time
[20:03:57] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:04:18] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:04:37] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 3.049 second response time
[20:05:10] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 1.154 second response time
[20:05:17] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:05:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.60, 10.25, 8.71
[20:05:48] PROBLEM - gluster122 Current Load on gluster122 is WARNING: WARNING - load average: 3.70, 3.34, 2.52
[20:05:53] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:05:56] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 3.065 second response time
[20:06:10] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:06:16] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.10, 11.23, 9.88
[20:06:46] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.094 second response time
[20:07:09] PROBLEM - cp21 PowerDNS Recursor on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:07:20] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.024 second response time
[20:07:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.07, 9.39, 8.58
[20:07:46] RECOVERY - gluster122 Current Load on gluster122 is OK: OK - load average: 2.40, 2.94, 2.47
[20:07:47] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 3.112 second response time
[20:07:52] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.051 second response time
[20:08:12] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 2.034 second response time
[20:08:56] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:09:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.77, 3.85, 3.88
[20:10:11] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 14.55, 11.50, 10.18
[20:10:57] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 24%
[20:11:34] PROBLEM - cp21 Current Load on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:11:45] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:12:00] RECOVERY - cp21 PowerDNS Recursor on cp21 is OK: DNS OK: 0.135 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a
[20:12:09] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.21, 10.60, 10.01
[20:12:11] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:13:32] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.05, 0.05, 0.01
[20:13:32] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 8.332 second response time
[20:13:43] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 3.497 second response time
[20:14:08] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.062 second response time
[20:14:18] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:14:55] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.023 second response time
[20:16:13] PROBLEM - cp20 PowerDNS Recursor on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:16:20] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.018 second response time
[20:18:02] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 9.67, 9.87, 9.89
[20:18:11] RECOVERY - cp20 PowerDNS Recursor on cp20 is OK: DNS OK: 0.030 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a
[20:18:57] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:19:21] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:19:40] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:20:40] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:20:55] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 1.047 second response time
[20:21:00] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.94, 3.01, 3.39
[20:21:55] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.25, 3.39, 3.83
[20:21:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.66, 11.12, 10.41
[20:21:58] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:22:27] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:22:33] PROBLEM - cp21 conntrack_table_size on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:22:36] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:22:44] PROBLEM - cp20 conntrack_table_size on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:23:56] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.02, 11.44, 10.61
[20:24:01] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.019 second response time
[20:24:30] RECOVERY - cp21 conntrack_table_size on cp21 is OK: OK: nf_conntrack is 0 % full
[20:24:45] RECOVERY - cp20 conntrack_table_size on cp20 is OK: OK: nf_conntrack is 0 % full
[20:24:53] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:24:58] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.52, 4.31, 3.81
[20:25:08] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.071 second response time
[20:25:53] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.72, 10.85, 10.50
[20:25:54] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 0.802 second response time
[20:26:35] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:26:57] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.65, 3.63, 3.62
[20:27:10] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 7.074 second response time
[20:27:29] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.029 second response time
[20:27:45] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:29:01] PROBLEM - cp20 Current Load on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:29:30] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:29:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:29:49] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.44, 9.63, 10.15
[20:29:57] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 3.057 second response time
[20:30:23] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:30:26] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 3.033 second response time
[20:30:57] RECOVERY - cp20 Current Load on cp20 is OK: OK - load average: 0.03, 0.05, 0.01
[20:31:38] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:32:15] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:32:20] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3344 bytes in 1.037 second response time
[20:32:56] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.38, 4.21, 3.75
[20:33:00] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:33:40] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.015 second response time
[20:33:43] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.075 second response time
[20:34:17] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 3.141 second response time
[20:34:24] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 10 minutes ago with 0 failures
[20:34:55] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.82, 3.51, 3.55
[20:35:07] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14728 bytes in 1.030 second response time
[20:35:55] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.45, 2.52, 3.21
[20:36:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.23, 1.62, 2.00
[20:36:55] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.05, 3.57, 3.56
[20:37:00] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:37:17] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:37:22] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:37:27] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 2.064 second response time
[20:37:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[20:38:25] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.065 second response time
[20:38:54] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.48, 3.84, 3.69
[20:39:17] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 2.032 second response time
[20:39:26] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35724 bytes in 1.053 second response time
[20:39:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:39:54] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:40:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.55, 1.86, 1.98
[20:41:33] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.34, 10.03, 9.79
[20:41:44] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:42:11] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 0.013 second response time
[20:42:41] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:43:31] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.89, 9.32, 9.56
[20:43:43] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:43:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[20:44:18] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:45:43] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 1.045 second response time
[20:45:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:46:18] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14742 bytes in 1.050 second response time
[20:46:29] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:46:34] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:46:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.50, 1.77, 1.94
[20:46:52] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.16, 2.91, 3.32
[20:47:48] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:48:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.04, 1.93, 1.99
[20:49:29] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.015 second response time
[20:49:45] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.031 second response time
[20:50:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.67, 1.91, 1.97
[20:50:45] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3345 bytes in 7.314 second response time
[20:50:46] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 14%
[20:52:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.25, 1.93, 1.96
[20:53:08] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[20:53:36] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:53:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[20:54:07] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:54:27] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:55:35] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 21%
[20:55:50] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:56:27] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 0.020 second response time
[20:56:37] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35724 bytes in 0.044 second response time
[20:56:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.62, 1.88, 1.94
[20:56:55] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:57:01] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.019 second response time
[20:57:52] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.021 second response time
[20:58:39] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.58, 3.65, 3.21
[20:58:50] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:00:34] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.01, 2.93, 2.99
[21:00:46] PROBLEM - cp21 Puppet on cp21 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/apt/trusted.gpg.d/puppetlabs.gpg]
[21:01:23] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:01:47] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 11.17, 5.64, 3.84
[21:02:32] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.99, 11.64, 9.74
[21:02:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.20, 1.89, 1.90
[21:03:00] PROBLEM - cp21 Current Load on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:03:06] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:03:27] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.132 second response time
[21:04:14] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:04:24] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.99, 4.70, 3.79
[21:04:27] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.18, 11.54, 9.93
[21:04:57] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.36, 0.13, 0.03
[21:05:46] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.84, 3.50, 3.36
[21:06:11] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3346 bytes in 1.026 second response time
[21:06:18] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.59, 3.92, 3.61
[21:07:45] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.18, 3.11, 3.23
[21:08:13] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.67, 3.16, 3.37
[21:10:15] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 9.15, 9.79, 9.62
[21:11:51] PROBLEM - cp21 Stunnel HTTP for mw141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:14:18] RECOVERY - cp21 Stunnel HTTP for mw141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.038 second response time
[21:16:24] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.023 second response time
[21:23:47] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[21:30:02] RECOVERY - cp21 Puppet on cp21 is OK: OK: Puppet is currently enabled, last run 5 minutes ago with 0 failures
[21:31:08] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:31:37] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.24, 3.65, 3.09
[21:33:37] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.18, 3.20, 3.00
[21:35:20] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:35:31] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:39:22] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:40:32] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.031 second response time
[21:41:19] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3346 bytes in 1.032 second response time
[21:44:02] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 1.540 second response time
[21:48:39] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[21:48:59] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:50:17] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:52:24] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 3.056 second response time
[21:53:12] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 7.141 second response time
[21:54:10] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:54:13] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 10 seconds ago with 0 failures
[21:56:09] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.023 second response time
[21:57:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.04, 3.47, 3.19
[21:58:07] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:59:28] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.26, 3.04, 3.06
[22:00:06] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.020 second response time
[22:00:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.57, 1.56, 1.99
[22:04:21] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[22:06:06] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[22:06:49] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.010 second response time
[22:08:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.68, 1.17, 1.65
[22:11:42] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 100174 bytes in 7.256 second response time
[22:16:25] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[22:16:45] RECOVERY - cp20 Varnish Backends on cp20 is OK: All 14 backends are healthy
[22:19:26] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 9.93, 3.28, 2.17
[22:29:03] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.20, 1.84, 1.96
[22:31:06] RECOVERY - cp21 Varnish Backends on cp21 is OK: All 14 backends are healthy
[22:31:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.31, 3.75, 3.14
[22:33:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.98, 3.38, 3.07
[22:34:50] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.19, 2.07, 2.02
[22:36:27] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.37, 10.70, 9.13
[22:38:23] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.67, 11.62, 9.64
[22:42:15] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.39, 11.33, 10.04
[22:44:11] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.07, 11.71, 10.30
[22:48:02] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.51, 11.45, 10.50
[22:51:54] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.08, 9.40, 9.90
[22:54:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.26, 1.60, 1.87
[22:58:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.11, 1.77, 1.86
[23:04:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 0.84, 1.80, 1.93
[23:08:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.57, 2.05, 1.98
[23:10:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.59, 1.83, 1.91
[23:18:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.32, 2.11, 1.93
[23:26:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.20, 1.78, 1.90
[23:28:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 4.43, 2.70, 2.22
[23:42:23] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.30, 4.16, 2.95
[23:42:57] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.67, 3.73, 2.76
[23:44:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.06, 1.53, 1.89
[23:44:57] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.79, 3.70, 2.87
[23:46:12] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.63, 4.00, 3.17
[23:46:56] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.22, 3.12, 2.76
[23:49:38] PROBLEM - cp30 Disk Space on cp30 is WARNING: DISK WARNING - free space: / 4220 MB (10% inode=96%);
[23:50:01] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.23, 3.26, 3.06
[23:54:38] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.87, 1.38, 1.67
[23:55:53] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 3.83, 4.05, 3.34
[23:57:52] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.67, 3.90, 3.37
[23:59:52] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.33, 3.96, 3.44