[00:03:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.18, 11.44, 10.19 [00:07:14] RECOVERY - cp30 Disk Space on cp30 is OK: DISK OK - free space: / 10566 MB (27% inode=96%); [00:07:21] RECOVERY - cp31 Disk Space on cp31 is OK: DISK OK - free space: / 11112 MB (28% inode=96%); [00:09:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.60, 9.59, 9.86 [00:12:55] PROBLEM - roblox-wiki.tk - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query roblox-wiki.tk. IN NS: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL [00:15:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.48, 11.07, 11.81 [00:16:29] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.75, 3.88, 3.02 [00:17:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 6.79, 4.38, 3.12 [00:17:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.52, 12.34, 12.17 [00:19:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.47, 10.43, 9.96 [00:20:26] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.92, 3.99, 3.31 [00:20:27] !log reboot mw132 (nutcracker stopped, and could not restart it) [00:20:47] RECOVERY - mw132 nutcracker process on mw132 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker' [00:22:46] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 29512 bytes in 0.194 second response time [00:23:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.96, 3.99, 3.38 [00:27:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.59, 9.58, 9.79 [00:27:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 3.09, 3.35, 3.23 [00:28:20] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: 
CRITICAL - load average: 4.01, 3.89, 3.49 [00:30:19] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.26, 3.40, 3.37 [00:36:14] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.38, 4.75, 3.89 [00:37:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.57, 11.09, 10.31 [00:40:11] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.09, 3.98, 3.87 [00:42:33] PROBLEM - roblox-wiki.tk - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - roblox-wiki.tk All nameservers failed to answer the query. [00:43:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.98, 9.90, 10.07 [00:44:08] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.43, 3.53, 3.65 [00:50:04] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.04, 3.51, 3.65 [00:51:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.49, 10.19, 11.93 [00:54:01] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.95, 2.97, 3.37 [00:55:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.12, 11.27, 11.94 [00:56:07] !log reboot mw132 (nutcracker stopped, and could not restart it) [00:56:16] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [00:57:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.91, 3.58, 3.03 [00:59:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.56, 3.34, 3.03 [01:01:55] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.08, 3.40, 3.46 [01:03:53] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.44, 2.71, 3.20 [01:05:20] [02miraheze/mw-config] 07Universal-Omega pushed 031 commit to 03master [+0/-0/±1] 
13https://github.com/miraheze/mw-config/compare/fa1d06328202...f199075d0f3f [01:05:21] [url] Comparing fa1d06328202...f199075d0f3f · miraheze/mw-config · GitHub | github.com [01:05:22] [02miraheze/mw-config] 07Universal-Omega 03f199075 - T9624: add bnwiki as an import source for yahyawiki [01:05:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.42, 10.86, 11.87 [01:06:25] miraheze/mw-config - Universal-Omega the build passed. [01:10:04] !log [@mwtask141] starting deploy of {'config': True} to all [01:10:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:10:13] !log [@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 9s [01:10:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:12:11] RECOVERY - roblox-wiki.tk - reverse DNS on sslhost is OK: SSL OK - roblox-wiki.tk reverse DNS resolves to cp30.miraheze.org - NS RECORDS OK [01:17:51] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.85, 8.66, 10.15 [01:26:55] !log [@test131] starting deploy of {'config': True} to all [01:26:56] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [01:27:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:27:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [01:31:59] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.56, 1.74, 1.98 [01:33:59] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 1.97, 1.88, 2.01 [01:35:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.97, 11.15, 9.83 [01:37:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.70, 10.82, 9.85 [01:41:51] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.20, 10.05, 9.81 [01:43:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.72, 10.25, 9.34 [01:44:17] PROBLEM - cp31 
Current Load on cp31 is WARNING: WARNING - load average: 1.32, 1.55, 1.95 [01:45:52] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.38, 9.30, 9.10 [01:50:19] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.49, 3.03, 2.57 [01:52:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.78, 1.96, 1.95 [01:52:18] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.60, 2.79, 2.53 [01:54:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.52, 1.83, 1.91 [01:58:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.21, 2.01, 1.96 [02:00:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.38, 1.78, 1.88 [02:01:15] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.29, 10.22, 9.90 [02:03:12] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 16.80, 11.83, 10.46 [02:06:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.12, 1.98, 1.92 [02:07:06] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.46, 10.58, 10.31 [02:09:02] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.22, 9.75, 10.03 [02:10:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.46, 1.78, 1.85 [02:13:28] PROBLEM - roblox-wiki.tk - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query roblox-wiki.tk. 
IN NS: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL [02:14:17] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.32, 1.46, 1.70 [02:20:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.29, 1.95, 1.81 [02:22:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.76, 1.83, 1.78 [02:24:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.35, 2.07, 1.88 [02:32:18] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.49, 1.93, 1.93 [02:34:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.36, 1.90, 1.91 [02:35:11] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.43, 13.21, 10.85 [02:35:44] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.26, 4.09, 3.05 [02:37:42] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.80, 3.70, 3.03 [02:39:04] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.72, 11.66, 10.73 [02:39:41] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.01, 3.08, 2.88 [02:40:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.46, 1.86, 1.90 [02:42:17] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.71, 11.34, 9.48 [02:42:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.49, 2.46, 2.11 [02:42:58] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.32, 12.21, 11.13 [02:43:06] RECOVERY - roblox-wiki.tk - reverse DNS on sslhost is OK: SSL OK - roblox-wiki.tk reverse DNS resolves to cp30.miraheze.org - NS RECORDS OK [02:44:11] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.61, 11.05, 9.62 [02:44:55] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.90, 11.71, 11.11 [02:51:46] RECOVERY - 
mw121 Current Load on mw121 is OK: OK - load average: 9.17, 9.98, 9.71 [02:54:39] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.99, 9.13, 10.17 [03:00:26] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.61, 12.36, 11.18 [03:01:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.52, 3.47, 2.99 [03:02:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.12, 1.79, 1.99 [03:02:23] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.95, 4.72, 3.53 [03:03:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.01, 3.51, 3.05 [03:04:22] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.27, 3.80, 3.34 [03:05:49] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.69, 3.18, 2.99 [03:06:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.35, 1.89, 1.97 [03:06:20] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.17, 3.18, 3.16 [03:10:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.31, 1.65, 1.86 [03:11:52] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.11, 10.50, 10.10 [03:13:46] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 16.00, 12.66, 10.92 [03:15:39] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.31, 11.78, 10.79 [03:18:17] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.30, 1.53, 1.70 [03:22:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.06, 1.60, 1.71 [03:23:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.75, 11.16, 11.99 [03:24:17] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.00, 1.37, 1.62 [03:27:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - 
load average: 9.16, 9.48, 10.16 [03:39:54] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.67, 3.19, 2.71 [03:40:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.20, 2.38, 1.94 [03:41:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.94, 3.11, 2.65 [03:41:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.33, 10.70, 10.67 [03:41:53] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.30, 2.87, 2.65 [03:43:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.19, 10.74, 10.14 [03:43:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.19, 2.73, 2.56 [03:43:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.53, 10.42, 10.56 [03:49:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.38, 11.34, 10.52 [03:49:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 16.38, 13.06, 11.51 [03:51:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.31, 11.04, 10.52 [03:52:43] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.92, 4.32, 3.28 [03:55:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.12, 11.52, 10.85 [03:56:40] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.45, 3.98, 3.40 [03:57:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 8.77, 10.62, 10.62 [03:58:39] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.25, 3.36, 3.25 [04:02:35] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.37, 3.97, 3.56 [04:03:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.33, 9.19, 9.96 [04:04:34] RECOVERY - gluster101 Current Load on 
gluster101 is OK: OK - load average: 1.80, 3.18, 3.32 [04:07:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.70, 11.63, 11.94 [04:09:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.42, 13.20, 12.48 [04:11:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.51, 11.12, 10.49 [04:11:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.93, 11.54, 11.96 [04:13:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 8.19, 10.11, 10.21 [04:15:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.73, 8.97, 9.79 [04:25:51] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 9.49, 9.23, 10.19 [04:46:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.26, 1.51, 1.99 [04:49:59] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.76, 1.71, 1.96 [04:50:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.12, 1.75, 1.96 [04:51:59] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.60, 2.15, 2.10 [05:00:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.36, 1.80, 1.98 [05:06:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.03, 2.33, 2.10 [05:08:41] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.72, 11.11, 9.60 [05:09:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.97, 11.05, 9.25 [05:10:38] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.23, 11.22, 9.86 [05:11:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.82, 11.63, 9.70 [05:12:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.22, 1.71, 1.92 [05:15:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.69, 9.31, 9.17 [05:16:28] 
RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.15, 9.44, 9.55 [05:24:12] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.43, 10.70, 9.99 [05:26:09] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.91, 11.30, 10.26 [05:26:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.33, 2.42, 2.09 [05:28:06] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.89, 11.16, 10.32 [05:28:17] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.76, 10.16, 9.40 [05:28:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.17, 1.99, 1.98 [05:30:11] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.83, 9.78, 9.36 [05:30:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb [05:31:59] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 9.01, 10.00, 10.03 [05:38:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.19, 2.41, 2.08 [05:44:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.17, 1.94, 2.00 [05:47:29] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.15, 3.36, 2.51 [05:47:30] [02puppet] 07Universal-Omega edited pull request 03#2781: Convert LICENSE to markdown - 13https://github.com/miraheze/puppet/pull/2781 [05:47:31] [url] Page not found · GitHub · GitHub | github.com [05:47:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.43, 11.44, 10.45 [05:48:03] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.97, 2.99, 2.37 [05:48:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.17, 2.00, 2.00 [05:49:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 18.75, 13.71, 
11.37 [05:49:58] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 11.66, 6.54, 3.77 [05:50:14] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [05:51:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.38, 12.44, 10.31 [05:55:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.69, 11.48, 10.38 [05:56:46] [02puppet] 07Universal-Omega opened pull request 03#2782: csp: add www.gstatic.cn - 13https://github.com/miraheze/puppet/pull/2782 [05:56:47] [url] Page not found · GitHub · GitHub | github.com [05:57:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.45, 3.93, 3.75 [05:57:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.11, 11.71, 10.60 [05:59:33] [02dns] 07Reception123 closed pull request 03#331: Remove wiki-asterix.cf zone - 13https://github.com/miraheze/dns/pull/331 [05:59:34] [02miraheze/dns] 07Reception123 pushed 031 commit to 03master [+0/-1/±0] 13https://github.com/miraheze/dns/compare/ea2b65e94af0...acae9a3ae526 [05:59:36] [02miraheze/dns] 07Universal-Omega 03acae9a3 - Remove wiki-asterix.cf zone (#331) [05:59:36] [url] Page not found · GitHub · GitHub | github.com [05:59:40] [url] Comparing ea2b65e94af0...acae9a3ae526 · miraheze/dns · GitHub | github.com [05:59:46] [02puppet] 07Reception123 closed pull request 03#2782: csp: add www.gstatic.cn - 13https://github.com/miraheze/puppet/pull/2782 [05:59:48] [02miraheze/puppet] 07Reception123 pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/puppet/compare/425d197f7497...f519dd400df8 [05:59:48] [url] Comparing 425d197f7497...f519dd400df8 · miraheze/puppet · GitHub | github.com [05:59:49] [02miraheze/puppet] 07Universal-Omega 03f519dd4 - csp: add www.gstatic.cn (#2782) [05:59:52] [url] Page not found · GitHub · GitHub | github.com [06:01:36] PROBLEM - mw121 Current Load on mw121 is WARNING: 
WARNING - load average: 9.29, 10.73, 10.46 [06:01:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.98, 3.75, 3.78 [06:03:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.72, 9.48, 10.05 [06:07:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.72, 11.18, 11.96 [06:09:29] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.62, 3.08, 3.35 [06:09:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.69, 3.92, 3.84 [06:11:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.55, 11.47, 10.58 [06:11:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.36, 3.55, 3.70 [06:12:28] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [06:13:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.79, 10.06, 10.18 [06:13:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.53, 3.69, 3.72 [06:15:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.41, 3.23, 3.55 [06:17:51] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.52, 7.95, 9.90 [06:23:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.75, 4.08, 3.75 [06:27:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.55, 3.43, 3.57 [06:28:23] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.37, 10.30, 9.87 [06:29:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.09, 2.96, 3.37 [06:32:11] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.20, 9.72, 9.78 [06:33:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.44, 3.61, 3.55 [06:34:17] PROBLEM - cp31 
Current Load on cp31 is WARNING: WARNING - load average: 1.33, 1.66, 1.95 [06:34:32] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.41, 10.29, 9.98 [06:35:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.53, 4.05, 3.72 [06:35:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [06:37:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.82, 3.58, 3.59 [06:38:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.40, 2.00, 2.02 [06:39:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb [06:40:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.31, 1.75, 1.93 [06:40:22] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.31, 12.53, 11.00 [06:41:39] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.22, 11.69, 10.61 [06:42:19] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.64, 11.17, 10.68 [06:43:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.20, 10.89, 10.45 [06:43:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.10, 3.65, 3.59 [06:43:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [06:45:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.04, 3.08, 3.38 [06:46:12] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.03, 9.09, 9.95 [06:47:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.02, 9.53, 10.03 [06:47:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb [06:48:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.54, 2.14, 
1.97 [06:54:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.85, 1.94, 1.96 [06:57:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [06:58:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.13, 1.72, 1.84 [07:00:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.17, 1.56, 1.77 [07:01:56] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.92, 3.92, 3.31 [07:01:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb [07:02:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.25, 1.66, 1.77 [07:02:41] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.34, 10.90, 10.13 [07:03:54] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 3.89, 4.32, 3.54 [07:03:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [07:04:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.96, 1.78, 1.80 [07:04:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 9.52, 10.06, 9.90 [07:05:53] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.37, 3.49, 3.32 [07:06:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.75, 2.04, 1.89 [07:07:52] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.91, 3.30, 3.27 [07:07:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [07:14:19] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.65, 10.28, 9.83 [07:16:15] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.86, 9.34, 9.54 [07:19:42] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.50, 3.25, 3.06 
[07:20:17] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.32, 10.03, 9.42 [07:21:41] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.50, 2.86, 2.93 [07:22:11] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 9.36, 9.96, 9.47 [07:26:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.08, 1.77, 2.00 [07:27:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [07:29:34] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.24, 4.85, 3.58 [07:29:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.75, 11.49, 10.15 [07:31:33] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.82, 4.00, 3.42 [07:31:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.76, 10.40, 9.92 [07:31:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb [07:33:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 22.61, 14.38, 11.38 [07:34:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.22, 2.26, 2.09 [07:35:30] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.54, 2.88, 3.10 [07:38:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.50, 1.84, 1.95 [07:41:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 7.45, 11.12, 11.19 [07:43:02] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 16.22, 13.20, 10.92 [07:43:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.92, 3.68, 3.37 [07:43:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 23.94, 15.58, 12.77 [07:45:29] RECOVERY - gluster101 Current Load on gluster101 is OK: OK 
- load average: 1.58, 3.05, 3.18 [07:46:17] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.24, 1.35, 1.65 [07:46:50] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 7.50, 11.93, 11.11 [07:49:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 6.75, 10.21, 11.35 [07:49:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [07:50:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.38, 1.99, 1.85 [07:50:37] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 5.20, 8.32, 9.81 [07:52:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.37, 1.78, 1.79 [07:56:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.62, 1.98, 1.85 [07:58:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.21, 1.69, 1.76 [07:59:51] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.28, 8.29, 9.79 [07:59:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [08:00:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 4.03, 2.43, 2.01 [08:06:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.24, 1.77, 1.85 [08:10:17] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.75, 1.31, 1.64 [08:19:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [08:20:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.68, 1.86, 1.72 [08:22:17] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.12, 1.66, 1.67 [08:23:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb [08:27:29] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.12, 3.70, 2.69 [08:31:29] PROBLEM - gluster101 Current Load 
on gluster101 is WARNING: WARNING - load average: 2.82, 3.63, 2.90 [08:35:29] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.18, 3.09, 2.85 [08:49:04] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.85, 1.77, 1.60 [08:51:03] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 1.90, 2.01, 1.72 [08:57:00] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.33, 1.88, 1.78 [08:59:00] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.67, 2.36, 1.98 [09:06:56] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.82, 1.96, 1.95 [09:08:55] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.28, 2.07, 1.99 [09:11:59] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.24, 9.09, 8.67 [09:12:53] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.19, 1.87, 1.94 [09:13:56] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.75, 9.05, 8.72 [09:14:52] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.00, 2.05, 1.98 [09:16:51] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.63, 1.83, 1.90 [09:20:49] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.13, 2.39, 2.10 [09:33:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [09:34:42] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.22, 1.72, 1.96 [09:36:41] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.54, 1.90, 1.99 [09:37:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb [09:38:03] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.07, 3.16, 2.69 [09:39:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online 
[09:39:58] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.31, 2.88, 2.64
[09:41:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.75, 10.52, 9.06
[09:43:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.29, 9.67, 8.95
[09:43:59] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.68, 1.69, 1.97
[09:45:19] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.75, 10.39, 9.56
[09:45:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[09:45:59] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.88, 2.47, 2.25
[09:46:42] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.71, 3.82, 2.98
[09:47:15] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 9.86, 9.84, 9.43
[09:48:40] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.49, 2.99, 2.78
[09:54:32] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.50, 1.80, 2.00
[09:55:27] [puppet] Reception123 closed pull request #2679: Allow www.gstatic.cn in the CSP list - https://github.com/miraheze/puppet/pull/2679
[09:55:28] [url] Page not found · GitHub · GitHub | github.com
[09:56:31] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.02, 1.81, 1.97
[10:00:29] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.56, 1.88, 1.98
[10:02:28] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.13, 2.01, 2.01
[10:02:30] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.07, 4.62, 3.24
[10:04:27] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.88, 1.87, 1.95
[10:04:28] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.48, 3.88, 3.11
[10:06:27] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.33, 3.38, 3.03
[10:11:59] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.32, 1.87, 2.00
[10:16:21] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.37, 1.44, 1.69
[10:17:59] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 4.00, 2.49, 2.14
[10:20:18] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.58, 2.01, 1.86
[10:22:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.89, 1.84, 1.81
[10:24:17] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.87, 1.52, 1.70
[10:25:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[10:29:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[10:29:58] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.48, 10.18, 9.11
[10:31:55] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.35, 9.38, 8.94
[10:31:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[10:35:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[10:37:13] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.32, 1.91, 1.78
[10:39:12] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.74, 1.77, 1.74
[10:41:11] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.08, 1.83, 1.76
[10:43:10] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 1.08, 1.56, 1.67
[10:47:07] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.47, 1.66, 1.71
[10:51:05] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.45, 2.02, 1.83
[10:53:04] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.80, 1.84, 1.79
[10:57:02] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.07, 1.76, 1.75
[10:59:01] RECOVERY - cp31 Current Load on cp31 is OK: OK - load average: 0.75, 1.44, 1.64
[11:04:57] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.56, 1.75, 1.71
[11:08:55] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.22, 1.88, 1.78
[11:16:51] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.47, 1.97, 1.97
[11:18:50] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 4.42, 2.81, 2.27
[11:23:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[11:26:46] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.51, 1.80, 1.99
[11:28:45] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.45, 2.00, 2.03
[11:29:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[11:29:59] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.34, 1.58, 1.97
[11:30:44] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.11, 1.66, 1.90
[11:34:42] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 3.58, 2.18, 2.03
[11:36:15] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.52, 10.00, 8.79
[11:38:12] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 6.13, 8.56, 8.41
[11:38:40] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.32, 1.82, 1.94
[11:41:59] RECOVERY - cp30 Current Load on cp30 is OK: OK - load average: 1.02, 1.38, 1.66
[11:48:35] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 16.94, 9.30, 4.64
[11:48:55] PROBLEM - cp30 Current Load on cp30 is WARNING: WARNING - load average: 1.94, 1.95, 1.83
[11:49:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.68, 10.09, 8.72
[11:50:54] PROBLEM - cp30 Current Load on cp30 is CRITICAL: CRITICAL - load average: 2.14, 1.98, 1.85
[11:51:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.91, 10.88, 9.16
[11:51:49] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 7.00, 3.74, 2.53
[11:53:10] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.09, 3.40, 2.53
[11:53:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 18.07, 12.72, 9.89
[12:08:59] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.02, 3.38, 3.80
[12:09:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.03, 11.71, 11.89
[12:09:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.02, 3.63, 3.86
[12:11:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 16.15, 13.24, 12.43
[12:11:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.10, 3.77, 3.87
[12:13:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.11, 3.16, 3.64
[12:15:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[12:18:53] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.46, 4.39, 3.98
[12:19:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[12:20:51] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.53, 3.56, 3.72
[12:21:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.46, 11.13, 11.82
[12:21:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.73, 2.56, 3.19
[12:24:48] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.64, 3.51, 3.62
[12:25:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 15.30, 12.64, 12.19
[12:26:47] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.56, 3.48, 3.59
[12:28:46] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.16, 3.68, 3.64
[12:30:44] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.79, 3.24, 3.48
[12:32:43] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 3.10, 3.10, 3.39
[12:39:36] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.42, 4.18, 3.73
[12:41:35] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.94, 3.93, 3.69
[12:43:12] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 6.52, 4.70, 3.63
[12:43:33] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.56, 5.24, 4.21
[12:47:01] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.45, 3.81, 3.53
[12:48:55] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.91, 4.17, 3.69
[12:49:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.46, 3.63, 3.87
[12:50:49] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.78, 3.64, 3.55
[12:55:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.22, 9.95, 11.80
[12:55:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.20, 10.53, 12.00
[12:56:32] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.04, 3.07, 3.40
[13:00:21] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.34, 3.22, 3.44
[13:01:29] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.12, 3.39, 3.56
[13:03:29] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.96, 2.89, 3.36
[13:04:10] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.50, 2.93, 3.32
[13:05:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.30, 7.95, 10.01
[13:05:51] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 5.85, 7.71, 9.99
[13:09:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[13:13:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[13:25:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 6.38, 4.14, 3.35
[13:27:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.29, 10.64, 9.22
[13:28:25] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.33, 4.09, 3.36
[13:30:23] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.59, 3.83, 3.35
[13:32:22] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.77, 3.97, 3.44
[13:33:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.50, 3.83, 3.65
[13:33:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.55, 11.58, 10.06
[13:35:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 3.68, 4.36, 3.90
[13:35:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.27, 11.81, 10.39
[13:37:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.94, 3.88, 3.78
[13:37:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.02, 13.05, 11.01
[13:38:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.62, 3.90, 3.69
[13:39:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.35, 11.55, 10.71
[13:42:15] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.42, 4.40, 3.87
[13:42:24] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 21.23, 14.68, 11.17
[13:43:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.12, 3.67, 3.66
[13:44:14] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.66, 3.79, 3.70
[13:45:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.42, 3.05, 3.43
[13:47:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.19, 3.37, 3.49
[13:47:51] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.85, 9.31, 10.01
[13:48:05] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 7.56, 10.93, 10.64
[13:48:11] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.27, 2.47, 3.17
[13:49:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.78, 3.06, 3.35
[13:55:40] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.41, 9.33, 10.01
[13:56:04] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.84, 3.82, 3.44
[13:58:03] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.56, 3.75, 3.47
[13:58:37] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.33, 3.54, 3.46
[13:58:41] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.48, 10.75, 10.20
[14:00:17] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.76, 1.77, 1.99
[14:00:32] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.75, 3.23, 3.35
[14:00:38] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.74, 10.13, 10.05
[14:01:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[14:02:00] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.86, 3.03, 3.25
[14:02:17] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.06, 1.79, 1.97
[14:07:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[14:11:52] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.42, 10.89, 10.03
[14:15:40] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.63, 11.51, 10.42
[14:16:06] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 17.10, 12.74, 10.90
[14:19:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.18, 11.50, 10.72
[14:21:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 19.67, 14.18, 11.77
[14:22:37] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.85, 3.95, 3.50
[14:24:32] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.46, 3.85, 3.52
[14:28:20] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.75, 3.36, 3.40
[14:28:40] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.52, 3.39, 3.15
[14:30:38] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.76, 3.11, 3.07
[14:34:04] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.24, 3.46, 3.45
[14:34:34] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.73, 3.69, 3.32
[14:35:58] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.59, 3.79, 3.56
[14:37:52] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.76, 3.31, 3.41
[14:38:31] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.96, 3.97, 3.54
[14:39:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.47, 3.13, 3.34
[14:40:30] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.43, 3.36, 3.37
[14:46:25] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.08, 4.02, 3.62
[14:48:23] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.88, 3.47, 3.46
[14:49:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.07, 3.58, 3.48
[14:50:22] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.30, 3.15, 3.35
[14:51:49] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.20, 4.05, 3.66
[14:53:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[14:54:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.50, 3.92, 3.65
[14:57:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[15:00:14] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.64, 4.11, 3.76
[15:02:12] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.35, 3.77, 3.67
[15:04:11] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.30, 4.32, 3.87
[15:08:08] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.95, 3.72, 3.73
[15:09:49] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 1.94, 3.09, 3.75
[15:12:06] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.98, 2.73, 3.31
[15:15:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.05, 2.69, 3.39
[15:19:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.82, 10.88, 11.93
[15:21:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 11.41, 11.75, 12.16
[15:23:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.13, 11.46, 12.00
[15:23:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.11, 3.48, 3.43
[15:25:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 21.96, 15.42, 13.39
[15:27:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.52, 3.80, 3.59
[15:29:49] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.80, 3.47, 3.44
[15:31:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.07, 3.81, 3.62
[15:33:46] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.73, 4.50, 3.81
[15:39:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[15:45:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[15:46:03] PROBLEM - www.fortis.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'fortis.wiki' expires in 7 day(s) (Mon 22 Aug 2022 15:42:16 GMT +0000).
[15:47:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[15:51:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[15:52:54] PROBLEM - fortis.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'fortis.wiki' expires in 7 day(s) (Mon 22 Aug 2022 15:42:16 GMT +0000).
[15:59:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.42, 3.58, 3.93
[16:03:29] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.93, 3.01, 3.84
[16:05:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.08, 2.44, 3.30
[16:07:29] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.62, 2.15, 3.29
[16:11:52] [miraheze/mw-config] Universal-Omega pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/mw-config/compare/f199075d0f3f...be2c589a46a4
[16:11:52] [url] Comparing f199075d0f3f...be2c589a46a4 · miraheze/mw-config · GitHub | github.com
[16:11:53] [miraheze/mw-config] Universal-Omega be2c589 - T9663: remove license change for whobasewiki
[16:13:01] miraheze/mw-config - Universal-Omega the build passed.
[16:17:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.55, 10.84, 11.93
[16:17:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 7.48, 10.51, 11.85
[16:21:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.06, 11.07, 11.73
[16:21:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.31, 12.19, 12.16
[16:23:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.59, 11.12, 11.69
[16:26:17] !log [@test131] starting deploy of {'config': True} to all
[16:26:18] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 0s
[16:26:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:26:32] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:31:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 8.60, 10.58, 11.48
[16:33:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 21.74, 14.18, 12.63
[16:36:27] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.21, 3.84, 3.15
[16:38:21] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.76, 3.60, 3.14
[16:39:25] !log [@mwtask141] starting deploy of {'config': True} to all
[16:39:34] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:39:36] !log [@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 10s
[16:39:36] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.29, 10.49, 10.57
[16:39:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:39:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[16:40:15] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.72, 2.86, 2.92
[16:41:32] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/5ad4d092b816...593f5792c1ff
[16:41:33] [url] Comparing 5ad4d092b816...593f5792c1ff · miraheze/ssl · GitHub | github.com
[16:41:34] [miraheze/ssl] MirahezeSSLBot 593f579 - Bot: Update SSL cert for project-patterns.com
[16:41:36] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.48, 10.31, 10.49
[16:41:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 7.60, 10.67, 11.68
[16:43:36] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.14, 9.36, 10.13
[16:43:57] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[16:51:14] [miraheze/mw-config] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-0/±1] https://github.com/miraheze/mw-config/commit/08a33efb6f87
[16:51:16] [miraheze/mw-config] Universal-Omega 08a33ef - Set `$wgPageImagesScores['position']` for houkai2ndwiki
[16:51:17] [mw-config] Universal-Omega created branch Universal-Omega-patch-1 - https://github.com/miraheze/mw-config
[16:51:18] [url] Page not found · GitHub · GitHub | github.com
[16:51:31] [mw-config] Universal-Omega opened pull request #4873: T9551: set `$wgPageImagesScores['position']` for houkai2ndwiki - https://github.com/miraheze/mw-config/pull/4873
[16:51:32] [url] Page not found · GitHub · GitHub | github.com
[16:51:51] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 15.47, 10.68, 10.84
[16:52:43] miraheze/mw-config - Universal-Omega the build passed.
[16:52:56] [mw-config] Universal-Omega closed pull request #4873: T9551: set `$wgPageImagesScores['position']` for houkai2ndwiki - https://github.com/miraheze/mw-config/pull/4873
[16:52:57] [url] Page not found · GitHub · GitHub | github.com
[16:52:58] [miraheze/mw-config] Universal-Omega deleted branch Universal-Omega-patch-1
[16:52:59] [miraheze/mw-config] Universal-Omega pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/mw-config/compare/be2c589a46a4...807f03407d9e
[16:53:00] [url] Comparing be2c589a46a4...807f03407d9e · miraheze/mw-config · GitHub | github.com
[16:53:00] PROBLEM - landofliberos.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'landofliberos.com' expires in 15 day(s) (Tue 30 Aug 2022 16:38:50 GMT +0000).
[16:53:01] [miraheze/mw-config] Universal-Omega 807f034 - T9551: set `$wgPageImagesScores['position']` for houkai2ndwiki (#4873)
[16:53:02] [mw-config] Universal-Omega deleted branch Universal-Omega-patch-1 - https://github.com/miraheze/mw-config
[16:53:03] ...
[16:53:55] miraheze/mw-config - Universal-Omega the build passed.
[16:54:17] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 19.95, 13.22, 11.08
[16:55:05] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/593f5792c1ff...16ce14a57e72
[16:55:06] [url] Comparing 593f5792c1ff...16ce14a57e72 · miraheze/ssl · GitHub | github.com
[16:55:07] [miraheze/ssl] MirahezeSSLBot 16ce14a - Bot: Update SSL cert for landofliberos.com
[16:56:03] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.04, 4.81, 3.59
[16:57:12] !log [@test131] starting deploy of {'config': True} to all
[16:57:13] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 0s
[16:57:28] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[16:57:38] [miraheze/ssl] MacFan4000 pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/16ce14a57e72...ba381318073b
[16:57:38] [url] Comparing 16ce14a57e72...ba381318073b · miraheze/ssl · GitHub | github.com
[16:57:39] [miraheze/ssl] MacFan4000 ba38131 - renew feathercare.net
[16:57:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 9.89, 6.65, 4.44
[16:57:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[17:00:20] PROBLEM - test131 Puppet on test131 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[nginx]
[17:07:59] RECOVERY - www.project-patterns.com - LetsEncrypt on sslhost is OK: OK - Certificate 'project-patterns.com' will expire on Sat 12 Nov 2022 15:41:27 GMT +0000.
[17:09:53] !log [@mwtask141] starting deploy of {'config': True} to all
[17:10:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[17:10:08] !log [@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 15s
[17:10:15] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[17:11:28] [dns] Universal-Omega opened pull request #332: Remove no-longer pointing domain zones - https://github.com/miraheze/dns/pull/332
[17:11:29] [url] Page not found · GitHub · GitHub | github.com
[17:11:47] [dns] Universal-Omega synchronize pull request #332: Remove no-longer pointing domain zones - https://github.com/miraheze/dns/pull/332
[17:11:48] [url] Page not found · GitHub · GitHub | github.com
[17:13:16] [dns] Universal-Omega synchronize pull request #332: Remove no-longer pointing domain zones - https://github.com/miraheze/dns/pull/332
[17:13:17] [url] Page not found · GitHub · GitHub | github.com
[17:14:00] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-1/±0] https://github.com/miraheze/ssl/commit/ea08bbeba94e
[17:14:02] [miraheze/ssl] Universal-Omega ea08bbe - Delete fortis.wiki.crt
[17:14:03] [ssl] Universal-Omega created branch Universal-Omega-patch-1 - https://github.com/miraheze/ssl
[17:14:04] [url] Page not found · GitHub · GitHub | github.com
[17:14:19] [ssl] Universal-Omega opened pull request #591: Remove no-longer pointing custom domains - https://github.com/miraheze/ssl/pull/591
[17:14:19] [url] Page not found · GitHub · GitHub | github.com
[17:14:48] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-1/±0] https://github.com/miraheze/ssl/compare/ea08bbeba94e...5a5ad6c054c8
[17:14:49] [url] Comparing ea08bbeba94e...5a5ad6c054c8 · miraheze/ssl · GitHub | github.com
[17:14:50] [miraheze/ssl] Universal-Omega 5a5ad6c - Delete zh.internetpedia.tk.crt
[17:14:51] [ssl] Universal-Omega synchronize pull request #591: Remove no-longer pointing custom domains - https://github.com/miraheze/ssl/pull/591
[17:14:52] ...
[17:14:56] [dns] Reception123 closed pull request #332: Remove no-longer pointing domain zones - https://github.com/miraheze/dns/pull/332
[17:14:56] ...
[17:14:57] [miraheze/dns] Reception123 pushed 1 commit to master [+0/-3/±0] https://github.com/miraheze/dns/compare/acae9a3ae526...5073918b629a
[17:14:58] [url] Comparing acae9a3ae526...5073918b629a · miraheze/dns · GitHub | github.com
[17:14:59] [miraheze/dns] Universal-Omega 5073918 - Remove no-longer pointing domain zones (#332)
[17:15:12] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-1/±0] https://github.com/miraheze/ssl/compare/5a5ad6c054c8...4c3f575e3390
[17:15:13] [url] Comparing 5a5ad6c054c8...4c3f575e3390 · miraheze/ssl · GitHub | github.com
[17:15:13] [miraheze/ssl] Universal-Omega 4c3f575 - Delete ipv6bolivia.tk.crt
[17:15:15] [ssl] Universal-Omega synchronize pull request #591: Remove no-longer pointing custom domains - https://github.com/miraheze/ssl/pull/591
[17:15:15] [url] Page not found · GitHub · GitHub | github.com
[17:16:02] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-1/±0] https://github.com/miraheze/ssl/compare/4c3f575e3390...c979603556c6
[17:16:02] [url] Comparing 4c3f575e3390...c979603556c6 · miraheze/ssl · GitHub | github.com
[17:16:03] [miraheze/ssl] Universal-Omega c979603 - Delete wiki.ameristraliagov.com.crt
[17:16:05] [ssl] Universal-Omega synchronize pull request #591: Remove no-longer pointing custom domains - https://github.com/miraheze/ssl/pull/591
[17:16:05] [url] Page not found · GitHub · GitHub | github.com
[17:17:51] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-0/±1] https://github.com/miraheze/ssl/compare/c979603556c6...d8687a90bb5e
[17:17:51] [url] Comparing c979603556c6...d8687a90bb5e · miraheze/ssl · GitHub | github.com
[17:17:52] [miraheze/ssl] Universal-Omega d8687a9 - Update redirects.yaml
[17:17:54] [ssl] Universal-Omega synchronize pull request #591: Remove no-longer pointing custom domains - https://github.com/miraheze/ssl/pull/591
[17:17:54] [url] Page not found · GitHub · GitHub | github.com
[17:19:14] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-0/±1] https://github.com/miraheze/ssl/compare/d8687a90bb5e...f6606f9b1181
[17:19:14] [url] Comparing d8687a90bb5e...f6606f9b1181 · miraheze/ssl · GitHub | github.com
[17:19:15] [miraheze/ssl] Universal-Omega f6606f9 - Update certs.yaml
[17:19:17] [ssl] Universal-Omega synchronize pull request #591: Remove no-longer pointing custom domains - https://github.com/miraheze/ssl/pull/591
[17:19:17] [url] Page not found · GitHub · GitHub | github.com
[17:19:28] [ssl] Universal-Omega edited pull request #591: Remove no-longer pointing custom domain certs - https://github.com/miraheze/ssl/pull/591
[17:19:28] [url] Page not found · GitHub · GitHub | github.com
[17:21:46] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.74, 3.21, 3.84
[17:21:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.76, 3.44, 3.90
[17:23:32] RECOVERY - feathercare.net - LetsEncrypt on sslhost is OK: OK - Certificate 'feathercare.net' will expire on Sat 12 Nov 2022 15:56:52 GMT +0000.
[17:27:42] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.02, 3.25, 3.60
[17:27:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.04, 3.96, 3.93
[17:28:25] [miraheze/ssl] Universal-Omega deleted branch Universal-Omega-patch-1
[17:28:26] [ssl] Universal-Omega closed pull request #591: Remove no-longer pointing custom domain certs - https://github.com/miraheze/ssl/pull/591
[17:28:27] [url] Page not found · GitHub · GitHub | github.com
[17:28:28] [miraheze/ssl] Universal-Omega pushed 1 commit to master [+0/-4/±2] https://github.com/miraheze/ssl/compare/ba381318073b...280b14956fc8
[17:28:28] [url] Comparing ba381318073b...280b14956fc8 · miraheze/ssl · GitHub | github.com
[17:28:29] [miraheze/ssl] Universal-Omega 280b149 - Remove no-longer pointing custom domain certs (#591)
[17:28:31] [ssl] Universal-Omega deleted branch Universal-Omega-patch-1 - https://github.com/miraheze/ssl
[17:28:31] ...
[17:29:40] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.60, 3.55, 3.68
[17:29:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.45, 3.41, 3.73
[17:33:48] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.83, 3.72, 3.76
[17:35:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.69, 3.30, 3.60
[17:39:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.10, 2.89, 3.38
[17:39:57] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[17:41:32] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.16, 2.43, 3.16
[17:45:48] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.61, 3.51, 3.46
[17:47:48] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.75, 2.80, 3.21
[17:51:00] RECOVERY - landofliberos.com - LetsEncrypt on sslhost is OK: OK - Certificate 'landofliberos.com' will expire on Sat 12 Nov 2022 15:55:00 GMT +0000.
[17:51:40] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[17:51:51] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.28, 10.91, 11.96
[17:55:45] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.18, 11.32, 11.80
[17:57:19] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.70, 10.99, 11.89
[17:57:42] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.32, 11.44, 11.83
[17:58:19] RECOVERY - test131 Puppet on test131 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:07:27] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.85, 11.36, 11.34
[18:09:14] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 7.14, 8.77, 10.10
[18:09:23] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.58, 10.65, 11.08
[18:13:11] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.82, 10.01, 10.36
[18:15:11] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 6.32, 8.62, 9.81
[18:23:02] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 8.30, 9.06, 10.14
[18:23:48] RECOVERY - cp20 NTP time on cp20 is OK: NTP OK: Offset 0.001469999552 secs
[18:23:48] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 29 minutes ago with 0 failures
[18:23:48] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 5.236 second response time
[18:23:48] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:23:48] PROBLEM - cp20 Disk Space on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:23:49] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:23:51] PROBLEM - cp20 APT on cp20 is CRITICAL: APT CRITICAL: 19 packages available for upgrade (18 critical updates).
[18:23:51] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:23:51] PROBLEM - cp20 Varnish Backends on cp20 is CRITICAL: 5 backends are down. mw121 mw122 mw131 mw141 mw142
[18:23:58] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.018 second response time
[18:24:13] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 1.068 second response time
[18:24:18] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.023 second response time
[18:24:21] PROBLEM - cp20 APT on cp20 is CRITICAL: APT CRITICAL: 19 packages available for upgrade (18 critical updates).
[18:24:21] PROBLEM - cp20 Disk Space on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:24:21] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:24:21] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:24:21] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:24:21] PROBLEM - cp20 Varnish Backends on cp20 is CRITICAL: 5 backends are down. mw121 mw122 mw131 mw141 mw142
[18:24:33] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 0.156 second response time
[18:24:33] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.063 second response time
[18:24:33] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 19%
[18:24:33] PROBLEM - cp21 Disk Space on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:24:36] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
PROBLEM - cp21 APT on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:24:36] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 7 backends are down. mw121 mw122 mw131 mw132 mw141 mw142 mediawiki
[18:24:41] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:24:43] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:24:46] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:24:48] RECOVERY - cp21 PowerDNS Recursor on cp21 is OK: DNS OK: 0.158 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a
[18:24:53] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:25:11] PROBLEM - cp21 Puppet on cp21 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 25 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/apt/trusted.gpg.d/puppetlabs.gpg]
[18:25:21] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 7 backends are down. mw121 mw122 mw131 mw132 mw141 mw142 mediawiki
[18:25:21] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:25:21] PROBLEM - cp21 Puppet on cp21 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 25 minutes ago with 1 failures. Failed resources (up to 3 shown): File[/etc/apt/trusted.gpg.d/puppetlabs.gpg]
[18:25:21] PROBLEM - cp21 APT on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:25:21] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:25:21] PROBLEM - cp21 Disk Space on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:25:24] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[18:25:32] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 1.018 second response time
[18:26:31] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:26:41] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is WARNING: WARNING - NGINX Error Rate is 42%
[18:26:46] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:26:46] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 3.055 second response time
[18:26:47] RECOVERY - cp20 Stunnel HTTP for mail121 on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 0.050 second response time
[18:26:53] RECOVERY - cp21 Disk Space on cp21 is OK: DISK OK - free space: / 12644 MB (32% inode=96%);
[18:27:01] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:27:18] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:27:43] PROBLEM - cp20 Stunnel HTTP for mwtask141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:28:34] RECOVERY - cp20 Disk Space on cp20 is OK: DISK OK - free space: / 12603 MB (32% inode=96%);
[18:28:39] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 36%
[18:28:50] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:29:00] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.060 second response time
[18:29:11] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:29:37] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 3.046 second response time
[18:29:40] RECOVERY - cp20 Stunnel HTTP for mwtask141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14742 bytes in 2.074 second response time
[18:29:50] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:30:06] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:30:08] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:30:20] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:30:32] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:30:34] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3420 bytes in 7.315 second response time
[18:30:56] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.027 second response time
[18:31:08] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:31:27] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[18:31:36] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.042 second response time
[18:32:17] PROBLEM - cp21 Stunnel HTTP for mw132 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:33:48] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:34:09] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99038 bytes in 0.297 second response time
[18:34:42] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:34:47] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 7.221 second response time
[18:35:11] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 0.017 second response time
[18:35:33] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:35:44] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:37:15] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35770 bytes in 7.162 second response time
[18:37:15] RECOVERY - cp21 Stunnel HTTP for mw132 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.048 second response time
[18:37:22] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[18:37:44] PROBLEM - cp21 conntrack_table_size on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:37:46] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:37:53] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.056 second response time
[18:38:12] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.017 second response time
[18:38:12] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:38:17] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:38:20] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 13 minutes ago with 0 failures
[18:38:47] PROBLEM - cp20 conntrack_table_size on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:39:43] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.014 second response time
[18:39:46] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 0.018 second response time
[18:40:07] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.029 second response time
[18:40:28] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.010 second response time
[18:41:18] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:41:25] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:42:01] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 1.040 second response time
[18:42:22] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:42:39] RECOVERY - cp21 conntrack_table_size on cp21 is OK: OK: nf_conntrack is 0 % full
[18:42:49] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:42:53] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35724 bytes in 0.032 second response time
[18:42:53] PROBLEM - cp20 Stunnel HTTP for mwtask141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:42:59] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.02, 11.08, 9.33
[18:43:19] RECOVERY - cp20 Stunnel HTTP for mail121 on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 3.040 second response time
[18:43:37] RECOVERY - cp20 conntrack_table_size on cp20 is OK: OK: nf_conntrack is 0 % full
[18:43:58] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:44:16] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:44:27] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 0.148 second response time
[18:44:58] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.05, 10.44, 9.28
[18:45:52] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.062 second response time
[18:45:53] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[18:46:04] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.033 second response time
[18:46:12] PROBLEM - cp21 Stunnel HTTP for mw132 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:46:23] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.019 second response time
[18:46:57] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.07, 9.62, 9.13
[18:47:34] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:47:38] PROBLEM - cp21 Stunnel HTTP for mw141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:47:50] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.037 second response time
[18:47:51] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:47:53] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:48:02] RECOVERY - cp20 Stunnel HTTP for mwtask141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 2.067 second response time
[18:48:02] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:49:17] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.048 second response time
[18:49:20] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:49:46] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:50:04] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 0.014 second response time
[18:50:48] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 1.073 second response time
[18:51:18] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:51:31] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 0.259 second response time
[18:51:45] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:52:12] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35762 bytes in 7.286 second response time
[18:52:36] RECOVERY - cp21 Stunnel HTTP for mw141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 5.089 second response time
[18:52:59] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:53:09] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 3.052 second response time
[18:53:20] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:53:30] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35724 bytes in 1.053 second response time
[18:53:34] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=psychevoswiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/psychevoswiki-20220812232100.xml (START)
[18:53:39] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3418 bytes in 0.016 second response time
[18:53:40] PROBLEM - cloud13 Puppet on cloud13 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[ulogd2]
[18:53:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:53:52] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[18:54:01] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:55:06] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:55:33] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.047 second response time
[18:55:41] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 7.188 second response time
[18:55:54] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:56:12] PROBLEM - cp20 conntrack_table_size on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:56:13] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=psychevoswiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/psychevoswiki-20220812232100.xml (END - exit=0)
[18:56:14] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=psychevoswiki (START)
[18:56:14] RECOVERY - cp20 Stunnel HTTP for mail121 on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 0.012 second response time
[18:56:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:56:25] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:56:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:56:52] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.60, 10.31, 9.51
[18:57:07] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.017 second response time
[18:57:33] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:57:52] PROBLEM - cp21 Stunnel HTTP for mw141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:57:59] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[18:58:13] RECOVERY - cp20 conntrack_table_size on cp20 is OK: OK: nf_conntrack is 0 % full
[18:58:23] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:58:53] PROBLEM - cp20 Stunnel HTTP for mwtask141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:59:03] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[18:59:07] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3420 bytes in 1.032 second response time
[18:59:12] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=psychevoswiki (END - exit=0)
[18:59:13] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=psychevoswiki --active --update (END - exit=0)
[18:59:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:59:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[18:59:27] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[18:59:38] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:00:17] RECOVERY - cp21 Puppet on cp21 is OK: OK: Puppet is currently enabled, last run 4 minutes ago with 0 failures
[19:00:24] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:00:24] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:00:27] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:00:45] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 2.088 second response time
[19:00:47] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 2.049 second response time
[19:00:47] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 22%
[19:00:53] RECOVERY - cp20 Stunnel HTTP for mwtask141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 0.014 second response time
[19:01:34] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.016 second response time
[19:01:57] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.37, 11.46, 10.11
[19:02:33] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 0.020 second response time
[19:02:38] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:02:50] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:03:20] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:03:57] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.57, 10.91, 10.07
[19:04:10] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.040 second response time
[19:04:42] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 0.141 second response time
[19:04:44] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:05:14] [puppet] MacFan4000 opened pull request #2783: move universalomega to ops - https://github.com/miraheze/puppet/pull/2783
[19:05:15] [url] Page not found · GitHub · GitHub | github.com
[19:05:26] PROBLEM - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:05:33] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.045 second response time
[19:05:35] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:05:59] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:06:16] RECOVERY - cp21 Stunnel HTTP for mw132 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.021 second response time
[19:06:38] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:06:43] PROBLEM - cp21 Current Load on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:07:29] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:07:37] RECOVERY - cp20 HTTP 4xx/5xx ERROR Rate on cp20 is OK: OK - NGINX Error Rate is 24%
[19:07:51] RECOVERY - cp20 Stunnel HTTP for mail121 on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 3.036 second response time
[19:07:53] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.021 second response time
[19:07:57] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.11, 10.90, 10.21
[19:08:06] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:08:24] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.026 second response time
[19:08:31] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:08:48] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.99, 11.82, 10.59
[19:09:13] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:09:42] PROBLEM - cp21 Disk Space on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:09:50] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:09:57] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 11.44, 10.89, 10.27
[19:10:38] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:10:49] PROBLEM - cp20 ferm_active on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:10:58] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:11:14] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 1.023 second response time
[19:11:19] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:11:57] RECOVERY - mw122 Current Load on mw122 is OK: OK - load average: 7.20, 9.54, 9.85
[19:12:01] RECOVERY - cp21 Stunnel HTTP for mw141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.026 second response time
[19:12:01] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.046 second response time
[19:12:32] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 3.054 second response time
[19:12:44] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=chikkuntakkunwiki --no-updates --username-prefix=wikia:chikkuntakkun /mnt/mediawiki-static/metawiki/ImportDump/chikkuntakkunwiki-20220813012321.xml (START)
[19:12:46] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.23, 11.94, 11.00
[19:12:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:12:58] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.016 second response time
[19:12:58] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 5.106 second response time
[19:13:08] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 5.282 second response time
[19:13:08] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35731 bytes in 3.093 second response time
[19:13:10] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:13:16] RECOVERY - cp20 ferm_active on cp20 is OK: OK ferm input default policy is set
[19:13:25] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:13:37] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.011 second response time
[19:14:32] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.08, 0.03, 0.03
[19:14:46] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=chikkuntakkunwiki --no-updates --username-prefix=wikia:chikkuntakkun /mnt/mediawiki-static/metawiki/ImportDump/chikkuntakkunwiki-20220813012321.xml (END - exit=0)
[19:14:47] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=chikkuntakkunwiki (START)
[19:14:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:14:59] PROBLEM - cp20 Stunnel HTTP for mwtask141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:15:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:15:18] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 1.032 second response time
[19:15:19] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3419 bytes in 0.015 second response time
[19:16:00] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=chikkuntakkunwiki (END - exit=0)
[19:16:01] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=chikkuntakkunwiki --active --update (END - exit=0)
[19:16:11] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:16:12] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:16:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:17:01] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:17:03] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:17:41] RECOVERY - cp21 Disk Space on cp21 is OK: DISK OK - free space: / 12639 MB (32% inode=96%);
[19:17:57] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.43, 10.17, 10.02
[19:18:09] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.077 second response time
[19:18:26] PROBLEM - cp21 Puppet on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:18:41] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.021 second response time
[19:18:44] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.11, 11.30, 10.97
[19:18:49] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3426 bytes in 0.017 second response time
[19:18:58] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.026 second response time
[19:19:27] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:19:36] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.85, 3.59, 3.19
[19:19:43] PROBLEM - cp20 PowerDNS Recursor on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:19:57] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:20:03] RECOVERY - cp20 Stunnel HTTP for mwtask141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 2.077 second response time
[19:20:06] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=androidwiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/androidwiki-20220814135447.xml (START)
[19:20:15] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:20:21] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:20:28] RECOVERY - cp21 Puppet on cp21 is OK: OK: Puppet is currently enabled, last run 25 minutes ago with 0 failures
[19:20:43] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.58, 11.69, 11.18
[19:20:45] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:20:45] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:21:24] RECOVERY - cloud13 Puppet on cloud13 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[19:21:30] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.84, 4.31, 3.49
[19:21:32] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 7.390 second response time
[19:21:42] RECOVERY - cp20 PowerDNS Recursor on cp20 is OK: DNS OK: 0.136 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a
[19:21:55] [miraheze/puppet] Reception123 pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/f519dd400df8...69ac85ca4ce2
[19:21:56] [url] Comparing f519dd400df8...69ac85ca4ce2 · miraheze/puppet · GitHub | github.com
[19:21:57] [miraheze/puppet] Reception123 69ac85c - add universalomega to ops
[19:21:57] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 14.71, 11.31, 10.42
[19:22:03] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 3.047 second response time
[19:22:08] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:22:25] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 30%
[19:22:42] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 14.71, 12.85, 11.65
[19:23:07] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:23:58] Reception123: he also should be removed from bastion and ssl-admins
[19:24:00] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/importDump.php --wiki=androidwiki --no-updates --username-prefix=w /mnt/mediawiki-static/metawiki/ImportDump/androidwiki-20220814135447.xml (END - exit=0)
[19:24:01] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=androidwiki (START)
[19:24:05] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:24:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:24:19] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:24:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:24:35] MacFan4000: ah thanks for noticing
[19:24:38] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 1.182 second response time
[19:24:39] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:24:40] [puppet] MacFan4000 closed pull request #2783: move universalomega to ops - https://github.com/miraheze/puppet/pull/2783
[19:24:40] [url] Page not found · GitHub · GitHub | github.com
[19:24:41] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.54, 3.32, 2.73
[19:24:52] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[19:25:03] [miraheze/puppet] Reception123 pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/69ac85ca4ce2...a7919a70cfb8
[19:25:04] [url] Comparing 69ac85ca4ce2...a7919a70cfb8 · miraheze/puppet · GitHub | github.com
[19:25:05] [miraheze/puppet] Reception123 a7919a7 - rm universalomega from bastion/ssl-admins as he's now ops
[19:25:05] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.070 second response time
[19:25:09] PROBLEM - cp21 Current Load on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:25:57] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 10.43, 11.90, 10.96
[19:26:20] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.015 second response time
[19:26:38] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.021 second response time
[19:26:57] PROBLEM - cp20 Stunnel HTTP for mail121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:27:05] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.24, 0.13, 0.04
[19:28:07] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:28:37] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.49, 3.69, 3.02
[19:28:40] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.54, 11.27, 11.44
[19:28:51] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/rebuildall.php --wiki=androidwiki (END - exit=0)
[19:28:52] !log [macfan@mwtask141] sudo -u www-data php /srv/mediawiki/w/maintenance/initSiteStats.php --wiki=androidwiki --active --update (END - exit=0)
[19:29:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:29:05] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.85, 3.90, 3.77
[19:29:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[19:29:12] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.048 second response time
[19:29:27] PROBLEM - cp21 PowerDNS Recursor on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:29:33] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:29:57] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.56, 11.08, 10.73
[19:30:15] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:30:35] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.93, 3.74, 3.12
[19:30:39] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.24, 11.57, 11.52
[19:31:07] RECOVERY - cp20 Stunnel HTTP for mail121 on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 3.042 second response time
[19:31:08] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 1.253 second response time
[19:31:19] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 1.090 second response time
[19:31:24] RECOVERY - cp21 PowerDNS Recursor on cp21 is OK: DNS OK: 0.142 seconds response time. miraheze.org returns 149.56.140.43,149.56.141.75,2607:5300:201:3100::5ebc,2607:5300:201:3100::929a
[19:31:25] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 5.090 second response time
[19:32:12] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:32:38] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.61, 10.93, 11.29
[19:32:56] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.010 second response time
[19:33:32] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:33:38] PROBLEM - cp21 Stunnel HTTP for mw132 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:33:40] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:34:31] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.23, 3.01, 2.97
[19:35:19] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:35:29] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.014 second response time
[19:35:40] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 2.068 second response time
[19:35:57] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 9.36, 11.12, 10.98
[19:36:19] PROBLEM - cp20 Stunnel HTTP for mwtask141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:36:32] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:36:41] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:36:47] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[19:36:48] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 3.37, 2.93, 3.33
[19:37:04] PROBLEM - cp21 Stunnel HTTP for mon141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:37:58] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:38:16] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:38:29] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:38:35] RECOVERY - cp20 Stunnel HTTP for mwtask141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14742 bytes in 1.029 second response time
[19:38:46] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 3.616 second response time
[19:39:19] [puppet] MacFan4000 opened pull request #2784: update icinga groups for universalomega - https://github.com/miraheze/puppet/pull/2784
[19:39:20] [url] Page not found · GitHub · GitHub | github.com
[19:39:31] Reception123: ^
[19:40:00] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:40:10] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:40:29] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:40:35] RECOVERY - mw121 Current Load on mw121 is OK: OK - load average: 8.08, 8.94, 10.18
[19:41:07] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:41:14] thanks MacFan4000
[19:41:21] [puppet] Reception123 closed pull request #2784: update icinga groups for universalomega - https://github.com/miraheze/puppet/pull/2784
[19:41:21] [url] Page not found · GitHub · GitHub | github.com
[19:41:22] [miraheze/puppet] Reception123 pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/a7919a70cfb8...52764760ad4f
[19:41:23] [url] Comparing a7919a70cfb8...52764760ad4f · miraheze/puppet · GitHub | github.com
[19:41:24] [miraheze/puppet] MacFan4000 5276476 - update icinga groups for universalomega (#2784)
[19:41:34] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:41:42] RECOVERY - cp21 Stunnel HTTP for mw132 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.032 second response time
[19:41:44] PROBLEM - cp21 Current Load on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:41:47] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 3.053 second response time
[19:41:57] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 38%
[19:42:44] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 3.083 second response time
[19:42:49] PROBLEM - cp21 SSH on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:42:53] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:43:04] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 1.167 second response time
[19:43:45] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.06, 0.03, 0.00
[19:43:53] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:44:06] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:44:39] RECOVERY - cp21 Stunnel HTTP for mon141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 35762 bytes in 0.033 second response time
[19:45:06] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:45:13] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:46:02] PROBLEM - eng.archiopedia.org - LetsEncrypt on sslhost is CRITICAL: connect to address eng.archiopedia.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:46:03] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.043 second response time
[19:46:10] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3419 bytes in 3.054 second response time
[19:46:12] PROBLEM - cp21 Stunnel HTTP for mw132 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:46:19] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:46:31] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.026 second response time
[19:46:40] PROBLEM - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is WARNING: WARNING - NGINX Error Rate is 40%
[19:47:18] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 5.481 second response time
[19:47:29] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 1.084 second response time
[19:47:50] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.018 second response time
[19:48:17] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:48:19] RECOVERY - cp21 Stunnel HTTP for mw132 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 2.038 second response time
[19:48:38] RECOVERY - cp21 HTTP 4xx/5xx ERROR Rate on cp21 is OK: OK - NGINX Error Rate is 33%
[19:48:47] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.017 second response time
[19:48:49] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:49:09] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:49:57] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 12.89, 11.42, 10.92
[19:50:12] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 7.079 second response time
[19:50:30] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 14.73, 11.91, 10.88
[19:50:45] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:51:38] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:51:58] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:55:15] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:55:16] RECOVERY - cp21 SSH on cp21 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[19:56:12] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:56:37] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.39, 11.62, 11.09
[19:57:34] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:57:38] PROBLEM - cp21 Stunnel HTTP for matomo131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:57:47] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35770 bytes in 3.070 second response time
[19:57:59] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 7.216 second response time
[19:57:59] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:58:13] PROBLEM - cp21 Current Load on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:58:18] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.069 second response time
[19:58:32] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 16.53, 13.39, 11.79
[19:58:33] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:59:09] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[19:59:28] PROBLEM - cp20 Stunnel HTTP for mwtask141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[19:59:50] RECOVERY - cp21 Stunnel HTTP for matomo131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 3.221 second response time
[20:00:04] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.021 second response time
[20:00:10] RECOVERY - cp21 Current Load on cp21 is OK: OK - load average: 0.05, 0.08, 0.03
[20:00:11] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 7.122 second response time
[20:00:28] PROBLEM - cp20 Disk Space on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:00:46] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.096 second response time
[20:00:51] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 4 minutes ago with 0 failures
[20:00:57] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.015 second response time
[20:01:03] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 2.048 second response time
[20:01:13] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.044 second response time
[20:01:34] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3419 bytes in 0.015 second response time
[20:02:17] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.029 second response time
[20:02:23] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.83, 11.47, 11.31
[20:02:42] RECOVERY - cp20 Disk Space on cp20 is OK: DISK OK - free space: / 12595 MB (32% inode=96%);
[20:03:44] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:03:54] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:04:57] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:05:40] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 0.090 second response time
[20:05:58] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 1.059 second response time
[20:06:11] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14726 bytes in 0.014 second response time
[20:06:13] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:08:09] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 1.036 second response time
[20:08:13] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:08:32] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:09:02] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14728 bytes in 1.035 second response time
[20:10:05] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.98, 10.87, 11.00
[20:10:33] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 3.038 second response time
[20:10:39] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.045 second response time
[20:11:08] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:11:11] PROBLEM - cp20 Stunnel HTTP for mw141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:11:43] PROBLEM - cp20 Stunnel HTTP for reports121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:12:07] PROBLEM - cp21 Stunnel HTTP for test131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:12:12] PROBLEM - cp20 Stunnel HTTP for test131 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:13:23] RECOVERY - cp20 Stunnel HTTP for mw141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.016 second response time
[20:13:49] PROBLEM - cp21 Disk Space on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:13:50] PROBLEM - cp20 Stunnel HTTP for phab121 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:14:09] RECOVERY - cp21 Stunnel HTTP for test131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 0.016 second response time
[20:14:12] RECOVERY - cp20 Stunnel HTTP for test131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14740 bytes in 2.045 second response time
[20:14:41] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 1.079 second response time
[20:15:04] RECOVERY - eng.archiopedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'eng.archiopedia.org' will expire on Sun 02 Oct 2022 05:35:50 GMT +0000.
[20:15:51] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:16:18] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:16:21] PROBLEM - cp21 Stunnel HTTP for mw131 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:17:50] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:17:52] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3419 bytes in 3.065 second response time
[20:18:16] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 3.086 second response time
[20:18:21] RECOVERY - cp21 Stunnel HTTP for mw131 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.016 second response time
[20:18:28] RECOVERY - cp20 Stunnel HTTP for mwtask141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 7.298 second response time
[20:18:49] RECOVERY - cp20 Stunnel HTTP for phab121 on cp20 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.105 second response time
[20:19:11] RECOVERY - cp21 Disk Space on cp21 is OK: DISK OK - free space: / 12634 MB (32% inode=96%);
[20:19:50] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14724 bytes in 5.418 second response time
[20:20:08] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:20:12] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 4.488 second response time
[20:20:40] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:21:20] RECOVERY - cp20 Stunnel HTTP for reports121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.040 second response time
[20:21:54] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+1/-0/±0] https://github.com/miraheze/ssl/commit/4a009a161c27
[20:21:56] [miraheze/ssl] Universal-Omega 4a009a1 - Add wiki.bluevertigo.org cert
[20:21:57] [ssl] Universal-Omega created branch Universal-Omega-patch-1 - https://github.com/miraheze/ssl
[20:21:58] [url] Page not found · GitHub · GitHub | github.com
[20:21:59] [ssl] Universal-Omega opened pull request #592: Add wiki.bluevertigo.org cert - https://github.com/miraheze/ssl/pull/592
[20:21:59] ...
[20:22:25] RECOVERY - cp21 Varnish Backends on cp21 is OK: All 14 backends are healthy
[20:22:34] [miraheze/ssl] Universal-Omega pushed 1 commit to Universal-Omega-patch-1 [+0/-0/±1] https://github.com/miraheze/ssl/compare/4a009a161c27...3f01202e0bc5
[20:22:35] [url] Comparing 4a009a161c27...3f01202e0bc5 · miraheze/ssl · GitHub | github.com
[20:22:36] [miraheze/ssl] Universal-Omega 3f01202 - Update certs.yaml
[20:22:37] [ssl] Universal-Omega synchronize pull request #592: Add wiki.bluevertigo.org cert - https://github.com/miraheze/ssl/pull/592
[20:22:38] [url] Page not found · GitHub · GitHub | github.com
[20:22:47] PROBLEM - gluster122 Current Load on gluster122 is CRITICAL: CRITICAL - load average: 4.20, 2.75, 1.98
[20:23:06] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 12.09, 6.49, 4.20
[20:23:06] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 10.07, 5.97, 3.88
[20:24:06] [ssl] Universal-Omega closed pull request #592: Add wiki.bluevertigo.org cert - https://github.com/miraheze/ssl/pull/592
[20:24:07] [url] Page not found · GitHub · GitHub | github.com
[20:24:08] [miraheze/ssl] Universal-Omega deleted branch Universal-Omega-patch-1
[20:24:09] [miraheze/ssl] Universal-Omega pushed 1 commit to master [+1/-0/±1] https://github.com/miraheze/ssl/compare/280b14956fc8...2bd1daaf93b7
[20:24:10] [url] Comparing 280b14956fc8...2bd1daaf93b7 · miraheze/ssl · GitHub | github.com
[20:24:11] [miraheze/ssl] Universal-Omega 2bd1daa - Add wiki.bluevertigo.org cert (#592)
[20:24:12] [ssl] Universal-Omega deleted branch Universal-Omega-patch-1 - https://github.com/miraheze/ssl
[20:24:13] [url] Page not found · GitHub · GitHub | github.com
[20:24:44] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 3.246 second response time
[20:24:47] RECOVERY - gluster122 Current Load on gluster122 is OK: OK - load average: 3.21, 2.77, 2.07
[20:25:28] RECOVERY - cp20 Varnish Backends on cp20 is OK: All 14 backends are healthy
[20:26:43] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 1 backends are down. mw122
[20:26:47] [puppet] Universal-Omega commented on pull request #2597: add WikiMiniAtlas to frame-src - https://github.com/miraheze/puppet/pull/2597#issuecomment-1214443433
[20:26:48] [url] add WikiMiniAtlas to frame-src by ugochimobi · Pull Request #2597 · miraheze/puppet · GitHub | github.com
[20:26:49] [puppet] Universal-Omega closed pull request #2597: add WikiMiniAtlas to frame-src - https://github.com/miraheze/puppet/pull/2597
[20:26:49] [url] Page not found · GitHub · GitHub | github.com
[20:28:40] RECOVERY - cp21 Varnish Backends on cp21 is OK: All 14 backends are healthy
[20:34:18] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:34:21] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:35:08] PROBLEM - cp20 Puppet on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:35:37] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[20:35:48] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 2 backends are down. mw132 mw142
[20:36:19] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[20:37:06] RECOVERY - cp20 Puppet on cp20 is OK: OK: Puppet is currently enabled, last run 12 minutes ago with 0 failures
[20:38:13] PROBLEM - cp20 Varnish Backends on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:38:41] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.109 second response time
[20:40:50] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:41:14] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:41:52] PROBLEM - cp20 Stunnel HTTP for mw131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:42:28] PROBLEM - cp20 Stunnel HTTP for puppet141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:43:08] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:43:41] PROBLEM - cp21 ferm_active on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:44:03] RECOVERY - cp20 Stunnel HTTP for mw131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.014 second response time
[20:44:36] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:45:23] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.072 second response time
[20:46:10] RECOVERY - cp21 ferm_active on cp21 is OK: OK ferm input default policy is set
[20:46:10] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35755 bytes in 0.038 second response time
[20:46:27] RECOVERY - cp20 Stunnel HTTP for puppet141 on cp20 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.016 second response time
[20:46:42] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.014 second response time
[20:47:31] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:48:54] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.572 second response time
[20:49:19] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.04, 3.04, 3.81
[20:49:26] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.77, 2.91, 4.00
[20:49:30] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[20:49:39] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 7.422 second response time
[20:51:22] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:51:26] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 2.65, 3.22, 4.01
[20:53:19] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 3.159 second response time
[20:53:25] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.22, 3.11, 3.86
[20:53:27] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[20:54:44] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:55:25] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.51, 3.67, 3.97
[20:55:45] RECOVERY - cp21 Varnish Backends on cp21 is OK: All 14 backends are healthy
[20:56:55] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 7.260 second response time
[20:58:17] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[20:59:24] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.51, 3.60, 3.94
[20:59:24] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[21:00:05] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 7 backends are down. mw121 mw122 mw131 mw132 mw141 mw142 mediawiki
[21:00:30] PROBLEM - cp20 Stunnel HTTP for mwtask141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:00:48] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.010 second response time
[21:02:28] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:02:30] RECOVERY - cp20 Stunnel HTTP for mwtask141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14736 bytes in 1.030 second response time
[21:02:50] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.06, 3.92, 3.80
[21:03:21] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::1b80/cpweb
[21:03:23] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.15, 3.95, 3.94
[21:04:46] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.65, 3.88, 3.81
[21:05:29] PROBLEM - cp21 Stunnel HTTP for mwtask141 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:07:11] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.063 second response time
[21:07:27] RECOVERY - cp21 Stunnel HTTP for mwtask141 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14742 bytes in 1.043 second response time
[21:07:49] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:09:06] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:09:21] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.26, 3.63, 3.86
[21:09:44] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 0.302 second response time
[21:11:24] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:12:55] PROBLEM - cp21 conntrack_table_size on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:13:16] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[21:13:16] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:13:26] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35724 bytes in 7.306 second response time
[21:14:29] PROBLEM - cp20 Stunnel HTTP for mw142 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:14:44] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.34, 3.35, 3.52
[21:14:55] PROBLEM - cp20 Stunnel HTTP for matomo131 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:15:19] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.30, 5.04, 4.30
[21:15:22] RECOVERY - cp21 conntrack_table_size on cp21 is OK: OK: nf_conntrack is 0 % full
[21:16:44] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.71, 3.17, 3.44
[21:16:55] PROBLEM - cp20 HTTPS on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:16:59] RECOVERY - cp20 Stunnel HTTP for mw142 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.030 second response time
[21:17:22] RECOVERY - cp20 Stunnel HTTP for matomo131 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 99061 bytes in 1.282 second response time
[21:17:49] PROBLEM - cp21 Stunnel HTTP for puppet141 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:18:43] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:18:44] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.64, 2.93, 3.31
[21:19:18] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.42, 3.69, 3.91
[21:19:44] PROBLEM - cp20 SSH on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:19:54] PROBLEM - cp21 Stunnel HTTP for mail121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:20:19] PROBLEM - cp20 Stunnel HTTP for mon141 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:20:43] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 1.042 second response time
[21:20:53] RECOVERY - cp20 HTTPS on cp20 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3419 bytes in 0.018 second response time
[21:21:28] PROBLEM - cp20 Stunnel HTTP for mw132 on cp20 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:21:44] RECOVERY - cp20 SSH on cp20 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u1 (protocol 2.0)
[21:22:01] RECOVERY - cp21 Stunnel HTTP for puppet141 on cp21 is OK: HTTP OK: Status line output matched "403" - 289 bytes in 0.010 second response time
[21:22:06] RECOVERY - cp21 Stunnel HTTP for mail121 on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 427 bytes in 1.024 second response time
[21:22:15] RECOVERY - cp20 Stunnel HTTP for mon141 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 35762 bytes in 1.049 second response time
[21:22:25] PROBLEM - cp21 Stunnel HTTP for reports121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:22:44] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.24, 3.51, 3.46
[21:23:17] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.98, 3.93, 3.92
[21:23:27] RECOVERY - cp20 Stunnel HTTP for mw132 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.013 second response time
[21:24:08] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::4c25/cpweb
[21:25:17] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.91, 3.58, 3.80
[21:26:35] RECOVERY - cp21 Stunnel HTTP for reports121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 2.060 second response time
[21:26:44] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.23, 3.22, 3.39
[21:29:16] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 6.54, 4.23, 3.95
[21:31:15] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.58, 3.93, 3.88
[21:31:20] PROBLEM - cp21 HTTPS on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:31:54] PROBLEM - cp21 Stunnel HTTP for mw142 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:33:14] RECOVERY - cp21 HTTPS on cp21 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3418 bytes in 0.015 second response time
[21:33:50] RECOVERY - cp21 Stunnel HTTP for mw142 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.014 second response time
[21:36:40] PROBLEM - cp21 Stunnel HTTP for mw132 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:39:13] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 9.38, 5.04, 4.12
[21:41:59] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[21:43:08] RECOVERY - cp20 Varnish Backends on cp20 is OK: All 14 backends are healthy
[21:44:35] RECOVERY - cp21 Stunnel HTTP for mw132 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 3.085 second response time
[21:46:44] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.51, 3.91, 3.56
[21:48:44] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.54, 3.74, 3.54
[21:49:14] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:50:44] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.80, 3.07, 3.32
[21:51:10] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.11, 3.79, 4.00
[21:51:12] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 0.084 second response time
[21:53:53] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 2001:41d0:801:2000::4c25/cpweb, 2001:41d0:801:2000::1b80/cpweb
[21:55:49] RECOVERY - cp21 Varnish Backends on cp21 is OK: All 14 backends are healthy
[21:56:25] PROBLEM - cp21 Stunnel HTTP for phab121 on cp21 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[21:58:24] RECOVERY - cp21 Stunnel HTTP for phab121 on cp21 is OK: HTTP OK: Status line output matched "500" - 2855 bytes in 1.065 second response time
[21:59:05] PROBLEM - cp20 Varnish Backends on cp20 is CRITICAL: 2 backends are down. mw122 mw141
[21:59:08] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.41, 2.74, 3.40
[22:01:01] RECOVERY - cp20 Varnish Backends on cp20 is OK: All 14 backends are healthy
[22:03:53] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[22:07:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 8.92, 10.32, 11.65
[22:09:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 14.66, 11.31, 11.81
[22:11:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 11.23, 11.76, 11.97
[22:20:37] PROBLEM - mw122 Current Load on mw122 is WARNING: WARNING - load average: 7.81, 9.54, 11.54
[22:25:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.62, 10.06, 10.35
[22:26:37] PROBLEM - mw122 Current Load on mw122 is CRITICAL: CRITICAL - load average: 13.17, 10.70, 11.27
[22:27:02] PROBLEM - db101 Current Load on db101 is CRITICAL: CRITICAL - load average: 8.14, 6.32, 3.81
[22:28:59] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 8.17, 4.02, 2.98
[22:29:02] PROBLEM - db101 Current Load on db101 is WARNING: WARNING - load average: 7.88, 6.75, 4.27
[22:30:58] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.91, 3.27, 2.82
[22:31:02] PROBLEM - db101 Current Load on db101 is CRITICAL: CRITICAL - load average: 9.80, 7.72, 4.91
[22:31:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 8.66, 10.57, 10.62
[22:39:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.25, 11.05, 10.76
[22:41:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.99, 11.22, 10.90
[22:42:38] PROBLEM - cp31 Current Load on cp31 is WARNING: WARNING - load average: 1.65, 1.68, 2.00
[22:44:38] PROBLEM - cp31 Current Load on cp31 is CRITICAL: CRITICAL - load average: 2.62, 2.02, 2.08
[22:47:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.06, 10.02, 10.29
[22:48:53] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.47, 3.54, 2.89
[22:49:17] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 4.53, 3.49, 2.94
[22:53:34] PROBLEM - mw122 MediaWiki Rendering on mw122 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:56:50] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.99, 3.87, 3.54
[22:57:00] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.76, 3.85, 3.47
[22:57:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 4.83, 11.68, 11.90
[22:57:42] PROBLEM - cp30 Stunnel HTTP for mw141 on cp30 is CRITICAL: HTTP CRITICAL - No data received from host
[22:57:44] PROBLEM - cp30 Stunnel HTTP for reports121 on cp30 is CRITICAL: HTTP CRITICAL - No data received from host
[22:57:44] PROBLEM - cp21 Stunnel HTTP for mw121 on cp21 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:57:44] PROBLEM - cp30 Stunnel HTTP for mw121 on cp30 is CRITICAL: HTTP CRITICAL - No data received from host
[22:57:48] PROBLEM - cp20 Stunnel HTTP for mw121 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:58:07] PROBLEM - cp21 Stunnel HTTP for mw122 on cp21 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 328 bytes in 0.011 second response time
[22:58:10] PROBLEM - cp31 Stunnel HTTP for mw121 on cp31 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:58:11] PROBLEM - cp30 Stunnel HTTP for mw122 on cp30 is CRITICAL: HTTP CRITICAL - No data received from host
[22:58:12] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:58:17] PROBLEM - cp31 Stunnel HTTP for mw122 on cp31 is CRITICAL: HTTP CRITICAL - No data received from host
[22:58:23] PROBLEM - cp20 Stunnel HTTP for mw122 on cp20 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:58:50] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 2.91, 3.32, 3.36
[22:58:56] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.73, 3.05, 3.21
[22:59:02] PROBLEM - cp30 Varnish Backends on cp30 is CRITICAL: 4 backends are down. mw121 mw131 mw141 mw142
[22:59:03] PROBLEM - cp30 Disk Space on cp30 is WARNING: DISK WARNING - free space: / 4210 MB (10% inode=96%);
[22:59:41] RECOVERY - cp21 Stunnel HTTP for mw121 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.018 second response time
[22:59:42] RECOVERY - cp30 Stunnel HTTP for mw141 on cp30 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.329 second response time
[22:59:43] RECOVERY - cp30 Stunnel HTTP for reports121 on cp30 is OK: HTTP OK: HTTP/1.1 200 OK - 10996 bytes in 0.294 second response time
[22:59:44] RECOVERY - cp30 Stunnel HTTP for mw121 on cp30 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.319 second response time
[22:59:47] RECOVERY - cp20 Stunnel HTTP for mw121 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14732 bytes in 0.021 second response time
[23:00:07] PROBLEM - cp20 Varnish Backends on cp20 is CRITICAL: 2 backends are down. mw121 mw122
[23:00:07] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.603 second response time
[23:00:09] RECOVERY - cp31 Stunnel HTTP for mw121 on cp31 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.333 second response time
[23:00:21] PROBLEM - cp31 Varnish Backends on cp31 is CRITICAL: 2 backends are down. mw122 mw132
[23:00:42] PROBLEM - cp21 Varnish Backends on cp21 is CRITICAL: 2 backends are down. mw121 mw122
[23:01:41] RECOVERY - mw122 MediaWiki Rendering on mw122 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.336 second response time
[23:02:08] RECOVERY - cp21 Stunnel HTTP for mw122 on cp21 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.020 second response time
[23:02:12] RECOVERY - cp30 Stunnel HTTP for mw122 on cp30 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.340 second response time
[23:02:27] RECOVERY - cp31 Stunnel HTTP for mw122 on cp31 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.336 second response time
[23:02:30] RECOVERY - cp20 Stunnel HTTP for mw122 on cp20 is OK: HTTP OK: HTTP/1.1 200 OK - 14738 bytes in 0.016 second response time
[23:02:42] RECOVERY - cp21 Varnish Backends on cp21 is OK: All 14 backends are healthy
[23:02:48] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.83, 3.91, 3.68
[23:02:58] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 7.12, 5.23, 3.41
[23:03:02] RECOVERY - cp30 Varnish Backends on cp30 is OK: All 14 backends are healthy
[23:04:07] RECOVERY - cp20 Varnish Backends on cp20 is OK: All 14 backends are healthy
[23:04:21] RECOVERY - cp31 Varnish Backends on cp31 is OK: All 14 backends are healthy
[23:04:45] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.04, 4.13, 3.61
[23:05:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 12.38, 11.58, 11.72
[23:08:48] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.33, 4.10, 3.79
[23:10:48] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.88, 3.78, 3.72
[23:11:20] PROBLEM - cp31 Disk Space on cp31 is WARNING: DISK WARNING - free space: / 4217 MB (10% inode=96%);
[23:12:44] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.62, 3.93, 3.79
[23:12:58] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 2.77, 5.02, 4.60
[23:14:48] RECOVERY - gluster101 Current Load on gluster101 is OK: OK - load average: 1.93, 2.99, 3.40
[23:15:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 5.98, 10.74, 11.73
[23:16:22] PROBLEM - mw121 MediaWiki Rendering on mw121 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[23:16:44] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 1.86, 2.54, 3.23
[23:17:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 15.12, 12.36, 12.17
[23:18:17] RECOVERY - mw121 MediaWiki Rendering on mw121 is OK: HTTP OK: HTTP/1.1 200 OK - 29520 bytes in 0.574 second response time
[23:19:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.43, 11.08, 11.72
[23:23:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 14.37, 12.19, 11.95
[23:25:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 10.43, 11.14, 11.57
[23:26:48] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.42, 3.67, 3.38
[23:29:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.57, 11.25, 11.46
[23:30:48] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 7.61, 4.77, 3.80
[23:31:26] PROBLEM - mw121 Current Load on mw121 is WARNING: WARNING - load average: 9.91, 10.23, 11.03
[23:35:26] PROBLEM - mw121 Current Load on mw121 is CRITICAL: CRITICAL - load average: 13.94, 11.58, 11.40
[23:35:37] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.43, 3.40, 3.16
[23:36:48] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 1.90, 3.50, 3.58
[23:37:33] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.36, 2.96, 3.02
[23:38:48] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 5.37, 4.21, 3.82
[23:41:27] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 3.06, 3.52, 3.29
[23:43:23] PROBLEM - gluster121 Current Load on gluster121 is CRITICAL: CRITICAL - load average: 5.40, 4.02, 3.48
[23:45:19] PROBLEM - gluster121 Current Load on gluster121 is WARNING: WARNING - load average: 2.91, 3.54, 3.37
[23:47:15] RECOVERY - gluster121 Current Load on gluster121 is OK: OK - load average: 2.42, 3.27, 3.30
[23:48:48] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 2.15, 3.45, 3.77
[23:52:48] PROBLEM - gluster101 Current Load on gluster101 is CRITICAL: CRITICAL - load average: 4.23, 3.93, 3.86
[23:54:48] PROBLEM - gluster101 Current Load on gluster101 is WARNING: WARNING - load average: 3.92, 3.85, 3.83