[00:00:00] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 5.45, 7.38, 7.68
[00:00:01] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 4.78, 7.35, 9.97
[00:00:15] RECOVERY - mw134 Current Load on mw134 is OK: OK - load average: 4.49, 7.05, 9.51
[00:01:17] RECOVERY - istpcomputing.com - LetsEncrypt on sslhost is OK: OK - Certificate 'istpcomputing.com' will expire on Sat 07 Oct 2023 17:48:31 GMT +0000.
[00:01:17] RECOVERY - wiki.graalmilitary.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.graalmilitary.com' will expire on Thu 05 Oct 2023 12:11:40 GMT +0000.
[00:01:29] RECOVERY - wiki.anglish.info - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.anglish.info' will expire on Thu 05 Oct 2023 21:58:09 GMT +0000.
[00:01:44] RECOVERY - dragonquestwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'dragonquestwiki.com' will expire on Fri 06 Oct 2023 18:17:02 GMT +0000.
[00:02:08] RECOVERY - gtfo-wiki.cn - LetsEncrypt on sslhost is OK: OK - Certificate 'gtfo-wiki.cn' will expire on Mon 23 Oct 2023 09:23:07 GMT +0000.
[00:10:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.83, 3.77, 3.96
[00:12:55] RECOVERY - bushcraftwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'bushcraftwiki.com' will expire on Tue 24 Oct 2023 15:04:26 GMT +0000.
[00:12:57] RECOVERY - pastport.org - LetsEncrypt on sslhost is OK: OK - Certificate 'pastport.org' will expire on Fri 06 Oct 2023 20:11:53 GMT +0000.
[00:13:05] RECOVERY - www.rothwell-leeds.co.uk - LetsEncrypt on sslhost is OK: OK - Certificate 'www.rothwell-leeds.co.uk' will expire on Fri 06 Oct 2023 03:13:28 GMT +0000.
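The "Current Load" alerts above follow the standard Nagios/Icinga check_load pattern: sample the 1/5/15-minute load averages and compare them against warning and critical thresholds. A minimal sketch of that logic (the thresholds here are illustrative, not the values configured on these hosts):

```python
import os

# Illustrative thresholds; the real per-host values live in the Icinga config.
WARN, CRIT = 4.0, 8.0

def classify(load1, load5, load15, warn=WARN, crit=CRIT):
    """Return a Nagios-style state for a 1/5/15-minute load triple."""
    worst = max(load1, load5, load15)
    if worst >= crit:
        return "CRITICAL"
    if worst >= warn:
        return "WARNING"
    return "OK"

def current_load():
    """Sample the host's load averages (os.getloadavg reads /proc/loadavg on Linux)."""
    return os.getloadavg()

# With these example thresholds, the swiftobject121 triple above
# (5.45, 7.38, 7.68) classifies as WARNING.
```

Real check_load additionally supports separate thresholds per averaging window and per-CPU scaling; this sketch collapses that to a single worst-case comparison.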
[00:13:25] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 0.60, 1.03, 3.95
[00:14:01] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 5.60, 5.94, 6.72
[00:14:38] RECOVERY - smashbroswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'smashbroswiki.com' will expire on Fri 06 Oct 2023 18:12:02 GMT +0000.
[00:16:52] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 6.70, 6.48, 6.73
[00:17:25] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 0.92, 0.87, 3.20
[00:23:00] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 0.75, 0.85, 3.59
[00:24:32] PROBLEM - cp22 Current Load on cp22 is WARNING: LOAD WARNING - total load average: 0.20, 0.75, 3.87
[00:25:00] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 0.97, 0.92, 3.28
[00:26:40] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 4.02, 3.64, 3.66
[00:28:01] PROBLEM - cp23 Current Load on cp23 is WARNING: LOAD WARNING - total load average: 0.35, 1.19, 3.92
[00:28:32] RECOVERY - cp22 Current Load on cp22 is OK: LOAD OK - total load average: 0.17, 0.45, 3.03
[00:28:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.19, 3.56, 3.63
[00:32:01] RECOVERY - cp23 Current Load on cp23 is OK: LOAD OK - total load average: 0.30, 0.81, 3.14
[00:36:40] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 3.00, 2.97, 3.33
[00:48:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.85, 6.63, 7.75
[00:56:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.81, 5.60, 6.75
[01:00:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.29, 8.11, 7.59
[01:03:36] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 3.73, 3.02, 2.03
[01:04:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.01, 6.60, 7.71
[01:05:31] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 1.12, 2.29, 1.87
[01:06:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.14, 7.42, 7.54
[01:16:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.13, 7.69, 7.54
[01:18:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.26, 7.76, 7.59
[01:20:19] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.03, 6.12, 6.70
[01:28:38] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.42, 5.99, 6.74
[01:29:23] PROBLEM - en.religiononfire.mar.in.ua - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query mar.in.ua. IN NS: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL
[01:30:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.99, 6.23, 6.29
[01:34:19] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.77, 6.48, 6.38
[01:36:15] PROBLEM - cp23 NTP time on cp23 is CRITICAL: NTP CRITICAL: Offset 0.5000528693 secs
[01:36:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.39, 6.74, 6.81
[01:38:38] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.78, 6.38, 6.67
[01:46:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.62, 6.91, 6.77
[01:48:38] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.50, 6.31, 6.56
[01:56:22] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.15, 6.50, 6.11
[01:58:16] RECOVERY - en.religiononfire.mar.in.ua - reverse DNS on sslhost is OK: SSL OK - en.religiononfire.mar.in.ua reverse DNS resolves to cp23.miraheze.org - CNAME OK
[01:58:19] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.49, 5.96, 5.95
[02:20:24] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.22, 7.56, 6.66
[02:22:20] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.18, 6.80, 6.50
[02:31:17] !log [void@puppet141] banned a set of IP addresses in firewall due to suspicious activity
[02:31:23] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[02:47:10] PROBLEM - uk.religiononfire.mar.in.ua - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - uk.religiononfire.mar.in.ua All nameservers failed to answer the query.
[03:15:50] RECOVERY - uk.religiononfire.mar.in.ua - reverse DNS on sslhost is OK: SSL OK - uk.religiononfire.mar.in.ua reverse DNS resolves to cp23.miraheze.org - CNAME OK
[04:02:48] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.53, 2.84, 2.22
[04:03:21] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.73, 3.62, 3.00
[04:04:46] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 1.74, 2.34, 2.10
[04:05:20] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.08, 3.75, 3.13
[04:07:19] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.16, 3.52, 3.12
[04:19:13] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.19, 3.36, 3.26
[04:35:03] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.16, 7.03, 6.03
[04:35:55] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 41%
[04:36:58] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.75, 6.06, 5.79
[04:37:50] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 21%
[04:47:00] PROBLEM - db112 Disk Space on db112 is WARNING: DISK WARNING - free space: / 14609 MB (10% inode=99%);
[06:16:23] PROBLEM - mcdev.grantlmul.xyz - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for mcdev.grantlmul.xyz could not be found
[06:21:21] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.30, 3.73, 3.08
[06:23:20] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.44, 3.53, 3.08
[06:28:01] PROBLEM - cp23 Current Load on cp23 is WARNING: LOAD WARNING - total load average: 2.57, 3.44, 1.88
[06:30:01] RECOVERY - cp23 Current Load on cp23 is OK: LOAD OK - total load average: 1.10, 2.59, 1.76
[06:31:16] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.07, 3.35, 3.20
[06:44:09] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 4.00, 3.61, 3.34
[06:54:05] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 3.93, 4.04, 3.70
[06:56:04] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.51, 3.89, 3.69
[06:58:03] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.13, 3.91, 3.72
[07:00:02] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.56, 3.79, 3.70
[07:09:58] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.01, 3.71, 3.64
[07:11:57] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 2.86, 3.47, 3.56
[07:15:55] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.13, 2.82, 3.29
[07:47:10] PROBLEM - uk.religiononfire.mar.in.ua - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - uk.religiononfire.mar.in.ua All nameservers failed to answer the query.
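The db112 "Disk Space" alert above is the check_disk pattern: measure free space on a mount point and alert when the free percentage falls to or below configured cut-offs. A minimal sketch (the 10%/5% thresholds are assumptions for illustration, not the values configured for db112):

```python
import shutil

# Assumed thresholds (percent free); the real values live in the Icinga config.
WARN_PCT, CRIT_PCT = 10, 5

def classify_disk(free_pct, warn=WARN_PCT, crit=CRIT_PCT):
    """Return a Nagios-style state for a free-space percentage."""
    if free_pct <= crit:
        return "CRITICAL"
    if free_pct <= warn:
        return "WARNING"
    return "OK"

def free_percent(path="/"):
    """Percentage of the filesystem at `path` that is free."""
    usage = shutil.disk_usage(path)
    return 100 * usage.free // usage.total

# db112 above reports 10% free on /, which classifies as WARNING
# at a 10% warning threshold.
```

The real plugin also checks inode usage (the `inode=99%` part of the alert); that is omitted here for brevity.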
[08:15:49] RECOVERY - uk.religiononfire.mar.in.ua - reverse DNS on sslhost is OK: SSL OK - uk.religiononfire.mar.in.ua reverse DNS resolves to cp22.miraheze.org - CNAME OK
[09:04:58] RECOVERY - cp22 NTP time on cp22 is OK: NTP OK: Offset 0.02126944065 secs
[09:25:25] PROBLEM - cp33 Current Load on cp33 is CRITICAL: LOAD CRITICAL - total load average: 27.69, 17.65, 7.26
[09:25:37] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[09:25:40] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[09:25:46] PROBLEM - smashbros.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:25:51] PROBLEM - cp22 HTTPS on cp22 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:25:53] PROBLEM - www.mh142.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:25:56] PROBLEM - cp32 Current Load on cp32 is CRITICAL: LOAD CRITICAL - total load average: 95.17, 59.31, 23.85
[09:25:58] PROBLEM - dariawiki.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:00] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[09:26:00] PROBLEM - history.estill.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:01] PROBLEM - allthetropes.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:05] PROBLEM - cp23 Current Load on cp23 is CRITICAL: LOAD CRITICAL - total load average: 74.36, 77.53, 34.00
[09:26:06] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 7 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb
[09:26:06] PROBLEM - cp23 HTTPS on cp23 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:09] PROBLEM - cp32 Stunnel for puppet141 on cp32 is CRITICAL: connect to address localhost and port 8204: Connection refused
[09:26:11] PROBLEM - lgbta.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:13] PROBLEM - closinglogosgroup.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:14] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 6 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb
[09:26:14] PROBLEM - wiki.conworlds.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:14] PROBLEM - www.johanloopmans.nl - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:18] PROBLEM - www.sidem.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:22] PROBLEM - cp32 HTTPS on cp32 is CRITICAL: HTTP CRITICAL: HTTP/1.1 503 Backend fetch failed - 8191 bytes in 4.142 second response time
[09:26:27] PROBLEM - www.project-patterns.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:27] PROBLEM - www.allthetropes.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:27] PROBLEM - miraheze.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:28] PROBLEM - wiki.rsf.world - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:30] PROBLEM - iceria.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:30] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:30] PROBLEM - franchise.franchising.org.ua - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:31] PROBLEM - wiki.minkyu.kim - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:32] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 11.57, 15.51, 9.11
[09:26:32] PROBLEM - en.omniversalis.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:32] PROBLEM - wiki.overwood.xyz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:34] PROBLEM - cp22 Current Load on cp22 is CRITICAL: LOAD CRITICAL - total load average: 42.54, 26.18, 11.29
[09:26:37] PROBLEM - elsterclopedie.marithaime.nl - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:37] PROBLEM - journeytheword.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:38] PROBLEM - fanon.polandballwiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:39] PROBLEM - m.miraheze.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:41] PROBLEM - wiki.denby.tech - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:26:48] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[09:26:54] PROBLEM - www.publictestwiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:27:05] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is CRITICAL: CRITICAL - load average: 13.44, 7.75, 3.52
[09:27:06] PROBLEM - cp32 Stunnel for mw143 on cp32 is CRITICAL: connect to address localhost and port 8112: Connection refused
[09:27:06] PROBLEM - cp32 Stunnel for phab121 on cp32 is CRITICAL: connect to address localhost and port 8202: Connection refused
[09:27:08] PROBLEM - cp32 Stunnel for mw142 on cp32 is CRITICAL: connect to address localhost and port 8109: Connection refused
[09:27:09] PROBLEM - swiftproxy111 Current Load on swiftproxy111 is CRITICAL: CRITICAL - load average: 28.66, 13.78, 5.75
[09:27:26] PROBLEM - cp32 Stunnel for mon141 on cp32 is CRITICAL: connect to address localhost and port 8201: Connection refused
[09:27:38] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 31%
[09:27:47] RECOVERY - cp22 HTTPS on cp22 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3572 bytes in 0.136 second response time
[09:27:50] PROBLEM - cp32 Stunnel for mail121 on cp32 is CRITICAL: connect to address localhost and port 8200: Connection refused
[09:27:51] PROBLEM - cp32 Stunnel for mw131 on cp32 is CRITICAL: connect to address localhost and port 8106: Connection refused
[09:27:57] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 9%
[09:28:02] RECOVERY - cp23 HTTPS on cp23 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3571 bytes in 0.163 second response time
[09:28:04] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[09:28:06] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[09:28:07] PROBLEM - cp32 Stunnel for mwtask141 on cp32 is CRITICAL: connect to address localhost and port 8150: Connection refused
[09:28:09] PROBLEM - cp32 Stunnel for mw133 on cp32 is CRITICAL: connect to address localhost and port 8110: Connection refused
[09:28:12] PROBLEM - cp32 Stunnel for mw132 on cp32 is CRITICAL: connect to address localhost and port 8107: Connection refused
[09:28:14] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[09:28:18] PROBLEM - cp32 Stunnel for reports121 on cp32 is CRITICAL: connect to address localhost and port 8205: Connection refused
[09:28:27] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3733 bytes in 0.534 second response time
[09:28:29] PROBLEM - cp32 Stunnel for mw141 on cp32 is CRITICAL: connect to address localhost and port 8108: Connection refused
[09:28:34] RECOVERY - m.miraheze.org - LetsEncrypt on sslhost is OK: OK - Certificate 'm.miraheze.org' will expire on Sun 05 Nov 2023 16:43:26 GMT +0000.
[09:28:36] PROBLEM - cp32 Stunnel for matomo121 on cp32 is CRITICAL: connect to address localhost and port 8203: Connection refused
[09:28:44] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 27%
[09:28:53] PROBLEM - cp32 Stunnel for test131 on cp32 is CRITICAL: connect to address localhost and port 8180: Connection refused
[09:28:56] PROBLEM - cp32 Stunnel for mw134 on cp32 is CRITICAL: connect to address localhost and port 8111: Connection refused
[09:29:05] RECOVERY - swiftproxy131 Current Load on swiftproxy131 is OK: OK - load average: 2.64, 5.56, 3.23
[09:30:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 10.19, 6.82, 5.20
[09:32:32] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 4.04, 7.59, 7.53
[09:32:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.74, 7.37, 5.74
[09:34:19] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.21, 7.56, 5.98
[09:34:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.26, 6.93, 5.78
[09:36:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 3.25, 6.17, 5.68
[09:36:32] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 13.35, 9.34, 8.09
[09:36:37] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 3.64, 5.69, 5.46
[09:36:40] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 4.41, 3.59, 2.83
[09:36:44] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[09:36:59] PROBLEM - wiki.shaazzz.ir - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:14] PROBLEM - wiki.puucraft.net - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:14] PROBLEM - speleo.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:16] PROBLEM - studio.niuboss123.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:29] PROBLEM - translate.petrawiki.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:30] PROBLEM - wiki.litek.top - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:30] PROBLEM - vmklegacy.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:37] PROBLEM - wiki.yahyabd.xyz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:37] PROBLEM - crustypedia.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:41] PROBLEM - wiki.potabi.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:41] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 96%
[09:37:44] PROBLEM - kb.nena.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:47] PROBLEM - wiki.susqu.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:37:47] PROBLEM - ping6 on swiftproxy111 is CRITICAL: PING CRITICAL - Packet loss = 16%, RTA = 104.08 ms
[09:37:58] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[09:37:59] PROBLEM - cp22 HTTPS on cp22 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:38:00] PROBLEM - dkwiki.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:38:00] PROBLEM - wiki.yuanpi.eu.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:38:00] PROBLEM - allthetropes.orain.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:38:04] PROBLEM - corru.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:38:06] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 109.228.51.216/cpweb, 2a00:da00:1800:328::1/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb
[09:38:14] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 3 datacenters are down: 109.228.51.216/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:26f::1/cpweb
[09:38:26] PROBLEM - cp32 Puppet on cp32 is UNKNOWN: NRPE: Unable to read output
[09:38:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.90, 3.82, 3.02
[09:38:44] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 14%
[09:39:38] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 5%
[09:39:48] RECOVERY - ping6 on swiftproxy111 is OK: PING OK - Packet loss = 0%, RTA = 0.52 ms
[09:39:55] RECOVERY - cp22 HTTPS on cp22 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3733 bytes in 0.134 second response time
[09:40:12] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: connect to address 2607:f1c0:1800:26f::1 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket
[09:40:39] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.00, 3.27, 2.92
[09:42:32] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 3.82, 6.59, 7.47
[09:45:34] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is WARNING: WARNING - NGINX Error Rate is 55%
[09:46:06] PROBLEM - mw142 Current Load on mw142 is CRITICAL: CRITICAL - load average: 12.04, 6.81, 4.10
[09:46:14] PROBLEM - cp22 HTTPS on cp22 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:15] PROBLEM - cp23 HTTPS on cp23 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:32] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 2.94, 4.72, 6.51
[09:46:40] PROBLEM - schizoidnightmares.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:42] PROBLEM - m.miraheze.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:46] PROBLEM - echoes-wiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:46] PROBLEM - heavyironmodding.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:48] PROBLEM - wiki.orvyn.ca - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:46:57] PROBLEM - wiki.otir.nl - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[09:47:00] PROBLEM - mw143 Current Load on mw143 is CRITICAL: CRITICAL - load average: 12.33, 7.33, 3.97
[09:47:06] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 15.38, 9.43, 5.51
[09:47:28] PROBLEM - mw133 Current Load on mw133 is WARNING: WARNING - load average: 10.26, 8.23, 5.05
[09:47:29] PROBLEM - mw134 Current Load on mw134 is WARNING: WARNING - load average: 10.75, 8.52, 5.20
[09:47:33] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 97%
[09:47:38] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 95%
[09:47:40] PROBLEM - mw132 Current Load on mw132 is WARNING: WARNING - load average: 11.34, 10.03, 6.37
[09:48:06] RECOVERY - mw142 Current Load on mw142 is OK: OK - load average: 7.12, 7.53, 4.75
[09:48:11] RECOVERY - cp23 HTTPS on cp23 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3734 bytes in 0.140 second response time
[09:48:37] RECOVERY - m.miraheze.org - LetsEncrypt on sslhost is OK: OK - Certificate 'm.miraheze.org' will expire on Sun 05 Nov 2023 16:43:26 GMT +0000.
[09:49:00] RECOVERY - mw143 Current Load on mw143 is OK: OK - load average: 2.50, 5.23, 3.60
[09:49:06] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 4.72, 7.25, 5.18
[09:49:28] RECOVERY - mw133 Current Load on mw133 is OK: OK - load average: 2.76, 6.12, 4.66
[09:49:29] RECOVERY - mw134 Current Load on mw134 is OK: OK - load average: 2.98, 6.40, 4.83
[09:49:40] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 2.97, 7.39, 5.85
[09:53:57] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 7%
[09:54:03] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3572 bytes in 0.531 second response time
[09:54:58] RECOVERY - www.project-patterns.com - LetsEncrypt on sslhost is OK: OK - Certificate 'project-patterns.com' will expire on Sun 05 Nov 2023 16:57:56 GMT +0000.
[09:54:59] RECOVERY - miraheze.com - LetsEncrypt on sslhost is OK: OK - Certificate 'miraheze.com' will expire on Sun 05 Nov 2023 16:40:52 GMT +0000.
[09:55:01] RECOVERY - wiki.rsf.world - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.rsf.world' will expire on Wed 08 Nov 2023 17:39:39 GMT +0000.
[09:55:01] RECOVERY - www.allthetropes.org - LetsEncrypt on sslhost is OK: OK - Certificate 'allthetropes.org' will expire on Thu 05 Oct 2023 01:35:28 GMT +0000.
[09:55:06] RECOVERY - iceria.org - LetsEncrypt on sslhost is OK: OK - Certificate 'www.iceria.org' will expire on Thu 05 Oct 2023 21:11:57 GMT +0000.
[09:55:07] RECOVERY - wiki.minkyu.kim - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.minkyu.kim' will expire on Thu 05 Oct 2023 20:14:22 GMT +0000.
[09:55:07] RECOVERY - franchise.franchising.org.ua - LetsEncrypt on sslhost is OK: OK - Certificate 'franchise.franchising.org.ua' will expire on Thu 05 Oct 2023 20:49:16 GMT +0000.
[09:55:07] RECOVERY - smashbros.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[09:55:07] RECOVERY - en.omniversalis.org - LetsEncrypt on sslhost is OK: OK - Certificate 'en.omniversalis.org' will expire on Sat 07 Oct 2023 05:05:00 GMT +0000.
[09:55:08] RECOVERY - wiki.overwood.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.overwood.xyz' will expire on Tue 21 Nov 2023 15:11:54 GMT +0000.
[09:55:18] RECOVERY - fanon.polandballwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'fanon.polandballwiki.com' will expire on Fri 06 Oct 2023 18:04:46 GMT +0000.
[09:55:18] RECOVERY - journeytheword.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'www.journeytheword.wiki' will expire on Fri 06 Oct 2023 13:02:47 GMT +0000.
[09:55:19] RECOVERY - elsterclopedie.marithaime.nl - LetsEncrypt on sslhost is OK: OK - Certificate 'elsterclopedie.marithaime.nl' will expire on Sat 07 Oct 2023 23:02:11 GMT +0000.
[09:55:23] RECOVERY - www.mh142.com - LetsEncrypt on sslhost is OK: OK - Certificate 'mh142.com' will expire on Thu 05 Oct 2023 12:30:38 GMT +0000.
[09:55:29] RECOVERY - wiki.denby.tech - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.denby.tech' will expire on Mon 23 Oct 2023 18:30:19 GMT +0000.
[09:55:32] RECOVERY - dariawiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'dariawiki.org' will expire on Thu 05 Oct 2023 12:53:34 GMT +0000.
[09:55:33] RECOVERY - history.estill.org - LetsEncrypt on sslhost is OK: OK - Certificate 'history.estill.org' will expire on Thu 05 Oct 2023 20:04:14 GMT +0000.
[09:55:34] RECOVERY - allthetropes.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[09:55:36] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.54, 7.26, 6.25
[09:55:54] RECOVERY - www.publictestwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'publictestwiki.com' will expire on Thu 05 Oct 2023 01:42:35 GMT +0000.
[09:56:00] RECOVERY - closinglogosgroup.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[09:56:02] RECOVERY - wiki.conworlds.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.conworlds.org' will expire on Thu 05 Oct 2023 13:35:56 GMT +0000.
[09:56:02] RECOVERY - www.johanloopmans.nl - LetsEncrypt on sslhost is OK: OK - Certificate 'www.johanloopmans.nl' will expire on Fri 06 Oct 2023 05:58:06 GMT +0000.
[09:56:08] RECOVERY - www.sidem.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'www.sidem.wiki' will expire on Fri 06 Oct 2023 18:43:43 GMT +0000.
[09:56:09] RECOVERY - lgbta.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[09:56:37] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.61, 7.27, 6.36
[09:57:31] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.64, 7.17, 6.36
[10:01:19] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 12.77, 8.24, 6.84
[10:02:01] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 0.03, 0.48, 3.97
[10:02:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.77, 8.32, 7.06
[10:03:14] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.75, 7.04, 6.56
[10:03:25] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 0.22, 0.42, 3.78
[10:03:59] RECOVERY - cp32 Stunnel for puppet141 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8204
[10:04:07] RECOVERY - cp32 Stunnel for mwtask141 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8150
[10:04:09] RECOVERY - cp32 Stunnel for mw133 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8110
[10:04:12] RECOVERY - cp32 Stunnel for mw132 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8107
[10:04:18] RECOVERY - cp32 Stunnel for reports121 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8205
[10:04:29] RECOVERY - cp32 Stunnel for mw141 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8108
[10:04:33] RECOVERY - cp32 Puppet on cp32 is OK: OK: Puppet is currently enabled, last run 38 seconds ago with 0 failures
[10:04:34] RECOVERY - cp32 Stunnel for test131 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8180
[10:04:36] RECOVERY - cp32 Stunnel for matomo121 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8203
[10:04:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.82, 7.77, 7.01
[10:04:56] RECOVERY - cp32 Stunnel for mw134 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8111
[10:04:57] RECOVERY - cp32 Stunnel for mw143 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8112
[10:05:06] RECOVERY - cp32 Stunnel for phab121 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8202
[10:05:08] RECOVERY - cp32 Stunnel for mw142 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8109
[10:05:08] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.77, 7.93, 6.93
[10:05:25] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 0.48, 0.44, 3.37
[10:05:26] RECOVERY - cp32 Stunnel for mon141 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8201
[10:05:33] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 3%
[10:05:38] RECOVERY - wiki.shaazzz.ir - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.shaazzz.ir' will expire on Thu 05 Oct 2023 22:33:31 GMT +0000.
[10:05:49] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 15 backends are healthy
[10:05:50] RECOVERY - cp32 Stunnel for mail121 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8200
[10:05:52] RECOVERY - cp32 Stunnel for mw131 on cp32 is OK: TCP OK - 0.001 second response time on localhost port 8106
[10:06:01] RECOVERY - wiki.puucraft.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.puucraft.net' will expire on Sat 07 Oct 2023 16:58:51 GMT +0000.
[10:06:01] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 0.35, 0.44, 3.15
[10:06:05] RECOVERY - speleo.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'speleo.wiki' will expire on Thu 05 Oct 2023 02:29:59 GMT +0000.
[10:06:06] RECOVERY - cp32 HTTPS on cp32 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3571 bytes in 0.535 second response time
[10:06:09] RECOVERY - studio.niuboss123.com - LetsEncrypt on sslhost is OK: OK - Certificate 'studio.niuboss123.com' will expire on Mon 11 Dec 2023 09:43:36 GMT +0000.
[10:06:30] RECOVERY - translate.petrawiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'translate.petrawiki.org' will expire on Sat 07 Oct 2023 23:12:53 GMT +0000.
[10:06:32] RECOVERY - vmklegacy.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'vmklegacy.wiki' will expire on Fri 13 Oct 2023 13:43:15 GMT +0000.
[10:06:34] RECOVERY - wiki.litek.top - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.litek.top' will expire on Fri 27 Oct 2023 08:58:10 GMT +0000.
[10:06:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.99, 8.02, 7.18
[10:06:48] RECOVERY - wiki.yahyabd.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yahyabd.xyz' will expire on Sat 07 Oct 2023 04:04:00 GMT +0000.
[10:06:49] RECOVERY - crustypedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'crustypedia.org' will expire on Fri 03 Nov 2023 09:01:30 GMT +0000.
[10:06:58] RECOVERY - wiki.potabi.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.potabi.com' will expire on Fri 06 Oct 2023 12:33:59 GMT +0000.
[10:07:04] RECOVERY - kb.nena.org - LetsEncrypt on sslhost is OK: OK - Certificate 'kb.nena.org' will expire on Wed 08 Nov 2023 07:00:57 GMT +0000.
[10:07:08] RECOVERY - wiki.susqu.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.susqu.org' will expire on Wed 08 Nov 2023 17:10:13 GMT +0000.
[10:07:35] RECOVERY - allthetropes.orain.org - LetsEncrypt on sslhost is OK: OK - Certificate 'orain.org' will expire on Tue 21 Nov 2023 20:39:44 GMT +0000.
[10:07:35] RECOVERY - wiki.yuanpi.eu.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yuanpi.eu.org' will expire on Wed 06 Dec 2023 11:08:08 GMT +0000.
[10:07:36] RECOVERY - dkwiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'dkwiki.org' will expire on Fri 06 Oct 2023 19:53:19 GMT +0000.
[10:07:40] RECOVERY - corru.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'corru.wiki' will expire on Sun 05 Nov 2023 06:48:26 GMT +0000.
[10:13:32] RECOVERY - cp22 HTTPS on cp22 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3734 bytes in 0.131 second response time
[10:13:38] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 7%
[10:14:06] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[10:14:14] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[10:14:32] PROBLEM - cp22 Current Load on cp22 is WARNING: LOAD WARNING - total load average: 0.58, 0.40, 3.62
[10:15:55] RECOVERY - schizoidnightmares.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'schizoidnightmares.wiki' will expire on Sat 07 Oct 2023 20:24:07 GMT +0000.
[10:16:05] RECOVERY - heavyironmodding.org - LetsEncrypt on sslhost is OK: OK - Certificate 'heavyironmodding.org' will expire on Fri 06 Oct 2023 04:21:18 GMT +0000.
[10:16:07] RECOVERY - echoes-wiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'echoes-wiki.com' will expire on Wed 18 Oct 2023 20:36:08 GMT +0000.
[10:16:10] RECOVERY - wiki.orvyn.ca - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.orvyn.ca' will expire on Fri 24 Nov 2023 17:01:57 GMT +0000.
[10:16:31] RECOVERY - wiki.otir.nl - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.otir.nl' will expire on Sat 07 Oct 2023 15:32:54 GMT +0000.
[10:16:32] RECOVERY - cp22 Current Load on cp22 is OK: LOAD OK - total load average: 0.53, 0.46, 3.25
[10:18:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.17, 7.93, 7.78
[10:20:21] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 95%
[10:20:23] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 4.26, 7.21, 7.60
[10:20:33] PROBLEM - cp22 Current Load on cp22 is CRITICAL: LOAD CRITICAL - total load average: 23.76, 10.88, 6.54
[10:20:40] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 7.60, 5.02, 3.46
[10:20:41] PROBLEM - kirisame-kissaten.cf - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:20:47] PROBLEM - bluearchive.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:20:49] PROBLEM - mw141 Current Load on mw141 is CRITICAL: CRITICAL - load average: 12.41, 8.88, 5.65
[10:20:49] PROBLEM - cnt.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:20:49] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[10:20:57] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 5903 bytes in 0.008 second response time
[10:21:06] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 15.27, 10.30, 6.74
[10:21:28] PROBLEM - mw133 Current Load on mw133 is CRITICAL: CRITICAL - load average: 17.05, 10.42, 6.44
[10:21:29] PROBLEM - mw134 Current Load on mw134 is CRITICAL: CRITICAL - load average: 17.59, 10.41, 6.38
[10:21:38] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 86%
[10:21:40] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 18.47, 11.55, 7.47
[10:21:51] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 5 backends are down. mw132 mw141 mw133 mw134 mw143
[10:21:51] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 6 backends are down. mw131 mw132 mw141 mw133 mw134 mw143
[10:22:03] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 1.94, 3.58, 2.95
[10:22:03] PROBLEM - cp32 HTTPS on cp32 is CRITICAL: connect to address 2607:f1c0:1800:8100::1 and port 443: Connection refusedHTTP CRITICAL - Unable to open TCP socket
[10:22:06] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 108.175.15.182/cpweb, 2607:f1c0:1800:8100::1/cpweb
[10:22:06] PROBLEM - mw142 Current Load on mw142 is CRITICAL: CRITICAL - load average: 17.11, 10.62, 6.31
[10:22:14] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 108.175.15.182/cpweb, 2607:f1c0:1800:8100::1/cpweb
[10:22:45] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 4%
[10:22:49] RECOVERY - mw141 Current Load on mw141 is OK: OK - load average: 7.25, 8.97, 6.15
[10:22:57] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.328 second response time
[10:23:06] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 7.18, 9.53, 6.95
[10:23:25] PROBLEM - cp33 Current Load on cp33 is CRITICAL: LOAD CRITICAL - total load average: 6.51, 8.52, 5.18
[10:23:28] RECOVERY - mw133 Current Load on mw133 is OK: OK - load average: 6.69, 8.75, 6.32
[10:23:29] RECOVERY - mw134 Current Load on mw134 is OK: OK - load average: 5.69, 8.12, 6.01
[10:23:38] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 1%
[10:23:40] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 8.03, 9.55, 7.21
[10:23:49] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 15 backends are healthy
[10:23:51] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 15 backends are healthy
[10:24:03] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 0.26, 2.39, 2.58
[10:24:06] RECOVERY - mw142 Current Load on mw142 is OK: OK - load average: 5.76, 8.61, 6.11
[10:24:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.09, 7.25, 7.35
[10:26:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 2.43, 3.93, 3.61
[10:28:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 12.22, 8.67, 7.85
[10:29:25] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 0.98, 3.17, 3.77
[10:30:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.92, 7.94, 7.69
[10:30:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.52, 7.98, 7.77
[10:30:40] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.37, 3.22, 3.40
[10:31:25] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 0.75, 2.34, 3.39
[10:32:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.81, 8.20, 7.86
[10:34:32] PROBLEM - cp22 Current Load on cp22 is WARNING: LOAD WARNING - total load average: 0.41, 1.75, 3.83
[10:35:56] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 4%
[10:36:04] RECOVERY - cp32 HTTPS on cp32 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3572 bytes in 0.532 second response time
[10:36:06] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[10:36:14] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[10:38:32] RECOVERY - cp22 Current Load on cp22 is OK: LOAD OK - total load average: 0.45, 1.23, 3.16
[10:38:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.84, 7.91, 7.97
[10:40:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.89, 7.93, 7.71
[10:40:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.46, 8.33, 8.12
[10:46:39] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 4.01, 3.55, 3.38
[10:48:40] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.57, 3.24, 3.29
[10:49:29] RECOVERY - kirisame-kissaten.cf - LetsEncrypt on sslhost is OK: OK - Certificate 'kirisame-kissaten.cf' will expire on Sat 14 Oct 2023 15:40:58 GMT +0000.
[10:49:41] RECOVERY - bluearchive.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'bluearchive.wiki' will expire on Fri 06 Oct 2023 13:46:10 GMT +0000.
[10:49:43] RECOVERY - cnt.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[10:52:19] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.88, 7.25, 7.94
[10:56:01] PROBLEM - cp23 Current Load on cp23 is WARNING: LOAD WARNING - total load average: 0.79, 0.73, 3.92
[11:00:01] RECOVERY - cp23 Current Load on cp23 is OK: LOAD OK - total load average: 0.99, 1.17, 3.38
[11:00:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.43, 7.38, 7.98
[11:06:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.00, 8.03, 7.83
[11:06:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.57, 7.66, 7.85
[11:08:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.41, 7.39, 7.72
[11:10:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.60, 7.81, 7.78
[11:18:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 10.39, 7.29, 7.34
[11:20:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.43, 6.89, 7.19
[11:26:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.42, 6.23, 6.80
[11:26:38] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.14, 6.28, 6.80
[11:32:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.03, 6.78, 6.80
[11:34:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.59, 6.31, 6.62
[11:37:21] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.19, 7.32, 6.86
[11:39:17] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.57, 6.77, 6.71
[11:45:04] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.89, 6.80, 6.67
[11:47:00] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.43, 6.37, 6.53
[11:57:19] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.89, 6.57, 6.39
[11:59:13] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.06, 6.39, 6.34
[12:08:23] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.89, 3.03, 2.47
[12:12:23] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.33, 3.32, 2.73
[12:16:23] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.37, 3.58, 3.00
[12:16:54] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.06, 7.64, 6.74
[12:18:23] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.03, 3.33, 2.97
[12:18:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.99, 7.38, 6.73
[12:20:53] !log restart swift-proxy on swiftproxy111
[12:21:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[12:22:40] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.02, 6.37, 6.48
[12:26:10] !log increase swap on cp* to 3g
[12:26:15] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[12:36:15] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.75, 3.49, 3.23
[12:37:09] PROBLEM - swiftproxy111 Current Load on swiftproxy111 is WARNING: WARNING - load average: 0.35, 1.34, 7.16
[12:38:14] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.04, 3.62, 3.30
[12:39:09] RECOVERY - swiftproxy111 Current Load on swiftproxy111 is OK: OK - load average: 0.35, 1.02, 6.36
[12:40:13] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.98, 3.71, 3.38
[12:48:10] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.28, 3.39, 3.35
[12:54:06] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.91, 3.97, 3.58
[12:54:15] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.03, 7.71, 6.73
[12:56:11] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.68, 7.25, 6.68
[13:01:58] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.00, 6.61, 6.61
[13:02:03] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.96, 3.98, 3.76
[13:05:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.29, 6.93, 6.78
[13:07:46] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.76, 6.63, 6.69
[13:09:59] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.75, 2.97, 3.36
[13:18:23] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.65, 7.09, 6.75
[13:20:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.25, 7.70, 7.00
[13:24:10] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.13, 7.64, 7.15
[13:29:57] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.44, 6.17, 6.65
[13:30:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.42, 2.99, 2.41
[13:32:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.45, 6.96, 6.06
[13:34:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.29, 6.62, 6.05
[13:34:39] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.36, 2.98, 2.57
[13:35:45] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.87, 7.22, 6.86
[13:37:41] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.20, 6.83, 6.76
[13:39:36] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.66, 7.29, 6.92
[13:41:32] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.24, 6.64, 6.74
[13:48:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 6.97, 8.28, 7.48
[13:48:56] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.73, 7.05, 6.38
[13:50:50] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.55, 6.44, 6.24
[13:52:09] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.36, 7.78, 7.49
[13:57:56] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.02, 5.88, 6.72
[14:04:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.03, 6.84, 6.33
[14:08:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.57, 6.33, 6.27
[14:13:46] PROBLEM - ping6 on swiftobject112 is CRITICAL: PING CRITICAL - Packet loss = 16%, RTA = 23.29 ms
[14:13:57] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[14:14:04] PROBLEM - cp22 HTTPS on cp22 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:09] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb
[14:14:10] PROBLEM - cp23 HTTPS on cp23 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:10] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[14:14:16] PROBLEM - iol.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:19] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 7 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb
[14:14:24] PROBLEM - cp23 Current Load on cp23 is CRITICAL: LOAD CRITICAL - total load average: 175.10, 94.35, 36.82
[14:14:24] PROBLEM - cp33 Current Load on cp33 is CRITICAL: LOAD CRITICAL - total load average: 58.91, 23.62, 9.36
[14:14:26] PROBLEM - cp32 Current Load on cp32 is CRITICAL: LOAD CRITICAL - total load average: 136.64, 48.12, 17.77
[14:14:30] PROBLEM - cp32 HTTPS on cp32 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:33] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:34] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 8.84, 6.50, 4.92
[14:14:35] PROBLEM - cp22 Current Load on cp22 is CRITICAL: LOAD CRITICAL - total load average: 31.77, 13.35, 5.60
[14:14:38] PROBLEM - wiki.lefrenchmelee.fr - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:42] PROBLEM - m.miraheze.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:48] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[14:14:48] PROBLEM - you.r-fit.cc - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:50] PROBLEM - ndgkb.nena.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:50] PROBLEM - wiki.songngu.xyz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:50] PROBLEM - wiki.yunachannel.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:50] PROBLEM - wiki.astralprojections.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:50] PROBLEM - eurocom.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:50] PROBLEM - www.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:58] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 88%
[14:14:58] PROBLEM - wiki.cjgh.xyz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:14:58] PROBLEM - wiki.starship.digital - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:15:02] PROBLEM - wiki.3point0.science - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:15:06] PROBLEM - wiki.thesimswiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:15:08] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is CRITICAL: CRITICAL - load average: 11.31, 9.16, 4.65
[14:15:11] PROBLEM - swiftproxy111 Current Load on swiftproxy111 is CRITICAL: CRITICAL - load average: 14.53, 9.34, 4.58
[14:15:12] PROBLEM - wc.miraheze.org on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:15:15] PROBLEM - wiki.queenscourt.games - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:15:15] PROBLEM - wiki.beergeeks.co.il - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:15:20] PROBLEM - mw143 Current Load on mw143 is WARNING: WARNING - load average: 11.66, 7.50, 4.67
[14:15:47] RECOVERY - ping6 on swiftobject112 is OK: PING OK - Packet loss = 0%, RTA = 0.74 ms
[14:15:57] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 1 backends are down. mw134
[14:16:00] RECOVERY - cp22 HTTPS on cp22 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3573 bytes in 0.136 second response time
[14:16:05] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 3 backends are down. mw132 mw142 mw134
[14:16:06] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 17%
[14:16:07] RECOVERY - cp23 HTTPS on cp23 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3734 bytes in 0.883 second response time
[14:16:09] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[14:16:15] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[14:16:27] PROBLEM - os131 Current Load on os131 is CRITICAL: CRITICAL - load average: 4.37, 3.29, 1.85
[14:16:29] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3734 bytes in 0.536 second response time
[14:16:30] PROBLEM - mem141 Puppet on mem141 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:16:37] RECOVERY - m.miraheze.org - LetsEncrypt on sslhost is OK: OK - Certificate 'm.miraheze.org' will expire on Sun 05 Nov 2023 16:43:26 GMT +0000.
[14:16:40] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 6.70, 4.41, 3.02
[14:16:44] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 9%
[14:16:55] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 12%
[14:17:06] RECOVERY - wc.miraheze.org on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[14:17:06] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 12.88, 10.47, 7.33
[14:17:14] RECOVERY - mw143 Current Load on mw143 is OK: OK - load average: 4.90, 6.38, 4.60
[14:17:15] PROBLEM - ping6 on swiftproxy111 is CRITICAL: PING CRITICAL - Packet loss = 16%, RTA = 43.94 ms
[14:17:15] PROBLEM - swiftproxy111 Puppet on swiftproxy111 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[14:17:52] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 2%
[14:17:53] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 15 backends are healthy
[14:18:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 28.38, 13.46, 8.66
[14:18:26] RECOVERY - os131 Current Load on os131 is OK: OK - load average: 2.99, 3.23, 2.01
[14:18:32] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 6.49, 7.00, 5.53
[14:18:36] RECOVERY - cp32 HTTPS on cp32 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3734 bytes in 0.530 second response time
[14:18:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 29.86, 15.88, 9.69
[14:19:06] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 10.38, 10.49, 7.72
[14:19:16] RECOVERY - ping6 on swiftproxy111 is OK: PING OK - Packet loss = 0%, RTA = 0.64 ms
[14:19:23] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 11.19, 8.12, 5.86
[14:19:55] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 15 backends are healthy
[14:20:00] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 11.95, 9.72, 6.67
[14:20:32] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 5.62, 6.38, 5.47
[14:21:06] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 8.07, 9.75, 7.80
[14:23:09] RECOVERY - swiftproxy111 Current Load on swiftproxy111 is OK: OK - load average: 2.14, 6.41, 5.74
[14:25:05] RECOVERY - swiftproxy131 Current Load on swiftproxy131 is OK: OK - load average: 3.46, 6.34, 6.07
[14:25:17] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.38, 7.88, 6.48
[14:27:15] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 8.34, 8.22, 6.78
[14:29:13] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 6.61, 7.68, 6.76
[14:29:49] PROBLEM - en.religiononfire.mar.in.ua - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query en.religiononfire.mar.in.ua. IN CNAME: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL
[14:30:00] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 7.16, 7.65, 7.19
[14:31:11] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 8.59, 8.10, 7.02
[14:32:00] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.14, 7.59, 7.21
[14:33:29] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 0.20, 1.21, 3.65
[14:33:54] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 8.52, 7.23, 6.03
[14:34:00] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 6.44, 7.28, 7.15
[14:35:25] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 0.29, 0.90, 3.24
[14:35:48] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 7.86, 7.58, 6.31
[14:36:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.01, 3.87, 3.87
[14:39:04] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 4.92, 6.84, 7.02
[14:39:25] PROBLEM - cp33 Current Load on cp33 is CRITICAL: LOAD CRITICAL - total load average: 41.26, 30.18, 14.48
[14:39:37] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 4.17, 6.15, 6.06
[14:40:00] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 5.01, 5.99, 6.68
[14:41:05] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is WARNING: WARNING - load average: 5.38, 7.23, 6.26
[14:43:01] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 13.85, 9.03, 7.75
[14:43:05] RECOVERY - swiftproxy131 Current Load on swiftproxy131 is OK: OK - load average: 3.52, 6.11, 5.98
[14:43:08] RECOVERY - swiftproxy111 Puppet on swiftproxy111 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:43:15] RECOVERY - wiki.lefrenchmelee.fr - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.lefrenchmelee.fr' will expire on Thu 05 Oct 2023 21:44:27 GMT +0000.
[14:43:40] RECOVERY - you.r-fit.cc - LetsEncrypt on sslhost is OK: OK - Certificate 'you.r-fit.cc' will expire on Fri 06 Oct 2023 06:02:13 GMT +0000.
[14:43:40] RECOVERY - wiki.yunachannel.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yunachannel.com' will expire on Tue 17 Oct 2023 07:10:57 GMT +0000.
[14:43:40] RECOVERY - ndgkb.nena.org - LetsEncrypt on sslhost is OK: OK - Certificate 'ndgkb.nena.org' will expire on Wed 08 Nov 2023 06:21:59 GMT +0000.
[14:43:42] RECOVERY - wiki.astralprojections.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.astralprojections.org' will expire on Fri 06 Oct 2023 04:40:15 GMT +0000.
[14:43:42] RECOVERY - wiki.songngu.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.songngu.xyz' will expire on Mon 06 Nov 2023 14:47:09 GMT +0000.
[14:43:42] RECOVERY - eurocom.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'eurocom.wiki' will expire on Thu 02 Nov 2023 17:29:14 GMT +0000.
[14:43:47] RECOVERY - www.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[14:44:00] RECOVERY - iol.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'iol.wiki' will expire on Sat 07 Oct 2023 04:58:04 GMT +0000.
[14:44:01] RECOVERY - wiki.cjgh.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.cjgh.xyz' will expire on Sat 07 Oct 2023 03:57:31 GMT +0000.
[14:44:01] RECOVERY - wiki.starship.digital - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.starship.digital' will expire on Thu 05 Oct 2023 12:09:14 GMT +0000.
[14:44:11] RECOVERY - wiki.3point0.science - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.3point0.science' will expire on Fri 01 Dec 2023 06:22:57 GMT +0000.
[14:44:14] RECOVERY - wiki.thesimswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.thesimswiki.com' will expire on Thu 05 Oct 2023 15:31:19 GMT +0000.
[14:44:26] RECOVERY - wiki.queenscourt.games - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.queenscourt.games' will expire on Sat 14 Oct 2023 18:31:14 GMT +0000.
[14:44:26] RECOVERY - wiki.beergeeks.co.il - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.beergeeks.co.il' will expire on Fri 06 Oct 2023 14:30:27 GMT +0000.
[14:44:28] RECOVERY - mem141 Puppet on mem141 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[14:46:40] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 6.15, 4.43, 4.01
[14:47:54] PROBLEM - swiftproxy111 Current Load on swiftproxy111 is CRITICAL: CRITICAL - load average: 16.54, 8.60, 6.00
[14:48:19] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is WARNING: WARNING - NGINX Error Rate is 47%
[14:48:26] PROBLEM - os131 Current Load on os131 is WARNING: WARNING - load average: 3.49, 2.63, 1.92
[14:48:32] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 8.05, 6.60, 5.63
[14:48:48] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 7.85, 7.64, 7.18
[14:49:06] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is CRITICAL: CRITICAL - load average: 16.89, 12.34, 8.52
[14:50:18] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 86%
[14:50:26] RECOVERY - os131 Current Load on os131 is OK: OK - load average: 3.20, 2.79, 2.06
[14:50:32] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 5.55, 6.14, 5.58
[14:50:42] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 3.96, 6.35, 6.77
[14:50:54] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 4.73, 7.31, 7.65
[14:51:09] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[14:51:12] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[14:51:22] PROBLEM - cp32 Stunnel for mw141 on cp32 is CRITICAL: connect to address localhost and port 8108: Connection refused
[14:51:27] PROBLEM - cp32 Stunnel for mon141 on cp32 is CRITICAL: connect to address localhost and port 8201: Connection refused
[14:51:32] PROBLEM - cp32 Stunnel for matomo121 on cp32 is CRITICAL: connect to address localhost and port 8203: Connection refused
[14:51:51] PROBLEM - cp32 Stunnel for mail121 on cp32 is CRITICAL: connect to address localhost and port 8200: Connection refused
[14:51:51] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[14:51:53] PROBLEM - cp32 Stunnel for test131 on cp32 is CRITICAL: connect to address localhost and port 8180: Connection refused
[14:51:59] PROBLEM - cp32 Stunnel for puppet141 on cp32 is CRITICAL: connect to address localhost and port 8204: Connection refused
[14:52:01] PROBLEM - cp32 Stunnel for mw131 on cp32 is CRITICAL: connect to address localhost and port 8106: Connection refused
[14:52:07] PROBLEM - cp32 Stunnel for mwtask141 on cp32 is CRITICAL: connect to address localhost and port 8150: Connection refused
[14:52:10] PROBLEM - cp32 Stunnel for mw133 on cp32 is CRITICAL: connect to address localhost and port 8110: Connection refused
[14:52:13] PROBLEM - cp32 Stunnel for mw132 on cp32 is CRITICAL: connect to address localhost and port 8107: Connection refused
[14:52:18] PROBLEM - cp32 Stunnel for reports121 on cp32 is CRITICAL: connect to address localhost and port 8205: Connection refused
[14:52:27] PROBLEM - cp33 Stunnel for mw132 on cp33 is CRITICAL: connect to address localhost and port 8107: Connection refused
[14:52:32] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is WARNING: WARNING - NGINX Error Rate is 43%
[14:52:34] PROBLEM - cp33 Stunnel for mw143 on cp33 is CRITICAL: connect to address localhost and port 8112: Connection refused
[14:52:43] PROBLEM - cp33 Stunnel for mail121 on cp33 is CRITICAL: connect to address localhost and port 8200: Connection refused
[14:52:44] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 54%
[14:52:45] PROBLEM - cp33 Stunnel for mw133 on cp33 is CRITICAL: connect to address localhost and port 8110: Connection refused
[14:52:52] PROBLEM - cp33 Stunnel for mw131 on cp33 is CRITICAL: connect to address localhost and port 8106: Connection refused
[14:52:53] PROBLEM - cp33 Stunnel for mon141 on cp33 is CRITICAL: connect to address localhost and port 8201: Connection refused
[14:52:56] PROBLEM - cp32 Stunnel for mw134 on cp32 is CRITICAL: connect to address localhost and port 8111: Connection refused
[14:52:57] PROBLEM - cp32 Stunnel for mw143 on cp32 is CRITICAL: connect to address localhost and port 8112: Connection refused
[14:52:59] PROBLEM - cp33 Stunnel for test131 on cp33 is CRITICAL: connect to address localhost and port 8180: Connection refused
[14:53:03] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[14:53:06] PROBLEM - cp33 Stunnel for phab121 on cp33 is CRITICAL: connect to address localhost and port 8202: Connection refused
[14:53:06] PROBLEM - cp32 Stunnel for phab121 on cp32 is CRITICAL: connect to address localhost and port 8202: Connection refused
[14:53:08] PROBLEM - cp32 Stunnel for mw142 on cp32 is CRITICAL: connect to address localhost and port 8109: Connection refused
[14:53:13] PROBLEM - cp33 Stunnel for mw134 on cp33 is CRITICAL: connect to address localhost and port 8111: Connection refused
[14:53:48] PROBLEM - cp33 Stunnel for reports121 on cp33 is CRITICAL: connect to address localhost and port 8205: Connection refused
[14:53:48] PROBLEM - cp33 Stunnel for puppet141 on cp33 is CRITICAL: connect to address localhost and port 8204: Connection refused
[14:53:52] PROBLEM - cp33 Stunnel for matomo121 on cp33 is CRITICAL: connect to address localhost and port 8203: Connection refused
[14:54:00] PROBLEM - cp33 Stunnel for mw142 on cp33 is CRITICAL: connect to address localhost and port 8109: Connection refused
[14:54:04] PROBLEM - cp33 Stunnel for mwtask141 on cp33 is CRITICAL: connect to address localhost and port 8150: Connection refused
[14:54:12] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX
Error Rate is 2% [14:54:13] PROBLEM - cp33 Stunnel for mw141 on cp33 is CRITICAL: connect to address localhost and port 8108: Connection refused [14:54:29] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 72% [14:54:32] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 7.92, 7.63, 6.35 [14:54:44] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 3% [14:55:59] PROBLEM - cp33 Puppet on cp33 is UNKNOWN: NRPE: Unable to read output [14:56:26] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 34% [14:56:32] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 5.33, 6.70, 6.16 [14:56:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 2.96, 3.58, 3.98 [14:57:41] PROBLEM - swiftproxy111 Current Load on swiftproxy111 is WARNING: WARNING - load average: 1.07, 6.65, 7.60 [14:58:42] PROBLEM - en.religiononfire.mar.in.ua - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - en.religiononfire.mar.in.ua All nameservers failed to answer the query. 
[14:58:52] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 5.13, 5.61, 6.62
[14:59:39] PROBLEM - swiftproxy111 Current Load on swiftproxy111 is CRITICAL: CRITICAL - load average: 14.84, 8.04, 7.87
[15:00:40] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 5.90, 4.02, 4.01
[15:01:57] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 70%
[15:02:06] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.68, 6.60, 6.34
[15:02:19] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 86%
[15:02:31] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 7.65, 6.10, 5.63
[15:02:44] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 77%
[15:02:52] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 10.35, 8.35, 7.54
[15:04:26] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 5.89, 6.11, 5.70
[15:04:52] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 5.24, 7.29, 7.27
[15:06:00] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 3.61, 5.74, 6.13
[15:06:15] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 8 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb
[15:06:26] PROBLEM - null-cpu.emudev.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:28] PROBLEM - cp32 HTTPS on cp32 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:28] PROBLEM - cheeseepedia.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:32] PROBLEM - runzeppelin.ru - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:45] PROBLEM - robloxapi.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:46] PROBLEM - cp23 HTTPS on cp23 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:47] PROBLEM - wiki.knowledgerevolution.eu - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:47] PROBLEM - wiki.anempireofdreams.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:06:52] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 3.23, 5.88, 6.75
[15:06:59] PROBLEM - wiki.shaazzz.ir - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:07:08] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb
[15:07:14] PROBLEM - fanpedia.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:07:18] PROBLEM - wiki.esufranchise.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:07:26] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 10.13, 7.74, 6.62
[15:07:27] PROBLEM - cp22 HTTPS on cp22 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:07:37] PROBLEM - crustypedia.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:07:41] PROBLEM - wiki.potabi.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:08:07] PROBLEM - threedomwiki.pcast.site - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:08:07] PROBLEM - wiki.mineland.eu - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:08:14] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[15:08:26] RECOVERY - cp32 HTTPS on cp32 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3750 bytes in 0.435 second response time
[15:08:42] RECOVERY - cp23 HTTPS on cp23 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3734 bytes in 0.402 second response time
[15:09:08] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:09:22] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 4.45, 6.55, 6.32
[15:09:22] RECOVERY - cp22 HTTPS on cp22 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3573 bytes in 0.134 second response time
[15:09:41] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 2%
[15:11:23] PROBLEM - cp32 Puppet on cp32 is UNKNOWN: NRPE: Unable to read output
[15:13:16] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 5.82, 7.06, 6.67
[15:13:43] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 94%
[15:14:18] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 3.68, 5.60, 7.95
[15:15:08] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 217.174.247.33/cpweb, 2a00:da00:1800:326::1/cpweb
[15:15:12] PROBLEM - swiftobject112 Current Load on swiftobject112 is CRITICAL: CRITICAL - load average: 8.21, 7.36, 6.82
[15:16:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.06, 6.23, 7.89
[15:17:08] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[15:20:48] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 18%
[15:22:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 2.98, 4.20, 6.40
[15:22:32] RECOVERY - cp33 Stunnel for mw132 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8107
[15:22:37] RECOVERY - cp33 Stunnel for mw143 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8112
[15:22:44] RECOVERY - cp33 Stunnel for mail121 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8200
[15:22:49] RECOVERY - cp33 Stunnel for mw133 on cp33 is OK: TCP OK - 0.001 second response time on localhost port 8110
[15:22:52] RECOVERY - cp33 Stunnel for mw131 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8106
[15:22:53] RECOVERY - cp33 Stunnel for mon141 on cp33 is OK: TCP OK - 0.014 second response time on localhost port 8201
[15:22:59] RECOVERY - cp33 Stunnel for test131 on cp33 is OK: TCP OK - 0.008 second response time on localhost port 8180
[15:23:06] RECOVERY - cp33 Stunnel for phab121 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8202
[15:23:13] RECOVERY - cp33 Stunnel for mw134 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8111
[15:23:27] RECOVERY - cp33 Stunnel for mw142 on cp33 is OK: TCP OK - 0.011 second response time on localhost port 8109
[15:23:46] RECOVERY - cp33 Stunnel for reports121 on cp33 is OK: TCP OK - 0.001 second response time on localhost port 8205
[15:23:46] RECOVERY - cp33 Stunnel for puppet141 on cp33 is OK: TCP OK - 0.001 second response time on localhost port 8204
[15:23:53] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 1%
[15:23:53] RECOVERY - cp33 Stunnel for matomo121 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8203
[15:24:02] RECOVERY - cp33 Stunnel for mwtask141 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8150
[15:24:07] RECOVERY - cp32 Stunnel for mwtask141 on cp32 is OK: TCP OK - 0.001 second response time on localhost port 8150
[15:24:09] RECOVERY - cp32 Stunnel for mw133 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8110
[15:24:10] PROBLEM - ping6 on swiftproxy111 is CRITICAL: PING CRITICAL - Packet loss = 16%, RTA = 191.85 ms
[15:24:13] RECOVERY - cp32 Stunnel for mw132 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8107
[15:24:13] RECOVERY - cp33 Stunnel for mw141 on cp33 is OK: TCP OK - 0.000 second response time on localhost port 8108
[15:24:18] RECOVERY - cp32 Stunnel for reports121 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8205
[15:24:32] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is WARNING: WARNING - NGINX Error Rate is 41%
[15:24:51] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 76%
[15:24:52] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3573 bytes in 0.530 second response time
[15:24:56] RECOVERY - cp32 Stunnel for mw134 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8111
[15:24:56] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 15 backends are healthy
[15:24:57] RECOVERY - cp32 Stunnel for mw143 on cp32 is OK: TCP OK - 0.001 second response time on localhost port 8112
[15:25:06] RECOVERY - cp32 Stunnel for phab121 on cp32 is OK: TCP OK - 0.001 second response time on localhost port 8202
[15:25:08] RECOVERY - cp32 Stunnel for mw142 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8109
[15:25:11] RECOVERY - cp32 Stunnel for mw141 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8108
[15:25:26] RECOVERY - cp32 Stunnel for mon141 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8201
[15:25:27] RECOVERY - cp32 Stunnel for matomo121 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8203
[15:25:49] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 15 backends are healthy
[15:25:51] RECOVERY - cp32 Stunnel for mail121 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8200
[15:25:53] RECOVERY - cp32 Stunnel for test131 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8180
[15:25:59] RECOVERY - cp32 Stunnel for puppet141 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8204
[15:26:04] RECOVERY - cp32 Stunnel for mw131 on cp32 is OK: TCP OK - 0.000 second response time on localhost port 8106
[15:26:11] RECOVERY - ping6 on swiftproxy111 is OK: PING OK - Packet loss = 0%, RTA = 24.29 ms
[15:26:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.21, 6.95, 7.33
[15:26:39] RECOVERY - cp22 Current Load on cp22 is OK: LOAD OK - total load average: 1.32, 0.29, 0.10
[15:26:53] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is WARNING: WARNING - NGINX Error Rate is 45%
[15:27:35] RECOVERY - en.religiononfire.mar.in.ua - reverse DNS on sslhost is OK: SSL OK - en.religiononfire.mar.in.ua reverse DNS resolves to cp22.miraheze.org - CNAME OK
[15:28:07] PROBLEM - swiftproxy131 HTTP on swiftproxy131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:28:27] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 1%
[15:28:36] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 1%
[15:28:47] PROBLEM - cp23 Varnish Backends on cp23 is WARNING: No backends detected. If this is an error, see readme.txt
[15:28:58] PROBLEM - cp22 NTP time on cp22 is WARNING: NTP WARNING: Offset 0.4958215058 secs
[15:29:18] PROBLEM - cp23 Puppet on cp23 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[15:30:06] PROBLEM - os141 Current Load on os141 is CRITICAL: CRITICAL - load average: 5.61, 3.93, 2.34
[15:30:06] RECOVERY - swiftproxy131 HTTP on swiftproxy131 is OK: HTTP OK: Status line output matched "HTTP/1.1 404" - 352 bytes in 1.511 second response time
[15:31:04] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 71%
[15:31:07] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.36, 7.89, 7.24
[15:32:06] PROBLEM - os141 Current Load on os141 is WARNING: WARNING - load average: 2.28, 3.41, 2.35
[15:32:32] PROBLEM - cp22 Current Load on cp22 is CRITICAL: LOAD CRITICAL - total load average: 9.58, 6.04, 2.66
[15:32:39] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.84, 7.66, 7.56
[15:34:03] RECOVERY - cp32 Puppet on cp32 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[15:34:05] RECOVERY - cp33 Puppet on cp33 is OK: OK: Puppet is currently enabled, last run 23 seconds ago with 0 failures
[15:34:07] PROBLEM - os141 Current Load on os141 is CRITICAL: CRITICAL - load average: 4.28, 4.13, 2.76
[15:34:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.21, 8.02, 7.73
[15:34:49] PROBLEM - cp23 Puppet on cp23 is UNKNOWN: NRPE: Unable to read output
[15:34:56] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.98, 7.64, 7.31
[15:35:01] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 37%
[15:35:37] RECOVERY - wiki.shaazzz.ir - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.shaazzz.ir' will expire on Thu 05 Oct 2023 22:33:31 GMT +0000.
[15:36:02] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 15 backends are healthy
[15:36:02] RECOVERY - cheeseepedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'cheeseepedia.org' will expire on Sat 30 Sep 2023 13:16:25 GMT +0000.
[15:36:04] RECOVERY - fanpedia.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[15:36:08] RECOVERY - runzeppelin.ru - LetsEncrypt on sslhost is OK: OK - Certificate 'runzeppelin.ru' will expire on Thu 05 Oct 2023 12:02:35 GMT +0000.
[15:36:08] RECOVERY - wiki.esufranchise.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.esufranchise.com' will expire on Sun 05 Nov 2023 15:38:07 GMT +0000.
[15:36:36] RECOVERY - wiki.anempireofdreams.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.anempireofdreams.com' will expire on Fri 01 Dec 2023 07:41:09 GMT +0000.
[15:36:37] RECOVERY - wiki.knowledgerevolution.eu - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.knowledgerevolution.eu' will expire on Wed 18 Oct 2023 19:15:42 GMT +0000.
[15:36:49] RECOVERY - crustypedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'crustypedia.org' will expire on Fri 03 Nov 2023 09:01:30 GMT +0000.
[15:36:51] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.91, 7.83, 7.39
[15:36:58] RECOVERY - wiki.potabi.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.potabi.com' will expire on Fri 06 Oct 2023 12:33:59 GMT +0000.
[15:37:26] PROBLEM - cp22 Puppet on cp22 is UNKNOWN: NRPE: Unable to read output
[15:37:47] RECOVERY - threedomwiki.pcast.site - LetsEncrypt on sslhost is OK: OK - Certificate 'threedomwiki.pcast.site' will expire on Fri 06 Oct 2023 14:14:56 GMT +0000.
[15:37:48] RECOVERY - wiki.mineland.eu - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.mineland.eu' will expire on Thu 05 Oct 2023 21:04:15 GMT +0000.
[15:38:52] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.67, 6.20, 5.65
[15:39:02] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 67%
[15:40:07] PROBLEM - os141 Current Load on os141 is WARNING: WARNING - load average: 2.93, 3.91, 3.18
[15:40:51] PROBLEM - cp23 NTP time on cp23 is WARNING: NTP WARNING: Offset 0.2852746546 secs
[15:40:58] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 0%
[15:41:11] PROBLEM - cp23 Varnish Backends on cp23 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[15:42:01] RECOVERY - cp23 Current Load on cp23 is OK: LOAD OK - total load average: 1.07, 0.48, 0.18
[15:42:06] PROBLEM - os141 Current Load on os141 is CRITICAL: CRITICAL - load average: 4.37, 4.04, 3.31
[15:42:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.25, 8.00, 8.00
[15:42:53] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 5.00, 5.97, 5.72
[15:42:53] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 0.57, 1.22, 3.84
[15:43:09] PROBLEM - cp22 Puppet on cp22 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:43:19] PROBLEM - mw142 Current Load on mw142 is CRITICAL: CRITICAL - load average: 21.82, 13.81, 8.42
[15:43:21] RECOVERY - cp23 Puppet on cp23 is OK: OK: Puppet is currently enabled, last run 17 seconds ago with 0 failures
[15:43:34] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 15.89, 11.18, 7.10
[15:43:36] PROBLEM - mw141 Current Load on mw141 is CRITICAL: CRITICAL - load average: 15.59, 10.79, 6.80
[15:43:51] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 6 backends are down. mw131 mw132 mw141 mw133 mw134 mw143
[15:44:00] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 24.57, 14.48, 8.19
[15:44:05] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 6 backends are down. mw131 mw132 mw141 mw133 mw134 mw143
[15:44:12] PROBLEM - mw143 PowerDNS Recursor on mw143 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[15:44:27] PROBLEM - mw143 MediaWiki Rendering on mw143 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:44:29] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 3.17, 6.84, 7.44
[15:44:32] PROBLEM - cp22 Current Load on cp22 is WARNING: LOAD WARNING - total load average: 1.41, 3.94, 3.98
[15:44:54] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 0.22, 0.87, 3.40
[15:44:56] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 6 backends are down. mw131 mw132 mw141 mw142 mw134 mw143
[15:45:03] PROBLEM - mw143 Current Load on mw143 is CRITICAL: CRITICAL - load average: 31.00, 18.80, 9.62
[15:45:05] PROBLEM - mw134 PowerDNS Recursor on mw134 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[15:45:08] PROBLEM - mw132 PowerDNS Recursor on mw132 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[15:45:25] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 0.12, 0.69, 3.83
[15:45:41] PROBLEM - mw133 Current Load on mw133 is CRITICAL: CRITICAL - load average: 21.07, 14.43, 8.28
[15:45:47] PROBLEM - mw134 Current Load on mw134 is WARNING: WARNING - load average: 11.35, 11.63, 7.37
[15:45:47] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:45:52] PROBLEM - mon141 Current Load on mon141 is CRITICAL: CRITICAL - load average: 7.21, 4.50, 2.49
[15:45:52] PROBLEM - mw133 PowerDNS Recursor on mw133 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[15:46:00] PROBLEM - mw134 MediaWiki Rendering on mw134 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[15:47:42] there go the bots
[15:50:52] PROBLEM - swiftobject112 Puppet on swiftobject112 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:50:54] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 5.12, 4.69, 6.76
[15:51:08] RECOVERY - mw134 MediaWiki Rendering on mw134 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.217 second response time
[15:51:14] PROBLEM - swiftobject113 Puppet on swiftobject113 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:51:25] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.319 second response time
[15:51:35] PROBLEM - mem141 Puppet on mem141 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[15:51:36] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 15 backends are healthy
[15:51:39] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 15 backends are healthy
[15:51:42] PROBLEM - os141 Current Load on os141 is WARNING: WARNING - load average: 1.01, 3.17, 3.60
[15:51:59] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.564 second response time
[15:52:58] PROBLEM - mw134 Current Load on mw134 is WARNING: WARNING - load average: 7.11, 11.09, 9.23
[15:52:58] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 17.26, 10.46, 8.21
[15:53:00] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 15 backends are healthy
[15:53:08] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 15 backends are healthy
[15:53:11] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 9.10, 6.99, 5.73
[15:53:13] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 10.87, 6.79, 5.74
[15:53:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 18.04, 12.31, 9.12
[15:53:29] RECOVERY - mw133 Current Load on mw133 is OK: OK - load average: 4.77, 9.99, 9.08
[15:53:37] RECOVERY - os141 Current Load on os141 is OK: OK - load average: 0.99, 2.56, 3.33
[15:53:40] PROBLEM - mw132 Current Load on mw132 is WARNING: WARNING - load average: 7.06, 11.90, 10.35
[15:54:53] RECOVERY - mw134 Current Load on mw134 is OK: OK - load average: 4.34, 8.96, 8.68
[15:55:06] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 5.94, 6.40, 5.66
[15:55:09] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.79, 6.87, 5.90
[15:55:15] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 8.04, 11.67, 10.84
[15:55:27] PROBLEM - os131 Current Load on os131 is WARNING: WARNING - load average: 2.16, 3.66, 3.12
[15:57:06] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 5.86, 6.47, 5.87
[15:57:24] RECOVERY - os131 Current Load on os131 is OK: OK - load average: 1.44, 2.90, 2.91
[15:57:33] PROBLEM - swiftproxy111 Current Load on swiftproxy111 is WARNING: WARNING - load average: 1.06, 2.04, 7.42
[15:57:36] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 6.36, 9.24, 9.62
[15:59:02] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/f93562d0ff34...7012d18cf5bb
[15:59:04] [miraheze/puppet] paladox 7012d18 - varnish: improve performance by setting some sysctl params
[15:59:08] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 6.22, 8.96, 9.94
[15:59:31] RECOVERY - swiftproxy111 Current Load on swiftproxy111 is OK: OK - load average: 1.73, 2.01, 6.76
[16:00:47] RECOVERY - cp22 Puppet on cp22 is OK: OK: Puppet is currently enabled, last run 19 seconds ago with 0 failures
[16:05:36] RECOVERY - null-cpu.emudev.org - LetsEncrypt on sslhost is OK: OK - Certificate 'null-cpu.emudev.org' will expire on Tue 17 Oct 2023 08:22:32 GMT +0000.
[16:06:33] RECOVERY - robloxapi.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'robloxapi.wiki' will expire on Fri 06 Oct 2023 12:51:49 GMT +0000.
[16:07:30] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 6.49, 6.95, 6.34
[16:08:37] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is WARNING: WARNING - load average: 5.26, 4.91, 7.72
[16:09:25] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 6.38, 6.68, 6.30
[16:12:27] RECOVERY - swiftproxy131 Current Load on swiftproxy131 is OK: OK - load average: 2.67, 3.63, 6.56
[16:13:00] RECOVERY - mail121 Puppet on mail121 is OK: OK: Puppet is currently enabled, last run 22 seconds ago with 0 failures
[16:13:03] RECOVERY - swiftproxy111 Puppet on swiftproxy111 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[16:13:23] RECOVERY - mem141 Puppet on mem141 is OK: OK: Puppet is currently enabled, last run 22 seconds ago with 0 failures
[16:13:40] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 8.00, 7.02, 6.38
[16:15:02] RECOVERY - swiftobject113 Puppet on swiftobject113 is OK: OK: Puppet is currently enabled, last run 53 seconds ago with 0 failures
[16:15:25] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 2.26, 3.27, 3.89
[16:15:36] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 8.88, 7.17, 6.49
[16:16:29] RECOVERY - swiftobject112 Puppet on swiftobject112 is OK: OK: Puppet is currently enabled, last run 47 seconds ago with 0 failures
[16:17:33] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 5.23, 6.41, 6.29
[16:23:25] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.72, 2.63, 3.32
[16:32:12] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 4.36, 3.39, 3.33
[16:34:07] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 3.37, 3.39, 3.34
[16:48:50] PROBLEM - cp33 NTP time on cp33 is WARNING: NTP WARNING: Offset -0.1004057527 secs
[16:50:56] PROBLEM - os141 Current Load on os141 is WARNING: WARNING - load average: 3.77, 2.99, 1.88
[16:52:57] PROBLEM - os141 Current Load on os141 is CRITICAL: CRITICAL - load average: 5.84, 3.92, 2.36
[16:56:55] PROBLEM - os141 Current Load on os141 is WARNING: WARNING - load average: 2.08, 3.71, 2.71
[16:58:21] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.16, 7.17, 8.00
[16:58:56] RECOVERY - os141 Current Load on os141 is OK: OK - load average: 1.29, 2.93, 2.54
[17:00:20] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.17, 7.62, 8.03
[17:03:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.54, 6.69, 7.77
[17:08:15] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.94, 7.63, 7.96
[17:09:47] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.69, 3.47, 3.19
[17:10:14] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 7.56, 7.86, 8.01
[17:11:42] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.89, 3.33, 3.17
[17:12:13] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.35, 7.57, 7.88
[17:15:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.39, 7.53, 7.59
[17:15:33] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.59, 3.62, 3.33
[17:16:46] PROBLEM - os131 Current Load on os131 is WARNING: WARNING - load average: 3.62, 3.34, 2.53
[17:17:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.91, 6.89, 7.35
[17:21:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.32, 7.18, 7.31
[17:22:07] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 11.43, 8.02, 7.71
[17:22:46] RECOVERY - os131 Current Load on os131 is OK: OK - load average: 2.58, 3.03, 2.70
[17:23:25] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.61, 3.14, 3.25
[17:28:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.44, 7.19, 7.60
[17:29:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.92, 7.53, 7.63
[17:31:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.49, 7.96, 7.77
[17:32:00] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.14, 7.20, 7.47
[17:37:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.19, 7.57, 7.79
[17:39:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.26, 7.79, 7.81
[17:41:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.37, 7.06, 7.56
[17:41:54] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.39, 7.63, 7.79
[17:47:37] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-4 [+0/-0/±1] https://github.com/miraheze/puppet/commit/870d91cbbf2f
[17:47:38] [miraheze/puppet] paladox 870d91c - varnish: set nginx backlog to 1638
[17:47:39] [puppet] paladox created branch paladox-patch-4 - https://github.com/miraheze/puppet
[17:47:40] [puppet] paladox opened pull request #3407: varnish: set nginx backlog to 1638 - https://github.com/miraheze/puppet/pull/3407
[17:49:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.42, 7.18, 7.31
[17:53:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.17, 6.47, 7.01
[17:53:47] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 4.47, 5.73, 6.72
[17:55:46] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.79, 3.26, 2.87
[17:57:16] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.52, 6.08, 6.73
[17:57:41] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 3.08, 3.14, 2.87
[18:01:15] PROBLEM - cp22 Puppet on cp22 is UNKNOWN: NRPE: Unable to read output
[18:01:28] PROBLEM - cp32 Puppet on cp32 is UNKNOWN: NRPE: Unable to read output
[18:01:33] PROBLEM - cp33 Puppet on cp33 is UNKNOWN: NRPE: Unable to read output
[18:01:41] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.90, 6.43, 6.58
[18:02:31] PROBLEM - cp23 Puppet on cp23 is UNKNOWN: NRPE: Unable to read output
[18:03:40] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 4.94, 5.73, 6.30
[18:04:38] [puppet] paladox synchronize pull request #3407: varnish: set nginx backlog to 1638 - https://github.com/miraheze/puppet/pull/3407
[18:04:39] [miraheze/puppet] paladox pushed 1 commit to paladox-patch-4 [+0/-0/±1] https://github.com/miraheze/puppet/compare/870d91cbbf2f...fa0ec8af8030
[18:04:42] [miraheze/puppet] paladox fa0ec8a - Update mediawiki.conf
[18:05:27] RECOVERY - cp32 Puppet on cp32 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[18:09:15] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.64, 3.30, 2.84
[18:09:28] PROBLEM - cp32 Puppet on cp32 is UNKNOWN: NRPE: Unable to read output
[18:09:33]
RECOVERY - cp33 Puppet on cp33 is OK: OK: Puppet is currently enabled, last run 2 seconds ago with 0 failures [18:10:31] RECOVERY - cp23 Puppet on cp23 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:11:14] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.05, 3.48, 2.96 [18:11:15] RECOVERY - cp22 Puppet on cp22 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:11:27] RECOVERY - cp32 Puppet on cp32 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [18:15:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.72, 3.72, 3.19 [18:21:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.04, 6.85, 6.60 [18:23:14] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.31, 4.11, 3.58 [18:23:48] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.00, 6.70, 6.60 [18:28:20] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.35, 8.08, 7.09 [18:31:35] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.89, 8.14, 7.15 [18:33:32] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.03, 7.74, 7.15 [18:36:17] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.97, 7.54, 7.42 [18:37:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.82, 4.00, 3.88 [18:37:27] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.14, 6.40, 6.73 [18:40:14] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.40, 7.27, 7.26 [18:41:14] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.39, 4.08, 3.93 [18:41:20] PROBLEM - 
swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.63, 7.29, 7.02 [18:42:13] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.52, 6.54, 7.00 [18:43:17] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.90, 7.77, 7.22 [18:47:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.40, 7.04, 7.08 [18:48:09] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.84, 6.21, 6.72 [18:49:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.79, 7.44, 7.20 [18:51:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.98, 7.20, 7.15 [18:52:06] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.88, 6.78, 6.90 [18:53:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.37, 7.57, 7.29 [18:54:05] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.62, 6.48, 6.78 [18:55:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.46, 6.92, 7.10 [18:57:24] [02puppet] 07paladox closed pull request 03#3407: varnish: set nginx backlog to 1638 - 13https://github.com/miraheze/puppet/pull/3407 [18:57:26] [02miraheze/puppet] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/puppet/compare/7012d18cf5bb...f5e8c55deb3a [18:57:28] [02miraheze/puppet] 07paladox 03f5e8c55 - varnish: set nginx backlog to 1638 (#3407) [18:57:29] [02miraheze/puppet] 07paladox deleted branch 03paladox-patch-4 [18:57:32] [02puppet] 07paladox deleted branch 03paladox-patch-4 - 13https://github.com/miraheze/puppet [18:58:01] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.37, 6.68, 6.83 [18:59:16] 
RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.88, 6.15, 6.75 [19:00:00] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.07, 6.14, 6.62 [19:05:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.51, 6.81, 6.82 [19:07:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.15, 3.50, 3.92 [19:09:16] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.20, 6.32, 6.65 [19:09:52] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.62, 7.29, 6.95 [19:11:51] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.36, 6.49, 6.69 [19:13:14] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.05, 3.66, 3.85 [19:15:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.60, 3.57, 3.79 [19:15:15] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 8.00, 7.40, 6.97 [19:16:06] PROBLEM - uk.religiononfire.mar.in.ua - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query uk.religiononfire.mar.in.ua. 
IN CNAME: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL [19:17:14] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.13, 3.77, 3.83 [19:19:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.54, 3.71, 3.80 [19:19:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.17, 6.96, 6.85 [19:21:44] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.51, 7.24, 6.80 [19:21:54] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.07, 6.83, 5.78 [19:23:43] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.77, 6.61, 6.63 [19:23:54] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 5.01, 6.00, 5.60 [19:25:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.06, 7.38, 7.40 [19:29:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.10, 8.45, 7.76 [19:29:38] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.60, 8.67, 7.40 [19:31:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.56, 7.37, 7.46 [19:33:35] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.14, 7.67, 7.27 [19:37:33] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.83, 6.22, 6.75 [19:45:11] RECOVERY - uk.religiononfire.mar.in.ua - reverse DNS on sslhost is OK: SSL OK - uk.religiononfire.mar.in.ua reverse DNS resolves to cp22.miraheze.org - CNAME OK [19:45:16] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.70, 5.85, 6.69 [19:49:25] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.87, 6.16, 6.26 
[19:51:24] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.00, 6.00, 6.19
[19:52:08] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.86, 8.16, 7.37
[19:55:21] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.49, 6.95, 6.65
[19:56:03] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.26, 7.33, 7.23
[19:59:14] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.14, 3.30, 3.40
[20:01:17] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.34, 6.79, 6.68
[20:05:14] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.59, 7.65, 7.08
[20:05:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.58, 3.34, 3.36
[20:07:13] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.12, 7.31, 7.01
[20:07:48] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.76, 6.03, 6.67
[20:09:11] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.29, 6.44, 6.72
[20:13:14] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.23, 3.31, 3.35
[20:28:56] PROBLEM - cp32 Disk Space on cp32 is CRITICAL: DISK CRITICAL - free space: / 4453MiB (5% inode=98%);
[20:29:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.99, 3.62, 3.46
[20:29:16] PROBLEM - cp33 Disk Space on cp33 is WARNING: DISK WARNING - free space: / 6186MiB (8% inode=98%);
[20:33:14] PROBLEM - cp23 NTP time on cp23 is CRITICAL: NTP CRITICAL: Offset 0.5590316951 secs
[20:37:14] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.72, 3.28, 3.38
[20:41:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.21, 3.45, 3.44
[20:54:00] PROBLEM - mw134 Current Load on mw134 is CRITICAL: CRITICAL - load average: 17.47, 9.93, 6.27
[20:54:01] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb, 2607:f1c0:1800:8100::1/cpweb, 2607:f1c0:1800:26f::1/cpweb
[20:54:07] PROBLEM - closinglogosgroup.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:08] PROBLEM - wikicompliance.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:08] PROBLEM - cp22 HTTPS on cp22 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:13] PROBLEM - fotnswiki.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:14] PROBLEM - history.estill.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:19] PROBLEM - wiki.qadrishattari.xyz - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:20] PROBLEM - www.turtletown.ca - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:20] PROBLEM - www.sidem.wiki - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:22] PROBLEM - allthetropes.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:22] PROBLEM - data.wikiyri.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:24] PROBLEM - mw133 Current Load on mw133 is CRITICAL: CRITICAL - load average: 18.61, 10.45, 6.64
[20:54:33] PROBLEM - fanonpedia.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:36] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[20:54:42] PROBLEM - equestripedia.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:42] PROBLEM - cp23 HTTPS on cp23 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:45] PROBLEM - solaswiki.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:49] PROBLEM - mw142 Current Load on mw142 is CRITICAL: CRITICAL - load average: 14.16, 9.11, 5.85
[20:54:52] PROBLEM - wiki.aridia.space - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:52] PROBLEM - acgn.sfdev.eu.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:54:56] PROBLEM - cp22 Current Load on cp22 is CRITICAL: LOAD CRITICAL - total load average: 38.65, 34.03, 15.22
[20:55:01] PROBLEM - crashspyro.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:55:02] PROBLEM - wiki.closai.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:55:05] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is WARNING: WARNING - NGINX Error Rate is 44%
[20:55:06] PROBLEM - wiki.projectdiablo2.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:55:07] PROBLEM - cp33 HTTPS on cp33 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:55:14] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.82, 3.23, 3.38
[20:55:17] PROBLEM - cp33 Current Load on cp33 is CRITICAL: LOAD CRITICAL - total load average: 24.20, 25.04, 11.56
[20:55:19] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is WARNING: WARNING - NGINX Error Rate is 45%
[20:55:21] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 109.228.51.216/cpweb, 2a00:da00:1800:328::1/cpweb
[20:55:28] PROBLEM - wiki.infomedia.co.uk - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[20:55:38] PROBLEM - cp32 Current Load on cp32 is CRITICAL: LOAD CRITICAL - total load average: 21.28, 16.79, 7.74
[20:56:00] PROBLEM - cp23 Current Load on cp23 is CRITICAL: LOAD CRITICAL - total load average: 706.63, 448.00, 178.20
[20:56:00] RECOVERY - mw134 Current Load on mw134 is OK: OK - load average: 5.58, 8.84, 6.39
[20:56:24] RECOVERY - mw133 Current Load on mw133 is OK: OK - load average: 4.48, 8.25, 6.33
[20:56:31] PROBLEM - cp22 Varnish Backends on cp22 is WARNING: No backends detected. If this is an error, see readme.txt
[20:56:36] RECOVERY - cp23 HTTPS on cp23 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3733 bytes in 0.247 second response time
[20:56:49] RECOVERY - mw142 Current Load on mw142 is OK: OK - load average: 4.19, 7.38, 5.64
[20:57:02] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 2%
[20:57:07] RECOVERY - cp33 HTTPS on cp33 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3753 bytes in 0.754 second response time
[20:57:20] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 1%
[20:58:08] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK
[20:59:40] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[21:02:03] RECOVERY - cp22 HTTPS on cp22 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3732 bytes in 0.137 second response time
[21:02:08] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 8 = Critical]
[21:02:35] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 15 backends are healthy
[21:02:43] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 23.68, 11.96, 7.58
[21:03:18] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[21:03:25] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 4.20, 3.31, 2.59
[21:03:40] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 2%
[21:03:55] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 12.20, 7.97, 5.72
[21:04:01] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[21:04:04] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.32, 3.67, 3.46
[21:04:14] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 8.27, 6.37, 4.65
[21:04:34] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 14.91, 11.50, 7.99
[21:05:25] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.60, 3.51, 2.76
[21:06:00] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.50, 3.57, 3.45
[21:06:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.95, 6.95, 5.34
[21:06:14] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 7.34, 7.06, 5.15
[21:07:25] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 3.18, 3.39, 2.81
[21:08:14] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 6.17, 6.79, 5.29
[21:09:52] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.23, 3.75, 3.53
[21:10:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 11.99, 9.67, 6.76
[21:11:47] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.56, 3.61, 3.50
[21:12:14] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 5.71, 7.23, 5.95
[21:12:20] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is CRITICAL: CRITICAL - load average: 8.07, 7.42, 4.65
[21:14:14] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 5.20, 6.62, 5.89
[21:14:15] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is WARNING: WARNING - load average: 6.77, 7.32, 4.95
[21:16:09] RECOVERY - swiftproxy131 Current Load on swiftproxy131 is OK: OK - load average: 3.94, 6.07, 4.76
[21:17:35] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.89, 3.18, 3.36
[21:17:37] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 0.44, 1.27, 3.85
[21:17:55] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 5.78, 7.36, 7.17
[21:18:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 6.51, 7.43, 6.85
[21:20:31] PROBLEM - cloud14 APT on cloud14 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected.
[21:21:37] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 0.69, 0.86, 3.07
[21:22:33] PROBLEM - cloud14 APT on cloud14 is CRITICAL: APT CRITICAL: 124 packages available for upgrade (13 critical updates).
[21:23:19] RECOVERY - closinglogosgroup.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[21:23:32] RECOVERY - wikicompliance.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wikicompliance.com' will expire on Sat 07 Oct 2023 03:18:36 GMT +0000.
[21:23:39] RECOVERY - www.turtletown.ca - LetsEncrypt on sslhost is OK: OK - Certificate 'www.turtletown.ca' will expire on Sat 07 Oct 2023 14:41:54 GMT +0000.
[21:23:41] RECOVERY - wiki.qadrishattari.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.qadrishattari.xyz' will expire on Fri 06 Oct 2023 05:12:22 GMT +0000.
[21:23:41] RECOVERY - wiki.closai.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.closai.com' will expire on Wed 15 Nov 2023 11:35:23 GMT +0000.
[21:23:45] RECOVERY - wiki.projectdiablo2.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.projectdiablo2.com' will expire on Fri 06 Oct 2023 19:07:17 GMT +0000.
[21:23:47] RECOVERY - fotnswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'fotnswiki.com' will expire on Sat 07 Oct 2023 20:03:55 GMT +0000.
[21:23:56] RECOVERY - data.wikiyri.org - LetsEncrypt on sslhost is OK: OK - Certificate 'data.wikiyri.org' will expire on Wed 18 Oct 2023 20:33:54 GMT +0000.
[21:24:01] RECOVERY - fanonpedia.com - LetsEncrypt on sslhost is OK: OK - Certificate 'fanonpedia.com' will expire on Sun 29 Oct 2023 17:33:54 GMT +0000.
[21:24:03] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 4.91, 6.24, 6.53
[21:24:03] RECOVERY - acgn.sfdev.eu.org - LetsEncrypt on sslhost is OK: OK - Certificate 'acgn.sfdev.eu.org' will expire on Sat 07 Oct 2023 23:42:29 GMT +0000.
[21:24:06] RECOVERY - history.estill.org - LetsEncrypt on sslhost is OK: OK - Certificate 'history.estill.org' will expire on Thu 05 Oct 2023 20:04:14 GMT +0000.
[21:24:07] RECOVERY - www.sidem.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'www.sidem.wiki' will expire on Fri 06 Oct 2023 18:43:43 GMT +0000.
[21:24:12] RECOVERY - allthetropes.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[21:24:13] RECOVERY - equestripedia.org - LetsEncrypt on sslhost is OK: OK - Certificate 'equestripedia.org' will expire on Fri 06 Oct 2023 05:44:50 GMT +0000.
[21:24:20] RECOVERY - solaswiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'solaswiki.org' will expire on Sat 07 Oct 2023 16:23:57 GMT +0000.
[21:24:20] RECOVERY - wiki.infomedia.co.uk - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.infomedia.co.uk' will expire on Sat 07 Oct 2023 04:15:56 GMT +0000.
[21:24:38] RECOVERY - crashspyro.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000.
[21:24:45] RECOVERY - wiki.aridia.space - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.aridia.space' will expire on Fri 06 Oct 2023 05:19:35 GMT +0000.
[21:24:55] PROBLEM - cp22 Current Load on cp22 is WARNING: LOAD WARNING - total load average: 0.60, 0.75, 3.59
[21:25:55] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 5.25, 6.17, 6.70
[21:26:55] RECOVERY - cp22 Current Load on cp22 is OK: LOAD OK - total load average: 0.36, 0.62, 3.19
[21:33:38] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.81, 3.40, 3.20
[21:34:08] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.43, 3.35, 3.33
[21:35:16] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 0.62, 0.90, 3.91
[21:35:34] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.06, 2.89, 3.04
[21:36:04] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.86, 3.26, 3.30
[21:39:16] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 0.79, 0.83, 3.18
[21:43:50] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.78, 3.52, 3.41
[21:47:41] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.35, 3.34, 3.36
[21:56:14] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.66, 6.85, 7.97
[21:58:12] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.97, 7.85, 8.21
[21:59:21] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.04, 3.65, 3.47
[22:00:11] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 4.22, 6.45, 7.66
[22:00:45] [miraheze/puppet] The-Voidwalker pushed 1 commit to The-Voidwalker-patch-1 [+0/-0/±1] https://github.com/miraheze/puppet/commit/01e2d5b41d7b
[22:00:48] [miraheze/puppet] The-Voidwalker 01e2d5b - load block_abuse from a private file
[22:00:49] [puppet] The-Voidwalker opened pull request #3408: load block_abuse from a private file - https://github.com/miraheze/puppet/pull/3408
[22:00:51] [puppet] The-Voidwalker created branch The-Voidwalker-patch-1 - https://github.com/miraheze/puppet
[22:00:56] PROBLEM - cp22 Current Load on cp22 is CRITICAL: LOAD CRITICAL - total load average: 17.74, 6.39, 2.86
[22:01:03] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 90%
[22:01:04] PROBLEM - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is CRITICAL: CRITICAL - NGINX Error Rate is 95%
[22:01:09] PROBLEM - cp32 Varnish Backends on cp32 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[22:01:09] PROBLEM - mw133 Current Load on mw133 is CRITICAL: CRITICAL - load average: 17.11, 10.16, 6.72
[22:01:11] PROBLEM - mw141 Current Load on mw141 is CRITICAL: CRITICAL - load average: 16.11, 9.04, 5.17
[22:01:11] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 6 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2a00:da00:1800:328::1/cpweb, 108.175.15.182/cpweb, 74.208.203.152/cpweb
[22:01:15] PROBLEM - cp23 Varnish Backends on cp23 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[22:01:17] PROBLEM - cp33 Current Load on cp33 is CRITICAL: LOAD CRITICAL - total load average: 13.44, 6.68, 3.66
[22:01:17] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.55, 3.22, 3.33
[22:01:20] PROBLEM - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is CRITICAL: CRITICAL - NGINX Error Rate is 96%
[22:01:21] PROBLEM - cp33 Varnish Backends on cp33 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[22:01:21] PROBLEM - mw143 Current Load on mw143 is CRITICAL: CRITICAL - load average: 14.75, 8.96, 5.30
[22:01:23] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 3.58, 6.33, 7.65
[22:01:28] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[22:01:36] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 13.39, 10.62, 7.82
[22:01:38] PROBLEM - cp32 Current Load on cp32 is CRITICAL: LOAD CRITICAL - total load average: 13.49, 6.79, 3.22
[22:01:44] PROBLEM - mw131 Current Load on mw131 is CRITICAL: CRITICAL - load average: 14.54, 11.01, 7.83
[22:02:00] PROBLEM - mw134 Current Load on mw134 is CRITICAL: CRITICAL - load average: 15.64, 10.81, 7.44
[22:02:01] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 4 datacenters are down: 217.174.247.33/cpweb, 109.228.51.216/cpweb, 2a00:da00:1800:326::1/cpweb, 2607:f1c0:1800:8100::1/cpweb
[22:02:21] PROBLEM - cp22 Varnish Backends on cp22 is CRITICAL: 2 backends are down. mw132 mw143
[22:02:34] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 5.73, 4.66, 3.57
[22:02:49] PROBLEM - mw142 Current Load on mw142 is CRITICAL: CRITICAL - load average: 14.23, 10.64, 6.77
[22:03:02] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 37%
[22:03:11] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[22:03:13] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:03:19] RECOVERY - cp33 HTTP 4xx/5xx ERROR Rate on cp33 is OK: OK - NGINX Error Rate is 18%
[22:03:59] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:04:01] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:04:09] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 2.74, 4.75, 6.70
[22:05:28] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is WARNING: WARNING - NGINX Error Rate is 40%
[22:07:13] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.540 second response time
[22:07:15] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.66, 5.09, 6.65
[22:07:16] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 3.62, 3.77, 3.31
[22:07:31] PROBLEM - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is CRITICAL: CRITICAL - NGINX Error Rate is 99%
[22:07:37] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 1.60, 3.46, 2.89
[22:07:38] PROBLEM - cp22 Disk Space on cp22 is WARNING: DISK WARNING - free space: / 7311MiB (9% inode=98%);
[22:09:16] PROBLEM - cp33 Current Load on cp33 is CRITICAL: LOAD CRITICAL - total load average: 5.87, 4.39, 3.58
[22:09:23] PROBLEM - cp33 Disk Space on cp33 is CRITICAL: DISK CRITICAL - free space: / 4409MiB (5% inode=98%);
[22:09:37] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 1.51, 2.84, 2.73
[22:11:02] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 75%
[22:11:14] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 3.53, 3.47, 3.37
[22:11:16] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 1.94, 3.50, 3.35
[22:11:58] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.328 second response time
[22:13:03] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is WARNING: WARNING - NGINX Error Rate is 47%
[22:13:14] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 3.07, 3.36, 3.35
[22:13:16] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 2.35, 3.28, 3.30
[22:13:18] [miraheze/puppet] The-Voidwalker pushed 1 commit to The-Voidwalker-patch-1 [+0/-0/±1] https://github.com/miraheze/puppet/compare/01e2d5b41d7b...8f93a3a3efa1
[22:13:20] [miraheze/puppet] The-Voidwalker 8f93a3a - move block_abuse out of class parameters
[22:13:23] [puppet] The-Voidwalker synchronize pull request #3408: load block_abuse from a private file - https://github.com/miraheze/puppet/pull/3408
[22:15:37] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 3.61, 3.94, 3.26
[22:17:02] PROBLEM - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is CRITICAL: CRITICAL - NGINX Error Rate is 86%
[22:17:37] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 2.24, 3.25, 3.08
[22:18:29] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK
[22:19:55] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 5903 bytes in 0.011 second response time
[22:20:36] [puppet] The-Voidwalker closed pull request #3408: load block_abuse from a private file - https://github.com/miraheze/puppet/pull/3408
[22:20:38] [miraheze/puppet] The-Voidwalker pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/f5e8c55deb3a...ea07c45cbcb4
[22:20:41] [miraheze/puppet] The-Voidwalker ea07c45 - load block_abuse from a private file (#3408)
[22:20:44] [miraheze/puppet] The-Voidwalker deleted branch The-Voidwalker-patch-1
[22:20:46] [puppet] The-Voidwalker deleted branch The-Voidwalker-patch-1 - https://github.com/miraheze/puppet
[22:21:16] PROBLEM - cp33 Current Load on cp33 is WARNING: LOAD WARNING - total load average: 3.53, 3.79, 3.44
[22:21:37] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 3.26, 3.52, 3.26
[22:21:37] PROBLEM - cp22 Disk Space on cp22 is CRITICAL: DISK CRITICAL - free space: / 4461MiB (5% inode=98%);
[22:21:53] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.253 second response time
[22:22:24] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 8 = Critical]
[22:23:16] RECOVERY - cp33 Current Load on cp33 is OK: LOAD OK - total load average: 1.37, 2.98, 3.20
[22:23:37] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 2.18, 2.92, 3.07
[22:25:02] RECOVERY - cp23 HTTP 4xx/5xx ERROR Rate on cp23 is OK: OK - NGINX Error Rate is 6%
[22:25:03] RECOVERY - cp33 Varnish Backends on cp33 is OK: All 15 backends are healthy
[22:25:15] RECOVERY - cp23 Varnish Backends on cp23 is OK: All 15 backends are healthy
[22:27:08] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 15 backends are healthy
[22:27:37] PROBLEM - cp32 Current Load on cp32 is WARNING: LOAD WARNING - total load average: 1.21, 3.60, 3.53
[22:27:57] PROBLEM - thelangyalist.miraheze.org - Sectigo on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:28:06] RECOVERY - cp22 NTP time on cp22 is OK: NTP OK: Offset 0.02370035648 secs
[22:28:08] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 4.67, 3.75, 3.45
[22:28:08] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 217.174.247.33/cpweb, 2a00:da00:1800:326::1/cpweb
[22:28:12] RECOVERY - cp22 Varnish Backends on cp22 is OK: All 15 backends are healthy
[22:29:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.32, 7.66, 6.45
[22:29:37] RECOVERY - cp32 Current Load on cp32 is OK: LOAD OK - total load average: 0.45, 2.55, 3.15
[22:29:50] RECOVERY - cp22 Current Load on cp22 is OK: LOAD OK - total load average: 0.24, 0.14, 0.05
[22:30:00] RECOVERY - cp22 HTTP 4xx/5xx ERROR Rate on cp22 is OK: OK - NGINX Error Rate is 1%
[22:30:00] PROBLEM - mw134 Current Load on mw134 is WARNING: WARNING - load average: 2.34, 8.75, 11.23
[22:30:03] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[22:30:14] PROBLEM - os131 Current Load on os131 is CRITICAL: CRITICAL - load average: 4.44, 3.71, 2.62
[22:31:11] PROBLEM - mw141 Current Load on mw141 is WARNING: WARNING - load average: 3.04, 8.05, 11.32
[22:31:21] PROBLEM - mw143 Current Load on mw143 is WARNING: WARNING - load average: 3.72, 8.83, 11.29
[22:31:36] PROBLEM - mw132 Current Load on mw132 is WARNING: WARNING - load average: 3.39, 7.92, 11.52
[22:31:44] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 4.52, 8.45, 11.51
[22:32:00] RECOVERY - mw134 Current Load on mw134 is OK: OK - load average: 2.46, 6.58, 10.13
[22:32:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.46, 6.19, 4.97
[22:32:12] PROBLEM - os131 Current Load on os131 is WARNING: WARNING - load average: 3.75, 3.74, 2.77
[22:32:49] PROBLEM - mw142 Current Load on mw142 is WARNING: WARNING - load average: 4.49, 7.70, 11.22
[22:32:50] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 14.39, 10.45, 7.25
[22:33:01] PROBLEM - mw133 Current Load on mw133 is WARNING: WARNING - load
average: 3.76, 7.37, 11.10 [22:34:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 10.63, 7.71, 5.66 [22:34:09] RECOVERY - os131 Current Load on os131 is OK: OK - load average: 2.02, 3.10, 2.65 [22:35:02] PROBLEM - cp32 HTTPS on cp32 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 328 bytes in 0.433 second response time [22:35:08] PROBLEM - cp32 Varnish Backends on cp32 is WARNING: No backends detected. If this is an error, see readme.txt [22:35:11] RECOVERY - mw141 Current Load on mw141 is OK: OK - load average: 3.84, 5.43, 9.46 [22:35:12] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 2 datacenters are down: 108.175.15.182/cpweb, 2607:f1c0:1800:8100::1/cpweb [22:35:21] RECOVERY - mw143 Current Load on mw143 is OK: OK - load average: 3.01, 5.61, 9.40 [22:36:01] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 2 datacenters are down: 108.175.15.182/cpweb, 2607:f1c0:1800:8100::1/cpweb [22:36:14] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 9.20, 7.12, 5.13 [22:36:49] RECOVERY - mw142 Current Load on mw142 is OK: OK - load average: 4.88, 6.02, 9.72 [22:37:00] RECOVERY - mw133 Current Load on mw133 is OK: OK - load average: 4.67, 5.81, 9.60 [22:37:36] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 5.88, 6.61, 9.75 [22:37:43] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is WARNING: WARNING - load average: 7.61, 5.95, 3.50 [22:37:44] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 5.05, 6.20, 9.53 [22:37:47] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 2.87, 3.81, 3.72 [22:38:14] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 6.37, 6.82, 5.26 [22:38:25] PROBLEM - cp32 Puppet on cp32 is UNKNOWN: NRPE: Unable to read output [22:38:54] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load 
average: 7.04, 6.18, 4.94 [22:39:43] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is CRITICAL: CRITICAL - load average: 8.06, 6.59, 4.03 [22:40:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.35, 7.98, 6.49 [22:40:14] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 5.98, 6.68, 5.41 [22:40:54] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 6.61, 6.35, 5.15 [22:42:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 8.44, 7.77, 6.57 [22:43:08] RECOVERY - cp32 Varnish Backends on cp32 is OK: All 15 backends are healthy [22:43:55] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.97, 8.19, 6.42 [22:44:01] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [22:44:14] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 8.67, 7.82, 6.16 [22:44:58] RECOVERY - cp32 HTTPS on cp32 is OK: HTTP OK: HTTP/1.1 301 Moved Permanently - 3733 bytes in 0.530 second response time [22:45:02] RECOVERY - cp32 HTTP 4xx/5xx ERROR Rate on cp32 is OK: OK - NGINX Error Rate is 3% [22:45:11] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [22:45:43] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is WARNING: WARNING - load average: 7.69, 7.33, 5.12 [22:45:55] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 7.32, 7.62, 6.42 [22:47:43] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 7.61, 7.58, 6.11 [22:48:14] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 5.25, 7.08, 6.28 [22:49:36] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 5.46, 6.80, 5.99 [22:50:14] RECOVERY - swiftobject111 Current Load on 
swiftobject111 is OK: OK - load average: 5.26, 6.79, 6.29 [22:51:19] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 2.68, 3.07, 3.35 [22:51:43] PROBLEM - swiftproxy131 Current Load on swiftproxy131 is CRITICAL: CRITICAL - load average: 8.06, 7.89, 6.07 [22:51:44] PROBLEM - mw131 Current Load on mw131 is WARNING: WARNING - load average: 10.31, 9.87, 9.41 [22:51:55] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.08, 7.34, 6.65 [22:53:43] RECOVERY - swiftproxy131 Current Load on swiftproxy131 is OK: OK - load average: 2.67, 5.91, 5.57 [22:53:55] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 7.85, 7.17, 6.65 [22:54:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 5.35, 7.26, 7.46 [22:54:50] PROBLEM - cp23 Current Load on cp23 is WARNING: LOAD WARNING - total load average: 0.42, 0.48, 3.90 [22:55:44] RECOVERY - mw131 Current Load on mw131 is OK: OK - load average: 9.20, 9.53, 9.37 [22:57:27] RECOVERY - thelangyalist.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Sat 11 Nov 2023 23:59:59 GMT +0000. [22:57:55] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 5.54, 6.48, 6.50 [22:58:50] RECOVERY - cp23 Current Load on cp23 is OK: LOAD OK - total load average: 0.59, 0.49, 3.11 [23:00:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 8.59, 7.52, 7.44 [23:00:46] PROBLEM - mw133 Puppet on mw133 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [23:01:25] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 2.36, 2.92, 3.86 [23:01:35] PROBLEM - bast141 Puppet on bast141 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
[23:02:03] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 5.34, 6.60, 7.11 [23:02:19] PROBLEM - puppet141 Puppet on puppet141 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [23:02:59] PROBLEM - mw142 Puppet on mw142 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [23:03:01] PROBLEM - db131 Puppet on db131 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [23:03:49] RECOVERY - cp32 Puppet on cp32 is OK: OK: Puppet is currently enabled, last run 2 seconds ago with 0 failures [23:03:50] PROBLEM - swiftobject101 Puppet on swiftobject101 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [23:04:05] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 14.99, 8.67, 4.03 [23:04:12] PROBLEM - db112 Puppet on db112 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [23:04:19] RECOVERY - puppet141 Puppet on puppet141 is OK: OK: Puppet is currently enabled, last run 58 seconds ago with 0 failures [23:04:25] PROBLEM - graylog131 Puppet on graylog131 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [23:04:51] PROBLEM - ldap141 Puppet on ldap141 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
[23:07:46] [02miraheze/puppet] 07The-Voidwalker pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/puppet/compare/ea07c45cbcb4...c0a34c2fd937 [23:07:48] [02miraheze/puppet] 07The-Voidwalker 03c0a34c2 - use full absolute path [23:08:03] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 5.95, 6.22, 6.76 [23:09:25] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CRITICAL - load average: 4.51, 3.57, 3.76 [23:11:25] PROBLEM - graylog131 Current Load on graylog131 is WARNING: WARNING - load average: 3.33, 3.37, 3.66 [23:12:05] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 1.03, 5.65, 5.02 [23:17:25] RECOVERY - graylog131 Current Load on graylog131 is OK: OK - load average: 2.18, 2.90, 3.38 [23:29:35] RECOVERY - bast141 Puppet on bast141 is OK: OK: Puppet is currently enabled, last run 17 seconds ago with 0 failures [23:30:46] RECOVERY - mw133 Puppet on mw133 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [23:30:59] RECOVERY - mw142 Puppet on mw142 is OK: OK: Puppet is currently enabled, last run 56 seconds ago with 0 failures [23:31:01] RECOVERY - db131 Puppet on db131 is OK: OK: Puppet is currently enabled, last run 40 seconds ago with 0 failures [23:31:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.91, 6.70, 7.91 [23:31:52] RECOVERY - swiftobject101 Puppet on swiftobject101 is OK: OK: Puppet is currently enabled, last run 33 seconds ago with 0 failures [23:32:11] RECOVERY - db112 Puppet on db112 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [23:33:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.98, 7.66, 8.11 [23:34:24] RECOVERY - graylog131 Puppet on graylog131 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [23:34:51] RECOVERY - ldap141 Puppet on ldap141 is OK: OK: Puppet is currently 
enabled, last run 1 minute ago with 0 failures [23:35:15] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.81, 7.45, 7.97 [23:37:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.60, 8.10, 8.11 [23:41:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.55, 7.28, 7.76 [23:46:09] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.95, 6.81, 7.80 [23:49:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.08, 6.58, 7.13 [23:51:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.91, 6.55, 7.06 [23:54:04] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.64, 5.44, 6.68 [23:57:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.09, 6.61, 6.89 [23:59:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.86, 6.67, 6.92