[00:00:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.02, 7.28, 7.89
[00:01:14] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.48, 6.65, 5.95
[00:03:09] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 6.13, 6.17, 5.85
[00:03:16] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 11.67, 6.73, 3.16
[00:04:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.16, 7.84, 8.00
[00:06:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.64, 7.61, 7.91
[00:13:16] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 3.27, 7.91, 6.38
[00:14:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.96, 8.13, 7.95
[00:15:16] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 3.85, 6.67, 6.11
[00:16:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.08, 7.47, 7.74
[00:22:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.85, 7.60, 7.65
[00:24:02] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.59, 7.52, 7.62
[00:29:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.18, 7.48, 7.97
[00:30:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 11.22, 8.70, 7.97
[00:31:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 7.97, 7.99, 8.13
[00:36:56] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK
[00:37:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.67, 7.40, 7.86
[00:40:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.74, 7.40, 7.74
[00:40:55] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 8 = Critical]
[00:45:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.71, 7.45, 7.63
[00:46:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.58, 7.21, 7.47
[00:47:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.27, 7.77, 7.75
[00:50:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.83, 7.24, 7.49
[00:51:01] PROBLEM - matomo121 Current Load on matomo121 is CRITICAL: LOAD CRITICAL - total load average: 4.95, 3.70, 3.05
[00:53:01] RECOVERY - matomo121 Current Load on matomo121 is OK: LOAD OK - total load average: 3.24, 3.32, 2.98
[00:53:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.39, 7.94, 7.83
[00:57:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 4.67, 7.10, 7.58
[01:03:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.33, 7.61, 7.61
[01:09:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.91, 7.60, 7.64
[01:18:03] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.46, 6.34, 6.73
[01:21:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.81, 8.04, 7.57
[01:23:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.59, 7.77, 7.53
[01:24:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.13, 7.30, 7.02
[01:25:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.51, 8.29, 7.74
[01:26:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.89, 7.02, 6.94
[01:27:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.44, 7.88, 7.65
[01:28:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.78, 7.54, 7.12
[01:30:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.75, 6.73, 6.87
[01:30:37] [Grafana] !sre FIRING: The mediawiki job queue has more than 2500 unclaimed jobs https://grafana.miraheze.org/d/GtxbP1Xnk?orgId=1
[01:34:03] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.33, 6.18, 6.63
[01:41:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.90, 7.31, 7.26
[01:43:20] PROBLEM - matomo121 Current Load on matomo121 is WARNING: LOAD WARNING - total load average: 3.72, 3.30, 3.01
[01:45:16] RECOVERY - matomo121 Current Load on matomo121 is OK: LOAD OK - total load average: 3.12, 3.18, 3.00
[01:45:51] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.90, 7.01, 7.15
[02:10:09] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.35, 6.63, 6.27
[02:14:03] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.45, 6.53, 6.37
[02:17:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 18.76, 9.85, 8.10
[02:17:59] PROBLEM - puppetdb121 Puppet on puppetdb121 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[puppetdb]
[02:22:58] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.51, 8.17, 7.10
[02:23:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.08, 7.44, 7.68
[02:24:54] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.52, 7.63, 7.04
[02:27:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.68, 7.70, 7.71
[02:29:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.28, 7.32, 7.56
[02:35:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.33, 7.59, 7.59
[02:36:32] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.31, 6.58, 6.76
[02:37:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.03, 7.45, 7.55
[02:45:36] [Grafana] !sre RESOLVED: High Job Queue Backlog https://grafana.miraheze.org/d/GtxbP1Xnk?orgId=1
[02:49:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.20, 7.75, 7.44
[02:51:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.78, 7.74, 7.48
[03:03:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.60, 7.18, 7.06
[03:05:43] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.31, 6.82, 6.28
[03:05:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.91, 7.15, 7.05
[03:07:59] RECOVERY - puppetdb121 Puppet on puppetdb121 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[03:09:36] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.53, 6.22, 6.16
[03:13:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.40, 7.40, 7.15
[03:17:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.97, 7.58, 7.31
[03:31:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.11, 6.47, 6.74
[03:32:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.53, 6.52, 6.07
[03:34:03] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 4.96, 5.86, 5.88
[03:37:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.27, 7.38, 7.03
[03:39:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.60, 6.63, 6.79
[03:43:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.41, 7.12, 7.01
[03:47:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.97, 6.54, 6.80
[03:51:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.41, 7.53, 7.13
[03:53:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.47, 7.15, 7.04
[03:59:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.81, 6.15, 6.71
[04:05:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.95, 6.86, 6.82
[04:07:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.44, 6.42, 6.64
[04:13:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.26, 6.89, 6.79
[04:19:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.43, 8.16, 7.31
[04:21:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.04, 7.68, 7.24
[04:35:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.22, 7.24, 7.12
[04:37:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.67, 6.55, 6.88
[04:43:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.51, 6.65, 6.80
[04:58:24] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.39, 7.20, 6.93
[05:00:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.52, 6.20, 6.60
[05:04:10] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.19, 6.98, 6.77
[05:08:02] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.08, 7.31, 6.99
[05:13:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.38, 6.45, 6.75
[05:21:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.42, 7.23, 6.89
[05:23:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.04, 6.73, 6.74
[05:55:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.53, 6.87, 6.48
[05:57:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.31, 6.42, 6.37
[06:34:34] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query.
[06:44:09] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.75, 6.71, 6.00
[06:46:06] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.08, 6.21, 5.91
[06:53:20] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.70, 6.87, 6.55
[06:53:47] PROBLEM - cloud14 Puppet on cloud14 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[ulogd2]
[06:55:15] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.10, 6.33, 6.40
[07:04:34] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['ns.ankh.fr.eu.org.', 'ns1.eu.org.', 'ns1.eriomem.net.'], 'CNAME': 'bouncingwiki.miraheze.org.'}
[07:21:51] RECOVERY - cloud14 Puppet on cloud14 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[08:07:01] PROBLEM - matomo121 Current Load on matomo121 is CRITICAL: LOAD CRITICAL - total load average: 4.02, 2.84, 1.85
[08:09:01] RECOVERY - matomo121 Current Load on matomo121 is OK: LOAD OK - total load average: 3.20, 2.90, 2.00
[08:09:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.44, 7.80, 7.05
[08:13:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.59, 6.48, 6.68
[08:16:46] RECOVERY - ns1 NTP time on ns1 is OK: NTP OK: Offset 0.003406882286 secs
[08:28:32] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.11, 6.62, 6.43
[08:32:23] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.45, 6.65, 6.49
[08:49:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.92, 6.82, 6.40
[08:55:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.17, 6.21, 6.28
[09:13:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.03, 6.65, 6.29
[09:15:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.68, 6.42, 6.23
[09:23:35] PROBLEM - cloud11 Puppet on cloud11 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[ulogd2]
[09:30:46] PROBLEM - ns1 NTP time on ns1 is WARNING: NTP WARNING: Offset 0.1273450255 secs
[09:51:35] RECOVERY - cloud11 Puppet on cloud11 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:51:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.51, 6.62, 6.21
[09:53:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.83, 6.31, 6.15
[10:46:42] PROBLEM - www.lab612.at - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query www.lab612.at. IN CNAME: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL
[10:49:19] PROBLEM - swiftobject122 APT on swiftobject122 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected.
[10:51:23] PROBLEM - swiftobject122 APT on swiftobject122 is CRITICAL: APT CRITICAL: 86 packages available for upgrade (32 critical updates).
[10:52:53] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.14, 6.27, 5.68
[10:54:50] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.55, 5.77, 5.56
[11:16:04] RECOVERY - www.lab612.at - reverse DNS on sslhost is OK: SSL OK - www.lab612.at reverse DNS resolves to cp25.miraheze.org - CNAME OK
[11:24:46] PROBLEM - ns1 NTP time on ns1 is CRITICAL: NTP CRITICAL: Offset 0.5047568381 secs
[12:45:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.40, 6.97, 6.37
[12:47:49] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.58, 6.43, 6.25
[13:46:12] PROBLEM - franchise.franchising.org.ua - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query franchising.org.ua. IN NS: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL
[14:15:24] RECOVERY - franchise.franchising.org.ua - reverse DNS on sslhost is OK: SSL OK - franchise.franchising.org.ua reverse DNS resolves to cp24.miraheze.org - CNAME OK
[15:04:34] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query.
[15:27:55] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.93, 6.89, 6.43
[15:31:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.66, 7.86, 6.94
[15:33:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.45, 7.21, 6.83
[15:34:34] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['NS.ANKH.FR.eu.org.', 'NS1.eu.org.', 'NS1.ERIOMEM.NET.'], 'CNAME': 'bouncingwiki.miraheze.org.'}
[15:43:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.88, 6.19, 6.67
[15:56:43] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK
[16:00:43] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 8 = Critical]
[17:20:03] PROBLEM - uk.religiononfire.mar.in.ua - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'uk.religiononfire.mar.in.ua' expires in 15 day(s) (Tue 09 Jan 2024 17:16:33 GMT +0000).
[17:20:22] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/fa559b257551...44300ed23bd1
[17:20:23] [miraheze/ssl] MirahezeSSLBot 44300ed - Bot: Update SSL cert for uk.religiononfire.mar.in.ua
[17:27:29] PROBLEM - en.religiononfire.mar.in.ua - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'en.religiononfire.mar.in.ua' expires in 15 day(s) (Tue 09 Jan 2024 17:09:58 GMT +0000).
[17:27:45] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/44300ed23bd1...2b73f7294c1d
[17:27:48] [miraheze/ssl] MirahezeSSLBot 2b73f72 - Bot: Update SSL cert for en.religiononfire.mar.in.ua
[17:42:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.02, 6.97, 6.47
[17:46:10] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.48, 7.35, 6.78
[17:48:05] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.62, 6.71, 6.61
[18:04:34] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query.
[18:17:59] PROBLEM - puppetdb121 Puppet on puppetdb121 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[puppetdb]
[18:18:04] RECOVERY - uk.religiononfire.mar.in.ua - LetsEncrypt on sslhost is OK: OK - Certificate 'uk.religiononfire.mar.in.ua' will expire on Sat 23 Mar 2024 16:20:14 GMT +0000.
[18:27:27] RECOVERY - en.religiononfire.mar.in.ua - LetsEncrypt on sslhost is OK: OK - Certificate 'en.religiononfire.mar.in.ua' will expire on Sat 23 Mar 2024 16:27:38 GMT +0000.
[18:34:35] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['NS.ANKH.FR.eu.org.', 'NS1.eu.org.', 'NS1.ERIOMEM.NET.'], 'CNAME': 'bouncingwiki.miraheze.org.'}
[18:52:58] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.19, 7.75, 6.62
[18:54:53] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.93, 7.42, 6.63
[18:58:43] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.07, 6.69, 6.50
[19:07:59] RECOVERY - puppetdb121 Puppet on puppetdb121 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[19:26:36] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.07, 6.46, 6.17
[19:28:32] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.05, 6.28, 6.13
[19:31:17] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 4.66, 3.15, 1.62
[19:33:15] RECOVERY - os141 Current Load on os141 is OK: LOAD OK - total load average: 1.30, 2.44, 1.54
[19:35:04] PROBLEM - wiki.recaptime.eu.org - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Cannot make SSL connection.
[19:35:50] PROBLEM - cp25 HTTPS on cp25 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to cp25.miraheze.org port 443 after 2 ms: Couldn't connect to server
[19:35:51] PROBLEM - marinebiodiversitymatrix.org - LetsEncrypt on sslhost is CRITICAL: connect to address marinebiodiversitymatrix.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:35:52] PROBLEM - fanonpedia.com - LetsEncrypt on sslhost is CRITICAL: connect to address fanonpedia.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:02] PROBLEM - pornwiki.org - LetsEncrypt on sslhost is CRITICAL: connect to address pornwiki.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:02] PROBLEM - wiki.projectdiablo2.com - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.projectdiablo2.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:02] PROBLEM - replication-ops.com - LetsEncrypt on sslhost is CRITICAL: connect to address replication-ops.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:03] PROBLEM - donkeykong.miraheze.org - Sectigo on sslhost is CRITICAL: connect to address donkeykong.miraheze.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:03] PROBLEM - wiki.cdntennis.ca - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.cdntennis.ca and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:04] PROBLEM - cp35 HTTPS on cp35 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to cp35.miraheze.org port 443 after 75 ms: Couldn't connect to server
[19:36:04] PROBLEM - m.miraheze.org - LetsEncrypt on sslhost is CRITICAL: connect to address m.miraheze.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:04] PROBLEM - burnout.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address burnout.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:05] PROBLEM - cp35 Nginx Backend for mw141 on cp35 is CRITICAL: connect to address localhost and port 8108: Connection refused
[19:36:06] PROBLEM - cp35 Nginx Backend for mw142 on cp35 is CRITICAL: connect to address localhost and port 8109: Connection refused
[19:36:08] PROBLEM - cp35 Nginx Backend for test131 on cp35 is CRITICAL: connect to address localhost and port 8180: Connection refused
[19:36:09] PROBLEM - threedomwiki.pcast.site - LetsEncrypt on sslhost is CRITICAL: connect to address threedomwiki.pcast.site and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:10] PROBLEM - wiki.widwwa.co.uk - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.widwwa.co.uk and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:11] PROBLEM - kodiak.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address kodiak.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:11] PROBLEM - cp24 HTTPS on cp24 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to cp24.miraheze.org port 443 after 8 ms: Couldn't connect to server
[19:36:12] PROBLEM - n64brew.dev - LetsEncrypt on sslhost is CRITICAL: connect to address n64brew.dev and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:12] PROBLEM - thepolicyhub.org.uk - LetsEncrypt on sslhost is CRITICAL: connect to address thepolicyhub.org.uk and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:13] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 8 datacenters are down: 51.195.201.140/cpweb, 51.89.139.24/cpweb, 2001:41d0:801:2000::5d68/cpweb, 2001:41d0:801:2000::3a18/cpweb, 51.222.14.30/cpweb, 51.222.12.133/cpweb, 2607:5300:205:200::3121/cpweb, 2607:5300:205:200::1c93/cpweb
[19:36:17] PROBLEM - www.burnout.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address www.burnout.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:21] PROBLEM - psychevos.org - LetsEncrypt on sslhost is CRITICAL: connect to address psychevos.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:23] PROBLEM - cp35 Nginx Backend for mon141 on cp35 is CRITICAL: connect to address localhost and port 8201: Connection refused
[19:36:34] PROBLEM - cp35 Nginx Backend for phab121 on cp35 is CRITICAL: connect to address localhost and port 8202: Connection refused
[19:36:38] PROBLEM - cp35 Nginx Backend for mw143 on cp35 is CRITICAL: connect to address localhost and port 8112: Connection refused
[19:36:40] PROBLEM - private.yahyabd.xyz - LetsEncrypt on sslhost is CRITICAL: connect to address private.yahyabd.xyz and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:40] PROBLEM - vise.dayid.org - LetsEncrypt on sslhost is CRITICAL: connect to address vise.dayid.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:41] PROBLEM - null-cpu.emudev.org - LetsEncrypt on sslhost is CRITICAL: connect to address null-cpu.emudev.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:41] PROBLEM - cp34 HTTP 4xx/5xx ERROR Rate on cp34 is CRITICAL: CRITICAL - NGINX Error Rate is 73%
[19:36:41] PROBLEM - cp35 Nginx Backend for mwtask141 on cp35 is CRITICAL: connect to address localhost and port 8150: Connection refused
[19:36:42] PROBLEM - cp35 Nginx Backend for reports121 on cp35 is CRITICAL: connect to address localhost and port 8205: Connection refused
[19:36:43] PROBLEM - wiki.ooer.ooo - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.ooer.ooo and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:44] PROBLEM - www.cgradegames.net - LetsEncrypt on sslhost is CRITICAL: connect to address www.cgradegames.net and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:36:58] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[19:36:59] PROBLEM - cp35 Nginx Backend for puppet141 on cp35 is CRITICAL: connect to address localhost and port 8204: Connection refused
[19:37:02] PROBLEM - cp24 Nginx Backend for test131 on cp24 is CRITICAL: connect to address localhost and port 8180: Connection refused
[19:37:03] PROBLEM - vmklegacy.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address vmklegacy.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:07] PROBLEM - cp25 Nginx Backend for mw132 on cp25 is CRITICAL: connect to address localhost and port 8107: Connection refused
[19:37:07] PROBLEM - cp35 HTTP 4xx/5xx ERROR Rate on cp35 is CRITICAL: CRITICAL - NGINX Error Rate is 100%
[19:37:07] PROBLEM - cp24 Nginx Backend for reports121 on cp24 is CRITICAL: connect to address localhost and port 8205: Connection refused
[19:37:08] PROBLEM - cp24 Nginx Backend for phab121 on cp24 is CRITICAL: connect to address localhost and port 8202: Connection refused
[19:37:08] PROBLEM - cp25 Nginx Backend for mwtask141 on cp25 is CRITICAL: connect to address localhost and port 8150: Connection refused
[19:37:08] PROBLEM - de.berlinwiki.org - LetsEncrypt on sslhost is CRITICAL: connect to address de.berlinwiki.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:08] PROBLEM - cp35 Nginx Backend for mail121 on cp35 is CRITICAL: connect to address localhost and port 8200: Connection refused
[19:37:09] PROBLEM - cp25 Nginx Backend for reports121 on cp25 is CRITICAL: connect to address localhost and port 8205: Connection refused
[19:37:10] PROBLEM - cp34 Nginx Backend for puppet141 on cp34 is CRITICAL: connect to address localhost and port 8204: Connection refused
[19:37:11] PROBLEM - www.christipedia.nl - LetsEncrypt on sslhost is CRITICAL: connect to address www.christipedia.nl and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:11] PROBLEM - wiki.virtualjet.net - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.virtualjet.net and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:16] PROBLEM - www.glitchcity.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address www.glitchcity.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:16] PROBLEM - projectsekai.miraheze.org - Sectigo on sslhost is CRITICAL: connect to address projectsekai.miraheze.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:17] PROBLEM - steamdecklinux.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address steamdecklinux.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:17] PROBLEM - cp34 Nginx Backend for mon141 on cp34 is CRITICAL: connect to address localhost and port 8201: Connection refused
[19:37:18] PROBLEM - cp35 Nginx Backend for mw131 on cp35 is CRITICAL: connect to address localhost and port 8106: Connection refused
[19:37:18] PROBLEM - cp24 Nginx Backend for puppet141 on cp24 is CRITICAL: connect to address localhost and port 8204: Connection refused
[19:37:19] PROBLEM - cp25 Nginx Backend for mw134 on cp25 is CRITICAL: connect to address localhost and port 8111: Connection refused
[19:37:24] PROBLEM - cp24 Nginx Backend for mw132 on cp24 is CRITICAL: connect to address localhost and port 8107: Connection refused
[19:37:25] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 8 datacenters are down: 51.195.201.140/cpweb, 51.89.139.24/cpweb, 2001:41d0:801:2000::5d68/cpweb, 2001:41d0:801:2000::3a18/cpweb, 51.222.14.30/cpweb, 51.222.12.133/cpweb, 2607:5300:205:200::3121/cpweb, 2607:5300:205:200::1c93/cpweb
[19:37:25] PROBLEM - cp25 Nginx Backend for mail121 on cp25 is CRITICAL: connect to address localhost and port 8200: Connection refused
[19:37:25] PROBLEM - cp34 Nginx Backend for reports121 on cp34 is CRITICAL: connect to address localhost and port 8205: Connection refused
[19:37:26] PROBLEM - cp25 HTTP 4xx/5xx ERROR Rate on cp25 is CRITICAL: CRITICAL - NGINX Error Rate is 99%
[19:37:27] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[19:37:27] PROBLEM - cp35 Nginx Backend for mw133 on cp35 is CRITICAL: connect to address localhost and port 8110: Connection refused
[19:37:28] PROBLEM - cp24 Nginx Backend for mwtask141 on cp24 is CRITICAL: connect to address localhost and port 8150: Connection refused
[19:37:28] PROBLEM - cp34 Nginx Backend for mail121 on cp34 is CRITICAL: connect to address localhost and port 8200: Connection refused
[19:37:29] PROBLEM - miraheze.com - LetsEncrypt on sslhost is CRITICAL: connect to address miraheze.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:29] PROBLEM - cp35 Nginx Backend for mw132 on cp35 is CRITICAL: connect to address localhost and port 8107: Connection refused
[19:37:30] PROBLEM - cp34 HTTPS on cp34 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to cp34.miraheze.org port 443 after 76 ms: Couldn't connect to server
[19:37:31] PROBLEM - cp25 Nginx Backend for puppet141 on cp25 is CRITICAL: connect to address localhost and port 8204: Connection refused
[19:37:32] PROBLEM - freedomplanetwiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address freedomplanetwiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:32] PROBLEM - wiki.susqu.org - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.susqu.org and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:35] PROBLEM - cp34 Nginx Backend for mw143 on cp34 is CRITICAL: connect to address localhost and port 8112: Connection refused
[19:37:37] PROBLEM - www.mh142.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.mh142.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:37] PROBLEM - cp35 Nginx Backend for mw134 on cp35 is CRITICAL: connect to address localhost and port 8111: Connection refused
[19:37:40] PROBLEM - cp34 Nginx Backend for mw142 on cp34 is CRITICAL: connect to address localhost and port 8109: Connection refused
[19:37:41] PROBLEM - cp24 Nginx Backend for mw133 on cp24 is CRITICAL: connect to address localhost and port 8110: Connection refused
[19:37:41] PROBLEM - cp24 Nginx Backend for mw141 on cp24 is CRITICAL: connect to address localhost and port 8108: Connection refused
[19:37:41] PROBLEM - cp35 Nginx Backend for matomo121 on cp35 is CRITICAL: connect to address localhost and port 8203: Connection refused
[19:37:44] PROBLEM - cp34 Nginx Backend for mw133 on cp34 is CRITICAL: connect to address localhost and port 8110: Connection refused
[19:37:44] PROBLEM - cp25 Nginx Backend for matomo121 on cp25 is CRITICAL: connect to address localhost and port 8203: Connection refused
[19:37:45] PROBLEM - wiki.sheepservermc.net - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.sheepservermc.net and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[19:37:47] PROBLEM - cp25 Nginx Backend for phab121 on cp25 is CRITICAL: connect to address localhost and port 8202: Connection refused
[19:37:48] PROBLEM - cp34 Nginx Backend for test131 on cp34 is CRITICAL: connect to address localhost and port 8180: Connection refused
[19:37:50] PROBLEM - cp34 Nginx Backend for mwtask141 on cp34 is CRITICAL: connect to address localhost and port 8150: Connection refused
[19:37:52] PROBLEM - cp24 Nginx Backend for mw131 on cp24 is CRITICAL: connect to address localhost and port 8106: Connection refused
[19:37:55] PROBLEM - cp34 Nginx Backend for mw131 on cp34 is CRITICAL: connect to address localhost and port 8106: Connection refused
[19:37:57] PROBLEM - cp25 Nginx Backend for test131 on cp25 is CRITICAL: connect to address localhost and port 8180: Connection refused
[19:37:58] PROBLEM - cp34 Nginx Backend for phab121 on cp34 is CRITICAL: connect to address localhost and port 8202: Connection refused
[19:37:59] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[19:38:00] RECOVERY - m.miraheze.org - LetsEncrypt on sslhost is OK: OK - Certificate 'm.miraheze.org' will expire on Tue 23 Jan 2024 13:13:37 GMT +0000.
[19:38:05] PROBLEM - cp34 Nginx Backend for mw141 on cp34 is CRITICAL: connect to address localhost and port 8108: Connection refused
[19:38:18] PROBLEM - cp25 Nginx Backend for mw133 on cp25 is CRITICAL: connect to address localhost and port 8110: Connection refused
[19:38:20] PROBLEM - cp34 Nginx Backend for matomo121 on cp34 is CRITICAL: connect to address localhost and port 8203: Connection refused
[19:38:27] PROBLEM - cp25 Nginx Backend for mw131 on cp25 is CRITICAL: connect to address localhost and port 8106: Connection refused
[19:38:39] PROBLEM - cp34 Nginx Backend for mw132 on cp34 is CRITICAL: connect to address localhost and port 8107: Connection refused
[19:38:40] PROBLEM - cp25 Nginx Backend for mw142 on cp25 is CRITICAL: connect to address localhost and port 8109: Connection refused
[19:38:42] PROBLEM - cp25 Nginx Backend for mw141 on cp25 is CRITICAL: connect to address localhost and port 8108: Connection refused
[19:38:42] PROBLEM - cp34 Nginx Backend for mw134 on cp34 is CRITICAL: connect to address localhost and port 8111: Connection refused
[19:38:46] PROBLEM - cp25 Nginx Backend for mw143 on cp25 is CRITICAL: connect to address localhost and port 8112: Connection refused
[19:39:02] PROBLEM - cp25 Nginx Backend for mon141 on cp25 is CRITICAL: connect to address localhost and port 8201: Connection refused
[19:39:02] RECOVERY - cp24 Nginx Backend for test131 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8180
[19:39:07] RECOVERY - cp24 Nginx Backend for reports121 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8205
[19:39:08] RECOVERY - cp24 Nginx Backend for phab121 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8202
[19:39:09] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki
[19:39:18] RECOVERY - cp24 Nginx Backend for puppet141 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8204
[19:39:24] RECOVERY - cp24 Nginx Backend for mw132 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8107
[19:39:27] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy
[19:39:28] RECOVERY - cp24 Nginx Backend for mwtask141 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8150
[19:39:41] RECOVERY - cp24 Nginx Backend for mw133 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8110
[19:39:41] RECOVERY - cp24 Nginx Backend for mw141 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8108
[19:39:52] RECOVERY - cp24 Nginx Backend for mw131 on cp24 is OK: TCP OK - 0.000 second response time on localhost port 8106
[19:40:05] RECOVERY - cp35 Nginx Backend for mw141 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8108
[19:40:06] RECOVERY - cp35 Nginx Backend for mw142 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8109
[19:40:08] RECOVERY - cp24 HTTPS on cp24 is OK: HTTP OK: HTTP/2 301 - 3556 bytes in 0.027 second response time
[19:40:08] RECOVERY - cp35 Nginx Backend for test131 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8180
[19:40:23] RECOVERY - cp35 Nginx Backend for mon141 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8201
[19:40:34] RECOVERY - cp35 Nginx Backend for phab121 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8202
[19:40:38] RECOVERY - cp35 Nginx Backend for mw143 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8112
[19:40:41] RECOVERY - cp35 Nginx Backend for mwtask141 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8150
[19:40:42] RECOVERY - cp35 Nginx Backend for reports121 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8205
[19:40:59] RECOVERY - cp35 Nginx Backend for puppet141 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8204
[19:41:07] RECOVERY - cp35 HTTP 4xx/5xx ERROR Rate on cp35 is OK: OK - NGINX Error Rate is 5%
[19:41:08] RECOVERY - cp35 Nginx Backend for mail121 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8200
[19:41:15] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 16.39, 8.66, 5.63
[19:41:18] RECOVERY - cp35 Nginx Backend for mw131 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8106
[19:41:27] RECOVERY - cp35 Nginx Backend for mw133 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8110
[19:41:29] RECOVERY - cp35 Nginx Backend for mw132 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8107
[19:41:37] RECOVERY - cp35 Nginx Backend for mw134 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8111
[19:41:41] RECOVERY - cp35 Nginx Backend for matomo121 on cp35 is OK: TCP OK - 0.000 second response time on localhost port 8203
[19:41:58] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy
[19:42:01] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 38.34, 25.75, 14.07
[19:42:01] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 11.18, 7.06, 5.24
[19:42:01] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 14.87, 11.93, 7.51
[19:42:04] RECOVERY - cp35 HTTPS on cp35 is OK: HTTP OK: HTTP/2 301 - 3556 bytes in 0.526 second response time
[19:42:32] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 22.47, 18.55, 10.87
[19:42:41] PROBLEM - swiftobject112 Current Load on swiftobject112 is WARNING: WARNING - load average: 7.61, 6.37, 4.55
[19:43:21] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: LOAD CRITICAL - total load average: 4.39, 3.00, 2.07
[19:44:01] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 7.69, 7.13, 5.49
[19:44:41] RECOVERY - swiftobject112 Current Load on swiftobject112 is OK: OK - load average: 6.64, 6.44, 4.81
[19:45:21] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.62, 2.80, 2.11
[19:49:13] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.21, 7.79, 6.48
[19:52:01] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 5.05, 6.44, 5.93
[19:53:14] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 6.12, 6.78, 6.35
[19:55:11] RECOVERY - cp34 Nginx Backend for puppet141 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8204
[19:55:17] RECOVERY - cp34 Nginx Backend for mon141 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8201
[19:55:25] RECOVERY - cp34 Nginx Backend for reports121 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8205
[19:55:28] RECOVERY - cp34 Nginx Backend for mail121 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8200
[19:55:30] RECOVERY - cp34 HTTPS on cp34 is OK: HTTP OK: HTTP/2 301 - 3556 bytes in 0.507 second response time
[19:55:35] RECOVERY - cp34 Nginx Backend for mw143 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8112
[19:55:40] RECOVERY - cp34 Nginx Backend for mw142 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8109
[19:55:44] RECOVERY - cp34 Nginx Backend for mw133 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8110
[19:55:48] RECOVERY - cp34 Nginx Backend for test131 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8180
[19:55:50] RECOVERY - cp34 Nginx Backend for mwtask141 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8150
[19:55:55] RECOVERY - cp34 Nginx Backend for mw131 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8106
[19:55:58] RECOVERY - cp34 Nginx Backend for phab121 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8202
[19:56:05] RECOVERY - cp34 Nginx Backend for mw141 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8108
[19:56:20] RECOVERY - cp34 Nginx Backend for matomo121 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8203
[19:56:39] RECOVERY - cp34 Nginx Backend for mw132 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8107
[19:56:41] RECOVERY - cp34 HTTP 4xx/5xx ERROR Rate on cp34 is OK: OK - NGINX Error Rate is 3%
[19:56:42] RECOVERY - cp34 Nginx Backend for mw134 on cp34 is OK: TCP OK - 0.000 second response time on localhost port 8111
[19:56:55] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy
[19:57:31] RECOVERY - cp25 Nginx Backend for puppet141 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8204
[19:57:44] RECOVERY - cp25 Nginx Backend for matomo121 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8203
[19:57:47] RECOVERY - cp25 Nginx Backend for phab121 on cp25 is OK: TCP OK - 0.003 second response time on localhost port 8202
[19:57:57] RECOVERY - cp25 Nginx Backend for test131 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8180
[19:58:13] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online
[19:58:18] RECOVERY - cp25 Nginx Backend for mw133 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8110
[19:58:26] RECOVERY - cp25 Nginx Backend for mw131 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8106
[19:58:40] RECOVERY - cp25 Nginx Backend for mw142 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8109
[19:58:41] RECOVERY - cp25 Nginx Backend for mw141 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8108
[19:58:46] RECOVERY - cp25 Nginx Backend for mw143 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8112
[19:59:01] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy
[19:59:02] RECOVERY - cp25 Nginx Backend for mon141 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8201
[19:59:07] RECOVERY - cp25 Nginx Backend for mw132 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8107
[19:59:08] RECOVERY - cp25 Nginx Backend for mwtask141 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8150
[19:59:09] RECOVERY - cp25 Nginx Backend for reports121 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8205
[19:59:19] RECOVERY - cp25 Nginx Backend for mw134 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8111
[19:59:25] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online
[19:59:25] RECOVERY - cp25 Nginx Backend for mail121 on cp25 is OK: TCP OK - 0.000 second response time on localhost port 8200
[19:59:26] RECOVERY - cp25 HTTP 4xx/5xx ERROR Rate on cp25 is OK: OK - NGINX Error Rate is 1%
[19:59:32] RECOVERY - cp25 HTTPS on cp25 is OK: HTTP OK: HTTP/2 301 - 3556 bytes in 0.031 second response time
[20:00:06] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.09, 7.09, 6.62
[20:02:52] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 7.72, 7.13, 6.40
[20:03:57] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 8.14, 7.40, 6.83
[20:04:31] RECOVERY - wiki.recaptime.eu.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.recaptime.eu.org' will expire on Sun 03 Mar 2024 01:40:39 GMT +0000.
[20:04:47] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 11.40, 7.83, 6.69
[20:04:59] RECOVERY - fanonpedia.com - LetsEncrypt on sslhost is OK: OK - Certificate 'fanonpedia.com' will expire on Thu 11 Jan 2024 16:46:59 GMT +0000.
[20:04:59] RECOVERY - marinebiodiversitymatrix.org - LetsEncrypt on sslhost is OK: OK - Certificate 'marinebiodiversitymatrix.org' will expire on Fri 01 Mar 2024 01:04:23 GMT +0000.
[20:05:12] RECOVERY - private.yahyabd.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'private.yahyabd.xyz' will expire on Sun 03 Mar 2024 21:50:49 GMT +0000.
[20:05:14] RECOVERY - vise.dayid.org - LetsEncrypt on sslhost is OK: OK - Certificate 'vise.dayid.org' will expire on Fri 01 Mar 2024 13:51:54 GMT +0000.
[20:05:20] RECOVERY - replication-ops.com - LetsEncrypt on sslhost is OK: OK - Certificate 'replication-ops.com' will expire on Sun 03 Mar 2024 16:29:40 GMT +0000.
[20:05:21] RECOVERY - www.cgradegames.net - LetsEncrypt on sslhost is OK: OK - Certificate 'www.cgradegames.net' will expire on Sun 03 Mar 2024 13:51:39 GMT +0000.
[20:05:26] RECOVERY - pornwiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'en.pornwiki.org' will expire on Sun 03 Mar 2024 18:36:43 GMT +0000.
[20:05:26] RECOVERY - wiki.projectdiablo2.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.projectdiablo2.com' will expire on Sat 02 Mar 2024 17:45:15 GMT +0000.
[20:05:28] RECOVERY - wiki.cdntennis.ca - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.cdntennis.ca' will expire on Fri 01 Mar 2024 21:30:43 GMT +0000.
[20:05:30] RECOVERY - thepolicyhub.org.uk - LetsEncrypt on sslhost is OK: OK - Certificate 'thepolicyhub.org.uk' will expire on Sat 02 Mar 2024 12:26:37 GMT +0000.
[20:05:31] RECOVERY - burnout.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'burnout.wiki' will expire on Fri 08 Mar 2024 12:20:56 GMT +0000.
[20:05:42] RECOVERY - threedomwiki.pcast.site - LetsEncrypt on sslhost is OK: OK - Certificate 'threedomwiki.pcast.site' will expire on Sat 02 Mar 2024 12:34:01 GMT +0000.
[20:05:44] RECOVERY - wiki.widwwa.co.uk - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.widwwa.co.uk' will expire on Fri 16 Feb 2024 19:29:29 GMT +0000.
[20:05:48] RECOVERY - n64brew.dev - LetsEncrypt on sslhost is OK: OK - Certificate 'n64brew.dev' will expire on Fri 01 Mar 2024 21:32:38 GMT +0000.
[20:05:52] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.93, 7.57, 6.96
[20:05:59] RECOVERY - www.burnout.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'burnout.wiki' will expire on Fri 08 Mar 2024 12:20:56 GMT +0000.
[20:06:02] RECOVERY - vmklegacy.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'vmklegacy.wiki' will expire on Sat 09 Mar 2024 12:30:09 GMT +0000.
[20:06:02] RECOVERY - donkeykong.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Mon 18 Nov 2024 23:59:59 GMT +0000.
[20:06:09] RECOVERY - psychevos.org - LetsEncrypt on sslhost is OK: OK - Certificate 'psychevos.org' will expire on Sun 03 Mar 2024 22:23:40 GMT +0000.
[20:06:22] RECOVERY - www.christipedia.nl - LetsEncrypt on sslhost is OK: OK - Certificate 'www.christipedia.nl' will expire on Sat 02 Mar 2024 01:52:01 GMT +0000.
[20:06:31] RECOVERY - projectsekai.miraheze.org - Sectigo on sslhost is OK: OK - Certificate '*.miraheze.org' will expire on Mon 18 Nov 2024 23:59:59 GMT +0000.
[20:06:31] RECOVERY - www.glitchcity.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'glitchcity.wiki' will expire on Wed 14 Feb 2024 10:42:59 GMT +0000.
[20:06:32] RECOVERY - wiki.ooer.ooo - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.ooer.ooo' will expire on Wed 10 Jan 2024 16:10:18 GMT +0000.
[20:06:33] RECOVERY - steamdecklinux.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'steamdecklinux.wiki' will expire on Sat 02 Mar 2024 13:41:30 GMT +0000.
[20:06:37] RECOVERY - wiki.virtualjet.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.virtualjet.net' will expire on Sun 03 Mar 2024 03:58:20 GMT +0000.
[20:06:38] RECOVERY - null-cpu.emudev.org - LetsEncrypt on sslhost is OK: OK - Certificate 'null-cpu.emudev.org' will expire on Wed 13 Mar 2024 07:14:41 GMT +0000.
[20:06:42] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 6.32, 7.23, 6.61
[20:06:58] RECOVERY - de.berlinwiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'de.berlinwiki.org' will expire on Fri 16 Feb 2024 19:18:59 GMT +0000.
[20:06:59] RECOVERY - miraheze.com - LetsEncrypt on sslhost is OK: OK - Certificate 'miraheze.com' will expire on Thu 18 Jan 2024 16:17:22 GMT +0000.
[20:07:03] RECOVERY - freedomplanetwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'freedomplanetwiki.com' will expire on Sun 03 Mar 2024 20:41:53 GMT +0000.
[20:07:04] RECOVERY - wiki.susqu.org - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.susqu.org' will expire on Tue 23 Jan 2024 13:09:58 GMT +0000.
[20:07:14] RECOVERY - www.mh142.com - LetsEncrypt on sslhost is OK: OK - Certificate 'mh142.com' will expire on Fri 01 Mar 2024 11:05:13 GMT +0000.
[20:07:30] RECOVERY - wiki.sheepservermc.net - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.sheepservermc.net' will expire on Mon 26 Feb 2024 23:14:46 GMT +0000.
[20:07:47] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 5.12, 6.79, 6.75
[20:10:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 6.57, 7.54, 7.93
[20:10:03] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 8.69, 5.14, 3.48
[20:12:01] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.17, 7.86, 8.00
[20:12:28] PROBLEM - swiftobject111 Current Load on swiftobject111 is CRITICAL: CRITICAL - load average: 9.30, 7.76, 7.01
[20:13:11] PROBLEM - cloud12 Puppet on cloud12 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle.
[20:14:23] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 6.34, 7.36, 6.96
[20:15:31] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 7.01, 7.46, 7.01
[20:16:18] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 5.63, 6.74, 6.78
[20:17:27] PROBLEM - swiftobject113 Current Load on swiftobject113 is CRITICAL: CRITICAL - load average: 8.28, 7.59, 7.10
[20:19:22] PROBLEM - swiftobject113 Current Load on swiftobject113 is WARNING: WARNING - load average: 5.69, 6.78, 6.85
[20:19:50] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 2.65, 3.82, 3.66
[20:23:48] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 4.81, 4.38, 3.91
[20:25:45] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 2.74, 3.69, 3.71
[20:26:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 5.85, 6.98, 7.81
[20:27:43] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 4.79, 4.03, 3.83
[20:28:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 9.05, 7.53, 7.89
[20:28:26] PROBLEM - os141 PowerDNS Recursor on os141 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[20:28:48] PROBLEM - swiftobject111 Current Load on swiftobject111 is WARNING: WARNING - load average: 7.45, 7.12, 6.74
[20:29:40] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 1.62, 3.16, 3.54
[20:30:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 5.82, 6.84, 7.59
[20:30:24] RECOVERY - os141 PowerDNS Recursor on os141 is OK: DNS OK: 0.382 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24
[20:30:43] RECOVERY - swiftobject111 Current Load on swiftobject111 is OK: OK - load average: 5.19, 6.42, 6.53
[20:31:14] RECOVERY - swiftobject113 Current Load on swiftobject113 is OK: OK - load average: 6.17, 6.37, 6.76
[20:31:36] RECOVERY - os141 Current Load on os141 is OK: LOAD OK - total load average: 0.32, 2.19, 3.14
[20:34:01] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.81, 7.63, 7.72
[20:35:15] RECOVERY - kodiak.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'kodiak.wiki' will expire on Sat 20 Jan 2024 06:43:17 GMT +0000.
[20:36:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 6.20, 7.15, 7.54
[20:41:11] RECOVERY - cloud12 Puppet on cloud12 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[20:52:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 8.78, 7.50, 7.35
[20:54:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 6.14, 6.95, 7.17
[20:56:02] PROBLEM - swiftobject121 Current Load on swiftobject121 is CRITICAL: CRITICAL - load average: 13.83, 8.55, 7.67
[20:58:01] PROBLEM - swiftobject121 Current Load on swiftobject121 is WARNING: WARNING - load average: 6.57, 7.65, 7.45
[21:20:02] RECOVERY - swiftobject121 Current Load on swiftobject121 is OK: OK - load average: 5.00, 6.13, 6.67
[21:28:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.98, 6.92, 7.87
[21:42:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.05, 7.34, 7.58
[21:44:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.43, 7.74, 7.72
[21:52:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.05, 7.28, 7.38
[22:14:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.47, 7.17, 7.79
[22:18:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.77, 8.40, 8.16
[22:38:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.40, 7.63, 7.98
[22:52:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.95, 7.44, 7.57
[22:54:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.75, 7.08, 7.42
[23:02:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.95, 7.48, 7.40
[23:04:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.86, 7.40, 7.39
[23:06:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.90, 7.65, 7.46
[23:08:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.26, 7.08, 7.28
[23:12:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.79, 7.20, 7.24
[23:18:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.43, 7.61, 7.60
[23:19:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.53, 7.36, 7.99
[23:30:03] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 4.27, 5.69, 6.63
[23:34:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.87, 6.87, 6.94
[23:35:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.17, 7.68, 7.56
[23:36:03] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.49, 6.46, 6.79
[23:37:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.64, 7.31, 7.45
[23:39:49] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.11, 8.84, 8.05
[23:40:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 7.48, 8.16, 7.46
[23:42:03] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.73, 7.62, 7.34
[23:43:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.87, 7.56, 7.69
[23:46:03] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 4.52, 6.11, 6.80
[23:51:51] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.22, 7.16, 7.32
[23:53:50] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.77, 6.73, 7.15