[00:00:47] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.20, 6.55, 5.81 [00:02:41] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.17, 5.97, 5.69 [00:03:58] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 14.27, 8.00, 3.73 [00:06:23] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.23, 10.97, 10.77 [00:12:22] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query. [00:12:23] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.39, 11.69, 11.19 [00:13:57] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 2.83, 7.23, 6.33 [00:15:57] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 2.40, 5.72, 5.88 [00:16:59] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.55, 7.21, 6.70 [00:19:01] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.10, 6.94, 6.65 [00:19:44] RECOVERY - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.twistwoodtaleswiki.com' will expire on Wed 20 Dec 2023 22:08:34 GMT +0000. [00:20:23] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.04, 11.79, 11.43 [00:22:23] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.85, 11.78, 11.47 [00:24:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.93, 6.65, 6.66 [00:37:55] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.21, 6.41, 6.06 [00:39:50] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.26, 6.27, 6.04 [00:42:22] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['NS.ANKH.FR.eu.org.', 'NS1.eu.org.', 'NS1.ERIOMEM.NET.'], 'CNAME': 'bouncingwiki.miraheze.org.'} [00:45:37] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [00:47:15] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.35, 6.92, 6.60 [00:49:12] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.99, 6.46, 6.46 [00:51:14] PROBLEM - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.twistwoodtaleswiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [00:52:23] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.25, 11.24, 10.97 [01:00:23] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.31, 11.75, 11.44 [01:04:41] PROBLEM - jobchron121 JobChron Service on jobchron121 is CRITICAL: PROCS CRITICAL: 2 processes with args 'redisJobChronService' [01:08:23] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 13.64, 11.91, 11.48 [01:12:22] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query. 
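The load alerts above all share one shape: an event type (PROBLEM/RECOVERY), a host, a state, and the 1-, 5- and 15-minute load averages. As a minimal sketch of pulling those fields out of a line from this log — the regex and field names are illustrative assumptions based on the visible alert text, not anything Icinga itself provides:

<?php
// Illustrative parser for the "Current Load" lines in this log; the pattern is an
// assumption reconstructed from the alert text, not an official Icinga format.
function parseLoadAlert( string $line ): ?array {
	$pattern = '/\[(\d{2}:\d{2}:\d{2})\] (PROBLEM|RECOVERY) - (\S+) Current Load .* ' .
		'(OK|WARNING|CRITICAL) - (?:total )?load average: ([\d.]+), ([\d.]+), ([\d.]+)/';
	if ( !preg_match( $pattern, $line, $m ) ) {
		return null;
	}
	return [
		'time'   => $m[1],
		'event'  => $m[2],        // PROBLEM or RECOVERY
		'host'   => $m[3],
		'state'  => $m[4],        // OK, WARNING or CRITICAL
		'load1'  => (float)$m[5], // 1-minute load average
		'load5'  => (float)$m[6], // 5-minute load average
		'load15' => (float)$m[7], // 15-minute load average
	];
}

print_r( parseLoadAlert(
	'[00:03:58] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 14.27, 8.00, 3.73'
) );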
[01:14:23] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 6.28, 10.70, 11.30 [01:15:38] RECOVERY - sdiy.info - reverse DNS on sslhost is OK: SSL OK - sdiy.info reverse DNS resolves to cp25.miraheze.org - NS RECORDS OK [01:16:03] [Grafana] !sre FIRING: There has been a rise in the MediaWiki exception rate https://grafana.miraheze.org/d/GtxbP1Xnk?orgId=1 [01:16:08] PROBLEM - db131 Current Load on db131 is CRITICAL: CRITICAL - load average: 57.27, 25.64, 10.85 [01:16:23] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 1.92, 7.56, 10.07 [01:16:41] PROBLEM - mw134 MediaWiki Rendering on mw134 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [01:16:41] RECOVERY - jobchron121 JobChron Service on jobchron121 is OK: PROCS OK: 1 process with args 'redisJobChronService' [01:17:38] PROBLEM - mw131 HTTPS on mw131 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Connection timed out after 10003 milliseconds [01:17:46] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki [01:17:48] PROBLEM - mw133 MediaWiki Rendering on mw133 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [01:18:19] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki [01:18:21] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [01:18:24] PROBLEM - ns1 NTP time on ns1 is CRITICAL: NTP CRITICAL: Offset 0.5020071566 secs [01:18:36] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki [01:18:44] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki [01:18:59] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[01:19:23] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.513 second response time [01:19:33] RECOVERY - mw131 HTTPS on mw131 is OK: HTTP OK: HTTP/2 301 - 345 bytes in 0.056 second response time [01:19:43] RECOVERY - mw133 MediaWiki Rendering on mw133 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.537 second response time [01:19:45] PROBLEM - test131 Current Load on test131 is CRITICAL: LOAD CRITICAL - total load average: 4.17, 2.91, 1.33 [01:19:51] PROBLEM - prometheus131 Current Load on prometheus131 is CRITICAL: LOAD CRITICAL - total load average: 5.33, 4.55, 2.07 [01:20:16] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [01:20:17] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.350 second response time [01:20:30] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy [01:20:38] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [01:20:39] RECOVERY - mw134 MediaWiki Rendering on mw134 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.331 second response time [01:21:36] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [01:21:41] RECOVERY - test131 Current Load on test131 is OK: LOAD OK - total load average: 0.82, 2.10, 1.22 [01:21:50] RECOVERY - prometheus131 Current Load on prometheus131 is OK: LOAD OK - total load average: 0.96, 3.18, 1.87 [01:23:19] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 0.52, 3.81, 2.80 [01:23:35] [Grafana] !sre RESOLVED: MediaWiki Exception Rate https://grafana.miraheze.org/d/GtxbP1Xnk?orgId=1 [01:25:14] RECOVERY - os131 Current Load on os131 is OK: LOAD OK - total load average: 0.35, 2.69, 2.50 [01:26:55] RECOVERY - mw132 Disk Space on mw132 is OK: DISK OK - free space: / 8871 MB (37% inode=79%); [01:28:07] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.68, 9.14, 9.09 [01:28:48] RECOVERY - www.sdiy.info - reverse DNS on sslhost is OK: SSL OK - www.sdiy.info reverse DNS resolves to cp24.miraheze.org - NS RECORDS OK [01:30:03] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 10.02, 9.34, 9.16 [01:33:58] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.55, 10.38, 9.62 [01:37:51] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.42, 9.92, 9.64 [01:41:32] PROBLEM - mw131 Disk Space on mw131 is WARNING: DISK WARNING - free space: / 1758 MB (7% inode=79%); [01:42:22] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['ns.ankh.fr.eu.org.', 'ns1.eu.org.', 'ns1.eriomem.net.'], 'CNAME': 'bouncingwiki.miraheze.org.'} [01:48:51] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.31, 7.08, 6.48 [01:50:45] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.49, 6.55, 6.36 [01:52:41] PROBLEM - graylog131 PowerDNS Recursor on graylog131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [01:54:40] RECOVERY - graylog131 PowerDNS Recursor on graylog131 is OK: DNS OK: 0.203 seconds response time. 
miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [01:58:50] PROBLEM - db131 Current Load on db131 is WARNING: WARNING - load average: 3.34, 5.88, 7.84 [01:59:24] PROBLEM - www.sdiy.info - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.sdiy.info could not be found [02:00:47] PROBLEM - db131 Current Load on db131 is CRITICAL: CRITICAL - load average: 8.07, 6.76, 7.93 [02:03:35] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 17.25, 10.59, 6.97 [02:05:31] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 7.94, 9.51, 7.02 [02:08:45] PROBLEM - db131 Current Load on db131 is WARNING: WARNING - load average: 4.47, 6.11, 7.62 [02:08:56] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.66, 9.91, 8.93 [02:12:45] RECOVERY - db131 Current Load on db131 is OK: OK - load average: 3.47, 4.78, 6.74 [02:15:29] [02miraheze/mw-config] 07paladox pushed 031 commit to 03revert-5296-paladox-patch-1 [+0/-0/±1] 13https://github.com/miraheze/mw-config/commit/5c73dbd5a3d8 [02:15:30] [02miraheze/mw-config] 07paladox 035c73dbd - Revert "Unset wgUseLocalMessageCache (#5296)" [02:15:32] [02mw-config] 07paladox created branch 03revert-5296-paladox-patch-1 - 13https://github.com/miraheze/mw-config [02:15:35] [02mw-config] 07paladox opened pull request 03#5373: Revert "Unset wgUseLocalMessageCache (#5296)" - 13https://github.com/miraheze/mw-config/pull/5373 [02:16:24] [02mw-config] 07paladox closed pull request 03#5373: Revert "Unset wgUseLocalMessageCache (#5296)" - 13https://github.com/miraheze/mw-config/pull/5373 [02:16:27] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/63481546523a...938eb0faf945 [02:16:30] [02miraheze/mw-config] 07paladox 03938eb0f - Revert "Unset wgUseLocalMessageCache (#5296)" (#5373) [02:16:31] [02miraheze/mw-config] 07paladox deleted branch 03revert-5296-paladox-patch-1 [02:16:32] miraheze/mw-config - paladox the build passed. [02:16:33] [02mw-config] 07paladox deleted branch 03revert-5296-paladox-patch-1 - 13https://github.com/miraheze/mw-config [02:16:50] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [02:16:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:16:59] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 8s [02:17:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:17:16] PROBLEM - sdiy.info - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - sdiy.info reverse DNS resolves to vps-c10c02c8.vps.ovh.net [02:17:20] miraheze/mw-config - paladox the build passed. 
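For context on the revert merged above: $wgUseLocalMessageCache is MediaWiki's switch for keeping a copy of the message cache in a local file on each app server, so message lookups don't have to hit the shared cache or the database. The commit diff isn't shown in the log, so this is only a sketch of what restoring it in mw-config would plausibly look like:

<?php
// Sketch only: the revert of "Unset wgUseLocalMessageCache (#5296)" presumably restores
// something like this; the actual change is in commit 938eb0f linked above.
$wgUseLocalMessageCache = true; // keep a local filesystem copy of the message cache per app server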
[02:18:18] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/938eb0faf945...86cd5141d53d [02:18:21] [02miraheze/mw-config] 07paladox 0386cd514 - Set wgResourceLoaderUseObjectCacheForDeps to true [02:18:22] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [02:18:27] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:18:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.91, 10.13, 9.58 [02:18:48] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 25s [02:18:57] !log [paladox@mwtask141] starting deploy of {'config': True} to all [02:19:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:19:06] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:19:12] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 14s [02:19:13] miraheze/mw-config - paladox the build passed. [02:20:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:20:52] !log [paladox@test131] starting deploy of {'pull': 'config', 'config': True} to all [02:20:55] !log [paladox@test131] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 2s [02:20:58] !log [paladox@test131] starting deploy of {'config': True} to all [02:20:59] !log [paladox@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [02:21:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:21:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:21:24] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:21:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:22:45] PROBLEM - db131 Current Load on db131 is WARNING: WARNING - load average: 6.74, 7.44, 6.94 [02:24:11] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK [02:24:45] RECOVERY - db131 Current Load on db131 is OK: OK - load average: 5.00, 6.43, 6.62 [02:25:02] PROBLEM - mw131 Disk Space on mw131 is CRITICAL: DISK CRITICAL - free space: / 1060 MB (4% inode=79%); [02:26:06] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/86cd5141d53d...540d23a64db2 [02:26:08] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [02:26:09] [02miraheze/mw-config] 07paladox 03540d23a - Set wgMaxExecutionTimeForExpensiveQueries to 30000 [02:26:13] !log [paladox@test131] starting deploy of {'pull': 'config', 'config': True} to all [02:26:14] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:26:14] !log [paladox@test131] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 1s [02:26:18] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:26:19] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 11s [02:26:21] !log [paladox@test131] starting deploy of {'config': True} to all [02:26:22] !log [paladox@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [02:26:23] !log [paladox@mwtask141] starting deploy of {'config': True} to all [02:26:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:26:31] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 8s 
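Both settings deployed just above are standard MediaWiki globals; the values come straight from the commit messages, and everything around them is assumed. A minimal mw-config-style sketch:

<?php
// Values from commits 86cd514 and 540d23a above; surrounding context is assumed.
// Keep ResourceLoader file-dependency data in the object cache rather than the
// module_deps database table, taking write load off the struggling db servers.
$wgResourceLoaderUseObjectCacheForDeps = true;

// Cap expensive queries (e.g. special-page queries) at 30 000 ms on the database side.
$wgMaxExecutionTimeForExpensiveQueries = 30000; // milliseconds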
[02:26:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:26:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:26:46] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:26:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:26:55] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:27:02] PROBLEM - mw131 Disk Space on mw131 is WARNING: DISK WARNING - free space: / 1745 MB (7% inode=79%); [02:27:02] miraheze/mw-config - paladox the build passed. [02:28:11] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 8 = Critical] [02:31:19] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 6 backends are down. mw131 mw132 mw141 mw142 mw133 mw143 [02:31:24] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 6 backends are down. mw131 mw132 mw141 mw142 mw133 mw143 [02:31:27] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-4 [+0/-0/±1] 13https://github.com/miraheze/mw-config/commit/bd04d5b41f2e [02:31:30] [02miraheze/mw-config] 07paladox 03bd04d5b - Set wgCdnMaxAge to 5 days [02:31:32] PROBLEM - mw141 nutcracker process on mw141 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [02:31:33] [02mw-config] 07paladox created branch 03paladox-patch-4 - 13https://github.com/miraheze/mw-config [02:31:35] PROBLEM - mw141 Disk Space on mw141 is CRITICAL: DISK CRITICAL - free space: / 657 MB (2% inode=80%); [02:31:36] [02mw-config] 07paladox opened pull request 03#5374: Set wgCdnMaxAge to 5 days - 13https://github.com/miraheze/mw-config/pull/5374 [02:31:37] [02mw-config] 07paladox closed pull request 03#5374: Set wgCdnMaxAge to 5 days - 13https://github.com/miraheze/mw-config/pull/5374 [02:31:38] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [02:31:38] PROBLEM - mw141 php-fpm on mw141 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [02:31:38] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/540d23a64db2...3a7f3f9af15e [02:31:40] !log [paladox@test131] starting deploy of {'pull': 'config', 'config': True} to all [02:31:40] [02miraheze/mw-config] 07paladox 033a7f3f9 - Set wgCdnMaxAge to 5 days (#5374) [02:31:41] !log [paladox@test131] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 1s [02:31:41] [02miraheze/mw-config] 07paladox deleted branch 03paladox-patch-4 [02:31:43] [02mw-config] 07paladox deleted branch 03paladox-patch-4 - 13https://github.com/miraheze/mw-config [02:32:02] PROBLEM - mw141 JobRunner Service on mw141 is CRITICAL: CRITICAL - Plugin timed out after 10 seconds [02:32:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:32:26] miraheze/mw-config - paladox the build passed. [02:32:28] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:32:28] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.02, 3.49, 3.91 [02:32:40] miraheze/mw-config - paladox the build passed. [02:32:42] PROBLEM - mw141 SSH on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:32:48] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 4 backends are down. mw141 mw142 mw133 mw143 [02:33:03] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 4 backends are down. 
mw141 mw142 mw133 mw143 [02:33:12] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:33:50] PROBLEM - bast141 Current Load on bast141 is CRITICAL: LOAD CRITICAL - total load average: 3.03, 1.88, 0.82 [02:33:57] PROBLEM - mon141 Current Load on mon141 is CRITICAL: LOAD CRITICAL - total load average: 8.86, 3.87, 1.94 [02:34:23] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.290 second response time [02:34:35] PROBLEM - mw141 Current Load on mw141 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [02:34:36] PROBLEM - mw141 Puppet on mw141 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [02:34:40] PROBLEM - mw141 conntrack_table_size on mw141 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [02:34:59] PROBLEM - ldap141 Current Load on ldap141 is CRITICAL: LOAD CRITICAL - total load average: 6.13, 2.87, 1.17 [02:35:20] PROBLEM - mw141 ferm_active on mw141 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [02:35:27] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 9.84, 5.75, 2.51 [02:36:21] PROBLEM - mon141 icinga.miraheze.org HTTPS on mon141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:36:26] PROBLEM - mw143 Current Load on mw143 is CRITICAL: CRITICAL - load average: 20.67, 10.89, 5.85 [02:36:33] PROBLEM - mon141 grafana.miraheze.org HTTPS on mon141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:36:33] PROBLEM - mon141 HTTPS on mon141 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Connection timed out after 10003 milliseconds [02:36:41] PROBLEM - db142 MariaDB on db142 is CRITICAL: Too many connections [02:37:03] PROBLEM - cloud14 Current Load on cloud14 is CRITICAL: CRITICAL - load average: 157.77, 101.27, 53.53 [02:37:25] PROBLEM - mw142 Current Load on mw142 is CRITICAL: CRITICAL - load average: 18.09, 11.14, 6.79 [02:37:54] RECOVERY - mw141 nutcracker process on mw141 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker' [02:38:05] PROBLEM - db142 Current Load on db142 is CRITICAL: CRITICAL - load average: 9.68, 6.59, 3.29 [02:38:17] RECOVERY - mw141 JobRunner Service on mw141 is OK: PROCS OK: 1 process with args 'redisJobRunnerService' [02:38:39] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 13.44, 10.30, 9.17 [02:39:02] PROBLEM - db142 MariaDB Connections on db142 is UNKNOWN: [02:39:40] PROBLEM - os141 SSH on os141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:39:48] RECOVERY - mw141 php-fpm on mw141 is OK: PROCS OK: 22 processes with command name 'php-fpm7.4' [02:39:48] PROBLEM - mw143 MediaWiki Rendering on mw143 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:41:24] PROBLEM - os141 PowerDNS Recursor on os141 is CRITICAL: CRITICAL - Plugin timed out while executing system call [02:41:31] PROBLEM - os141 conntrack_table_size on os141 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
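db142 reporting "Too many connections" while its "MariaDB Connections" check goes UNKNOWN points at connection-slot exhaustion; the recovery line shortly after prints "OK connection usage: 0.2% Current connections: 1", so the check evidently compares current connections against max_connections. A rough PHP approximation of that idea — host, credentials and thresholds here are placeholders, and this is not the actual check_mysql_connections.php plugin:

<?php
// Rough approximation of a connection-usage check like the one in this log.
// Host, user, password source and thresholds are placeholders/assumptions.
mysqli_report( MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT );
$db = new mysqli( 'db142.example.org', 'icinga', getenv( 'ICINGA_DB_PASS' ) ?: '' );

$max     = (int)$db->query( "SHOW GLOBAL VARIABLES LIKE 'max_connections'" )->fetch_row()[1];
$current = (int)$db->query( "SHOW GLOBAL STATUS LIKE 'Threads_connected'" )->fetch_row()[1];
$usage   = 100 * $current / $max;

printf( "connection usage: %.1f%% Current connections: %d\n", $usage, $current );
// Nagios exit conventions: 0 = OK, 1 = WARNING, 2 = CRITICAL (thresholds assumed).
exit( $usage >= 90 ? 2 : ( $usage >= 80 ? 1 : 0 ) );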
[02:41:57] PROBLEM - db142 PowerDNS Recursor on db142 is CRITICAL: CRITICAL - Plugin timed out while executing system call [02:42:05] !log [paladox@mwtask141] DEPLOY ABORTED: Canary check failed for publictestwiki.com@localhost [02:42:27] RECOVERY - mon141 icinga.miraheze.org HTTPS on mon141 is OK: HTTP OK: HTTP/1.1 302 Found - 297 bytes in 0.018 second response time [02:42:28] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 1.73, 2.40, 3.18 [02:42:31] RECOVERY - mw141 Puppet on mw141 is OK: OK: Puppet is currently enabled, last run 37 minutes ago with 0 failures [02:42:37] RECOVERY - db142 MariaDB on db142 is OK: Uptime: 742537 Threads: 9 Questions: 120478166 Slow queries: 8035608 Opens: 146080 Open tables: 145565 Queries per second avg: 162.252 [02:42:39] RECOVERY - mon141 HTTPS on mon141 is OK: HTTP OK: HTTP/2 200 - 336 bytes in 0.030 second response time [02:42:40] RECOVERY - mon141 grafana.miraheze.org HTTPS on mon141 is OK: HTTP OK: HTTP/1.1 200 OK - 43410 bytes in 0.723 second response time [02:42:42] RECOVERY - mw141 conntrack_table_size on mw141 is OK: OK: nf_conntrack is 0 % full [02:42:52] RECOVERY - mw141 SSH on mw141 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u2 (protocol 2.0) [02:43:10] RECOVERY - mw141 ferm_active on mw141 is OK: OK ferm input default policy is set [02:43:24] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [02:43:29] RECOVERY - os141 conntrack_table_size on os141 is OK: OK: nf_conntrack is 0 % full [02:43:33] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.895 second response time [02:43:48] RECOVERY - os141 SSH on os141 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u1 (protocol 2.0) [02:43:52] RECOVERY - db142 MariaDB Connections on db142 is OK: OK connection usage: 0.2%Current connections: 1 [02:43:55] RECOVERY - mw143 MediaWiki Rendering on mw143 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.280 second response time [02:43:56] RECOVERY - db142 PowerDNS Recursor on db142 is OK: DNS OK: 0.240 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [02:44:17] PROBLEM - db142 Current Load on db142 is WARNING: WARNING - load average: 2.93, 7.48, 5.14 [02:44:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.83, 11.58, 10.25 [02:44:43] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [02:44:48] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy [02:45:03] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [02:45:25] PROBLEM - mw142 Current Load on mw142 is WARNING: WARNING - load average: 5.22, 11.57, 9.77 [02:45:33] RECOVERY - os141 PowerDNS Recursor on os141 is OK: DNS OK: 0.170 seconds response time. 
miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [02:45:42] RECOVERY - bast141 Current Load on bast141 is OK: LOAD OK - total load average: 0.26, 1.59, 1.49 [02:46:17] RECOVERY - db142 Current Load on db142 is OK: OK - load average: 0.65, 5.09, 4.54 [02:46:26] RECOVERY - mw143 Current Load on mw143 is OK: OK - load average: 2.44, 10.14, 9.47 [02:46:55] PROBLEM - cloud14 Current Load on cloud14 is WARNING: WARNING - load average: 17.73, 70.44, 71.61 [02:47:25] RECOVERY - mw142 Current Load on mw142 is OK: OK - load average: 3.66, 8.69, 8.92 [02:47:26] PROBLEM - mw141 Current Load on mw141 is WARNING: WARNING - load average: 4.18, 8.76, 11.27 [02:48:54] RECOVERY - cloud14 Current Load on cloud14 is OK: OK - load average: 16.05, 52.96, 65.10 [02:49:29] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [02:49:30] RECOVERY - mw141 Disk Space on mw141 is OK: DISK OK - free space: / 9412 MB (39% inode=80%); [02:49:33] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:49:36] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 7s [02:49:38] !log [paladox@mwtask141] starting deploy of {'config': True} to all [02:49:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:49:43] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 5s [02:49:44] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:49:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:51:23] RECOVERY - mw141 Current Load on mw141 is OK: OK - load average: 2.64, 5.43, 9.33 [02:52:59] PROBLEM - ldap141 Current Load on ldap141 is WARNING: LOAD WARNING - total load average: 0.08, 0.83, 1.80 [02:53:57] PROBLEM - mon141 Current Load on mon141 is WARNING: LOAD WARNING - total load average: 0.65, 2.00, 3.92 [02:54:57] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 1.62, 1.87, 3.75 [02:54:59] RECOVERY - ldap141 Current Load on ldap141 is OK: LOAD OK - total load average: 0.83, 0.82, 1.67 [02:55:48] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/3a7f3f9af15e...eb8f0bc1a2e7 [02:55:50] [02miraheze/mw-config] 07paladox 03eb8f0bc - Set wgSearchSuggestCacheExpiry to 10800 [02:55:52] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [02:55:56] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:56:28] !log [@test131] starting deploy of {'config': True} to all [02:56:29] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [02:56:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:56:35] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [02:56:43] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 1 backends are down. mw141 [02:56:44] miraheze/mw-config - paladox the build passed. [02:56:48] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 1 backends are down. mw141 [02:56:54] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 4.88, 2.74, 3.83 [02:57:03] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 1 backends are down. 
mw141 [02:57:16] PROBLEM - mw141 Current Load on mw141 is CRITICAL: CRITICAL - load average: 14.30, 10.12, 9.93 [02:57:24] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 1 backends are down. mw141 [02:57:31] PROBLEM - mw141 HTTPS on mw141 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Connection timed out after 10002 milliseconds [02:57:40] PROBLEM - mw141 MediaWiki Rendering on mw141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [02:58:10] PROBLEM - db142 Current Load on db142 is CRITICAL: CRITICAL - load average: 18.91, 12.19, 7.15 [02:58:39] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.42, 11.21, 10.59 [02:58:59] PROBLEM - ldap141 Current Load on ldap141 is CRITICAL: LOAD CRITICAL - total load average: 2.14, 1.56, 1.78 [02:59:32] PROBLEM - bast141 Current Load on bast141 is CRITICAL: LOAD CRITICAL - total load average: 4.69, 2.21, 1.50 [02:59:57] PROBLEM - mon141 Current Load on mon141 is CRITICAL: LOAD CRITICAL - total load average: 4.29, 2.93, 3.63 [03:00:53] PROBLEM - cloud14 Current Load on cloud14 is CRITICAL: CRITICAL - load average: 123.25, 80.02, 65.02 [03:01:10] PROBLEM - mw141 Puppet on mw141 is CRITICAL: connect to address 2a10:6740::6:502 port 5666: No route to hostconnect to host 2a10:6740::6:502 port 5666: No route to host [03:01:10] PROBLEM - mw141 JobRunner Service on mw141 is CRITICAL: connect to address 2a10:6740::6:502 port 5666: No route to hostconnect to host 2a10:6740::6:502 port 5666: No route to host [03:01:15] PROBLEM - mw141 PowerDNS Recursor on mw141 is CRITICAL: connect to address 2a10:6740::6:502 port 5666: No route to hostconnect to host 2a10:6740::6:502 port 5666: No route to host [03:01:26] PROBLEM - mw143 Current Load on mw143 is CRITICAL: connect to address 2a10:6740::6:513 port 5666: No route to hostconnect to host 2a10:6740::6:513 port 5666: No route to host [03:01:32] PROBLEM - mw141 conntrack_table_size on mw141 is CRITICAL: connect to address 2a10:6740::6:502 port 5666: No route to hostconnect to host 2a10:6740::6:502 port 5666: No route to host [03:01:32] PROBLEM - mw141 Disk Space on mw141 is CRITICAL: connect to address 2a10:6740::6:502 port 5666: No route to hostconnect to host 2a10:6740::6:502 port 5666: No route to host [03:01:45] PROBLEM - mw141 ferm_active on mw141 is CRITICAL: connect to address 2a10:6740::6:502 port 5666: No route to hostconnect to host 2a10:6740::6:502 port 5666: No route to host [03:01:52] PROBLEM - mw141 NTP time on mw141 is CRITICAL: connect to address 2a10:6740::6:502 port 5666: No route to hostconnect to host 2a10:6740::6:502 port 5666: No route to host [03:01:59] PROBLEM - mw143 MediaWiki Rendering on mw143 is CRITICAL: connect to address 2a10:6740::6:513 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [03:02:02] PROBLEM - Host mw141 is DOWN: CRITICAL - Host Unreachable (2a10:6740::6:502) [03:02:13] PROBLEM - mw142 MediaWiki Rendering on mw142 is CRITICAL: connect to address 2a10:6740::6:503 and port 443: No route to hostHTTP CRITICAL - Unable to open TCP socket [03:02:21] PROBLEM - mw142 conntrack_table_size on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:02:27] PROBLEM - mw142 Current Load on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:02:28] PROBLEM - mw142 JobRunner Service 
on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:02:28] PROBLEM - mon141 grafana.miraheze.org HTTPS on mon141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:02:28] PROBLEM - mon141 HTTPS on mon141 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:02:35] PROBLEM - mw143 ferm_active on mw143 is CRITICAL: connect to address 2a10:6740::6:513 port 5666: No route to hostconnect to host 2a10:6740::6:513 port 5666: No route to host [03:02:38] PROBLEM - mw142 nutcracker process on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:02:41] PROBLEM - mw142 php-fpm on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:02:41] PROBLEM - mw142 HTTPS on mw142 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw142.miraheze.org port 443 after 3066 ms: Couldn't connect to server [03:02:45] PROBLEM - mw143 HTTPS on mw143 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to mw143.miraheze.org port 443 after 3066 ms: Couldn't connect to server [03:02:56] PROBLEM - ping6 on mw142 is CRITICAL: CRITICAL - Host Unreachable (2a10:6740::6:503) [03:02:56] PROBLEM - mw143 NTP time on mw143 is CRITICAL: connect to address 2a10:6740::6:513 port 5666: No route to hostconnect to host 2a10:6740::6:513 port 5666: No route to host [03:03:04] PROBLEM - mw143 nutcracker process on mw143 is CRITICAL: connect to address 2a10:6740::6:513 port 5666: No route to hostconnect to host 2a10:6740::6:513 port 5666: No route to host [03:03:12] PROBLEM - mw143 SSH on mw143 is CRITICAL: connect to address 2a10:6740::6:513 and port 22: No route to host [03:03:13] PROBLEM - mw142 ferm_active on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:03:13] PROBLEM - mw142 NTP time on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:03:19] PROBLEM - mw142 PowerDNS Recursor on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:03:19] PROBLEM - mw142 Puppet on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:03:19] PROBLEM - mw142 Disk Space on mw142 is CRITICAL: connect to address 2a10:6740::6:503 port 5666: No route to hostconnect to host 2a10:6740::6:503 port 5666: No route to host [03:03:24] PROBLEM - Host mw142 is DOWN: CRITICAL - Host Unreachable (2a10:6740::6:503) [03:03:25] PROBLEM - mw143 Puppet on mw143 is CRITICAL: connect to address 2a10:6740::6:513 port 5666: No route to hostconnect to host 2a10:6740::6:513 port 5666: No route to host [03:03:29] PROBLEM - mw143 PowerDNS Recursor on mw143 is CRITICAL: connect to address 2a10:6740::6:513 port 5666: No route to hostconnect to host 2a10:6740::6:513 port 5666: No route to host [03:03:29] PROBLEM - mw143 conntrack_table_size on mw143 is CRITICAL: connect 
to address 2a10:6740::6:513 port 5666: No route to hostconnect to host 2a10:6740::6:513 port 5666: No route to host [03:03:36] PROBLEM - Host mw143 is DOWN: CRITICAL - Host Unreachable (2a10:6740::6:513) [03:06:43] PROBLEM - os141 Puppet on os141 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [03:06:46] PROBLEM - ping6 on mwtask141 is CRITICAL: CRITICAL - Host Unreachable (2a10:6740::6:504) [03:06:49] PROBLEM - mwtask141 SSH on mwtask141 is CRITICAL: connect to address 2a10:6740::6:504 and port 22: No route to host [03:06:49] PROBLEM - mwtask141 mathoid on mwtask141 is CRITICAL: connect to address 2a10:6740::6:504 and port 10044: No route to host [03:07:33] PROBLEM - Host mwtask141 is DOWN: CRITICAL - Host Unreachable (2a10:6740::6:504) [03:07:33] PROBLEM - mwtask141 NTP time on mwtask141 is CRITICAL: connect to address 2a10:6740::6:504 port 5666: No route to hostconnect to host 2a10:6740::6:504 port 5666: No route to host [03:07:34] RECOVERY - Host mw142 is UP: PING OK - Packet loss = 0%, RTA = 1.51 ms [03:07:34] RECOVERY - mw142 JobRunner Service on mw142 is OK: PROCS OK: 1 process with args 'redisJobRunnerService' [03:07:34] RECOVERY - mw142 Current Load on mw142 is OK: OK - load average: 0.78, 0.16, 0.05 [03:07:34] RECOVERY - mw142 php-fpm on mw142 is OK: PROCS OK: 25 processes with command name 'php-fpm7.4' [03:07:35] RECOVERY - mw142 nutcracker process on mw142 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker' [03:07:35] RECOVERY - Host mw143 is UP: PING OK - Packet loss = 0%, RTA = 2.10 ms [03:07:38] RECOVERY - mon141 grafana.miraheze.org HTTPS on mon141 is OK: HTTP OK: HTTP/1.1 200 OK - 43418 bytes in 0.066 second response time [03:07:39] RECOVERY - mon141 HTTPS on mon141 is OK: HTTP OK: HTTP/2 200 - 336 bytes in 0.024 second response time [03:07:43] RECOVERY - mw142 ferm_active on mw142 is OK: OK ferm input default policy is set [03:07:43] RECOVERY - mw143 Current Load on mw143 is OK: OK - load average: 1.68, 0.37, 0.12 [03:07:44] RECOVERY - ping6 on mw142 is OK: PING OK - Packet loss = 0%, RTA = 1.31 ms [03:07:46] RECOVERY - mw143 SSH on mw143 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u2 (protocol 2.0) [03:07:46] PROBLEM - mw142 NTP time on mw142 is WARNING: NTP WARNING: Offset 0.3951113224 secs [03:07:46] RECOVERY - mw143 conntrack_table_size on mw143 is OK: OK: nf_conntrack is 0 % full [03:07:47] RECOVERY - mw142 PowerDNS Recursor on mw142 is OK: DNS OK: 5.750 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [03:07:48] RECOVERY - mw142 Disk Space on mw142 is OK: DISK OK - free space: / 9365 MB (39% inode=79%); [03:07:49] RECOVERY - mw142 Puppet on mw142 is OK: OK: Puppet is currently enabled, last run 37 minutes ago with 0 failures [03:07:49] RECOVERY - mw143 Puppet on mw143 is OK: OK: Puppet is currently enabled, last run 34 minutes ago with 0 failures [03:07:49] RECOVERY - mw143 PowerDNS Recursor on mw143 is OK: DNS OK: 0.229 seconds response time. 
miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [03:07:53] PROBLEM - db142 MariaDB Connections on db142 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db142.miraheze....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections. [03:07:53] n line 66Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db142.miraheze....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66 [03:08:08] RECOVERY - mw142 conntrack_table_size on mw142 is OK: OK: nf_conntrack is 0 % full [03:08:09] RECOVERY - mw142 MediaWiki Rendering on mw142 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.994 second response time [03:08:39] PROBLEM - os141 Puppet on os141 is UNKNOWN: NRPE: Unable to read output [03:08:53] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [03:09:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:09:24] RECOVERY - mw143 HTTPS on mw143 is OK: HTTP OK: HTTP/2 301 - 345 bytes in 0.012 second response time [03:09:29] PROBLEM - db142 MariaDB on db142 is CRITICAL: Can't connect to MySQL server on 'db142.miraheze.org' (115) [03:09:29] RECOVERY - Host mw141 is UP: PING OK - Packet loss = 0%, RTA = 1.13 ms [03:09:29] RECOVERY - mw142 HTTPS on mw142 is OK: HTTP OK: HTTP/2 301 - 345 bytes in 0.018 second response time [03:09:31] RECOVERY - mw143 ferm_active on mw143 is OK: OK ferm input default policy is set [03:09:31] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 38s [03:09:32] RECOVERY - mw143 nutcracker process on mw143 is OK: PROCS OK: 1 process with UID = 115 (nutcracker), command name 'nutcracker' [03:09:33] RECOVERY - Host mwtask141 is UP: PING OK - Packet loss = 0%, RTA = 0.25 ms [03:09:33] RECOVERY - mwtask141 NTP time on mwtask141 is OK: NTP OK: Offset -0.0006466805935 secs [03:09:33] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.91, 3.48, 1.25 [03:09:35] RECOVERY - mw141 PowerDNS Recursor on mw141 is OK: DNS OK: 0.104 seconds response time. 
miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [03:09:36] RECOVERY - mw141 Puppet on mw141 is OK: OK: Puppet is currently enabled, last run 25 minutes ago with 0 failures [03:09:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:09:36] PROBLEM - cloud14 Current Load on cloud14 is WARNING: WARNING - load average: 17.95, 70.34, 75.36 [03:09:38] !log [paladox@mwtask141] starting deploy of {'config': True} to all [03:09:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:09:43] RECOVERY - mw141 HTTPS on mw141 is OK: HTTP OK: HTTP/2 301 - 345 bytes in 0.169 second response time [03:09:44] RECOVERY - mw141 JobRunner Service on mw141 is OK: PROCS OK: 1 process with args 'redisJobRunnerService' [03:09:45] RECOVERY - mw143 NTP time on mw143 is OK: NTP OK: Offset -0.0002294182777 secs [03:09:46] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 7s [03:09:46] RECOVERY - mw142 NTP time on mw142 is OK: NTP OK: Offset -0.007281452417 secs [03:09:49] RECOVERY - mw141 ferm_active on mw141 is OK: OK ferm input default policy is set [03:09:50] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:09:51] RECOVERY - mw141 conntrack_table_size on mw141 is OK: OK: nf_conntrack is 0 % full [03:09:52] RECOVERY - mw141 Disk Space on mw141 is OK: DISK OK - free space: / 9443 MB (39% inode=80%); [03:09:53] PROBLEM - mwtask141 MediaWiki Rendering on mwtask141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:09:53] RECOVERY - mw141 MediaWiki Rendering on mw141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 4.899 second response time [03:09:54] RECOVERY - mw141 NTP time on mw141 is OK: NTP OK: Offset -4.801154137e-05 secs [03:09:57] RECOVERY - mw143 MediaWiki Rendering on mw143 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.407 second response time [03:10:18] RECOVERY - mw141 Current Load on mw141 is OK: OK - load average: 5.69, 1.95, 0.71 [03:10:43] RECOVERY - mwtask141 SSH on mwtask141 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u2 (protocol 2.0) [03:10:48] RECOVERY - mwtask141 mathoid on mwtask141 is OK: TCP OK - 0.000 second response time on 2a10:6740::6:504 port 10044 [03:10:53] RECOVERY - ping6 on mwtask141 is OK: PING OK - Packet loss = 0%, RTA = 1.04 ms [03:11:20] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy [03:11:22] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [03:11:27] RECOVERY - db142 MariaDB on db142 is OK: Uptime: 243 Threads: 12 Questions: 12219 Slow queries: 498 Opens: 1183 Open tables: 1176 Queries per second avg: 50.283 [03:11:31] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [03:11:33] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [03:11:44] RECOVERY - mwtask141 MediaWiki Rendering on mwtask141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.378 second response time [03:11:47] RECOVERY - db142 MariaDB Connections on db142 is OK: OK connection usage: 1.2%Current connections: 6 [03:11:57] PROBLEM - mon141 Current Load on mon141 is WARNING: LOAD WARNING - total load average: 0.52, 2.72, 3.96 [03:13:33] RECOVERY - cloud14 Current Load on cloud14 is OK: OK - load average: 17.46, 41.43, 62.36 [03:14:06] PROBLEM - swiftproxy111 Puppet on swiftproxy111 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. 
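Earlier in the incident (02:31 and 02:55) two cache-lifetime settings were also pushed; neither diff is shown, so the snippet below just writes out the values named in the commit messages, with the 5-day figure converted to seconds:

<?php
// Values from the commit messages "Set wgCdnMaxAge to 5 days" (3a7f3f9) and
// "Set wgSearchSuggestCacheExpiry to 10800" (eb8f0bc); the rest is assumed.
$wgCdnMaxAge = 5 * 24 * 60 * 60;     // 5 days = 432000 seconds of CDN (Varnish) caching
$wgSearchSuggestCacheExpiry = 10800; // search-suggestion cache TTL: 10800 s = 3 hours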
[03:15:29] PROBLEM - mail121 Puppet on mail121 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [03:15:57] RECOVERY - mon141 Current Load on mon141 is OK: LOAD OK - total load average: 0.56, 1.58, 3.20 [03:16:13] PROBLEM - puppet141 Puppet on puppet141 is CRITICAL: CRITICAL: Failed to apply catalog, zero resources tracked by Puppet. It might be a dependency cycle. [03:16:31] PROBLEM - ldap141 Current Load on ldap141 is WARNING: LOAD WARNING - total load average: 0.03, 0.88, 1.86 [03:16:41] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [03:17:26] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 0.03, 1.36, 3.68 [03:18:29] RECOVERY - ldap141 Current Load on ldap141 is OK: LOAD OK - total load average: 0.05, 0.60, 1.63 [03:19:26] RECOVERY - os141 Current Load on os141 is OK: LOAD OK - total load average: 0.17, 0.98, 3.26 [03:19:41] PROBLEM - bast141 Current Load on bast141 is WARNING: LOAD WARNING - total load average: 0.00, 0.59, 1.89 [03:20:41] [02miraheze/mw-config] 07paladox pushed 031 commit to 03paladox-patch-4 [+0/-0/±1] 13https://github.com/miraheze/mw-config/commit/169f0d8cbbb4 [03:20:44] [02miraheze/mw-config] 07paladox 03169f0d8 - Set $wgChronologyProtectorStash [03:20:46] PROBLEM - db142 Current Load on db142 is WARNING: WARNING - load average: 0.35, 2.44, 7.36 [03:20:47] [02mw-config] 07paladox created branch 03paladox-patch-4 - 13https://github.com/miraheze/mw-config [03:20:50] [02mw-config] 07paladox opened pull request 03#5375: Set $wgChronologyProtectorStash - 13https://github.com/miraheze/mw-config/pull/5375 [03:21:09] [02miraheze/mw-config] 07github-actions[bot] pushed 031 commit to 03paladox-patch-4 [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/169f0d8cbbb4...d8ea00fed1f6 [03:21:10] [02miraheze/mw-config] 07github-actions 03d8ea00f - CI: lint code to MediaWiki standards [03:21:11] [02mw-config] 07github-actions[bot] synchronize pull request 03#5375: Set $wgChronologyProtectorStash - 13https://github.com/miraheze/mw-config/pull/5375 [03:21:36] [02mw-config] 07paladox closed pull request 03#5375: Set $wgChronologyProtectorStash - 13https://github.com/miraheze/mw-config/pull/5375 [03:21:37] [02miraheze/mw-config] 07paladox deleted branch 03paladox-patch-4 [03:21:38] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [03:21:40] RECOVERY - bast141 Current Load on bast141 is OK: LOAD OK - total load average: 0.03, 0.40, 1.66 [03:21:40] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/eb8f0bc1a2e7...c24b5cd535da [03:21:40] !log [paladox@test131] starting deploy of {'config': True} to all [03:21:42] [02miraheze/mw-config] 07paladox 03c24b5cd - Set $wgChronologyProtectorStash (#5375) [03:21:42] !log [paladox@test131] starting deploy of {'pull': 'config', 'config': True} to all [03:21:42] miraheze/mw-config - paladox the build passed. 
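$wgChronologyProtectorStash, merged just above, names the object cache that ChronologyProtector uses to remember each client's replication position so follow-up requests can wait for an up-to-date replica. The value actually deployed in c24b5cd isn't visible here, so the one below is a placeholder:

<?php
// Placeholder value: the cache name used in commit c24b5cd is not shown in this log.
// Keeping ChronologyProtector positions in a shared object cache avoids extra database
// round trips during load spikes like the ones above.
$wgChronologyProtectorStash = 'redis'; // expected to name a cache configured in $wgObjectCaches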
[03:21:43] !log [paladox@test131] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 1s [03:21:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:21:45] !log [paladox@test131] starting deploy of {'config': True} to all [03:21:45] [02mw-config] 07paladox deleted branch 03paladox-patch-4 - 13https://github.com/miraheze/mw-config [03:21:46] !log [paladox@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [03:21:47] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 8s [03:21:48] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:21:48] !log [paladox@mwtask141] starting deploy of {'config': True} to all [03:21:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:21:54] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 5s [03:22:01] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:22:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:22:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:22:13] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.98, 9.80, 6.14 [03:22:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:22:22] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:22:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:22:31] miraheze/mw-config - paladox the build passed. [03:22:45] RECOVERY - db142 Current Load on db142 is OK: OK - load average: 0.42, 1.80, 6.53 [03:28:02] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.83, 10.10, 7.45 [03:38:13] RECOVERY - puppet141 Puppet on puppet141 is OK: OK: Puppet is currently enabled, last run 30 seconds ago with 0 failures [03:39:12] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query. 
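The recurring sslhost alerts (wiki.gab.pt.eu.org, sdiy.info, www.sdiy.info) combine a reverse-DNS lookup with a comparison of the domain's NS/CNAME records against what a Miraheze-hosted custom domain should look like. A rough PHP sketch of that idea using the standard DNS functions; the expected ".miraheze.org" suffix and the "records conflict" rule are assumptions reconstructed from the alert text, not the real sslhost check:

<?php
// Rough sketch of an rDNS + record-conflict check like the sslhost alerts in this log.
// The conflict rule and expected suffix are assumptions, not the production check.
function checkCustomDomain( string $domain ): string {
	$ip = gethostbyname( $domain );
	if ( $ip === $domain ) {
		return "CRITICAL - $domain did not resolve";
	}
	$rdns = gethostbyaddr( $ip );
	if ( $rdns === false || !str_ends_with( $rdns, '.miraheze.org' ) ) {
		return "CRITICAL - $domain reverse DNS resolves to " . ( $rdns ?: 'nothing' );
	}
	$ns       = array_column( dns_get_record( $domain, DNS_NS ) ?: [], 'target' );
	$cnameRec = dns_get_record( $domain, DNS_CNAME ) ?: [];
	$cname    = $cnameRec[0]['target'] ?? null;
	if ( $ns ) {
		// The domain publishes its own NS delegation: report it alongside any CNAME,
		// mirroring the "rDNS OK but records conflict" warnings seen above.
		return 'WARNING - rDNS OK but records conflict. ' .
			json_encode( [ 'NS' => $ns, 'CNAME' => $cname ] );
	}
	return "OK - $domain reverse DNS resolves to $rdns - NS RECORDS OK";
}

echo checkCustomDomain( 'sdiy.info' ), "\n";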
[03:40:41] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.53, 10.17, 8.86 [03:41:01] RECOVERY - mw131 Disk Space on mw131 is OK: DISK OK - free space: / 9315 MB (39% inode=79%); [03:42:05] RECOVERY - swiftproxy111 Puppet on swiftproxy111 is OK: OK: Puppet is currently enabled, last run 1 second ago with 0 failures [03:42:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.69, 9.63, 8.83 [03:43:05] RECOVERY - mail121 Puppet on mail121 is OK: OK: Puppet is currently enabled, last run 14 seconds ago with 0 failures [03:45:56] [02miraheze/mw-config] 07paladox pushed 031 commit to 03master [+0/-0/±1] 13https://github.com/miraheze/mw-config/compare/c24b5cd535da...383befa93d3a [03:45:58] [02miraheze/mw-config] 07paladox 03383befa - Set wgPropagateErrors to false [03:45:59] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [03:46:00] !log [paladox@test131] starting deploy of {'pull': 'config', 'config': True} to all [03:46:02] !log [paladox@test131] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 1s [03:46:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:04] !log [paladox@test131] starting deploy of {'config': True} to all [03:46:05] !log [paladox@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [03:46:06] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 7s [03:46:07] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:09] !log [paladox@mwtask141] starting deploy of {'config': True} to all [03:46:11] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:15] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 5s [03:46:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:21] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:24] PROBLEM - sdiy.info - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - sdiy.info reverse DNS resolves to vps-c10c02c8.vps.ovh.net [03:46:25] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:37] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [03:46:49] miraheze/mw-config - paladox the build passed. [04:08:25] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. 
{'NS': ['NS.ANKH.FR.eu.org.', 'NS1.eu.org.', 'NS1.ERIOMEM.NET.'], 'CNAME': 'bouncingwiki.miraheze.org.'} [04:17:00] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.15, 10.24, 9.74 [04:18:57] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 10.05, 10.20, 9.79 [04:22:52] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.24, 10.32, 9.95 [04:30:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.60, 10.10, 10.03 [04:32:28] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.98, 3.36, 3.08 [04:34:41] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.02, 6.93, 6.43 [04:36:28] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.85, 3.30, 3.13 [04:36:35] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.55, 6.83, 6.45 [04:37:34] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.74, 10.44, 10.19 [04:38:30] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.62, 6.41, 6.34 [05:02:45] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.64, 10.01, 10.20 [05:10:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.64, 10.16, 10.12 [05:12:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 10.20, 10.17, 10.13 [05:37:04] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.58, 10.25, 9.99 [05:39:00] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.85, 10.05, 9.95 [05:42:55] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.24, 10.13, 10.00 [05:44:51] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 10.03, 10.07, 9.99 [05:45:15] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. 
{'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [05:53:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.47, 10.59, 10.23 [05:55:35] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.89, 10.01, 10.06 [05:56:43] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 10.27, 5.59, 3.20 [06:00:43] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 4.59, 6.84, 4.44 [06:02:28] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.83, 3.50, 3.11 [06:02:43] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 14.09, 8.74, 5.36 [06:03:16] PROBLEM - db112 Disk Space on db112 is CRITICAL: DISK CRITICAL - free space: / 7794 MB (5% inode=99%); [06:04:28] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.81, 3.23, 3.06 [06:05:19] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.26, 10.75, 10.32 [06:06:43] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 2.80, 6.17, 5.15 [06:14:57] PROBLEM - sdiy.info - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - sdiy.info reverse DNS resolves to vps-c10c02c8.vps.ovh.net [06:21:46] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK [06:25:45] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 8 = Critical] [06:33:22] PROBLEM - db131 Disk Space on db131 is CRITICAL: DISK CRITICAL - free space: / 14278 MB (5% inode=96%); [06:34:31] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query. [06:35:16] PROBLEM - db112 Disk Space on db112 is WARNING: DISK WARNING - free space: / 12875 MB (9% inode=99%); [06:44:40] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [06:46:50] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 9.28, 8.31, 6.44 [06:47:12] RECOVERY - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.twistwoodtaleswiki.com' will expire on Wed 20 Dec 2023 22:08:34 GMT +0000. [06:50:43] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 6.04, 7.71, 6.74 [06:56:43] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 6.24, 6.60, 6.55 [07:00:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.26, 11.25, 10.88 [07:03:44] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query wiki.gab.pt.eu.org. 
IN CNAME: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL [07:06:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.61, 11.63, 11.20 [07:14:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.06, 11.53, 11.26 [07:16:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.50, 11.19, 11.18 [07:17:53] RECOVERY - cloud11 IPMI Sensors on cloud11 is OK: IPMI Status: OK [07:19:12] PROBLEM - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.twistwoodtaleswiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [07:21:20] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: LOAD CRITICAL - total load average: 7.04, 5.01, 3.72 [07:21:36] PROBLEM - mw132 Current Load on mw132 is CRITICAL: CRITICAL - load average: 12.54, 10.15, 6.36 [07:21:54] PROBLEM - cloud11 IPMI Sensors on cloud11 is CRITICAL: IPMI Status: Critical [Cntlr 2 Bay 8 = Critical] [07:22:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 7.65, 8.78, 10.14 [07:23:30] PROBLEM - mw132 Disk Space on mw132 is CRITICAL: DISK CRITICAL - free space: / 1008 MB (4% inode=79%); [07:23:31] RECOVERY - mw132 Current Load on mw132 is OK: OK - load average: 5.87, 8.41, 6.16 [07:29:12] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 2.98, 3.60, 3.54 [07:29:52] RECOVERY - ewgf.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'ewgf.wiki' will expire on Wed 20 Dec 2023 22:59:05 GMT +0000. [07:33:08] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.78, 3.15, 3.37 [07:38:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.27, 10.57, 10.29 [07:40:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.32, 10.17, 10.18 [07:44:05] RECOVERY - sdiy.info - reverse DNS on sslhost is OK: SSL OK - sdiy.info reverse DNS resolves to cp25.miraheze.org - NS RECORDS OK [07:48:23] RECOVERY - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.twistwoodtaleswiki.com' will expire on Wed 20 Dec 2023 22:08:34 GMT +0000. [07:54:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.33, 10.33, 10.17 [08:00:23] PROBLEM - ewgf.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address ewgf.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [08:15:44] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. 
{'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [08:19:34] PROBLEM - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.twistwoodtaleswiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [08:35:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.79, 7.05, 5.78 [08:37:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.32, 6.26, 5.64 [08:45:27] PROBLEM - sdiy.info - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - sdiy.info reverse DNS resolves to vps-c10c02c8.vps.ovh.net [08:54:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.75, 9.54, 10.04 [09:04:28] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.57, 3.19, 3.01 [09:06:28] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 1.74, 2.72, 2.86 [09:08:21] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.80, 10.37, 10.17 [09:19:59] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.15, 10.00, 10.20 [09:33:36] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.50, 10.24, 10.17 [09:35:30] RECOVERY - mw132 Disk Space on mw132 is OK: DISK OK - free space: / 8762 MB (36% inode=79%); [09:35:32] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.89, 10.06, 10.11 [09:42:53] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.96, 3.44, 3.09 [09:44:53] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [09:45:18] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 9.96, 10.25, 10.20 [09:46:49] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 3.09, 3.16, 3.04 [09:47:14] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.68, 10.11, 10.16 [09:55:02] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.09, 10.56, 10.33 [10:26:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.13, 10.93, 10.65 [10:28:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.08, 10.92, 10.68 [10:44:18] PROBLEM - sdiy.info - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - sdiy.info reverse DNS resolves to vps-c10c02c8.vps.ovh.net [10:58:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.72, 11.56, 10.95 [11:00:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.46, 11.42, 10.97 [11:42:41] PROBLEM - glitchcity.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'glitchcity.wiki' expires in 15 day(s) (Sat 02 Dec 2023 11:27:37 GMT +0000). [11:43:06] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/d9d2ff59428a...f6194afbf5e8 [11:43:09] [miraheze/ssl] MirahezeSSLBot f6194af - Bot: Update SSL cert for glitchcity.wiki [11:43:44] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. 
{'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [11:47:12] RECOVERY - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.twistwoodtaleswiki.com' will expire on Wed 20 Dec 2023 22:08:34 GMT +0000. [11:47:16] PROBLEM - www.glitchcity.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'glitchcity.wiki' expires in 15 day(s) (Sat 02 Dec 2023 11:27:37 GMT +0000). [11:54:03] RECOVERY - ewgf.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'ewgf.wiki' will expire on Wed 20 Dec 2023 22:59:05 GMT +0000. [11:59:42] PROBLEM - franchise.franchising.org.ua - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - franchise.franchising.org.ua All nameservers failed to answer the query. [12:00:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.09, 9.89, 10.19 [12:08:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.31, 11.11, 10.56 [12:11:29] RECOVERY - glitchcity.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'glitchcity.wiki' will expire on Wed 14 Feb 2024 10:42:59 GMT +0000. [12:14:10] [miraheze/ManageWiki] translatewiki pushed 1 commit to master [+0/-0/±2] https://github.com/miraheze/ManageWiki/compare/06c4e0d6bf25...809a1e0da4d4 [12:14:12] [miraheze/ManageWiki] translatewiki 809a1e0 - Localisation updates from https://translatewiki.net. [12:16:01] RECOVERY - www.glitchcity.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'glitchcity.wiki' will expire on Wed 14 Feb 2024 10:42:59 GMT +0000. [12:19:12] PROBLEM - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.twistwoodtaleswiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [12:19:35] miraheze/ManageWiki - translatewiki the build passed. [12:26:02] PROBLEM - ewgf.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address ewgf.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [12:29:31] RECOVERY - franchise.franchising.org.ua - reverse DNS on sslhost is OK: SSL OK - franchise.franchising.org.ua reverse DNS resolves to cp24.miraheze.org - CNAME OK [12:32:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.61, 11.60, 11.72 [12:40:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.01, 11.72, 11.68 [12:42:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.90, 11.70, 11.67 [12:43:10] PROBLEM - sdiy.info - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - sdiy.info reverse DNS resolves to vps-c10c02c8.vps.ovh.net [12:44:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 13.01, 12.16, 11.85 [12:46:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.58, 11.92, 11.80 [12:48:23] RECOVERY - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.twistwoodtaleswiki.com' will expire on Wed 20 Dec 2023 22:08:34 GMT +0000. [12:52:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.12, 11.78, 11.75 [12:54:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.80, 11.79, 11.76 [13:12:53] PROBLEM - sdiy.info - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. 
{'NS': ['ns1.wikitide.net.', 'ns2.wikitide.net.'], 'CNAME': None} [13:19:34] PROBLEM - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.twistwoodtaleswiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [13:24:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.06, 9.21, 10.06 [13:40:21] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.71, 10.41, 10.24 [13:42:36] PROBLEM - sdiy.info - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - sdiy.info reverse DNS resolves to vps-c10c02c8.vps.ovh.net [13:44:14] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.63, 10.15, 10.19 [13:48:46] RECOVERY - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.twistwoodtaleswiki.com' will expire on Wed 20 Dec 2023 22:08:34 GMT +0000. [14:09:35] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.28, 10.51, 10.19 [14:10:07] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.74, 7.50, 6.83 [14:12:01] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.46, 7.00, 6.72 [14:15:50] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.46, 6.56, 6.59 [14:19:57] PROBLEM - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.twistwoodtaleswiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [14:21:32] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.89, 7.07, 6.74 [14:25:21] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.48, 6.38, 6.56 [14:31:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.61, 7.19, 6.88 [14:33:38] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.25, 6.83, 6.16 [14:35:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.98, 6.58, 6.70 [14:39:38] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.75, 6.47, 6.20 [14:40:44] PROBLEM - guia.cineastas.pt - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query cineastas.pt. 
IN NS: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL [14:42:10] RECOVERY - www.sdiy.info - reverse DNS on sslhost is OK: SSL OK - www.sdiy.info reverse DNS resolves to cp24.miraheze.org - NS RECORDS OK [14:46:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.09, 9.78, 10.19 [14:53:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.32, 7.29, 6.78 [14:54:31] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 10.66, 7.52, 6.46 [14:55:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.23, 7.99, 7.12 [14:56:27] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.28, 7.56, 6.62 [14:57:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.87, 7.13, 6.91 [14:58:23] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.15, 7.85, 6.84 [15:02:14] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.09, 6.60, 6.57 [15:05:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.14, 7.59, 7.16 [15:07:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.79, 6.86, 6.94 [15:09:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.98, 6.34, 6.75 [15:10:00] RECOVERY - guia.cineastas.pt - reverse DNS on sslhost is OK: SSL OK - guia.cineastas.pt reverse DNS resolves to cp24.miraheze.org - CNAME OK [15:14:10] PROBLEM - www.sdiy.info - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.sdiy.info could not be found [15:24:04] RECOVERY - ewgf.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'ewgf.wiki' will expire on Wed 20 Dec 2023 22:59:05 GMT +0000. 
[15:29:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.38, 7.02, 6.66 [15:31:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.18, 6.71, 6.58 [15:49:38] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.41, 6.63, 6.38 [15:53:26] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.87, 6.69, 6.50 [15:56:03] PROBLEM - ewgf.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address ewgf.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [15:57:22] PROBLEM - db131 Disk Space on db131 is WARNING: DISK WARNING - free space: / 14772 MB (6% inode=96%); [16:06:07] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.39, 7.67, 7.01 [16:07:38] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.95, 7.22, 6.71 [16:09:38] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.29, 6.77, 6.62 [16:09:55] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.19, 7.46, 7.12 [16:12:38] PROBLEM - cloud10 Puppet on cloud10 is UNKNOWN: NRPE: Unable to read output [16:21:20] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.24, 7.44, 7.18 [16:23:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.21, 6.58, 6.89 [16:32:53] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.52, 3.29, 2.89 [16:33:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.38, 6.50, 6.73 [16:34:49] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.65, 3.01, 2.83 [16:38:43] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.33, 3.49, 3.09 [16:40:38] RECOVERY - cloud10 Puppet on cloud10 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [16:40:40] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.71, 3.21, 3.03 [16:51:28] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/mw-config/compare/383befa93d3a...906e16332fc2 [16:51:30] [miraheze/mw-config] paladox 906e163 - Set warmup to true always unless commonswiki [16:52:24] miraheze/mw-config - paladox the build passed. 
[16:52:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.40, 10.51, 9.71 [16:54:06] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/mw-config/compare/906e16332fc2...0e8aea648f75 [16:54:09] [miraheze/mw-config] paladox 0e8aea6 - Add c5 to wgCreateWikiDatabaseClustersInactive and remove c3 [16:54:26] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [16:54:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:54:35] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 9s [16:54:37] !log [paladox@mwtask141] starting deploy of {'config': True} to all [16:54:39] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.11, 10.98, 9.98 [16:54:39] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:54:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:54:44] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 6s [16:54:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:55:05] miraheze/mw-config - paladox the build passed. [16:56:05] !log [@test131] starting deploy of {'config': True} to all [16:56:06] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [16:56:09] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:56:13] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:56:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.00, 10.47, 9.91 [16:57:22] RECOVERY - db131 Disk Space on db131 is OK: DISK OK - free space: / 32922 MB (13% inode=96%); [17:00:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.87, 9.99, 9.83 [17:03:25] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/mw-config/compare/0e8aea648f75...7a455c17a16b [17:03:28] [miraheze/mw-config] paladox 7a455c1 - Lower wgParserCacheExpireTim to 7 days [17:03:29] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [17:03:36] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:03:39] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 9s [17:03:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:03:43] !log [paladox@mwtask141] starting deploy of {'config': True} to all [17:03:48] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:03:55] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 11s [17:04:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:04:20] miraheze/mw-config - paladox the build passed. 
[17:04:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.56, 10.46, 10.07 [17:12:10] RECOVERY - www.sdiy.info - reverse DNS on sslhost is OK: SSL OK - www.sdiy.info reverse DNS resolves to cp24.miraheze.org - NS RECORDS OK [17:12:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.43, 9.63, 9.93 [17:25:44] !log [@test131] starting deploy of {'config': True} to all [17:25:45] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [17:25:48] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:25:53] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [17:46:26] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query. [17:47:12] RECOVERY - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.twistwoodtaleswiki.com' will expire on Wed 20 Dec 2023 22:08:34 GMT +0000. [17:54:03] RECOVERY - ewgf.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'ewgf.wiki' will expire on Wed 20 Dec 2023 22:59:05 GMT +0000. [17:59:38] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.18, 6.88, 6.40 [18:01:38] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.06, 6.71, 6.40 [18:06:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.44, 9.85, 9.34 [18:12:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.83, 10.05, 9.62 [18:13:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.03, 6.62, 6.21 [18:14:47] PROBLEM - www.sdiy.info - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for www.sdiy.info could not be found [18:15:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.07, 6.36, 6.17 [18:15:43] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. 
{'NS': ['NS.ANKH.FR.eu.org.', 'NS1.eu.org.', 'NS1.ERIOMEM.NET.'], 'CNAME': 'bouncingwiki.miraheze.org.'} [18:16:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.77, 10.92, 10.07 [18:18:27] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.70, 6.85, 6.48 [18:18:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 13.15, 11.62, 10.42 [18:19:12] PROBLEM - www.twistwoodtaleswiki.com - LetsEncrypt on sslhost is CRITICAL: connect to address www.twistwoodtaleswiki.com and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [18:19:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.13, 7.55, 6.70 [18:20:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.87, 11.63, 10.58 [18:22:18] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.61, 6.39, 6.40 [18:24:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.19, 11.96, 10.95 [18:25:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.46, 7.84, 7.14 [18:26:02] PROBLEM - ewgf.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address ewgf.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [18:26:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.45, 11.53, 10.93 [18:27:11] PROBLEM - decyclopedia.tk - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Certificate 'decyclopedia.tk' expires in 7 day(s) (Fri 24 Nov 2023 18:08:29 GMT +0000). [18:33:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.45, 7.40, 7.11 [18:35:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.49, 6.98, 6.99 [18:39:43] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.55, 6.77, 6.43 [18:40:40] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.48, 3.27, 3.06 [18:41:37] RECOVERY - os131 Disk Space on os131 is OK: DISK OK - free space: / 71309MiB (35% inode=99%); [18:41:39] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.24, 7.35, 6.67 [18:42:38] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: LOAD CRITICAL - total load average: 7.93, 4.77, 3.62 [18:43:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.38, 7.60, 7.18 [18:43:38] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.79, 6.64, 6.49 [18:44:55] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 4.41, 3.19, 1.56 [18:45:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.44, 7.28, 7.12 [18:48:55] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 3.49, 3.67, 2.16 [18:50:55] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 4.05, 3.70, 2.35 [18:54:55] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 3.44, 3.74, 2.69 [18:58:55] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 4.49, 3.85, 2.96 [19:00:07] [miraheze/ssl] paladox pushed 1 commit to paladox-patch-1 
[+0/-0/±1] https://github.com/miraheze/ssl/commit/5883d688c012 [19:00:09] [miraheze/ssl] paladox 5883d68 - Update opensearch-node.crt [19:00:12] [ssl] paladox created branch paladox-patch-1 - https://github.com/miraheze/ssl [19:00:15] [ssl] paladox opened pull request #736: Update opensearch-node.crt - https://github.com/miraheze/ssl/pull/736 [19:00:25] [ssl] paladox closed pull request #736: Update opensearch-node.crt - https://github.com/miraheze/ssl/pull/736 [19:00:26] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/f6194afbf5e8...52e661fc4f13 [19:00:28] [miraheze/ssl] paladox 52e661f - Update opensearch-node.crt (#736) [19:00:31] [miraheze/ssl] paladox deleted branch paladox-patch-1 [19:00:34] [ssl] paladox deleted branch paladox-patch-1 - https://github.com/miraheze/ssl [19:00:56] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 3.14, 3.57, 2.97 [19:04:55] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 4.63, 3.80, 3.18 [19:11:16] RECOVERY - os141 Puppet on os141 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [19:15:30] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/b28f7be352cd...5a8641b1d984 [19:15:32] [miraheze/puppet] paladox 5a8641b - graylog: upgrade to 5.2.1 [19:16:43] RECOVERY - os131 Puppet on os131 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [19:20:55] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 3.31, 3.78, 3.94 [19:22:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.26, 9.56, 10.09 [19:24:04] RECOVERY - ewgf.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'ewgf.wiki' will expire on Wed 20 Dec 2023 22:59:05 GMT +0000. 
[19:25:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.02, 7.32, 7.04 [19:26:55] RECOVERY - os131 Current Load on os131 is OK: LOAD OK - total load average: 1.78, 2.64, 3.39 [19:27:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.61, 7.32, 7.07 [19:34:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.53, 10.52, 10.22 [19:38:04] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.30, 7.05, 6.65 [19:38:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.65, 9.93, 10.04 [19:39:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.99, 7.57, 7.26 [19:41:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.76, 7.73, 7.37 [19:43:52] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.10, 6.23, 6.45 [19:47:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.63, 7.17, 7.16 [19:51:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.77, 7.03, 7.12 [19:55:33] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.77, 10.37, 10.07 [19:56:02] PROBLEM - ewgf.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address ewgf.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [19:59:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.01, 7.28, 7.09 [20:01:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.75, 7.01, 7.01 [20:01:22] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.27, 10.02, 10.05 [20:05:17] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.19, 10.56, 10.26 [20:05:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.77, 7.63, 7.24 [20:09:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.76, 7.29, 7.17 [20:11:38] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.83, 6.28, 6.22 [20:13:38] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 6.00, 6.15, 6.18 [20:16:28] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 2.21, 2.95, 3.84 [20:21:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.01, 7.77, 7.29 [20:23:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.57, 7.78, 7.36 [20:24:27] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.39, 7.16, 6.70 [20:28:28] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.43, 2.87, 3.34 [20:34:06] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.19, 6.52, 6.64 [20:37:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.74, 7.54, 7.20 [20:37:59] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.44, 6.99, 6.80 [20:39:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 
5.70, 6.88, 7.00 [20:39:54] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.42, 6.57, 6.68 [20:43:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.11, 8.03, 7.42 [20:46:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.15, 9.90, 10.20 [20:47:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.84, 7.06, 7.18 [20:53:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.53, 6.22, 6.78 [21:01:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.85, 6.75, 6.83 [21:05:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.16, 7.24, 6.97 [21:07:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.66, 6.70, 6.81 [21:09:24] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.07, 10.45, 10.09 [21:11:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.21, 7.63, 7.17 [21:20:34] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 8.93, 7.73, 6.96 [21:21:02] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.95, 10.13, 10.13 [21:22:29] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 4.88, 6.77, 6.71 [21:23:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.72, 7.71, 7.62 [21:24:04] RECOVERY - ewgf.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'ewgf.wiki' will expire on Wed 20 Dec 2023 22:59:05 GMT +0000. 
[21:24:57] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.02, 10.62, 10.31 [21:26:53] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.30, 11.20, 10.56 [21:28:50] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.77, 11.32, 10.68 [21:34:39] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.02, 11.65, 11.01 [21:36:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.44, 11.28, 10.96 [21:37:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.67, 7.21, 7.27 [21:38:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.15, 11.54, 11.09 [21:39:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.29, 6.75, 7.09 [21:40:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.94, 11.59, 11.16 [21:44:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.52, 11.77, 11.30 [21:47:19] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.84, 7.54, 7.25 [21:49:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.97, 7.69, 7.35 [21:51:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.02, 8.27, 7.64 [21:52:53] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.82, 3.35, 2.81 [21:53:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.56, 7.69, 7.51 [21:54:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.46, 11.95, 11.73 [21:55:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.70, 8.13, 7.69 [21:56:02] PROBLEM - ewgf.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address ewgf.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket [21:56:38] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.42, 12.05, 11.79 [21:56:49] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 3.17, 3.37, 2.97 [21:57:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.41, 7.94, 7.68 [22:00:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.88, 11.64, 11.71 [22:13:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.13, 7.66, 7.21 [22:15:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.74, 7.12, 7.06 [22:22:05] [miraheze/mw-config] paladox pushed 1 commit to paladox-patch-4 [+0/-0/±1] https://github.com/miraheze/mw-config/commit/75cebc387b49 [22:22:07] [miraheze/mw-config] paladox 75cebc3 - Set wgExternalLinksSchemaMigrationStage to READ_NEW in addition to still writing to both tables [22:22:08] [mw-config] paladox created branch paladox-patch-4 - https://github.com/miraheze/mw-config [22:22:11] [mw-config] paladox opened pull request #5376: Set wgExternalLinksSchemaMigrationStage to READ_NEW in addition to still writing to both tables - https://github.com/miraheze/mw-config/pull/5376 [22:22:18] [mw-config] paladox closed pull 
request #5376: Set wgExternalLinksSchemaMigrationStage to READ_NEW in addition to still writing to both tables - https://github.com/miraheze/mw-config/pull/5376 [22:22:20] [miraheze/mw-config] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/mw-config/compare/7a455c17a16b...00cb143f5051 [22:22:21] [miraheze/mw-config] paladox 00cb143 - Set wgExternalLinksSchemaMigrationStage to READ_NEW in addition to still writing to both tables (#5376) [22:22:24] [miraheze/mw-config] paladox deleted branch paladox-patch-4 [22:22:27] [mw-config] paladox deleted branch paladox-patch-4 - https://github.com/miraheze/mw-config [22:22:36] !log [paladox@mwtask141] starting deploy of {'pull': 'config', 'config': True} to all [22:22:42] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:22:49] !log [paladox@mwtask141] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 13s [22:22:51] !log [paladox@mwtask141] starting deploy of {'config': True} to all [22:22:54] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:22:58] !log [paladox@mwtask141] finished deploy of {'config': True} to all - SUCCESS in 7s [22:23:00] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:23:05] miraheze/mw-config - paladox the build passed. [22:23:05] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:23:15] miraheze/mw-config - paladox the build passed. [22:23:19] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.49, 6.17, 6.70 [22:26:35] !log [@test131] starting deploy of {'config': True} to all [22:26:36] !log [@test131] finished deploy of {'config': True} to all - SUCCESS in 0s [22:26:39] RECOVERY - ns1 NTP time on ns1 is OK: NTP OK: Offset 0.008214890957 secs [22:26:40] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:26:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:30:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 9.34, 9.53, 10.20 [22:45:43] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query. [22:56:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.37, 9.10, 7.58 [22:57:38] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 5.93, 7.66, 6.44 [23:00:07] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.69, 7.78, 7.37 [23:03:38] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 3.70, 5.63, 6.02 [23:10:38] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.42, 10.54, 10.07 [23:15:43] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: NoNameservers: All nameservers failed to answer the query pt.eu.org. IN NS: Server 2606:4700:4700::1111 UDP port 53 answered SERVFAIL [23:20:38] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.95, 9.82, 10.01 [23:25:05] PROBLEM - ewgf.wiki - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - ewgf.wiki All nameservers failed to answer the query. 
[23:31:18] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.48, 6.24, 6.68 [23:34:21] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 11.23, 10.45, 10.18 [23:36:18] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: CRITICAL - load average: 12.06, 10.87, 10.36 [23:38:15] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.07, 10.45, 10.26 [23:42:07] RECOVERY - mwtask141 Current Load on mwtask141 is OK: OK - load average: 8.97, 9.89, 10.10 [23:55:05] PROBLEM - ewgf.wiki - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for ewgf.wiki could not be found [23:57:10] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 3.44, 3.05, 2.89 [23:57:45] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: WARNING - load average: 10.15, 10.58, 10.33 [23:59:08] RECOVERY - graylog131 Current Load on graylog131 is OK: LOAD OK - total load average: 2.31, 2.86, 2.84