[00:00:41] [puppet] paladox closed pull request #3572: jobrunner-hi: tune real/time for jobs - https://github.com/miraheze/puppet/pull/3572
[00:00:42] [miraheze/puppet] paladox pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/puppet/compare/89622b23daf1...a33d2f99b39d
[00:00:45] [miraheze/puppet] paladox a33d2f9 - jobrunner-hi: tune real/time for jobs (#3572)
[00:00:46] [miraheze/puppet] paladox deleted branch paladox-patch-15
[00:00:48] [puppet] paladox deleted branch paladox-patch-15 - https://github.com/miraheze/puppet
[00:03:19] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 17.04, 8.41, 3.79
[00:03:45] RECOVERY - mwtask141 Current Load on mwtask141 is OK: LOAD OK - total load average: 5.53, 7.93, 9.71
[00:05:13] !log [void@phab121] Deleted spam account
[00:05:30] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[00:06:01] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 5.92, 3.60, 2.22
[00:08:03] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 3.64, 3.52, 2.36
[00:12:01] RECOVERY - os131 Current Load on os131 is OK: LOAD OK - total load average: 3.34, 3.36, 2.55
[00:15:27] PROBLEM - matomo121 Current Load on matomo121 is CRITICAL: LOAD CRITICAL - total load average: 4.81, 3.58, 2.52
[00:17:19] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 2.89, 6.46, 6.84
[00:17:22] PROBLEM - matomo121 Current Load on matomo121 is WARNING: LOAD WARNING - total load average: 3.87, 3.66, 2.67
[00:19:19] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 8.93, 7.43, 7.15
[00:21:19] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 3.33, 5.76, 6.56
[00:24:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.42, 7.40, 6.67
[00:25:02] RECOVERY - matomo121 Current Load on matomo121 is OK: LOAD OK - total load average: 2.40, 3.12, 2.85
[00:26:13] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 5.77, 4.05, 3.09
[00:26:15] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.77, 6.96, 6.59
[00:30:02] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 2.41, 3.75, 3.23
[00:30:11] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.77, 7.59, 6.89
[00:32:00] RECOVERY - os131 Current Load on os131 is OK: LOAD OK - total load average: 1.99, 3.22, 3.10
[00:32:09] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.64, 6.97, 6.77
[00:34:06] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.58, 6.56, 6.64
[00:38:38] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 4.71, 3.71, 2.85
[00:40:35] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 2.76, 3.45, 2.87
[00:42:35] RECOVERY - os141 Current Load on os141 is OK: LOAD OK - total load average: 2.34, 3.12, 2.82
[01:43:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.21, 7.01, 6.63
[01:45:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.42, 6.73, 6.57
[01:49:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.22, 7.44, 6.90
[01:53:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.44, 6.94, 6.82
[01:57:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.42, 6.08, 6.53
[02:16:19] PROBLEM - puppetdb121 Puppet on puppetdb121 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[puppetdb]
[02:22:39] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.47, 6.92, 6.68
[02:24:37] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.68, 6.49, 6.55
[03:01:15] RECOVERY - db142 Backups SQL on db142 is OK: FILE_AGE OK: /var/log/sql-backup.log is 74 seconds old and 0 bytes
[03:02:21] PROBLEM - mon141 Backups Grafana on mon141 is WARNING: FILE_AGE WARNING: /var/log/grafana-backup.log is 864138 seconds old and 785 bytes
[03:08:19] RECOVERY - puppetdb121 Puppet on puppetdb121 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[03:11:07] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 6.84, 6.46, 6.01
[03:13:05] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 9.21, 6.97, 6.22
[03:15:02] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.79, 6.31, 6.06
[03:53:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.98, 6.63, 6.20
[03:55:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.45, 6.12, 6.06
[04:28:06] PROBLEM - db142 Disk Space on db142 is WARNING: DISK WARNING - free space: / 35678 MB (10% inode=97%);
[05:06:30] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.46, 7.06, 6.39
[05:08:28] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.75, 6.66, 6.30
[05:22:07] RECOVERY - db142 Disk Space on db142 is OK: DISK OK - free space: / 38944 MB (11% inode=97%);
[06:11:18] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 5.00, 3.20, 1.77
[06:11:57] PROBLEM - cloud10 Puppet on cloud10 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[ulogd2]
[06:15:17] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 2.47, 3.53, 2.28
[06:16:19] PROBLEM - puppetdb121 Puppet on puppetdb121 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[puppetdb]
[06:17:15] RECOVERY - os141 Current Load on os141 is OK: LOAD OK - total load average: 0.88, 2.60, 2.09
[06:17:38] PROBLEM - db112 Disk Space on db112 is CRITICAL: DISK CRITICAL - free space: / 6646 MB (5% inode=99%);
[06:28:09] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 10.63, 9.37, 5.57
[06:30:04] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 3.85, 7.23, 5.23
[06:33:53] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 4.20, 6.49, 5.46
[06:39:57] RECOVERY - cloud10 Puppet on cloud10 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[06:41:38] PROBLEM - db112 Disk Space on db112 is WARNING: DISK WARNING - free space: / 9027 MB (6% inode=99%);
[06:43:28] PROBLEM - db112 Current Load on db112 is CRITICAL: CRITICAL - load average: 15.16, 10.45, 7.09
[06:43:38] PROBLEM - db112 Disk Space on db112 is CRITICAL: DISK CRITICAL - free space: / 7808 MB (5% inode=99%);
[06:57:38] PROBLEM - db112 Disk Space on db112 is WARNING: DISK WARNING - free space: / 9882 MB (7% inode=99%);
[07:01:45] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: LOAD WARNING - total load average: 11.12, 9.86, 9.33
[07:05:45] PROBLEM - mwtask141 Current Load on mwtask141 is CRITICAL: LOAD CRITICAL - total load average: 12.73, 11.15, 9.96
[07:07:19] PROBLEM - db112 Current Load on db112 is WARNING: WARNING - load average: 0.31, 3.41, 7.28
[07:07:45] PROBLEM - mwtask141 Current Load on mwtask141 is WARNING: LOAD WARNING - total load average: 11.41, 11.19, 10.12
[07:08:19] RECOVERY - puppetdb121 Puppet on puppetdb121 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[07:09:19] RECOVERY - db112 Current Load on db112 is OK: OK - load average: 1.74, 2.65, 6.51
[07:19:45] RECOVERY - mwtask141 Current Load on mwtask141 is OK: LOAD OK - total load average: 9.72, 10.05, 10.11
[08:24:33] PROBLEM - cloud14 Puppet on cloud14 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[ulogd2]
[08:25:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.31, 7.34, 6.65
[08:27:16] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.09, 6.80, 6.53
[08:45:22] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query.
[08:52:33] RECOVERY - cloud14 Puppet on cloud14 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[09:12:15] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.96, 6.54, 6.25
[09:14:12] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.89, 6.24, 6.17
[09:15:22] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is WARNING: SSL WARNING - rDNS OK but records conflict. {'NS': ['ns.ankh.fr.eu.org.', 'ns1.eu.org.', 'ns1.eriomem.net.'], 'CNAME': 'bouncingwiki.miraheze.org.'}
[09:23:58] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.98, 7.40, 6.73
[09:27:54] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.19, 6.52, 6.54
[10:14:55] PROBLEM - wiki.tmyt105.leyhp.com - reverse DNS on sslhost is WARNING: LifetimeTimeout: The resolution lifetime expired after 5.406 seconds: Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.
[10:16:19] PROBLEM - puppetdb121 Puppet on puppetdb121 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[puppetdb]
[10:21:38] PROBLEM - ping6 on cp35 is CRITICAL: PING CRITICAL - Packet loss = 28%, RTA = 90.60 ms
[10:21:43] PROBLEM - ping6 on cp24 is CRITICAL: PING CRITICAL - Packet loss = 44%, RTA = 8.62 ms
[10:25:26] PROBLEM - ping6 on cp25 is CRITICAL: PING CRITICAL - Packet loss = 70%, RTA = 8.43 ms
[10:27:26] RECOVERY - ping6 on cp25 is OK: PING OK - Packet loss = 0%, RTA = 10.30 ms
[10:27:52] RECOVERY - ping6 on cp24 is OK: PING OK - Packet loss = 0%, RTA = 10.08 ms
[10:27:59] PROBLEM - wiki.nmc.games - LetsEncrypt on sslhost is CRITICAL: connect to address wiki.nmc.games and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[10:31:44] RECOVERY - ping6 on cp35 is OK: PING OK - Packet loss = 0%, RTA = 90.34 ms
[10:32:05] PROBLEM - ping6 on cp24 is CRITICAL: PING CRITICAL - Packet loss = 37%, RTA = 8.60 ms
[10:32:11] PROBLEM - ping6 on cp34 is CRITICAL: PING CRITICAL - Packet loss = 60%, RTA = 82.59 ms
[10:33:43] PROBLEM - ping6 on cp25 is CRITICAL: PING CRITICAL - Packet loss = 37%, RTA = 8.60 ms
[10:33:53] PROBLEM - gogigantic.wiki - reverse DNS on sslhost is WARNING: LifetimeTimeout: The resolution lifetime expired after 5.403 seconds: Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.
[10:35:55] PROBLEM - ping6 on cp35 is CRITICAL: PING CRITICAL - Packet loss = 28%, RTA = 83.79 ms
[10:40:11] PROBLEM - wiki.wikimedia.cat - reverse DNS on sslhost is WARNING: LifetimeTimeout: The resolution lifetime expired after 5.409 seconds: Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.
[10:40:28] PROBLEM - vedopedia.witches-empire.com - LetsEncrypt on sslhost is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[10:40:35] RECOVERY - ping6 on cp34 is OK: PING OK - Packet loss = 0%, RTA = 82.78 ms
[10:40:50] PROBLEM - wiki.rsf.world - reverse DNS on sslhost is WARNING: LifetimeTimeout: The resolution lifetime expired after 5.404 seconds: Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.; Server 2606:4700:4700::1111 UDP port 53 answered The DNS operation timed out.
[10:41:55] RECOVERY - ping6 on cp35 is OK: PING OK - Packet loss = 0%, RTA = 83.91 ms
[10:43:55] RECOVERY - ping6 on cp25 is OK: PING OK - Packet loss = 0%, RTA = 9.15 ms
[10:44:10] RECOVERY - wiki.tmyt105.leyhp.com - reverse DNS on sslhost is OK: SSL OK - wiki.tmyt105.leyhp.com reverse DNS resolves to cp25.miraheze.org - CNAME OK
[10:48:19] RECOVERY - ping6 on cp24 is OK: PING OK - Packet loss = 0%, RTA = 2.84 ms
[10:57:56] RECOVERY - wiki.nmc.games - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.nmc.games' will expire on Mon 04 Mar 2024 13:20:34 GMT +0000.
[11:03:47] RECOVERY - gogigantic.wiki - reverse DNS on sslhost is OK: SSL OK - gogigantic.wiki reverse DNS resolves to cp25.miraheze.org - CNAME OK
[11:08:19] RECOVERY - puppetdb121 Puppet on puppetdb121 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures
[11:09:05] RECOVERY - wiki.wikimedia.cat - reverse DNS on sslhost is OK: SSL OK - wiki.wikimedia.cat reverse DNS resolves to cp24.miraheze.org - CNAME OK
[11:09:30] RECOVERY - vedopedia.witches-empire.com - LetsEncrypt on sslhost is OK: OK - Certificate 'vedopedia.witches-empire.com' will expire on Fri 01 Mar 2024 18:20:13 GMT +0000.
[11:10:50] RECOVERY - wiki.rsf.world - reverse DNS on sslhost is OK: SSL OK - wiki.rsf.world reverse DNS resolves to cp24.miraheze.org - CNAME OK
[11:24:37] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.82, 6.63, 6.19
[11:26:35] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.90, 6.97, 6.34
[11:28:33] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.41, 7.09, 6.46
[11:34:26] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.02, 6.17, 6.30
[12:12:48] [miraheze/ImportDump] translatewiki pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ImportDump/compare/72db7f88adb5...b3d80cbf67af
[12:12:51] [miraheze/ImportDump] translatewiki b3d80cb - Localisation updates from https://translatewiki.net.
[12:12:52] [miraheze/CreateWiki] translatewiki pushed 1 commit to master [+0/-0/±2] https://github.com/miraheze/CreateWiki/compare/0b930ce55739...751458780b61
[12:12:55] [miraheze/CreateWiki] translatewiki 7514587 - Localisation updates from https://translatewiki.net.
[12:12:58] [miraheze/IncidentReporting] translatewiki pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/IncidentReporting/compare/73b928ed404e...96be6dca5d5d
[12:12:59] [miraheze/IncidentReporting] translatewiki 96be6dc - Localisation updates from https://translatewiki.net.
[12:13:00] [miraheze/MirahezeMagic] translatewiki pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/MirahezeMagic/compare/2f4bd04ed69a...0604a54a61b7
[12:13:02] [miraheze/MirahezeMagic] translatewiki 0604a54 - Localisation updates from https://translatewiki.net.
[12:13:05] [miraheze/DataDump] translatewiki pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/DataDump/compare/a77bbdb1cfcf...b28c4bb684c6
[12:13:08] [miraheze/DataDump] translatewiki b28c4bb - Localisation updates from https://translatewiki.net.
[12:13:10] [miraheze/SpriteSheet] translatewiki pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/SpriteSheet/compare/44d95a0902c9...dcab09786a2f
[12:13:13] [miraheze/SpriteSheet] translatewiki dcab097 - Localisation updates from https://translatewiki.net.
[12:13:14] [miraheze/ManageWiki] translatewiki pushed 1 commit to master [+0/-0/±2] https://github.com/miraheze/ManageWiki/compare/6c0806ceb2a3...4600541d8ea6
[12:13:16] [miraheze/ManageWiki] translatewiki 4600541 - Localisation updates from https://translatewiki.net.
[12:19:17] miraheze/DataDump - translatewiki the build passed.
[12:19:48] miraheze/IncidentReporting - translatewiki the build passed.
[12:23:08] miraheze/SpriteSheet - translatewiki the build passed.
[12:23:22] miraheze/CreateWiki - translatewiki the build passed.
[12:23:36] miraheze/ImportDump - translatewiki the build passed.
[12:23:47] miraheze/MirahezeMagic - translatewiki the build has errored.
[12:26:11] miraheze/ManageWiki - translatewiki the build passed.
[13:04:02] PROBLEM - kalons-reverie.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'kalons-reverie.com' expires in 15 day(s) (Sat 06 Jan 2024 12:47:56 GMT +0000).
[13:04:17] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/c112d1fd888f...db1ce6f601b9
[13:04:20] [miraheze/ssl] MirahezeSSLBot db1ce6f - Bot: Update SSL cert for kalons-reverie.com
[13:33:03] RECOVERY - kalons-reverie.com - LetsEncrypt on sslhost is OK: OK - Certificate 'kalons-reverie.com' will expire on Wed 20 Mar 2024 12:04:08 GMT +0000.
[13:47:40] PROBLEM - wows.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'wows.wiki' expires in 8 day(s) (Sat 30 Dec 2023 06:14:45 GMT +0000).
[14:13:54] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.22, 7.30, 6.64
[14:15:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.11, 6.83, 6.55
[14:17:40] PROBLEM - wows.wiki - LetsEncrypt on sslhost is CRITICAL: connect to address wows.wiki and port 443: Network is unreachableHTTP CRITICAL - Unable to open TCP socket
[14:17:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.89, 6.71, 6.55
[14:21:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.28, 7.34, 6.90
[14:22:28] PROBLEM - www.hebammenwiki.de - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'www.hebammenwiki.de' expires in 15 day(s) (Sat 06 Jan 2024 14:04:15 GMT +0000).
[14:22:48] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/db1ce6f601b9...c023d7ab8338
[14:22:51] [miraheze/ssl] MirahezeSSLBot c023d7a - Bot: Update SSL cert for www.hebammenwiki.de
[14:31:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 10.99, 8.16, 7.28
[14:33:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.77, 7.63, 7.17
[14:35:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 13.65, 8.76, 7.57
[14:39:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.26, 7.28, 7.23
[14:43:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.24, 7.49, 7.31
[14:45:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.47, 6.89, 7.09
[14:49:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.24, 7.10, 7.10
[14:51:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.15, 6.40, 6.84
[14:51:55] RECOVERY - www.hebammenwiki.de - LetsEncrypt on sslhost is OK: OK - Certificate 'www.hebammenwiki.de' will expire on Wed 20 Mar 2024 13:22:39 GMT +0000.
[14:52:33] PROBLEM - bushcraftwiki.com - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'bushcraftwiki.com' expires in 15 day(s) (Sat 06 Jan 2024 14:21:18 GMT +0000).
[14:52:47] [miraheze/ssl] MirahezeSSLBot pushed 1 commit to master [+0/-0/±1] https://github.com/miraheze/ssl/compare/c023d7ab8338...234d38ac7b36
[14:52:50] [miraheze/ssl] MirahezeSSLBot 234d38a - Bot: Update SSL cert for bushcraftwiki.com
[14:53:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.57, 6.06, 6.66
[15:05:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.26, 6.42, 6.47
[15:11:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.46, 6.66, 6.55
[15:15:53] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.04, 7.09, 6.72
[15:17:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.86, 7.14, 6.78
[15:19:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.11, 6.60, 6.62
[15:51:37] RECOVERY - bushcraftwiki.com - LetsEncrypt on sslhost is OK: OK - Certificate 'bushcraftwiki.com' will expire on Wed 20 Mar 2024 13:52:40 GMT +0000.
[16:12:30] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.19, 6.77, 6.18
[16:14:28] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.24, 6.26, 6.07
[16:22:16] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.59, 6.91, 6.47
[16:24:13] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.11, 6.27, 6.29
[16:29:41] PROBLEM - matomo121 Current Load on matomo121 is WARNING: LOAD WARNING - total load average: 3.51, 3.43, 2.96
[16:31:36] RECOVERY - matomo121 Current Load on matomo121 is OK: LOAD OK - total load average: 3.06, 3.37, 3.00
[17:00:22] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.85, 7.06, 6.60
[17:02:20] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.17, 6.58, 6.47
[17:13:03] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.67, 6.77, 6.47
[17:16:59] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.96, 6.45, 6.42
[17:24:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.02, 6.79, 6.58
[17:28:47] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.98, 6.57, 6.57
[17:32:15] PROBLEM - swiftobject101 Current Load on swiftobject101 is WARNING: WARNING - load average: 7.58, 6.13, 5.83
[17:34:13] PROBLEM - swiftobject101 Current Load on swiftobject101 is CRITICAL: CRITICAL - load average: 10.08, 7.32, 6.29
[17:36:10] RECOVERY - swiftobject101 Current Load on swiftobject101 is OK: OK - load average: 5.21, 6.69, 6.20
[18:12:44] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.20, 7.15, 6.68
[19:01:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.88, 7.69, 6.60
[19:03:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.47, 7.47, 6.65
[19:07:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.28, 6.51, 6.41
[19:19:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 14.92, 8.87, 7.37
[19:21:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.90, 7.70, 7.11
[19:27:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.37, 6.41, 6.73
[19:48:39] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.82, 7.29, 6.52
[19:50:37] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.92, 7.15, 6.56
[19:54:32] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.16, 6.33, 6.38
[20:26:44] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.42, 7.15, 6.62
[20:28:42] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.55, 6.93, 6.60
[20:32:37] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.11, 6.64, 6.58
[20:40:26] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.89, 6.45, 6.44
[20:42:24] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.29, 6.31, 6.38
[20:55:04] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.59, 7.36, 6.74
[20:58:59] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.89, 7.03, 6.74
[20:59:59] RECOVERY - ns1 NTP time on ns1 is OK: NTP OK: Offset 0.006885021925 secs
[21:00:57] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.28, 6.70, 6.65
[21:18:42] [miraheze/mediawiki] paladox deleted branch REL1_39
[21:18:43] [mediawiki] paladox closed pull request #15526: Install RealMe - https://github.com/miraheze/mediawiki/pull/15526
[21:18:46] [mediawiki] paladox deleted branch REL1_39 - https://github.com/miraheze/mediawiki
[21:18:49] [mediawiki] paladox closed pull request #15813: Use InnoDB for search index - https://github.com/miraheze/mediawiki/pull/15813
[21:19:58] [miraheze/mediawiki] paladox pushed 286 commits to REL1_40 [+31/-0/±1234] https://github.com/miraheze/mediawiki/compare/c4fdf78ab7c0...5b416a954c56
[21:20:00] [miraheze/mediawiki] reedy ce9d183 - Start 1.40.2
[21:20:02] [miraheze/mediawiki] jdforrester 3a38a9e - Update git submodules
[21:20:03] [miraheze/mediawiki] translatewiki 8d0e978 - Update git submodules
[21:20:04] [miraheze/mediawiki] ... and 283 more commits.
[21:21:25] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.54, 8.44, 7.34
[21:25:20] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.82, 7.32, 7.14
[21:27:18] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 8.40, 7.74, 7.31
[21:33:11] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.70, 7.47, 7.41
[21:41:16] PROBLEM - mwtask141 Puppet on mwtask141 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Exec[vendor_psysh_composer]
[21:49:14] !log [paladox@mwtask141] starting deploy of {'world': True, 'l10n': True} to all
[21:49:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[21:50:48] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 9.88, 7.47, 7.15
[21:52:46] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 5.46, 6.55, 6.85
[21:53:22] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 8.54, 4.25, 2.35
[21:53:25] PROBLEM - mwtask141 MediaWiki Rendering on mwtask141 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[21:54:44] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 4.65, 5.89, 6.57
[21:55:26] RECOVERY - mwtask141 MediaWiki Rendering on mwtask141 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.957 second response time
[21:55:37] [miraheze/mediawiki] paladox pushed 34 commits to REL1_41 [+6/-0/±262] https://github.com/miraheze/mediawiki/compare/b7a1e08bd5c0...11db568ab522
[21:55:39] [miraheze/mediawiki] paladox ee537bf - Update git submodules
[21:55:42] [miraheze/mediawiki] paladox a49c86b - thumb: Fix "PHP Deprecated: strlen(): Passing null to parameter"
[21:55:43] [miraheze/mediawiki] WinstonSung 3257321 - Maintenance: Fix RebuildTextIndex
[21:55:45] [miraheze/mediawiki] ... and 31 more commits.
[21:56:39] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: LOAD CRITICAL - total load average: 5.01, 3.40, 2.33
[21:57:16] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 2.99, 3.73, 2.59
[21:58:54] PROBLEM - studio.niuboss123.com - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - studio.niuboss123.com All nameservers failed to answer the query.
[22:01:10] !log [paladox@test131] starting deploy of {'world': True, 'l10n': True} to all
[22:01:17] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[22:03:29] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.20, 6.90, 6.82
[22:05:16] PROBLEM - os141 Current Load on os141 is CRITICAL: LOAD CRITICAL - total load average: 5.60, 4.17, 3.19
[22:09:22] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.24, 6.69, 6.76
[22:10:03] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 3.52, 2.80, 1.86
[22:12:00] RECOVERY - os131 Current Load on os131 is OK: LOAD OK - total load average: 2.72, 2.64, 1.91
[22:13:45] RECOVERY - mwtask141 Puppet on mwtask141 is OK: OK: Puppet is currently enabled, last run 39 seconds ago with 0 failures
[22:16:41] PROBLEM - graylog131 Current Load on graylog131 is WARNING: LOAD WARNING - total load average: 2.75, 3.82, 3.94
[22:19:15] PROBLEM - os141 Current Load on os141 is WARNING: LOAD WARNING - total load average: 1.50, 3.24, 3.78
[22:20:06] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 7.53, 6.93, 6.77
[22:21:14] !log [paladox@test131] finished deploy of {'world': True, 'l10n': True} to all - SUCCESS in 1203s
[22:21:20] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
[22:25:16] RECOVERY - os141 Current Load on os141 is OK: LOAD OK - total load average: 1.74, 2.61, 3.33
[22:25:59] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.49, 6.70, 6.78
[22:26:37] PROBLEM - graylog131 Current Load on graylog131 is CRITICAL: LOAD CRITICAL - total load average: 7.45, 4.48, 3.92
[22:28:54] PROBLEM - studio.niuboss123.com - reverse DNS on sslhost is WARNING: rDNS WARNING - reverse DNS entry for studio.niuboss123.com could not be found
[22:30:34] PROBLEM - mw131 Current Load on mw131 is CRITICAL: LOAD CRITICAL - total load average: 12.74, 9.22, 6.29
[22:30:58] PROBLEM - mw134 Current Load on mw134 is CRITICAL: LOAD CRITICAL - total load average: 19.30, 11.77, 7.65
[22:31:00] PROBLEM - mw141 Current Load on mw141 is CRITICAL: LOAD CRITICAL - total load average: 15.92, 10.40, 7.18
[22:31:25] PROBLEM - mw133 Current Load on mw133 is CRITICAL: LOAD CRITICAL - total load average: 13.66, 10.54, 6.81
[22:32:23] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 6.89, 3.88, 2.82
[22:32:36] PROBLEM - mw132 Current Load on mw132 is CRITICAL: LOAD CRITICAL - total load average: 16.60, 12.49, 8.51
[22:32:44] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 3 backends are down. mw131 mw133 mw134
[22:32:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.53, 7.06, 6.96
[22:33:24] PROBLEM - mw142 Current Load on mw142 is CRITICAL: LOAD CRITICAL - total load average: 23.46, 12.15, 7.49
[22:34:24] PROBLEM - os131 PowerDNS Recursor on os131 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[22:34:58] RECOVERY - mw141 Current Load on mw141 is OK: LOAD OK - total load average: 6.93, 9.71, 7.72
[22:35:44] PROBLEM - graylog131 SSH on graylog131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:35:46] PROBLEM - graylog131 PowerDNS Recursor on graylog131 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[22:35:47] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 5 backends are down. mw131 mw132 mw133 mw134 mw143
[22:36:05] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 4 backends are down. mw132 mw141 mw134 mw143
[22:36:18] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 5 backends are down. mw132 mw141 mw133 mw134 mw143
[22:37:13] PROBLEM - mw142 Current Load on mw142 is WARNING: LOAD WARNING - total load average: 10.70, 12.00, 8.47
[22:37:50] PROBLEM - mw143 Current Load on mw143 is CRITICAL: LOAD CRITICAL - total load average: 15.36, 10.96, 7.40
[22:37:52] RECOVERY - graylog131 SSH on graylog131 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u1 (protocol 2.0)
[22:37:59] PROBLEM - mw132 MediaWiki Rendering on mw132 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:38:32] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy
[22:38:37] RECOVERY - os131 PowerDNS Recursor on os131 is OK: DNS OK: 3.201 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24
[22:38:44] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 5.27, 6.38, 6.70
[22:38:56] PROBLEM - mw141 Current Load on mw141 is CRITICAL: LOAD CRITICAL - total load average: 14.16, 13.37, 9.79
[22:39:49] RECOVERY - mw143 Current Load on mw143 is OK: LOAD OK - total load average: 6.42, 9.61, 7.39
[22:40:06] RECOVERY - db142 Disk Space on db142 is OK: DISK OK - free space: / 36720 MB (11% inode=97%);
[22:41:13] PROBLEM - mw142 Current Load on mw142 is CRITICAL: LOAD CRITICAL - total load average: 15.73, 12.47, 9.40
[22:42:18] PROBLEM - graylog131 SSH on graylog131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds
[22:42:55] PROBLEM - graylog131 Puppet on graylog131 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[22:43:05] PROBLEM - os131 PowerDNS Recursor on os131 is CRITICAL: CRITICAL - Plugin timed out while executing system call
[22:43:54] PROBLEM - os131 Puppet on os131 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds.
[22:44:06] PROBLEM - db142 Disk Space on db142 is WARNING: DISK WARNING - free space: / 36646 MB (10% inode=97%); [22:44:19] RECOVERY - graylog131 SSH on graylog131 is OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u1 (protocol 2.0) [22:45:06] RECOVERY - os131 PowerDNS Recursor on os131 is OK: DNS OK: 1.867 second response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [22:45:15] PROBLEM - graylog131 HTTPS on graylog131 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10002 milliseconds with 0 bytes received [22:45:21] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 3 backends are down. mw132 mw142 mw143 [22:45:49] PROBLEM - mw143 Current Load on mw143 is WARNING: LOAD WARNING - total load average: 9.17, 10.67, 8.60 [22:46:11] RECOVERY - mw132 MediaWiki Rendering on mw132 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.415 second response time [22:46:15] RECOVERY - os131 Puppet on os131 is OK: OK: Puppet is currently enabled, last run 31 minutes ago with 0 failures [22:47:13] PROBLEM - mw142 Current Load on mw142 is WARNING: LOAD WARNING - total load average: 7.10, 10.48, 9.62 [22:47:49] RECOVERY - mw143 Current Load on mw143 is OK: LOAD OK - total load average: 8.49, 9.85, 8.54 [22:48:05] PROBLEM - graylog131 NTP time on graylog131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [22:48:57] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: HTTP CRITICAL: HTTP/1.1 502 Bad Gateway - 5903 bytes in 0.020 second response time [22:49:13] PROBLEM - mw142 Current Load on mw142 is CRITICAL: LOAD CRITICAL - total load average: 15.10, 12.33, 10.40 [22:50:15] !log [paladox@mwtask141] DEPLOY ABORTED: Canary check failed for publictestwiki.com@mw131.miraheze.org [22:50:19] PROBLEM - mw131 php-fpm on mw131 is CRITICAL: PROCS CRITICAL: 0 processes with command name 'php-fpm8.2' [22:50:25] RECOVERY - graylog131 NTP time on 
graylog131 is OK: NTP OK: Offset -0.001145213842 secs [22:50:45] !log [paladox@mwtask141] starting deploy of {'world': True, 'l10n': True} to all [22:50:59] RECOVERY - graylog131 Puppet on graylog131 is OK: OK: Puppet is currently enabled, last run 47 minutes ago with 0 failures [22:51:06] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:51:09] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy [22:51:24] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [22:51:25] RECOVERY - graylog131 PowerDNS Recursor on graylog131 is OK: DNS OK: 0.315 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [22:51:50] PROBLEM - mw143 Current Load on mw143 is WARNING: LOAD WARNING - total load average: 11.20, 11.46, 9.51 [22:52:00] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 0.53, 3.06, 3.96 [22:52:18] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [22:52:19] RECOVERY - mw131 php-fpm on mw131 is OK: PROCS OK: 25 processes with command name 'php-fpm8.2' [22:53:02] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.583 second response time [22:53:13] PROBLEM - mw142 Current Load on mw142 is WARNING: LOAD WARNING - total load average: 5.95, 11.14, 10.57 [22:53:33] RECOVERY - graylog131 HTTPS on graylog131 is OK: HTTP OK: HTTP/2 200 - 646 bytes in 0.711 second response time [22:53:35] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [22:53:42] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [22:53:49] RECOVERY - mw143 Current Load on mw143 is OK: LOAD OK - total load average: 6.16, 9.28, 8.94 [22:54:35] PROBLEM - mw131 Current Load on mw131 is WARNING: LOAD WARNING - total load average: 9.00, 10.16, 11.77 [22:54:51] PROBLEM - mw141 Current Load on mw141 is WARNING: LOAD 
WARNING - total load average: 5.31, 10.04, 10.84 [22:55:13] RECOVERY - mw142 Current Load on mw142 is OK: LOAD OK - total load average: 6.00, 9.59, 10.07 [22:56:01] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 5.82, 3.93, 4.06 [22:56:34] PROBLEM - mw131 Current Load on mw131 is CRITICAL: LOAD CRITICAL - total load average: 19.71, 12.99, 12.55 [22:56:51] RECOVERY - mw141 Current Load on mw141 is OK: LOAD OK - total load average: 4.02, 8.08, 10.03 [22:57:31] [Grafana] FIRING: Some MediaWiki Appservers are running out of PHP-FPM workers. https://grafana.miraheze.org/d/GtxbP1Xnk?orgId=1 [22:57:52] PROBLEM - os131 PowerDNS Recursor on os131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [22:57:53] PROBLEM - graylog131 PowerDNS Recursor on graylog131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [22:57:54] PROBLEM - graylog131 Puppet on graylog131 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [22:58:55] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 2 backends are down. mw131 mw134 [22:59:28] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 3 backends are down. mw131 mw132 mw134 [22:59:39] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 3 backends are down. mw131 mw132 mw134 [23:01:59] PROBLEM - mem131 Puppet on mem131 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. 
[23:02:17] PROBLEM - mem131 PowerDNS Recursor on mem131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [23:02:27] PROBLEM - mem131 SSH on mem131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:02:30] [Grafana] RESOLVED: PHP-FPM Worker Usage High https://grafana.miraheze.org/d/GtxbP1Xnk?orgId=1 [23:02:35] PROBLEM - mem131 Current Load on mem131 is CRITICAL: CRITICAL - load average: 9.35, 5.54, 2.84 [23:03:38] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [23:04:16] RECOVERY - graylog131 PowerDNS Recursor on graylog131 is OK: DNS OK: 9.276 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [23:04:18] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 1 backends are down. mw132 [23:04:26] RECOVERY - mem131 SSH on mem131 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u2 (protocol 2.0) [23:04:29] RECOVERY - os131 PowerDNS Recursor on os131 is OK: DNS OK: 0.228 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [23:04:30] RECOVERY - mem131 Puppet on mem131 is OK: OK: Puppet is currently enabled, last run 39 minutes ago with 0 failures [23:05:18] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [23:08:24] PROBLEM - mw134 MediaWiki Rendering on mw134 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:08:38] PROBLEM - graylog131 PowerDNS Recursor on graylog131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [23:09:13] PROBLEM - mw142 Current Load on mw142 is WARNING: LOAD WARNING - total load average: 11.32, 9.79, 9.27 [23:09:14] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 4 backends are down. mw131 mw132 mw133 mw134 [23:09:36] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 3 backends are down. 
mw132 mw133 mw134 [23:09:49] PROBLEM - mw143 Current Load on mw143 is WARNING: LOAD WARNING - total load average: 11.46, 9.64, 8.41 [23:10:18] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [23:10:46] PROBLEM - mw141 Current Load on mw141 is WARNING: LOAD WARNING - total load average: 11.45, 10.77, 9.80 [23:11:13] RECOVERY - mw142 Current Load on mw142 is OK: LOAD OK - total load average: 8.06, 9.87, 9.42 [23:11:20] PROBLEM - mem131 Puppet on mem131 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [23:11:35] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [23:11:49] RECOVERY - mw143 Current Load on mw143 is OK: LOAD OK - total load average: 6.02, 8.10, 7.98 [23:14:18] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 1 backends are down. mw131 [23:14:52] RECOVERY - graylog131 PowerDNS Recursor on graylog131 is OK: DNS OK: 1.264 second response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [23:15:04] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [23:18:00] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 2.73, 3.08, 3.98 [23:18:16] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy [23:18:18] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [23:18:23] PROBLEM - mem131 SSH on mem131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:18:45] RECOVERY - mw134 MediaWiki Rendering on mw134 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.486 second response time [23:19:15] PROBLEM - mw131 MediaWiki Rendering on mw131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:20:01] PROBLEM - os131 Current Load on os131 is CRITICAL: LOAD CRITICAL - total load average: 4.11, 3.31, 3.95 [23:20:22] RECOVERY - mem131 SSH on mem131 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u2 (protocol 2.0) [23:20:43] 
RECOVERY - mw141 Current Load on mw141 is OK: LOAD OK - total load average: 8.19, 9.05, 9.47 [23:21:14] RECOVERY - mw131 MediaWiki Rendering on mw131 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.003 second response time [23:21:52] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.16, 6.83, 6.33 [23:22:00] PROBLEM - os131 Current Load on os131 is WARNING: LOAD WARNING - total load average: 1.63, 2.63, 3.62 [23:22:10] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 2 backends are down. mw132 mw133 [23:22:58] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 2 backends are down. mw131 mw133 [23:23:15] PROBLEM - graylog131 PowerDNS Recursor on graylog131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [23:23:52] RECOVERY - swiftobject122 Current Load on swiftobject122 is OK: OK - load average: 6.00, 6.54, 6.29 [23:24:00] RECOVERY - os131 Current Load on os131 is OK: LOAD OK - total load average: 1.29, 2.14, 3.31 [23:24:42] PROBLEM - mw141 Current Load on mw141 is CRITICAL: LOAD CRITICAL - total load average: 12.20, 11.33, 10.36 [23:24:42] PROBLEM - mw143 Current Load on mw143 is CRITICAL: LOAD CRITICAL - total load average: 13.29, 10.52, 8.97 [23:24:46] PROBLEM - mem131 SSH on mem131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:26:41] RECOVERY - mw143 Current Load on mw143 is OK: LOAD OK - total load average: 7.56, 9.62, 8.86 [23:26:41] PROBLEM - mw141 Current Load on mw141 is WARNING: LOAD WARNING - total load average: 8.60, 10.52, 10.21 [23:26:59] PROBLEM - graylog131 SSH on graylog131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:27:04] PROBLEM - mw134 MediaWiki Rendering on mw134 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:27:15] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 8 backends are down. mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki [23:27:28] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 8 backends are down. 
mw131 mw132 mw141 mw142 mw133 mw134 mw143 mediawiki [23:28:41] RECOVERY - mw141 Current Load on mw141 is OK: LOAD OK - total load average: 8.01, 9.76, 9.99 [23:29:50] PROBLEM - mem131 NTP time on mem131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:29:55] RECOVERY - mem131 PowerDNS Recursor on mem131 is OK: DNS OK: 1.055 second response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [23:30:04] RECOVERY - graylog131 Puppet on graylog131 is OK: OK: Puppet is currently enabled, last run 38 minutes ago with 0 failures [23:31:56] RECOVERY - mem131 NTP time on mem131 is OK: NTP OK: Offset -0.0008809864521 secs [23:32:24] PROBLEM - mw142 Current Load on mw142 is CRITICAL: LOAD CRITICAL - total load average: 13.06, 11.68, 10.29 [23:32:25] PROBLEM - mw143 Current Load on mw143 is CRITICAL: LOAD CRITICAL - total load average: 16.04, 11.55, 9.76 [23:32:39] PROBLEM - mw141 Current Load on mw141 is CRITICAL: LOAD CRITICAL - total load average: 15.35, 12.00, 10.80 [23:33:12] RECOVERY - mem131 SSH on mem131 is OK: SSH OK - OpenSSH_8.4p1 Debian-5+deb11u2 (protocol 2.0) [23:33:14] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [23:33:24] RECOVERY - mw134 MediaWiki Rendering on mw134 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 9.959 second response time [23:33:26] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [23:33:50] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy [23:34:18] RECOVERY - mw142 Current Load on mw142 is OK: LOAD OK - total load average: 5.94, 9.71, 9.76 [23:34:20] RECOVERY - mw143 Current Load on mw143 is OK: LOAD OK - total load average: 7.61, 10.04, 9.44 [23:34:38] PROBLEM - mw141 Current Load on mw141 is WARNING: LOAD WARNING - total load average: 6.34, 9.99, 10.23 [23:34:38] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [23:35:25] RECOVERY - graylog131 SSH on graylog131 is 
OK: SSH OK - OpenSSH_9.2p1 Debian-2+deb12u1 (protocol 2.0) [23:36:40] RECOVERY - mw141 Current Load on mw141 is OK: LOAD OK - total load average: 4.55, 8.13, 9.52 [23:36:58] PROBLEM - graylog131 Puppet on graylog131 is CRITICAL: CHECK_NRPE STATE CRITICAL: Socket timeout after 60 seconds. [23:39:50] PROBLEM - cp25 Varnish Backends on cp25 is CRITICAL: 4 backends are down. mw132 mw141 mw133 mw134 [23:39:52] PROBLEM - graylog131 SSH on graylog131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:40:11] PROBLEM - cp24 Varnish Backends on cp24 is CRITICAL: 2 backends are down. mw141 mw134 [23:40:23] PROBLEM - cp35 Varnish Backends on cp35 is CRITICAL: 1 backends are down. mw141 [23:40:24] PROBLEM - mw133 MediaWiki Rendering on mw133 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:40:38] PROBLEM - mw141 Current Load on mw141 is WARNING: LOAD WARNING - total load average: 9.99, 10.72, 10.35 [23:41:56] PROBLEM - mem131 PowerDNS Recursor on mem131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [23:42:26] RECOVERY - mw133 MediaWiki Rendering on mw133 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.121 second response time [23:42:28] RECOVERY - graylog131 PowerDNS Recursor on graylog131 is OK: DNS OK: 8.818 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24 [23:42:37] RECOVERY - mw141 Current Load on mw141 is OK: LOAD OK - total load average: 5.50, 8.69, 9.64 [23:45:22] PROBLEM - wiki.gab.pt.eu.org - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - wiki.gab.pt.eu.org All nameservers failed to answer the query. [23:46:56] PROBLEM - graylog131 PowerDNS Recursor on graylog131 is CRITICAL: CRITICAL - Plugin timed out while executing system call [23:52:39] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.81, 6.65, 6.18 [23:53:25] PROBLEM - cp34 Varnish Backends on cp34 is CRITICAL: 2 backends are down. 
mw131 mw133 [23:53:50] RECOVERY - cp25 Varnish Backends on cp25 is OK: All 15 backends are healthy [23:54:07] RECOVERY - cp24 Varnish Backends on cp24 is OK: All 15 backends are healthy [23:54:30] [Grafana] FIRING: Some MediaWiki Appservers are running out of PHP-FPM workers. https://grafana.miraheze.org/d/GtxbP1Xnk?orgId=1 [23:56:02] PROBLEM - mem131 SSH on mem131 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [23:56:35] PROBLEM - swiftobject122 Current Load on swiftobject122 is CRITICAL: CRITICAL - load average: 14.07, 8.37, 6.84 [23:57:18] RECOVERY - cp34 Varnish Backends on cp34 is OK: All 15 backends are healthy [23:58:17] RECOVERY - cp35 Varnish Backends on cp35 is OK: All 15 backends are healthy [23:58:32] PROBLEM - swiftobject122 Current Load on swiftobject122 is WARNING: WARNING - load average: 6.59, 7.38, 6.66 [23:59:54] RECOVERY - graylog131 PowerDNS Recursor on graylog131 is OK: DNS OK: 8.268 seconds response time. miraheze.org returns 2001:41d0:801:2000::3a18,2001:41d0:801:2000::5d68,51.195.201.140,51.89.139.24