[00:00:28] [02CreateWiki] 07Universal-Omega pushed 1 new commit to 03config 13https://github.com/miraheze/CreateWiki/commit/23b854fee88e708d9055a809062ce9a72cb6f686 [00:00:28] 02CreateWiki/03config 07CosmicAlpha 0323b854f Update [00:01:17] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [00:01:21] [02CreateWiki] 07Universal-Omega pushed 1 new commit to 03config 13https://github.com/miraheze/CreateWiki/commit/6038f0cd4175b91860acaf55f55b7501d3496e27 [00:01:21] 02CreateWiki/03config 07CosmicAlpha 036038f0c Use [00:03:16] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.29 ms [00:05:05] miraheze/CreateWiki - Universal-Omega the build passed. [00:07:36] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 80%, RTA = 31.40 ms [00:11:41] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.28 ms [00:13:00] miraheze/CreateWiki - Universal-Omega the build passed. [00:14:51] RECOVERY - cp37 Disk Space on cp37 is OK: DISK OK - free space: / 51690MiB (11% inode=99%); [00:35:04] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 28%, RTA = 31.27 ms [00:39:07] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.24 ms [01:18:51] PROBLEM - cp37 Disk Space on cp37 is WARNING: DISK WARNING - free space: / 49807MiB (10% inode=99%); [01:28:47] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [01:36:04] PROBLEM - aryavartpedia.online - LetsEncrypt on sslhost is CRITICAL: No address associated with hostnameHTTP CRITICAL - Unable to open TCP socket [01:36:59] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.27 ms [01:41:18] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 50%, RTA = 31.25 ms [01:47:26] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.39 ms [01:52:11] PROBLEM - mwtask171 Current Load on mwtask171 is WARNING: LOAD WARNING - total load average: 20.54, 15.22, 7.59 [01:54:11] PROBLEM - mwtask171 Current Load on mwtask171 is CRITICAL: LOAD CRITICAL - total load average: 24.36, 17.90, 9.47 [01:55:39] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 28%, RTA = 31.38 ms [02:03:54] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.30 ms [02:14:11] RECOVERY - mwtask171 Current Load on mwtask171 is OK: LOAD OK - total load average: 2.70, 16.44, 18.87 [02:52:02] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [02:55:35] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 32.33 ms [02:58:35] RECOVERY - mattermost1 APT on mattermost1 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:58:40] PROBLEM - db172 APT on db172 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected. [02:59:03] RECOVERY - cp36 APT on cp36 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:07] PROBLEM - db151 APT on db151 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected. [02:59:12] RECOVERY - eventgate181 APT on eventgate181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:15] RECOVERY - bast181 APT on bast181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
[02:59:15] PROBLEM - db161 APT on db161 is WARNING: APT WARNING: 0 packages available for upgrade (0 critical updates). warnings detected, errors detected. [02:59:15] RECOVERY - cp37 APT on cp37 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:32] RECOVERY - changeprop151 APT on changeprop151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:40] RECOVERY - cloud18 APT on cloud18 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:46] RECOVERY - mem161 APT on mem161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:47] RECOVERY - db171 APT on db171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:48] RECOVERY - bast161 APT on bast161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:49] RECOVERY - mw184 APT on mw184 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:51] RECOVERY - mw154 APT on mw154 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:52] RECOVERY - kafka181 APT on kafka181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:53] RECOVERY - mwtask151 APT on mwtask151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [02:59:55] RECOVERY - mwtask171 APT on mwtask171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:06] RECOVERY - mw164 APT on mw164 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:06] RECOVERY - mem151 APT on mem151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:07] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 4714 bytes in 0.076 second response time [03:00:08] RECOVERY - mw153 APT on mw153 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:13] RECOVERY - mw173 APT on mw173 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:16] RECOVERY - mw161 APT on mw161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:21] RECOVERY - db182 APT on db182 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:21] RECOVERY - ldap171 APT on ldap171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:27] RECOVERY - bots171 APT on bots171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:27] RECOVERY - db181 APT on db181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:28] RECOVERY - mw183 APT on mw183 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:29] RECOVERY - matomo151 APT on matomo151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:30] RECOVERY - graylog161 APT on graylog161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:32] RECOVERY - mw152 APT on mw152 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:36] RECOVERY - mon181 APT on mon181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:39] RECOVERY - mw151 APT on mw151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:40] RECOVERY - db172 APT on db172 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:41] RECOVERY - mw174 APT on mw174 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
[03:00:45] RECOVERY - cloud16 APT on cloud16 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:45] RECOVERY - cloud17 APT on cloud17 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:45] RECOVERY - swiftobject171 APT on swiftobject171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:46] RECOVERY - os162 APT on os162 is OK: APT OK: 26 packages available for upgrade (0 critical updates). [03:00:47] RECOVERY - mw172 APT on mw172 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:47] PROBLEM - db182 MariaDB on db182 is CRITICAL: Can't connect to server on 'db182.wikitide.net' (115) [03:00:48] RECOVERY - phorge171 APT on phorge171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:51] PROBLEM - phorge171 issue-tracker.miraheze.org HTTPS on phorge171 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 4279 bytes in 0.026 second response time [03:00:52] RECOVERY - swiftproxy161 APT on swiftproxy161 is OK: APT OK: 28 packages available for upgrade (0 critical updates). [03:00:52] RECOVERY - ns1 APT on ns1 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:55] PROBLEM - db171 MariaDB on db171 is CRITICAL: Can't connect to server on 'db171.wikitide.net' (115) [03:00:55] RECOVERY - mw171 APT on mw171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:57] RECOVERY - mw163 APT on mw163 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:57] RECOVERY - cloud15 APT on cloud15 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:00:59] PROBLEM - mw163 MediaWiki Rendering on mw163 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 4714 bytes in 0.086 second response time [03:01:00] RECOVERY - swiftac171 APT on swiftac171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:02] RECOVERY - mw162 APT on mw162 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:03] RECOVERY - mw182 APT on mw182 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:03] RECOVERY - mwtask161 APT on mwtask161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:04] RECOVERY - db151 APT on db151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:04] PROBLEM - db151 MariaDB Connections on db151 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db151.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connect [03:01:05] n line 66Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db151.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66 RECOVERY - db161 APT on db161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:18] RECOVERY - mw181 APT on mw181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
[03:01:21] RECOVERY - mwtask181 APT on mwtask181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:22] PROBLEM - matomo151 HTTPS on matomo151 is CRITICAL: HTTP CRITICAL: HTTP/2 500 - 426 bytes in 0.037 second response time [03:01:25] PROBLEM - db172 MariaDB Connections on db172 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db172.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connect [03:01:25] n line 66Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db172.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66 RECOVERY - rdb151 APT on rdb151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:37] RECOVERY - swiftobject151 APT on swiftobject151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:37] PROBLEM - db151 MariaDB on db151 is CRITICAL: Can't connect to server on 'db151.wikitide.net' (115) [03:01:44] RECOVERY - puppet181 APT on puppet181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:48] RECOVERY - swiftobject161 APT on swiftobject161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:49] PROBLEM - test151 MediaWiki Rendering on test151 is CRITICAL: HTTP CRITICAL: HTTP/1.1 500 Internal Server Error - 8191 bytes in 0.195 second response time [03:01:55] RECOVERY - reports171 APT on reports171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:01:55] RECOVERY - test151 APT on test151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:02:01] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.168 second response time [03:02:10] RECOVERY - swiftproxy171 APT on swiftproxy171 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:02:13] RECOVERY - prometheus151 APT on prometheus151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:02:14] PROBLEM - bast161 Puppet on bast161 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Package[qemu-guest-agent] [03:02:14] RECOVERY - swiftobject181 APT on swiftobject181 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:02:15] RECOVERY - ns2 APT on ns2 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
[03:02:16] PROBLEM - phorge171 phorge-static.wikitide.net HTTPS on phorge171 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/1.1 500 Internal Server Error [03:02:24] PROBLEM - db172 MariaDB on db172 is CRITICAL: Can't connect to server on 'db172.wikitide.net' (115) [03:02:32] PROBLEM - db182 MariaDB Connections on db182 is UNKNOWN: PHP Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db182.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connect [03:02:32] n line 66Fatal error: Uncaught mysqli_sql_exception: Connection refused in /usr/lib/nagios/plugins/check_mysql_connections.php:66Stack trace:#0 /usr/lib/nagios/plugins/check_mysql_connections.php(66): mysqli_real_connect(Object(mysqli), 'db182.wikitide....', 'icinga', Object(SensitiveParameterValue), NULL, NULL, NULL, true)#1 {main} thrown in /usr/lib/nagios/plugins/check_mysql_connections.php on line 66 RECOVERY - os161 APT on os161 is OK: APT OK: 0 packages available for upgrade (0 critical updates). [03:02:35] RECOVERY - os151 APT on os151 is OK: APT OK: 1 packages available for upgrade (0 critical updates). [03:02:52] RECOVERY - db171 MariaDB on db171 is OK: Uptime: 113 Threads: 87 Questions: 235863 Slow queries: 0 Opens: 5240 Open tables: 5234 Queries per second avg: 2087.283 [03:02:59] RECOVERY - db151 MariaDB Connections on db151 is OK: OK connection usage: 38.6%Current connections: 386 [03:03:24] PROBLEM - db161 Puppet on db161 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 3 minutes ago with 2 failures. 
Failed resources (up to 3 shown): Package[mariadb-server],Package[mariadb-backup] [03:03:35] RECOVERY - db151 MariaDB on db151 is OK: Uptime: 78 Threads: 387 Questions: 2094 Slow queries: 11 Opens: 272 Open tables: 266 Queries per second avg: 26.846 [03:05:19] RECOVERY - db172 MariaDB Connections on db172 is OK: OK connection usage: 0.1%Current connections: 1 [03:05:19] PROBLEM - mw162 HTTPS on mw162 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10002 milliseconds with 0 bytes received [03:05:20] PROBLEM - cp37 HTTPS on cp37 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503 [03:05:23] PROBLEM - mw162 MediaWiki Rendering on mw162 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:23] PROBLEM - mw164 HTTPS on mw164 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [03:05:24] PROBLEM - mw184 HTTPS on mw184 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:05:24] PROBLEM - mw182 MediaWiki Rendering on mw182 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:25] PROBLEM - mw174 MediaWiki Rendering on mw174 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:25] PROBLEM - mw152 MediaWiki Rendering on mw152 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:27] PROBLEM - mw153 MediaWiki Rendering on mw153 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:27] PROBLEM - mw181 MediaWiki Rendering on mw181 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:28] PROBLEM - mw154 HTTPS on mw154 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [03:05:29] PROBLEM - mw171 MediaWiki Rendering on mw171 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:35] PROBLEM - mw151 MediaWiki Rendering on mw151 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:36] PROBLEM - mw183 MediaWiki Rendering on mw183 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:38] PROBLEM - mw173 HTTPS on mw173 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10000 milliseconds with 0 bytes received [03:05:39] PROBLEM - mw164 MediaWiki Rendering on mw164 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:40] PROBLEM - mw183 HTTPS on mw183 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:05:41] PROBLEM - mw161 HTTPS on mw161 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:05:44] PROBLEM - mw184 MediaWiki Rendering on mw184 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:46] PROBLEM - mw161 MediaWiki Rendering on mw161 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:46] PROBLEM - mw154 MediaWiki Rendering on mw154 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:05:47] PROBLEM - db151 Current Load on db151 is CRITICAL: LOAD CRITICAL - total load average: 333.22, 172.69, 68.94 
[03:05:49] RECOVERY - test151 MediaWiki Rendering on test151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.422 second response time [03:06:02] PROBLEM - mw163 HTTPS on mw163 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:06:05] PROBLEM - mw172 MediaWiki Rendering on mw172 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:06:14] PROBLEM - mw153 HTTPS on mw153 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10001 milliseconds with 0 bytes received [03:06:15] RECOVERY - db172 MariaDB on db172 is OK: Uptime: 92 Threads: 1 Questions: 124 Slow queries: 0 Opens: 30 Open tables: 24 Queries per second avg: 1.347 [03:06:15] PROBLEM - mw172 HTTPS on mw172 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10002 milliseconds with 0 bytes received [03:06:16] PROBLEM - mw182 HTTPS on mw182 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10003 milliseconds with 0 bytes received [03:06:16] RECOVERY - phorge171 phorge-static.wikitide.net HTTPS on phorge171 is OK: HTTP OK: Status line output matched "HTTP/1.1 200" - 17669 bytes in 0.028 second response time [03:06:19] PROBLEM - mw173 MediaWiki Rendering on mw173 is CRITICAL: CRITICAL - Socket timeout after 10 seconds [03:06:21] RECOVERY - db182 MariaDB Connections on db182 is OK: OK connection usage: 0.8%Current connections: 8 [03:06:22] PROBLEM - cp36 HTTPS on cp36 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: HTTP/2 503 [03:06:26] PROBLEM - cp37 Varnish Backends on cp37 is CRITICAL: 17 backends are down. mw151 mw152 mw161 mw162 mw171 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw183 mw184 mediawiki [03:06:27] PROBLEM - mw174 HTTPS on mw174 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:06:36] RECOVERY - db182 MariaDB on db182 is OK: Uptime: 78 Threads: 8 Questions: 58817 Slow queries: 0 Opens: 124 Open tables: 117 Queries per second avg: 754.064 [03:06:36] RECOVERY - phorge171 issue-tracker.miraheze.org HTTPS on phorge171 is OK: HTTP OK: HTTP/1.1 200 OK - 19582 bytes in 0.055 second response time [03:06:40] PROBLEM - mw181 HTTPS on mw181 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:06:58] PROBLEM - cp36 Varnish Backends on cp36 is CRITICAL: 14 backends are down. 
mw151 mw152 mw161 mw162 mw172 mw181 mw182 mw153 mw154 mw163 mw164 mw173 mw174 mw184 [03:07:01] PROBLEM - mw152 HTTPS on mw152 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 28 - Operation timed out after 10004 milliseconds with 0 bytes received [03:07:17] RECOVERY - matomo151 HTTPS on matomo151 is OK: HTTP OK: HTTP/2 200 - 553 bytes in 0.210 second response time [03:07:19] RECOVERY - db161 Puppet on db161 is OK: OK: Puppet is currently enabled, last run 21 seconds ago with 0 failures [03:07:19] RECOVERY - mw184 HTTPS on mw184 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.062 second response time [03:07:19] RECOVERY - mw182 MediaWiki Rendering on mw182 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.180 second response time [03:07:20] RECOVERY - mw174 MediaWiki Rendering on mw174 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.172 second response time [03:07:23] RECOVERY - mw153 MediaWiki Rendering on mw153 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.144 second response time [03:07:23] RECOVERY - mw152 MediaWiki Rendering on mw152 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.945 second response time [03:07:23] RECOVERY - mw181 MediaWiki Rendering on mw181 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.178 second response time [03:07:24] RECOVERY - mw154 HTTPS on mw154 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.052 second response time [03:07:34] RECOVERY - mw173 HTTPS on mw173 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.052 second response time [03:07:43] RECOVERY - mw154 MediaWiki Rendering on mw154 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.181 second response time [03:08:01] RECOVERY - mw163 HTTPS on mw163 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.051 second response time [03:08:14] RECOVERY - mw173 MediaWiki Rendering on mw173 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.156 second response time [03:09:15] RECOVERY - mw163 MediaWiki Rendering on mw163 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.620 second response time [03:09:20] RECOVERY - mw162 HTTPS on mw162 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.341 second response time [03:09:30] RECOVERY - mw164 HTTPS on mw164 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.338 second response time [03:09:32] RECOVERY - mw162 MediaWiki Rendering on mw162 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.402 second response time [03:09:33] RECOVERY - mw164 MediaWiki Rendering on mw164 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.537 second response time [03:09:35] RECOVERY - mw151 MediaWiki Rendering on mw151 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 1.770 second response time [03:09:36] RECOVERY - mw171 MediaWiki Rendering on mw171 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.194 second response time [03:09:46] RECOVERY - mw183 MediaWiki Rendering on mw183 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 2.498 second response time [03:09:49] RECOVERY - mw184 MediaWiki Rendering on mw184 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 5.336 second response time [03:09:49] RECOVERY - mw161 HTTPS on mw161 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.054 second response time [03:09:49] RECOVERY - mw183 HTTPS on mw183 is OK: HTTP OK: HTTP/2 404 - Status 
line output matched "HTTP/2 404" - 3951 bytes in 0.054 second response time [03:09:53] RECOVERY - mw161 MediaWiki Rendering on mw161 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.239 second response time [03:09:58] RECOVERY - mw172 MediaWiki Rendering on mw172 is OK: HTTP OK: HTTP/1.1 200 OK - 8191 bytes in 0.190 second response time [03:10:12] RECOVERY - cp36 HTTPS on cp36 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4114 bytes in 0.063 second response time [03:10:19] RECOVERY - cp37 Varnish Backends on cp37 is OK: All 29 backends are healthy [03:10:21] RECOVERY - mw172 HTTPS on mw172 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.053 second response time [03:10:21] RECOVERY - mw182 HTTPS on mw182 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.061 second response time [03:10:22] RECOVERY - mw153 HTTPS on mw153 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.048 second response time [03:10:27] RECOVERY - mw174 HTTPS on mw174 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.053 second response time [03:10:43] RECOVERY - mw181 HTTPS on mw181 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.059 second response time [03:10:52] RECOVERY - mw152 HTTPS on mw152 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 3951 bytes in 0.051 second response time [03:10:58] RECOVERY - cp36 Varnish Backends on cp36 is OK: All 29 backends are healthy [03:11:20] PROBLEM - db161 Current Load on db161 is CRITICAL: LOAD CRITICAL - total load average: 25.80, 40.24, 20.28 [03:11:20] RECOVERY - cp37 HTTPS on cp37 is OK: HTTP OK: HTTP/2 404 - Status line output matched "HTTP/2 404" - 4114 bytes in 0.051 second response time [03:13:27] PROBLEM - os162 Puppet on os162 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 2 minutes ago with 2 failures. Failed resources (up to 3 shown): Package[opensearch],Opensearch_template[graylog-internal] [03:17:26] @agentisai was it backups again [03:17:55] bad regex from my part [03:18:04] specifically, a trailing `|` [03:18:25] instead of just test and mwtask updating packages, db did so as well [03:19:33] PROBLEM - db161 Current Load on db161 is WARNING: LOAD WARNING - total load average: 0.10, 7.83, 11.99 [03:23:22] RECOVERY - db161 Current Load on db161 is OK: LOAD OK - total load average: 0.27, 3.79, 9.42 [03:25:27] RECOVERY - bast161 Puppet on bast161 is OK: OK: Puppet is currently enabled, last run 29 seconds ago with 0 failures [03:26:51] PROBLEM - mwtask181 Current Load on mwtask181 is CRITICAL: LOAD CRITICAL - total load average: 24.63, 17.00, 8.63 [03:29:07] PROBLEM - os161 Puppet on os161 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 53 seconds ago with 2 failures. Failed resources (up to 3 shown): Package[opensearch],Opensearch_template[graylog-internal] [03:29:21] PROBLEM - os151 Puppet on os151 is CRITICAL: CRITICAL: Puppet has 2 failures. Last run 2 minutes ago with 2 failures. 
Failed resources (up to 3 shown): Package[opensearch],Opensearch_template[graylog-internal] [03:34:51] PROBLEM - mwtask181 Current Load on mwtask181 is WARNING: LOAD WARNING - total load average: 19.19, 22.03, 14.91 [03:36:51] RECOVERY - mwtask181 Current Load on mwtask181 is OK: LOAD OK - total load average: 16.89, 20.27, 15.14 [03:41:38] [02puppet] 07AgentIsai pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/008774dd65613929b0b270151908eac46c80a2fd [03:41:38] 02puppet/03master 07Agent Isai 03008774d Modify ns2/mattermost1 max checks before alerts [03:44:40] PROBLEM - db151 Current Load on db151 is WARNING: LOAD WARNING - total load average: 0.27, 0.53, 11.47 [03:45:20] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [03:46:37] RECOVERY - db151 Current Load on db151 is OK: LOAD OK - total load average: 0.46, 0.51, 10.13 [03:49:42] huh.. [03:50:29] Never update packages with apt upgrade on any server, even mw and task, in the future. Only use the upgrade script, and only for security upgrades. [03:50:32] Is there a way to make it rolling, so it says “ROLLED TO TEST. NO EXPLODE? COOL. TEST OTHER NOW. DB? wait no” [03:51:12] Is there a way to disable or put a warning on the command for humans? [03:52:09] No, we don't really want to disable it anyway. It's just that upgrading all packages outside of maintenance should never be done, because this has happened multiple times (including by myself and Reception as well), which is why I made the upgrade shell script on puppet. [03:55:58] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [03:56:58] [02puppet] 07Universal-Omega created 03upgrade-packages (+1 new commit) 13https://github.com/miraheze/puppet/commit/ce05d9b84155 [03:56:58] 02puppet/03upgrade-packages 07CosmicAlpha 03ce05d9b Add upgrade-packages script… [03:57:03] [02puppet] 07Universal-Omega opened pull request #4189: Add upgrade-packages script (03master...03upgrade-packages) 13https://github.com/miraheze/puppet/pull/4189 [03:57:09] [02puppet] 07coderabbitai[bot] commented on pull request #4189: --- […] 13https://github.com/miraheze/puppet/pull/4189#issuecomment-2664550026 [03:57:50] I forgot the script only existed on puppet181 and was never puppetized lol [03:59:40] And now that I put it in GH I just realized how messy that script is lol [04:00:25] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [04:02:11] TIL a script exists [04:03:43] That’s not good [04:04:21] If a core sysadmin command is deemed not to be used, and an in-house, hard-to-find script is meant to be used instead, that should be very clearly communicated [04:05:29] What's not? [04:05:58] ^ [04:06:16] Yeah I made it 3 years ago because this incident has happened at least 4 times I can remember before. (myself and reception did it before also) [04:07:02] is this script documented somewhere other than folk knowledge? [04:07:46] Void and I always use the script (Reception might have before too, I'm not sure) but I guess this was a failure on my part to properly document and communicate. I apologize for that. [04:08:12] perhaps the fact that apt upgrade shouldn’t be used anywhere should also be documented [04:08:32] since last I heard (actually, inferred), it shouldn’t be run on db* [04:08:44] COSMIC [04:08:59] I’ve never seen it cause any issues other than on db which is the real heavy hitter [04:09:09] :Facepalm: okay so [[Tech:Upgrade script]] anyone? [04:09:10] [04:09:15] It shouldn't be used with salt, that is.
If you do it manually on a mediawiki server it's fine. But salt tends to come with the risk of typos, which is the mistake I made before also. [04:09:31] wait is the site still down [04:09:37] Is it? [04:09:59] No works for me [04:10:03] Bah [04:10:23] im going to bed and doing some late night code reading [04:10:30] maybe hop on scoutlink irc for a spell [04:10:59] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.29 ms [04:12:45] You had me worried that the upgrade upgraded MariaDB to 11 lol, that would have been extremely bad. It would probably break everything; MediaWiki doesn't work with it yet AFAIK. I checked that though and we are fine. [04:14:07] Oh god lmao [04:14:30] is there a way to define DO NOT UPDATE PAST THIS VERSION anywhere [04:14:35] I remember testing it a long time ago and it completely screwed things up, especially RecentChanges etc... and also caused serious performance issues due to changes with how it does JOIN etc... [04:14:52] ^MariaDB 11 that is [04:14:58] HYPOTHETICALLY how would we recover from that [04:15:01] 😇 [04:15:26] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [04:15:39] We can probably pin it. It's something we definitely need to do. When we upgrade we can do the test db first and test it that way. [04:15:41] after crying in the corner [04:15:54] whats the script name and location [04:16:03] We likely couldn't revert. It would run migrations that can't be reversed. [04:16:07] for update [04:16:25] it will never upgrade MariaDB with my upgrade script. [04:16:44] Well, we gotta do something, and beg and plead with MediaWiki to help us move MediaWiki to MariaDB 11 VERY quickly [04:16:45] apt upgrade by itself might though [04:17:08] It could be supported with MW 1.43 now; it's something I might want to test again too. [04:17:25] no i mean whats the name and location of the update script if you aren’t gonna doc it on wiki (DO THAT), pin it here or smt [04:17:30] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.29 ms [04:17:43] but man these things should be made extremely clear to any IFS [04:17:51] Very confused lol [04:18:03] my upgrade script or apt upgrade? [04:18:13] https://github.com/miraheze/puppet/pull/4189 [04:18:13] [GitHub] [miraheze/puppet #4189] open PR by Universal-Omega, created 21 minutes, 12 seconds ago: Add upgrade-packages script | To puppetize, we've always had it on puppet181 but better to actually puppetize the script. [04:18:43] Im pretty sure MariaDB 11 doesn’t exist in Debian repositories yet [04:19:14] MediaWiki 1.39 through master are listed as compatible with MariaDB 10.3.0+ [04:19:40] yours [04:19:47] It will for sure when we upgrade to Debian 13, which is supposed to come out this year. [04:19:50] idk its late i need sleepy time [04:19:57] PROBLEM - mon181 Check correctness of the icinga configuration on mon181 is CRITICAL: Icinga configuration contains errors [04:22:55] [1/26] @agentisai [04:22:55] [2/26] ```ps [04:22:55] [3/26] universalomega@mon181:~$ sudo icinga2 daemon -C [04:22:55] [4/26] [2025-02-18 04:22:22 +0000] information/cli: Icinga application loader (version: r2.14.5-1) [04:22:56] [5/26] [2025-02-18 04:22:22 +0000] information/cli: Loading configuration file(s). [04:22:56] [6/26] [2025-02-18 04:22:22 +0000] information/ConfigItem: Committing config item(s).
[04:22:56] [7/26] [2025-02-18 04:22:22 +0000] information/ApiListener: My API identity: mon181.wikitide.net [04:22:57] [8/26] [2025-02-18 04:22:23 +0000] critical/config: Error: Error while evaluating expression: Can't convert '5m' to a floating point number. [04:22:57] [9/26] Location: in /etc/icinga2/conf.d/services.conf: 54:5-54:25 [04:22:57] [10/26] /etc/icinga2/conf.d/services.conf(52): [04:22:58] [11/26] /etc/icinga2/conf.d/services.conf(53): if ( host.name == "ns2" ) { [04:22:58] [12/26] /etc/icinga2/conf.d/services.conf(54): retry_interval = "5m" [04:22:58] [13/26] ^^^^^^^^^^^^^^^^^^^^^ [04:22:59] [14/26] /etc/icinga2/conf.d/services.conf(55): max_check_attempts = 25 [04:22:59] [15/26] /etc/icinga2/conf.d/services.conf(56): vars.ping_wrta = 300 [04:23:00] [16/26] [2025-02-18 04:22:23 +0000] critical/config: Error: Error while evaluating expression: Can't convert '5m' to a floating point number. [04:23:00] [17/26] Location: in /etc/icinga2/conf.d/services.conf: 61:5-61:25 [04:23:01] [18/26] /etc/icinga2/conf.d/services.conf(59): [04:23:01] [19/26] /etc/icinga2/conf.d/services.conf(60): if ( host.name == "mattermost1" ) { [04:23:02] [20/26] /etc/icinga2/conf.d/services.conf(61): retry_interval = "5m" [04:23:02] [21/26] ^^^^^^^^^^^^^^^^^^^^^ [04:23:03] [22/26] /etc/icinga2/conf.d/services.conf(62): max_check_attempts = 25 [04:23:03] [23/26] /etc/icinga2/conf.d/services.conf(63): vars.ping_wrta = 150 [04:23:04] [24/26] [2025-02-18 04:22:23 +0000] critical/config: 2 errors [04:23:04] [25/26] [2025-02-18 04:22:23 +0000] critical/cli: Config validation failed. Re-run with 'icinga2 daemon -C' after fixing the config. [04:23:05] [26/26] ``` [04:23:36] PROBLEM - ping6 on ns2 is CRITICAL: PING CRITICAL - Packet loss = 16%, RTA = 142.87 ms [04:25:36] RECOVERY - ping6 on ns2 is OK: PING OK - Packet loss = 0%, RTA = 142.83 ms [04:28:19] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [04:34:41] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.28 ms [04:41:11] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [04:49:41] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [04:54:08] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [05:04:42] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.97 ms [05:11:13] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [05:13:16] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [05:30:43] [02mw-config] 07MacFan4000 pushed 1 new commit to 03master 13https://github.com/miraheze/mw-config/commit/0491d6eee7c96b51e3fb46e5bde2f9f54e87dd9b [05:30:43] 02mw-config/03master 07MacFan4000 030491d6e add maintenance site notice [05:31:24] !log [macfan@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all [05:31:26] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [05:31:38] miraheze/mw-config - MacFan4000 the build passed. 
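The Icinga validation failure pasted above (04:22) is caused by `retry_interval = "5m"` being a quoted string; Icinga 2 expects a duration literal (`5m`) or a plain number of seconds. A minimal fix sketch, reusing the paths and the `icinga2 daemon -C` validation shown in the paste; the exact change that was deployed is not shown in this log:

```bash
# Sketch only: drop the quotes so Icinga 2 parses 5m as a duration, not a string.
sudo sed -i 's/retry_interval = "5m"/retry_interval = 5m/' /etc/icinga2/conf.d/services.conf

# Validate, then reload once the config check passes.
sudo icinga2 daemon -C && sudo systemctl reload icinga2
```

On the "pin it" idea for MariaDB (04:14-04:16): one way to make sure a blanket `apt upgrade` can never jump to MariaDB 11 is an apt preferences pin. This is a hedged sketch, not Miraheze's actual configuration; the package globs and the `1:10.*` epoch/version pattern are assumptions based on Debian's MariaDB packaging:

```bash
# Sketch only: hold MariaDB on the 10.x series regardless of what apt upgrade offers.
sudo tee /etc/apt/preferences.d/mariadb-pin >/dev/null <<'EOF'
Package: mariadb-* libmariadb*
Pin: version 1:10.*
Pin-Priority: 1001
EOF

sudo apt-get update
apt-cache policy mariadb-server   # the candidate version should stay within 1:10.*
```

And on "only use the upgrade script to do security upgrades" (03:50): the upgrade-packages script itself is not reproduced here, but one common Debian pattern for applying security updates and nothing else is unattended-upgrades, shown as a sketch that is not necessarily what that script does:

```bash
# Sketch only: apply just the configured (by default, security) origins.
sudo unattended-upgrade --dry-run --debug   # show what would be upgraded
sudo unattended-upgrade --debug             # actually apply it
```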
[05:31:45] !log [macfan@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 20s [05:31:47] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [05:32:12] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [05:42:48] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [05:47:16] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [05:55:45] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.26 ms [06:00:12] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [06:06:34] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.39 ms [06:31:32] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 70%, RTA = 31.28 ms [06:35:39] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.36 ms [06:40:07] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [06:50:41] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.35 ms [06:55:08] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [07:05:48] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.29 ms [08:02:53] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [08:13:30] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.37 ms [08:17:57] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [08:26:27] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.28 ms [08:32:58] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [08:41:21] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.33 ms [09:40:55] PROBLEM - changeprop151 APT on changeprop151 is CRITICAL: APT CRITICAL: 3 packages available for upgrade (3 critical updates). [09:50:56] RECOVERY - changeprop151 APT on changeprop151 is OK: APT OK: 0 packages available for upgrade (0 critical updates). 
[10:11:10] PROBLEM - mwtask151 Current Load on mwtask151 is WARNING: LOAD WARNING - total load average: 22.15, 16.04, 8.35 [10:15:10] RECOVERY - mwtask151 Current Load on mwtask151 is OK: LOAD OK - total load average: 9.48, 16.08, 10.39 [10:24:45] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [10:37:30] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.98 ms [10:44:00] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [10:58:57] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [11:40:20] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 60%, RTA = 31.32 ms [11:42:23] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.33 ms [11:53:12] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [12:03:46] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.39 ms [12:22:17] PROBLEM - db171 Current Load on db171 is CRITICAL: LOAD CRITICAL - total load average: 15.27, 11.87, 7.63 [12:31:58] PROBLEM - db171 Current Load on db171 is WARNING: LOAD WARNING - total load average: 5.93, 11.12, 10.18 [12:35:51] PROBLEM - db171 Current Load on db171 is CRITICAL: LOAD CRITICAL - total load average: 13.04, 12.15, 10.73 [12:37:57] PROBLEM - mw184 Current Load on mw184 is WARNING: LOAD WARNING - total load average: 21.37, 20.31, 16.18 [12:39:53] RECOVERY - mw184 Current Load on mw184 is OK: LOAD OK - total load average: 19.17, 19.51, 16.35 [12:49:00] [Grafana] FIRING: Some MediaWiki Appservers are running out of PHP-FPM workers. https://grafana.wikitide.net/d/GtxbP1Xnk?orgId=1 [12:54:00] [Grafana] RESOLVED: PHP-FPM Worker Usage High https://grafana.wikitide.net/d/GtxbP1Xnk?orgId=1 [12:57:44] PROBLEM - db171 Current Load on db171 is WARNING: LOAD WARNING - total load average: 0.96, 5.81, 10.69 [12:59:44] RECOVERY - db171 Current Load on db171 is OK: LOAD OK - total load average: 0.49, 4.02, 9.44 [13:03:35] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 37%, RTA = 31.29 ms [13:05:38] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.27 ms [14:00:28] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [14:06:50] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.34 ms [14:27:48] PROBLEM - cp37 Disk Space on cp37 is CRITICAL: DISK CRITICAL - free space: / 27173MiB (5% inode=99%); [14:38:16] PROBLEM - gimkit.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'gimkit.wiki' expires in 14 day(s) (Wed 05 Mar 2025 02:35:49 PM GMT +0000). [14:38:30] [02ssl] 07WikiTideSSLBot pushed 1 new commit to 03master 13https://github.com/miraheze/ssl/commit/c421d9907d3e8f2fc572f5d5e470d911f3178ae8 [14:38:31] 02ssl/03master 07WikiTideSSLBot 03c421d99 Bot: Update SSL cert for gimkit.wiki [14:44:32] PROBLEM - junxions.wiki - LetsEncrypt on sslhost is WARNING: WARNING - Certificate 'junxions.wiki' expires in 14 day(s) (Wed 05 Mar 2025 02:30:23 PM GMT +0000). 
[14:44:42] [02ssl] 07WikiTideSSLBot pushed 1 new commit to 03master 13https://github.com/miraheze/ssl/commit/94099e608c987cdfdea3a01156d7c5f0be208124 [14:44:43] 02ssl/03master 07WikiTideSSLBot 0394099e6 Bot: Update SSL cert for junxions.wiki [14:49:37] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [15:00:16] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [15:04:43] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [15:07:02] RECOVERY - gimkit.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'gimkit.wiki' will expire on Mon 19 May 2025 01:39:54 PM GMT +0000. [15:13:13] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.35 ms [15:13:34] RECOVERY - junxions.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'junxions.wiki' will expire on Mon 19 May 2025 01:46:07 PM GMT +0000. [16:04:42] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.43/maintenance/run.php /srv/mediawiki/1.43/maintenance/importImages.php --wiki=mythcommunitywiki images --search-recursively (START) [16:04:43] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:11:12] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [16:17:33] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [16:24:04] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [16:27:34] [02ssl] 07MacFan4000 pushed 1 new commit to 03master 13https://github.com/miraheze/ssl/commit/6d033ee39c11ff1aab650cf85977593f4e06f85a [16:27:34] 02ssl/03master 07MacFan4000 036d033ee add 5 domains [16:34:38] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.33 ms [16:40:57] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.43/maintenance/run.php /srv/mediawiki/1.43/maintenance/importImages.php --wiki=mythcommunitywiki images --search-recursively (END - exit=0) [16:40:58] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [16:41:08] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [16:47:29] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.47 ms [17:47:05] PROBLEM - mwtask161 Current Load on mwtask161 is WARNING: LOAD WARNING - total load average: 22.55, 17.73, 11.06 [17:49:05] RECOVERY - mwtask161 Current Load on mwtask161 is OK: LOAD OK - total load average: 14.09, 17.32, 11.79 [18:08:00] PROBLEM - mwtask161 Current Load on mwtask161 is WARNING: LOAD WARNING - total load average: 23.11, 17.93, 14.33 [18:09:59] RECOVERY - mwtask161 Current Load on mwtask161 is OK: LOAD OK - total load average: 10.14, 16.13, 14.19 [18:10:25] PROBLEM - mwtask171 Current Load on mwtask171 is WARNING: LOAD WARNING - total load average: 22.01, 16.89, 12.08 [18:16:18] RECOVERY - mwtask171 Current Load on mwtask171 is OK: LOAD OK - total load average: 4.60, 14.22, 12.98 [18:34:50] [02ssl] 07MacFan4000 pushed 1 new commit to 03master 13https://github.com/miraheze/ssl/commit/2d9366d8911d46c1e610972305b49fd0daeb9c31 [18:34:51] 02ssl/03master 07MacFan4000 032d9366d add eccg.themadpunter.tech [18:44:48] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [18:55:24] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.32 ms [18:57:55] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.43/maintenance/run.php /srv/mediawiki/1.43/maintenance/importImages.php --wiki=chidurianwikiwiki 
images --search-recursively (START) [18:57:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [18:59:52] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [19:04:04] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.32 ms [19:11:43] !log [macfan@mwtask181] sudo -u www-data php /srv/mediawiki/1.43/maintenance/run.php /srv/mediawiki/1.43/maintenance/importImages.php --wiki=chidurianwikiwiki images --search-recursively (END - exit=0) [19:11:45] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [19:13:54] [02.github] 07dependabot[bot] created 03dependabot/github_actions/actions/cache-4.2.1 (+1 new commit) 13https://github.com/miraheze/.github/commit/5229f55ef5fe [19:13:55] 02.github/03dependabot/github_actions/actions/cache-4.2.1 07dependabot[bot] 035229f55 Bump actions/cache from 4.2.0 to 4.2.1… [19:13:56] [02.github] 07dependabot[bot] added the label 'dependencies' to pull request #56 (Bump actions/cache from 4.2.0 to 4.2.1) 13https://github.com/miraheze/.github/pull/56 [19:13:58] [02.github] 07dependabot[bot] opened pull request #56: Bump actions/cache from 4.2.0 to 4.2.1 (03master...03dependabot/github_actions/actions/cache-4.2.1) 13https://github.com/miraheze/.github/pull/56 [19:14:04] [02.github] 07coderabbitai[bot] commented on pull request #56: --- […] 13https://github.com/miraheze/.github/pull/56#issuecomment-2666697603 [19:24:56] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [19:27:49] PROBLEM - matomo151 HTTPS on matomo151 is CRITICAL: HTTP CRITICAL - Invalid HTTP response received from host on port 443: cURL returned 7 - Failed to connect to matomo151.wikitide.net port 443 after 0 ms: Couldn't connect to server [19:31:17] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.37 ms [19:36:55] [02mw-config] 07songnguxyz opened pull request #5829: Update lhmnwiki's configs (07miraheze:03master...07songnguxyz:03patch-4) 13https://github.com/miraheze/mw-config/pull/5829 [19:37:46] miraheze/mw-config - songnguxyz the build passed. [19:38:21] `#5829` are just some quick config changes [19:41:55] [02mw-config] 07RhinosF1 commented on pull request #5829: @songnguxyz is there any reason this can't be added to ManageWiki? […] 13https://github.com/miraheze/mw-config/pull/5829#issuecomment-2666775185 [19:42:45] Reviewed [19:43:13] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [19:45:18] [02mw-config] 07songnguxyz commented on pull request #5829: I had added it since #5806, yet as I said that it doesn't shown properly in ManageWiki. The other one `wgPortableInfoboxCustomImageWidth` does seems to show normally however. 13https://github.com/miraheze/mw-config/pull/5829#issuecomment-2666782159 [19:49:37] [1/2] There are some issue with ManageWiki, both here and outside MH as well [19:49:38] [2/2] check PR [19:50:54] That's a bug that should be reported and fixed [19:51:04] @cosmicalpha can you take a peek at ^ [19:51:17] https://github.com/miraheze/mw-config/pull/5829#issuecomment-2666782159 [19:51:18] [GitHub] [miraheze/mw-config #5829] Comment by songnguxyz, created 6 minutes, 2 seconds ago: I had added it since #5806, yet as I said that it doesn't shown properly in ManageWiki. The other one `wgPortableInfoboxCustomImageWidth` shows up normally otherwise. 
[…] [19:51:30] Should be in ManageWiki but isn't working [19:53:49] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.32 ms [19:55:49] RECOVERY - matomo151 HTTPS on matomo151 is OK: HTTP OK: HTTP/2 200 - 553 bytes in 0.255 second response time [19:58:20] [02mw-config] 07songnguxyz commented on pull request #5829: Task created: https://issue-tracker.miraheze.org/T13241 13https://github.com/miraheze/mw-config/pull/5829#issuecomment-2666809649 [19:58:55] [02mw-config] 07songnguxyz un-readied pull request #5829: Update lhmnwiki's configs (07miraheze:03master...07songnguxyz:03patch-4) 13https://github.com/miraheze/mw-config/pull/5829 [20:40:41] [02mw-config] 07Universal-Omega pushed 1 new commit to 03master 13https://github.com/miraheze/mw-config/commit/0d0a7435f3117c81803b0e4655dd789de2197448 [20:40:42] 02mw-config/03master 07CosmicAlpha 030d0a743 Fix wgEnableProtectionIndicators [20:40:52] @rhinosf1 @songngu.xyz ^ [20:41:32] Thanks! [20:41:36] miraheze/mw-config - Universal-Omega the build passed. [20:41:53] No problem! Whoever added it forgot to add global => true, which, the way it was, meant it wouldn't show unless there was an extension called mediawiki. [20:42:16] Ah [20:42:44] I'll deploy the change in a moment [20:44:27] For future reference, global => true is needed for all configuration not part of extensions in MWE; that includes global extension/skin variables and mediawiki core ones. [20:47:32] https://github.com/miraheze/mw-config/pull/4187#issuecomment-953521607 was the reason it was required like that too, so that the filtering I added back in 2021 works. I forgot I did that, and exactly why, for a moment lol. [20:47:32] [GitHub] [miraheze/mw-config #4187] Comment by Universal-Omega, created 3 years, 3 months ago: I have a plan to make ManageWikiSettings able to be filtered by extension/skin. Currently it now supports filtering by extensions/skins set in ManageWikiExtensions by the from value, doing this would allow the 'from' to be changed to the actual […] [20:51:31] Hmm any ideas on how I could disable an extension on all wikis where another extension is not enabled? [20:52:08] @rhinosf1 do you know if it'd be feasible to let mwscript do like --not-extension --has-extension generate DB list. I'm not sure how lol. [20:52:26] !log [universalomega@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all [20:52:28] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [20:52:47] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 80%, RTA = 31.30 ms [20:52:47] !log [universalomega@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 21s [20:52:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [20:52:53] It gets passed to a magic script [20:53:13] I forgot that lol [20:53:24] https://github.com/miraheze/MirahezeMagic/blob/master/maintenance/generateExtensionDatabaseList.php [20:53:24] [GitHub] [miraheze/MirahezeMagic] maintenance/generateExtensionDatabaseList.php @ master [20:53:25] I guess that's on me to make the other script support it first...
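A hedged sketch of what "disable an extension on all wikis where another extension is not enabled" could look like once per-extension database lists are available. The `*.dblist` file names are hypothetical placeholders (one database name per line, e.g. as generateExtensionDatabaseList.php might produce); the toggleExtension.php invocation mirrors the one used later in this log:

```bash
# Sketch only: wikis that have GrowthExperiments but not CirrusSearch.
comm -23 <(sort growthexperiments.dblist) <(sort cirrussearch.dblist) > ge-without-cs.dblist

# Disable GrowthExperiments on each of them.
while read -r wiki; do
  sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php \
    /srv/mediawiki/1.44/extensions/ManageWiki/maintenance/toggleExtension.php \
    --wiki="$wiki" GrowthExperiments --disable
done < ge-without-cs.dblist
```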
[20:53:30] Yup [20:53:43] mwscript can do whatever you like tbh [20:53:50] It mostly just shells out to other stuff [20:54:51] No rush, we have time anyway, but it seems GrowthExperiments now hard-requires CirrusSearch (defined in extension.json as a dependency), so with 1.44, rather than enabling the restricted CS on all wikis with GE, GE will need to be disabled on all wikis without CS [20:55:21] Have you spoken to @urbanecm [20:55:28] To see if it's truly hard [20:55:44] Not just been added to ext.json [20:56:00] At first it seemed to be an accident, as it wasn't defined in extension.json, but now it has been added to extension.json, so it seems to be intentionally required now. [20:56:04] I have a feeling this happened before but the code didn't strictly require it [20:56:06] I just assumed anyway [20:56:55] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [20:57:34] How many wikis have CS? [21:01:05] Uhhh, was something done with CirrusSearch today? seems like search is globally having issues for me [21:02:02] At least on meta wiki and the game wiki I help maintain; when a topic is entered into the search bar and enter is pressed, it just says „An error has occurred while searching: We could not complete your search due to a temporary problem. Please try again later. ” [21:02:06] [02mw-config] 07Universal-Omega commented on pull request #5829: $wgEnableProtectionIndicators in ManageWiki should be fixed now. 13https://github.com/miraheze/mw-config/pull/5829#issuecomment-2666922361 [21:02:24] haven't checked wikis without cirrussearch though [21:02:45] search issue also reported by two other users [21:03:25] PROBLEM - ping6 on mattermost1 is CRITICAL: PING CRITICAL - Packet loss = 100% [21:05:29] RECOVERY - ping6 on mattermost1 is OK: PING OK - Packet loss = 0%, RTA = 31.31 ms [21:07:40] @cosmicalpha any thoughts on what Frisk said while you're here? [21:08:13] https://wiki.animalroyale.com/w/index.php?search=Unicorn&title=Special%3ASearch&wprov=acrw1_-1 [21:08:13] https://meta.miraheze.org/w/index.php?search=robot&title=Special%3ASearch&wprov=acrw1_-1&ns0=1&ns4=1 [21:08:17] This was likely caused by the apt upgrades that were done [21:08:38] Special:Version might be a separate issue but it also errors out on both wikis [21:08:40] I'll look in a moment [21:08:50] at both issues [21:08:54] Thanks! [21:13:08] This was definitely caused by upgrades. [21:18:01] [1/4] ``` java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene912' does not exist. You need to add [21:18:01] [2/4] the corresponding JAR file supporting this SPI to your classpath.
The current classpath supports the following names: [Composite99Codec, Lucene80, Lucene84, Lucene86, Lucene87, Lucene70, Lucene90, Lucene91, Lucene92, Lucene94, Lucene95, Lucene99, KNN80Codec, KNN84Codec, KNN86Codec, KNN87Codec, KNN910Codec, KNN920Codec, KNN940Codec, KNN950Codec, KNN990Codec, UnitTestCodec, ZSTD, [21:18:02] [3/4] ZSTDNODICT, Lucene95CustomCodec, ZSTD99, ZSTDNODICT99, QATDEFLATE99, QATLZ499, CorrelationCodec, CorrelationCodec990] [21:18:02] [4/4] ``` [21:24:18] yeah so basically a component of elasticsearch and therefore cirrussearch, guess this explains why pizzatower wiki doesn't seem to be affected [21:28:57] I have no idea how to fix this right now but am trying [21:29:19] I've added a #minor-announcements @cosmicalpha [21:35:03] [02puppet] 07Universal-Omega pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/c0bc5d38c99a69873c092666175ce9c42329b562 [21:35:04] 02puppet/03master 07CosmicAlpha 03c0bc5d3 Upgrade opensearch [21:36:44] Okay search is fixed [21:36:54] @rhinosf1, Frisk [21:37:18] !log upgrade opensearch to 2.19.0 [21:37:19] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [21:37:57] Chef's kiss, thank you for all of your work :) [21:38:44] Can confirm both search and Special:Version are working correctly [21:39:58] RECOVERY - os162 Puppet on os162 is OK: OK: Puppet is currently enabled, last run 53 seconds ago with 0 failures [21:43:40] RECOVERY - os151 Puppet on os151 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:43:42] RECOVERY - os161 Puppet on os161 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [21:48:51] [02puppet] 07Universal-Omega pushed 1 new commit to 03master 13https://github.com/miraheze/puppet/commit/360811253e4b2000e21fc24f5ab12125a7be0ba4 [21:48:51] 02puppet/03master 07CosmicAlpha 033608112 Fix [21:50:54] RECOVERY - mon181 Check correctness of the icinga configuration on mon181 is OK: Icinga configuration is correct [21:51:50] PROBLEM - anawawiki.com - Cloudflare on sslhost is CRITICAL: No address associated with hostnameHTTP CRITICAL - Unable to open TCP socket [21:58:06] [02puppet] 07Universal-Omega closed pull request #3897: create new deploy test user for T12486 (03master...03Reception123-patch-3) 13https://github.com/miraheze/puppet/pull/3897 [21:58:07] [02puppet] 07Universal-Omega commented on pull request #3897: We are going about this a different way.
13https://github.com/miraheze/puppet/pull/3897#issuecomment-2667015458 [21:58:11] [02puppet] 07Universal-Omega 04deleted 03Reception123-patch-3 at 03e45a214 13https://github.com/miraheze/puppet/commit/e45a214 [22:40:39] [02ssl] 07WikiTideSSLBot pushed 1 new commit to 03master 13https://github.com/miraheze/ssl/commit/b8a7013564010dda575275be674ac4c8600b32b2 [22:40:39] 02ssl/03master 07WikiTideSSLBot 03b8a7013 Bot: Update SSL cert for mwcosmos.com [23:45:24] [02mw-config] 07MacFan4000 pushed 1 new commit to 03master 13https://github.com/miraheze/mw-config/commit/144ed0d6510ad4aa5ad36d032866e9000e58ba2e [23:45:24] 02mw-config/03master 07MacFan4000 03144ed0d fix styling [23:46:07] !log [macfan@test151] starting deploy of {'pull': 'config', 'config': True} to test151 [23:46:08] !log [macfan@test151] finished deploy of {'pull': 'config', 'config': True} to test151 - SUCCESS in 1s [23:46:08] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:46:10] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:46:23] miraheze/mw-config - MacFan4000 the build passed. [23:46:48] PROBLEM - os161 Current Load on os161 is WARNING: LOAD WARNING - total load average: 3.75, 3.33, 2.75 [23:47:59] RECOVERY - mwcosmos.com - LetsEncrypt on sslhost is OK: OK - Certificate 'mwcosmos.com' will expire on Mon 19 May 2025 09:42:04 PM GMT +0000. [23:48:01] !log [macfan@test151] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/extensions/ManageWiki/maintenance/togleExtension.php --wiki=metawikibeta CirrusSearch (END - exit=256) [23:48:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:48:46] RECOVERY - os161 Current Load on os161 is OK: LOAD OK - total load average: 2.94, 3.21, 2.78 [23:48:55] !log [macfan@test151] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/extensions/ManageWiki/maintenance/toggleExtension.php --wiki=metawikibeta CirrusSearch --disable (END - exit=0) [23:48:57] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:49:50] !log [macfan@test151] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/extensions/ManageWiki/maintenance/toggleExtension.php --wiki=metawikibeta GrowthExperiments --disable (END - exit=0) [23:49:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:50:47] !log [macfan@test151] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/extensions/ManageWiki/maintenance/toggleExtension.php --wiki=metawikibeta growthexperiments --disable (END - exit=0) [23:50:49] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:51:01] !log [macfan@test151] sudo -u www-data php /srv/mediawiki/1.44/maintenance/run.php /srv/mediawiki/1.44/extensions/ManageWiki/maintenance/toggleExtension.php --wiki=metawikibeta cirrussearch --disable (END - exit=0) [23:51:03] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:52:29] !log [macfan@mwtask181] starting deploy of {'pull': 'config', 'config': True} to all [23:52:31] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log [23:52:50] !log [macfan@mwtask181] finished deploy of {'pull': 'config', 'config': True} to all - SUCCESS in 20s [23:52:52] Logged the message at https://meta.miraheze.org/wiki/Tech:Server_admin_log
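A small follow-up sketch for the OpenSearch 2.19.0 upgrade logged at 21:37: a quick way to confirm each node reports the new version and the cluster is healthy. The host and port are assumptions (a local node on the default 9200); if the cluster requires TLS or authentication, the curl calls would need https and credentials:

```bash
# Sketch only: post-upgrade sanity checks against a local OpenSearch node.
curl -s http://localhost:9200                                # root response includes version.number
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,version'  # every node should show the new version
curl -s 'http://localhost:9200/_cluster/health?pretty'       # expect status green
```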