[00:00:56] PROBLEM - cp13 Puppet on cp13 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[nginx] [00:01:45] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JPSsI [00:01:47] [miraheze/ssl] paladox 2afff39 - Update www.iceria.org.crt [00:03:11] PROBLEM - cp14 Puppet on cp14 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): Service[nginx] [00:03:58] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 31.42, 24.32, 21.33 [00:04:12] PROBLEM - cp15 Puppet on cp15 is CRITICAL: CRITICAL: Puppet has 135 failures. Last run 3 minutes ago with 135 failures. Failed resources (up to 3 shown): File[www.winenjoy.net],File[www.winenjoy.net_private],File[wiki.thefactoryhka.com.pa],File[wiki.thefactoryhka.com.pa_private] [00:04:34] RECOVERY - www.bluepageswiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'www.bluepageswiki.org' will expire on Mon 31 Jan 2022 22:56:37 GMT +0000. [00:04:41] RECOVERY - www.erikapedia.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.erikapedia.com' will expire on Mon 31 Jan 2022 22:50:10 GMT +0000. [00:04:43] RECOVERY - iceria.org - LetsEncrypt on sslhost is OK: OK - Certificate 'www.iceria.org' will expire on Mon 31 Jan 2022 23:00:20 GMT +0000. [00:04:45] RECOVERY - www.mcpk.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'www.mcpk.wiki' will expire on Mon 31 Jan 2022 22:52:23 GMT +0000. [00:04:47] RECOVERY - erikapedia.com - LetsEncrypt on sslhost is OK: OK - Certificate 'www.erikapedia.com' will expire on Mon 31 Jan 2022 22:50:10 GMT +0000. [00:04:50] RECOVERY - wiki.hrznstudio.com - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.hrznstudio.com' will expire on Mon 31 Jan 2022 22:49:00 GMT +0000. 
[00:04:52] RECOVERY - thestarsareright.org - LetsEncrypt on sslhost is OK: OK - Certificate 'thestarsareright.org' will expire on Mon 31 Jan 2022 22:47:28 GMT +0000. [00:04:55] RECOVERY - cp13 Puppet on cp13 is OK: OK: Puppet is currently enabled, last run 32 seconds ago with 0 failures [00:05:01] RECOVERY - www.wikimicrofinanza.it - LetsEncrypt on sslhost is OK: OK - Certificate 'www.wikimicrofinanza.it' will expire on Mon 31 Jan 2022 22:54:41 GMT +0000. [00:05:04] RECOVERY - www.iceria.org - LetsEncrypt on sslhost is OK: OK - Certificate 'www.iceria.org' will expire on Mon 31 Jan 2022 23:00:20 GMT +0000. [00:05:04] RECOVERY - www.lab612.at - LetsEncrypt on sslhost is OK: OK - Certificate 'www.lab612.at' will expire on Mon 31 Jan 2022 22:53:37 GMT +0000. [00:05:09] RECOVERY - cp14 Puppet on cp14 is OK: OK: Puppet is currently enabled, last run 58 seconds ago with 0 failures [00:05:55] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.54, 22.26, 20.93 [00:06:10] RECOVERY - cp15 Puppet on cp15 is OK: OK: Puppet is currently enabled, last run 42 seconds ago with 0 failures [00:06:11] RECOVERY - cp12 Puppet on cp12 is OK: OK: Puppet is currently enabled, last run 40 seconds ago with 0 failures [00:07:09] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JPSs5 [00:07:10] [miraheze/ssl] paladox e0be35d - Update www.zenbuddhism.info.crt [00:07:53] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 27.87, 25.25, 22.21 [00:07:53] RECOVERY - www.zenbuddhism.info - LetsEncrypt on sslhost is OK: OK - Certificate 'www.zenbuddhism.info' will expire on Mon 31 Jan 2022 22:55:43 GMT +0000. 
[00:08:52] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JPSGf [00:08:53] [miraheze/ssl] paladox 6c13805 - Update wiki-asterix.cf.crt [00:09:21] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.16, 5.22, 5.04 [00:09:50] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 22.81, 23.89, 22.05 [00:11:18] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.77, 5.75, 5.24 [00:11:31] RECOVERY - mwtask1 Puppet on mwtask1 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [00:11:47] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 27.15, 25.79, 22.99 [00:11:53] RECOVERY - mw11 Puppet on mw11 is OK: OK: Puppet is currently enabled, last run 31 seconds ago with 0 failures [00:12:03] [miraheze/ssl] paladox pushed 1 commit to master [+0/-0/±1] https://git.io/JPSG0 [00:12:04] [miraheze/ssl] paladox dde0667 - Update www.arru.xyz.crt [00:13:16] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.52, 5.43, 5.20 [00:14:12] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [00:14:36] RECOVERY - wiki-asterix.cf - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki-asterix.cf' will expire on Mon 31 Jan 2022 23:08:28 GMT +0000. [00:14:40] RECOVERY - arru.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'www.arru.xyz' will expire on Mon 31 Jan 2022 23:11:41 GMT +0000. [00:14:42] RECOVERY - www.arru.xyz - LetsEncrypt on sslhost is OK: OK - Certificate 'www.arru.xyz' will expire on Mon 31 Jan 2022 23:11:41 GMT +0000. 
[00:15:13] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 8.49, 6.49, 5.60 [00:22:40] RECOVERY - mw10 Puppet on mw10 is OK: OK: Puppet is currently enabled, last run 47 seconds ago with 0 failures [00:29:42] RECOVERY - mw12 Puppet on mw12 is OK: OK: Puppet is currently enabled, last run 2 seconds ago with 0 failures [00:38:44] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 3.15, 4.20, 5.72 [00:40:41] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 8.54, 5.74, 6.09 [00:42:39] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.68, 5.20, 5.85 [00:44:36] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 7.97, 5.81, 5.97 [00:46:33] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.13, 5.52, 5.85 [00:53:15] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 23.49, 21.01, 23.81 [00:54:23] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 3.01, 3.70, 4.88 [01:05:15] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 13.49, 17.16, 20.12 [01:16:48] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [01:16:48] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.001814126968384 seconds 
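(Editor's note: the "Current Load" alerts above report the standard Unix 1-, 5-, and 15-minute load averages checked against warning/critical thresholds. A minimal sketch of that classification logic, with hypothetical thresholds — the real checks are Icinga plugins whose thresholds are not shown in this log:)

```python
# Sketch of a Nagios-style load-average check. The warn/crit values here
# are illustrative assumptions, not the thresholds Miraheze actually uses;
# real check_load also applies separate thresholds per averaging window.

def classify_load(one: float, five: float, fifteen: float,
                  warn: float = 20.0, crit: float = 25.0) -> str:
    """Return a status line like the alerts above for a load triple."""
    load_str = f"load average: {one:.2f}, {five:.2f}, {fifteen:.2f}"
    if one >= crit:
        return f"CRITICAL - {load_str}"
    if one >= warn:
        return f"WARNING - {load_str}"
    return f"OK - {load_str}"
```

A host flaps between these states (as cloud4 and gluster3 do above) whenever its load hovers near a threshold.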
[01:23:44] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [02:13:14] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.17, 3.36, 2.81 [02:15:13] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 3.16, 3.14, 2.79 [02:55:29] I guess this is fallout from the db move, but is it task-worthy or will it recover over time, so I just need to give it more time? https://usercontent.irccloud-cdn.com/file/I4QBKVwG/IMG_1177.PNG [03:07:00] Once a phab ticket is assigned for import, is it just the one user who can restart the import script for images if the first attempt failed? [03:36:18] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [03:36:18] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.002535581588745 seconds [03:43:09] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. 
[04:52:48] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 9.08, 7.91, 4.91 [04:52:59] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [04:52:59] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.003698348999023 seconds [04:54:48] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 4.95, 6.89, 4.91 [04:56:48] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 3.31, 5.67, 4.71 [04:59:11] PROBLEM - mw12 Current Load on mw12 is CRITICAL: CRITICAL - load average: 10.93, 6.49, 4.17 [05:00:14] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [05:01:10] RECOVERY - mw12 Current Load on mw12 is OK: OK - load average: 5.37, 6.19, 4.37 [05:58:09] [CreateWiki] Universal-Omega synchronize pull request #256: Fix notifications and add NotificationsManager service - https://git.io/JP1fg [06:02:50] [CreateWiki] Universal-Omega synchronize pull request #256: Fix notifications and add NotificationsManager service - https://git.io/JP1fg [06:03:29] [CreateWiki] Universal-Omega synchronize pull request #256: Fix notifications and add NotificationsManager service - https://git.io/JP1fg [06:10:11] miraheze/CreateWiki - Universal-Omega the build passed. 
[06:11:10] miraheze/CreateWiki - Universal-Omega the build passed. [06:15:08] miraheze/CreateWiki - Universal-Omega the build passed. [06:30:29] [miraheze/ssl] Reception123 pushed 4 commits to master [+0/-3/±41] https://git.io/JP9Ce [06:30:30] [miraheze/ssl] Reception123 ae2ef69 - renew SSL certs [06:30:32] [miraheze/ssl] Reception123 01944f0 - remove wiki.fffbr.org cert (not pointing) [06:30:33] [miraheze/ssl] Reception123 0d68725 - remove www.highdeologywiki.com (not pointing) [06:30:35] [miraheze/ssl] Reception123 394f047 - Merge branch 'master' of https://github.com/miraheze/ssl into master [06:30:35] [url] GitHub - miraheze/ssl: A repository for storing and managing SSL certificates for Miraheze | github.com [06:41:10] PROBLEM - cp14 Puppet on cp14 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[www.puzzles.wiki_private] [06:42:11] PROBLEM - cp15 Puppet on cp15 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[www.puzzles.wiki_private] [06:43:32] PROBLEM - mwtask1 Puppet on mwtask1 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. Failed resources (up to 3 shown): File[www.puzzles.wiki_private] [06:44:12] PROBLEM - mw8 Puppet on mw8 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. Failed resources (up to 3 shown): File[www.puzzles.wiki_private] [06:45:53] PROBLEM - mw11 Puppet on mw11 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 3 minutes ago with 1 failures. 
Failed resources (up to 3 shown): File[www.puzzles.wiki_private] [06:46:35] [miraheze/ssl] Reception123 pushed 1 commit to master [+0/-0/±1] https://git.io/JP98i [06:46:36] [miraheze/ssl] Reception123 0f25653 - fix [06:59:16] RECOVERY - dreamsit.com.br - LetsEncrypt on sslhost is OK: OK - Certificate 'dreamsit.com.br' will expire on Tue 01 Feb 2022 05:18:33 GMT +0000. [07:06:06] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [07:06:06] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.002267837524414 seconds [07:07:43] RECOVERY - www.electrowiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'electrowiki.org' will expire on Tue 01 Feb 2022 05:17:32 GMT +0000. [07:08:31] RECOVERY - cp14 Puppet on cp14 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [07:08:47] RECOVERY - patternarchive.online - LetsEncrypt on sslhost is OK: OK - Certificate 'patternarchive.online' will expire on Tue 01 Feb 2022 05:12:22 GMT +0000. [07:08:53] RECOVERY - wiki.yumeka.icu - LetsEncrypt on sslhost is OK: OK - Certificate 'wiki.yumeka.icu' will expire on Tue 01 Feb 2022 05:25:39 GMT +0000. [07:09:15] RECOVERY - www.chinatech.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'www.chinatech.wiki' will expire on Tue 01 Feb 2022 05:21:48 GMT +0000. 
[07:09:29] RECOVERY - otcg.ml - LetsEncrypt on sslhost is OK: OK - Certificate 'otcg.ml' will expire on Tue 01 Feb 2022 05:23:51 GMT +0000. [07:09:50] RECOVERY - cp15 Puppet on cp15 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [07:10:30] RECOVERY - wikimoma.art - LetsEncrypt on sslhost is OK: OK - Certificate 'wikimoma.art' will expire on Tue 01 Feb 2022 05:24:41 GMT +0000. [07:10:43] RECOVERY - puzzles.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'puzzles.wiki' will expire on Tue 01 Feb 2022 05:45:17 GMT +0000. [07:11:12] RECOVERY - mwtask1 Puppet on mwtask1 is OK: OK: Puppet is currently enabled, last run 16 seconds ago with 0 failures [07:11:20] RECOVERY - rww2.org - LetsEncrypt on sslhost is OK: OK - Certificate 'rww2.org' will expire on Tue 01 Feb 2022 05:22:58 GMT +0000. [07:11:25] RECOVERY - mw11 Puppet on mw11 is OK: OK: Puppet is currently enabled, last run 21 seconds ago with 0 failures [07:11:39] RECOVERY - civwiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'civwiki.org' will expire on Tue 01 Feb 2022 05:26:33 GMT +0000. [07:11:56] RECOVERY - www.puzzles.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'puzzles.wiki' will expire on Tue 01 Feb 2022 05:45:17 GMT +0000. [07:12:48] RECOVERY - electrowiki.org - LetsEncrypt on sslhost is OK: OK - Certificate 'electrowiki.org' will expire on Tue 01 Feb 2022 05:17:32 GMT +0000. [07:12:51] RECOVERY - reviwiki.info - LetsEncrypt on sslhost is OK: OK - Certificate 'reviwiki.info' will expire on Tue 01 Feb 2022 05:19:22 GMT +0000. [07:13:07] RECOVERY - mw8 Puppet on mw8 is OK: OK: Puppet is currently enabled, last run 1 minute ago with 0 failures [07:13:19] RECOVERY - www.burnout.wiki - LetsEncrypt on sslhost is OK: OK - Certificate 'www.burnout.wiki' will expire on Tue 01 Feb 2022 05:20:58 GMT +0000. 
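(Editor's note: the "LetsEncrypt on sslhost" checks above report the `notAfter` expiry time of each host's certificate. A minimal stdlib sketch of such a check — the `fetch_not_after`/`status` helpers and the 14-day warning window are illustrative assumptions, not the actual Miraheze plugin:)

```python
# Sketch of a certificate-expiry check like the "LetsEncrypt on sslhost"
# alerts above. Helper names and the warn_days threshold are hypothetical.
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    """Parse the 'notAfter' string from ssl.getpeercert(),
    e.g. 'Jan 31 22:56:37 2022 GMT'."""
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def fetch_not_after(host: str, port: int = 443) -> datetime:
    """Connect via TLS and return the peer certificate's expiry time."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return parse_not_after(tls.getpeercert()["notAfter"])

def status(host: str, expiry: datetime, warn_days: int = 14) -> str:
    """Render a Nagios-style status line for the certificate."""
    days_left = (expiry - datetime.now(timezone.utc)).days
    if days_left <= 0:
        return f"CRITICAL - Certificate '{host}' has expired"
    if days_left <= warn_days:
        return f"WARNING - Certificate '{host}' expires in {days_left} days"
    return (f"OK - Certificate '{host}' will expire on "
            f"{expiry:%a %d %b %Y %H:%M:%S} GMT +0000.")
```

The RECOVERY lines above correspond to the OK branch firing after the certs in the ssl repo were renewed.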
[07:13:31] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [09:20:56] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [09:20:56] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.001394987106323 seconds [09:27:58] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. 
[09:42:28] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [09:42:28] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.00213646888733 seconds [09:49:30] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [11:29:22] PROBLEM - mw8 Current Load on mw8 is WARNING: WARNING - load average: 7.33, 6.56, 4.43 [11:31:22] RECOVERY - mw8 Current Load on mw8 is OK: OK - load average: 4.33, 5.67, 4.36 [11:58:59] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [11:58:59] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.009249687194824 seconds [12:06:07] PROBLEM - zw.fontainebleau-avon.fr - 
reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [12:38:49] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.57, 3.46, 3.02 [12:40:45] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 2.52, 3.15, 2.96 [12:41:48] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [12:41:48] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.001426935195923 seconds [12:49:05] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. 
[12:56:30] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [12:56:31] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 900, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.000078916549683 seconds [13:03:32] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [15:02:58] [mw-config] CloakSelf opened pull request #4200: Add FameData and WikiData to import - https://git.io/JPQIo [15:03:55] miraheze/mw-config - CloakSelf the build has errored. 
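(Editor's note: the flapping zw.fontainebleau-avon.fr alerts above show two distinct failure modes: when the upstream resolver times out, check_reverse_dns.py crashes with a dnspython `Timeout` traceback and Icinga reports WARNING; when the domain's nameservers answer but fail the query, the plugin itself reports CRITICAL. A sketch of that decision logic with injected lookup functions — the function names, exception class, and expected-suffix check are illustrative assumptions, not the real plugin:)

```python
# Sketch of the decision logic behind the check_reverse_dns alerts above.
# The forward/reverse lookups are injected so the failure modes can be
# shown without a live resolver; the real plugin uses dns.resolver and
# lets dns.exception.Timeout escape as the traceback seen in the log.

class AllNameserversFailed(Exception):
    """Stand-in for dnspython's 'all nameservers failed' condition."""

def check_reverse_dns(hostname, resolve_a, resolve_ptr):
    """Return a Nagios-style status for hostname's reverse DNS.

    resolve_a(hostname) -> IP address string
    resolve_ptr(ip)     -> rDNS hostname string
    """
    try:
        ip = resolve_a(hostname)
        rdns = resolve_ptr(ip)
    except TimeoutError as exc:
        # The real plugin crashes here; Icinga surfaces it as WARNING.
        return f"WARNING - {exc}"
    except AllNameserversFailed:
        return (f"CRITICAL - rDNS CRITICAL - {hostname} "
                "All nameservers failed to answer the query.")
    if rdns.endswith("miraheze.org"):  # hypothetical expected suffix
        return f"OK - rDNS OK - {hostname} resolves to {rdns}"
    return f"CRITICAL - rDNS CRITICAL - {hostname} does not point to a miraheze.org host"
```

The alternation between WARNING and CRITICAL in the log then just reflects which error the domain's broken DNS produced on each polling cycle.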
[15:06:33] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.95, 3.47, 3.08 [15:08:29] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 3.23, 3.26, 3.04 [15:12:12] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [15:12:12] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.00153875350952 seconds [15:19:14] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [15:52:34] ^C5,12colored text and background^C [15:52:37] Oops. 
[16:06:56] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.91, 3.41, 3.18 [16:08:52] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.29, 3.67, 3.31 [16:10:48] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.84, 3.62, 3.33 [16:14:39] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.40, 3.92, 3.50 [16:16:13] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [16:16:14] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.000290870666504 seconds [16:23:19] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. 
[16:28:10] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 2.64, 3.52, 3.67 [16:33:57] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.54, 3.85, 3.75 [16:35:56] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 2.38, 3.31, 3.56 [16:37:54] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [16:37:54] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.015114307403564 seconds [16:37:56] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.25, 3.59, 3.62 [16:41:56] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 2.52, 3.10, 3.42 [16:43:56] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 2.51, 2.98, 3.35 [16:44:56] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. 
[16:52:49] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.85, 3.72, 3.50 [16:54:45] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 2.41, 3.19, 3.32 [17:05:04] [CreateWiki] Universal-Omega edited pull request #256: Fix notifications and add NotificationsManager service - https://git.io/JP1fg [17:10:20] [CreateWiki] supertassu reviewed pull request #256 commit - https://git.io/JPQ6b [17:10:21] [CreateWiki] supertassu reviewed pull request #256 commit - https://git.io/JPQ6N [17:16:42] [puppet] Kelvs599 opened pull request #2104: Add player.vimeo.com and docs.google.com to frame-src - https://git.io/JPQPv [17:35:13] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [17:35:14] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.003711938858032 seconds [17:42:34] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. 
[17:50:00] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is WARNING: Traceback (most recent call last): File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 148, in main() File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 116, in main rdns_hostname = get_reverse_dnshostname(args.hostname) File "/usr/lib/nagios/plugins/check_reverse_dns.py", line 101, in get_reverse_dnshostname resolved_ip_addr = str(dns_resolver.quer [17:50:00] tname, 'A')[0]) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 992, in query timeout = self._compute_timeout(start, lifetime) File "/usr/lib/python3/dist-packages/dns/resolver.py", line 799, in _compute_timeout raise Timeout(timeout=duration)dns.exception.Timeout: The DNS operation timed out after 30.006683826446533 seconds [17:55:30] Reception123: pls pull that [17:57:05] PROBLEM - zw.fontainebleau-avon.fr - reverse DNS on sslhost is CRITICAL: rDNS CRITICAL - zw.fontainebleau-avon.fr All nameservers failed to answer the query. [17:57:43] Hm? Which one? [18:00:45] Reception123: the one flapping every 5 minutes for the last day [18:00:54] http://zw.fontainebleau-avon.fr [18:01:16] DNS server is 50% not resolving and 50% not responding [18:10:18] ah that's what you meant by pull [18:10:37] I was indeed going to deal with that this evening [18:11:25] [miraheze/ssl] Reception123 pushed 1 commit to master [+0/-1/±1] https://git.io/JPQ7C [18:11:26] [miraheze/ssl] Reception123 dbe19ce - remove zw.fontainebleau-avon.fr cert (not pointing) [18:22:43] PROBLEM - mw9 Puppet on mw9 is CRITICAL: CRITICAL: Puppet has 1 failures. Last run 2 minutes ago with 1 failures. 
Failed resources (up to 3 shown): File[zw.fontainebleau-avon.fr] [18:50:09] RECOVERY - mw9 Puppet on mw9 is OK: OK: Puppet is currently enabled, last run 52 seconds ago with 0 failures [19:33:18] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.67, 3.24, 2.97 [19:35:17] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 3.15, 3.31, 3.04 [19:37:38] [miraheze/mediawiki] dependabot[bot] pushed 1 commit to dependabot/npm_and_yarn/validator-13.7.0 [+0/-0/±1] https://git.io/JP7Uq [19:37:39] [miraheze/mediawiki] dependabot[bot] 94a72ea - Bump validator from 13.5.2 to 13.7.0 [19:37:41] [mediawiki] dependabot[bot] opened pull request #3974: Bump validator from 13.5.2 to 13.7.0 - https://git.io/JP7Um [19:37:42] [mediawiki] dependabot[bot] created branch dependabot/npm_and_yarn/validator-13.7.0 - https://git.io/vbL5b [19:37:44] [mediawiki] dependabot[bot] labeled pull request #3974: Bump validator from 13.5.2 to 13.7.0 - https://git.io/JP7Um [19:37:47] [mediawiki] dependabot[bot] labeled pull request #3974: Bump validator from 13.5.2 to 13.7.0 - https://git.io/JP7Um [19:39:14] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.87, 3.66, 3.25 [19:47:11] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 2.21, 2.97, 3.14 [19:51:08] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.15, 3.75, 3.43 [19:51:31] Hi 😊👋 [19:53:07] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.74, 3.84, 3.51 [19:57:06] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 3.12, 3.29, 3.36 [19:58:04] Hi [19:58:54] What's up RhinosF1? [19:59:52] Sleepy [20:01:15] Ah, just right away [20:01:30] How was today for you? 
RhinosF1 [20:04:16] Not as stressful as minds [20:04:23] Monday + Tuesday [20:05:59] Oh yeah, I don't understand though, but I gerrit 🙂 [20:17:54] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.37, 3.75, 3.28 [20:21:52] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 2.66, 3.61, 3.37 [20:23:51] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.29, 3.77, 3.45 [20:25:51] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.74, 3.65, 3.44 [20:27:50] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 2.76, 3.32, 3.34 [20:52:36] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.61, 2.96, 3.02 [20:54:35] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 2.86, 2.95, 3.01 [21:12:24] [CreateWiki] Universal-Omega synchronize pull request #256: Fix notifications and add NotificationsManager service - https://git.io/JP1fg [21:12:31] [CreateWiki] Universal-Omega synchronize pull request #256: Fix notifications and add NotificationsManager service - https://git.io/JP1fg [21:13:32] [CreateWiki] Universal-Omega reviewed pull request #256 commit - https://git.io/JP7cZ [21:13:45] [CreateWiki] Universal-Omega reviewed pull request #256 commit - https://git.io/JP7cC [21:18:43] miraheze/CreateWiki - Universal-Omega the build has errored. [21:22:21] PROBLEM - mon2 Current Load on mon2 is CRITICAL: CRITICAL - load average: 4.17, 3.61, 3.12 [21:23:06] miraheze/CreateWiki - Universal-Omega the build passed. 
[21:26:19] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.77, 3.73, 3.29 [21:28:18] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 3.12, 3.32, 3.18 [21:34:12] PROBLEM - cp14 Current Load on cp14 is WARNING: WARNING - load average: 1.10, 1.75, 1.10 [21:36:12] RECOVERY - cp14 Current Load on cp14 is OK: OK - load average: 0.38, 1.28, 1.01 [21:44:49] @CosmicAlpha the Wikibase stuff is still pending; if this could be resolved, it'll be for the benefit of future wikis that'd be using Wikibase. I thought you said you had an idea...? [21:45:12] [mediawiki] Universal-Omega closed pull request #3974: Bump validator from 13.5.2 to 13.7.0 - https://git.io/JP7Um [21:45:14] [mediawiki] dependabot[bot] commented on pull request #3974: Bump validator from 13.5.2 to 13.7.0 - https://git.io/JP78w [21:45:16] [mediawiki] Universal-Omega deleted branch dependabot/npm_and_yarn/validator-13.7.0 - https://git.io/vbL5b [21:45:17] [miraheze/mediawiki] Universal-Omega deleted branch dependabot/npm_and_yarn/validator-13.7.0 [21:45:45] Joseph: my idea won't work. [21:45:59] what's it though? [21:47:02] specialSiteLinkGroups, but setting another site link group would mess up the Wikibase sites table, I think. [21:47:46] like removing the current one and replacing it with specialSiteLinkGroups? [21:49:49] Adding one, or replacing it, would break it. For specialSiteLinkGroups to work, the key `special` would need to be added to siteLinkGroups, and that's what wouldn't work. [21:52:07] And you currently can't think of any other way? [21:54:58] Not that I know of. 
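[Editor's sketch of the specialSiteLinkGroups exchange above: a hypothetical LocalSettings.php fragment, not actual Miraheze configuration; group names are invented for illustration. Wikibase reads both options from $wgWBRepoSettings.]

```php
// HYPOTHETICAL example only — group names are made up.
// Wikibase renders item site links per group listed here:
$wgWBRepoSettings['siteLinkGroups'] = [
    'examplewiki',  // this wiki's own group in the sites table
    'special',      // the key that would need to be added, per the chat,
                    // for specialSiteLinkGroups to take effect
];

// Groups listed here are folded into a single "special" section on
// item pages instead of each getting its own section:
$wgWBRepoSettings['specialSiteLinkGroups'] = [ 'commons' ];
```

[As the chat notes, adding or replacing a group has to stay consistent with the rows already in the Wikibase sites table, which is the part said to break.]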
[21:56:13] We'd definitely look for a way out though [21:56:26] [CreateWiki] Universal-Omega edited pull request #255: Use settings log if there is more than one change - https://git.io/JPPVM [22:03:51] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 28.09, 23.79, 18.19 [22:03:51] PROBLEM - graylog2 Current Load on graylog2 is CRITICAL: CRITICAL - load average: 4.37, 3.19, 2.10 [22:04:08] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 5.95, 5.21, 3.65 [22:05:46] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.43, 22.30, 18.31 [22:05:51] RECOVERY - graylog2 Current Load on graylog2 is OK: OK - load average: 2.19, 2.83, 2.11 [22:07:41] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 30.62, 25.50, 19.92 [22:10:08] PROBLEM - gluster3 Current Load on gluster3 is CRITICAL: CRITICAL - load average: 6.16, 5.85, 4.45 [22:19:01] PROBLEM - ns1 GDNSD Datacenters on ns1 is CRITICAL: CRITICAL - 1 datacenter is down: 2001:41d0:801:2000::58af/cpweb [22:20:33] PROBLEM - ns2 GDNSD Datacenters on ns2 is CRITICAL: CRITICAL - 7 datacenters are down: 51.38.69.175/cpweb, 54.38.211.199/cpweb, 2001:41d0:801:2000::58af/cpweb, 2001:41d0:800:170b::5/cpweb, 51.222.25.132/cpweb, 167.114.2.161/cpweb, 2607:5300:201:3100::1d3/cpweb [22:20:38] PROBLEM - graylog2 Current Load on graylog2 is CRITICAL: CRITICAL - load average: 4.66, 3.50, 2.80 [22:21:01] RECOVERY - ns1 GDNSD Datacenters on ns1 is OK: OK - all datacenters are online [22:22:33] RECOVERY - ns2 GDNSD Datacenters on ns2 is OK: OK - all datacenters are online [22:22:34] PROBLEM - graylog2 Current Load on graylog2 is WARNING: WARNING - load average: 3.90, 3.61, 2.93 [22:26:27] RECOVERY - graylog2 Current Load on graylog2 is OK: OK - load average: 2.78, 3.26, 2.94 [22:48:08] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 1.62, 3.84, 5.47 [22:49:07] PROBLEM - cloud4 Current Load on 
cloud4 is WARNING: WARNING - load average: 14.67, 18.69, 23.80 [22:50:08] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 2.04, 3.21, 5.04 [22:51:08] PROBLEM - cloud4 Current Load on cloud4 is CRITICAL: CRITICAL - load average: 23.55, 21.02, 24.06 [22:54:08] PROBLEM - gluster3 Current Load on gluster3 is WARNING: WARNING - load average: 4.75, 5.17, 5.50 [22:57:08] PROBLEM - cloud4 Current Load on cloud4 is WARNING: WARNING - load average: 19.85, 21.91, 23.75 [23:00:08] RECOVERY - gluster3 Current Load on gluster3 is OK: OK - load average: 2.77, 4.24, 5.05 [23:09:08] RECOVERY - cloud4 Current Load on cloud4 is OK: OK - load average: 14.69, 17.22, 20.26 [23:14:29] PROBLEM - mw11 Current Load on mw11 is WARNING: WARNING - load average: 6.52, 7.06, 5.34 [23:16:25] RECOVERY - mw11 Current Load on mw11 is OK: OK - load average: 3.51, 5.86, 5.11 [23:21:36] PROBLEM - mw9 Current Load on mw9 is CRITICAL: CRITICAL - load average: 14.47, 9.21, 6.56 [23:23:34] PROBLEM - mw9 Current Load on mw9 is WARNING: WARNING - load average: 5.30, 7.42, 6.23 [23:25:32] RECOVERY - mw9 Current Load on mw9 is OK: OK - load average: 4.18, 6.31, 5.96 [23:30:15] I have concerns about password storage [23:35:17] PROBLEM - mon2 Current Load on mon2 is WARNING: WARNING - load average: 3.67, 3.38, 2.96 [23:37:16] RECOVERY - mon2 Current Load on mon2 is OK: OK - load average: 1.97, 2.92, 2.85 [23:41:06] on [[Tech:Compromised_Handling]], some services have passwords saved, and if someone manages to somehow get access, they can compromise certain accounts at will. my particular concern surrounds data breaches. [23:41:07] https://meta.miraheze.org/wiki/Tech:Compromised_Handling [23:41:07] [url] Tech:Compromised Handling - Miraheze Meta | meta.miraheze.org [23:42:54] Could you elaborate? 
[23:42:54] All services require passwords, but we take steps to prevent these passwords from ever being leaked and getting into the wrong hands [23:44:50] The Twitch incident worries me, that a big company got info leaked, so what about a small one that prob has less security compared to Twitch. [23:46:19] Under Notification of Users, it says something like this: "Under EU law, we're required to notify all European users in the event of a breach of their personal data within 72 hours of discovery of the breach. Since we never bother to geolocate people, assume that all users are European and do the right thing. Notification steps should depend on the extent of the breach, and what we discover in our investigation. If we [23:46:19] determine that no PII has been compromised, writing up an incident report on Meta is enough. If we determine that a decent section of users have had their information compromised, run a sitenotice. If it happened to be restricted to a few wikis, we can run it there, otherwise, do a global sitenotice. Link to the incident report. If we can identify individual users who have had their PII compromised, go ahead and send them an email if they [23:46:20] ever gave us their address. If we have a lot of emails to write (hopefully not), prepare a mass email. Use the bcc field as to not make things worse. External notification methods should be considered too, like Twitter." (Yes, I pretty much copied it from that link @Joseph Bukkit has sent here.) [23:48:35] Security breaches happen, but all companies try to take preventive steps to stop them from happening. Twitch wasn't completely hacked; my understanding is that a server was misconfigured and allowed anyone access to certain info. It all boils down to security practices: a company can be as big as Apple/Microsoft and have the poorest security practices, while a small company, implementing the correct security practices, [23:48:35] may have even better security than a giant. 
[23:54:57] PROBLEM - gluster4 Disk Space on gluster4 is WARNING: DISK WARNING - free space: / 117628 MB (10% inode=79%);
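[Editor's note: the load alerts throughout this log all follow the fixed shape "load average: 1min, 5min, 15min". As a rough illustration of how such lines could be parsed and classified — this is not Miraheze's actual monitoring code, and the thresholds below are assumptions chosen only to match the examples, not the real Icinga configuration:]

```python
import re

# Matches the "load average: a, b, c" tail of an Icinga-style check line.
LOAD_RE = re.compile(r"load average: ([\d.]+), ([\d.]+), ([\d.]+)")

def classify(line, warn=3.5, crit=4.0):
    """Classify a check line by its 1-minute load average.

    warn/crit are illustrative thresholds, not the real configuration.
    """
    m = LOAD_RE.search(line)
    if m is None:
        raise ValueError("no load average found in line")
    one_minute = float(m.group(1))
    if one_minute >= crit:
        return "CRITICAL"
    if one_minute >= warn:
        return "WARNING"
    return "OK"

# Example lines taken from the log above:
print(classify("OK - load average: 2.21, 2.97, 3.14"))        # OK
print(classify("CRITICAL - load average: 4.15, 3.75, 3.43"))  # CRITICAL
```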