[00:01:49] (03Merged) 10jenkins-bot: Branch commit for wmf/branch_cut_pretest [core] (wmf/branch_cut_pretest) - 10https://gerrit.wikimedia.org/r/1026893 (owner: 10TrainBranchBot) [01:05:36] (03PS17) 10Winston Sung: Add DEPRECATED_LANGUAGE_CODE_MAPPING to wgInterlanguageLinkCodeMap [mediawiki-config] - 10https://gerrit.wikimedia.org/r/558052 (https://phabricator.wikimedia.org/T248352) (owner: 10Fomafix) [01:17:16] PROBLEM - Check if Pybal has been restarted after pybal.conf was changed on lvs7003 is CRITICAL: CRITICAL: Service pybal.service has not been restarted after /etc/pybal/pybal.conf was changed (gt 4h). https://wikitech.wikimedia.org/wiki/PyBal%23Pybal_service_has_not_been_restarted [01:20:08] PROBLEM - Check if Pybal has been restarted after pybal.conf was changed on lvs7001 is CRITICAL: CRITICAL: Service pybal.service has not been restarted after /etc/pybal/pybal.conf was changed (gt 4h). https://wikitech.wikimedia.org/wiki/PyBal%23Pybal_service_has_not_been_restarted [01:22:10] FIRING: [2x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed [01:23:29] !log lvs7003 - restart pybal [01:23:30] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log [01:23:36] RECOVERY - PyBal IPVS diff check on lvs7003 is OK: OK: no difference between hosts in IPVS/PyBal https://wikitech.wikimedia.org/wiki/PyBal [01:24:26] RECOVERY - PyBal connections to etcd on lvs7003 is OK: OK: 16 connections established with conf1009.eqiad.wmnet:4001 (min=16) https://wikitech.wikimedia.org/wiki/PyBal [01:24:59] !log lvs7001 - restart pybal [01:25:00] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log [01:27:18] RECOVERY - PyBal connections to etcd on lvs7001 is OK: OK: 12 connections established with conf1009.eqiad.wmnet:4001 (min=12) 
https://wikitech.wikimedia.org/wiki/PyBal [01:28:42] RECOVERY - Check if Pybal has been restarted after pybal.conf was changed on lvs7001 is OK: OK: pybal.service was restarted after /etc/pybal/pybal.conf was changed. https://wikitech.wikimedia.org/wiki/PyBal%23Pybal_service_has_not_been_restarted [01:28:42] RECOVERY - Check if Pybal has been restarted after pybal.conf was changed on lvs7003 is OK: OK: pybal.service was restarted after /etc/pybal/pybal.conf was changed. https://wikitech.wikimedia.org/wiki/PyBal%23Pybal_service_has_not_been_restarted [01:29:26] RECOVERY - PyBal IPVS diff check on lvs7001 is OK: OK: no difference between hosts in IPVS/PyBal https://wikitech.wikimedia.org/wiki/PyBal [01:33:04] !log bblack@cumin1002 conftool action : set/weight=100; selector: name=dns7.* [02:40:12] FIRING: [5x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable [02:53:21] (03Abandoned) 10Ryan Kemper: wdqs: enable nfs data reloads on wdqs1021 [puppet] - 10https://gerrit.wikimedia.org/r/1026668 (https://phabricator.wikimedia.org/T362920) (owner: 10Ryan Kemper) [03:00:12] FIRING: [5x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable [03:02:10] FIRING: [2x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed [03:06:11] !log Enable log level DEBUG for curator on logstash2026 - T364190 [03:06:13] Logged the message at 
https://wikitech.wikimedia.org/wiki/Server_Admin_Log [03:06:14] T364190: Curator Failed to complete action: replicas - https://phabricator.wikimedia.org/T364190 [03:07:02] !log Restarting `curator_actions_cluster_wide.service` to log with DEBUG level on logstash2026 - T364190 [03:07:04] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log [05:02:10] FIRING: [2x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed [05:25:40] PROBLEM - mailman list info on lists1001 is CRITICAL: CRITICAL - Socket timeout after 10 seconds https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [05:27:32] RECOVERY - mailman list info on lists1001 is OK: HTTP OK: HTTP/1.1 200 OK - 8616 bytes in 0.284 second response time https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [06:04:21] FIRING: PoolcounterFullQueues: Full queues for poolcounter1004:9106 poolcounter - https://www.mediawiki.org/wiki/PoolCounter#Request_tracing_in_production - https://grafana.wikimedia.org/d/aIcYxuxZk/poolcounter?orgId=1&viewPanel=6&from=now-1h&to=now&var-dc=eqiad%20prometheus/ops - https://alerts.wikimedia.org/?q=alertname%3DPoolcounterFullQueues [06:09:21] RESOLVED: PoolcounterFullQueues: Full queues for poolcounter1004:9106 poolcounter - https://www.mediawiki.org/wiki/PoolCounter#Request_tracing_in_production - https://grafana.wikimedia.org/d/aIcYxuxZk/poolcounter?orgId=1&viewPanel=6&from=now-1h&to=now&var-dc=eqiad%20prometheus/ops - https://alerts.wikimedia.org/?q=alertname%3DPoolcounterFullQueues [06:57:48] (03PS1) 10Addshore: gitlab::runner::allowed_images update dependabot locations [puppet] - 10https://gerrit.wikimedia.org/r/1027060 [07:00:04] Deploy window No deploys all day! See Deployments/Emergencies if things are broken. 
(https://wikitech.wikimedia.org/wiki/Deployments#deploycal-item-20240504T0700) [07:00:40] easy one that I'd love for the hackathon :) https://gerrit.wikimedia.org/r/c/operations/puppet/+/1027060 [07:01:27] FIRING: [4x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable [07:09:36] (03PS1) 10Majavah: maintain-views: Redact entire logging_logindex rows without a target [puppet] - 10https://gerrit.wikimedia.org/r/1027062 (https://phabricator.wikimedia.org/T363633) [07:15:26] (03CR) 10JJMC89: [C:03+1] maintain-views: Redact entire logging_logindex rows without a target [puppet] - 10https://gerrit.wikimedia.org/r/1027062 (https://phabricator.wikimedia.org/T363633) (owner: 10Majavah) [07:29:39] (03CR) 10Majavah: [C:03+2] maintain-views: Redact entire logging_logindex rows without a target [puppet] - 10https://gerrit.wikimedia.org/r/1027062 (https://phabricator.wikimedia.org/T363633) (owner: 10Majavah) [07:33:07] !log taavi@cumin1002 START - Cookbook sre.wikireplicas.update-views [07:39:43] !log taavi@cumin1002 END (PASS) - Cookbook sre.wikireplicas.update-views (exit_code=0) [07:42:21] (03CR) 10Brennen Bearnes: [C:03+1] gitlab::runner::allowed_images update dependabot locations [puppet] - 10https://gerrit.wikimedia.org/r/1027060 (owner: 10Addshore) [07:42:30] (03CR) 10Majavah: [C:03+2] gitlab::runner::allowed_images update dependabot locations [puppet] - 10https://gerrit.wikimedia.org/r/1027060 (owner: 10Addshore) [07:43:02] Ty [07:52:11] FIRING: [2x] RoutinatorRsyncErrors: Routinator rsync fetching issue in codfw - https://wikitech.wikimedia.org/wiki/RPKI#RSYNC_status - https://grafana.wikimedia.org/d/UwUa77GZk/rpki - https://alerts.wikimedia.org/?q=alertname%3DRoutinatorRsyncErrors [08:18:50] PROBLEM - mailman list info on lists1001 is CRITICAL: CRITICAL - 
Socket timeout after 10 seconds https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [08:19:36] PROBLEM - mailman archives on lists1001 is CRITICAL: CRITICAL - Socket timeout after 10 seconds https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [08:21:06] PROBLEM - mailman list info ssl expiry on lists1001 is CRITICAL: CRITICAL - Socket timeout after 10 seconds https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [08:22:58] RECOVERY - mailman list info ssl expiry on lists1001 is OK: OK - Certificate lists.wikimedia.org will expire on Fri 14 Jun 2024 01:28:50 AM GMT +0000. https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [08:26:28] RECOVERY - mailman archives on lists1001 is OK: HTTP OK: HTTP/1.1 200 OK - 51923 bytes in 0.100 second response time https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [08:26:44] RECOVERY - mailman list info on lists1001 is OK: HTTP OK: HTTP/1.1 200 OK - 8616 bytes in 0.284 second response time https://wikitech.wikimedia.org/wiki/Mailman/Monitoring [09:02:10] FIRING: [2x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed [11:01:27] FIRING: [4x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable [11:52:11] FIRING: [2x] RoutinatorRsyncErrors: Routinator rsync fetching issue in codfw - https://wikitech.wikimedia.org/wiki/RPKI#RSYNC_status - https://grafana.wikimedia.org/d/UwUa77GZk/rpki - https://alerts.wikimedia.org/?q=alertname%3DRoutinatorRsyncErrors [12:48:45] (03CR) 10Cathal Mooney: [C:03+1] sites: update installserver for magru [homer/public] - 10https://gerrit.wikimedia.org/r/1026945 
(https://phabricator.wikimedia.org/T346722) (owner: 10Ssingh) [13:02:10] FIRING: [2x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed [13:16:15] FIRING: PHPFPMTooBusy: Not enough idle PHP-FPM workers for Mediawiki mw-parsoid at eqiad: 37.84% idle - https://bit.ly/wmf-fpmsat - https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=84&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid&var-container_name=All - https://alerts.wikimedia.org/?q=alertname%3DPHPFPMTooBusy [13:16:15] FIRING: MediaWikiLatencyExceeded: p75 latency high: eqiad mw-parsoid (k8s) 1.722s - https://wikitech.wikimedia.org/wiki/Application_servers/Runbook#Average_latency_exceeded - https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=55&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid - https://alerts.wikimedia.org/?q=alertname%3DMediaWikiLatencyExceeded [13:19:57] FIRING: ProbeDown: Service eventgate-main:4492 has failed probes (http_eventgate-main_ip4) #page - https://wikitech.wikimedia.org/wiki/Runbook#eventgate-main:4492 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/service&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown [13:20:15] FIRING: [2x] MediaWikiHighErrorRate: Elevated rate of MediaWiki errors - kube-mw-parsoid - https://wikitech.wikimedia.org/wiki/Application_servers/Runbook - https://grafana.wikimedia.org/d/000000438/mediawiki-exceptions-alerts?panelId=18&fullscreen&orgId=1&var-datasource=eqiad%20prometheus/ops - https://alerts.wikimedia.org/?q=alertname%3DMediaWikiHighErrorRate [13:21:15] RESOLVED: PHPFPMTooBusy: Not enough idle PHP-FPM workers for Mediawiki mw-parsoid at eqiad: 37.84% idle - https://bit.ly/wmf-fpmsat - 
https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=84&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid&var-container_name=All - https://alerts.wikimedia.org/?q=alertname%3DPHPFPMTooBusy [13:22:10] oof. [13:22:15] FIRING: PHPFPMTooBusy: Not enough idle PHP-FPM workers for Mediawiki mw-parsoid at eqiad: 27.48% idle - https://bit.ly/wmf-fpmsat - https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=84&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid&var-container_name=All - https://alerts.wikimedia.org/?q=alertname%3DPHPFPMTooBusy [13:22:51] FIRING: SwaggerProbeHasFailures: Not all openapi/swagger endpoints returned healthy - https://wikitech.wikimedia.org/wiki/Runbook#https://eventgate-main.svc.eqiad.wmnet:4492 - https://grafana.wikimedia.org/d/_77ik484k/openapi-swagger-endpoint-state?var-site=eqiad - https://alerts.wikimedia.org/?q=alertname%3DSwaggerProbeHasFailures [13:22:59] eventgate main pods are in crashloop backoff [13:23:21] !incidents [13:23:22] 4651 (ACKED) ProbeDown sre (10.2.2.45 ip4 eventgate-main:4492 probes/service http_eventgate-main_ip4 eqiad) [13:25:07] o/ [13:25:15] RESOLVED: [4x] MediaWikiHighErrorRate: Elevated rate of MediaWiki errors - api_appserver - https://wikitech.wikimedia.org/wiki/Application_servers/Runbook - https://grafana.wikimedia.org/d/000000438/mediawiki-exceptions-alerts?panelId=18&fullscreen&orgId=1&var-datasource=eqiad%20prometheus/ops - https://alerts.wikimedia.org/?q=alertname%3DMediaWikiHighErrorRate [13:26:10] PROBLEM - PyBal backends health check on lvs1020 is CRITICAL: PYBAL CRITICAL - CRITICAL - eventgate-main_4492: Servers parse1011.eqiad.wmnet, mw1380.eqiad.wmnet, mw1492.eqiad.wmnet, kubernetes1025.eqiad.wmnet, mw1419.eqiad.wmnet, mw1434.eqiad.wmnet, mw1479.eqiad.wmnet, mw1430.eqiad.wmnet, mw1415.eqiad.wmnet, mw1480.eqiad.wmnet, parse1009.eqiad.wmnet, mw1405.eqiad.wmnet, mw1425.eqiad.wmnet, mw1399.eqiad.wmnet, mw1391.eqiad.wmnet, 
mw1435.eq [13:26:10] t, mw1488.eqiad.wmnet, mw1454.eqiad.wmnet, mw1408.eqiad.wmnet, mw1370.eqiad.wmnet, kubernetes1017.eqiad.wmnet, kubernetes1050.eqiad.wmnet, kubernetes1012.eqiad.wmnet, mw1465.eqiad.wmnet, kubernetes1014.eqiad.wmnet, kubernetes1018.eqiad.wmnet, mw1369.eqiad.wmnet, mw1486.eqiad.wmnet, mw1360.eqiad.wmnet, mw1356.eqiad.wmnet, mw1483.eqiad.wmnet, mw1458.eqiad.wmnet, mw1468.eqiad.wmnet, kubernetes1028.eqiad.wmnet, kubernetes1015.eqiad.wmnet, kub [13:26:10] 031.eqiad.wmnet, kubernetes1024.eqiad.wmnet, parse1019.eqiad.wmnet, mw1381.eqiad.wmnet, parse1021.eqiad.wmnet, kubernetes1042.eqiad.wmnet, kubernetes1056.eqiad.wmnet, mw1441.eqiad.wmnet https://wikitech.wikimedia.org/wiki/PyBal [13:26:10] PROBLEM - PyBal backends health check on lvs1019 is CRITICAL: PYBAL CRITICAL - CRITICAL - eventgate-main_4492: Servers kubernetes1010.eqiad.wmnet, parse1011.eqiad.wmnet, parse1013.eqiad.wmnet, kubernetes1041.eqiad.wmnet, mw1442.eqiad.wmnet, mw1434.eqiad.wmnet, mw1479.eqiad.wmnet, mw1470.eqiad.wmnet, mw1415.eqiad.wmnet, mw1388.eqiad.wmnet, mw1480.eqiad.wmnet, parse1009.eqiad.wmnet, mw1405.eqiad.wmnet, kubernetes1050.eqiad.wmnet, kubernetes [13:26:10] ad.wmnet, mw1435.eqiad.wmnet, mw1424.eqiad.wmnet, mw1395.eqiad.wmnet, mw1488.eqiad.wmnet, mw1454.eqiad.wmnet, parse1005.eqiad.wmnet, mw1389.eqiad.wmnet, kubernetes1017.eqiad.wmnet, mw1425.eqiad.wmnet, kubernetes1012.eqiad.wmnet, mw1465.eqiad.wmnet, kubernetes1033.eqiad.wmnet, mw1483.eqiad.wmnet, mw1369.eqiad.wmnet, kubernetes1059.eqiad.wmnet, mw1469.eqiad.wmnet, kubernetes1005.eqiad.wmnet, mw1486.eqiad.wmnet, mw1360.eqiad.wmnet, kubernete [13:26:10] iad.wmnet, mw1458.eqiad.wmnet, parse1012.eqiad.wmnet, mw1453.eqiad.wmnet, mw1468.eqiad.wmnet, kubernetes1008.eqiad.wmnet, mw1464.eqiad.wmnet, parse1019.eqiad.wmnet, kubernetes1042.eqiad https://wikitech.wikimedia.org/wiki/PyBal [13:27:15] FIRING: MediaWikiHighErrorRate: Elevated rate of MediaWiki errors - kube-mw-api-ext - 
https://wikitech.wikimedia.org/wiki/Application_servers/Runbook - https://grafana.wikimedia.org/d/000000438/mediawiki-exceptions-alerts?panelId=18&fullscreen&orgId=1&var-datasource=eqiad%20prometheus/ops - https://alerts.wikimedia.org/?q=alertname%3DMediaWikiHighErrorRate [13:27:15] RESOLVED: PHPFPMTooBusy: Not enough idle PHP-FPM workers for Mediawiki mw-parsoid at eqiad: 31.16% idle - https://bit.ly/wmf-fpmsat - https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=84&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid&var-container_name=All - https://alerts.wikimedia.org/?q=alertname%3DPHPFPMTooBusy [13:28:27] it seems eventgate pods (and probably others?) are a bit struggling with too much traffic https://grafana.wikimedia.org/d/ZB39Izmnz/eventgate?orgId=1&refresh=30s&var-service=eventgate-main&var-stream=All&var-kafka_broker=All&var-kafka_producer_type=All&var-dc=thanos&var-site=All&from=now-6h&to=now [13:28:27] and https://grafana.wikimedia.org/d/-D2KNUEGk/kubernetes-pod-details?orgId=1&var-datasource=eqiad%20prometheus%2Fk8s&var-namespace=eventgate-main&var-pod=All&var-container=All&from=now-3h&to=now [13:29:26] (03PS1) 10Zabe: db-production: Generate sectionsByDB on the fly [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027148 [13:29:51] FIRING: ATSBackendErrorsHigh: ATS: elevated 5xx errors from restbase.discovery.wmnet #page - https://wikitech.wikimedia.org/wiki/Apache_Traffic_Server#Debugging - https://grafana.wikimedia.org/d/1T_4O08Wk/ats-backends-origin-servers-overview?orgId=1&viewPanel=12&var-site=esams&var-cluster=text&var-origin=restbase.discovery.wmnet - https://alerts.wikimedia.org/?q=alertname%3DATSBackendErrorsHigh [13:30:47] (03CR) 10CI reject: [V:04-1] db-production: Generate sectionsByDB on the fly [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027148 (owner: 10Zabe) [13:31:08] jelto: let's bump the number of pods? 
[13:31:15] RESOLVED: MediaWikiLatencyExceeded: p75 latency high: eqiad mw-parsoid (k8s) 815.8ms - https://wikitech.wikimedia.org/wiki/Application_servers/Runbook#Average_latency_exceeded - https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=55&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid - https://alerts.wikimedia.org/?q=alertname%3DMediaWikiLatencyExceeded [13:32:10] we can try that, although the logs mention some error "1 out of 1 events had failures and were not accepted. (0 invalid and 1 errored)". So not sure if that helps [13:32:15] RESOLVED: MediaWikiHighErrorRate: Elevated rate of MediaWiki errors - kube-mw-api-ext - https://wikitech.wikimedia.org/wiki/Application_servers/Runbook - https://grafana.wikimedia.org/d/000000438/mediawiki-exceptions-alerts?panelId=18&fullscreen&orgId=1&var-datasource=eqiad%20prometheus/ops - https://alerts.wikimedia.org/?q=alertname%3DMediaWikiHighErrorRate [13:33:06] (03PS2) 10Zabe: db-production: Generate sectionsByDB on the fly [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027148 [13:33:15] FIRING: PHPFPMTooBusy: Not enough idle PHP-FPM workers for Mediawiki mw-parsoid at eqiad: 37.93% idle - https://bit.ly/wmf-fpmsat - https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=84&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid&var-container_name=All - https://alerts.wikimedia.org/?q=alertname%3DPHPFPMTooBusy [13:33:30] eventgate-main has 8 replicas currently, we could try bumping it to 12 or 14 and see what happens [13:33:51] but it seems they don't get any load at the moment [13:34:16] (because they are not getting traffic due to crash looping I think) [13:34:20] (03CR) 10Zabe: db-production: Generate sectionsByDB on the fly (031 comment) [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027148 (owner: 10Zabe) [13:34:51] FIRING: [6x] ATSBackendErrorsHigh: ATS: elevated 5xx errors from restbase.discovery.wmnet #page - 
https://wikitech.wikimedia.org/wiki/Apache_Traffic_Server#Debugging - https://alerts.wikimedia.org/?q=alertname%3DATSBackendErrorsHigh [13:35:16] https://grafana.wikimedia.org/d/ZB39Izmnz/eventgate?orgId=1&refresh=30s&var-service=eventgate-main&var-stream=All&var-kafka_broker=All&var-kafka_producer_type=All&var-dc=thanos&var-site=All&from=now-6h&to=now&viewPanel=71 looks like parsoid requests went up a lot ? [13:36:02] PROBLEM - IPv4 ping to eqsin on ripe-atlas-eqsin is CRITICAL: CRITICAL - failed 40 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [13:36:20] (03PS1) 10Jforrester: [WIP] Change static 'A Wikimedia project' icon to new one [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027150 (https://phabricator.wikimedia.org/T256190) [13:37:59] jelto: Amir1: o/ I've ~30min's only before I have to leave - did you scale up eventgate-main already? [13:38:12] nope [13:38:35] no not sure if that helps, parsoid also looks like a lot is going on https://grafana.wikimedia.org/d/t_x3DEu4k/parsoid-health?orgId=1&refresh=15m&from=now-6h&to=now [13:39:06] won't hurt. Doubled the number of replicas in eqiad to 16 [13:39:06] should I scale eventgate main? 
jayme [13:39:12] ok thanks [13:39:39] the new pods are in running state :) [13:39:43] but they're still getting oom killed more or less immediately [13:39:57] RESOLVED: ProbeDown: Service eventgate-main:4492 has failed probes (http_eventgate-main_ip4) #page - https://wikitech.wikimedia.org/wiki/Runbook#eventgate-main:4492 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/service&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown [13:40:09] oh, no...they dont [13:40:11] interesting [13:40:12] FIRING: ProbeDown: Service eventgate-main:4492 has failed probes (http_eventgate-main_ip4) - https://wikitech.wikimedia.org/wiki/Runbook#eventgate-main:4492 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/service&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown [13:40:35] !incidents [13:40:36] 4652 (ACKED) ATSBackendErrorsHigh cache_text sre (restbase.discovery.wmnet esams) [13:40:36] 4651 (RESOLVED) ProbeDown sre (10.2.2.45 ip4 eventgate-main:4492 probes/service http_eventgate-main_ip4 eqiad) [13:41:04] RECOVERY - IPv4 ping to eqsin on ripe-atlas-eqsin is OK: OK - failed 26 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [13:41:05] !log doubled the number of eventgate-main replicas in eqiad to 16 [13:41:06] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log [13:41:09] also some of the old pods are running again [13:41:10] RECOVERY - PyBal backends health check on lvs1020 is OK: PYBAL OK - All pools are healthy https://wikitech.wikimedia.org/wiki/PyBal [13:41:10] RECOVERY - PyBal backends health check on lvs1019 is OK: PYBAL OK - All pools are healthy https://wikitech.wikimedia.org/wiki/PyBal [13:41:36] RESOLVED: ProbeDown: Service eventgate-main:4492 has failed probes 
(http_eventgate-main_ip4) - https://wikitech.wikimedia.org/wiki/Runbook#eventgate-main:4492 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/service&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown [13:42:01] eventgate is getting a lot of traffic again [13:42:27] FIRING: [2x] ProbeDown: Service wdqs1017:443 has failed probes (http_wdqs_internal_sparql_endpoint_search_ip4) - https://wikitech.wikimedia.org/wiki/Runbook#wdqs1017:443 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/custom&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown [13:42:27] all refreshLinks it seems [13:42:51] RESOLVED: SwaggerProbeHasFailures: Not all openapi/swagger endpoints returned healthy - https://wikitech.wikimedia.org/wiki/Runbook#https://eventgate-main.svc.eqiad.wmnet:4492 - https://grafana.wikimedia.org/d/_77ik484k/openapi-swagger-endpoint-state?var-site=eqiad - https://alerts.wikimedia.org/?q=alertname%3DSwaggerProbeHasFailures [13:43:13] (03CR) 10Ladsgroup: "<3 <3 <3" [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027150 (https://phabricator.wikimedia.org/T256190) (owner: 10Jforrester) [13:43:15] RESOLVED: PHPFPMTooBusy: Not enough idle PHP-FPM workers for Mediawiki mw-parsoid at eqiad: 28.6% idle - https://bit.ly/wmf-fpmsat - https://grafana.wikimedia.org/d/U7JT--knk/mw-on-k8s?orgId=1&viewPanel=84&var-dc=eqiad%20prometheus/k8s&var-service=mediawiki&var-namespace=mw-parsoid&var-container_name=All - https://alerts.wikimedia.org/?q=alertname%3DPHPFPMTooBusy [13:43:46] now all pods eventgate pods are running again [13:44:11] The error I saw come from EventGate was "unable to load schema" [13:44:25] how did eventgate lose the ability to read schemas? 
[13:44:51] RESOLVED: [6x] ATSBackendErrorsHigh: ATS: elevated 5xx errors from restbase.discovery.wmnet #page - https://wikitech.wikimedia.org/wiki/Apache_Traffic_Server#Debugging - https://alerts.wikimedia.org/?q=alertname%3DATSBackendErrorsHigh [13:45:28] !incidents [13:45:28] 4652 (RESOLVED) ATSBackendErrorsHigh cache_text sre (restbase.discovery.wmnet esams) [13:45:29] 4651 (RESOLVED) ProbeDown sre (10.2.2.45 ip4 eventgate-main:4492 probes/service http_eventgate-main_ip4 eqiad) [13:45:40] thanks jayme for the quick help :) [13:45:45] cwhite: what did you see exactly and where? [13:46:31] jayme: https://logstash.wikimedia.org/goto/a18bc65ea58c2db6ed1d475d9f4f1ef0 [13:46:36] thanks [13:47:27] RESOLVED: [2x] ProbeDown: Service wdqs1017:443 has failed probes (http_wdqs_internal_sparql_endpoint_search_ip4) - https://wikitech.wikimedia.org/wiki/Runbook#wdqs1017:443 - https://grafana.wikimedia.org/d/O0nHhdhnz/network-probes-overview?var-job=probes/custom&var-module=All - https://alerts.wikimedia.org/?q=alertname%3DProbeDown [13:48:36] the interesting part there is maybe "connect ECONNREFUSED 127.0.0.1:6023" with 6023 being the service mesh port for the schema service [13:50:21] that's very interesting... [13:51:16] (03PS3) 10Zabe: db-production: Generate sectionsByDB on the fly [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027148 [13:52:28] that thing is an nginx serving static json schemas AIUI ... 
[13:55:57] might also be a red herring from all the restarts [13:56:52] It might be envoy circuit breaking when the service got slow to respond [13:57:03] https://grafana-rw.wikimedia.org/d/b1jttnFMz/envoy-telemetry-k8s?orgId=1&var-datasource=thanos&var-site=eqiad&var-prometheus=k8s&var-app=eventgate&var-kubernetes_namespace=eventgate-main&var-destination=schema&from=now-3h&to=now [13:57:34] that says schema latency increased dramatically [13:57:55] but that's not reflected here https://grafana-rw.wikimedia.org/d/VTCkm29Wz/envoy-telemetry?orgId=1&var-datasource=eqiad%20prometheus%2Fops&var-origin=eventschemas&var-origin_instance=All&var-destination=All&from=now-3h&to=now [14:01:00] Pretty much all the 500s we served during this event were mobileapps. [14:02:00] https://logstash.wikimedia.org/goto/62bfdb0ed803443ce1a6854ed724cb67 [14:09:40] so we just keep eventgate running with 16 replicas over the weekend and investigate more on Monday? Eventstreams traffic seems to have recovered to a normal baseline. [14:10:03] yeah, I'd keep it that way for now [14:10:28] I gtg ... have a good rest of the weekend. Hope Tallinn is nice and fun :) [14:11:02] thanks again, see you next week. [14:16:18] Thanks j.ayme! Enjoy your weekend! 
[14:36:27] FIRING: [5x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable [14:45:03] (03PS2) 10Jforrester: [WIP] Change static 'A Wikimedia project' icon to new one [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027150 (https://phabricator.wikimedia.org/T256190) [15:00:12] FIRING: [5x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable [15:14:59] (03PS1) 10Lucas Werkmeister (WMDE): Disable ParserMigration on commonswiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027194 (https://phabricator.wikimedia.org/T364228) [15:16:09] (03CR) 10Subramanya Sastry: [C:03+1] Disable ParserMigration on commonswiki [mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027194 (https://phabricator.wikimedia.org/T364228) (owner: 10Lucas Werkmeister (WMDE)) [15:18:04] PROBLEM - IPv4 ping to eqsin on ripe-atlas-eqsin is CRITICAL: CRITICAL - failed 39 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [15:23:02] RECOVERY - IPv4 ping to eqsin on ripe-atlas-eqsin is OK: OK - failed 30 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [15:27:58] (03CR) 10Jforrester: [C:03+1] "Reasonable fix, but yes, let's not deploy on a Hackathon weekend; power-user-only feature." 
[mediawiki-config] - 10https://gerrit.wikimedia.org/r/1027194 (https://phabricator.wikimedia.org/T364228) (owner: 10Lucas Werkmeister (WMDE)) [15:50:04] PROBLEM - IPv4 ping to eqsin on ripe-atlas-eqsin is CRITICAL: CRITICAL - failed 37 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [15:52:11] FIRING: [2x] RoutinatorRsyncErrors: Routinator rsync fetching issue in codfw - https://wikitech.wikimedia.org/wiki/RPKI#RSYNC_status - https://grafana.wikimedia.org/d/UwUa77GZk/rpki - https://alerts.wikimedia.org/?q=alertname%3DRoutinatorRsyncErrors [15:55:02] RECOVERY - IPv4 ping to eqsin on ripe-atlas-eqsin is OK: OK - failed 27 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [16:12:02] PROBLEM - IPv4 ping to eqsin on ripe-atlas-eqsin is CRITICAL: CRITICAL - failed 37 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [16:15:16] PROBLEM - Check unit status of httpbb_kubernetes_mw-wikifunctions_hourly on cumin2002 is CRITICAL: CRITICAL: Status of the systemd unit httpbb_kubernetes_mw-wikifunctions_hourly https://wikitech.wikimedia.org/wiki/Monitoring/systemd_unit_state [16:17:04] RECOVERY - IPv4 ping to eqsin on ripe-atlas-eqsin is OK: OK - failed 27 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [16:22:10] FIRING: [3x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - 
https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed [16:24:04] PROBLEM - IPv4 ping to eqsin on ripe-atlas-eqsin is CRITICAL: CRITICAL - failed 43 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [16:29:04] RECOVERY - IPv4 ping to eqsin on ripe-atlas-eqsin is OK: OK - failed 32 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [16:41:04] PROBLEM - IPv4 ping to eqsin on ripe-atlas-eqsin is CRITICAL: CRITICAL - failed 38 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [16:46:04] RECOVERY - IPv4 ping to eqsin on ripe-atlas-eqsin is OK: OK - failed 27 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [17:13:06] PROBLEM - IPv4 ping to eqsin on ripe-atlas-eqsin is CRITICAL: CRITICAL - failed 36 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas [17:15:16] RECOVERY - Check unit status of httpbb_kubernetes_mw-wikifunctions_hourly on cumin2002 is OK: OK: Status of the systemd unit httpbb_kubernetes_mw-wikifunctions_hourly https://wikitech.wikimedia.org/wiki/Monitoring/systemd_unit_state [17:17:10] FIRING: [3x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - 
https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed
[17:18:06] RECOVERY - IPv4 ping to eqsin on ripe-atlas-eqsin is OK: OK - failed 22 probes of 800 (alerts on 35) - https://atlas.ripe.net/measurements/11645085/#!map https://wikitech.wikimedia.org/wiki/Network_monitoring%23Atlas_alerts https://grafana.wikimedia.org/d/K1qm1j-Wz/ripe-atlas
[17:39:24] PROBLEM - BGP status on cr4-ulsfo is CRITICAL: BGP CRITICAL - AS64605/IPv4: Active - Anycast, AS64605/IPv6: Active - Anycast https://wikitech.wikimedia.org/wiki/Network_monitoring%23BGP_status
[17:45:08] (CR) Lucas Werkmeister (WMDE): "BTW, I'd also be fine with leaving ParserMigration on but disabling the preference – but AFAICT it only supports disabling the URL paramet" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027194 (https://phabricator.wikimedia.org/T364228) (owner: Lucas Werkmeister (WMDE))
[18:06:42] (PS1) Jelto: gitlab: add option to run a custom exporter [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656)
[18:09:05] (CR) CI reject: [V:-1] gitlab: add option to run a custom exporter [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656) (owner: Jelto)
[18:15:38] (PS1) Urbanecm: iglwiki: Enable GrowthExperiments [mediawiki-config] - https://gerrit.wikimedia.org/r/1027237 (https://phabricator.wikimedia.org/T364130)
[18:15:48] (PS1) Jelto: gitlab: add dummy token for exporter [labs/private] - https://gerrit.wikimedia.org/r/1027238 (https://phabricator.wikimedia.org/T354656)
[18:16:28] (CR) CI reject: [V:-1] iglwiki: Enable GrowthExperiments [mediawiki-config] - https://gerrit.wikimedia.org/r/1027237 (https://phabricator.wikimedia.org/T364130) (owner: Urbanecm)
[18:18:02] (CR) Jelto: [V:+2 C:+2] gitlab: add dummy token for exporter
[labs/private] - https://gerrit.wikimedia.org/r/1027238 (https://phabricator.wikimedia.org/T354656) (owner: Jelto)
[18:18:15] (PS2) Urbanecm: iglwiki: Enable GrowthExperiments [mediawiki-config] - https://gerrit.wikimedia.org/r/1027237 (https://phabricator.wikimedia.org/T364130)
[18:19:38] (CR) Jelto: "recheck" [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656) (owner: Jelto)
[18:33:36] (CR) Jelto: [V:+1] "PCC SUCCESS (CORE_DIFF 2): https://integration.wikimedia.org/ci/job/operations-puppet-catalog-compiler/label=puppet5-compiler-node/2250/co" [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656) (owner: Jelto)
[18:35:19] (PS2) Jelto: gitlab: add option to run a custom exporter [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656)
[18:39:17] (CR) Jelto: [V:+1] "PCC SUCCESS (CORE_DIFF 2): https://integration.wikimedia.org/ci/job/operations-puppet-catalog-compiler/label=puppet5-compiler-node/2251/co" [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656) (owner: Jelto)
[19:01:27] FIRING: [4x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable
[19:23:09] (PS3) Jelto: gitlab: add option to run a custom exporter [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656)
[19:33:56] (PS4) Jelto: gitlab: add option to run a custom exporter [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656)
[19:34:12] PROBLEM - Postgres Replication Lag on puppetdb2003 is CRITICAL: POSTGRES_HOT_STANDBY_DELAY CRITICAL: DB puppetdb (host:localhost) 167057792 and 4 seconds
https://wikitech.wikimedia.org/wiki/Postgres%23Monitoring
[19:35:12] RECOVERY - Postgres Replication Lag on puppetdb2003 is OK: POSTGRES_HOT_STANDBY_DELAY OK: DB puppetdb (host:localhost) 85312 and 0 seconds https://wikitech.wikimedia.org/wiki/Postgres%23Monitoring
[19:39:46] (PS5) Jelto: gitlab: add option to run a custom exporter [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656)
[19:52:11] FIRING: [2x] RoutinatorRsyncErrors: Routinator rsync fetching issue in codfw - https://wikitech.wikimedia.org/wiki/RPKI#RSYNC_status - https://grafana.wikimedia.org/d/UwUa77GZk/rpki - https://alerts.wikimedia.org/?q=alertname%3DRoutinatorRsyncErrors
[19:59:08] (PS6) Jelto: gitlab: add option to run a custom exporter [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656)
[20:03:18] (CR) Jelto: "I cherry-picked this in devtools and it works as expected on gitlab-prod-1002.devtools.eqiad1.wikimedia.cloud (after a few fixes)." [puppet] - https://gerrit.wikimedia.org/r/1027234 (https://phabricator.wikimedia.org/T354656) (owner: Jelto)
[20:23:17] (PS1) Majavah: hieradata: update deployment-prep imagescaler url [puppet] - https://gerrit.wikimedia.org/r/1027258
[20:33:55] (CR) Majavah: [C:+2] hieradata: update deployment-prep imagescaler url [puppet] - https://gerrit.wikimedia.org/r/1027258 (owner: Majavah)
[20:50:17] SRE, SRE-swift-storage, Data-Persistence, Thumbor, and 6 others: Change default image thumbnail size - https://phabricator.wikimedia.org/T355914#9771768 (Nosferattus) A lot of the discussion here is based on the assumption that we don't already have 250px thumbnails for most images: > "I would a...
[21:17:10] FIRING: [2x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed
[22:04:38] (PS1) Zabe: Switch password hashing to 'B' on beta [mediawiki-config] - https://gerrit.wikimedia.org/r/1027276
[22:06:56] (PS2) Zabe: Switch password hashing to 'B' on beta [mediawiki-config] - https://gerrit.wikimedia.org/r/1027276
[22:07:03] (CR) Zabe: [C:+2] "beta only" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027276 (owner: Zabe)
[22:07:49] (Merged) jenkins-bot: Switch password hashing to 'B' on beta [mediawiki-config] - https://gerrit.wikimedia.org/r/1027276 (owner: Zabe)
[22:19:41] (PS1) Zabe: beta: testing more wrapping [mediawiki-config] - https://gerrit.wikimedia.org/r/1027278
[22:22:10] FIRING: [3x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed
[22:22:20] PROBLEM - Check unit status of httpbb_hourly_appserver on cumin2002 is CRITICAL: CRITICAL: Status of the systemd unit httpbb_hourly_appserver https://wikitech.wikimedia.org/wiki/Monitoring/systemd_unit_state
[22:30:37] SRE-swift-storage: File has disappeared from Commons storage - https://phabricator.wikimedia.org/T364258 (Magog_the_Ogre) NEW
[22:31:54] (PS1) Zabe: Revert "Switch password hashing to 'B' on beta" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027279
[22:31:54] (PS1) Zabe: beta: Set password hashing to 'B' [mediawiki-config] - https://gerrit.wikimedia.org/r/1027280
[22:32:25] (CR) Zabe: [C:+2] Revert "Switch password hashing to 'B' on beta" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027279
(owner: Zabe)
[22:32:35] (CR) Zabe: [C:+2] beta: Set password hashing to 'B' [mediawiki-config] - https://gerrit.wikimedia.org/r/1027280 (owner: Zabe)
[22:33:10] (Merged) jenkins-bot: Revert "Switch password hashing to 'B' on beta" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027279 (owner: Zabe)
[22:33:19] (Merged) jenkins-bot: beta: Set password hashing to 'B' [mediawiki-config] - https://gerrit.wikimedia.org/r/1027280 (owner: Zabe)
[22:34:23] SRE-swift-storage: File:Gnome-edit-delete.svg has disappeared from Commons storage - https://phabricator.wikimedia.org/T364258#9771916 (Peachey88)
[22:56:46] (PS1) Zabe: Revert "beta: Set password hashing to 'B'" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027294
[22:56:50] (CR) Zabe: [C:+2] Revert "beta: Set password hashing to 'B'" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027294 (owner: Zabe)
[22:57:37] (Merged) jenkins-bot: Revert "beta: Set password hashing to 'B'" [mediawiki-config] - https://gerrit.wikimedia.org/r/1027294 (owner: Zabe)
[23:01:27] FIRING: [4x] JobUnavailable: Reduced availability for job ncredir in ops@drmrs - https://wikitech.wikimedia.org/wiki/Prometheus#Prometheus_job_unavailable - https://grafana.wikimedia.org/d/NEJu05xZz/prometheus-targets - https://alerts.wikimedia.org/?q=alertname%3DJobUnavailable
[23:17:10] FIRING: [3x] SystemdUnitFailed: docker-reporter-base-images.service on build2001:9100 - https://wikitech.wikimedia.org/wiki/Monitoring/check_systemd_state - https://grafana.wikimedia.org/d/g-AaZRFWk/systemd-status - https://alerts.wikimedia.org/?q=alertname%3DSystemdUnitFailed
[23:22:20] RECOVERY - Check unit status of httpbb_hourly_appserver on cumin2002 is OK: OK: Status of the systemd unit httpbb_hourly_appserver https://wikitech.wikimedia.org/wiki/Monitoring/systemd_unit_state
[23:33:31] (PS4) PleaseStand: Use OpenSSL for PBKDF2 password hashing [mediawiki-config] -
https://gerrit.wikimedia.org/r/842522 (https://phabricator.wikimedia.org/T320929)
[23:38:24] (PS1) TrainBranchBot: Branch commit for wmf/branch_cut_pretest [core] (wmf/branch_cut_pretest) - https://gerrit.wikimedia.org/r/1026903
[23:38:24] (CR) TrainBranchBot: [C:+2] Branch commit for wmf/branch_cut_pretest [core] (wmf/branch_cut_pretest) - https://gerrit.wikimedia.org/r/1026903 (owner: TrainBranchBot)
[23:39:52] SRE-swift-storage: File:Gnome-edit-delete.svg has disappeared from Commons storage - https://phabricator.wikimedia.org/T364258#9771970 (Pppery) →Duplicate dup:T363995
[23:39:55] SRE-swift-storage, Commons: Commons: File not found - https://phabricator.wikimedia.org/T363995#9771972 (Pppery)
[23:44:17] (Abandoned) Zabe: beta: testing more wrapping [mediawiki-config] - https://gerrit.wikimedia.org/r/1027278 (owner: Zabe)
[23:50:48] (PS1) Zabe: Stop setting wgPasswordDefault [mediawiki-config] - https://gerrit.wikimedia.org/r/1027335
[23:51:15] (PS2) Zabe: Stop setting wgPasswordDefault [mediawiki-config] - https://gerrit.wikimedia.org/r/1027335
[23:52:11] FIRING: [2x] RoutinatorRsyncErrors: Routinator rsync fetching issue in codfw - https://wikitech.wikimedia.org/wiki/RPKI#RSYNC_status - https://grafana.wikimedia.org/d/UwUa77GZk/rpki - https://alerts.wikimedia.org/?q=alertname%3DRoutinatorRsyncErrors
[23:53:16] (PS1) Zabe: beta: Use OpenSSL for PBKDF2 password hashing [mediawiki-config] - https://gerrit.wikimedia.org/r/1027337 (https://phabricator.wikimedia.org/T320929)
[23:59:59] (Merged) jenkins-bot: Branch commit for wmf/branch_cut_pretest [core] (wmf/branch_cut_pretest) - https://gerrit.wikimedia.org/r/1026903 (owner: TrainBranchBot)