[13:55:37] office hours starting in https://meet.google.com/vgj-bbeb-uyi
[16:58:29] workout, back in ~30
[17:34:00] back
[18:48:21] lunch/errands, back in ~1h
[20:15:41] back
[20:17:23] getting some alerts for "CirrusSearch job topic eqiad.cpjobqueue.partitioned.mediawiki.job.cirrusSearchElasticaWrite is heavily backlogged with 6.211M messages", should we be worried?
[20:21:32] well, the alert cleared, but I'm still curious about what to do in those situations. Will read the operations scrollback and see if any of it is related
[20:25:40] looks like DB maintenance is happening
[20:28:30] I guess this is the jobqueue dashboard? still reading through it https://grafana.wikimedia.org/d/LSeAShkGz/jobqueue?orgId=1
[20:39:23] https://grafana-rw.wikimedia.org/d/000000234/kafka-by-topic?forceLogin&orgId=1&refresh=5m&var-datasource=eqiad%20prometheus%2Fops&var-kafka_cluster=jumbo-eqiad&var-kafka_broker=All&var-topic=eqiad.mediawiki.job.cirrusSearchElasticaWrite I found a Kafka-by-topic dashboard too, but I'm still looking for the number of messages in the queue
[20:59:36] ebernhardson: still around? Any opinion on the backlog of Cirrus writes?
[21:00:29] inflatador: that's about updates to Elasticsearch not keeping up. Not the end of the world. Search still works, there might just be delays in getting the updates.
[21:01:13] But that only affects the very few editors who want to see the effects of their changes in Search right away.
[21:12:10] understood. Still curious about which dashboard shows that info, but that's far from urgent
[22:33:10] ebernhardson we were going to run a few instances of data-reload.py on the newer hosts, any concerns about overwhelming the dumps servers?
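
(Editor's aside on the 20:17/20:39 backlog question: the number being alerted on is how many messages are sitting unconsumed in the eqiad.mediawiki.job.cirrusSearchElasticaWrite topic. Below is a minimal sketch of how such a backlog could be estimated straight from Kafka with kafka-python, by comparing a consumer group's committed offsets against the log end offsets per partition. The broker address and consumer group name are placeholders for illustration, not the real cpjobqueue/WMF values, and this is not how the Grafana dashboards compute it.)

```python
# Sketch: estimate topic backlog = sum over partitions of (log end offset - committed offset).
# Assumes kafka-python is installed and the broker/group names below are reachable/valid.
from kafka import KafkaConsumer, TopicPartition

TOPIC = "eqiad.mediawiki.job.cirrusSearchElasticaWrite"
GROUP = "example-cpjobqueue-consumer"          # assumed consumer group name
BROKER = "kafka-broker.example.internal:9092"  # assumed broker address

consumer = KafkaConsumer(
    bootstrap_servers=BROKER,
    group_id=GROUP,
    enable_auto_commit=False,  # read-only inspection, don't move the group's offsets
)

# All partitions of the job topic.
partitions = [TopicPartition(TOPIC, p) for p in consumer.partitions_for_topic(TOPIC)]

# Latest offset written to each partition.
end_offsets = consumer.end_offsets(partitions)

# Backlog is how far the group's committed position trails the end of the log.
backlog = 0
for tp in partitions:
    committed = consumer.committed(tp) or 0
    backlog += end_offsets[tp] - committed

print(f"{TOPIC}: ~{backlog} messages behind for group {GROUP}")
```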