[00:20:26] PROBLEM - Hadoop NodeManager on an-worker1128 is CRITICAL: PROCS CRITICAL: 0 processes with command name java, args org.apache.hadoop.yarn.server.nodemanager.NodeManager https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Hadoop/Alerts%23Yarn_Nodemanager_process
[00:38:08] PROBLEM - YARN NodeManager JVM Heap usage on an-worker1128 is CRITICAL: 0.9565 ge 0.95 https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Hadoop/Administration https://grafana.wikimedia.org/dashboard/db/hadoop?var-hadoop_cluster=analytics-hadoop&orgId=1&panelId=17&fullscreen
[00:45:54] RECOVERY - Hadoop NodeManager on an-worker1128 is OK: PROCS OK: 1 process with command name java, args org.apache.hadoop.yarn.server.nodemanager.NodeManager https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Hadoop/Alerts%23Yarn_Nodemanager_process
[00:48:36] RECOVERY - YARN NodeManager JVM Heap usage on an-worker1128 is OK: (C)0.95 ge (W)0.9 ge 0.8543 https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Hadoop/Administration https://grafana.wikimedia.org/dashboard/db/hadoop?var-hadoop_cluster=analytics-hadoop&orgId=1&panelId=17&fullscreen
[03:53:44] PROBLEM - Check unit status of monitor_refine_netflow on an-launcher1002 is CRITICAL: CRITICAL: Status of the systemd unit monitor_refine_netflow https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[04:15:58] RECOVERY - Check unit status of monitor_refine_event_sanitized_analytics_delayed on an-launcher1002 is OK: OK: Status of the systemd unit monitor_refine_event_sanitized_analytics_delayed https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[05:34:31] 10Analytics-Radar, 10WikimediaDebug, 10observability, 10serviceops, and 4 others: Create a separate 'mwdebug' cluster - https://phabricator.wikimedia.org/T262202 (10jijiki) @Krinkle I agree that we should come up with a complete solution for this. I will close this task and we can continue this discussion...
[05:36:29] 10Analytics-Radar, 10WikimediaDebug, 10observability, 10serviceops, and 4 others: Create a separate 'mwdebug' cluster - https://phabricator.wikimedia.org/T262202 (10jijiki) 05Open→03Resolved p:05Triage→03Medium
[05:43:52] 10Analytics-Radar, 10WikimediaDebug, 10observability, 10serviceops, and 4 others: Create a separate 'mwdebug' cluster - https://phabricator.wikimedia.org/T262202 (10Joe) For the record, the mwdebug cluster on kubernetes has its own servergroup.
[08:11:49] 10Analytics-Data-Quality, 10WMDE-TechWish, 10WMDE-Templates-FocusArea: Check whether VE template dialog and Template Wizard metrics are healthy - https://phabricator.wikimedia.org/T292045 (10awight)
[08:12:08] 10Analytics-Data-Quality, 10WMDE-TechWish, 10WMDE-Templates-FocusArea, 10WMDE-TechWish-Sprint-2021-09-29: Check whether VE template dialog and Template Wizard metrics are healthy - https://phabricator.wikimedia.org/T292045 (10awight)
[08:38:41] 10Analytics, 10Event-Platform, 10Wikimedia-JobQueue: Queuing jobs is extremely slow - https://phabricator.wikimedia.org/T292048 (10Ladsgroup)
[08:38:57] 10Analytics, 10Event-Platform, 10Wikimedia-JobQueue: Queuing jobs is extremely slow - https://phabricator.wikimedia.org/T292048 (10Ladsgroup)
[08:54:41] 10Analytics, 10Event-Platform, 10Wikimedia-JobQueue: Queuing jobs is extremely slow - https://phabricator.wikimedia.org/T292048 (10Ladsgroup)
[09:16:39] !log restart hive-* units on an-coord1002 for openjdk upgrades (standby node)
[09:16:42] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[10:07:10] 10Analytics, 10Data-Engineering, 10Event-Platform: Allow kafka clients to verify brokers hostnames when using SSL - https://phabricator.wikimedia.org/T291905 (10jbond) > If we use broker specific certs, will clients need to have all of the broker certs in their truststores? Maybe not...maybe they only need t...
[10:49:43] PROBLEM - Check if active EventStreams endpoint is delivering messages. on alert1001 is CRITICAL: CRITICAL: No EventStreams message was consumed from https://stream.wikimedia.org/v2/stream/recentchange within 10 seconds. https://wikitech.wikimedia.org/wiki/Event_Platform/EventStreams/Administration
[10:51:16] Investigating this now -^^
[10:52:11] Looks like something widespread.
[10:54:30] Seems to have been a network blip, mentioned in #wikimedia-sre
[11:00:36] Alert is still critical after 12 minutes. Investigating further.
[11:09:41] I can't see any particular ongoing issues from here: https://grafana.wikimedia.org/d/znIuUcsWz/eventstreams?orgId=1&refresh=1m&var-dc=eqiad%20prometheus%2Fk8s&var-service=eventstreams
[11:09:41] so it may be an issue with the check. I see that there are still some LVS errors in Icinga, which might be related.
[11:16:10] I've checked `btullis@deploy1002:/srv/deployment-charts/helmfile.d/services/eventstreams$ helmfile -e eqiad status` and it looks fine. Pods were restarted 38 minutes ago, but appear otherwise healthy.
[11:18:20] If I run the check myself on alert1001 I get an OK status.
[11:18:23] https://www.irccloud.com/pastebin/WMxxpnwH/
[11:18:46] RECOVERY - Check if active EventStreams endpoint is delivering messages. on alert1001 is OK: OK: An EventStreams message was consumed from https://stream.wikimedia.org/v2/stream/recentchange within 10 seconds. https://wikitech.wikimedia.org/wiki/Event_Platform/EventStreams/Administration
[11:19:23] OK, there it goes back to OK.
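(For context: the probe above only needs to see one message from the stream within ten seconds. A minimal Python sketch of such a check, assuming only the `requests` library — the production Icinga check is a separate script not shown in this log:)

```python
import time
import requests

URL = "https://stream.wikimedia.org/v2/stream/recentchange"
TIMEOUT = 10  # seconds, matching the alert's threshold

def check_eventstreams(url=URL, timeout=TIMEOUT):
    """Return True iff at least one SSE 'data:' line arrives within `timeout` seconds."""
    deadline = time.monotonic() + timeout
    try:
        with requests.get(url, stream=True, timeout=timeout) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines(decode_unicode=True):
                if line and line.startswith("data:"):
                    return True  # a message was consumed in time
                if time.monotonic() > deadline:
                    break
    except requests.RequestException:  # connect/read timeout, HTTP error, ...
        pass
    return False

if __name__ == "__main__":
    print("OK" if check_eventstreams() else "CRITICAL")
```

(Note that the `timeout` argument in `requests` bounds connect and per-read time, not the whole request, so the explicit deadline enforces the overall ten-second budget.)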
[11:21:53] 10Analytics, 10Analytics-Kanban, 10Data-Engineering: Snapshot and Reload cassandra2 pageview_per_article data table from all 12 instances - https://phabricator.wikimedia.org/T291472 (10BTullis) This has completed 4 of the 12 snapshot transfers in around 24 hours. So the task should take around another 2 days...
[12:38:25] Hi btullis - Am I correct in assuming the last repair of the mediarequest_per_file table should almost be done?
[12:39:04] It's at 98% but then there is an 8-12 hour period before the command returns.
[12:39:35] ...after it gets to 100%
[12:39:52] Thanks for letting me know :) Thank you as well for the follow up on the pageviews_per_article_flat copy in the task :)
[12:40:39] No worries. How's the QA going for the smaller tables?
[12:42:03] it's all done successfully except the last very long ongoing check (all days since 2015 of daily top data - currently at 2017-11-01)
[12:42:14] And up to now no issue at all
[12:42:29] Great!
[12:42:58] indeed :) Thank you again for the great progress!
[12:46:36] btullis: I hope we're gonna have enough disk-space to load the data once transfer is fully done :S
[12:46:50] A pleasure. Do your checks include the meta tables as well, or just the data tables?
[12:47:13] btullis: on aqs1011 (where data has been copied), disk-usage is at ~65%
[12:47:52] btullis: my tests look at data only - metadata is only relevant for the AQS-node system, which would have complained if there had been issues
[12:51:30] Yes, it is a concern, isn't it.
[12:52:01] btullis: I wonder if the copy-then-load approach would be safer
[12:54:48] Not quite sure what you mean. Copy to where?
[12:55:17] I mean copy data from one instance, then load it, then delete the copied data, and repeat
[12:55:34] instead of loading all instances (even if on different hosts)
[13:17:00] Yes, I see. I'm not sure that there will be a significant difference though, between having copied 12 snapshots first and copying one, loading it, then deleting it.
[13:17:45] btullis: if problem there is it'll show later I guess, but the concern will show up after some loading for sure
[13:17:56] btullis: let's try and see :)
[13:19:28] I could see some potential safety in starting to load them in order from biggest to smallest, and deleting them as we go. e.g. aqs1010-a is 1.6 TB and the volume is at 64%.
[13:21:04] If we load this one first, we expect that to add a certain amount of data to *each* of the 12 instances, but it shouldn't add more than 1.2 TB to aqs1010:/srv/cassandra-a/ itself.
[13:23:10] Once it's loaded we could delete that snapshot. The next most pressing is aqs1010-b at 1.4 TB and 58% full. So we would load and delete in this order of how full they are, sequentially as opposed to in parallel.
[13:38:14] that works for me btullis --^
[13:39:39] OK. Will make some notes to talk about the approach on T291472
[13:39:40] T291472: Snapshot and Reload cassandra2 pageview_per_article data table from all 12 instances - https://phabricator.wikimedia.org/T291472
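(To make the sequencing above concrete, a small illustrative Python sketch of the fullest-volume-first ordering. Only the two aqs1010 figures come from the discussion; the placeholder functions stand in for the real load and cleanup steps:)

```python
# Illustrative only: volume fullness per instance. The two figures shown are
# from the discussion above; the other 10 instances are elided. Real numbers
# would come from df on each aqs host.
INSTANCES = {
    "aqs1010-a": 0.64,  # 1.6 TB snapshot
    "aqs1010-b": 0.58,  # 1.4 TB snapshot
    # ...
}

def load_snapshot(instance: str) -> None:
    """Placeholder for the real load step (streaming the sstables in)."""
    print(f"loading snapshot for {instance}")

def delete_snapshot(instance: str) -> None:
    """Placeholder for removing the transferred snapshot to reclaim space."""
    print(f"deleting snapshot for {instance}")

# Sequentially, fullest volume first: each delete frees space before the
# next (less pressing) load begins, instead of loading all 12 in parallel.
for instance in sorted(INSTANCES, key=INSTANCES.get, reverse=True):
    load_snapshot(instance)
    delete_snapshot(instance)
```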
[14:13:54] milimetric: yt? wanna pair with me and drop some data from an-web1001? https://phabricator.wikimedia.org/T285355#7364748
[14:14:48] omw cave otto
[14:14:59] huh... ottomata didn't autocomplete
[14:15:09] irc hiccup
[14:15:10] back
[14:15:28] hi ottomata I'm in the cave
[14:21:28] 10Analytics-Clusters, 10Analytics-Kanban, 10Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (10Ottomata) @elukey, deleted. ` 14:19:19 [@an-web1001:/srv/stats.wikimedia.org/htdocs/reportcard] $ sudo rm -rf ./pediapress ./staff ./extended `
[14:28:40] 10Analytics, 10Data-Engineering: Fix presto kerberos support for system users - https://phabricator.wikimedia.org/T292072 (10BTullis)
[14:32:02] 10Analytics, 10Data-Engineering: Fix presto kerberos support for system users - https://phabricator.wikimedia.org/T292072 (10Ottomata) Fixing the wrapper looks easy; fixing the python client looks difficult.
[14:36:37] 10Analytics-Clusters, 10Analytics-Kanban, 10Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (10elukey) @Ottomata nice! To double check, I see the following to drop in puppet private: ` class passwords::geowiki { # NOTE! If you change these, y...
[14:38:18] 10Analytics-Clusters, 10Analytics-Kanban, 10Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (10Ottomata) Yes please! @elukey I'm going to proceed with the decom of thorium.
[14:42:13] 10Analytics-Clusters, 10Analytics-Kanban, 10decommission-hardware: decommission thorium.eqiad.wmnet - https://phabricator.wikimedia.org/T292075 (10Ottomata)
[14:42:25] 10Analytics-Clusters, 10Analytics-Kanban, 10decommission-hardware: decommission thorium.eqiad.wmnet - https://phabricator.wikimedia.org/T292075 (10Ottomata)
[14:42:35] 10Analytics-Clusters, 10Analytics-Kanban, 10decommission-hardware: decommission thorium.eqiad.wmnet - https://phabricator.wikimedia.org/T292075 (10Ottomata)
[14:42:37] 10Analytics-Clusters, 10Analytics-Kanban, 10Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (10Ottomata)
[14:43:06] 10Analytics-Clusters, 10Analytics-Kanban, 10decommission-hardware: decommission thorium.eqiad.wmnet - https://phabricator.wikimedia.org/T292075 (10Ottomata)
[14:48:49] milimetric: btw, i looked into https://phabricator.wikimedia.org/T289003 but wasn't able to reproduce the issue you saw
[14:48:56] so am not sure what to do with the ticket
[14:54:02] 10Analytics-Clusters, 10Analytics-Kanban, 10Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (10elukey) All cleaned up!
[14:54:46] 10Analytics-Clusters, 10Analytics-Kanban, 10Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (10Ottomata) Thank you!
[15:11:52] 10Analytics, 10Data-Engineering, 10Product-Analytics: Fix presto kerberos support for system users - https://phabricator.wikimedia.org/T292072 (10mpopov)
[15:13:37] 10Analytics, 10Analytics-Kanban, 10Data-Engineering, 10Data-Engineering-Kanban, 10Patch-For-Review: Test Alluxio as cache layer for Presto - https://phabricator.wikimedia.org/T266641 (10BTullis) I have started running the master process manually as the alluxio user on an-test-coord1001 with the following...
[16:11:18] 10Analytics, 10Analytics-Kanban, 10Data-Engineering: Setup Presto UI in production - https://phabricator.wikimedia.org/T292087 (10razzi)
[16:11:56] 10Analytics, 10Analytics-Kanban, 10Data-Engineering: Add a presto query logger - https://phabricator.wikimedia.org/T269832 (10razzi) I made a subtask dedicated to enabling the presto UI: https://phabricator.wikimedia.org/T292087
[16:25:46] 10Analytics, 10Analytics-Kanban: Improve Refine bad data handling - https://phabricator.wikimedia.org/T289003 (10Milimetric) huh... weird. Maybe it was something else, but @JAllemandou and I really thought it was the `source_url` when we looked through the code. Anyway if it works with that input, then all g...
[16:29:47] 10Analytics, 10Analytics-Kanban: Improve Refine bad data handling - https://phabricator.wikimedia.org/T289003 (10Ottomata) OH! `source_url` is a VirtualPageView specific field. Refine never looks at it. If this was set in `meta.domain` or `webHost`, this would get run through the is_wmf_domain function. Th...
[16:53:21] 10Analytics, 10Patch-For-Review: Decide whether to migrate from Presto to Trino - https://phabricator.wikimedia.org/T266640 (10Ottomata) We should find out if swapping to Trino is a mostly drop-in replacement. - Can we re-use our existing Presto debian packaging scripts? - Does superset just work with Trino as...
[17:01:33] (03PS1) 10Razzi: Update superset package to 1.3.1 [analytics/superset/deploy] - 10https://gerrit.wikimedia.org/r/724783
[17:16:41] 10Analytics-Clusters, 10Analytics-Kanban, 10Data-Engineering, 10Data-Engineering-Kanban, 10Product-Analytics: Upgrade Superset to 1.3 - https://phabricator.wikimedia.org/T288115 (10razzi) I have upgraded the staging superset to 1.3.1
[17:17:21] 10Analytics-Clusters, 10Analytics-Kanban, 10Data-Engineering, 10Data-Engineering-Kanban, 10Product-Analytics: Upgrade Superset to 1.3.1 or higher - https://phabricator.wikimedia.org/T288115 (10razzi)
[17:22:12] (03CR) 10Sharvaniharan: "@Ottomata @Gergo anything else needs to be updated on this to get it merged?" [schemas/event/secondary] - 10https://gerrit.wikimedia.org/r/722964 (https://phabricator.wikimedia.org/T286000) (owner: 10Sharvaniharan)
[17:41:05] (03CR) 10Ottomata: [C: 03+2] "I think we can merge. Sharvaniharan, you should have merge rights, if you don't already. Try to merge, and if not we can get you the pro" [schemas/event/secondary] - 10https://gerrit.wikimedia.org/r/722964 (https://phabricator.wikimedia.org/T286000) (owner: 10Sharvaniharan)
[17:41:29] (03CR) 10Ottomata: [C: 03+2] "Oh, actually, I think my +2 may have started the merge process!" [schemas/event/secondary] - 10https://gerrit.wikimedia.org/r/722964 (https://phabricator.wikimedia.org/T286000) (owner: 10Sharvaniharan)
[17:42:19] (03Merged) 10jenkins-bot: Migrate MobileWikiAppDailyStats to MEP [schemas/event/secondary] - 10https://gerrit.wikimedia.org/r/722964 (https://phabricator.wikimedia.org/T286000) (owner: 10Sharvaniharan)
[17:45:55] 10Analytics, 10Analytics-Kanban, 10Event-Platform, 10Growth-Team, and 4 others: Revisions missing from mediawiki_revision_create - https://phabricator.wikimedia.org/T215001 (10Pchelolo) So, after mangling keepAlive timeouts in envoy, we've had 0 503/504 in eventgate over the last day. @Milimetric would...
[18:11:25] (03CR) 10Sharvaniharan: Migrate MobileWikiAppDailyStats to MEP (031 comment) [schemas/event/secondary] - 10https://gerrit.wikimedia.org/r/722964 (https://phabricator.wikimedia.org/T286000) (owner: 10Sharvaniharan)
[18:27:45] ottomata: regarding jobs being slow to get queued. These are verbose logs of jobs being queued, if that's going to help https://logstash.wikimedia.org/goto/a288ef574056abcd7bea7ad72eb2c3a4
[18:28:09] 10Analytics, 10Data-Engineering, 10Product-Analytics: Fix presto kerberos support for system users - https://phabricator.wikimedia.org/T292072 (10mpopov) @Ottomata: Just to double-check: the issue is isolated to Presto, right? We should be able to use Spark (via [[ https://github.com/wikimedia/wmfdata-python...
[18:28:17] it seems the time to establish connection is negligible but time to consume it is not
[18:28:56] 40ms is enough time to queue the job in another dc
[18:30:56] 10Analytics, 10Data-Engineering, 10Product-Analytics: Fix presto kerberos support for system users - https://phabricator.wikimedia.org/T292072 (10Ottomata) Yes! I think it is isolated to Presto.
[18:32:03] Amir1: to consume?
[18:32:16] 10Analytics, 10Event-Platform, 10Wikimedia-JobQueue: Queuing jobs is extremely slow - https://phabricator.wikimedia.org/T292048 (10Ladsgroup) These are verbose logs of jobs being queued, if that's going to help https://logstash.wikimedia.org/goto/a288ef574056abcd7bea7ad72eb2c3a4 it seems the time to establish...
[18:32:33] ottomata: I meant event-gate to respond
[18:32:46] "publish" I guess
[18:32:55] * Amir1 is not familiar with the lingo
[18:33:12] Amir1: you mean the time it takes for eventgate to respond with 201?
[18:33:30] yup
[18:33:35] ah, 40ms is long?
[18:34:13] by itself no, but this is queuing around ten jobs
[18:34:38] you could queue in bulk? the API takes a list of events
[18:34:42] which makes saving an edit half a second longer
[18:35:03] we could queue with less reliability using hasty=true parameter
[18:35:20] from what I can see the queue goes to batch but probably batch of one
[18:35:23] without hasty=true, the response won't come until Kafka has confirmed that it has accepted the event
[18:35:49] not sure if we can do better than 40ms for HTTP + event validation + kafka produce with ACK
[18:36:07] do you think it would be faster if I try to batch them?
[18:36:16] we can definitely try and measure
[18:36:20] if it is 40ms per POST
[18:36:32] you might get some speed up if you are doing lots of POSTs
[18:36:41] and put them into one
[18:36:51] let me play with it then
[18:37:09] k, it'd be interesting to see on the eventgate side where the bottleneck is
[18:37:21] i'd expect it to be producing to kafka...mayyybe validation
[18:37:23] hmm
[18:37:39] we could tune the kafka producer client on the eventgate side to do batching differently
[18:37:57] we could also have a special eventgate instance tuned better for larger batch posts
[18:37:59] it's a trade off
[18:38:19] I have a scary thought that it might also first send it to the other dc and then confirm acceptance
[18:38:25] can improve latency for batches, or try to produce as fast as possible for individual produces
[18:38:28] because I know jobs get duplicated
[18:38:29] no
[18:38:30] it won't do that
[18:38:46] it's just waiting for the local Kafka brokers to ACK the produce request
[18:38:52] okay
[18:39:07] the DC replication is a separate consumer + producer called MirrorMaker
[18:41:48] ugh batching is almost impossible, unless I define a buffer in the request
[18:41:52] that's going to be fun
[18:42:00] https://performance.wikimedia.org/xhgui/run/symbol?id=6153427113e6381f84c80315&symbol=MediaWiki%5CExtension%5CEventBus%5CEventBus%3A%3Asend
[18:44:18] whoa cool never seen ^ before
[18:44:37] xhgui is amazing
[18:44:39] Amir1: buffer in the request?
[18:44:57] like basically hold the pushed jobs and flush it later
[18:46:39] haha, lazypush already does it
[18:46:42] noice
[18:48:05] I'm not familiar with the jobqueue interface
[18:48:10] but EventBus::send will take an array of events
[18:48:47] Amir1:
[18:48:49] https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/extensions/EventBus/+/refs/heads/master/includes/Adapters/JobQueue/JobQueueEventBus.php
[18:48:53] doBatchPush
[18:48:53] ?
[18:49:00] yeah
[18:49:17] basically any push just goes to batchpush([$job])
[18:49:25] aye
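(A rough sketch of the batched produce discussed above, in Python for brevity — the MediaWiki side is PHP via EventBus::send. The endpoint URL and event bodies are invented; only the list-of-events POST and the hasty query parameter reflect the conversation:)

```python
import json
import requests

# Placeholder endpoint; the real intake URL and stream names are not in the log.
EVENTGATE_URL = "https://eventgate.example.org/v1/events"

def send_events(events, hasty=False):
    """POST a list of events in one request instead of one POST per event."""
    resp = requests.post(
        EVENTGATE_URL,
        params={"hasty": "true"} if hasty else None,
        data=json.dumps(events),
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    # Without hasty, a successful response only comes back once Kafka has
    # ACKed the produce, so one batched POST amortizes that wait.
    resp.raise_for_status()

# Ten invented job events, matching the "queuing around ten jobs" case above.
events = [
    {"meta": {"stream": "mediawiki.job.example"}, "job_id": i}
    for i in range(10)
]
send_events(events)
```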
[18:58:56] 10Analytics-Radar, 10Fundraising-Backlog, 10Product-Analytics, 10Wikipedia-iOS-App-Backlog, and 2 others: Understand impact of Apple's Relay Service - https://phabricator.wikimedia.org/T289795 (10mpopov)
[19:12:09] (03PS1) 10Joal: Make SimpleStringWriter instrumented for metrics [analytics/gobblin] (wmf) - 10https://gerrit.wikimedia.org/r/724825 (https://phabricator.wikimedia.org/T286503)
[19:16:29] (03CR) 10Ottomata: [C: 03+1] "Ah, cool!" [analytics/gobblin] (wmf) - 10https://gerrit.wikimedia.org/r/724825 (https://phabricator.wikimedia.org/T286503) (owner: 10Joal)
[19:18:40] 10Analytics, 10Event-Platform, 10Wikimedia-JobQueue, 10Patch-For-Review: Queuing jobs is extremely slow - https://phabricator.wikimedia.org/T292048 (10Pchelolo) JobQueueEventBus is splitting jobs by type and then sends each job type separately. @Ottomata I forgot, can we send multiple different streams to...
[19:19:53] 10Analytics, 10Event-Platform, 10Wikimedia-JobQueue, 10Patch-For-Review: Queuing jobs is extremely slow - https://phabricator.wikimedia.org/T292048 (10Ottomata) Yes for sure, as long as each one has `meta.stream` set correctly, eventgate will route them to the right place.
[19:50:51] Dropping the superset_staging database
[19:58:34] (03CR) 10Joal: [V: 03+1] "Tested on cluster - we haz metrix." [analytics/gobblin] (wmf) - 10https://gerrit.wikimedia.org/r/724825 (https://phabricator.wikimedia.org/T286503) (owner: 10Joal)
[20:16:20] (03PS3) 10Bearloga: ETL test notebook [analytics/wmf-product/jobs] - 10https://gerrit.wikimedia.org/r/724469 (https://phabricator.wikimedia.org/T291958)
[20:28:04] (03CR) 10Bearloga: [V: 03+2 C: 03+2] ETL test notebook [analytics/wmf-product/jobs] - 10https://gerrit.wikimedia.org/r/724469 (https://phabricator.wikimedia.org/T291958) (owner: 10Bearloga)
[20:40:04] (03CR) 10Ottomata: ETL test notebook (031 comment) [analytics/wmf-product/jobs] - 10https://gerrit.wikimedia.org/r/724469 (https://phabricator.wikimedia.org/T291958) (owner: 10Bearloga)
[20:58:38] 10Analytics, 10Event-Platform, 10Platform Team Workboards (MW Expedition): Decouple EventBus and EventFactory - https://phabricator.wikimedia.org/T292121 (10Pchelolo)
[21:02:33] 10Analytics, 10Event-Platform: Introduce EventBusSendUpdate - https://phabricator.wikimedia.org/T292123 (10Pchelolo)
[21:09:45] 10Analytics, 10Event-Platform: Introduce EventBusSendUpdate - https://phabricator.wikimedia.org/T292123 (10Ottomata) Does this help the queuing job issue? Are jobs sent in the DeferredUpdate?
[21:23:45] 10Analytics-Radar, 10Product-Analytics (Kanban): [REQUEST] Investigate decrease in New Registered Users - https://phabricator.wikimedia.org/T289799 (10Iflorez) @Tgr Do you know of recent bot detection deployments that should be considered here? Or insights about changes to bot activity that I should review? An...
[21:30:05] 10Analytics, 10Event-Platform, 10Wikibase change dispatching scripts to jobs, 10Wikimedia-JobQueue, 10Patch-For-Review: Queuing jobs is extremely slow - https://phabricator.wikimedia.org/T292048 (10Ladsgroup)
[21:36:42] 10Analytics, 10Event-Platform: Introduce EventBusSendUpdate - https://phabricator.wikimedia.org/T292123 (10Pchelolo) No, this one is for non-job events. We send them all via callable deferred update anyway, so why not batch them.
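(The deferred batching proposed in T292123 follows the same buffer-then-flush pattern as the job queue's lazy push. A toy Python sketch of the pattern — names are invented, this is not the EventBus API:)

```python
# Toy sketch of the buffer-then-flush ("lazy push") pattern: callers enqueue
# locally with no network I/O, and one deferred flush sends a single batch.
class LazyEventBuffer:
    def __init__(self, send_batch):
        self._send_batch = send_batch  # e.g. a batched HTTP sender
        self._buffer = []

    def push(self, event):
        self._buffer.append(event)  # cheap: just remember the event

    def flush(self):
        if self._buffer:
            self._send_batch(self._buffer)  # one send for the whole batch
            self._buffer.clear()

buf = LazyEventBuffer(send_batch=print)
for i in range(3):
    buf.push({"meta": {"stream": "example.stream"}, "n": i})
buf.flush()  # deferred to the end of the request, like a DeferredUpdate
```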
[21:40:16] 10Analytics, 10Event-Platform: Introduce EventBusSendUpdate - https://phabricator.wikimedia.org/T292123 (10Ottomata) Mmmkay :)
[22:49:52] 10Analytics-Radar, 10Fundraising-Backlog, 10Product-Analytics, 10Wikipedia-iOS-App-Backlog, and 2 others: Understand impact of Apple's Relay Service - https://phabricator.wikimedia.org/T289795 (10MMiller_WMF) a:03MMiller_WMF
[23:45:45] 10Analytics-Radar, 10Product-Analytics (Kanban): [REQUEST] Investigate decrease in New Registered Users - https://phabricator.wikimedia.org/T289799 (10kzimmerman) Irene and I reviewed the data, and one thing that stood out is that, within wikis, trends in registration over the past few years don't seem to matc...