[01:08:25] (PS1) Sharvaniharan: Remove android.image_recommendation_interaction schema [schemas/event/secondary] - https://gerrit.wikimedia.org/r/708189
[05:14:14] Analytics, Analytics-Kanban, Event-Platform, Patch-For-Review: EchoMail and EchoInteraction Event Platform Migration - https://phabricator.wikimedia.org/T287210 (MMiller_WMF) That is correct -- we do not need geocoded data for those schemas.
[06:56:34] (CR) Addshore: [C: +2] Create wd_propertysuggester/client_side_property_request and wd_propertysuggester/server_side_property_request [schemas/event/secondary] - https://gerrit.wikimedia.org/r/689152 (owner: Martaannaj)
[06:57:07] (Merged) jenkins-bot: Create wd_propertysuggester/client_side_property_request and wd_propertysuggester/server_side_property_request [schemas/event/secondary] - https://gerrit.wikimedia.org/r/689152 (owner: Martaannaj)
[07:32:07] Analytics-Radar, Event-Platform, MW-1.36-notes (1.36.0-wmf.37; 2021-03-30), MW-1.37-notes (1.37.0-wmf.3; 2021-04-27): extensions/EventBus - Use UserGroupManager instead of User group methods - https://phabricator.wikimedia.org/T281825 (Vlad.shapik)
[08:06:28] a-team: how can I handle the monthly Wikimedia pageview dumps? There is one bz2 file for each month, but they cannot be decompressed like the daily dumps; I always get an error
[08:40:15] wences91: I see hourly dump files here: https://dumps.wikimedia.org/other/pageviews/ - could you combine these?
[08:45:12] btullis: yes, right now I am combining the daily pageview files (https://dumps.wikimedia.org/other/pageview_complete/), but it would be faster if I could use the monthly pageviews instead (https://dumps.wikimedia.org/other/pageview_complete/monthly/)
[09:09:28] Yes, I see. The trouble is that the hourly mapping is performed quite early in the pipeline, so I think it would likely be more work for you to tap into this pipeline and create daily dumps from it.
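The exact decompression error is never shown in the log, so the cause is unknown; one plausible culprit (an assumption, not confirmed here) is that the large monthly files contain several concatenated bz2 streams, which some decompression tools stop reading after the first of. Python's `bz2.open` reads across all concatenated streams and can iterate a dump-sized file line by line; the sample lines below are made up, not real pageview data.

```python
import bz2
import io

# Build a small stand-in for a multistream dump: two independently
# compressed bz2 streams concatenated together, as some dump tooling
# produces. (The pageview rows here are invented for illustration.)
lines_a = b"en.wikipedia Main_Page 1200 0\n"
lines_b = b"de.wikipedia Wikipedia:Hauptseite 800 0\n"
buf = io.BytesIO()
buf.write(bz2.compress(lines_a))
buf.write(bz2.compress(lines_b))
buf.seek(0)

# bz2.open transparently continues past the end of the first stream,
# so iterating line by line sees the whole file.
with bz2.open(buf, "rt") as f:
    rows = [line.split() for line in f]

print(len(rows))  # 2
```

A tool that only reads the first stream would see one row here; that mismatch is the kind of "error" worth checking for before filing the phab task requested below.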
[09:10:01] I can look at how the monthly dumps are generated and see if there is an easy way for you to re-use this mechanism, if you like.
[09:12:32] btullis: that would be great!
[09:13:35] By the way, the daily pageviews are available, so I don't have to create them from the hourly dumps
[09:46:30] Can anyone comment on whether any action needs to be taken for the following systems, ahead of our network maintenance at 15:00 UTC (T286061)?
[09:46:30] T286061: Switch buffer re-partition - Eqiad Row B - https://phabricator.wikimedia.org/T286061
[09:47:05] an-coord1001, an-launcher1002, an-master1002, an-presto1004 & stat1007
[09:47:28] Disruption should be minimal (<1 second), but the whole row should be considered "at risk".
[09:49:15] topranks: we could take action, but given that we expect either no impact or something really brief, I'd say that we are good to go
[09:49:36] I'll update the task
[09:50:12] elukey: thanks! I have the task open here so I can update it, don't worry.
[09:50:29] done
[09:50:31] :)
[09:50:41] need to be quick around here :)
[09:51:24] actually, do you know about "kafka-main" also? Last week we depooled the instances in rows D and C beforehand.
[09:51:50] topranks: SRE infra / service-ops are the best points of contact, maybe Keith
[09:52:16] Yeah, he did it last week, I'll drop him a line later. Cheers
[10:45:34] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Add analytics-presto.eqiad.wmnet CNAME for Presto coordinator failover - https://phabricator.wikimedia.org/T273642 (BTullis) I have created a new Kerberos principal named `presto/analytics-test-presto.eqiad.wmnet@WIKIMEDIA` with the following co...
[11:03:56] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Add analytics-presto.eqiad.wmnet CNAME for Presto coordinator failover - https://phabricator.wikimedia.org/T273642 (BTullis) Added to the private puppet repository. Tested the deployment to an-test-coord1001.eqiad.wmnet. ` btullis@an-test-coord...
[12:01:16] Quarry: Database dump for analysis - https://phabricator.wikimedia.org/T93907 (Jhernandez) p: Medium→Low
[12:14:56] Quarry: Show query code revisions and runs history - https://phabricator.wikimedia.org/T206482 (Jhernandez) Related T100982
[12:18:52] Quarry: Add date when query was last run - https://phabricator.wikimedia.org/T77941 (Jhernandez) Open→Resolved a: Jhernandez This seems to be done, right? {F34566892}
[12:27:05] Quarry: Add page navigation on top of the query results - https://phabricator.wikimedia.org/T126542 (Jhernandez)
[12:44:14] Quarry: Add page to discover user profiles and their queries - https://phabricator.wikimedia.org/T287462 (Jhernandez)
[13:06:14] hello teamm :]
[13:06:36] * btullis waves hello
[13:17:31] wences91: can you open a phab task with the error you get and which dump files? thank you, and sorry that you're having trouble with them!
[13:20:32] Quarry: Various assortment of improvements - https://phabricator.wikimedia.org/T133545 (Jhernandez) Open→Declined This seems like a duplicate of {T71037} and {T135908}
[13:21:37] Quarry, Patch-For-Review: Add a stop button to halt the query - https://phabricator.wikimedia.org/T71037 (Jhernandez)
[13:21:47] Quarry: Various assortment of improvements - https://phabricator.wikimedia.org/T133545 (Jhernandez)
[13:22:11] Quarry: Add a possibility to delete a draft - https://phabricator.wikimedia.org/T135908 (Jhernandez)
[13:22:21] Quarry: Various assortment of improvements - https://phabricator.wikimedia.org/T133545 (Jhernandez)
[13:24:18] (CR) Mforns: [C: +1] "LGTM!" [analytics/refinery/source] - https://gerrit.wikimedia.org/r/686629 (https://phabricator.wikimedia.org/T280649) (owner: Milimetric)
[13:34:52] Quarry: Validate and autocomplete database names in the database input field - https://phabricator.wikimedia.org/T287471 (Jhernandez)
[13:35:18] Thanks for stepping in, fdans - I couldn't find where the generation of the daily dumps happens.
[13:36:05] btullis: thank you for responding! I'm downloading a monthly dump to see if there are any issues with their generation
[13:40:19] btullis: https://github.com/wikimedia/analytics-refinery/blob/master/oozie/pageview/daily_dump/make_dumps.hql
[13:40:41] but folks outside wmf wouldn't have access to pageview_hourly
[13:41:07] (I'm happy to show you around our various pipelines anytime)
[13:45:26] Excellent! Thanks, both. I'll take you up on that sometime, milimetric. I've still only just dipped my toes into the refinery repo.
[14:11:39] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Add analytics-presto.eqiad.wmnet CNAME for Presto coordinator failover - https://phabricator.wikimedia.org/T273642 (BTullis) I believe that https://gerrit.wikimedia.org/r/706641 is now ready for merging in order to test the functionality of: *...
[14:13:05] Quarry: quarry-web-01 leaks files in /tmp - https://phabricator.wikimedia.org/T238375 (Andrew) a: Andrew Need to check back in a week to see that tmpreaper is doing its job
[14:23:17] Analytics, Analytics-Kanban, Event-Platform, Services, and 2 others: EventGate should use recent service-runner (^2.8.1) with Prometheus support - https://phabricator.wikimedia.org/T272714 (Ottomata) All eventgate clusters deployed, woohoo! This is the new [[ https://grafana-rw.wikimedia.org/d/ZB...
[14:25:58] Analytics, Analytics-Kanban: jupyter notebook causing syslog/etc.. to fill up with error messages - https://phabricator.wikimedia.org/T287339 (BTullis) No growth of the `/` filesystem for the past 24 hours. {F34567006} https://grafana.wikimedia.org/d/000000377/host-overview?viewPanel=28&orgId=1&var-ser...
[14:29:48] Analytics, Analytics-Kanban, Patch-For-Review: Deprecate profile::analytics::cluster::users - https://phabricator.wikimedia.org/T287063 (jbond) >>! In T287063#7233625, @Ottomata wrote: >> Note: allocating users via data.yaml means that they will be deployed across all the hosts managed by puppet (ev...
[14:31:11] ottomata, qq: I'm trying to connect to the test cluster Hive metastore from Airflow with no success. I've searched for configs in puppet and found host=an-test-coord1001.eqiad.wmnet port=10000, but no luck. I also tried host=analytics-test-hive.eqiad.wmnet and port=9083, also not working. Could you help me?
[14:32:18] mforns: ya, gimme a few mins...
[14:32:33] no prob ottomata, can be later! thanks :]
[14:33:21] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Add analytics-presto.eqiad.wmnet CNAME for Presto coordinator failover - https://phabricator.wikimedia.org/T273642 (jbond) > I believe that https://gerrit.wikimedia.org/r/706641 is now ready for merging in order to test the functionality of: thi...
[14:38:28] mforns: ok lets seee>..>>>
[14:38:43] k :]
[14:39:37] it might have something to do with kerberos
[14:39:42] ya it might
[14:40:06] Analytics, SRE, Traffic, Patch-For-Review: Downloading from Archiva.wikimedia.org seems slower than Maven Central - https://phabricator.wikimedia.org/T273086 (hashar) Performance is currently severely degraded; it seems each request made to archiva has a 3-4 second delay before starting the tr...
[14:40:22] "Server hive/localhost@WIKIMEDIA not found in Kerberos database"
[14:40:37] oh
[14:40:38] hm
[14:40:42] for refine spark we have to do
[14:40:51] --principal analytics/an-test-coord1001.eqiad.wmnet@WIKIMEDIA --keytab /etc/security/keytabs/analytics/analytics.keytab \
[14:41:01] right
[14:41:02] that's a spark opt though
[14:41:18] shouldn't those be specified in airflow.cfg?
[14:41:32] I remember you can specify the ticket cache dir
[14:41:42] one sec
[14:41:54] well, airflow-kerberos is running
[14:42:07] it is configured in airflow
[14:42:18] e.g. principal = analytics/an-test-coord1001.eqiad.wmnet@WIKIMEDIA
[14:42:18] hm
[14:42:26] maybe somehow that is not being passed through to the hive client
[14:42:27] https://www.irccloud.com/pastebin/020E5qHL/
[14:42:29] lets see what airflow does
[14:42:32] yeah, that's all set
[14:42:48] mforns: can you paste a full stacktrace?
[14:42:53] yes
[14:43:37] mforns: also
[14:43:37] https://phabricator.wikimedia.org/T275233#7108441
[14:44:00] ottomata: here: https://pastebin.com/
[14:44:03] oops no
[14:44:13] here: https://pastebin.com/6weqFKwH
[14:44:49] hmsclient hm
[14:49:23] ottomata, mforns: could it be that the Kerberos principal needs to be `hive/analytics-test-hive.eqiad.wmnet@WIKIMEDIA` ?
[14:49:35] yes, something like that
[14:49:35] https://airflow.apache.org/docs/apache-airflow-providers-apache-hive/stable/_modules/airflow/providers/apache/hive/hooks/hive.html#HiveMetastoreHook
[14:49:36] See: https://phabricator.wikimedia.org/T257412#6574413
[14:49:45] it needs to be passed down to the pyhive client
[14:49:45] makes sense
[14:50:03] oh hm
[14:50:04] hm
[14:50:05] i see
[14:50:11] hmmm
[14:51:11] hmm, no, I don't think so... I think that is the principal for the hive server and metastore service users, right?
[14:51:18] the clients should still use their own principals to authenticate
[14:51:18] tet
[14:53:03] oh, hm, no: HiveMetastoreHook is not using pyhive
[14:53:15] looks like it uses the thrift client directly?
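For context on the "Server hive/localhost@WIKIMEDIA not found" error above: a GSSAPI client asks the KDC for a ticket for a service principal derived from the service name and the host it believes it is connecting to, conventionally `service/<fqdn>@REALM`. If the connection host is configured as `localhost`, the client requests `hive/localhost@WIKIMEDIA`, which no KDC knows about. A minimal sketch of that naming convention (the helper function is hypothetical, not WMF or libkrb5 code):

```python
def service_principal(service: str, host: str, realm: str = "WIKIMEDIA") -> str:
    """Build a Kerberos service principal of the conventional
    service/fqdn@REALM form, as a GSSAPI client would request it."""
    return f"{service}/{host}@{realm}"

# With conn.host = localhost, the requested principal matches the
# error seen in the log: it does not exist in the KDC.
print(service_principal("hive", "localhost"))
# With the service CNAME, the request matches a registered principal.
print(service_principal("hive", "analytics-test-hive.eqiad.wmnet"))
```

This is why the host field of the Airflow connection matters for Kerberos auth even though the TCP connection itself succeeds either way.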
[14:54:13] https://pypi.org/project/hmsclient/
[14:56:02] i think we need to define the hive connection
[14:56:06] lets see
[14:56:07] it is defined
[14:56:10] oh?
[14:56:59] analytics-test-metastore
[14:57:00] hm
[14:57:18] I've tried many combinations; right now it points to an-test-coord1001.eqiad.wmnet's internal IP (10.64.53.41) and port 9083, with authMechanism: GSSAPI
[14:57:26] aye
[14:57:58] based on your paste though, it looks like conn.host is not correct
[14:58:06] hm, maybe it's resolving the IP in the log?
[14:58:07] aha
[14:58:09] i doubt it though
[14:58:21] oh, but it did connect
[14:58:21] Connected to localhost:9083
[14:58:24] ottomata: the paste was for other connection settings
[14:58:34] oh ok
[14:58:42] yes, exactly: at that point host was localhost
[14:58:51] I'll paste the current stacktrace
[14:59:09] the host might matter for kerberos auth? not sure
[14:59:29] lets set it to analytics-test-hive.eqiad.wmnet
[14:59:35] https://pastebin.com/aRvfkWGZ
[14:59:48] ok, trying
[15:01:06] [2021-07-27 15:00:31,496] {hive.py:541} INFO - Trying to connect to analytics-test-hive.eqiad.wmnet:9083
[15:01:06] [2021-07-27 15:00:31,497] {hive.py:543} INFO - Connected to analytics-test-hive.eqiad.wmnet:9083
[15:01:22] thrift.transport.TTransport.TTransportException: Bad status: 3 (b'GSS initiate failed')
[15:01:59] k
[15:06:53] ottomata: might it have something to do with hive-site.xml? I've noticed that on an-test-coord1001 I can only log into Hive if I do sudo -u analytics
[15:07:17] hmm, that's true.
[15:07:25] at this point it doesn't seem so, mforns
[15:07:30] i don't think this is reading hive-site
[15:07:35] wouldn't make sense, because airflow runs under analytics too
[15:07:45] i'm trying to run the code that airflow runs in ipython, to try some things
[15:18:06] Analytics-Clusters, Analytics-Kanban, Patch-For-Review, User-MoritzMuehlenhoff: Reduce manual kinit frequency on stat100x hosts - https://phabricator.wikimedia.org/T268985 (BTullis) This first patch is ready for merging; it will install the kstart package and enable auto-renew functionality...
[15:19:33] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Add analytics-presto.eqiad.wmnet CNAME for Presto coordinator failover - https://phabricator.wikimedia.org/T273642 (BTullis) Yes, sorry. I meant https://gerrit.wikimedia.org/r/c/operations/puppet/+/706661
[15:19:44] ok mforns, I have some code with which I can directly reproduce the error, I think
[15:19:54] oh! cool
[15:20:40] https://gist.github.com/ottomata/7bd5600873a8be5b29bbcc46a1c7a011
[15:31:30] mforns, ottomata - where are you running the code?
[15:31:50] through airflow, an-test-coord1001
[15:32:31] ah ok, so on that host we set dns_canonicalize_hostname = true (the krb default), and that doesn't work
[15:32:48] sorry, doesn't work with the analytics-hive-like cnames
[15:33:06] i think we get the same error with the hostname, will try
[15:33:10] if you try to run the code on an-test-client1001, it should work
[15:33:15] ?
[15:33:23] why does the source host matter?
[15:33:29] it's the same keytab and client connecting, right?
[15:33:31] analytics' keytab?
[15:34:02] lemme try on launcher then, with prod hive
[15:34:05] the dns_canonicalize_hostname = true setting forces a reverse lookup of the IP, which ends up at an-test-coord1001, and that is not a match with the analytics-test-hive service principal
[15:34:06] that is not on the coord host
[15:34:16] oh, so it's just a problem on that host?
[15:34:45] the python sasl code reads /etc/krb5.conf
[15:35:01] hmmmmmMmMm
[15:35:03] yeah exactly, on client nodes like an-test-client1001 we have dns_canonicalize_hostname = false
[15:35:08] for this exact reason
[15:35:18] in theory the code above should work on it
[15:35:58] interesting
[15:36:08] elukey: why then can't i use host = an-test-coord1001.eqiad.wmnet
[15:36:09] when connecting?
[15:36:11] would that work?
[15:36:20] when connecting from an-test-coord1001
[15:36:20] ?
[15:36:40] not sure, since the service principal is analytics-test-hive.eqiad.wmnet
[15:37:05] oh hm
[15:37:10] but maybe if you target the host and specify the principal separately (don't recall if possible), yes
[15:37:11] right
[15:37:25] yeah, and i dunno how to do that via this thrift sasl client, mayyyybe through jdbc
[15:37:40] ok, i think that means we need to run our test airflow on a different host
[15:37:52] I can't recall exactly if dns_canonicalize_hostname = true is needed on an-test-coord, we could just turn it to false
[15:37:54] mforns: it works on the main analytics airflow instance
[15:38:01] with host=analytics-hive.eqiad.wmnet
[15:38:12] I'm reading, yes
[15:38:12] oh, because we don't have failover?
[15:38:24] i'd rather keep the configs the same if we can
[15:38:33] no reason not to just move our test airflow to an-test-client
[15:38:39] just have to remember where it is
[15:38:41] i'll make a patch
[15:38:43] ack :)
[15:38:55] ok, thanks elukey and ottomata :D
[15:39:00] <3
[15:39:04] yes, thank you elukey
[15:39:19] i was wondering if that canonicalize thing was related, but yikes, it would have taken me a day or two to figure it out
[15:39:43] kerberos keeps giving love and happiness in people's lives
[15:40:08] hehe
[15:40:13] ottomata: part of my brain recognized the problem after so many days spent in misery
[15:41:28] hah, yeah
[15:41:36] elukey: oh, that means we need to deploy the analytics keytab to an-test-client
[15:42:12] that should be ok, right?
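elukey's diagnosis above can be sketched in pure Python: with dns_canonicalize_hostname = true, libkrb5 reverse-resolves the connection host before building the service principal, so a service CNAME like analytics-test-hive collapses to the backing host, and the requested principal no longer matches the one registered in the KDC. The DNS map and KDC set below are illustrative stand-ins, not real infrastructure data.

```python
# Illustrative stand-ins for DNS and the KDC's principal database.
DNS_CANONICAL = {
    "analytics-test-hive.eqiad.wmnet": "an-test-coord1001.eqiad.wmnet",
}
KDC_PRINCIPALS = {"hive/analytics-test-hive.eqiad.wmnet@WIKIMEDIA"}

def requested_principal(host: str, canonicalize: bool) -> str:
    """Mimic how libkrb5 picks a service principal name: with
    canonicalization on, the host is first replaced by its
    canonical (reverse-lookup) name."""
    if canonicalize:
        host = DNS_CANONICAL.get(host, host)
    return f"hive/{host}@WIKIMEDIA"

# dns_canonicalize_hostname = true (the an-test-coord1001 config): mismatch.
assert requested_principal("analytics-test-hive.eqiad.wmnet", True) not in KDC_PRINCIPALS
# dns_canonicalize_hostname = false (the an-test-client1001 config): match.
assert requested_principal("analytics-test-hive.eqiad.wmnet", False) in KDC_PRINCIPALS
```

This is why the same keytab and client code succeed on an-test-client1001 but fail on an-test-coord1001: the only difference is which principal name the krb5 library ends up asking for.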
[15:42:47] I think it is already there; IIRC I used to test something on it
[15:43:22] elukey@an-test-client1001:~$ ls -l /etc/security/keytabs
[15:43:22] total 8
[15:43:22] dr-xr-x--- 2 analytics analytics 4096 Oct 22 2020 analytics
[15:43:25] dr-xr-x--- 2 analytics-search analytics-search 4096 Mar 29 08:11 analytics-search
[15:43:28] yep!
[15:43:30] good to go :)
[15:43:31] oh huh
[15:43:40] oh, it sure is!
[15:43:52] it was after the long list of tests that me and Joseph did for Bigtop, IIRC
[15:56:45] mforns: moving airflow-analytics-test to an-test-client1001
[15:56:57] i copied /srv/airflow-analytics-test to my homedir on an-test-coord1001
[15:57:06] your dags are there, in case you don't also have them locally
[15:57:22] hmm, i guess i'll copy them over to client too
[15:57:23] ok yeah
[15:59:54] Analytics-Clusters, Analytics-Kanban: Disk filling up on `/` on an-coord1001 - https://phabricator.wikimedia.org/T279304 (BTullis) I've checked `an-test-coord1001` and it's definitely exhibiting the same behaviour as `an-coord1001` and `an-coord1002`, in that the setting of: `log4j.appender.DRFA.MaxBacku...
[16:04:48] fdans: standup!?
[16:40:40] Analytics, SRE, Traffic: Downloading from Archiva.wikimedia.org seems slower than Maven Central - https://phabricator.wikimedia.org/T273086 (hashar) Note that uploading is fast. Here for a file named `service-0.3.78-dist.tar.gz` ` 01:42:07.283 [INFO] [INFO] Uploaded to archiva.releases: https://archi...
[17:00:31] mforns: btw, airflow is moved to an-test-client1001, with dags, plugins, and templates copied
[17:00:33] lemme know if it is ok
[17:00:38] try your stuff there now
[17:26:14] Analytics-Clusters, Analytics-Kanban: Disk filling up on `/` on an-coord1001 - https://phabricator.wikimedia.org/T279304 (BTullis) Possible clue at the top of `/var/log/hive/hive-server2.out` on each of the three coordinator servers. ` btullis@an-coord1001:/var/log/hive$ head /var/log/hive/hive-server2...
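For readers following the disk-filling thread: the truncated setting quoted above, `log4j.appender.DRFA.MaxBacku...`, is log4j 1.x syntax. A typical Hive log4j 1.x fragment of that shape looks roughly like the following; the values are illustrative, not the production config, and a snippet like this only takes effect if the process actually loads a log4j 1.x configuration (a process configured with log4j2 would ignore it entirely).

```properties
# Illustrative log4j 1.x fragment (example values only). DRFA is a
# DailyRollingFileAppender; note that DailyRollingFileAppender in
# log4j 1.x does not honor MaxBackupIndex, which is one common reason
# such logs grow without bound even when the setting is present.
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=/var/log/hive/hive-server2.log
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```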
[17:47:07] ottomata: thanks a lot!!
[17:50:26] Analytics-Clusters, Analytics-Kanban: Disk filling up on `/` on an-coord1001 - https://phabricator.wikimedia.org/T279304 (elukey) @BTullis one thing that I noticed in the command line args of the hive server/metastore processes is `-Dlog4j.configurationFile=hive-log4j2.properties`, which in theory should...
[17:52:08] ottomata: can you chown the dags/data_quality folder to analytics?
[18:06:28] ottomata: the current version of the code works, though! :]
[18:10:16] yes, and you should be able to as well, no? oh, maybe not
[18:10:34] oh oops
[18:10:35] yeah
[18:11:19] done
[18:12:44] thanks ottomata :]
[18:24:07] Analytics, Analytics-Kanban, Event-Platform, Patch-For-Review: EchoMail and EchoInteraction Event Platform Migration - https://phabricator.wikimedia.org/T287210 (Ottomata) a: Ottomata
[18:24:19] (CR) Ottomata: [C: +2] Add legacy/echomail and legacy/echointeraction schemas [schemas/event/secondary] - https://gerrit.wikimedia.org/r/706742 (https://phabricator.wikimedia.org/T287210) (owner: Ottomata)
[18:25:04] (Merged) jenkins-bot: Add legacy/echomail and legacy/echointeraction schemas [schemas/event/secondary] - https://gerrit.wikimedia.org/r/706742 (https://phabricator.wikimedia.org/T287210) (owner: Ottomata)
[18:25:39] Analytics, Analytics-EventLogging, Analytics-Kanban, Better Use Of Data, and 5 others: Migrate legacy metawiki schemas to Event Platform - https://phabricator.wikimedia.org/T259163 (Ottomata)
[18:29:00] Analytics, Analytics-Kanban, Event-Platform, Patch-For-Review: EchoMail and EchoInteraction Event Platform Migration - https://phabricator.wikimedia.org/T287210 (Ottomata)
[19:15:21] Analytics, Analytics-Kanban: Fix default ownership and permissions for Hive managed databases in /user/hive/warehouse - https://phabricator.wikimedia.org/T280175 (Ottomata) We will discuss this with Product Analytics at the next sync.
[19:20:43] Analytics, Analytics-Kanban, WMDE-TechWish: Deployment access request for some analytics repos - https://phabricator.wikimedia.org/T274880 (Ottomata) Open→Resolved
[19:20:53] Analytics, Platform Team Workboards (Image Suggestion API): Airflow collaborations - https://phabricator.wikimedia.org/T282033 (Ottomata)
[19:20:55] Analytics, Product-Analytics, Epic: Replace Oozie with better workflow scheduler - https://phabricator.wikimedia.org/T271429 (Ottomata)
[19:20:57] Analytics, Analytics-Kanban, Patch-For-Review: Generalize the current Airflow puppet/scap code to deploy a dedicated Analytics instance - https://phabricator.wikimedia.org/T272973 (Ottomata) Open→Resolved
[19:21:00] Analytics, Analytics-Kanban, Event-Platform: Schema compatibility check for changing event schemas fails when adding to the middle of an array - https://phabricator.wikimedia.org/T270470 (Ottomata) Open→Resolved
[19:21:06] Analytics, Analytics-Kanban, Event-Platform: jsonschema-tools should fail if new required field is added - https://phabricator.wikimedia.org/T263457 (Ottomata) Open→Resolved
[19:21:20] Analytics, Analytics-Kanban, Event-Platform, Wikidata, and 3 others: Automate event stream ingestion into HDFS for streams that don't use EventGate - https://phabricator.wikimedia.org/T273901 (Ottomata) Open→Resolved
[19:21:23] Analytics, Analytics-Kanban, Event-Platform: jsonschema-tools should allow skipping of repository tests for certain schemas - https://phabricator.wikimedia.org/T285006 (Ottomata) Open→Resolved
[19:21:29] Analytics, Analytics-Kanban, Event-Platform, Product-Analytics, Product-Data-Infrastructure: Run CI tests on analytics/legacy event schemas in schemas/event/secondary - https://phabricator.wikimedia.org/T285975 (Ottomata) Open→Resolved
[19:21:31] Analytics, Analytics-Kanban: Delete UpperCased eventlogging legacy directories in /wmf/data/event 90 days from 2021-04-15 (after 2021-07-14) - https://phabricator.wikimedia.org/T280293 (Ottomata) Open→Resolved
[19:21:41] Analytics, Analytics-Kanban, Event-Platform, Fundraising-Backlog, and 2 others: CentralNoticeBannerHistory and CentralNoticeImpression Event Platform Migration - https://phabricator.wikimedia.org/T271168 (Ottomata) Open→Resolved
[19:21:44] Analytics, Analytics-EventLogging, Analytics-Kanban, Better Use Of Data, and 5 others: Migrate legacy metawiki schemas to Event Platform - https://phabricator.wikimedia.org/T259163 (Ottomata)
[19:21:50] Analytics, Analytics-Kanban, Event-Platform, MW-1.37-notes (1.37.0-wmf.11; 2021-06-21), Patch-For-Review: LandingPageImpression Event Platform Migration - https://phabricator.wikimedia.org/T282855 (Ottomata) Open→Resolved
[19:21:53] Analytics, Analytics-EventLogging, Analytics-Kanban, Better Use Of Data, and 5 others: Migrate legacy metawiki schemas to Event Platform - https://phabricator.wikimedia.org/T259163 (Ottomata)
[19:21:57] Analytics, Analytics-Kanban, Event-Platform, MW-1.37-notes (1.37.0-wmf.11; 2021-06-21), Patch-For-Review: WMDEBanner* Event Platform Migration - https://phabricator.wikimedia.org/T282562 (Ottomata) Open→Resolved
[19:22:06] Analytics, Better Use Of Data, Event-Platform, Product-Infrastructure-Team-Backlog, Epic: Event Platform Client Libraries - https://phabricator.wikimedia.org/T228175 (Ottomata)
[19:22:09] Analytics, Analytics-EventLogging, Analytics-Kanban, Better Use Of Data, and 5 others: Migrate legacy metawiki schemas to Event Platform - https://phabricator.wikimedia.org/T259163 (Ottomata)
[19:22:11] Analytics, Analytics-Kanban, Better Use Of Data, Event-Platform, and 5 others: VirtualPageView Event Platform Migration - https://phabricator.wikimedia.org/T238138 (Ottomata) Open→Resolved
[19:41:32] Analytics, DBA, Event-Platform, WMF-Architecture-Team, Services (later): Consistent MediaWiki state change events | MediaWiki events as source of truth - https://phabricator.wikimedia.org/T120242 (Ottomata) > the idea of a commit log that contains the entire history of all events [...] I gath...
[19:44:35] Analytics, Wikipedia-Android-App-Backlog (Android Release FY2021-22): android image_recommendation_interaction error - https://phabricator.wikimedia.org/T284620 (Ottomata)