[00:22:46] RECOVERY - Check unit status of monitor_refine_eventlogging_analytics on an-launcher1002 is OK: OK: Status of the systemd unit monitor_refine_eventlogging_analytics https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[01:05:36] Analytics-Radar, Product-Analytics (Kanban): [REQUEST] Investigate decrease in New Registered Users - https://phabricator.wikimedia.org/T289799 (MMiller_WMF) @Sdkb -- thanks for pointing out that task. I actually noticed today that the message has been slimmed down (see below). Do you know of a convers...
[02:33:03] (PS1) Andrew Bogott: api_stop_query(): use one() in the query rather than [0] [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721109
[02:33:05] (PS1) Andrew Bogott: Test api routes [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721110
[02:33:44] (CR) jerkins-bot: [V: -1] Test api routes [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721110 (owner: Andrew Bogott)
[02:38:57] (CR) Andrew Bogott: "recheck" [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721110 (owner: Andrew Bogott)
[02:46:36] Quarry: Quarry queries seem not to run for fawiki - https://phabricator.wikimedia.org/T291053 (Huji)
[04:27:22] (PS1) Andrew Bogott: Add test_login.py [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721117
[04:32:20] PROBLEM - Check unit status of monitor_refine_event_sanitized_analytics_delayed on an-launcher1002 is CRITICAL: CRITICAL: Status of the systemd unit monitor_refine_event_sanitized_analytics_delayed https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[04:56:34] Analytics-Radar, Product-Analytics (Kanban): [REQUEST] Investigate decrease in New Registered Users - https://phabricator.wikimedia.org/T289799 (Iflorez) When comparing logging table records where log_action = 'create' to ssac table records where event.isselfmade = true, I see a very similar number of re...
[07:15:03] Hi btullis - I'm seeing errors for the 'mediarequest_top_files' data :S
[07:55:54] Analytics: Check home/HDFS leftovers of kaywong - https://phabricator.wikimedia.org/T291060 (MoritzMuehlenhoff)
[08:24:43] joal: Oh dear. Is it the same issue with data missing from the C3 cluster?
[08:39:14] The import of the 4th `mediarequest_per_file/data` table has finished, so I believe that this table will be 100% complete. Looking at `mediarequest_top_files/data` again now.
[09:53:25] I've re-imported all four of the `mediarequest_top_files/data` snapshots again, double-checking each step. Do you have a test case that we can use for this again?
[10:32:26] Analytics-Clusters, Analytics-Kanban, User-MoritzMuehlenhoff: Improve user experience for Kerberos by creating automatic token renewal service - https://phabricator.wikimedia.org/T268985 (BTullis) The change has now been deployed and the auto-renew service is enabled for users on an-test-client1001.e...
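For context on the Kerberos auto-renewal change in the last task update above, here is a minimal sketch of how a user on an-test-client1001 might verify the new behaviour. The grep pattern for the renewal timer is a placeholder (the real unit name comes from the puppet change), and the `hdfs dfs` call is just one convenient way to prove the ticket is still usable.

```bash
# Minimal verification sketch, assuming an interactive session on an-test-client1001.
kinit                                               # authenticate once as usual
klist                                               # note the ticket's expiry and "renew until" times
systemctl list-timers --all | grep -iE 'krb|renew'  # placeholder pattern: look for the auto-renewal timer
# ...come back after the original ticket lifetime would normally have expired:
klist                                               # the ticket should still be valid if auto-renewal kicked in
hdfs dfs -ls /tmp > /dev/null && echo "credentials still usable"
```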
[10:33:18] Analytics-Clusters, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban, Patch-For-Review: Deploy an-test-coord1002 to facilitate failover testing of analytics coordinator role - https://phabricator.wikimedia.org/T287864 (BTullis)
[10:34:05] Analytics-Clusters, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban, Patch-For-Review: Deploy an-test-coord1002 to facilitate failover testing of analytics coordinator role - https://phabricator.wikimedia.org/T287864 (BTullis)
[10:39:55] Analytics-Clusters, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban, Patch-For-Review: Deploy an-test-coord1002 to facilitate failover testing of analytics coordinator role - https://phabricator.wikimedia.org/T287864 (BTullis) I'm awaiting the decision from this server request: {T...
[11:45:34] Hey btullis - sorry, I was away caring for the kids - I'll be on during siesta-time and then from standup onward
[11:47:47] btullis: my tests are now passing - it must have been a problem with the previous import, I assume
[11:48:24] btullis: I ran the test on a lot of dates, will let you know if any more problems happen :)
[12:17:14] and we have a new reload to do btullis - 'ikibase:limit "once";'
[12:17:21] wooops - wrong paste
[12:17:36] btullis: this table: local_group_default_T_top_pageviews
[12:17:52] same as the mediarequests_top one, we're missing data :(
[12:44:03] btullis: tested the kinit on an-test-client1001 (3 ssh in a row - no creds, first time after kinit, second time after kinit) and everything looks good (also checked the timer etc.)
[12:44:09] will report back in 2d :)
[12:44:23] (great work)
[12:48:53] \o/
[13:03:31] joal: ack. Will reload those two tables now.
[13:04:11] elukey: thanks.
[13:08:35] joal: When you say "same as the mediarequests_top one, we're missing data" - is this the `local_group_default_T_mediarequest_top_files/data` table?
[13:14:37] (CR) Michael DiPietro: [C: +1] api_stop_query(): use one() in the query rather than [0] [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721109 (owner: Andrew Bogott)
[13:15:49] Analytics-Clusters, Analytics-Kanban, User-MoritzMuehlenhoff: Improve user experience for Kerberos by creating automatic token renewal service - https://phabricator.wikimedia.org/T268985 (Ottomata) Nice!
[13:17:06] RECOVERY - Check unit status of monitor_refine_event_sanitized_analytics_delayed on an-launcher1002 is OK: OK: Status of the systemd unit monitor_refine_event_sanitized_analytics_delayed https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[13:24:32] btullis: o/ am avail if you wanna chat testing puppet stuff
[13:25:20] Yeah, let's do that. BC?
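As an aside on the per-date endpoint testing mentioned above (the reload request for local_group_default_T_top_pageviews, which presumably backs the public pageviews "top" endpoint): a rough spot-check might look like the sketch below. The project, access method and dates are arbitrary examples, and the URL follows the public Wikimedia REST API; adjust to whatever coverage is actually being verified.

```bash
# Hedged spot-check: fetch the "top pageviews" endpoint for the first day of a few months
# and confirm each response actually contains articles (requires curl and jq).
for ym in 2021-01 2021-02 2021-03; do
  y=${ym%-*}; m=${ym#*-}
  url="https://wikimedia.org/api/rest_v1/metrics/pageviews/top/en.wikipedia/all-access/${y}/${m}/01"
  n=$(curl -sf "$url" | jq '.items[0].articles | length')
  echo "${url} -> ${n:-ERROR} articles"
done
```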
[13:26:41] Analytics-Clusters, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban: Upgrade Matomo to latest upstream - https://phabricator.wikimedia.org/T275144 (BTullis) a: BTullis
[13:27:49] Analytics-Clusters, Analytics-Kanban, observability, Patch-For-Review: Setup Analytics team in VO/splunk oncall - https://phabricator.wikimedia.org/T273064 (BTullis) a: BTullis
[13:29:34] Analytics-Kanban: Analytics Hardware for Fiscal Year 2020/2021 - https://phabricator.wikimedia.org/T255145 (BTullis)
[13:29:36] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Refresh Druid nodes (druid100[1-3]) - https://phabricator.wikimedia.org/T255148 (BTullis) Open→Resolved
[13:34:03] (CR) Andrew Bogott: [C: +2] api_stop_query(): use one() in the query rather than [0] [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721109 (owner: Andrew Bogott)
[13:34:16] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Add 6 worker nodes to the HDFS Namenode config of the Analytics Hadoop cluster - https://phabricator.wikimedia.org/T275767 (BTullis) Open→Resolved
[13:34:18] Analytics-Clusters, Patch-For-Review: Decommisison the Hadoop backup cluster and add the worker nodes to the main Hadoop cluster - https://phabricator.wikimedia.org/T274795 (BTullis)
[13:38:58] (Merged) jenkins-bot: api_stop_query(): use one() in the query rather than [0] [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721109 (owner: Andrew Bogott)
[13:41:55] (CR) Andrew Bogott: [C: +2] Test api routes [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721110 (owner: Andrew Bogott)
[13:42:05] (CR) Andrew Bogott: [C: +2] Add test_login.py [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721117 (owner: Andrew Bogott)
[13:42:23] o/ ottomata: by the way, Luca proposed a project to set up pontoon for analytics/kerberos during next month's hackathon here: https://docs.google.com/document/d/1g1tPPWuiOTNBsH5-vK-7BEb_Esal3PWmHCosIwPTvd8/edit#heading=h.n64cbjifwd6y
[13:46:52] (Merged) jenkins-bot: Test api routes [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721110 (owner: Andrew Bogott)
[13:47:07] (Merged) jenkins-bot: Add test_login.py [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721117 (owner: Andrew Bogott)
[14:11:58] Should deletion log comments / summaries be in the data lake somewhere? :)
[14:12:41] oh btullis SORRY missed these pings somehow
[14:12:45] batcave good, now?
[14:12:59] Yup. See you there.
[14:36:53] addshore: hmmm
[14:37:15] i'd expect them to be in the monthly import snapshots of the mw databases
[14:37:22] but in events hmmm... i think so?
[14:37:40] yes
[14:37:41] https://schema.wikimedia.org/repositories//primary/jsonschema/mediawiki/page/delete/current.yaml
[14:37:46] so
[14:37:50] event.mediawiki_page_delete table
[14:37:55] in hive
[14:38:08] aaah yes that might work, I was looking at logging before
[14:42:54] ottomata: Would it make sense for me to work on extending this existing WMCS project? https://openstack-browser.toolforge.org/project/analytics
[14:43:13] Or do you think it would be better to request a brand new one?
[14:44:41] btullis: yes
[14:44:43] that would make sense
[14:44:57] lemme make you an admin...
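A sketch of the kind of Hive query the event.mediawiki_page_delete suggestion above points at. The field list and the year/month/day partition filter are assumptions based on the page/delete schema linked above - verify them with `DESCRIBE event.mediawiki_page_delete` before relying on this - and the beeline wrapper on the analytics clients is assumed to handle the connection details.

```bash
# Hypothetical spot-check of deletion comments in the Data Lake (run from an analytics client host).
beeline --silent=true -e '
  SELECT meta.dt, `database`, page_title, comment
  FROM event.mediawiki_page_delete
  WHERE year = 2021 AND month = 9 AND day = 15
  LIMIT 10;
'
```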
[14:45:04] 👍
[14:47:14] Analytics, Data-Engineering, Growth-Team, Metrics-Platform, and 4 others: Migrated Server-side EventLogging events recording http.client_ip as 127.0.0.1 - https://phabricator.wikimedia.org/T288853 (nettrom_WMF) >>! In T288853#7352497, @Ottomata wrote: > The -1 is not for the idea, but for a speci...
[14:47:52] btullis: done
[14:49:20] Confirmed, thanks. Will poke around.
[14:56:33] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (Ottomata) Running rsync /srv thorium -> an-web1001: ` time sudo rsync -av thorium.eqiad.wmnet::transfer_from_thorium/ /srv/ `
[15:16:26] Quarry, cloud-services-team (Kanban): Should quarry use our standard secrets management - https://phabricator.wikimedia.org/T290184 (Bstorm) We probably should use a project puppetmaster like usual we've decided.
[15:21:51] Quarry, cloud-services-team (Kanban): Should quarry use our standard secrets management - https://phabricator.wikimedia.org/T290184 (aborrero) cc {T276327}
[15:29:07] joal: I've finished reloading all four snapshots of `local_group_default_T_top_pageviews/data` - Can you tell me if you're still seeing issues with it please?
[15:30:09] Analytics, SRE, ops-eqiad: Degraded RAID on an-worker1096 - https://phabricator.wikimedia.org/T290805 (elukey) For the records, puppet was failing with: ` Sep 14 12:01:09 an-worker1096 puppet-agent[35073]: (/Stage[main]/Bigtop::Hadoop::Worker/Bigtop::Hadoop::Worker::Paths[/var/lib/hadoop/data/g]/File...
[15:31:58] btullis: running tests right now
[15:31:59] (PS1) Razzi: Update superset package to 1.3 [analytics/superset/deploy] - https://gerrit.wikimedia.org/r/721340
[15:33:20] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Addshore)
[15:34:21] btullis: Not sure I told you - on Wednesdays I care for the kids, so my schedule is not the usual one - some time early morning, some in the afternoon (siesta) and then evening time - I'm sorry for the lag in communication today
[15:44:10] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Addshore) Cross linking to {T290839} which I was reading when I decided to write this down, particularly when reading T290839#7354690
[15:44:58] btullis: from my current tests, your reload has done the job :)
[15:45:05] thanks a lot!
[15:45:17] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Addshore)
[16:00:44] Analytics, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban, Product-Analytics: Upgrade Superset to 1.3 - https://phabricator.wikimedia.org/T288115 (razzi) Superset 1.3 is deployed to staging, test it out via `ssh -NL 8080:an-tool1005.eqiad.wmnet:80 an-tool1005.eqiad.wmnet` and the...
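Expanding the staging-test hint in the last task update, a minimal sketch of how that tunnel might be used. The host and port come from the comment itself; the curl probe and the sleep are just an optional sanity check before opening a browser.

```bash
# Keep the tunnel in the background, probe the forwarded port, then browse http://localhost:8080/.
ssh -N -L 8080:an-tool1005.eqiad.wmnet:80 an-tool1005.eqiad.wmnet &
tunnel_pid=$!
sleep 3                                       # give the tunnel a moment to establish
curl -sI http://localhost:8080/ | head -n 1   # expect an HTTP status line from the Superset staging host
kill "$tunnel_pid"                            # close the tunnel when done
```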
[16:12:09] Analytics, Analytics-Kanban: Fix `wmf.editors_daily` data deletion - https://phabricator.wikimedia.org/T290093 (Milimetric) Open→Resolved Yep, data deletion seems to be working well
[16:12:39] (CR) Ottomata: [C: +1] Update superset package to 1.3 [analytics/superset/deploy] - https://gerrit.wikimedia.org/r/721340 (owner: Razzi)
[16:23:06] (CR) Joal: [V: +2 C: +2] "Merging for deploy" [analytics/refinery] - https://gerrit.wikimedia.org/r/720317 (https://phabricator.wikimedia.org/T290723) (owner: Joal)
[16:26:20] !log Deploying refinery
[16:26:25] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[16:39:40] joal: No worries from my end. I'm sorry about all of the errors in loading. If the reloading is doing the job (although there is still an open question about `local_group_default_T_mediarequest_top_files/data`) then it looks like I have made a systematic error in the reloading.
[16:40:35] Given that running sstableloader multiple times doesn't cause any issues, I'd probably rather reload *all* of the tables that I've done so far, to be on the safe side.
[16:41:49] btullis: so far so good (I'm testing daily top endpoints, one day per month, and no errors 2015 -> 2020)
[16:42:06] btullis: Once I'm done with daily, I'll do some tests on monthly
[16:43:25] btullis: no problem in having errors, I'm happy we managed to fix them :)
[16:44:24] OK. Cool. I think the problem arose because I used relative paths in some commands, where it would have been more precise to use absolute paths. If I ran the command from the wrong directory (which seems likely to have been the cause) then that would have loaded the same snapshot twice, instead of loading two different snapshots.
[16:46:12] makes sense btullis
[16:50:35] I feel a bit of a fat-fingered fool though, for making such schoolboy errors. Still, hopefully it's all salvageable.
[16:51:28] btullis: this is the concern with manual execution of somewhat repetitive tasks :S computers are better than us at repetition :)
[16:52:59] agree.
[16:56:57] interesting btullis - for every endpoint, there are a small number of rows not present on the new hosts
[16:58:58] folks, I'm monitoring the next Gobblin runs - I have just deployed the change to fetch-timeout
[16:59:31] Hmm. That is odd.
[17:00:06] yeah - could be rows not correctly replicated
[17:00:21] btullis: --^
[17:03:22] This is a bit worrying.
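To make the relative-path pitfall described above concrete, here is a minimal sketch of the snapshot-and-load flow with absolute paths throughout. The hostnames, data-directory layout, snapshot tag and staging path are all illustrative rather than the exact commands used here; the load-bearing detail is that sstableloader derives the target keyspace and table from the last two components of the directory it is given, so an absolute staging path removes any dependence on the current working directory.

```bash
# 1. On a source Cassandra node: take a tagged snapshot of the keyspace (names illustrative).
nodetool snapshot -t reload-check local_group_default_T_top_pageviews

# 2. Stage the snapshot's sstables under an absolute <keyspace>/<table> directory,
#    since sstableloader reads keyspace and table from the last two path components.
STAGE=/srv/reload/local_group_default_T_top_pageviews/data
mkdir -p "$STAGE"
cp /srv/cassandra/data/local_group_default_T_top_pageviews/data-*/snapshots/reload-check/* "$STAGE/"

# 3. Load using the absolute path, so running from the wrong directory cannot silently
#    point at the same (or a different) snapshot than intended.
sstableloader -d aqs1010-a.eqiad.wmnet "$STAGE"
```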
[17:04:31] btullis: I'm assuming that taking snapshots from other hosts and replicating the loading would help, but this represents quite some work
[17:07:46] btullis: we could try that for small tables and see if it does the trick
[17:17:42] Analytics-Clusters, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban, Platform Team Workboards (Platform Engineering Reliability): Upgrade the Cassandra AQS cluster to Cassandra 3.11 - https://phabricator.wikimedia.org/T255141 (Ottomata)
[17:17:54] Analytics-Clusters, Cassandra, Data-Engineering, Data-Engineering-Kanban, and 2 others: Cassandra3 migration for Analytics AQS - https://phabricator.wikimedia.org/T249755 (Ottomata)
[17:18:15] Analytics-Clusters, Cassandra, Data-Engineering, Data-Engineering-Kanban, and 2 others: Cassandra3 migration for Analytics AQS - https://phabricator.wikimedia.org/T249755 (Ottomata) a: BTullis
[17:18:30] Analytics-Clusters, Analytics-Kanban, Cassandra, Patch-For-Review: Set up a testing environment for the AQS Cassandra 3 migration - https://phabricator.wikimedia.org/T257572 (Ottomata) a: hnowlan→razzi
[17:23:07] Analytics-Clusters, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban: Upgrade Matomo to latest upstream - https://phabricator.wikimedia.org/T275144 (BTullis) a: BTullis→razzi
[17:23:37] Analytics-Clusters, Analytics-Kanban, observability, Patch-For-Review: Setup Analytics team in VO/splunk oncall - https://phabricator.wikimedia.org/T273064 (BTullis) a: BTullis→razzi
[17:23:51] Analytics-Clusters, Analytics-Kanban, Data-Engineering, Data-Engineering-Kanban, Product-Analytics: Upgrade Superset to 1.3 - https://phabricator.wikimedia.org/T288115 (Ottomata)
[17:30:58] Analytics, Data-Engineering: LVS in Analytics VLANs - https://phabricator.wikimedia.org/T288750 (Ottomata) Moving back to Analytics to reprioritize with the team.
[17:34:40] (PS1) Andrew Bogott: Added test_user.py [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721363
[17:38:50] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Ottomata) > a reliable and consistent input (such as MediaWiki recent changes) I guess by this you mean polling the MW RecentChanges API?
[17:40:51] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Addshore) >>! In T291089#7356181, @Ottomata wrote: >> a reliable and consistent input (such as MediaWiki recent changes) > > I guess by this you...
[17:41:05] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Ottomata) > And the new query service flink updater could also make use of the RDF stream Perhaps the existing logic in the WDQS updater to gener...
[17:41:38] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Ottomata) > I imagine other sources like https://wikitech.wikimedia.org/wiki/Event_Platform/EventStreams would all have the same problems? Yes, E...
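Since the task comments above compare RecentChanges polling with the revision-create stream and EventStreams, here is a minimal sketch of tailing such a stream from the public EventStreams endpoint. The stream name `mediawiki.revision-create` and the field names are assumptions (check the list of exposed streams at https://stream.wikimedia.org/?doc), and the jq filter for wikidatawiki is only an example.

```bash
# Tail the public SSE endpoint and print a couple of fields per Wikidata revision (needs curl and jq).
curl -s -N https://stream.wikimedia.org/v2/stream/mediawiki.revision-create \
  | grep --line-buffered '^data: ' \
  | sed -u 's/^data: //' \
  | jq -c 'select(.database == "wikidatawiki") | {dt: .meta.dt, rev_id: .rev_id, page_title: .page_title}'
```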
[17:43:29] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Addshore) > Perhaps the existing logic in the WDQS updater to generate its RDF stream could be factored out into its own service? Or, at least, i...
[17:54:22] Analytics, Analytics-EventLogging, Analytics-Kanban, Event-Platform, and 2 others: Determine which remaining legacy EventLogging schemas need to be migrated or decommissioned - https://phabricator.wikimedia.org/T282131 (Ottomata)
[17:55:05] Analytics, Analytics-EventLogging, Analytics-Kanban, Event-Platform, and 2 others: Determine which remaining legacy EventLogging schemas need to be migrated or decommissioned - https://phabricator.wikimedia.org/T282131 (Ottomata) After updating the audit spreadsheet, I removed more of the schemas...
[18:00:42] (CR) Ottomata: [C: +2] Map tile state change event schema [schemas/event/primary] - https://gerrit.wikimedia.org/r/716219 (https://phabricator.wikimedia.org/T289771) (owner: Jgiannelos)
[18:55:25] I'm quitting for tonight folks - see you tomorrow
[19:10:39] byeee joal
[20:14:17] FYI an-web1001 just ran out of disk
[20:15:13] thanks RhinosF1 - ottomata, razzi - would one of you take care of that?
[20:15:22] OH!
[20:15:27] joal: np
[20:15:31] that's me, i'm rsyncing... it should have had plenty of space
[20:15:32] looking
[20:15:40] 21:13:12 PROBLEM - Disk space on an-web1001 is CRITICAL: DISK CRITICAL - free space: /srv 0 MB (0% inode=99%): https://wikitech.wikimedia.org/wiki/Monitoring/Disk_space https://grafana.wikimedia.org/dashboard/db/host-overview?var-server=an-web1001&var-datasource=eqiad+prometheus/ops
[20:15:46] It's just /srv
[20:16:01] OHHHH because many of the things i'm syncing are hardlinks on the source
[20:16:08] so it's making copies on the dest!
[20:16:13] can fix i think
[20:18:28] Hello, is it intentional that non-legacy schemas like https://schema.wikimedia.org/repositories/secondary/jsonschema/analytics/mediawiki/mentor_dashboard/visit/current.yaml don't have a DB name in them?
[20:19:05] urbanecm: i think you'd have to ask the maintainer of that schema
[20:19:28] if by "maintainer" you mean the person who's responsible for feeding the data there, that'd be me ottomata :)
[20:19:32] haha
[20:19:35] did you make the schema?
[20:19:39] yes
[20:19:41] oh
[20:20:05] https://gerrit.wikimedia.org/r/c/schemas/event/secondary/+/714099
[20:20:17] i guess it's not in analytics common?
[20:20:27] this might be a question for the metrics platform folks who are trying to standardize analytics schemas
[20:20:31] and/or
[20:20:34] you could include mediawiki/common
[20:20:37] fragment
[20:21:03] https://schema.wikimedia.org/repositories//primary/jsonschema/fragment/mediawiki/common/current.yaml
[20:21:19] might be better / easier to just add database explicitly though
[20:22:36] and once i do so, will something magically fill it? Or do I need to do that from the data producer myself?
[20:24:18] you'll need to fill it
[20:24:21] okay
[20:24:26] unless metrics platform has something in mind
[20:24:32] ask jason linehan
[20:24:55] *cries at the fact that his query fails*
[20:25:02] where are they reachable? :-)
[20:25:50] and also, do you happen to know if meta.domain is going to be the actual http domain (i.e. cs.m.wikipedia.org or cs.wikipedia.org), or some canonical version of the domain?
[20:26:01] Anyone any idea why this would spend all the time running and then fail in stage 2? D: https://phabricator.wikimedia.org/P17276
[20:26:50] maybe I should just create the table first ...
[20:27:09] Analytics-Clusters, Analytics-Kanban, Patch-For-Review: Set up an-web1001 and decommission thorium - https://phabricator.wikimedia.org/T285355 (Ottomata) Oops, have to tell rsync to preserve hardlinks. Starting over: ` sudo rsync -avn -H --delete --exclude '.hardsync.*' thorium.eqiad.wmnet::transfe...
[20:29:37] urbanecm: IIRC it will be the actual http domain
[20:29:54] in Hive though, we augment events with normalized_host
[20:30:44] urbanecm: slack, email or phab?
[20:30:47] i think their team doesn't much use IRC
[20:31:03] :/
[20:31:09] relevant
[20:31:10] https://phabricator.wikimedia.org/T275420
[20:32:02] Analytics, Dumps-Generation, Wikidata, wdwb-tech: Proposal: Generate Wikidata JSON & RDF dumps from Hadoop - https://phabricator.wikimedia.org/T291089 (Addshore) From IRC > 7:32 PM <+dcausse> addshore: I'm not convinced that RecentChanges is more reliable than the revision-create stream, using t...
[20:32:53] I'll consult jason then. Thanks again ottomata.
[20:35:14] ya good luck!
[20:41:19] (CR) Andrew Bogott: [C: +2] Added test_user.py [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721363 (owner: Andrew Bogott)
[20:42:14] (CR) jerkins-bot: [V: -1] Added test_user.py [analytics/quarry/web] - https://gerrit.wikimedia.org/r/721363 (owner: Andrew Bogott)
[20:52:25] I see .... https://phabricator.wikimedia.org/P17277
[22:55:49] (CR) Dave Pifke: [C: -1] "This might be a deployment-prep issue, or it might point to an actual problem:" [analytics/statsv] - https://gerrit.wikimedia.org/r/721044 (https://phabricator.wikimedia.org/T290131) (owner: Dave Pifke)
[23:06:33] (CR) Krinkle: Add TLS support (1 comment) [analytics/statsv] - https://gerrit.wikimedia.org/r/721044 (https://phabricator.wikimedia.org/T290131) (owner: Dave Pifke)
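As a follow-up to the hard-link problem and the corrected `rsync -H` in the task update above, a hypothetical spot-check that the links actually survived on the destination might look like the sketch below; the file path is a placeholder for any file known to have several names on the source.

```bash
# Compare overall usage and inspect link counts on an-web1001 after the re-sync.
df -h /srv                                  # usage should sit well below the naive sum of file sizes
stat -c '%h %i %n' /srv/path/to/known/file  # a link count greater than 1 means the hard link was preserved
find /srv -samefile /srv/path/to/known/file # lists every name that shares the same inode
```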