[07:23:25] good morning
[07:43:43] bonjour
[07:50:59] (CR) Joal: [C: +1] "Let's merge (I'm super bad at locales, patch looks good from my newbie perspective :)" [analytics/refinery/source] - https://gerrit.wikimedia.org/r/676075 (owner: DCausse)
[07:52:13] (CR) Joal: "Closing comments after having patched." (5 comments) [analytics/refinery] - https://gerrit.wikimedia.org/r/701463 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[08:50:19] (PS1) Joal: Add gobblin-wmf to artifacts folder [analytics/refinery] - https://gerrit.wikimedia.org/r/702591 (https://phabricator.wikimedia.org/T271232)
[08:51:16] Hello all.
[08:51:23] Hi btullis :)
[09:08:42] I have a quick question about DNS search suffixes and/or CanonicalDomains entries in ~/.ssh/config
[09:10:05] How is it generally done? Firstly, should I expect to be able to type an unqualified, single-word hostname and get to the right server? Or do people generally type the FQDN out in full?
[09:14:07] Should I add WMF-specific search suffixes to /etc/resolv.conf on my workstation? If so, which ones would people recommend?
[09:19:03] I've set up my SSH config according to this: https://wikitech.wikimedia.org/wiki/Production_access#Advanced:_operations_config but I wondered how other people do it, and whether typing the FQDN is the norm. Thanks.
[09:22:50] btullis: o/ are you running on debian/linux?
[09:23:11] https://wikitech.wikimedia.org/wiki/Wmf-sre-laptop is a good source
[09:23:26] Yes. bullseye/testing (for graphics card compatibility only)
[09:23:54] for the ssh, I usually type the whole hostname but I have auto-completion
[09:24:14] IIRC the config is all in the package
[09:24:14] elukey: Great, thanks. I will check that out.
[09:24:44] basically you have a script called wmf-update-known-hosts-production that gets updated host fingerprints from the bastions
[09:24:53] and populates your known hosts config
[09:25:17] then IIRC there is some config to allow auto-completion (in case I can find more, it has been a while since I touched it)
[09:25:21] really handy
[09:26:05] the deb package also contains pws, that is another tool that you'll need
[09:26:12] https://office.wikimedia.org/wiki/Pwstore
[09:26:24] (to share secrets among SREs via gpg)
[09:26:56] Ah yes, I did run that script, but without installing the repo first. Known-hosts is populated and auto-complete looks like it will work. Thanks.
[09:29:08] Great. I'll work on pwstore today. I'm quite familiar with 'pass' (https://www.passwordstore.org/) which looks like it works in a very similar way to Pwstore.
[09:30:23] yes it should be very similar, there should be a step in your onboarding to have a gpg key added to the repo
[09:30:39] after that you should be able to decrypt pwstore's files
[09:31:30] the most used (at least for me) passwords are the "management" one (that is access to the serial console) and root_password (to log in on a tty after the serial connection and debug what's wrong on the host)
[09:32:32] Nice. FYI I'm also booked in for a workshop with moritzm this month to learn about the way we build and host deb packages.
[09:41:29] That's perfect. Thanks elukey. Turns out that I've done quite a few of these steps manually, so I could have saved myself some time, but never mind.
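For readers following along, here is a minimal sketch of the ~/.ssh/config setup being discussed; the bastion host, username, key path and known-hosts file name are illustrative assumptions, and the wikitech page plus the wmf-sre-laptop package linked above remain the authoritative reference:

```
# Canonicalize bare hostnames so "ssh an-master1001" resolves to the FQDN.
CanonicalizeHostname yes
CanonicalDomains eqiad.wmnet codfw.wmnet wikimedia.org
CanonicalizeMaxDots 0

# Route internal hosts through a bastion (hostname and user are examples).
Host *.eqiad.wmnet *.codfw.wmnet
    ProxyJump bast1003.wikimedia.org
    User exampleuser
    IdentityFile ~/.ssh/id_ed25519_wmf_prod

# Use the fingerprints fetched by wmf-update-known-hosts-production
# (the exact output file name depends on the package version).
Host *
    UserKnownHostsFile ~/.ssh/known_hosts ~/.ssh/known_hosts.d/wmf-prod
```

With this in place, shell tab-completion over the populated known-hosts file is what makes typing short hostnames practical.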
[09:42:17] btullis: :)
[09:44:06] these days wmf-update-known-hosts-production pulls fingerprints via HTTPS from config-master (to avoid the initial chicken-and-egg where you needed to manually ack the fingerprint of bast2002 to eventually download the fingerprints from it)
[09:47:03] moritzm: Ah yes, I suspected so. I'll update this section then: https://wikitech.wikimedia.org/wiki/Production_access#Known_host_files because it still mentions the need for the chicken. :)
[09:49:28] Will the update-ssh-config script work if I put my production private key on a Yubikey? I haven't yet, but one is in the post.
[09:54:15] ah yes, https://wikitech.wikimedia.org/wiki/Production_access#Known_host_files is outdated, will update it later
[09:54:50] it should also work with yubikey-local storage, I think, but if you run into an issue, let me know and I can have a look
[09:55:37] Cheers. Will do.
[09:56:38] Analytics-Radar, WMDE-Templates-FocusArea, Patch-For-Review, WMDE-TechWish (Sprint-2021-02-03), and 2 others: Add missing normalization to CodeMirror Grafana board - https://phabricator.wikimedia.org/T273748 (lilients_WMDE) I added new diagrams about code mirror enabled in preferences to the [[ h...
[09:59:45] moritzm: ah TIL about config-master!
[12:11:30] PROBLEM - Check unit status of performance-asoranking on stat1007 is CRITICAL: CRITICAL: Status of the systemd unit performance-asoranking https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[12:44:37] ah interesting --^
[12:45:25] ottomata: o/ I think that the asoranking unit needs pandas to run, it was created before conda
[12:45:54] maybe we can install the pkg temporarily and ask Performance to migrate to conda
[12:45:57] wdyt?
[13:00:36] elukey: o/ for sure, makes sense. these are things we were not sure about! I sent emails but didn't expect to find out anything until we uninstalled :)
[13:00:54] let's put pandas back
[13:00:56] making a patch
[13:08:46] Analytics-EventLogging, Analytics-Kanban, Event-Platform, Goal, and 3 others: Modern Event Platform - https://phabricator.wikimedia.org/T185233 (Ottomata)
[13:14:11] (CR) Ottomata: [V: +2 C: +2] Add gobblin-wmf to artifacts folder [analytics/refinery] - https://gerrit.wikimedia.org/r/702591 (https://phabricator.wikimedia.org/T271232) (owner: Joal)
[13:14:36] RECOVERY - Check unit status of performance-asoranking on stat1007 is OK: OK: Status of the systemd unit performance-asoranking https://wikitech.wikimedia.org/wiki/Analytics/Systems/Managing_systemd_timers
[13:14:40] (CR) Ottomata: [C: +2] Make LocaleUtil and UserEventBuilder independent from system locale [analytics/refinery/source] - https://gerrit.wikimedia.org/r/676075 (owner: DCausse)
[13:18:24] thanks luca!
[13:21:02] <3
[14:12:36] joal: gonna merge https://gerrit.wikimedia.org/r/c/analytics/refinery/+/701463, ok?
[14:12:48] please!
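As a side note on the performance-asoranking alert above, a minimal sketch of how such a failed timer unit might be triaged — the unit and host names come from the log, while the exact flags are just one reasonable choice:

```
# On the affected host (stat1007 here): inspect the failed unit.
ssh stat1007.eqiad.wmnet
systemctl status performance-asoranking.service

# Read recent logs to find the root cause (here, a missing pandas module).
sudo journalctl -u performance-asoranking.service -n 100 --no-pager

# Once the dependency is restored, re-run the unit; the Icinga check
# then recovers on its own.
sudo systemctl start performance-asoranking.service
```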
[14:12:56] (CR) Ottomata: [V: +2 C: +2] Add bin/gobblin wrapper and initial gobblin/ common properties files [analytics/refinery] - https://gerrit.wikimedia.org/r/701463 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[14:14:58] (PS1) Ottomata: Add gobblin-wmf.jar symlink to versioned gobblin jar [analytics/refinery] - https://gerrit.wikimedia.org/r/702666 (https://phabricator.wikimedia.org/T271232)
[14:15:54] (CR) Ottomata: [V: +2 C: +2] Add gobblin-wmf.jar symlink to versioned gobblin jar [analytics/refinery] - https://gerrit.wikimedia.org/r/702666 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[14:16:46] joal: is it possible to make gobblin includes relative?
[14:17:04] would be nice to not have to hardcode the /srv/deployment path
[14:17:17] or I suppose we could make a var that could be substituted on the CLI
[14:18:43] ottomata: I have not tested relative imports - I don't know how they'll be taken, relative to the inclusion file, or relative to the running file
[14:18:49] ottomata: testing now
[14:20:05] (PS1) Mforns: Add airflow DAG for anomaly detection (POC) [analytics/refinery] - https://gerrit.wikimedia.org/r/702668 (https://phabricator.wikimedia.org/T285692)
[14:20:49] ottomata: relative includes don't work
[14:22:33] ottomata: the common includes are hard-coded as well ...
[14:29:29] (CR) Nettrom: [C: +1] "Looks good to me!" [schemas/event/secondary] - https://gerrit.wikimedia.org/r/702472 (https://phabricator.wikimedia.org/T272664) (owner: MewOphaswongse)
[14:31:27] ottomata: given we wish to move the jobs to airflow, do we keep the includes hard-coded for now (they are for sysconfig files in the common folder already)
[14:31:30] ?
[14:35:41] yeah
[14:35:44] it's fine
[14:36:01] will just be annoying if you want to develop a little
[14:36:15] it's nice how with oozie we can copy all of refinery and just change refinery.path or whatever
[14:36:18] but yeah fine for now
[14:45:51] Analytics, observability: Need a list of AQS Kibana dashboards and searches - https://phabricator.wikimedia.org/T285318 (colewhite) Open→Resolved The migration has been deployed. Thanks, everyone!
[14:49:38] Hi team, good morning
[14:49:45] mornin!
[14:50:27] I'm going to start draining the yarn tasks
[14:50:43] btullis: yt? I can hop on a video and explain what I'm doing as it happens
[14:52:14] Yes, definitely.
[14:52:55] ok cool, I'll be in the batcave in a couple minutes
[14:53:07] 👍
[14:53:39] The relevant task is https://phabricator.wikimedia.org/T278423
[14:54:40] razzi: o/ can you quickly write down the procedure in the task before starting?
[14:54:47] so we can follow along
[14:58:52] (and also not forget steps like changing uids/gids etc..)
[15:02:45] elukey: sounds good, I'll comment. I'm going to stop timers on an-launcher now though, since that takes a little while to apply
[15:03:40] It's basically a mirror of https://phabricator.wikimedia.org/T278423#7094641, substituting an-master1001 for an-master1002
[15:04:17] there is 1 thing that is different: there is one more service on an-master1001, I forget the name, journal something?
[15:05:28] the mapreduce history server
[15:05:46] Hadoop historyserver
[15:05:47] yep
[15:06:21] I have no shame to ask a dumb question... what is it?
[15:06:30] and how do I stop it :)
[15:06:44] there is no dumb question :)
[15:07:35] it stores the status of finished jobs, plus other stats
[15:07:47] we have it only on an-master1001, not really critical
[15:07:57] you can stop it with systemctl stop etc..
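A sketch of that stop step, assuming the Bigtop-style unit name used on WMF Hadoop nodes (if the name differs, `systemctl list-units 'hadoop*'` will show it):

```
# On an-master1001: stop the MapReduce history server before the reimage.
sudo systemctl stop hadoop-mapreduce-historyserver.service

# Confirm nothing is still listening on its web UI port (19888 by default).
sudo ss -tlnp | grep 19888 || echo "history server stopped"
```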
[15:08:15] ok cool
[15:08:21] remember also that yarn.wikimedia.org will not be available during the an-master1001 reimage
[15:08:30] hmmm ok yeah
[15:08:30] since the httpd config points to an-master1001
[15:08:42] can we do ssh tunneling on an-master1002?
[15:08:45] also cli
[15:08:49] I'm in the batcave now for anybody who wants to follow along
[15:08:55] yes you can do anything
[15:09:38] all the actions will be !logged here of course
[15:09:39] but it is always good to have a procedure written down with commands ready to avoid mistakes etc..
[15:09:44] yep yep
[15:09:45] Oh right, I joined the Meet link in the calendar. Where do I find the batcave? (Apart from under Wayne Manor)
[15:09:57] xD
[15:10:04] https://meet.google.com/rxb-bjxn-nip
[15:10:18] I was wondering if anybody explained that all the recurring meetings have the same meeting id, and it is the batcave
[15:10:45] and we generally use it to hang out and discuss
[15:11:20] there's also the tardis: https://meet.google.com/kti-iybt-ekv
[15:24:28] Analytics-Clusters, Analytics-Kanban, Patch-For-Review, User-razzi: Upgrade the Hadoop masters to Debian Buster - https://phabricator.wikimedia.org/T278423 (razzi) ### Reimaging plan for an-master1001 Prepare cluster for maintenance (drain cluster, safe mode, snapshot, backup): - Disable puppet o...
[15:24:45] Ok! reimaging plan is posted
[15:25:08] elukey: feel free to hang out in the batcave with ben and me if you want to do a quick review of the steps
[15:25:29] otherwise, I'll wait a few minutes, review it myself, and start with the an-launcher1002 timer stops
[15:27:22] razzi: nono please proceed
[15:27:33] just announce in here what steps you are doing so we can sync
[15:27:48] Ok cool
[15:27:53] also, remember to always check metrics and general health before starting
[15:28:07] What I'll do this time is say the thing I'm /about/ to do, wait a moment for feedback, then do it, then !log it
[15:28:08] hadoop dashboard, icinga for alarms, etc..
[15:28:12] good call
[15:28:34] this is true for every maintenance, so we are sure that nothing was outstanding before
[15:29:19] Icinga/Hive Server JVM Heap usage still going
[15:29:31] razzi: also one nit - before running the reimage script, it is good to check if yarn and hdfs are active on an-master1002
[15:29:44] and that nothing is on fire (again hadoop dashboard etc..)
[15:29:52] I had this idea to do the an-coord service restarts while we're in maintenance mode, save me the trouble of creating a patch to update the cname, what do you think elukey?
[15:30:03] It's a little added complexity, but it will save us some effort later
[15:30:09] razzi: yes yes it makes sense
[15:30:19] did you see my notes about the hive server heap usage?
[15:30:32] I saw you mention it somewhere, but I'm not sure what exactly is the deal
[15:30:40] it should be better with a jvm restart though, I remember that
[15:30:56] TIL about alerts.wikimedia.org - hadn't seen mention of it before.
[15:31:53] razzi: yep but there is a warning that has been outstanding for 8 days, nothing on fire but icinga needs to be checked more often :)
[15:32:26] I've seen it, I guess I should acknowledge it? Or are you saying I should have done something about it by now :)
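A sketch of the two checks and workarounds mentioned above — confirming which master is active before the reimage, and tunnelling to the YARN web UI while yarn.wikimedia.org points at the host being reimaged. The HA service ids and port follow common Hadoop/WMF conventions and are assumptions here:

```
# Confirm HDFS and YARN are active on an-master1002 before touching 1001
# (service ids assumed to mirror the hostnames in the HA config).
sudo -u hdfs kerberos-run-command hdfs hdfs haadmin -getServiceState an-master1002-eqiad-wmnet
sudo -u yarn kerberos-run-command yarn yarn rmadmin -getServiceState an-master1002-eqiad-wmnet

# From a workstation: reach the ResourceManager UI on an-master1002 directly
# (8088 is the default RM web UI port).
ssh -N -L 8088:localhost:8088 an-master1002.eqiad.wmnet
# ...then browse http://localhost:8088/cluster
```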
[15:32:42] the heap usage is around 90%, that is fine but if we get an extra set of requests to the server (out of the ordinary) it may OOM
[15:33:04] a restart will surely clear out stuff, and give us some room to work on it, but there may be the need to bump the heap size
[15:33:13] cool cool
[15:33:25] some metrics are strange (like neverending growth of MetaSpace, etc..)
[15:33:53] Alerts are looking fine, only the known heap warning
[15:34:12] Metrics: https://grafana.wikimedia.org/d/000000585/hadoop
[15:35:06] oh wow!
[15:35:10] as btullis pointed out...
[15:35:13] it's the first of the month
[15:35:20] so sqoop is going to be running :X
[15:35:28] maybe we should reschedule maintenance?
[15:38:15] this is a good point
[15:40:28] kudos to btullis !
[15:41:13] sqoop is currently not using any hadoop job, it is pulling from the dbs, and not using hive
[15:41:47] I see a 'sqoop-mediawiki-monthly-2021-06-commonswiki.templatelinks' on https://yarn.wikimedia.org/cluster/apps/RUNNING
[15:42:36] perfect so nevermind
[15:42:39] :)
[15:43:05] we could, in theory, proceed without stopping jobs
[15:43:13] but I would advise against it
[15:43:15] we could, but that's a different plan
[15:43:15] yeah
[15:43:35] how do you feel elukey about doing the maintenance on a friday? but then again sqoop might still be running
[15:43:49] we can always just push to the week after next
[15:43:51] yes I'd say that we need to postpone after holidays
[15:43:55] ok :)
[15:44:15] if you want you can use this time to restart hive doing the dns failover
[15:44:42] yeah, why not
[15:44:55] elukey: want to join bc and discuss this?
[15:46:01] sure
[15:47:55] Happy July to the team, especially to those of us like me that didn't realize :)
[16:00:17] happy canada day!
[16:01:09] milimetric: standup?
[16:06:55] Analytics-Clusters, Analytics-Kanban, Patch-For-Review, User-razzi: Upgrade the Hadoop masters to Debian Buster - https://phabricator.wikimedia.org/T278423 (razzi) Happy July! Sqoop is running so maintenance is rescheduled to the week after next.
[16:13:13] elukey: going to merge the dns change
[16:20:24] Following steps at https://wikitech.wikimedia.org/wiki/DNS#Changing_records_in_a_zonefile to apply dns change
[16:23:07] sure
[16:23:21] please don't restart gdnsd or similar
[16:25:27] I added some notes to the wikipage
[16:26:37] razzi: remember to use tmux etc..
[16:37:01] Analytics, Analytics-Kanban, Product-Analytics: Investigate Hive & Hadoop permissions for users in same group - https://phabricator.wikimedia.org/T285503 (Ottomata) p: Triage→Medium
[16:38:19] !log sudo authdns-update on ns0.wikimedia.org to apply https://gerrit.wikimedia.org/r/c/operations/dns/+/702689
[16:38:22] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[16:38:26] Analytics: Sqoop image metadata - https://phabricator.wikimedia.org/T285783 (Ottomata) p: Triage→Low
[16:39:32] > OK - authdns-update successful on all nodes!
[16:40:04] Analytics: Sqoop image metadata - https://phabricator.wikimedia.org/T285783 (Ottomata) Blocked until T275268 is done and all old data is moved out of the image table too.
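On the Hive Server heap concern above, a rough sketch of how the pressure might be inspected before choosing between a plain restart and a heap-size bump — jstat is standard JDK tooling, while the pgrep pattern is an assumption:

```
# On an-coord1001: find the HiveServer2 JVM and sample GC utilisation every 5s.
HS2_PID="$(pgrep -f 'org.apache.hive.service.server.HiveServer2')"
sudo jstat -gcutil "${HS2_PID}" 5000

# O (old gen %) staying near 100 after full GCs suggests the heap really is
# too small; a steadily growing M (metaspace) column matches the
# "neverending growth of MetaSpace" noted above.
```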
[16:52:05] razzi: dig analytics-hive.eqiad.wmnet @ns1.wikimedia.org confirms it was good
[16:52:28] so now tail the hive server logs for a bit, in 5/10 mins you should be ok
[16:52:52] and then revert when 1001 is up and running fine
[16:53:11] (also don't forget to restart daemons on an-coord1002)
[16:56:25] going out for a run, will re-check later if I am needed :)
[17:00:41] sounds good, thanks elukey
[17:17:25] Analytics-Radar, Data-Services, cloud-services-team (Kanban): Mitigate breaking changes from the new Wiki Replicas architecture - https://phabricator.wikimedia.org/T280152 (Jhernandez)
[17:17:41] (PS1) Ottomata: Rematerialize fragment schemas with generated examples. [schemas/event/secondary] - https://gerrit.wikimedia.org/r/702700 (https://phabricator.wikimedia.org/T270134)
[17:18:11] Analytics-Radar, Data-Services, cloud-services-team (Kanban): Mitigate breaking changes from the new Wiki Replicas architecture - https://phabricator.wikimedia.org/T280152 (Jhernandez)
[17:34:01] Analytics-Clusters: ROCm can't find clang on stat1005 - https://phabricator.wikimedia.org/T285495 (EBernhardson) Open→Resolved Looks to be all good, thanks!
[17:47:09] ok hive server logs seem quiet on an-coord1001
[17:53:47] going to restart java services on an-coord1001
[18:15:29] !log sudo systemctl restart oozie on an-coord1001 for https://phabricator.wikimedia.org/T283067
[18:15:33] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:16:23] !log sudo systemctl restart hive-server2.service
[18:16:26] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:16:37] !log razzi@an-coord1001:~$ sudo systemctl restart hive-metastore.service
[18:16:40] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:17:21] !log razzi@an-coord1001:~$ sudo systemctl restart presto-server.service
[18:17:24] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:17:28] Experimenting with this new log format
[18:17:37] it's easy for me, I just copy the whole line, and it shows the host :)
[18:18:38] Ok, fuse is also holding onto some old jars on an-coord1001, going to umount and mount
[18:18:56] !log razzi@an-coord1001:~$ sudo umount /mnt/hdfs
[18:18:58] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:19:04] !log razzi@an-coord1001:~$ sudo mount -a
[18:19:06] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:19:29] no jars in `sudo lsof -Xd DEL`: success!
[18:19:44] Now going to dns switchover 1001 back to active, and do the same on an-coord1002
[18:22:07] PROBLEM - Hive Server on an-coord1001 is CRITICAL: PROCS CRITICAL: 0 processes with command name java, args org.apache.hive.service.server.HiveServer2 https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Hive
[18:24:01] RECOVERY - Hive Server on an-coord1001 is OK: PROCS OK: 1 process with command name java, args org.apache.hive.service.server.HiveServer2 https://wikitech.wikimedia.org/wiki/Analytics/Systems/Cluster/Hive
[18:29:48] ottomata: is now a good time to deploy gobblin on the test cluster or not really?
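Pulling the verification steps used around the failover above into one sketch — the record name and nameservers come from the log, the grep pattern is just illustrative:

```
# Confirm both authoritative servers serve the updated analytics-hive record.
dig +short analytics-hive.eqiad.wmnet CNAME @ns0.wikimedia.org
dig +short analytics-hive.eqiad.wmnet CNAME @ns1.wikimedia.org

# After restarting daemons and remounting /mnt/hdfs, confirm no process still
# maps deleted (stale) jars: lsof marks those with DEL.
sudo lsof -Xd DEL | grep -i '\.jar' || echo "no stale jars held open"
```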
[18:31:26] (CR) Joal: "thanks for that - I completely missed it" [analytics/refinery] - https://gerrit.wikimedia.org/r/702666 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[18:47:51] joal: let's do it
[18:48:15] i guess we merge this one too
[18:48:15] https://gerrit.wikimedia.org/r/c/analytics/refinery/+/702431
[18:48:26] (CR) Ottomata: [V: +2 C: +2] Add webrequest_test gobblin job [analytics/refinery] - https://gerrit.wikimedia.org/r/702431 (https://phabricator.wikimedia.org/T271232) (owner: Joal)
[18:51:51] ok ready to go ottomata
[18:52:02] joal: deploying now
[18:52:06] to test
[18:52:15] ottomata: ack - I was about to do it :)
[18:52:48] ottomata: dumb question - are you deploying to test only using the scap env function?
[18:54:12] yes
[18:54:33] joal: https://wikitech.wikimedia.org/wiki/Server_Admin_Log#2021-07-01
[18:54:42] just did scap deploy -e hadoop-test
[18:56:07] joal: done
[18:56:19] ack - I guess now is manual test?
[18:56:20] !log razzi@authdns1001:~$ sudo authdns-update
[18:56:22] shall I do that?
[18:56:24] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[18:56:29] joal: ya i think so!
[18:56:40] maybe you can use your own .pull file but reference the refinery sysconfig files
[18:56:42] ah ottomata - we need to create the /wmf/gobblin folder with correct perms
[18:56:48] oh in hdfs
[18:56:51] correct
[18:56:57] ...what are correct perms? i guess analytics?
[18:56:59] I did that in prod
[18:57:03] yessir
[18:57:05] joal you can do in test too
[18:57:06] i think
[18:57:09] from an-test-coord
[18:57:11] should be the same as an-launcher
[18:57:16] an-test-coord1001
[18:58:06] yup, will do - just mentioning :)
[18:59:45] Folders created, starting job
[19:01:39] coOoOL
[19:01:47] joal: with bin/gobblin wrapper?
[19:01:56] ottomata: yessir - with prod job and all
[19:02:07] and with the webrequest_test.pull file?
[19:02:14] yup
[19:02:21] coo.
[19:02:22] ok
[19:02:36] ottomata: will do dry run first, control, then run
[19:06:40] ottomata: job successful, logs look good, gobblin files in place, no data yet
[19:06:58] ottomata: launching a job with data in a few minutes
[19:07:08] ottomata: I think we're ready for a puppet timer :)
[19:07:36] nice!
[19:07:44] joal: let's do that monday?
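For reference, the /wmf/gobblin folder creation discussed above ("should be the same as an-launcher") might look like the following sketch, run from an-test-coord1001 — the analytics owner/group and mode are assumptions based on that remark:

```
# Create the gobblin working folder in HDFS on the test cluster.
sudo -u hdfs kerberos-run-command hdfs hdfs dfs -mkdir -p /wmf/gobblin
sudo -u hdfs kerberos-run-command hdfs hdfs dfs -chown analytics:analytics /wmf/gobblin
sudo -u hdfs kerberos-run-command hdfs hdfs dfs -chmod 755 /wmf/gobblin
```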
[19:07:45] (PS2) Joal: [WIP] Update to spark-3 and scala-2.12 [analytics/refinery/source] - https://gerrit.wikimedia.org/r/656897
[19:07:51] ottomata: sure!
[19:07:53] ok
[19:08:51] ottomata: plan for monday: finalize move for test (timer for 3 hours, then stop, move data, recreate/repair hive table, change timer, start)
[19:09:38] ottomata: for reference, the command I ran: sudo -u analytics PYTHONPATH=/srv/deployment/analytics/refinery/python:$PYTHONPATH kerberos-run-command analytics /srv/deployment/analytics/refinery/bin/gobblin --sysconfig /srv/deployment/analytics/refinery/gobblin/common/analytics-test-hadoop.sysconfig.properties /srv/deployment/analytics/refinery/gobblin/jobs/webrequest_test.pull
[19:10:54] Analytics, Event-Platform, Product-Analytics, Product-Data-Infrastructure: Run CI tests on analytics/legacy event schemas in schemas/event/secondary - https://phabricator.wikimedia.org/T285975 (Ottomata)
[19:11:13] Analytics, Analytics-Kanban, Event-Platform, Product-Analytics, Product-Data-Infrastructure: Run CI tests on analytics/legacy event schemas in schemas/event/secondary - https://phabricator.wikimedia.org/T285975 (Ottomata) p: Triage→High
[19:11:28] sounds good joal
[19:11:32] hmmm
[19:11:32] maybe
[19:11:38] we want to run the job manually to do the first imports
[19:11:40] for a few hours
[19:11:42] before we run the timer?
[19:11:54] oh hm
[19:12:03] i guess none of the config is in puppet
[19:12:06] so it doesn't really matter
[19:12:06] ottomata: feasible, but I dislike being a timer myself :)
[19:12:17] how does it know when to start importing?
[19:12:19] it just does latest?
[19:12:35] since you ran the job now... does that mean when we run on monday it will import everything over the weekend?
[19:12:52] ottomata: we need the first job to happen without a store, so that it starts at latest from that point
[19:13:10] I'll drop the state_store before starting to import
[19:13:12] ok
[19:13:13] cool
[19:13:25] alright let's do timer monday my morn then
[19:14:23] ottomata: job with data import successfully ran
[19:14:34] all good - have a good weekend folks :)
[19:14:49] nice!
[19:14:50] laters joal!
[19:14:56] joal: actually, on monday
[19:14:58] if you work before I do
[19:15:02] yup?
[19:15:03] you could run the job manually a few times to get a few hours
[19:15:13] yah - doable
[19:15:14] and then we could just do the timer + migration when I start
[19:15:20] (PS1) Ottomata: Use latest version of jsonschema-tools and run tests on analytics/legacy schemas [schemas/event/secondary] - https://gerrit.wikimedia.org/r/702736 (https://phabricator.wikimedia.org/T285975)
[19:15:21] ok works for me
[19:15:27] I'll try to remember
[19:15:28] gr8
[19:15:33] gr8, laters!!!! :)
[19:16:18] (CR) jerkins-bot: [V: -1] Use latest version of jsonschema-tools and run tests on analytics/legacy schemas [schemas/event/secondary] - https://gerrit.wikimedia.org/r/702736 (https://phabricator.wikimedia.org/T285975) (owner: Ottomata)
[19:19:18] (PS2) Ottomata: Use latest version of jsonschema-tools and run tests on analytics/legacy schemas [schemas/event/secondary] - https://gerrit.wikimedia.org/r/702736 (https://phabricator.wikimedia.org/T285975)
[19:33:57] btullis: feel free to merge that icinga patch of yours
[19:34:03] once you get a +1 from someone you can self merge
[19:34:15] (and sometimes we self merge for small things anyway)
[19:37:08] OK, thanks. I'm going to have to go through Gerrit in more detail at some point, but it's not urgent. It just gave me the option to Submit instead of Merge, so I went for that. :-)
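A sketch of the "drop the state_store, then the first run starts at latest" step described above — the exact state-store path under /wmf/gobblin is an assumption here, derived from the job name:

```
# On an-test-coord1001: remove any prior state so the next run has no
# committed offsets and begins from the latest Kafka positions.
sudo -u analytics kerberos-run-command analytics \
    hdfs dfs -rm -r -f /wmf/gobblin/state_store/webrequest_test

# Then launch the job with the exact command quoted at 19:09:38 above;
# subsequent runs resume from the offsets this first run commits.
```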
[19:38:52] ya actually I don't really know the diff, i think it usually says submit, but not all repos are configured the same
[20:16:14] (PS3) Joal: [WIP] Update to spark-3 and scala-2.12 [analytics/refinery/source] - https://gerrit.wikimedia.org/r/656897
[20:16:43] I think we can think of moving to spark3 :)
[20:16:54] And with that, off for today!
[21:27:29] byee teamm, see you tomorrow :]