[07:39:33] Analytics-Radar, WMDE-Templates-FocusArea, Patch-For-Review, WMDE-TechWish (Sprint-2021-02-03), and 2 others: Add missing normalization to CodeMirror Grafana board - https://phabricator.wikimedia.org/T273748 (WMDE-Fisch)
[07:46:51] Analytics-Radar, WMDE-Templates-FocusArea, WMDE-TechWish-Sprint-2021-07-07: Backfill metrics for TemplateWizard and VisualEditor - https://phabricator.wikimedia.org/T274988 (WMDE-Fisch)
[12:48:04] (PS1) Joal: Add permissions settings for gobblin [analytics/refinery] - https://gerrit.wikimedia.org/r/703594 (https://phabricator.wikimedia.org/T271232)
[12:48:18] ottomata: for when you're up --^
[12:48:44] joal just getting on letsss seeeee
[12:48:46] hello!
[12:48:52] Oh good morning :)
[12:49:03] ah nice!
[12:49:13] (CR) Ottomata: [V: +2 C: +2] Add permissions settings for gobblin [analytics/refinery] - https://gerrit.wikimedia.org/r/703594 (https://phabricator.wikimedia.org/T271232) (owner: Joal)
[12:49:30] how's webrequest and netflow lookin?
[12:50:21] No alert, no check :)
[12:53:22] still doing emails... but should we prep to do the migration? :)
[12:53:41] let's do it ottomata
[12:53:51] ottomata: the only thing we have not tested is refine over new files
[12:56:31] joal: eh?
[12:56:36] doesn't that happen in the test cluster?
[12:56:40] oh
[12:56:46] no? what do you mean?
[12:56:49] ottomata: we have only done webrequest in the test cluster
[12:57:35] ottomata: for webrequest no problem in migrating, all testing has been done
[12:57:58] ottomata: for netflow, jobs after ingestion are done by refine
[12:58:01] ahhh
[12:58:02] right
[12:58:02] and this we have not tested
[12:58:03] hm
[12:58:24] We can migrate webrequest and wait for netflow if you wish
[12:59:44] let's do webrequest first and get it all settled
[12:59:53] ack
[13:01:10] (PS1) Joal: Make gobblin-webrequest use production folder [analytics/refinery] - https://gerrit.wikimedia.org/r/703597 (https://phabricator.wikimedia.org/T271232)
[13:01:15] ottomata: --^
[13:01:28] ottomata: I'll let you stop gobblin and camus, and I'll start moving data
[13:01:53] ok
[13:02:51] ok camus stopped
[13:03:11] gobblin stopped
[13:03:12] puppet stopped
[13:03:28] !log disabled camus-webrequest and gobblin-webrequest timer on an-launcher1002 in prep for migration
[13:03:32] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:03:47] (CR) Ottomata: [V: +2 C: +2] Make gobblin-webrequest use production folder [analytics/refinery] - https://gerrit.wikimedia.org/r/703597 (https://phabricator.wikimedia.org/T271232) (owner: Joal)
[13:04:27] shall I deploy?
[13:04:32] i think i will --limit an-launcher1002
[13:05:41] joal: ^ ?
[13:07:53] ottomata: let me ponder that :)
[13:08:27] ottomata: ok - we should also do it for test, but ok for a limit to an-launcher1002 for prod
[13:08:47] oh because perms
[13:08:50] right ok
[13:09:05] !log Move data for webrequest camus-gobblin migration
[13:09:08] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:09:21] deploying launcher now so we can proceed with that
[13:09:23] will do test after
[13:09:26] ack
[13:12:44] !log deploying refinery to an-launcher1002 for webrequest gobblin migration
[13:12:50] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:13:02] deploying to hdfs
[13:14:47] ottomata: I'm fixing perms on dirs and files in the meantime
[13:14:50] ok
[13:19:25] deploying to test
[13:20:41] ottomata: something we will need to change I think is the deletion script for raw webrequest data - the pattern of the folders has changed
[13:25:10] oh hm
[13:25:12] right
[13:25:13] oh
[13:25:30] lemme look at the test cluster
[13:25:31] for that
[13:26:09] ottomata: I assume error emails for the test cluster are sent to Luca, and we have not seen problems
[13:27:11] huh no it's still analytics-alerts
[13:27:21] ok - weird then
[13:27:22] the data is just still too new to drop
[13:27:27] maybe the script fails silently
[13:27:27] and/or it wouldn't find any to drop
[13:27:57] ottomata: I think the script doesn't find data because the patterns don't work (and the data is too new too)
[13:28:01] making a patch
[13:28:08] ottomata: we should create a fake folder to drop
[13:28:15] k good idea
[13:29:21] ottomata: ready to kill-restart jobs
[13:29:45] proceed!
[13:29:53] ottomata: ok
[13:30:04] !log kill-restart webrequest using gobblin data
[13:30:06] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[13:30:32] joal i will merge the patch to remove the camus job
[13:30:49] oh have to amend, not netflow yet
[13:32:39] sorry for the mess with netflow ottomata :S
[13:32:44] np no mess!
[13:32:59] It could have been simpler if I had tested
[13:34:13] anyhow - first run of webrequest started - let's see (I started to move data while the previous refine job had not finished, so I had to restart 1h earlier than the latest hour - good test)
[13:34:56] joal ok, should i wait before running puppet to reenable the gobblin job?
[13:35:34] nope, please go :)
[13:36:04] hmm joal what should I do with the refinery-drop-webrequest-sequence-stats-partitions job?
[13:36:07] just remove it?
[13:36:22] why?
[13:36:28] we still have that data
[13:36:35] hmmm ok nm
[13:36:40] right right that is part of the hive refine
[13:37:08] i'm looking at puppet and wondering if we should keep data purge job declarations next to the job declarations that create the data
[13:37:27] or, even declare them together in a define wrapper
[13:37:28] ok, that's a refactor for another time
[13:37:29] ottomata: it all should go in airflow :)
[13:37:32] indeed
[13:48:47] joal
[13:48:47] https://gerrit.wikimedia.org/r/c/operations/puppet/+/703603
[13:50:55] reading
[13:51:46] all good ottomata I think
[13:53:02] ottomata: first gobblin run of webrequest succeeded - perms are corrected :)
[13:53:05] awesome!
[13:53:13] ok i'll merge that and let's do some test dir deletions as you say
[13:53:19] And refine succeeded too!
[13:53:26] webrequest-refine sorry
[13:53:32] ok let's test netflow-refine now
[13:53:49] sure ottomata (about deletion)
[13:53:58] niiice
[13:53:59] ok
[13:54:13] joal we tested some event refines, right?
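The deletion-script concern above comes down to a layout change: Camus wrote plain date folders, while Gobblin writes Hive-style partition directories, so a path pattern written for one misses the other. A minimal sketch of that mismatch — the real drop script lives in analytics/refinery and is Python, so this Scala snippet (with the Camus-style path taken from joal's example later in the log) is purely illustrative:

```scala
// Illustrative only: the real drop script is Python in analytics/refinery.
// Old Camus-style layout: webrequest_text/2021/07/07/00
// New Gobblin layout:     webrequest_text/year=2021/month=05/day=01/hour=00
val camusLayout   = """.*/webrequest_\w+/\d{4}/\d{2}/\d{2}/\d{2}$""".r
val gobblinLayout = """.*/webrequest_\w+/year=\d+/month=\d+/day=\d+/hour=\d+$""".r

val newPath = "/wmf/data/raw/webrequest/webrequest_text/year=2021/month=05/day=01/hour=00"

// The old pattern no longer matches anything, so the script silently finds
// nothing to drop; the updated pattern does match the new layout.
assert(camusLayout.findFirstIn(newPath).isEmpty)
assert(gobblinLayout.findFirstIn(newPath).isDefined)
```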
[13:54:26] ottomata: I actually don't think so!
[13:54:30] oh
[13:54:33] hehehh ok then let's test
[13:54:33] maybe you have ottomata? cause I have not
[13:54:36] no i have not
[13:54:40] ok let's do it :)
[13:54:48] ok so globbin imported netflow data is there ya?
[13:54:58] globbin haha
[13:55:01] ottomata: you said it should all work, I believed you - testing is nonetheless a good idea :)
[13:55:19] I always say it should all work, I am an optimist :)
[13:55:27] hehe :)
[13:55:45] joal i think i can run a test netflow refine on that data
[13:55:55] works for me ottomata
[13:56:53] ottomata: I'm gonna add a fake folder in test for deletion, and check permissions
[13:57:18] great
[13:57:19] ty
[13:57:30] the purge job is updated there
[13:57:32] ottomata: I assume I should wait for puppet to have run, right?
[13:57:34] and on an-launcher
[13:57:35] already done
[13:57:43] ack
[13:57:52] on an-test-coord as well?
[14:04:44] ottomata: created /wmf/data/raw/webrequest/webrequest_text/year=2021/month=05/day=01/hour=00
[14:04:49] on prod cluster
[14:04:55] joal sorry yes on an-test-coord too
[14:05:59] same on test cluster ottomata
[14:06:07] ottomata: let's wait and see if those get deleted :)
[14:06:44] joal let's force a run of the job
[14:06:47] i think it only runs once a day?
[14:07:28] ok ottomata
[14:07:30] let's force
[14:07:42] let's do test cluster first?
[14:07:45] joal
[14:07:48] yes?
[14:07:48] Jul 07 14:07:19 an-test-coord1001 kerberos-run-command[28795]: 2021-07-07T14:07:19 INFO Starting EXECUTION.
[14:07:48] Jul 07 14:07:31 an-test-coord1001 kerberos-run-command[28795]: 2021-07-07T14:07:31 INFO No Hive partitions dropped for table wmf_raw.webrequest.
[14:07:48] Jul 07 14:07:36 an-test-coord1001 kerberos-run-command[28795]: 2021-07-07T14:07:36 INFO Removing 1 directories.
[14:07:51] in test cluster ^
[14:07:53] looks good!
[14:07:59] \o/
[14:08:03] checking directories
[14:09:31] ottomata: folder not deleted :(
[14:09:36] hm
[14:09:52] same output on prod
[14:10:00] oh
[14:10:00] Jul 07 14:07:38 an-test-coord1001 kerberos-run-command[28795]: ('Command: hdfs dfs -rm -R -skipTrash /wmf/data/raw/webrequest/webrequest_text/year=2021/month=05 failed with error code: 1', b'', b'rm: Permission denied: user=analytics, access=WRITE, inode="/wmf/data/raw/webrequest/webrequest_text/year=2021":hdfs:analytics-privatedata-users:drwxr-x---\n')
[14:10:06] ottomata: there was no file in there, nor hive partition - shall I add a fake file?
[14:10:15] MEH
[14:10:19] ok i get it
[14:10:26] prob cause you manually created it as hdfs?
[14:10:40] yeah - but I changed ownership I think
[14:10:45] i don't see that error on an-launcher
[14:11:07] ottomata: test again, I understand
[14:11:48] ottomata: the root folder was not created (different partitions for source), and I changed ownership on the leaves only
[14:13:33] ok cool
[14:15:35] interesting joal about the padded date values
[14:15:45] i think hive and spark don't pad when they create those partitions
[14:15:56] ottomata: correct
[14:16:05] i kinda like padded better i think
[14:19:18] ottomata: We could have left it padded I think, but we would have needed to tell hive to use String instead of Int as the partition type
[14:19:30] hmmmmm
[14:19:34] i wonder if consistency is better?
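For what "padded" amounts to here, a minimal java.time sketch of the two directory spellings under discussion; Gobblin's actual output path comes from its job properties, so these pattern strings are illustrative rather than the real config:

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// Padded (the Camus/Gobblin-so-far spelling) vs. unpadded (what Hive and
// Spark emit when they create partitions themselves).
val dt = LocalDateTime.of(2021, 7, 7, 9, 0)

val padded   = DateTimeFormatter.ofPattern("'year='yyyy'/month='MM'/day='dd'/hour='HH")
val unpadded = DateTimeFormatter.ofPattern("'year='yyyy'/month='M'/day='d'/hour='H")

println(padded.format(dt))   // year=2021/month=07/day=07/hour=09
println(unpadded.format(dt)) // year=2021/month=7/day=7/hour=9
```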
[14:20:06] i think it's not too late to change if we should, the refine stuff should be fine (not sure if oozie webrequest refine needs changing)
[14:21:51] ottomata: refine is fine I think too
[14:22:28] re string types: i think we make partition column types strings by default usually
[14:22:34] at least, the refine spark stuff does
[14:22:36] ottomata: the webrequest-raw partitions are created using 'LOCATION' so it works when using the int value
[14:22:40] otherwise it's hard to automate
[14:22:54] ottomata: IIRC year/month/day/hour are int
[14:27:56] ottomata: how is refine going for netflow?
[14:29:05] took me a while to figure out the new opts
[14:29:13] forgot that there is a defaulted one for datetime format i had to change
[14:29:18] got that going now, running it and getting an error
[14:29:22] troubleshooting now
[14:31:25] super ottomata - let me know if I can help
[14:31:59] joal what do you think about changing gobblin to not pad, just for consistency?
[14:32:04] seems like the right thing to do
[14:32:14] , unless we want to go the other way and make everything else pad
[14:32:22] I'm not even sure how I can do that - will read up on java date formats
[14:36:40] hmm ok so RefineTarget is not properly reading json data automatically
[14:36:44] maybe it doesn't know how to do gzip??
[14:37:12] ottomata: gzip shouldn't be an issue - except if you open the file manually to check content
[14:37:29] hm yeah it's just reading the df as a string, investigating
[14:38:01] ottomata: about padding - another solution is to make the folders (almost) exactly as they were with camus: webrequest_text/2021/07/07/00
[14:38:39] mostly just want consistency
[14:38:46] prefer with partition keys
[14:38:57] there's nothing wrong with the padded values atm
[14:39:04] just not consistent with other date partition values
[14:39:16] yup
[14:41:37] joal RefineTarget does not (yet!) work with gzip
[14:42:42] hmmm yes maybe it will... need to just tell it to use json explicitly
[14:42:42] hm
[14:44:57] ottomata: I think we can tell gobblin not to use gzip
[14:45:08] no i think gzip is probably good
[14:45:09] it should work
[14:45:14] looking into it
[14:45:25] ack ottomata
[14:45:33] ottomata: about padding, the change is simple
[14:45:51] ottomata: if we change it now, we'll need to change oozie back as well :)
[14:46:07] i think we should do it!
[14:46:08] !
[14:46:14] now is better than never! :)
[14:46:19] ok - doing test now
[14:49:39] (PS1) Joal: Update gobblin to import into non-padded time folders [analytics/refinery] - https://gerrit.wikimedia.org/r/703610 (https://phabricator.wikimedia.org/T231272)
[14:56:33] oh so easy
[14:56:37] my apologies, I got kicked offline and didn't realize it
[14:56:39] (CR) Ottomata: [C: +1] Update gobblin to import into non-padded time folders [analytics/refinery] - https://gerrit.wikimedia.org/r/703610 (https://phabricator.wikimedia.org/T231272) (owner: Joal)
[14:56:54] I'm around, doing some cleaning up and email checking
[14:57:30] and keeping an eye on things, so no worries, I can do my ops week
[14:57:52] (PS1) Joal: Update webrequest-raw to use non-padded time folders [analytics/refinery] - https://gerrit.wikimedia.org/r/703611 (https://phabricator.wikimedia.org/T231272)
[14:58:06] ottomata: those 2 changes are the only ones needed I think (for webrequest_)
[14:58:13] hi milimetric :)
[14:58:23] hiya
[15:01:17] ottomata: kids are around now - Would it be ok for you if we apply the move for webrequest later in the day?
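joal's point about 'LOCATION' above: when the webrequest-raw job registers each partition with an explicit location, Hive's int-typed year/month/day/hour columns are decoupled from how the directory names happen to be spelled, padded or not. A hedged sketch of that kind of statement, spark-shell style where `spark` is the SparkSession; the exact partition spec, including `webrequest_source`, is an assumption, not taken from this log:

```scala
// Hedged sketch, not the actual oozie job: adding a raw partition with an
// explicit LOCATION. Because the directory is named explicitly, the int
// partition values (month=7) work fine with unpadded folder names.
spark.sql("""
  ALTER TABLE wmf_raw.webrequest
  ADD IF NOT EXISTS PARTITION (webrequest_source='text', year=2021, month=7, day=7, hour=9)
  LOCATION '/wmf/data/raw/webrequest/webrequest_text/year=2021/month=7/day=7/hour=9'
""")
```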
[15:01:47] oh joal i thought we already did that?
[15:02:03] ottomata: the change of partition padding I mean :)
[15:02:05] oh oh
[15:02:08] yea for sure
[15:02:12] i'm working on a patch for refine now
[15:02:17] let's continue later
[15:02:26] Super - Will be back in hopefully 1h or 2
[15:02:31] if i'm not on feel free to proceed with that, i think you can do that without my help
[15:02:32] cool
[15:31:26] yarghhh joal do you know how to solve this compile error?
[15:31:28] CqlRecordWriter.java:[10,35] cannot find symbol
[15:31:28] symbol: class ColumnFamilyOutputFormat
[15:31:28] location: package org.apache.cassandra.hadoop
[15:31:30] i've seen it before
[15:31:33] thought it had been fixed?
[15:31:37] oh maybe i just need a mv clean
[15:31:48] mvn clean*
[15:34:47] hm no
[15:34:59] milimetric: have you seen that before? ^^^ i know you all were doing stuff in refinery-source with cassandra
[15:39:14] ottomata: I had similar compile problems, and mvn clean didn't help, I think I had some bad cache or something, had to blast some m2 folder or something, fuzzy on it
[15:40:11] oh hm
[15:40:29] i worked around it by excluding refinery-cassandra from my mvn package command
[15:46:49] ottomata: weird :S
[15:47:11] ottomata: could it be that your folder is not up to date?
[15:47:21] no i updated, must be some .mvn thing
[15:47:36] ottomata: weird :(
[15:47:37] anyway, i think my patch is working (job still running), will submit for review
[15:47:42] ack
[15:47:58] ottomata: shall I deploy the patches for unpadded time?
[15:48:15] ottomata: it's easier if you stop the gobblin jobs for a minute while I deploy
[15:48:36] (CR) Joal: [V: +2 C: +2] "Merging for deploy" [analytics/refinery] - https://gerrit.wikimedia.org/r/703610 (https://phabricator.wikimedia.org/T231272) (owner: Joal)
[15:49:35] (CR) Joal: [V: +2 C: +2] "Merging for deploy" [analytics/refinery] - https://gerrit.wikimedia.org/r/703611 (https://phabricator.wikimedia.org/T231272) (owner: Joal)
[15:49:58] Actually ottomata, I'm gonna wait for your patch, we could do a single deploy
[15:51:59] (PS1) Ottomata: RefineTarget - support gzipped json as input data format [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232)
[15:52:12] k
[15:52:22] joal ^
[15:52:53] (PS2) Ottomata: RefineTarget - support gzipped json as input data format [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232)
[15:55:13] (CR) Joal: RefineTarget - support gzipped json as input data format (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[15:55:25] ottomata: I dislike assuming json for gzip files
[15:55:39] the rest is super fine
[15:56:07] (CR) Ottomata: RefineTarget - support gzipped json as input data format (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[15:56:34] joal replied, we can do discussion on the ticket for posterity :)
[15:56:42] :)
[15:56:49] (I knew you wouldn't like it :p )
[15:57:32] (CR) Ottomata: RefineTarget - support gzipped json as input data format (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[15:58:20] (CR) Joal: RefineTarget - support gzipped json as input data format (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
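The patch under review adds gzip awareness to RefineTarget's input-format sniffing. A hedged sketch of the shape of the approach (not the actual RefineTarget code; only the standard Hadoop FileSystem API is used): trust the `.gz` extension, as Hadoop conventionally does, and un-gzip on the fly so the underlying format can still be sniffed from content — plus the magic-bytes alternative that comes up just below:

```scala
import java.io.InputStream
import java.util.zip.GZIPInputStream
import org.apache.hadoop.fs.{FileSystem, Path}

// Sketch only, not the actual RefineTarget implementation: choose gzip
// handling by extension (the Hadoop convention), un-gzipping transparently
// so the underlying format (e.g. JSON) can still be inspected.
def openPossiblyGzipped(fs: FileSystem, path: Path): InputStream = {
  val raw = fs.open(path)
  if (path.getName.endsWith(".gz")) new GZIPInputStream(raw) else raw
}

// The alternative discussed below -- detect gzip by its magic bytes
// (0x1f, 0x8b) instead of by extension -- would look roughly like this:
def looksGzipped(header: Array[Byte]): Boolean =
  header.length >= 2 && (header(0) & 0xff) == 0x1f && (header(1) & 0xff) == 0x8b
```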
[15:58:57] (CR) Ottomata: RefineTarget - support gzipped json as input data format (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[16:01:10] (CR) Joal: RefineTarget - support gzipped json as input data format (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[16:02:30] oooo ok joal i'll try that
[16:03:01] ack ottomata - deploying now then, parallelizing work (one thing at a time)
[16:03:15] ottomata: May I ask you to stop gobblin on the test cluster please?
[16:03:21] ok
[16:04:19] done joal
[16:04:29] Ack - deploying in test first
[16:05:15] !log Deploy refinery to test-cluster
[16:05:19] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[16:21:19] ottomata: can you please restart the gobblin job in test?
[16:22:07] k
[16:22:27] joal i think i got the GZIPInputStream working
[16:22:28] q
[16:22:34] \o/
[16:22:35] should i change detection from looking at the file extension
[16:22:36] ?
[16:22:42] to look at the file magic byte header?
[16:23:26] hm - as you wish - in hadoop it's done using extensions, but no problem if you prefer doing it with magic :)
[16:23:38] ah not prefer, just don't know what is bets
[16:23:40] best
[16:23:56] extension should be good - we should follow conventions
[16:24:18] will keep extensions then, more complicated if i have to read compressed gzipped file bytes, and then later use GZIPInputStream bytes on the same file
[16:24:39] works for me ottomata
[16:24:43] complex it is already
[16:36:43] ottomata: it all looks good on test (oozie job not yet executed but waiting on expected data)
[16:37:09] ottomata: with your permission I'll proceed with prod, and therefore need gobblin to be stopped please :)
[16:37:21] (PS3) Ottomata: RefineTarget - support gzipped json as input data format [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232)
[16:37:24] reading
[16:37:36] :)
[16:37:41] getting some lunch
[16:37:50] ottomata: could you stop gobblin first please?
[16:37:53] in prod
[16:38:02] That would unlock me :)
[16:38:12] ah - too late :)
[16:40:33] (CR) Joal: [C: +1] "I like it! Thank you for on-the-fly-unzip :)" [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[16:46:45] ah joal
[16:47:05] joal in prod?
[16:48:04] yes please ottomata :)
[16:48:23] * joal needs to change not only the headset but the keyboard as well
[16:48:40] just webrequest?
[16:48:59] ottomata: both webrequest and netflow - they will both be impacted
[16:49:13] ok done joal
[16:49:16] thank you
[16:49:34] ottomata: do you wish me to wait for refinery-source to deploy?
[16:49:50] i just finished testing and ran netflow
[16:49:54] looks good to me if good to you!
[16:50:10] (CR) Ottomata: [C: +2] RefineTarget - support gzipped json as input data format [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[16:50:13] ok let's merge and deploy :)
[16:50:44] ottomata: I'm gonna deploy on an-launcher1002 only, make the thing restart, and redeploy with refinery-source
[16:50:49] iterative steps
[16:50:49] (CR) Ottomata: [C: +2] "> Patch Set 3: Code-Review+1" (1 comment) [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[16:50:59] k
[16:51:10] we gotta release refinery source anyway
[16:51:18] yeah
[16:51:23] this will take some time
[16:51:36] that's why I move on refinery first
[16:51:39] k
[16:52:37] !log Deploy refinery to an-launcher1002
[16:52:40] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[17:00:29] (Merged) jenkins-bot: RefineTarget - support gzipped json as input data format [analytics/refinery/source] - https://gerrit.wikimedia.org/r/703617 (https://phabricator.wikimedia.org/T271232) (owner: Ottomata)
[17:00:41] ok ottomata - ready to restart gobblin please
[17:01:10] joal ok,
[17:01:10] arf, nope
[17:01:13] ok go
[17:01:14] oh ok
[17:01:21] joal, ok if i start the refinery source release?
[17:01:28] please!
[17:01:40] Starting build #91 for job analytics-refinery-maven-release-docker
[17:01:44] ok, also starting gobblin
[17:01:47] ottomata: you can restart gobblin - my problem is with oozie, not gobblin
[17:03:21] ottomata: I forgot to deploy refinery - HDFS /facepalm
[17:03:26] !log Deploy refinery to HDFS
[17:03:28] Logged the message at https://www.mediawiki.org/wiki/Analytics/Server_Admin_Log
[17:03:56] oh heheh
[17:04:05] so easy to forget that
[17:04:33] That step will probably go away with airflow! \o/
[17:14:28] Project analytics-refinery-maven-release-docker build #91: SUCCESS in 12 min: https://integration.wikimedia.org/ci/job/analytics-refinery-maven-release-docker/91/
[17:19:36] Starting build #49 for job analytics-refinery-update-jars-docker
[17:20:04] (PS1) Maven-release-user: Add refinery-source jars for v0.1.14 to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/703621
[17:20:04] Project analytics-refinery-update-jars-docker build #49: SUCCESS in 28 sec: https://integration.wikimedia.org/ci/job/analytics-refinery-update-jars-docker/49/
[17:20:18] ok ottomata - unpadded-time released
[17:20:47] ottomata: My understanding is that it doesn't impact deletion - right?
[17:33:40] nope, both refine and purge use \d+
[17:34:59] (CR) Ottomata: [V: +2 C: +2] Add refinery-source jars for v0.1.14 to artifacts [analytics/refinery] - https://gerrit.wikimedia.org/r/703621 (owner: Maven-release-user)
[17:35:07] joal: ^^
[17:35:17] ok refinery ready for deploy
[17:35:30] ok if i do full deploy?
[17:35:34] looks good ottomata :)
[17:35:37] go for it!
[17:35:47] and don't forget the hdfs step :)
[17:41:50] ottomata: I assume the deploy you're doing is the one allowing us to finalize netflow, right?
[17:47:17] yes
[17:47:22] will have to make a refine .pp job patch too
[17:47:34] still waiting for main deploy
[17:48:53] ottomata: when we devise a strategy for airflow deploys, let's make sure we split jars from code :)
[17:49:25] still waiting for main deploy?
[17:49:28] oops
[17:49:32] whatcha mean joal?
[17:51:38] ottomata: the bulk of the time taken when deploying refinery is downloading/copying jars
[17:52:04] if we separate code and jars into 2 repos with different release cycles, we can make deploying code a lot easier
[17:52:13] actually not easier, faster
[17:52:14] ah ya
[17:58:44] joal https://gerrit.wikimedia.org/r/c/operations/puppet/+/703623
[18:01:11] one comment ottomata -
[18:01:35] ok - I assume this patch means we're ready to move netflow :)
[18:01:40] (or almost)
[18:02:27] ya
[18:02:47] joal i've got a scheduled workout starting soon! happy to do netflow with ya after, but i know it's getting late for you
[18:02:50] what do you think?
[18:03:15] ottomata: I'll stop soon - Let's finalize netflow tomorrow?
[18:03:46] ok
[18:03:51] joal, what if the e.g. hour is 17?
[18:03:52] vs 7?
[18:04:01] don't remember how the format worked but that format seemed to work...
[18:04:07] but i didn't actually check if it picked up single digit hours
[18:04:15] that's my concern
[18:04:17] checking
[18:04:31] oh, actually, ha i'm testing on the old format
[18:04:39] with padded so of course it worked
[18:04:39] ok
[18:04:46] hm
[18:05:20] ok there is some new day=7 stuff there
[18:05:21] it seems to parse correctly
[18:05:21] going to try
[18:06:25] ottomata: My tests tell me both formats work in exactly the same way for parsing
[18:06:30] so we're good
[18:08:25] As we move netflow tomorrow I'm gonna get dinner - Thank you for the help ottomata :)
[18:10:49] ok cool
[18:10:54] great, thanks joal!
[18:11:00] i'll get the deploys finished up and we can do that tomorrow
[18:11:01] laters!
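ottomata's single-digit-hour worry resolves the way joal says because the downstream jobs extract partition values with `\d+`, which doesn't care about digit width or zero padding. A toy check, not the actual refine/purge code:

```scala
// Toy check, not the actual refine/purge code: \d+ extraction is agnostic
// to zero padding and digit width, so hour=7, hour=07 and hour=17 all parse.
val hourRe = """hour=(\d+)""".r

def hourOf(path: String): Option[Int] =
  hourRe.findFirstMatchIn(path).map(_.group(1).toInt)

assert(hourOf("year=2021/month=7/day=7/hour=7").contains(7))
assert(hourOf("year=2021/month=7/day=7/hour=17").contains(17))
assert(hourOf("year=2021/month=07/day=07/hour=07").contains(7))
```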