[10:31:48] lunch
[13:12:27] .moti wave
[13:12:29] o/
[14:01:38] ebernhardson: I’d like to base my redirect handling on your deduplication PR so redirect adds/removes get merged, too. Is there anything preventing us from merging it? I just re-ran the CI pipeline, hoping that will resolve the “no space left on device” issue.
[14:15:17] dcausse: just thinking out loud about merging redirects: if we have any rev-based input event in the merge window, we’d add/remove redirects to/from fields.redirect. That would require prioritising input events by completeness (and age), as we’d want to merge into the latest, most complete input event. If there’s no complete input event inside the window, we have to rely on the ES-extra-set-handler to process redirect adds/removes. Is that correct?
[14:21:29] pfischer: lemme check, I thought I made the cirrusdoc building in such a way that redirects would not be fetched, meaning that when fetching the content on a rev-based update you'd not get redirects there
[14:30:23] for some reason I thought that setting cbbuilders=content would not populate the redirect array, but this does not seem to be the case
[14:32:21] the RevisionFetcher does use cbbuilders=content
[14:32:39] I think the original intent was to not fetch redirects on rev updates
[14:32:51] because this is not when they are changed
[14:33:51] I need to understand why cbbuilders=content is still producing the redirect array
[14:36:42] pfischer: in short, if we make sure that the cirrusdoc API does not return the redirect array, I think it's fine to simply override the redirects array (should be empty) and use the set_noop handler
[14:42:43] I wonder what happens with the set handler if you still provide the content of the array in the source body
[14:58:03] pfischer: should be ready to merge
[14:58:54] assuming retro is replaced by staff meeting?
[14:58:55] shall we cancel the retro in favor of the staff meeting?
[14:59:13] sounds good to me
[14:59:21] ha. that's three votes, so let's cancel
[14:59:22] +1
[15:33:16] ah, it's because with the new mem cache, the doc builder $flags are not part of the cache key...
[16:08:34] workout, back in ~40
[16:54:56] back
[16:56:04] dinner
[17:46:26] lunch, back in time for pairing
[18:20:11] back
[18:20:48] ryankemper if it's OK with you, I'd like to move pairing back at least 30m...MrG isn't here and the data transfer needs ~30-45m to complete before we can do much work on it
[18:30:29] inflatador: ack!
[18:34:37] ryankemper thanks! I updated the invite.
[18:58:55] the latest version of the cookbook is no longer bombing out on the 'pool' cmd
[19:04:15] Excellent
[19:37:06] I'm still not sure exactly what we need to do to get the application deployed. Git-fat is installed, but it doesn't look like the large files (jars/wars) are actually getting pulled down
[19:54:02] git-fat should be triggered by scap deploy, you might need to re-run the deploy?
[19:54:46] it seems plausible puppet set up the repo and doesn't know anything about the scap configuration; once you run a deploy of the repo, the scap bits would do the git-fat parts
[19:54:58] yeah, it looks like we'll have to add these hosts to the scap dsh
[19:55:41] scap apparently won't deploy to stuff that's not explicitly in the dsh file or already pooled via conftool? It has some catch-22 guardrails that we're going to have to work around
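A rough way to sanity-check both conditions from the deploy host before retrying; the dsh group file path and group name below are assumptions for illustration, and the confctl call uses the standard conftool selector syntax:

```sh
# Is the new host listed in scap's dsh targets? (path/group name assumed)
grep -n 'wdqs2019.codfw.wmnet' /etc/dsh/group/wdqs \
  || echo "not in dsh targets -- scap will not select it"

# Is the host already known to conftool / pooled?
confctl select 'name=wdqs2019.codfw.wmnet' get
```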
[19:55:50] ahh, hmm
[19:56:08] probably just a puppet patch a la https://gerrit.wikimedia.org/r/c/operations/puppet/+/914384
[19:56:31] oh, ok
[19:57:20] either that or I'm invoking scap wrong. `scap deploy -l 'wdqs2019.codfw.wmnet' ${VERSION}` returns `No targets selected, check limits and dsh_targets`
[19:58:17] hmm, tbh I think I've only ever deployed to the defaults :) Indeed, the docs describe -l not as defining the set of hosts, but as filtering the available hosts. So yeah, it needs to already be in the list
[20:42:49] OK, puppet patch up for adding the new hosts into dsh...I *think* we'll be able to remove this once we bring 'em into production? https://gerrit.wikimedia.org/r/c/operations/puppet/+/936089/
[20:45:17] inflatador: should work, and yeah, it should be able to be removed after running it.
[20:47:01] ebernhardson cool. I forgot 2013, so just pushed a new patch
[20:55:01] hmm...scap deploys with no errors, but it's still not pulling down the bigger files
[20:58:33] :S
[20:59:43] poking logs, sec
[21:02:13] scap.runcmd.FailedCommand: Command 'git fat init' failed with exit code 1;\nstdout:\n\nstderr:\ngit: 'fat' is not a git command. See 'git --help'
[21:02:25] from wdqs2019
[21:02:58] curiously, I see git-fat from my user account :S
[21:03:06] Yeah, I'm on 2019 too doing stuff ;)
[21:03:12] ahh :)
[21:03:39] curious that the deploy doesn't report an error even though git-fat failed
[21:03:44] only in the logs
[21:04:25] Yeah, I think it's puppet running `Notice: /Stage[main]/Query_service::Deploy::Scap/Scap::Target[wdqs/wdqs]/Package[wdqs/wdqs]/ensure: created (corrective)` which seems to run before git-fat is installed? I dunno...
[21:05:08] puppet shouldn't affect the scap deploy, as long as puppet has completed a run before the deploy is done
[21:06:39] Puppet always fails on the prometheus exporters
[21:06:47] ahh
[21:07:29] its scap action "works"...at least the parts that don't need git-fat
[21:08:47] we're in https://meet.google.com/eki-rafx-cxi if you wanna join
[22:07:38] happened to run across this blog post again and found it amusing: "Since full-blown artificial intelligence that is able to read millions of pages of information and provide the answer to any question is not yet available to everyone, what do we do?" - Trey Jones, 2018
[22:15:30] I guess the answer is "wait a few years?" ;P
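For the `git fat init` failure seen at 21:02, a rough diagnostic sketch to run on the target host; the /srv/deployment repo path is an assumption about the scap target layout, and the subcommands are the stock git-fat ones:

```sh
# Is git-fat on PATH for the deploying user? It worked from an interactive
# shell, so the scap failure may just be an environment/PATH difference.
which git-fat || echo "git-fat not on PATH in this environment"

# Inside the deployed repo, redo what scap's git-fat step would have done:
cd /srv/deployment/wdqs/wdqs   # assumed target path
git fat init   # registers the fat filter/config in the local repo
git fat pull   # fetches the large jars/wars from the fat object store
```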