[09:48:51] lunch
[13:17:11] dcausse are you OK with doing the calico migration for rdf-streaming-updater today ( https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/1072597 )? I know you mentioned using a restartNonce in https://wikimedia.slack.com/archives/C055QGPTC69/p1726237109752779?thread_ts=1726214949.315629&cid=C055QGPTC69 so if I need to clean up anything, LMK
[13:22:17] inflatador: sure, I was about to suggest doing this actually
[13:22:53] perhaps with the new networkpolicy you added we might not even need a restart?
[13:27:08] I couldn't get the policy to deploy in staging w/out destroy/apply, but I'm hopeful we won't have to do that. We can always start with a simple deploy, then use more disruptive options (restart/destroy) only if things go wrong. If we start on wcqs I think we'll be able to figure that out without too much at stake
[13:29:22] we could use staging for this perhaps?
[13:30:44] anyways we can figure that out when we start pairing
[13:38:35] I won't be able to join the Wednesday meeting, conflicting SRE learning circle
[13:42:37] we could try again in staging, sure
[13:43:16] unfortunately I've done it a few times there already ;(
[13:46:12] inflatador: with the networkpolicy copied and adjusted to match flink-app-$release instead of vendoring the template as well?
[13:49:24] dcausse sadly, yes. But I'm OK with trying it again. I wanted to ask about the savepoint patch... would it make more sense to manually trigger a savepoint per environment right before we migrate rather than triggering them all at once? Was thinking we might lose some updates if we trigger and then wait a while
[13:49:39] if that's wrong and we can just replay, LMK
[13:50:00] ah... sigh :( was hoping that not having to create new labels would have helped
[13:50:18] inflatador: imo you can drop the savepoint patch
[13:50:43] we can do the stop manually if we have to do the destroy/redeploy thing
[13:53:47] ACK, I removed it from the relation chain
[14:03:29] dcausse in pairing now if you wanna join
[14:08:19] inflatador: sorry, was distracted at home, joining
[15:02:45] we're still in the pairing session doing calico stuff
[16:07:04] dcausse brouberol looks like we already had a patch for removing the old policies ( https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/1072243 ). I'm going to go ahead and merge
[16:09:15] workout, back in ~40
[16:28:09] Nice!
[17:28:48] sorry, I've been back
[17:31:17] grumble grumble, we're still getting alerts for categories-related systemd units on graph split hosts
[17:33:29] so much for https://phabricator.wikimedia.org/T373935#10145608
[17:37:47] dinner
[17:42:17] hmm, I think it's only on scholarly hosts, maybe I just missed those last time
[17:42:38] lunch, back in ~45
[18:36:04] * ebernhardson wonders why we have update-topic and topic-prefix-filter as options; it seems update-topic should be derived from update-stream and topic-prefix-filter
[19:09:27] ebernhardson: yes, I guess that predates dcausse's implementation of topic filters
[19:23:37] ebernhardson: since I was nap-muted during the Wednesday meeting: did you find a solution for your private-update-stream question from last night?
[19:24:33] pfischer: yea, I added a consume-private-events option that defaults to false which will enable union'ing (and removed saneitize-private-wikis to reuse it). Right now just reviewing code and pondering which tests I need to add
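A rough sketch of the union'ing described above, for readers following along (minimal and hypothetical: the class, method, and the String stand-in for the event type are placeholders, not the actual updater code; only the consume-private-events flag and Flink's union() come from the discussion):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

/** Hypothetical sketch: private events are only union'ed into the
 *  pipeline when consume-private-events is enabled. */
class UpdateSourceSketch {
    static DataStream<String> buildUpdateStream(StreamExecutionEnvironment env,
                                                boolean consumePrivateEvents) {
        // Public update events are always consumed (placeholder source).
        DataStream<String> updates = env.fromElements("public-event");
        if (consumePrivateEvents) {
            // union() requires both streams to share the same element type.
            updates = updates.union(env.fromElements("private-event"));
        }
        return updates;
    }
}
```

Defaulting the flag to false keeps the current behavior for deployments that never consume private events.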
[19:25:22] it wasn't a particularly hard problem, but I only noticed it as I was trying to finish my day and clean things up enough to post a first draft on gitlab
[19:25:34] and realized I missed some things
[19:28:07] Yeah, just had a similar experience once I saw my own Cirrus CR on gerrit, the TODOs jumped out at me...
[19:30:54] ebernhardson: I just remembered the reason for update-stream vs update-topic: deriving a topic from a stream only works for the source builder via event utilities, but the kafka sink builder does not provide that lookup.
[19:31:36] pfischer: ahh, that would explain why we can't use the filter. I noticed we pass the filter into the constructor but didn't delve deep enough to see everywhere it was used
[19:35:58] We'd have to make sure that the topic filter always resolves to exactly one topic, which we then pass to the kafka sink. Seems doable.
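A minimal sketch of the "exactly one topic" check suggested here, assuming the stream's topic list is already in hand (e.g. looked up via event utilities) and that the filter is a plain prefix match; the helper and its name are hypothetical, not the real sink builder API:

```java
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical helper: derive the single kafka sink topic from a stream's
 *  topics plus topic-prefix-filter, instead of a separate update-topic option. */
class SinkTopicSketch {
    static String resolveSinkTopic(List<String> streamTopics, String topicPrefix) {
        List<String> matches = streamTopics.stream()
                .filter(topic -> topic.startsWith(topicPrefix))
                .collect(Collectors.toList());
        if (matches.size() != 1) {
            // The kafka sink writes to a single topic, so an empty or
            // ambiguous match is a configuration error.
            throw new IllegalArgumentException(
                    "topic-prefix-filter must select exactly one topic, got " + matches);
        }
        return matches.get(0);
    }
}
```

For example (hypothetical topic names), resolveSinkTopic(List.of("eqiad.mutation", "codfw.mutation"), "eqiad.") returns "eqiad.mutation", and any prefix matching zero or several topics fails fast.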