[13:18:50] hmm, that's neat. I did not know you could mark yourself away in Phab https://phabricator.wikimedia.org/E1687
[15:03:55] \o
[15:05:19] .o/
[15:52:19] o/
[15:53:57] \o
[16:34:09] •o/
[17:14:11] * ebernhardson wonders if the gitlab ci can submit a patch to gerrit
[17:14:38] could have the streaming updater trigger_release also submit a patch to gerrit with the version bump
[17:15:11] i guess of course it can, the question is getting credentials in place
[17:16:49] ebernhardson: Which repo would you like to patch? Schemas?
[17:17:43] pfischer: deployment-charts
[17:18:09] pfischer: i suppose we will release less often soon, but a manual step i do is copy the new tag out of gitlab and create a gerrit patch
[17:22:41] grr, sounds tedious at the current frequency this is happening. :-(
[17:25:00] ebernhardson: Would the HTTP credentials work for this? https://gerrit.wikimedia.org/r/settings/#HTTPCredentials
[17:26:08] pfischer: hmm, interesting, it allows generating additional random passwords, i hadn't seen that. I suppose we could put it into the gitlab variables, although it's a bit dubious putting personal data into a gitlab variable
[17:26:43] Yeah, still better than putting your SSH key there, but dubious still
[17:28:51] maybe i could use cindy's account, at least it doesn't have +2 anywhere, which reduces scope. The name is a bit funny for the purpose, but eh
[17:29:42] * pfischer wonders why a single event remains in a flink tumbling window forever (is not emitted) if not accompanied by at least one more event
[17:29:54] that is odd
[17:30:09] indeed
[17:30:21] I’ll think about it over dinner ;-)
[18:03:32] does anyone have strong feelings whether or not to enable elastic snapshots in eqiad?
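[editor's note] The GitLab-CI-to-Gerrit idea discussed above could look roughly like the job below. This is only a sketch, not the actual pipeline: `GERRIT_USER` and `GERRIT_HTTP_PASSWORD` are assumed masked CI/CD variables holding the Gerrit HTTP credentials from the settings page linked above, and `CHART_YAML` / `NEW_TAG` are placeholders.

```yaml
# Hypothetical .gitlab-ci.yml job; all variable names are assumptions.
submit-version-bump:
  stage: release
  script:
    - git clone "https://${GERRIT_USER}:${GERRIT_HTTP_PASSWORD}@gerrit.wikimedia.org/r/operations/deployment-charts"
    - cd deployment-charts
    - sed -i "s/^version:.*/version: ${NEW_TAG}/" "${CHART_YAML}"
    - git commit -am "cirrus-streaming-updater: bump to ${NEW_TAG}"
    # Gerrit creates a review for anything pushed to refs/for/<branch>.
    # A Change-Id footer is usually required too (normally added by the
    # commit-msg hook, which this sketch glosses over).
    - git push origin HEAD:refs/for/master
```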
re https://phabricator.wikimedia.org/T348686
[18:05:06] also, whether or not we need it for the smaller clusters
[18:06:27] inflatador: it would be nice if it was just everywhere, my most recent usecase was to copy testwiki to relforge which is on a small cluster
[18:08:08] apparently pushing a deployment-charts patch is planned future work for the existing release pipeline
[18:08:51] ebernhardson ACK, I imagine it won't be too much work to get it everywhere
[18:09:39] hoping it's just applying the same thing three times, but who knows
[18:15:49] Y, probably. Good to hear you're using it and it works
[18:30:41] lunch/appointment, back in ~2h
[19:10:54] destroyed and recreated the cirrus-streaming-updater in k8s, still getting serialization errors :S I think david suggested before we may need to directly write serializers, that may be the case
[19:29:27] ebernhardson: did it fail for the same reason (trying to de-serialize a list of strings)?
[19:32:18] pfischer: "Instant exceeds minimum or maximum instant"
[19:32:33] deserializing into the time window
[19:33:07] then also errors about reading beyond the end of the record. I guess my suspicion is the Instant is reading some other field's value?
[19:34:34] Yes, sounds like it. I was thinking of reusing rows instead (before writing custom serializers) since we already have that transcoding implemented.
[19:34:53] sounds viable
[19:35:33] I might not be able to implement it today but can look into that tomorrow.
[19:37:43] that'd be awesome, thanks!
[19:46:32] BTW: I found the reason for the event lingering inside the window: the window has a trigger that decides when the window is flushed/processed. That trigger was not set explicitly and defaults to an EventTimeTrigger, which waits for a watermark to come by before triggering a flush. But due to the low throughput in my local kafka broker, that watermark never comes, at least not with the current WatermarkStrategy.
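[editor's note] The lingering-window behaviour pfischer describes can be modelled with a toy event-time window in Python. This is a sketch of the mechanism only, not Flink's actual implementation; window size and timestamps are made up.

```python
# Toy event-time tumbling window: a window fires only when the watermark
# passes its end, and the watermark only advances from event timestamps.
# A single event with no successors therefore leaves its window open forever.
WINDOW_MS = 10_000

def run(events):
    """events: list of event timestamps in ms. Returns (fired, pending) window starts."""
    windows = {}               # window start -> buffered event timestamps
    watermark = float("-inf")
    fired = []
    for ts in events:
        start = ts - ts % WINDOW_MS
        windows.setdefault(start, []).append(ts)
        # Watermark strategy: max event time seen so far (no idleness handling).
        watermark = max(watermark, ts)
        # EventTimeTrigger-like behaviour: fire windows whose end <= watermark.
        for s in sorted(windows):
            if s + WINDOW_MS <= watermark:
                fired.append(s)
                del windows[s]
    return fired, sorted(windows)

# One lonely event: nothing ever advances the watermark past the window end.
print(run([1_000]))            # ([], [0])

# A later event in the next window releases the first one.
print(run([1_000, 15_000]))    # ([0], [10000])
```

For what it's worth, Flink's `WatermarkStrategy#withIdleness` exists for idle inputs, but it only stops an idle stream from holding back the combined watermark; for a single low-traffic source the watermark still only advances when new events arrive.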
[19:47:33] If I replace it with a ProcessingTimeTrigger, it works as expected. However, since we want to judge lateness by event-time, that’s not an option.
[19:48:16] interesting, i guess i hadn't looked closely into flink timers but that's not what i expected
[20:19:49] ebernhardson: implemented a quick POC to see if rows are an option. However, the schema is strict when it comes to incomplete target documents, so storing the encoded JSON byte[] is not an option.
[20:20:27] I’ll look into alternatives tomorrow.
[20:29:58] back
[21:56:00] ebernhardson: would you be able to narrow down the source event that led to the serialization issue? Maybe I could try to reproduce it locally tomorrow
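[editor's note] The suspicion above — that the Instant deserializer is reading some other field's bytes — is easy to reproduce in miniature. The sketch below is hypothetical: the record layout and values are invented, not the actual updater's format, though (seconds: i64, nanos: i32) is roughly how Flink's InstantSerializer lays an Instant out.

```python
import struct

# Toy layout: a length-prefixed UTF-8 string followed by an Instant
# stored as (epoch seconds: i64, nanos: i32).
INSTANT_MAX_SECONDS = 31_556_889_864_403_199  # java.time.Instant.MAX epoch seconds

title = "Hypothetical page".encode("utf-8")
payload = struct.pack(f">i{len(title)}sqi", len(title), title, 1_700_000_000, 0)

# Correct read: honour the length prefix, then decode the Instant.
(length,) = struct.unpack_from(">i", payload, 0)
seconds_ok, nanos_ok = struct.unpack_from(">qi", payload, 4 + length)
assert seconds_ok <= INSTANT_MAX_SECONDS

# Misaligned read: a deserializer that skipped the wrong number of bytes
# interprets string bytes as the seconds field -> "Instant exceeds minimum
# or maximum instant", and later reads run past the end of the record.
seconds_bad, _ = struct.unpack_from(">qi", payload, 4)
print(seconds_bad > INSTANT_MAX_SECONDS)  # True: ASCII bytes decode to a huge i64
```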