[13:13:08] o/
[14:00:39] sorry if i missed a message here or elsewhere earlier: i meant to say that the product & tech regular meeting conflicts with the talk-with-search meeting. how to handle? push the talk-with-search one hour later today? reschedule talk-with-search to next week?
[14:03:18] Hi Adam, today's agenda has one request for discussion
[14:03:59] I think it's a bit late to reschedule the talk to the Search Platform meeting.
[14:04:09] My bad, I did not see the conflict.
[14:04:14] pfischer LMK if you wanna pair. I'm working on a docker image for airflow
[14:04:21] no big deal if not
[14:05:00] Let's keep the office hours, see if it finishes early and jump on the P&T meeting, or watch the recording
[14:06:13] inflatador: thanks, I don't have anything on my plate today. Janis is still working on patches for enabling distributed rate limiting via envoy (T362310), but that's nothing we can actively contribute to at the moment, besides reviewing
[14:06:14] T362310: SUP rate-limit fetch - https://phabricator.wikimedia.org/T362310
[14:06:50] pfischer np, let's skip this week
[14:36:59] gehel I'm going to the P&T mtg if that is OK w/you
[14:54:14] aww, dumps had been going well for a while. This week's wikidata dump failed. Going to ignore it and hope next week is better
[15:02:40] 🤞
[16:00:36] workout, back in ~40
[17:00:31] back
[17:36:31] cool, gitlab only lets you request one reviewer at a time. On the other hand, it immediately adds/notifies the reviewer without warning you, so you can add/remove/add everyone, I guess
[17:41:05] lunch, back in ~40
[18:37:34] inflatador: indeed, I'm trying to get to grips with it :P
[18:37:44] thanks for that, most everything looks good to me
[18:37:58] one thing I did think we should probably change is to also check for IPv6 connectivity
[18:38:30] The IPv4 check should cover both in almost all cases, but there is one edge-case network mismatch that the v6 check may highlight and the v4 one won't
[18:39:03] looked a little into the wikidata dump failure. What happened is the dump took so long that the mediawiki release it was running on was deleted from the server, and it couldn't pull in an auto-include during shutdown (presumably every class it needed during normal runtime was already loaded)
[18:39:08] I commented back in gitlab, but just in case I sent my words into some void :)
[19:01:07] topranks thanks for the review, should be easy enough to add the IPv6 addr
[19:01:35] yep I think you can just do the same thing again
[19:49:23] turned on all public wiki writes for sup in codfw
[19:52:25] !!!
[19:52:31] {◕ ◡ ◕}
[19:53:00] it's currently doing double duty, will turn cirrus writes off in the next 30 min or so in the deploy window
[19:53:39] Exciting! If I can do anything to help LMK
[19:55:32] * ebernhardson realizes, pondering the jump in the saneitizer average loop completion graph from 70% to 4%, that since nothing strictly keeps the start times of the loops together, eventually this will just be a line at 50%
[21:10:59] it all seems reasonably happy
[21:16:08] Cool
[21:17:30] ebernhardson we have a pretty new cloud server called 'wdqspuppetserver-1' on our wdqs WMCS account...any idea who might be using this?
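For context on the 18:37-19:01 review exchange: below is a minimal sketch of the kind of dual-stack reachability check being discussed. The actual check lives in code not shown in this log (likely Puppet-managed), so the host, port, and function names here are illustrative assumptions only.

# Hypothetical sketch of a dual-stack connectivity probe; host/port are placeholders.
import socket

def can_connect(host: str, port: int, family: int, timeout: float = 3.0) -> bool:
    """Try a TCP connect over the given address family; True on success."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no address of this family resolves for the host
    for fam, typ, proto, _canon, sockaddr in infos:
        try:
            with socket.socket(fam, typ, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(sockaddr)
                return True
        except OSError:
            continue
    return False

host = "example.org"  # placeholder target
v4_ok = can_connect(host, 443, socket.AF_INET)
v6_ok = can_connect(host, 443, socket.AF_INET6)
# A v4/v6 mismatch is exactly the edge case the extra v6 check is meant to surface.
if v4_ok and not v6_ok:
    print(f"{host}: reachable over IPv4 but not IPv6")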
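The 19:55 observation about the saneitizer graph can be illustrated with a toy simulation: model each loop's completion as a sawtooth from 0% to 100%, and compare loops that start together against loops whose start times have drifted apart. All numbers below are made up for illustration.

# Toy model only: aligned phases sweep together; drifted phases average near 50%.
import random

n_loops = 16
phases_aligned = [0.0] * n_loops                            # all loops start together
phases_drifted = [random.random() for _ in range(n_loops)]  # start times spread out

def avg_completion(phases, t):
    """Average fraction complete across loops at time t (loop period = 1.0)."""
    return sum((t + p) % 1.0 for p in phases) / len(phases)

for t in [0.1, 0.4, 0.7]:
    print(f"t={t}: aligned={avg_completion(phases_aligned, t):.0%}, "
          f"drifted={avg_completion(phases_drifted, t):.0%}")
# aligned sweeps 10% -> 40% -> 70%; drifted hovers around 50% at every t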
[21:23:43] inflatador: hmm, not sure
[21:24:29] ebernhardson np, dancy pinged us about some post-git-fat-removal puppet errors, and it seems related to some puppet code that does non-scap deploys in cloud
[21:24:50] I've asked in the cloud and wikidata rooms, maybe someone knows
[21:36:20] OK, we tracked it down...turns out it's unrelated to the problem Ryan and I are working on. It was created by abogutt, so we're not worried about it
[21:48:06] dcausse: dr0ptp4kt: gehel: do any of you anticipate us needing deployment-prep (horizon) wdqs hosts in the future? inflatador and I are doing some spring cleaning right now. our gut is that we won't really have a use case for wdqs in deployment-prep, but wanted to check
[21:49:56] I don't think we've used them for at least a few years. So we can probably delete them and recreate them if needed.
[21:50:06] ^^ not the deployment-prep project, but the wikidata-query service project
[21:50:49] we already have a "search" project, probably don't need one just for query services
[21:50:53] There might be an integration with Mediawiki in deployment-prep, but I doubt it
[21:52:23] to be clear, we won't be touching deployment-prep. We just want to delete our wikidata-query account. The only thing running on it is a VM called 'ldf-test.' We can wait for d-causse to get back before making a decision
[22:20:47] well that's...annoying. If I invoke something like F.col('foo') when importing a file, it blows up if the jvm hasn't been initialized yet. Works in prod, blows up in tests (because python starts the jvm, instead of the jvm starting python)
[22:21:17] entirely solvable, but it means even though all these expressions are basically static values, they have to be built up at runtime and not simply imported as constants
[22:22:01] * ebernhardson was trying to have a python module you would import for a specific table that had all the various complex properties of that table
[22:24:12] testing spark is never fun :S and the important testing is hard to do here and happens in jupyter with real data anyways, might not do much
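A minimal sketch of the workaround described at 22:20-22:21: PySpark Column expressions call into the JVM when they are constructed, so building them at module import time fails in tests where Python starts the JVM later. Wrapping the expressions in functions defers construction to call time. The column and function names below are illustrative, not the actual table code.

# pyspark: defer Column construction until after the JVM/SparkSession exists.
import pyspark.sql.functions as F

# BAD: evaluated at import time; raises if no active SparkContext/JVM yet.
# NORMALIZED_TITLE = F.lower(F.col("page_title"))

def normalized_title():
    """Build the expression lazily, once the caller has started Spark."""
    return F.lower(F.col("page_title")).alias("normalized_title")

def select_page_columns(df):
    """Apply the table's 'complex properties' to a DataFrame at runtime."""
    return df.select(F.col("page_id"), normalized_title())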