[06:41:38] btullis: we have a bunch of wikis pending views creation, some of them for almost 2 months, could you give them some priority? I am getting pinged by some people about it. The ones waiting are all in the "Done" column https://phabricator.wikimedia.org/project/board/1060/
[08:09:58] inflatador: for what little it's worth, I've been using transfer.py from within a cookbook for small transfers (~100M at a time) and, other than being noisy in the logs, it's worked reliably.
[08:10:18] (the sre.swift.remove-ghost-objects cookbook)
[09:28:04] Emperor: note they are using wdqs/data-transfer.py, not transfer.py
[09:28:15] oh, doh, sorry
[10:13:43] jbond: akosiaris: Need a quick-ish review for this https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/911777
[10:14:24] * jbond looking
[10:14:38] I'll try to find a more elegant solution, but I didn't see the issue earlier. I also just realised I can't just say "pool eqiad" and have it switch over the active/passive services
[10:15:20] claime: ack sgtm
[10:15:21] So I'll have to trick it a bit by depooling codfw for all services (which will give me the option to switch A/P services to eqiad) and then pool only A/A in codfw
[10:15:57] jbond: thanks
[10:16:06] np
[10:36:55] claime: just +1ed it
[10:37:54] thx
[10:38:20] I have another one incoming to add mw-api-int and mw-api-int-ro to the mediawiki services constant
[13:02:48] Emperor jynus FWIW, data-transfer.py uses the same approach as transfer.py (pigz + openssl). But I am going to try transfer.py itself and see if I get different results. Thanks for the suggestion!
[13:10:41] inflatador: I don't know who uses or maintains that, but transfer.py is maintained by me and is used ~18 times every night to generate database backups
[13:11:08] it can be used as a library - I think Emperor uses it that way, maybe?
[13:12:39] however, it is not for everyone; it was created with certain needs in mind and may not be ideal in other cases
[13:12:53] who maintains it? Me ;). It predates me considerably, though. Assuming Spicerack already imports it, I could refactor the cookbook
[13:13:39] what I mean is, IF the use case is the same, you will get maintenance "for free"
[13:14:54] It's worth a try. The use case is transferring 1.2T of data between hosts in the same DC
[13:15:11] db backups are around that size
[13:15:27] however, it depends on the nature of them - it has no rsync-like functionality
[13:15:44] so it's not good for many small files, and it cannot resume failed transfers
[13:16:11] it's actually a single 1.2T file
[13:16:18] the initial aim was for xtrabackup, which requires a unix pipe (real-time generation of files)
[13:16:33] then it sounds like a similar use case
[13:17:32] integrity is also important for us, so we hash the file on reception
[13:18:24] I imagine that takes a while for 1.2T ;)
[13:18:47] not really, it can saturate a 10G link if necessary
[13:18:49] that was another aim
[13:19:50] there are some issues with logs, but solving that needs a bit of architecture and ownership work
[13:20:39] I meant that hashing a single 1.2 TB file probably takes a while
[13:20:59] it's done in real time, given a sufficiently powerful cpu
[13:21:23] the bottleneck right now is encryption, if not using modern cpu extensions
[13:22:37] hmm, maybe I'm hashing wrong. I was just thinking of how long it takes to run `sha256sum` on a large file
[13:23:09] or are you splitting the file into chunks and hashing each chunk?
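
The chunked hashing being asked about here works roughly like the sketch below: the digest is updated block by block as the stream arrives, so a 1.2T transfer needs no separate second pass over the file. This is a minimal illustration of the technique, not transferpy's actual code (the real implementation is in Transferer.py, linked below); the chunk size and path are arbitrary assumptions.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per read; an arbitrary choice for this sketch


def streaming_sha256(stream):
    """Hash a file-like object incrementally, one chunk at a time."""
    digest = hashlib.sha256()
    for chunk in iter(lambda: stream.read(CHUNK_SIZE), b""):
        digest.update(chunk)
    return digest.hexdigest()


# The same loop works whether `stream` is a local file or the read end of a
# network pipe, which is why the hash can be computed "in real time" during
# the transfer itself.
with open("/srv/backup/huge-file.bin", "rb") as f:  # hypothetical path
    print(streaming_sha256(f))
```
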
[13:23:20] so you should absolutely use transfer.py, it is not *that* bad
[13:23:47] library> indeed, I use it thus
[13:24:23] inflatador: https://github.com/wikimedia/operations-software-transferpy/blob/master/transferpy/Transferer.py#L207
[13:28:18] OK, just fired off transfer.py from cumin1001. I'm excited!
[13:33:13] let's see how it goes
[13:34:27] I don't think the utility is the right way to do every transfer (rsync, rclone and many other methods are going to be much better in other cases), but that cookbook, as it is now, should just call it IMHO
[13:35:02] marostegui: Yes, will do. Apologies for the delay.
[13:36:32] thanks :)
[15:53:15] why does buster have a newer version of openjdk-11 than bullseye?
[15:53:16] https://packages.debian.org/buster/openjdk-11-jre-headless
[15:53:22] https://packages.debian.org/bullseye/openjdk-11-jre-headless
[15:54:01] and, moritzm, how does https://gerrit.wikimedia.org/r/c/operations/docker-images/production-images/+/905592 end up updating to 11.0.19?
[17:35:44] you may already know this, but fwiw, bullseye has openjdk-17 too https://packages.debian.org/bullseye/openjdk-17-jdk
[18:04:27] ottomata: I think this may be because the bullseye version is stuck in proposed-updates, https://www.debian.org/releases/proposed-updates
[18:06:56] mutante: too fancy for us :)
[18:07:47] this view is pretty irrelevant; for bullseye it refers to the version present on the latest install media (and bullseye will only see the update to .18 with the next point release)
[18:08:03] what's relevant is what is present on security.debian.org, and that is .18 for both
[18:08:16] https://tracker.debian.org/pkg/openjdk-11 is a more useful view here than the package search
[18:08:35] and 11.0.19 was just a typo, the last update rebased to .18
[18:08:41] ah okay
[18:08:51] makes a lil more sense then
[18:09:12] since I have your attention: https://gerrit.wikimedia.org/r/c/operations/docker-images/production-images/+/911905 look okay to you? :) (no worries if you can't look now... but if you could I could keep working...)
[18:09:14] :)
[18:09:31] I need to leave now, I'll have a look tomorrow
[18:09:36] k no worries, ty
[18:21:01] ah, that makes sense, thanks for clarifying moritzm
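
For context on the "pigz + openssl" approach mentioned at 13:02:48, here is a rough sketch of that pipeline style: compress, encrypt, and send as a single stream with no temporary copy on disk. The flags, cipher, key file, and netcat transport below are assumptions for illustration only; transferpy's actual command line may differ.

```python
import subprocess

SRC = "/srv/backup/huge-file.bin"               # hypothetical source file
DEST_HOST, DEST_PORT = "db2001.example", 4444   # hypothetical receiver

# pigz compresses to stdout, openssl encrypts the compressed stream,
# and nc ships the result to the destination host.
pigz = subprocess.Popen(["pigz", "-c", SRC], stdout=subprocess.PIPE)
enc = subprocess.Popen(
    ["openssl", "enc", "-aes-256-cbc", "-pbkdf2",
     "-pass", "file:/etc/transfer.key"],        # hypothetical key file
    stdin=pigz.stdout, stdout=subprocess.PIPE)
send = subprocess.Popen(["nc", DEST_HOST, str(DEST_PORT)], stdin=enc.stdout)

# Close our copies of the pipe ends so upstream stages see SIGPIPE
# if a downstream stage exits early.
pigz.stdout.close()
enc.stdout.close()
send.wait()
```
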