[11:01:16] Emperor: I'm working on https://phabricator.wikimedia.org/T350014 to set up building for internally-developed software (i.e. not uploaded to Debian) and would like to use wmf-debci; I couldn't find anything specific for this use case though, what can I do here?
[11:15:20] godog: this is (essentially) creating a new go package, yes? There's a draft tutorial on this sort of area at https://wikitech.wikimedia.org/wiki/User:MVernon/Packaging_Tutorial and I note that there is already a golang-specific helper for dh (dh-make-golang) which I've found a bit of a pain in the past but quite useful for working out what Debian dependencies you need to specify.
[11:17:06] The Insultingly Short Version: have a branch named e.g. packaging-wikimedia that you do the packaging work in (arrange to have debian/* such that builds will work); use builddebs.yml@repos/sre/wmf-debci as CI/CD; ...; profit
[11:18:19] Emperor: nice, thank you! ok so if I got it right, builddebs.yml@repos/sre/wmf-debci will work as-is for "local" packages too, neat
[11:18:50] Yes, you can use the building machinery without any of the "try and fetch updates from Debian" tooling
[11:19:26] cheers, will try later today
[11:27:06] I've done a few go packages myself, IME if the build system is halfway sensible, things should mostly Just Work (and if not, it can be a total pain in the ...)
[11:28:02] <_joe_> I suggest NOT using debian's logic for building go packages btw
[11:28:09] <_joe_> I think it's completely broken
[11:28:19] <_joe_> (using packages for every dependency)
[11:29:17] <_joe_> what we usually do is to actually vendor dependencies
[11:31:24] It arguably makes sense from a distro POV; I can see it's more of a bore for local-only packages.
[11:36:55] <_joe_> Emperor: I have my opinions on how debian should've handled go packaging
[11:37:02] <_joe_> but it's beside the point :)
[11:39:26] I know people who are quite grumpy about the rust packaging approach; I'm too much of a dilettante in both language ecosystems to have strong opinions
[11:47:50] I had to run swapoff -a during a debian install to allow for a disk partition change, not liking that :-(
[11:52:15] what's worse - without doing that, the partition metadata wasn't properly cleared for other disks, leaving grub unable to boot :-(
[11:53:18] Is this a system you were trying to salvage some storage from? I tend to dd /dev/zero across the starts of devices (sometimes after mdadm --zero-superblock) with gay abandon if trying to change partitioning (because, as you say, otherwise things like mdadm like to find the old stuff)
[11:55:18] no, I actually wanted to delete everything - but by that time, it had already overwritten its os
[11:56:28] this is something that I discussed with IF - the need for a revert or a rescue system in case the os or partitioning gets borked, but data needs to be kept
[11:58:50] [could you not get the installer to drop you into a shell?]
[12:02:21] yes, that's how I did the swapoff
[12:02:45] but the cumin cookbook doesn't have an option for out of band issues
[12:03:25] what do you mean?
[12:07:17] have you checked the --pxe-media CLI argument? https://gerrit.wikimedia.org/r/plugins/gitiles/operations/cookbooks/+/refs/heads/master/cookbooks/sre/hosts/reimage.py#83
[12:08:08] either in the reimage or the sre.hosts.dhcp cookbook, depending on what you need
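A sketch of what that can look like from a cumin host - --pxe-media is the argument linked above, while the OS, media name, task ID, and hostname below are made-up placeholders (the cookbook's --help is the authoritative reference):

    # hypothetical invocation: reimage a host while booting non-default PXE media
    # (host, task ID, and media name are placeholders, not real values)
    sudo cookbook sre.hosts.reimage --os bullseye --pxe-media rescue \
        -t T123456 examplehost1001.eqiad.wmnet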
[13:21:38] Emperor: all good re: wmf-debci, success https://gitlab.wikimedia.org/repos/sre/alerts-triage/-/pipelines/30714
[13:35:13] \o/
[13:37:19] are backports builds something you have tried already, and/or is there an example I could follow? specifically I'd like to test the build on bullseye-backports
[13:40:12] <_joe_> godog: so you can build your deb package in CI, but we still need to publish it by rebuilding it on the build servers?
[13:43:00] _joe_: I'm not sure, my understanding is that once it's built we can upload it
[13:44:11] the resulting build artifacts should contain everything you need to push to the apt repo
[13:44:41] packages built using the trusted gitlab runners infra can be uploaded to apt.wikimedia.org
[13:44:45] godog: https://gitlab.wikimedia.org/repos/data_persistence/pcre2 see the two sid-wikimediabp branches
[13:45:25] eventually we'll have a staging repo where the runner build artefacts will be available for testing
[13:45:39] (that's automatically-updated backports), but you should in theory just need a new branch with a different suite named in the changelog (and, properly, a different version number)
[13:45:59] and then they can be synced to the main repo via some SRE-confirmation step
[13:46:36] <_joe_> moritzm: oh so I just need to download them
[13:46:44] <_joe_> ack
[13:48:38] Emperor: ack, thank you. from what I've read so far, then yes: when we have e.g. a bookworm-backports image on the registry, things should just work
[13:49:05] _joe_: docs roughly here: https://wikitech.wikimedia.org/wiki/Debian_packaging#Upload_to_Wikimedia_Repo
[13:49:19] godog: that's certainly the aim :)
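As a concrete sketch of the "different suite in the changelog (and a different version number)" pattern: the ~bpo suffix below follows Debian's usual backports versioning convention, and the package name, maintainer, and date are placeholders - local suite naming may well differ:

    # "dch --bpo" pre-fills an entry along these lines; written out by hand:
    mypackage (1.2.3-1~bpo11+1) bullseye-backports; urgency=medium

      * Rebuild for bullseye-backports.

     -- A. Packager <packager@example.org>  Thu, 02 Nov 2023 13:49:00 +0000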
[13:53:10] alright, next step is looking into getting the backports images
[13:55:48] Emperor: is there any plan to add support for autopkgtest too?
[13:56:35] nevermind, the base images seem to include backports already
[13:57:24] volans: 's not on my immediate TODO, but if there were demand it could presumably be done [I've not previously tried running autopkgtests locally, only via Debian's own CI]
[13:57:59] [yeah, me too]
[14:17:51] Emperor: I ran into a problem with mk-build-deps at https://gitlab.wikimedia.org/repos/sre/alerts-triage/-/jobs/162346#L4489 though I can't tell what's wrong, because https://gitlab.wikimedia.org/repos/sre/wmf-debci/-/blob/main/builddebs.yml?ref_type=heads#L127 overrides the default tool. any reason we can't use the default apt-get invocation?
[14:22:03] ISTR (might be wrong!) that you need -y, which the default doesn't have. I have an imminent meeting, but I wouldn't see a problem with replacing that with the default invocation but with -y, so err mk-build-deps -i debian/control -t "apt-get -y -o Debug::pkgProblemResolver=yes --no-install-recommends"
[14:23:08] ack, will send an MR your way Emperor
[14:23:41] thanks, I should be able to merge it after my meeting
[14:23:54] [wow, that is an unhelpful lack of information, thanks apt-get]
[15:05:10] I reimaged a host (cloudnet1005.eqiad.wmnet) but the new key is not present in https://config-master.wikimedia.org/known_hosts
[15:05:27] the old one has been removed
[15:05:49] did puppet run on puppetmaster1001 yet?
[15:05:56] dhinus: puppet 7 or puppet 5? did the reimage complete successfully?
[15:06:09] the cookbook does that on config-master IIRC
[15:06:12] reimage successful, and I can see "configmaster.wikimedia.org updated with the host new SSH public key for wmf-update-known-hosts-production"
[15:06:23] in the cookbook output
[15:07:04] now it's there
[15:07:09] it took a while...
[15:07:16] check puppetboard
[15:07:20] to see when it was added
[15:07:30] is that file cached on the edges?
[15:07:45] where should I look in puppetboard?
[15:08:12] in the node page?
[15:08:52] taavi: doesn't seem like it
[15:09:09] the cookbook runs puppet on 'P{P:configmaster} and not P{config-master[1-2]*}'
[15:09:24] I don't recall why it excludes the config-master hosts tbh, and I'm in a meeting right now
[15:09:30] no rush
[15:09:38] that's puppetmaster[12]001
[15:10:16] I wonder if the file at https://config-master.wikimedia.org/known_hosts is now served by the config-master hosts and not puppetmaster
[15:10:25] and hence the cookbook is not running puppet in the right place anymore
[15:15:40] dhinus: found it, it was https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/945774
[15:15:58] and was never reverted once the hosts were working
[15:16:08] and nowadays we serve that from config-master
[15:16:11] so we should adapt the query
[15:16:56] nice, thanks! not urgent as it eventually updates... but it would be nice if it were immediately available
[15:17:06] yep
[15:20:28] dhinus: (cc jbond) https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/971976
[15:23:38] volans: thanks, +1
[15:24:09] thanks, +1d as well
[15:25:33] thx
[16:08:39] Emperor: ok, a little clearer now what's going on https://gitlab.wikimedia.org/repos/sre/alerts-triage/-/jobs/162453#L2300 and I believe it is apt's target release not being bullseye-backports :| would you be open to a change that sets the target release if $SUITE contains 'backports'?
[16:11:39] I'm slightly confused
[16:19:16] apt is refusing to install golang-go from backports even though it would satisfy the version constraint?
[16:20:39] I _think_ that's a consequence of how backports are enabled by default - they're set to priority 100 (per https://backports.debian.org/Instructions/ )
[16:23:00] indeed, it is one of the Release fields that instructs apt to use a low pin value IIRC
[16:24:48] I forget which; at any rate, either raise the pinning or set the target release, either should work I think
[16:25:34] that would get you everything from backports (which maybe is the correct answer, but isn't obviously what one wants in the general case)
[16:25:56] there ought to be a "use backports iff necessary to satisfy a version constraint"
[16:27:42] the closest I found to that is apt -t -backports
[16:28:39] I was thinking of changing the pinning only if the changelog entry contains 'backports' though
[16:30:43] I'm not sure that would actually work as expected, will give it some more thought
[16:36:29] I think doing that will install all the build-deps that are in -backports (rather than just the necessary-by-versioning ones)
[16:43:31] is that the pinning or the target release, Emperor?
[16:44:00] setting target suite (but I think bumping the priority of -backports would have the equivalent effect)
[16:45:00] ok, I'll investigate more tomorrow
[16:52:05] -backports gets priority 100 by default (as a consequence of NotAutomatic: yes and ButAutomaticUpgrades: yes); if we raised that to >=500 (equal to the stable archive) then you'd get everything from -backports if available. If we leave it at <500 then apt will not pick a -backports package unless forced by -t or /bullseye-backports
[16:54:21] [or if you specified the version on the CLI directly]
[16:55:23] ...which might be the least-bad answer, but it's bugging me that this seemingly can't be done better
[16:55:43] [which == bumping priority or specifying -t]
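For reference, raising -backports to priority >=500 as described is a few lines of apt preferences - a sketch only, assuming the suite is bullseye-backports (the build images may wire this up differently; cf. D02backports.erb below):

    # /etc/apt/preferences.d/backports (sketch)
    # at priority >= 500, apt prefers any available -backports version
    Package: *
    Pin: release a=bullseye-backports
    Pin-Priority: 500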
[16:59:47] Emperor: for what it's worth, the build hosts with BACKPORTS=yes set the priority to 500
[16:59:53] see: modules/package_builder/templates/D02backports.erb
[17:03:57] jbond: Hm, so maybe the answer is to have some images with that setting? [which is another way of arranging for builds to pull in dependencies from -backports wherever possible]
[17:04:16] I'm not sure we want it in the general case of package builds
[17:04:24] agree
[17:33:48] <_joe_> Emperor: we should make that setting depend on an env variable, I guess
[17:47:37] yeah, that would be a plausible approach
[18:41:25] We are bringing pc4 online; there will be some slowdown until it gets populated
[18:41:38] in appservers and so on.
[19:08:01] eoghan: I merged your puppet change by mistake
[19:08:28] eoghan: this https://gerrit.wikimedia.org/r/c/labs/private/+/971993
[20:06:38] marostegui: ack, thanks! Should be a noop.
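To round off the backports thread above: with -backports left at its default priority 100, the two per-invocation escape hatches mentioned at [16:27:42] and [16:52:05] look like this, using golang-go from the discussion as the example package:

    # set the target release for this invocation only
    apt-get install -t bullseye-backports golang-go
    # or pin just this one package to the backports suite
    apt-get install golang-go/bullseye-backports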