[11:47:48] Bryan tells me that clouds.yaml is no longer mounted in toolforge containers (as it used to be with https://gerrit.wikimedia.org/r/c/operations/puppet/+/916589) -- is there a way to carry that feature over to modern toolforge containers, or is the whole mounting-local-files-into-containers thing no longer possible?
[11:51:09] it is definitely possible
[11:51:41] can you write me a patch, or point me to the right yaml file?
[11:51:57] I'm trying to think if that would be elegant though. Another approach for a tool that wants to hit the openstack API would be to ship its own clouds.yaml file
[11:52:26] so we as platform maintainers have one less thing to worry about
[11:53:04] shipping its own was what I was trying to do here: https://gerrit.wikimedia.org/r/c/labs/tools/stashbot/+/1093997
[11:53:25] issues are 1) getting it in the right place and 2) keeping it in sync with the puppetized version
[11:54:49] I see
[11:56:22] I can probably work around the 'in the wrong place' thing at the cost of messy code
[11:57:52] I think the easiest solution is to just mount the file like the novaobserver one
[11:58:44] just for that particular tool, or globally?
[12:00:49] globally, I'm sending a patch
[12:01:10] https://gitlab.wikimedia.org/repos/cloud/toolforge/volume-admission/-/merge_requests/24
[12:01:40] that'll sure make my life easy :)
[12:01:43] thank you!
[12:02:39] (untested)
[12:02:52] would you like to take over testing and deploying it?
[12:04:25] I definitely don't know how to do that, but I can learn
[12:04:43] testing means with lima-kilo right?
[12:05:32] you can test in lima-kilo, and in toolsbeta
[12:05:59] lima-kilo would definitely help to save round trips with the patch
[12:06:23] 1) deploy lima-kilo on your system
[12:06:36] 2) get the patch in your local checkout of volume-admission
[12:06:47] 3) re-deploy volume-admission with that patch in your lima-kilo
[12:07:13] 4) start a test tool (via a job, for example) and verify the file is present
[12:07:24] we may need to patch lima-kilo to produce that file, though
[12:09:31] let's see if l-k still works on my laptop...
[12:09:44] https://gitlab.wikimedia.org/repos/cloud/toolforge/lima-kilo/-/merge_requests/228
[12:15:29] andrewbogott: for T386915, could you please add which run of the decom cookbook it was? hostname and from where you ran it
[12:15:30] T386915: cookbook: decomission workflow may remove VIPs from netbox - https://phabricator.wikimedia.org/T386915
[12:20:38] yep, done
[12:21:13] It's a mixed bag, I had the sense to notice that it was doing something weird and took a screenshot, but not sense enough to tell it not to do it
[12:21:29] thanks
[12:25:33] so one thing to note is that https://gerrit.wikimedia.org/r/c/operations/puppet/+/916589 pre-dates the PSP -> kyverno migration, so before merging the volume-admission patch please ensure the path is also allowed by Kyverno
[12:32:00] lol when I launch lima-kilo my laptop gets too hot to touch
[12:34:55] taavi: kyverno doesn't contain policies related to host path. I now wonder if that means that all host paths are allowed to be mounted :-(
[12:44:24] T386921
[12:44:32] https://phabricator.wikimedia.org/T386921 (NDA ticket)
[12:55:28] What information is in clouds.yaml? Can we give that information in a different way? (Not depending on puppet/filesystem on the worker)
[12:57:42] ideally, an envvar or something. Maybe that's more elegant
[12:58:47] https://www.irccloud.com/pastebin/WZJqjzne/
[12:59:47] having that actual file present allows for very convenient openstack client setup
[12:59:48] conn = openstack.connect(cloud="novaobserver")
[13:00:09] but it's certainly possible to set all those creds one at a time if they come from someplace else
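(For reference: a minimal sketch of what "setting all those creds one at a time" could look like with openstacksdk instead of relying on the mounted clouds.yaml. The auth URL, project, region, and password below are placeholders, not the actual novaobserver values from the pastebin above.)

```python
import openstack

# Same connection as openstack.connect(cloud="novaobserver"), but with the
# credentials passed explicitly instead of being read from clouds.yaml.
# Every value here is a placeholder.
conn = openstack.connect(
    auth_url="https://keystone.example.org:25000/v3",  # placeholder
    username="novaobserver",
    password="not-the-real-password",                  # placeholder
    project_name="some-project",                       # placeholder
    region_name="region-one",                          # placeholder
    user_domain_name="Default",
    project_domain_name="Default",
)
# conn is now usable, e.g. conn.compute.servers()
```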
[13:01:01] and there's already novaobserver.yaml present which contains the exact same creds.
[13:02:51] what tools need it?
[13:03:46] I can certainly just use novaobserver.yaml if there's a downside to including clouds.yaml
[13:03:48] this is for https://gerrit.wikimedia.org/r/c/labs/tools/stashbot/+/1093997/5/stashbot/sal.py#267
[13:05:59] the less things we mount from the workers the better xd
[13:06:08] how hard is it to do it using environment variables?
[13:07:08] not hard, it just adds another source of truth for the credentials.
[13:07:22] wait, are you talking about also removing novaobserver.yaml now?
[13:07:31] yep, if not now eventually
[13:08:49] I'll look at doing it via env variables then
[13:09:30] I wonder if openstack CLIs/SDKs support env vars natively as an alternative to yaml files
[13:10:48] they do, mostly
[13:10:55] would be nice if we can use custom env vars, in case the user wants to override them
[13:11:06] (or have two different sets of them)
[13:13:01] yeah we need an override mechanism, but it shouldn't be hard
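(One possible shape for that override mechanism, as a rough sketch: a tool-specific prefix such as TOOL_OS_* winning over the standard OS_* variables. The prefix and variable names are purely illustrative, not anything Toolforge actually provides; openstacksdk can already build a connection from plain OS_* variables on its own, this only adds the override layer.)

```python
import os
import openstack

def connect_from_env(override_prefix="TOOL_OS_", default_prefix="OS_"):
    """Connect to OpenStack from environment variables.

    Hypothetical sketch: a value from the override prefix (e.g. TOOL_OS_PASSWORD)
    wins over the matching standard OS_* variable, so a user can supply their
    own set of credentials without touching the platform-provided ones.
    """
    def var(name, fallback=None):
        return os.environ.get(override_prefix + name,
                              os.environ.get(default_prefix + name, fallback))

    return openstack.connect(
        auth_url=var("AUTH_URL"),
        username=var("USERNAME"),
        password=var("PASSWORD"),
        project_name=var("PROJECT_NAME"),
        region_name=var("REGION_NAME"),
        user_domain_name=var("USER_DOMAIN_NAME", "Default"),
        project_domain_name=var("PROJECT_DOMAIN_NAME", "Default"),
    )

# Expects the variables above to be set in the environment.
conn = connect_from_env()
```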
[13:13:27] * dhinus late lunch
[13:28:58] I have prepared a patch for https://phabricator.wikimedia.org/T386921 --> https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-deploy/-/merge_requests/673
[13:29:07] but I want to comment
[13:29:15] 1) I would need help with testing it
[13:29:32] 2) I'm really confused as to why this wasn't done when PSP was deleted
[13:39:20] any specifics on what help you need testing it?
[13:41:34] I need to relocate -- working from a public library that is now closing
[13:41:45] testing would be
[13:42:03] 1) making sure existing tools won't break (so, mounting the mentioned volumes)
[13:42:12] 2) making sure we block other unwanted volumes
[13:42:23] * arturo be back later
[13:46:07] quick review when someone has a minute in-between-tasks (not urgent!) https://gerrit.wikimedia.org/r/c/operations/puppet/+/1121364
[15:25:07] thanks! second part of that: https://gitlab.wikimedia.org/repos/cloud/toolforge/alerts/-/merge_requests/28 (and the one that closes the associated task)
[15:28:28] +1'd
[15:31:04] thanks!
[15:50:23] on my laptop, I can reproduce a lima-kilo failure in which I cannot bootstrap the kind cluster if using the disk cache option
[15:52:51] did you reuse an old disk, or a new one?
[15:53:49] I'm using an old one for the last few weeks (created anew before that)
[15:56:42] I have destroyed and recreated my lima-kilo vm several times this week, and it always worked (with no special options)
[15:57:03] but I noticed a message I didn't see before when I //destroy// the VM:
[15:57:27] WARN[0000] Failed to unlock disk "cache". To use, run `limactl disk unlock cache`
[15:58:04] then I see it gets mounted anyway when I recreate the VM: INFO[0000] [hostagent] Mounting disk "cache" on "/mnt/lima-cache"
[15:58:06] I will manually delete the disk, in case somehow it contains wrong config
[15:58:34] also try updating lima, I think they changed something because I had to install an extra package after the latest upgrade
[15:58:46] oh yes, update your limactl
[15:59:00] (it should have complained if you had something under 1.0 iirc)
[15:59:17] I have 1.0.4
[16:00:05] I upgraded to 1.0.6 earlier this week
[16:00:06] I have 1.0.3, should be ok
[16:01:13] ok
[17:06:44] * arturo offline
[18:32:11] * dcaro off
[18:32:13] cya on monday!