[00:26:30] !log toolsbeta deleted toolsbeta-sgeexec-0902 since it had a badly screwed up /tmp
[00:26:33] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[03:08:33] !log admin stopping maintain-dbusers on labstore1004 to help diagnose T290630
[03:08:37] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[03:08:38] T290630: clouddb1017 replag alerts - https://phabricator.wikimedia.org/T290630
[03:15:45] !log admin resetting swap on clouddb1017 T290630
[03:15:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[03:15:49] T290630: clouddb1017 replag alerts - https://phabricator.wikimedia.org/T290630
[14:12:23] 4qb checking in! Just getting my name out there!
[16:19:41] !log toolsbeta 70017ec0ac root@toolsbeta-test-k8s-control-4:~# kubectl apply -f /etc/kubernetes/psp/base-pod-security-policies.yaml
[16:19:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[16:20:01] !log tools 70017ec0ac root@tools-k8s-control-3:~# kubectl apply -f /etc/kubernetes/psp/base-pod-security-policies.yaml
[16:20:04] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[16:50:06] !log toolsbeta enable unattended updates on toolsbeta T290494
[16:50:09] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[16:50:09] T290494: Revisit Toolforge automated package updates and version pinnings - https://phabricator.wikimedia.org/T290494
[17:36:12] !log toolsbeta deploying a base tekton triggers setup T267374
[17:36:15] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[17:36:16] T267374: Set up a Toolforge buildpack CI pipeline as a POC - https://phabricator.wikimedia.org/T267374
[20:13:34] Krinkle: on your patch there, if the facts include the mountpoint, the mount will be defined. I wonder if refreshing the puppet facts will help?
[20:15:25] bstorm: hm.. I'm missing a bit of context to understand, but... I think this means there is a copy of puppet facts somewhere in the CI job?
[20:15:37] I wasn't aware that was the case
[20:15:46] Yes, for the puppet compiler
[20:16:26] oh wow, we're running that automatically on CI now?
[20:16:28] hmm, but this is an rspec that fails
[20:16:32] Not the puppet compiler
[20:16:40] So basically, this is doing a compile with incomplete facts
[20:16:55] so it will definitely fail unless that mount is defined unconditionally...ugh
[20:17:01] I did find that a rather compile-like error indeed, which I'm not used to getting on CI.
[20:17:26] Yeah, that's because there's an rspec test defined that looks to compile without error.
[20:17:44] But *that* compile does not have the facts, unlike PCC
[20:17:48] but yeah, my very limited puppet experience did teach me that it seems generally good for resources to be defined unconditionally one way or the other.
[20:18:49] Well, I might suggest that the mount dependency isn't 100% needed in this version of puppet. The order matters and is deterministic. You could also make it depend on the cinderutils::ensure resource instead...but yeah
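
(Editor's note: a minimal sketch of the pattern being discussed, using illustrative resource names rather than the actual manifest. When the mount is only declared if a fact is present, a `require => Mount['/srv']` elsewhere cannot be resolved in an rspec compile that lacks that fact.)

# Hypothetical illustration of the conditional-mount pattern (assumed
# names, not the real profile). If the rspec run's facts lack a '/srv'
# mountpoint, Mount['/srv'] is never declared, so the require below
# points at a nonexistent resource and catalog compilation fails.
if $facts['mountpoints'] and '/srv' in $facts['mountpoints'] {
  mount { '/srv':
    ensure => mounted,
    device => '/dev/sdb',  # placeholder device
    fstype => 'ext4',
  }
}

file { '/srv/example':
  ensure  => directory,
  require => Mount['/srv'],  # unresolvable when the fact is missing
}
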
[20:18:54] if the else branch makes the mount happen by some other means, maybe there is a way we can tell puppet in that manifest that it should consider it existing without actually defining a mount in the traditional way, e.g. define Mount[/srv] in a way that makes the shell script responsible for mounting instead of asking puppet to do it
[20:19:24] but still fulfill the dependency
[20:20:17] It makes sense as a dependency...
[20:20:21] hrm.
[20:21:31] This is ultimately just to make rspec happy.
[20:21:51] Puppet will be happy if the cindermount did its work and will (correctly) fail if not
[20:29:41] Krinkle: I have a suggestion. I think removing the require => Mount statement makes sense here. `require ::profile::labs::cindermount::srv` makes that a dependency for everything else already https://puppet.com/docs/puppet/7/lang_relationships.html#lang_rel_require
[20:30:16] If the cindermount is broken, it won't mount, so there's that, but hopefully it won't fail silently?
[20:30:29] I know we just made that happen, but we did fix that issue, I believe
[20:31:07] It's a bit fluffier than the original version, since the original probably had a non-conditional mount definition
[20:31:37] So I'm not sure I'm entirely thrilled with it, but I think it does mostly meet the standard of doing what the original did
[20:43:06] bstorm: the main worry I think is that if it fails, it might end up continuing but in a way that puts things on the local disk
[20:43:12] since the directory does exist by default
[20:43:21] It should not be able to
[20:43:42] ah, you mean the cindermount class would fail in a way that makes it not proceed?
[20:43:52] Yes
[20:44:12] If it has no volume to prepare and no mountpoint set, it should fail (it expressly does at that point)
[20:44:52] So it's a bit fluffier and less obvious, but it should still fail if there's nothing to mount at /srv and if there's no volume to set up and mount there
[20:46:26] If you can make that not happen, then we need to fix that cinderutils::ensure resource, because that's how it is supposed to work, at least :)
[20:50:38] * bstorm now wonders what rspec will do with the rest of that class....
[22:03:18] !log admin restarted the prometheus-mysqld-exporter@s1 service as it was not working T290630
[22:03:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[22:03:22] T290630: clouddb1017 wmf-pt-kill dying, unauthed connections and other mysteries. - https://phabricator.wikimedia.org/T290630
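
(Editor's note: a sketch of the fix suggested at 20:29:41, again with an assumed consuming class name. A class-level require on the cindermount profile replaces the per-resource require => Mount['/srv'], so the rspec compile no longer needs the mount fact to resolve the dependency.)

# Sketch of the suggested change (profile::example::srv_consumer is an
# illustrative name). The class-level require makes the entire
# cindermount profile a dependency of every resource declared in this
# class, per the linked Puppet relationships docs.
class profile::example::srv_consumer {
  require ::profile::labs::cindermount::srv

  file { '/srv/example':
    ensure => directory,
    # No require => Mount['/srv'] needed: ordering comes from the
    # class-level require above, and if the cindermount profile fails,
    # Puppet skips this resource instead of writing to the local disk.
  }
}
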