[09:29:54] random thought: if I'm going to replace all of the tools k8s workers in any case, should I name the new ones to indicate that they have NFS access (tools-k8s-worker-nfs-NN)? non-NFS nodes are something we will likely have at some point in the future, and having that information in the name is the easiest way to do it, as NFS mounts are managed in hiera
[10:38:56] yep, I think that might be the easiest, though we might be relying on the names somewhere (cookbooks, puppet, ...)
[15:45:23] Is someone else already on top of the clouddb-services-puppetmaster-01 puppet cert thing, or should I have a look? I'm thinking there's at most one VM using that puppetmaster
[15:52:27] oops, I see that d.caro already mentioned that in his checkin. I will ignore! For a moment I thought we could maybe do away with that puppetmaster entirely, but it seems not
[16:50:33] andrewbogott: yep, there's still some usage. If you know of a procedure to follow (docs or cookbook), please put it in the task; part of it is to figure out how to do it and document it
[16:51:27] I do not. Since it only has one client it's easy to just rebuild and reset everything, but that's not the actual correct fix
[16:51:53] I bet jbond knows how to do it correctly :'(
[16:52:28] the issue is that we will need it for the other puppetmasters eventually too
[16:53:05] yeah, that's why 'just rebuild everything' isn't ideal
[16:54:19] unrelated: dcaro, you might've just seen an alert fire for a leaked dns record. This is a pretty classic example of when that happens -- restarting rabbitmq caused the process of building a VM in admin-monitoring to break halfway through, which likely left dns in a slightly inconsistent state. I'm doing the cleanup now.
[16:55:14] ack
[17:06:23] * dcaro off
[17:26:43] hm, it didn't really leak a dns record, the test itself was interrupted by the api reset
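
Note on the first exchange: the idea is that prefix-based hiera can key NFS configuration on the instance name, so an "-nfs" pool and a non-NFS pool diverge only in their hiera data. A minimal sketch of what such prefix data could look like follows; the key names and mount path (mount_nfs, nfs_mounts, /mnt/nfs/tools-project) are assumptions for illustration, not the actual keys used in the tools project.

    # Hypothetical prefix-based hiera sketch: the instance name prefix selects
    # which data applies, so NFS access is toggled per worker pool.
    # Key names and paths are illustrative assumptions, not the real hiera keys.

    # prefix: tools-k8s-worker-nfs
    mount_nfs: true
    nfs_mounts:
      - /mnt/nfs/tools-project   # assumed mount point, for illustration only
    ---
    # prefix: tools-k8s-worker (future non-NFS pool)
    mount_nfs: false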