[10:28:54] Krinkle: ping re: https://phabricator.wikimedia.org/T282761#7131405 (if you're too busy at the moment, that's fine, just wanted to make sure you didn't miss it)
[10:29:39] hi all, I just created a small docker helper function for testing our docker images locally, may be useful to others https://phabricator.wikimedia.org/P16305
[10:29:49] (bash helper function)
[10:38:26] <_joe_> jbond: heh I have the same function with a different name
[10:41:12] :D not surprised, suspect you use it much more than I do :)
[10:47:49] how many of you have the helper "ssh to this container"? I've got like 5 or 6 versions of that one :-P
[10:53:47] apergos: I have this mess https://github.com/b4ldr/profile/blob/master/.zshrc#L172-L205
[10:55:23] oh, you /bin/bash into it
[10:55:30] yes
[10:55:46] yeah I have ssh running on most of mine and I can therefore scp crap to/from it easily
[10:59:12] <_joe_> apergos: for kubernetes, we have https://github.com/lavagetto/k8sh
[10:59:29] <_joe_> still not supporting minikube well, I have to work on it
[11:04:53] I swear half of github must be full of little scripts like this now (workaround for docker, workaround for kube, make this happen in containers, etc.)
[11:06:18] oh wow, you use type hints (sometimes)
[11:47:46] phab down?
[11:47:57] ah no, it just took ages
[12:54:56] marostegui: your hopes dashed again
[13:00:54] :(
[13:11:04] kormat: wouldn't phab going down cause more work?
[13:12:30] RhinosF1: if it was the phab db that died, yeah, that's true
[13:13:02] kormat: there's no way SREs can escape
[13:15:22] * kormat hides
[13:18:06] Your job is to not be seen by most!
[13:39:54] kormat: it's running now, tee'd to /home/krinkle/purge_parsercache_now_pc1008.log
[13:40:00] Krinkle: 👍
[13:40:07] kormat: I was on a virtual offsite this week, it's been very hectic
[13:40:12] in terms of timing
[13:40:16] no problem :)
[13:40:18] kormat: we have one more after this, right?
[13:40:25] yep, pc3 is left
[13:40:27] k
[13:54:07] kormat: _joe_: are you around for the next 2 hours or so? I forgot to deploy the DiscussionTools parser cache expiry time reduction patch, and we should probably do it today if we want to be able to learn whether it has the intended effect on slowing down growth (meanwhile we're already halfway through the purging).
[13:54:23] it's a minor patch but I want to make sure there are people around just in case
[13:54:42] Krinkle: we're just about to start the last day of the Percona MySQL operations training, sorry
[13:54:58] so I'm going to be unavailable for the next 4h (and then unavailable due to end of day)
[14:13:37] <_joe_> Krinkle: go on
[14:14:59] ack
[18:44:22] kormat: do you know things about bacula?
[18:44:29] (oh you're probably off for the weekend, nm)
[18:56:00] andrewbogott: do you need a restore or something urgent?
[18:56:23] mutante: it's not urgent. My question is more along the lines of "is this really getting backed up? and if so, where?"
[18:56:35] I know how to do a restore but I don't actually want to do that, just be reassured.
[18:56:49] I can take a look at the bacula console if you like. what is the host name?
[19:04:10] (checking is like pretending to restore but then stopping at the last step)
[19:09:54] mutante: cloudcontrol1003.wikimedia.org
[19:10:07] I think we are finally convinced that it is actually backing up, although possibly to the wrong pool
[19:10:28] My main question is/was "are these backups on another host, or is cloudcontrol1003 just backing up to itself?"
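[editor's note] A rough sketch of the check mutante describes above ("pretending to restore but stopping at the last step"), assuming a standard bconsole setup on the backup director. The "-fd" client name follows the usual Bacula naming convention and has not been verified against the actual WMF configuration; <N> stands for a jobid taken from the first listing.

    $ sudo bconsole
    * list jobs client=cloudcontrol1003.wikimedia.org-fd
      (confirms recent backup jobs exist for this client)
    * list files jobid=<N>
      (spot-checks what a given job actually captured, e.g. /srv/backups)
    * restore client=cloudcontrol1003.wikimedia.org-fd
      (walk the file-selection menu, then answer "no" at the final "OK to run?" prompt)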
[19:12:38] andrewbogott: I can confirm bacula has backups from cloudcontrol1003, the path it backs up is /srv/backups and there are files in it like .trove_eqiad1-202106040407.sql.gz with today's date
[19:12:49] these backups are on backup1001
[19:12:57] mutante: that's great, thank you for checking
[19:13:09] so at least I can rest easy even if I don't love the process
[19:13:11] they would be restored to whatever host someone selects in the dialog for a restore
[19:13:18] yw
[22:56:59] mutante: are you still hanging around here?
[23:51:30] Hmm. It doesn't look like I have much time to restore these files. Is there anybody here who knows how to use the bacula server? I have succeeded in discovering that the files I need are on the director, but I cannot restore them without doing fun dances with creating configs and possibly messing with encryption keys with puppet off. This is way more potentially harmful than I'm prepared for on a Friday evening without help.
[23:52:07] It seems urgent to me because it looks like the retention is 30 days, and my files are from May 6th :(
[23:52:23] I'm not even certain I'm right about the retention
[23:55:03] If anybody wants to at least be around if I crash the backup server or something this weekend, please ping me. I just don't want to risk the backup server to get my files; though I could *probably* figure out how to do this, I'm not 100% sure that what I see here actually matches the docs.
[23:56:44] I want to restore cloudmetrics1002's files to anywhere under /tmp on cloudmetrics1001 before they aren't there anymore. cloudmetrics1002 has been down due to hardware failure long enough that it is no longer in puppet (as far as I can tell).
[23:57:06] I'll check in tomorrow and see if someone is around.
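[editor's note] For completeness, a rough sketch of the restore being described here (cloudmetrics1002's files restored under /tmp on cloudmetrics1001), again assuming a standard bconsole setup. The "-fd" client names, the restoreclient=/where= keywords, and the single-command form are illustrative only, and none of this addresses the puppet/encryption-key caveats raised above.

    $ sudo bconsole
    * restore client=cloudmetrics1002-fd restoreclient=cloudmetrics1001-fd where=/tmp/cloudmetrics1002-restore select current
      (mark the needed files in the selection tree, type "done", check that the
       Client and Where fields in the job summary are right, and only then
       answer "yes" at the "OK to run?" prompt)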