[02:18:38] is something wrong with the grid?
[02:18:49] job has been stuck in qw since 1:00 UTC
[02:19:48] https://phabricator.wikimedia.org/P47169
[02:21:15] I guess this is the world subtly telling me to move to k8s
[02:23:04] The grid is dead, long live k8s!
[02:23:09] "is something wrong with the grid?" the better question is "when is there not"
[02:24:48] !log tools.dbreps Disabled cronjob, switched to toolforge-jobs
[02:24:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.dbreps/SAL
[03:54:19] T335009
[03:54:19] T335009: Toolforge grid seems overloaded - https://phabricator.wikimedia.org/T335009
[15:14:39] !log tools.panoviewer `webservice restart` T335039
[15:14:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.panoviewer/SAL
[15:14:44] T335039: Panoviewer is down (504 Gateway timeout) - https://phabricator.wikimedia.org/T335039
[16:33:26] hello! possibly dumb question: I have a 2nd cinder volume on the `wikiwho` VPS project. Can I mount it to the same directory as the other cinder volume? (the subdirectories will be different)
[16:35:29] musikanimal: no, you can only mount one volume at a single mountpoint. you could, however, use symlinks to get something that (maybe) feels like what you want
[16:36:07] ah, yeah symlinks should work! ok thanks
[16:37:06] the code I think (currently) needs it to be in the same directory, so symlink it is
[16:37:07] and just to be sure... `sudo wmcs-prepare-cinder-volume` isn't going to touch that 1st volume without asking me first, right? it would be catastrophic if I accidentally reformatted it
[17:01:00] musikanimal: it won't format unless the volume is unformatted. That said, I can take a snapshot for you beforehand if you want!
[17:01:26] We don't have space for persistent snaps but I can take one and hold it just until you feel secure :)
[17:01:40] that'd be great! just out of an abundance of caution. It's, as you know, ~5 TB of data that took many weeks to build
[17:01:57] project + volume name?
[17:02:11] wikiwho / pickle_storage
[17:05:08] ok, I made a snap named 'safetynet1'
[17:05:16] lmk when I can delete it again :)
[17:08:38] thanks!
[17:08:54] in a meeting so will do the mount in about 45 mins, if that's okay
[17:09:30] yep
[18:00:12] currently mounting :) this could take a while! it indeed seemed to pick only the new volume, so you can probably delete the snapshot andrewbogott. Thank you!
[18:00:56] err, I guess it's currently reformatting the volume, not mounting
[18:22:25] heh, I was wondering why mounting should take a while ^^
[18:29:32] I probably should have used screen for this. Looks like it will be a few hours to reformat this 5 TB volume!
[18:46:29] yep, those volumes have IO throttles, which aren't noticeable during normal activities but formatting takes forever
[19:31:28] !log tools.poty-stuff Updated from 7927bcc to 6c7a90c
[19:31:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.poty-stuff/SAL
[19:41:33] musikanimal: still OK with me deleting your snap?
[19:42:34] sure! should be fine. Thanks again :)
[19:43:19] np, I'm glad that the script did what I was expecting :)
[19:43:52] hehe, me too!
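A minimal sketch of the symlink approach suggested at 16:35 above, assuming the two cinder volumes are mounted at `/srv/volume1` and `/srv/volume2` and the application expects both datasets under `/srv/data` (all three paths are hypothetical, not taken from the log):

```bash
# Two cinder volumes cannot share a single mountpoint, so each keeps
# its own mountpoint (paths are hypothetical placeholders):
sudo mkdir -p /srv/data

# Link each volume's distinct subdirectory into the shared parent
# directory that the application reads from:
sudo ln -s /srv/volume1/subdir_a /srv/data/subdir_a
sudo ln -s /srv/volume2/subdir_b /srv/data/subdir_b
```

To the application, `/srv/data` then looks like one directory tree, even though the data lives on two separate volumes.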
[21:25:02] it appears I lost my session. I see `wmcs-prepare-cinder-volume` is still running. I'm hoping there wasn't any further interaction that the script needed from me? I already entered the directory where to mount it
[21:32:45] eek! okay the process isn't running anymore, and I don't see the mount directory :(
[21:34:17] I think it still had maybe an hour or two left before the reformatting was done. I'm guessing that broke
[21:43:53] musikanimal: you should be able to run wmcs-prepare-cinder-volume again. I'm reading the Python code and it looks to me like it will format the device again even if it is already formatted. Alternatively, you could read the script's "mount_volume" method and try mounting manually if the volume is now formatted and you don't want to wait for that to happen again.
[21:49:50] I just ran it and it appears it must have failed when formatting, so trying again
[21:54:16] hopefully in a screen/tmux this time :)
[21:58:55] indeed!
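A rough sketch of the workflow discussed above: running the long reformat inside screen so a dropped SSH session can't kill it, and the manual-mount fallback mentioned at 21:43. The device name `/dev/sdb` and mountpoint `/mnt/pickle_storage` are hypothetical, and this is not the script's actual mount_volume code:

```bash
# Keep the long-running format alive across SSH disconnects
# (tmux works the same way):
screen -S cinder                  # detach with Ctrl-a d
sudo wmcs-prepare-cinder-volume
# reattach later with: screen -r cinder

# If the volume turns out to be formatted already, it can be
# mounted by hand instead (device and mountpoint are hypothetical):
lsblk                             # find the volume's device name
sudo blkid /dev/sdb               # a TYPE= entry means a filesystem exists
sudo mkdir -p /mnt/pickle_storage
sudo mount /dev/sdb /mnt/pickle_storage
```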