[11:51:36] !log tools remove --feature-gates=TTLAfterFinished=true from kube-controller-manager static pod definition (T349197)
[11:51:40] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:51:40] T349197: [infra] Remove TTLAfterFinished from config before upgrade to 1.25 - https://phabricator.wikimedia.org/T349197
[11:53:43] !log tools cleanup kubeadm configmap from TTLAfterFinished settings (T349197)
[11:53:46] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[11:54:52] !log toolsbeta cleanup extra redundant cert-signing settings from controller-manager arguments
[11:54:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[16:52:27] !log bsadowski1@tools-bastion-13 tools.stewardbots Restarted StewardBot/SULWatcher because of a connection loss
[16:52:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[16:53:28] !log bsadowski1@tools-bastion-13 tools.stewardbots Restarted StewardBot/StewardBot because of a connection loss
[16:53:31] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[17:00:36] !log draining (I hope) tools-elastic-3 and tools-elastic-1 for T311905
[17:00:36] andrewbogott: Unknown project "draining"
[17:00:36] T311905: Upgrade Toolforge (Elastic|Open)Search cluster to Debian Bullseye - https://phabricator.wikimedia.org/T311905
[17:16:40] !log tools draining (I hope) tools-elastic-3 and tools-elastic-1 for T311905
[17:16:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[17:16:44] T311905: Upgrade Toolforge (Elastic|Open)Search cluster to Debian Bullseye - https://phabricator.wikimedia.org/T311905
[18:02:18] i'm trying to set up a new server for the Programs & Events Dashboard database, and I'm at the step of moving the storage volume to the new server.
[18:02:29] I'm not clear on what I should do, based on https://wikitech.wikimedia.org/wiki/Help:Adding_disk_space_to_Cloud_VPS_instances
[18:03:13] the instructions there are for a new volume, but there are no explicit instructions for an already-prepared volume with data on it
[18:04:55] ragesoss: I think it would just be in Horizon. click the 'Manage Attachments' action in Horizon, detach it from the old instance and attach it to the new instance
[18:05:29] mutante: i've done that, but it's not mounted
[18:05:47] it's attached and I see the volume in `lsblk`
[18:06:51] i could `mount /dev/sdb /srv` i think, but i guess that would not survive a reboot? so fstab probably needs to be edited?
[18:07:52] yea, that sounds right to me. it would not survive a reboot unless fstab is edited
[18:07:57] can you check fstab on the old instance?
[18:08:24] yes
[18:08:28] UUID=45f84e82-27fa-4a9b-ab8c-11c767c4aa8f / ext4 discard,defaults 1 1
[18:08:29] UUID=4e5640ab-a727-4614-b672-cdde189f9588 /srv ext4 discard,nofail,x-systemd.device-timeout=2s 0 2
[18:09:09] I wonder if "wmcs-prepare-cinder-volume" has any commandline options
[18:09:15] like to just let it do the mount part
[18:09:20] but not the formatting part before
[18:09:36] yes, that's the kind of thing i was looking for
[18:10:02] ah, see this:
[18:10:04] "Alternatively, instead of running sudo wmcs-prepare-cinder-volume, you can add the profile::labs::cindermount::srv Puppet profile to the instance or a relevant prefix."
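A minimal sketch of the manual route discussed above (mount by hand, then persist it in fstab). It assumes the volume shows up as /dev/sdb, as it did in this conversation, and reuses the /srv entry quoted from the old instance's fstab; confirm the UUID with blkid on the new instance before copying anything:

    # confirm the attached volume and its filesystem UUID
    sudo lsblk -o NAME,FSTYPE,UUID /dev/sdb
    sudo blkid /dev/sdb

    # mount it once by hand
    sudo mount /dev/sdb /srv

    # persist the mount across reboots by reusing the old instance's fstab entry
    echo 'UUID=4e5640ab-a727-4614-b672-cdde189f9588 /srv ext4 discard,nofail,x-systemd.device-timeout=2s 0 2' | sudo tee -a /etc/fstab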
" [18:10:22] maybe you just need to add this to the Instance via the Hiera tab in Horizon [18:10:25] and reboot [18:11:12] andrewbogott: ^ is this reasoanble to just mount an existing cinder volume? [18:12:31] prepare-cinder-volume should work for an already filled volume. It should detect that things are already formatted and skip that part. [18:12:40] The puppet role is likely to work as well although I haven't used it in a while. [18:12:58] If it prompts you to format then ctrl-c but it shouldn't :) [18:13:34] how sure are you that this won't nuke the data without confirmation? [18:13:57] pretty sure! But it's also a very simple python script so you can check for yourself, or dismantle it for parts. [18:14:54] I was just looking at the source code of wmcs-prepare-cinder-volume. at the bottom there is "format_volume(args)" and next line "mount_volume(args)" [18:15:05] so I was thinking you could just comment that formatting line [18:15:13] but if it autodetects, better :) [18:15:39] you can comment out that line for extra security :) [18:15:42] * andrewbogott checks the script [18:15:55] https://gerrit.wikimedia.org/g/operations/puppet/+/3b447918f861e11829c602493ec16fc598ab0ebf/modules/cinderutils/files/wmcs-prepare-cinder-volume.py [18:16:19] 237 and 239 [18:17:02] would be nice to add a --mount-only argument maybe [18:17:13] so it runs /bin/lsblk and then checks for fstype [18:17:17] notice there is already --mountpoint if you wanted to change that [18:21:43] okay, i've commented out the subprocess call in the format_volume method, so it if it tries to call that i'll still see the output but it shouldn't actually do it... and hopefully it won't even try because it detects that it's already formatted. [18:21:59] sounds good [18:21:59] default mountpoint of /srv is what i want, so that's all good. [18:22:52] success [18:22:55] thank you! [18:23:33] to recap, the script handles the existing volume just fine, and neither attempts to format it nor prompts for anything about formatting. [18:24:01] that's good :) [18:24:13] the doc page says "This tool is only useful for newly-created block devices. It can not be used to reattach formatted volumes or move volumes between instances." [18:24:21] https://wikitech.wikimedia.org/wiki/Help:Adding_disk_space_to_Cloud_VPS_instances#wmcs-prepare-cinder-volume [19:47:11] !log tools.stashbot !log function check [19:47:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stashbot/SAL [20:16:32] !log video Fixed puppet configuration on video-redis-buster, logrotate configuration on all encoding instances as per T365154 [20:16:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Video/SAL [20:16:36] T365154: video2commons general failure - https://phabricator.wikimedia.org/T365154 [20:18:39] !log video Creating video-redis-bookworm instance as per T360711 [20:18:41] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Video/SAL [20:18:42] T360711: Replace or remove Debian Buster VMs in 'video' cloud-vps project - https://phabricator.wikimedia.org/T360711 [20:20:38] don-vip: I am very happy to see you got the ability to help with video2commons! Thank you for caring about that project and making the effort to help. :) [20:23:12] bd808: thanks! 
[20:24:50] there were at least two major issues: logrotate that filled the instance disks => fixed. The other one is a change of behaviour in pywikibot that causes video2commons to serialize the whole binary contents of videos into the redis instance in case of upload error. I notified the pywikibot maintainer, I hope he will allow me to configure this behaviour
[20:25:59] now I'm working on the buster=>bookworm migration; I understood that the video-nfs-1 instance is managed by the wmcs team, correct?
[20:26:39] eh... andrewbogott probably built it originally, but I'm not sure if he wants to own it forever. It's a good question though. :)
[20:28:11] I built it and don't mind owning it. I'm driving now but if you write me a ticket I can work on it soon
[20:28:11] it's a bullseye instance so you don't need to worry about an immediate replacement
[20:30:17] don-vip: if things would be easier for you with some more CPU+RAM while you are doing instance replacement please do file a quota increase ticket. It looks like the project has very limited room for growth at the moment.
[20:31:11] for now it's ok :) I wrote a plan that allows me to work as I want on the migration without requiring a quota upgrade :)
[21:00:57] Thank you don-vip!
[22:38:02] I forgot to say "Happy 10th anniversary of the Toolserver shutdown!" yesterday. For those of you keeping score, Toolserver has now been gone longer than it existed (2005-2014).
[22:49:29] heh, by my reckoning the tipover point was *looks* last year already https://wikis.world/@LucasWerkmeister/110292832794335343
[22:50:13] either way, happy anniversary to toolforge¹ as everyone’s #1 preferred wikimedia tool hosting platform \o/
[22:50:27] (¹ I think the toolforge name itself is younger than that but who’s counting ^^)
[22:55:36] actually, while I think most people say toolforge these days, when I come across an old term it’s more likely to be toolserver than tool labs (at least that’s my impression)
[22:55:46] I guess the toolserver name was around for longer than labs
[23:02:35] toolserver (2005-2014), tool labs (2013-2017), toolforge (2017-??)
[23:04:11] toolforge.org (2020-??)
[23:05:44] https://wikitech.wikimedia.org/wiki/User:Majavah/History_of_Toolforge has citations on a lot of these dates.
[23:07:05] damn, I didn’t realize k8s had been selected as the grid replacement since before I was developing tools
[23:07:28] grid engine shutdown (2015-2024)… just shy of ten years there, then
[23:10:13] yeah. there are #reasons, but the easiest is that the original replacement idea was basically "recreate grid engine in a container" and when that was proven to be too hard we took a detour to make the OpenStack deployment much better (the 2017-2020 era)
[23:11:28] that OpenStack work made things much, much more stable and flexible, but also kept us from investing a lot into augmenting Kubernetes
[23:12:16] Wasn't the handwriting on the wall already by 2015 that grid engine wasn't a major player anymore?
[23:12:26] roy649: oh yes
[23:12:44] it was known in 2013 that it was dead tech
[23:13:34] I think Toolforge also switched grid engine implementation at least once, from Sun Grid Engine to Son of Grid Engine?
[23:14:02] #reasons again meant that we wanted to match things to the legacy toolserver stack when tool labs was built to replace it
[23:14:37] Who needs modern tech when you've got a really cool punny product name :-)
[23:15:55] yes, the original grid was "Sun Grid Engine" packaged by Ubuntu and then we switched to "Son of Grid Engine" when we moved to Debian in 2019
[23:16:07] https://wikitech.wikimedia.org/wiki/News/Toolforge_Trusty_deprecation
[23:16:43] "Son of Grid Engine" is a fork of "Sun Grid Engine" that is kind of limping along still
[23:18:07] There is also a "Some Grid Engine" fork of "Son of Grid Engine" that we never used
[23:19:35] I _think_ the "Son of Grid Engine" fork happened when Oracle acquired SGE
[23:20:19] I certainly feel the pain of the people who were forced to migrate off SGE against their will, but overall I think we'd do better to play more in the "move fast and break things" end of the spectrum than we do now.
[23:20:54] easy to say when the pitchforks are not pointed directly at your head :)
[23:22:14] I was really proud of how we all collectively dealt with the final shutdown push. It was scary and a lot, but folks mostly dug in and moved their projects.
[23:25:43] You know what you get if you try to integrate all the grid engine implementations out there?
[23:26:16] the sum of grid engines!
[23:26:56] Well, I was going to say Sum of Grid Engine, but yeah, you get full marks
[23:27:03] yay :D
[23:27:23] I was trying to figure out “alright this is clearly a setup, where is this going” ^^
[23:27:30] full marks to you for thinking of it first, of course :)
[23:33:36] I was a big Sun fanboi at one time. They had a good run, did a lot of really cool stuff, and really changed the world in many ways.
[23:34:38] I had Sun hardware as my daily driver desktop from 1999-2006 and loved it
[23:34:43] NFS was revolutionary for the time.
[23:35:01] Java, as much as I loathe working in it, totally changed the world.
[23:37:46] NeWS was awesome in its own way. It's kind of sad that X won that war.
[23:55:03] Wow, on the subject of Java, I just saw that James Gosling retired yesterday!
[23:55:09] https://www.linkedin.com/feed/update/urn:li:activity:7213740307538956289/