[00:44:20] * bd808 off
[08:50:46] morning!
[08:52:30] can I get a +1 here? T357694
[08:52:31] T357694: request temporary quota increase for project iiab - https://phabricator.wikimedia.org/T357694
[08:59:32] morning
[09:00:12] blancadesal: done
[09:00:23] thanks!
[09:19:45] data point: I moved a project from flake8/black/isort to ruff last weekend. I removed all config associated with the previous tools and didn't configure ruff in any specific way. It passed on the first run without reformatting the code. In terms of speed, there was no noticeable difference. The codebase is very small and most of the time is spent on setting up the tox venvs and running the unit tests anyway.
[09:19:45] https://github.com/mediawiki-utilities/python-mwsql
[09:24:59] good morning
[09:25:11] nice, ruff is also available on pip, right? (so easy to handle with both pre-commit/tox)
[09:25:38] dhinus: do you know if cloudcumin servers can also run sre cookbooks? I would like to create a wmcs cookbook that wraps the sre.reboot one
[09:26:11] dcaro: yes! https://pypi.org/project/ruff/
[09:26:15] nice
[09:26:47] arturo: to some extent, you can't tweak dhcp and such (ex. provision), nor downtime icinga
[09:26:50] (for example)
[09:26:57] some others work ok
[09:27:29] no, the sre cookbooks repo is not cloned on the cloudcumin servers
[09:27:41] mmm, the cookbook I would like to wrap is the sre.reimage one
[09:28:48] ok, then I think I will create wmcs cloudvirt-pre-reimage and post-reimage cookbooks
[09:35:46] hmm, I thought we had them there; then for reboots and such we are using spicerack libs directly?
[09:36:06] the idea with cumins and cloudcumins is that right now they have two completely separate sets of cookbooks, but eventually we could create a "shared" set that is installed in both
[09:37:14] and yes, you can already use spicerack libs, but not sre.* cookbooks
[09:38:02] yep, for rebooting and such we use spicerack directly, bypassing the cookbooks
[09:40:58] hmm, I faintly remember having to do so to be able to change the downtiming logic or something like that
[09:58:19] hmm, the admin gets stuck when listing tools and stops working
[11:05:13] it seems to be getting stuck contacting hay.toolforge.org
[11:14:42] * dcaro lunch, will take a few hours after too
[13:59:13] taavi: thanks for the review, will send another update later
[14:25:06] bd808 linked to this wiki a few hours ago and I thought it could be improved, so I rewrote it: https://wikitech.wikimedia.org/wiki/Help:Toolforge/Quickstart#Set_up_an_SSH_client_and_a_key
[14:25:31] it's something that many toolforge users will refer to, so I'd appreciate a review of my changes
[14:37:20] maybe the change "To do so, insert your public key into the 'New SSH Key' field and click 'Add SSH key'." might be a bit clearer with the old "paste the contents", as "insert" is a bit more ambiguous
[14:38:04] Everything else looks better to me 👍
[14:38:07] dhinus: ^
[14:46:55] I was debugging a trove issue where the postgres WAL archive folder filled up. Turns out the trove docs say "That is going to be a problem if the WAL segment files in the archive folder keep increasing" about the archiving feature
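
For background on the message above: PostgreSQL's WAL archiving copies every finished segment into the archive directory and never deletes anything itself, so the directory grows until some external job prunes it (the trove cleanup process linked a few messages below is exactly that). A minimal sketch of such a periodic pruning job, with a hypothetical archive path and retention window, and explicitly not trove's actual implementation:

    import os
    import time

    # Hypothetical values: the real archive path and retention policy
    # depend on how trove configured the instance.
    ARCHIVE_DIR = "/var/lib/postgresql/wal_archive"
    MAX_AGE_SECONDS = 7 * 24 * 3600  # keep roughly one week of segments

    def prune_wal_archive(archive_dir: str, max_age: float) -> None:
        """Delete archived WAL segments older than max_age seconds."""
        cutoff = time.time() - max_age
        for name in sorted(os.listdir(archive_dir)):
            # Plain WAL segments have 24-character hex names; skip backup
            # labels, timeline history files, and anything half-copied.
            if len(name) != 24 or any(c not in "0123456789ABCDEF" for c in name):
                continue
            path = os.path.join(archive_dir, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)

    if __name__ == "__main__":
        prune_wal_archive(ARCHIVE_DIR, MAX_AGE_SECONDS)

An age-based cutoff is only illustrative: a safe cleanup must also keep every segment still needed by the newest base backup, which is what postgres ships pg_archivecleanup for.
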
[14:49:22] good news is that the database in T355138 is up again. bad news is that the postgresql config generated by trove makes zero sense to me
[14:49:22] T355138: Rescue DBapp trove instance in glamwikidashboard project - https://phabricator.wikimedia.org/T355138
[14:50:51] dcaro: thanks, modified
[14:54:26] taavi: iirc there was a process set up by trove themselves to clean up the archive periodically
[14:54:45] https://opendev.org/openstack/trove/commit/02971d850b57ac27a126ecb8ca4012f97ae856fd
[14:55:10] but it was not working (this was maybe a year ago)
[14:55:28] it needed upgrading or something
[14:56:09] clearly it was not working here, although the archiving itself being broken due to one half-copied file might explain that
[15:15:52] quick review? https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-deploy/-/merge_requests/197
[15:26:43] dcaro: +1'd
[15:26:51] thanks!
[16:01:10] * dcaro off
[16:01:18] have a good weekend
[16:28:32] o/
[18:13:54] dhinus: thank you for improving docs!
[18:41:24] * bd808 lunch
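
A footnote on the cookbook discussion from earlier in the day (09:25-09:40): a sketch of what "using spicerack directly, bypassing the cookbooks" can look like in a module-style cookbook. The remote().query()/reboot()/wait_reboot_since() calls are spicerack's documented remote API; everything else (the argument name, the surrounding logic) is an assumption, not WMCS's actual code:

    import argparse
    from datetime import datetime

    def argument_parser() -> argparse.ArgumentParser:
        parser = argparse.ArgumentParser(description="Reboot a host (sketch)")
        parser.add_argument("fqdn", help="FQDN of the host to reboot")
        return parser

    def run(args, spicerack):
        # Resolve the host through spicerack's cumin-backed Remote.
        host = spicerack.remote().query(args.fqdn)
        reboot_time = datetime.utcnow()
        host.reboot()
        # Block until the host's uptime shows it came back after reboot_time.
        host.wait_reboot_since(reboot_time)
        # A production WMCS cookbook would also downtime alerts and
        # drain/repool the host around the reboot (the "downtiming logic"
        # mentioned at 09:40:58).

Wrapping the spicerack calls in a cookbook rather than running them ad hoc keeps the standard cookbook logging around the operation, which is presumably part of why a "shared" cookbook set (09:36:06) is attractive.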