[09:21:20] !log tools.kiranbot2 marked tool for deletion per https://meta.wikimedia.org/w/index.php?title=User_talk:Majavah&diff=22882220&oldid=22711958
[09:21:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.kiranbot2/SAL
[09:21:27] !log tools.kiranbot3 marked tool for deletion per https://meta.wikimedia.org/w/index.php?title=User_talk:Majavah&diff=22882220&oldid=22711958
[09:21:28] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.kiranbot3/SAL
[09:38:00] !log admin restarting neutron-dhcp-agent on cloudnet1003 (T302369)
[09:38:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:38:05] T302369: cloudcontrol1003 - Check for VMs leaked by the nova-fullstack test - https://phabricator.wikimedia.org/T302369
[09:39:19] !log admin restarting neutron-api cloudcontrol1003 to see if the agent status update starts working (T302369)
[09:39:22] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:54:28] !log admin neutron agent-delete afcb9b7f-c1a6-4ff4-9b10-92bfbe8d1a56 (Linux bridge agent | cloudvirtan1002)
[09:54:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:55:12] !log admin neutron agent-delete afe173eb-35ba-444a-9960-899629786d2f (Linux bridge agent | cloudvirtan1003)
[09:55:15] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:55:47] !log admin neutron agent-delete 2eeef198-8af7-4e5d-bd73-e14a2a8d2404 (Linux bridge agent | cloudvirtan1004)
[09:55:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:56:19] !log admin neutron agent-delete 1071c198-ed57-4b5a-9439-30e66a31aa69 (Linux bridge agent | cloudvirtan1005)
[09:56:21] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[09:56:57] !log admin neutron agent-delete bad663b3-fd25-4393-a546-4b1b4bdec4db (Linux bridge agent | cloudvirtan1001)
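[editor's note] The repeated `neutron agent-delete` calls in the 09:54–09:56 entries could be batched. A minimal sketch, not what was actually run: it assumes the unified `openstack network agent` CLI (the standalone `neutron` client used in the log is deprecated), and the dry-run guard is added here purely for illustration.

```shell
#!/usr/bin/env bash
# Sketch: batch-delete Neutron agents for decommissioned hypervisors.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
set -euo pipefail
: "${DRY_RUN:=1}"

delete_agents() {
    # Expects one agent UUID per line on stdin, e.g. from:
    #   openstack network agent list --host cloudvirtan1002.eqiad.wmnet -f value -c ID
    while read -r agent_id; do
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run: openstack network agent delete $agent_id"
        else
            openstack network agent delete "$agent_id"
        fi
    done
}
```

Feeding it the UUIDs from `openstack network agent list` and reviewing the dry-run output before setting `DRY_RUN=0` avoids deleting a live agent by mistake.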
[09:56:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:05:22] !log admin Deleting stuck novafullstack servers, to let the service create new ones (T302369)
[10:05:26] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[10:05:26] T302369: cloudcontrol1003 - Check for VMs leaked by the nova-fullstack test - https://phabricator.wikimedia.org/T302369
[10:14:09] !log admin cleaning up neutron agents for non-existent servers cloudvirt100[1-9].eqiad.wmnet,cloudvirt10[12-15].eqiad.wmnet
[10:14:12] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[12:13:48] !log admin cleaning up cinder volume snapshots, aborrero@cloudcontrol1005:~$ for i in $(sudo wmcs-openstack volume snapshot list -f value -c ID) ; do sudo wmcs-openstack volume snapshot delete $i ; done (T302382)
[12:13:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[12:13:52] T302382: icinga alert: Check for snapshots leaked by cinder backup agent - https://phabricator.wikimedia.org/T302382
[13:13:44] !log paws deploying e6eedbc58bd6f1f074912f4faf8075275ae13819 cleanup
[13:13:47] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Paws/SAL
[14:27:12] i tried 'rebuild instance' on cumin.mariadb104-test.eqiad1.wikimedia.cloud to reimage it from buster to bullseye, and while the web u/i says it succeeded, it's come back up running buster
[14:27:18] am i holding it wrong?
[14:31:50] you might want to remove and recreate the instance instead
[14:32:03] kormat: good question! the openstack docs say "There is a known limitation where the root disk is not replaced for volume-backed instances during a rebuild.", which could be the case here
[14:33:07] not sure if just being hosted in ceph counts as "volume-backed instance" or if it's specifically referring to a cinder volume
[14:34:33] taavi: that.. does raise the question of what 'rebuild' is supposed to do ;)
[14:34:39] if that's the case, we probably should remove that option from horizon, I don't believe you're the first person to have rebuilds not work
[14:34:42] i guess i could try removing the instance, and recreating it?
[14:34:52] "This operation recreates the root disk of the server. For a volume-backed server, this operation keeps the contents of the volume."
[14:35:01] 🤦‍♀️
[14:35:34] that's what we usually do (although we try to not re-use instance names)
[14:37:44] reusing the instance name should be ok now, that was an issue with the hook on the dns, though there might be some cache in the bastion that might try to reuse the old ip for a few minutes
[14:39:18] created T302404
[14:39:18] T302404: fix or remove instance "rebuild" button - https://phabricator.wikimedia.org/T302404
[14:39:34] alright, here goes 🤞
[15:23:00] that worked, have a functional instance again. thanks, taavi + dcaro <3
[16:40:47] !log tools.mabot add eswikiversity redirect maintenance jobs
[16:40:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.mabot/SAL
[17:40:27] !log admin restarting lots of openstack services to try to clear up the mess that is T236101
[17:40:31] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[17:40:31] T236101: Find a way to remove non-replicated tables from ToolsDB - https://phabricator.wikimedia.org/T236101
[17:41:39] !log toolhub Updated demo server to aae410a
[17:41:40] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolhub/SAL
[17:43:18] !log tools.mabot Revert eswikiversity test configuration
[17:43:19] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.mabot/SAL
[20:39:10] !log admin added domain-wide 'designateadmin' and 'observer' roles to project-proxy-dns-manager service account T295246
[20:39:14] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[20:39:14] T295246: Dynamicproxy API should be useful without the Horizon dashboard - https://phabricator.wikimedia.org/T295246
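[editor's note] The cinder snapshot cleanup one-liner logged at 12:13:48 can be separated into listing and deletion steps. A sketch of the same loop, with a dry-run guard added here for illustration (the original one-liner deleted unconditionally); `wmcs-openstack` is the WMCS wrapper seen in the log.

```shell
#!/usr/bin/env bash
# Sketch of the snapshot-cleanup loop from the 12:13:48 SAL entry.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
set -euo pipefail
: "${DRY_RUN:=1}"

cleanup_snapshots() {
    # Expects one snapshot ID per line on stdin, e.g. from:
    #   sudo wmcs-openstack volume snapshot list -f value -c ID
    while read -r snap_id; do
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run: wmcs-openstack volume snapshot delete $snap_id"
        else
            sudo wmcs-openstack volume snapshot delete "$snap_id"
        fi
    done
}
```

Reviewing the dry-run output first matters here because the list command returns every snapshot, not just the ones leaked by the backup agent (T302382).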
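[editor's note] On the 14:27–15:23 rebuild thread: since Horizon's "rebuild instance" may keep the root disk of a volume-backed server, the workaround that actually worked was deleting and recreating the instance. A hedged sketch of that sequence with the plain `openstack` CLI; the instance, image, and flavor names are placeholders, not values from the log.

```shell
#!/usr/bin/env bash
# Sketch of the delete-and-recreate workaround for the rebuild limitation
# discussed above. All argument values are illustrative placeholders.
set -euo pipefail
: "${DRY_RUN:=1}"

recreate_instance() {
    local name=$1 image=$2 flavor=$3
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: openstack server delete --wait $name"
        echo "would run: openstack server create --image $image --flavor $flavor --wait $name"
        return 0
    fi
    openstack server delete --wait "$name"
    # Re-using the instance name should be OK per the 14:37:44 note, though
    # the bastion may cache the old IP for a few minutes.
    openstack server create --image "$image" --flavor "$flavor" --wait "$name"
}
```

Per the 14:35:34 comment, WMCS usually avoids re-using instance names, so passing a fresh name to `recreate_instance` is the more conservative choice.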