[06:04:19] Amir1: anything running on s6 regarding templatelinks?
[06:04:29] I am seeing quite a bunch of slow queries on s6
[06:05:04] looks like the force index isn't working properly
[06:05:20] https://phabricator.wikimedia.org/P27897
[06:10:19] yeah definitely faster with that other index
[07:06:30] reminder, I'm going to reboot cumin1001 shortly
[07:06:45] +1
[07:42:15] marostegui: there isn't anything running, it's the read-new path being slow. I wrote more details on https://phabricator.wikimedia.org/T299421#7934073
[07:42:45] if I remove that force index, something else becomes slow (check yesterday's dewiki slow queries)
[07:43:17] :(
[07:43:28] yeah, with those queries it is always going to be something
[07:43:35] If I do join decomposition on the new query it becomes fast again. Maybe I can do it in a subquery
[07:44:08] but it's weird, the join order is correct so it should pick the ids first and be fast
[08:17:48] marostegui: how long does it usually take to run a schema change on revision on s1?
[08:18:34] I don't remember, but I think around 5-10h or more
[08:19:51] okay, I downtimed it for 16 hours
[12:47:20] interestingly, ipmitool chassis power cycle does a reboot on Dell kit, and a power cycle on HP kit
[12:54:56] if needed we could make the redfish chassis_reset() support HP too; with that one you can choose the policy ;)
[12:54:59] https://doc.wikimedia.org/spicerack/master/api/spicerack.redfish.html#spicerack.redfish.ChassisResetPolicy
[13:56:10] Amir1: you probably want to run a reboot on db1163 (candidate master for s1)
[13:56:20] to make sure it picks up the latest kernel before the switchover
[13:56:33] sure, right now I'm running a schema change on revision on it
[13:56:52] so I don't need to do the switchover again
[13:58:20] sure :)
[15:30:07] I am going to test the database backup checks in production to make sure they keep working as expected; expect Icinga IRC complaints soon
[16:40:33] I finished the test, no more fake alerts expected from me
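To illustrate the join decomposition Amir1 mentions at 07:43:35: a minimal sketch against the normalized templatelinks schema from T299421, where templatelinks.tl_target_id references linktarget.lt_id. The table and column names are the real MediaWiki schema; the connection details, target title, and exact query shape are assumptions for illustration, not the actual production queries from P27897.

import pymysql

# Placeholder connection details; real hosts/credentials are not in the log.
conn = pymysql.connect(host="127.0.0.1", user="wiki", password="secret", database="dewiki")

def pages_embedding(namespace: int, title: str) -> list[int]:
    """Join decomposition: resolve the link target id first, then look up
    templatelinks by that id, instead of one JOIN that needs FORCE INDEX."""
    with conn.cursor() as cur:
        # Step 1: pick the id from the small linktarget table.
        cur.execute(
            "SELECT lt_id FROM linktarget WHERE lt_namespace = %s AND lt_title = %s",
            (namespace, title),
        )
        row = cur.fetchone()
        if row is None:
            return []
        # Step 2: the access path is now an equality on tl_target_id, so the
        # optimizer has no join order to get wrong and no index hint is needed.
        cur.execute("SELECT tl_from FROM templatelinks WHERE tl_target_id = %s", (row[0],))
        return [tl_from for (tl_from,) in cur.fetchall()]

# e.g. pages that transclude Template:Infobox (namespace 10):
print(pages_embedding(10, "Infobox"))

The single-statement variant Amir1 floats ("maybe I can do it in a subquery") would inline step 1, e.g. WHERE tl_target_id = (SELECT lt_id FROM linktarget WHERE ...).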
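And to make the 12:47-12:54 exchange concrete: Redfish lets the caller pick the reset policy that ipmitool leaves vendor-dependent. A minimal sketch using only the standard DMTF ComputerSystem.Reset action, not spicerack's chassis_reset() implementation (see the linked ChassisResetPolicy docs for that API); the BMC host, credentials, and system id are placeholders and vary per vendor.

import requests

BMC = "https://bmc.example.org"  # placeholder BMC address
AUTH = ("root", "secret")        # placeholder credentials

def chassis_reset(reset_type: str) -> None:
    # Standard Redfish ResetType values include "GracefulRestart",
    # "ForceRestart" (roughly the reboot Dell kit performs on
    # ipmitool chassis power cycle) and "PowerCycle" (roughly the
    # full power cycle HP kit performs), so the policy is explicit.
    resp = requests.post(
        f"{BMC}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",  # system id varies per vendor
        json={"ResetType": reset_type},
        auth=AUTH,
        verify=False,  # BMCs often ship self-signed certificates
        timeout=30,
    )
    resp.raise_for_status()

chassis_reset("PowerCycle")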