[09:31:08] Amir1: i believe you
[09:44:45] :D
[09:45:38] marostegui: wrt T297189, I'm trying to find a way to clean it up. We have the problem that it's hundreds of millions of rows, any sort of query will just be useless
[09:45:38] T297189: Schema change for dropping ft_title and ft_namesapce - https://phabricator.wikimedia.org/T297189
[09:46:14] the only way I can think of right now is to just dump the whole table into a file (which will be gigabytes) and go through it with a script
[09:46:34] Amir1: No problem from my side, no rush!
[09:47:27] it's weird, we have a dump of those tables, a SQL file that will rebuild it, and somehow that didn't work?
[09:47:32] worked but not enough
[09:48:01] in ruwiki it's 1.4B rows
[09:48:02] lovely
[14:33:43] that's very ... odd. db1115 has a grant for the `orchestrator_srv` user... but only for db1169
[14:46:12] marostegui, Amir1: how does the procedure i've added for the first step look? https://phabricator.wikimedia.org/T301315
[14:50:17] LGTM
[14:50:25] the part I care about is the grants :D
[14:50:41] fixed the grants
[15:33:53] commented on the task
[15:46:57] godog: I notice a thanos host unmounted a disk on 9 March (thanos-be1003 / sdm); megacli flags Media Error Count: 7 and the event log contains some uncorrectable medium errors. Enough to warrant a DC ticket for replacement? System is in warranty
[15:47:59] godog: (I think the thanos nodes are mostly "yours" :) )
[15:57:33] Emperor: yeah totally up for replacement, thanks for the heads up
[15:58:52] I'll offline the disk
[16:10:53] godog: thanks, I'll leave that with you :)
[16:27:20] Emperor: for sure, I take it you've seen the puppet failures on thanos hosts re: profile::thanos::swift::cluster?
[17:42:13] no, and I ran it by hand on thanos-fe1001 earlier to check; where should I be looking? [tomorrow unless it's really urgent]
[17:46:50] oh, got it now. I can fix that tomorrow
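
A minimal sketch of the "dump the whole table and go through it with a script" approach discussed above for T297189, assuming the table (presumably flaggedtemplates, given the ft_title/ft_namespace columns) has been exported as tab-separated values, e.g. via SELECT ... INTO OUTFILE. The file name, column positions, and filter condition are illustrative placeholders, not taken from the task:

```python
#!/usr/bin/env python3
"""Stream a multi-gigabyte TSV dump row by row instead of querying
hundreds of millions (ruwiki: ~1.4B) of rows live on the replica.
File name, column layout and cleanup condition are assumptions."""

import csv
import sys

DUMP_FILE = "flaggedtemplates_ruwiki.tsv"  # hypothetical dump file name
BATCH_OUT = "rows_to_clean.tsv"            # IDs of rows flagged for cleanup


def main() -> None:
    flagged = 0
    with open(DUMP_FILE, newline="") as src, open(BATCH_OUT, "w", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:
            if not row:
                continue
            # Streaming keeps memory flat no matter how large the dump is.
            # The real cleanup condition isn't spelled out in the chat; this
            # placeholder just flags rows whose (assumed) ft_title column is set.
            row_id = row[0]
            ft_title = row[2] if len(row) > 2 else ""
            if ft_title:
                writer.writerow([row_id])
                flagged += 1
    print(f"flagged {flagged} rows", file=sys.stderr)


if __name__ == "__main__":
    main()
```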
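And a hedged sketch of the kind of check behind the `orchestrator_srv` grants discussion (T301315): list which hosts a MySQL/MariaDB account is defined for on a given instance, to spot cases like db1115 only carrying a grant for db1169. The use of PyMySQL and the connection parameters are assumptions; the actual grants live in the task and in puppet, not here:

```python
#!/usr/bin/env python3
"""Hedged sketch: enumerate the (User, Host) entries for an account.
PyMySQL and the placeholder credentials below are assumptions."""

import pymysql


def grant_hosts(server: str, admin_user: str, admin_password: str, account: str) -> list[str]:
    conn = pymysql.connect(host=server, user=admin_user, password=admin_password)
    try:
        with conn.cursor() as cur:
            # mysql.user has one row per (User, Host) pair the account may connect from.
            cur.execute("SELECT Host FROM mysql.user WHERE User = %s", (account,))
            return [row[0] for row in cur.fetchall()]
    finally:
        conn.close()


if __name__ == "__main__":
    # Placeholder server and credentials; a real check would use an admin account.
    for host in grant_hosts("db1115.eqiad.wmnet", "root", "secret", "orchestrator_srv"):
        print(host)
```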