[10:32:02] lunch
[13:18:26] greetings
[13:45:27] o/
[13:56:37] /moti wave
[13:57:27] take 2
[13:57:30] \o
[14:48:32] o/
[15:14:12] should I be concerned that the deployment-prep and omega and psi clusters show zero shards when I curl _cat/allocation?v=true ?
[15:14:41] indices also showing zero bytes, but disk space is used
[15:15:07] hmm, maybe? looking :)
[15:17:37] inflatador: seems fine, a `du -sh /srv/elasticsearch` gives 5MB for elastic05. I'm not really sure how elastic gets that 3GB used value; it doesn't really line up with anything on `df -h` either
[15:19:04] beta-search-psi and omega both take up 24K
[15:19:23] so all of that 5MB comes from beta-search (chi)
[15:19:27] (or most of it)
[15:19:53] I've banned elastic05, 06, and 07 from all clusters, let me check one of the active ones
[15:19:58] inflatador: ahh, ok i now realize what you mean, i was a bit lost :)
[15:20:15] inflatador: i think we never actually point cirrus at those other clusters in beta, they exist to test puppet but are unused
[15:20:22] deployment-elastic11.deployment-prep.eqiad1.wikimedia.cloud
[15:20:35] ebernhardson: cool, will update docs accordingly
[15:25:06] now i'm not 100% sure; poking the config, it does seem like beta gets the configuration that would put wikis on the psi/omega clusters, but most wikis in the test cluster are in the 'big indices' list from prod
[15:25:45] so it's not that it's not configured; it's that the smaller clusters have the smaller wikis, and we don't seem to have test wikis of those small wikis.
[15:29:49] ah OK, that's good context
[16:02:16] workout, back in ~30
[16:41:29] back
[17:03:03] lunch, back in an hr
[17:52:21] back
[18:10:42] * ebernhardson is disappointed to find jar hell is not limited to jetty; bigdata has jar hell between the bigdata-core and bigdata-runtime jars too
[18:14:04] and colt, and probably more things.
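(Editor's note: the zero-shards question from 15:14 can be checked roughly like this. A minimal sketch; the node names, sizes, and sample output below are made up, and in reality the table would come from `curl -s 'http://localhost:9200/_cat/allocation?v=true'` against the cluster.)

```shell
# Hypothetical _cat/allocation output standing in for:
#   curl -s 'http://localhost:9200/_cat/allocation?v=true'
cat <<'EOF' > /tmp/allocation.txt
shards disk.indices disk.used disk.avail disk.total disk.percent host     ip       node
    12        1.2gb     3.1gb     16.9gb       20gb           15 10.0.0.4 10.0.0.4 elastic04
     0           0b     3.0gb     17.0gb       20gb           15 10.0.0.5 10.0.0.5 elastic05
EOF
# Flag nodes reporting zero shards (skip the header row).
awk 'NR > 1 && $1 == 0 { print $NF }' /tmp/allocation.txt
```

Note that `disk.used` is host-level disk usage, not just Elasticsearch data, which is consistent with the observation above that `du -sh /srv/elasticsearch` can be far smaller than the reported used value.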
not worth looking at :P
[18:17:00] sounds depressing ;P
[18:19:17] Image search result for "jar hell": https://www.gotohellmi.com/store/p229/Mug_Shot_from_Hell%2C_Michigan_Mini_Jar_Shot_Glass_-_OUT_OF_STOCK.html
[18:53:32] lol, jar hell is where java has access to the same class from multiple sources and which implementation you get is somewhat random. And then sometimes it complains that it can't assign a Foobar to a place that expected a Foobar, because they came from different class loaders
[18:54:06] i suppose it's basically a mess that has to be cleaned up, or ignored. people ignore messes all the time too :)
[19:07:14] Yeah, it sounds like dependency hell with a dash of namespace collisions thrown in for good measure
[19:45:21] https://gerrit.wikimedia.org/r/c/operations/puppet/+/791666/ PR for removing the old beta VMs from puppet if anyone has time to look
[19:46:08] I shut off 05 already, will probably shut off 06 and 07 next wk and delete them all by Friday if no one screams (open to feedback if anyone would rather do it some other way)
[19:58:45] quick break, back in ~15
[20:14:31] back
[20:33:22] Updater's failed on `wcqs1001`, not sure why yet
[21:22:12] ryankemper: oh sorry, that's me, i previously depooled it
[21:22:25] depooling isn't why it's failing, but that's also me :)
[21:23:01] ebernhardson: :P no worries, I can ack the alert
[21:30:05] ryankemper: where did the alert come from? I don't see it, but it may have been eaten by one of my email filters
[21:32:08] inflatador: icinga alert, see the two I just acked in #wikimedia-operations
[21:34:33] ryankemper: thanks, i don't see any emails, I guess I should turn on an IRC phrase highlight for 'wcqs'
[21:36:13] inflatador: yup, that's an option; otherwise it's just checking the icinga UI a few times a day
[21:41:22] i fixed up wcqs1001, it should stop complaining soon.
Earlier it was oauth specific, but the problem i'm looking at now should show up on wdqs1009 all the same; will continue testing there
[21:58:50] * ebernhardson tried cleaning up multiple copies of logback, but now it doesn't log anything. Complete silence, not even the 'i initialized the logging framework' message :)
[21:59:01] but clearly slf4j and logback are still in there... sigh
[22:10:11] OK, I'm out. happy weekend all
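(Editor's note: the "jar hell" discussed at 18:53, where the same class is packaged in more than one jar, can be detected by listing class entries across jars and flagging duplicates. A minimal sketch; the class path `com/example/Foobar.class` is hypothetical, the jar names merely echo the bigdata-core/bigdata-runtime pair mentioned above, and the listing stands in for real output you would build with something like `for j in lib/*.jar; do unzip -Z1 "$j" | sed "s|^|$j |"; done`.)

```shell
# Hypothetical "<jar> <class entry>" listing standing in for real unzip output.
cat <<'EOF' > /tmp/jar-classes.txt
bigdata-core.jar com/example/Foobar.class
bigdata-runtime.jar com/example/Foobar.class
colt.jar cern/colt/list/DoubleArrayList.class
EOF
# Report any class file that is packaged in more than one jar.
awk '{ n[$2]++; jars[$2] = jars[$2] " " $1 }
     END { for (c in n) if (n[c] > 1) print c ":" jars[c] }' /tmp/jar-classes.txt
```

Which copy the JVM actually loads depends on class path order and class loader delegation, which is why the behavior reads as "somewhat random" in the discussion above.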