[07:03:12] it seems we indeed have issues with the broken master for cirrussearch - https://gerrit.wikimedia.org/r/c/mediawiki/extensions/CirrusSearch/+/704788/1#message-20be474e6d05b1fd34b207e9909a1dc138ddc696
[07:20:38] so something broke...
[07:20:54] looks like it's the completion suggester
[07:21:09] I get results when it's disabled: https://cirrustest-cirrus-integ.wmflabs.org/w/api.php?action=opensearch&search=ma&cirrusUseCompletionSuggester=no
[07:21:22] but none when it's enabled
[07:22:13] but strangely many more tests would fail if it was completely broken
[07:24:00] these 2 tests are pure browser tests (not using the api), it might well be something in the UI that changed, causing the test to not find what it wants in the page DOM
[07:24:38] so it was a coincidence (I haven't merged a thing into master)
[07:25:55] I'll investigate that once I get all the pieces ready in my patch (still need to deduplicate some code)
[07:27:07] hmm, fun part is I get results from the completion suggester on my local vagrant
[07:29:34] might be the script that populates them (resetMwv)? but again I would expect way more tests to fail... strange...
[07:30:24] there's work done on the widget so I would not be surprised if the DOM changed, it has happened in the past
[07:38:07] that wouldn't explain why the API doesn't return results
[07:38:19] (and why it does on my vagrant)
[07:38:28] like you said, the main page should always be there
[07:39:12] indeed
[07:39:57] sometimes cindy fails to rebase, and the repo might be in a weird spot, either CirrusSearch itself or mediawiki-core
[07:40:43] it may happen that someone cherry-picks a patch and then cindy fails to clean up and update to master
[07:41:24] git status and log on these two projects should tell if that's the case
[07:43:17] looks correct
[07:43:26] proper branch and commits
[07:44:05] although on the main repo there are local changes, but I don't know whether they should be there
[07:46:41] they should probably not
[07:46:43] looking
[07:48:19] ah you meant the vagrant repo? yes those are expected
[07:48:28] yep, that's what I meant
[07:48:42] hm.. so it's not that :/
[09:48:06] lunch
[13:15:38] meal 3 break
[14:03:54] ryankemper: in case the switch issue is not resolved before the weekend I think it might be safe to change masters on the omega cluster that lost one (2 eligible masters is not great)
[14:04:01] patch is up here: https://gerrit.wikimedia.org/r/c/operations/puppet/+/704973
[14:05:33] and it will require restarting elasticsearch_6@production-search-omega-codfw.service on elastic2051.codfw.wmnet & elastic2038.codfw.wmnet
[14:58:25] dcausse: what's the take on type hints in PHP? IntelliJ adds new methods and functions with ":" return types: should I keep using doc type annotations, use the newer syntax only for new ones, or replace everything with the new one?
[14:59:22] I kind of like the newer approach, but I've probably written 95% of my PHP code during the last month
[14:59:49] zpapierski: type hints are encouraged, public methods should still have docs and types defined tho
[15:00:20] esp for container types e.g. string[] that cannot yet be written at the syntax level
[15:00:32] ok, I'll go with the type hints + docs with types
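(A minimal sketch of the convention agreed on above: native parameter and return type hints, with a docblock still carrying the container type that the syntax cannot yet express. Class and method names are made up for illustration, not taken from CirrusSearch.)

<?php

class SuggestionFetcher {
	/**
	 * Fetch completion suggestions matching a prefix.
	 *
	 * @param string $prefix Prefix typed by the user
	 * @param int $limit Maximum number of suggestions to return
	 * @return string[] Suggested titles; the container type can only be stated in the docblock
	 */
	public function fetchSuggestions( string $prefix, int $limit = 10 ): array {
		// Native hints cover string/int/array; the docblock narrows array to string[].
		return array_slice( $this->titlesMatching( $prefix ), 0, $limit );
	}

	/**
	 * @param string $prefix
	 * @return string[]
	 */
	private function titlesMatching( string $prefix ): array {
		// Placeholder body for the sketch.
		return [];
	}
}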
[17:05:28] dcausse: thanks for the patch. still seeing crits on icinga, so it looks like switch A3 is still down
[17:05:41] gonna make sure that's the case real quick, but assuming so I'll get those restarts done right away
[17:11:09] Yeah, from the context on #sre the switch will be down until Monday, proceeding
[18:30:37] Okay, we're back to 3 eligible masters for `codfw-omega`. Shouldn't be any more intervention needed AFAICT
[18:31:34] FWIW it looks like `production-search-codfw` (not omega) will be stuck in yellow cluster status until the switch is fixed on Monday. We've got a 99.89% active shard percentage though, so the actual impact is very minimal; there shouldn't be any concern about leaving it in this state over the weekend (not that we can even fix it without doing some ugly stuff anyway)
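(For reference, a rough sketch of reading the yellow status and active-shard percentage mentioned above from the Elasticsearch cluster health API; the endpoint URL is a placeholder, not the actual production host.)

<?php
// Placeholder endpoint; point it at the cluster being checked.
$endpoint = 'http://localhost:9200/_cluster/health';

$health = json_decode( file_get_contents( $endpoint ), true );

printf(
	"status=%s active_shards_percent=%.2f%% unassigned_shards=%d\n",
	$health['status'],                          // green / yellow / red
	$health['active_shards_percent_as_number'], // e.g. 99.89 while some replicas are unassigned
	$health['unassigned_shards']
);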