[11:37:26] lunch
[13:01:11] \o
[13:12:07] .o/
[13:32:52] o/
[14:22:15] i can't really decide what to do with interwiki search on Special:Search re: semantic search. We could catch the profile exception and just skip them, or we could try to re-run routing on interwiki and exclude semantic, or we could detect it early and influence CrossSearchStrategy
[14:22:30] but the ordering isn't quite right for changing CrossSearchStrategy, that gets decided before routing
[14:29:10] or we can add yet another boolean to SearchContext that the semantic router flips, but that seems worse
[14:30:30] the routing is not allowed to change the SearchQuery I suppose?
[14:32:16] i suppose i was thinking of the SearchQuery as mostly immutable, i could let it replace the CrossSearchStrategy but so far we don't have any set* methods on SearchQuery
[14:35:43] yes I thought it'd be cleaner to keep it immutable...
[14:36:42] it's maybe least terrible for the interwiki searcher to catch the profile exceptions, but that seems terrible in another way
[14:36:43] but since we're still in a hackish model with this extra cirrus param to trigger the semantic builder we might just add more hacks and allow the interwiki to fall back to a known profile
[14:37:05] or possibly force a naive default profile for interwiki?
[14:37:23] hmm, actually maybe forcing the interwiki profile is reasonable. Keep it simple and minimally configurable by the user
[14:37:37] it's a bold assumption to think that the host wiki's best profile is suited for sister projects
[14:37:46] yea, that makes sense
[14:38:32] * ebernhardson finds "simple" and cirrus to not really fit in the same sentence very well :P
[14:38:44] :)
[14:58:41] will be late for the wed meeting
[14:58:43] Looks like opensearch can't stand up its cluster without https ;(. The securityadmin stuff assumes you'll have a TLS listener
[14:58:56] :(
[15:01:02] maybe have an answer... but need to get enough stuff set up to actually test
[15:01:22] We can potentially hack around this by having multiple services in Istio
[15:07:26] dcausse: wed meeting?
[15:11:57] * ebernhardson realizing daylight confusion time is among us
[18:35:44] oh nice, `grep -ir interwikisearcher tests/phpunit`: nothing :P
[18:40:55] inflatador: do you have opinions on the number of master-capable hosts? Pondering what it looks like to scale up and i feel a little awkward adding them all as masters
[18:41:01] in the semsearch cluster
[18:44:19] also relatedly, poking in admin_ng i see the largest max memory allowed anywhere is 32g, in dse-k8s; wondering if i should update that for our estimated 38g, or scale up node count to fit in 32g
[18:45:30] actually i guess i'll ask about scaling in slack
[18:50:09] ebernhardson I think 5 is probably a good number, just a hunch though
[18:51:03] re: node size we should be able to accommodate 38G. That's a big chunk of those 64G hosts, but I think it's fine
[18:51:06] ok, seems reasonable. I'm intending to scale that up to the 16 node estimate today, although i suppose first i will add a non-master or two to -test to make sure it does what i think
[18:51:17] or maybe tomorrow, will see how things go
[18:51:52] cool, ping me if ya need anything
[18:53:46] i suppose i'll also have to review how opensearch handles masters, we have 7 currently but scaling down to 5 is reasonable. I'm just not sure how that works out either, might need some api calls to purge old masters
[18:55:10] yeah, would probably need to use https://docs.opensearch.org/latest/api-reference/cluster-api/cluster-voting-configuration-exclusions/
[19:16:12] Oh! Didn't catch that you already had 7. Probably not worth the effort to scale down if you're keeping the existing cluster
[19:16:31] yea i suppose can just leave it at 7, it's probably fine
[20:17:42] * ebernhardson wonders why helm deployment_diffs runs so much slower locally vs CI
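The master scale-down discussed above (removing master-eligible nodes to go from 7 to 5) would use the voting configuration exclusions API linked in the log. A minimal sketch of that sequence, assuming a cluster reachable on localhost:9200 over TLS; the node names and credentials are hypothetical placeholders, not anything from the actual semsearch cluster:

```shell
# Tell the cluster to exclude two master-eligible nodes from the voting
# configuration before they are decommissioned (names are placeholders).
curl -k -u admin:admin -X POST \
  "https://localhost:9200/_cluster/voting_config_exclusions?node_names=semsearch-master-6,semsearch-master-7&timeout=120s"

# After the excluded nodes have been shut down and left the cluster,
# clear the exclusion list so it doesn't linger in cluster state.
curl -k -u admin:admin -X DELETE \
  "https://localhost:9200/_cluster/voting_config_exclusions?wait_for_removal=true"
```

The exclusion step is what lets the remaining masters safely shrink the voting configuration; skipping the final DELETE leaves stale exclusions that block future exclusion requests.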