[09:59:08] lunch
[12:46:06] \o
[12:58:13] o/
[13:18:10] o/
[13:19:18] dcausse: random fun thing i learned... there are 490M unique statement_keyword values in wikidata
[13:19:30] was considering if the 5M commons urls really made a difference or not :P
[13:19:53] if anything, i suppose i'm surprised nothing broke
[13:22:01] ebernhardson: yes... I'm not sure that these 490M unique tokens are particularly useful, which is why I'm not a big fan of adding another set of 5M entries
[13:22:21] not super convinced that these are useful for search
[13:22:36] yea, i suspect very few have ever been searched for
[13:23:16] what I see quite often is something like haswbstatement:P31=Q123
[13:23:22] which makes sense
[13:24:23] yea, that seems like the primary use case
[13:27:34] welcome back dcausse !
[13:28:34] thx!
[13:31:42] when looking at https://phabricator.wikimedia.org/P67741 I'm not sure it makes sense to keep them as-is
[13:32:32] one thing is that if you allow a property to get indexed it gets copied to the all field, which I think makes sense for some of the textual entries there
[13:34:05] hmm, i hadn't considered the all field
[13:34:10] but not convinced that keeping the exact tokens for haswbstatement
[13:34:17] is worthwhile
[13:38:53] not sure how we pull back on those either though, i'm going to guess those mostly amount to the 'string' and 'external-id' types
[13:39:36] i vaguely remember the external-id, it was solving a problem where you couldn't search for a thing by a known id
[13:43:18] yes, we'd have to dig into old phab tickets, but I vaguely remember this seemed useful for when you copy/paste a random id into the search bar
[13:44:06] we could scan query logs to see how often haswbstatement is used with something that does not look like a Qid
[13:44:25] yea, might be worthwhile just to understand how things are being used
[15:13:44] i wonder if we need to shuffle shards around on cloudelastic, it keeps alerting for high gc. Or maybe just roll a restart through it
[15:14:17] ebernhardson yeah, I've been restarting the smaller ones here and there, doesn't help much. We probably need to detune that alert a bit
[15:14:45] inflatador: hmm, better would be to fix the gc problems :P We could potentially give it more memory
[15:16:54] ebernhardson ACK ;), if you wanna make a puppet patch to give more mem LMK, otherwise I can look once I get out of mtg
[15:17:03] or maybe it is being a bit quick, the dashboards claim we only peak at ~10 GCs/hr
[15:20:02] oh, actually it is being bad. 1006 is doing 400/hr in the old pool. A well-behaving instance does <1
[15:24:29] Ò_Ó
[15:25:16] :/
[15:26:05] that's how it works when it runs out of memory: it keeps running the gc to try and free some, but doesn't get any back. Looking over some stats, 1006 is more frequent; it does have much more disk used (probably means larger indices) than other instances... but balancing is tedious :P Probably give it another 2G of memory (10G->12G)
[15:30:28] oh i suppose disk isn't a good proxy, since these are the smaller clusters but all 3 clusters run on each machine. Anyways, puppet patch is up to increase memory by 2G (turns out it's 12G->14G)
[15:30:40] looks like 1005/1006 are older hosts FWiW
[15:31:20] same amount of RAM tho
[15:31:35] i wouldn't expect being an older host to affect GC though, it might be able to run the GC faster but it would get the same results. 400 old GCs/hr vs <1 old GC/hr is memory pressure in the heap itself
[15:31:51] something taking up more memory, could potentially dig into it but historically that takes a long time
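(Not part of the log.) A minimal sketch of how the ~400 old-GCs-per-hour figure above could be spot-checked against the Elasticsearch node stats API; the endpoint URL/port and the 5-minute sampling window are assumptions, not what the team actually used:

```python
# Minimal sketch: sample old-gen GC collection counts twice via the node stats
# API and extrapolate a per-hour rate for each node.
import time
import requests

BASE = "http://localhost:9200"  # hypothetical endpoint, not the real cloudelastic URL

def old_gc_counts(base_url):
    """Return {node name: cumulative old-gen GC collection count}."""
    stats = requests.get(f"{base_url}/_nodes/stats/jvm", timeout=10).json()
    return {
        node["name"]: node["jvm"]["gc"]["collectors"]["old"]["collection_count"]
        for node in stats["nodes"].values()
    }

before = old_gc_counts(BASE)
time.sleep(300)  # sample over five minutes
after = old_gc_counts(BASE)

for name, count in sorted(after.items()):
    rate_per_hour = (count - before.get(name, count)) * 12  # 5 min -> 1 hour
    print(f"{name}: ~{rate_per_hour} old GCs/hr")
```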
[15:33:10] ebernhardson no worries, will add my +1 shortly
[15:37:47] OK, change merged/puppet-merged/applied in puppet. Will roll restart cloudelastic shortly
[16:00:21] gehel dcausse just depooled CODFW completely from `wdqs-main`. Is it OK for me to start a data transfer from 2022->2021 now?
[16:00:29] ref T373791
[16:00:30] T373791: Transfer a sane journal (subgraph:main) to wdqs2021 from wdqs2022 - https://phabricator.wikimedia.org/T373791
[16:01:20] inflatador: yes I think so
[16:02:46] dcausse ACK, will get that started soon
[16:03:20] ebernhardson I'm restarting cloudelastic to apply the new cluster settings now
[16:05:25] inflatador: thanks!
[16:32:16] wdqs-main outage just p-aged all SREs ;( fixing that now
[16:32:29] oops
[16:33:27] heading out for dinner, back later tonight
[16:35:07] so I'm turning off p-aging for wdqs-main/wdqs-scholarly. Should I turn it off for the old wdqs services as well?
[16:40:43] We definitely don't want anyone to be woken up by WDQS going down. Our SLO is low enough
[16:40:59] gehel ACK, will update https://gerrit.wikimedia.org/r/c/operations/puppet/+/1070301
[16:47:11] OK, that's merged... should no longer p-age for any wdqs endpoints
[16:57:12] what an odd error... my local wikibasecirrussearch tests fail because they expected "Arabic" and "Hebrew", but my test rendered "العربية" and "עברית"
[16:57:22] must be some config flag somewhere...
[17:00:22] cloudelastic heap settings are applied
[17:00:39] inflatador: excellent! will probably take a few days to find out if it fixed anything
[17:00:47] usually the heap takes some time to fill up
[17:03:08] dcausse the wdqs-main data xfer is done, LMK if it looks OK, we can repool CODFW at that point
[17:46:35] should we be running categories on the graph split hosts now?
[17:47:30] inflatador: hmm, i suppose at some point? I could be mistaken but i think cirrus is the only consumer of the categories
[17:48:38] just wondering as I believe we disabled categories when we started doing the tests
[17:49:47] I see a `categories.jnl` on `wdqs2021`... hmm
[17:53:11] ref https://alerts.wikimedia.org/?q=%40state%3Dactive&q=%40cluster%3Dwikimedia.org&q=alertname%3DCategories%20update%20lag
[17:53:15] heading to lunch, back in ~40
[18:16:30] back
[20:31:12] re: pairing session, systemd docs on user resource control: https://www.freedesktop.org/software/systemd/man/latest/user@.service.html
[23:24:38] inflatador: thanks! there's still something wrong with it... it's consuming from the wrong topic so better to keep it depooled for now
[23:24:44] will take a closer look tomorrow
[23:38:01] dcausse np
[23:41:30] we can reimage tomorrow if you like
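(Not part of the log.) Circling back to the earlier idea of scanning query logs for haswbstatement clauses whose value does not look like a Qid, a rough sketch of the filtering; the plain-text input of raw query strings ("queries.txt") is a hypothetical stand-in for wherever search request logging actually lands:

```python
# Rough sketch: tally haswbstatement clauses by whether their value looks like
# a P<digits>=Q<digits> statement, a bare property, or something else.
import re
from collections import Counter

CLAUSE = re.compile(r'haswbstatement:(?:"([^"]+)"|(\S+))', re.IGNORECASE)
QID_VALUE = re.compile(r'^P\d+=Q\d+$', re.IGNORECASE)
PROPERTY_ONLY = re.compile(r'^P\d+$', re.IGNORECASE)

counts = Counter()
with open("queries.txt", encoding="utf-8") as f:  # hypothetical log extract
    for line in f:
        for quoted, bare in CLAUSE.findall(line):
            value = quoted or bare
            if QID_VALUE.match(value):
                counts["P=Q"] += 1
            elif PROPERTY_ONLY.match(value):
                counts["property only"] += 1
            else:
                counts["other (non-Qid)"] += 1

print(counts.most_common())
```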