[07:02:26] dcausse: I know I asked, but I don't remember if you answered - how can I run those "browser" (cucumber?) tests locally?
[07:04:45] zpapierski: there are instructions in tests/integration but also the latest I've seen are from Erik here: https://phabricator.wikimedia.org/T284519#7144349
[07:05:05] ah, perfect, thx
[07:07:29] if in doubt, connect to cirrus-integ.eqiad.wmflabs then sudo -u cindy -i and then tmux a to see the session, all this is run via ~cindy/run-cindy.sh
[07:07:45] will do, if needed
[07:07:55] it has extra steps to poll gerrit changes but that's all
[08:18:37] errand + meal 2 - be back around 1PM CEST
[09:52:09] errand+lunch
[10:01:39] lunch
[11:53:07] meal 3 break
[12:27:28] I'm guessing those tests shouldn't fail - they're working off a headless browser?
[12:28:39] yes it must be started by hand prior to running the suite IIRC
[12:29:29] by it I assume you mean chromedriver
[12:30:28] or not, it actually starts chromedriver by itself
[12:30:47] ok
[12:39:24] weird, it mostly fails with incorrect login
[12:39:32] I'm not even sure if it's more than that
[12:45:08] I remember that you might need to set some env vars, see tests/integration/README.wmv-wmcs.md (section Run the tests)
[12:47:01] thanks
[12:52:33] ebernhardson: by chance do you have a notebook that imports a cirrus dump to relforge?
[12:55:37] ebernhardson: tested your WME code and it works as expected but I don't see the explicit enrollment of autocomplete requests via &cirrusUserTesting=subTest
[12:58:28] now it says that no proxy is configured for this host - do I need a proxy for local testing?
[13:01:24] no, I don't remember anything related to a proxy
[13:02:44] hosts in tests/integration/config/wdio.conf.js look correct
[13:03:13] e.g. http://cirrustest.wiki.local.wmftest.net:8080, http://commons.wiki.local.wmftest.net:8080 and http://ru.wiki.local.wmftest.net:8080
[13:03:24] should all work
[13:06:59] open hangout is open: https://meet.google.com/ugw-nsih-qyw
[13:48:18] need to relocate, be back for retro
[14:19:11] is there a page anywhere that describes/documents search satisfaction? i feel like I saw it somewhere once
[14:20:26] also, I was just looking at https://www.mediawiki.org/wiki/Extension:CirrusSearch/CompletionSuggester and it looks like this is the feature that tries to title-match to articles, as opposed to Query Completion, right? If so, I'll update the description so it's clearer what this feature is doing
[14:24:08] mpham: there's the source code in Gerrit, which has a bunch of comments on what the various fields are: https://gerrit.wikimedia.org/r/plugins/gitiles/schemas/event/secondary/+/master/jsonschema/analytics/legacy/searchsatisfaction/current.yaml
[14:25:12] \o
[14:25:34] o/
[14:25:46] dcausse: for cirrus dumps->hive, stat1007:~ebernhardson/projects/cirrus2hive
[14:26:04] i guess we could commit that somewhere, second time in the last month or two someone wanted that
[14:26:22] thanks! I'll pass that to Cormac
[14:27:21] dcausse: for autocomplete, indeed known :( It seems we never attached a test bucket to autocomplete before, and the next test to run (glent) doesn't need it, so I decided to leave it mostly possible (hopefully) but unimplemented
[14:27:44] ok makes sense!
[14:28:56] * ebernhardson didn't realize there were 4 more screens of backscroll last time, a bunch of talkative peeps today :P
[14:30:31] Nettrom: thanks. i'll take a look. i was hoping for something i could share with non-technical folks as well if possible (other PMs), but will look through this as well
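The cirrus2hive notebook mentioned at 14:25 isn't committed anywhere yet; as a stopgap, here is a rough sketch of the general approach for loading a CirrusSearch dump into an elasticsearch host like relforge. This is not the notebook's code: the host, index name, and chunk size are placeholders, and it assumes the usual gzipped bulk-format NDJSON dumps from dumps.wikimedia.org/other/cirrussearch/ and a 7.x-style elasticsearch-py client.

    import gzip
    import json

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://relforge.example:9200")  # placeholder host
    INDEX = "enwiki_content_copy"                       # placeholder index
    CHUNK = 500                                         # docs per bulk request

    def line_pairs(path):
        """Yield (action, source) line pairs from a gzipped cirrus dump."""
        with gzip.open(path, "rt") as f:
            while True:
                action = f.readline()
                source = f.readline()
                if not source:
                    break
                yield action, source

    def load(path):
        buf = []
        for action, source in line_pairs(path):
            doc_id = json.loads(action)["index"].get("_id")
            # Rewrite the action line so documents land in the target index.
            buf.append(json.dumps({"index": {"_index": INDEX, "_id": doc_id}}))
            buf.append(source.strip())
            if len(buf) >= CHUNK * 2:
                es.bulk(body="\n".join(buf) + "\n")
                buf = []
        if buf:
            es.bulk(body="\n".join(buf) + "\n")

The dumps alternate an action line with a document line, so the loader reads them in pairs and only rewrites the index metadata before re-submitting the pairs as bulk requests.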
[14:58:54] meh, bluetooth being funny this morning. might be a minute late
[15:05:15] mpham: I don't know of any straightforward documentation of that, unfortunately. If it's any help, when the SD team did baselines for search on Commons we defined a set of metrics and have an accompanying notebook that calculates them: https://phabricator.wikimedia.org/T258723#6391122
[16:52:47] dinner
[17:30:59] * ebernhardson ponders a custom header pointing to dumps for requests that are rejected from the automated requests pool counter, but fitting that in seems annoying
[17:36:54] Would the purpose be so that people getting requests rejected could see what types of reqs are getting rejected?
[17:37:16] ryankemper: the idea would be that a developer who is getting failed requests all over the place will see they can get the data and not hammer our apis :)
[17:37:38] but really, who am i kidding. I would probably slow down the script and wait longer unless it was something significant :P
[17:38:40] :P many probably would
[17:38:52] It’d be a dump of just failed requests tho right? Not like all reqs
[17:38:59] no, i mean the cirrussearch dumps
[17:39:14] the idea is they are hitting our apis so we query elasticsearch, we point them to the dumps of our elasticsearch indices and suggest "do it yourself"
[17:39:17] Ohhh I see
[17:40:51] I like the idea altho I agree that probably 90% of devs would just add a sleep and try to finish up whatever they’re doing
[17:41:21] Would adding a custom header add any meaningful overhead or is it just a question of whether we want to bother spending the bit of time to implement it?
[17:41:36] it's just that the header goes in a different part of the code than where we decide what pool to use
[17:41:42] so it's annoying to thread it around
[17:42:04] really not much overhead for an http header
[17:44:05] ah yeah plumbing is never fun
[19:10:42] volans: I'm gonna get started on the next iteration of the elasticsearch cookbook refactor w/ the new class api (https://phabricator.wikimedia.org/T280221 / https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/679701)
[19:10:53] from what I can remember the main things to tackle are pulling `rolling_operation` into its own private method: https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/679701/7/cookbooks/sre/elasticsearch/rolling-operation.py#78 as well as a couple things here: https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/679701/20/cookbooks/sre/elasticsearch/e6-upgrade.py#29
[19:12:02] do you remember there being any other big things to address? (no worries if it's been too long to remember, I'll be tagging you / gehel on the upcoming patch for those first few changes so that'll be a logical time to look over stuff again anyway)
[19:15:45] ryankemper: should be an easy one, this only needs to be deployed to search-loader instances now: https://gerrit.wikimedia.org/r/c/operations/puppet/+/698025
[19:18:00] ebernhardson: will merge that now. as far as post-merge cleanup, it sounds like just need to `rm -rfv` : `/etc/mjolnir` as well as `/srv/deployment/search/mjolnir` across all `elastic*` hosts?
[19:18:10] like I just need to*
[19:18:35] ebernhardson: no, this was hosts in the analytics network. Sec lemme check which exactly
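To make the dumps-header idea from 17:30 concrete, here is a toy, self-contained sketch. CirrusSearch itself is PHP and uses MediaWiki's PoolCounter, so every name below (the header, the response type, the one-slot semaphore standing in for the pool) is invented to show the shape of the logic, not the real code.

    from dataclasses import dataclass, field
    from threading import BoundedSemaphore

    DUMPS_URL = "https://dumps.wikimedia.org/other/cirrussearch/"

    @dataclass
    class Response:
        status: int
        body: str
        headers: dict = field(default_factory=dict)

    # One slot stands in for the "automated requests" pool counter.
    automated_pool = BoundedSemaphore(1)

    def search(query: str) -> Response:
        if not automated_pool.acquire(blocking=False):
            resp = Response(429, "automated request pool is full")
            # Hypothetical header name: point rejected callers at the
            # index dumps so they can "do it yourself" instead of
            # hammering the search apis.
            resp.headers["X-Search-Try-Dumps"] = DUMPS_URL
            return resp
        try:
            return Response(200, f"results for {query!r}")
        finally:
            automated_pool.release()

The plumbing complaint at 17:41 is the real obstacle: in this toy version the pool decision and the header writing conveniently live in one function, whereas in the actual code they sit in different places and the rejection reason would have to be threaded between them.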
[19:18:55] * ryankemper backs away slowly while ebernhardson talks to himself
[19:19:04] :P
[19:19:15] heh
[19:19:54] oh right this is the analytics elasticsearch file not ours
[19:20:19] ryankemper: so it should be everything with role(statistics::explorer), stat100[4-8].eqiad.wmnet afaict
[19:25:45] yup can confirm that's right, selecting for all hosts that match `profile::analytics::cluster` gives `stat100[4-8].eqiad.wmnet` as well
[19:26:20] I always have to look up the cumin syntax, for that profile it was `P:analytics::cluster::elasticsearch`, and similarly to find all hosts with `role::statistics::explorer` it was `R:class@tag = role::statistics::explorer`
[19:26:43] that role syntax is kinda weird but it works ¯\_(ツ)_/¯
[20:24:55] * ebernhardson is slightly embarrassed how long i've been trying to figure out that i mocked a non-existing `assignments` function in python instead of `assignment` and that's why nothing tests right...
[21:22:25] meh, something wrong with sonarcloud giving mjolnir ci failures :(
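As a footnote to the 20:24 mocking mishap: `unittest.mock` auto-creates attributes on a bare Mock, which is exactly how a misspelled `assignments` can pass silently while the real `assignment` call goes unmocked. A small self-contained illustration (the `Bucketer` class is hypothetical) of how spec'ing the mock surfaces the typo immediately:

    from unittest import mock

    class Bucketer:
        """Stand-in for the real class under test (hypothetical)."""
        def assignment(self, user):
            return "control"

    # Trap: a bare Mock accepts the misspelled method without complaint.
    loose = mock.Mock()
    loose.assignments.return_value = "subTest"  # typo goes unnoticed
    print(loose.assignment("u1"))               # a fresh Mock, not "subTest"

    # Safer: a spec'd mock rejects attributes the real class lacks.
    strict = mock.Mock(spec=Bucketer)
    strict.assignment.return_value = "subTest"  # fine, exists on Bucketer
    try:
        strict.assignments.return_value = "subTest"
    except AttributeError as e:
        print("caught the typo:", e)

The same protection comes from `mock.patch(..., autospec=True)`, which also validates call signatures against the real object.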