[09:45:18] Hello, we recently got an alert for the `cirrussearch-dump-s5.service` on `snapshot1017.eqiad.wmnet` related to a newly created wiki, `aewikimedia`. We had some questions on the root cause and the resolution. Do we need to do the "Create the index" step of adding a new wiki? i.e. https://wikitech.wikimedia.org/wiki/Add_a_wiki#Search or https://wikitech.wikimedia.org/wiki/Search/CirrusSearch#Adding_new_wikis
[09:45:18] https://www.irccloud.com/pastebin/DhekMLAT/
[09:46:28] stevemunene: thanks for the ping, taking a look
[09:47:11] Thanks dcausse, we're pairing on this with btullis
[09:51:41] stevemunene: so indeed the indices do not exist... but the wiki appears to still be under creation at T362529
[09:51:41] T362529: Create a Wikimedians of United Arab Emirates User Group Wiki - https://phabricator.wikimedia.org/T362529
[09:54:58] I'm not clear on the order of operations for creating wikis; IIRC there should be a call to addWiki.php mentioned in SAL, but I don't seem to find it
[09:55:16] addWiki.php "should" create the indices
[09:57:05] dcausse: Agreed. We're not clear on the order things should have been done, either. It was added to `s5.dblist` in this change: https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/1026788 - that file is automatically used by the `cirrussearch-dump-s5.service` on snapshot1017 to decide which wikis to dump.
[09:59:06] btullis: right, will ask Zabe if they ran addWiki.php and if there were issues
[09:59:24] dcausse: Many thanks.
[10:03:58] ah.. I forgot that addWiki.php no longer creates the indices and they must be created manually via the link you shared earlier
[10:11:52] Ack, this means creating the index then populating the index, right?
[10:13:01] stevemunene: might not be necessary to populate the index, it should be empty
[10:13:33] I just ran the command mentioned at https://wikitech.wikimedia.org/wiki/Add_a_wiki#Search, this should fix the various errors we see
[10:14:08] Confirmed working.
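[Editor's note] The manual step discussed above boils down to running the CirrusSearch index-creation maintenance script for the new wiki and then retrying the dump unit. A rough, untested sketch, assuming the usual WMF maintenance-host conventions; the exact script name, flags, and host are assumptions based on the linked wikitech page and should be verified there, not taken from here:

```shell
# Hedged sketch only: the script path and --cluster flag are assumptions
# drawn from the wikitech "Add a wiki" search section; verify before use.
mwscript extensions/CirrusSearch/maintenance/UpdateSearchIndexConfig.php \
    --wiki=aewikimedia --cluster=all

# Once the (empty) indices exist, re-run the failed unit on snapshot1017
# and check its log output:
sudo systemctl start cirrussearch-dump-s5.service
sudo journalctl -u cirrussearch-dump-s5.service --since today
```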
No results, but no error. https://ae.wikimedia.org/w/index.php?search=fish&title=Special%3ASearch&ns0=1
[10:14:34] stevemunene: Do you want to re-run the `cirrussearch-dump-s5.service` service, or should I?
[10:15:35] no worries btullis, go ahead
[10:15:47] Ack, will do.
[10:16:13] lunch
[12:19:47] inflatador: the metric at https://gerrit.wikimedia.org/r/c/operations/alerts/+/1054317/2/team-search-platform/cirrussearch_k8s.yaml#34 requires 1 week of data but will "work" with less, so I think it's fine to merge if we want to
[12:31:44] nvm, missed that Erik already responded
[13:19:42] o/
[13:47:10] a quick note regarding the icinga to alertmanager migration: with icinga you could, for the same alert, set up a warning and a critical threshold. This is apparently not possible with alertmanager without defining two distinct alerts, so Filippo suggested that we keep only the critical one, and that's what I've done
[13:49:32] ACK
[13:59:15] \o
[14:01:06] o/
[14:11:07] \moti wave2
[14:11:10] .o/
[15:39:45] inflatador: i might argue the task you just closed still needs dashboards migrated
[15:42:49] unrelated annoyance: ripgrep respects .gitignore, and mediawiki/core's extensions/ directory has `*` in .gitignore :S
[15:45:10] "ag" has -u but then it digs into .git :/
[15:45:29] yea, same with ripgrep, there is --no-ignore but then it's basically just grep -r
[15:46:46] for mw extensions I find codesearch generally better than local search
[15:47:36] i try using that at times, but it's always a bit awkward, and i can't do vi $(rg foo | cut -d : -f 1 | uniq)
[15:48:12] ebernhardson: ACK, feel free to reopen. We're in ticket grooming and haven't seen movement on that one in a while. Could take a look during pairing today maybe
[15:48:43] inflatador: oh, i'm actually totally on the wrong train of thought. Somehow i was thinking statsd->prom and those dashboards.
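[Editor's note] The `vi $(rg foo | cut -d : -f 1 | uniq)` trick above works because grep-style tools print matches grouped by file, so duplicate filenames are always consecutive and `uniq` suffices without a `sort`. A small self-contained demo of the same pipeline, using plain `grep -r` since ripgrep may not be installed everywhere, on throwaway files made up for the demo:

```shell
# Build a unique file list from grep-style match lines ("path:match").
# Matches within one file are consecutive, so `uniq` alone dedups them;
# the final `sort` just makes the output order deterministic.
tmp=$(mktemp -d)
printf 'foo\nfoo\n' > "$tmp/a.txt"   # two matches, one file
printf 'bar\n' > "$tmp/b.txt"        # no match
printf 'foo\n' > "$tmp/c.txt"        # one match
files=$(grep -r foo "$tmp" | cut -d : -f 1 | uniq | sort)
echo "$files"
rm -rf "$tmp"
```

With ripgrep itself, `rg -l foo` (or `vi $(rg -l foo)`) yields the unique file list directly, and also copes with filenames that contain `:`.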
No, it's all fine
[15:49:22] i should try reading more than the little box that pops up at the bottom of phab :P
[15:55:58] it's all good
[15:56:03] Workout, back in ~40
[16:12:18] just saw T368714 :/
[16:12:19] T368714: kafka-main replacement nodes don't fit kafka-main (storage wise) - https://phabricator.wikimedia.org/T368714
[16:15:30] hope they can fit additional disks to the replacement machines
[16:18:40] ouch
[16:26:05] i guess there are always bigger disks to replace the current ones, if more don't fit.
[16:27:51] yes, but apparently the cost is a bit prohibitive and they're looking into reusing hard drives from decommissioned hosts
[16:29:36] dinner
[16:42:45] back
[16:43:37] that Kafka ticket sounds like a bit of a whoops
[16:44:26] * inflatador wonders if we'll ever use Ceph like a SAN
[16:47:12] yea, suspect it's a miscommunication between raw capacity and usable RAID 10 capacity, but who knows
[16:47:26] (there are probably tickets, but i'm not bothering to look :P)
[16:48:19] ACK, that way lies madness ;P
[17:35:15] dcausse: if you like, I can work on patches to get rid of the puppet checks referenced in this relation chain: https://gerrit.wikimedia.org/r/c/operations/alerts/+/1054317 . We don't need to merge them until the new alerts are verified
[17:41:28] lunch, back in ~40
[17:56:02] * ebernhardson is having a hard time understanding where there are separate EventBusFactory::{getInstance,getInstanceForStream} methods.
[17:56:06] s/where/why/
[18:15:39] * ebernhardson suspects getInstance is never invoked, but hard to say for sure
[18:25:05] lunch taking a lil long, will be about 10' late to pairing
[18:31:17] * ebernhardson was failing to read and figured it out :P
[19:22:38] picking up kids, back in ~1h
[19:47:15] lunch
[20:25:49] back
[20:33:10] back
[20:59:15] I think my display hates word processing. First I had phantom spaces and cursors out of place in Google Docs. Now I'm getting phantom indents in Apple Pages
[21:06:16] ah, I think I figured out the Pages thing. Looks like `--` confuses its word wrap
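[Editor's note] On the raw vs. usable RAID 10 confusion discussed earlier: RAID 10 mirrors every drive and stripes across the mirrored pairs, so usable capacity is half the raw total, which is an easy factor of two to lose between a spec sheet and a procurement ticket. A toy calculation with made-up numbers (not the actual kafka-main hardware):

```shell
# RAID 10 = striped mirrors, so usable capacity is raw/2.
# Drive count and size below are hypothetical, purely for illustration.
drives=8
per_drive_tb=4
raw_tb=$((drives * per_drive_tb))   # what the spec sheet advertises
usable_tb=$((raw_tb / 2))           # what the filesystem actually gets
echo "raw=${raw_tb}TB usable=${usable_tb}TB"
```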