[08:06:40] The wrong v6 IP is back on cp2031...
[08:07:10] or was it fully removed? volans?
[08:07:20] I'm running a tcpdump to try to find out if that's still coming
[09:26:37] XioNoX: I didn't touch anything, but my understanding was that br.andon did manually remove the :118: addresses from the B2 hosts
[09:28:30] checking that one host, it's still there; however my tcpdump capture hasn't seen any RA for :118: in 1.5h, so I don't think they're still happening
[09:30:39] cp2031 does indeed have the wrong IP in the known hosts file
[09:31:11] and is the only one in B2
[09:33:06] maybe the cumin command worked, but this one was done manually and it didn't work there?
[09:37:04] I'm removing it manually
[09:37:22] checking backlog
[09:37:37] done
[09:37:55] yeah, cp2031 was the one manually fixed before the others
[09:37:58] so not sure
[09:40:12] let's see if it comes back
[09:40:48] hmm, cp2031 got rebooted at 01:11 and had really weird memory issues at ~03:30
[09:40:58] (apt-get, puppet, exim segfaulting)
[14:31:23] Amir1: hello, re: T326892
[14:31:24] T326892: Make Vector 2022 the default skin on English Wikipedia - https://phabricator.wikimedia.org/T326892
[14:31:35] I just added url/api_search url:^/w/rest.php/v1/search/
[14:32:50] also see:
[14:32:51] requestctl get action cache-text/api_search
[14:32:56] expression: pattern@url/api_search AND pattern@sites/enwiki
[14:33:03] does this look fine to you?
[14:34:05] I think you said that banning the search for the JS is preferred. would that then be https://en.wikipedia.org/w/load.php?lang=en&modules=%40wikimedia%2Fcodex-search%2Cvue%7Cskins.vector.search&skin=vector-2022&version=eb4ez
[14:34:09] ?
[14:34:22] in which case, we can add a separate pattern and action for that
[14:47:43] Amir1: see the security channel
[14:47:50] noted
[14:57:17] <_joe_> !issync
[14:57:17] Syncing #wikimedia-sre (requested by joe_oblivian)
[14:57:19] Set /cs flags #wikimedia-sre sirenbot -O
[14:57:21] Set /cs flags #wikimedia-sre akosiaris +Afiortv
[14:57:39] good bot
[14:58:01] _joe_: thanks!
[14:58:36] <_joe_> synced everywhere
[15:14:13] https://phabricator.wikimedia.org/T327286
[15:14:13] Sorry for mentioning this in multiple places, but this is a feature I have been anticipating for a long time. I believe it has something to do with the SRE team, and I would like to hear your opinions, thanks!
[18:39:19] welcome kavitha_ !!!
[19:04:46] Has anyone already noticed the increased latency since 18:20 UTC? e.g. https://grafana.wikimedia.org/d/RIA1lzDZk/application-servers-red?orgId=1&viewPanel=67
[19:04:54] this coincides with vector2022, right?
[19:07:48] akosiaris: is that artificial reqs going straight to the appservers from inside?
[19:08:12] this matches https://sal.toolforge.org/log/yRAdxoUB8Fs0LHO5uKkO
[19:08:56] yeah, it's a pretty sharp increase too, whereas the vector thing was slightly earlier in the day and was ramped up in percentages
[20:23:51] <_joe_> the reason is not that deployment, but rather
[20:23:53] <_joe_> https://grafana.wikimedia.org/d/RIA1lzDZk/application-servers-red?orgId=1&viewPanel=17
[20:25:22] <_joe_> something stopped calling us, and it was calling us *a lot*
[20:46:14] zooming out to 24 hours on both graphs helps too: https://grafana.wikimedia.org/goto/v8fdiVT4k (percentile latency) and https://grafana.wikimedia.org/goto/Y0VFm4o4k (request rate)
[20:46:43] something started ~06:15 and *ended* ~18:20
[20:47:21] the end time corresponds with the vector2022 deploy, but the start time doesn't match anything I can find immediately
[20:48:02] so the end time would be a weird coincidence, but it could just be uncached external traffic
[22:38:22] <_joe_> rzl: yeah, it was a bytedance wikidata scraper
[22:52:43] nod
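
As an aside on the cp2031 issue from the top of this log: a minimal sketch of the kind of checks involved, assuming the stray :118: addresses were picked up from router advertisements; the interface name, address, and /64 prefix length below are placeholders, not the exact values used on cp2031.

    # watch for ICMPv6 router advertisements (type 134) to confirm whether :118: RAs are still arriving
    sudo tcpdump -ni <iface> 'icmp6 and ip6[40] == 134'
    # manually remove a stray autoconfigured address if one is still present (address and prefix are placeholders)
    sudo ip -6 addr del <wrong-:118:-address>/64 dev <iface>

The tcpdump filter matches on byte 40 of the IPv6 packet, which is the ICMPv6 type field when no extension headers are present; that is the usual way to select RAs in pcap filter syntax.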