[00:00:08] You would also need the dependencies
[00:00:44] i mean i can find those bits, but usually in debian if you depend on something it has to be packaged separately and you depend on the -dev version
[00:01:08] lemme double check what these actually do i guess...some libs are header only
[00:04:02] so if the libs do end up being header only does that basically mean we can just pull the files in manually and cut the dependency
[00:04:09] wow, i still have the directory i made this in :)
[00:04:15] no makefile :P
[00:04:26] :P
[00:10:54] hmm, probably not easy to package. ptrace_do is a library and needs a makefile and such run.
[00:11:06] Compiling this thing amounts to: gcc -Lptrace_do/ -o madvise elasticsearch-madvise-random.c proc_maps_parser/pmparser.c -l:libptrace_do.a
[00:11:25] which compiles pmparser.c directly in, and statically links libptrace_do
[00:11:32] maybe just leave the binary :P
[00:16:28] tbh i don't see a lot of value...if ops is forcing us to write this as if it was going to live forever we should start some tmux scripts and call it a weekend :P
[00:21:24] I think that makes sense to me
[00:21:38] I'll push for us to just ship the bin and report back here
[00:59:23] ebernhardson: if you're still around: looks like l.egoktm might be able to wrangle the .deb stuff together for us. assuming he does I want to verify that the new binary works. I should expect to see disk IO drop a bunch after running the new binary, right?
[00:59:27] I'm looking at https://grafana.wikimedia.org/d/000000460/elasticsearch-node-comparison?orgId=1&from=1624490506445&to=1625872906445&var-cluster=elasticsearch&var-exported_cluster=production-search&var-dcA=codfw%20prometheus%2Fops&var-nodeA=elastic2054&var-dcB=codfw%20prometheus%2Fops&var-nodeB=elastic2056 and I know we disabled readahead for 2054 on tuesday but for whatever reason 2054 doesn't have disk throughput metrics
[01:04:05] ryankemper: on systems with heavy io they should drop, systems with low (<~20MB/s) probably won't see much
[01:04:25] specifically read i/o, this shouldn't change writes
[01:04:48] ebernhardson: ack. for whatever reason 2054 has been reporting 0B/s for both...some kind of metrics issue
[01:06:25] hmm, yea it should be reporting some. Checking `iostat 10` I'm seeing 50-100MB/s or so
[01:08:01] for `iostat 10` which device are you looking at
[01:08:25] oh probably sda? since lsblk says that's where /srv/ is
[01:11:52] ryankemper: well, just whichever has traffic :)
[01:12:29] ryankemper: nothing else should be doing much, anything listed there should be dominated by elastic. In the case of 2054 sda and sdb are pieces, md2 is the full read rate
[01:17:06] dinner
[03:17:52] Alright, we've got the mitigation in place across the fleet. The few nodes I've spot checked are showing less IO
[09:14:45] meh... wdqs@eqiad is not supposed to receive traffic but some machines there still hang due to mem pressure...
[09:53:14] lunch
[10:10:15] lunch
[12:38:47] just saw that completion is now contacting /rest.php/v1/search/title?q=query&limit=10 on some wikis
[12:39:14] on https://tr.wikipedia.org/
[12:42:32] looks like cirrus* params are not being passed :/
[12:59:35] damn!
[13:00:01] Is that part of the Vue.js migration?
[13:01:28] I think so
[13:01:54] can you create a phab task? You have a better understanding than me of how those parameters look
[13:02:14] sure
[13:02:21] thanks!
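[Editor's note on the cirrus* parameter issue above: CirrusSearch accepts debug/override parameters as extra URI query parameters, and the concern is that the new REST title-search endpoint used by completion does not forward them. Below is a minimal Python sketch of the two request shapes; the endpoint path is the one seen in the log, cirrusDumpQuery is used purely as an example of an existing cirrus* debug parameter, and which parameters should actually be forwarded is what the task is meant to settle.]

```python
from urllib.parse import urlencode

# Base URL and endpoint path as observed in the log above.
base = "https://tr.wikipedia.org/rest.php/v1/search/title"

completion_params = {"q": "query", "limit": 10}
cirrus_params = {"cirrusDumpQuery": 1}  # example cirrus* debug/override param

# What the completion widget requests today: cirrus* params are dropped.
print(f"{base}?{urlencode(completion_params)}")

# What would be needed for the cirrus* params to reach the backend.
print(f"{base}?{urlencode({**completion_params, **cirrus_params})}")
```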
[13:25:42] filed T286043 but this is part of a broader conversation to have I think
[13:25:43] T286043: CirrusSearch custom URI params are not passed to the backend when using the /search/title REST api - https://phabricator.wikimedia.org/T286043
[15:01:29] \o
[15:21:02] o/
[15:25:45] going to skip the unmeeting and start the weekend early, see you in a while, o/
[16:06:14] Enjoy the vacation!
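[Editor's note on the readahead mitigation discussed at the top of the log: the rolled-out binary (elasticsearch-madvise-random.c, built against ptrace_do) uses process injection to call madvise(2) with MADV_RANDOM on the index files Elasticsearch has memory-mapped, which tells the kernel to skip readahead on those mappings and is why read I/O drops. The sketch below only demonstrates the underlying madvise call on a file it maps itself, with a hypothetical path; it does not do the ptrace_do injection the real tool performs.]

```python
import mmap
import os

# Hypothetical path for illustration; the real target is the set of index
# files the Elasticsearch process already has mapped (visible in its
# /proc/<pid>/maps).
path = "/srv/example/index.dat"

fd = os.open(path, os.O_RDONLY)
try:
    size = os.fstat(fd).st_size
    mm = mmap.mmap(fd, size, prot=mmap.PROT_READ)

    # MADV_RANDOM: advise the kernel that access to this mapping will be
    # random, so it should not perform readahead. Requires Python 3.8+ on
    # Linux for mmap.madvise / mmap.MADV_RANDOM.
    mm.madvise(mmap.MADV_RANDOM)

    mm.close()
finally:
    os.close(fd)
```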