[08:46:10] ebernhardson, ryankemper: I moved our 1:1 for tomorrow (I need to go to a SexEd presentation for Oscar this evening)
[08:46:22] ebernhardson: let's try to do our ITC tomorrow if that's OK with you
[11:29:08] lunch
[14:14:01] o/
[14:20:24] o/
[14:20:41] looks like wdqs1009 corrupted itself, wdqs1010 is still going
[14:28:04] yes, I think you can stop the import on wdqs1009 :(
[14:28:16] I just restarted it
[14:29:26] * inflatador crosses fingers
[15:51:59] \o
[15:52:57] perhaps interesting: a bunch of Yandex source repos leaked and someone put up a list of ~1900 ranking factors they use: https://yandex-explorer.herokuapp.com/search?q=&o=all
[16:02:18] ryankemper: triage: https://meet.google.com/eki-rafx-cxi
[17:07:51] dcausse: FYI, this updates the JRE. Don't know if it explains the increased memory requirements you were seeing in yarn, but just in case: https://gerrit.wikimedia.org/r/c/operations/docker-images/production-images/+/884351
[17:16:22] inflatador: thanks for the heads up! it's unrelated, because the mem increase I saw was in yarn (it does not use docker images) and it runs java8
[17:17:27] oh yeah, I know it doesn't use docker, but I didn't know it was using java8
[18:39:17] lunch, back in ~1h
[19:55:08] back
[20:39:48] * ebernhardson realizes I have no idea what git commit the version of mjolnir used in yarn is
[20:40:18] guessing it's the commit 'Increase evrsion to 1.1', but it's not actually recorded in the wheel anywhere
[21:28:44] meh... I changed the conda environment to remove specific versioning so it could choose on its own. It's been "solving environment" for 30 minutes now :S
[22:12:38] finally got some things running... it doesn't start spark yet for some reason, but at least it's installing deps and doing something now :)
[22:15:57] some sort of scala mismatch... should be fun: java.lang.NoSuchMethodError: 'scala.collection.mutable.ArrayOps scala.Predef$.booleanArrayOps(boolean[])'
[23:11:58] interesting, I hadn't previously looked but pulled some numbers: taking the avg across instances of the p95 per-node full_text latency, it dropped from 180-195ms to 160-170ms when the incoming links jobs were turned off. Similarly, more_like moved from 220-235ms down to 195-205ms
[23:12:23] active thread usage in the search pool was reduced by 200 threads
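(editor's note) The 20:39–20:40 messages point out that the git commit behind the deployed mjolnir wheel isn't recorded anywhere. A minimal sketch of one way to close that gap, assuming a hypothetical setup.py; the names `mjolnir/_build_info.py` and `GIT_COMMIT` are made up for illustration, and this is not how mjolnir actually handles versions:

```python
# Hypothetical setup.py sketch: stamp the current git commit into the package
# at build time so a deployed wheel can be traced back to its source.
import subprocess
from pathlib import Path

from setuptools import find_packages, setup

# Ask git for the commit being built (assumes the build runs inside a checkout).
commit = subprocess.run(
    ["git", "rev-parse", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Written before setup() collects modules, so the file ships inside the wheel.
Path("mjolnir/_build_info.py").write_text(f'GIT_COMMIT = "{commit}"\n')

setup(name="mjolnir", version="1.1", packages=find_packages())
```

A deployed wheel could then answer the question directly with `python -c "from mjolnir._build_info import GIT_COMMIT; print(GIT_COMMIT)"`; setuptools_scm is the off-the-shelf tool for deriving version/commit metadata from git instead of hand-rolling it.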
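(editor's note) The NoSuchMethodError at 22:15 (`scala.Predef$.booleanArrayOps` expected to return `scala.collection.mutable.ArrayOps`) is the classic symptom of mixing artifacts built against different Scala binary versions, e.g. code compiled against 2.11 running with a 2.12 scala-library. A small sanity-check sketch, assuming `spark-submit` is on the PATH and prints its standard "Using Scala version ..." banner line:

```python
# Print the Scala binary version the local Spark distribution was built with,
# to compare against the _2.1x suffix of the jars bundled with the job.
import re
import subprocess

proc = subprocess.run(["spark-submit", "--version"], capture_output=True, text=True)
banner = proc.stdout + proc.stderr  # Spark prints its version banner on stderr

match = re.search(r"Using Scala version (\S+)", banner)
print("Spark's Scala version:", match.group(1).rstrip(",") if match else "unknown")
```

If the job's jars carry a different Scala suffix than what this prints, rebuilding against the matching `_2.1x` artifacts is the usual fix.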