[11:22:55] lunch
[11:37:42] errands
[15:01:13] workout, back in ~40
[15:01:28] \o
[15:05:04] o/
[15:30:42] * ebernhardson realizes cirrus always issues a delete to the redirect page_id when updating a redirect... should SUP?
[15:31:02] it's why the `redirectInIndex` remediation only requires a links update job and not a paired delete
[15:31:59] oh, we do. I just didn't see it :P
[15:32:42] it happens much earlier by generating two events
[16:08:32] ryankemper just a heads-up, brouberol and I were working on the "Elasticsearch as a Service" design doc, we changed the name to "Mutualized Opensearch"... naming (as everything) is still tentative
[16:25:58] lunch, back in ~1h
[17:04:12] back
[17:05:41] seeing some wdqs sparql probes failing in codfw... taking a look now
[17:06:39] dcausse: my meeting now is cancelled. would now be an okay time to WDQS next()?
[17:06:52] (or do tomorrow?)
[17:17:18] * ebernhardson can't decide if the oldDocument problem generation should stay in cirrus or live in flink... cirrus is convenient because it can provide namespaces, index names, etc.
[17:20:48] i guess we are fetching afterwards anyways, so it would come later...
[17:37:48] dr0ptp4kt: sorry, missed your ping; tomorrow might be better, I'm alone with the kids on Thursday evenings
[17:38:12] * dr0ptp4kt okay! see you tomorrow! have a good night dcausse!
[18:32:31] I'm looking at the storage requirements for the new shared ES cluster... does anyone know how long we need to keep apifeatureusage indices around? Looks like we have some that are several years old: https://phabricator.wikimedia.org/P58885
[18:39:43] inflatador: i think we are only supposed to retain 90-ish days, sec
[18:41:29] inflatador: relevant bits are in puppet hieradata/role/common/apifeatureusage/logstash.yaml. Curator is configured there to delete indices older than 91 days, but something there isn't working
[18:42:12] ebernhardson thanks, will get a ticket started and check it out
[18:45:43] this gives me an idea for a one-liner... paste a path and open it in the gerrit web UI
[18:47:44] hooray!
[18:47:46] `open "https://gerrit.wikimedia.org/r/plugins/gitiles/operations/puppet/+/refs/heads/production/$1"`
[19:24:24] * ebernhardson wonders if clusterGroup should have been called crossClusterName everywhere
[20:56:46] ryankemper cookbook failed with "cannot pool a node whose weight is equal to 0"... probably means we need to add some of the newer hosts into conftool. Checking now
[20:57:59] inflatador: yup, exactly. Will need to run the equivalent of `sudo confctl select 'name=elastic2087\.codfw\.wmnet' set/weight=10:pooled=yes` on puppetmaster
[20:58:12] (otw to gym so will leave to you, or can run the cmds when I'm back)
[20:58:31] ryankemper ACK, I'll take care of it
[21:06:41] FWIW confctl is on cumin hosts too
[21:07:36] break, back in ~20
[21:34:23] back
[22:16:28] ryankemper cookbook still running under my user on cumin2002. I'm heading out in about 10, but all looks well so far. This batch only has 2 hosts, so I'm guessing it's done after that
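
For the apifeatureusage retention question (18:41:29): a minimal shell sketch for spotting indices past the 91-day cutoff that Curator should have deleted. The host/port and the index name pattern are assumptions; the authoritative retention config is the puppet hieradata file mentioned above.

```
# List apifeatureusage indices created more than 91 days ago
# (localhost:9200 and the apifeatureusage- prefix are assumptions).
curl -s 'http://localhost:9200/_cat/indices/apifeatureusage-*?h=index,creation.date.string' \
  | awk -v cutoff="$(date -d '91 days ago' +%Y-%m-%d)" '$2 < cutoff' \
  | sort -k2
```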
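
The 18:47:46 one-liner, expanded into a small shell function as a sketch. The `gtiles` name is made up; the defaults assume operations/puppet's `production` branch (other repos would need a different ref), and `xdg-open` stands in for macOS `open` on Linux.

```
# Open a repo path in the gitiles web UI (sketch; function name is hypothetical).
gtiles() {
  local path="$1" repo="${2:-operations/puppet}" branch="${3:-production}"
  xdg-open "https://gerrit.wikimedia.org/r/plugins/gitiles/${repo}/+/refs/heads/${branch}/${path}"
}
# usage: gtiles hieradata/role/common/apifeatureusage/logstash.yaml
```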
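
And for the repooling step (20:57:59): the quoted confctl command, wrapped in a loop for several hosts. Hostnames below are illustrative, not the actual batch; per the log this runs on puppetmaster, or FWIW on a cumin host.

```
# Give new hosts a nonzero weight and pool them so the cookbook can proceed
# (illustrative hostnames; weight=10 taken from the command quoted above).
for host in elastic2087 elastic2088; do
  sudo confctl select "name=${host}\.codfw\.wmnet" set/weight=10:pooled=yes
done
```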