[08:22:24] hello folks
[08:22:33] just powercycled elastic2043, it wasn't responsive
[08:22:44] I opened a task, I found some dimm/memory errors in getsel
[08:22:56] inflatador: --^
[08:24:54] host is up now, seems fine
[08:25:12] elukey: thanks! :)
[10:35:49] lunch
[13:04:28] Thanks elukey! Will take a look
[14:15:18] dcausse you were right, it looks like DSE deploys from the same repo. At first glance, it looks like dse-k8s-eqiad only has infra-related namespaces so far (cert-manager, istio, etc.)
[14:17:27] inflatador: ok, good to know
[14:53:12] Are we retro-ing today?
[15:00:26] yeah, let's do an elasticsearch 7 retro
[16:04:39] cindy was still running... looking at the es7 branch :P let's see if it's happy with master
[16:05:30] runs an unsupported php version, needs minor updates at least
[16:12:10] runs an unsupported debian, we didn't release php7.4 packages for stretch :(
[16:35:50] bah
[18:11:10] :S we use lxc for virtualbox on cloud, they ran into some issue that no one wanted to debug, so the debian/buster64 images for lxc were removed: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=951347
[18:11:24] s/virtualbox/vagrant/
[18:12:43] there are random other users that uploaded lxc images for debian buster, but the most-used one has 32 downloads, which doesn't inspire confidence :P
[18:29:35] that looks vaguely familiar, I may have gone down that rabbit hole a while back
[18:33:33] gonna try and vote again, wish me luck!
[18:33:47] (hopefully back in ~30)
[18:35:43] * ebernhardson ponders doing the ridiculous... install the old version and dist-upgrade it :P
[19:20:35] back
[19:20:42] why not?
[19:20:45] ;)
[19:27:00] ebernhardson: It might be time to think about how to move cindy to docker containers.
mediawiki-vagrant is on life support at best
[19:27:30] * bd808 is not exactly happy about that, but it is what it is at this point
[19:27:35] bd808: i have a half-working thing with docker containers, the problem is the docker containers are not even half implemented
[19:27:39] like, no job queue? seriously?
[19:28:01] yeah, I also have feelings about the state of mediawiki + docker tooling
[19:28:14] on my local dev env i have a completely silly backgrounded tmux session running jobs in a bash loop
[19:29:05] which runner do we have in mw-vagrant? The stand-alone one that aaron made at some point?
[19:29:59] hmm, yea i think the puppet still runs aaron's redis version
[19:30:59] seems to be https://github.com/wikimedia/mediawiki-vagrant/blob/master/puppet/modules/mediawiki/manifests/jobrunner.pp which is the code from https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/services/jobrunner/+/refs/heads/master
[19:32:03] i'm sure given a few weeks i could get the docker stuff up to spec... but last time i took those two weeks i only got as far as having a plausible local dev env that can run the integration testing, but not all that repeatable or sharable (i have a hacked version of the go client, among other things)
[19:32:16] fun times
[19:32:56] I have some stuff, but it's really just the bits I need to have a test env for Striker, which needs a fake metawiki and wikitech
[19:33:17] I just used docker-compose because I didn't need flexibility
[19:36:44] it's installing now, after a dist-upgrade of the stretch image into buster. If this works will probably let it run for a bit, but otherwise i guess i have to figure out how to put my dev env in a cloud instance and run the integration tests that way
[21:13:40] meh, it turns out there is now a user in ldap named cindy as well :P
[21:14:04] :q
[21:35:11] ebernhardson you mentioned something about the paths being incorrect on the data-transfer cookbook, but I can't find where?
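The "backgrounded tmux session running jobs in a bash loop" mentioned in the discussion above can be sketched roughly as below. The function name, the `MW_DIR` variable, and the `runJobs.php` flags shown in the comments are illustrative assumptions, not the actual setup:

```shell
#!/bin/sh
# Rough sketch of a "jobs in a bash loop" runner. The command to run each
# iteration is a parameter so the loop itself stays generic; in practice it
# would be something like:
#   php "$MW_DIR/maintenance/runJobs.php" --maxjobs 100
# (MW_DIR and the flags are illustrative, not the actual configuration).
run_loop() {
    cmd=$1          # command to run on each iteration
    max=$2          # iteration cap; the real loop would just use `while true`
    i=0
    while [ "$i" -lt "$max" ]; do
        $cmd || sleep 5   # back off briefly if the runner exits non-zero
        i=$((i + 1))
    done
}

# Example with a stand-in command:
run_loop "echo running-job-batch" 3
```

Running this under tmux (`tmux new-session -d 'sh runner.sh'`) backgrounds it the way described above.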
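The stretch-to-buster dist-upgrade being attempted above amounts to something like the following. This is a sketch of the general Debian in-place upgrade procedure (run as root inside the guest), not the exact commands used:

```shell
# Sketch of an in-place Debian stretch -> buster upgrade (general procedure,
# not the exact commands used above). retarget_sources rewrites a
# sources.list-style file; the apt steps then perform the actual upgrade.
retarget_sources() {
    # point every "stretch" suite reference at "buster"
    sed -i 's/stretch/buster/g' "$1"
}

# Inside the guest, as root, one would then roughly do:
#   retarget_sources /etc/apt/sources.list
#   apt-get update
#   apt-get -y dist-upgrade
#   reboot
```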
[21:40:17] ebernhardson nm, found it ... wasn't about the transfer cookbook anyway https://phabricator.wikimedia.org/T222349
[21:50:23] inflatador: look for your.org in the cookbooks, it's the data-reload.py one
[21:50:31] basically instead of downloading from your.org it should copy off nfs
[22:15:19] ACK, will look into that more tomorrow
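The change suggested above for the data-reload cookbook, copying the dump off an NFS mount instead of downloading it from your.org, looks roughly like this. The function name and both paths are hypothetical, not taken from data-reload.py (which is Python and has its own configuration):

```shell
# Hypothetical sketch of the suggested data-reload change: copy the dump
# from a local NFS mount rather than downloading it from your.org.
# fetch_dump and both example paths are illustrative only.
fetch_dump() {
    src=$1    # e.g. a file under the NFS mount
    dst=$2    # local destination for the reload
    cp -- "$src" "$dst"
}

# instead of downloading from your.org (the current behaviour),
# one would do something like:
#   fetch_dump /mnt/nfs/dumps/<dump-file> /srv/<dump-file>
```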