[05:45:32] !log tools.heritage Deploy latest from Git master: 7ed2ef3, 7ed27df, d1fe905, da4b9a4, 681c3c2, 81f91ab, ede4d73, 1e5e614, c43703e (T295238)
[05:45:36] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.heritage/SAL
[13:11:19] I will let you know when I see Nettrom and I will deliver that message to them
[13:11:19] @notify Nettrom FYI I just confirmed jsub defaults to using `release=stretch` https://gerrit.wikimedia.org/r/plugins/gitiles/labs/toollabs/+/refs/heads/master/jobutils/bin/jsub#727 so the problem you found yesterday was perhaps qsub not having the default + the buster grid being idle, as taavi suggested
[13:24:16] !log admin cloudmetrics1004:~ $ sudo systemctl restart wmcs_monitoring_graphite_rsync.service (T300138)
[13:24:19] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[13:24:19] T300138: cloudmetrics: rsync sometimes fails - https://phabricator.wikimedia.org/T300138
[15:41:34] Is there any way to be issued elasticsearch credentials with finer granularity than just "write access"? It would be nice to have credentials that allowed me to add docs to an index but not delete the index, so a single programming error couldn't destroy a lot of work. Then a second set of credentials that allowed index deletion.
[19:10:18] my tool pod no longer has /mnt/nfs/labstore-secondary-tools-project mounted
[19:12:24] spi-tools-6b9b74966c-7z8wp
[19:12:28] it looks like it should have /data/project mounted still
[19:14:10] apparently it used to have /mnt/nfs/labstore-secondary-tools-project mounted, because that's what's embedded in my python venv.
[19:14:48] has something changed in the configs?
[19:15:40] I haven't changed anything or heard of anyone else doing the same
[19:17:01] hmmm. that's really strange. The tool has been up and running for 105 days, but I don't understand how, since the paths in the venv no longer exist.
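The venv mystery above can be reproduced anywhere: venv entry-point scripts hard-code the absolute path of the interpreter at creation time, so a venv built where NFS is visible as /mnt/nfs/labstore-secondary-tools-project breaks in a pod that mounts the same storage at /data/project. A minimal sketch (the scratch directory is illustrative, not the tool's real layout):

```shell
set -euo pipefail
# Build a throwaway venv in a scratch directory (stands in for the NFS path
# the bastion sees, e.g. /mnt/nfs/labstore-secondary-tools-project/...).
DEMO="$(mktemp -d)/venv"
python3 -m venv "$DEMO"

# The shebang of every generated script embeds that absolute path.
head -1 "$DEMO/bin/pip"
```

If that embedded path is the bastion's mount point, the same scripts fail inside a pod, where the storage is mounted at /data/project instead.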
[19:19:03] it's possible that the last time I built the venv, I did it on the bastion. I think that would explain the paths. But I don't understand how the tool is running.
[19:22:42] everything in spi-tools/www/python/venv/bin references it other than python, python3.7, python3
[19:23:19] no matter the last-mod date (Jul 4 2020, Jul 27 2020, Jun 7 2021)
[19:23:53] yeah, those dates sound about right for the last time I updated the venv and restarted the tool.
[19:26:26] I see stack mentions in uwsgi.log of /data/project/spi-tools-dev/, so I guess that's what's actually running.
[19:26:28] weird
[19:27:16] I'm going to try blowing away the venv on my dev instance and rebuilding it from scratch to see what happens.
[19:28:21] when you create the venv, try specifying the absolute path (/data/project/spi-tools/www/python/venv)
[19:28:40] you shouldn't need to, but if something funky's going on it should help
[19:30:02] I'm going to try the normal way, and if that doesn't work, I'll try your idea.
[19:30:36] I do deployments so infrequently, it's possible I just messed something up the last time because I was out of practice.
[19:30:45] I've got a cheat-sheet, but .....
[19:35:16] here's what I've got in my cheat sheet to create a new venv:
[19:36:01] ... /data/project/spi-tools-dev/python-distros/Python-3.7.3-install/bin/python3.7 -m venv --copies venv
[19:36:16] so I guess that's essentially what you were suggesting anyway, with the absolute path
[19:38:05] why are you using a custom-compiled python and not the system-installed one inside the image?
[19:38:09] ^
[19:46:16] that goes back a long time, to before there was a 3.7 image available, so I just built my own from source.
[19:46:31] which is actually pretty trivial to do
[19:46:56] and I just kept using that.
[19:47:40] I suppose now that you've got 3.7 (and 3.9?) images available, I should probably switch to using that, but inertia....
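Worth noting about the `--copies` flag in the cheat-sheet command: it copies the interpreter binary instead of symlinking it, but the venv still records (and depends on) the base interpreter's prefix, because `pyvenv.cfg`'s `home` line is where the standard library is resolved from. So a venv built from a custom tree such as the Python-3.7.3-install one above keeps depending on that tree existing at the same path. A quick demonstration (scratch directory, any host):

```shell
set -euo pipefail
V="$(mktemp -d)/venv"
python3 -m venv --copies "$V"

# "home = ..." points back at the base interpreter's bin directory;
# the venv loads its stdlib relative to that prefix.
grep '^home' "$V/pyvenv.cfg"

# --copies means the interpreter is a real file, not a symlink.
ls -l "$V/bin/python3"
```

This is one more reason to prefer the system Python inside the Toolforge image: its prefix is guaranteed to exist in every pod.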
[19:47:56] python 3.9 has been available for a few months now
[19:48:12] and 3.7 for much longer?
[19:48:15] yeah, this goes back a couple of years, I think.
[19:48:55] Like I said, I should probably switch over at some point.
[20:31:06] Regarding building your venv on the bastion vs. on a pod: does it only matter when you initially set up the venv, or do you have to do every "pip install" on the pod, even if it's just to install a new module?
[20:32:52] every time
[20:34:58] even if you use --copies (which you should not with system python) it's still best practice
[20:35:37] OK, I've messed this up more than once. I guess I need to figure out a way to automate it, otherwise I'm sure it'll happen again.
[20:37:29] It's one of those insidious things where, if you do it wrong, it's not immediately obvious.
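The automation the last speaker wants could be a small rebuild script along these lines. This is only a sketch with assumed names: TOOL_ROOT defaults to a scratch directory so it runs anywhere, and on Toolforge it would be set to the tool's directory and executed from a pod shell (e.g. via `webservice shell`), per the "every pip install on the pod" rule above.

```shell
set -euo pipefail
# TOOL_ROOT is a placeholder; on Toolforge: TOOL_ROOT=/data/project/<tool>
TOOL_ROOT="${TOOL_ROOT:-$(mktemp -d)}"
VENV="$TOOL_ROOT/www/python/venv"

# Rebuild from scratch so no stale absolute paths survive.
rm -rf "$VENV"
python3 -m venv "$VENV"

# Smoke-test: the venv's own interpreter and pip resolve.
"$VENV/bin/python3" -m pip --version

# On Toolforge, dependency installs would follow, run inside the pod, e.g.:
#   "$VENV/bin/pip" install -r "$TOOL_ROOT/requirements.txt"
```

Scripting it also makes the mistake self-announcing: if the script is only ever run from the pod, the venv can never pick up bastion-only mount paths again.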