[02:03:26] hey! anyone there?
[02:04:21] i'm trying to ssh in to toolforge using a windows machine, but keep getting disconnected right after the publickey packet is sent
[02:04:27] any help would be really appreciated
[02:04:34] here's the last thing i see before i get turfed
[02:04:49] > debug2: we sent a publickey packet, wait for reply
[02:04:50] > Connection closed by 185.15.56.48 port 22
[02:06:46] what are you using to connect?
[02:07:07] PuTTY
[02:07:14] also tried git bash
[02:07:16] same result
[02:07:23] (i'm a windows pleb)
[02:08:21] Does your username in PuTTY match the "Instance shell account name" in https://wikitech.wikimedia.org/wiki/Special:Preferences ?
[02:09:49] yessir
[02:18:32] only other thing I can think to suggest would be to make sure that you have your private key in PuTTY and your public key in https://toolsadmin.wikimedia.org/profile/settings/ssh-keys/
[02:18:58] if that all looks fine, I'd suggest filing a phab task so someone with more access can help
[02:19:10] i do have both of those. will file a task. thank you!
[12:44:38] !log tools deploying volume-admission to tools, should not affect anything yet T279106
[12:44:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:44:44] T279106: Establish replacement for PodPresets in Toolforge Kubernetes - https://phabricator.wikimedia.org/T279106
[13:04:30] !log masz bump RAM quota from 20GB to 36GB (T290793)
[13:04:33] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Masz/SAL
[13:04:33] T290793: Request increased quota for masz Cloud VPS project - https://phabricator.wikimedia.org/T290793
[14:15:42] musikanimal, in regards to https://phabricator.wikimedia.org/T290768, is there an approximate cost of running the current setup? Is it in AWS?
[14:19:55] I'm amused by the "122GB disk space" but then a "15TB PG DB"
[14:22:16] I can ask. I do not think they're using AWS. The BuiltWith profile says they're using a hosting provider called DFN, but I was under the impression it was hardware owned by Gesis https://www.gesis.org/en/institute
[14:24:43] lol yes, the postgres db is huge! As I said, we (WMF) don't strictly need that data right now, but I'm not sure if other consumers of WikiWho are relying on it
[14:25:45] musikanimal, ahh.. So is there any option to continue with the current setup?
[14:26:49] It sounds like not. If it were at a commercial cloud provider, it might be easier to leave in place if needed
[14:27:43] I believe the project was financed under a contract with Gesis, and that ends in early 2022. I was told Gesis probably has little interest in maintaining/paying for it after that, but you make a good point; if it is some sort of cloud hosting maybe we can take over that account. I shall ask them!
[14:28:32] the other advantage of bringing it in-house was we could add more languages, which has been highly requested. But if it's elastic computing maybe that's still possible with the current setup
[14:31:39] musikanimal, I will try and respond on the ticket as well, but I'm trying to understand costs. At first glance, it seems the cost of hardware equals the yearly hosting cost
[17:05:34] !log toolhub Updated toolhub-demo.wmcloud.org to 8c8c4fb
[17:05:37] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolhub/SAL
[17:13:02] !log toolhub Freed up disk space on toolhub-demo01 with `docker image prune -a` to remove all unused containers pulled to the instance (mostly old snapshots of the toolhub image).
[17:13:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolhub/SAL
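
For anyone repeating the cleanup from that last !log entry, a minimal sketch, assuming the Docker CLI is available on the instance. Note that `docker image prune -a` removes unused images (the "old snapshots" mentioned above) rather than containers:

```
# Check what is using the space, then drop images not referenced by any container.
docker system df        # disk usage broken down by images, containers, and volumes
docker image prune -a   # remove all images without at least one associated container
```
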
[17:20:05] !log tools.apersonbot Fixed JSON syntax errors in $HOME/public_html/toolinfo.json. Updated URLs in same file to new toolforge.org standard.
[17:20:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.apersonbot/SAL
[17:22:01] !log tools.wikilinkbot Updated $HOME/public_html/toolinfo.json to add missing "title" and update URL to toolforge.org
[17:22:02] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikilinkbot/SAL
[17:23:07] !log tools.tsreports Fixed JSON syntax errors in $HOME/public_html/toolinfo.json. Updated URLs in same file to new toolforge.org standard.
[17:23:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.tsreports/SAL
[17:24:47] !log tools.gerrit-reviewer-bot Added missing "title" in $HOME/public_html/toolinfo.json.
[17:24:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.gerrit-reviewer-bot/SAL
[17:26:47] !log tools.gerrit-patch-uploader Added missing "title" in $HOME/public_html/static/toolinfo.json.
[17:26:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.gerrit-patch-uploader/SAL
[17:39:16] Is docker-registry.tools.wmflabs.org/toolforge-python39-sssd-base:latest supposed to have `pip` installed? I know the python37 image intentionally didn't
[17:45:06] AntiComposite: pip should be available in virtualenvs, like in the 3.7 image
[17:46:40] https://phabricator.wikimedia.org/P17306
[17:51:11] python3-virtualenv in bullseye seems to pull in pip, while the version in buster doesn't: https://packages.debian.org/bullseye/python3-virtualenv
[17:53:17] is the global install actually causing any issue?
[17:58:43] not sure
[18:01:12] for now the biggest issue I can think of is it not working as users might expect
[18:03:44] `pip install --user` installs to directories outside of the path & wouldn't be picked up by webservice
[18:17:47] I guess we could put something like `echo "no, use a venv"` into /usr/local/bin/pip?
[18:18:24] or print a warning and then exec /usr/bin/pip "$@"
[18:35:54] or stop installing python3-virtualenv? That's the `virtualenv` script, right? `python3 -m venv ...` is what we document using for python3, in part because the virtualenv script is kind of old and sketchy.
[18:37:58] yup
[18:39:32] Which python3 version is available in toolforge now? Has it changed (relatively) recently?
[18:46:46] 3.9 is available in Kubernetes now, I think it's been available for a month?
[18:47:15] Exactly a month since https://lists.wikimedia.org/hyperkitty/list/cloud@lists.wikimedia.org/message/YPNSDYOTXNYVMK2OYK2SRNABE6KHAKBI/ was sent
[18:50:26] (bastions and grid are still 3.5)
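
A minimal sketch of the wrapper idea floated above (print a warning, then fall through to the real pip). The /usr/local/bin/pip path comes from the discussion; the exact wording and the assumption that the real binary lives at /usr/bin/pip are illustrative, not something that was actually deployed:

```
#!/bin/sh
# Hypothetical shim installed as /usr/local/bin/pip in the Toolforge images:
# nudge users towards a virtualenv, then hand off to the real pip unchanged.
echo "warning: installing outside a virtualenv; consider 'python3 -m venv' instead" >&2
exec /usr/bin/pip "$@"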
[20:30:28] majavah: lol, I didn’t expect to receive pull requests on the notwikilambda-k8s repo :D
[20:32:10] lucaswerkmeister: stop making bugs then!
[20:38:14] ooops, and the update deployment was broken, currently at 290 restarts
[20:38:18] I’ve hopefully fixed that now
[20:40:46] !log tools.notwikilambda `git -C ~/www/js/function-evaluator/ config pull.rebase true` to silence `git pull` warnings (presumably due to newer git in node12 container)
[20:40:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.notwikilambda/SAL
[20:40:54] !log tools.notwikilambda `git -C ~/www/js/function-orchestrator/ config pull.rebase true` ditto
[20:40:56] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.notwikilambda/SAL
[20:50:19] !log tools.notwikilambda set cronjob successfulJobsHistoryLimit to 0 (0aa054b391)
[20:50:21] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.notwikilambda/SAL
[21:00:51] !log tools.notwikilambda tried to deploy d028cb3667 (switch to node12) for function-evaluator but the new pod never came up so I undid the rollout; will consider further steps
[21:00:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.notwikilambda/SAL
[21:19:11] !log tools.notwikilambda successfully rolled out node12 for function-evaluator after changing `npm ci` to `npm install` (ecfa5ab9ab)
[21:19:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.notwikilambda/SAL
[21:23:33] !log tools.notwikilambda successfully rolled out node12 for function-orchestrator too
[21:23:34] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.notwikilambda/SAL
[21:57:03] !log admin moving cloudvirt1043 into the 'nfs' aggregate for T291405
[21:57:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[21:57:07] T291405: Reduce or eliminate bare-metal NFS servers - https://phabricator.wikimedia.org/T291405
[22:36:19] !log cloudstore created cloudstore-nfs-01 with a floating ip and a 10GB cinder volume T291406
[22:36:25] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Cloudstore/SAL
[22:36:25] T291406: POC: puppet-provision a cinder-backed NFS server in eqiad1 - https://phabricator.wikimedia.org/T291406
[22:44:41] !log admin sudo touch /tmp/galera.disabled on cloudcontrol1004, the service seems troubled there
[22:44:44] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[22:46:53] !log stopped puppet & mariadb on cloudcontrol1004; it was flapping
[22:46:54] andrewbogott: Unknown project "stopped"
[22:48:51] !log admin stopped puppet & mariadb on cloudcontrol1004; it was flapping
[22:48:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[23:08:05] !log admin ran `echo check > /sys/block/md0/md/sync_action` on cloudcontrol1004 to check raid
[23:08:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
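
Follow-up for the RAID check started in that last !log entry: a sketch of how the check could be started and its progress watched via the standard md sysfs/proc interfaces, assuming the array is md0 as in the log:

```
# Start a read-only consistency check of the software RAID array and watch it run.
echo check > /sys/block/md0/md/sync_action   # begin the check (as logged above)
cat /proc/mdstat                             # per-array state, including check progress
cat /sys/block/md0/md/mismatch_cnt           # blocks found inconsistent by the check so far
```
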