[08:07:37] !log tools.heritage Deploy latest from Git: ef4111a, f70063a, db03f84, 27fa2f6
[08:07:39] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.heritage/SAL
[08:16:57] !log tools.heritage Run ./bin/build.sh to rebuild the virtualenv with Python3.7 on a Buster node
[08:17:00] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.heritage/SAL
[08:17:10] !log tools.heritage Run ./bin/build.sh to rebuild the virtualenv with Python3.7 on a Buster node (for T307269)
[08:17:12] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.heritage/SAL
[08:18:02] !log tools.heritage Trigger a full update_monuments job post-T307269
[08:18:06] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.heritage/SAL
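The build.sh script itself is not shown in this log; the following is a minimal sketch, assuming a conventional virtualenv layout and a requirements.txt, of what rebuilding a tool's virtualenv against Python 3.7 on a Buster node typically involves. Paths and file names are illustrative, not taken from tools.heritage.

    #!/bin/bash
    # Hypothetical virtualenv rebuild, roughly what a ./bin/build.sh might do.
    set -euo pipefail
    VENV="$HOME/venv"              # assumed virtualenv location
    REQS="$HOME/requirements.txt"  # assumed dependency list

    rm -rf "$VENV"             # drop the env built against the old Python/OS
    python3.7 -m venv "$VENV"  # recreate it with the Buster-provided Python 3.7
    "$VENV/bin/pip" install --upgrade pip wheel
    "$VENV/bin/pip" install -r "$REQS"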

[08:20:11] !log tools redis: start replication from the old cluster to the new one (T278541)
[08:20:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[08:20:14] T278541: Toolforge: migrate redis servers to Debian Buster or later - https://phabricator.wikimedia.org/T278541
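The exact migration procedure behind T278541 is not part of this log. As a rough sketch, assuming hypothetical host names, pointing a new Redis server at the old primary and watching it catch up looks like this:

    # Run on the new server; host and port are placeholders, not from the log.
    OLD_HOST=redis-old.tools.example   # hypothetical old primary
    OLD_PORT=6379

    # Start replicating from the old cluster
    redis-cli REPLICAOF "$OLD_HOST" "$OLD_PORT"

    # Check replication state before moving clients over
    redis-cli INFO replication | grep -E 'role|master_link_status|master_repl_offset'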

[10:25:08] I get a lot of "500 Internal Server Error" when my bot tries to log in from toolforge, sometimes it works, but it fails often
[10:30:06] Wurgl: log in to where? the wikis?
[10:30:23] Yes
[10:30:54] could you paste the full error message somewhere, including the 'request id' (a bunch of random looking characters) if it includes one?
[10:31:01] with action=login …
[10:31:41] 500 Internal Server Error \nAn internal server error occurred.\n
[10:31:59] That's all (with \n as a newline)
[10:32:16] that.. doesn't look like something that the wikis would usually generate. what does your code look like?
[10:33:23] This code hasn't been modified in 3 years or so.
[10:34:20] But now pizza … you are not you when you are hungry
[11:14:23] * dcaro lunch
[12:28:03] taavi: Now after lunch, the problem is magically gone. Something had a few hiccups
[12:30:51] Wurgl: weird! at least it works now :/
[12:35:28] de.wikipedia.org <-- was the bad one.
[12:35:52] However, I'll just ignore it
[13:36:26] Any updates on https://meta.wikimedia.org/wiki/Special:OAuthConsumerRegistration/update/1e8e9033d9df4ff0d3e326e9ebbf06fb?
[13:37:43] Make that https://meta.wikimedia.org/wiki/Special:OAuthListConsumers/view/1e8e9033d9df4ff0d3e326e9ebbf06fb
[14:01:40] roy649: not yet, sorry
[14:02:29] Is there anything specific that this is waiting on? Additional information that I could supply?
[14:21:26] tgr: any suggestions on where to go from here? what group makes these approvals?
[14:22:19] Not at this point, I just did not get around to it. We don't have a process for this kind of thing yet; I started a discussion about the need for one a few months ago but then went on a long holiday and have yet to pick up the threads.
[14:23:54] Sorry to be a pest about this, but I've got work that I want to get out which is blocked by this. Who did you start the discussion with?
[14:40:12] Is Quarry working? Even very simple queries such as https://quarry.wmcloud.org/query/64255 seem to be running indefinitely.
[15:09:39] Rook: ^ of possible interest to you
[15:11:16] Well, not indefinitely, but a query of "SELECT 1" shouldn't take 8 minutes to run.
[15:15:16] Does seem to be running slowly. I've yet to establish access to the db side of things to look there. razzi any thoughts?
[15:47:15] 0891247114371 hmm
[15:48:24] Aha! I already have that number on my plonk list
[15:49:42] Wurgl: ?
[15:53:30] oops! Wrong channel
[15:53:32] sorry
[15:53:54] phone spammer in Germany
[16:52:11] hello! (this may be a stupid question): I am trying to create a one-off job with the Toolforge jobs framework. It's a simple bash script that echoes some SQL that is piped to toolsdb with the `mysql` command. This works fine on the grid, but on k8s it doesn't know about `mysql`. What am I missing? I tried using several different container images
[16:52:34] the script basically does `echo "USE mydb; /* my query */;" | mysql --defaults-file=$HOME/replica.my.cnf -h tools.db.svc.wikimedia.cloud`
[16:53:10] musikanimal: maybe try using the full path to the mysql command. could be a PATH issue
[16:55:46] I tried `/usr/bin/mysql`. Where else would it be?
[16:56:09] also the `sql local` etc. don't seem to work within k8s either
[16:57:35] musikanimal: it's /usr/local/bin/mysql
[16:57:38] on prod db servers
[16:57:55] might not be relevant but to answer where it _could_ be
[16:58:44] the "sql" wrapper points to that on mwmaint hosts:
[16:58:45] [mwmaint1002:~] $ which sql
[16:58:45] /usr/local/bin/sql
[16:58:58] still no dice :( Is there an easy way to enter the k8s shell so I can experiment without going through `toolforge-jobs`?
[16:59:26] `docker run docker-registry.tools.wmflabs.org/toolforge-bullseye-standalone:latest find -type f -name mysql`
[16:59:27] no results
[16:59:36] pretty sure that image doesn't include mysql
[17:00:01] (`docker run -it ... bash` to get a shell – and that's on a local system btw, not on toolforge)
[17:00:21] ^ was about to say what Lucas already said
[17:00:43] both things. if it's really not in the image.. which it sounds like.. then it's time to open a ticket
[17:00:49] looks like mysql-common is the only relevant dpkg package installed
[17:01:01] and that is the way to get a bash shell, but not sure about specifics on toolforge
[17:01:02] yeah, sounds like a reasonable thing to include to me
[17:01:25] okay I shall create a task
[17:06:27] mutante: I'm right here! :P
[17:07:10] I admit I'm confused about k8s vs docker here. I guess we're using k8s on top of docker?
[17:07:20] anyway, here's the task: https://phabricator.wikimedia.org/T307486
[17:07:48] thanks to all for your help
[17:08:46] SQL: lol, sorry for the ping. I guess you are used to being highlighted though :)
[17:09:11] mutante: I'm pretty used to it, I just like to joke around - not worried about pings LOL
[17:10:17] ok :)
[17:34:31] musikanimal: the container images are kept as small as possible by design, so for now your best option is to stick to the grid
[17:35:54] okay thanks. I think it's my only option, hehe
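Since the mysql client is not in the Kubernetes images, the advice above was to keep this on the grid. Below is a minimal sketch of that kind of one-off job; the database name and query are placeholders, only the mysql invocation itself is quoted from the log.

    #!/bin/bash
    # run-query.sh -- one-off SQL against ToolsDB; database and query are examples only.
    echo "USE s12345__mydb; SELECT 1;" | \
        mysql --defaults-file="$HOME/replica.my.cnf" -h tools.db.svc.wikimedia.cloud

It could then be submitted once from a Toolforge bastion with something like `jsub -once -N run-query ~/run-query.sh` (jsub flags as commonly used on the Toolforge grid; check the current documentation before relying on them).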
[17:57:41] in cloud VPS, a project admin can remove themselves from a project, but a "member" can't remove themselves because they are.. not an admin. does that seem right?
[17:58:51] mutante: sounds like the current reality, yes (no comment on if that's how it should work or not)
[17:58:51] mutante: yes. that is how OpenStack permissions work
[17:59:25] alright, thanks! would you mind removing me from "integration" ?
[17:59:33] I am in it, but I'm not an admin
[17:59:57] and I forgot why that is.. but it must have been a long time ago
[17:59:58] bd808: actually we have a custom horizon panel to manage those as novaadmin :-) I think there was some sort of issue with limiting that access to a single project last time we looked into it
[18:02:51] !log integration `sudo wmcs-openstack role remove --user dzahn --project integration user` per request
[18:02:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Integration/SAL
[18:03:09] thank you
[18:03:57] I responded to the stretch deprecation mail btw. It said "Hello admin,". So I replied that all members got this mail, not just admins
[18:05:04] komla: ^ That sounds like a bug in the script (maybe a bug I made long ago, maybe something new?)
[18:05:48] nagging non-admins about instance replacement is kind of mean
[18:05:55] I sent that to komla, out-of-office agent said they are back tomorrow.
[18:06:11] which mail? I haven't got any for my project
[18:06:23] * taavi really hopes one was not sent to all members of the 'tools' project
[18:06:40] yea, I may have been in that state because I clicked to remove myself from adminship and then killed my ability to remove myself completely.
[18:06:45] sometime in the past
[18:07:02] taavi: '[IMPORTANT] - All Debian Stretch VMs should be removed by June of 2022'
[18:07:02] taavi: the ones I have are "[IMPORTANT] - All Debian Stretch VMs should be removed by June of 2022"
[18:07:36] 'No messages match your search'
[18:07:37] hmm
[18:08:59] I see the cloud-announce thread with almost the same name, but it does not have an '[IMPORTANT]' prefix
[18:09:03] I got them on my personal email too rather than work... which is super weird.
[18:09:05] would the script get the "who is admin" data in the moment it runs? or maybe that was a snapshot from the past when I was still actually admin.. but I wouldn't know when it changed
[18:09:37] the only place we track that is in the Keystone service itself mutante
[18:09:44] *nod*
[18:10:06] long ago there was some LDAP mirroring, but now it's just Keystone's mariadb table
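The wmcs-openstack wrapper in the SAL entry above is Wikimedia-specific, but the operation maps onto standard python-openstackclient commands. A rough sketch, assuming admin credentials are already loaded in the environment:

    # List who holds which role on the project before changing anything
    openstack role assignment list --project integration --names

    # Drop the 'user' role for a single account (the equivalent of the SAL entry above)
    openstack role remove --user dzahn --project integration user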
[18:18:05] !log admin updated 'puppet-enc' endpoints on the keystone catalog to use https and port 443
[18:18:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[20:16:14] !log tools.lexeme-forms deployed d8429a8740 (l10n updates)
[20:16:18] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.lexeme-forms/SAL
[20:38:11] !log admin upgrading clouddb2001-dev in place
[20:38:14] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[23:58:40] !log striker Shutdown legacy stretch instances (T306096)
[23:58:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Striker/SAL
[23:58:43] T306096: Cloud VPS "striker" project Stretch deprecation - https://phabricator.wikimedia.org/T306096
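The exact commands behind the striker shutdown are not in this log; as a sketch of the usual OpenStack CLI flow for retiring legacy instances (project name taken from the log, instance name hypothetical):

    # See what instances the project still runs
    openstack --os-project-name striker server list

    # Stop (not delete) a legacy Stretch instance so it can be cleaned up later
    openstack --os-project-name striker server stop <instance-name-or-id>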