[08:20:46] Thanks for helping with the Wikimedia Chat question yesterday. I have now recreated it as a Phabricator issue: https://phabricator.wikimedia.org/T335471
[14:17:52] hi folks, there's an alert about the labtest-puppetmaster.wikimedia.org cert about to expire on the production puppet master; is that something that's used, or can I nuke it?
[14:21:00] sounds unused to me
[14:22:23] yeah, I suspected as much (it's not in DNS) and wanted to double check
[14:22:38] dcaro maybe ^ ?
[14:23:46] I think it's not used, no; andrewbogott or bd808 might have more historical background ^
[14:24:20] You can delete it.
[14:24:38] I don't remember what it's for, but we can surely live without it :)
[14:44:38] ok! thank you folks
[14:44:57] {{done}}
[16:59:19] Per the instructions at https://wikitech.wikimedia.org/wiki/Help:Toolforge/Database#User_databases I “became” my tool, but now I can’t log in to the SQL server because I don’t have a replica.my.cnf file
[17:03:40] @harej: is that with a brand new tool? Every tool should have a replica.my.cnf
[19:07:57] There we go. The file is now created.
[19:29:56] @harej: awesome. the maintain-db-users process is usually pretty quick (a few minutes), but I guess it took a while for you this time.
[19:30:24] I might have just needed to wait a couple more minutes; something else took my attention in the meantime
[20:33:09] !log tools Started process to rebuild all buster and bullseye based container images per https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Kubernetes#Building_toolforge_specific_images
[20:33:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[20:42:56] !log tools Container image rebuild failed with GPG errors in buster-sssd base image. Will investigate and attempt to restart once resolved in a local dev environment.
[20:43:00] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[20:50:27] !log tools Started process to rebuild all buster and bullseye based container images again. Prior problem seems to have been stale images in local cache on the build server.
[20:50:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[20:51:10] is UTF-8 supported in Lua modules?
[20:51:12] For string.sub("Bangladeş", -2)
[20:51:15] I get "ş" instead of "eş"
[20:51:19] Is there any workaround for this?
[20:51:56] @Yetkin: Take a look at https://www.mediawiki.org/wiki/Extension:Scribunto/Lua_reference_manual#Ustring_library
[20:51:57] https://www.mediawiki.org/wiki/Extension:Scribunto/Lua_reference_manual#Ustring_library
[20:52:03] dangit
[20:52:37] jinx :)
[20:52:37] https://www.mediawiki.org/wiki/Extension:Scribunto/Lua_reference_manual/vi#mw.ustring.sub
[20:52:51] Why have I ended up on /vi...
[20:53:16] it's a drop-in replacement, so you should be able to use mw.ustring.sub("Bangladeş", -2)
[20:56:04] 👍 thanks man
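(Side note on the Ustring exchange above: Lua's built-in string library counts bytes, while Scribunto's mw.ustring library counts Unicode code points, which is why string.sub cuts "Bangladeş" in the middle of the two-byte "ş". A minimal sketch, runnable only inside a Scribunto module since the mw global does not exist in plain Lua; the variable names are made up for illustration.)

    -- "ş" (U+015F) is two bytes in UTF-8, so byte-based indexing can land
    -- in the middle of a character.
    local s = "Bangladeş"

    local byteTail = string.sub(s, -2)      -- last two *bytes*: just "ş"
    local charTail = mw.ustring.sub(s, -2)  -- last two *code points*: "eş"

    mw.log(byteTail, charTail)  -- prints to the Scribunto debug console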
[23:16:38] !log tools.glamtools Hard stop && start cycle to reset Deployment and all dependent objects (T335520)
[23:16:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.glamtools/SAL
[23:37:20] Could someone from Toolforge kick whois-referral out of its sleep, please? Likely just needs a reboot, methinks
[23:41:04] I'm trying to get it restarted AmandaNP, but the kubernetes cluster is being stupid slow at the moment...
[23:41:24] ah, so that was you.
*me also did a webservice restart but was wondering why it already seemed to have restarted before then*
[23:41:32] "whois-referral-7c7858b4f5-w8fcs 0/1 ContainerCreating 0 2m36s"
[23:41:45] No major rush, it's just been annoying since CUing last night
[23:43:17] why is the k8s cluster so sad suddenly?
[23:44:03] the old pod seems to be partially stuck on disk I/O, like I experienced last night too
[23:44:13] (though in this case it only seems to affect three out of five uwsgi processes)
[23:44:41] * bd808 expects Sammy to show up soon, pissed that bd808 broke her tool
[23:45:05] (the two non-stuck processes seem to be using ~25% CPU constantly, having amassed 69h of CPU time by now)
[23:45:43] @lucaswerkmeister: are they on the same k8s node? Do we have a sick one out there?
[23:46:01] nope, yesterday it was worker-30, this one’s worker-67
[23:47:08] (the old wd-image-positions pod is apparently finally gone btw)
[23:55:14] @lucaswerkmeister: I made T335543 if you have anything to add at this point.
[23:55:17] T335543: Pods getting stuck in "Terminating" status - https://phabricator.wikimedia.org/T335543
[23:59:33] !log tools `kubectl drain --ignore-daemonsets --delete-emptydir-data --force tools-k8s-worker-67` (T335543)
[23:59:39] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
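(For reference, a rough sketch of the kind of triage behind that last SAL entry, for pods stuck in "Terminating" (T335543). It assumes admin credentials on the Toolforge Kubernetes cluster; only the drain command is taken verbatim from the log, the node name is the one mentioned there, and the tool-whois-referral namespace is an assumption based on Toolforge's usual tool-<name> naming.)

    # List the tool's pods: new pods sit in ContainerCreating while the old
    # one stays in Terminating, and -o wide shows which node each is on.
    kubectl get pods -n tool-whois-referral -o wide

    # Inspect the node the stuck pod was scheduled on for I/O or kubelet trouble.
    kubectl describe node tools-k8s-worker-67

    # Evacuate the suspect node (as logged above), then re-admit it once healthy.
    kubectl drain --ignore-daemonsets --delete-emptydir-data --force tools-k8s-worker-67
    kubectl uncordon tools-k8s-worker-67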