[01:45:19] bd808: I think the freenode bridge bot can be shut down
[01:46:04] Yeah - people can bridge using wm-bot if they really want it
[01:46:46] legoktm: a discussion of that for this channel is on the agenda for the WMCS team meeting tomorrow. :)
[01:47:43] Anyway bd808 it’s offline on freenode again
[01:58:06] !log tools.bridgebot Disabled all freenode connections
[01:58:08] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.bridgebot/SAL
[02:05:22] !log tools.stashbot Shutdown freenode bot
[02:05:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stashbot/SAL
[02:06:24] !log tools.jouncebot Shutdown freenode bot
[02:06:26] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.jouncebot/SAL
[02:43:28] !log tools.wikibugs Shutdown freenode version
[02:43:31] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL
[03:08:53] Freenode has indeed started hijacking channels such as #wikimedia-libera
[03:19:30] yeah, it looks like they are walking the list of all the soft closed channels and making a hostile takeover of our namespace. I hoped it would be a couple more weeks before that happened, but sadly I'm not surprised.
[03:53:09] ah, I see, any channel that had something like "we're moving to libera" in the topic was closed. Hostile takeover, indeed!
[03:53:52] yup!
[03:53:55] yup
[03:54:30] it's kinda turned things from "please move along to libera at your earliest convenience" to "grab what you can and run"
[04:02:09] well if you just wanted to close the channel and kick everyone, it's actually kind of convenient, hehe
[04:05:11] just looking for a silver lining. The new Freenode really showed their true colours doing that to an organization like us who were honestly just trying to stay afloat amidst the chaos
[04:05:35] I would have preferred more time, but whatever
[04:08:13] PSA: get in the queue for your Wikimedia cloak by `/msg wmopbot cloak` on Libera Chat and then follow the prompts. It does require you to interact with the bot on Freenode too for full setup.
[04:16:46] ^ very slick! thank you
[04:17:32] Thank you @bd808 for that
[04:18:09] just spreading the good news :)
[04:20:48] Apparently some channels were erroneously included in the policy enforcement
[04:34:03] yeah... not ours though ;)
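A note on the `!log` lines above: stashbot records them on the named project's server admin log (SAL) on Wikitech, and the first word after `!log` is taken as the project name, which is why an entry typed later in this log without a project is rejected with an "Unknown project" error. A minimal Python sketch of that parsing convention; this is purely illustrative and not stashbot's actual code:

    # Illustrative sketch of the "!log <project> <message>" convention used above.
    # Not stashbot's actual implementation; the function name is made up.
    def parse_bang_log(line):
        parts = line.split(maxsplit=2)
        if len(parts) < 3 or parts[0] != "!log":
            raise ValueError("expected '!log <project> <message>'")
        _, project, message = parts
        return project, message

    # "!log reimaging cloudvirt1018 ..." would be parsed as project "reimaging",
    # matching the "Unknown project" error seen later in this log.
    print(parse_bang_log("!log tools.bridgebot Disabled all freenode connections"))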
[10:19:38] !log tools.wikibugs Updated channels.yaml to: dd9d5a6522bc4aa9b9d4150afb316dd957c60981 Do not crash when tokens are given
[10:19:40] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL
[10:22:56] !log tools.wikibugs Updated channels.yaml to: dd9d5a6522bc4aa9b9d4150afb316dd957c60981 Do not crash when tokens are given
[10:22:57] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL
[11:03:38] !log admin created public flavor `g3.cores16.ram36.disk20` (even though it was requested as private in T283293, but may be useful for others)
[11:03:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:03:42] T283293: Request increased quota and new instance flavor for dwl Cloud VPS project - https://phabricator.wikimedia.org/T283293
[11:06:32] !log dwl bump quotas, CPU from 45 to 81, RAM from 182GB to 218GB, storage from 160GB to 320GB (T283283)
[11:06:33] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Dwl/SAL
[11:06:43] !log dwl bump quotas, CPU from 45 to 81, RAM from 182GB to 218GB, storage from 160GB to 320GB (T283293)
[11:06:45] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Dwl/SAL
[11:26:47] !log admin [codfw1dev] purge old kernel packages in cloudvirt200[12]-dev
[11:26:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[11:51:23] arturo: does the flavor need some time to show up?
[11:51:36] gifti: I don't think so
[11:51:43] perhaps I missed some step, let me double check
[11:54:38] gifti: try now! I created it as a private flavor, but it should be public
[11:56:20] yup, it's there
[11:57:01] I trust: MacFan4000!.*@user/macfan4000 (2admin), .*@user/majavah (2admin), .*@user/bd808 (2admin), .*@user/legoktm (2admin),
[11:57:01] @trusted
[11:57:19] User was deleted from access list
[11:57:19] @trustdel .*@user/majavah
[11:57:46] Successfully added .*@wikimedia/majavah
[11:57:46] @trustadd .*@wikimedia/majavah admin
[11:58:01] thanks!
[11:58:15] User was deleted from access list
[11:58:15] @trustdel MacFan4000!.*@user/macfan4000
[11:58:34] User was deleted from access list
[11:58:34] @trustdel .*@user/legoktm
[12:00:10] Successfully added .*@wikipedia/Legoktm
[12:00:10] @trustadd .*@wikipedia/Legoktm admin
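The trust-list entries above look like regular expressions that wm-bot checks against a user's IRC hostmask (nick!user@host), which is why the old user/ cloak patterns are dropped and wikimedia/ or wikipedia/ ones added after the cloak migration. A small Python sketch of that matching, assuming the pattern is applied to the full hostmask; the example hostmask is made up and wm-bot's real matching rules may differ:

    import re

    # Hypothetical hostmask for illustration; wm-bot's actual matching may differ.
    hostmask = "majavah!~majavah@wikimedia/majavah"

    old_pattern = r".*@user/majavah"       # entry removed with @trustdel
    new_pattern = r".*@wikimedia/majavah"  # entry added with @trustadd ... admin

    print(bool(re.fullmatch(old_pattern, hostmask)))  # False: old cloak no longer matches
    print(bool(re.fullmatch(new_pattern, hostmask)))  # True: new cloak matches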
[12:08:04] i don't know why, but in project dwl i don't have enough quota to resize all my instances as planned
[12:08:59] ram quota specifically
[12:17:15] gifti: I may have missed some math on my side
[14:36:21] !log admin Enabled syslog logging on codfw ceph cluster (mon/osd/mgr) (T281247)
[14:36:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[14:36:25] T281247: ceph.observability: add ceph logs to central logging - https://phabricator.wikimedia.org/T281247
[14:36:41] !log admin Enabled syslog logging for osd.55 on eqiad ceph cluster for testing (T281247)
[14:36:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[15:43:53] !log tools.wikibugs Updated channels.yaml to: a5f322b13d1bd954cd90e8ce2692c593d06c53fd Update for new directory and network move
[15:43:56] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL
[15:50:41] !log tools.wikibugs Updated channels.yaml to: d6a6485cc5ff73c4b2922cd790c273b736580a4c Add config for #wikimedia-sre-foundations
[15:50:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL
[15:55:48] hey all irc bot operators, remember to do the cloak transfer request for your bots too
[15:57:26] majavah: good prompt. thanks
[16:38:15] !log tools.wikibugs Updated channels.yaml to: 8fe607bc1d6123b999c89b603884825b72e091a7 Re-add netops to the #wikimedia-traffic channel
[16:38:19] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wikibugs/SAL
[16:53:22] !log clouddb-services restarting postgresql since T220164 was closed. Hoping all connections don't get used up again.
[16:53:25] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Clouddb-services/SAL
[16:53:25] T220164: osm4wiki generating around 300 perl processes wherever it runs, which overloads the server for purposes of gridengine - https://phabricator.wikimedia.org/T220164
[17:41:37] arturo: so, did you actually provide enough quota? have you miscalculated? or is it something else?
[17:59:24] so, apparently you set the ram quota to 219,000 MB, it should have been 218*1024=223,232MB
[17:59:36] gifti, I can fix.. dwl project?
[17:59:40] yes
[18:00:52] also, arturo, the cores16.ram36 should probably have been 36,864MB ram instead of 36,000 but it doesn't really matter to me
[18:01:12] gifti, no worries. Feel free to send the correct specs and I'll check them and fix :-)
[18:02:36] uh, so i'd need a 219776MB ram quota with the current flavors
[18:07:06] !log admin draining cloudvirt1018, converting it to a local-storage host like cloudvirt1019 and 1020 -- T283296
[18:07:10] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[18:07:10] T283296: Set up a mechanism for etcd nodes to be local storage VMs - https://phabricator.wikimedia.org/T283296
[18:08:44] @balloons ↑
[18:09:01] gifti, 👍
[18:16:37] gifti, try again now. I'm checking the request as well to make sure you can have enough instances, etc
[18:17:00] ok
[18:18:18] balloons: everything looks perfect now
[18:19:36] !log dwl fixing ram quota in support of https://phabricator.wikimedia.org/T277681
[18:19:38] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Dwl/SAL
[18:23:40] balloons: #someday we should make a values calculator that makes it easier to do +xMiB things :)
[18:24:48] I cheated and used a larger round number since it's a temporary change.. But yes, I generally do the calculator math a couple times before setting quotas :-)
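The confusion above is a decimal-versus-binary units mix-up: RAM quotas and flavor sizes are entered in MiB, so 218 GiB is 218 * 1024 = 223,232 MiB rather than 219,000, and a true 36 GiB flavor would be 36,864 MiB rather than 36,000. A throwaway Python sketch of the kind of "values calculator" wished for above; the helper name is made up:

    # Quota and flavor RAM values are given in MiB; 1 GiB = 1024 MiB.
    MIB_PER_GIB = 1024

    def gib_to_mib(gib):
        """Convert a GiB figure (e.g. from a quota request) to MiB."""
        return int(gib * MIB_PER_GIB)

    print(gib_to_mib(218))  # 223232 -> the intended dwl RAM quota, not 219000
    print(gib_to_mib(36))   # 36864  -> a true 36 GiB flavor, not 36000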
[18:44:21] hm, what’s the best way to run additional containers for a tool (k8s backend)?
[18:45:03] specifically I’m currently trying to run pygments-server (https://github.com/Khan/pygments-server) for the notwikilambda tool, and have the main container (mediawiki) talk to that one for syntax highlighting
[18:45:31] (in the hope that this will help me figure out how to run a wikifunctions orchestrator there, but I figured pygments-server would be easier to start with, since I don’t have much kubernetes experience)
[18:45:59] after playing with kubectl a bit and reading some documentation, I thought an extra service + deployment would be the way to go, but apparently the services quota is 1
[18:46:14] should I file a phab task to raise that quota, or try another approach?
[18:46:51] yes, please file a phab task, let me find the workboard for that
[18:47:06] here’s what I had so far FWIW https://gist.github.com/lucaswerkmeister/8b28d3325f157ca26dbef558a7c085d2
[18:47:11] ok thanks!
[18:47:35] https://phabricator.wikimedia.org/project/view/4834/
[18:47:36] IIUC that *should* make a DNS name “pygments-server.tool-notwikilambda” available in the PHP container
[18:48:06] ah good, “Kubernetes service” is even one of the listed quotas :)
[18:48:17] the quota bump sounds reasonable to me, I think the procedure requires approval from two roots
[18:48:51] lucaswerkmeister: btw you can just invoke www/python/venv/bin/gunicorn directly instead of activating the venv and needing a shell
[18:49:13] legoktm: I wasn’t sure if the venv activation includes more steps, but if you say so :)
[18:49:15] thx!
[18:49:40] it does, but for what you're doing it doesn't matter :)
[18:52:23] majavah: created https://phabricator.wikimedia.org/T283754
[18:53:51] thanks! please also add how many more you need
[18:56:10] i'm trying to figure out the exact procedure for granting those, that page says that they're approved in the WMCS meeting but I think that was changed recently and I don't want to make you wait a week
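Once the Service quota bump lands, the extra Service plus Deployment described above should expose the in-cluster DNS name quoted at 18:47 ("pygments-server.tool-notwikilambda"). A minimal Python sketch, runnable from the tool's main container, to check that the name resolves and the backend accepts connections; the port is an assumption and would need to match whatever the Service in the gist actually exposes:

    import socket

    # Assumed values: the DNS name is quoted in the conversation above, but the
    # port is a guess and must match the Service/Deployment from the gist.
    HOST = "pygments-server.tool-notwikilambda"
    PORT = 8000

    try:
        address = socket.gethostbyname(HOST)
        with socket.create_connection((HOST, PORT), timeout=5):
            print(f"{HOST} ({address}) accepts connections on port {PORT}")
    except OSError as error:
        print(f"pygments-server not reachable yet: {error}")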
[19:06:26] !log reimaging cloudvirt1018 to support local VM storage
[19:06:27] andrewbogott: Unknown project "reimaging"
[19:10:57] !log admin reimaging cloudvirt1018 to support local VM storage
[19:10:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[20:08:32] !log tools.translate-link deployed 54739135d2 (improve index page)
[20:08:36] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.translate-link/SAL
[21:36:52] legoktm: we should probably get wikibugs on the list to get a cloak
[21:37:57] yes, let me do the wmopbot thing
[21:42:57] https://wmopbot.toolforge.org/fn2libera connected
[22:00:09] MacFan4000: do we need to do that for wm-bot as well?
[22:00:38] yes, let me see
[22:00:56] wm-bot [~wm-bot@wikimedia/bot/wm-bot]
[22:01:01] Yes, but on freenode you’d have to shut it down as you can’t have more than 5 logins at a time
[22:02:43] how bad would it be if I shut down the wm-bot5 temporarily?
[22:02:54] I assume that's the least used so far...
[22:03:21] I have no idea if you can shut down just a single instance
[22:04:03] let's see
[22:04:11] It looks like wm-bot is the least used
[22:05:21] 15:04 [freenode] -MemoServ(MemoServ@services.)- You have 16 new memos.
[22:05:22] lmao
[22:06:23] we;l wmopbot is not responding
[22:06:25] well*
[22:06:33] I asked danilo for help
[22:41:40] MacFan4000: linked the two with wmopbot and restarted the bnc
[23:12:36] legoktm: it’s still offline
[23:13:39] umm
[23:16:26] well it says it's connected
[23:16:42] I followed https://wikitech.wikimedia.org/wiki/Wm-bot#How_to_fix_1_or_more_disconnected_instances
[23:17:23] Could there be a separate instance for fn?
[23:20:50] this is the fn instance
[23:21:20] Well it isn’t in #wm-bot
[23:22:43] And /whois-ing it says it’s offline
[23:29:05] MacFan4000: so...I don't see anything in the logs
[23:29:16] wm-bot channels: 53 connected: True working: True queue: 270
[23:29:21] the queue on everything else is 0
[23:34:19] I could restart everything but I have no idea if that'll make things worse or not
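The status line quoted at 23:29 is the telling symptom: an instance that reports connected: True but has a large queue, while every other instance sits at 0, is accepting messages without actually delivering them. A small Python sketch that parses status lines of that shape and flags such instances; the field layout is assumed from the single example in the log:

    import re

    # Assumed format, based on the one status line quoted above:
    #   "wm-bot channels: 53 connected: True working: True queue: 270"
    STATUS = re.compile(
        r"(?P<name>\S+) channels: (?P<channels>\d+) "
        r"connected: (?P<connected>\w+) working: (?P<working>\w+) queue: (?P<queue>\d+)"
    )

    def looks_stuck(line, max_queue=0):
        """Flag instances that claim to be connected but have a backed-up queue."""
        match = STATUS.match(line)
        if not match:
            raise ValueError(f"unrecognized status line: {line!r}")
        return match["connected"] == "True" and int(match["queue"]) > max_queue

    print(looks_stuck("wm-bot channels: 53 connected: True working: True queue: 270"))  # True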