[05:35:04] !log wikisp moodle instance on Mars VM enabled (T289309)
[05:35:07] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikisp/SAL
[05:35:07] T289309: Instalar Moodle - https://phabricator.wikimedia.org/T289309
[12:11:36] !log tools deploy ingress-nginx v1.0.4 / chart v4.0.6 on toolforge T292771
[12:11:40] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:11:40] T292771: upgrade to ingress-nginx 1.0 - https://phabricator.wikimedia.org/T292771
[12:24:24] !log tools deploy ingress-admission updates
[12:26:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:36:46] so I've cleaned up my excessive db queries, but I still hit the db session limit when a bunch of people are trying to use my app at once... am I able to request a limit increase?
[12:38:07] GenNotability: you can open a phabricator ticket and we can talk about it, sure
[12:39:26] I think https://phabricator.wikimedia.org/project/view/4481/ is the right workflow here
[12:40:14] although the instructions are written with the replicas in mind and not toolsdb :/
[12:55:25] arturo, majavah: thank you, will have a look :)
[13:12:47] Today at 7am Pacific (which I think is in just under an hour), Sage Weil is doing a walkthrough of the Ceph CRUSH code at https://bluejeans.com/908675367
[13:13:04] (I thought folks here might be interested; there should be a recording on youtube later)
[13:20:02] Is Ceph CRUSH like Candy CRUSH?
[13:22:07] 🐟
[13:55:47] Emperor: nice!
[14:09:54] Also, Sage uses emacs :)
[15:06:02] !log paws delete orphan pods for 2 users
[15:06:03] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Paws/SAL
[15:45:14] !log toolhub Update demo server to 5212ce1 (ui: Show dismissible demo notice in debug mode)
[15:45:18] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolhub/SAL
[16:38:56] !log mwcli create project with addshore and jhuneidi as projectadmins (T294283)
[16:38:57] arturo: Unknown project "mwcli"
[16:38:58] T294283: Request creation of mwcli VPS project - https://phabricator.wikimedia.org/T294283
[16:39:05] <3
[16:39:36] !log mwcli create project with addshore and jhuneidi as projectadmins (T294283)
[16:39:36] arturo: Unknown project "mwcli"
[16:39:44] stashbot :-(
[16:39:44] See https://wikitech.wikimedia.org/wiki/Tool:Stashbot for help.
[16:40:52] !log mwcli create project with addshore and jhuneidi as projectadmins (T294283)
[16:40:53] arturo: Unknown project "mwcli"
[16:41:08] well ...
[16:41:14] D:
[16:41:16] did I create the project?
[16:41:57] I did ... so let's give $whatever a minute to sync
[16:42:08] horizone says yes!
[16:42:14] cool
[16:42:14] hehe, horizone
[16:42:54] horizon + ozone = horizone
[16:43:01] I don't remember how long stashbot caches the ldap list for. I think it's about 5 minutes?
[16:51:20] !log mwcli create project with addshore and jhuneidi as projectadmins (T294283)
[16:52:01] ⏲️
[16:56:37] stashbot: ?
[16:57:40] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Mwcli/SAL
[16:57:40] T294283: Request creation of mwcli VPS project - https://phabricator.wikimedia.org/T294283
[16:57:41] See https://wikitech.wikimedia.org/wiki/Tool:Stashbot for help.
[16:58:01] * arturo had the restart command already typed in the prompt
[17:40:59] majavah: I successfully got the alert for libup (yay!), I see it's now cleared on the alertmanager web interface, but I haven't (yet) gotten any recovery email - am I supposed to?
[17:43:54] ooh nice!
[17:44:31] apparently alertmanager doesn't send recovery emails by default, but it can be configured to do so
[17:50:46] I would like that, mostly so that if it fixes itself or someone else fixes it, I can see that in the same place
[17:52:27] sure
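A note on the recovery-email exchange above: Alertmanager's email integration really does default to not sending "resolved" notifications; the per-receiver send_resolved flag turns them on. A minimal sketch of the relevant config fragment, with a hypothetical receiver name and address (the actual WMCS Alertmanager config may be shaped differently):

```yaml
# Hypothetical receiver fragment; only send_resolved is the point here.
receivers:
  - name: 'tool-maintainers'            # placeholder receiver name
    email_configs:
      - to: 'maintainers@example.org'   # placeholder address
        send_resolved: true             # defaults to false for email; enables recovery mail
```

With send_resolved: true, the same receiver that got the firing notification also gets one when the alert clears, which is exactly the "see it in the same place" behavior asked for here.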
[18:00:44] !log tools deleting legacy ingresses for tools.wmflabs.org urls
[18:00:47] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[21:37:20] So I see in horizon there are OpenStack API endpoints, and I can talk to these from cloud VPS machines. is it "allowed" that I use them? and how does auth work if I am?
[21:46:06] addshore: it's a work in progress. There will be an announcement when they are something actually usable by folks.
[21:46:28] Nice
[21:46:44] I did write a ticket for this mwcli thing in the past few weeks that said to do something with cloud vps :P
[21:47:20] T294195
[21:47:20] T294195: Openstack API access credentials - https://phabricator.wikimedia.org/T294195
[21:48:21] ty, cross-linked to https://phabricator.wikimedia.org/T292906 for my future thinking :)
[21:48:25] I really don't know how fast this stuff will move to being a general use feature. Probably not super fast unless m.ajavah decides it is really interesting to work on
[21:50:02] sounds good to me! The initial functionality I was thinking of was listing things that exist (possible even without this API), and perhaps some convenience ssh things.
[21:50:05] addshore: for doing openstack-browser-like things, there is a shared read-only user you can use... I just have to remember where you find its creds
[21:50:14] but i just spotted that list of api endpoints again, and the cogs started spinning
[21:51:08] see, when I see an API endpoint my spinning cogs go "I wonder if it's protected against CSRF"
[21:51:54] addshore: I think /etc/novaobserver.yaml is provisioned on all Cloud VPS instances with the read-only openstack credentials
[21:52:17] oh yes!
[21:52:18] nice
[21:52:32] that's how openstack-browser works -- https://phabricator.wikimedia.org/source/tool-keystone-browser/browse/master/keystone_browser/keystone.py$46
[21:52:35] I think that must be true since https://openstack-browser.toolforge.org/ works
[21:52:37] heh, jinx
[21:52:47] * addshore writes that in his ticket too
[21:54:40] quipped: https://bash.toolforge.org/quip/sEeZvnwBa_6PSCT9ENwP
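For the shared read-only credentials discussed above, here is a minimal sketch of how a script on a Cloud VPS instance might authenticate with them, modeled loosely on the keystone-browser code linked at 21:52:32. The OS_*-style key names inside /etc/novaobserver.yaml are an assumption; check the file on a real instance before relying on them:

```python
import yaml
from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client as keystone_client

# Shared read-only OpenStack credentials provisioned on Cloud VPS instances.
with open('/etc/novaobserver.yaml') as f:
    observer = yaml.safe_load(f)

auth = v3.Password(
    auth_url=observer['OS_AUTH_URL'],          # assumed key name
    username=observer['OS_USERNAME'],          # assumed key name
    password=observer['OS_PASSWORD'],          # assumed key name
    project_name=observer['OS_PROJECT_NAME'],  # assumed key name
    user_domain_name='Default',
    project_domain_name='Default',
)
keystone = keystone_client.Client(
    session=session.Session(auth=auth), interface='public'
)

# List the Cloud VPS projects visible to the observer account --
# roughly what openstack-browser does to build its project index.
for project in keystone.projects.list(enabled=True):
    print(project.name)
```

Because novaobserver is read-only, this is fine for openstack-browser-style listing but cannot create or modify anything, which is why it sidesteps the "is it allowed?" question for write operations.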
[22:28:54] bd808: so, it's not clear to me whether I should keep using toolsadmin to add toolinfo metadata, or delete those records and enter the data in Toolhub manually. Some of the newer fields like links to documentation don't seem to be supported in toolsadmin
[22:30:19] legoktm: it's a good question without a perfect answer at this point. You are correct that toolsadmin does not gather or publish a toolinfo/1.2.0 record yet.
[22:31:34] Right now, if you uncheck the "this is a webservice" box in toolsadmin it will stop publishing that record. Then you could create a nicer record either directly in Toolhub or via your own published toolinfo.json file somewhere.
[22:31:58] Or you could wait patiently for toolsadmin to get better.
[22:32:10] Or you could make a duplicate record.
[22:33:05] duplicates was my next question :p
[22:33:12] https://toolhub.wikimedia.org/search?q=apt&ordering=-score&page=1&page_size=12
[22:33:14] there are going to be a couple of new folks joining me on working on Toolhub in November. That might make it possible for toolsadmin to get better faster.
[22:33:37] majavah has created an apt.toolforge.org tool that just redirects to my apt-browser.toolforge.org one
[22:33:56] can I merge the two entries somehow? or indicate the tool has two URLs?
[22:33:57] duplicate discussion so far: https://meta.wikimedia.org/wiki/Talk:Toolhub#Duplicated_projects
[22:34:51] maybe just having majavah unpublish the toolsadmin record will remove the duplicate
[22:35:04] exciting to hear that more people will be working on toolhub :D
[22:35:51] I see https://meta.wikimedia.org/wiki/Talk:Toolhub#Can_we_fix_our_own_existing_tool_records_in_the_UI? is mostly what I was asking
[22:37:31] it's sort of funny that the magic for toolinfo I put into Striker ~5 years ago is now annoying because I haven't updated it :)
[22:39:07] legoktm: related... I want to move Striker (toolsadmin) to k8s because that deploy method is so much easier than the scap3 mess it uses now. But that is going to run head first into the new ideas about the k8s cluster being mw-related only.
[22:39:45] ^.^
[22:40:09] it's not clear to me whether we're actually going to be enforcing that now or waiting until the new other/misc cluster exists
[22:40:25] it's still very much in discussion
[22:40:52] is this a discussion that is open to folks like me?
[22:41:25] I think so
[22:41:44] so far we've been trying to nail down exactly what we want the current MW-focused cluster to have: https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/725003
[22:42:40] "services" feels like a funny name if MediaWiki itself is going to run there
[22:43:00] everything is a service
[22:43:13] the commit message is out of date; if you read the very long thread, the current planned name is "wikikube"
[22:44:33] I think the main thing that would be helpful right now is getting an idea of what people want to run/move on k8s by filing service deployment request tickets
[22:44:55] *nod* I can do some of that :)
[22:46:06] I really want everything that is on labweb* today in k8s, although I haven't talked to a.ndrewbogott about the possible challenges of moving Horizon
[22:47:03] everything on labweb* is wikitech, Horizon, and Striker
[22:49:10] those all seem reasonable and like good fits for k8s to me
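Returning to the toolinfo discussion above (22:28-22:35): the "own published toolinfo.json file" route bd808 describes amounts to serving a small JSON record at a stable URL that Toolhub can crawl. An illustrative minimal record with the core required fields; all values here are placeholders, and the full toolinfo/1.2.x schema (including the newer documentation-link fields legoktm mentions) should be checked against the published spec rather than this sketch:

```json
{
  "name": "example-tool",
  "title": "Example Tool",
  "description": "A short summary of what the tool does.",
  "url": "https://example-tool.toolforge.org/"
}
```

Registering the file's URL with Toolhub then keeps the record under the maintainer's direct control, instead of depending on what toolsadmin auto-publishes.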