[05:15:33] phabricator down?
[05:16:25] It's not just you! phabricator.wikimedia.org is down.
[05:16:26] Last updated: Apr 16, 2024, 7:16 AM (1 second ago)
[05:18:13] 05:18:01 up 86 days, 7:20, 1 user, load average: 52.50, 28.55, 14.39
[05:18:15] wow
[05:20:03] It is now back for me
[05:22:09] https://grafana.wikimedia.org/d/000000377/host-overview?orgId=1&var-server=phab1004&var-datasource=thanos&var-cluster=misc&from=1713243773580&to=1713244869774
[11:53:29] Hmmm. any idea why util/pcc.py might be ignoring -P 7 (and -P7) and always using the P5 compiler?
[11:55:55] ah, nvm, the node is still on P5
[11:59:05] klausman: https://gerrit.wikimedia.org/r/c/operations/puppet/+/966527
[11:59:13] that patch adds the `--puppet-version` arg to the script
[11:59:17] but it is apparently not used anywhere
[11:59:31] thus I'd expect the Jenkins job to run with whatever the defaults are
[11:59:49] and in this case, the host in question is still on P5, so it's ok
[12:00:39] I had hoped it was a "P5 can't do that, use P7" problem, but instead my pcc failure is me-not-knowing-Puppet
[12:01:26] and the job should run against both; at least the Jenkins job defaults to running on both
[12:03:09] I had some discussion about it with John at the time that change got merged
[12:03:36] and apparently the trouble was finding out how to pass the value given to --puppet-version as a label to the Jenkins job
[12:03:45] which I guess has been left unimplemented
[12:04:01] Oh well, P5 will be gone very soon /s
[12:04:30] possibly jenkinsapi not supporting passing parameters to a matrix job, or whatever API has to be found
[12:06:31] we could just signal through the change number: if divisible by 5, run on P5; if divisible by 7, run on P7.
[12:06:39] I see no problem with this approach.
[12:07:05] yup sounds good to me. And if the change number is a prime number, automatically submit/merge it
[12:07:22] perfect
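A tongue-in-cheek sketch of the change-number routing scheme joked about above, in Python. Everything here is hypothetical; this is not anything pcc or the Jenkins job actually does:

```python
def is_prime(n: int) -> bool:
    """Trial division; plenty fast for Gerrit-sized change numbers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True


def route_change(change_number: int) -> list[str]:
    """Decide (joke) CI actions from the Gerrit change number alone."""
    actions = []
    if change_number % 5 == 0:
        actions.append("compile with Puppet 5")
    if change_number % 7 == 0:
        actions.append("compile with Puppet 7")
    if is_prime(change_number):
        actions.append("auto-submit")  # the proposed bonus feature
    return actions


print(route_change(35))  # ['compile with Puppet 5', 'compile with Puppet 7']
```

Note that change 966527 linked above is divisible by neither 5 nor 7, so under this scheme it would compile against nothing at all.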
[12:38:10] we had some alerts earlier today, see https://phabricator.wikimedia.org/T358936#9717389 for context
[14:14:26] /window bare
[14:14:32] ^ ::sigh::
[14:17:08] /curtains install
[14:18:39] also kinda-related PSA: passwords you type every day should start with /
[14:19:08] beginning them with `/!` will prevent them from going into IRC channels or into shell histories
[14:19:56] hah even better!
[14:28:16] it was super weird to literally connect the RJ-45 and receive a bunch of alerts
[15:20:24] claime, effie, et al, is the cloud-vps 'appserver' project still in use? There are several pending tasks there, including https://phabricator.wikimedia.org/T360700 (which I am happy to deal with myself if I have someone to coordinate with)
[15:21:23] I haven't used it in ages, jayme^?
[15:22:47] the full admin list is "Alexandros Kosiaris Clément Goubert Effie Mouzeli Elukey Giuseppe Lavagetto JMeybohm"
[15:23:02] * jayme meeting, back in 10
[15:26:05] ah, that - pontoon environments, basically. I can delete most if not all of the instances, but we'd like to keep the project - if that helps
[15:26:43] jayme: that sounds fine but please update https://wikitech.wikimedia.org/wiki/News/Cloud_VPS_2024_Purge accordingly
[15:27:35] will do, thanks for pinging, andrewbogott!
[15:27:56] 👍
[15:28:03] taavi, jhathaway, moritzm, is the puppet-dev project effectively defunct now that jbond has departed? It's unmarked on the purge page and also has https://phabricator.wikimedia.org/T361593 with no response
[15:29:04] let me have a look
[15:31:56] I haven't used it for ages and I think it was mostly used to stage/test Puppet 7. From my PoV it can be phased out unless Jesse or Taavi still use it
[15:32:32] ok, thanks moritzm, let's see if anyone else has an opinion :)
[15:34:18] I agree with moritzm, I would like to keep the project around, but the instances can be removed
[15:35:20] great, shall I delete things right now?
[15:37:05] fine by me, unless taavi objects
[15:37:10] fwiw also good with me
[15:37:17] no objections from me
[15:37:23] o/ nice to see you jbond
[15:37:25] Hello jbond!
[15:37:55] o/ hi, I'm normally lurking and awake when summoned ;)
[15:41:21] <3
[15:41:58] let's see... next on my list is pki, which has many of the same admins: jbond, moritzm, jayme, among others. And again, I'm happy to upgrade the old puppetmaster there myself if that's your preference. https://phabricator.wikimedia.org/T361591
[15:42:46] andrewbogott: last I knew, pki was used by some other cloud services, at the very least o11y
[15:43:13] oh, that's interesting.
[15:45:28] topranks: wow, the optics thing is getting complex, eh (I just saw the new task you created)
[15:46:03] ha yeah, it should be simple but it's not turning out that way
[15:46:31] tbh we shouldn't let this situation happen
[15:47:00] yeah, scary
[15:47:03] in this case the module is rare as it's our only 100G circuit
[15:47:11] andrewbogott: see https://wikitech.wikimedia.org/wiki/PKI/Cloud
[15:47:35] we dropped the ball, but with most other types we've always had a little buffer of a few more spares
[15:47:48] * topranks waves at jbond :)
[15:47:54] jbond: any idea who is inheriting maintenance of that project?
[15:48:47] no, I highlighted it as an unowned service when I left and recommended it should probably be owned by cloud services, as it's a shared service. The conversation died at that point
[15:49:18] oh, does that mean there's a ticket someplace awaiting wmcs response?
[15:49:20] * andrewbogott searches
[15:50:11] in any case that suggests that I can at least upgrade that puppetmaster myself
[15:50:34] andrewbogott: I'm not sure tbh. I think there were a couple of meetings with cloud services where some of this got caught, and I also mentioned it to jobo, but I'm not sure if there is a ticket
[15:50:50] ok
[15:51:33] As in, I mentioned to cloud services that these are the things I maintain which don't have a real owner. Some I feel should go to cloud services, some others to SRE. However, I don't think there was agreement
[15:53:26] don't sweat it John, it's on us to sort out
[15:54:34] thx, but the tl;dr is that I think at the very least it's used by deployment-prep and o11y
[16:01:38] jbond: that's fine, I'll upgrade what I can today.
[16:01:49] ack
[16:05:13] jbond: do you recall why we created a local puppetdb server in the cloud devtools project, by any chance?
[16:05:30] we have a buster puppetdb
[16:05:52] mutante: what other things are in devtools? is that cumin and spicerack?
[16:05:52] but some hiera settings for configuring puppetdb were pointing to the puppetmaster, not this puppetdb machine
[16:06:18] is it possible puppetdb is also part of a puppetmaster/server?
[16:06:35] jbond: puppetmaster, deployment_server (!) and puppetdb
[16:06:49] was it maybe because the deployment server uses it?
[16:06:50] the hiera layout for puppet is not great; some of the puppetdb settings are under e.g. puppetmaster::puppetdb
[16:07:19] jayme: maybe it was to be able to use cumin then.. hmm
[16:07:57] mutante: I'm not sure, I can't think of why the deployment server would need it. But if something needs cumin, that's probably why
[16:07:59] profile::puppetdb::master: puppetmaster-1001.devtools.eqiad1.wikimedia.cloud
[16:08:16] yes, that's saying that puppetdb should connect to that puppetmaster
[16:08:16] ^ this confused me, because it's pointing to the master.. even though we have the one actually called puppetdb*
[16:08:24] oh
[16:08:32] ack
[16:08:58] I shut down the puppetdb machine to see what breaks, but not sure yet :)
[16:09:36] cumin is used in deployment-prep AFAIK and that most likely uses the puppetdb backend (but could also use the openstack one, for example)
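Since cumin keeps coming up as the likely reason the local puppetdb exists: a minimal sketch of how cumin resolves hosts through a backend, based on cumin's documented Python API. The query string is illustrative, and whether this actually runs in a given project depends on its config file, which decides between the puppetdb, openstack, and other backends:

```python
import cumin
from cumin import query

# Load cumin's YAML config (by default from /etc/cumin/config.yaml); the
# backend configured there decides whether host resolution goes through
# PuppetDB or, for example, the OpenStack API.
config = cumin.Config()

# Resolve a host expression via the configured default backend. With the
# puppetdb backend this becomes a PuppetDB API query, which is why a project
# that wants cumin needs a reachable PuppetDB (or another backend) at all.
hosts = query.Query(config).execute('puppetmaster*')
print(hosts)
```

So if nothing in devtools actually needs cumin, nothing there needs the local PuppetDB either, which matches the conclusion reached below.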
[16:10:40] what I really want here is "shut down all buster instances", so it's more like: do we need to replace it or not
[16:11:11] ack, though that's a separate project from deployment-prep
[16:12:06] Yep, I built a new puppetdb for deployment-prep already
[16:13:27] ah, ok.. so.. if we don't need cumin then we don't need to do that?
[16:14:27] but good to know we could copy that setup from deployment-prep