[12:31:19] hmm how should I run facter to get all the custom facts that we're using?
[12:33:24] vgutierrez: not sure if it answers the question but Puppetboard has all the facts, e.g. https://puppetboard.wikimedia.org/node/netmon1002.wikimedia.org
[12:36:05] Yup.. I'm using that as a reference right now
[13:27:35] vgutierrez: `sudo facter -p` forces the custom facts to also run
[13:27:49] e.g. `sudo facter -p lldp`
[13:29:53] thanks jbond
[16:13:53] jbond: hi, do you know what went wrong here? https://integration.wikimedia.org/ci/job/operations-puppet-catalog-compiler/33780/console
[16:38:50] taavi: no i have not seen that error before and have been unable to reproduce
[16:39:12] i have run `git fsck`
[16:39:20] for good measure
[16:57:43] taavi, jbond: python 3.8 and above support a PosixPath in subprocess args, but that log is from python 3.7 which doesn't
[16:59:01] I've never dug into the puppet compiler but I guess prepare.py needs to str() the path-like arguments before passing them to check_call(), if targeting 3.7
[17:02:19] rzl: ahh thanks, i'll check the code and send a fix shortly
[17:11:38] ryankemper, inflatador: o/ there are some alerts in icinga related to elastic nodes that seem related to misconfigured checks, is there anything to be fixed?
[17:11:51] (sorry if it was already asked, in case let's ack them)
[17:13:48] elukey: ah, thought those had cleared, I'll get a ticket up and ack them
[17:14:41] (tl;dr is that the check was broken previously in that it would never fail; I patched that specific issue, which then revealed some other issues, and reverted the original patch while still investigating, so it's interesting that those haven't cleared)
[17:15:52] ryankemper: thanks!
[17:17:45] (acked with https://phabricator.wikimedia.org/T301511#7708316)
[17:51:22] apergos: A user just wrote this in a ticket: "Previously we were linking /mnt/nfs/dumps-labstore1007.wikimedia.org/xmldatadumps/public/ from within our project NFS to filter out pages that contain math in all languages and build individual wikis to test new math features. However, since this resource seems to be gone..." do you know what that's about? Is that something that was removed from dumps or are they confused?
[17:51:28] T301300
[17:51:29] T301300: Does the 'math' project need NFS? - https://phabricator.wikimedia.org/T301300
[17:52:32] no idea.
[17:53:10] labstore1007 should export /srv/dumpsdata/something but I don't know what exactly nor to which hosts
[17:53:23] certainly nothing has been removed on the dumpsdata hosts side
[17:53:42] ohhhhh I was thinking they were talking about a particular dump being missing
[17:53:53] if they aren't getting the mount at all then that's probably my problem
[17:54:01] I'll dig around in their instance
[17:54:47] it looks like they were using the general mount of all the public datasets, yeah
[17:55:07] I know there's a mount that's supposed to be generally available to all cloud instances
[17:55:45] "scratch"? wasn't that removed just recently?
[17:56:10] ok, confirmed, the mount volume is there but there's no xmldatadumps dir
[17:56:45] mutante: scratch wasn't removed but I did move it to a new server. If you have a project where it stopped working I'm interested!
[17:56:59] andrewbogott: ACK, no, I am good, I don't need NFS on my projects :)
[18:01:03] apergos: confirmed, they're mounting the dumps servers but there's no xmldatadumps dir -- you wouldn't expect that to be there?
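(For the catalog-compiler bug discussed at 16:57-16:59: a minimal sketch of the kind of str() coercion rzl suggests for prepare.py, assuming a sequence of args passed to check_call(). The helper name and repo path here are invented for illustration; this is not the actual puppet-compiler code.)

```python
import subprocess
from pathlib import Path


def run_git_fsck(repo_dir: Path) -> None:
    """Run `git fsck` in repo_dir (hypothetical helper, for illustration).

    On the Python 3.7 used by the CI job, a PosixPath in the args list
    reportedly fails, so the path is stringified explicitly; str() is
    harmless on 3.8+ as well.
    """
    subprocess.check_call(["git", "-C", str(repo_dir), "fsck"])


run_git_fsck(Path("/srv/git/operations/puppet"))  # assumed path, example only
```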
[18:01:33] well yes I would expect the public directory to be there
[18:01:38] what do they mount and what's in it?
[18:02:53] fstab line is
[18:02:59] labstore1006.wikimedia.org: /mnt/nfs/dumps-labstore1006.wikimedia.org nfs vers=4,bg,intr,sec=sys,proto=tcp,noatime,lookupcache=all,nofsc,ro,soft,timeo=300,retrans=3
[18:03:08] oh from 1006
[18:03:19] I think 1006 is the web server and 1007 is the nfs server
[18:03:28] not 100% positive but I think so right now
[18:03:43] you can check the exports on the two hosts I guess
[18:03:45] the dir contains a zillion wiki-named subdirs
[18:03:50] well the dir sounds right
[18:04:00] same thing in the 1007 mount
[18:04:04] the directory blahblah/public is the thing that should be exported
[18:04:19] we don't export the top-level xmldatadumps dir because it's also got the private/ dir underneath
[18:04:26] so only the public one with all public contents!
[18:04:46] well tbh the labstore boxes might not even have the private subdir at the same level. but anyways
[18:04:58] ok
[18:05:25] that sounds right to me, there's also the "other" dir at the same level as all the wiki dirs, for the datasets that aren't the sql/xml dumps
[18:05:28] so possibly they're remembering something from ancient times when the actual xmldatadumps dir was visible
[18:05:37] very ancient.
[18:05:43] but any data that they want is still there, just in a different path
[18:05:49] seems like it indeed
[18:05:53] this is a very very old project
[18:08:05] dunno what to tell them
[18:08:19] as far back as I remember there being labstore1006/7, that's how the exports worked
[18:08:26] of course my memory is fallible
[18:09:26] ok. I might be misunderstanding the issue entirely, maybe they built a VM with nfs disabled or something. I followed up on the ticket.
[18:09:27] Thanks
[18:14:41] sure, good luck!
[18:21:50] andrewbogott: it is quite possible that Physikerwelt's code predates the modern dumps directory structure. That project has been around for a long time (/me met Physikerwelt in the SF office in late 2013/early 2014 while he was working on a grant project to make math rendering suck less)
[18:22:02] likely!
[18:22:13] Hopefully we can get it sorted out
[19:05:00] What does it mean when `sudo enable-puppet foobar` returns 1 and says nothing?
[19:06:47] ebernhardson: probably that `foobar` was the wrong message
[19:08:24] rzl: oh, i never realized those are supposed to be the same message. I guess that's how i keep leaving instances disabled...
[19:08:47] aha! yeah :) it's designed that way to make it harder to re-enable somebody else's disabled node without realizing it
[19:09:25] rzl: thanks! all makes sense now :)
[19:09:37] but that should be clearer -- want to patch `modules/base/files/puppet/enable-puppet` to add an error message in that situation?
[19:09:58] rzl: sure, i'll make it emit something; i suppose i didn't notice because there isn't a visible indication of failure, it was only when i thought to check the exit code that i saw it
[19:10:01] "no, I don't want to do that" is a totally valid answer and I'll do it otherwise, but the opportunity is yours if you want it :)
[19:10:11] yeah totally
[19:11:07] j.bond is a good person to mail the patch to, when you're ready
[19:33:23] we should consider making `enable-puppet` spit out a warning that the messages didn't match (unless there's something that depends on it expecting no stdout, but I doubt that?)
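(A minimal sketch of the warn-on-mismatch behaviour proposed at 19:33, written in Python for illustration even though the real enable-puppet is a shell script; the lock-file path and its plain-text format are assumptions, not the actual implementation.)

```python
import sys

LOCK_FILE = "/var/lib/puppet/state/agent_disabled.lock"  # assumed path/format


def enable_puppet(message: str) -> int:
    """Re-enable puppet only if `message` matches the stored disable reason."""
    try:
        with open(LOCK_FILE) as fh:
            disabled_reason = fh.read().strip()
    except FileNotFoundError:
        print("puppet is not disabled")
        return 0
    if message != disabled_reason:
        # The proposed improvement: complain instead of exiting 1 silently.
        print(
            f"message {message!r} does not match the disable reason "
            f"{disabled_reason!r}; leaving puppet disabled",
            file=sys.stderr,
        )
        return 1
    # ... remove the lock file and re-enable the agent here ...
    return 0


if __name__ == "__main__":
    sys.exit(enable_puppet(" ".join(sys.argv[1:])))
```

Writing the warning to stderr would also sidestep the concern about anything that depends on the script producing no stdout.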
[19:33:55] I've gotten bit by failing to append `- root` to my original disable message when re-enabling before
[20:51:19] ryankemper: "- $SUDO_USER" gets appended automatically if SUDO_USER is set (so both if you just sudo puppet-(en|dis)able or get a root shell with sudo -i)
[20:52:17] volans: how odd, I swear I've had to manually append `- root` when running `sudo enable-puppet` on the host
[20:53:08] will test it out real quick
[20:55:01] hmm... nope, didn't need to manually append, I guess I probably didn't run it with sudo at the time
[21:02:25] ryankemper: or you did run it with sudo from within a root shell (double sudo clears SUDO_USER unfortunately)
[21:02:47] sorry I said it wrong
[21:02:51] it sets SUDO_USER to root
[21:02:55] so you get '- root'
[21:02:59] appended
[21:03:39] ah it's all making sense now
[21:03:55] it's my bad habit of sticking `sudo` in cumin commands
[21:04:23] cumin will always have '- root' as it currently sshes as root
[21:04:29] so that would not change much
[21:04:46] ah no sorry that makes sense
[21:05:04] it's late here.. I should just get off :)
[21:08:04] :P
[21:10:24] to recap, plain commands executed with cumin don't have SUDO_USER set, but will have SUDO_USER=root if sudo is used in the command
[21:10:38] and now I can go off :)
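(A small Python sketch of the "- $SUDO_USER" suffix behaviour volans recaps above; the real disable/enable tooling is shell, so this is illustrative only, with an invented function name.)

```python
import os


def tag_message(message: str) -> str:
    """Append '- <user>' per the behaviour described above (assumed logic).

    - `sudo enable-puppet foo` from your own shell: SUDO_USER is your
      username, so the stored message becomes 'foo - <you>'.
    - From a root shell obtained via `sudo -i` (or sudo inside sudo):
      SUDO_USER is 'root', so you get 'foo - root'.
    - Run directly as root with no sudo (e.g. plain cumin commands):
      SUDO_USER is unset and the message is stored as-is.
    """
    sudo_user = os.environ.get("SUDO_USER")
    return f"{message} - {sudo_user}" if sudo_user else message
```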