[10:44:44] lunch
[11:52:07] how do i get the magic cirrus search dump things from my requests?
[11:53:17] addshore: the search query sent to elastic of the json doc representing a page we send to elastic when indexing a page?
[11:53:24] s/of/or
[11:54:06] append &cirrusDumpQuery to the search request URL for the former
[11:54:37] append ?action=cirrusDump to any article URL for the latter
[11:56:15] thanks!
[11:56:41] Another more interesting question: if I wanted to get a list of everything on the right hand side of, say, "P123=", would I be able to do that?
[12:03:50] addshore: not sure I understand, you want all possible (distinct) values of a particular property in the whole index?
[12:03:59] yes :)
[12:04:08] pretend i don't have a query service
[12:04:12] ok
[12:04:57] so first we do not index all the properties in elastic (e.g. monolingual text is not indexed)
[12:05:13] ack, okay, ignoring those then :D
[12:06:15] is it something that must be available publicly or is requiring access to the elastic cluster a possibility?
[12:07:55] sorry, even with access to elastic we can't provide that, we do not index this data with the right settings
[12:09:22] "even with access to elastic we can't provide that" ack, okay!
[12:09:31] the only way I can think of is using dumps and/or the triples we have in hadoop and use sparl/hiveql
[12:09:43] s/sparl/spark
[12:09:44] so, this wouldn't be for wikidata, rather for some other wikibase
[12:09:51] with an app that is already using the elastic indexes directly
[12:09:57] ah ok
[12:10:08] but if it is not possible I won't stare too hard at how the data is indexed etc :)
[12:10:21] ok
[12:10:41] this could be done but yes this needs some adaptation of the way we index this data
[12:11:02] querying this is then just using standard aggregation
[12:11:43] okay, thanks!
[14:01:07] We're meeting Andrea: https://meet.google.com/zke-gfiv-jmv
[14:01:07] (tanny411, dcausse, ejoseph)
[14:02:08] Greetings!
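[Editor's note: the "standard aggregation" dcausse mentions above would look roughly like the sketch below, assuming the statements were indexed into an aggregatable keyword field. The field name `statement_keywords` and the helper function are illustrative only; as the chat notes, the data is not currently indexed with settings that allow this.]

```python
import json

def distinct_property_values_query(property_id: str, size: int = 1000) -> str:
    """Build an Elasticsearch request body that collects the distinct
    values seen for one property, i.e. everything right of "P123="."""
    body = {
        "size": 0,  # only the aggregation matters, not the hits
        "aggs": {
            "property_values": {
                "terms": {
                    "field": "statement_keywords",   # hypothetical keyword field
                    "include": f"{property_id}=.*",  # keep only P123=... terms
                    "size": size,
                }
            }
        },
    }
    return json.dumps(body)
```

POSTed to an index's `_search` endpoint, each returned bucket key would be a full `P123=<value>` string.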
LMK if you want me to join too
[14:02:19] inflatador: Oh, yes, please do!
[14:02:31] You might not be in my Search contact group yet
[15:44:24] Can anyone confirm or deny majavah's comment on https://phabricator.wikimedia.org/T298252 re: "local hacks for puppet TLS on deployment-prep"? I might start on the TLS part first, sounds like it might be a blocker
[16:05:44] i suspect only \o
[16:05:55] hmm, ignore the beginning. I guess i typed that last week :)
[16:06:53] inflatador: hmm, if i had to guess, local hacks means custom patches applied to the deployment-prep puppetmaster. That is a thing people do, although i'm not sure if it was done here. Mostly a guess
[16:07:05] checking which host that is
[16:07:21] o/
[16:07:23] Thanks, I'm fishing thru the puppet repo myself
[16:10:06] inflatador: on deployment-puppetmaster04.deployment-prep.eqiad.wmflabs there is /var/lib/git/labs/private which contains e642b0a9b, titled '[LOCAL] add ssl secrets for deployment-elastic*'
[16:10:20] seems plausible that's what they meant?
[16:12:40] heh, i was curious if the new domains would work. Instead ssh complains :P ControlPath too long ('/home/ebernhardson/.ssh/sockets/ebernhardson@deployment-puppetmaster04.deployment-prep.eqiad1.wikimedia.cloud-22' >= 108 bytes)
[16:14:03] I guess the deployment-prep puppetmaster uses the exact same repo as prod, but its control of servers is limited to deployment-prep?
[16:14:51] inflatador: the main repo is the same between prod and deployment-prep, although deployment-prep accumulates one-off patches over time
[16:15:25] inflatador: the private repo though is completely separate, i think the prod private repo doesn't exist anywhere except backups and the puppetmaster
[16:16:15] the deployment-prep private repo is a bit weird.
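[Editor's note: the "ControlPath too long" complaint above comes from unix socket paths being limited to roughly 104-108 bytes. A common workaround, if your OpenSSH supports the `%C` token (a hash of the local host, remote host, port, and user), is to use the hash instead of the literal hostname; this is a sketch, not necessarily the config anyone in the channel uses:]

```ssh_config
# ~/.ssh/config -- hashed socket names stay short even for long
# hostnames like *.deployment-prep.eqiad1.wikimedia.cloud
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%C
```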
It's called private, but it's 100% not-private and contains a big file 'THIS_REPOSITORY_IS_NOT_PRIVATE'
[16:16:34] but then there are custom hacks to the private repo that only exist on the deployment-prep puppetmaster, currently there are 33
[16:16:42] those are for the actual secrets. Sorry it's all a mess :(
[16:17:30] Ah, I just found that 'private' repo to which you refer
[16:19:19] Yeah, I'll probably need some help sorting the TLS/secrets scenarios out. Probably talk to ryankemper once he gets in. Looks like this might be a piece of the puzzle: https://wikitech.wikimedia.org/wiki/Puppet-ecdsacert
[16:48:35] ebernhardson: you're pretty much correct, the operations/puppet repo is supposed to be identical on deployment-prep and production and the private trees are different (a mix of equivalent servers for deployment-prep and dummy values for all of cloud vps for puppet to work properly without the real secrets)
[16:49:31] inflatador: https://wikitech.wikimedia.org/wiki/User:Jbond/Encryption and https://wikitech.wikimedia.org/wiki/HTTPS might give you some additional context
[16:51:05] obviously you can see that the main puppet tree (/var/lib/git/operations/puppet on deployment-puppetmaster04) has local changes, those are discouraged but sometimes necessary to keep things working. ideally everything would be merged to the main tree and configured via hiera or whatever
[16:52:01] "cfssl-based automated pki system" was referring to https://wikitech.wikimedia.org/wiki/PKI/Clients, there are a few things using it on deployment-prep
[16:54:18] Thanks, I was reading that article, but due to my ignorance of puppet and our deployments in general I couldn't find any deployment-prep (roles? modules? profiles? whatever?)
in the main puppet repo
[17:04:43] Dinner
[17:07:46] inflatador: usually ::beta things are deployment-prep specific, but we try to re-use production things there as much as possible
[17:08:15] the project is intended to be an "exact" replica of the production mediawiki environment
[17:10:27] it's also very undermaintained and would probably be totally broken without me keeping it alive for the last year
[17:12:40] i suppose you may have run into it already, but hiera on cloud (and by proxy, deployment-prep) is augmented by values provided in the horizon web interface. I think we try not to use those much for deployment-prep, but may be worth checking if something doesn't make sense
[17:12:56] * ebernhardson is just thinking of other gotchas
[17:14:02] taavi I'm sorry to hear that! I'd like to help if time permits, but it will probably be a while before I'm knowledgeable enough to make progress. What are your working hours? I'd like to set up a mtg w/you to go over the env if/when you have time. I work from 1400 - 2300 UTC on weekdays
[17:21:35] I don't work here, I'm just a bored student with way too much free time :P
[17:22:20] deployment-prep has been like this for years, just with a different person keeping it up, don't worry too much about it unless you actively rely on it
[17:28:10] taavi understood. Will confer w/my team and see what they recommend. I'm barely a month into the job, so any context is very much appreciated.
[17:51:27] ebernhardson: might I be missing permissions in horizon somewhere? I can't get onto the `deployment-puppetmaster04` host
[17:51:29] https://www.irccloud.com/pastebin/yNq4guvp/
[17:52:30] ryankemper: you'd get a different error if you were missing permissions
[17:54:02] huh, that is odd
[17:54:22] i'd add a few -vvvv and see if anything interesting
[17:54:41] * ebernhardson for some reason likes that ssh uses more v's to be more verbose
[17:54:54] I love that too
[17:55:47] It looks like it's accepting my pubkey and failing after?
here's the latter part of the -vvv
[17:55:50] https://www.irccloud.com/pastebin/QQXQw3Aw/
[17:55:55] 9359:Jan 18 17:55:02 deployment-puppetmaster04 sshd[30186]: Failed publickey for ryankemper from 172.16.5.8 port 55162 ssh2: ED25519 SHA256:C2RercsX1RqQlAsNEsEXt0jjcIzwLISHgtCsdVgbe1s
[17:55:57] 9360:Jan 18 17:55:02 deployment-puppetmaster04 sshd[30186]: fatal: Access denied for user ryankemper by PAM account configuration [preauth]
[17:56:05] is it sending your prod key to beta perhaps?
[17:56:13] what does your .ssh/config look like?
[17:56:22] https://www.irccloud.com/pastebin/SFTTNLkY/
[17:56:32] (SREs go through bastion-restricted, IIRC normal devs use a slightly diff bastion)
[17:56:46] ebernhardson: lemme check which key that corresponds to
[17:57:47] https://www.irccloud.com/pastebin/833Mgxbt/
[17:57:55] id_ed25519 is my cloud key (I should rename to make it explicit)
[17:58:26] ebernhardson: taavi: so I think the above is telling me that I'm sending the right key through but PAM is rejecting the key, ie that it is just a horizon or similar permissions issue?
[17:58:41] i named my keys id_rsa_wikimedia_{labs|prod}. Also apparently my labs key is 7 years old and should get a new name :P
[17:58:48] errand/lunch, back in ~60m
[17:58:54] actually you don't seem to even be a member of the deployment-prep project
[17:59:05] oh, that might do it
[17:59:08] :P
[17:59:49] we'll leave it as an exercise to the reader how I've been here 1.5 years and am just now trying to connect to the deployment-prep hosts for the first time :D
[17:59:55] :)
[18:00:53] try now?
[18:01:00] dinner
[18:01:16] taavi: it works!
[18:10:47] ryankemper: back in december we pulled wcqs instances back from lvs prod. inflatador is already somewhat familiar with LVS, but maybe you could find some time this week to push it back to prod, and bring inflatador up to speed on our lvs deployment
[18:11:01] * ebernhardson was trying to find an appropriate ticket, but not finding one.
Maybe needs a new one
[18:13:05] ebernhardson: we were using https://phabricator.wikimedia.org/T280001 before, so I might just use that
[18:13:15] although it does have massive log history now (which isn't necessarily a bad thing)
[18:13:35] oh, yea that ticket is probably fine. It has all the other info already
[18:14:03] ebernhardson: and agreed, I'll pair with inflatador to drive that forward. I think that and getting the new elastic* hosts into service are the priorities for this week (and we'll need to slot in figuring out the `deployment-prep` hackiness too)
[18:14:57] ryankemper: thanks! We are getting awfully close to release. I mean in theory if everything works it's just LVS and domain names, but i'm sure we'll find problems once it's all routing :)
[18:15:33] I share your certainty :P
[19:14:27] back
[19:22:20] andreaw: is that you? Welcome to our channel!
[19:24:20] andreaw: about your phabricator access, if you already have a MediaWiki account, that should be enough. Otherwise, inflatador should be able to help you get your access working.
[19:24:48] inflatador is the latest member of the team, so he is probably still somewhat fresh on how all those accesses work
[19:24:51] This is Andrea Westerinen.
[19:28:23] When I tried to log into phabricator with my LDAP credentials, I got "Invalid User Name or Password." When I tried to get an email invitation, my email was tagged as "There is no account associated with that email address."
[19:29:49] andreaw: this might help: https://www.mediawiki.org/wiki/Phabricator/Help#Creating_your_account
[19:29:55] andreaw here's my onboarding page: https://www.mediawiki.org/wiki/Wikimedia_Discovery/Team/Onboarding/Brian . Not sure what your position is, but there should be quite a bit of overalp
[19:30:13] err...overlap . If you need help with any of the steps (or anything else), feel free to reach out
[19:30:34] short version, creating a mediawiki account should be the easiest
[19:34:07] ryankemper: meeting?
[19:35:40] gehel: crap, got my days mixed up, back in ~7 mins
[19:35:58] ryankemper: ack, I'll be there
[19:36:03] or we can cancel / reschedule
[19:39:44] OK, logged in as AWesterinen :-)
[19:40:18] andreaw: good!
[19:47:42] inflatador: Hey! We have that meeting next week about new elastic servers. Can you accept the proposal for the new time?
[19:48:20] inflatador: also have a look at https://office.wikimedia.org/wiki/Google_Calendar#Advanced_Configuration (and allowing guests to edit meetings)
[19:50:32] gehel It wasn't letting me accept in the Google calendar web app, but I selected "joining virtually" and I think it worked
[19:51:57] inflatador: I don't see that reflected in my calendar. I've created a new one for tomorrow.
[19:52:14] Hopefully we'll all be there (Ryan, you and me)
[19:53:51] ah yes, I didn't get today's invite until ~5m ago. Will accept tomorrow's invite
[19:55:41] yeah, I had a meeting with Ryan and we discussed that doing it with you makes more sense for the future
[20:11:16] lunch
[20:45:28] back
[20:56:20] * ebernhardson mutters at error messages that never have enough information. `element not interactable`. Well great, now we know somewhere within a 20 minute run there was a not-interactable element, so which test? Where did that element come from? Who knows :P
[20:56:42] gehel I can't get to the document, https://office.wikimedia.org/wiki/Google_Calendar#Minimum_Configuration, since I can't log into the wiki with my LDAP credentials. :-)
[20:57:30] But, I did set my schedule, etc.
[20:58:24] andreaw: not sure how to get you in there, but it looks fairly benign so i copied that one section to an etherpad you can at least read: https://etherpad.wikimedia.org/p/calendar-minimums
[21:03:37] ebernhardson +1 Thanks.
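[Editor's note: one mitigation for the context-free `element not interactable` failures ebernhardson grumbles about above is to wrap element interactions so exceptions are re-raised with the locator attached. This is a generic sketch, not the team's actual test harness; the locator string and `click_search` helper are hypothetical:]

```python
import functools

def with_element_context(locator: str):
    """Decorator: tag any exception raised by an element interaction with
    the locator being used, so a failure deep in a 20-minute run at least
    says which element it was. Assumes the exception type can be
    re-constructed from a single message string."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                raise type(exc)(f"{exc} (while interacting with {locator!r})") from exc
        return wrapper
    return decorator

# hypothetical usage with a WebDriver-style element
@with_element_context("css=#search-button")
def click_search(button):
    button.click()
```

The same idea can be pushed into a page-object base class so every find/click/send_keys reports its locator for free.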
[21:09:03] officewiki doesn't use ldap authentication, it has its own set of accounts I think
[22:26:34] * ebernhardson is back to guessing at what is turning on display_errors
[22:26:41] andreaw: based off the following blurb from https://www.mediawiki.org/wiki/Wikimedia_Discovery/Team/Onboarding/Brian, it sounds like you should have been emailed about office wiki access / etc by ITS. So you should e-mail `techsupport@wikimedia.org` if you never got that e-mail
[22:27:08] > These accounts [Office, Wikipedia, and Wikimedia Foundation] will be created by ITS. Email techsupport@wikimedia.org if you don't get email with login info.
[22:27:09] almost wish php runkit still worked, but that project was insanity, so it's fair it doesn't (that function let you replace already defined functions, including builtins)
[22:27:20] Taking off slightly early today due to illness. Hopefully will see y'all tomorrow
[22:27:23] inflatador: take care
[22:27:26] \o
[22:27:39] huh, learned a new xdebug tool today: `xdebug_start_function_monitor`. Wrap the execution with it and it tells you all call sites that invoked ini_set (or whatever)