[00:07:56] * ebernhardson wonders what about the jan 13 deploy increased the number of ElasticaWrite jobs/hr by ~25%
[01:44:32] ebernhardson: (for tomorrow) https://gerrit.wikimedia.org/r/c/operations/puppet/+/757124 is merged and post-deploy steps are complete, so we can create the followup `rdf` patch to remove the back-compat logic now (https://gerrit.wikimedia.org/r/c/wikidata/query/rdf/+/757514/1/dist/src/script/runStreamingUpdater.sh#9)
[01:49:11] In other news, I just merged the cname change for `commons-query.wikimedia.org` (see https://phabricator.wikimedia.org/T282117#7666274). So we should be all set for tomorrow
[08:46:31] and it works! commons-query.wikimedia.org is live :)
[09:16:46] back
[09:24:51] ejoseph: we could continue around 11am if this works for you
[11:04:32] lunch
[11:17:05] Lunch
[11:19:46] where do we keep the filter configuration for blazegraph, specifically for the event sender?
[12:10:02] it's a bit late for that, but perhaps introducing the streaming updater was a good occasion to actually introduce at least a new minor version ;)
[12:10:12] that .101 looks funny :)
[12:14:48] yes thought the same :)
[12:15:06] filter config should be in puppet, passed via java system props
[12:15:09] huh, apparently we killed the file event sender in september
[12:15:15] ah, yeah, I found it
[12:15:23] I'm looking at it now
[12:16:21] ok, we killed the event sender because we reuse the same puppet settings for wcqs beta 2...
[12:16:24] * zpapierski is sad
[12:16:44] well, nginx access logs it is, I wonder why we didn't use that before
[12:16:49] (and a bit worried)
[12:22:21] did we decide what is "a more permanent storage" in T299062 ?
[12:22:22] T299062: Save stats from wcqs-beta - https://phabricator.wikimedia.org/T299062
[12:22:43] (my internet connection was wonky when we discussed this yesterday)
[12:26:58] zpapierski: we discussed saving this on a stat machine and/or hdfs
[12:27:33] thx
[12:42:00] lunch
[14:03:47] zpapierski: if we aggregate the data, we can store it in the same folder in gdrive
[14:12:39] Greetings!
[14:12:58] o/
[14:17:33] dcausse: merged the eg change for the schema bump
[14:17:43] ottomata: thanks!
[14:37:17] whoohoo, external camera is finally working
[14:50:46] \o/
[15:02:35] inflatador: what was the solution?
[15:03:01] gehel: I'll do that, a simple bash script should suffice
[15:05:56] errand, need to pick up my car from the garage
[15:15:50] This is what a person that failed to learn regexp for the last 15 years of their career does: https://www.irccloud.com/pastebin/ezRpFV1m/
[15:15:54] I regret nothing
[15:46:14] \o
[15:46:58] o/
[15:56:53] can somebody smarter with awk/sed help me simplify this?:
[15:57:00] https://www.irccloud.com/pastebin/sqhIh7PN/
[15:57:48] it does what I want (aggregating counts for each query, rotated or otherwise), but I'm guessing most of the sed/awk invocations can be rolled into one
[16:04:50] @team: could someone add "(beta)" to the title on https://commons-query.wikimedia.org/ to be coherent with the discussions we previously had?
[16:06:28] zpapierski I plugged it directly into my laptop dock instead of thru the hub. I did have to get an adapter for my mixer, as the dock only has 2 ports
[16:08:03] also not sure if it helps with your grep, but I have a friend who likes recs: https://metacpan.org/pod/App::RecordStream . I'm just as bad at regex so I can't offer too much help ;)
[16:24:55] ebernhardson: do you perhaps remember where the settings come from for the wcqs microsite?
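[ed. note] A hedged sketch of how the 15:56 sed/awk pipeline might collapse into a single awk invocation. The pastebin's actual contents aren't shown in the log, so the input format here is an assumption: each log line is taken to end with the query string as its last whitespace-separated field.

```shell
# Hypothetical consolidation (assumed input format, not from the pastebin):
# one awk pass counts occurrences per query, replacing separate
# sed/awk/sort/uniq invocations; sort -rn orders by count descending.
aggregate_queries() {
    awk '{ counts[$NF]++ } END { for (q in counts) print counts[q], q }' "$@" |
        sort -rn
}
```

Rotated logs can simply be passed as extra arguments (`aggregate_queries access.log access.log.1`), since awk accumulates counts across all of its input files.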
[16:25:31] I was looking at custom-config-wcqs.json in puppet but it contains only settings for the wmcs one
[16:33:46] zpapierski: hmm, no but i bet grep can find it :) sec
[16:37:06] zpapierski: i believe it's in puppet, modules/query_service/files/gui/custom-config-wcqs.json
[16:37:49] zpapierski: hmm, i see you found that but it doesn't work :) sec
[16:37:49] that's where I saw that one, but the other settings are all wrong - beta 2 def doesn't take e.g. the logout link from there
[16:38:04] hmm, ok sec lemme find where it's deployed to the microsites
[16:38:51] the file users get is https://commons-query.wikimedia.org/custom-config.json
[16:39:56] hmm
[16:45:18] it's returned by the microsite, not the nginx instances. Annoyingly I can't log in to a microsite and look around to figure out what's happening. Takes longer :P
[16:45:32] * ebernhardson builds pcc so he can fake-recreate microsites locally
[16:45:40] but it's not in the gui repo, I think
[16:45:53] zpapierski: but `curl -H 'Host: commons-query.wikimedia.org' https://webserver-misc-apps.discovery.wmnet/custom-config.json`
[16:46:21] zpapierski: i mean that returns the right thing from inside prod, which suggests it is the microsites (also i can't find any reference on the wcqs instances that they are serving it)
[16:48:54] I guess so, but I still have no clue where to change the title :(
[16:49:38] addshore: do you know where the settings for w*qs are served from? ones like https://commons-query.wikimedia.org/custom-config.json?
[16:49:45] oh well that's a pain. miscweb may be returning it, but a puppet catalog for miscweb1002 doesn't have anything called custom-config. Must come from a repo...i guess i'll have to expand my scripts to clone git repos too
[16:50:58] ebernhardson: in https://integration.wikimedia.org/ci/job/operations-puppet-catalog-compiler/33536/console you compiled the wrong patch I think
[16:51:16] taavi: no that's the right patch, what i'm trying to get is a production catalog of a specific instance
[16:51:36] taavi: I can't log in to most production instances, so i have to use the puppet catalog to recreate a version of it locally that i can look around in
[16:51:46] ahh :D
[16:52:10] I can't log in to most hosts either, but I hadn't thought of that before, that's genius
[16:53:02] taavi: it's nothing too fancy, ignores most puppet rules and mostly creates the file system structures and templated files: https://phabricator.wikimedia.org/P19853
[16:55:16] zpapierski: this shouldn't be so hard to find :( And i feel like i set this up originally...sorry, will find it :)
[16:55:33] no worries
[16:55:44] I think you did (or possibly ryankemper ?)
[16:55:54] actually, that's probably in gerrit, right?
[16:55:56] i certainly wrote the part that split microsites handling off
[16:56:59] zpapierski: it's in the gui-deploy repo
[16:57:06] https://gerrit.wikimedia.org/r/c/wikidata/query/gui-deploy/+/714623/5
[16:57:11] that looks promising
[16:57:13] it's not it, but close
[16:57:22] zpapierski: it's sites/wcqs/custom-config.json in gui-deploy
[16:57:30] wikidata/query/gui-deploy
[16:57:43] yep, found it as well, thanks :)
[17:00:22] https://gerrit.wikimedia.org/r/c/wikidata/query/gui-deploy/+/758895 - not sure who is responsible for merging this and deploying, though
[17:01:43] zpapierski: it uses a plain git::clone with ensure => latest in puppet. I think that means deploy and it will show up on the next puppet run
[17:01:56] zpapierski: err, i mean merge and it will deploy itself on the next puppet run
[17:02:34] * ebernhardson tries to find out if the bot will merge or not
[17:03:02] nope, manually submitted. Should show up in < 1hr
[17:04:03] workout, back in ~30
[17:04:09] cool, thx!
[17:05:22] workout, sounds familiar, where did I hear this word..
[17:05:34] (I really should get back to it, in all seriousness)
[17:06:29] Someone made a good suggestion to put a banner up on the old WCQS URL to announce that the new WCQS is live. Would this be something on us or for WMDE if we wanted to do that?
[17:10:05] mpham: we can change that, the wcqs(-alpha?) in labs has all automation disabled so we mostly edit the file in place
[17:10:57] * gehel ran puppet on miscweb, so that GUI change is now deployed
[17:11:00] dinner time!
[17:11:07] I was overthinking this, but yeah - since we're not updating wcqs-alpha (I like it!) anymore, we can just add a static page change
[17:12:49] Cool! that'd be great if we had a banner there reminding people that beta 2 is now available (i guess I'm just calling the old one 'beta 1' because the new one is beta 2 -- but alpha also makes sense)
[17:13:44] i thought there was some banner param or something for custom-config.json
[17:17:08] hmm, the banner looks hardcoded and has to change in the javascript. I guess good thing it's static :P
[17:18:00] on the other hand, we probably shouldn't retroactively change the labels on our services
[17:18:28] sure, beta 2 is probably a better name. I wasn't creative enough to attach a number :P
[17:27:33] zpapierski: are you going to figure out the banner or should I?
[17:27:58] I also just made a release notes page here: https://commons.wikimedia.org/wiki/Commons:SPARQL_query_service/WCQSbeta2-release-notes and am announcing it more widely now
[17:28:36] ebernhardson: I can, but not today (need to prepare for my language classes). If you have time today, go for it, otherwise I'll pick it up tomorrow
[17:28:44] zpapierski: kk
[17:29:27] thanks for looking into it!
[17:38:14] dcausse: is `rdf-streaming-updater: add the reconcilliation stream` ready for merge? I see you had it in the deploy window but it wasn't merged. I can ship that in the evening deploy window
[17:38:34] ebernhardson: I was about to deploy eventgate-main
[17:39:11] I think I'm not yet clear on how all these deps interact with each other
[17:39:39] dcausse: I'm not sure at all, which is why i had to ask :) I'm very unclear on what needs restarts to load things in this config
[17:39:45] and if it's super bad to deploy a mw-config patch without the proper schema updated in eventgate-main and things like that
[17:39:54] yes me too
[17:40:21] prepping a patch for eventgate main and will deploy that
[17:40:34] after that the mw-config one should be good
[17:40:59] if you have time today that's great but it can certainly wait for the EU window tomorrow
[17:41:24] i'm sure i can ship it this afternoon, let's get all this wcqs stuff cleared away
[17:42:09] thanks! but no worries if you don't get to it :)
[17:43:33] unrelated, for the cirrus jobs I looked into it and I feel like it's the same concurrency problem as before. codfw fell way behind cloudelastic, suggesting it's not the ability of the cluster to write data, but the ability of mediawiki to send the data to it
[17:44:02] yes, saw your comment and it makes sense
[17:44:05] going to poke through that all and figure out how much capacity the job runners actually have and how much we are using
[17:44:40] they're just waiting but that's probably holding a php worker somehow
[17:45:06] yes that's my concern, i don't know about today but in the old days mediawiki runners were severely memory constrained, something like 1-2GB per php process
[17:45:23] so even though it does nothing waiting on a network call, it holds all that memory hostage
[17:45:28] yes
[17:46:35] i keep ending up back at some sort of streaming application to solve cirrus's problems :P I feel like if this was a flink app holding open 300 network connections would be nothing
[17:47:36] :)
[17:47:43] and we could certainly batch some writes more easily I'm sure
[17:48:13] yes, in theory that could expand to kill the combiner process we use in yarn as well
[17:50:38] ejoseph are you still in? I was trying to get mediawiki-vagrant running on an M1 Mac, and after talking to bd808 it seems that the project is abandonware. Would like to understand your use case and see if there is an alternative
[17:50:39] M1: MediaWiki Userpage - https://phabricator.wikimedia.org/M1
[17:51:30] inflatador: the problem is CirrusSearch tests only run inside vagrant, because none of the development environments that replaced mediawiki-vagrant can run a multi-wiki cluster
[17:51:54] (at least, that was true ~2 years ago. maybe someone has since implemented multi-wiki clusters)
[17:52:29] there is "some" support but it's fairly manual
[17:52:31] ebernhardson I was thinking more like, can we use https://www.mediawiki.org/wiki/MediaWiki-Docker/Extension/WikibaseCirrusSearch ?
[17:53:19] https://www.mediawiki.org/wiki/Cli/guide/Docker-Development-Environment might be useful, but I have not used it myself so conjecture.
[17:53:23] I'm pretty sure I can hack together something on Mac, most likely a Linux VM running libvirt connecting to the vagrant client over libvirt+ssh
[17:53:27] inflatador: It should be installable, but in that case you will need multiple development environments for cirrussearch with wikibase and cirrussearch without wikibase
[17:53:59] inflatador: you can use https://www.mediawiki.org/wiki/MediaWiki-Docker which is i think the current suggested method
[17:54:02] https://www.mediawiki.org/wiki/User:Santhosh.thottingal/WikiFamily
[17:54:30] based on a.ddshore "mw" cli
[17:55:18] That doesn't look too horrible (but also not pleasant :). Not like mediawiki-vagrant is pleasant either. We've been punting for some time but we have to make a decision and move development environments for Cirrus one of these days
[17:55:48] yes it's not just like something you pull and run
[17:55:53] but it's getting there
[17:57:45] i suppose for ejoseph, probably the docker based method is preferred for now? multi-wiki is only needed when working on multi-wiki functionality like sister search, cross-wiki search, etc.
[17:57:57] and cindy will run the multi-wiki tests
[17:58:00] Are any of y'all aware of other teams that test extensions in multiwiki clusters?
[17:58:15] not in CI
[17:58:44] for emmanuel I think what would be nice then is an image with our plugins
[17:59:09] yea that would make sense
[17:59:27] I planned to just use a vagrant setup and hack it with es6.8+plugins like you did for cindy
[17:59:28] dcausse docker image?
[17:59:32] or vagrant?
[17:59:38] inflatador: yes a docker image
[18:00:02] last time I wanted to do that I ended up having tons of problems because apt is not there
[18:00:25] like "apt install something" is not that trivial with these minimal images
[18:00:46] Oh yeah, I know that pain, is it based off alpine or something?
[18:00:57] yes lemme find it again
[18:01:10] oh I think it's the ones provided by elastic
[18:02:08] you could completely hax it :P `ar x foo.deb` and `tar -C / -xf data.tar.xz`
[18:02:39] * ebernhardson isn't actually suggesting that, but finds it amusing
[18:02:54] :)
[18:04:47] That's why we have containers, right? ;p
[18:05:03] the best I could get is https://people.wikimedia.org/~dcausse/Dockerfile-elastic-dev
[18:05:11] but it was not even close to working IIC
[18:05:16] *IIRC
[18:05:40] it felt painful to just redo what elastic is doing with their image
[18:06:39] And mediawiki vagrant "just works"? Or do we customize that one too?
[18:07:07] we do customize it heavily for cirrus (using puppet)
[18:07:17] mw-vagrant has a puppet repo
[18:07:38] Got it, that's probably why it's the best solution for the moment I'm guessing?
[18:07:49] yes
[18:08:23] you get all the extensions we need, good default config, all required deps like the jobqueue and the like
[18:10:04] Is that all within the MW vagrant repo? Maybe it would be best if we walked thru the process of setting up the dev env? LMK if/when you or ebernhardson have time
[18:13:08] yes there's a puppet repo in the mw-vagrant repo, (I might not have time to run through it this evening) but Erik might, or we can certainly discuss tomorrow after office hours if you're around
[18:14:44] inflatador: yea we can take some time to go through how mw-vagrant works
[18:16:02] cool, I can send an invite or we can do it right now, whatever works better for you
[18:16:08] in theory, the process is documented in tests/integration/README.wmv-wmcs (wikimedia vagrant - wikimedia cloud services)
[18:16:16] inflatador: ya we can do now, prefer irc or video?
[18:18:01] what the heck, let's do a meet. meet.google.com/mqm-pgtj-dzv
[18:52:06] eventgate deploy done, dinner time
[19:02:02] lunch/errands, back in ~45
[19:02:17] dcausse: :)
[20:27:29] not sure if anyone got my last msg, but I've been back awhile
[20:37:37] didn't come through, but it's all good :)
[20:55:14] brief lunch, back in 30
[20:56:13] this is my new favorite ticket + resolution: https://phabricator.wikimedia.org/T300315
[20:57:05] i don't think that's the first time either :)
[20:59:07] not the first and certainly won't be the last :P
[21:33:47] lunch
[21:34:07] LOL, is the datacenter on a fault line? Shaking it up
[21:34:56] umm, well ulsfo is kinda on a fault line :)
[21:35:47] True enough. Maybe they put a trampoline park next to eqiad ;P
[22:07:43] back
[22:09:48] * ebernhardson wonders if he will ever get an answer from stack overflow that doesn't trigger the forbidden APIs check
[22:16:45] * ebernhardson is not surprised SO suggested new PrintWriter(file) and not new PrintWriter(new BufferedWriter(new OutputStreamWriter(new FileOutputStream(file), StandardCharsets.UTF_8))) so much typing to change one arg :)
[23:06:16] Time to make the tacos. See ya tomorrow
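[ed. note] For the record, the `ar x foo.deb` / `tar` joke from 18:02 is a real (if inelegant) way to unpack a package on a minimal image without apt: a .deb is just an ar archive whose data.tar.* member holds the filesystem tree. A sketch, extracting into a scratch directory rather than `/` as the joke suggested (`foo.deb` is whatever package file you fetched; pass it as an absolute path):

```shell
# Unpack a .deb without dpkg/apt: the ar archive contains debian-binary,
# control.tar.*, and data.tar.*; only data.tar.* holds the installed files.
unpack_deb() {
    deb=$1 dest=$2                         # $1 must be an absolute path
    mkdir -p "$dest"
    ( cd "$dest" && ar x "$deb" )          # extract the ar members into $dest
    tar -C "$dest" -xf "$dest"/data.tar.*  # unpack the payload tree (gz/xz/zst)
}
```

In a Dockerfile this avoids installing a package manager, though as noted in the log, rebuilding what elastic already does in their image is usually not worth it.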