[09:13:47] TIL latest (?) gerrit has a "view diff" shortcut to see the diff between a given PS and the previous one, super convenient
[09:48:01] <_joe_> yep
[09:48:12] <_joe_> I assumed it was there since forever and I just noticed a few days ago
[10:59:21] I believe the integrated shortcut is new indeed. The ability to diff between versions has been there already, via the top left controls.
[11:00:20] yep.. the shortcut seems brand new to me as well
[11:06:47] <_joe_> yeah I'm sure it is
[11:06:54] <_joe_> I just assumed I didn't notice as usual
[11:21:16] godog: yeah that link helper is new in 3.4. One can also do a diff between any patchsets via the select lists showing `Base` -> `Patchset X`, which let you pick two patchsets to compare
[11:21:30] or an auto-merge if the patchset is a merge commit
[11:21:44] the `View Diff` link merely sets the select lists for you
[11:22:46] I sometimes use the URL bar, which shows the patchsets being compared after the change number. E.g. a comparison of ps 5 and 9 for change 12345 would have a URL ending with `/12345/5..9`
[11:23:18] I find it sometimes easier to edit the URL bar and press enter rather than use the select lists, but that causes the page to be reloaded.
[11:57:37] jbond: ok to merge your taskgen changes?
[11:57:50] (well, taskgen and git)
[11:57:56] klausman: yes please
[11:58:05] ack, doing so now
[11:58:17] and done.
[11:58:23] thanks
[12:12:46] hashar: thanks! yeah the previous method is what I've been doing before
[13:07:47] Here's a fun one from the sysadmin archives https://www.ibiblio.org/harris/500milemail.html
[13:08:28] A true classic
[13:16:39] Anyone know how MW decides which swift cluster(s) to write to? there are dnsdisc entries for swift, swift-ro, and swift-rw; but MW has credentials to each cluster. swift-rw is only pooled in one cluster, the others in both. I guess my question is: suppose (let us say) the eqiad swift cluster catches fire and I don't want MW's performance to be dependent on it; can I achieve that by adjusting some/all of the dnsdisc records in confctl, or would something more invasive be needed?
[13:19:20] To put it another way, I'm a bit worried that media upload performance might be being impacted by the fact that ms-be1059 is bricked, and am wondering if it's straightforward to push writes at codfw instead? [possibly not, given MW tries to write most things to both swift clusters, but we have definitely set pooled/false for swift and swift-ro in the past when doing possibly-disruptive things]
[13:31:37] <_joe_> Emperor: mediawiki writes to both clusters
[13:32:37] so the various dnsdisc records just affect where the caches go looking for things?
[13:33:11] (makes me wonder what the swift-rw record is for then)
[13:33:33] <_joe_> it was planned to be used by other applications :)
[13:33:50] <_joe_> but mediawiki writes to both and reads locally IIRC
[13:34:03] sadness
[13:34:12] <_joe_> well it's configurable ofc
[13:34:17] <_joe_> but it requires a deployment
[13:34:25] <_joe_> see operations/mediawiki-config wmf-config/filebackend.php
[13:34:44] <_joe_> and the referenced data structure in wmf-config/ProductionServices.php
[13:35:14] <_joe_> Amir1 can probably be a good person to ask for pointers to the code
[13:35:25] what terrible thing have I done
[13:35:28] * Amir1 reads up
[13:35:35] <_joe_> Amir1: know mediawiki
[13:35:43] <_joe_> and be in the same team as someone asking about it :D
[13:35:45] oh that terrible thing
[13:35:48] <_joe_> yeah.
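A minimal sketch of the confctl depool referred to above ("we have definitely set pooled/false for swift and swift-ro in the past"), assuming the usual discovery-object syntax; the record and site names are examples, not a recommendation for this incident, and per _joe_'s point it only affects consumers of the discovery records, not where MediaWiki itself writes.

```
# Sketch only: depool the eqiad swift discovery record
confctl --object-type discovery select 'dnsdisc=swift,name=eqiad' set/pooled=false
# and inspect the current state of the record
confctl --object-type discovery select 'dnsdisc=swift' get
```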
[13:36:20] [aside: at least ceph would have rebalanced around the failed node by now]
[13:36:59] let me see what class in mediawiki would be responsible for that
[13:37:09] LocalRepo
[13:38:21] and SwiftFileBackend
[13:38:32] that's probably what you need to check out
[13:39:18] <_joe_> yeah and specifically look for what the option 'readAffinity' does
[13:39:54] <_joe_> basically right now we have
[13:40:00] <_joe_> 'readAffinity' => ( $specificDC === $wmgDatacenter ),
[13:40:11] <_joe_> so false if you're not in the current mediawiki master dc
[13:40:14] <_joe_> true if you are
[13:40:27] <_joe_> that can be flipped ofc if we want to read from codfw instead
[13:41:29] if you have any specific problem, I might be able to look into it from the mw side
[13:41:44] Emperor: let me know
[13:47:17] thanks; I think the takeaway is that we should try and fail ms-be1059 out of the rings entirely, since it being in the rings but kaput is AFAICT causing increased write latency
[13:49:48] <_joe_> depends on how much latency and how long that server is going to be out of service, I'd say
[14:01:24] how long> not looking good, it's basically bricked ATM
[14:02:18] <_joe_> ok then indeed it seems to me we need to rebuild the ring
[14:02:32] <_joe_> but otherwise a slightly higher latency in writes might be acceptable
[14:05:12] https://grafana.wikimedia.org/d/000000584/swift-4gs?orgId=1&var-DC=eqiad&var-prometheus=thanos&from=1651449600000&to=1652227199000 you can see the uptick on 3rd May corresponding to the host going down
[14:06:17] I mean, separately the eqiad cluster upgrade is stalled, which is an at-least-as-compelling reason to fail out this host so we can try and upgrade another
[14:55:20] Are grafana dashboards managed via puppet or manually configured via the dashboard? I'd like to fix/update a link on https://grafana-rw.wikimedia.org/d/000000479/frontend-traffic that points to a 404'ed kibana page
[14:55:56] I see two dashboards in puppet/modules/grafana/files/dashboards but they appear to be early attempts that aren't necessarily used
[14:55:59] brett: it depends on the dashboard :) but that one I believe is edited manually
[14:56:22] cdanis: Thanks for the info!
[14:56:23] there's also a newer way to write dashboards-as-code, like the SLO dashboards
[16:56:52] Emperor: does https://phabricator.wikimedia.org/T307874 sound like the ms-be1059 issue you were troubleshooting?
[16:59:32] rzl: that's my best guess, but it is basically a somewhat-educated guess; godog has more historical context, but we do seem to have tickets going back years of "sometimes I find uploads very slow"
[16:59:57] nod
[17:00:26] I'm only asking with my clinic-duty hat on -- any chance I can interest you in putting a best-guess reply on the task? :D
[17:02:24] Not sure I have enough confidence in my answer being correct...
[17:03:17] I'll try and catch godog in the morning and update it then
[17:04:22] sounds good, thanks!
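On "fail ms-be1059 out of the rings entirely": purely as an illustration of the mechanism, upstream Swift's generic ring-builder commands for dropping a dead host look roughly like the sketch below. The zone/IP/port/device values are placeholders (not ms-be1059's real entries), and the production rings are managed through their own tooling and deployment workflow, so this is not the actual procedure used here.

```
# Sketch with placeholder values -- one remove per device the failed host carries:
swift-ring-builder object.builder remove z1-10.64.0.100:6000/sdc1
swift-ring-builder object.builder rebalance
# repeat for account.builder and container.builder, then distribute the rebuilt rings
```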
[17:59:50] tried to use the script ./utils/blame_stats.py that John mentions in his mail about puppet licensing.
[18:00:05] but: FileNotFoundError: [Errno 2] No such file or directory: 'mergestat'
[18:01:37] mergestat is an external tool you need to build locally
[18:01:52] ok
[18:02:14] looking at https://docs.mergestat.com/
[18:02:41] John and I spoke about it earlier; for various use cases relying on git log is also fine, so there'll probably be some option. The whole SPDX tooling is still being fleshed out
[18:03:13] best to wait a little before converting further modules, unless you want to be a very early adopter
[18:04:10] yea, ACK. so I used the 'git shortlog -es -- ..' and that seems to also give me what I need
[18:04:17] just checking email addresses
[18:05:27] I'll license the gitlab module, just need Antoine to agree (and he already did on the ticket I think), all other authors are @wikimedia.org
[20:02:13] If any of https://github.com/wikimedia/puppet/commits?author=RhinosF1 need licensing approval, just tell me where to comment
[20:03:21] RhinosF1: thank you. The place other people have commented has been https://phabricator.wikimedia.org/T67270 so far
[20:04:01] f.e. https://phabricator.wikimedia.org/T67270#7746293
[20:05:44] mutante: https://phabricator.wikimedia.org/T67270#7918862
[20:06:06] :) thanks
[20:08:04] Np
[20:47:20] i think it would also be useful to have some type of CLA in the puppet repo directly. this way users like RhinosF1 could create a CR adding their own name and email address, but i don't know what that would look like and whether we'd need to ping legal again to get the format correct. unless anyone knows of a convention we could start with
[20:48:27] mutante: fyi you should wait for https://gerrit.wikimedia.org/r/c/operations/puppet/+/789790/19 to get merged before adding any tags as that will make it as simple as:
[20:48:33] bundle exec rake 'spdx:convert:module[gitlab]'
[20:53:20] I fear we need a time machine to fix licensing on ops/puppet. I'm happy to see a new round of folks poking at the issue, but I'm not sure that any of the blockers from the past are gone now.
[20:55:53] The Foundation has the legal right per employment contracts to relicense work for hire in the repo however they like, but figuring out what is and is not work for hire is hard. And when you find things that are not work for hire you have a new problem of contacting that author and negotiating a license change.
[20:57:10] bd808: completely agree, none of the old problems are solved, but at least with SPDX tags we can start tagging stuff we know is good to change and also make sure we have something going forward
[20:57:15] jbond: if there's a better place to put it, just ping me
[20:57:37] CLAs are the slippery way that most corporate FOSS projects deal with this, but they have a whole other set of baggage for FOSS contributors
[20:58:07] there will definitely be a lot of things that possibly never get resolved, but once we know what that looks like we have a better idea to think about what to do, e.g. rip them out of the monorepo into a separate one or move them to vendor_modules etc
[20:58:08] jbond: yeah, I'm a strong +1 on trying SPDX for net new things
[20:58:40] RhinosF1: thanks will do
[21:01:38] CLAs are what let Elastic relicense Elasticsearch under a crayon license after being FOSS for more than a decade without even trying to have a conversation with contributors. This is the kind of thing I'm thinking about when I say "slippery".
[21:02:37] When /me just learned about "crayon licenses" and is happy about that wording
[21:03:11] oops, additional "when" at the beginning makes the whole thing sound like I had a stroke :)
[21:03:45] ack i see, i guess that would depend on the wording of the CLA? we could potentially word it so that the permission is only given as long as the work continues to be licensed under an Apache-2.0-compatible licence
[21:04:54] some of us would prefer a Free as well as Open Source license, but :)
[21:05:27] https://drewdevault.com/2021/04/12/DCO.html
[21:08:23] bd808: are you an OpenBSD user per chance?
[21:09:57] once upon a time I actually was. Now I'm a hypocrite who rants about licenses on irc while running MacOS & iOS on many devices.
[21:10:06] lawl
[21:11:14] lol i'm similar, i used to use it and tend to add a BSD licence to my own work, but Apache 2.0 is free enough for me these days :)
[21:13:15] my personal opinion is that GPLv3+ is a magic license because you can stick things made under pretty much any OSI approved license inside of a project using it. AGPLv3+ would be even more awesome for most things if there was clarity on AGPL and embargoed security patches.
[21:14:58] AGPL is how Shay could have stuck it to Amazon without harming all of his past FOSS contributors.
[21:15:28] But then Shay would have been constrained too and I'm sure he didn't want that
[21:22:12] do you consider GPLv3/AGPL to be more free than apache?
[21:23:25] * jbond is honestly not trying to start a licence war/debate
[21:24:24] Free software is GPL software in my brain. Apache, MIT, BSD, etc as approved by the OSI are Open Source software.
[21:25:52] RhinosF1: please add your name/email address to https://phabricator.wikimedia.org/T308013 (by editing the task description), we'll use that for central tracking of acked commits from non-wikimedia.org addresses
[21:26:13] feel free to ditch "Example User " while editing :-)
[21:27:07] my own ranty feeling is that Open Source was invented as a blanket term to separate the work of some good people who wanted to get corporations to adopt community ownership and collaboration from Stallman. Corps didn't like the word Free in either of its connotations.
[21:28:55] ack i tend to think of FSF not OSI when considering what is "free" vs "open" but honestly it's not something i have ever really delved into.
[21:29:36] I tend to think of other things when FSF or OSI come up :D
[21:30:12] I spent a long time in the "I just want folks to use my software" camp and using BSD/MIT licenses. Then at some point I decided that copyleft was actually important largely because mega corps hate it.
[21:30:42] moritzm: {{done}}
[21:30:49] I very much dislike Open Source as a marketing strategy
[21:30:51] What, you don't want corps to swallow your code and close it up?
[21:33:39] just tossing my sabots into the machine ;)
[21:34:15] puppet itself is under the Apache license
[21:34:27] jbond: ack, will wait. no rush
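Pulling the commands from this discussion together, the per-module workflow would look roughly like the sketch below; `modules/gitlab` is only an example path, and the rake task comes from the change linked above, so it only exists once that is merged.

```
# list author names/emails for a module; non-@wikimedia.org addresses need an ack
# (tracked on https://phabricator.wikimedia.org/T67270 and T308013)
git shortlog -es -- modules/gitlab
# once https://gerrit.wikimedia.org/r/c/operations/puppet/+/789790 is merged, add the SPDX tags:
bundle exec rake 'spdx:convert:module[gitlab]'
```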
[21:35:08] appreciated bd808, i could honestly be persuaded either way depending on the day i think :)
[21:35:22] we don't license MediaWiki under the PHP license though mutante
[21:36:34] by which I mean that there is no demanded tie between the runtime's license and the program's license
[21:37:08] Puppet did get sold recently, hopefully they don't pull an SSPL https://www.oregonlive.com/silicon-forest/2022/04/puppet-portlands-largest-tech-company-sold-to-minneapolis-based-perforce-software.html
[21:39:25] sure, not a demanded tie
[21:39:35] the SSPL nonsense has mostly been about folks attacking AWS for making a better SaaS version of their stack than they do. I don't know that Perforce will have any competition that they can exclude with a license change.
[21:41:05] Yeah, I dunno, but it's not a good sign when a private equity group swallows up a tech company. (Speaking from painful experience here)
[21:41:41] They'll probably cut staff and try to focus on holding on to existing Puppet Enterprise customers
[21:41:54] ^ Yeah, never ends well