[07:18:36] <_joe_> Krinkle: I'll take another look
[07:20:07] <_joe_> oh I see aaron merged it, my comments were indeed so minor they weren't worth revisiting the patch
[09:35:32] Ack, thx
[09:36:38] <_joe_> Krinkle: OTOH, we should *really* rethink the fact that uploading files is a synchronous process
[09:45:39] * Krinkle mumbles about having a multimedia team, and event bus reliability.
[09:45:44] "Soon"
[09:46:09] The winds are blowing in the right direction I think
[10:03:56] <_joe_> yeah looks like
[10:04:16] <_joe_> but until we do something there, I'm not sure very large files will work if we move metadata handling to shellbox
[10:04:30] <_joe_> see https://phabricator.wikimedia.org/T292322#7666754
[10:04:46] <_joe_> I'm asking Tim if he has ideas, but I think we're fixing the wrong problem there tbh
[11:01:15] As I understand it, uploads are generally chunked and async already, with the upload page indeed using the API to wait for progress
[11:01:38] That was done many years ago as part of UploadWizard
[11:02:09] (With the API in core but unused in the default HTML form)
[11:02:54] So the latency isn't currently spent synchronously in a native form transmission.
[11:03:27] But that doesn't make it any less slow ofc
[11:03:56] <_joe_> Krinkle: that's not how it works
[11:04:23] <_joe_> for upload by url, your POST waits for the process to be completed synchronously
[11:04:44] Right, but that's not 99.9% of uploads
[11:05:12] <_joe_> yes, but it's arguably the 0.1% which needs that the most :P
[11:05:22] <_joe_> because it's usually how very large files are uploaded
[11:05:30] That's a special right a few people use for imports
[11:05:58] Yeah, that could be turned into a job indeed, with some state mgmt for the UI
[11:06:46] <_joe_> given a lot of the reports about broken uploads came from people using upload-by-url, I was particularly worried
[11:07:41] <_joe_> also I had to test that because Special:UploadWizard uploads to commons and not to the local wiki, IIRC
[11:07:53] <_joe_> or am I mistaken?
[11:08:29] So to clarify, you're suggesting it stay this way mostly in terms of shellbox and transfer logic, but that we defer the whole action basically to a queued job.
[11:08:57] Afaik UW is only installed on commons. It uploads to the local wiki, which is commons.
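For context on the chunked/async path described above: it goes through the action API's upload module. A minimal sketch with curl, assuming a logged-in cookie jar, a CSRF token in $TOKEN, a file pre-split into 1 MiB parts, and $FILEKEY holding the filekey returned by the first stash response; this is illustrative, not the exact flow UploadWizard uses.

```bash
API=https://test.wikipedia.org/w/api.php

# Stash the first chunk; the JSON response contains a "filekey"
# to reference in the follow-up requests.
curl -b cookies.txt "$API" \
  -F action=upload -F format=json -F stash=1 \
  -F filename=Example.webm -F "filesize=$TOTAL" \
  -F offset=0 -F chunk=@part00 -F "token=$TOKEN"

# Upload the next chunk at its byte offset, quoting the filekey.
curl -b cookies.txt "$API" \
  -F action=upload -F format=json -F stash=1 \
  -F filename=Example.webm -F "filesize=$TOTAL" \
  -F "filekey=$FILEKEY" -F offset=1048576 -F chunk=@part01 -F "token=$TOKEN"

# Finalize out-of-band and poll for completion, rather than
# blocking a single POST until the upload is published.
curl -b cookies.txt "$API" \
  -F action=upload -F format=json -F "filekey=$FILEKEY" \
  -F filename=Example.webm -F async=1 -F "token=$TOKEN"
curl -b cookies.txt "$API" \
  -F action=upload -F format=json -F "filekey=$FILEKEY" \
  -F checkstatus=1 -F "token=$TOKEN"
```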
[11:09:57] Non-free uploads on other wikis use Special:Upload, a plain HTML form
[11:10:08] <_joe_> yes
[11:10:29] <_joe_> that's why I used Special:Upload on test.wikipedia.org
[11:10:36] There's no upload by url there afaik
[11:10:43] Outside commons I mean
[11:10:44] <_joe_> there is :)
[11:10:48] Ah okay
[11:10:58] <_joe_> https://test.wikipedia.org/wiki/Special:Upload
[11:11:24] Right, that's in this form
[11:11:38] <_joe_> so my main issue is that I guess the commons admins won't be amused by me uploading copies of multiple-GB images for testing on commons itself
[11:11:56] <_joe_> I could enable UW on testwiki for now
[11:12:00] <_joe_> to test that scenario too
[11:12:01] there is a testwikicommons
[11:12:04] if that helps
[11:12:10] There shouldn't be though
[11:12:17] <_joe_> yeah I am not sure it's still active
[11:12:35] T213295
[11:12:36] T213295: Close and delete TestCommons from production - https://phabricator.wikimedia.org/T213295
[11:12:41] <_joe_> as in, I don't even think we have the apache config
[11:12:59] ok, I assumed it was up as it is still in the db on s4
[11:13:15] It is, and it's active
[11:13:28] <_joe_> oh ok, that will serve me well then :D
[11:13:38] <_joe_> I mean we can still remove it afterwards ofc
[11:13:50] But that doesn't do large upload by url, right?
[11:14:01] I mean, isn't that what you wanted to test?
[11:15:28] ah, found it: https://test-commons.wikimedia.org/wiki/Main_Page
[11:19:50] <_joe_> Krinkle: that I am testing on testwiki
[11:19:55] <_joe_> I wanted to test UW there
[11:20:07] <_joe_> if it's enabled
[11:20:21] <_joe_> because that wiki I think has shellbox enabled for media processing
[13:00:59] _joe_: UW seems to already be enabled on testwiki (https://test.wikipedia.org/wiki/Special:UploadWizard)?
[13:02:37] <_joe_> taavi: uh, TIL, I was convinced it was commons-only
[13:03:31] nope, https://phabricator.wikimedia.org/P19811
[13:06:13] <_joe_> yeah I didn't even check, I remember UW not being present on the wikipedias
[15:17:39] https://mariadb.com/newsroom/press-releases/mariadb-corporation-ab-to-become-a-publicly-traded-company-via-combination-with-angel-pond-holdings-corporation/
[15:18:23] <_joe_> freemium here we come
[15:23:38] :-/
[15:25:56] let's not forget that MariaDB Corporation is one thing and the MariaDB Foundation is another, even if after all these years the split isn't clear :p
[15:34:29] "The transaction implies a pro forma MariaDB enterprise value of approximately $672 million." yeah, right...
[15:34:50] keyword: pro forma :D
[15:35:33] Valuation of companies, especially software businesses, is _so_ much voodoo these days.
[16:20:21] WAT
[16:20:23] dammit
[16:20:39] * apergos checks the calendar. nope, Apr 1 is two months away
[16:38:29] Has anyone gotten mediawiki-vagrant to run on their ARM-based Mac? Looks to me like it requires Parallels ATM (gross rent-seeking software). I'm thinking there must be a way to use Linux, maybe run vagrant's libvirt plugin over SSH? Open to suggestions
[16:42:42] inflatador: for better or worse, mediawiki-vagrant is functionally abandonware these days. :(
[16:44:44] bd808 no worries, is there a newer way to do it with Docker or something?
[16:47:07] It could also probably work with an ARM-based Debian image, I'm gonna try that in the short term
[16:51:31] looks like the mobileapps group in icinga has no members at the foundation, anyone have suggestions on who should receive their alerts?
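On the vagrant-over-SSH idea above: the vagrant-libvirt provider can target a remote hypervisor, which would sidestep Parallels entirely. A rough sketch, assuming a Linux box with libvirt and key-based SSH access ("you@linuxbox" is a placeholder); whether mediawiki-vagrant's boxes actually boot on arm64 this way is untested.

```bash
# Install the libvirt provider plugin into vagrant.
vagrant plugin install vagrant-libvirt

# Point libvirt clients (and the provider) at the remote host over SSH;
# "you@linuxbox" is a placeholder, not a real host.
export LIBVIRT_DEFAULT_URI="qemu+ssh://you@linuxbox/system"

# Build the VM on the remote hypervisor instead of locally.
vagrant up --provider=libvirt
```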
[16:51:57] inflatador: yes, Docker is the more active MediaWiki dev environment space. I feel like it is still fairly fragmented, but it's the thing that RelEng and other tooling builders are embracing.
[16:54:51] bd808 thanks. A dev on my team is trying to update Elasticsearch plugins, looks like this might be the answer? https://www.mediawiki.org/wiki/MediaWiki-Docker/Extension/WikibaseCirrusSearch
[16:56:03] <_joe_> jhathaway: good question!
[16:57:29] inflatador: hmm.... ES plugins need an Elasticsearch service. I don't see how that snippet makes that possible at all.
[16:57:51] <_joe_> jhathaway: https://www.mediawiki.org/wiki/Developers/Maintainers says the content transform team
[16:58:01] <_joe_> which was also my guess
[16:58:37] _joe_: thanks, I was unaware of that page
[16:59:15] <_joe_> jhathaway: please note they're indicated under "code stewards"
[17:01:58] _joe_: noted
[17:02:23] <_joe_> which means, they might be surprised you're asking :P
[17:02:46] :)
[17:12:15] honestly the simplest might be to just install apache/nginx + php-fpm via homebrew, which also gets rid of the overhead from docker's linux VM
[17:34:31] _joe_: are you the keeper of the mcrouter package or has someone else assumed that mantle?
[17:35:03] <_joe_> andrewbogott: john also did some work, but whoever needs to touch it needs to assume the mantle
[17:35:22] * _joe_ gets the mcrouter packager mantle out of the box, dusts it off
[17:35:34] <_joe_> andrewbogott: so what do you need the mantle for? :P
[17:35:50] I need to install it on bullseye
[17:36:05] Not sure if that's a 'build by hand and upload to apt1001' issue or if there's some kind of automatic package-building bot these days
[17:36:24] <_joe_> andrewbogott: may I ask why you need mcrouter on bullseye?
[17:36:38] <_joe_> is there a task?
[17:36:58] There is, but it's not very descriptive :) https://phabricator.wikimedia.org/T300578
[17:37:31] <_joe_> aahah indeed
[17:37:51] <_joe_> so, I guess this is not for wikitech moving to bullseye, so I'm a bit curious
[17:37:57] I added a sentence!
[17:38:08] It's not for wikitech, it's for Designate
[17:38:22] <_joe_> ok
[17:38:26] mcrouter is still our standard memcached solution, isn't it? Or have we moved on to yet another one?
[17:38:30] <_joe_> let me take a quick look at mcrouter
[17:38:34] <_joe_> yeah, nope
[17:38:40] <_joe_> and it's by far the best out there
[17:38:48] * andrewbogott waiting for the twemcache revival
[17:40:53] <_joe_> so yeah, they haven't done a release in 2 years so...
[17:41:44] when it comes to debian packaging, isn't 'them' us?
[17:42:04] <_joe_> I was looking at the source repo
[17:42:20] ah, ok
[17:44:02] <_joe_> so, your best bet is to get operations/debs/mcrouter, bump the changelog to bullseye, and change build.sh there to use docker-registry.wikimedia.org/bullseye:latest
[17:44:18] <_joe_> then try to rebuild it locally first
[17:44:56] <_joe_> if that works, you can upload a patch and me/jbond can also take a look
[17:45:06] ok -- so it builds in a container, meaning I don't specifically need a bullseye build host?
[17:47:45] <_joe_> exactly
[17:48:10] <_joe_> you might find the script needs tweaking
[17:48:18] <_joe_> but importantly it should keep working for buster
[18:24:18] <_joe_> jhathaway: uhm I guess we need to add a step to the offboarding procedure
[18:28:32] mutante: I just noticed something in data.yaml after seeing your patch. For sc-admins, why '/usr/bin/firejail --join=*'? Surely you could add any parameter after, or is there something special about parameter order?
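Condensing _joe_'s rebuild recipe above into commands, as a sketch: the dch invocation and the image name currently referenced in build.sh are assumptions (the script may name buster differently), and as he notes the script may need further tweaking.

```bash
# Check out the packaging repo.
git clone https://gerrit.wikimedia.org/r/operations/debs/mcrouter
cd mcrouter

# Add a new changelog entry targeting bullseye
# (the distribution suffix is a guess at local convention).
dch -i --force-distribution -D bullseye-wikimedia "Rebuild for bullseye"

# Swap the build container image in build.sh, then rebuild locally;
# the existing image name here is assumed, check the script first.
sed -i 's|docker-registry.wikimedia.org/buster:latest|docker-registry.wikimedia.org/bullseye:latest|' build.sh
./build.sh
```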
[18:28:55] _joe_: happy to update the doc, do you know where it lives?
[18:30:25] https://wikitech.wikimedia.org/wiki/SRE_Offboarding
[18:32:41] RhinosF1: eh..yea. probably (https://phabricator.wikimedia.org/rOPUP5070e6b968cbe88dc7b0d3bcde0242e87713b899)
[18:35:09] jhathaway: https://wikitech.wikimedia.org/wiki/SRE_Offboarding & https://office.wikimedia.org/wiki/VerboseOffboard#Ops_(applicable_to_all_users) but they might conflict in some details
[18:36:06] mutante: me going insane today and missing something wouldn't be a shock, but is it worth a task?
[18:37:00] RhinosF1: there is one that you can find on that phab link above, with a custom policy
[18:37:29] mutante: a custom policy would mean I can't see it :)
[18:38:09] could be anything, but isn't this the other ticket you just opened recently
[18:38:14] about WMF-NDA on phab
[18:38:28] ?
[18:39:23] could you ask John about that ticket / whether you should make one
[18:40:16] jbond: ^
[18:41:11] RhinosF1: sorry, I meant the ticket made by Zabe, it wasn't by you. it is about NDA access to tickets
[18:41:30] I don't have NDA
[18:43:21] ok, PMed you. and you can make a new one and link it to the other one
[22:50:37] What determines if a host's syslog messages end up in logstash? For example, syslogs for cloudelastic1003.
[22:58:33] jhathaway: unless it's outdated info.. this is how it was once done for phab apache logs, in puppet: https://gerrit.wikimedia.org/r/c/operations/puppet/+/499188/1/modules/profile/manifests/phabricator/main.pp rsyslog::input::file
[22:58:42] if in doubt, ask -observability though
[22:59:54] mutante: thanks, I'll look in puppet as to what roles that host has
[23:00:16] jhathaway: try this: :~/puppet$ grep -r rsyslog::input *
[23:00:19] jhathaway: the application has to be tagged for forwarding in lookup_table_output.json (https://wikitech.wikimedia.org/wiki/Logstash/Interface)
[23:00:21] you will see a couple
[23:01:02] ok, I was hoping we had some blanket rules for kernel messages and the like
[23:03:51] IIRC (there's a task somewhere, I'm sure), we're not sending all syslog because there is no RBAC for logstash
[23:04:29] ah, thanks for the context
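Tying together the pointers above, two quick checks one could run against a checkout of operations/puppet; the lookup table's path in the repo is a guess here, so treat it as illustrative and defer to the Logstash/Interface page on wikitech.

```bash
# Which roles/profiles declare explicit rsyslog inputs?
grep -rn 'rsyslog::input' modules/

# Which programs are tagged for forwarding to logstash?
# (hypothetical path; see https://wikitech.wikimedia.org/wiki/Logstash/Interface)
jq 'keys' modules/profile/files/rsyslog/lookup_table_output.json
```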