[00:18:57] I'm looking into images on the wiki I help out with and noticed that basically fat .pngs are being loaded whenever some pages get loaded. I looked into possible solutions and found that there were multiple requests for Extension:WebP in the past, but in the end it was never introduced on Miraheze, per https://issue-tracker.miraheze.org/T10505. Universal Omega mentioned that there was a change in Cloudflare that was supposed to load images as webp, however I can't reproduce it on a page such as https://wiki.animalroyale.com/wiki/Animals. Does anyone know if it's supposed to work, or what the fate of that is? It would probably be better for the bandwidth of both Miraheze/the CDN and users if images were actually served as (not extremely compressed) webps.
[00:21:34] don't mind doing a ticket on Phorge about it, though figured I'd ask first because maybe I'm missing something
[00:26:32] i thought at least at some point we were serving webp
[00:26:44] i remember hearing complaints
[00:28:14] hmm, well, yeah, the linked page is 500MB when served in full... Lazy loading helps, but half a gigabyte for a bunch of animal pictures is kinda excessive. Will likely have to somehow collapse some content or load it differently later, but if it happens to us it's likely an issue on the entire farm. And c'mon, 500MB per page is really not efficient 😭
[00:33:01] And yeah, complaints about .webp images being served sound about right. I really dislike how it's often implemented, where the .webp poses as the original file, or where no .webp compression level looks good on some sprite files because it blurs the entire image
[01:05:13] Anyways, prob gonna make a Phorge ticket about it tomorrow if the missing .webp's aren't some kind of omission. If it's on purpose then I think it would be fair for something like Ext:WebP to exist on Miraheze as an opt-in for .webp images. Though I'd have to look closer into whether that extension actually solves the issue I have. I know Fandom has their own fancy extension that turns images into .webp's pretending to be the original format file, and there a page that costs 500MB on Miraheze costs just 8MB in transfer (plus some wikis utterly hate .webp because of issues like spritesheets looking like bad watercolor pictures).
[07:15:41] Frisk: I doubt you'll get Extension:WebP at the moment as Swift is pretty full on disk space
[07:16:09] There is a Cloudflare feature but we only use CF for security & DNS now
[07:18:42] After the 22nd we will have more storage. As of now we can't do anything that will overuse it as we only have 1% left.
[07:24:55] Hey cosmic
[07:26:18] Ye we lost CF Polish because no caching through them
[07:26:23] It's still turned on
[07:27:25] There is CF Images for on-demand but that's not enabled for us
[07:29:07] Will the fisch DDOS stop as well?
[07:29:22] There's like a permanent banner on CF saying recent DDOS
[07:30:53] I didn't realize that. It's probably okay for now
[07:30:55] @cosmicalpha did you run recreate on all dpl wikis ^
[07:31:05] Most users complained about it tbh
[07:31:21] Yes. And none were owned by the old user anymore, I don't know why it still failed.
[07:31:34] @reception123
[07:31:53] It is the same error I think but I don't understand why
[07:32:15] I may just drop all DPL3 views and rework DPL3 to not need it...
[07:32:33] Should have done that 3 years ago tbh
[07:32:59] what do I need to run exactly?
[07:33:14] oh I just saw in SAL
[07:34:31] That should have been fixed
[07:34:34] Yeah I have no idea what's up with views failing...
[07:35:14] well clearly something is still wrong
[07:35:25] Definitely none left from wikis that had turned it off?
[07:35:37] Ran `select * from information_schema.views WHERE DEFINER = 'mediawiki@%';` on all DBs and got no results after running the script and some drops earlier.
[07:35:40] Was it the exact same error?
[07:35:45] No, I dropped all of those.
[07:35:55] Okay that's weird
[07:36:17] Yeah, iirc that's what the last one was
[07:36:32] That should be impossible
[07:36:47] Is this failed wiki fixed?
[07:37:12] Yeah just confirmed again
```
MariaDB [(none)]> select * from information_schema.views WHERE DEFINER = 'mediawiki@%';
Empty set (10.576 sec)
```
[07:40:59] Yep not on any server:
```
universalomega@puppet181:~$ sudo salt-ssh 'db' cmd.run 'sudo -i mysql -e "SELECT * FROM information_schema.views WHERE DEFINER = \"mediawiki@%\";"'
db172.wikitide.net:
db182.wikitide.net:
db171.wikitide.net:
db181.wikitide.net:
db151.wikitide.net:
db161.wikitide.net:
```
(it does give results with `mediawiki2024@%`)
[07:41:43] @rhinosf1
[07:42:10] Yes we fixed the failed wiki also
[07:42:40] Baffling
[07:42:49] Yeah I don't understand
[07:43:02] Good, because that step has been forgotten the last two times and MediaWiki really doesn't like having half a database
[07:43:22] yep
[07:43:56] I will at some point when I have time (probably next week) add error handling so if it fails it automatically reverts the rename
[07:46:31] for the record:
```
universalomega@puppet181:~$ sudo salt-ssh 'db' cmd.run 'sudo -i mysql -e "SELECT COUNT(*) FROM information_schema.views WHERE DEFINER = \"mediawiki2024@%\";"'
db172.wikitide.net:
COUNT(*)
2
db182.wikitide.net:
COUNT(*)
0
db181.wikitide.net:
COUNT(*)
136
db171.wikitide.net:
COUNT(*)
137
db151.wikitide.net:
COUNT(*)
172
db161.wikitide.net:
COUNT(*)
152
```
[07:47:30] And of course 0 for all with mediawiki:
```
universalomega@puppet181:~$ sudo salt-ssh 'db' cmd.run 'sudo -i mysql -e "SELECT COUNT(*) FROM information_schema.views WHERE DEFINER = \"mediawiki@%\";"'
db172.wikitide.net:
COUNT(*)
0
db182.wikitide.net:
COUNT(*)
0
db171.wikitide.net:
COUNT(*)
0
db161.wikitide.net:
COUNT(*)
0
db181.wikitide.net:
COUNT(*)
0
db151.wikitide.net:
COUNT(*)
0
```
so I'm really lost as to the issue...
[07:49:48] Oh maybe RENAME TABLE doesn't work for VIEWs at all @rhinosf1
[07:50:02] has it ever worked?
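For reference, a minimal sketch of the drop-and-recreate workaround that the conversation below settles on, since MariaDB refuses to RENAME TABLE a view into another schema. This is not the actual RenameDatabase code: $dbw is assumed to be a Wikimedia\Rdbms\IDatabase handle with rights on both databases, and the SELECT body is a placeholder rather than the real dpl_clview definition (which would be taken from SHOW CREATE VIEW on the old database first).

```
<?php
// Sketch only: recreate the view under the new schema, then drop the old one,
// instead of RENAME TABLE (which MariaDB rejects for views with error 1450).
$oldDB = 'weaverswiki';
$newDB = 'jjtwiki';
$view = 'dpl_clview';

// A view stores no rows, so recreating it is effectively free. The SELECT here
// is a placeholder; the real definition should come from SHOW CREATE VIEW.
$dbw->query(
	"CREATE OR REPLACE VIEW `{$newDB}`.`{$view}` AS " .
	"SELECT cl_from, cl_to FROM `{$newDB}`.`categorylinks`",
	__METHOD__
);

// Only drop the old view once the new one exists, so a failure part-way
// through can simply be retried.
$dbw->query( "DROP VIEW IF EXISTS `{$oldDB}`.`{$view}`", __METHOD__ );
```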
[07:50:07] Maybe, but it would be a different error
[07:50:22] I asked reception if it was the exact same error
[07:50:35] @reception123
[07:51:24] This was the error reception sent me
```
Wikimedia\Rdbms\DBQueryError from line 1198 of /srv/mediawiki/1.43/includes/libs/rdbms/database/Database.php: Error 1450: Changing schema from 'weaverswiki' to 'jjtwiki' is not allowed
Function: Miraheze\MirahezeMagic\Maintenance\RenameDatabase::execute
Query: RENAME TABLE `weaverswiki`.`dpl_clview` TO `jjtwiki`.`dpl_clview`
```
[07:52:35] That's not the exact same error
[07:52:48] But yes, I guess to answer your question
[07:52:58] I guess not
[07:53:04] Kinda makes sense
[07:53:19] I will change the script to do views differently I guess
[07:53:25] Views are weird and this method of renaming a DB is non-standard
[07:53:43] Drop and recreate is probably the only option
[07:53:58] yeah that's my plan: create from the schema, then drop the old one
[07:54:05] It's a view so it's free
[07:54:53] yeah I'll work on it next week.
[07:55:12] Which got very confusing the first time we did this
[07:55:29] I'm busy tomorrow, Friday, and quite a bit this weekend, but may be able to get to it this weekend...
[07:55:30] I don't think anyone but me and you knew what a view was
[07:56:27] It seems so. It's not commonly used here (I think the only one is from DPL3), so that does kinda make sense why it's not well known also...
[07:57:09] The only other place I know that uses them is the wiki replicas
[07:57:32] yeah I honestly should just rework DPL3 to remove it...
[07:58:16] Anyway I'm gonna go for now.
[07:58:32] I won't be around all that much tomorrow and maybe Friday.
[08:30:24] I see, thank you all for the responses regarding images! That's a bummer
[11:05:14] I'm slightly confused about something: if $wgEnableTranscode in TimedMediaHandler is disabled, wouldn't it be fine to just delete the local-transcoded container if it's not being used?
[11:05:46] There's a wiki that doesn't need the transcoded files but it takes up even more than local-public, so I was wondering if they could just be deleted
[11:24:53] If wgEnableTranscode isn't enabled then that folder would surely be empty?
[11:30:44] yeah, that's why it doesn't make sense... miraheze-removededmsongswiki-local-transcoded 154078968302 -> ~154 GB
[11:31:21] Is that something we expose in ManageWiki? Maybe they had it enabled at one point and no longer do?
[11:31:45] Is there any way to see the last time it was written to? If it was a while ago then it's probably safe to remove, I would assume
[11:31:48] yes, it's in ManageWiki. I can check
[11:32:39] 04:30, 11 September 2024 Ellie contribs changed the settings "wgEnableTranscode" for "removededmsongswiki"
[11:32:41] ah, that makes sense then
[11:48:30] Frisk: https://issue-tracker.miraheze.org/T13172 confuses the fuck out of me
[11:48:33] it's fine in cf
[11:51:27] what the frisk
[11:52:51] hey BlankEclair
[11:53:22] how do you get mediawiki to fail like that though
[11:54:43] BlankEclair: create 520.php
[11:54:49] ah, we did?
[11:54:53] BlankEclair: no
[11:54:56] oh
[11:54:59] but that's how you could
[11:55:12] create a page that always dies with the error code passed to it
[11:55:15] RhinosF1: > Just as an FYI we know about that one but don't know whats causing it atm
[11:55:15] > The url is correct in CF et al but its still going to the wrong place for some people
[11:55:15] > Yeah, goes to the right place for a few of us
[11:55:15] ~OriginalAuthority on Discord around 2025-02-01 22:29:17
[11:55:15] 520 is Cloudflare's response for an unexpected response btw
[11:55:17] https://developers.cloudflare.com/support/troubleshooting/cloudflare-errors/troubleshooting-cloudflare-5xx-errors/#error-520-web-server-returns-an-unknown-error
[11:55:25] i wonder if it's caching
[11:55:35] Frisk: ye i know about it
[11:55:41] oh okay
[11:55:42] i ain't got a fucking clue how to fix it
[11:55:48] it works fine in the previews
[11:56:34] yeah, no clue either :( Is it one that's been happening for a long time before I made the ticket?
[11:56:45] Frisk: not sure
[13:23:56] Frisk: you trying a new client?
[13:25:18] well, Frisk-hexchat, rather
[13:25:28] nah, I just don't have #miraheze-feed opened via my bouncer because that would be a little too much, and I sometimes need it to test stuff like RcGcDb
[13:25:46] that's no impersonation, no worries haha
[15:48:04] For a quick background, the maintenance scripts for CS from ManageWiki don't currently work
[15:48:21] Does anyone get why in https://github.com/miraheze/mw-config/blob/master/ManageWikiExtensions.php#L2168 skipLinks and skipParse = false?
[15:48:23] shouldn't they be true?
[15:49:08] cc @rhinosf1
[15:49:42] (manually what I run is `mwscript extensions/CirrusSearch/ForceSearchIndex.php wiki --skipLinks -y --indexOnSkip && mwscript extensions/CirrusSearch/ForceSearchIndex.php wiki --skipParse`)
[16:05:37] Reception123: no idea
[16:05:45] ManageWiki isn't a me question
[16:06:11] that's an MW Engineers area
[16:06:22] maybe @originalauthority
[16:15:12] Git blame says Agent is responsible for adding that. I assume it was just incorrectly added and should be fixed.
[16:15:37] and all this time I was told it just wasn't possible to run the same command twice in ManageWiki
[16:15:43] guess it's my fault for not actually having a look
[16:17:07] Or maybe it was intentionally set false for that reason?
[16:17:35] guess we'll find out with the next CS enable
[16:19:43] @originalauthority oh now that I think of it, apparently https://github.com/miraheze/ManageWiki/commit/a72720fe76ffac14e54dcc489778442ade7b373f was done to be able to run the same script twice with different parameters and it never actually successfully worked?
[16:20:42] Seems like a mess
[16:20:49] Why does it need to be run twice?
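For illustration, this is roughly what the disputed ManageWikiExtensions.php fragment might look like with the flags flipped to true. It is a hypothetical sketch, not the real entry: the exact array layout, key names, and how ManageWiki disambiguates a script listed twice may all differ from what is shown here.

```
// Hypothetical sketch only; the real ManageWikiExtensions.php structure may differ.
// The idea under discussion: a flag set to true gets passed to the script
// (--skipLinks etc.), false omits it, and ForceSearchIndex.php has to be listed
// twice so both indexing passes actually run.
'mwscript' => [
	// First pass: index page content, skipping the links phase.
	'extensions/CirrusSearch/maintenance/ForceSearchIndex.php' => [
		'skipLinks' => true,
		'indexOnSkip' => true,
	],
	// Second pass: index links only, skipping the parse phase. Keyed differently
	// so PHP does not collapse it onto the entry above (the problem the
	// ManageWiki commit mentioned above was trying to solve).
	'extensions/CirrusSearch/maintenance/ForceSearchIndex.php#secondPass' => [
		'skipParse' => true,
	],
],
```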
[16:20:56] I don't know much about CS
[16:21:04] same script twice with different options
[16:21:15] can't run both at once
[16:21:20] cause CS is stupid
[16:21:24] yeah, first you need to do --skipLinks and --indexOnSkip and then separately you need to run it with --skipParse
[16:21:37] but I don't get why ManageWiki isn't liking that, it's not _that complex_
[16:21:43] That's stupid as fuck
[16:22:06] originalauthority: it is stupid as fuck yes
[16:23:16] well it's even worse to have to manually run 3 scripts every time someone wants CS
[16:23:33] so that's why I was hoping we could figure out what's wrong with ManageWiki and have it do that
[16:26:58] My first thought would be to check if the job is actually being sent, but i don't really know much about kafka for that heh
[16:27:16] it should be logged
[16:27:18] i think
[16:27:21] it used to be
[16:40:39] I think this was imported from WikiTide's config
[16:42:40] the false part must be a mistake right?
[16:42:53] though odd that in CA's PR in ManageWiki itself he also indicates that the usage is using 'false'
[16:42:56] @cosmicalpha would know, I don't
[16:43:06] I think he's away these days so that's why I didn't ask him directly
[16:44:36] Maybe change it and see if that works?
[16:46:05] I just did but creating wikis is broken on beta 😦
[16:46:14] I guess I can just try on prod
[17:01:00] Seem to be getting "An error has occurred while searching: We could not complete your search due to a temporary problem. Please try again later."
[17:03:07] What did I miss?
[17:03:13] (as a side note there's a lot of "SQL query did not specify the caller" spam in graylog)
[17:03:46] TL;DR CirrusSearch needs to run a script twice with different parameters. We don't know if/why ManageWiki doesn't like doing that
[17:04:02] Huh
[17:05:50]
```
Error running /bin/bash '/srv/mediawiki/1.43/vendor/wikimedia/shellbox/src/Command/limit.sh' ''\''/usr/bin/php'\'' '\''/srv/mediawiki/1.43/maintenance/run.php'\'' '\''/srv/mediawiki/1.43/extensions/CirrusSearch/maintenance/ForceSearchIndex.php'\'' '\''--wiki'\'' '\''cirrussearchtestwiki'\'' '\''--skipLinks'\'' '\''--indexOnSkip'\''' 'SB_INCLUDE_STDERR=;SB_CPU_LIMIT=50; SB_CGROUP='\''/sys/fs/cgroup/memory/mediawiki/job'\''; SB_MEM_LIMIT=0; SB_FILE_SIZE_LIMIT=0; SB_WALL_CLOCK_LIMIT=60; SB_USE_LOG_PIPE=yes': The "CirrusSearch" extension must be installed for this script to run. Please enable it and then try again.
```
[17:06:06] @originalauthority this really makes no sense... if CS wasn't enabled then the script wouldn't have been called to run...
[17:06:39] the fact of enabling CS is what gets the script to run, so I don't get how it could think it's not installed
[17:10:04] Maybe there's a race condition
[17:10:14] The script might be running before ManageWiki updates the cache
[17:10:32] Some extension is missing a METHOD call somewhere then
[17:10:59] hmm, but then how come other maintenance scripts for other extensions work?
[17:11:10] or I guess for the others it doesn't matter if the extension is installed before they run?
[17:11:51] Yeah, if the maintenance script doesn't state in the constructor that it requires a specific extension, MediaWiki will run it anyway
[17:12:11] I don't know if that's 100% what's happening but I would assume it's a possibility
[17:12:26] Would there be a relatively easy fix?
[17:12:35] I'm guessing something like a delay or something?
[17:15:33] I don't think there would necessarily be an easy fix from the ManageWiki side. If PHP supported async functions we could await the recache, but it doesn't, so heh. A quick and really really dirty hack might be to create a maintenance script that doesn't require CirrusSearch, and run that from ManageWiki instead, which could queue a job to run the maintenance script. Really hacky way, so maybe someone else has other ideas.
[17:16:23] Trying to backread, but what exactly was the issue? A little confused
[17:16:55] The script parameters in ManageWikiExtensions.php for CirrusSearch appear to differ from those that would be run from the CLI.
[17:17:27] oh CS. yeah it's a total mess and the way I did it is even worse... it needs to run twice with different options, but PHP keying it prevented that, which is what that was for, but it still didn't work...
[17:17:31] They were set to false instead of true. Now I changed them to true, but the first one fails to run because apparently it's running before ManageWiki updates the cache
[17:18:12] Complex problems require messy solutions heh
[17:18:20] But I don't think running twice is the issue, because even the first time the script runs it doesn't work
[17:18:21] Upgrade all Miraheze extensions. I fixed this in many of our own yesterday.
[17:18:24] because it thinks CS doesn't exist
[17:18:57] Yeah that's the problem. ManageWiki tries to run it before it knows the extension exists.
[17:18:58] I wonder if you could have ManageWiki run the resetWikiCaches script before it runs the CS script
[17:19:15] That would likely cause it to think the wiki doesn't exist first instead
[17:19:27] hmm
[17:19:43] _enter C# await_
[17:20:04] Maybe we need to fork the CS script and just remove requireExtension... I don't like that though, at all...
[17:20:40] yeah, I don't really get why that's needed
[17:20:53] if you accidentally run the script without CS enabled would something bad happen?
[17:21:00] otherwise I don't see why it's necessary
[17:21:02] Maybe they'll fix all of this bullshit script running stuff when they migrate CS to OpenSearch... I doubt it
[17:21:55] Probably not. It would just break search indexes maybe? I actually don't know.
[17:22:08] They definitely won't
[17:22:20] We could potentially fork the script, remove the requireExtension, and instead load the wiki from the db and check that the extension is installed? Since theoretically the script will run after the extension has already been saved? It would mean an extra DB call, but it would likely be marginal as CS is not going to be installed frequently
[17:23:31] Well... actually now that I think of it, it might work with a ManageWiki change to use DeferredUpdates in some way to run it after it knows the extension is enabled...
[17:27:02] Forking it isn't really an option without forking different classes also...
[17:27:39] CS maintenance scripts extend https://github.com/wikimedia/mediawiki-extensions-CirrusSearch/blob/cedd2c45d30be9acefbef1415df787ba72de0126/includes/Maintenance/Maintenance.php which is what has requireExtension
[17:27:39] Why? We could just copy the script to Mira functions, no?
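For context on what that check does: MediaWiki's core Maintenance class offers requireExtension(), and a script that calls it dies during setup, before execute() runs, with the same "must be installed for this script to run" message seen in the error above if the extension is not loaded at that moment. A minimal sketch follows; the class name and file layout are made up for illustration, and only the requireExtension() call mirrors what CirrusSearch's maintenance base class relies on.

```
<?php
// Minimal sketch of a maintenance script that depends on an extension.
// The class name is invented; only requireExtension() reflects core behaviour.
$IP = getenv( 'MW_INSTALL_PATH' ) ?: __DIR__ . '/../..';
require_once "$IP/maintenance/Maintenance.php";

class ExampleCirrusDependentScript extends Maintenance {
	public function __construct() {
		parent::__construct();
		// Checked during setup, before execute() runs. If CirrusSearch is not
		// registered for the wiki at that point (e.g. ManageWiki has not yet
		// refreshed its cache), the script aborts with:
		// The "CirrusSearch" extension must be installed for this script to run.
		$this->requireExtension( 'CirrusSearch' );
	}

	public function execute() {
		$this->output( "CirrusSearch is available, doing work...\n" );
	}
}

$maintClass = ExampleCirrusDependentScript::class;
require_once RUN_MAINTENANCE_IF_MAIN;
```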
[17:27:45] Ohhh
[17:28:06] More bullshit from the WMF ahh
[17:31:13] Yep
[17:31:54] well in the end it always comes down to the fact that most places don't need to be regularly installing stuff
[17:31:58] so for others running 3 scripts isn't a big deal
[17:32:06] An extension's maintenance script requiring the extension shouldn't be a shock
[17:32:11] Nor is that really bullshit
[17:32:50] well I don't think there are actually others that do
[17:32:56] or otherwise we'd have seen the same issue with ManageWiki
[17:33:26] And what did you have in mind? The Wiki reviewer group is already the weakest of all the higher privileged groups! One can't go any lower!
[17:34:10] I think this is the issue right here. It's not about "privileges" and roles, that's really not what matters
[17:34:56] dude nobody is going to trust such roles to a person w/ such attitude, don't you get it?
[17:35:10] Can someone confirm that CS is working on https://semanticmediawiki1.mirabeta.org/w/index.php?search=test&title=Special%3ASearch&wprov=acrw1_-1 ?
[17:35:29] it seems to be?
[17:35:54] I'd consider going to a course on communication skills in a professional environment if I were you
[17:36:48] @cosmicalpha I tried Agent's resetwikicache suggestion on beta and enabled CS on that wiki above and it actually seems to have worked
[17:37:03] CS seems to be working on SMW1 beta and I don't see any errors in graylog for the scripts
[17:37:17] @reception123 @originalauthority I may know a fix for this. It seems like a race condition... changing the order of how things are run in https://github.com/miraheze/ManageWiki/blob/dfaeb4e0dbbd9a420ecc1bd4a634c88015d915da/includes/Helpers/ManageWikiExtensions.php#L191 might work: reset the cache and write to the db first, then insert the job.
[17:37:34] it will because it's only one server.
[17:37:51] Ah, right
[17:38:04] Probably would sometimes in production too, but it would be inconsistent
[17:38:18] either way your solution above would definitely be cleaner
[17:39:14] so check requirements, then write, then reset cache, then run mwscript probably would work. But it may cause it not to. Some scripts need to be run before extensions are enabled also, so it's complicated
[17:39:30] Ah my initial hypothesis was correct then heh
[17:40:41] oh I missed that lol
[17:40:42] What if we fail to write to the database though? We'd have to again reset the cache to remove that extension
[17:40:43] maybe that could be fixed by adding an option to differentiate between scripts to be run before and after?
[17:41:00] Could try catch it I guess?
[17:41:15] rollback if it fails?
[17:41:30] Yeah that sounds sensible
[18:04:55] This seems to be going fun
[18:41:50] That's the default
[20:03:12] @cosmicalpha https://github.com/miraheze/mw-config/pull/5780#issuecomment-2657594597 what's the config?
[20:03:18] does that mean i can abandon the change
[20:04:12] ignoreupgradecheck or something
[20:05:10] $smwgIgnoreUpgradeKeyCheck
[20:05:42] https://github.com/miraheze/mw-config/commit/3b3ec4ae8dc23936a7e5c8cf3cf14328e554b8f8
[20:07:13] oh so the smw.json file isn't even used @cosmicalpha ?
[20:07:23] I don't believe so
[20:07:55] oh i guess that's why there were no failures that time when i saw the file wasn't under 1.43 but was under 1.42
[20:08:11] Yeah.
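A rough sketch of the ordering and rollback idea agreed on above: persist the change and reset the cache before queuing the maintenance script, and undo everything if a step fails. Every class and method name here is invented for illustration and does not correspond to ManageWiki's actual Helpers/ManageWikiExtensions.php code.

```
<?php
// Illustrative only; names do not match ManageWiki's real code.
class ExtensionChangeCommitter {

	public function commit( array $newExtensions, array $oldExtensions ): void {
		try {
			// Write and refresh the per-wiki cache *before* queuing any script,
			// so that by the time something like ForceSearchIndex.php runs,
			// requireExtension() can already see CirrusSearch.
			$this->writeToDatabase( $newExtensions );
			$this->resetWikiCache();
			$this->queueMaintenanceScriptJobs( $newExtensions );
		} catch ( Throwable $e ) {
			// Roll back so the wiki is not left with a half-enabled extension.
			$this->writeToDatabase( $oldExtensions );
			$this->resetWikiCache();
			throw $e;
		}
	}

	private function writeToDatabase( array $extensions ): void { /* persist list */ }
	private function resetWikiCache(): void { /* rebuild per-wiki cache */ }
	private function queueMaintenanceScriptJobs( array $extensions ): void { /* queue jobs */ }
}
```

Scripts that have to run before an extension is enabled would still need a separate before/after flag, as suggested in the discussion above.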
[21:34:15] you need to read this, it is the only thing we can tell you at this point https://discord.com/channels/407504499280707585/1006789349498699827/1339347786587574312
[21:35:11] you are rather visibly desperate to have a hat of some sort, and I have received enough evidence and observed enough of your conduct in public that you will not have a chance persisting this way; indeed, it has become disruptive. So I will formally ask you to drop the stick, and persistence on this could result in removal from Miraheze spaces.
[22:01:47] BlankEclair: when you get on https://phabricator.wikimedia.org/T386418
[22:08:52] I think that's a duplicate of https://phabricator.wikimedia.org/T341435
[22:09:20] oh yeah lol
[22:09:42] how tf you merge that in, idk