[00:30:29] [1/4] There does not seem to be a solution that is simultaneously (1) performant, (2) simple to implement, and (3) compatible with multiple databases. `MEMBER OF` satisfies (1) and (2) but not (3), and that's good enough for WG's use case. Using `LIKE` satisfies (2) and (3) but not (1), and I chose it because I didn't want to write too much code.
[00:30:29] [2/4] Cargo's solution of a database-specific implementation satisfies (1) and (3) but not (2). This is an ideal solution but requires heavy modifications to Bucket's codebase, which I'm assuming is not something WG wants to do.
[00:30:29] [3/4] I wonder if a full table scan would result in a big performance degradation. A bucket with 10,000 rows is at most a few megabytes on disk, which can be retrieved in less than a second if stored on an SSD. I can try a benchmark once I get home, though the DB's buffer pool/OS page cache will make getting repeatable results very difficult.
[00:30:30] [4/4] Alternatively, if performance is a concern, the wiki can simply normalize the table.
[01:46:00] yeah, I think the closest thing to a generic solution to this would be to normalize everything automatically, and basically create sub-tables for every repeatable field so then it's easy to do a one-to-many relationship
[01:46:12] but that considerably complicates the table management, query logic, etc.
[01:47:32] and because mysql doesn't have transactional ddl, we are really trying to minimize the amount of state management that is done in the ddl layer
[01:48:32] i will say personally i have been unhappy with the performance on full table scans even with ~10k entries
[04:01:39] Bucket has a default time limit of 10s, which could be lowered to prevent inefficient table operations from happening and force the user to properly normalize their table? I would assume that most wikis won't hit 10k entries and this is only a problem for very large wikis, for which the operator can be disabled.
[04:02:28] [1/2] It seems that for smaller rows this is very quick. Caching probably works in my favor, though, since the database is very small.
[04:02:28] [2/2] https://cdn.discordapp.com/attachments/1006789349498699827/1415185632199835669/image.png?ex=68c249d3&is=68c0f853&hm=c4421ae218c0949a908f34e576da9442c0992d34efa8e677a2a3bb3c1eaf6c8c&
[04:09:45] yeah it's not really going to be an apples-to-apples comparison if it's just sitting in an in-memory cache
[04:10:00] i've seen some of those full-table-scan queries take closer to 500ms-1000ms
[04:10:22] [1/2] > Bucket has a default time limit of 10s, which could be lowered to prevent inefficient table operations from happening and force the user to properly normalize their table
[04:10:22] [2/2] I honestly don't think this is going to have the effect you want though, it's just going to frustrate people
[04:10:59] I don't think it's really a viable approach
[04:53:42] If there's a patch providing support for repeated values in non-MySQL databases, would it be possible to have it merged upstream?
[05:15:17] [1/3] On an unrelated note, MW 1.45 errors are tracked now.
[05:15:17] [2/3] https://issue-tracker.miraheze.org/P562
[05:15:18] [3/3] Not sure when MW will stop having breaking changes, so this is not the best time to start extension testing. I'll record errors if I encounter any, though.
[07:29:53] it depends a lot on what the implementation looks like. i can't definitively say yes
[07:39:41] About 7 weeks before release.
[07:40:00] Our formal testing window runs from the branch cut
[07:40:40] Which should be around 28 October 2025
[08:59:32] @posix_memalign I feel like the addIdentifierQuotes error might actually be a regression, because SQLPlatform::qualifiedTableComponents clearly checks whether the identifier is quoted or not and unquotes it if it is
[10:50:35] @paladox db151 has gone sad
[10:56:23] also already put into maint mode
[10:56:35] ping me if y'all need anything, i'll go back to cuddling
[10:58:02] @blankeclair I restarted it but load is still sky high
[10:58:41] Like it's going down https://grafana.wikitide.net/d/W9MIkA7iz/wikitide-cluster?orgId=1&from=now-15m&to=now&timezone=browser&var-job=node&var-node=db151.fsslc.wtnet&var-port=9100&viewPanel=panel-281
[10:58:43] i mean ping me if you need me to do smth ^^;
[11:00:00] BlankEclair: is anything running?
[11:00:27] php-fpm processes on mwtask181 are eating all cpu
[11:00:53] also running are updateSpecialPages and CU purgeOldData
[11:01:23] BlankEclair: okay, erm, decide when you want to report
[11:01:29] I'm not sure why it's sad
[11:01:36] Load is high but it's been restarted
[11:01:44] If it crashes again you'll need paladox
[11:02:00] how often would it happen twice in a row?
[11:02:12] if not often, i'm personally fine w/ unmainting it
[11:02:21] this has happened like 3 times in the last few weeks but usually it didn't happen a second time
[11:02:34] (at least not immediately afterwards)
[11:02:52] safe to unmaint?
[11:03:04] Ye
[11:03:34] unmainting
[11:04:02] unmainted
[11:28:20] [1/2] can we somehow fix phorge closing tasks if there's a closing keyword in a commit that hasn't been merged to main yet?
[11:28:21] [2/2] https://cdn.discordapp.com/attachments/1006789349498699827/1415297841580212306/image.png?ex=68c2b254&is=68c160d4&hm=31ffa2bf2f4499d2886748a8af31ec3f279b8dfe363128a0dd17f821024218c9&
[11:28:27] that's quite annoying
[11:52:57] We can stop Phorge closing tasks altogether I think
[11:53:07] I don't think we can fix it not getting the branch right
[11:54:52] I don't think that's necessary as I intentionally tried to have phorge close the task; it never did that unexpectedly
[11:55:04] It just closed it too early but ig I'll manually close tasks again
[17:42:07] @posix_memalign Since the backport for the table bug has been merged, I've upgraded MinervaNeue, so it should be fixed now when using MobileFrontend. The bug is still present when using other skins, so I've created a backport for the core patch too
[22:16:29] [1/4] Just found Schrödinger's wiki
[22:16:29] [2/4] https://cdn.discordapp.com/attachments/1006789349498699827/1415460950634401933/image.png?ex=68c34a3c&is=68c1f8bc&hm=1d98eddd13a9471c72657f81b0e91f373248717fc200b467eebc0ed717e7d18d&
[22:16:29] [3/4] https://cdn.discordapp.com/attachments/1006789349498699827/1415460951079260310/image.png?ex=68c34a3c&is=68c1f8bc&hm=aa6731c839de106c8b7356bfb9b451b6d28d54cc7d80fce32d85a7484b455225&
[22:16:30] [4/4] https://cdn.discordapp.com/attachments/1006789349498699827/1415460951699886171/image.png?ex=68c34a3c&is=68c1f8bc&hm=36e522eae0deebacbebe33ef78d60f445d303d56051d644fffb680d7d95dec31&
[22:16:47] everything returns an error page except for the main page
[22:20:41] dcmultiversewiki, ghostmachinewiki and rippaversewiki are also partially dead
[23:32:43] to be sure, edit can be removed from member, right?
https://issue-tracker.miraheze.org/T14255
[23:33:51] The member group doesn't have the edit permission by default I think
[23:33:53] * and User do
[23:38:42] aaah
[23:38:45] this is why i ask
[23:40:14] > [11/09/2025 08:16] [1/4] Just found Schrödinger's wiki
[23:40:17] a wiki between life and death
[23:41:11] FYI the default core/managewiki perms are at
[23:41:17] fr
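To make the trade-offs from the 00:30–01:48 discussion concrete, here is a minimal SQL sketch of the three approaches weighed there: MySQL's MEMBER OF, a portable LIKE match, and a normalized child table. The bucket_books table, its columns, and the 'fantasy' value are hypothetical, chosen only to illustrate the comparison; none of this reflects Bucket's actual schema.

```sql
-- Hypothetical bucket table for pages with a repeatable "genre" field.
-- The two genre columns exist only so each approach can be shown side by side.
CREATE TABLE bucket_books (
    page_id    INT PRIMARY KEY,
    title      VARCHAR(255),
    genres     JSON,           -- approach 1: JSON array, e.g. '["fantasy", "horror"]'
    genres_csv VARCHAR(255)    -- approach 2: delimited string, e.g. ',fantasy,horror,'
);

-- (1) MySQL 8.0.17+ only: query the JSON array with MEMBER OF.
--     Simple and fast (a multi-valued index over the array is possible),
--     but the syntax does not exist in PostgreSQL or SQLite.
SELECT page_id, title
FROM bucket_books
WHERE 'fantasy' MEMBER OF (genres);

-- (2) Portable: match the delimited string with LIKE. Runs on any database,
--     but the leading '%' wildcard defeats indexes, so every lookup is a
--     full table scan.
SELECT page_id, title
FROM bucket_books
WHERE genres_csv LIKE '%,fantasy,%';

-- (3) Normalized: one child table per repeatable field (one-to-many).
--     Indexable and portable, but every repeatable field needs its own DDL,
--     which is harder to manage safely on MySQL because its DDL is not
--     transactional.
CREATE TABLE bucket_books_genres (
    page_id INT NOT NULL,
    genre   VARCHAR(64) NOT NULL
);
CREATE INDEX idx_books_genre ON bucket_books_genres (genre, page_id);

SELECT b.page_id, b.title
FROM bucket_books b
JOIN bucket_books_genres g ON g.page_id = b.page_id
WHERE g.genre = 'fantasy';
```

The child-table layout is the only one of the three that is both indexable and portable, which is why it keeps coming up in the discussion despite the extra DDL and state management it requires.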