[00:58:20] I wonder if there's an actually good reason why Cargo doesn't use prepared statements. Escaping stuff is total hell, and we seem to have encountered a situation where it just doesn't work, because {{#replace: {{{1}}} | ' | \' }} simply has no effect when {{{1}}} comes from another Cargo call or something
[00:58:39] Probably because the parser already turned the text into some kind of internal representation, so #replace doesn't work on it? I don't know.
[01:01:01] I think it's related to format = template, have to do some tests to verify...
[01:11:09] yeah, confirmed
[01:16:06] TL;DR: if you use {{#cargo_query ... | format = template }}, then the arguments passed to the template have something weird going on with them, such that any ' and " in them cannot be replaced with #replace. That thwarts attempts to escape them if the argument value will be used for something like where = blah = "{{{1}}}" down the line
[12:35:24] hi, I wonder how my MediaWiki handles cookies. I can't find any cookie file on my server disk
[12:39:42] cookies are stored in the browser, not on the server
[12:40:39] taavi: that's not right, the webserver has to store it too
[12:40:56] or PHP
[12:42:04] at least the session id
[12:45:33] if you are asking how MediaWiki stores session data: https://www.mediawiki.org/wiki/Manual:$wgSessionCacheType
[12:47:06] indeed
[12:49:31] ok, seems I have to switch from CACHE_ACCEL to CACHE_DB to see the sessions...
[12:53:02] how do i disable the captcha that appears when creating a new account?
[13:35:39] taavi: how do I delete old sessions?
[13:50:21] irgendwer4711: what type of session, and why?
[13:51:19] php user sessions, to force a relogin
[13:51:34] it seems an apache restart will do
[13:56:57] irgendwer4711: if the cache is configured to be in the DB, restarting apache won't clear sessions. But you can try truncating the objectcache table: https://www.mediawiki.org/wiki/Manual:Objectcache_table
[13:58:16] easiest way to log everyone out is to change https://www.mediawiki.org/wiki/Manual:$wgAuthenticationTokenVersion
[13:58:30] Setting a new $wgSessionSecret should also work: https://www.mediawiki.org/wiki/Manual:$wgSessionSecret
[13:59:43] Those settings may affect different kinds of sessions: "session cookies" vs "remember me on this browser"
[14:01:29] Vulpix: it was configured as CACHE_ACCEL
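[editor's note: the Cargo escaping problem at the top of the log is exactly what parameterized query building avoids: instead of splicing user-supplied text into a where = ... string and trying to escape quotes in wikitext, the value is passed separately and quoted by the database layer. A minimal sketch against MediaWiki's own IDatabase API, not Cargo's actual internals; the table name cargo__Items and the variable $userInput are made up for illustration, and wfGetDB() is deprecated in newer releases in favour of injected connection providers:]

    <?php
    // Sketch: let the DB layer do the quoting instead of #replace-based
    // escaping in wikitext. Values in the $conds array are escaped by
    // MediaWiki automatically, so quotes in $userInput cannot break the query.
    $userInput = "O'Brien";            // contains a quote that defeats naive escaping
    $dbr = wfGetDB( DB_REPLICA );      // deprecated, but the shortest route to a replica handle
    $res = $dbr->select(
        'cargo__Items',                // hypothetical Cargo data table
        [ '_pageName', 'Price' ],      // fields to fetch
        [ 'Name' => $userInput ],      // condition value is quoted for us
        __METHOD__
    );
    foreach ( $res as $row ) {
        // use $row->_pageName, $row->Price, ...
    }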
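[editor's note: pulling the session answers together: where session data lives is controlled by $wgSessionCacheType, and either of the two linked settings forces a global re-login. A LocalSettings.php sketch; the concrete values are placeholders:]

    <?php
    // LocalSettings.php (sketch; values are placeholders)

    // Session data location. CACHE_ACCEL keeps it in APCu/shared memory;
    // CACHE_DB keeps it in the objectcache table, where you can inspect it.
    $wgSessionCacheType = CACHE_DB;

    // Changing this string invalidates every existing login token,
    // logging all users out everywhere:
    $wgAuthenticationTokenVersion = '2023-11-07';

    // Rotating the session secret has a similar effect:
    // $wgSessionSecret = 'a-new-long-random-string';

    // With CACHE_DB, old session rows can also be cleared directly in SQL:
    //   TRUNCATE TABLE objectcache;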
[16:40:39] duesen: I have started a discussion in #mediawiki-parsoid about poor performance of the parsoid warming job in 1.40. subbu wanted you pinged.
[16:43:25] RhinosF1, I am chatting with him right now about this in the meeting. :-) But, ya, filing a phab task will help us respond there.
[16:43:57] I've asked paladox to do one as they have more access to investigate
[16:44:01] They are eating dinner
[16:44:07] * RhinosF1 is on the bus from work
[16:44:47] sounds good!
[17:27:03] RhinosF1: Can you see the "causeAction" field of the job specs? What does it say? Also, how many parsoidCachePrewarm jobs do you see? And how many edits do you have per minute? Which site are we talking about?
[17:28:16] duesen: paladox; we've seen around 30k stuck in the queue at peak, and it's sustained that peak for a long time; no idea; Miraheze
[17:28:37] Our job queue having more than a few hundred jobs in it, sustained for any length of time, is unusual
[17:28:50] Also, https://gerrit.wikimedia.org/r/c/mediawiki/core/+/971543 won't hurt and might help, but it's not clear. It will mostly help make things less terrible when they are already bad (deduplication is most effective when there is a big backlog of jobs, which of course you don't want to have in the first place)
[17:29:26] I appreciate it will generate a lot of jobs, but there's concern it's not processing them quickly enough, as there's a huge backlog and it's rising a lot of the time
[17:29:59] So... each edit will trigger one job. And a page view may trigger a job when it hits a page that was invalidated because a template on that page was edited.
[17:31:05] Getting swamped by 30k jobs would mean you had 30k invalidated pages that got visited in a short time... Well, without deduplication, it just means you had 30k visits to an invalidated page before it had a chance to render.
[17:31:12] Does that sound realistic?
[17:33:27] Hm. On-view warming is triggered by parser cache misses of the "normal" parser cache. What's your parser cache TTL? Do you know how many cache misses you are seeing?
[17:34:07] duesen: to get 30k jobs generated in an hour from page views, no. But we don't have 30k unique pages visited. I suspect that 30k would have become a lot higher, because it didn't look like it was processing anywhere near quickly enough.
[17:34:12] TTL is 10 days
[17:34:23] Cache miss rate, no; can we get it from somewhere?
[17:34:40] Re sustained peak: if your main page sees a lot of visits and is slow to render, or your job processing was busy doing something else, I can see 30k redundant parse jobs piling up...
[17:34:58] Yes, I agree
[17:35:05] But it shouldn't be slow to render
[17:35:40] oh oops, late for a meeting...
[17:36:04] My concern is the rate they are processing; the cache is never going to fill
[17:36:13] can you copy this conversation to a ticket?
[17:36:28] the pre-warming is not going over all pages. It's triggered by edits.
[17:40:25] duesen fyi: https://phabricator.wikimedia.org/T350600 has been filed .. so, at some point, summarize findings there in case others also run into this.
[17:57:44] subbu: According to what paladox posted on the ticket, it's taking two minutes to render a page. That, together with the lack of deduplication (and no stampede protection yet), would explain the pileup.
[17:57:52] But I have no idea why rendering would be that slow.
[17:59:58] Rendering that slow is a big concern
[18:00:05] We don't have PoolCounter either
[18:00:10] At least as far as I recall
[18:00:13] paladox: ^
[18:04:54] parsoid isn't backed by PoolCounter till 1.42 - it has only recently landed in master. And yes, 2 mins is too long ... curious what is happening there.
[18:14:43] "without deduplication" means that, for a busy page generating lots of duplicate parsoidCachePrewarm jobs, once the first job parses and populates the cache, the next duplicate job will shortcut when it sees the cache has been populated? Or will it attempt to overwrite the cache by parsing the page again?
[18:19:58] duesen: does it matter, with that change, that the redisJobChronService purges the root jobs? I had deployed those two, and it doesn't seem to keep the backlog down or help with processing faster.
[18:20:04] https://www.irccloud.com/pastebin/NI5u0wtr/
[18:21:21] paladox: No idea, the finer points of jobqueue management are lost on me. And I don't know much about Redis either. Aaron may know.
[18:21:30] ah ok
[18:21:57] maybe Krinkle as well
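[editor's note: on the deduplication mechanism discussed above: a MediaWiki job class opts in by setting the removeDuplicates flag, and the queue can then collapse pushes of jobs whose getDeduplicationInfo() matches, instead of running each one. The sketch below is a simplified, made-up job class, not the actual parsoidCachePrewarm implementation or the exact content of the linked Gerrit change; the requestId parameter is hypothetical:]

    <?php
    // Sketch of MediaWiki job deduplication (made-up class name; the real
    // parsoidCachePrewarm job differs in detail).
    class PrewarmLikeJob extends Job {
        public function __construct( Title $title, array $params ) {
            parent::__construct( 'prewarmLikeJob', $title, $params );
            // Opt in to deduplication: jobs that are identical according
            // to getDeduplicationInfo() can be collapsed into one.
            $this->removeDuplicates = true;
        }

        public function getDeduplicationInfo() {
            $info = parent::getDeduplicationInfo();
            // Drop parameters that vary between otherwise-identical jobs
            // (hypothetical field), so that they actually compare equal:
            unset( $info['params']['requestId'] );
            return $info;
        }

        public function run() {
            // Re-render the page and write the result into the parser cache.
            // A cheap pre-check could bail out here if the cache entry is
            // already fresh, which is the "shortcut" asked about at 18:14.
            return true;
        }
    }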