[08:45:22] I commented on the task
[10:44:05] Krinkle: AaronSchulz: I made a patch that may put up to 1K keys into WANObjectCache per second, can you check that it's sane? https://gerrit.wikimedia.org/r/c/mediawiki/core/+/935716
[10:44:35] Basically, it would write a timestamp to the cache every time someone views a page with a stale PC entry.
[10:45:36] As far as I can tell, that's about 200 per second normally, but it can go beyond 1000 per second after template edits.
[10:46:29] We are currently queueing jobs at that rate: https://grafana.wikimedia.org/goto/rjc-sl9Vk?orgId=1
[10:46:56] I want to use WANObjectCache to implement stampede protection, so we don't push so many redundant jobs.
[10:48:16] AaronSchulz: I also tried to add support for deduplication in the job itself, could you check whether I got it right? https://gerrit.wikimedia.org/r/c/mediawiki/core/+/935722
[13:29:42] duesen: job deduplication is powered by memcached as well inside MW PHP.
[13:30:06] Did you find it not working well enough?
[13:31:08] One perhaps lighter way to do this is through the lock manager, or add(). I think that's how we typically do this. You'd probably want the job to unlock() proactively.
[13:31:35] The patch says it's in the parser cache miss branch, but the calling code seems to be unconditional?
[13:31:57] On every page view?
[13:33:04] It's not clear to me what's missing in MediaWiki already. Didn't we already add Parsoid warming to the places where we generate parser output?
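
For context, a minimal sketch of the add()-based guard suggested at 13:31:08: BagOStuff::add() only succeeds if the key does not already exist, so only the first stale-view within the TTL window enqueues a job. The key name ('parsoid-prewarm-pending'), the 60-second window, and the job parameters are illustrative assumptions; the actual change 935716 uses WANObjectCache and may differ in detail.

```php
<?php
use MediaWiki\MediaWikiServices;

$services = MediaWikiServices::getInstance();
$stash = $services->getMainObjectStash(); // BagOStuff

// Hypothetical guard key: one per page.
$key = $stash->makeKey( 'parsoid-prewarm-pending', $pageId );

// add() is a no-op if the key already exists, so within the 60-second
// window only the first request for this page pushes a job.
if ( $stash->add( $key, wfTimestampNow(), 60 ) ) {
	$services->getJobQueueGroup()->lazyPush(
		new JobSpecification( 'parsoidCachePrewarm', [
			// Illustrative parameters, not the real job's exact schema.
			'pageId' => $pageId,
			'revId' => $revId,
		] )
	);
}
```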
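And a sketch of the queue-level deduplication mentioned at 13:29:42: when a job sets removeDuplicates, the queue hashes its getDeduplicationInfo() and drops pushes whose hash matches a still-pending job. The class name and the 'causeAction' parameter below are illustrative, not the contents of change 935722.

```php
<?php
// Illustrative job class, not the actual ParsoidCachePrewarmJob code.
class ExamplePrewarmJob extends Job implements GenericParameterJob {
	public function __construct( array $params ) {
		parent::__construct( 'examplePrewarm', $params );
		// Allow the queue to drop pushes that duplicate a pending job.
		$this->removeDuplicates = true;
	}

	public function getDeduplicationInfo() {
		$info = parent::getDeduplicationInfo();
		// Strip volatile parameters (assumed here to be 'causeAction') so
		// that repeated pushes for the same page hash to the same key.
		unset( $info['params']['causeAction'] );
		return $info;
	}

	public function run() {
		// ... regenerate and store the parser output ...
		return true;
	}
}
```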