[05:11:52] I'm curious how the templates used in wiki articles are rendered. The {{cite book}} tag, for example: is that rendered on the fly? Could someone tell me how exactly this feature works?
[05:45:31] fentanyl: whenever the wikitext is rendered into HTML, the contents of templates are looked up and expanded into more wikitext, and then it's all parsed into HTML
[05:46:31] legoktm[m]: So it's done on the fly every time someone sends a request to the article page, right?
[05:47:07] no, we cache the rendered HTML
[05:47:30] so most readers are actually just being served static HTML from the backend
[05:47:46] https://www.mediawiki.org/wiki/Manual:Parser is a good overview of the wikitext->HTML parsing process
[05:48:32] I see, I'll take a look, thanks for that link.
[05:48:50] https://www.mediawiki.org/wiki/Manual:MediaWiki_architecture#Execution_workflow_of_a_web_request
[15:46:25] is it possible to get profiles from a local dev wiki?
[15:47:32] profiles?
[15:50:12] like the inline PHP profile
[15:50:57] or, rather, to avoid an XY problem, how do I look at a performance hotspot on a local wiki
[15:51:22] (imagining that, as a bear of little brain, IDK what I'm doing with PHP at the best of times)
[15:58:13] Which PHP profile?
[15:58:21] The NewPP limit report in the HTML comment?
[16:01:08] yes, or some other alternative
[16:02:04] $wgEnableParserLimitReporting is true by default
[16:05:19] ok, and what about the full forceprofile one?
[16:06:23] i don't mind if I should be getting that from some other place, since I assume X-Wikimedia-Debug is a Wikimedia prod thing
[16:07:02] I think that's from XHProf
[16:07:09] Needs a PHP extension
[16:07:13] And then https://www.mediawiki.org/wiki/MediaWiki-Docker/Configuration_recipes/Profiling
[16:14:01] awesome
[16:17:33] ok, and the next question: say I have a hotspot that's hammering $pageTitle->getArticleID(), is there a way to warm up PageStore, or a more efficient way to get the IDs (assuming I have a big ol' list of Titles)?
[16:18:31] LinkBatch maybe?
[16:18:43] That's definitely for existence...
[16:20:26] Depending on what you're doing... A DB query might be easier
[16:20:30] Or tweaking the DB query you're doing...
[16:20:36] And pull the ID at that point too
[16:21:07] ParserOutput->addTemplate( ... ) for pages in an index
[16:21:44] (as in, a ProofreadPage index)
[16:23:10] the list of pages "in" an index returns Titles (and any Title in there might not exist)
[16:29:01] Other extensions do set up a LinkBatch and then do `$ids[] = $title->getArticleID();` to get a list of all the IDs
[16:31:14] yeah, I think LinkBatch is looking good
[16:32:21] And IIRC, it does some caching underneath and that good stuff
[18:02:59] Yes, looks like it's hugely faster
[18:03:11] Thanks for the tip, that's gonna help a lot
[18:19:58] sweet
[18:42:48] Reedy: re: Would it be better.... ?
[18:42:57] i assume the answer is "yes"?
[18:44:09] unless it's not rhetorical, then I have no idea, because I learned about LinkBatch from this nice dude on IRC today
[18:49:58] also, is building an array from an iterator considered a waste?
[18:50:25] foreach ( $this as $key => $pageTitle ) {
[18:50:25]     $pageTitles[] = $pageTitle;
[18:50:25] }
[18:50:58] foreach ( $this as $key => $pageTitle ) {
[18:50:58]     $batch->addObj( $pageTitle );
[18:50:58] }
[18:51:11] (i have no handle on how important that is)
[18:53:28] oh wait, duh
[18:53:33] i can just use the iterable
[18:54:47] too many languages where for ( x in arr ) would give you the key
[18:56:08] also, is there any plausible way this could/should be tested?
[20:55:51] inductiveload: Basically my thing was that there was already code doing the loops etc, so might as well let them do it
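
A minimal sketch of the template-expansion flow described at 05:45:31, driving the parser directly (e.g. from a maintenance script or hook). The page title and wikitext here are made up for illustration; real page views normally hit the ParserCache (the cached-HTML path mentioned at 05:47:07) rather than re-parsing.

    <?php
    // Assumes a bootstrapped MediaWiki environment, e.g. a maintenance script.
    use MediaWiki\MediaWikiServices;

    $parser = MediaWikiServices::getInstance()->getParser();

    // Hypothetical page context. {{Cite book}} is looked up and expanded
    // into more wikitext, and the whole result is parsed into HTML in one pass.
    $title = Title::newFromText( 'Sandbox' );
    $wikitext = '{{Cite book |title=Example |author=Someone}}';

    $output = $parser->parse( $wikitext, $title, ParserOptions::newFromAnon() );
    echo $output->getText();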
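For the forceprofile-style XHProf output discussed around 16:05–16:07, the linked Profiling recipe boils down to something like the following in LocalSettings.php. It assumes the tideways_xhprof PHP extension is installed; ProfilerXhprof and ProfilerOutputText are real MediaWiki profiler classes, but check the linked page for the current recommended setup.

    // LocalSettings.php: emit an XHProf-based text profile, roughly per the
    // MediaWiki-Docker profiling recipe linked at 16:07:13.
    if ( extension_loaded( 'tideways_xhprof' ) ) {
        $wgProfiler = [
            'class' => ProfilerXhprof::class,
            // Append a text report to the page output rather than hiding it.
            'output' => [ ProfilerOutputText::class ],
            'visible' => true,
        ];
    }

    // The NewPP limit report in the rendered HTML is separate, and on by default:
    $wgEnableParserLimitReporting = true;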
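And a sketch of the LinkBatch warm-up that resolved the getArticleID() hotspot, using names from the conversation ($pageTitles and $parserOutput are assumed to exist; the exact call site in ProofreadPage will differ). One batched query fills LinkCache, after which the per-title lookups stop hitting the database.

    use MediaWiki\MediaWikiServices;

    // $pageTitles: iterable of Title objects, e.g. the pages of a ProofreadPage index.
    $batch = MediaWikiServices::getInstance()->getLinkBatchFactory()->newLinkBatch();
    foreach ( $pageTitles as $pageTitle ) {
        $batch->addObj( $pageTitle );
    }
    // One query resolves existence, page IDs, latest revision IDs, etc.
    $batch->execute();

    foreach ( $pageTitles as $pageTitle ) {
        // Answered from LinkCache now; getArticleID() returns 0 for
        // titles that don't exist, which addTemplate() accepts.
        $parserOutput->addTemplate(
            $pageTitle,
            $pageTitle->getArticleID(),
            $pageTitle->getLatestRevID()
        );
    }

The "i can just use the iterable" realization at 18:53:33 is exactly the point of the first loop here: there's no need to build an intermediate $pageTitles[] array when the source object is already iterable, since addObj() can consume it directly.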