[10:50:10] kostajh: thanks for the review! can you also look at https://gerrit.wikimedia.org/r/c/mediawiki/extensions/OATHAuth/+/988413/ for the same bug?
[16:10:18] hello, I'm trying to set up MediaWiki 1.40 on Apache 2.4
[16:10:57] I can't successfully run the setup, due to a runtime error.
[16:11:05] Which is? :)
[16:11:15] Reedy: Getting to that :)
[16:11:54] MediaWiki fails to find a writable tmp directory
[16:12:08] No writable temporary directory could be found...
[16:12:35] Did I miss something in httpd.conf?
[16:12:48] orrrrrr
[16:13:13] the default is '/tmp' and it can't be accessed by the 'http' user?
[16:13:28] butttt
[16:13:30] More likely something in your PHP config (and/or the defaults don't work quite right)
[16:14:47] !phpinfo
[16:14:48] create a file called info.php in your wiki's installation directory. It should have the following content: <?php phpinfo(); ?> Visit it with your browser to see information about your PHP installation. If it doesn't work, check file permissions. Remove the file when done, it makes life easier for attackers.
[16:20:20] Reedy: I've got it.
[16:21:05] Reedy: sys_tem_dir? no value
[16:21:22] s/_tem_/_temp_/
[16:53:35] fffff yeah!
[16:53:55] It was actually a php_admin issue
[17:24:13] https://patientuser.com/deep/wiki/Annotation:HLA-B shows the text I expect, but SELECT p.page_title, t.old_text FROM page AS p, text AS t WHERE t.old_id = p.page_latest AND p.page_title = 'Annotation:HLA-B'; shows different text. can anyone help me understand why?
[17:31:50] bypassing the revision table?
[17:33:07] that query seemed to work well until a recent DB hiccup. Can you point me at a more correct SQL query to retrieve the latest page text?
[17:33:50] Probably more by luck than judgement
[17:34:56] > To retrieve the text of an article, MediaWiki first searches for page_title in the page table. Then, page_latest is used to search the revision table for rev_id, and rev_text_id is obtained in the process. The value obtained for rev_text_id is used to search for old_id in the text table to retrieve the text. When a page is deleted, the revisions are moved to the archive table.
[17:43:02] (plus additional complications on wikis where the text table is hosted externally or page content is stored elsewhere via other means)
[17:49:55] I'm still poking at it, but I expected this to be right:
[17:49:56] select p.page_title, t.old_text from page as p, revision as r, text as t where p.page_title = 'Annotation:HLA-B' and p.page_id = r.rev_page and t.old_id = r.rev_text_id
[17:49:56] however it seems rev_text_id is deprecated, and now there is some path via either slots or content? can anyone get me closer?
[17:52:54] from slots, slot_revision_id is the revision id. slot_content_id should join with content (content_id). And content_address will give you the id for the text table
[18:27:30] Vulpix: I've got a larger query that's failing to return anything, and this seems to isolate why.
[18:27:31] select * from content, text where content.content_address = text.old_id;
[18:27:31] returns nothing. Any thoughts on why?
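A hedged aside on why that join comes back empty, assuming the default local text-table backend: content_address holds a blob address string (e.g. 'tt:<old_id>') rather than a bare integer, so a straight equality comparison against text.old_id never matches. A quick way to confirm, in SQLite syntax against the default schema:

    -- List the distinct address prefixes; on a wiki using only local text
    -- storage this should return just 'tt:' (illustrative check, not from the log).
    SELECT DISTINCT SUBSTR(content_address, 1, 3) FROM content;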
[18:28:37] ah, I see there is a "tt:" prefix
[18:29:01] which seems to preclude a simple join
[18:29:32] so I think this would be right except for that
[18:29:34] select page.page_title, text.old_text
[18:29:34] from page, revision, slots, content, text
[18:29:35] where page.page_id = revision.rev_page
[18:29:35] and revision.rev_id = slots.slot_revision_id
[18:29:36] and slots.slot_content_id = content.content_id
[18:29:36] and content.content_address = text.old_id
[18:30:22] I find it hard to believe I ought to be doing string manipulation for such core functionality. Am I missing something in how this content table should be used?
[18:31:34] What are you actually trying to do?
[18:31:58] There are plenty of wrapper classes (or APIs); you (probably) shouldn't need to do SQL queries yourself
[18:31:59] I want to read wikitext content for a given page from SQL
[18:32:19] I need to walk all pages
[18:32:57] so APIs would trigger N queries and would be drastically less efficient
[18:33:20] Many API calls can give you multiple results
[18:34:13] I'm on the server. I have direct access to the SQLite file. the APIs are just overhead.
[18:34:50] so is potentially maintaining a query through schema changes :)
[18:34:57] I accept that
[18:35:46] I have very few queries I care about, it's really just title --> wikitext
[18:39:45] SUBSTRING(content_address, 1, 3)
[18:42:11] select page.page_title, text.old_text
[18:42:12] from page, revision, slots, content, text
[18:42:12] where page.page_id = revision.rev_page
[18:42:13] and revision.rev_id = slots.slot_revision_id
[18:42:13] and slots.slot_content_id = content.content_id
[18:42:14] and SUBSTRING(content_address, 4) = text.old_id
[18:42:15] does it
[18:42:26] many thanks to Reedy and Vulpix
[18:46:04] and moonmoon
[19:36:50] that will break in case of external storage
[19:37:03] but I don't think you're going to use it on that server
[19:37:08] I imagine if they're using sqlite...
[19:37:09] :)
[19:37:27] of course, with ES they couldn't do cross-server joins anyway
[19:58:29] yep, that was the reason I thought that ;)
[20:05:01] You also won't get text if you compress revisions, which may be more common than setting up ES
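For reference, a minimal sketch of that final query in consolidated form (SQLite syntax; table and column names are from the default MediaWiki schema). It carries the caveats raised above: the 'tt:' stripping only works for local, uncompressed text storage, and it assumes a single main slot per revision (multi-slot wikis would also need slot_roles). One refinement: joining on page_latest restricts the result to the current revision's text, whereas joining revision.rev_page to page.page_id returns one row per revision of the page.

    -- title --> wikitext for the latest revision of every page,
    -- assuming local 'tt:'-addressed, uncompressed text storage.
    SELECT p.page_title, t.old_text
    FROM page AS p
    JOIN slots   AS s ON s.slot_revision_id = p.page_latest
    JOIN content AS c ON c.content_id = s.slot_content_id
    JOIN text    AS t ON t.old_id = CAST(SUBSTR(c.content_address, 4) AS INTEGER)
    WHERE c.content_address LIKE 'tt:%';  -- skip rows stored elsewhere (e.g. external storage)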