[14:47:40] Not sure when I or anyone else will get to it, but I just created T294767 to track the annoyance of finding what Developer account has been attached to a SUL account.
[14:47:40] T294767: User should be told name of existing Developer account when SUL is already in use - https://phabricator.wikimedia.org/T294767
[14:53:19] couldn't you check that on phab?
[14:53:32] hmm, that probably solves the opposite problem
[14:53:41] finding out the wiki account of a developer
[18:48:52] Hi all, does anybody here have experience with PyInstaller and the "Could not find suitable TLS CA certificate bundle" error that it gives from time to time?
[18:49:10] Doing what?
[18:50:19] https://irc.873gear.com/uploads/1104964f64b0f438/afbeelding.png - this stupid error while trying to connect to the Wikidata API
[18:50:58] (green bit is a username that I censored)
[18:51:57] What country are you from? Also, what OS?
[18:52:03] Daniuu: ^
[18:52:27] RhinosF1: Windows
[18:52:53] Daniuu: version?
[18:53:23] It's Windows 10 with Python 3.8.8 and PyInstaller 4.6
[18:54:00] Win 10 would be unusual if it's LE issues
[18:54:25] Can you try updating certifi, Daniuu?
[18:54:33] Or try reinstalling any PyPI packages
[18:54:47] I'll try reinstalling requests and OAuth requests
[18:56:19] Make sure sub-packages are done too
[18:56:29] I think certifi might be at fault
[18:58:38] It is surely possible that it's an error in some package (I just had to reinstall PyInstaller due to a full reset of my Python distribution)
[19:01:26] Well, I'm guessing the LE expiry if you haven't used it in a few weeks
[19:01:34] But it shouldn't affect you
[19:01:35] RhinosF1: that did not help :)
[19:01:51] As in, reinstalling did not solve the issue
[19:01:53] :(
[19:07:18] Hi, is there a way to get the content of a page via the MW API while also executing all parser functions when the wikitext (page content) is returned?
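A side note on the TLS error discussed above: the usual cause in a frozen PyInstaller app is that requests/certifi look for `cacert.pem` at a path baked in at build time. A minimal sketch of one common workaround, assuming certifi's `cacert.pem` was collected as a data file into the bundle (the exact `certifi/cacert.pem` location under `sys._MEIPASS` depends on the build configuration, so treat that path as an assumption):

```python
import os
import ssl
import sys


def configure_ca_bundle():
    """Point requests at a usable CA bundle when running frozen.

    In a PyInstaller one-file build, bundled data files are unpacked to the
    temporary directory sys._MEIPASS. Setting REQUESTS_CA_BUNDLE makes
    requests use the bundled cacert.pem instead of a build-time path.
    """
    if getattr(sys, "frozen", False):
        # Assumed layout: cacert.pem collected under certifi/ in the bundle.
        bundle = os.path.join(sys._MEIPASS, "certifi", "cacert.pem")
    else:
        # Normal interpreter run: fall back to the platform default, if any.
        bundle = ssl.get_default_verify_paths().cafile or ""
    if bundle:
        os.environ["REQUESTS_CA_BUNDLE"] = bundle
    return bundle
```

Calling `configure_ca_bundle()` once at startup, before the first HTTPS request, is typically enough; requests honours the `REQUESTS_CA_BUNDLE` environment variable for certificate verification.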
[19:07:49] For example, assuming the page contains {{#invoke:}}, that should be executed and the content replaced when the page content is returned.
[19:08:35] There's an expand templates API, I think
[19:09:06] Yes, but that will work only for templates and not parser functions, I think
[19:09:20] In the case of the previous discussion, it's also an issue that this app is distributed to other nlwiki VRT members on request :(
[19:09:52] RhinosF1: but if you could demonstrate with a parser function, that would be great!
[19:10:17] xSavitar: I have no idea if it works for parser functions
[19:11:17] Okay!
[19:21:06] it should expand most but not all parser functions
[19:21:26] e.g. if I put "{{#invoke: Example | hello }}" in https://en.wikipedia.org/wiki/Special:ExpandTemplates it shows "Hello world!"
[19:22:02] depending on what you're trying to do, I'd recommend using Parsoid HTML
[19:22:50] xSavitar: ^
[19:25:28] legoktm: Thank you very much! I just tried that and it worked using the special page. Let me see if that works using the MW API's expandtemplates action as well.
[19:26:15] I just want to get a page's content (as wikitext) using MediaWiki's API but with all parser functions executed, so I can use it for archiving, for example
[19:27:00] Archiving a page with parser functions will not give a snapshot of the page on the day of archiving; it will archive the parser functions themselves, which is pointless. I want the wikitext version.
[19:30:51] expandtemplates is not perfect because there are some things that can't be expanded, but it's close
[19:33:27] You're right. I think this solution you've proposed will do. Using the special page will work, but the API at the moment won't. I'll just stick with the special page approach for now while digging more to find an ideal solution.
[19:33:39] Thank you very much legoktm! <3
[19:36:41] xSavitar: wait, what's wrong with the API?
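The API route discussed above is MediaWiki's `action=expandtemplates` module, which mirrors Special:ExpandTemplates: with `prop=wikitext` it returns the source with templates and most parser functions (including `{{#invoke:}}`) resolved. A minimal sketch of building such a request against the English Wikipedia endpoint (the endpoint choice is illustrative):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"


def expandtemplates_url(wikitext, title=None):
    # action=expandtemplates with prop=wikitext returns the expanded source:
    # templates and most parser functions are resolved before it comes back.
    params = {
        "action": "expandtemplates",
        "format": "json",
        "prop": "wikitext",
        "text": wikitext,
    }
    if title:
        # Optional: gives context for page-dependent magic words like {{PAGENAME}}.
        params["title"] = title
    return API + "?" + urlencode(params)
```

For example, `expandtemplates_url("{{#invoke: Example | hello }}")` yields a URL whose JSON response carries the expanded wikitext under `expandtemplates.wikitext`.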
[19:37:52] legoktm: To give more context to the problem, I'm trying to see how this page: https://meta.wikimedia.org/wiki/Wikimedia_movement_affiliates/Affiliates_Status_Report can be archived quarterly.
[19:38:18] But archiving the page each quarter means I need to expand all {{#invoke:}} per quarter so I can have the wikitext version of the page.
[19:38:39] If the page is just archived as is, without expansion, I won't be able to have a snapshot of the page each quarter
[19:38:54] Can you subst: invokes?
[19:39:15] So if I copy the wikitext of the page and put it in the special page you proposed, I can get the parser functions expanded.
[19:39:47] There's an API module for this, action=expandtemplates I think
[19:40:38] And yeah, you can use subst: with #invoke.
[19:40:42] You mean `{{subst:#invoke:}}`?
[19:40:47] Yep
[19:40:58] Should just give you the wikitext behind the module
[19:41:46] And for expandtemplates we just pass the title to the API module, I guess? Let me try that, but I'll use subst: invokes first
[19:46:55] legoktm: I think I don't need the API at this point if subst: is used. When I do `mw.log(mw.title.makeTitle('', 'Wikimedia movement affiliates/Affiliates Status Report'):getContent())` in a module's console to get the page content, it's the correct one now
[19:47:09] All the invokes have been expanded. Thank you very much!
[19:47:13] Woot
[19:48:05] Ty for explaining the tools legoktm
[19:48:13] And glad you found what you wanted xSavitar
[19:49:50] RhinosF1: Thank you very much! Your suggestion was correct, but I needed that key ingredient (`subst:`) in front of the parser function :)
[19:50:18] :)
[23:40:48] RhinosF1: just for the log: I was able to fix the issue using this thread: https://stackoverflow.com/questions/17158529/fixing-ssl-certificate-error-in-exe-compiled-with-py2exe-or-pyinstaller
[23:40:48] (bottom response did the job)
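The `subst:` trick above works because saving a page containing `{{subst:#invoke:...}}` makes MediaWiki expand the module call into plain wikitext at save time. To prepare such a save programmatically, a hypothetical helper (not part of any MediaWiki library) could rewrite the invokes in a page's source; note this naive version assumes the exact spelling `{{#invoke:` and ignores whitespace variants like `{{ #invoke:`:

```python
def substify_invokes(wikitext):
    # Hypothetical helper: prefix each {{#invoke: with subst: so the next save
    # of the page expands every module call into plain wikitext.
    # Naive string match; does not handle variants such as "{{ #invoke:".
    return wikitext.replace("{{#invoke:", "{{subst:#invoke:")
```

For example, `substify_invokes("{{#invoke: Example | hello }}")` returns `"{{subst:#invoke: Example | hello }}"`, which the next edit would replace with the module's output.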