[01:52:46] /stat@combot
[09:38:24] Hello all! Another update about the hackathon registration & scholarship application opening: we're still on it. We're making sure to use a nice, open-source tool for the registration form, so it will take us a bit more time to release it; hopefully we will be ready to go tomorrow (December 13). As soon as we open, I will send announcements to all the usual channels (wiki, email, here on Telegram).
[10:48:50] Nice 👍 Update @Auregann (re @Auregann: Hello all! Another update about the hackathon registration & scholarship application opening: we're still on it. We're making su...)
[13:38:46] Hello! I'm trying to import this by directly copying the code, but it seems that something is missing: https://vega.github.io/vega/examples/pi-monte-carlo/. Is there anyone here with Graph: extension knowledge?
[13:45:30] Might be worth linking whatever page you're testing it on.
[13:51:02] I just copied the code there to Basque Wikipedia and added headings... no good result: https://eu.wikipedia.org/wiki/Txantiloi:Graph:Pi_Montecarlo
[14:06:24] How are you using it? As currently you've just dumped a load of JSON into a page in the Template namespace.
[14:09:19] Sure, exactly as the other Graph: extension graphs (https://eu.wikipedia.org/wiki/Txantiloi:Graph:Saldutako_margolanik_garestienak)
[14:10:24] It's not exactly the same, is it?
[14:10:33] https://eu.wikipedia.org/w/index.php?title=Txantiloi:Graph:Saldutako_margolanik_garestienak&action=edit
[14:10:43] first line
[14:10:43] {{#tag:graph|{{:{{FULLPAGENAME}} }}
''''''{{#tag:syntaxhighlight|
[14:10:52] https://eu.wikipedia.org/w/index.php?title=Txantiloi:Graph:Pi_Montecarlo&action=edit
[14:10:58] first line
[14:10:59] {
[14:11:47] and of course, the paste of the first lines seemingly didn't make it through to Telegram...
[14:12:35] but the "working" example has wrapper code before and after the JSON
[14:13:52] I have added the first and last line, but that doesn't work
[14:14:58] I would imagine you want to fix the title at the bottom of the page too
[14:15:19] yes
[14:17:30] Now if you look at your browser console... there are errors: https://tools-static.wmflabs.org/bridgebot/c64e72ba/file_43886.jpg
[14:18:36] so it doesn't seem possible
[14:22:33] It looks like some syntax has got messed up
[14:23:26] `"expr": "datum.data \u003C= num_points"` should be `"expr": "datum.data <= num_points"`
[14:23:27] etc
[14:31:18] I just copied the original code again, and it broke... I don't understand where the problem lies
[14:36:05] Again, there are numerous errors in the browser console
[14:36:12] > PARSE DATA FAILED: random_data Error: "sequence" is not a valid transformation
[14:36:30] > PARSE DATA FAILED: pi_estimates Error: "window" is not a valid transformation
[14:37:24] Then I understand that our Vega version is not compatible with this one
[14:38:43] That would be the simplest answer, yeah
[14:38:54] Some old/out-of-date libraries in Graph (which isn't very well maintained)
[14:39:46] vega2 looks to be 2.6.3, vega1 looks to be 1.5.3
[14:50:21] https://phabricator.wikimedia.org/T223026
[15:09:32] I want to list all articles in Kannada Wikipedia, in the User namespace, having more than 2000 bytes. Which query should I use in PetScan?
[15:22:06] If they're in the User namespace, they're pages, not articles. I don't see a way to do that with PetScan. Any more criteria you can add for PetScan? (re @Pavanaja: I want to list all articles in Kannada Wikipedia, in the User namespace, having bytes more than 2000. Which query I should use i...)
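An aside on the Vega errors above: the "sequence" and "window" transforms were introduced in Vega 3, so a spec that uses them fails under the Vega 2.6.3 bundled with the Graph extension. A minimal Python sketch of checking a spec for transforms Vega 2 would reject; the whitelist here is a partial reconstruction from the Vega 2 docs and the spec fragment is illustrative, not the actual page content. It also shows that the `\u003C` escape is just plain JSON for `<`:

```python
import json

# Partial list of transform types supported by Vega 2.x (an assumption
# based on the Vega 2 docs; "sequence" and "window" arrived in Vega 3).
VEGA2_TRANSFORMS = {
    "aggregate", "bin", "countpattern", "cross", "facet", "filter", "fold",
    "force", "formula", "geo", "geopath", "hierarchy", "impute", "lookup",
    "pie", "rank", "sort", "stack", "treeify", "voronoi", "wordcloud",
}

def unsupported_transforms(spec_text):
    """Return transform types in a Vega spec that the whitelist lacks."""
    spec = json.loads(spec_text)
    found = set()
    for dataset in spec.get("data", []):
        for transform in dataset.get("transform", []):
            found.add(transform.get("type"))
    return sorted(found - VEGA2_TRANSFORMS)

# A fragment shaped like the Pi Monte Carlo example's data section.
spec = (
    '{"data": [{"name": "random_data", "transform": [{"type": "sequence"}]},'
    '{"name": "pi_estimates", "transform": [{"type": "window"}]}]}'
)
print(unsupported_transforms(spec))  # ['sequence', 'window']

# In strict JSON, "\u003C" decodes to "<", so that escape alone is harmless:
assert json.loads('"datum.data \\u003C= num_points"') == "datum.data <= num_points"
```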
[15:22:25] or write a Quarry query
[15:22:40] Quarry will also do
[15:23:45] I want to assign these pages to some students to improve them and make them live. There are many articles, maybe 1,000 or more, in User pages which can be made live with little effort
[15:24:25] oh, you mean subpage drafts and sandboxes?
[15:24:35] Yes (re @jeremy_b: oh you mean subpage drafts and sandbox?)
[15:24:41] you should consult those users first
[15:25:16] idk about on kn, but on most wikis there is a bit of ownership over your own userspace
[15:25:39] No such issue
[15:39:01] These were by students who were part of Wikipedia Education Programs. They have since left their colleges, and their articles are lying in user sandboxes. Some of them are almost ready and can be made live with a little editing. Wanted to make use of them
[15:50:23] You can still leave them a talk page message first, telling them the plan, before you do it
[15:55:56] Will do. But they are not logging in at all. We will post in the VP and go ahead. But how do I list those articles? (re @jeremy_b: you can still leave them a talk page message first telling them the plan before you do it)
[15:57:00] not just the VP. individual users' talk pages.
[15:57:09] IMO
[15:57:24] I gave 2 options (re @Pavanaja: Will do. But they are not logging in at all. We will post in VP and go ahead. But how do I list those articles?)
[15:57:40] any other commonalities? a category?
[16:07:45] I got the Quarry query:
[16:07:45] select page_title from page where page_len > '6144' and page_namespace = 2 and page_is_redirect = 0;
[16:08:45] I have used 6144, which is 2048 * 3. That is because I want to list articles having more than 2k characters. For Indian languages, in UTF-8, it takes 3 bytes per char
[16:19:57] why do you wrap 6144 in quotes?
[16:20:10] anyway, ok, now what? did you run it?
[17:24:02] Yes. I got 4997 articles (re @jeremy_b: anyway ok now what? did you run it?)
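A note on the arithmetic behind that threshold: Kannada letters sit in the U+0C80–U+0CFF block, which UTF-8 encodes as 3 bytes per code point, so `page_len > 6144` roughly means "more than 2048 characters" (`page_len` counts bytes). The quoted `'6144'` still works because MySQL coerces the string to a number for the comparison, but an unquoted integer is the cleaner form. A quick check in Python:

```python
# Kannada code points are in U+0C80-U+0CFF, 3 bytes each in UTF-8.
word = "ಕನ್ನಡ"  # "Kannada": 5 code points
print(len(word), len(word.encode("utf-8")))  # 5 15

# So a ~2048-character page is roughly 2048 * 3 = 6144 bytes,
# the page_len threshold used in the Quarry query above.
threshold_bytes = 2048 * 3
print(threshold_bytes)  # 6144
```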