[00:26:12] [telegram] okay it's UTC Friday now! I can start hacking? [02:03:09] I believe this is the pre-hacking period (like pre-gaming, but with a positive result, hopefully) [02:28:08] [telegram] https://play.workadventu.re/@/wmf/hackathon2022/map is the link for the hackathon virtual world. See https://www.mediawiki.org/wiki/Wikimedia_Hackathon_2022/How_to for more details. [02:30:00] [telegram] Oh boy, Stardew Valley! [02:30:37] [telegram] WASD navigation for the win :) [02:32:44] [telegram] Thanks! Is it possible for people to join a session directly from a Jitsi link, in case they have trouble with WorkAdventure for some reason, and if so, can we have the links added somewhere? :) (re @bd808: https://play.workadventu.re/@/wmf/hackathon2022/map is the link for the hackathon virtual world. See https://www.mediawiki.org/w...) [02:34:19] [telegram] @Auregann yes, @haleywmf is going to publish direct room links soon. We don't want anyone to be locked out of sessions if they have issues with the game map. [03:12:26] [telegram] Direct links to Jitsi rooms: https://www.mediawiki.org/wiki/Wikimedia_Hackathon_2022/Schedule/Jitsi [03:49:34] [telegram] I am in my assigned Jitsi room for the presentation. Do I need to be anywhere else? [03:51:46] [telegram] Nope! [03:54:29] [telegram] And, will sessions be recorded? [03:58:47] [telegram] No, for legal reasons we can't record. But you can upload your slides! [04:09:06] has harej's talk already started? I think I'm in the right room... [04:09:24] [telegram] it's running (re @wmtelegram_bot: [irc] has harej's talk already started? I think I'm in the right room...) 
[04:09:39] [telegram] in "wikibase and wikidata" [04:10:00] oh there we go, I had to reload to see his screen [04:10:37] [telegram] once you're in the room you have to hit space to actually get the jitsi open (re @wmtelegram_bot: [irc] oh there we go, I had to reload to see his screen) [04:10:52] I joined the jitsi directly, haven't hopped into workadventure yet [04:15:50] [telegram] Yay, great presentation, James! If you choose to publish your slides, let us know! [05:04:29] [telegram] Is that documented somewhere? As we have recorded things in the past, I wonder what is new/different now. (re @haleywmf: No, for legal reasons we can't record. But you can upload your slides!) [05:31:08] [telegram] Discuss/ask anything about Phabricator now in the Infrastructure Tools Room! (Or join the same session at 16:00 UTC later today.) [06:54:18] [telegram] I'm just getting a 500 error (Firefox on Ubuntu). (re @bd808: https://play.workadventu.re/@/wmf/hackathon2022/map is the link for the hackathon virtual world. See https://www.mediawiki.org/w...) [07:04:58] [telegram] It's boring at home alone, is there anyone to chat? [07:48:25] [telegram] https://t.me/+EzDjC1ASXLphZDM0 [09:34:47] good news everyone! [09:36:10] you found irc? :P [09:43:46] [telegram] Hi folks, if you’re having issues with accessing the online game space, we’d love to know more to help troubleshoot and pass on bug reports to the WorkAdventure team. [09:43:47] [telegram] [09:43:49] [telegram] It would be useful to know what issues you’re running into and which platform version and browser version you have the issues on, if you’re okay with sharing those details publicly. Note that this information would be shared with the WorkAdventure team for debugging purposes. Please email dev-advocacy@wikimedia.org with bugs or message me directly. [10:25:46] [telegram] Is that work adventure site supposed to work on mobile phones? Or only desktop? [10:27:53] [telegram] It should work on both mobile and desktop. 
But some people are running into issues on mobile. [10:27:55] [telegram] is the changing of rooms done with a keyboard and not with a mouse? (re @mseckington: Hi folks, if you’re having issues with accessing the online game space, we’d love to know more to help troubleshoot and pass on ...) [10:29:32] [telegram] and when a room is chosen there isn't zoomed [10:30:57] [telegram] Yes, on desktop you need to move on the map with the keyboard (WASD or arrows). On mobile you need to use the touch screen. (re @Nehaoua: is the changing of rooms done with a keyboard and not with a mouse?) [10:32:51] [telegram] Hi, could you elaborate what you expect to happen? What should be zoomed? When you say "chosen", do you mean entering a room? (re @Nehaoua: and when a room is chosen there isn't zoomed) [10:33:58] [telegram] if you’re trying to join the video call in a room, you’ll need to hit the spacebar (on desktop) or on the popup (on mobile) [10:34:17] [telegram] I suggested that but I understand (a pop-up) (re @andreklapper: Hi, could you elaborate what you expect to happen? What should be zoomed? When you say "chosen", do you mean entering a room?) [10:34:42] [telegram] yes, thanks (re @mseckington: if you’re trying to join the video call in a room, you’ll need to hit the spacebar (on desktop) or on the popup (on mobile)) [10:47:12] [telegram] Is there a limit to the number of letters in our names? I tried to use my Wiki Username but couldn't use all. (re @mseckington: Hi folks, if you’re having issues with accessing the online game space, we’d love to know more to help troubleshoot and pass on ...) [10:47:35] [telegram] yes, it's rather short, around 10 characters or so (re @daSupremo: Is there a limit to the number of letters in our names? I tried to use my Wiki Username but couldn't use all.) 
[10:51:00] [telegram] Yes (re @jhsoby: yes, it's rather short, around 10 characters or so) [10:59:47] [telegram] This may be a basic question, but is there a reason why the Jitsi rooms are not directly linked to from the Hackathon schedule column headings? [10:59:47] [telegram] [10:59:49] [telegram] https://www.mediawiki.org/wiki/Wikimedia_Hackathon_2022/Schedule/Jitsi?tableofcontents=1 [11:00:21] [telegram] https://tools-static.wmflabs.org/bridgebot/ac735432/file_19148.jpg [11:00:46] [telegram] https://tools-static.wmflabs.org/bridgebot/22f125e4/file_19149.jpg [11:02:45] [telegram] How do I get set up in a Jitsi room. I’m still not seeing any info on this. [11:07:00] [telegram] Hi, could you elaborate what you mean by "get set up"? (re @Cyberpower678: How do I get set up in a Jitsi room. I’m still not seeing any info on this.) [11:07:35] [telegram] You should be able to access all the rooms through the online game space. The Jitsi links were published separately for anyone running into issues accessing the game space. The online game space will also allow you to meet people and have conversations outside of the rooms. (re @fuzheado: This may be a basic question, but is there a reason why the Jitsi rooms are not directly linked to from the Hackathon sc [11:17:46] [telegram] There's a user whom I'm trying to meet [11:18:07] [telegram] And it looks like he's next to where I am, but I cannot see their avatar. [11:18:13] [telegram] He sent me a screenshot. [11:18:24] [telegram] Maybe he's not approved or something like that?.. [11:19:40] [telegram] he said u can hear (re @amire80: There's a user whom I'm trying to meet) [11:19:53] [telegram] he talk with me [11:21:07] [telegram] i can hear you [11:26:40] [telegram] Are you still having issues with this? If so, please head to the Help & Moderation room (re @amire80: And it looks like he's next to where I am, but I cannot see their avatar.) 
[11:31:17] [telegram] With all due respect, the gamespace while cute and fun probably serves as a barrier to entry for 50%++ of potential attendees (re @mseckington: You should be able to access all the rooms through the online game space. The Jitsi links were published separately for anyone r...) [11:31:40] [telegram] I think the number of "Where do i go?" messages in this chat is a good indication of that :) (re @mseckington: You should be able to access all the rooms through the online game space. The Jitsi links were published separately for anyone r...) [11:32:19] [telegram] Also, I am planning to attend by mobile for a number of these sessions. That's nearly impossible if I'm using Work Adventure on an iPhone to try to find the Jitsi rooms [11:33:09] [telegram] Would anyone be offended if, in the spirit of the Hackathon, I went and hacked the schedule to point directly at the Jitsi rooms? (re @mseckington: You should be able to access all the rooms through the online game space. The Jitsi links were published separately for anyone r...) [11:34:27] [telegram] This has been my experience in trying to promote the game space concept in previous conferences (WikiConference North America and Wikimania) and it has been... lacking (re @fuzheado: With all due respect, the gamespace while cute and fun probably serves as a barrier to entry for 50%++ of potential attendees) [11:41:44] [telegram] I added the Jitsi links - https://www.mediawiki.org/w/index.php?title=Wikimedia_Hackathon_2022/Schedule/May_20&diff=5222311&oldid=5222272 [11:57:23] [telegram] I'm presenting today, but I have no idea how to set myself up as a presenter (re @andreklapper: Hi, could you elaborate what you mean by "get set up"?) [12:01:06] [telegram] Basically you enter the room, and Jitsi should allow you to screenshare (if you want to present something). It's the third button in the tool bar at the bottom. 
(re @Cyberpower678: I'm presenting today, but I have no idea how to set myself up as a presenter) [12:43:56] [telegram] Is anyone already working on a project? If so, feel free to drop a short description here, with the related Phabricator ticket, and let others know if you need any help :) [13:10:57] [telegram] I wish you could put little signs in the ground in the work adventure with text on, like at an in-person hackathon when you would write on signs on the tables! [13:35:59] will talks be recorded? there's like 3 I want to watch at the same time [13:36:52] [telegram] Forwarded from haleywmf: No, for legal reasons we can't record. But you can upload your slides! [13:37:35] [telegram] Haley has shared this message [13:41:30] [telegram] Can you encourage people to take good notes on Etherpad? (re @dennistobar: No, for legal reasons we can't record. But you can upload your slides!) [14:45:29] [telegram] also wondering more about this (re @Jan_ainali: Is that documented somewhere? As we have recorded things in the past, I wonder what is new/different now.) [14:48:26] [telegram] Hi! Good question. The WMF cannot record participants in a WMF-sponsored event without everyone signing a release form. We decided to avoid that situation, in the interest of making sessions more interactive, avoiding registrations, and not having people concerned with privacy. You are very welcome to upload slides and materials though! [14:49:24] [telegram] I don't think I ever _signed_ anything for any Wikimania though. (re @haleywmf: Hi! Good question. The WMF cannot record participants in a WMF-sponsored event without everyone signing a release form. We decid...) [14:51:28] [telegram] it was implied when you bought the tickets, AFAIR, and then you always had the choice to wear the correct color for the name tag (re @Jan_ainali: I don't think I ever signed anything for any Wikimania though.) [14:51:43] [telegram] have we signed release forms in the past? 
[14:51:43] [telegram] [14:51:44] [telegram] what if there's a portion of the presentation with just the speaker and no audience participation, and that's recorded and then questions are left out? or even if questions happen during, they can be edited out. maybe have the speaker repeat the question for the recording. then you just need the speaker to sign. [14:52:02] [telegram] Yes, checkboxes and such there have been plenty of. Hence the italics on the word signed. (re @Sannita: it was implied when you bought the tickets, AFAIR, and then you always had the choice to wear the correct color for the name tag) [14:52:24] [telegram] obviously this doesn't work for every type of session. some are more interactive [14:52:43] [telegram] checking a box is a form of signing (re @Jan_ainali: Yes, checkboxes and such there have been plenty of. Hence the italics on the word signed.) [14:52:54] [telegram] it's like click-wrap licenses [14:53:25] [telegram] has click-wrap ever really been tested legally? [14:53:26] [telegram] plus, AFAIR, at Wikimania if you gave a presentation you had to give permission to be recorded [14:53:38] [telegram] yep, and confirmed (re @jeremy_b: has click-wrap ever really been tested legally?) [14:53:49] [telegram] source: my MA thesis :P [14:53:50] [telegram] It depends if the signing is to prove it was me that did it or not. Which I guess is what a release form is for. (re @Sannita: checking a box is a form of signing) [14:53:57] [telegram] OR (re @Sannita: source: my MA thesis :P) [14:54:07] [telegram] 🤷‍♂️ (re @jeremy_b: OR) [14:56:28] [telegram] second opening ceremony starts in 4 minutes! [15:05:24] [telegram] Is the chat in the work adventure global or per room? [15:05:40] [telegram] The chat is per room [15:06:18] [telegram] I guess not many found the button yet :) [15:06:32] [telegram] Is this the best place for general hackathon chatting? Or IRC? Or Mattermost etc? :D [15:06:32] [telegram] Is there a standard system for etherpads, or should we just set up our own? 
[15:06:41] [telegram] so many questions ;) [15:09:15] [telegram] this room is connected to IRC at #wikimedia-hackathon. So either works! [15:09:29] [telegram] IRC and the telegram channel should be bridged. Though the lack of IRC traffic is making me check (re @Adam: Is this the best place for general hackathon chatting? Or IRC? Or Mattermost etc? :D) [15:09:41] It is [15:09:55] (it's working with #wikimedia-hackathon in libera) [15:10:07] [telegram] There are no mobile push notifications on Mattermost, however, which is a bummer for mobile users [15:10:42] Well, here's my first thing to learn this hackathon... what is Mattermost? [15:11:43] [telegram] Mattermost is an open-source, self-hostable online chat service with file sharing, search, and integrations. It is designed as an internal chat for organisations and companies, and mostly markets itself as an open-source alternative to Slack and Microsoft Teams. [15:11:43] [telegram] [15:11:44] [telegram] https://en.wikipedia.org/wiki/Mattermost (re @wmtelegram_bot: [irc] Well, here's my first thing to learn this hackathon... what is Mattermost?) [15:12:30] [telegram] *pings self in irc (addshore)* [15:12:39] aaah here it is! [15:14:11] [telegram] https://meta.wikimedia.org/wiki/Wikimedia_Chat is our Mattermost implementation, though it's not very active at the moment [15:15:57] [telegram] Anyone getting a little fatigued with having to download a new messaging app for every person you want to talk to 😅 [15:16:10] [telegram] +1 (re @lcawte: Anyone getting a little fatigued with having to download a new messaging app for every person you want to talk to 😅) [15:16:18] [telegram] 🙃 oh yes [15:18:27] ( so I'll mainly stick to IRC :P ) [15:18:28] [telegram] To remedy this, we will be using email mailing lists for all communications from now on :P (re @lcawte: Anyone getting a little fatigued with having to download a new messaging app for every person you want to talk to 😅) [15:18:56] [telegram] Great... 
can mailing lists stop bouncing me off please? [15:19:27] [telegram] It's https://xkcd.com/1782/ meets https://xkcd.com/927/ :P [15:20:53] IRC TILL I DIE (or something like that) [15:29:52] [telegram] Anything happening in the Wikidata room? [15:30:42] There are people in it! [15:31:55] me too now [15:32:11] addshore, could you help with https://phabricator.wikimedia.org/T301104 or recommend someone who can? [15:34:04] balloons: ooooh interesting. I may be able to help a little! [15:34:22] addshore, Mitar is looking for help on how to get started. Where is the code stored? [15:36:10] let me have a look [15:36:33] yea, I would need some help / a pointer to where the code which generates these dumps is [15:38:13] Mitar: do you know if the wikidata JSON dumps contain these fields? [15:38:45] With my Wikibase hat on: the title in the API response is only added by the API, and is not part of the entity itself [15:38:51] but that doesn't mean we can't add it to the dump :) [15:39:42] [telegram] It might be because account creation is either disabled or hidden somewhere very non-obvious. (re @pharosofalexandria: https://meta.wikimedia.org/wiki/Wikimedia_Chat is our Mattermost implementation, though it's not very active at the moment) [15:40:04] no, it is not, because wikidata's "page title" is the same as entity ID [15:40:59] so there is no concept of page title on wikidata [15:41:13] OK, so how do we add it to the dump? 
[15:41:39] I mean, we could also add it as a sitelink, that is how it is done on wikidata (sitelink to Wikipedia's page title, for example) [15:42:06] So the extension for mediainfo is https://github.com/wikimedia/mediawiki-extensions-WikibaseMediaInfo [15:42:14] but I think having it similar to the API would be useful [15:42:18] let me have a look for the code that relates to dumps [15:42:41] thanks [15:44:06] So the dumping is done via a Wikibase maintenance script extensions/Wikibase/repo/maintenance/dumpJson.php [15:47:42] Mitar: also note, I linked on github, but that is a mirror, and the repo is on Gerrit! [15:47:56] lol, much more abstracted code than I expected, I thought it was some ugly script :-) [15:48:02] yea, I get that [15:48:15] So the dumpJson script is in https://github.com/wikimedia/Wikibase/blob/master/repo/maintenance/dumpJson.php [15:48:59] yes, I found that, so the same script is used on wikimedia commons? then it makes sense why things are missing [15:49:14] Yes, for commons the option of a different entity type is used [15:49:47] if you follow through the code this is where the entity (mediainfo entity) gets turned into the JSON you end up seeing https://github.com/wikimedia/Wikibase/blob/44b2d731c507d40472cf6f1392bc378166e2a45f/repo/includes/Dumpers/JsonDumpGenerator.php#L138 [15:51:41] There is already one example of information being added to the serialization in https://github.com/wikimedia/Wikibase/blob/44b2d731c507d40472cf6f1392bc378166e2a45f/repo/includes/Dumpers/JsonDumpGenerator.php#L140 [15:52:44] I'm not sure if you would want to do this for all entities, or just media info [15:53:40] all entities on wikimedia commons, if I understand how things are [15:53:56] are there any other entities there which are not related to files? [15:54:22] > Yes, for commons the option of a different entity type is used [15:54:23] which option is that? 
I haven't found it yet [15:54:23] [telegram] [PSA] in about 6 minutes there will be the Wikifunctions session at the 2022 Wikimedia Hackathon! Join us in the Mediawiki room! [15:55:33] [telegram] does anyone feel like writing a url shortener that can actually shorten long urls? [15:56:10] [telegram] as opposed to one with an artificial limitation on input link length? [15:56:33] [telegram] yes, like the one that the query service uses that keeps failing on me [15:57:53] @addshore thanks for all the links, I will explore them and see if I can figure things out, if you find/think of anything else which could help me, feel free to share it with me [16:00:43] [telegram] Mitar: https://gerrit.wikimedia.org/g/mediawiki/extensions/WikibaseMediaInfo/+/refs/heads/master/src/DataModel/Serialization/MediaInfoSerializer.php is probably also useful, though I’m not sure if the extra information should go in there or somewhere else [16:01:01] * legoktm waves, good morning [16:01:05] [telegram] [Phab Q&A now!] Discuss/ask anything about Phabricator now in the Infrastructure Tools Room! [16:02:37] Good evening legoktm [16:03:03] Hey legoktm [16:03:20] [telegram] speaking of which (re @mahir256: as opposed to one with an artificial limitation on input link length?) [16:03:32] [telegram] hello legoktm! [16:05:23] Hey legoktm [16:06:10] [telegram] https://tools-static.wmflabs.org/bridgebot/62c57b75/file_19196_oga.ogg [16:07:09] Mitar: the two approaches I can see would be 1) If titles should always be added to JSON, then add the titles to the stuff in JsonDumpGenerator.php#L140 always... 2) If titles should only be added to JSON for media info, you would need to add a pluggable way to add titles to the JSON per entity type, and each entity could then implement its own additions to the JSON [16:07:17] [telegram] I think the link for the Mediawiki room from the schedule is the wrong one, it doesn't link to the room where the session actually takes place. Can someone help fix it? 
(I don't know how to retrieve the correct link) [16:07:52] [telegram] I just fixed it [16:08:02] [telegram] 5 of us were in the wrong room [16:08:29] by always you mean for all M entities, or also on Wikidata? because there is no title concept on Wikidata, so I think 2) is what is probably needed [16:08:48] Also for wikidata [16:09:33] So there is a concept of title on Wikidata, but for item Q1 the title is also Q1, it's different if you look at properties for example, Property P1 is actually title Property:P1 [16:09:58] Lexemes also are not in the main namespace, so a title could be useful, and also for default Wikibase installations, items are actually in the item namespace, so Item:Q1 for example would be the title [16:10:37] would that be useful? if so, then sure, I think 1) feels like a much easier approach [16:11:01] I think the difference is that the API on Wikidata is not adding such a title field [16:11:22] you also have title on https://www.wikidata.org/w/api.php?action=wbgetentities&ids=Q1 [16:11:41] and on https://www.wikidata.org/wiki/Special:EntityData/Q1.json [16:11:42] oh, yea [16:11:46] I just noticed https://www.wikidata.org/wiki/Special:EntityData/Q42.json [16:12:42] If you do go with #1 then I would suggest wrapping it in a config variable / feature toggle too :) [16:13:10] The other thing to consider is that looking up the titles might be more expensive :/ (I'll have a quick look at that) [16:15:35] so there are a few other fields which might be useful to add if we compare API/dump: ns, pageid, modified [16:16:00] also, it feels to me like lastrevid is already added ad-hoc to the dump, because it looks appended at the end [16:16:11] maybe this is the place to also add title, ns, pageid, and modified [16:17:32] I was right! https://github.com/wikimedia/Wikibase/blob/44b2d731c507d40472cf6f1392bc378166e2a45f/repo/includes/Dumpers/JsonDumpGenerator.php#L149 :-) [16:17:42] so yea, let's add title, ns, pageid, modified there as well? 
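(The change being discussed here — merging the page metadata the API already exposes into each dump record, alongside the existing lastrevid append — amounts to a small dictionary merge. A minimal illustrative sketch follows, in Python rather than the actual Wikibase PHP; the helper name and the dict shapes are hypothetical, not real Wikibase code.)

```python
# Illustrative Python sketch (the real code is PHP in Wikibase) of the idea
# above: alongside the ad-hoc "lastrevid" already appended to each dump
# record, also merge the page metadata fields that wbgetentities and
# Special:EntityData expose. Helper name and shapes are hypothetical.

def add_page_info_to_record(record: dict, page_info: dict) -> dict:
    """Return a copy of an entity serialization with page metadata added."""
    enriched = dict(record)  # don't mutate the original serialization
    for field in ("title", "ns", "pageid", "modified", "lastrevid"):
        if field in page_info:
            enriched[field] = page_info[field]
    return enriched

# A minimal MediaInfo-style record as it appears in the dump today...
entity = {"type": "mediainfo", "id": "M76", "labels": {}, "statements": {}}
# ...and hypothetical page metadata of the kind the API adds.
page = {"title": "File:Example.jpg", "ns": 6, "pageid": 76,
        "modified": "2022-05-20T00:00:00Z", "lastrevid": 123456789}

print(add_page_info_to_record(entity, page)["title"])
```

Keeping the merge in one small helper is what makes it reusable between the dumper and the API result builder, as suggested below in the discussion.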
[16:19:42] Mitar, yeah I think you should be okay, as the metadata is used in the lookup of the entity in most cases, and is cached etc [16:21:43] Mitar: in the API, I believe title, and that other information is added here https://github.com/wikimedia/Wikibase/blob/44b2d731c507d40472cf6f1392bc378166e2a45f/repo/includes/Api/ResultBuilder.php#L319 [16:21:46] so you agree with the plan to add all of those additional fields to make the data the same between API and dump? and you propose that this is an opt-in or opt-out config variable / feature toggle? [16:22:23] Ultimately it is not my decision if this ends up being run for commons etc, but this would be my approach yes [16:22:54] looks like something similar to this is what you want https://github.com/wikimedia/Wikibase/blob/44b2d731c507d40472cf6f1392bc378166e2a45f/repo/includes/Api/ResultBuilder.php#L354-L362 [16:23:12] I think exactly this [16:23:20] it would be nice if addPageInfoToRecord could somehow be abstracted out [16:23:24] so that the same function is used in both placs [16:23:26] *places [16:24:27] not sure if types match, if $data is array as well [16:24:31] Yes, you could have a little class, that just does this one thing and reuse it between the API and dump :) [16:25:28] Mitar: you should write up what you're planning to do in a comment on the phab task now :) [16:25:34] hm, any suggestion where to put this class? some existing utility file? [16:25:54] given that it should be shared between Dumpers and API [16:26:33] For lack of a better place right now, add it to repo/includes [16:29:12] any file/class name suggestion? (sorry for such basic questions, but I am not familiar with the codebase and its naming policies) [16:29:56] Mitar: I wouldn't worry too much about the location of the code or names just yet, as that's easy to change when it works :) [16:30:29] OK [16:30:31] thanks! 
[16:30:37] Call it what it does :) [16:31:09] feel free to subscribe to the issue [16:31:17] I will post the plan there now [16:32:20] and thanks again for all the help [16:32:41] no problem. I'll be on and offline throughout the weekend so feel free to ping me, and I'll try to respond [16:32:59] thanks [16:34:21] o/ [16:34:28] Hi Krinkle ! [16:37:45] addshore: I am now [16:37:51] Amir1: can I nerd snipe a few mins of you? :) [16:38:06] sure [16:38:40] I just want to double check that you think that adding title as a key in the output for commons media info jobs probably will not have any big performance impact [16:39:09] AFAIK and as far as I can tell the title at the point of dumping is already cached / used in the request in the loop to get the serialization [16:39:19] addshore: I don't think it'll make a big difference [16:39:25] woo :) [16:39:26] performance-wise [16:39:29] I like how Jitsi lets you scroll down the grid, much like an actual room. [16:39:43] Amir1: it'll obviously make the dump slightly bigger :P [16:39:49] context is https://phabricator.wikimedia.org/T301104 [16:41:33] addshore: what are you planning to add there? title of the item properties in the statements? [16:41:53] the title of the page is already there [16:41:54] Amir1: I think T301104 is the context [16:41:55] T301104: Wikimedia Commons structured data dump does not contain all fields, e..g, title - https://phabricator.wikimedia.org/T301104 [16:42:19] primarily for mediainfo, the title. But Mitar was going to take a look at doing it for all entities to start with [16:42:30] bd808 addshore I'm a tiny bit confused. 
https://commons.wikimedia.org/wiki/Special:EntityData/M76.json has title there [16:42:37] Amir1: for dumps :) [16:42:55] Amir1: for both special entity data and the action API the JSON is tweaked via other things [16:42:56] sure, that's nothing compared to the whole entity [16:43:33] the dump is growing for other reasons [16:43:51] so the API has that, but dumps do not, so you cannot link the entity in the dump with the file because title is missing in dumps [16:48:15] I wrote the plan https://phabricator.wikimedia.org/T301104#7945319 [16:49:19] so I am not completely sure about the setting/flag part, because I tend to want things to just do the right thing, is there a way to check with whoever is responsible for wikimedia commons structured data dumps if this is something which should be added or if they feel it is OK just to generate them once the code change is merged? [16:57:04] I'll be talking about skins shortly in the mediawiki room if anyone is interested in joining [16:58:10] API question - most efficient way to find out if an image is used - is it just to imageusage list and count results/catch an unused error/response? [17:00:22] [telegram] In a few minutes, @mahir256 will talk about how to use Wikidata Lexicographical Data on Wiktionary in the Wikidata room! [17:00:50] I'll be having a conversation about Tech and Tea in the Cantina any moment now :) [17:02:20] [telegram] Mediawiki room empty? [17:03:09] [telegram] Not anymore (re @Yetkin: Mediawiki room empty?) [17:03:10] [telegram] Lcawte: that with iulimit=1, I guess, plus globalusage (with likewise gulimit=1 to reduce the amount of data) [17:04:17] Fortunately it's for local projects so globalusage shouldn't matter too much. [17:05:13] [telegram] Anything for this? [17:05:13] [telegram] [17:05:14] [telegram] Making a MediaWiki skin [17:05:50] I was trying to make an easy search replacement. Or search override for Mediawiki. 
Currently it just hides the main search, creates a new skin search, and passes it through client-side JS preprocessing for NLP to make a subject matter expert or expert system. [17:06:10] [telegram] Lcawte: ah, okay :) [17:12:42] So you can make a wiki with site-specific searches instead of general full text, but context searches. i.e. tech, entertainment, social issues, cooking etc. specific. [17:12:53] I feel like more people should be tweeting things :P [17:13:16] [telegram] Is there a hashtag? Can't tweet without a hashtag. [17:13:22] #wmhack [17:13:27] always and forever [17:13:56] [telegram] Also, people have run away from Elonnet to ... Mastodon or whatever it's called. [17:14:18] I need to get on that Mastodon thing, but I need to pick a server to start on ... [17:14:34] [telegram] Well, there's a ... tech topic ... I guess? [17:16:56] Amir1: want to join the cantina and talk Mastodon? #nerdsniping [17:17:39] where is it? [17:17:43] bottom right [17:24:14] [telegram] Just a note that in a couple of minutes I'm doing my session on triaging bugs for your community [17:24:16] [telegram] [17:24:17] [telegram] At 20:00 I will then do a shared screen triaging session where we'll do some live triage of a ticket [17:24:19] [telegram] [17:24:20] [telegram] If u have a ticket u think would be interesting to refine u can add them here: https://etherpad.wikimedia.org/p/T307776 [17:31:54] hey alistair3149 ! [17:32:03] Hey I was going to message you haha [17:32:41] I do have a few follow-up questions that I want to ask about skin architecture, is this the right place to ask? [17:32:47] yep this is fine [17:34:51] This might be different than the vision of making the skin dumb, so I guess that can also apply to extensions as well. Would there be a way to measure the performance impact of a skin/extension? 
Seasoned MW developers can tell what is expensive or what is a problem right away from Phabricator, but newer developers might not have the skills [17:34:52] or experience to find out about it, or perhaps never will. [17:36:10] When you say performance impact, do you mean PHP benchmarking or how long it takes someone on a slow connection to load the page? [17:37:37] Server-side performance mainly. Client-side would also help but my assumption is that skin developers mostly possess the frontend skills to do client-side benchmarks [17:38:05] PHP benchmarking* [17:38:09] I think Wikimedia deployed code has a lot of unfair advantages as we have instrumentation for quite a few of these things. [17:38:25] But I was just talking about benchmarking with one of my work colleagues [17:40:22] We currently don't benchmark our skin code [17:41:00] but we're hoping to, as we're concerned that there's a lot of unnecessary processing relating to the layer we have to maintain to support older skins. [17:41:32] We use https://www.mediawiki.org/wiki/Extension:NavigationTiming to track user-perceived performance [17:41:54] but I don't think there's a general mediawiki solution for server-side benchmarking [17:42:49] Thank you for the link <3 ! It is indeed interesting and I will look into it [17:43:05] [telegram] I started an Etherpad for the Game Jam session in 15 minutes: https://etherpad.wikimedia.org/p/gamejam [17:43:18] [telegram] (In Community Room) [17:43:32] I assume you are worried about performance on the Star Citizen Wiki? [17:44:23] TBH MediaWiki relies a lot on caching layers for server-side performance. [17:44:32] We use Varnish for example to cache HTML on the majority of pages. [17:45:26] The skin is developed for general use so yes that also includes the Star Citizen Wiki. 
We are in a weird position for skin development since we have to support anything since LTS :S But I have been using calls to get sitestats and such and I do not know how much they are cached or what are the performance impacts [17:48:20] For example, I was investigating into the language button changes from Vector 2022. $languages from getLanguage() was cached in the PHP file, so what is/should be cached by what layer? I find it difficult to navigate that content or in general finding the best practices :S [17:48:29] Site stats as in page views etc? [17:48:52] Just the sitestats class, getting number of edits, articles, user, etc. [17:49:13] So for this sort of thing, I'd suggest making this an extension not a skin [17:49:32] but I imagine looking that up on every page without caching would be a bit problematic. [17:49:57] You could have a cronjob that generates static HTML [17:50:50] Yeah probably have to lazyload it through JS in the future. Getting sitestats is a pattern in Fandom-derived skins like Cosmos, Evelution, Mirage etc. so I thought it is common practice [17:51:18] alistair3149: in future I'm hoping extensions will be able to provide components like these ones: https://github.com/wikimedia/mediawiki/tree/master/includes/skins/components [17:51:29] and then skins would be able to render them conditionally [17:51:56] I know there is renewed interest in per page stats [17:52:09] e.g. how many people edited a page, what gender are they etc [17:52:13] Thanks for the link! Exciting times ahead definitely! [17:52:35] Ah yes there is, I have been digging information from the Growth team experiments too [17:52:35] but I think the site stats concept is definitely more Fandom-focused [17:52:52] Do all skins copy/paste the same sort of code? [17:53:46] It depends on the PHP level of the developer as you mentioned before [17:53:56] [telegram] Is there a better way to do it for multiple images/files? 
My (intended) use-case is a gadget or tool to run over the Move to Commons cat on enwiki (which is like 120k images) (re @lucaswerkmeister: Lcawte: that with iulimit=1, I guess, plus globalusage (with likewise gulimit=1 to reduce the amount of data)) [17:54:24] Some people might be able to come up with their own solution, but often it is copied from other skins, and the chain continues [17:54:57] alistair3149: could you write a Phabricator ticket in https://phabricator.wikimedia.org/tag/mediawiki-core-skin-architecture/ pointing to various skins using site stats - I'd be interested in looking at what they're doing and trying to work out a more generic solution [17:55:09] [telegram] hm, at that level it might be better to write a Quarry query? get the images from `categorylinks`, left join against `imagelinks`, assert that the imagelinks column is null, something like that (if you know SQL) (re @lcawte: Is there a better way to do it for multiple images/files? My (intended) use-case is a gadget or tool to run over the Move to Com...) [17:55:31] I can't speak for other people, since I am not a developer by trade and I only know enough PHP to read documentation and vaguely say what it is doing :S So I mostly copy and do it with trial and error [17:55:40] This sounds very similar to the advertising use case that Fandom wikis had - https://www.mediawiki.org/wiki/Advertising [17:55:43] [telegram] (or inner join if you wanted to select images that *are* used locally) [17:55:53] Previously there were skins forking skins just to add adverts [17:55:53] Jdlrobson: absolutely, I will file a ticket on it later [17:55:58] now we can just use hooks to do that [17:56:06] Thanks alistair3149 [17:57:17] alistair3149: perhaps skins.wmflabs.org could do some basic profiling on skins and give skins a performance score [17:57:46] That would indeed be awesome! It is an exciting project that I wish more people knew about! [17:57:59] [telegram] Ah, yeah, maybe.
I don't think I've used Quarry yet, so I might have to have a look into that. (re @lucaswerkmeister: hm, at that level it might be better to write a Quarry query? get the images from categorylinks, left join against imagelinks, a...) [17:58:00] I have been working on this API https://skins-demo.wmflabs.org/w/rest.php/v1/skins which now has tags [17:58:38] But for my project to be a search replacement, I had to modify the skin and not an extension. It would be nice to have a standard way to override the MW search. [17:59:35] Would it be possible to add an RTL or language selector for the skinlabs? [18:01:22] I was mentioning this because when I was developing the skin, I was unsure what the best practices are regarding handling RTL. Some CSS does get flipped automatically, but there isn't an exhaustive list of what is flipped and what is not. And for those that are not flipped, what is the best approach to style the elements appropriately? [18:02:12] Hi all, we will be doing a mwcli install party in the mediawiki room starting now! https://mediawiki.org/wiki/Cli [18:02:14] come join us! [18:06:31] @faktor Could you share your code? I would be interested to see if there's anything we can improve there. [18:07:12] @alistair3149 for the preview? Could definitely add that (harder for the skin builder) [18:07:29] https://github.com/cssjanus/cssjanus is what we use to flip CSS rules [18:07:32] Yeah, just for the preview [18:07:47] alistair3149: I'll have a look at that today :) [18:08:27] Thank you for both! [18:08:44] https://github.com/jdlrobson/skins.wmflabs.org/issues/12 [18:12:26] [telegram] Hi all! Some committee members made badges! You can put them on your user page. (You will have to decide for yourself if you earned them.) https://www.mediawiki.org/wiki/Wikimedia_Hackathon_2022/How_to#Joining_a_session [18:14:25] anyone remember how to mount the dumps folder from a Cloud VPS instance?
It's on by default on Toolforge: https://wikitech.wikimedia.org/wiki/Help:Toolforge/Dumps [18:15:45] milimetric: it's a puppet thing (maybe just hiera settings?)... I'm looking for the admin docs on it now. [18:15:58] yeah, I remember a role to enable, but not where/how [18:16:05] It's still very early on. Had to figure out how to co-opt the MW search, which is done (for now). I am on the last part to make it somewhat useful, which is adding synonyms to the search to expand it to the expert domain. Currently just using the node synonyms JSON as a framework. Not using the node lib though; making my own. [18:17:33] Have not added the PHP expert system part yet. Just the client-side search expander, so the expert system gets good data. [18:17:50] using NLP [18:19:05] bd808: is there just a place where all the Toolforge roles are listed? It'd probably be easy to recognize [18:20:34] I have my test tech demo up (https://spotcheckit.org) but it does not have a lot of data yet, as I have been working on the search override. Nor does the synonym JSON have tech-based words yet to give the expert system good data. [18:21:51] Jdlrobson, But if you look at the source, it's all client-based JS right now, in place of the Tweeki skin search function. [18:22:59] Actually have the submit button turned off so I can see the debug data. [18:23:23] Submit cleared the console log, so had to turn it off [18:23:35] //document.forms[0].submit(); [18:24:27] milimetric: looks like profile::wmcs::nfsclient is where the magic happens. that is gated by profile::wmcs::instance and the 'mount_nfs' hiera setting. [18:24:51] Jdlrobson thank you for all your answers! I am hopping off now, but see you around in Phabricator! [18:25:16] But if I turn submit on, you can see in the secondary search what the output is; I am pulling nouns from the input and expanding the search with synonym words for a wider search return. [18:26:00] The synonym data would have an input method for domain-specific words: tech, cooking, etc.
[18:26:17] So the expanded search would be directed for the domain. [18:27:26] milimetric: that's the client side, but you will also need a patch like https://gerrit.wikimedia.org/r/c/operations/puppet/+/679790/ for the server side [18:28:06] thank you I guess ... maybe I'll just scp ... :) I'm only doing a couple files [18:28:15] ex. "How can I get additional data for my server database"; "additional" would be expanded in the search with "additional" plus "backup" and "redundant", giving the return more answers along the same input question. [18:28:38] sorry, punctuation. Thank you! (then me giving up) [18:28:49] full-text search would not cover this branch of words [18:29:27] * bd808 finally found the runbook at https://wikitech.wikimedia.org/wiki/Portal:Data_Services/Admin/Runbooks/Enable_NFS_for_a_project [18:30:17] nice, thx! [18:31:19] Then when the search is sent to the PHP MediaWiki engine, it will go through a full-text search, plus culling with previously answered questions, to get to a final result with the xbert system (expert system). [18:32:11] A surgical strike from the initial question; after a few results it will end with its best answer. [18:35:44] But in this case the site is for Linux one-liners or scripts. You would end up with the site's best option to do what you wanted from the first question. [18:38:49] In Google's case they try to get it done in one search, while this will have a tree-branch method over a few results to get better results, and the full-text default is too limited. [18:51:28] afk, have to work on stuff IRL. Will get back to my project later. [18:59:38] legoktm: https://gerrit.wikimedia.org/r/c/mediawiki/extensions/GlobalWatchlist/+/793861 [19:05:26] faktor: is it possible to share the source code?
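The "surgical strike" narrowing faktor describes — follow-up questions pruning candidate answers until one remains — could be sketched roughly like this. Everything here (the rules, the candidate answers, the `narrow` helper) is hypothetical illustration, not faktor's actual code:

```python
# Hypothetical rule table: each entry is a question plus, per answer,
# the set of candidate answers that remain compatible with it.
RULES = [
    ("Is this about backups?",
     {"yes": {"rsync one-liner", "tar backup script"},
      "no": {"grep one-liner"}}),
    ("Should it run over the network?",
     {"yes": {"rsync one-liner"},
      "no": {"tar backup script", "grep one-liner"}}),
]

def narrow(candidates, answers):
    """Prune the candidate set with each answered question; stop early
    once at most one result remains (the 'best answer')."""
    for (question, branches), answer in zip(RULES, answers):
        candidates = candidates & branches[answer]
        if len(candidates) <= 1:
            break
    return candidates

result = narrow({"rsync one-liner", "tar backup script", "grep one-liner"},
                ["yes", "yes"])
print(result)  # {'rsync one-liner'}
```

This mirrors the tree-branch idea: rather than answering in one search, each follow-up intersects the remaining candidates until the site's best option is left.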
[19:06:15] bye [19:06:59] Seems like the code is minified so I can't read it, but it should be very possible to replace the default search implementation [19:13:59] Jdlrobson, I have so far only changed one file, one function, from the Tweeki skin, calling it TweekiES. (Not counting added JS libs: Typo, compromise, and I took the src.json from the node synonyms lib.) "public function renderSearch" from file: MWROOT/skins/TweekiES/includes/TweekiESTemplate.php https://pastebin.com/vgVkNawz [19:14:59] TweekiES is the same as Tweeki except for that function so far. Plus JS libs. [19:16:48] Put the Typo JS lib in it as I want to use audio, and Typo will auto-correct audio mistakes without intervention. [19:18:24] Audio part not added yet. Will be later; not in scope yet. https://makitweb.com/how-to-add-speech-recognition-to-the-website-javascript/ [19:20:54] The compromise JS lib pulls nouns and adjectives, then I can expand on them with domain-based synonyms [19:22:04] Then will come the work of the expert system, also not in scope yet. https://www.phpclasses.org/package/10346-PHP-Ask-questions-and-make-decisions-based-on-answers.html [19:24:26] But a TweekiES MediaWiki site will have a template or custom xbert question-and-answer rule(s) for the site, plus domain-based synonyms. [19:25:56] The search would know what it has and does not have, and redirect the user to items it has and knows. Maybe send them away to another page for other domain items if it's not in its domain. [19:26:38] Perhaps you want to use xyz.com instead. We only have domain-based items here. [19:32:28] But what's in scope currently: override MW search (done), Typo JS for auto-correct without intervention, compromise JS to pull useful items (nouns, adjectives, subjects) from the initial input, and synonym.json (aka src.json) to expand the default full-text search with domain-based related items.
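The in-scope pipeline above (pull useful terms from the input, then append domain synonyms before full-text search) can be sketched in a few lines. Python is used here only for illustration — the actual project does this client-side in JS with the compromise lib — and the synonym table is a made-up stand-in for the synonym.json mentioned:

```python
# Hypothetical stand-in for the domain-specific synonym.json / src.json.
SYNONYMS = {
    "additional": ["backup", "redundant"],
    "copy": ["duplicate", "clone"],
}

def expand_query(query, synonyms=SYNONYMS):
    """Append synonyms of recognized words so that full-text search
    matches pages this 'branch of words' would otherwise miss."""
    words = query.lower().split()
    extra = []
    for word in words:
        for syn in synonyms.get(word, []):
            if syn not in words and syn not in extra:
                extra.append(syn)
    return " ".join(words + extra)

print(expand_query("additional data for my server database"))
# additional data for my server database backup redundant
```

This matches the example given earlier in the chat: "additional" gets expanded with "backup" and "redundant", giving the full-text search more answers along the same input question.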
[19:34:09] The end result would be the same search, but with the text ending in extra expanded search items for the full-text search. [19:34:27] Took a few weeks to figure that part out, but moving forward now. [19:35:25] The idea was like, oh, that would be cool. Then it was, how do I do that? Figured it out. [19:35:50] Took a while, but happy I can move forward with the idea now. [19:44:18] Jdlrobson, But that is all I have so far (a modified Tweeki skin template file), an idea I am building on. With that you have all my FOSS GPL code. [19:44:51] I think it would be a value add to MediaWiki and to businesses and organizations alike. [19:47:42] With that, back to IRL home tasks. Back later. [20:50:01] hey! somebody tried to talk to me in the "VR" in Germany.. and I could not respond because I was in Google Meet in another tab.. who was it? [21:20:43] [telegram] @andreklapper are you around :) [21:42:44] @addshore I made a patch: https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Wikibase/+/793934