[06:56:48] I see Adam's point about subjectivity in historical narration.
[06:57:55] I attended the Italian DL session and saw the work of the Italian Research Team. It is a point to consider once we are done creating Wikifunctions.
[08:08:53] <বোধিসত্ত্ব> https://pretalx.com/wdcon21/submit/beTaVM/info/
[15:40:42] Yes. The assumption is that if a language is associated with a specific point of view, its speakers are also more likely to have an article on that specific topic of interest written in their language. Remember, Abstract Wikipedia is a common baseline of knowledge; it is not there to replace good content on the Wikipedias, but to fill their gaps.
[15:41:23] This allows specific language communities to focus on the topics they really care about and let other topics be made available from the common repository.
[20:57:37] I ❤️ the progress bar! Fantastic UX work going on it seems. :) : https://tools-static.wmflabs.org/bridgebot/45ed43a4/file_7085.jpg
[21:19:43] Thank you! I'll make sure the designer gets the feedback :)
[21:33:33] I was thinking a little about possible uses of Wikifunctions, because there was talk about that in the user research.
[21:33:34] I'm guessing that LibreOffice could have Wikifunctions and Wikidata integration, so that if I write a sentence in past tense and want it in present tense, I could select it and choose "Convert to present tense using Wikifunction".
[21:33:36] I'm also thinking this could be done at the word level: I right-click "ate", choose "change tense", get a list of the word in different tenses ("to eat", "eating", "ate", "has eaten"), and clicking one of those inserts it instead.
[21:33:37] Further down the road, I really believe all text entered into computers will somehow be NER-matched and converted to lexemes. This is not always easy, so I'm guessing LibreOffice will have a function where you go through all the edge cases that could not be matched automatically to a lexeme form.
[21:33:39] After matching, the document is saved with lexemes in the markup so it can be used downstream, e.g. by speech synthesis programs for the blind.
[21:33:40] Maybe even further out, government-produced text could be mandated by law to be matched to lexemes, so that we rule out ambiguities at the form level; and if the authors want to make very sure we understand them, they could mark up the senses too.
[21:33:42] So possible user stories could be:
[21:35:10] * as a user, I want to make a function for Danish irregular verbs, so that language learners can benefit when the functions are used in software that new Danish language learners use
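The word-level "change tense" interaction described above could be sketched roughly as below. This is a minimal illustration only: the form table, the `FORMS`/`FORM_INDEX` names, and the `change_tense_options` function are all invented for this sketch; a real integration would query Wikidata lexeme forms via Wikifunctions rather than a hard-coded dictionary.

```python
# Hypothetical sketch of the word-level "change tense" idea:
# given an inflected surface form, find its lexeme and offer the
# other forms for the user to pick. The tiny table below is
# invented for illustration; real data would come from Wikidata
# lexemes via Wikifunctions.

FORMS = {
    "eat": {  # lexeme: the English verb "to eat"
        "infinitive": "to eat",
        "present participle": "eating",
        "simple past": "ate",
        "past participle": "eaten",
    },
}

# Reverse index: surface form -> lexeme key.
FORM_INDEX = {
    form: lexeme
    for lexeme, forms in FORMS.items()
    for form in forms.values()
}

def change_tense_options(word: str) -> list[str]:
    """Return the alternative forms a user could pick for `word`."""
    lexeme = FORM_INDEX.get(word)
    if lexeme is None:
        # Unmatched words would go to the manual edge-case review
        # step described above.
        return []
    return [f for f in FORMS[lexeme].values() if f != word]
```

For example, `change_tense_options("ate")` would offer "to eat", "eating", and "eaten", while an unmatched word like "ran" (absent from this toy table) returns an empty list and would fall through to manual review.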
[23:15:47] Weekly Update Nr 47: Thank you, Lindsay! https://meta.wikimedia.org/wiki/Abstract_Wikipedia/Updates/2021-09-30