[08:50:10] !log bsadowski1@tools-bastion-13 tools.stewardbots Restarted StewardBot/SULWatcher because of a connection loss
[08:50:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[09:53:29] I'm trying this now. When doing this without options I get `Could not find a public_html folder or a .lighttpd.conf file in your tool home.`
[09:53:29] When putting in the options used in the My first NodeJS OAuth tool, I get into a new separate shell, and then when I get to the "start the webservice" point, it says the webservice command doesn't exist. I tried just doing 'start' but that also doesn't exist (re @bd808: It sounds like you may have missed running webservice (some options here) start on login.toolforge.org to actually start the ser...)
[10:02:49] Ah, I found the page https://wikitech.wikimedia.org/wiki/Help:Toolforge/Node.js and it looks like I need to put my code into a different folder, and also I need to write a file named server.js which ... must open the file that I actually want to display on the server? I'm not certain about that.
[10:09:57] Also, if I run a git clone inside my home/www/js folder as the tutorial suggests, the resulting package.json doesn't seem to end up in the right folder, right? Should I be supplying specific options to git clone to make it put the code right into the current folder rather than in a child folder named after the repo?
[10:33:18] Alright, my webserver is now serving hello world at all URLs that start with the name of my tool (see e.g. https://shex-validator.toolforge.org/). Now I want it to serve a specific HTML file in my repo instead. I assume I need to edit my server.js for this, but I don't know what it should be. Does anyone else here know?
[10:39:37] Mennolt: there are multiple ways to achieve that; you could e.g. use ExpressJS, see https://expressjs.com/en/starter/static-files.html and the whole "Getting Started" section in their docs
[10:40:41] Thanks dhinus! I'll take a look at that
[10:40:59] you can also take a look at the next section in the Node.js wikitech page: https://wikitech.wikimedia.org/wiki/Help:Toolforge/Node.js#Deploying_a_Vue_JS_Application_using_Node_JS_and_Vite
[10:42:34] I had issues with step 3 of that section because my dependency tree is broken at the moment, but the stuff I need for my tool works anyway, so I've been ignoring it
[13:02:51] Thanks for the help, I've now got my page up and running: https://shex-validator.toolforge.org/packages/shex-webapp/doc/shex-simple-improved.html
[13:06:09] Yay!
[13:08:20] I'll probably spend the rest of today changing the code that I uploaded to make more sense for Wikidata (e.g. changing the defaults away from my specific Wikibase Cloud instance), and then tomorrow I can spread the news of the launch
[13:46:10] [Question] I am learning how to query Wikidata data from Quarry. I have noticed that "logging" is a table in the database "wikidatawiki_p"; that table is documented at https://www.mediawiki.org/wiki/Manual:Logging_table . I have also noticed that there exists "logging_logindex", but I couldn't find a documentation page for that table at https://www.mediawiki.org/wiki/Manual:Database_layout . I checked their columns using "DESCRIBE <>" and they seem to have the same columns. My question is: what is the difference between the tables "logging" and "logging_logindex"?
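Stepping back to Mennolt's server.js question from earlier in the log (10:33): a minimal sketch of the kind of static-file setup the ExpressJS guide linked above describes might look like the following. The `public` directory name and the port fallback are illustrative assumptions, not the tool's actual layout.

```javascript
// server.js -- a minimal sketch using express.static, not the tool's
// actual code. Assumes `npm install express` has been run and that the
// HTML to serve lives in ./public (a hypothetical directory name).
const express = require('express');
const path = require('path');

const app = express();

// Serve everything under ./public at the tool's web root, so a request
// for /index.html returns ./public/index.html.
app.use(express.static(path.join(__dirname, 'public')));

// The port convention is an assumption here; Help:Toolforge/Node.js
// documents what the Toolforge webservice front end actually expects.
const port = process.env.PORT || 8000;
app.listen(port, () => console.log(`listening on ${port}`));
```

On the git clone question (10:09): git accepts an explicit target directory, so `git clone <repository-url> .` places the checkout in the current directory (which must be empty) instead of creating a child directory named after the repo.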
[13:47:16] rdrg109: https://wikitech.wikimedia.org/wiki/Help:Wiki_Replicas/Queries#Alternative_views
[14:13:50] !log anticomposite@tools-bastion-13 tools.stewardbots SULWatcher/manage.sh restart # SULWatchers disconnected
[14:13:52] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[14:35:52] !log wikisp Add Davevzla as project user
[14:35:54] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikisp/SAL
[15:27:40] !status doing an incident response drill, nothing affected
[15:32:08] !status OK
[15:58:58] taavi: Thanks!
[16:50:08] !log paws k8s to 1.26 T326985
[16:50:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Paws/SAL
[16:50:11] T326985: Test PAWS on k8s 1.25 - https://phabricator.wikimedia.org/T326985
[17:54:23] !log paws jupyterlab to 4.1.8 T363596
[17:54:27] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Paws/SAL
[17:54:27] T363596: jupyterlab to 4.1.8 - https://phabricator.wikimedia.org/T363596
[17:54:41] !log paws upgrade pywikibot T363131
[17:54:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Paws/SAL
[17:54:44] T363131: New upstream release for Pywikibot - https://phabricator.wikimedia.org/T363131
[18:07:30] !log bd808@tools-bastion-12 tools.gitlab-webhooks Built new image from 1a65b00c
[18:07:32] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.gitlab-webhooks/SAL
[18:08:29] !log bd808@tools-bastion-12 tools.gitlab-webhooks Restarted to pick up new image
[18:08:30] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.gitlab-webhooks/SAL
[18:11:29] A few of my tools use a VueJS frontend + Python backend; https://gitlab.wikimedia.org/toolforge-repos/wsstats is probably the most sane
[18:11:50] Feel free to steal any code you want
[18:23:57] [Question] I am learning how to query Wikidata data on Quarry. I have written some queries: get the number of statements on a Wikidata item given its QID: https://quarry.wmcloud.org/query/82372 ; get all revisions on a Wikidata item given its QID: https://quarry.wmcloud.org/query/82373 . Now, I want to write a query that gets all the statements that currently exist on a Wikidata item given its QID. Does anyone know how to do this?
[18:27:55] I happen to have a query lying around that I think fits your needs:
```
SELECT ?item ?p_property ?p_propertyLabel ?statementLink ?simplevalue ?simplevalueLabel
WHERE
{
  wd:Q1065 ?property ?statementLink .
  ?statementLink ?simplevalueLink ?simplevalue .
  wd:Q1065 ?propdirect ?simplevalue .
  wd:Q1065 rdfs:label ?item .

  # find the property label (thanks to tagishsimon)
  ?p_property wikibase:claim ?property .

  # find only properties & values with the right namespace
  FILTER(STRSTARTS(STR(?propdirect), STR(wdt:)))
  FILTER(STRSTARTS(STR(?property), STR(p:)))
  FILTER(STRSTARTS(STR(?simplevalueLink), STR(ps:)))

  # get the label in your language, falling back to English
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
  # only get English-language item names
  FILTER(LANGMATCHES(LANG(?item), "en"))
}
```
[18:28:29] Simply replace Q1065 with your QID
[18:29:36] Mennolt: Thanks for the help. Since I am learning how to query Wikidata data on Quarry, I'm more interested in learning how to do it with SQL, not with SPARQL. I am just being curious here; I know that the same data can be obtained conveniently using SPARQL.
[18:31:43] ah I see
[18:31:53] I have no experience with that
[18:50:39] Mennolt: No problem. Thanks for the help.
[18:50:41] For the record, I have created a topic with that question on Talk:Quarry here: https://www.mediawiki.org/wiki/Topic:Y3u6dz3ci6eqlura
[19:08:19] rdrg109: it’s basically not doable. it’s best to use SPARQL instead
[19:30:48] lucaswerkmeister: Ok, thanks for the help!
[19:35:59] I still have a question though: how is Wikibase able to know the existing statements on a given Wikibase item? I presume Wikibase stores that information somewhere in a table in a database. Isn't that information stored in any of the tables named wb_* or wbt_*?
[19:44:56] ^ For the record, I opened a topic for this question on Talk:Wikibase: https://www.mediawiki.org/wiki/Topic:Y3u994hnsgkbgbz0
[19:46:26] it’s stored in a table that’s not available on the replicas
[19:46:56] see https://wikitech.wikimedia.org/wiki/Help:Toolforge/Database#Unavailable_tables, the `text` table is the one with all the page contents
[20:06:22] rdrg109: the part that may not be obvious here is that a Q and its statements are stored natively in MediaWiki as a JSON blob. That JSON blob lives in the "text" table. This is the same way that the wikitext source for a Wikipedia article is stored. The Wiki Replica databases do not provide access to the "text" table, basically because we do not have the storage space to do so.
[20:07:25] lucaswerkmeister, bd808: Thanks for the help!
[20:07:55] if you did have access to the text table you would not get much value from it in SQL, as it is not a structured table. It is really just a blob with a bit of metadata
[20:09:52] yes, pretty much
[20:10:26] for “gets all the statements that currently exist on a Wikidata item” it would be *kinda* usable, as that would just be one JSON blob (though decoding it would still be a pain)
[20:10:53] but things like “get all the items with a particular statement” are only doable in SPARQL; the SQL database schema just isn’t optimized for that at all
[20:31:41] bd808, lucaswerkmeister: I want to know what the JSON blob looks like for a given item (perhaps Q8487081, which contains very few statements and a few sitelinks) in the table "text" in Wikidata. Do you happen to know which database dump contains this information? I downloaded the smallest file from one of the mirrors, called wikidatawiki-20240401-pages-articles-multistream27.xml-p118185874p119393781.bz2, and looked at the top of the file using $ head -n 10 because it is 12G, but all I can see is XML objects that each describe a single revision.
[20:34:12] rdrg109: the `<text>` element in there should be the JSON
[20:34:17] (but with tons of horrible `&quot;` in it)
[20:34:30] at least that’s what I see in https://dumps.wikimedia.org/wikidatawiki/latest/wikidatawiki-latest-pages-articles-multistream27.xml-p118185874p119393781.bz2
[20:35:16] you can also get it from the API via the non-Wikidata-aware core actions: https://www.wikidata.org/w/api.php?action=query&titles=Q42&prop=revisions&rvprop=content&rvslots=main&formatversion=2
[20:35:32] but please only look at this for curiosity and don’t use it, this format isn’t stable and has changed in the past :)
[20:36:21] lucaswerkmeister: Got it. Thanks for the help!
[20:36:42] (`action=wbgetentities` and `Special:EntityData` are how you get it in a stable format :))
[20:43:10] rdrg109: I think one of the things you may be learning today is why we have poured so much time and energy into WDQS. It really is a magically transformative thing added to the core capabilities of Wikibase. Wikibase itself takes care of the very interesting bit of managing and applying a community-curated ontology. WDQS then takes the data from that model and gives it new capabilities for finding correlations and connections.
[21:57:46] !log bd808@tools-bastion-12 tools.gitlab-webhooks Restart to pick up PYTHONUNBUFFERED=1 envvar
[21:57:48] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.gitlab-webhooks/SAL
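Picking up the stable-format pointer above (20:36): a short sketch of what fetching an item's current statements through Special:EntityData can look like. It assumes Node.js 18+ (for the global fetch) and uses a made-up User-Agent string; the JSON shape noted in the comments is the Special:EntityData output.

```javascript
// Fetch an item's current statements in the stable Special:EntityData
// format, per the advice above. A sketch, assuming Node.js 18+ for the
// global fetch; the User-Agent value is a placeholder.
const qid = 'Q8487081';
const url = `https://www.wikidata.org/wiki/Special:EntityData/${qid}.json`;

fetch(url, { headers: { 'User-Agent': 'entity-data-example/0.1' } })
  .then((response) => response.json())
  .then((data) => {
    // data.entities maps the item ID to the full entity, whose `claims`
    // member maps property IDs (e.g. P31) to arrays of statements.
    const claims = data.entities[qid].claims;
    for (const [property, statements] of Object.entries(claims)) {
      console.log(`${property}: ${statements.length} statement(s)`);
    }
  });
```

Unlike the action=query revision-content URL shown at 20:35, this interface is the one the log describes as stable.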