[04:07:34] anyone here? I have a kind of sensitive (to me anyway) question
[13:14:33] Hey, what's the friendliest way of extracting data from Wikidata? Let's say I want to have movies and their producers and actors, and all movies which have won an Oscar, and I want to have that data locally. I mean, that seems query-heavy. What would you recommend I do here?
[13:17:07] I would say that if the query doesn't time out, use a single query to get the data into your local database
[13:17:56] if it does time out, multiple queries with LIMIT and OFFSET and a delay between them could help
[13:18:41] but with LIMIT and OFFSET I might need to sort the results first, otherwise I might get the same data again, no?
[13:18:46] but LIMIT and OFFSET are good ideas
[13:29:13] afaik sometimes LIMIT and OFFSET won't help with the query timing out, yeah
[13:30:49] uh, okay, so a query timeout might still happen?
[14:02:58] is there a stable way to extract the data? maybe with an account and some rate limiting? I don't care if it takes long, but stable would be nice. I did a SPARQL workshop back in the past and have worked a bit with Python, but I'm not used to the Wikidata interface when it comes to automation
[14:03:20] (also I do not want to overload/crash anything, the Wikimedia universe is very dear to me :D)
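A minimal sketch of the LIMIT/OFFSET approach discussed above, in Python with only the standard library. Assumptions to note: the User-Agent string is a placeholder you should replace with your own contact info (the query service asks automated clients to identify themselves), and the P/Q identifiers in the example query are written from memory, so verify them on wikidata.org before relying on them. The ORDER BY clause is what addresses the "might get the same data again" concern: without a stable sort, OFFSET pages can overlap or skip rows.

```python
import json
import time
import urllib.parse
import urllib.request

WDQS = "https://query.wikidata.org/sparql"  # public Wikidata Query Service

# Example query -- P31 = instance of, Q11424 = film, P166 = award received
# (IDs from memory; double-check them). It must end with ORDER BY so that
# LIMIT/OFFSET pagination is deterministic.
OSCAR_FILMS = """
SELECT ?film ?filmLabel ?award WHERE {
  ?film wdt:P31 wd:Q11424 ;
        wdt:P166 ?award .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY ?film ?award
"""

def wdqs_page(query, limit, offset, timeout=60):
    """Run one LIMIT/OFFSET page against the endpoint (untested network sketch)."""
    url = WDQS + "?" + urllib.parse.urlencode({
        "query": f"{query} LIMIT {limit} OFFSET {offset}",
        "format": "json",
    })
    req = urllib.request.Request(url, headers={
        # placeholder -- use a descriptive agent with real contact details
        "User-Agent": "my-movie-scraper/0.1 (contact: you@example.org)",
    })
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["results"]["bindings"]

def fetch_all(run_page, page_size=500, delay=1.0):
    """Page through results until a short page signals the end.

    run_page(limit, offset) must return a list of rows for that slice;
    the delay between pages keeps the load on the endpoint gentle.
    """
    rows, offset = [], 0
    while True:
        page = run_page(page_size, offset)
        rows.extend(page)
        if len(page) < page_size:  # short (or empty) page -> we're done
            return rows
        offset += page_size
        time.sleep(delay)
```

Usage would look like `fetch_all(lambda lim, off: wdqs_page(OSCAR_FILMS, lim, off))`, writing the rows into your local database as they arrive. Note this is a sketch of the pagination idea, not a guarantee against timeouts: as mentioned above, an expensive query can time out even on a small page, in which case splitting the query itself (e.g. one query per film, fed by a cheaper list query) is the usual fallback.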