[17:17:22] [Question] I want to get a frequency table of properties used in direct claims in PDF files in Wikimedia Commons. I wrote this query: https://w.wiki/B4Bq , which finishes in 10384 ms for 100 PDF files (it times out if I delete the "LIMIT 100"). I also wrote this query: https://w.wiki/B4Bt , which was able to finish in 39107 ms for 1,000,000 PDF files (it times out if I delete the "LIMIT 1000000"). I wonder whether either of those queries can be
[17:17:23] optimized to avoid the timeout when not using LIMIT.
[19:11:33] rodrigo-morales: filtering the predicates only in the final list should be faster
[19:11:38] e.g. https://w.wiki/B4ER
[22:35:16] markh: Thanks for the help! I ran your query and it does seem faster. It finished in 31446 ms when using "LIMIT 2000000" (two million) and in 45920 ms when using "LIMIT 3000000" (three million). It still timed out when removing the LIMIT. According to this query, https://w.wiki/B4Gq , there are 3600341 PDF files. Having the query finish for 1,000,000 items is enough for my use case. I'm interested in learning more about SPARQL, so
[22:35:16] if anyone knows how to optimize the query to avoid the timeout for all PDF files, please let me know.
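(For context, a minimal sketch of the "filter the predicates only in the final list" idea markh describes — the actual queries behind the short links aren't reproduced here, and the property used to select PDF files is an assumption. The point is to aggregate over all predicates first, then apply the string filter to the already-small grouped result rather than to every triple:)

```sparql
# Hedged sketch: frequency table of direct-claim predicates on PDF files.
# ASSUMPTION: PDF files are selected via a media-type statement (P1163);
# the real queries in this thread may select them differently.
SELECT ?property ?count WHERE {
  {
    # Aggregate over ALL predicates first, with no string filtering
    # inside the grouped pattern (this is the expensive part).
    SELECT ?property (COUNT(*) AS ?count) WHERE {
      ?file wdt:P1163 "application/pdf" .
      ?file ?property ?value .
    }
    GROUP BY ?property
  }
  # Only now narrow the (small) aggregated list to direct-claim predicates.
  FILTER(STRSTARTS(STR(?property), "http://www.wikidata.org/prop/direct/"))
}
ORDER BY DESC(?count)
```

Pushing the `STRSTARTS` filter outside the subquery means it runs once per distinct property instead of once per triple, which is consistent with the speedup reported below.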