[04:43:10] [[Tech]]; 172.75.162.190; [none]; https://meta.wikimedia.org/w/index.php?diff=27306024&oldid=27288252&rcid=32208246
[04:43:57] [[Tech]]; XXBlackburnXx; Reverted change by [[Special:Contributions/172.75.162.190|172.75.162.190]] ([[User talk:172.75.162.190|talk]]) to last version by Krol111; https://meta.wikimedia.org/w/index.php?diff=27306071&oldid=27306024&rcid=32208387
[18:40:05] Hey guys! I have a serious problem at hand. I was trying to create embeddings for the Wikipedia abstracts, so I downloaded the English Wikipedia abstract XML dump. From what I can see, the dump contains only a short, high-level summary of each article, not the full abstract of the article itself.
[18:41:08] What's more concerning: the way I want to build the dataset, if I can't extract the abstracts from the abstract dump, I have to use the API, and that has a hard limit of 5,000 requests per hour.
[18:41:25] For reference, this is the link: https://dumps.wikimedia.org/enwiki/latest/
[18:42:05] A glimpse of what the abstract dump looks like on my end:
[18:42:06] ```
[18:42:07]   Wikipedia: A
[18:42:07] <Guest52>   <url> https://en.wikipedia.org/wiki/A
[18:42:08] <Guest52>   <abstract> A-sharp}}
[18:42:08] <Guest52>   <links>
```
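For anyone reading this later: a minimal sketch (not from the log) of how one might stream the abstract dump and inspect what the `<abstract>` field actually holds, using only the Python standard library. The file name `enwiki-latest-abstract.xml.gz` and the `<doc>`/`<title>`/`<url>`/`<abstract>` element layout are assumptions based on the pasted snippet and the dumps index linked at 18:41:25, not something confirmed in this conversation.

```python
# Minimal sketch: stream the English Wikipedia abstract dump and peek at a
# few <abstract> fields without loading the whole file into memory.
# Assumptions (not confirmed in the log): the dump is the gzipped
# enwiki-latest-abstract.xml.gz from the dumps index, and each entry is a
# <doc> element with <title>, <url>, <abstract> and <links> children,
# as suggested by the pasted snippet above.
import gzip
import xml.etree.ElementTree as ET

DUMP_PATH = "enwiki-latest-abstract.xml.gz"  # hypothetical local path

def iter_abstracts(path):
    """Yield (title, url, abstract) tuples one <doc> at a time."""
    with gzip.open(path, "rb") as fh:
        for _event, elem in ET.iterparse(fh, events=("end",)):
            if elem.tag == "doc":
                yield (
                    elem.findtext("title", default=""),
                    elem.findtext("url", default=""),
                    elem.findtext("abstract", default=""),
                )
                elem.clear()  # drop parsed children to keep memory use flat

if __name__ == "__main__":
    # Print the first few entries to see how truncated the abstracts are.
    for i, (title, url, abstract) in enumerate(iter_abstracts(DUMP_PATH)):
        print(f"{title} -> {abstract[:80]!r}")
        if i >= 4:
            break
```

If the truncated `<abstract>` fields turn out to be unusable, the usual alternative is fetching lead sections through the Action API (prop=extracts with exintro), batching several titles per request to stay within the rate limit mentioned at 18:41:08.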