[10:53:55] lots of enwiki recentchanges *API* slow queries lately - a bot misbehaving?
[11:41:22] Morning all. I've got a little issue adding a new database to the wikireplicas in T319190 - I'm not sure whether I've done something wrong, since this is something of a new process to me. I would like to understand more about the various steps done by the different teams.
[11:41:23] T319190: Prepare and check storage layer for bnwikiquote - https://phabricator.wikimedia.org/T319190
[11:42:04] I ran the `sre.wikireplicas.add-wiki` cookbook and the dry-run looks like this:
[11:42:09] https://www.irccloud.com/pastebin/iYNo6rQV/
[11:42:55] However, `maintain_views` seems to bomb out with: `pymysql.err.OperationalError: (1044, "Access denied for user 'maintainviews'@'localhost' to database 'bnwikiquote\\_p'")`
[11:44:36] Have I missed a step or done something in the wrong order? I've got several of these databases to add to the wikireplicas (T317111, T316456, T314639) and I'd like to make sure that it's clear in my head what the DE team's responsibilities are for it. Sorry for troubling you.
[11:44:36] T316456: Prepare and check storage layer for bclwikiquote - https://phabricator.wikimedia.org/T316456
[11:44:37] T317111: Prepare and check storage layer for tlwikiquote - https://phabricator.wikimedia.org/T317111
[11:44:37] T314639: Prepare and check storage layer for igwikiquote - https://phabricator.wikimedia.org/T314639
[11:56:32] btullis: I think the problem is somewhere else; we have added many new wikis in the past months and there wasn't any issue
[11:56:46] I can take a look after I bring codesearch back online
[12:04:16] btullis: Is this done? https://wikitech.wikimedia.org/wiki/Add_a_wiki#Cloud_Services
[12:04:26] > GRANT SELECT, SHOW VIEW ON `$wiki\_p`.* TO 'labsdbuser';
[12:06:17] Amir1: Many thanks. I haven't run that command. But that's what I mean, the whole process is new to me. I'm not sure who's supposed to do what.
[12:25:38] honestly, I don't know it well either
[12:58:31] Amir1: OK, no worries. I'm just trying to use this as an opportunity to get my understanding and our docs in better shape. WMCS asked us to take over the responsibility for this part of the wikireplica maintenance from them, and there are just some gaps in my knowledge. I can try running the cookbook against one of the other wikis, to find out if it's the same there too.
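For context, a minimal sketch of the Cloud Services step quoted above (the GRANT from Add_a_wiki#Cloud_Services), assuming it is run on a wikireplica (clouddb*) host. The exact hosts and the maintain-views invocation are assumptions for illustration, not a verified runbook, and the log does not confirm that this alone resolves the 1044 error.

```bash
# Sketch of the Cloud Services step quoted above, assuming it is run on a
# wikireplica (clouddb*) host. Whether this alone resolves the 1044 error
# for the 'maintainviews' user is not confirmed in the log.
sudo mysql <<'SQL'
GRANT SELECT, SHOW VIEW ON `bnwikiquote\_p`.* TO 'labsdbuser';
SQL

# Hypothetical retry of the views step that failed earlier; this exact
# maintain-views invocation is an assumption for illustration only.
sudo maintain-views --databases bnwikiquote --debug
```

If the grant was indeed the missing piece, re-running the `sre.wikireplicas.add-wiki` cookbook afterwards would presumably complete without the access-denied error.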
[13:35:08] flaggedrevs needs only four rows of flaggedrevs_statistics, but it stores millions of them
[13:35:20] in plwikisource it's already at 2.4M rows
[15:07:52] Amir1: I am thinking of exporting from clouddb or sanitarium into an interim db (e.g. a test backup db) and doing some checks to verify nothing weird is left; then we can share the entire db as is in a single compressed data dir (that part is automated)
[15:08:32] jynus: sounds good to me
[15:09:15] but it would help to know the needs/context, e.g. if you could put those into a ticket or forward the request
[15:09:27] e.g. maybe the entire db is not needed
[15:11:15] the problem with sanitarium is that by that time the per-column filtering is done, but not the per-row filtering (that happens on clouddb only)
[15:12:24] and maybe what you were asked for is the revision table, which has very little meaningful information
[15:12:36] (by itself)
[15:16:19] I want to have the rc table too
[15:16:33] clouddb1021 should be fine, we can coordinate with data engineering about this
[15:16:50] or we can even depool another one and then stop replication
[15:17:48] Amir1: yeah, but 4 tables is much easier than 80 :-D
[15:18:27] we need actor, comment, rev_comment, and a couple more
[15:19:20] well, still, exporting 10 is much more attainable
[17:54:37] Hi all! The Growth team is considering a change that would mean the number of rows in the growthexperiments_mentee_data x1 table grows to ~three times as many (and stabilizes at that higher level after a while). Is that something we can go ahead with, or should we not increase the number of rows that much? See https://phabricator.wikimedia.org/T295075#8410278 for estimates of the growth.
[17:56:48] Amir1: with your permission, I am going to suggest to papaul that he do a power redundancy check for T323512
[17:56:48] T323512: db2174 lost power - https://phabricator.wikimedia.org/T323512
[17:57:10] (not now, when you are around and available)
[17:57:20] jynus: sounds good to me
[17:57:32] it's depooled and downtimed, it should be fine
[17:57:35] literally pulling the plugs one by one
[17:57:49] so making sure it doesn't happen again
[17:57:58] ah, if it is still depooled, I can tell him that
[17:58:42] urbanecm: technically yes, but I asked the Growth team a while ago to help with the unbounded growth of echo notifications in x1 (T318523). I need commitment to addressing this and similar tasks
[17:58:43] T318523: Don't send article-linked notifications for bots - https://phabricator.wikimedia.org/T318523
[18:00:11] Amir1: I can ask fellow Growth engineers about that one in the engsync meeting, no problem :)
[18:00:46] that and T221258
[18:00:46] T221258: Avoid inserting echo_event rows when not needed - https://phabricator.wikimedia.org/T221258
[18:00:48] thanks
[18:00:54] will ask
[18:02:32] Amir1: can you clarify what "technically yes" means? Is it a "yes, but please really work on those two tasks as well"?
[18:02:43] yup
[18:02:50] noted. thanks
[18:03:16] see it as a carbon budget: you can ignore my advice, but we'll both be in trouble in the future
[18:03:48] Can't I just buy credits from Microsoft Encarta? XD
[18:04:00] makes sense Amir1
[18:05:35] jynus: one option is new carbon capture technologies aka drop table
[18:05:54] I like that
[18:06:13] looks ecological to me XD
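To visualize the export jynus and Amir1 discussed (15:07-15:19), here is a minimal sketch, assuming a depooled clouddb host and an illustrative table list; the host name, wiki, table set, and paths are all assumptions for illustration, not the agreed plan.

```bash
# Minimal sketch of a targeted export from a (depooled) wikireplica host into
# an interim database for sanitization checks before sharing a compressed copy.
# Host, wiki, table list, and paths below are illustrative assumptions only.
TABLES="revision recentchanges actor comment"
mysqldump --single-transaction --host=clouddb1021.eqiad.wmnet \
  enwiki $TABLES | gzip > /srv/tmp/enwiki_subset.sql.gz

# Load into an interim database (e.g. on a test backup host) and verify
# nothing unsanitized is left before handing over the dump.
mysql -e "CREATE DATABASE IF NOT EXISTS enwiki_interim"
zcat /srv/tmp/enwiki_subset.sql.gz | mysql enwiki_interim
```

Dumping a handful of tables this way reflects the "exporting 10 is much more attainable" point above, as opposed to copying the full compressed data dir.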