[07:11:02] 10Fundraising-Backlog: Review Cadence of Audit Files for Manually settled Adyen Donations - https://phabricator.wikimedia.org/T314753 (10krobinson) [07:12:13] 10Fundraising-Backlog: Review Cadence of Audit Files for Manually settled Adyen Donations - https://phabricator.wikimedia.org/T314753 (10krobinson) [08:08:05] 10Fundraising Tech - Chaos Crew, 10fr-donorservices: Adyen donors think they are recurring July 2022 - https://phabricator.wikimedia.org/T313854 (10krobinson) @EMartin - to clarify that tracking doc you have shared refers to donors who **actually set up recurring donations** but did not intend to, and is possi... [09:23:16] (03PS2) 10Abijeet Patro: Remove usage of Translate RevTag class [extensions/CentralNotice] - 10https://gerrit.wikimedia.org/r/811580 (https://phabricator.wikimedia.org/T312007) [09:23:22] (03CR) 10Abijeet Patro: Remove usage of Translate RevTag class (031 comment) [extensions/CentralNotice] - 10https://gerrit.wikimedia.org/r/811580 (https://phabricator.wikimedia.org/T312007) (owner: 10Abijeet Patro) [11:41:33] (03CR) 10Nikerabbit: [C: 03+1] Remove usage of Translate RevTag class [extensions/CentralNotice] - 10https://gerrit.wikimedia.org/r/811580 (https://phabricator.wikimedia.org/T312007) (owner: 10Abijeet Patro) [13:06:10] (03PS8) 10Jgleeson: Add Braintree Webhook Signature Validator component [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/817315 (https://phabricator.wikimedia.org/T311169) [13:06:15] (03PS11) 10Jgleeson: Create IPN listener for Braintree [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/817357 (https://phabricator.wikimedia.org/T303451) [13:07:35] (03PS12) 10Jgleeson: Create IPN listener for Braintree [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/817357 (https://phabricator.wikimedia.org/T303451) [13:27:56] (03PS9) 10Jgleeson: Add Braintree Webhook Signature Validator component [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/817315 (https://phabricator.wikimedia.org/T311169) [13:30:21] (03PS13) 10Jgleeson: Create IPN listener for Braintree [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/817357 (https://phabricator.wikimedia.org/T303451) [14:28:58] hey jgleeson are you around to ask some silverpop import questions to? look like it failed on acoustics end and not sure if its cause of my testing or cause of ip things [14:29:49] im here cstone just on the ERC call [14:30:00] ah okay right that was this morning [14:30:03] this is the one that fits best in my TZ [14:30:03] no worries [14:30:05] ya [14:30:26] this finishes in 30 minutes so we could dig in them if that works [14:30:48] yeah no rush [14:30:55] thanks! [14:58:33] cstone: that call just wrapped up [14:58:40] whats the low down on silverpop? [14:59:15] gonna look at the mail load failmail now [14:59:18] so i havent done anything really with csv upload part [14:59:25] and when i tested the script it looked like it ran fine but [14:59:33] is there issues with having uploaded too many files? [14:59:47] and now in the folder where it generates the export there is one from the 7 and the 8 [15:00:02] i guess im not sure what is normal and what isnt [15:01:12] im not sure on 'the multiples files being a problem'. are we talking about on the acoustic ftp spot or just on our server [15:01:19] both? [15:01:30] our server has ones from the 7 and 8 now is that normal? 
[15:01:34] i feel like i've seen multiple files on our side before [15:01:36] when i was looking yesterday it just had ones from the 7 [15:01:37] ok [15:02:04] yeah im not sure if the job deletes old files when it completes maybe [15:02:09] lemme check the python script [15:05:07] https://github.com/wikimedia/wikimedia-fundraising-tools/blob/283cd1d9d01a97253194f17a85eb2c514ef26f6f/silverpop_export/export.py#L139 [15:05:22] so that fn cleans out files that are over a day old [15:05:40] it's one of the last things to happen when update.py is run [15:06:12] https://github.com/wikimedia/wikimedia-fundraising-tools/blob/283cd1d9d01a97253194f17a85eb2c514ef26f6f/silverpop_export/export.py#L48 [15:06:12] so there shouldnt be files from august 7 and 8 in there? [15:06:38] i guess the timestamp is 1 minute earlier [15:07:07] there's a few explanations [15:07:52] script failed to complete / config days_to_keep_files is >1 day / oldest file is not 24h old yet [15:08:06] or was 24h old at the time the last script was run [15:08:13] any of those sound likely cstone ? [15:08:22] yeah its 1 minute less than 24 hours [15:08:29] ah ok then that'll be it [15:09:16] although looking at this slack message again i guess i assumed last night meant 3am utc on the 8 but i think at the time the message was written it would have been the 3am utc on the 7 he was talking about being failed [15:09:28] do you know if there is a job on acoustic side to like look at the files jgleeson [15:09:34] or does that happen whenever they are uploaded [15:09:47] there is a job that trinity wrote I think [15:09:54] ah okay [15:10:03] it looks like files in an ftp location with filename convention we use [15:10:08] looks for files* [15:10:18] and then deletes them once processed [15:10:54] do you know if we can check if the acoustic job completed or we probably would need working in logins then huh [15:12:21] ha [15:12:59] we could ask Katie, she used to be pretty close to that part of things [15:13:20] (who are we waiting to reactivate the deactivated accounts?) [15:13:32] we just need someone with a working acoustic login [15:13:44] do we know who we're going to ask? :) [15:13:53] noah or katie were my options [15:14:05] * greg-g nods [15:14:26] greg-g: i think they regularly expire. typically whenever we try to login to this thing someone needs to give us access [15:14:32] they expire super fast [15:14:42] like if you dont login within like a month you get deactivated [15:14:48] yeah, I think I remember the scrollback saying a month... yeah [15:15:47] Login failed: account disabled. Please contact your internal organization administrator. [15:15:49] :) [15:15:55] so helpful [15:16:35] even if I could get it. I can't remember where in the inner workings of the console that we'd find the confirmation of jobs cstone is referring to [15:16:46] might be best to check with pros [15:16:53] * greg-g nods [15:18:19] idono this whole thing like i dont know when its urgent enough to alert everyone on a sunday? [15:18:46] or who is like willing to be alerted on a sunday not on our team etc [15:20:51] cstone: I'm in creative-email slack [15:20:55] can i link here? 
[15:20:59] jgleeson: yeah thats the one i was talking about [15:21:02] https://app.slack.com/client/T024KLHS4/C5MAUGA72 [15:21:02] that started all this sorry [15:21:06] failed at the context [15:21:07] ah yeah cool cool [15:21:20] so noah posted at 1am but I see the last upload was 3am [15:21:23] yeah [15:21:31] so I'm asking if they can confirm whether or not things went ok [15:21:59] the ftp upload job log looks ok [15:22:16] yeah i was just trying to figure out if i broke it or acoustic broke it first [15:22:34] but yeah thats the question i would have asked [15:22:59] the file that got sent over was DatabaseUpdate-20220808033502.csv [15:23:17] no other DatabaseUpdate-*.csv files [15:23:51] in that job run. there's a bunch of other files but not the DB update. that suggests multiple files isn't an issue [15:23:59] (with the same prefix) [15:24:50] cstone: my guess is the job needs to be manually right if last nights acoustic-side cron failed [15:25:00] manually run* even [15:26:41] i guess too acoustic side could have just also broken idependtantly of our ip and me testing it issues [15:28:56] i was just wondering what info we had on the <1am failure [15:29:11] im checking that log everything was failing then that might have also [15:30:16] the upload on the 7th looks good [15:30:18] in both instances [15:30:43] the 7 03 job failed though [15:31:10] cause the ip to transfer4. hadnt been updated yet [15:31:33] i dont see that one [15:31:47] -rw-r--r-- 1 jenkins jenkins 7606 Aug 7 17:58 silverpop_emails_upload_files-20220807-175817.log [15:31:48] in silverpop_emails_upload_files right? [15:31:48] -rw-r--r-- 1 jenkins jenkins 7413 Aug 7 18:26 silverpop_emails_upload_files-20220807-182628.log [15:31:50] -rw-r--r-- 1 jenkins jenkins 7412 Aug 8 03:38 silverpop_emails_upload_files-20220808-033829.log [15:31:51] yeah sooo [15:31:54] 17 and 18 were me [15:31:59] ya [15:32:04] which both worked [15:32:09] oh i see ok [15:32:25] is there an older one maybe that failed [15:32:37] yeah the 7 03 failed but my 17 and 18 should have both uploaded files to acoustic [15:32:40] that might have been log rotated [15:33:01] but depending on when the acoustic job runs [15:33:15] it might have only seen the failure? and my 17 and 18 didnt help when noah checked [15:37:28] yeah they have a scheduled task and katie used to also run it ad-hoc when we'd hit issues like this [15:38:04] ah okay i didnt realize that [16:03:17] cstone: Noah confirm that the latest job ran successfully [16:03:25] confirmed* [16:04:08] https://app.slack.com/client/T024KLHS4/C5MAUGA72/thread/C5MAUGA72-1659972510.065319 [16:04:42] sweet [16:04:44] ok nice jgleeson [16:05:23] thanks for jumping on that over the weekend cstone. I completely missed it [16:05:58] cstone, honorary DRI member this week :) [16:06:21] haha I just hadn't muted the fr -tech alert on my phone :P [16:06:29] hs! [16:06:33] ha! * [16:07:29] gonna grab food. back later [16:58:21] oh shoot, looks like we bugzooers maybe should have handled that [16:58:30] sorry and thanks! 
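The retention behaviour jgleeson points to above (the cleanup step near the end of the silverpop export run that removes generated files once they pass the configured days_to_keep_files window, which is why a file written at 03:35 can still sit next to the next day's file until it crosses the 24-hour mark) boils down to something like the sketch below. This is an illustration only: the function name, directory handling and glob pattern are hypothetical, not the actual code in silverpop_export/export.py linked above.

```python
# Sketch of the retention logic discussed above; names and the glob pattern
# are illustrative, not what silverpop_export/export.py actually uses.
import glob
import os
import time

def clean_old_export_files(export_dir, days_to_keep_files=1):
    """Remove generated export files older than the retention window.

    A file even one minute younger than the cutoff survives, so exports from
    two consecutive days can briefly coexist, as seen in the chat above.
    """
    cutoff = time.time() - days_to_keep_files * 24 * 60 * 60
    for path in glob.glob(os.path.join(export_dir, "*.csv")):
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
```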
[16:59:32] I will check on that forgetme failmail [16:59:35] dami did start it [16:59:42] ah nice [16:59:43] ejegg: no one has an active account [16:59:46] unless you do [16:59:47] ohh [16:59:58] probably not, it's been a while since i logged in [17:00:37] trying [17:01:04] account disabled [17:04:09] asking katers on the salesforce webchat for reenablement [17:04:20] I was going to say, add "login to your acoustic account" to the chaos crew's chore list, but, it's only quarterly-ish :( [17:04:46] we just need to schedule a weekly failmail :P [17:06:47] ok, so it looks like the IP allowlist is indeed updated? Sorry, I had seen their warning email but was under the impression that the acoustic network access was already converted to DNS-based. [17:07:31] salesforce webchat?!?! [17:07:50] surely they can't afford slack [17:08:52] I guess I am out of touch with things https://slack.com/intl/en-gb/blog/news/salesforce-completes-acquisition-of-slack [17:09:34] ...representing an enterprise value of approximately $27.7 billion [17:09:42] scary [17:10:09] yeah, it must have been a pretty nice payout for somebody! [17:11:21] Total revenue of $273.4 million, up 36% year-over-year [17:11:32] don't they usually do 10x earnings [17:11:56] that's pretty close to an extra zero error [17:12:03] * ejegg knows very little about mergers and acquisitions [17:12:27] i guess maybe a 'buy it before your competitors do' kind of thing? [17:14:24] i honestly don't know a lot either. i feel like I remember that from news articles when I used to be interesting in that stuff [17:14:33] interested* [17:17:30] cstone: [17:17:34] from katers: Hey Jack! +1 to what Noah said - the import today processed both Sunday’s file and today’s file at once. So we’re caught up now [17:17:46] thanks jgleeson [17:17:52] that makes me thinking multiple files might be a good thing, in this scenario at leasdt [17:17:55] least* [17:18:18] yeah good to know that doesnt break it [17:18:19] each file should have the last 7 days of changes [17:18:40] so unless it breaks for a week, just the latest should be enough [17:19:09] ejegg: when I was testing that the upload worked yesterday i accidentally did it twice and then when noah said it failed I was worried I had broken it [17:19:28] ah yeah good point ejegg [17:28:04] (03PS10) 10Jgleeson: Add Braintree Webhook Signature Validator component [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/817315 (https://phabricator.wikimedia.org/T311169) [17:28:52] (03PS14) 10Jgleeson: Create IPN listener for Braintree [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/817357 (https://phabricator.wikimedia.org/T303451) [17:43:07] 10Fundraising-Backlog, 10FR-Smashpig: Increase SmashPig PHP minimum version to 7.0 - https://phabricator.wikimedia.org/T243421 (10Ejegg) 05Open→03Resolved p:05Triage→03Medium a:03Ejegg [17:47:53] 10Fundraising-Backlog: QueueWrapper should fall back to damaged/retry message table when redis connection is dead - https://phabricator.wikimedia.org/T314805 (10Ejegg) [18:50:06] (03PS1) 10Jgleeson: WIP: Add Braintree IPN Job handler [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/821295 (https://phabricator.wikimedia.org/T314400) [18:50:47] (03CR) 10CI reject: [V: 04-1] WIP: Add Braintree IPN Job handler [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/821295 (https://phabricator.wikimedia.org/T314400) (owner: 10Jgleeson) [19:01:47] (03PS2) 10Jgleeson: WIP: Add Braintree IPN Job handler 
[wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/821295 (https://phabricator.wikimedia.org/T314400) [19:02:55] fr-tech. I just sent around a draft email that I plan to send to Braintree and Evelyn, to set up a call to get clarity on the IPN/Webhooks stuff. Does that look ok to folks? [19:07:08] cstone: I'm checking out the backlog as my current "project team" ticket is blocked. Do you want any eyes on the recurring stuff [19:08:56] jgleeson: email seems fine with me, while I am not the one familiar with our old PayPal IPN. btw acoustic is ok to login now~jgleeson> : [19:17:33] yeah if you want to look at recurring jgleeson [19:33:06] thanks wfan ! [19:33:08] and cstone [19:52:44] (03CR) 10Jgleeson: [C: 03+2] "Tested locally. New metrics showing up as expected." [wikimedia/fundraising/crm] - 10https://gerrit.wikimedia.org/r/820234 (https://phabricator.wikimedia.org/T313000) (owner: 10Ejegg) [19:53:28] thanks for the CR jgleeson ! [20:07:56] the cache miss patch - https://gerrit.wikimedia.org/r/c/wikimedia/fundraising/crm/civicrm/+/820576 (it's OK to merge this now cos it still needs submodule merges etc) [20:08:14] (03Merged) 10jenkins-bot: Add timers for insert + update API calls [wikimedia/fundraising/crm] - 10https://gerrit.wikimedia.org/r/820234 (https://phabricator.wikimedia.org/T313000) (owner: 10Ejegg) [20:10:02] 10Fundraising-Backlog: Review Cadence of Audit Files for Manually settled Adyen Donations - https://phabricator.wikimedia.org/T314753 (10EMartin) + 1 @krobinson We need to put Adyen on the same track that Ingenico was to avoid these lags that could have bigger impact when we are operating at scale. [20:10:38] (03PS1) 10Eileen: Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - 10https://gerrit.wikimedia.org/r/821305 [20:11:02] ejegg: I looked to deploy your patch but there are a few there - https://gerrit.wikimedia.org/r/c/wikimedia/fundraising/crm/+/821305 [20:11:15] eileen___: oh yeah, that includes a SmashPig update [20:11:28] (03CR) 10CI reject: [V: 04-1] Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - 10https://gerrit.wikimedia.org/r/821305 (owner: 10Eileen) [20:13:21] 10Fundraising Sprint NaN is a Number, 10Fundraising-Backlog: Adyen - auto settle stopped prior donors (Pending transaction Resolver) - https://phabricator.wikimedia.org/T299692 (10XenoRyet) [20:26:12] 10Fundraising Tech - Chaos Crew, 10Fundraising-Backlog: QueueWrapper should fall back to damaged/retry message table when redis connection is dead - https://phabricator.wikimedia.org/T314805 (10XenoRyet) [20:26:14] 10Fundraising Tech - Chaos Crew, 10Fundraising-Backlog: Make Adyen default for 6ENC campaigns by 13 Aug 2022 - https://phabricator.wikimedia.org/T314687 (10XenoRyet) [20:26:16] 10Fundraising Tech - Chaos Crew, 10Fundraising-Backlog: Update Braintree Order ID after successful payment - https://phabricator.wikimedia.org/T314681 (10XenoRyet) [20:26:18] 10Fundraising Tech - Chaos Crew, 10Fundraising-Backlog: Way to send Civi TY email to a group? - https://phabricator.wikimedia.org/T314525 (10XenoRyet) [20:27:44] 10Fundraising-Backlog: Way to send Civi TY email to a group? - https://phabricator.wikimedia.org/T314525 (10XenoRyet) [20:38:00] dwisehaupt: I'll make a phab task for the new log files [20:38:08] sounds good. 
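The QueueWrapper task filed above (T314805: fall back to the damaged/retry message table when the Redis connection is dead) describes a fairly standard fallback shape. SmashPig itself is PHP, so the Python sketch below only illustrates that shape under assumed names; the retry table, its columns and the queue name are made up for the example.

```python
# Illustration of the fallback described in T314805, not SmashPig's actual
# QueueWrapper API. Table name and columns are hypothetical; `db` is any
# DB-API connection (e.g. from sqlite3.connect()).
import json
import redis

def push_with_fallback(r, db, queue_name, message):
    """Push a queue message to Redis; if the connection is dead, park the
    message in a retry table so a later job can re-send it instead of the
    message being dropped."""
    payload = json.dumps(message)
    try:
        r.rpush(queue_name, payload)
    except redis.exceptions.ConnectionError:
        db.execute(
            "INSERT INTO queue_retry (queue_name, payload) VALUES (?, ?)",
            (queue_name, payload),
        )
        db.commit()
```

Parking the message rather than retrying inline keeps a long-running job (like audit parsing, per T266591 below) moving even while Redis is unreachable.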
[20:38:11] np ejegg [20:39:46] eileen___: we do have redis slowlog on for queries taking longer than 10000 microseconds. the one query that repeatedly shows up in the 20000-27000 range is "KEYS crm/prevnext/*/all". keep in mind, that is .02-.027 seconds. [20:40:35] hmm - that's the wrong query.... [20:40:41] it's possible there is something else that may be slow but masked by that query showing up every 5-10 seconds [20:41:36] i can rerun the slowlog command as much as we want/need to try and capture. right now we are keeping 10 previous entries and they are all that same query when i have checked. [20:43:49] can we do something to capture it to log files somewhat regularly? [20:43:55] perhaps we temporarily increase the length of the slowlog that is captured and then run the things we know to be slow. [20:44:16] i haven't found a way (yet) to capture it to a log file as compared to just within redis. [20:44:54] that prevnext one would be user search results - the one that I got a win from had `token_metadata` in the string [20:45:11] but for the last 10 minutes, it has only been that query logged. [20:45:44] ok - I was using redis-cli monitor & looking at repetitive `get` requests when I found ^^ [20:47:02] 10fundraising-tech-ops: New remote log files for payments-wiki - https://phabricator.wikimedia.org/T314819 (10Ejegg) [20:47:18] ok dwisehaupt, those are the asks ^^ [20:48:07] - prev-next is legit search results so would be likely when users searching (in which case a few ms is not an issue) [20:48:15] ejegg: coolthx [20:48:21] eileen___: so is is likely that someone will call has() multiple times before calling get() on a key? [20:48:53] ejegg: yeah - this is the pattern [20:48:54] if (Civi::cache('metadata')->has($this->getCacheKey())) { [20:48:54] return Civi::cache('metadata')->get($this->getCacheKey()); [20:48:54] } [20:49:06] I notice that the delegated has() result isn't cached, but you'd have to have a whole separate otherValues array cache to do that... [20:49:08] i just bumped the slowlog length to 100 to give a longer timeframe. we'll see in a few minutes. [20:49:44] or even 'keysThatAreAlsoSetButWhichWeDontHaveValuesFor' [20:50:00] yeah the array-cache doesn't hold the whole metadata cache - only those values accessed in the currenc session [20:50:53] right... so unfortunately if has() returns negative, we never get to the get() call that updates the array cache [20:51:07] so in a long running process we would still be hitting redis every time to see if it's there yet [20:51:50] whereas if we had a hasKeys array cache we could stash the negative result [20:52:12] would be a much more extensive change though [20:52:32] ohhh [20:52:44] actually, caching negatives would totally be possible with the same values() call [20:53:21] sorry, not valeus() call [20:53:28] I mean the same values property [20:54:05] so when has() delegate retuns true, we don't update any property as we assume the caller will call get() next and that will be cached with the actual value [20:54:27] but when has() delegate returns false we can add the nack value for the key to the values() call [20:54:51] ugh, again I mean to the values property [21:01:51] i'll suggest that in gerrit [21:02:51] wfan: eileen wants to do a CRM deploy, which would push out your audit and tokenization change [21:03:44] eileen___: ohhh, i see, lookup failures are explicitly NOT cached [21:04:20] ok. so we have 100 over the last 18 minutes, and they are all that same query. [21:04:22] Oh, ejegg: thanks. 
Sounds good, I just got one concern that we are not having any cron job working now so no audit parse files for audit to run [21:04:28] this is on the civi1001 redis instance. [21:05:27] ejegg: so I just got back to this & just trying to process the above [21:05:40] eileen___: so basically you can ignore it [21:05:48] currently if a `get` has cached a response then `has` will bypass that [21:05:54] i thought i had a small optimization [21:06:16] but it works differently than I thought [21:06:33] eileen___: yep, I thought that we could cache negative 'has' responses [21:06:56] but we're not actually caching negative 'get' responses anyway [21:07:01] so never mind [21:07:21] anyway, this will make a big difference for the case when the key exists [21:07:34] this=your patch [21:09:27] (03CR) 10Ejegg: [C: 03+2] "Looks like a good speed-up for the case when a popular key is set!" [wikimedia/fundraising/crm/civicrm] - 10https://gerrit.wikimedia.org/r/820576 (https://phabricator.wikimedia.org/T313000) (owner: 10Eileen) [21:10:35] cool - re deploying your timer patch.... should just the patch go out? [21:10:56] eileen___: should be fine [21:11:04] err, as far as I know [21:11:17] it's been a long time since we added more lines to a prometheus metric file [21:11:21] ok - it may need vendor updates then? [21:11:38] eileen___: ohh right, that merge definitely will need a matching vendor update [21:11:54] since it's got a smash-pig change in it [21:12:22] yeah vendor update is needed, but the smash-pig and di version tag is updated already. [21:16:08] eileen___: on a completely different topic, I'm leaning towards C+2 on this patch to map unknown language codes to en_US on the way in: https://gerrit.wikimedia.org/r/c/wikimedia/fundraising/crm/+/820751 . I know you had a case for them failing loudly, but the testers just keep using language=%%isolang%% and asking why their donation doesn't get to Civi [21:16:57] they're copying links directly from email templates without fixing params, and we have tried to tell them not to [21:17:22] ejegg: I think my thinking was fail loudly while traffic is low so we can figure them & sort them - and then as we get busier then switch to defaulting [21:17:27] heh, I guess we could look specifically for %%isolang%% and send a special email to the tester's email address explaining they're doing it wrong :) [21:17:35] lol [21:17:56] * greg-g whistles [21:18:06] or even put detection in the donation forms :) [21:19:05] ok eileen___, I think we're close enough to high-traffic, and also that we're not seeing any more human language codes getting caught up there [21:19:13] sure [21:19:20] I'll just merge it [21:19:26] go for it [21:19:52] heh, just going to fix the WS first [21:24:15] (03PS5) 10Ejegg: Handle invalid language codes properly in Civi [wikimedia/fundraising/crm] - 10https://gerrit.wikimedia.org/r/820751 (https://phabricator.wikimedia.org/T313092) (owner: 10Damilare Adedoyin) [21:24:41] (03CR) 10Ejegg: "PS5: rebase and minor spacing adjustment" [wikimedia/fundraising/crm] - 10https://gerrit.wikimedia.org/r/820751 (https://phabricator.wikimedia.org/T313092) (owner: 10Damilare Adedoyin) [21:25:48] (03Merged) 10jenkins-bot: Fix cache miss in FastArrays on 'has()' [wikimedia/fundraising/crm/civicrm] - 10https://gerrit.wikimedia.org/r/820576 (https://phabricator.wikimedia.org/T313000) (owner: 10Eileen) [21:26:13] (03CR) 10Ejegg: "Thanks! This should quiet down some failmail." 
[wikimedia/fundraising/crm] - 10https://gerrit.wikimedia.org/r/820751 (https://phabricator.wikimedia.org/T313092) (owner: 10Damilare Adedoyin) [21:26:17] (03CR) 10Ejegg: [C: 03+2] Handle invalid language codes properly in Civi [wikimedia/fundraising/crm] - 10https://gerrit.wikimedia.org/r/820751 (https://phabricator.wikimedia.org/T313092) (owner: 10Damilare Adedoyin) [21:28:37] oh hmm, i guess we could have just changed the WMFHelper lookup to give us the default rather than try/catching [21:40:02] (03Merged) 10jenkins-bot: Handle invalid language codes properly in Civi [wikimedia/fundraising/crm] - 10https://gerrit.wikimedia.org/r/820751 (https://phabricator.wikimedia.org/T313092) (owner: 10Damilare Adedoyin) [21:48:36] ejegg: shall I put up the vendor update?You aren't working on it? [21:51:49] sure eileen___ ! [21:56:59] 10Fundraising-Backlog: Adyen audit parser finding lots of missed transactions - https://phabricator.wikimedia.org/T306631 (10Cstone) 05Open→03Resolved a:03Cstone This was solved with https://phabricator.wikimedia.org/T306944 [22:11:55] 10fundraising-tech-ops: New remote log files for payments-wiki - https://phabricator.wikimedia.org/T314819 (10Dwisehaupt) @Ejegg Just wanted to double check on the payments-client_errors match. Do we want log lines that match to only go into the client_errors log or do we want then to fall through to the process... [22:13:07] (03PS1) 10Eileen: Update donation-interface to 2.5.7.5, smashpig to 2.5.7.5 [wikimedia/fundraising/crm/vendor] - 10https://gerrit.wikimedia.org/r/821317 [22:13:34] 10fundraising-tech-ops: New remote log files for payments-wiki - https://phabricator.wikimedia.org/T314819 (10Ejegg) @Dwisehaupt Yep, let's avoid double-logging and only put them in the new file. [22:14:07] ejegg: I just put up the vendor patch - not completely sure why those bin patches have so much noise [22:14:28] 10fundraising-tech-ops: New remote log files for payments-wiki - https://phabricator.wikimedia.org/T314819 (10Ejegg) @Dwisehaupt Ah, and also avoid them going into the payments.error file [22:14:38] eileen___: argh! The new version of composer kills some symlinks I think [22:14:49] and we're currently relying on that symlink to run drush [22:14:55] oh.... [22:15:05] so... I've just been snipping the bin update out of my vendor commits [22:15:12] argh [22:15:12] i really should figure that out [22:15:21] that might be a chaos crew job [22:15:28] yeah [22:15:41] making a phab right now so as not to forget [22:16:07] (03PS1) 10Eileen: Update donation-interface to 2.5.7.5, smashpig to 2.5.7.5 [wikimedia/fundraising/crm/vendor] - 10https://gerrit.wikimedia.org/r/821318 [22:16:17] ok ^^ should be better then... [22:16:26] eileen___: might want to change the smashpig version in that commit message [22:17:55] (03PS2) 10Eileen: Update donation-interface to 2.5.7.5, smashpig to 0.8.2.2 [wikimedia/fundraising/crm/vendor] - 10https://gerrit.wikimedia.org/r/821318 [22:18:04] thanks! 
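The FastArray discussion above (ejegg and eileen, around 20:48-21:07) comes down to two ideas: once a key has been fetched, serve later has()/get() calls from an in-process array instead of going back to Redis; and, optionally, remember negative lookups too, so a long-running process does not keep re-checking a key that is not there (Civi explicitly does not cache lookup failures, which is why ejegg dropped that suggestion). The merged "Fix cache miss in FastArrays on 'has()'" patch addresses the positive case. Below is a generic Python illustration of the pattern, not CiviCRM's FastArrayDecorator; the class name and sentinel are this sketch's own.

```python
# Generic two-tier cache illustrating the pattern discussed above;
# not CiviCRM's FastArrayDecorator.
_MISSING = object()  # sentinel for "delegate said the key is absent"

class LocalArrayCache:
    def __init__(self, delegate, cache_misses=False):
        self.delegate = delegate        # slower shared cache with has()/get()
        self.cache_misses = cache_misses
        self.values = {}                # in-process cache of fetched values

    def has(self, key):
        if key in self.values:
            return self.values[key] is not _MISSING
        found = self.delegate.has(key)
        if not found and self.cache_misses:
            self.values[key] = _MISSING  # optionally remember the miss
        return found

    def get(self, key, default=None):
        if key in self.values:
            cached = self.values[key]
            return default if cached is _MISSING else cached
        if self.delegate.has(key):
            value = self.delegate.get(key)
            self.values[key] = value     # positive results are always cached
            return value
        if self.cache_misses:
            self.values[key] = _MISSING
        return default
```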
[22:18:09] ah - serves me right trying to read composer.json in cli [22:18:19] 10Fundraising Tech - Chaos Crew: Adjust drush wrapper path for bin/symlink behavior of recent Composer versions - https://phabricator.wikimedia.org/T314826 (10Ejegg) [22:18:49] that might also need ops help [22:21:14] cool [22:24:04] 10Fundraising Sprint Esperantoland, 10Fundraising Sprint File Systems Stage Show, 10Fundraising Sprint Git Rebase Jump, 10Fundraising Sprint Humongous bacteria petting zoo, and 18 others: Fix civicrm repo to be non-symlinked - https://phabricator.wikimedia.org/T289100 (10Eileenmcnaughton) @jgleeson your fa... [22:29:14] i just took a peek at T314826, we currently have the drush executable as /srv/org.wikimedia.civicrm/vendor/drush/drush/drush which doesn't have any symlinks in the path. we call that executable from our wrapper at /usr/local/bin/drush i think i'm misunderstanding what is needed. [22:29:14] T314826: Adjust drush wrapper path for bin/symlink behavior of recent Composer versions - https://phabricator.wikimedia.org/T314826 [22:41:25] 10Fundraising Sprint NaN is a Number, 10Fundraising-Backlog, 10FR-Adyen, 10FR-Alerts, and 2 others: Adyen audit parser: fix recurring as much as possible - https://phabricator.wikimedia.org/T297856 (10Cstone) Looks like the audit parser has been behaving with recurring since 2022-05-10 and not hunting fore... [22:44:33] (03CR) 10Eileen: [C: 03+2] Update donation-interface to 2.5.7.5, smashpig to 0.8.2.2 [wikimedia/fundraising/crm/vendor] - 10https://gerrit.wikimedia.org/r/821318 (owner: 10Eileen) [22:45:52] (03PS2) 10Eileen: Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - 10https://gerrit.wikimedia.org/r/821305 [22:46:05] (03CR) 10CI reject: [V: 04-1] Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - 10https://gerrit.wikimedia.org/r/821305 (owner: 10Eileen) [22:47:03] (03Abandoned) 10Eileen: Update donation-interface to 2.5.7.5, smashpig to 2.5.7.5 [wikimedia/fundraising/crm/vendor] - 10https://gerrit.wikimedia.org/r/821317 (owner: 10Eileen) [22:47:33] 10Fundraising Sprint NaN is a Number, 10Fundraising-Backlog: Adyen - auto settle stopped prior donors (Pending transaction Resolver) - https://phabricator.wikimedia.org/T299692 (10Cstone) I think this is at least tangentially related too https://phabricator.wikimedia.org/T299690 [22:48:38] (03CR) 10Eileen: [C: 03+2] "recheck" [wikimedia/fundraising/crm/vendor] - 10https://gerrit.wikimedia.org/r/821318 (owner: 10Eileen) [22:48:59] hmm, is there no auto-merge for crm/vendor? [22:49:39] ok, seems to be happening: https://integration.wikimedia.org/ci/job/wikimedia-fundraising-civicrm-docker/8299/ [22:54:49] ejegg: it always seems to take forever to kick off - that's why I do recheck after CR [22:57:34] (03Merged) 10jenkins-bot: Update donation-interface to 2.5.7.5, smashpig to 0.8.2.2 [wikimedia/fundraising/crm/vendor] - 10https://gerrit.wikimedia.org/r/821318 (owner: 10Eileen) [22:57:43] ah i see [22:58:51] 10Fundraising-Backlog: Review Cadence of Audit Files for Manually settled Adyen Donations - https://phabricator.wikimedia.org/T314753 (10Ejegg) @EMartin we are using the settlement batch report (https://docs.adyen.com/reporting/settlement-reconciliation ) from Adyen as our audit/reconciliation report. The report... 
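On the Redis side, dwisehaupt noted earlier that he had not yet found a way to capture the slowlog to a log file rather than reading it inside Redis. redis-py does expose SLOWLOG GET as Redis.slowlog_get(), so a small poller along these lines is one low-tech option; the log path, poll interval and output format here are only examples, and the entry keys are the ones redis-py parses slowlog replies into.

```python
# Rough sketch of persisting SLOWLOG entries to a plain log file; the path
# and poll interval are examples, not anything frdev actually runs.
import time
import redis

def tail_slowlog(host="localhost", logfile="redis-slowlog.log", poll_seconds=60):
    r = redis.Redis(host=host)
    seen = set()  # slowlog entry ids already written out
    while True:
        # redis-py parses SLOWLOG GET into dicts with 'id', 'start_time',
        # 'duration' (microseconds) and 'command' keys.
        for entry in r.slowlog_get(100):
            if entry["id"] in seen:
                continue
            seen.add(entry["id"])
            cmd = entry["command"]
            if isinstance(cmd, bytes):
                cmd = cmd.decode("utf-8", "replace")
            with open(logfile, "a") as fh:
                fh.write("%s %dus %s\n" % (
                    time.strftime("%Y-%m-%dT%H:%M:%S",
                                  time.gmtime(entry["start_time"])),
                    entry["duration"],
                    cmd,
                ))
        time.sleep(poll_seconds)
```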
[23:00:07] 10Fundraising-Backlog, 10Wikimedia-Fundraising-CiviCRM: Long-running tasks such as audit parsing can lose Redis connection and drop queue messages - https://phabricator.wikimedia.org/T266591 (10Ejegg) [23:00:31] (03PS1) 10Ejegg: Option for queue push to fall back to db [wikimedia/fundraising/SmashPig] - 10https://gerrit.wikimedia.org/r/821320 (https://phabricator.wikimedia.org/T314805) [23:02:56] 10fundraising-tech-ops: Fundraising access request for dbu - https://phabricator.wikimedia.org/T314827 (10Dwisehaupt) [23:10:11] 10Fundraising Tech - Chaos Crew, 10Fundraising-Backlog: Update Braintree Order ID after successful payment - https://phabricator.wikimedia.org/T314681 (10Ejegg) This could be related to Monthly Convert. For non-MC countries, we clear out the session after a successful donation so reloading the form always rege... [23:10:42] (03CR) 10Eileen: "recheck" [wikimedia/fundraising/crm] (deployment) - 10https://gerrit.wikimedia.org/r/821305 (owner: 10Eileen) [23:11:19] eileen___: for deployment I think we still do have to manually Submit [23:11:25] no? [23:11:41] I think not if we wait long enough [23:11:51] ah ok [23:13:33] (03PS3) 10Eileen: Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - 10https://gerrit.wikimedia.org/r/821305 [23:15:19] ejegg: ok that verified - all good now? [23:19:15] 10Fundraising-Backlog: Review Cadence of Audit Files for Manually settled Adyen Donations - https://phabricator.wikimedia.org/T314753 (10EMartin) Hi @Ejegg Thanks. I will check with Finance. I doubt they will want to increase the frequency since Ingenico has been at weekly but it's a good time to ask. Assume... [23:21:47] looks good to me eileen___ [23:22:20] (03CR) 10Eileen: [C: 03+2] Merge branch 'master' of https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm into deployment [wikimedia/fundraising/crm] (deployment) - 10https://gerrit.wikimedia.org/r/821305 (owner: 10Eileen) [23:32:15] !log civicrm upgraded from 497bddf7 to 1f91ac2d [23:32:16] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log [23:32:56] !log config revision changed from f5668044 to 787cd0e0 eileen [23:32:57] Logged the message at https://wikitech.wikimedia.org/wiki/Server_Admin_Log [23:39:57] 10Fundraising-Backlog: Review Cadence of Audit Files for Manually settled Adyen Donations - https://phabricator.wikimedia.org/T314753 (10Ejegg) I guess Ingenico must not link their reporting schedule to their financial payout schedule. For Adyen there are other reports available but it would take some work to f... [23:42:42] 10Fundraising-Backlog, 10Wikimedia-Fundraising-CiviCRM, 10fundraising-tech-ops: Investigate Redis speed - https://phabricator.wikimedia.org/T314619 (10Dwisehaupt) Tested looking at the redis slowlog to see if anything showed up in there. After expanding the collection to 100 and collecting for 18 minutes, al... [23:43:15] ejegg: I deployed it - I don't know how long the new metric will take to show up [23:44:17] 10Fundraising-Backlog, 10Wikimedia-Fundraising-CiviCRM, 10fundraising-tech-ops: Investigate Redis speed - https://phabricator.wikimedia.org/T314619 (10Eileenmcnaughton) Just noting that the `prevnext` cache is hit by user action rather than by our queues [23:46:57] eileen___: is it a new prometheus metric? 
[23:47:04] dwisehaupt: yep [23:47:21] should show up on here https://frmon.frdev.wikimedia.org/d/Pq1YNMviz/fundraising-overview?orgId=1&refresh=1m&viewPanel=33&from=now-15m&to=now [23:47:25] should be pretty quick [23:48:22] ah. we'll have to add it to the graph. each metric there is its own, not a glob. [23:51:48] yeah. they are starting to populate [23:54:49] eileen___: we can add them to that graph, but what 'name' do we want for each? "create contact api" and "update contact api"? [23:55:18] yeah - but the existing could be changed to 'save contact' [23:55:23] because it wraps both [23:55:38] the existing "Create Contact"? [23:56:14] yep [23:57:34] can worry about that later though [23:57:49] easy to do it all at once. [23:58:01] let me rearrange them so they look pretty and pick some colors.
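For the new timer metrics being discussed here: the CRM writes them into a Prometheus metric file that the monitoring host picks up, and each resulting series then has to be added to the Grafana panel by name rather than by glob. The text exposition format itself is simple; the sketch below shows the shape of such a file, with made-up metric names and output path (the real writer is PHP and the actual metric names differ).

```python
# Illustrative only: the metric names and output path are hypothetical,
# not the ones the CRM actually emits.
import os
import tempfile

def write_timer_metrics(metrics, prom_path="/var/spool/prometheus/civicrm.prom"):
    """Write timing metrics in the Prometheus text exposition format.

    `metrics` maps a metric name to a float value in seconds, e.g.
    {"fundraising_civicrm_contact_save_seconds": 0.012}.
    Writing to a temp file and renaming keeps the collector from ever
    reading a half-written file.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(prom_path))
    with os.fdopen(fd, "w") as fh:
        for name, value in metrics.items():
            fh.write("# TYPE %s gauge\n" % name)
            fh.write("%s %s\n" % (name, value))
    os.rename(tmp, prom_path)
```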