[07:28:37] 10serviceops, 10MW-on-K8s, 10Observability-Logging: Some apache access logs are invalid json - https://phabricator.wikimedia.org/T340935 (10Joe) p:05Triage→03High Some updates: I managed to isolate the problem by using the following simple configuration: ` module(load="mmjsonparse") template(name="sim...
[07:47:18] 10serviceops, 10Machine-Learning-Team, 10Platform Team Initiatives (API Gateway): Review LiftWing's usage of the API Gateway - https://phabricator.wikimedia.org/T340982 (10elukey) 05Open→03Resolved a:03elukey Change deployed to the API Gateway, thank's all for the feedback and the chats on IRC (Alexand...
[08:08:20] 10serviceops, 10Beta-Cluster-Infrastructure, 10wikidiff2, 10Better-Diffs-2023, 10Community-Tech (CommTech-Kanban): Install wikidiff2 1.14.1 deb on deployment-prep & test - https://phabricator.wikimedia.org/T340542 (10MoritzMuehlenhoff) Sure, I'll update the package later the day.
[08:11:04] 10serviceops, 10Beta-Cluster-Infrastructure, 10wikidiff2, 10Better-Diffs-2023, 10Community-Tech (CommTech-Kanban): Install wikidiff2 1.14.1 deb on deployment-prep & test - https://phabricator.wikimedia.org/T340542 (10Joe) We also need to rebuild the base php-fpm images for mediawiki on k8s
[08:19:06] FYI, kubetcd1004 and kubestagetcd1006 will briefly go down for a reboot
[08:34:37] 10serviceops, 10SRE, 10envoy: Refactor envoy max_requests_per_connection from Cluster to HttpProtocolOptions - https://phabricator.wikimedia.org/T304124 (10JMeybohm) a:03JMeybohm
[08:34:43] 10serviceops, 10SRE, 10Traffic, 10envoy: Set a limit to the number of allowed active connections via runtime key overload.global_downstream_max_connections - https://phabricator.wikimedia.org/T340955 (10JMeybohm) a:03JMeybohm
[08:34:55] 10serviceops, 10SRE, 10envoy: Remove tls_minimum_protocol_version from envoy config - https://phabricator.wikimedia.org/T337453 (10JMeybohm) a:03JMeybohm
[08:59:48] 10serviceops, 10MW-on-K8s, 10Release-Engineering-Team: MediaWiki deployment to kubernetes fails on group1 promotion - https://phabricator.wikimedia.org/T341114 (10hashar)
[08:59:57] 10serviceops, 10MW-on-K8s, 10Release-Engineering-Team: MediaWiki deployment to kubernetes fails on group1 promotion - https://phabricator.wikimedia.org/T341114 (10hashar)
[09:00:13] 10serviceops, 10MW-on-K8s, 10Release-Engineering-Team: MediaWiki deployment to kubernetes fails on group1 promotion - https://phabricator.wikimedia.org/T341114 (10hashar) p:05Triage→03Unbreak!
[09:01:08] helm failed when doing the mediawiki promotion. `helmfile -e eqiad --selector name=main apply in /srv/deployment-charts/helmfile.d/services/mw-api-ext` is unhappy about a yaml file that got altered with the new mediawiki image
[09:01:17] that seems to be solely for the mw-api-ext service
[09:01:22] I filed https://phabricator.wikimedia.org/T341114
[09:02:42] <_joe_> claime: ^^
[09:02:52] <_joe_> can you take a look?
[09:06:38] 10serviceops, 10MW-on-K8s, 10Release-Engineering-Team: MediaWiki deployment to kubernetes fails on group1 promotion - https://phabricator.wikimedia.org/T341114 (10hashar) The helm output mentions a timeout: ` COMBINED OUTPUT: WARNING: Kubernetes configuration file is group-readable. This is insecure. Locat...
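A minimal sketch of how a timed-out release like this can be inspected from a deploy host, assuming helm 3, a release named `main` (matching the helmfile selector above), the `mw-api-ext` namespace, and a kubeconfig already pointed at the right cluster; in production this is normally driven through helmfile/scap rather than bare helm:

```
# Did the last upgrade fail or time out, and what is the release state now?
helm -n mw-api-ext history main --max 5
helm -n mw-api-ext status main

# Are the new pods actually coming up?
kubectl -n mw-api-ext get pods | grep -v Running
```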
[09:07:05] looks like helm ends up reaching a 10 minute timeout for mw-api-ext
[09:07:19] looks like the namespace is getting out of quota, which is blocking the updated pods from starting
[09:07:21] Error creating: pods "mw-api-ext.eqiad.main-5dfdb996cf-g5hsz" is forbidden: exceeded quota: quota-compute-resources, requested: limits.cpu=8250m, used: limits.cpu=82500m, limited: limits.cpu=90
[09:07:23] but for the other services each deploy takes less than a minute
[09:07:32] <_joe_> taavi: yep, it needs to get a quota bump
[09:07:37] from where do you find that taavi? :)
[09:07:44] <_joe_> hashar: events for the namespace
[09:07:58] I don't even know how to check those :]
[09:08:19] <_joe_> hashar: do you have to perform another deploy right now?
[09:08:27] <_joe_> else I'll finish what I am doing before fixing this
[09:08:30] nop
[09:08:36] <_joe_> ok, thanks
[09:08:44] my concern is I am guessing mw-api-ext has not been upgraded
[09:09:22] so I guess whatever kind of requests hit that service are served with wmf.15 code rather than wmf.16
[09:09:33] I'll fix it
[09:09:39] _joe_: ^
[09:09:41] <_joe_> yes, so a few wikis
[09:09:44] <_joe_> claime: thanks :)
[09:09:45] sorry I was afk for a minute
[09:09:54] including commons and wikidata :-]
[09:10:04] which are not served from k8s
[09:10:14] \o/
[09:10:25] <_joe_> hashar: not including those :)
[09:10:31] <_joe_> claime: np
[09:11:22] _joe_: Giving it the same quota as mw-web
[09:11:46] <_joe_> claime: seems reasonable
[09:12:50] https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/935685
[09:12:55] 10serviceops, 10Beta-Cluster-Infrastructure, 10wikidiff2, 10Better-Diffs-2023, 10Community-Tech (CommTech-Kanban): Install wikidiff2 1.14.1 deb on deployment-prep & test - https://phabricator.wikimedia.org/T340542 (10MoritzMuehlenhoff) >>! In T340542#8987623, @TheresNoTime wrote: > @MoritzMuehlenhoff (se...
[09:13:50] claime: maybe attach that gerrit change to `Bug: T341114` ? ;)
[09:14:23] hashar: btw, troubleshooting doc for k8s (just fyi) https://wikitech.wikimedia.org/wiki/Kubernetes/Troubleshooting#Troubleshooting_a_deployment
[09:14:29] hashar: sure
[09:14:43] 10serviceops, 10MW-on-K8s, 10Release-Engineering-Team, 10Patch-For-Review: MediaWiki deployment to kubernetes fails on group1 promotion - https://phabricator.wikimedia.org/T341114 (10hashar) ` lang=irc 09:07:19 looks like the namespace is getting out of quota, which is blocking the updated pods fro...
[09:17:22] CI is done, +2'ing myself
[09:17:52] 10serviceops, 10MW-on-K8s, 10Release-Engineering-Team, 10Patch-For-Review: MediaWiki deployment to kubernetes fails on group1 promotion - https://phabricator.wikimedia.org/T341114 (10Clement_Goubert) 05Open→03In progress a:03Clement_Goubert
[09:19:17] 10serviceops, 10docker-pkg: Rationalize and update the use of base images in our docker-pkg repositories - https://phabricator.wikimedia.org/T341115 (10Joe)
[09:19:33] 10serviceops, 10docker-pkg: Rationalize and update the use of base images in our docker-pkg repositories - https://phabricator.wikimedia.org/T341115 (10Joe) p:05Triage→03Medium
[09:20:01] claime: I have added the `kubectl get events` tip to our train deploy doc https://wikitech.wikimedia.org/wiki/Heterogeneous_deployment/Train_deploys#Troubleshoot_Kubernetes_deployment :]
[09:22:33] then I guess the helm deployment can be redone, I imagine `helmfile -e eqiad --selector name=main apply` ?
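For reference, a sketch of the namespace events / quota check described above; the namespace and quota name are taken from the 09:07:21 error, everything else (kubeconfig, exact output) is assumed:

```
# Recent events for the namespace, e.g. FailedCreate / "exceeded quota" messages
kubectl -n mw-api-ext get events --sort-by=.metadata.creationTimestamp | tail -n 30

# Current usage vs. the hard limits of the quota being exceeded
kubectl -n mw-api-ext describe resourcequota quota-compute-resources
```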
[09:22:44] or maybe I can rerun scap
[09:22:44] may not work
[09:22:56] I usually run scap sync-world --k8s-only
[09:23:11] It'll redeploy all mw-on-k8s
[09:23:53] ok quota update done
[09:24:02] Running scap
[09:24:51] 10serviceops, 10CX-deployments, 10MinT, 10Language-Team (Language-2023-July-September): Remove Flores key from production - https://phabricator.wikimedia.org/T337284 (10Nikerabbit) This might be causing following errors in Logstash ([[https://logstash.wikimedia.org/app/discover#/doc/logstash-*/logstash-k8s...
[09:26:14] 09:26:07 Finished Running helmfile -e eqiad --selector name=main apply in /srv/deployment-charts/helmfile.d/services/mw-api-ext (duration: 00m 36s)
[09:26:29] All good.
[09:26:51] thanks claime ! :)
[09:27:15] 10serviceops, 10MW-on-K8s, 10Release-Engineering-Team, 10Patch-For-Review: MediaWiki deployment to kubernetes fails on group1 promotion - https://phabricator.wikimedia.org/T341114 (10hashar) 05In progress→03Resolved
[09:27:29] besides that, the MediaWiki error log is quiet
[09:29:50] ok cool
[09:30:18] hashar: Think I can start sending more traffic to mw-on-k8s then if you're done?
[09:30:33] up to you yes
[09:30:37] the train looks good so far
[09:30:38] awesome
[09:32:30] <_joe_> claime: keep an eye on not getting commons traffic on either cluster :)
[09:33:19] _joe_: Yeah i'll prepare a logstash
[09:34:07] <_joe_> claime: the access log one should have all you need
[09:34:08] _joe_: btw, we do get commons traffic on mw-api-int from some internal services
[09:34:27] <_joe_> claime: and that's ok, that's mostly read-only access
[09:34:29] _joe_: yep, but filtering on the right namespaces and stuff before merging and having to scramble seems like a good idea
[09:34:39] <_joe_> yeep
[09:37:20] 10serviceops, 10CX-cxserver, 10Kubernetes, 10Language-Team (Language-2023-July-September): cxserver: Section Mapping Database (m5) not accessible by certain region - https://phabricator.wikimedia.org/T341117 (10Ladsgroup) The databases are working fine, I think k8s can't reach the new proxies again. cc. @a...
[09:39:25] ugh, more db getting blocked by egress ^
[09:41:35] 10serviceops, 10CX-cxserver, 10Kubernetes, 10Language-Team (Language-2023-July-September): cxserver: Section Mapping Database (m5) not accessible by certain region - https://phabricator.wikimedia.org/T341117 (10Marostegui) This might be related to T337812 I have failed back to dbproxy1017 until it is inves...
[09:42:33] Traffic climbing steadily on mw-api-ext
[09:43:14] puppet run is done, now we watch :p
[09:43:25] * volans runs away
[09:43:31] :-P
[09:43:33] coward
[09:44:53] Getting some actual frwiki/enwiki traffic to mw-web and mw-api-ext
[09:45:01] no big error rise or anything
[09:47:42] we may want to fix our apache logs only reporting 127.0.0.1 as the source ip
[09:54:41] 10serviceops, 10SRE, 10Traffic, 10envoy: Set a limit to the number of allowed active connections via runtime key overload.global_downstream_max_connections - https://phabricator.wikimedia.org/T340955 (10JMeybohm) `max(sum by (instance) (envoy_http_downstream_cx_active))` over the last 30 days tops out at ~...
[09:56:24] 10serviceops, 10CX-cxserver, 10Kubernetes, 10Language-Team (Language-2023-July-September): cxserver: Section Mapping Database (m5) not accessible by certain region - https://phabricator.wikimedia.org/T341117 (10Marostegui) Confirmed this works now: https://cxserver.wikimedia.org/v2/suggest/sections/United_...
[09:57:08] 10serviceops, 10CX-cxserver, 10Kubernetes, 10Language-Team (Language-2023-July-September): cxserver: Section Mapping Database (m5) not accessible by certain region - https://phabricator.wikimedia.org/T341117 (10Marostegui)
[09:57:10] 10serviceops, 10Data-Persistence: WikiKube: Investigate how to abstract misc Mariadb clusters host/ip information so that no deployment of apps is needed when a master is failed over - https://phabricator.wikimedia.org/T340843 (10Marostegui)
[10:13:03] 10serviceops, 10SRE, 10Traffic, 10envoy, 10Patch-For-Review: Upgrade Envoy to supported version - https://phabricator.wikimedia.org/T300324 (10BTullis) >>! In T300324#8988266, @JMeybohm wrote: > ... as datahub (cc @BTullis ) which I did not deploy because it has a huge diff I'm not able to reason about....
[10:24:27] 10serviceops, 10MW-on-K8s, 10Observability-Logging: Some apache access logs are invalid json - https://phabricator.wikimedia.org/T340935 (10akosiaris) Traced down the addition of the escape. It was a nice trip down memory lane: https://svn.apache.org/viewvc/httpd/httpd/branches/APACHE_2_0_BRANCH/server/util....
[10:25:07] 10serviceops, 10CX-cxserver, 10Kubernetes, 10Language-Team (Language-2023-July-September): cxserver: Section Mapping Database (m5) not accessible by certain region - https://phabricator.wikimedia.org/T341117 (10akosiaris) a:03akosiaris
[10:45:32] kubetcd1006 will also briefly go down for a reboot
[10:45:47] ack
[10:52:55] 10serviceops, 10Data-Persistence: WikiKube: Investigate how to abstract misc Mariadb clusters host/ip information so that no deployment of apps is needed when a master is failed over - https://phabricator.wikimedia.org/T340843 (10Clement_Goubert) @Marostegui Can you provide us with the list of ports that would...
[10:54:53] 10serviceops, 10Data-Persistence: WikiKube: Investigate how to abstract misc Mariadb clusters host/ip information so that no deployment of apps is needed when a master is failed over - https://phabricator.wikimedia.org/T340843 (10Marostegui) You'd need to open: 3321, 3322, 3323, 3325 too as those are the ones...
[10:55:48] 10serviceops, 10Foundational Technology Requests, 10Prod-Kubernetes, 10Kubernetes, 10Patch-For-Review: Post Kubernetes v1.23 cleanup - https://phabricator.wikimedia.org/T328291 (10JMeybohm)
[10:55:51] 10serviceops, 10Prod-Kubernetes, 10Kubernetes: Selected IPv6 service-cluster-up ranges are to big - https://phabricator.wikimedia.org/T335285 (10JMeybohm) 05Open→03Resolved Change has been deployed today
[10:55:53] 10serviceops, 10Foundational Technology Requests, 10Prod-Kubernetes, 10Shared-Data-Infrastructure, and 2 others: Update Kubernetes clusters to v1.23 - https://phabricator.wikimedia.org/T307943 (10JMeybohm)
[11:03:23] 10serviceops, 10Wikimedia-Site-requests: Cleanup cirrus keys in $wmfSwiftEqiadConfig - https://phabricator.wikimedia.org/T199220 (10MatthewVernon)
[11:03:48] 10serviceops, 10SRE-swift-storage, 10Patch-For-Review: Remove search:backup swift account and storage - https://phabricator.wikimedia.org/T341081 (10MatthewVernon) 05Open→03Resolved All done, including roll-restart of the proxies to make this change take effect.
[11:04:36] 10serviceops, 10iPoid-Service, 10Patch-For-Review, 10Service-deployment-requests: New Service Request 'iPoid' - https://phabricator.wikimedia.org/T325147 (10kostajh)
[11:11:42] 10serviceops, 10MW-on-K8s, 10SRE, 10Traffic, and 3 others: Direct 0.5% of all traffic to mw-on-k8s - https://phabricator.wikimedia.org/T341078 (10Clement_Goubert) Everything looks good.
mw-api-ext: {F37129502} {F37129504} {F37129506}
mw-web: {F37129508} {F37129510} {F37129512}
[11:16:27] 10serviceops, 10Parsoid, 10Parsoid-Read-Views, 10RESTbase Sunsetting: High insertion rate of ParsoidCachePrewarmJob causes substantial backlog - https://phabricator.wikimedia.org/T341123 (10daniel)
[11:19:22] 10serviceops, 10CX-cxserver, 10Kubernetes, 10Language-Team (Language-2023-July-September): cxserver: Section Mapping Database (m5) not accessible by certain region - https://phabricator.wikimedia.org/T341117 (10KartikMistry) >>! In T341117#8989921, @Marostegui wrote: > This might be related to T337812 I ha...
[11:19:40] 10serviceops, 10Parsoid, 10Parsoid-Read-Views, 10RESTbase Sunsetting: High insertion rate of ParsoidCachePrewarmJob causes substantial backlog - https://phabricator.wikimedia.org/T341123 (10daniel)
[11:36:17] 10serviceops, 10Data-Persistence: WikiKube: Investigate how to abstract misc Mariadb clusters host/ip information so that no deployment of apps is needed when a master is failed over - https://phabricator.wikimedia.org/T340843 (10akosiaris) So, the full list would be: 3306 3310 3311 3312 3313 3314 3315 3316 3...
[11:50:50] 10serviceops, 10Parsoid, 10Parsoid-Read-Views, 10RESTbase Sunsetting, 10Patch-For-Review: High insertion rate of ParsoidCachePrewarmJob causes substantial backlog - https://phabricator.wikimedia.org/T341123 (10daniel)
[11:58:49] 10serviceops, 10Data-Persistence: WikiKube: Investigate how to abstract misc Mariadb clusters host/ip information so that no deployment of apps is needed when a master is failed over - https://phabricator.wikimedia.org/T340843 (10Marostegui) So, dbproxies have nothing listening on 3310 3311 3312 3313 3314 3315...
[12:01:11] 10serviceops, 10Data-Persistence: WikiKube: Investigate how to abstract misc Mariadb clusters host/ip information so that no deployment of apps is needed when a master is failed over - https://phabricator.wikimedia.org/T340843 (10akosiaris) Cool, that completes the picture I was trying to form, thanks!
[12:03:01] 10serviceops, 10CX-cxserver, 10Kubernetes, 10Language-Team (Language-2023-July-September): cxserver: Section Mapping Database (m5) not accessible by certain region - https://phabricator.wikimedia.org/T341117 (10Marostegui) Cool, once the FW has been changed, we'd need to revert that patch and confirm it ke...
[12:41:18] akosiaris: thanks for taking care of the broken backlog metrics. Is there anything I need to do with https://grafana.wikimedia.org/d/t_x3DEu4k/parsoid-health?forceLogin=&forceLogin=&orgId=1&refresh=15m&viewPanel=37 to benefit from the fix?
[12:45:02] duesen: yes, hit F5 :-)
[12:45:55] this is the first of the metrics we are switching to histograms, we have another few down the line, we are trying to assess if we are breaking too much, which is why we are moving piece by piece
[13:32:02] hi folks!
[13:32:22] Any opposition if I expand kafka topic partitions as indicated in https://phabricator.wikimedia.org/T338357#8990636 ?
[13:35:22] claime: --^
[13:41:39] <_joe_> elukey: +1
[13:41:57] <_joe_> tbh I thought refreshlinks had 3 partitions at least :/
[13:44:42] yeah :(
[13:44:51] we may need to move them around
[13:55:55] done!
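A sketch of the kind of command behind that partition expansion; the topic name here is a plausible guess, the broker is a placeholder, and the target partition count is whatever T338357 settled on. Note that Kafka only allows increasing the count, and doing so changes the key-to-partition mapping for keyed producers:

```
# On clusters older than Kafka 2.2 these tools take --zookeeper instead of --bootstrap-server.
kafka-topics.sh --bootstrap-server <kafka-main-broker>:9092 \
  --describe --topic eqiad.mediawiki.job.refreshLinks   # hypothetical topic name

kafka-topics.sh --bootstrap-server <kafka-main-broker>:9092 \
  --alter --topic eqiad.mediawiki.job.refreshLinks --partitions <N>
```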
[13:56:04] let's see if Changeprop likes it
[13:58:40] Fingers crossed
[14:01:32] so far it seems that changeprop doesn't need a restart
[14:02:30] claime: I think it will be a long road, I am not confident that we'll resolve it
[14:02:33] :(
[14:03:38] job processing rates have fallen sharply
[14:03:51] https://grafana.wikimedia.org/goto/hrXW3L9Vk?orgId=1
[14:03:56] I think it does need a restart
[14:04:44] the last time it took a bit to catch up though
[14:04:44] ah it may be picking back up
[14:04:47] yeah
[14:04:49] even after the restart
[14:06:27] claime: also changeprop hates me, this needs to be taken into account :)
[14:06:38] jokes aside, let's give it 10/15 mins, if it doesn't change I'll roll restart
[14:06:49] elukey: I think changeprop hates everyone tbh
[14:10:55] (of course I realized that I checked the changeprop dashboard and not the job queue one, good job Luca)
[14:11:05] mw-on-k8s is holding up pretty well to its *checks graphs* respectively 50 and 20 rps lol
[14:12:35] <_joe_> claime: hey don't diss your kid
[14:12:48] <_joe_> the api cluster got something north of 100 rps at some point
[14:13:06] <_joe_> that already sets us in the top 10% of all kubernetes clusters
[14:13:10] lmao
[14:13:18] <_joe_> if you add the 30k rps for sessionstore/echostore...
[14:13:19] One of the kubernetes clusters of all time
[14:14:20] elukey: p50 has hit p99 backlog times, I think a restart may be warranted
[14:14:37] I was about to say the same, doing it sigh
[14:14:41] <3
[14:18:19] eqiad done :)
[14:18:45] and also codfw
[14:18:55] (changeprop-jobqueue)
[14:18:59] 10serviceops, 10SRE, 10Traffic, 10envoy, 10Patch-For-Review: Set a limit to the number of allowed active connections via runtime key overload.global_downstream_max_connections - https://phabricator.wikimedia.org/T340955 (10akosiaris) >>! In T340955#8989979, @JMeybohm wrote: > `max(sum by (instance) (envo...
[14:19:16] c'mon cp-jq, play nice
[14:19:47] I'd like to understand why we need a restart though
[14:19:51] maybe it is the old kafka client
[14:20:08] cp-jq sounds like a droid's name in star wars :D
[14:20:37] jayme: Yeah, the annoying one that nobody really knows how it works or how to repair it
[14:20:50] eheh
[14:20:53] elukey: mebbe
[14:21:46] claime: looks healthier now
[14:22:09] elukey: definitely does
[14:22:31] let's wait a bit and check a flamegraph to see if it lowered job push time
[14:23:17] I am wondering if it is possible to have an alert that fires when a topic gets a traffic volume that is not good for its partitions
[14:24:07] elukey: I'd like to check parsoidCacheWarmer's volume, now that I think about it
[14:27:46] I was wondering why there was so much difference between two pods and the rest, but the refreshLink job is partitioned by db section
[14:27:49] So that explains it
[14:29:22] elukey: I have no idea what that volume would be, do you have a rule of thumb?
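One concrete signal for that rule-of-thumb question is per-partition consumer lag: if lag keeps growing on a topic's partitions while the consumers are otherwise healthy, the topic is outrunning its partition count. A sketch with placeholder broker and group names (the real changeprop-jobqueue consumer group name is an assumption here):

```
# The LAG column shows, per partition, how far the consumer group is behind the log end offset.
kafka-consumer-groups.sh --bootstrap-server <kafka-main-broker>:9092 \
  --describe --group <changeprop-jobqueue-consumer-group>
```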
[14:29:49] claime: good question, I don't have a solid answer right now
[14:30:02] I started checking the topics with most volume basically
[14:30:14] but some of them are not steady, there are waves
[14:34:26] yeah tolerable volumes are somewhat dependent on the consumers
[14:34:39] but surges that don't abate would be easy enough to alert on
[14:35:07] but could also be tricky for some jobs that are delayed or just cause big backlogs (like ThumbnailRender recently has been doing)
[14:41:50] 10serviceops, 10SRE, 10Traffic, 10envoy, 10Patch-For-Review: Set a limit to the number of allowed active connections via runtime key overload.global_downstream_max_connections - https://phabricator.wikimedia.org/T340955 (10JMeybohm) I've reduced the limit to 50k (which is what https://www.envoyproxy.io/d...
[15:26:40] there are two things that we may try to set in node-rdkafka:
[15:26:42] https://github.com/Blizzard/node-rdkafka/blob/master/config.d.ts#L683
[15:26:53] https://github.com/Blizzard/node-rdkafka/blob/master/config.d.ts#L739
[15:27:17] linger.ms states how much time a producer (in this case, changeprop) waits to collect messages before sending
[15:27:39] and batch.size states how much data (in bytes) to collect before sending to the broker
[15:27:47] IIUC the first limit that is hit wins
[15:28:00] we don't set anything afaics, so we have the defaults:
[15:28:14] linger.ms: 5
[15:28:31] batch.size: 1000000
[15:28:51] I don't see the latter being hit in our case, the former is maybe a little tight?
[15:30:53] I'll report this in the task
[15:35:57] I'd also propose to upgrade the node-rdkafka version since we are running something from years ago
[15:45:02] yeah, for sure.
[15:45:23] 10serviceops, 10ChangeProp, 10WMF-JobQueue: Check if node-rdkafka's version on changeprop can be upgraded - https://phabricator.wikimedia.org/T341140 (10elukey)
[15:45:27] created --^
[15:51:38] of course the above doesn't tackle the main issue that Amir reported, the jobrunner's latency, but it would help in general when we try to set any parameter/upgrade/etc..
[15:52:59] mmm or not, if the brokers are hammered by changeprop they may also respond more slowly to eventgate
[16:01:24] I boldly created https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/935772
[16:01:29] if it is a bad idea just -2 me :)
[16:03:58] (going afk, will read later)
[16:25:30] elukey: tests pass for the rdkafka bump, which is interesting. Admittedly we mock most kafka behaviours for obvious reasons
[16:25:37] But I'd be willing to try it out in staging and see how we get on
[16:25:52] very thankful for such expansive tests on changeprop though <3
[18:22:11] 10serviceops, 10ChangeProp, 10WMF-JobQueue: Check if node-rdkafka's version on changeprop can be upgraded from 2.8.1 - https://phabricator.wikimedia.org/T341140 (10Aklapper)
[21:17:16] 10serviceops, 10Abstract Wikipedia team (Phase λ – Launch), 10Patch-For-Review: Kubernetes Wikifunctions security and control measures - https://phabricator.wikimedia.org/T326785 (10cmassaro) Hello! According to [[ https://wikimedia.slack.com/archives/CTFK3B423/p1688375409565119?thread_ts=1688137189.354779&c...
[23:12:36] 10serviceops, 10Abstract Wikipedia team (Phase λ – Launch), 10Patch-For-Review: Kubernetes Wikifunctions security and control measures - https://phabricator.wikimedia.org/T326785 (10akosiaris) >>! In T326785#8991980, @cmassaro wrote: > Hello! According to [[ https://wikimedia.slack.com/archives/CTFK3B423/p16...
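Referring back to the 15:26-15:28 discussion of producer batching: a quick way to experiment with those two librdkafka properties outside of changeprop is kcat, which passes arbitrary librdkafka config via -X. Broker and topic are placeholders, and changeprop itself would set these through node-rdkafka's producer config rather than kcat:

```
# Produce 100k small messages; raising linger.ms lets the producer batch more per request,
# while batch.size (bytes) caps how large each batch may grow.
seq 1 100000 | kcat -P -b <kafka-test-broker>:9092 -t <scratch-topic> \
  -X linger.ms=50 \
  -X batch.size=1000000
```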