[05:29:18] are there plans to enable distributed caching/something similar for CI/CD? it seems using a cache between jobs doesn't work since the jobs almost always run on different runners (so the cache is essentially "left behind" on the runner where the cache was originally pushed)
[05:32:14] https://gitlab.wikimedia.org/repos/10nm/ultraviolet/-/pipelines/8377 is an example of this; the `test_jest` job seems to say that the cache was extracted, but checking for the files that *should* be there shows they aren't
[06:25:51] well the cache could always be empty, you can't rely on it being present
[07:08:47] previous job pushes to cache, next job pulls it
[07:09:34] it's a way to share files between jobs so that the jobs remain separate (and thus, individually restartable should something go wrong) and dependencies don't get redownloaded on every single job
[07:11:15] or at least it should be, based on how it's used on GitLab.com and per https://docs.gitlab.com/ee/ci/caching/#good-caching-practices
[07:15:13] ok, seems like https://docs.gitlab.com/runner/configuration/speed_up_job_execution.html#use-a-distributed-cache needs to be set up
[07:15:29] probably worth filing a bug asking for it if one doesn't exist yet
[07:17:14] will check
[07:57:18] 10GitLab: Consider enabling distributed caching for GitLab runners - https://phabricator.wikimedia.org/T328516 (10Chlod)
[08:27:10] 10GitLab (Infrastructure), 10Release-Engineering-Team, 10serviceops-collab, 10Security: GitLab Security Release: 15.8.1, 15.7.6, and 15.6.7 - https://phabricator.wikimedia.org/T328518 (10Jelto)
[08:27:20] 10GitLab (Infrastructure), 10Release-Engineering-Team, 10serviceops-collab, 10Security: GitLab Security Release: 15.8.1, 15.7.6, and 15.6.7 - https://phabricator.wikimedia.org/T328518 (10Jelto) p:05Triageβ†’03High
[09:22:15] hi, does anyone know how I can list all my pending merge requests?
I can't find such a list on our gitlab :-\
[09:23:31] there are lists for todo, merge requests assigned to me, and review requests made to me
[09:28:18] found it. Gotta use the link for "merge requests assigned to me", then change the filter to list the ones that I have authored (`author=Hashar`)
[09:28:44] found that via the issue "list of all merge requests on my projects" https://gitlab.com/gitlab-org/gitlab/-/issues/26846#note_215641583
[09:29:09] the more specific issue "Ability to make the default of dashboard/merge_request to what the current user authored" https://gitlab.com/gitlab-org/gitlab/-/issues/23386
[09:30:01] which got closed in favor of "Merge requests that require my attention" https://gitlab.com/groups/gitlab-org/-/epics/5331
[09:32:59] that is the epic to implement attention handling in GitLab, which eventually got declined after a prototype https://gitlab.com/groups/gitlab-org/-/epics/5331#note_1009199006
[09:33:39] workaround: bookmark the search :] ( https://gitlab.wikimedia.org/dashboard/merge_requests?scope=all&state=opened&author_username=hashar )
[09:33:42] {solved}
[09:42:08] the recap as to why the feature got dropped is in an 8-minute video ( https://www.youtube.com/watch?v=Fgf6prHAOMk )
[10:24:52] 10GitLab, 10serviceops-collab: Investigate incremental backups for GitLab - https://phabricator.wikimedia.org/T324506 (10Jelto) >>! In T324506#8575734, @Arnoldokoth wrote: > Yes, you are absolutely right about this. I performed the incremental backup on gitlab1003 and it pretty much maxed out the disk usage. T...
[11:20:45] 10GitLab: Consider enabling distributed caching for GitLab runners - https://phabricator.wikimedia.org/T328516 (10Chlod)
[12:05:04] 10GitLab (Infrastructure), 10serviceops-collab: ensure Gitlab logs end up in logstash - https://phabricator.wikimedia.org/T322261 (10eoghan) So it looks like gitlab logs are already configured to be included in logstash, someone beat us to it in 2021!
They're configured here ([[ https://gerrit.wikimedia.org/r/...
[12:45:07] 10GitLab (Infrastructure), 10serviceops-collab: ensure Gitlab logs end up in logstash - https://phabricator.wikimedia.org/T322261 (10Jelto) It seems logstash integration has been implemented and tracked in T274462 already. Somehow it stopped working. The change https://gerrit.wikimedia.org/r/c/operations/pupp...
[14:57:40] 10GitLab, 10Release-Engineering-Team, 10serviceops-collab, 10Patch-For-Review: Align the GitLab runner tags - https://phabricator.wikimedia.org/T325069 (10Jelto) 05Openβ†’03Resolved So the last tag which isn't settled yet is `cloud`. I don't see big benefits in migrating from tag `cloud` to `public-cloud...
[15:50:59] 10GitLab (CI & Job Runners), 10Release-Engineering-Team: Consider enabling distributed caching for GitLab runners - https://phabricator.wikimedia.org/T328516 (10brennen)
[16:26:35] jelto: do you happen to know envoy well? we're trying to use it to manage connections to buildkitd in the gitlab-cloud-runner cluster (enqueue when a max is reached, wait for autoscaling to happen, then reassign to a newly available upstream) and running into some limitations. i'm wondering how long to bang my head against it before giving up and looking for a different solution, maybe a client-side one :)
[16:28:18] it's handling the traffic as opaque tcp, not grpc, because all buildkit grpc requests need to be handled by the same backend throughout the same session
[16:49:01] chlod: we have a distributed cache set up for the runners on digitalocean but there's no current equivalent on wmcs runners
[16:58:52] dduvall: not very well. I remember I used envoy once on kubernetes for GRPC connections. That needed a kubernetes headless service (https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
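For context on the headless-service approach mentioned at 16:58:52, here is a minimal sketch (all names are hypothetical, not taken from the actual gitlab-cloud-runner setup). Setting `clusterIP: None` makes a DNS lookup of the Service name return the individual pod IPs rather than one load-balanced virtual IP, which is what lets a gRPC-aware proxy or client pin a whole session to a single buildkitd backend:

```yaml
# Hypothetical headless Service for buildkitd pods. With clusterIP: None,
# Kubernetes skips the cluster-internal virtual IP, so clients (or an envoy
# STRICT_DNS cluster) see each pod individually and can stay session-sticky.
apiVersion: v1
kind: Service
metadata:
  name: buildkitd          # hypothetical name
spec:
  clusterIP: None          # this is what makes the Service "headless"
  selector:
    app: buildkitd         # hypothetical pod label
  ports:
    - name: grpc
      port: 1234           # buildkit examples commonly expose tcp://0.0.0.0:1234
      targetPort: 1234
```

This addresses discovery, not the queueing-until-autoscale behavior discussed above; that part still needs proxy- or client-side logic.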
[16:59:30] you may get some more experienced support in #wikimedia-serviceops or #wikimedia-k8s-sig :) I think they are using envoy quite a lot, maybe also with some grpc apps
[17:00:53] ah, fantastic. ty!
[17:38:49] 10GitLab (CI & Job Runners), 10Release-Engineering-Team: Consider enabling distributed caching for GitLab runners - https://phabricator.wikimedia.org/T328516 (10thcipriani)
[17:38:53] 10GitLab (CI & Job Runners), 10serviceops-collab, 10Release-Engineering-Team (Priority Backlog πŸ“₯), 10User-brennen: Provision untrusted instance-wide GitLab job runners to handle user-level projects and merge requests from forks - https://phabricator.wikimedia.org/T297426 (10thcipriani)
[17:39:39] 10GitLab (CI & Job Runners), 10Release-Engineering-Team (Priority Backlog πŸ“₯): Consider enabling distributed caching for GitLab runners - https://phabricator.wikimedia.org/T328516 (10thcipriani) We're working on moving shared runners to a space where we have this enabled, so the hope is it will Just Workβ„’ once...
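Once a shared cache backend exists, job-to-job caching is driven by the project's own `cache:` configuration. A minimal `.gitlab-ci.yml` sketch for a Node/Jest setup like the one discussed at 05:29:18 (job names and paths are illustrative, not taken from the ultraviolet pipeline):

```yaml
# Illustrative .gitlab-ci.yml fragment: both jobs share one cache keyed on the
# lockfile, so node_modules can survive between jobs *if* the runners share a
# cache backend (e.g. an S3-style distributed cache).
default:
  cache:
    key:
      files:
        - package-lock.json        # new lockfile => new cache key
    paths:
      - node_modules/

install:
  stage: build
  script:
    - npm ci                       # populates node_modules, pushed to cache

test_jest:
  stage: test
  script:
    # per the 06:25:51 message, the cache may be absent: reinstall if missing
    - '[ -d node_modules ] || npm ci'
    - npx jest
```

The guard in `test_jest` reflects the caveat above: a cache is an optimization, never a guarantee, so jobs should still work from a cold start.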
[17:40:08] 10GitLab (CI & Job Runners), 10serviceops-collab, 10Release-Engineering-Team (Radar): Cleanup and attach volumes for gitlab-runners WMCS project - https://phabricator.wikimedia.org/T328283 (10thcipriani)
[17:41:49] 10GitLab (CI & Job Runners), 10CirrusSearch, 10Discovery-Search, 10Release-Engineering-Team (Blocking 🧱): Consider adding "official" docker.elastic.co images to the list of allowed images for gitlab runners - https://phabricator.wikimedia.org/T327519 (10thcipriani) a:03brennen
[17:42:36] 10GitLab (CI & Job Runners), 10mwcli, 10Release-Engineering-Team (Blocking 🧱): Add registry.gitlab.com/dependabot-gitlab/dependabot to list of allowed images for gitlab runners - https://phabricator.wikimedia.org/T326507 (10thcipriani) a:03brennen
[17:45:30] 10GitLab (CI & Job Runners), 10serviceops-collab, 10Release-Engineering-Team (Radar): Cleanup and attach volumes for gitlab-runners WMCS project - https://phabricator.wikimedia.org/T328283 (10thcipriani)
[17:46:38] 10GitLab, 10MediaWiki-extensions-Gadgets, 10Release-Engineering-Team, 10Security-Team, 10Security: Allow Javascript files from Wikimedia GitLab to be loaded as scripts in Wikimedia wikis - https://phabricator.wikimedia.org/T321458 (10brennen)
[17:50:19] 10GitLab (Administration, Settings & Policy), 10Release-Engineering-Team, 10Upstream: Gitlab make public the default fork option - https://phabricator.wikimedia.org/T324016 (10brennen) p:05Triageβ†’03Low I haven't looked but I'm guessing this is a question for upstream.
[17:51:31] 10GitLab (Administration, Settings & Policy), 10Release-Engineering-Team (Seen), 10Upstream: Gitlab make public the default fork option - https://phabricator.wikimedia.org/T324016 (10brennen)
[17:54:35] 10GitLab (Project Migration), 10serviceops-collab, 10Release-Engineering-Team (Radar): Create new GitLab project group: wm-juniors-il - https://phabricator.wikimedia.org/T313750 (10brennen)
[18:35:15] 10GitLab (Project Migration), 10serviceops-collab, 10Release-Engineering-Team (Radar): Create new GitLab project group: wm-juniors-il - https://phabricator.wikimedia.org/T313750 (10Dzahn) 05In progressβ†’03Stalled setting to stalled due to lack of response
[22:11:07] 10GitLab (Infrastructure), 10serviceops-collab: Migrate gitlab-test instance to bullseye - https://phabricator.wikimedia.org/T318521 (10Dzahn) >>! In T318521#8552699, @hashar wrote: > Puppet fails on the instance `gitlab-prod-1002`, from today email: This is fixed now. Puppet run does not fail any longer: No...
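For reference, the distributed cache mentioned at 07:15:13 and 16:49:01 is configured on the runner side (in the runner's `config.toml`), not in individual projects. A hedged sketch of what such a section might look like, assuming an S3-compatible object store; the runner name, endpoint, bucket, and credentials below are placeholders:

```toml
# Illustrative GitLab Runner config.toml fragment enabling an S3-compatible
# distributed cache. With this, cache archives are pushed to shared object
# storage instead of staying on one runner's local disk, so a later job on a
# different runner can pull the same cache.
[[runners]]
  name = "example-runner"               # placeholder
  [runners.cache]
    Type = "s3"
    Shared = true                       # share the cache between runners
    [runners.cache.s3]
      ServerAddress = "s3.example.org"  # placeholder endpoint
      BucketName = "runner-cache"       # placeholder bucket
      AccessKey = "CHANGE-ME"
      SecretKey = "CHANGE-ME"
      Insecure = false
```

With `Shared = true` and a common bucket, the "left behind on one runner" problem described at 05:29:18 goes away, at the cost of upload/download time per job.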