[08:30:58] GitLab, Data-Engineering, Release-Engineering-Team: Improve speed of Gitlab CI - https://phabricator.wikimedia.org/T311111 (hashar) + #gitlab since there is surely caching optimization that would need to be added. It looks like building the image takes a while. Probably related, last week we had Gi...
[08:39:30] GitLab (CI & Job Runners), Release-Engineering-Team, User-brennen: GitLab runners: allowed_images patterns need to be loosened to include subdirectories - https://phabricator.wikimedia.org/T310535 (mmartorana) Hi @brennen, it seems that the issue is resolved now. Thank you so much.
[08:52:26] GitLab, Data-Engineering, Release-Engineering-Team, Performance Issue: Improve speed of Gitlab CI - https://phabricator.wikimedia.org/T311111 (Reedy)
[08:59:47] GitLab (CI & Job Runners), Release-Engineering-Team, User-brennen: GitLab runners: allowed_images patterns need to be loosened to include subdirectories - https://phabricator.wikimedia.org/T310535 (Jelto) I can confirm, the pipeline works again for me. Re-configuring the Cloud Runners was blocked because o...
[09:35:46] I have switched the `gitlab-prod-1001.devtools.eqiad1.wikimedia.cloud` instance to use the project Puppet master `puppetmaster-1001.devtools.eqiad1.wikimedia.cloud`
[09:35:58] because puppet was not running on that instance
[09:36:59] GitLab, Data-Engineering: Experiencing pipeline failure due to disk-space issues - https://phabricator.wikimedia.org/T310593 (Jelto) Docker cache is cleaned every 24h on GitLab Runner nodes now. So failing jobs due to a full docker volume should happen less frequently.
[09:42:04] GitLab, Data-Engineering: Experiencing pipeline failure due to disk-space issues - https://phabricator.wikimedia.org/T310593 (hashar) Open→Resolved For the scope of this task, that solves the issue.
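For context on the `allowed_images` fix discussed in T310535 above: GitLab Runner's Docker executor restricts which images jobs may pull via `allowed_images` patterns in `config.toml`, and a `**` wildcard is what lets images in registry subdirectories match. The sketch below is illustrative only; the registry host and patterns are assumptions, not the actual production allow-list:

```toml
# Sketch of a GitLab Runner config.toml [runners.docker] section.
# The "**" wildcard matches images nested under subdirectories
# (e.g. registry/releng/some-image), not just top-level image names.
[[runners]]
  [runners.docker]
    # illustrative patterns; not the real Wikimedia runner configuration
    allowed_images = [
      "docker-registry.wikimedia.org/*:*",
      "docker-registry.wikimedia.org/**/*:*",
    ]
```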
Additional tasks can be filed to keep the cache longer, potentially share it across runners, etc.
[09:42:27] jelto: I have closed the disk space issue task, looks like cleaning the cache volumes on a daily basis solves it immediately
[09:43:13] looks like the way to go is to have the runners use S3/Swift for storing the caches
[09:43:56] hashar: great :) Keep in mind the job will run at 5am UTC, so it takes until tomorrow to take effect. Long term I would also like to use some external storage if possible :)
[09:44:09] sounds good :]
[10:05:12] GitLab, Data-Engineering: Experiencing pipeline failure due to disk-space issues - https://phabricator.wikimedia.org/T310593 (Antoine_Quhen) Thanks! Also for space & speed, we may not be using the CI cache properly: * https://docs.gitlab.com/ee/ci/caching/ * https://phabricator.wikimedia.org/T311111
[12:07:50] GitLab (Administration, Settings & Policy), Patch-For-Review, Release-Engineering-Team (GitLab-a-thon 🦊), cloud-services-team (Kanban): gitlab: consider enabling docker container registry - https://phabricator.wikimedia.org/T304845 (Ottomata) Aw, that is unfortunate! We were really hoping for th...
[12:19:19] GitLab (Administration, Settings & Policy), Patch-For-Review, Release-Engineering-Team (GitLab-a-thon 🦊), cloud-services-team (Kanban): gitlab: consider enabling docker container registry - https://phabricator.wikimedia.org/T304845 (akosiaris) >>! In T304845#8019564, @Ottomata wrote: > Aw, that i...
[12:31:03] GitLab (Administration, Settings & Policy), Patch-For-Review, Release-Engineering-Team (GitLab-a-thon 🦊), cloud-services-team (Kanban): gitlab: consider enabling docker container registry - https://phabricator.wikimedia.org/T304845 (Ottomata) Hm, yeah @hashar had told me not to do this, because t...
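For context on the S3/Swift idea above: GitLab Runner supports a distributed cache on any S3-compatible store via the `[runners.cache]` section of `config.toml`, which would replace the per-node local volumes that were filling up. A minimal sketch, where every hostname, bucket name, and credential is a placeholder assumption:

```toml
# Sketch: shared distributed cache on an S3-compatible store,
# instead of per-runner local Docker volumes.
[[runners]]
  [runners.cache]
    Type = "s3"
    Shared = true                          # let all runners share one cache
    [runners.cache.s3]
      ServerAddress  = "s3.example.org"    # placeholder endpoint
      BucketName     = "gitlab-runner-cache"
      BucketLocation = "us-east-1"
      AccessKey      = "PLACEHOLDER_KEY"
      SecretKey      = "PLACEHOLDER_SECRET"
```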
[12:52:35] GitLab, Data-Engineering, Release-Engineering-Team, Performance Issue: Improve speed of Gitlab CI - https://phabricator.wikimedia.org/T311111 (Ottomata) Also relevant: {T304845} discussion at https://phabricator.wikimedia.org/T304845#8017565
[17:45:31] fyi: I just created gitlab-runner1003 in the devtools project, so we have a total of two runners for the test instance. One serves as an untrusted runner and one as a trusted one. I'll configure them tomorrow.
[18:04:21] jelto: thanks. let me know if there's anything i can help with.
[19:26:12] ACK, i'm glad this worked out, quota-wise. we are at the limit. this is good though
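On the job side, the caching documentation linked earlier (https://docs.gitlab.com/ee/ci/caching/) comes down to declaring a `cache:` entry in `.gitlab-ci.yml`, keyed so the cache is reused until dependencies change. A minimal sketch for a hypothetical Node.js job; the job name, image, and paths are illustrative, not taken from any Wikimedia pipeline:

```yaml
# Sketch: re-use downloaded dependencies across pipeline runs.
# The cache key changes only when package-lock.json changes.
build:
  image: node:16
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
  script:
    - npm ci
    - npm run build
```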