[10:26:20] GitLab (Administration, Settings & Policy), Security-Team, ContentSecurityPolicy, Patch-For-Review, and 4 others: Define a Content Security Policy for GitLab - https://phabricator.wikimedia.org/T285363 (hashar) In progress→Resolved
[13:43:16] GitLab (CI & Job Runners), Release Pipeline, Security-Team, Release-Engineering-Team (Doing), and 2 others: Figure out the future of (or replacements for) PipelineLib in a GitLab world - https://phabricator.wikimedia.org/T287211 (hashar)
[14:12:18] GitLab (Administration, Settings & Policy), Security-Team, ContentSecurityPolicy, Release-Engineering-Team (Doing), and 3 others: Define a Content Security Policy for GitLab - https://phabricator.wikimedia.org/T285363 (sbassett)
[14:59:40] jelto: I have to make a phone call, I am unlikely to attend the gitlab IC sync meeting
[14:59:49] hashar: ack
[16:42:28] Ah ha! I figured out what was going on with my MR pipelines.
[16:42:38] I had an included job that had when: manual set
[16:43:00] which was basically an "else" rule of when: manual, which meant that job was always included in every pipeline, including MR ones.
[16:43:43] that caused the MR pipeline to be created with a manual job, but since the test jobs' rules did not explicitly run in MR pipelines, the test jobs weren't declared in that pipeline
[16:44:05] i just added a rule to not declare my manual job in MR pipelines, and now no MR pipeline is created.
[16:44:13] which is in hindsight what I wanted in the first place.
[16:49:16] ottomata: thanks for the update - glad it made sense in the end. :)
[16:57:05] same thing that happened a few days ago: gitlab-prod-1001.devtools is down; rebooting it now to fix that
[17:05:43] and ..it's back again. just like last time
[17:12:12] the thing that was running before the reboot was "cronjob:database_batched_background_migration_ci_database"
[17:12:20] but that doesn't mean it has to be the cause
[17:14:04] hrm.
[17:16:55] it's not like I see any obvious errors in syslog
[17:17:00] it was doing stuff apparently
[17:17:15] but you can't ssh to it when that happens
[17:17:25] and a 'soft' reboot is sufficient to bring it back
[17:25:52] GitLab (Infrastructure), DC-Ops, SRE, ops-eqiad, serviceops: Q3:(Need By: TBD) rack/setup/install gitlab100[3|4] and gitlab-runner100[2|3|4] - https://phabricator.wikimedia.org/T301177 (Dzahn) Tyler, Brennen, added you here per our meeting today. So that you can see status of the physical hos...
[17:27:16] not being able to ssh to it feels pretty weird.
[17:28:17] I notice this because I get the email that says "puppet failed on this host"
[17:28:33] but then when you try to check why.. it's not reachable
[17:29:06] I assume that's a small bug in itself that it claims puppet failed when in reality it just can't check
[17:29:29] but.. it does point to an issue..so..yea
[18:16:48] GitLab (Infrastructure), DC-Ops, SRE, ops-codfw: Q3:(Need By: TBD) rack/setup/install gitlab200[2|3] and gitlab-runner200[2|3|4] - https://phabricator.wikimedia.org/T301183 (Papaul)
[19:40:56] after reimaging a gitlab-runner: PANIC: mkdir /home/gitlab-runner: permission denied whut:P
[21:05:30] GitLab (Integrations), Wikimedia-Interwiki-links, Release-Engineering-Team (Radar): Evaluate an interwiki to WMF GitLab - https://phabricator.wikimedia.org/T305755 (valerio.bozzolan) I've seen that the `gitlab:` interwiki prefix was created! Now let's wait for the cache update.
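
A minimal `.gitlab-ci.yml` sketch of the rules fix discussed at 16:42-16:44 above. The job name, script, and branch condition here are hypothetical; only the rule structure (a trailing bare `when: manual` acting as an "else", and an added rule excluding the job from merge request pipelines) reflects the discussion.

```yaml
# Hypothetical job illustrating the problem and the fix.
#
# Before the fix, the rules ended with a bare `when: manual`, which matches
# every pipeline source, including merge_request_event. That was enough for
# GitLab to create an MR pipeline containing only this manual job, while
# jobs whose rules never match merge_request_event (the test jobs) were
# left out of that pipeline.
deploy:
  script: ./deploy.sh   # placeholder script
  rules:
    # Added rule: never declare this job in merge request pipelines.
    # With no job matching merge_request_event, no MR pipeline is created.
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    # Example "normal" condition (hypothetical): run on the default branch.
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: on_success
    # Original fallback "else" rule: make the job manual everywhere else.
    - when: manual
```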