[08:00:41] Cteam: welcome to today 🦄! Don't forget to post your update in the thread.
[08:00:41] Feel free to include:
[08:00:41] 1. 🕫 Anything you'd like to share about your work
[08:00:41] 2. ☏ Anything you'd like to get help with
[08:00:41] 3. ⚠ Anything you're currently blocked on
[08:00:41] (this message is from a toolforge job under the admin project)
[18:14:06] Done:
[18:14:06] * [toolforge] created the new project board for iteration 17
[18:14:06] * [ceph] tried to drain a node (cloudcephosd1012) to reimage it and upgrade the OS; this caused an NFS hiccup on tools
[18:14:06] * [ceph graphs] moved another graph to gnmi; only one left (drops from routers)
[18:14:06] * [cloud prometheus] a param missing from cloud.yaml was breaking puppet runs on the prometheus VMs
[18:14:07] Doing:
[18:14:07] * [review patches] started but didn't finish yet :/; alerts and the NFS hiccup took most of today's free time
[18:14:08] * [jobs-emailer] merge the patches to enable monitoring
[18:14:08] * [toolforge,bastion container] need to pick this up again, now that the toolforge clis can load configs from the environment
[18:14:09] * [openstack,nova-api-metadata] need to look into why there are no logs in logstash for that service
[18:14:09] * [dell,ceph hard drives] got a reply yesterday afternoon asking for some info; I have to gather it and reply
[18:14:10] * [ceph,qos] will try it out tomorrow (with andrewbogott and cmooney); it might prevent NFS hiccups like today's (if the root cause was network saturation)
[18:14:26] Blockers:
[18:14:31] * None
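[18:16:02] re: the bastion container item — the "load configs from the environment" pattern for a CLI can be sketched roughly like this (variable names TOOLFORGE_API_URL / TOOLFORGE_TIMEOUT are made up for illustration, not the actual toolforge cli settings):

```python
import os

# Hard-coded fallback defaults; environment variables override them.
# Both the variable names and the default values are hypothetical.
DEFAULTS = {
    "api_url": "https://example.invalid/api",
    "timeout": 10,
}

def load_config(environ=os.environ):
    """Build a config dict, letting environment variables override defaults."""
    config = dict(DEFAULTS)
    if "TOOLFORGE_API_URL" in environ:
        config["api_url"] = environ["TOOLFORGE_API_URL"]
    if "TOOLFORGE_TIMEOUT" in environ:
        # Env vars are strings, so cast numeric settings explicitly.
        config["timeout"] = int(environ["TOOLFORGE_TIMEOUT"])
    return config
```

[18:16:02] the point of that shape is that a container (e.g. a bastion image) can inject settings at runtime without baking a config file into the image.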