[16:49:00] I was talking with jayme, who kept seeing miscweb.svc checks on Icinga even though they should have been removed by now (see https://icinga.wikimedia.org/cgi-bin/icinga/status.cgi?search_string=miscweb.svc )
[16:49:31] A grep in /etc/{icinga,nagios} confirmed that there is no miscweb.svc reference anymore
[16:49:46] could it be that Icinga didn't get properly refreshed by puppet when needed?
[16:50:02] Would it be ok to issue a manual restart?
[16:50:38] sure go for it volans, perhaps even a test reload first?
[16:50:45] sure
[16:50:59] I'm digging up a task for this issue we had in the past, not sure if it's related in this case
[16:51:14] the icinga config was broken for a while by related changes
[16:51:18] not sure if that could explain it
[16:51:31] (there was a long chat about it between -operations and -sre)
[16:51:59] mmhh so this is the task I had in mind https://phabricator.wikimedia.org/T263027 though I'm not sure that's it
[16:52:37] it is true that the change/notify pattern in puppet kinda breaks when the notify has failed and the file doesn't change again
[16:52:48] so the notify has to "wait" for the file to change again before running
[16:52:55] subsequent puppet runs were noop
[16:53:07] so it might not have been reloaded since, that was my bet at least
[16:53:13] issued reload
[16:53:19] yeah that'd explain it I think
[16:53:23] all checks are gone
[16:53:39] jayme: all good! you're now without monitoring :-P enjoy ;)
[16:54:09] ack, thanks
[16:54:44] reload was enough fwiw
[16:57:27] cheers
[19:19:22] hiya, i'm working on upgrading nodes to bullseye in deployment-prep
[19:19:35] i see there is a kafka logging cluster there that uses the main-deployment-prep zookeeper
[19:19:38] great.
[19:19:44] i am upgrading that zookeeper cluster now
[19:19:45] however
[19:19:53] in project puppet hiera, there is
[19:20:00] zookeeper_clusters:
[19:20:00]   logging-deployment-prep:
[19:20:00]     hosts:
[19:20:00]       deployment-zookeeper02.deployment-prep.eqiad.wmflabs: '2'
[19:20:09] I don't see this 'logging-deployment-prep' zk cluster anywhere
[19:20:17] perhaps it can just be removed from hiera?
[19:29:46] i'm going to be bold and do this ^ i believe it will be fine.
[19:30:00] if there are issues with some kafka logging stuff in deployment-prep, I am probably the cause.
[19:31:25] ok thx for the heads up ottomata
[19:31:58] ottomata: if an issue in pipeline ingest crops up, https://beta-logs.wmcloud.org/ will show it pretty quickly.
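
A minimal Puppet sketch of the change/notify pattern discussed at [16:52:37]-[16:52:55]; the resource names, file path, and template are illustrative assumptions, not the actual Icinga manifests:

    # The file resource notifies the Icinga service whenever its content changes.
    file { '/etc/icinga/conf.d/puppet_checks.cfg':
      ensure  => file,
      content => template('icinga/puppet_checks.cfg.erb'),  # illustrative template name
      notify  => Service['icinga'],
    }

    service { 'icinga':
      ensure => running,
    }

    # The Service['icinga'] refresh (reload) only fires in a run where the file
    # actually changes. If that refresh fails, later runs are no-ops until the
    # file content changes again, so stale checks stay loaded until someone
    # reloads Icinga by hand, which is what the manual reload above fixed.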