[07:22:31] hi
[10:02:12] ./*.err {
[10:02:12] monthly
[10:02:14] rotate 12
[10:02:15] dateext
[10:02:17] compress
[10:02:18] delaycompress
[10:02:20] missingok
[10:02:21] notifempty
[10:02:23] }
[10:02:24] ./*.out {
[10:02:26] monthly
[10:02:27] rotate 12
[10:02:29] dateext
[10:02:30] compress
[10:02:32] delaycompress
[10:02:33] missingok
[10:02:35] notifempty
[10:02:36] }
[10:02:38]
[10:02:39] I currently have this. Tried running it but I got this error:
[10:02:41]
[10:02:42] 1 lines must begin with a keyword or a filename (possibly in double quotes)
[10:21:39] try using double quotes like `"./*err" "./*out" {`
[10:21:50] Klein ^
[10:27:46] Klein: just added a note on the docs (tested) https://wikitech.wikimedia.org/wiki/Help:Toolforge/Jobs_framework#Pruning_all_logfiles_at_once
[10:40:55] Ah, yes. Now it works! Thank you!
[10:40:56]
[10:40:57] Do I put this in my yml file to load it with my other jobs or do I just do a one time @daily run from the terminal?
[11:03:09] # smallem-logrotate
[11:03:09] - name: logrotate-smallem
[11:03:11] command: logrotate -v ./logrotate-smallem.conf
[11:03:12] state: ./logrotate-smallem.state
[11:03:14] image: mariadbv
[11:03:15] schedule: "@monthly"
[11:03:17] emails: all
[11:03:18]
[11:03:20] I created this yml entry. Hope I've done it correctly. Wasn't sure about the separation of the command and state arguments.
[11:09:28] Klein: yep, 'state' is not recognized there, you should put everything in the command, like `command: logrotate -v $TOOL_DATA_DIR/logrotate-all.conf --state $TOOL_DATA_DIR/logrotate-all.state`
[11:14:42] Okay. Thank you very much for assisting me! :)
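For reference, a sketch of how the two fixes above could be combined; this is not the exact file from the channel. The quoted globs follow the 10:21 hint (logrotate allows several patterns to share one stanza), and the file names and $TOOL_DATA_DIR paths are assumptions modelled on the 11:09 advice:

    # logrotate-smallem.conf -- globs in double quotes, per the 10:21 suggestion
    "./*.err" "./*.out" {
        monthly
        rotate 12
        dateext
        compress
        delaycompress
        missingok
        notifempty
    }

and the matching jobs-framework entry, with 'state' folded into the command as advised at 11:09 (the mariadbv image name is copied unverified from the original entry):

    # smallem-logrotate
    - name: logrotate-smallem
      command: logrotate -v $TOOL_DATA_DIR/logrotate-smallem.conf --state $TOOL_DATA_DIR/logrotate-smallem.state
      image: mariadbv
      schedule: "@monthly"
      emails: all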
[11:30:38] !log lucaswerkmeister@tools-sgebastion-10 tools.stashbot ./bin/stashbot.sh restart # quit IRC a few minutes ago, cause unknown
[11:31:53] let’s just re-log that, I think it hadn’t joined yet
[11:31:56] !log lucaswerkmeister@tools-sgebastion-10 tools.stashbot ./bin/stashbot.sh restart # quit IRC a few minutes ago, cause unknown
[11:31:59] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stashbot/SAL
[11:32:04] yay
[13:21:49] !log anticomposite@tools-sgebastion-10 tools.stewardbots ./stewardbots/StewardBot/manage.sh restart # disconnected
[13:21:50] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[13:21:55] !log anticomposite@tools-sgebastion-10 tools.stewardbots SULWatcher/manage.sh restart # SULWatchers disconnected
[13:21:56] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[13:45:10] !log tools migrating toolforge.org floating IP from tools-proxy-06 to tools-proxy-7 T361223
[13:45:14] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[13:45:15] T361223: Upgrade Toolforge front proxies to Bookworm - https://phabricator.wikimedia.org/T361223
[13:46:16] oops, it'd help if I assigned the correct security groups to the new instance
[13:50:40] I always forget that too
[13:53:18] maybe I need to add a flag to the wmcs.vps.create_instance_with_prefix cookbook that copies the security groups from the previous instance
[13:59:04] Would be handy!
[14:00:12] btw I'm replacing (or trying to replace) the etcd cluster in toolsbeta. We'll see how it goes.
[14:06:37] andrewbogott: i think the cookbooks for that are up-to-date, so in theory the main thing to worry about is the etcd version bump in new debian releases
[14:10:30] ok, cool. Would you anticipate it being able to cluster across that version boundary?
[14:13:37] good q. it might be smart to go from buster (3.2) to bullseye (3.3) first, and only then to 3.4. read the upstream upgrade notes first ofc
[14:16:50] yep, docs definitely say you can't skip from .2 to .4
[14:16:53] So, bullseye it is!
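A hedged sketch of how each hop of the 3.2 -> 3.3 -> 3.4 path could be checked after upgrading one member at a time; the endpoint names are placeholders, and the subcommands are standard etcdctl v3:

    # confirm every member is healthy after upgrading one node
    ETCDCTL_API=3 etcdctl --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 endpoint health

    # show each member's server version; move to the next minor only once all members agree
    ETCDCTL_API=3 etcdctl --endpoints=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 endpoint status -w table

etcd only enables a new minor version's features once every member is running it, which is why the upstream upgrade notes forbid skipping straight from 3.2 to 3.4.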
[14:29:53] andrewbogott: please tag your SALs against T349207 and not T360699
[14:29:53] T349207: [infra] Upgrade Toolforge K8s etcd nodes to Bullseye - https://phabricator.wikimedia.org/T349207
[14:29:54] T360699: Toolsbeta: migrate to Debian Bullseye or later - https://phabricator.wikimedia.org/T360699
[14:30:04] 'k
[19:11:48] bd808 no hurry at all, but I added you to T360724 (building a shared opensearch cluster). Toolhub indices will probably go there, so if you have any feedback feel free to add
[19:11:49] T360724: Gather requirements for shared opensearch cluster - https://phabricator.wikimedia.org/T360724
[19:31:03] !log auditlogging replaced puppetmaster04 with auditlogging-puppetserver-1; 04 is shut down and can be deleted later.
[19:31:06] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Auditlogging/SAL
[19:48:27] !log mediawiki-vagrant replaced mwv-puppetmaster with mwv-puppetserver-01
[19:48:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Mediawiki-vagrant/SAL
[20:07:05] inflatador: cool. what do you need from me for that? I can probably find some description of the tiny little index that Toolhub uses in some past task.
[20:18:00] dduvall, have a moment? I'm replacing the puppetserver in gitlab-runners, seeing an odd issue with profile::gitlab::runner::token
[20:22:48] andrewbogott: he's out on sabbatical
[20:22:59] hm, ok
[20:23:07] then I wonder who... knows about gitlab runners
[20:23:21] pop into #wikimedia-gitlab and ask?
[20:23:37] good idea
[20:23:55] I would hope jelto knows something...
[20:25:07] Yeah, I was hoping for someone in our timezone though
[20:50:42] ok, I'm glad I asked! Apparently you can put secrets in /etc/puppet/secret/hieradata as well as in /var/lib/git/labs/private
[20:50:56] I hope no one else is doing that
[20:53:02] !log gitlab-runners replaced gitlab-runners-puppetmaster-01 with gitlab-runners-puppetserver-01 and moved secrets to /srv/git/labs/private. Shut down gitlab-runners-puppetmaster-01 and it can be deleted once projectadmins are convinced that all is well.
[20:53:04] bd808 we're early in the process, if you have any "nice-to-haves" (backup/restore? dashboards?) or hard requirements (disk space?) feel free to add
[20:53:05] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Gitlab-runners/SAL
[20:53:43] inflatador: :nod: My needs are tiny and boring, but I'll watch the ticket and try to help out where I can
[20:54:39] andrewbogott: I'm tempted to submit a puppet patch that would empty that directory and keep it empty
[20:55:01] hm, not a bad idea
[20:55:11] nobody needs yet another place to hide hieradata
[20:55:13] Tiny and boring works ;) . I'll keep ya posted
[20:56:09] inflatador: If we had PVCs in the prod k8s cluster I would seriously just have a one-node opensearch cluster for the project. It is that tiny and boring. :)
[20:57:14] bd808 that's phase 2 ;) . b-tullis has a ceph cluster, eventually we'll get to PVCs
[20:57:46] bd808: actually, in theory that's where /var/lib/labs/private was deployed to. So ...
[20:57:53] * andrewbogott is in 'how did this even work before' land
[20:58:34] Ah, I'm wrong, there's both /etc/puppet/secret and /etc/puppet/private
[20:58:44] I think I can just remove secret from the lookup tree
[20:59:43] (jargon explainer: PVC == persistent volume claim. Basically a little hard drive that you can attach to and detach from a Kubernetes Pod)
[21:14:50] bd808: https://gerrit.wikimedia.org/r/c/operations/puppet/+/1015392 apparently lots of people are using that 'feature'. Or at least one person in a bunch of projects
[21:14:57] But I'm still against it!
[21:35:30] andrewbogott: thanks for tilting at the windmill
[21:39:39] !log added toyofuku to deployment-prep
[21:39:40] tgr: Unknown project "added"
[21:46:53] tgr: you need a project label for a !log in this channel. `!log deployment-prep ...` in that case ^
[22:27:39] !log deployment-prep added toyofuku to deployment-prep
[22:27:42] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/SAL
[23:45:58] !log auditlogging Deleted syslog-server-04 (Buster) per T127717#9671931
[23:46:01] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Auditlogging/SAL
[23:46:01] T127717: Move Cloud VPS auth.logs to central logging - https://phabricator.wikimedia.org/T127717
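As a footnote to the 20:59 jargon explainer, a PVC in manifest form would look roughly like this; the name, size, and access mode are illustrative only:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: opensearch-data   # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce       # mounted read-write by a single node
      resources:
        requests:
          storage: 10Gi       # illustrative size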