[10:56:57] Hello! Can someone tell me what would be the fastest way to have some files periodically deleted now that we've transitioned to Kubernetes? I just want to delete the .err/.out files for my scripts every 6 months or 1 year so they don't grow too big. I remember having a command in my crontab for that, which doesn't work anymore now that the grid is gone.
[11:03:10] Klein: you can use https://wikitech.wikimedia.org/wiki/Help:Toolforge/Jobs_framework#Pruning_log_files to rotate your log files
[11:09:46] let me know if you need help with the config file/syntax
[11:12:39] Thanks a lot! I have one general question before I start implementing it: what does rotate mean in this case? I'm not familiar with the term. Can you give me a very short explanation of what the rotate and copytruncate commands do? Just to get the general idea, as I will try to use GPT for further help.
[11:36:05] !log melos@tools-sgebastion-10 tools.stewardbots ./stewardbots/StewardBot/manage.sh restart # RC reader not reading RC
[11:36:46] !log melos@tools-sgebastion-10 tools.stewardbots SULWatcher/manage.sh restart # SULWatchers disconnected
[11:42:15] !log melos@tools-sgebastion-10 tools.stewardbots ./stewardbots/StewardBot/manage.sh restart # disconnected
[11:42:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.stewardbots/SAL
[11:47:24] !log admin deleting about 2k stale puppet certs by running wmcs-puppetcertleaks in delete mode
[11:47:28] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[12:23:52] !log cloudinfra hard reboot cloudinfra-cloudvps-puppetserver-1
[12:23:53] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Cloudinfra/SAL
[13:16:51] Klein: It means that the log gets renamed to something like `mylog.1`, then `mylog.2`, every time the log is 'rotated' (every day, every week, ...), so you end up with `mylog` holding the logs since the last rotation, `mylog.1` the previous rotation, `mylog.2` the one before that, and so on, up to the retention configured
[15:10:54] my cron jobs have not been firing since 20:00 UTC yesterday. Anything wrong?
[15:16:20] Yetkin: nothing we are aware of. What's the tool? And one of the cron names that should have run?
[15:17:29] dcaro: Tool name is superyetkin. Job name is job-beyaz-liste
[15:17:37] ack, looking
[15:19:08] Thank you! I'll try implementing it soon. :)) (re @wmtelegram_bot: Klein: It means that the log gets renamed to something like `mylog.1`, then `mylog.2` every time the log is 'rotated' (e...)
[15:21:25] Yetkin: it seems that you are reaching the quota (15 running crons/jobs out of 15, and almost 8G of RAM used out of the 8G available)
[15:21:34] https://www.irccloud.com/pastebin/uIkvNV3f/
[15:22:49] hmm, let me look at the running one-off and cron jobs
[15:23:37] any reason for me to reach the quota?
[15:26:20] Yetkin: your jobs are using too much memory; that made all the crons that tried to start unable to do so (`toolforge jobs list` shows the ones that are currently failing due to memory), and that in turn made the queue of 'trying to run' jobs fill up the 15-job quota, so other crons will not even try
[15:29:55] dcaro: How can I find which job(s) are causing the issue? Can you give me the list of jobs running now?
[15:31:43] looking
[15:55:08] Yetkin: found it, you have two shell sessions requesting 2G of RAM each and limited to 4G
[15:55:20] that's what's making all the jobs get stuck
[16:15:36] dcaro: how to stop them?
[16:16:01] Yetkin: you are not connected to them anymore?
[16:19:31] no, I started them to run one of my PHP scripts. I no longer need them
[16:21:58] if so, you can list them with `kubectl get pods` and delete them with `kubectl delete pods -l app.kubernetes.io/component=webservice-interactive`
[16:26:59] thanks for that. How can I see the amount of RAM my jobs are using at the moment?
[16:27:46] Yetkin: there's a `toolforge jobs quota` command that will show the total usage
[16:48:57] thanks man
[16:51:07] yw
[18:07:17] With the help of GPT I was able to put together this content for the log-pruning config file:
[18:07:18]
[18:07:20] ./smallem-wp.sh.err {
[18:07:21]     monthly
[18:07:23]     rotate 12
[18:07:24]     dateext
[18:07:26]     compress
[18:07:27]     delaycompress
[18:07:29]     missingok
[18:07:30]     notifempty
[18:07:32] }
[18:07:33] ./smallem-wp.sh.out {
[18:07:35]     monthly
[18:07:36]     rotate 12
[18:07:38]     dateext
[18:07:39]     compress
[18:07:41]     delaycompress
[18:07:42]     missingok
[18:07:44]     notifempty
[18:07:45] }
[18:07:47] ./smallem-wq.sh.err {
[18:07:48]     monthly
[18:07:50]     rotate 12
[18:07:51]     dateext
[18:07:53]     compress
[18:07:54]     delaycompress
[18:07:56]     missingok
[18:07:58]     notifempty
[18:08:00] }
[18:08:02] ... (continues with all my jobs - nvm the indentation)
[18:08:04]
[18:09:09] My jobs run once per month and I want to delete the logs once a year.
[18:46:53] Klein: nice, I think that you can use globs and more than one pattern per line, like `"./*.err" "./*.out" {`, if you don't want to repeat every job (and also catch new jobs)
[19:04:08] How can I do that?
[19:04:37] Ah, with regex. I didn't catch that at first.
[19:22:39] glob patterns (a single `*` matching anything), instead of regex (`.*`)
[19:30:51] Ooh! Okay, thank you! Will do that and try to automate it later. Do I need to chmod the config file after saving it?
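
[Editor's note] For reference, a minimal sketch of the consolidated config suggested above, using quoted glob patterns so one stanza covers every .err/.out file instead of repeating a block per job. The paths and retention are illustrative (matching the "monthly jobs, keep about a year" intent described in the conversation), not the exact file anyone used:

    # Rotate all .err/.out logs matched by the globs once a month and keep
    # 12 dated, compressed rotations (roughly one year of history).
    # Note: relative paths are resolved against logrotate's working directory,
    # so absolute paths (e.g. /data/project/<tool>/*.err) may be more robust.
    "./*.err" "./*.out" {
        monthly
        rotate 12
        dateext
        compress
        delaycompress
        missingok
        notifempty
    }

With delaycompress, the most recently rotated file stays uncompressed until the next rotation, which keeps it easy to read while older rotations are gzipped.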
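[Editor's note] And a hedged sketch of how the rotation could be automated as a scheduled job, along the lines of the wiki page linked earlier. The job name, image, state/config file paths, and schedule below are assumptions, not commands from the conversation; check `toolforge jobs images` for the images actually available and the wiki page for the currently recommended invocation. logrotate also needs a writable state file when run as a tool user, hence the --state flag:

    # Hypothetical example: run logrotate on the 1st of every month against a
    # config saved as $HOME/logrotate.conf (job name and image are assumptions,
    # and logrotate must be available in the chosen image).
    toolforge jobs run rotate-logs \
        --command "logrotate --state $HOME/logrotate.state $HOME/logrotate.conf" \
        --image bookworm \
        --schedule "0 0 1 * *"

As for the chmod question at the end: the config file only needs to be readable, since logrotate parses it rather than executing it, so no execute bit is required.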