[00:09:14] !log tools rebooting tools-prometheus-8
[00:09:17] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[12:58:43] how can I run a one-off job on tools that can access the mysql replica? I've tried several images, but none of them recognize sql, mariadb or mysql commands
[13:13:54] dungodung: there's a `mariadb` image that has mysql in it
[13:17:08] dcaro, I tried with it, but the job still fails and the .err file is not updated... also, the man page for toolforge-jobs doesn't list mariadb as a current image
[13:17:51] `toolforge jobs images` will list the currently available images (it's more up to date than the man page)
[13:18:18] what's the tool + command you are trying to run?
[13:19:28] huh, that command indeed does list mariadb
[13:19:34] I just need a result set from a query, which I put in a file
[13:19:40] so it's not really a tool
[13:21:02] but the bash command I'm trying is mysql --defaults-file=$HOME/replica.my.cnf -h enwiki.analytics.db.svc.wikimedia.cloud enwiki_p -N < /data/project/.../bla.sql > /data/project/.../bla.out
[13:25:04] this works for me
[13:25:12] https://www.irccloud.com/pastebin/kQOWdpQ4/
[13:25:18] tools.wm-lol@tools-bastion-13:~$ toolforge jobs run --command "mysql --defaults-file=\$TOOL_DATA_DIR/replica.my.cnf -h enwiki.analytics.db.svc.wikimedia.cloud enwiki_p -N < \$TOOL_DATA_DIR/test.sql > \$TOOL_DATA_DIR/test.sql.out" --image mariadb testsql
[13:28:32] it's possible that the query hangs and then the job gets killed? (or OOMs, or similar)
[13:29:03] yeah, it worked for me when I did a LIMIT 10
[13:29:19] I need a larger dataset though, so yeah, it probably gets killed
[13:34:33] a quick check shows that the number of entries matching that query is ~20,000,000; that's a varchar(255), so it might take ~9G of space... I'd recommend chunking it into pieces if possible instead of downloading it all at once
[13:35:22] alright, I guess I could do that
[13:35:28] (actually ~5G on the worst side, I think)
[13:40:18] 👍 thanks, I'd recommend avoiding big queries and preferring several smaller ones when it makes sense, yep
[13:43:39] yeah, thanks for the help! chunking the query into a million results per query did the trick; the final file is just 387MB
[13:45:24] 👍 nice, way smaller than I expected xd, titles use way less than all the 255 chars they have, I guess
[13:47:35] yeah
[13:49:26] hmm... ~20 chars or so? (~8% of the size) kinda makes sense xd
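
For reference, the one-off job flow discussed above boils down to the following sketch; the job name (sqljob) and the query/output file names are placeholders, not from the log:

    # List currently available images; per the discussion above, this is
    # more up to date than the toolforge-jobs man page.
    toolforge jobs images

    # Submit a one-off job with the mariadb image, which ships the mysql
    # client. A job's own stdout/stderr normally land in sqljob.out /
    # sqljob.err in the tool's home; here the query result is redirected
    # to a file instead.
    toolforge jobs run --command "mysql --defaults-file=\$TOOL_DATA_DIR/replica.my.cnf -h enwiki.analytics.db.svc.wikimedia.cloud enwiki_p -N < \$TOOL_DATA_DIR/query.sql > \$TOOL_DATA_DIR/query.out" --image mariadb sqljob

    # Check whether the job finished or was killed (e.g. OOM):
    toolforge jobs list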
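
A minimal sketch of the chunking approach that ultimately worked (one bounded query per iteration instead of a single ~20M-row query). The table and column names (page, page_id, page_title) are illustrative assumptions, since the actual query is not in the log:

    #!/bin/bash
    set -euo pipefail

    CHUNK=1000000                      # a million results per query, as above
    OUT="$TOOL_DATA_DIR/bla.out"
    TMP=$(mktemp)
    trap 'rm -f "$TMP"' EXIT
    : > "$OUT"                         # start with an empty output file

    last_id=0
    while :; do
      # Keyset pagination: bound each chunk by the last id already fetched;
      # this stays fast where LIMIT/OFFSET would degrade over ~20M rows.
      mysql --defaults-file="$TOOL_DATA_DIR/replica.my.cnf" \
        -h enwiki.analytics.db.svc.wikimedia.cloud enwiki_p -N \
        -e "SELECT page_id, page_title FROM page WHERE page_id > $last_id ORDER BY page_id LIMIT $CHUNK" > "$TMP"
      [ -s "$TMP" ] || break           # empty chunk: all rows fetched
      cat "$TMP" >> "$OUT"
      last_id=$(tail -n 1 "$TMP" | cut -f 1)
    done

Each query touches at most CHUNK rows, so no single run is large enough to hang or exhaust the job's memory the way the original all-at-once query apparently did.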
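
The size estimates from the conversation check out roughly as follows (integer shell arithmetic, purely illustrative; the earlier ~9G guess would correspond to about 2 bytes per character, e.g. multi-byte UTF-8, but that is speculation):

    # Worst case: every title uses the full varchar(255) at 1 byte/char.
    echo $(( 20000000 * 255 / 1024 / 1024 / 1024 ))   # prints 4, i.e. ~5 GB

    # Actual: 387 MB over ~20M rows.
    echo $(( 387 * 1024 * 1024 / 20000000 ))          # prints 20 bytes/title, ~8% of 255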