[09:11:41] serviceops, Prod-Kubernetes, Release Pipeline, Patch-For-Review: Refactor our helmfile.d dir structure for services - https://phabricator.wikimedia.org/T258572 (JMeybohm)
[09:12:01] serviceops, Patch-For-Review: Sporadic issues on helm dependency build in CI - https://phabricator.wikimedia.org/T261313 (JMeybohm) Open→Resolved I'll close this as we have not seen errors like this for quite a while and all parallel usages of helm in the Rakefile use dedicated helm directories p...
[16:53:10] serviceops, Release Pipeline: Production buster-nodejs10-devel image has npm 5.x, which is not actually compatible with node 10.x - https://phabricator.wikimedia.org/T284112 (Jdforrester-WMF)
[19:58:32] would it be ok to put ..eh.. 3.6 GB of uncompressed static HTML into my container.. if the point of it is.. serving that content? but then.. is it ok to put all of them into a git repo to still use the pipeline to build it? it seems not, but that would be a problem for my project
[20:00:00] do I need to store that as a .tar.gz elsewhere and then pull it in and uncompress it inside the container?
[20:00:15] a Debian package, even?
[20:03:31] what's the content, out of curiosity? (no opinion, just interested)
[20:07:52] mutante: I think it would just be a bad fit to have all those files in a git repo
[20:08:15] apergos: static-bugzilla :)
[20:08:28] it's all the old bugs' text
[20:08:39] you could upload a tarball to one of releases.wm.o, download.wm.o, or swift and then have the container download it and uncompress it
[20:08:50] oh wow
[20:08:56] legoktm: yea, *nod*. so maybe a tarball, of course that will save > 95%
[20:09:07] and then in the Dockerfile I use run commands to pull it?
[20:09:10] yeah, why would there not be a copy on download.wm.o? sounds like a prime candidate
[20:09:12] and unpack it into the container?
[20:09:17] if it's all public, which I guess it is
[20:09:58] apergos: https://dumps.wikimedia.org/other/bugzilla/
[20:10:00] I mean, people apt-get install in their dockerfiles, why wouldn't you wget or curl something?
[20:10:03] but that is the DB
[20:10:03] see?
[20:10:08] the HTML could be next to that
[20:10:11] indeed
[20:10:37] I think wget over https + sha256sum integrity check would be secure
[20:10:45] legoktm: basically that was already my backup idea, thanks for confirming that more or less
[20:10:58] well, I did not have swift in mind
[20:11:10] but yea, apergos, I can totally upload that to misc dumps
[20:11:19] and then also use that to pull from? heh
[20:11:47] this still requires hosting a 3GB docker image, which is probably not the best, but it should rarely change, only when the base image changes
[20:11:50] compress it a lot, dumps.wm.o has bw caps for download
[20:11:52] apergos: but "apt" is already a thing in the Blubber language and wget is not :p
[20:11:57] that is basically the thing
[20:12:07] oh. well I dunno about Blubber, I was just thinking about Dockerfiles
[20:12:15] mutante: https://github.com/wikimedia/toolhub/blob/main/.pipeline/blubber.yaml#L140-L147 is a hacky way to do that kind of stuff with Blubber
[20:12:18] you can run arbitrary commands in Blubber, see https://gerrit.wikimedia.org/r/plugins/gitiles/wikimedia/irc/ircservserv-config/+/refs/heads/master/.pipeline/blubber.yaml
[20:12:28] yea, unfortunately I can't just edit Dockerfiles directly
[20:12:35] unless I start skipping the Blubber part
[20:12:46] thanks bd808 and legoktm
[20:12:47] (I totally copied my example from toolhub :))
[20:13:10] I would not find this on wikitech! thanks!
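A minimal sketch of what the wget-over-https + sha256sum approach discussed above could look like as a Blubber build step, assuming Blubber v4's builder.command support for arbitrary commands; the variant name, base image, tarball filename under https://dumps.wikimedia.org/other/bugzilla/, and checksum are hypothetical placeholders:

```yaml
# .pipeline/blubber.yaml (sketch, not a tested config)
version: v4
variants:
  fetch-static-bugzilla:
    base: docker-registry.wikimedia.org/buster   # placeholder base image
    apt:
      packages: [wget, ca-certificates]          # needed for the download step
    builder:
      command:
        - /bin/bash
        - -c
        - >-
          wget -O /tmp/static-bugzilla.tar.gz
          https://dumps.wikimedia.org/other/bugzilla/static-bugzilla.tar.gz &&
          echo "EXPECTED_SHA256  /tmp/static-bugzilla.tar.gz" | sha256sum -c - &&
          mkdir -p /srv/static-bugzilla &&
          tar -xzf /tmp/static-bugzilla.tar.gz -C /srv/static-bugzilla &&
          rm /tmp/static-bugzilla.tar.gz
```

The shell logic would be the same in a plain Dockerfile RUN line; the sha256sum -c check is what protects the artifact pulled over the network.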
[20:13:21] * bd808 abuses blubber for fun and profit
[20:15:34] also, can apache serve files out of a tarball directly?
[20:15:44] I know you can do some magic to have it serve precompressed gzip files
[20:17:25] hmm.. that's not a bad idea, I'll take a look
[20:26:23] it's easy magic in nginx to serve .gz from disk -- https://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
[20:27:18] bd808: I was thinking of somehow serving the files inside the tarball without needing to uncompress it
[20:27:35] maybe that's not actually that valuable, since docker images are also compressed in transit
[20:28:00] *nod* that would probably require some helper thing (an httpd module or a tiny app)
[20:29:13] but if it's a static archive of something like the old bugzilla... could be worth finding/building a little bitty golang service that would act as the httpd process
[20:29:20] if these were stored in a zim file (which uses some compression internally) you could use kiwix-serve, packaged in Debian too :)
[20:30:07] oh, good reminder for me to create the Rust base images today
[20:30:12] ;)
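For reference, a minimal sketch of the nginx "serve .gz from disk" trick mentioned at 20:26:23, using the ngx_http_gzip_static_module linked above; the server_name and root path are placeholders:

```nginx
# sketch of serving pre-compressed files with gzip_static
server {
    listen 8080;
    server_name static-bugzilla.example;   # placeholder
    root /srv/static-bugzilla;             # placeholder: the unpacked HTML tree

    location / {
        # if /srv/static-bugzilla/foo.html.gz exists, send it with
        # Content-Encoding: gzip instead of compressing foo.html on the fly
        gzip_static on;
        # "gzip_static always;" would send the .gz even to clients that
        # do not advertise gzip support
    }
}
```

The kiwix-serve alternative mentioned at 20:29:20 would avoid the webserver config entirely: pack the HTML into a ZIM archive and run something like `kiwix-serve --port 8080 static-bugzilla.zim` (filename hypothetical) as the container's entry point.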