[07:27:40] hello everybody
[07:28:02] I am trying to work a little with helm, and now I think that I am missing some pieces
[07:28:45] for example, do you have a specific workflow to split a giant yaml file into multiple ones automatically? (say, to populate the templates/ dir in helm)
[07:28:53] instead of doing it manually, I mean
[07:34:36] ok, I just learnt about jx gitops split/rename
[08:06:41] elukey: I usually just write pieces of the huge yaml as separate templates
[08:06:48] that I then include in the main structure
[08:07:40] see for instance https://gerrit.wikimedia.org/r/plugins/gitiles/operations/deployment-charts/+/refs/heads/master/charts/mediawiki/templates/deployment.yaml.tpl
[08:44:25] _joe_: very interesting, I didn't use tpl but it could be good for the knative serving use case. upstream provides two big yaml files: one with the CRDs, and one with all the rest. I created two charts, and in the latter I tried to break the yaml down into more "specialized" ones, but after some testing I think that ordering matters (for example I get some strange RBAC issues with kubeflow that
[08:44:31] disappear if I apply the single yaml file all at once via kubectl apply)
[08:44:46] err, joe :)
[08:45:43] I've read that helm does some ordering behind the scenes before applying, but it is probably not the one that knative wants
[08:46:23] possibly, yes
[08:46:31] anyway, it's ok if you just import the yamls
[08:46:36] and state their origin
[08:46:36] side note - the operator also requires applying a big yaml file (so, same problem), and it seems not production-ready according to upstream
[08:47:42] joe: ah, so basically I can import the big yaml under templates/ in the chart and add some templating (just to control values via values.yaml)?
[08:47:49] if so, my life would become way easier
[08:47:53] and upgrades too
[08:48:16] https://github.com/knative/serving/releases/download/v0.18.1/serving-core.yaml
[08:48:19] is basically the file
[08:48:23] plus https://github.com/knative/serving/releases/download/v0.18.1/serving-crds.yaml
[08:48:24] hello, for T284887 I noticed that there is also passwords::cxserver in the private puppet repo, but it seems unused. Is that legacy from the pre-k8s world, and can it be removed?
[08:49:02] in addition to profile::kubernetes::deployment_server_secrets::services, which AFAIK is the one to update
[08:49:05] correct me if I'm wrong
[08:49:18] volans: let me verify; it might be referenced somewhere in the hiera stuff
[08:49:37] but no, you're not wrong in principle
[08:50:10] ack, I'm not planning to delete it, just to add the new keys only to the k8s side
[08:50:22] or, if it's needed, to add them to both
[08:50:38] volans: definitely only to k8s
[08:50:54] great, thx
[08:52:24] volans: btw, the credentials blocks in the yaml are identical
[08:52:30] we could use yaml references there...
[08:52:42] between staging/codfw/eqiad, you mean?
[08:52:46] yes
[08:52:50] indeed
[09:01:27] volans: do you know how to get the changes applied?
[09:01:34] if not, I can guide you
[09:06:00] not all the gory details, *but* I don't have the secrets yet, and kartik already sent https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/699089
[09:15:12] ack, perfect, so you need to add the credentials to hiera
[09:15:54] then merge this patch, ensure puppet has run on the deployment host, and you can greenlight kartik for deployment
[09:19:14] great, simple enough. Will do.
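[Editor's note: the split-into-templates approach described above usually looks something like the sketch below: a piece of the big yaml lives in a named template, and the main manifest pulls it back in with `include`. This is a minimal illustration with hypothetical chart and value names (`mychart`, `main_app`), not the actual mediawiki chart.]

```yaml
# templates/_container.tpl -- one piece of the huge yaml, kept as a named template
{{- define "mychart.maincontainer" -}}
- name: {{ .Values.main_app.name }}
  image: {{ .Values.main_app.image }}
  ports:
    - containerPort: {{ .Values.main_app.port }}
{{- end }}

# templates/deployment.yaml -- the main structure includes the fragment,
# re-indented to fit the surrounding list
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        {{- include "mychart.maincontainer" . | nindent 8 }}
```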
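[Editor's note: the "just import the yamls and state their origin" suggestion boils down to dropping the upstream file under templates/ and templating only the handful of fields you want to control from values.yaml. A sketch of what the top of an imported serving-core.yaml could look like; the `.Values.knative.*` keys are made up for illustration.]

```yaml
# templates/serving-core.yaml
# Imported from:
# https://github.com/knative/serving/releases/download/v0.18.1/serving-core.yaml
# Everything below is upstream's yaml, except the templated fields.
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.knative.namespace | default "knative-serving" }}
  labels:
    serving.knative.dev/release: "v0.18.1"
```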
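[Editor's note: on the yaml-references idea for the identical credentials blocks: YAML anchors (`&`) and aliases (`*`) let the block be written once and reused per datacenter. A minimal sketch with made-up key names, not the real private hiera layout. One caveat: anchors only resolve within a single yaml file, which fits here assuming the staging/eqiad/codfw blocks live in the same file, as the conversation implies.]

```yaml
# Define the credentials once via an anchor...
cxserver_credentials: &cxserver_credentials
  jwt_secret: dummy-secret     # hypothetical keys, for illustration only
  mw_api_token: dummy-token

profile::kubernetes::deployment_server_secrets::services:
  staging:
    cxserver: *cxserver_credentials   # ...and alias it per environment
  eqiad:
    cxserver: *cxserver_credentials
  codfw:
    cxserver: *cxserver_credentials
```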
[09:22:25] serviceops, Continuous-Integration-Infrastructure, DC-Ops, netops: Flapping codfw management alarm ( contint2001.mgmt/SSH is CRITICAL ) - https://phabricator.wikimedia.org/T283582 (hashar) Looking at the alarm history since midnight UTC at https://icinga.wikimedia.org/cgi-bin/icinga/history.cgi?h...
[09:22:34] yeah, it is simple, you just need to know the sequence of things
[09:23:01] I'm starting to think all this stuff should really not come from puppet but from a private repository for services
[11:00:01] serviceops, SRE, docker-pkg: Refresh all images in production-images - https://phabricator.wikimedia.org/T284431 (Jelto) @Joe I think a //periodic job rebuilding the images// is implemented already. See [modules/docker/manifests/baseimages.pp#65](https://gerrit.wikimedia.org/r/plugins/gitiles/operat...
[11:07:34] serviceops, SRE, docker-pkg: Refresh all images in production-images - https://phabricator.wikimedia.org/T284431 (Joe) Hi @Jelto, this task is about `production-images`, which, in insider jargon, means the images built on top of the base layer that is already being rebuilt every Sunday. Those can be f...
[13:30:04] serviceops, SRE, docker-pkg: Refresh all images in production-images - https://phabricator.wikimedia.org/T284431 (Joe) So, getting into more details: - we usually build those images on deneb, using a script called `build-production-images`, which basically just runs docker-pkg from a virtualenv (see...
[14:02:17] serviceops, SRE, docker-pkg, Patch-For-Review: Refresh all images in production-images - https://phabricator.wikimedia.org/T284431 (JMeybohm)
[16:01:59] everyone: please add your notes to the SRE doc
[16:02:09] jayme / jelto / mutante