[09:37:43] hey ebernhardson o/ I just subscribed you to https://phabricator.wikimedia.org/T331894 as I added some more context and ideas. The problems you are facing with consuming zookeeper cluster details in helm are not new and I think we should maybe take a step back and look at the bigger picture rather than trying to invent yet another workaround. I hope this doesn't feel like not valuing the work you've already done - actually I think arguing
[09:37:44] about and optimizing the data structures used is very crucial on the way to a more generic solution
[09:38:56] everybody else's input is extremely welcome as well :)
[13:05:46] jayme: it implies that all services are behind something like lvs?
[13:06:41] dcausse: not really. actually none of them are (kafka, zookeeper, databases) :)
[13:07:31] LVS is less problematic as the IPs do not change usually, so there is less need for updates in the networkpolicies of particular charts
[13:08:58] sure, we have to research if e.g. a kafka client is OK contacting a random broker behind lvs for doing its "discovery" phase
[13:09:59] I perhaps misunderstood what is suggested
[13:12:41] I'm not suggesting putting them behind LVS but just the (iptables based) load balancing that kubernetes does by default. Also it's not required (even with what I suggested) to use those load balanced endpoints. Direct connections will still be possible. It's just that it would make life easier for applications not needing to know the names of kafka brokers (and update them) at all
[13:14:42] yes, having to specify broker ips manually is a major pain point
[13:22:45] oh when you say "main-eqiad.kafka.svc.cluster.local:9092" this is calico acting as loadbalancer for kafka-main?
[13:54:05] jayme: is this something you plan to work on in the near future? is there something you think we can help with?
[14:08:49] not calico (acting as LB), that is a standard k8s feature. it's the same as when you specify a service object for your pods (their IPs are the endpoints then)
[14:10:10] planning for it would be a bit of a stretch but I wanted to at least kick off a discussion as I feel we're sinking time into more and more workarounds instead of trying to fix the actual problem :)
[14:10:54] so for now I'm trying to get an idea whether other k8s folks would think this is utterly stupid, whether I forgot something, or whether there is a more clever way
[14:20:53] I don't understand k8s enough to judge but the few bits I understand seem sane to me. IIUC, instead of interpolating yaml files managed by puppet and doing app deploys to refresh these resources, there would be something listening for puppet changes and updating the k8s resources accordingly
[14:21:47] client apps would just reference a "logical name" of the resource they want to access
[15:39:11] yes, exactly. Although I have not thought about how to do that exactly. Maybe something as easy as a cronjob calling a helmfile deploy
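A rough sketch of the idea discussed above, purely for illustration - the namespace, resource names, broker IPs, schedule and image are all invented; only main-eqiad.kafka.svc.cluster.local:9092 appears in the conversation. A selector-less Service plus hand-managed Endpoints gives clients one stable logical name that kube-proxy load-balances across the brokers via iptables, and a CronJob re-running a helmfile apply is one possible way to keep those Endpoints in sync with the puppet-managed broker list:

# Stable logical name for the kafka-main brokers: a Service with no
# selector, so its Endpoints are supplied explicitly below.
apiVersion: v1
kind: Service
metadata:
  name: main-eqiad
  namespace: kafka
spec:
  ports:
    - name: plaintext
      port: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  name: main-eqiad        # must match the Service name
  namespace: kafka
subsets:
  - addresses:
      - ip: 10.64.0.10    # placeholder broker IPs
      - ip: 10.64.16.11
      - ip: 10.64.32.12
    ports:
      - name: plaintext
        port: 9092
---
# One way to refresh the Endpoints when puppet changes the broker list:
# a CronJob that simply re-runs a helmfile apply for the release that
# renders the Service/Endpoints above. RBAC and deploy credentials omitted.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: refresh-kafka-endpoints
  namespace: kafka
spec:
  schedule: "*/30 * * * *"          # placeholder interval
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: helmfile
              image: example.org/releng/helmfile:latest   # placeholder image
              command: ["helmfile", "-e", "eqiad", "apply"]

With something like that in place, a client would only need bootstrap.servers=main-eqiad.kafka.svc.cluster.local:9092; whether the brokers' advertised addresses are still reachable directly after the client's discovery phase is exactly the open question raised at 13:08:58.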
[16:01:36] jayme: thanks for poking at it, I'm all for finding a more generic way to handle this. I'm not particularly thrilled with the current solution either, although i'm starting to wonder if i should simply hardcode the appropriate strings into our test app because this is looking a bit large and i'm not trying to put more blockers in front of the thing we are actually trying to do :)
[16:02:09] i've been trying to avoid simply copying lists of kafka or zk brokers like i see in other charts, it just seems wrong, but i'm now seeing why people are taking the shortcut
[17:05:49] Interesting discussion, I just subscribed. I need to learn more about K8s networking in general... I come from nomad/host-based networking
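For contrast, a hypothetical chart-values sketch of the two shapes mentioned at 16:02:09 - the keys and broker hostnames below are made up for illustration:

# What charts copy around today: a puppet-derived broker list that every
# chart has to keep up to date by hand (hostnames are placeholders).
kafka:
  brokers:
    - kafka-main1001.eqiad.wmnet:9092
    - kafka-main1002.eqiad.wmnet:9092
    - kafka-main1003.eqiad.wmnet:9092

# What the logical-name approach would reduce it to: one stable name,
# with no chart change needed when brokers are added or replaced.
# kafka:
#   brokers:
#     - main-eqiad.kafka.svc.cluster.local:9092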