[10:38:22] if i want to remove a stale file from labs/private, is there a useful way to ensure that it doesn't break operations/puppet?
[10:40:39] git grep + pcc
[10:40:52] volans: how do i run pcc without a puppet CR?
[10:40:56] keep in mind WMCS too
[10:41:05] eh.
[10:41:36] ofc you can only run pcc after the fact of merging labs-private
[10:41:45] that's what i was afraid of
[10:42:30] i guess then git grep, submit labs-private, make a no-op puppet CR and run pcc... and hope?
[10:42:51] ah screw it. what could possibly go wrong.
[10:43:04] * Emperor prepares to buy kormat a t-shirt ;-)
[10:43:08] if you know where the file is used you can also check on the hosts with cumin, both in prod and wmcs
[10:43:27] kormat: does that file exist in puppet's private repo?
[10:43:33] if not it cannot affect prod
[10:43:33] volans: the file is hieradata/role/common/otrs.yaml. it was removed from the real private repo recently.
[10:43:51] so at most you can break WMCS, so I would concentrate there
[10:44:10] the whole role got renamed from otrs to vrts
[10:46:01] volans: out of interest, how _does_ one check for WMCS usage? it doesn't look like WMCS themselves have a way to check, either.
[10:47:25] kormat: https://openstack-browser.toolforge.org/puppetclass/ and https://wikitech.wikimedia.org/wiki/Cumin#WMCS_Cloud_VPS_infrastructure are the ones I know of
[10:47:32] for more ask directly the WMCS team ;)
[10:48:46] volans: as per https://gerrit.wikimedia.org/r/c/operations/puppet/+/751725/2#message-ed218f092f3380c79bb8d47a1181cb829a2ea05d, i don't think they have a better answer
[10:50:02] if you know what that file was converted to on the hosts by puppet, you could check those with cumin, that's my best answer
[10:50:25] for standalone puppetmasters with local patches that might use the same file in a different way... yes there is basically no way
[10:53:55] volans: also it seems i don't have access to the cloud-cumin hosts anyway 🤷‍♀️
[10:54:38] can I help?
[10:55:05] taavi: it's more of a theoretical discussion for now, so i think we're ok. but thanks :)
[11:09:28] ufff. puppet question: how can i access variables that are set in `hieradata/role/common/vrts.yaml` from a role that is _not_ vrts?
[11:10:50] mmm it seems like something that shouldn't happen, what is the use case?
[11:11:43] elukey: that's what i was afraid of :/ the vrts database password is set in that file. the vrts role obviously needs it, but so does profile::mariadb::grants::production
[11:12:42] i guess the answer is that it needs to be moved to... `hieradata/common.yaml`?
[11:14:40] probably better to move it to a profile that is included in both places
[11:14:49] but I would ask jbond :)
[11:15:42] yeah both things +1 :D
[11:36:27] kormat: you can place the password into hieradata/common/profile/vrts.yaml, in which case anything that uses profile::vrts in production will default to that value. then in profile::mariadb::grants::production you can lookup('profile::vrts::database_pass') directly (with the appropriate lintignore)
[11:38:20] this is slightly better than putting it in common.yaml as it keeps the vrts namespacing, however the password is still in the global hiera scope, meaning anything can look it up, i.e. any other profile could call lookup('profile::vrts::database_pass').
[11:39:23] in this case though i think that's fine, most mysql passwords are in the passwords class module, which has a similar issue
[11:39:41] further, many passwords in production are also in the global scope
[11:41:15] further, if you did want to restrict this password it would likely mean duplicating it in many mariadb roles (which would be worse for maintainability)
[11:41:28] as such i would go with putting it in hieradata/common/profile/vrts.yaml
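
(A minimal sketch of the layout jbond describes above, assuming the key name profile::vrts::database_pass and the file paths already mentioned in the conversation; the class bodies, the String type, and the $vrts_db_pass variable are illustrative, not the actual production manifests:)

    # hieradata/common/profile/vrts.yaml (in the appropriate hierarchy) would carry:
    #   profile::vrts::database_pass: <the password>

    # profile::vrts picks the value up as a normal class parameter default:
    class profile::vrts (
        String $database_pass = lookup('profile::vrts::database_pass'),
    ) {
        # ... configure VRTS using $database_pass ...
    }

    # profile::mariadb::grants::production reads the same key explicitly;
    # as noted above, the cross-profile lookup normally needs a lint-ignore comment.
    class profile::mariadb::grants::production {
        $vrts_db_pass = lookup('profile::vrts::database_pass')
        # ... render the GRANT for the vrts database user with $vrts_db_pass ...
    }
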
[11:48:55] jbond: Function lookup() did not find a value for the name 'netbox::api_url' (file: /etc/puppet/modules/profile/manifests/spicerack.pp, line: 25) on node cumin1001.eqiad.wmnet
[11:50:21] volans: ack looking
[12:09:44] volans: should be fixed now, sorry about that
[12:15:01] thank
[12:15:04] s
[12:25:26] https://github.com/memcached/memcached/wiki/Proxy - looks really nice
[12:25:48] the memcached maintainer called twemproxy and mcrouter "abandonware" after the above
[12:26:06] so it's good to keep in mind that there is something like the above
[14:59:13] I know TysonAndre mainly from the work on phan and php-core, interesting to see them here as well: https://github.com/memcached/memcached/pull/716
[15:18:24] <_joe_> calling mcrouter abandonware is, well
[15:18:32] <_joe_> interesting as a perspective
[15:19:26] <_joe_> elukey: where did he call mcrouter abandonware?
[15:20:33] I guess if one goes by the fact that the last tagged release was a few years back it can be considered abandoned, but seeing as the main branch is quite active that's a bit dissonant indeed
[15:20:56] <_joe_> that wiki is full of passive-aggressive slights at past implementations
[15:21:20] _joe_: the last email of the memcached google group
[15:22:10] <_joe_> if you're in that group, can you reply "Giuseppe Lavagetto says you're full of shit"
[15:22:28] :D
[15:22:39] <_joe_> mcrouter does all the right things in the right way, if only it were properly documented
[15:23:07] <_joe_> and it doesn't allow you to take the biggest footgun of all time and point it at your caching layer (lua-as-config-logic)
[15:23:16] this is where I passively-aggressively say "thanks for volunteering", right? :-P
[15:24:08] <_joe_> apergos: volunteering and doing fb an unpaid favor?
[15:24:20] <_joe_> I've documented the stuff we use on our systems :P
[15:24:20] got it in one!
[15:24:34] <_joe_> but on the other hand
[15:24:50] <_joe_> I would love to have a nice mc proxy that doesn't depend on fb to survive
[15:25:03] <_joe_> nor on their terrible c++ libraries (aptly named "folly")
[15:25:42] I would like gitlab not to be open core and phabricator not to be in maintenance freeze too, but here we are :-/
[15:26:39] _joe_ I think that the meaning of the "abandonware" word is that now that memcached offers a proxy, those projects are probably fading away and/or not needed anymore
[15:27:09] <_joe_> oh ok, so just delusional, not self-promoting
[15:27:35] well, there has been one person maintaining memcached for years now
[15:27:59] I am pretty sure we can give them some credit :)
[15:28:49] I am not even sure how much mcrouter is developed these days
[15:33:13] <_joe_> elukey: pretty actively
[15:33:21] <_joe_> it's used in production at meta
[15:33:27] <_joe_> and here
[15:34:01] well that is a good sign, fb's open source support has always been a little strange
[15:34:20] anyway, I am not suggesting we move to the new proxy, just keep it in mind
[15:50:52] mcrouter stopped tagging new OSS releases, but you can use their monthly tags as "releases" since there's some semblance of ABI compatibility between libraries with the same monthly tag in their ecosystem
[15:50:55] in theory at least
[15:55:10] you can't build the metaverse on monthly tags, seems they moved to weekly ones looking at https://github.com/facebook/mcrouter/tags ...
[15:56:22] right, I misremembered - thanks for clarifying
[15:58:00] unfortunately stuff like https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104008 really mandates some dockerized/reproducible buildenv