[00:58:11] thanks mutante!
[00:59:46] _joe_: wait, we're already running 8.1? pretty sure wmerrors is still broken for 8+
[01:00:28] also, based on https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1014460 my expectation is that bookworm ends up with 8.2
[04:09:24] <_joe_> legoktm: no we're not
[04:10:35] ok, then I didn't properly understand what this meant: 12:58:58 _joe_: I see on the RED dash that PHP 8.1 is already installed on some servers, awesome :)
[04:11:31] <_joe_> legoktm: I guess he meant that someone was naughty and already added it to the php versions list there :P
[04:12:05] heh, gotcha
[10:01:04] <_joe_> jbond: I see you have worked on refactoring the profile::lvs::configuration class
[10:01:25] <_joe_> and in doing so, you've removed any data structure where I can find all LVS hosts of a specific class, across datacenters
[10:02:15] <_joe_> and that is needed for https://gerrit.wikimedia.org/r/c/operations/puppet/+/841148
[10:02:43] <_joe_> context is https://phabricator.wikimedia.org/T238751
[10:04:38] <_joe_> so would you be opposed to grouping all the things in a single data structure encompassing all datacenters?
[10:05:06] <_joe_> or, is there an alternative, maybe using a puppetdb query? let me take a look
[10:05:51] <_joe_> ah!
I can search all nodes with the right motd message maybe
[10:11:18] joe looking
[10:15:02] hmm, that is a bug, profile::lvs::configuration::lvs_class_hosts was meant to be backwards compatible
[10:16:29] you can do a query for the motd, but that seems a bit hacky. let me think if I can fix it, need to refresh on the old structure
[10:18:39] <_joe_> jbond: I would need to make a single structure out of profile::lvs::configuration::class_hosts
[10:18:44] <_joe_> with one stanza per dc
[10:18:55] <_joe_> and select the right one inside the profile itself
[10:19:14] <_joe_> basically $all_lvs_class_hosts = lookup()
[10:19:33] <_joe_> $class_hosts = $all_lvs_class_hosts[$::site]
[10:20:34] _joe_: yes, that is probably the best way
[10:20:48] <_joe_> with puppetdb_query it also works, if we want it not to be hacky we can
[10:20:58] <_joe_> whatever you prefer
[10:21:00] we could keep the current structures and create a function that asks puppetdb for all instances of profile::lvs::configuration and merges the params
[10:21:40] but it's probably more performant and less brittle to move it to hiera; the list is already static, so I don't think we gain much by adding puppetdb
[10:24:15] <_joe_> yeah
[10:24:21] <_joe_> ok, let me do that then
[10:24:30] _joe_: one sec, I'm almost there
[10:26:10] <_joe_> thanks :)
[10:29:35] btw, it would be nice to add cumin aliases for the various LVS classes IMHO
[10:29:57] so +1 to have an easy way to get them
[10:32:36] volans: ack, that should be easy enough to add
[10:38:40] _joe_: I have created https://gerrit.wikimedia.org/r/c/operations/puppet/+/844458 and was just looking at how to fix lvs_class_hosts. However, looking at the original code https://gerrit.wikimedia.org/r/c/operations/puppet/+/834549/8/modules/profile/manifests/lvs/configuration.pp#b3, that structure was built using a selector, so it only ever had site-local data, or am I missing something?
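The single cross-datacenter structure _joe_ sketches at 10:18:39–10:19:33 could look roughly like this in hiera; the key name, class names, and hostnames below are hypothetical placeholders for illustration, not the actual production data:

```yaml
# hieradata/common.yaml (hypothetical key and values)
profile::lvs::configuration::all_class_hosts:
  eqiad:
    high-traffic1: ['lvs1001', 'lvs1002']
    low-traffic:   ['lvs1003']
  codfw:
    high-traffic1: ['lvs2001', 'lvs2002']
    low-traffic:   ['lvs2003']
```

The profile would then select the site-local slice, along the lines of the snippet quoted in the log:

```puppet
# Look up the full cross-DC map, then pick out this site's stanza.
$all_lvs_class_hosts = lookup('profile::lvs::configuration::all_class_hosts')
$class_hosts = $all_lvs_class_hosts[$::site]
```

This keeps the per-site behavior of the old selector-based code while still exposing the whole map (all datacenters) to anything that needs a global view, which is the requirement from the linked gerrit change.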
[10:40:31] <_joe_> jbond: don't look at the original code
[10:42:04] <_joe_> and, I only need the hiera data structure, so yes, $lvs_class_hosts was indeed only local data
[10:42:16] <_joe_> I was using the pre-filter variable :P
[10:42:26] _joe_: ok, let me rephrase :) AFAICT ... never mind, answered :)
[10:42:33] just fixing up the rspec
[13:45:26] dcaro: I think this puppet change is waiting to be merged on the puppetmasters. https://gerrit.wikimedia.org/r/c/labs/private/+/826219
[13:45:34] Are you OK if I merge it with mine?
[13:46:28] btullis: yes! thanks a lot
[13:46:41] Ack 👍
[14:01:32] !bash XioNoX> dunno if it's a typo but " We have observed congestion on some links between Singapore and US. This is due to Major outage in Marseille. " sounds like going around the planet in the wrong direction
[14:01:32] Amir1: Stored quip at https://bash.toolforge.org/quip/eJmM8IMB6FQ6iqKio5Ne
[15:59:59] curious, why does provisioning disks on ganeti take so long?
[16:00:40] the backing drbd syncs contents across the network
[16:01:27] ah, drbd!
[16:01:32] my old nemesis
[16:01:36] haha
[16:02:33] I used drbd along with pacemaker circa 2004, wow was that a fragile setup
[16:08:26] <_joe_> jhathaway: btw etcd works better when it's not on DRBD but on direct storage
[16:08:49] That poor etcd trying to do high iops on drbd
[16:08:53] <_joe_> as DRBD is, let's say, "not great at latency on sequential writes"
[16:09:05] <_joe_> and etcd, while not doing particularly high iops, is very sensitive to latencies in io
[16:09:08] <_joe_> because raft
[16:09:23] Ah, not iops, I stand corrected
[16:09:25] _joe_: makes sense, I assume I can change the ganeti config and reprovision?
[16:09:45] <_joe_> jhathaway: yeah, even once it's launched I think
[16:09:58] ok, I'll take a look, thanks
[16:10:08] etcd can be converted not to use DRBD after deployment: https://wikitech.wikimedia.org/wiki/Ganeti#Change_disk_template_for_a_VM_(aka_drop_DBRD)
[16:10:23] thanks btullis!
[16:10:36] yw
[16:13:13] hey, so.. I would like to ask about admin on wikitech. Some pages are protected so only admins can edit them. I think there is an assumption that all root users / SRE automatically are wiki admins, but that is not the case. The last time I made someone an admin because of that, I remember a discussion started about "interface admin" vs "admin" and how I should not do that either. Does anyone know?
[16:14:04] "interface admin" is enough to get the rights, but "admin" should not be used unless there is more?
[16:17:47] ah, "content admin" is the third one and makes more sense for this
[16:17:59] interface admin was just for the sidebars etc.
[16:18:33] yeah, that's the one. using that
[16:26:39] mutante: broadly, anyone with prod root can have advanced rights on wikitech as needed. There are some WP:BEANS reasons that we make it harder for non-roots to have rights to edit sitewide js there. The whole "is this the most narrow rights" thing comes up sometimes (maybe I have even done that in the past?), but mostly if you are root already it seems moot to me.
[16:29:29] bd808: ACK, makes sense. well..
"content admin" is good enough for what I wanted, so that's fine
[16:33:16] * claime just discovered WP:BEANS and the associated Uh-huh page
[16:33:57] * claime will now go to dinner, having apparently broken databases and triggered nuclear destruction in Comic Sans MS
[18:00:45] jhathaway: there is a cookbook to convert the storage of a Ganeti VM
[18:01:25] sre.ganeti.changedisk
[18:05:24] I've added a mention of it to https://wikitech.wikimedia.org/wiki/Ganeti#Automated_steps
[18:20:36] volans: thanks
[19:33:26] Do we have any scripts that auto-generate phab decom tickets?
[19:34:18] inflatador: close, a pre-filled template: https://phabricator.wikimedia.org/maniphest/task/edit/form/52/
[19:35:03] you are just supposed to replace the FQDN and save it
[19:35:34] this is from https://wikitech.wikimedia.org/wiki/Server_Lifecycle#Reclaim_to_Spares_OR_Decommission
[19:37:07] mutante: got it, thanks! I'll have to work up a script one of these days ;)
[19:37:45] inflatador: maybe it should be added to the decom cookbook?
[19:38:06] there is existing code to create phab tickets (the one that creates the "RAID failed" tickets)
[19:38:39] the existing template takes approx 5 seconds longer though.. wouldn't stress it
[19:39:36] hmm.. never mind, I guess it's supposed to exist _before_ the actual decom runs
[19:40:11] mutante: I can get the verbiage to spit out pretty easily at least
[19:44:00] usually there is work to be done *before* executing the decom cookbook, like taking the host out of production, hence usually the decom task is created before running it
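The "spit out the verbiage" script inflatador mentions could be as small as the sketch below: fill a decom-task template for a given FQDN and paste the result into the pre-filled Phabricator form. The template text and checklist here are purely illustrative placeholders, not the actual content of form 52 or of the Server Lifecycle page:

```python
# Hypothetical sketch: generate decommission-task text for one host.
# Template wording is invented for illustration; the real steps live in
# the pre-filled Phabricator form and the Server Lifecycle wiki page.

DECOM_TEMPLATE = """{fqdn} decommission

This task tracks the decommissioning of {fqdn}.

Work to do *before* running the decom cookbook:
[ ] Remove the host from production (depool, silence alerts)
[ ] Run the sre.hosts.decommission cookbook against {fqdn}
"""


def decom_task_text(fqdn: str) -> str:
    """Return a pre-filled task description for one host."""
    return DECOM_TEMPLATE.format(fqdn=fqdn)


if __name__ == "__main__":
    print(decom_task_text("lvs1001.eqiad.wmnet"))
```

This matches the workflow described at 19:39:36 and 19:44:00: the task text is produced first, the host is depooled, and only then does the decom cookbook run against it.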