[04:53:44] So my irccloud isn't available, and my bouncer has crashed, so great day
[05:24:20] marostegui14: ouch
[05:24:42] yeah, and as marostegui is still online, I cannot use it
[05:24:48] I have emailed irccloud to see what's going on
[05:40:57] marostegui14: you can use "/msg nickserv ghost marostegui" to disconnect the old connection
[05:42:13] Nice!
[05:42:32] majavah that worked!
[05:42:34] thanks
[05:47:14] I have recovered my bouncer too, so at least I have this now
[05:49:14] This is ironic: https://twitter.com/IRCCloud/status/1430404602319548419
[05:49:33] :(
[05:49:53] marostegui: oh, are they using s7 too?
[05:50:04] Looks so!
[05:50:13] Their centralauth db!
[06:14:02] R/O time: 75s. that's reasonable.
[06:21:29] that's very good!
[06:22:54] https://phabricator.wikimedia.org/T288803#7307264 thoughts?
[06:23:26] marostegui: let's do it
[06:24:28] marostegui: i'm making some updates to my switchover-tmpl.sh
[06:24:29] kormat: It might be tight if there are issues (ie: reimage issues etc)
[06:24:32] But ok!
[06:24:38] marostegui: i vote to not have issues
[06:24:50] XD
[06:25:13] but that's just me. maybe other people feel differentls
[06:25:16] *differently
[06:26:16] marostegui: https://phabricator.wikimedia.org/P17043#87281
[06:28:32] recommended usage is: create an empty task, then run the script providing that task id
[06:28:40] that way it'll populate the task id throughout the template
[06:28:58] kormat: that's so nice
[06:29:19] and also includes the wikitech output for the calendar! which I always struggle with!
[06:29:22] <3
[06:29:32] i haaated that bit this morning. so yeah, now it does :)
[06:29:40] kormat: is that script in the dbtools repo already?
[06:29:58] ugh, no. fiiiine, fixing.
[06:30:49] haha
[06:32:38] now it is :P
[06:33:50] marostegui: btw, have you seen my fine writing?
https://phabricator.wikimedia.org/P17050#87355
[06:36:47] kormat: I have that pending to read, yeah - I wasn't able to finish the email backlog yesterday
[06:36:58] kormat: Just used the script btw: https://phabricator.wikimedia.org/T289650
[06:37:31] marostegui: sweeet :)
[08:34:34] Emperor: i have a proposal for you
[08:34:50] Emperor: i'll help you with your puppet woes, and you help me with my debian packaging woes. deal?
[08:37:31] :)
[08:37:51] [I have a meeting about backups at 10; perhaps thereafter?]
[08:38:00] 👍
[08:41:48] kormat jynus 6th Sept for the s4 switchover looks good?
[08:42:12] is that a date for codfw?
[08:42:17] yeah, the primary
[08:42:19] marostegui: WFM
[08:42:44] I haven't prepared that one patch, but I am sure I can prepare it by then :-D
[08:43:05] Excellent, we don't have much room, that's why it is "so" close
[08:43:12] Thank you both!
[08:43:47] as cleanup tasks are not important, please be patient about those, I won't be prioritizing them
[08:43:58] yup no issues
[08:44:08] kormat: I just noticed this: " |who={{ircnick|kormat|Stevie Beth Mhaol}}, {{ircnick|marostegui|Manuel 'Early Bird' Arostegui}}
[08:44:11] (cleanup tasks == removing old unused stretch instances)
[08:44:13] XDDD
[08:44:20] marostegui: took you long enough ;)
[08:44:43] hahaha
[08:53:07] and of course, image metadata has a unique format found at position 70 million
[08:55:13] backing up commonswiki: 30 minutes. Understanding what mediawiki means to know what to back up: 1 year
[08:55:45] XDDD
[08:56:41] you won't believe the amount of garbage found - files with no name, files with NULL name, files with impossible encoding according to its own rules, disappearing paths
[09:01:47] Emperor, sorry I got distracted, connecting in 1 minute
[09:03:44] OK...
[09:09:38] firefox froze my os while trying to share my screen
[09:10:10] joy and kittens
[10:29:22] kormat: I will start deploying the ParserCache soon today.
The main patch got merged, now trying to get the tiny patch merged
[10:29:31] I'm backporting the main patch atm
[10:29:34] Amir1: \o/ i like you again
[10:29:48] 💔
[10:32:06] Emperor: "365041 Restore 1 95.90 M OK 25-Aug-21 10:27 RestoreFiles"
[10:32:47] please check the files got restored as intended (without a hurry) :-)
[10:33:16] I will work on a patch to try to make the behaviour more transparent
[10:46:40] jynus: 2e5554fa423461ef28db258304c3df25 /var/tmp/bacula-restores/srv/backups/dumps/latest/dump.s3.2021-08-24--00-00-02/testwiki.gz.tar
[10:46:57] since we don't actually want to use that, shall I delete?
[10:47:41] carefully
[10:47:50] but yes
[10:49:03] done
[11:54:36] marostegui: it's a good thing reimaging never has problems, right?
[11:55:36] And it always happens when dealing with masters or candidates!
[13:12:26] kormat: I've sent you a bit of a brain-dump, sorry...
[13:15:40] Emperor: that's roughly what i had worked out
[13:15:59] let me read it in detail before i say anything else though
[13:16:11] ta. My typing-break software has started yelling at me in any case :)
[14:09:40] Emperor: so.. we have 2 choices. either we can go the route of having `mariadb::service` and `mariadb::service_instance`. or we can try and support both multi- and single-instance in a single profile
[14:09:53] the fact that db_inventory doesn't use mariadb::service is a bug
[14:10:13] Emperor: do you have any idea if it's possible to add a parameterised override file?
[14:10:30] maybe mariadb.service.d/override@.conf, or something
[14:16:43] I think mariadb@.service.d/thingy.conf
[14:16:55] because the parameterised service is called mariadb@.service
[14:17:17] i know that mariadb@s7.service.d/thingy.conf works for mariadb@s7
[14:17:26] (as i tested it earlier)
[14:17:38] if what you're suggesting works, that would simplify some stuff
[14:18:05] I have previously done /etc/systemd/system/ceph-mon@.service.d entries
[14:18:16] ah haah. ok, promising!
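[editor's note] The drop-in layout being discussed works like this: a drop-in directory named after the template (`mariadb@.service.d/`) applies to every instance, while one named after a specific instance (`mariadb@s7.service.d/`) applies to that instance only, merged on top. A minimal sketch, with illustrative contents — the `LimitNOFILE` value is the one mentioned later in this log, and the `Wants=` unit name is an assumption, not the actual puppet-managed override:

```
# /etc/systemd/system/mariadb@.service.d/override.conf
# Applies to ALL instances of the templated unit (mariadb@s7, mariadb@s8, ...)
[Service]
LimitNOFILE=200000
```

```
# /etc/systemd/system/mariadb@s7.service.d/override.conf
# Applies to mariadb@s7 only; merged on top of the template-wide drop-in.
# (prometheus-mysqld-exporter@s7.service is a hypothetical unit name here.)
[Unit]
Wants=prometheus-mysqld-exporter@s7.service
```

After adding or changing drop-ins, `systemctl daemon-reload` is needed; `systemd-delta --type=extended` will list which overrides are in effect and where they come from.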
[14:18:45] Emperor: note that this bypasses the issue of /etc/systemd/system/mariadb.service.d/ already being 'owned' by an existing puppet class
[14:31:40] so mariadb::misc::db_inventory needs re-writing to use mariadb::service ?
[14:32:14] Emperor: it (probably) needs just 1 line added
[14:33:45] I think we'll still have the problem with /etc/systemd/system/mariadb.service.d/ (unless I also re-write mariadb::service to use systemd::service, and possibly even then?)
[14:34:32] Emperor: so the question is do we ever want mariadb::service but not want prometheus-mysql-exporter?
[14:34:57] (i'm pretty sure the answer is 'no, we always want both')
[14:37:10] currently, multiinstance hosts use mariadb::service to override mariadb.service to disable it ; those hosts are using prometheus::mysqld_exporter_instance instead of prometheus::mysqld_exporter
[14:37:31] * kormat nods
[14:41:47] so there are hosts where we currently have mariadb::service but not prometheus::mysqld_exporter.
[14:42:34] can you give an example hostname or two?
[14:42:37] Confusingly I think it's mariadb::monitor::prometheus vs prometheus::mysqld_exporter_instance
[14:43:01] (or I'm confused by our puppet layout again)
[14:43:24] when dealing with our puppet code, you either want an industrial flamethrower, or tunnel vision
[14:44:08] I think db1096.eqiad.wmnet ?
[14:44:38] core multiinstance, ok yeah, that makes sense
[14:45:00] it would have mariadb::service with the override disabling it, plus the multi-instance exporter puppet classes
[14:45:25] that matches my understanding
[14:46:20] ...which was one of the reasons I was suggesting systemd::unit allow specification of alternative override file names
[14:47:14] what is the use-case of multiple override files in this case?
[14:51:21] 1) shut puppet up 2) not change the existing override file names 3) you might legitimately want to e.g.
override LimitNOFILE=200000 from the mariadb::core role in one place and set the Wants: PME override from the PME role in another place
[14:51:55] [I picked that example 'cos mariadb::core did that in the past and it's still in the file commented out]
[14:56:27] i'd be more inclined to go for a single, templated override file
[14:56:30] like we do for configs
[14:58:29] anyway, i have a meeting now. will try to take another look at this tomorrow
[14:58:58] you'll still need to either allow a different name or handle the fact that currently mariadb::service uses override.conf and systemd::unit uses puppet-override.conf and so a bunch of things would need renaming on prod. And I'm not sure you can have 1 template called from different places?
[14:59:34] Emperor: ah, i'm implying i would change how the override thing currently works
[15:00:18] since, broadly, for both single and multiple instance hosts you'd have changes you want to make from the mariadb::thingy role/profile and from the prometheus-mysqld-exporter role/profile
[15:00:50] Unless you want to make it impossible to have one without the other and replace both with a hybridge mariadb-and-PME role
[15:00:57] hybrid
[15:01:17] [but we can come back to this tomorrow :) ]
[15:01:19] there's another option
[15:01:44] make the mariadb-override changes from mariadb::service, and pass in a flag to say if we're using PME or not
[15:02:46] Emperor, that is why I suggested exports
[15:03:33] I don't understand what that means here
[15:33:21] if the issue is a particular codebase depending on what another independent codebase does (even within the same host)
[15:34:02] you can export data on the independent codebase, and then retrieve it on the other, like kormat says, as a flag to do A or B
[15:34:29] export as in: https://puppet.com/docs/puppet/6/lang_exported.html
[15:36:14] e.g.
and this is just an example - you "export" the need for an override in the prometheus exporter code
[15:36:28] Hm, that seems more like the for-use-on-another-node use case
[15:36:30] and on the mariadb side, you collect that need and do one thing or another
[15:36:53] yes, it is normally used for that, but can also be retrieved on the same node too
[15:37:07] again, just throwing ideas in case it helps
[15:37:57] typical usage is declaring multiple website defines in some codepath
[15:38:29] and then in a central location you capture the variable number of exported defines and process them in a single location
[15:40:53] just trying to give options in case they could simplify what you want to do
[15:42:24] motd for backups is a place where exports are used within the same node, so that backups can be defined anywhere, but only in one location they are all printed
[15:44:50] All other things being equal, given you can use >1 override file, it'd seem to me natural to do so if you want to apply different overrides from different places.
[15:45:19] please note that I haven't looked at the issue in detail
[15:46:11] it could be a wrong solution, but have a look in case it helps
[15:48:18] check backup::set and profile::backup::host for a trivial example
[15:49:09] I want to think it is similar because the idea is being able to declare backup::sets from everywhere
[15:49:31] but maybe I am confusing you more (sorry if I am)
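[editor's note] The declare-anywhere/collect-in-one-place pattern jynus points at (backup::set collected by profile::backup::host) can be sketched roughly as below. This is a simplified illustration of same-node exported resources, not the real WMF code: `motd::entry` is a hypothetical define standing in for whatever renders the motd fragment.

```puppet
# Anywhere in the catalog: declaring a backup set *exports* (@@) a motd
# fragment instead of realizing it directly.
define backup::set () {
  @@motd::entry { "backup-${title}":
    content => "this host backs up: ${title}\n",
    tag     => "backup-motd-${facts['networking']['fqdn']}",
  }
}

# In exactly one central place: collect every fragment exported for this
# node and realize them together. Exported resources can be collected on
# the node that exported them, which is what makes the same-node trick work.
class profile::backup::host {
  Motd::Entry <<| tag == "backup-motd-${facts['networking']['fqdn']}" |>>
}
```

Note that exported resources require PuppetDB; the `<<| |>>` collector realizes every matching resource exported by any node, so the tag is what scopes the collection back to this host.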