[12:48:16] woe, the coffee machine demands descaling before I can have more espresso!
[12:50:56] Emperor: is coffee machine free
[12:51:42] With coffee
[12:55:09] Spookreeeno: sorry, I can't understand what you're asking
[12:56:09] Emperor: I need coffee
[12:56:15] Today has been a long Monday
[12:57:24] :)
[12:57:47] A long Monday isn't good
[13:26:07] kormat: o/
[13:26:11] https://gerrit.wikimedia.org/r/c/operations/puppet/+/735688
[13:26:30] ottomata: working through it currently
[13:26:33] OH :)
[13:26:38] thank you
[14:02:40] thanks, looking...
[14:04:28] i didn't review the my.cnf.erb file in detail, just fyi
[14:05:17] the changes to the common stuff are all fine
[14:06:40] that's fine, it's a copy/paste of another file i sort of reviewed. will probably have to tweak some things.
[14:07:04] cool. thanks. responded about the mysql role. can continue that discussion here in IRC or on the patch, as you prefer
[14:09:10] mysql_role is a node-level setting for us
[14:09:23] because we never mix-and-match types of instances on a node
[14:09:28] that way lies madness
[14:09:36] right, but is it intentionally that way, or just a consequence of not usually doing multi-instance?
[14:09:47] we do a lot of multi-instance?
[14:09:48] maybe i don't fully understand what the mysql_role is for, lemme read more
[14:09:54] multi-instance master :)
[14:10:16] even with multi-instance master, there's no way i (at least) would want master and non-master instances to be on the same host
[14:10:50] hm, i guess a reason for doing multi-master would be to do failover/maintenance on one without affecting the other
[14:11:02] the host an instance runs on seems irrelevant, doesn't it?
[14:11:03] decisions to depool/repool, change master, etc., are all made at the host level in production
[14:11:45] host level or dns name level?
[14:11:57] * kormat peers suspiciously
[14:11:58] host level
[14:15:44] looking for mariadb failover stuff, is it not possible currently to failover a host based on name rather than IP?
[14:16:24] i think we can't use LVS stuff anyway
[14:16:25] we only work with hostnames for db operations
[14:16:26] iiuc
[14:16:38] but when you said 'dns name level', i figured you must have been thinking about CNAMEs
[14:16:38] kormat: hostname or IP?
[14:16:50] 👀 hostname
[14:17:02] ok, so does it matter if it is A or CNAME?
[14:17:21] that's not something i'd be excited about trying :P
[14:17:31] i would strongly recommend sticking to A
[14:18:44] i don't think it is possible for us to use lvs failover in only eqiad. we tried to set up some discovery endpoints and lvs failover for something, and SRE told us that was really only for cross DC failover or cross DC active/active, which we don't have in this case
[14:19:27] we use CNAMEs sometimes, but that isn't great, and i see that esp. with a writable stateful service it's a little weird to change the master CNAME around
[14:19:58] ottomata: we have CNAMEs pointing to the masters, e.g. `s1-master`
[14:19:59] more likely, we'd have to change the master hostname config in puppet and apply it to all clients
[14:20:13] but it's basically only documentation, as you can't use ssl via a CNAME
[14:20:22] and we never use it at all for db maintenance/operations
[14:20:36] but, in either case, how does it matter which host a master instance is on? the application doesn't care
[14:21:38] i no longer have any idea what we're discussing :)
[14:21:41] ok
[14:22:13] i have an-db1001 and an-db1002, and each one has 2 mariadb instances,
[14:22:20] usually each master is on 1001
[14:22:42] but, perhaps some big mistake was made on one of the 'tool' databases
[14:23:37] hmm actually, trying to come up with an example... maybe i'm wrong, there isn't a case?
[14:24:13] the example i was about to write was about doing a restore to only one of the instances, but in that case i don't really need to do a master failover
[14:24:39] * kormat nods
[14:24:40] the use case i was trying to support was maintenance that required a master failover to the other host for only one instance.
[14:26:41] hm, is there ever a need to do a master failover for only 1 instance? I had assumed yes, but i can only think of maintenance on the host that would require both masters to be moved.
[14:27:26] it's not a situation i'd want to be in
[14:27:29] too hard to reason about
[14:27:57] hm, i don't think it's harder to reason about than if the instances were on different hosts
[14:28:32] the only thing the instances share is an IP
[14:28:44] .. and the rest of the host :P
[14:29:03] (config-wise)
[14:29:03] but maybe there's never a need to do it.
[14:29:37] config-wise i mean. there are host resource/performance implications, but those are present if all the databases are in one instance too
[14:34:56] reasons we do a failover: kernel upgrade, OS upgrade, firmware upgrade, mariadb upgrade, certain schema changes. of those, only the last one is on the instance level.
[14:35:05] everything else is at the host level
[14:35:31] if you split your masters across 2 hosts, then most maintenance on either host will require a failover
[14:35:39] yeah... schema changes or config changes ok
[14:35:51] per-instance mariadb config changes
[14:35:59] might be a case?
[14:36:04] extremely rare, for us.
[14:36:12] like, maybe once a year, if that
[14:36:12] likely rare for us too.
[14:36:57] schema changes for you are almost certainly going to require downtime to perform
[14:37:25] MW schema changes are required to allow us to roll them out to replicas first
[14:37:27] our tables aren't so big, and there really aren't a lot of writes
[14:37:39] i would not expect third-party applications to be so kind about it
[14:37:57] but also, downtime for us is not so difficult
[14:38:05] it might get harder eventually...
[14:38:05] yeah
[14:38:10] but for now pretty easy to do
[14:39:41] alright, thanks for the review and convo, i'm going to follow the other path now and look back into a simple single-instance master
[14:40:21] no promises that i won't get annoyed with that or find a reason to try and pursue this again... buuuuuuut atm it seems like it's just not worth it
[14:40:28] thanks for your patience :)
[14:40:36] haha, no worries.
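An aside on the "you can't use ssl via a CNAME" point in the exchange above: TLS certificate verification matches the name the client connected with, so an alias that isn't listed in the certificate fails the handshake even though it resolves to the right host. A minimal, generic sketch of that check follows; the hostnames are hypothetical, and this uses plain TLS on port 443 rather than a MariaDB connection (MariaDB negotiates TLS inside its own protocol), so it only illustrates the hostname-matching behaviour.

```python
#!/usr/bin/env python3
"""Generic illustration: TLS hostname verification against the name you dialed."""
import socket
import ssl

def hostname_verifies(host: str, port: int = 443) -> bool:
    # Default verification requires the certificate to carry `host` in its
    # SubjectAltName; connecting via an alias the cert doesn't list fails here.
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except ssl.SSLCertVerificationError:
        return False

if __name__ == "__main__":
    # Hypothetical names: the canonical host verifies; a CNAME alias only
    # verifies if the certificate was also issued for that name.
    for name in ("db1001.example.org", "s1-master.example.org"):
        print(name, hostname_verifies(name))
```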
[16:49:40] Emperor: for video tracks at HD and above, very long media files could extend into a few gigabytes, but most files will be much shorter. currently they get bundled with audio & video together in one file for each output resolution, but i'm looking into switching to a streaming-friendlier format that i can also use on newer iPhones.
[16:49:40] this can split between track types, and can split chunks by time (but doesn't have to), it's pretty flexible packaging in theory
[17:00:20] I wouldn't be inclined to split up a-few-G files from an object store perspective
[17:00:53] (which is not to say you can't if it makes your application's life easier)
[17:01:12] *nod* as long as it's not a problem for the object store to carry them, and for the http delivery to do byte-range requests into them, large files are moderately easier to work with
[17:01:55] (just a lot less housekeeping than with separate filenames and objects!)
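For the byte-range delivery mentioned above, here is a minimal sketch of a ranged read against an HTTP endpoint; the URL and offsets are made up for illustration, and it assumes the server honours the Range header and answers 206 Partial Content.

```python
#!/usr/bin/env python3
"""Sketch: read a slice of a large HTTP object with a Range request."""
import urllib.request

def fetch_range(url: str, start: int, end: int) -> bytes:
    # Request bytes [start, end] inclusive (RFC 7233 Range header).
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the server served only the requested slice.
        if resp.status != 206:
            raise RuntimeError(f"server ignored the Range header (status {resp.status})")
        return resp.read()

if __name__ == "__main__":
    # Hypothetical multi-gigabyte video object; read its first 1 MiB.
    chunk = fetch_range("https://example.org/media/video-1080p.mp4", 0, 1024 * 1024 - 1)
    print(f"fetched {len(chunk)} bytes")
```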
[17:45:08] qq: recommendation for innodb_buffer_pool_size? new an-db hosts have 128G RAM, on disk /srv/sqldata/ibdata1 is 1.4G
[18:38:33] ottomata: if the host is only running MySQL, what's recommended is around 70-80% of your total RAM
[18:39:26] ottomata: if not, whatever you can give it, the more of the dataset you can get in memory the best, but the ibdata1 size isn't too relevant here, the whole dataset is
[18:40:00] s/the best/the better
[18:56:02] it's only mysql
[18:56:21] can do 80% of ram
[18:56:28] will do, thanks marostegui
[19:06:59] another qq: do you all have any config management for grants, or is it all manual?
[19:08:23] we have the grants in puppet but only for tracking. puppet doesn't add or remove grants from the db
[19:09:32] i have some old unused puppet for bigtop (previously cloudera stuff) to manage some grants... but i'm considering removing it
[19:09:41] it's kinda nice for cloud VPS testing to automate that
[19:09:57] but it happens so rarely in prod that maybe it's ok to just rely on docs
[19:11:26] I don't feel comfortable having puppet change the grants live, to be honest
[19:11:41] anyways, I'm off today! see you tomorrow :-)
[19:13:43] yea, it seems ok if it's done well enough that it won't do anything once it has applied them the first time
[19:13:48] but probably not worth it atm
[19:13:51] thanks, laters!
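Following the 70-80% rule of thumb discussed above for a host that runs nothing but MariaDB, a small sizing sketch (a hypothetical helper, not an existing Wikimedia script); on a 128G host this lands around 90-102G.

```python
#!/usr/bin/env python3
"""Sketch: rough innodb_buffer_pool_size for a dedicated MariaDB host (Linux)."""
import os

def total_ram_bytes() -> int:
    # Total physical memory: page size times number of physical pages.
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

def buffer_pool_bytes(fraction: float = 0.75) -> int:
    # 0.70-0.80 of total RAM is the usual range when the host only runs MariaDB.
    return int(total_ram_bytes() * fraction)

if __name__ == "__main__":
    gib = buffer_pool_bytes() / 1024**3
    # On a 128G host this prints roughly "innodb_buffer_pool_size = 96G".
    print(f"innodb_buffer_pool_size = {round(gib)}G")
```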