[00:00:10] They are separate block devices though
[00:00:14] Which is different
[00:00:20] from the old way I mean
[00:00:47] so in the old model, there wasn't an operational reason for "your" benefit to have the extra disk space not be part of the boot disk by default? (This sounds like a complaint, but it isn't meant that way, I'm just curious to learn something new)
[00:00:55] So to add srv and var/lib/docker you'd want two block devices in the flavor
[00:02:19] ah, I think I'm getting it. the image, when paused/snapshotted, is self-aware of the disk sizes. so in order for it to have that space allocated by default you'd need separate flavours of the OS image, whereas now presumably you can tell openstack to launch them and then it's runtime/puppet's job to handle it from there and you get to reuse the OS images? maybe?
[00:02:29] In the old model it was not part of the boot disk by default so people could split it up as they saw fit. We were using local disk on hypervisors as well, so disk management was very different from our end.
[00:03:44] Yeah, that's more or less the idea. The image is a 20GB system
[00:03:54] ok. I realize this is a big tangent. That was interesting to figure out though, thanks.
[00:04:16] As much as I was able to explain to greater or lesser extent correctly :)
[00:04:28] so in a nutshell, there is not currently a way to puppetise with the existing cinderutils that the ephemeral 40G should be mounted as /foo with 30G and /bar with 10G, for example?
[00:04:46] Not directly.
[00:05:03] If we added more ephem disks, perhaps, and did it in order
[00:05:25] That would likely work... especially if we set the sizes around expectations
[00:05:40] It surprises me that partitioning a mount would require separate block devices.
[00:05:52] The other way is to have puppet install LVM and provision your disk with that in puppet instead of using cinderutils
[00:06:20] There's actually a real puppet LVM module that we never used because it does things we didn't like (like removing volumes if you unset them)
[00:06:43] So you could make it partition the block device
[00:06:49] The cinderutils things don't
[00:07:06] That was to remove steps for the primary focus of it, which is cinder
[00:07:22] cinder itself has nothing at all to do with your use case
[00:07:32] the next question tells you how little I know about sysadmin stuff, but if one had a linux laptop with one external drive whose (100%) empty space one would like to partition: assuming that's possible and not uncommon, would that normally be done with lvm?
[00:07:41] Our old LVM module assumed you already had a volume group created
[00:07:48] that's gone in the images
[00:07:52] so that's why it won't work anymore
[00:09:02] You only need LVM if you want to turn physical devices into logical ones for some reason. You don't need it for storage management at all. It's very useful, but if you just want to set up a filesystem, you don't even need to partition a drive to do that.
[00:09:15] Not saying you should do that sort of thing
[00:09:16] I wouldn't
[00:09:53] But on Amazon EBS volumes and cinder volumes in openstack, you can change their sizes and things and just put filesystems on the bare, unpartitioned block devices
[00:10:36] If you created a volume group and installed the lvm utils, you can use the old LVM module stuff we have (functionally)... as long as that volume group is named "vd"
[00:10:43] btw
[00:11:41] but I mean, if you are trying to make things work, that might do it. A new module could be set up that acts like that specifically for CI
[00:11:52] I think the main thing we'd want is for the directory/mount to have its own size limit, basically.
anything that can get us two paths to the same ephemeral disk such that one of them has a cap would do. If they have to share a common path (e.g. can't have /foo and /bar on the same disk without lvm?) we might even be able to do that by changing where docker looks for the cache or symlinking it.
[00:12:18] That too
[00:12:57] You can also just do an fdisk partition... but yeah. On Monday, we can try to make things fit better for CI maybe.
[00:13:47] I could put up a quick patch that would make the old LVM stuff work on new things with this setup, but it would be extremely fragile unless a lot more thought goes into it
[00:13:57] Fragile sounds bad.
[00:14:00] No worries.
[00:14:01] pre-defining two block devices with their own ephemeral size baked into the image flavour seems like it'd potentially make things harder for wmcs, but anyway, I think we're happy to follow any recommendation. whatever works best :)
[00:14:40] if the least-weird/maintenance-heavy thing also happens to be the most convenient for us, then even better
[00:14:55] I mean, ultimately, if you install the lvm utils from apt and then run 'pvcreate /dev/sdb' and use that to create a volume group named vd, the lvm module will work.
[00:15:23] That doesn't seem hard to make into part of a module
[00:15:40] The hard part is making sure your new disk actually is named sdb :)
[00:17:59] I have to go. I don't want to leave you blocked, but I am not sure I can give you something super easy to use really quickly either.
[00:18:59] It's fine.
[00:19:57] Ok. Sorry this got kind of put off for a long time
[00:21:38] We'll figure it out.
[05:29:11] Will PAWS notebooks keep running in the background even if the browser tab is closed?
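(Editor's sketch of the "pvcreate and a volume group named vd" recipe from the discussion. This is a provisioning fragment, not a runnable test: it requires root and a spare block device, and the thread itself notes the fragile part is whether the new disk is really named sdb. The 30G/10G sizes and the /foo, /bar mount points are illustrative assumptions.)

```shell
# Install the LVM userspace utilities.
apt-get install -y lvm2

# Turn the whole spare device into a physical volume; no partition
# table is needed for this.
pvcreate /dev/sdb

# "vd" is the volume group name the old puppet LVM module expects.
vgcreate vd /dev/sdb

# Carve out capped logical volumes (illustrative sizes).
lvcreate -L 30G -n foo vd
lvcreate -L 10G -n bar vd

# Filesystems and mounts.
mkfs.ext4 /dev/vd/foo
mkfs.ext4 /dev/vd/bar
mkdir -p /foo /bar
mount /dev/vd/foo /foo
mount /dev/vd/bar /bar
```

One nice property of this route over fixed partitions: if a cap later needs to change, the logical volume can be grown in place with `lvextend` followed by a filesystem resize.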
[08:41:33] !log metricsinfra silence deployment-prep alerts yet again
[08:41:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Metricsinfra/SAL
[08:51:35] !log tools depool tools-sgeexec-0907
[08:51:38] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools/SAL
[19:13:47] !log tools.sal Restart for fcgi container crash
[19:13:49] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.sal/SAL