Install Proxmox on a partition instead of a full disk

By default, installing Proxmox with ZFS forces you to use the entire disk for the root zpool. For most installs this is good enough; however, I sometimes like to do things differently.

I have a pair of Samsung 840 Pro 256GB SSDs that I wanted to use for the new homelab I am currently building (moving from VMware to Proxmox). You may be wondering why I want to install the operating system on a partition instead of an entire disk. Several reasons:

1. Proxmox (ZFS on Linux) does not yet support SSD TRIM. FreeBSD does support it, so coming from FreeNAS to Proxmox I need to be aware of this.
2. The root filesystem does not need to be large. Even with RAID1 across my two SSDs I won't be storing critical data or VMs in the rpool; I just want a small root pool with fault tolerance. A 60GB partition mirrored across the two SSDs should fit the bill here.
3. ZIL (ZFS Intent Log) experimentation. I also want to use the same two SSDs to speed up my ZFS writes: a small striped (RAID0) partition for performance. 45GB total (22.5GB per SSD) is plenty for this.
4. The leftover space will be left unpartitioned so that the SSD has more spare blocks available during the controller's built-in garbage collection (not the same as TRIM). See the layout sketch after this list.
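
For reference, here is the layout I am aiming for on each of the two SSDs. The roles match the sgdisk commands further down; the exact sizes are my own choices, not anything Proxmox requires:

Partition 1: BIOS boot (for GRUB)
Partition 2: ZFS mirror member -> the new root pool (newroot)
Partition 3: ZFS stripe member -> the stripe pool for ZIL experimentation
Partition 4: 8MB Solaris reserved (matches the stock Proxmox layout)
Remainder:   unpartitioned spare area for the controller's garbage collection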

I don’t have enough time to go into a lot of detail (it’s past 4am), so I will get straight to how to do it. If you want to follow the same steps, you will need at least 3 hard drives.

1. On a hard drive or device you don’t care about keeping in the final setup, install Proxmox as you normally would. Wipe the entire partition table and let the installer use ZFS (RAID0) on the whole disk.
2. Boot into your new installation with the two disks you want to keep attached to the system, and make sure Linux sees them; fdisk -l should help with this.
3. You will now need to create the partitions on the two new disks (not the temporary rpool disk):

You will need to know how to calculate hard disk sectors: divide the desired size in bytes by your sector size. I don’t have time to go over it in detail, but here is a quick TL;DR example to give you an idea:

We want a 25GB slice, so that is roughly 25,000,000,000 bytes / 512 bytes per sector = 48,828,125 sectors to allocate that amount of storage.
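
If you would rather not do that math by hand, shell arithmetic gets you the same number (this assumes 512-byte logical sectors, which fdisk -l will confirm for your disk):

# echo $((25 * 1000 * 1000 * 1000 / 512))
48828125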

Take a look at the partition table on your temporary rpool disk so you can create something similar: fdisk -l /dev/sdX. We will leave 8MB at the end of the disk, since Proxmox by default creates three partitions: BIOS boot (for GRUB), ZFS data, and an 8MB Solaris reserved partition.
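
To see that reference layout in sector terms, sgdisk can also print the GPT directly (again, replace sdX with your temporary rpool disk):

# sgdisk -p /dev/sdX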

These commands create the partitions for my new array; I’ve labeled each partition with the -c option, so they should be self-explanatory.

# sgdisk -z /dev/sda
# sgdisk -z /dev/sdc
# sgdisk -a1 -n1:34:2047 -t1:EF02 -c1:"BIOS boot" -n2:2048:156252048 -t2:BF01 -c2:"mirror" -n3:156252049:205080174 -t3:BF01 -c3:"stripe" -n4:205080175:205096559 -t4:BF07 /dev/sda

# sgdisk -a1 -n1:34:2047 -t1:EF02 -c1:"BIOS boot" -n2:2048:156252048 -t2:BF01 -c2:"mirror" -n3:156252049:205080174 -t3:BF01 -c3:"stripe" -n4:205080175:205096559 -t4:BF07 /dev/sdc
# zpool create -f stripe -o ashift=13 /dev/sda3 /dev/sdc3
# zpool create -f newroot -o ashift=13 mirror /dev/sda2 /dev/sdc2
# grub-install /dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S1ATNSADB46090M
# grub-install /dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S12RNEACC59063B
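
Before copying anything over, it is worth a quick sanity check that both pools came up the way you expect (standard zpool commands, nothing specific to this setup):

# zpool status newroot stripe
# zpool list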

Backup & moving stuff:
# zfs snapshot -r rpool@fullbackup
# zfs list -t snapshot
# zfs send -R rpool@fullbackup | zfs recv -vFd newroot
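
To confirm the replication actually landed, list everything on the new pool; the datasets and snapshots should mirror what rpool had:

# zfs list -r -t all newroot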
root@pve:/# zpool get bootfs
NAME     PROPERTY  VALUE             SOURCE
newroot  bootfs    -                 default
rpool    bootfs    rpool/ROOT/pve-1  local
stripe   bootfs    -                 default
root@pve:/# zpool set bootfs=newroot/ROOT/pve-1 newroot
# zpool export newroot
# zpool import -o altroot=/mnt newroot
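
With the altroot set, every dataset on newroot should now show a mountpoint under /mnt, which is easy to verify:

# zfs list -o name,mountpoint -r newroot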
Next, I rebooted with a FreeNAS live CD, entered the shell, and imported newroot under the new name rpool, then rebooted again.
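
From the FreeNAS shell, the rename itself is just an import under a new name, which zpool supports natively (-f may be needed because the pool was last in use by another system):

# zpool import -f newroot rpool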
Then boot into the Proxmox installer’s recovery mode. Once it boots, reinstall the bootloader and rebuild the boot files (note that device letters can shift between boots; at this point my two SSDs showed up as /dev/sda and /dev/sdb):
# grub-install /dev/sdb
# grub-install /dev/sda
# update-grub2
# update-initramfs -u

Note: zpool set bootfs=newroot rpool could also work without renaming the pool via FreeNAS, but I didn’t try it.
