Tag Archives: zfs
If you're having issues with zfsonlinux and your pool not expanding after replacing your hard drives with larger ones, here is a trick to fix it.
Fix zfs-mount.service failing after reboot on Proxmox
During my homelab migration to Proxmox I came across a bug that prevents zfs-mount.service from mounting all of your ZFS mount points after a reboot, and it is even more of a pain in the ass if you host containers in those folders.
Posted in Linux, Proxmox, Technology, Troubleshooting, Virtualization
Tagged containers, proxmox, zfs
Install Proxmox on a partition instead of a full disk
By default, installing Proxmox with ZFS forces you to use the entire disk for the root zpool. For most installs this is good enough. However, I like to do things differently sometimes.
I have a pair of Samsung 840 Pro 256GB SSDs that I wanted to use for the new homelab I am currently building (moving from VMware to Proxmox). You may be wondering why I want to install the operating system on a partition instead of an entire disk. Several reasons:
Posted in Guides, Linux, Proxmox, Technology, Virtualization
Virtualization hypervisor and containers all in one
I'm a big fan of virtualization: the ability to run multiple platforms and operating systems (called guests) on a single server (called the host) is probably one of the best computing technologies of the past 10 years.
Personally, I have been using virtualization since around 2004. It all took off after 2006, when chip manufacturers started bundling virtualization technologies into their processors (Intel VT-x or AMD-V). The popularity of "cloud" computing can also largely be attributed to virtualization.
In a container world…
However, in the past couple of years a new technology has been making the rounds everywhere: the words "containers", "Docker", and "orchestration" have been picking up steam. They say that containers are changing the landscape for system administrators and application developers.
Claims that containers can be built and deployed in seconds, share a common storage layer, and allow you to resize a container in real time when you need more performance or capacity are really exciting, and I think now is the time for me to jump in and learn a thing or two about this technology while it's hot and new.
Posted in Cloud, Linux, Proxmox, Virtualization
Tagged containers, docker, openindiana, opensolaris, proxmox, smartos, zfs
Building a low power Sandy Bridge ESXi + ZFS Storage Array
I have finals this week, so I will update this post as I have more time. In the meantime, I am working to get VMware ESXi (the free version of VMware's virtualization hypervisor) onto a custom whitebox build to replace my aging Intel Core 2 Quad Q9450 server, which draws around 125 watts while idle.
Checking for Hard drive READ and WRITE Cache (onboard) on Solaris
To check the onboard read and write cache settings for your hard drives, do the following:
Giovanni@server:~# format -e
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 252>
/pci@0,0/pci15d9,d380@1f,2/disk@0,0
1. c8t1d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@1,0
2. c8t2d0 <ATA-Hitachi HDS72202-A28A-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@2,0
3. c8t3d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@3,0
4. c8t4d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@4,0
5. c8t5d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@5,0
Specify disk (enter its number):
Select a drive; let's pick 5 from the list.
Specify disk (enter its number): 5
selecting c8t5d0
[disk formatted]
/dev/dsk/c8t5d0s0 is part of active ZFS pool gpool. Please see zpool(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
inquiry - show vendor, product and revision
scsi - independent SCSI mode selects
cache - enable, disable or query SCSI disk cache
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format>
Now let's do the actual check.
Enter "cache" to open the cache menu.
CACHE MENU:
write_cache - display or modify write cache settings
read_cache - display or modify read cache settings
!<cmd> - execute <cmd>, then return
quit
cache>
Type "write_cache" or "read_cache" depending on which setting you would like to see; let's use write:
cache> write_cache
WRITE_CACHE MENU:
display - display current setting of write cache
enable - enable write cache
disable - disable write cache
!<cmd> - execute <cmd>, then return
quit
write_cache> display
Write Cache is enabled
write_cache>
Use the same steps for read_cache, and use the enable and disable commands to change either setting.
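If you have many drives, you don't have to step through the menus one disk at a time. Here is a rough, untested sketch that replays the same menu commands from a file using format's -d (pre-select a disk) and -f (read commands from a file) options; the disk name and file path are just examples:
Giovanni@server:~# cat /tmp/wcache.cmd
cache
write_cache
display
quit
quit
quit
Giovanni@server:~# format -e -d c8t5d0 -f /tmp/wcache.cmd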
Tagged cache, hard drive, opensolaris, performance, read, read_cache, write, write_cache, zfs
Setup Filebench on Solaris for benchmarking
Like any other newbie on Solaris, I didn't know how to install packages; I am used to yum or apt-get install. Anyway, on Solaris I did:
Giovanni@server:~/Downloads/filebench-1.4.8# pkg install SUNWfilebench
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 60/60 0.32/0.32
PHASE ACTIONS
Install Phase 82/82
Giovanni@server:~/Downloads/filebench-1.4.8#
and it was installed 🙂 Use pkg search to search for packages.
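For example, a quick sketch of finding the package name before installing (the search term here is just the obvious one):
Giovanni@server:~# pkg search filebench
The SUNWfilebench package used above should show up in the results.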
Giovanni@server:/usr/benchmarks/filebench# bin/go_filebench
FileBench Version 1.4.4
filebench> load varmail
742: 3.707: Varmail Version 2.1 personality successfully loaded
742: 3.707: Usage: set $dir=<dir>
742: 3.707: set $filesize=<size> defaults to 16384
742: 3.707: set $nfiles=<value> defaults to 1000
742: 3.707: set $nthreads=<value> defaults to 16
742: 3.707: set $meaniosize=<value> defaults to 16384
742: 3.707: set $readiosize=<size> defaults to 1048576
742: 3.707: set $meandirwidth=<size> defaults to 1000000
742: 3.707: (sets mean dir width and dir depth is calculated as log (width, nfiles)
742: 3.707: dirdepth therefore defaults to dir depth of 1 as in postmark
742: 3.707: set $meandir lower to increase depth beyond 1 if desired)
742: 3.707:
742: 3.707: run runtime (e.g. run 60)
filebench> set $dir=/gpool
filebench> run 60
742: 27.078: Creating/pre-allocating files and filesets
742: 27.081: Fileset bigfileset: 1000 files, 0 leafdirs avg dir = 1000000, avg depth = 0.5, mbytes=15
742: 27.096: Removed any existing fileset bigfileset in 1 seconds
742: 27.096: making tree for filset /gpool/bigfileset
742: 27.096: Creating fileset bigfileset…
742: 35.092: Preallocated 812 of 1000 of fileset bigfileset in 8 seconds
742: 35.092: waiting for fileset pre-allocation to finish
742: 35.092: Starting 1 filereader instances
744: 36.102: Starting 16 filereaderthread threads
742: 39.112: Running…
742: 99.712: Run took 60 seconds…
742: 99.713: Per-Operation Breakdown
closefile4 449ops/s 0.0mb/s 0.0ms/op 3us/op-cpu
readfile4 449ops/s 7.0mb/s 0.0ms/op 19us/op-cpu
openfile4 449ops/s 0.0mb/s 0.0ms/op 18us/op-cpu
closefile3 449ops/s 0.0mb/s 0.0ms/op 3us/op-cpu
fsyncfile3 449ops/s 0.0mb/s 17.4ms/op 20us/op-cpu
appendfilerand3 449ops/s 3.5mb/s 0.0ms/op 27us/op-cpu
readfile3 449ops/s 7.0mb/s 0.0ms/op 18us/op-cpu
openfile3 449ops/s 0.0mb/s 0.0ms/op 18us/op-cpu
closefile2 449ops/s 0.0mb/s 0.0ms/op 3us/op-cpu
fsyncfile2 449ops/s 0.0mb/s 17.9ms/op 17us/op-cpu
appendfilerand2 449ops/s 3.5mb/s 0.0ms/op 23us/op-cpu
createfile2 449ops/s 0.0mb/s 0.1ms/op 52us/op-cpu
deletefile1 449ops/s 0.0mb/s 0.0ms/op 33us/op-cpu
742: 99.713: IO Summary: 353667 ops, 5836.1 ops/s, (898/898 r/w) 21.0mb/s, 78us cpu/op, 8.9ms latency
742: 99.713: Shutting down processes
filebench>
742: 110.144: Aborting…
Going back to normal: aborting (Ctrl+C) drops you back to the regular shell.
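If you want to push the benchmark a little harder, the usage output above lists the variables you can override before running. A minimal sketch with arbitrary example values (still pointing $dir at the gpool mountpoint):
filebench> load varmail
filebench> set $dir=/gpool
filebench> set $nfiles=5000
filebench> set $nthreads=32
filebench> run 60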
Create a Storage Pool
This will create a pool named "gpool" using RAIDZ (similar to RAID 5) with member drives c8t1d0, c8t2d0, c8t3d0, c8t4d0 and c8t5d0:
Giovanni@server:~# zpool create gpool raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0
Giovanni@server:~# zpool status
pool: gpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
gpool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c8t1d0 ONLINE 0 0 0
c8t2d0 ONLINE 0 0 0
c8t3d0 ONLINE 0 0 0
c8t4d0 ONLINE 0 0 0
c8t5d0 ONLINE 0 0 0
errors: No known data errors
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c8t0d0s0 ONLINE 0 0 0
errors: No known data errors
Giovanni@server:~#
The pool is now online; by default ZFS mounts it at /gpool.
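From here you would normally carve the pool into datasets rather than writing straight to the pool root. A minimal sketch, assuming a dataset called "data" and turning on compression (both the dataset name and the property choice are just examples):
Giovanni@server:~# zfs create gpool/data
Giovanni@server:~# zfs set compression=on gpool/data
Giovanni@server:~# zpool list gpool
Giovanni@server:~# zfs list -r gpool
zpool list shows the raw pool capacity (including parity), while zfs list shows the usable space as seen by the filesystems.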
How to view available SATA hard drives
You can view the hardware IDs for your hard drives using 'format':
Giovanni@server:~# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 252>
/pci@0,0/pci15d9,d380@1f,2/disk@0,0
1. c8t1d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@1,0
2. c8t2d0 <ATA-Hitachi HDS72202-A28A-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@2,0
3. c8t3d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@3,0
4. c8t4d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@4,0
5. c8t5d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@5,0
Specify disk (enter its number):
Hard drive device nodes are located in /dev/dsk on OpenSolaris. Compare the list against zpool status and add the drives that are new to the system (not yet in any storage pool).
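As a quick sketch, you can also dump the same disk list without sitting at the interactive prompt and compare it with the existing pools (the empty echo simply answers format's disk prompt so it exits on its own):
Giovanni@server:~# echo | format
Giovanni@server:~# ls /dev/dsk
Giovanni@server:~# zpool status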