<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>zfs &#8211; Giovanni F. Mazzeo De Santolo</title>
	<atom:link href="https://desantolo.com/tag/zfs/feed/" rel="self" type="application/rss+xml" />
	<link>https://desantolo.com</link>
	<description>That Italian IT guy</description>
	<lastBuildDate>Sun, 27 Dec 2020 05:38:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">123042357</site>	<item>
		<title>Fix ZFSonLinux pool auto-expanding</title>
		<link>https://desantolo.com/2017/07/fix-zfsonlinux-pool-auto-expanding/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Mon, 24 Jul 2017 05:00:16 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[zfs]]></category>
		<category><![CDATA[zpool]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=560</guid>

					<description><![CDATA[If you&#8217;re having issues with zfsonlinux and your pool not expanding after replacing your hard drives with larger ones then here is a trick to fix it. # zpool set autoexpand=on {pool name} # zpool online -e {pool-name} {disk name/id &#8230; <a href="https://desantolo.com/2017/07/fix-zfsonlinux-pool-auto-expanding/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>If you&#8217;re having issues with zfsonlinux and your pool not expanding after replacing your hard drives with larger ones, here is a trick to fix it.<span id="more-560"></span></p>
<p class="p1"><span class="s1"># zpool set autoexpand=on {pool-name}</span></p>
<p class="p1"># zpool online -e {pool-name} {disk name/id as displayed in zpool status}</p>
<p>Your pool should resize after running the second command. The first command simply ensures the autoexpand property is set, which the second command needs in order to expand the pool.</p>
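<p>To confirm whether ZFS still sees unclaimed capacity, you can look at the EXPANDSZ column of <code>zpool list</code>. The snippet below is only a sketch that parses hypothetical <code>zpool list</code> output (the pool name and sizes are made up):</p>

```shell
# Hypothetical `zpool list` output captured before expansion; EXPANDSZ
# shows the space ZFS could claim once the pool is expanded.
sample='NAME   SIZE  ALLOC   FREE  EXPANDSZ
tank  3.62T  1.20T  2.42T        2T'

# Print any pool that still has unclaimed expandable space.
echo "$sample" | awk 'NR > 1 && $5 != "-" { print $1, $5 }'
```

<p>On a real system, pipe the live <code>zpool list</code> output into the same awk filter instead of the sample text.</p>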
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">560</post-id>	</item>
		<item>
		<title>Fix zfs-mount.service failing after reboot on Proxmox</title>
		<link>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/</link>
					<comments>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sat, 01 Jul 2017 01:33:33 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Troubleshooting]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=545</guid>

					<description><![CDATA[In my new homelab migration to Proxmox I came across a bug that will prevent you from being able to mount all your ZFS mount points and be a pain in the ass even more if you host containers in &#8230; <a href="https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>During my homelab migration to Proxmox I came across a bug that prevents all of your ZFS mount points from mounting, and it is even more of a pain if you host containers in those directories.<br />
<span id="more-545"></span><br />
<strong>Cause of the problem:</strong> when you use a zpool other than the default rpool and set up a directory mount for PVE to use (ISO datastore, VZ dump, etc.), and the ZFS mount points have not finished mounting at boot time, Proxmox will attempt to create the directory path structure itself.</p>
<p>The problem with creating a directory for something before it is mounted is that when zfs-mount.service runs and attempts to mount the ZFS mount points, you get errors like these:</p>
<p><code>root@pve:~# <strong>systemctl status zfs-mount.service</strong></code><br />
<code>● zfs-mount.service - Mount ZFS filesystems</code><br />
<code> Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)</code><br />
<code> Active: failed (Result: exit-code) since Fri 2017-06-30 18:10:21 PDT; 21s ago</code><br />
<code> Process: 6590 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)</code><br />
<code> Main PID: 6590 (code=exited, status=1/FAILURE)</code></p>
<p><code>Jun 30 18:10:19 pve systemd[1]: Starting Mount ZFS filesystems...</code><br />
<code>Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-102-disk-1': directory is not empty</code><br />
<code>Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-106-disk-1': directory is not empty</code><br />
<code>Jun 30 18:10:20 pve zfs[6590]: cannot mount '/gdata/pve/subvol-109-disk-1': directory is not empty</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: Failed to start Mount ZFS filesystems.</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Unit entered failed state.</code><br />
<code>Jun 30 18:10:21 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.</code></p>
<p><strong>Fixing the root of the problem:</strong> change how Proxmox deals with mounts by editing /etc/pve/storage.cfg &#8211; you need to add &#8220;mkdir 0&#8221; and &#8220;is_mountpoint 1&#8221; to the directory mount. Example:</p>
<p><code>dir: gdata-dump</code><br />
<code> path /gdata/vz</code><br />
<code> content iso,vztmpl,backup</code><br />
<code> maxfiles 0</code><br />
<code> shared 0</code><br />
<code> mkdir 0</code><br />
<code> is_mountpoint 1</code></p>
<p>Now we need to do some system cleanup before we reboot and confirm the problem is fixed.</p>
<p>Let&#8217;s check which mount points have failed:<br />
<code>root@pve:~# <strong>zfs list -r -o name,mountpoint,mounted</strong></code></p>
<p>Now let&#8217;s unmount all ZFS mount points (except rpool of course &#8211; assuming the root filesystem is ZFS):</p>
<p><code># zfs umount -a</code></p>
<p>After making sure the ZFS mount points are unmounted, we can delete the empty folders. Recall the failed mount points that the zfs list command gave you and delete them one by one, like so:</p>
<p><code># rm -rf /gdata/pve/subvol-102-disk-1</code></p>
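<p>If you have many stale directories, the cleanup can be scripted. This is only a sketch: it parses sample <code>zfs list -H</code> output (these dataset names are hypothetical) and merely prints the <code>rm -rf</code> commands so you can review them before running anything:</p>

```shell
# Sample output of: zfs list -H -o name,mountpoint,mounted
# (tab-separated; dataset names are made up for illustration)
sample=$(printf '%s\t%s\t%s\n' \
  gdata/pve/subvol-102-disk-1 /gdata/pve/subvol-102-disk-1 no \
  gdata/pve/subvol-103-disk-1 /gdata/pve/subvol-103-disk-1 yes \
  gdata/pve/subvol-106-disk-1 /gdata/pve/subvol-106-disk-1 no)

# Print (not run) a cleanup command for every dataset that failed to mount.
echo "$sample" | awk -F'\t' '$3 == "no" { print "rm -rf " $2 }'
```

<p>On the real host you would feed it the live <code>zfs list -H -o name,mountpoint,mounted</code> output, and only pipe the result to a shell after checking the list.</p>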
<p>Do this for each folder that had issues mounting. You can either remount everything with <code>zfs mount -O -a</code>, or better, reboot the system and confirm it&#8217;s fixed. I like the latter better, so reboot.</p>
<p>After it boots back up, check that the service was able to mount ZFS without issues:</p>
<p><code># systemctl status zfs-mount.service</code><br />
<code># zfs list -r -o name,mountpoint,mounted</code></p>
<p>That&#8217;s all folks&#8230; if you made the edit to storage.cfg and added the two options, this should not occur again. This was an annoying bug to deal with, but it&#8217;s good to have found a better solution than a startup script doing dirty tricks!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/07/fix-zfs-mount-service-failing-after-reboot-on-proxmox/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">545</post-id>	</item>
		<item>
		<title>Install proxmox on a partition instead of a full-disk</title>
		<link>https://desantolo.com/2017/06/zfs-proxmox-on-a-partition-instead-of-a-full-disk/</link>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 11 Jun 2017 11:24:31 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[freenas]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[ssd]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=532</guid>

					<description><![CDATA[By default, installing Proxmox with ZFS during the installation process will force you to use the entire disk for the root zpool. For most installs this is good enough. However, I like to do things differently sometimes. I have a &#8230; <a href="https://desantolo.com/2017/06/zfs-proxmox-on-a-partition-instead-of-a-full-disk/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>By default, installing Proxmox with ZFS during the installation process will force you to use the entire disk for the root zpool. For most installs this is good enough. However, I like to do things differently sometimes.</p>
<p>I have a pair of Samsung 840 Pro 256GB SSDs that I wanted to use for my new homelab that I am currently building (moving from vmware to proxmox). You may be wondering why I want to install the operating system on a partition instead of an entire disk. Several reasons:<br />
<span id="more-532"></span><br />
1. Proxmox (ZFS-on-Linux) does not yet support SSD TRIM; FreeBSD does, so migrating from FreeNAS to Proxmox I need to be aware of it.<br />
2. Data redundancy for the root filesystem does not need to be large. Even if I mirror my two SSDs (RAID1), I won&#8217;t be storing my critical data or VMs in the rpool &#8211; I want a smaller root pool that has fault tolerance. A 60GB partition mirrored across the two SSDs should fit the bill here.<br />
3. ZIL (ZFS Intent Log) experimentation: I also want to use the same two SSDs to speed up my ZFS writes. A small striped partition (RAID0) for performance, 45GB total (22.5GB per SSD), is plenty for this.<br />
4. The leftover unused space will be left untouched so that the SSD has more available blocks for the controller&#8217;s built-in garbage collection (not the same as TRIM).</p>
<p>I don&#8217;t have enough time to go into a lot of details (it&#8217;s past 4am), so I will get to how to do it. If you are trying to follow my same steps, you will need at least 3 hard drives.</p>
<p>1. On a hard drive or device you don&#8217;t mind sacrificing, install Proxmox as you normally would. Wipe the entire partition table and let it install RAID0 on the whole disk.<br />
2. Boot into your new installation with the two new disks you want to keep attached to the system, and ensure Linux sees them; fdisk should help with this.<br />
3. You will now need to create the partitions on the new disks (not the rpool disk):</p>
<p>You will need to know how to calculate disk sectors: divide the desired size in bytes by your sector size. I don&#8217;t have time to go over it in depth, but here is a quick TL;DR example to give you an idea:</p>
<p>We want a 25GB slice, so that is roughly 25000000000 bytes / 512 bytes (sector size) = 48828125 sectors to allocate for that amount of storage.</p>
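<p>That arithmetic is easy to script. This hypothetical helper converts a size in decimal gigabytes into a 512-byte sector count, matching the numbers used in the sgdisk commands:</p>

```shell
# gb_to_sectors: hypothetical helper converting a partition size in
# (decimal) gigabytes into a count of 512-byte sectors.
gb_to_sectors() {
  # 1 GB = 1,000,000,000 bytes; 512 bytes per sector
  echo $(( $1 * 1000000000 / 512 ))
}

gb_to_sectors 25   # prints 48828125, the sector count from the example above
```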
<p>Take a look at the existing partition table to make sure you create something similar: fdisk -l /dev/sd$ (your rpool disk). We will leave 8MB at the end of the disk; Proxmox by default creates 3 partitions: GRUB boot, ZFS data, and an 8MB Solaris reserved partition.</p>
<p>These commands create the partitions for my new array; I&#8217;ve labeled each partition via the -c flag, so they should be self-explanatory.</p>
<p><code># sgdisk -z /dev/sdb</code><br />
<code># sgdisk -a1 -n1:34:2047 -t1:EF02 -c1:"BIOS boot" -n2:2048:156252048 -t2:BF01 -c2:"mirror" -n3:156252049:205080174 -t3:BF01 -c3:"stripe" -n4:205080175:205096559 -t4:BF0 /dev/sda</code></p>
<p><code># sgdisk -a1 -n1:34:2047 -t1:EF02 -c1:"BIOS boot" -n2:2048:156252048 -t2:BF01 -c2:"mirror" -n3:156252049:205080174 -t3:BF01 -c3:"stripe" -n4:205080175:205096559 -t4:BF0 /dev/sdc</code><br />
<code># zpool create -f stripe -o ashift=13 /dev/sda3 /dev/sdc3</code><br />
<code># zpool create -f newroot -o ashift=13 mirror /dev/sda2 /dev/sdc2</code><br />
<code># grub-install /dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S1ATNSADB46090M</code><br />
<code># grub-install /dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S12RNEACC59063B</code></p>
<p>Backup &amp; moving stuff:</p>
<p><code># zfs snapshot -r rpool@fullbackup</code><br />
<code># zfs list -t snapshot</code><br />
<code># zfs send -R rpool@fullbackup | zfs recv -vFd newroot</code></p>
<p><code>root@pve:/# zpool get bootfs</code><br />
<code>NAME PROPERTY VALUE SOURCE</code><br />
<code>newroot bootfs - default</code><br />
<code>rpool bootfs rpool/ROOT/pve-1 local</code><br />
<code>stripe bootfs - default</code><br />
<code>root@pve:/# zpool set bootfs=newroot/ROOT/pve-1 newroot</code><br />
<code># zpool export newroot</code><br />
<code># zpool import -o altroot=/mnt newroot</code></p>
<p>I then rebooted with a FreeNAS live CD, entered the shell, imported newroot under the new name rpool, and rebooted again. Next, boot into Proxmox recovery; once it boots, run the recovery steps:</p>
<p><code># grub-install /dev/sdb</code><br />
<code># grub-install /dev/sda</code><br />
<code># update-grub2</code><br />
<code># update-initramfs -u</code></p>
<p>Note: <code>zpool set bootfs=newroot rpool</code> could possibly also work without the renaming step via FreeNAS, but I didn&#8217;t try it.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">532</post-id>	</item>
		<item>
		<title>Virtualization hypervisor and containers all in one</title>
		<link>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/</link>
					<comments>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 08 Jan 2017 10:01:26 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[openindiana]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[smartos]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=475</guid>

					<description><![CDATA[I&#8217;m a big fan of virtualization, the ability to run multiple platforms and operating systems (called guests) in a single server (called host) is probably one of the best computing technologies of the past 10 years. Personally, I have been &#8230; <a href="https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m a big fan of virtualization, the ability to run multiple platforms and operating systems (called guests) in a single server (called host) is probably one of the best computing technologies of the past 10 years.</p>
<p>Personally, I have been using virtualization since circa 2004. It all took off after 2006, when chip manufacturers started bundling virtualization technologies in their processors (Intel VT-x or AMD-V). The reason &#8220;cloud&#8221; computing is so popular can also be attributed to virtualization.</p>
<h3>In a container world&#8230;</h3>
<p>However, in the past couple of years a new technology has been making the rounds everywhere: the words &#8220;containers&#8221;, &#8220;docker&#8221;, and &#8220;orchestration&#8221; have been picking up steam. They say that containers are changing the landscape for system administrators and application developers.</p>
<p>Claims that containers can be built and deployed in seconds, share a common storage layer, and allow you to resize the container in real time when you need more performance or capacity are really exciting concepts, and I think now is the time for me to jump in and learn a thing or two about this new technology while it&#8217;s hot and new.<span id="more-475"></span></p>
<h3>Time to ditch vmware ESXi for a hybrid hypervisor?</h3>
<p>You may remember my blog entry <a href="https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/">building a low-power sandy bridge ESXi server with ZFS</a> &#8211; now 5 years later it is time to find a new platform that will allow me to keep my legacy virtual machines (VMs) as well as allow me to host containers using Docker.</p>
<p>The process of finding a suitable replacement for ESXi may take a while and more than a single entry on my blog. This is the first entry of my journey.</p>
<p>Before replacing something that works with a new platform, I think it is good to point out the strengths and weaknesses of vmware ESXi (which has been my platform of choice for 6 years).</p>
<h4>Strengths of ESXi</h4>
<ul>
<li>an awesome Windows GUI vSphere client that allows you to manage your hypervisor without the need for console or ssh</li>
<li>a web-interface to manage it too if you <a href="https://labs.vmware.com/flings/esxi-embedded-host-client">install a plugin</a></li>
<li>virtual switch with VLAN support</li>
<li>support for PCI passthrough (Intel VT-d) allowing you to assign PCI devices to virtual guests</li>
</ul>
<h4>Weaknesses</h4>
<ul>
<li>does not support docker containers (unless you wish to create a virtual machine and run docker from there &#8211; but I prefer a central platform if possible)</li>
<li>vmware <a href="http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-client-65-html5-functionality-support.html">continues to remove features from the free version</a> of ESXi &#8211; the vSphere client interface is no longer available in their latest release</li>
<li>Nothing exciting has been released by vmware in the past 2 years (in terms of ESXi), and they keep pushing ESXi users toward paid licenses</li>
</ul>
<h3>The alternatives?</h3>
<p><strong>SmartOS</strong> is a fork of OpenIndiana/OpenSolaris; it seems to have a lot of great security features, plus features from Solaris that I enjoy (you may have read of my love for the ZFS filesystem, which is native to SmartOS). Joyent has recently open-sourced their SmartDataCenter &#8220;SDC&#8221;, which they are now calling Triton Enterprise.</p>
<p>What I like about it, other than being native Solaris and using the ZFS filesystem for storage, is that it is a <strong>hybrid hypervisor</strong>: it can host both containers and VMs (using technology similar to VirtualBox, since VirtualBox is also from Solaris).</p>
<p>The downside of this platform seems to be the complexity needed to deploy containers with it. You need a &#8220;head node&#8221; to act as the brains of the platform, and the head node does a lot of critical things: it monitors the network and the other compute nodes (where you host your VMs/containers), and it also hosts the database for all the nodes. In dev mode you can force the head node to host VMs as well, but this is not recommended or good practice.</p>
<p>The web interface (SmartDataCenter) for managing your containers and VMs is also very rudimentary; there is no built-in console to your guests. You need to run a lot of commands in the head node&#8217;s shell, making JSON queries to grab the data you want, like the VNC server and port address for your guests.</p>
<p>Honestly I have not dug much deeper into SmartOS, but I probably should; it looks like an awesome project. I am sure the platform makes sense for people who want scalable container/hypervisor deployments, but as a replacement for my single virtualization server at home it does not look like a good choice given the complexity.</p>
<p><strong>Proxmox</strong> is another platform I am looking at. You may recall that 7 years ago I discovered Proxmox Virtual Environment and started using it in my lab. That was Proxmox VE 1.5, I think, and I recently discovered they have made a lot of strides in the right direction.</p>
<p>Just a few weeks ago they released their latest PVE 4.4, and they now support ZFS data pools (via zfsonlinux), not to mention that they have replaced OpenVZ with LXC (Linux containers). It may be worth it for me to download their latest release and check out their platform again.</p>
<p>Other than Proxmox or SmartOS, I have not come across any other &#8216;hybrid&#8217; hypervisors. Please share in the comments if there is something else I should check out.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">475</post-id>	</item>
		<item>
		<title>Building a low power Sandy Bridge ESXi + ZFS Storage Array</title>
		<link>https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/</link>
					<comments>https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Tue, 17 May 2011 20:08:23 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[esxi]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">http://desantolo.com/?p=278</guid>

					<description><![CDATA[I have finals this week, so I will update this post as I have more time. In the meantime, I am working to get vmware ESXi (free version of vmware Virtualization server hypervisor) onto a custom whitebox build to replace &#8230; <a href="https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>I have finals this week, so I will update this post as I have more time. In the meantime, I am working to get vmware ESXi (free version of vmware Virtualization server hypervisor) onto a custom whitebox build to replace my aging Intel Core 2 Quad Q9450 server that uses around 125 Watts while idle.<span id="more-278"></span></p>
<p>&nbsp;</p>
<p><a href="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg"><img data-recalc-dims="1" fetchpriority="high" decoding="async" data-attachment-id="282" data-permalink="https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/img_0444/" data-orig-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg?fit=614%2C819&amp;ssl=1" data-orig-size="614,819" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}" data-image-title="old power consumption" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg?fit=224%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg?fit=500%2C667&amp;ssl=1" class="aligncenter size-full wp-image-282" title="old power consumption" src="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg?resize=500%2C667" alt="" width="500" height="667" srcset="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg?w=614&amp;ssl=1 614w, https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg?resize=224%2C300&amp;ssl=1 224w" sizes="(max-width: 500px) 100vw, 500px" /></a>Let me start by giving you a brief overview of my old system:</p>
<ul>
<li>Intel Core 2 Quad Q9450 2.66Ghz LGA 775 95W TDP</li>
<li>Corsair Builder CX430 430Watt Power Supply</li>
<li>4GB PC2-6400 667Mhz DDR2 Ram 1.5v</li>
<li>ATI Radeon 4800 basic PCI-E graphics (no PCI-E power needed)</li>
<li>Biostar Tpower I45 Motherboard (I45 Chipset)</li>
<li>LSI SAS3041E-R SATA II 300Mbps RAID controller</li>
<li>OS: Ubuntu Linux 10.04</li>
<li>Storage: Oracle ZFS (via FUSE-ZFS) **</li>
<li>RAID-Z with four drives (3x 2TB plus one 1TB drive)</li>
</ul>
<p><a href="http://desantolo.com/wp-content/uploads/2011/05/IMG_0444.jpg"></a><a href="http://desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg"></a><a href="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg"><img data-recalc-dims="1" decoding="async" data-attachment-id="283" data-permalink="https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/img_0446/" data-orig-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg?fit=614%2C819&amp;ssl=1" data-orig-size="614,819" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;}" data-image-title="old system" data-image-description="" data-image-caption="" data-medium-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg?fit=224%2C300&amp;ssl=1" data-large-file="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg?fit=500%2C667&amp;ssl=1" class="aligncenter size-full wp-image-283" title="old system" src="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg?resize=500%2C667" alt="" width="500" height="667" srcset="https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg?w=614&amp;ssl=1 614w, https://i0.wp.com/desantolo.com/wp-content/uploads/2011/05/IMG_0446.jpg?resize=224%2C300&amp;ssl=1 224w" sizes="(max-width: 500px) 100vw, 500px" /></a><br />
As you can see, while idle the system constantly draws a minimum of 130 Watts. This becomes a problem since the server is online 24&#215;7, and thanks to the Kill-A-Watt EZ I know it costs an estimated $30 a month to run the server &#8211; about $0.80 a day in electricity alone.</p>
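<p>As a sanity check on those numbers, the daily cost is just watts &#215; 24 h &#247; 1000 &#215; your electric rate. The rate in this sketch is an assumption (roughly $0.26/kWh) picked to land near the ~$0.80/day figure, not a value from my bill:</p>

```shell
# Rough daily electricity cost of a box idling at a constant draw.
# The $0.26/kWh rate is an assumed value, not from the original post.
watts=130
rate=0.26   # USD per kWh (assumption)

awk -v w="$watts" -v r="$rate" 'BEGIN {
  kwh = w * 24 / 1000          # 3.12 kWh per day at 130 W
  printf "%.2f\n", kwh * r     # daily cost in USD
}'
```

<p>Plug in your own meter reading and utility rate to estimate what an always-on server costs you.</p>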
<p>With today&#8217;s green technology, Sandy Bridge&#8217;s new processor steppings (Intel SpeedStep), and green hard drives (replacing my old 1TB 7200 RPM drive with a quieter, lower-power 2TB Hitachi), I hope to reduce my idle power consumption by at least 20%. <em>Based on the $30 a month total cost to run my old server, this means I would be saving $6+ a month on my electric bill with this new build</em>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">278</post-id>	</item>
		<item>
		<title>Checking for Hard drive READ and WRITE Cache (onboard) on Solaris</title>
		<link>https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/</link>
					<comments>https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Wed, 05 May 2010 15:09:45 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[cache]]></category>
		<category><![CDATA[hard drive]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[read]]></category>
		<category><![CDATA[read_cache]]></category>
		<category><![CDATA[write]]></category>
		<category><![CDATA[write_cache]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=14</guid>

					<description><![CDATA[To check for read and write cache for your hard drives do the following: Giovanni@server:~# format -e Searching for disks&#8230;done AVAILABLE DISK SELECTIONS: 0. c8t0d0 &#60;DEFAULT cyl 60797 alt 2 hd 255 sec 252&#62; /pci@0,0/pci15d9,d380@1f,2/disk@0,0 1. c8t1d0 &#60;ATA-Hitachi HDS72202-A3EA-1.82TB&#62; /pci@0,0/pci15d9,d380@1f,2/disk@1,0 &#8230; <a href="https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>To check the on-board read and write cache on your hard drives, do the following:</p>
<p>Giovanni@server:~# format -e<br />
Searching for disks&#8230;done<br />
AVAILABLE DISK SELECTIONS:<br />
0. c8t0d0 &lt;DEFAULT cyl 60797 alt 2 hd 255 sec 252&gt;<br />
<a>/pci@0,0/pci15d9,d380@1f,2/disk@0,0</a><br />
1. c8t1d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
<a>/pci@0,0/pci15d9,d380@1f,2/disk@1,0</a><br />
2. c8t2d0 &lt;ATA-Hitachi HDS72202-A28A-1.82TB&gt;<br />
<a>/pci@0,0/pci15d9,d380@1f,2/disk@2,0</a><br />
3. c8t3d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
<a>/pci@0,0/pci15d9,d380@1f,2/disk@3,0</a><br />
4. c8t4d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
<a>/pci@0,0/pci15d9,d380@1f,2/disk@4,0</a><br />
5. c8t5d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
<a>/pci@0,0/pci15d9,d380@1f,2/disk@5,0</a><br />
Specify disk (enter its number):</p>
<p>Select a drive; let&#8217;s pick 5 from the list.</p>
<p>Specify disk (enter its number): 5<br />
selecting c8t5d0<br />
[disk formatted]<br />
/dev/dsk/c8t5d0s0 is part of active ZFS pool gpool. Please see zpool(1M).<br />
FORMAT MENU:<br />
disk       &#8211; select a disk<br />
type       &#8211; select (define) a disk type<br />
partition  &#8211; select (define) a partition table<br />
current    &#8211; describe the current disk<br />
format     &#8211; format and analyze the disk<br />
fdisk      &#8211; run the fdisk program<br />
repair     &#8211; repair a defective sector<br />
label      &#8211; write label to the disk<br />
analyze    &#8211; surface analysis<br />
defect     &#8211; defect list management<br />
backup     &#8211; search for backup labels<br />
verify     &#8211; read and display labels<br />
inquiry    &#8211; show vendor, product and revision<br />
scsi       &#8211; independent SCSI mode selects<br />
cache      &#8211; enable, disable or query SCSI disk cache<br />
volname    &#8211; set 8-character volume name<br />
!&lt;cmd&gt;     &#8211; execute &lt;cmd&gt;, then return<br />
quit<br />
format&gt;</p>
<p>Now let&#8217;s check the cache settings.</p>
<p>Enter &#8220;cache&#8221; to open the cache menu.</p>
<p>CACHE MENU:<br />
write_cache &#8211; display or modify write cache settings<br />
read_cache  &#8211; display or modify read cache settings<br />
!&lt;cmd&gt;      &#8211; execute &lt;cmd&gt;, then return<br />
quit<br />
cache&gt;</p>
<p>Type &#8220;write_cache&#8221; or &#8220;read_cache&#8221; depending on what you would like to see; let&#8217;s use write:</p>
<p>cache&gt; write_cache<br />
WRITE_CACHE MENU:<br />
display     &#8211; display current setting of write cache<br />
enable      &#8211; enable write cache<br />
disable     &#8211; disable write cache<br />
!&lt;cmd&gt;      &#8211; execute &lt;cmd&gt;, then return<br />
quit<br />
write_cache&gt; display<br />
Write Cache is enabled<br />
write_cache&gt;</p>
<p>Use the same steps for read_cache, and for enabling or disabling either cache.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">14</post-id>	</item>
		<item>
		<title>Setup Filebench on Solaris for benchmarking</title>
		<link>https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/</link>
					<comments>https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Mon, 03 May 2010 19:15:37 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[benchmark]]></category>
		<category><![CDATA[install]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[pkg]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=9</guid>

					<description><![CDATA[Like any other newbie on Solaris, I didn&#8217;t know how to install the packages, I am used to yum or apt-get install but anyway on Solaris I did: Giovanni@server:~/Downloads/filebench-1.4.8# pkg install SUNWfilebench DOWNLOAD                                    PKGS       FILES     XFER (MB) Completed                                    1/1       60/60     &#8230; <a href="https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
					<content:encoded><![CDATA[<p>Like any other newbie on Solaris, I didn&#8217;t know how to install packages; I was used to <strong>yum</strong> or <strong>apt-get install</strong>. Anyway, on Solaris I did:</p>
<blockquote><p>Giovanni@server:~/Downloads/filebench-1.4.8# pkg install SUNWfilebench<br />
DOWNLOAD                                    PKGS       FILES     XFER (MB)<br />
Completed                                    1/1       60/60     0.32/0.32</p>
<p>PHASE                                        ACTIONS<br />
Install Phase                                  82/82<br />
Giovanni@server:~/Downloads/filebench-1.4.8#</p></blockquote>
<p>and it was installed 🙂 Use <strong>pkg search</strong> to search for packages.</p>
<blockquote><p>Giovanni@server:/usr/benchmarks/filebench# bin/go_filebench<br />
FileBench Version 1.4.4<br />
filebench&gt; load varmail<br />
742: 3.707: Varmail Version 2.1 personality successfully loaded<br />
742: 3.707: Usage: set $dir=&lt;dir&gt;<br />
742: 3.707:        set $filesize=&lt;size&gt;    defaults to 16384<br />
742: 3.707:        set $nfiles=&lt;value&gt;     defaults to 1000<br />
742: 3.707:        set $nthreads=&lt;value&gt;   defaults to 16<br />
742: 3.707:        set $meaniosize=&lt;value&gt; defaults to 16384<br />
742: 3.707:        set $readiosize=&lt;size&gt;  defaults to 1048576<br />
742: 3.707:        set $meandirwidth=&lt;size&gt; defaults to 1000000<br />
742: 3.707: (sets mean dir width and dir depth is calculated as log (width, nfiles)<br />
742: 3.707:  dirdepth therefore defaults to dir depth of 1 as in postmark<br />
742: 3.707:  set $meandir lower to increase depth beyond 1 if desired)<br />
742: 3.707:<br />
742: 3.707:        run runtime (e.g. run 60)<br />
filebench&gt; set $dir=/gpool<br />
filebench&gt; run 60<br />
742: 27.078: Creating/pre-allocating files and filesets<br />
742: 27.081: Fileset bigfileset: 1000 files, 0 leafdirs avg dir = 1000000, avg depth = 0.5, mbytes=15<br />
742: 27.096: Removed any existing fileset bigfileset in 1 seconds<br />
742: 27.096: making tree for filset /gpool/bigfileset<br />
742: 27.096: Creating fileset bigfileset&#8230;<br />
742: 35.092: Preallocated 812 of 1000 of fileset bigfileset in 8 seconds<br />
742: 35.092: waiting for fileset pre-allocation to finish<br />
742: 35.092: Starting 1 filereader instances<br />
744: 36.102: Starting 16 filereaderthread threads<br />
742: 39.112: Running&#8230;<br />
742: 99.712: Run took 60 seconds&#8230;<br />
742: 99.713: Per-Operation Breakdown<br />
closefile4                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu<br />
readfile4                 449ops/s   7.0mb/s      0.0ms/op       19us/op-cpu<br />
openfile4                 449ops/s   0.0mb/s      0.0ms/op       18us/op-cpu<br />
closefile3                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu<br />
fsyncfile3                449ops/s   0.0mb/s     17.4ms/op       20us/op-cpu<br />
appendfilerand3           449ops/s   3.5mb/s      0.0ms/op       27us/op-cpu<br />
readfile3                 449ops/s   7.0mb/s      0.0ms/op       18us/op-cpu<br />
openfile3                 449ops/s   0.0mb/s      0.0ms/op       18us/op-cpu<br />
closefile2                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu<br />
fsyncfile2                449ops/s   0.0mb/s     17.9ms/op       17us/op-cpu<br />
appendfilerand2           449ops/s   3.5mb/s      0.0ms/op       23us/op-cpu<br />
createfile2               449ops/s   0.0mb/s      0.1ms/op       52us/op-cpu<br />
deletefile1               449ops/s   0.0mb/s      0.0ms/op       33us/op-cpu</p>
<p>742: 99.713:<br />
IO Summary:      353667 ops, 5836.1 ops/s, (898/898 r/w)  21.0mb/s,     78us cpu/op,   8.9ms latency<br />
742: 99.713: Shutting down processes<br />
filebench&gt;<br />
742: 110.144: Aborting&#8230;</p></blockquote>
<p>Typing Ctrl+C at the filebench prompt aborts it and takes you back to the normal shell.</p>
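<p>The same varmail run can be driven unattended by writing the commands to a workload file instead of typing them at the prompt. A sketch only: the -f flag is an assumption based on common filebench 1.4.x builds, so check your copy; the install path and /gpool target are the ones from this post.</p>

```shell
#!/bin/sh
# Sketch: non-interactive varmail run. ASSUMPTION: this go_filebench
# build accepts a workload script via -f; verify on your release.
cat > /tmp/varmail-run.f <<'EOF'
load varmail
set $dir=/gpool
run 60
EOF

FB=/usr/benchmarks/filebench/bin/go_filebench
if [ -x "$FB" ]; then
    "$FB" -f /tmp/varmail-run.f
else
    echo "go_filebench not found; workload written to /tmp/varmail-run.f" >&2
fi
```

<p>This makes it easy to rerun the exact same benchmark after tuning, e.g. from cron or a test harness.</p>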
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12</post-id>	</item>
		<item>
		<title>Create a Storage Pool</title>
		<link>https://desantolo.com/2010/05/create-a-storage-pool/</link>
					<comments>https://desantolo.com/2010/05/create-a-storage-pool/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 02 May 2010 22:11:35 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[create]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[sata]]></category>
		<category><![CDATA[zfs]]></category>
		<category><![CDATA[zpool]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=6</guid>

					<description><![CDATA[This will create a pool named &#8220;gpool&#8221; using RAIDZ (raid5) with member drives  c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0 Giovanni@server:~# zpool create gpool raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0 Giovanni@server:~# zpool status pool: gpool state: ONLINE scrub: none requested config: NAME        STATE     &#8230; <a href="https://desantolo.com/2010/05/create-a-storage-pool/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
					<content:encoded><![CDATA[<p>This will create a pool named &#8220;gpool&#8221; using RAIDZ (similar to RAID-5) with member drives c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0:</p>
<blockquote><p>Giovanni@server:~# zpool create gpool raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0<br />
Giovanni@server:~# zpool status<br />
pool: gpool<br />
state: ONLINE<br />
scrub: none requested<br />
config:</p>
<p>NAME        STATE     READ WRITE CKSUM<br />
gpool       ONLINE       0     0     0<br />
&nbsp;&nbsp;raidz1    ONLINE       0     0     0<br />
&nbsp;&nbsp;&nbsp;&nbsp;c8t1d0  ONLINE       0     0     0<br />
&nbsp;&nbsp;&nbsp;&nbsp;c8t2d0  ONLINE       0     0     0<br />
&nbsp;&nbsp;&nbsp;&nbsp;c8t3d0  ONLINE       0     0     0<br />
&nbsp;&nbsp;&nbsp;&nbsp;c8t4d0  ONLINE       0     0     0<br />
&nbsp;&nbsp;&nbsp;&nbsp;c8t5d0  ONLINE       0     0     0</p>
<p>errors: No known data errors</p>
<p>pool: rpool<br />
state: ONLINE<br />
scrub: none requested<br />
config:</p>
<p>NAME        STATE     READ WRITE CKSUM<br />
rpool       ONLINE       0     0     0<br />
&nbsp;&nbsp;c8t0d0s0  ONLINE       0     0     0</p>
<p>errors: No known data errors<br />
Giovanni@server:~#</p></blockquote>
<p>Both pools report ONLINE with no known data errors, so the new pool is ready to use.</p>
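<p>Since zpool create relabels the disks irreversibly, it can be worth building the command line first and reviewing it before running anything. A small dry-run helper, sketched here (not part of the original post), with the pool and disk names taken from this article:</p>

```shell
#!/bin/sh
# Sketch: print the zpool create command so it can be reviewed before
# touching any disks. Names below come from the article; substitute
# the disks your own 'format' listing reports.
pool_create_cmd() {   # usage: pool_create_cmd <pool> <disk>...
    pool="$1"; shift
    printf 'zpool create %s raidz %s\n' "$pool" "$*"
}

cmd=$(pool_create_cmd gpool c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0)
echo "about to run: $cmd"
# When you are happy with it, execute it and check 'zpool status'.
```
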
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/create-a-storage-pool/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6</post-id>	</item>
		<item>
		<title>How to view available SATA hard drives</title>
		<link>https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/</link>
					<comments>https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 02 May 2010 21:41:27 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[create]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[sata]]></category>
		<category><![CDATA[zfs]]></category>
		<category><![CDATA[zpool]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=3</guid>

					<description><![CDATA[You will be able to view hardware ID&#8217;s for hard drives using &#8216;format&#8217; Giovanni@server:~# format Searching for disks&#8230;done AVAILABLE DISK SELECTIONS: 0. c8t0d0 &#60;DEFAULT cyl 60797 alt 2 hd 255 sec 252&#62; /pci@0,0/pci15d9,d380@1f,2/disk@0,0 1. c8t1d0 &#60;ATA-Hitachi HDS72202-A3EA-1.82TB&#62; /pci@0,0/pci15d9,d380@1f,2/disk@1,0 2. c8t2d0 &#8230; <a href="https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
					<content:encoded><![CDATA[<p>You can view the device IDs of your hard drives using &#8216;format&#8217;:</p>
<blockquote><p>Giovanni@server:~# format<br />
Searching for disks&#8230;done<br />
AVAILABLE DISK SELECTIONS:<br />
0. c8t0d0 &lt;DEFAULT cyl 60797 alt 2 hd 255 sec 252&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@0,0<br />
1. c8t1d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@1,0<br />
2. c8t2d0 &lt;ATA-Hitachi HDS72202-A28A-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@2,0<br />
3. c8t3d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@3,0<br />
4. c8t4d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@4,0<br />
5. c8t5d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@5,0<br />
Specify disk (enter its number):</p></blockquote>
<p>Hard drives live under <strong>/dev/dsk</strong> on OpenSolaris. Compare this list against the output of zpool status and add the drives that are new to the system (not yet part of any storage pool).</p>
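<p>That comparison can be automated. A sketch: it assumes you have already saved the two lists (all disks, and disks appearing in zpool status) to files, since extracting disk names from format and zpool status output is release-specific parsing best done by eye or a small awk script of your own.</p>

```shell
#!/bin/sh
# Sketch: given a file listing every disk and a file listing disks
# already in pools, print the disks still free to add to a pool.
new_disks() {   # usage: new_disks all_disks_file pool_disks_file
    sort "$1" > /tmp/_all.$$
    sort "$2" > /tmp/_pool.$$
    # comm -23 keeps lines only in the first file, i.e. unused disks
    comm -23 /tmp/_all.$$ /tmp/_pool.$$
    rm -f /tmp/_all.$$ /tmp/_pool.$$
}
```

<p>With the listing above saved one disk name per line, and c8t0d0 already in rpool, this would report c8t1d0 through c8t5d0 as free.</p>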
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11</post-id>	</item>
	</channel>
</rss>
