<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>opensolaris &#8211; Giovanni F. Mazzeo De Santolo</title>
	<atom:link href="https://desantolo.com/tag/opensolaris/feed/" rel="self" type="application/rss+xml" />
	<link>https://desantolo.com</link>
	<description>That italian IT guy</description>
	<lastBuildDate>Mon, 09 Jan 2017 01:58:43 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
<site xmlns="com-wordpress:feed-additions:1">123042357</site>	<item>
		<title>Virtualization hypervisor and containers all in one</title>
		<link>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/</link>
					<comments>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/#comments</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 08 Jan 2017 10:01:26 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[Virtualization]]></category>
		<category><![CDATA[containers]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[openindiana]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[proxmox]]></category>
		<category><![CDATA[smartos]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">https://desantolo.com/?p=475</guid>

					<description><![CDATA[I&#8217;m a big fan of virtualization, the ability to run multiple platforms and operating systems (called guests) in a single server (called host) is probably one of the best computing technologies of the past 10 years. Personally, I have been &#8230; <a href="https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>I&#8217;m a big fan of virtualization: the ability to run multiple platforms and operating systems (called guests) on a single server (called the host) is probably one of the best computing technologies of the past 10 years.</p>
<p>Personally, I have been using virtualization since circa 2004. It all took off after 2006, when chip manufacturers started bundling virtualization technology into their processors (Intel VT-x or AMD-V). Much of the popularity of &#8220;cloud&#8221; computing can also be attributed to virtualization.</p>
<h3>In a container world&#8230;</h3>
<p>However, in the past couple of years a new technology has been making the rounds: the words &#8220;containers&#8221;, &#8220;docker&#8221; and &#8220;orchestration&#8221; have been picking up steam. They say that containers are changing the landscape for system administrators and application developers.</p>
<p>Claims that containers can be built and deployed in seconds, share a common storage layer, and can be resized in real time when you need more performance or capacity are really exciting, and I think now is the time for me to jump in and learn a thing or two about this technology while it&#8217;s hot and new.<span id="more-475"></span></p>
<h3>Time to ditch VMware ESXi for a hybrid hypervisor?</h3>
<p>You may remember my blog entry <a href="https://desantolo.com/2011/05/building-a-low-power-sandy-bridge-esxi-zfs-storage-array/">building a low-power Sandy Bridge ESXi server with ZFS</a> &#8211; now, 5 years later, it is time to find a new platform that will let me keep my legacy virtual machines (VMs) as well as host containers using Docker.</p>
<p>The process of finding a suitable replacement for ESXi may take a while and more than a single entry on my blog. This is the first entry of that journey.</p>
<p>Before replacing something that works with a new platform, I think it is good to point out the strengths and weaknesses of VMware ESXi (which has been my platform of choice for 6 years).</p>
<h4>Strengths of ESXi</h4>
<ul>
<li>awesome Windows GUI (the vSphere client) that lets you manage your hypervisor without needing console or SSH access</li>
<li>a web interface to manage it as well, if you <a href="https://labs.vmware.com/flings/esxi-embedded-host-client">install a plugin</a></li>
<li>virtual switch with VLAN support</li>
<li>support for PCI passthrough (Intel VT-d) allowing you to assign PCI devices to virtual guests</li>
</ul>
<h4>Weaknesses</h4>
<ul>
<li>does not support Docker containers (unless you create a virtual machine and run Docker inside it &#8211; but I prefer a single central platform if possible)</li>
<li>VMware <a href="http://pubs.vmware.com/Release_Notes/en/vsphere/65/vsphere-client-65-html5-functionality-support.html">continues to remove features from the free version</a> of ESXi &#8211; the vSphere client interface is no longer available in their latest release</li>
<li>nothing exciting has been released by VMware in the past 2 years (in terms of ESXi), and they push ESXi users toward paid licenses</li>
</ul>
<h3>The alternatives?</h3>
<p><strong>SmartOS</strong> is a fork of OpenIndiana/OpenSolaris; it seems to have a lot of great security features and Solaris features that I enjoy (you may have read of my love for the ZFS filesystem, which is native to SmartOS). Joyent has recently open-sourced their SmartDataCenter (&#8220;SDC&#8221;), which they are now calling Triton Enterprise.</p>
<p>What I like about it, beyond the fact that it is native Solaris and uses the ZFS filesystem for storage, is that it is a <strong>hybrid hypervisor</strong>: it can host both containers and VMs (using technology reminiscent of VirtualBox, which also came out of the Sun/Solaris world).</p>
<p>The downside of this platform seems to be the complexity needed to deploy containers with it. You need a &#8220;head node&#8221; to act as the brains of the platform, and the head node does a lot of critical things: it monitors the network and the other compute nodes (where you host your VMs/containers), and it also hosts the database for all the nodes. In dev mode you can force the head node to host VMs as well, but this is not recommended practice.</p>
<p>The web interface (SmartDataCenter) for managing your containers and VMs is also very rudimentary; there is no built-in console to your guests. You need to run a lot of commands in the head node&#8217;s shell, making JSON queries to grab the data you want, such as the VNC server address and port for your guests.</p>
<p>Honestly, I have not dug much deeper into SmartOS, but I probably should; it looks like an awesome project. I am sure it makes sense for people who want a platform for scalable container/hypervisor deployments, but given the complexity it does not look like a good choice for replacing my single virtualization server at home.</p>
<p><strong>Proxmox</strong> is another platform I am looking at; you may recall that 7 years ago I discovered Proxmox Virtual Environment and started using it in my lab. That was Proxmox VE 1.5, I think, and I recently discovered they have made a lot of strides in the right direction.</p>
<p>Just a few weeks ago they released their latest PVE 4.4, and they now support ZFS data pools (via zfsonlinux), not to mention that they have replaced OpenVZ with LXC (Linux containers). It may be worth downloading their latest release and checking out the platform again.</p>
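<p>To give a sense of the LXC workflow on Proxmox, here is a minimal sketch of creating a container from the PVE shell. This is an illustration only: the storage name <em>local-zfs</em>, the template file name, the VMID and the resource sizes are all placeholder assumptions, so adjust them for your own setup:</p>

```shell
# Sketch: download a template and spin up an LXC container on Proxmox VE 4.x.
# "local-zfs" is an assumed ZFS-backed storage; names/sizes are examples only.
pveam update                                  # refresh the container template index
pveam download local debian-8.0-standard_8.6-1_amd64.tar.gz

pct create 100 local:vztmpl/debian-8.0-standard_8.6-1_amd64.tar.gz \
    --hostname test-ct \
    --rootfs local-zfs:8 \
    --memory 512 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 100                                 # boot the container
pct enter 100                                 # get a shell inside it
```

The appeal versus a full VM is visible here: the root filesystem is just a ZFS dataset on the host, so creation and startup take seconds rather than minutes.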
<p>Other than Proxmox or SmartOS, I have not come across any other &#8216;hybrid&#8217; hypervisors. Please share in the comments if there is something else I should check out.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2017/01/virtualization-hypervisor-docker-containers-all-in-one/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">475</post-id>	</item>
		<item>
		<title>Checking for Hard drive READ and WRITE Cache (onboard) on Solaris</title>
		<link>https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/</link>
					<comments>https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Wed, 05 May 2010 15:09:45 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[cache]]></category>
		<category><![CDATA[hard drive]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[read]]></category>
		<category><![CDATA[read_cache]]></category>
		<category><![CDATA[write]]></category>
		<category><![CDATA[write_cache]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=14</guid>

					<description><![CDATA[To check for read and write cache for your hard drives do the following: Giovanni@server:~# format -e Searching for disks&#8230;done AVAILABLE DISK SELECTIONS: 0. c8t0d0 &#60;DEFAULT cyl 60797 alt 2 hd 255 sec 252&#62; /pci@0,0/pci15d9,d380@1f,2/disk@0,0 1. c8t1d0 &#60;ATA-Hitachi HDS72202-A3EA-1.82TB&#62; /pci@0,0/pci15d9,d380@1f,2/disk@1,0 &#8230; <a href="https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>To check the on-board read and write cache on your hard drives, do the following:</p>
<p>Giovanni@server:~# format -e<br />
Searching for disks&#8230;done<br />
AVAILABLE DISK SELECTIONS:<br />
0. c8t0d0 &lt;DEFAULT cyl 60797 alt 2 hd 255 sec 252&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@0,0<br />
1. c8t1d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@1,0<br />
2. c8t2d0 &lt;ATA-Hitachi HDS72202-A28A-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@2,0<br />
3. c8t3d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@3,0<br />
4. c8t4d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@4,0<br />
5. c8t5d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@5,0<br />
Specify disk (enter its number):</p>
<p>Select a drive; let&#8217;s pick number 5 from the list.</p>
<p>Specify disk (enter its number): 5<br />
selecting c8t5d0<br />
[disk formatted]<br />
/dev/dsk/c8t5d0s0 is part of active ZFS pool gpool. Please see zpool(1M).<br />
FORMAT MENU:<br />
disk       &#8211; select a disk<br />
type       &#8211; select (define) a disk type<br />
partition  &#8211; select (define) a partition table<br />
current    &#8211; describe the current disk<br />
format     &#8211; format and analyze the disk<br />
fdisk      &#8211; run the fdisk program<br />
repair     &#8211; repair a defective sector<br />
label      &#8211; write label to the disk<br />
analyze    &#8211; surface analysis<br />
defect     &#8211; defect list management<br />
backup     &#8211; search for backup labels<br />
verify     &#8211; read and display labels<br />
inquiry    &#8211; show vendor, product and revision<br />
scsi       &#8211; independent SCSI mode selects<br />
cache      &#8211; enable, disable or query SCSI disk cache<br />
volname    &#8211; set 8-character volume name<br />
!&lt;cmd&gt;     &#8211; execute &lt;cmd&gt;, then return<br />
quit<br />
format&gt;</p>
<p>Now let&#8217;s do the checking.</p>
<p>Enter &#8220;cache&#8221; to open the cache menu.</p>
<p>CACHE MENU:<br />
write_cache &#8211; display or modify write cache settings<br />
read_cache  &#8211; display or modify read cache settings<br />
!&lt;cmd&gt;      &#8211; execute &lt;cmd&gt;, then return<br />
quit<br />
cache&gt;</p>
<p>Type &#8220;write_cache&#8221; or &#8220;read_cache&#8221; depending on what you would like to see; let&#8217;s use write:</p>
<p>cache&gt; write_cache<br />
WRITE_CACHE MENU:<br />
display     &#8211; display current setting of write cache<br />
enable      &#8211; enable write cache<br />
disable     &#8211; disable write cache<br />
!&lt;cmd&gt;      &#8211; execute &lt;cmd&gt;, then return<br />
quit<br />
write_cache&gt; display<br />
Write Cache is enabled<br />
write_cache&gt;</p>
<p>The same steps work for read_cache, and for enabling or disabling either cache.</p>
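<p>If you have many drives, the same check can be scripted rather than walking the menus by hand for each disk. This is only a sketch: it assumes format(1M) accepts an expert-mode flag (-e), a disk name (-d) and a command file (-f), and it reuses the disk names from the listing above:</p>

```shell
# Sketch: query the write-cache state of several disks non-interactively.
# Assumes Solaris format(1M) with -e (expert mode), -d (disk), -f (command file).
cat > /tmp/cache.cmds <<'EOF'
cache
write_cache
display
quit
quit
EOF

for disk in c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0; do
    printf '%s: ' "$disk"
    format -e -d "$disk" -f /tmp/cache.cmds | grep 'Write Cache'
done
```

Each loop iteration feeds the same menu keystrokes to format and keeps only the "Write Cache is enabled/disabled" line, giving a one-line-per-disk summary.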
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/checking-for-hard-drive-read-and-write-cache-onboard-on-solaris/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">14</post-id>	</item>
		<item>
		<title>Setup Filebench on Solaris for benchmarking</title>
		<link>https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/</link>
					<comments>https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Mon, 03 May 2010 19:15:37 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[benchmark]]></category>
		<category><![CDATA[install]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[pkg]]></category>
		<category><![CDATA[zfs]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=9</guid>

					<description><![CDATA[Like any other newbie on Solaris, I didn&#8217;t know how to install the packages, I am used to yum or apt-get install but anyway on Solaris I did: Giovanni@server:~/Downloads/filebench-1.4.8# pkg install SUNWfilebench DOWNLOAD                                    PKGS       FILES     XFER (MB) Completed                                    1/1       60/60     &#8230; <a href="https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>Like any other newbie on Solaris, I didn&#8217;t know how to install packages; I am used to yum or apt-get. Anyway, on Solaris I did:</p>
<blockquote><p>Giovanni@server:~/Downloads/filebench-1.4.8# pkg install SUNWfilebench<br />
DOWNLOAD                                    PKGS       FILES     XFER (MB)<br />
Completed                                    1/1       60/60     0.32/0.32</p>
<p>PHASE                                        ACTIONS<br />
Install Phase                                  82/82<br />
Giovanni@server:~/Downloads/filebench-1.4.8#</p></blockquote>
<p>And it was installed <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Use <strong>pkg search</strong> to find packages.</p>
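<p>For completeness, the search step mentioned above looks roughly like this (a sketch; the output columns vary between pkg(5) releases):</p>

```shell
# Search the configured IPS repositories for filebench, then install it.
pkg search -r filebench        # -r also queries the remote repositories
pkg install SUNWfilebench      # install the package the search turned up
```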
<blockquote><p>Giovanni@server:/usr/benchmarks/filebench# bin/go_filebench<br />
FileBench Version 1.4.4<br />
filebench&gt; load varmail<br />
742: 3.707: Varmail Version 2.1 personality successfully loaded<br />
742: 3.707: Usage: set $dir=&lt;dir&gt;<br />
742: 3.707:        set $filesize=&lt;size&gt;    defaults to 16384<br />
742: 3.707:        set $nfiles=&lt;value&gt;     defaults to 1000<br />
742: 3.707:        set $nthreads=&lt;value&gt;   defaults to 16<br />
742: 3.707:        set $meaniosize=&lt;value&gt; defaults to 16384<br />
742: 3.707:        set $readiosize=&lt;size&gt;  defaults to 1048576<br />
742: 3.707:        set $meandirwidth=&lt;size&gt; defaults to 1000000<br />
742: 3.707: (sets mean dir width and dir depth is calculated as log (width, nfiles)<br />
742: 3.707:  dirdepth therefore defaults to dir depth of 1 as in postmark<br />
742: 3.707:  set $meandir lower to increase depth beyond 1 if desired)<br />
742: 3.707:<br />
742: 3.707:        run runtime (e.g. run 60)<br />
filebench&gt; set $dir=/gpool<br />
filebench&gt; run 60<br />
742: 27.078: Creating/pre-allocating files and filesets<br />
742: 27.081: Fileset bigfileset: 1000 files, 0 leafdirs avg dir = 1000000, avg depth = 0.5, mbytes=15<br />
742: 27.096: Removed any existing fileset bigfileset in 1 seconds<br />
742: 27.096: making tree for filset /gpool/bigfileset<br />
742: 27.096: Creating fileset bigfileset&#8230;<br />
742: 35.092: Preallocated 812 of 1000 of fileset bigfileset in 8 seconds<br />
742: 35.092: waiting for fileset pre-allocation to finish<br />
742: 35.092: Starting 1 filereader instances<br />
744: 36.102: Starting 16 filereaderthread threads<br />
742: 39.112: Running&#8230;<br />
742: 99.712: Run took 60 seconds&#8230;<br />
742: 99.713: Per-Operation Breakdown<br />
closefile4                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu<br />
readfile4                 449ops/s   7.0mb/s      0.0ms/op       19us/op-cpu<br />
openfile4                 449ops/s   0.0mb/s      0.0ms/op       18us/op-cpu<br />
closefile3                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu<br />
fsyncfile3                449ops/s   0.0mb/s     17.4ms/op       20us/op-cpu<br />
appendfilerand3           449ops/s   3.5mb/s      0.0ms/op       27us/op-cpu<br />
readfile3                 449ops/s   7.0mb/s      0.0ms/op       18us/op-cpu<br />
openfile3                 449ops/s   0.0mb/s      0.0ms/op       18us/op-cpu<br />
closefile2                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu<br />
fsyncfile2                449ops/s   0.0mb/s     17.9ms/op       17us/op-cpu<br />
appendfilerand2           449ops/s   3.5mb/s      0.0ms/op       23us/op-cpu<br />
createfile2               449ops/s   0.0mb/s      0.1ms/op       52us/op-cpu<br />
deletefile1               449ops/s   0.0mb/s      0.0ms/op       33us/op-cpu</p>
<p>742: 99.713:<br />
IO Summary:      353667 ops, 5836.1 ops/s, (898/898 r/w)  21.0mb/s,     78us cpu/op,   8.9ms latency<br />
742: 99.713: Shutting down processes<br />
filebench&gt;<br />
742: 110.144: Aborting&#8230;</p></blockquote>
<p>And with that, the benchmark run is done and things go back to normal.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/setup-filebench-on-solaris-for-benchmarking/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">12</post-id>	</item>
		<item>
		<title>Create a Storage Pool</title>
		<link>https://desantolo.com/2010/05/create-a-storage-pool/</link>
					<comments>https://desantolo.com/2010/05/create-a-storage-pool/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 02 May 2010 22:11:35 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[create]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[sata]]></category>
		<category><![CDATA[zfs]]></category>
		<category><![CDATA[zpool]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=6</guid>

					<description><![CDATA[This will create a pool named &#8220;gpool&#8221; using RAIDZ (raid5) with member drives  c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0 Giovanni@server:~# zpool create gpool raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0 Giovanni@server:~# zpool status pool: gpool state: ONLINE scrub: none requested config: NAME        STATE     &#8230; <a href="https://desantolo.com/2010/05/create-a-storage-pool/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>This will create a pool named &#8220;gpool&#8221; using RAIDZ (similar to RAID-5) with member drives c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0:</p>
<blockquote><p>Giovanni@server:~# zpool create gpool raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0<br />
Giovanni@server:~# zpool status<br />
pool: gpool<br />
state: ONLINE<br />
scrub: none requested<br />
config:</p>
<p>NAME        STATE     READ WRITE CKSUM<br />
gpool       ONLINE       0     0     0<br />
raidz1    ONLINE       0     0     0<br />
c8t1d0  ONLINE       0     0     0<br />
c8t2d0  ONLINE       0     0     0<br />
c8t3d0  ONLINE       0     0     0<br />
c8t4d0  ONLINE       0     0     0<br />
c8t5d0  ONLINE       0     0     0</p>
<p>errors: No known data errors</p>
<p>pool: rpool<br />
state: ONLINE<br />
scrub: none requested<br />
config:</p>
<p>NAME        STATE     READ WRITE CKSUM<br />
rpool       ONLINE       0     0     0<br />
c8t0d0s0  ONLINE       0     0     0</p>
<p>errors: No known data errors<br />
Giovanni@server:~#</p></blockquote>
<p>That&#8217;s it &#8211; the pool is online with no known data errors.</p>
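<p>With the pool online, you would normally carve it into datasets rather than writing to the pool root directly. A quick sketch (the dataset name <em>gpool/data</em> is just an example):</p>

```shell
# Create a dataset on the new pool and inspect capacity.
zfs create gpool/data
zfs set compression=on gpool/data   # optional: transparent on-disk compression
zpool list gpool                    # pool size / allocated / free
zfs list -r gpool                   # datasets and their mountpoints
```

Note that with RAIDZ across five drives, one drive&#8217;s worth of capacity goes to parity, so usable space is roughly four drives&#8217; worth.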
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/create-a-storage-pool/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6</post-id>	</item>
		<item>
		<title>How to view available SATA hard drives</title>
		<link>https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/</link>
					<comments>https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/#respond</comments>
		
		<dc:creator><![CDATA[Giovanni]]></dc:creator>
		<pubDate>Sun, 02 May 2010 21:41:27 +0000</pubDate>
				<category><![CDATA[Guides]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[create]]></category>
		<category><![CDATA[opensolaris]]></category>
		<category><![CDATA[sata]]></category>
		<category><![CDATA[zfs]]></category>
		<category><![CDATA[zpool]]></category>
		<guid isPermaLink="false">http://gioflux.wordpress.com/?p=3</guid>

					<description><![CDATA[You will be able to view hardware ID&#8217;s for hard drives using &#8216;format&#8217; Giovanni@server:~# format Searching for disks&#8230;done AVAILABLE DISK SELECTIONS: 0. c8t0d0 &#60;DEFAULT cyl 60797 alt 2 hd 255 sec 252&#62; /pci@0,0/pci15d9,d380@1f,2/disk@0,0 1. c8t1d0 &#60;ATA-Hitachi HDS72202-A3EA-1.82TB&#62; /pci@0,0/pci15d9,d380@1f,2/disk@1,0 2. c8t2d0 &#8230; <a href="https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
										<content:encoded><![CDATA[<p>You can view the device IDs for your hard drives using &#8216;format&#8217;:</p>
<blockquote><p>Giovanni@server:~# format<br />
Searching for disks&#8230;done<br />
AVAILABLE DISK SELECTIONS:<br />
0. c8t0d0 &lt;DEFAULT cyl 60797 alt 2 hd 255 sec 252&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@0,0<br />
1. c8t1d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@1,0<br />
2. c8t2d0 &lt;ATA-Hitachi HDS72202-A28A-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@2,0<br />
3. c8t3d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@3,0<br />
4. c8t4d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@4,0<br />
5. c8t5d0 &lt;ATA-Hitachi HDS72202-A3EA-1.82TB&gt;<br />
/pci@0,0/pci15d9,d380@1f,2/disk@5,0<br />
Specify disk (enter its number):</p></blockquote>
<p>Hard drive device nodes are located in <strong>/dev/dsk</strong> on OpenSolaris. Compare this list with your <strong>zpool status</strong> output and add the drives that are new to the system (not yet in any storage pool).</p>
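<p>A handy non-interactive variant, useful in scripts: when stdin is closed, format prints the selection list and exits instead of waiting at the prompt. A sketch, assuming Solaris format(1M):</p>

```shell
# Print the disk list and exit instead of prompting for a selection
format < /dev/null

# The matching device nodes live under /dev/dsk
ls /dev/dsk
```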
]]></content:encoded>
					
					<wfw:commentRss>https://desantolo.com/2010/05/how-to-view-available-sata-hard-drives/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">11</post-id>	</item>
	</channel>
</rss>
