
Checking for Hard drive READ and WRITE Cache (onboard) on Solaris

To check the read and write cache settings for your hard drives, do the following:

Giovanni@server:~# format -e
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 252>
/pci@0,0/pci15d9,d380@1f,2/disk@0,0
1. c8t1d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@1,0
2. c8t2d0 <ATA-Hitachi HDS72202-A28A-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@2,0
3. c8t3d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@3,0
4. c8t4d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@4,0
5. c8t5d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@5,0
Specify disk (enter its number):

Select a drive; let’s pick 5 from the list.

Specify disk (enter its number): 5
selecting c8t5d0
[disk formatted]
/dev/dsk/c8t5d0s0 is part of active ZFS pool gpool. Please see zpool(1M).
FORMAT MENU:
disk       - select a disk
type       - select (define) a disk type
partition  - select (define) a partition table
current    - describe the current disk
format     - format and analyze the disk
fdisk      - run the fdisk program
repair     - repair a defective sector
label      - write label to the disk
analyze    - surface analysis
defect     - defect list management
backup     - search for backup labels
verify     - read and display labels
inquiry    - show vendor, product and revision
scsi       - independent SCSI mode selects
cache      - enable, disable or query SCSI disk cache
volname    - set 8-character volume name
!<cmd>     - execute <cmd>, then return
quit
format>

Now let’s check the cache settings.

Enter “cache” to open the cache menu.

CACHE MENU:
write_cache - display or modify write cache settings
read_cache  - display or modify read cache settings
!<cmd>      - execute <cmd>, then return
quit
cache>

Type “write_cache” or “read_cache” depending on which setting you would like to see; let’s use write:

cache> write_cache
WRITE_CACHE MENU:
display     - display current setting of write cache
enable      - enable write cache
disable     - disable write cache
!<cmd>      - execute <cmd>, then return
quit
write_cache> display
Write Cache is enabled
write_cache>

The read_cache menu works the same way, and the enable and disable commands let you change either setting.
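
If you’d rather not walk the menus by hand, format can be pointed at a disk with -d and fed the same commands on stdin. This is only a sketch (whether your format build accepts piped commands, and the exact number of quits to back out of the menus, may vary by Solaris release):

Giovanni@server:~# format -e -d c8t5d0 <<'EOF'
cache
write_cache
display
quit
quit
quit
EOF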

Set up Filebench on Solaris for benchmarking

Like any other Solaris newbie, I didn’t know how to install packages; I’m used to yum or apt-get install. On Solaris I did:

Giovanni@server:~/Downloads/filebench-1.4.8# pkg install SUNWfilebench
DOWNLOAD                                    PKGS       FILES     XFER (MB)
Completed                                    1/1       60/60     0.32/0.32

PHASE                                        ACTIONS
Install Phase                                  82/82
Giovanni@server:~/Downloads/filebench-1.4.8#

and it was installed 🙂 Use pkg search to look up package names.
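
For example, to find the Filebench package name before installing, something like this should work (a sketch; -r queries the configured remote publisher):

Giovanni@server:~# pkg search -r filebench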

Giovanni@server:/usr/benchmarks/filebench# bin/go_filebench
FileBench Version 1.4.4
filebench> load varmail
742: 3.707: Varmail Version 2.1 personality successfully loaded
742: 3.707: Usage: set $dir=<dir>
742: 3.707:        set $filesize=<size>    defaults to 16384
742: 3.707:        set $nfiles=<value>     defaults to 1000
742: 3.707:        set $nthreads=<value>   defaults to 16
742: 3.707:        set $meaniosize=<value> defaults to 16384
742: 3.707:        set $readiosize=<size>  defaults to 1048576
742: 3.707:        set $meandirwidth=<size> defaults to 1000000
742: 3.707: (sets mean dir width and dir depth is calculated as log (width, nfiles)
742: 3.707:  dirdepth therefore defaults to dir depth of 1 as in postmark
742: 3.707:  set $meandir lower to increase depth beyond 1 if desired)
742: 3.707:
742: 3.707:        run runtime (e.g. run 60)
filebench> set $dir=/gpool
filebench> run 60
742: 27.078: Creating/pre-allocating files and filesets
742: 27.081: Fileset bigfileset: 1000 files, 0 leafdirs avg dir = 1000000, avg depth = 0.5, mbytes=15
742: 27.096: Removed any existing fileset bigfileset in 1 seconds
742: 27.096: making tree for filset /gpool/bigfileset
742: 27.096: Creating fileset bigfileset…
742: 35.092: Preallocated 812 of 1000 of fileset bigfileset in 8 seconds
742: 35.092: waiting for fileset pre-allocation to finish
742: 35.092: Starting 1 filereader instances
744: 36.102: Starting 16 filereaderthread threads
742: 39.112: Running…
742: 99.712: Run took 60 seconds…
742: 99.713: Per-Operation Breakdown
closefile4                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu
readfile4                 449ops/s   7.0mb/s      0.0ms/op       19us/op-cpu
openfile4                 449ops/s   0.0mb/s      0.0ms/op       18us/op-cpu
closefile3                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu
fsyncfile3                449ops/s   0.0mb/s     17.4ms/op       20us/op-cpu
appendfilerand3           449ops/s   3.5mb/s      0.0ms/op       27us/op-cpu
readfile3                 449ops/s   7.0mb/s      0.0ms/op       18us/op-cpu
openfile3                 449ops/s   0.0mb/s      0.0ms/op       18us/op-cpu
closefile2                449ops/s   0.0mb/s      0.0ms/op        3us/op-cpu
fsyncfile2                449ops/s   0.0mb/s     17.9ms/op       17us/op-cpu
appendfilerand2           449ops/s   3.5mb/s      0.0ms/op       23us/op-cpu
createfile2               449ops/s   0.0mb/s      0.1ms/op       52us/op-cpu
deletefile1               449ops/s   0.0mb/s      0.0ms/op       33us/op-cpu

742: 99.713:
IO Summary:      353667 ops, 5836.1 ops/s, (898/898 r/w)  21.0mb/s,     78us cpu/op,   8.9ms latency
742: 99.713: Shutting down processes
filebench>
742: 110.144: Aborting…
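
If you want to scale the workload up, the tunables listed in the usage output above can be set before running; the values here are just illustrative:

filebench> set $dir=/gpool
filebench> set $nfiles=5000
filebench> set $nthreads=32
filebench> run 60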

Quit filebench to get back to the normal shell prompt.

Create a Storage Pool

This will create a pool named “gpool” using RAIDZ (similar to RAID 5) with member drives c8t1d0, c8t2d0, c8t3d0, c8t4d0, and c8t5d0:

Giovanni@server:~# zpool create gpool raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0
Giovanni@server:~# zpool status
pool: gpool
state: ONLINE
scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
gpool       ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    c8t1d0  ONLINE       0     0     0
    c8t2d0  ONLINE       0     0     0
    c8t3d0  ONLINE       0     0     0
    c8t4d0  ONLINE       0     0     0
    c8t5d0  ONLINE       0     0     0

errors: No known data errors

pool: rpool
state: ONLINE
scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  c8t0d0s0  ONLINE       0     0     0

errors: No known data errors
Giovanni@server:~#

The new pool shows up alongside the existing rpool.
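
If you want double parity instead, raidz2 takes the same form (just a sketch reusing the same drive names; it gives up one more drive’s worth of capacity for parity), and zpool list shows the resulting size:

Giovanni@server:~# zpool create gpool raidz2 c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0
Giovanni@server:~# zpool list gpool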

How to view available SATA hard drives

You can view the device IDs for your hard drives using ‘format’:

Giovanni@server:~# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <DEFAULT cyl 60797 alt 2 hd 255 sec 252>
/pci@0,0/pci15d9,d380@1f,2/disk@0,0
1. c8t1d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@1,0
2. c8t2d0 <ATA-Hitachi HDS72202-A28A-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@2,0
3. c8t3d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@3,0
4. c8t4d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@4,0
5. c8t5d0 <ATA-Hitachi HDS72202-A3EA-1.82TB>
/pci@0,0/pci15d9,d380@1f,2/disk@5,0
Specify disk (enter its number):

Hard drive device nodes live under /dev/dsk on OpenSolaris. Compare this list against zpool status and add the drives that are new to the system (not yet in any storage pool).
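
A quick way to eyeball the difference (a sketch, assuming your disk names all start with c8t as above): list the disks format sees non-interactively, list the ones already in pools, and add whatever only appears in the first list.

Giovanni@server:~# echo | format | grep c8t
Giovanni@server:~# zpool status | grep c8t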