Saturday, March 28, 2009

Testing/upgrading new versions of EON

Testing a new version of EON ZFS NAS from a previous USB/CF install is simple. The risk is minimal, and backing out to the previous working version is easy: simply boot the previous version. This should work with USB (tested), compact flash (tested) and virtual installs (untested). First, transfer the new eon-0.590-b110-64-cifs.iso to your storage pool, via a CIFS share, WinSCP, or sftp. Let's say we transferred it to /pool/eon-0.590-b110-64-cifs.iso. Then we mount the new image:
lofiadm -a /pool/eon-0.590-b110-64-cifs.iso /dev/lofi/1
mkdir -p /mnt/new
mount /dev/lofi/1 /mnt/new
Preserve your previous version:
cd /mnt/eon0/boot
mv x86.eon /pool/x86.eon.backup
tar -cvf - . | gzip > /pool/boot.tgz
Remove the old contents of /mnt/eon0/boot (you should still be in that directory, which will then be empty) and copy in the new version:
rm -rf amd64 grub platform
cd /mnt/new/boot
cp -pR * /mnt/eon0/boot
updimg.sh /mnt/eon0/x86.eon
The new contents of /mnt/eon0/boot should be amd64, grub, platform and the new x86.eon. Now, re-apply any custom changes you had in /mnt/eon0/boot/menu.lst. Also, do not run zpool or zfs upgrade until you are satisfied with the new version, as there is no way to go back to a previous zpool (currently v14) or zfs (currently v3) version. You can now reboot into the new EON ZFS NAS. From there, re-run setup and updimg.sh to re-ID your new version, or mount your previous version and transfer any customizations.
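Backing out uses the copies preserved above. A minimal sketch, assuming the same /pool and /mnt/eon0 paths:
cd /mnt/eon0/boot
rm -rf amd64 grub platform x86.eon
# restore the old boot contents saved in boot.tgz
gzip -dc /pool/boot.tgz | tar -xvf -
# put back the preserved boot image (cp keeps the backup around)
cp -p /pool/x86.eon.backup x86.eon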

Friday, March 27, 2009

EON 64-bit 0.59.0 based on SNV_110 is released!

Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, is released on Genunix! Many thanks to Genunix.org for hosting the downloads and serving the OpenSolaris community.

It is available in CIFS and Samba flavors:
EON 64-bit x86 CIFS ISO image version 0.59.0 based on snv_110
EON 64-bit x86 Samba ISO image version 0.59.0 based on snv_110
EON 32-bit x86 CIFS ISO image version 0.59.0 based on snv_110
EON 32-bit x86 Samba ISO image version 0.59.0 based on snv_110

New/Fix:
- install.sh should now work properly with vdi/vmware disks.
- image footprint/runtime is smaller and requires less RAM.

Tuesday, March 17, 2009

EON snv_109 is alive

Here is a beta preview of EON ZFS NAS based on snv_109. Yes, it is alive. There are two build bugs that I am trying to resolve: a core dump from rtc, and some trouble controlling manifest-import at boot time. I will try to resolve these and release this image as soon as possible. Note that with release snv_109 we get ACLs on CIFS shares. ACLs on shares bring better compatibility with the Microsoft implementation and allow more control over access than the CIFS server previously supported. The "shares" file /pool/fs/.zfs/shares is shown at the end of the video. I wanted to find out why multi-core CPU support is showing incorrectly (kstat, mpstat, psrinfo) for this release, but that may have to wait.
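Since the share ACLs live under .zfs/shares, the standard Solaris ACL tools should apply to them. A quick sketch (the pool, filesystem and share names here are made up):
ls -V /pool/fs/.zfs/shares
chmod A=everyone@:read_data:allow /pool/fs/.zfs/shares/myshare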

Tuesday, March 10, 2009

Customizing and optimizing EON

EON provides a simple, easy-to-maintain way to customize and optimize your storage. A legacy startup script, /etc/rc3.d/S99local, enables this: at boot it searches for /mnt/eon0/.exec and /mnt/eon0/.remove. These files reside on the USB drive or CF and let you automate and add your own commands at run level 3. They also let you reduce the RAM footprint of the image, leaving more for when ZFS gets hungry.
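At boot, S99local does roughly the following (a sketch of the behavior just described, not the actual script):
# erase each non-comment entry of .remove from the RAM image (wildcards expand)
[ -f /mnt/eon0/.remove ] && grep -v "^#" /mnt/eon0/.remove | while read f; do rm -rf $f; done
# run each non-comment line of .exec as a command
[ -f /mnt/eon0/.exec ] && grep -v "^#" /mnt/eon0/.exec | sh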

This was done so it is not necessary to run /usr/bin/updimg.sh every time a simple change is needed, for example adding or modifying a script. It also allows the image to include features, like NFS, that not everyone may use. For someone who never serves NFS, it would be a waste to keep the /usr/lib/nfs binaries around; to give control back to that person, they can simply add /usr/lib/nfs to /mnt/eon0/.remove and all those binaries will be removed at boot, freeing the memory for other use. The same goes for other binaries (/usr/sfw/sbin/swat) and kernel drivers not applicable to your needs or hardware.
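For example, dropping the NFS binaries just mentioned is a one-line entry:
echo "/usr/lib/nfs" >> /mnt/eon0/.remove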

The thing to remember when making entries in /mnt/eon0/.exec is that each should be a non-interactive command. Commented entries are ignored. Excerpt listing of .exec:
/usr/sbin/swap -a /dev/zvol/dsk/abyss/swap
/usr/sbin/ucodeadm -u /platform/i86pc/ucode/intel-ucode.txt
/usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 30000
For /mnt/eon0/.remove entries you can go wild and experiment, because you are only erasing things from RAM: if you remove something you realize you need, simply delete the entry from .remove and reboot, and all should be back. I have included a default set, and commented out some entries that I have found safe to remove in some cases. Excerpt listing of .remove:
/platform/i86pc/ucode/intel-ucode.txt
#/kernel/crypto/aes
#/kernel/crypto/arcfour
#/kernel/crypto/blowfish
#/kernel/crypto/des
#/kernel/crypto/ecc
#/kernel/crypto/rsa
#/kernel/crypto/sha2
/etc/svc/repository-boot*
/kernel/drv/amd64/elxl
/kernel/drv/amd64/iprb
#/kernel/drv/amd64/kmdb
#/kernel/drv/amd64/intel_nb5000
/kernel/drv/power*
/kernel/drv/amd64/power

Tuesday, March 3, 2009

Benchmarking your EON ZFS NAS

Being able to test the performance of your storage unit is always important. Creating real-world application loads and recording accurate statistics is not easy. Or is it? Sun has a great tool for this, called Filebench, a framework for simulating application loads on file systems. So let's use Filebench to test our EON ZFS NAS. There is a wide range of tests that can be performed, and a detailed howto (see the example varmail run) is here. Download filebench_opensolaris-1.3.4_x86_pkg.tar.gz here. Unpack it on your zpool:
gzip -dc filebench_opensolaris-1.3.4_x86_pkg.tar.gz | tar -xf -
Create the necessary links:
(cd /usr ; ln -s ../ZPOOL/filebench/reloc  benchmarks)
(cd /opt ; ln -s ../ZPOOL/filebench/reloc/filebench filebench)
That's it. We are ready to test.
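From here the canned workloads can be kicked off with the package's profile runner; something like the following (the profile name is an assumption, check /opt/filebench/config for what ships with 1.3.4):
cd /opt/filebench/bin
./filebench filemacro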

Testing my PIII Dell 4100, 1GHz with 512MB RAM, 2GB swap and a 3x36GB raidz1 pool named royal, produced the following:
::::::::::::::
copyfiles.stats
::::::::::::::
Flowop totals:
closefile2 997ops/s 0.0mb/s 0.0ms/op 11us/op-cpu
closefile1 997ops/s 0.0mb/s 0.0ms/op 19us/op-cpu
writefile2 997ops/s 15.0mb/s 0.2ms/op 230us/op-cpu
createfile2 997ops/s 0.0mb/s 0.3ms/op 304us/op-cpu
readfile1 998ops/s 15.0mb/s 0.1ms/op 109us/op-cpu
openfile1 998ops/s 0.0mb/s 0.1ms/op 113us/op-cpu

IO Summary: 6002 ops 5983.2 ops/s, 998/997 r/w 29.9mb/s, 4373uscpu/op
::::::::::::::
createfiles.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
closefile1 189ops/s 0.0mb/s 0.6ms/op 19us/op-cpu
writefile1 189ops/s 2.9mb/s 34.1ms/op 229us/op-cpu
createfile1 189ops/s 0.0mb/s 44.1ms/op 367us/op-cpu

IO Summary: 149974 ops 566.2 ops/s, 0/189 r/w 2.9mb/s, 49326uscpu/op
::::::::::::::
deletefiles.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
deletefile1 2725ops/s 0.0mb/s 3.9ms/op 140us/op-cpu

IO Summary: 50000 ops 2725.3 ops/s, 0/0 r/w 0.0mb/s, 0uscpu/op
::::::::::::::
mongo.stats
::::::::::::::
Flowop totals:
deletefile1 499ops/s 0.0mb/s 0.2ms/op 204us/op-cpu
closefile2 500ops/s 0.0mb/s 0.0ms/op 13us/op-cpu
readfile1 500ops/s 7.0mb/s 0.1ms/op 115us/op-cpu
openfile2 500ops/s 0.0mb/s 0.1ms/op 105us/op-cpu
closefile1 500ops/s 0.0mb/s 0.0ms/op 18us/op-cpu
appendfilerand1 500ops/s 4.0mb/s 0.3ms/op 292us/op-cpu
openfile1 500ops/s 0.0mb/s 0.1ms/op 84us/op-cpu

IO Summary: 7006 ops 3496.4 ops/s, 500/500 r/w 11.0mb/s, 4771uscpu/op
::::::::::::::
multistreamread.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread4 2ops/s 1.8mb/s 455.4ms/op 13636us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread3 2ops/s 2.0mb/s 428.7ms/op 24988us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread2 2ops/s 1.7mb/s 473.3ms/op 27942us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread1 1ops/s 1.4mb/s 556.6ms/op 27728us/op-cpu

IO Summary: 83 ops 7.2 ops/s, 7/0 r/w 6.9mb/s, 1332321uscpu/op
::::::::::::::
multistreamreaddirect.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread4 3ops/s 2.7mb/s 348.3ms/op 16887us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread3 3ops/s 3.2mb/s 269.0ms/op 20442us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread2 3ops/s 2.6mb/s 270.5ms/op 18601us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread1 3ops/s 2.9mb/s 303.6ms/op 23271us/op-cpu

IO Summary: 128 ops 11.8 ops/s, 12/0 r/w 11.4mb/s, 825365uscpu/op
::::::::::::::
multistreamwrite.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite4 6ops/s 5.9mb/s 160.7ms/op 8830us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite3 5ops/s 5.4mb/s 174.0ms/op 8889us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite2 6ops/s 6.0mb/s 157.6ms/op 8765us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite1 6ops/s 6.2mb/s 150.0ms/op 9035us/op-cpu

IO Summary: 248 ops 23.8 ops/s, 0/24 r/w 23.4mb/s, 332054uscpu/op
::::::::::::::
multistreamwritedirect.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite4 6ops/s 5.4mb/s 170.1ms/op 9011us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite3 6ops/s 5.5mb/s 167.2ms/op 8884us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite2 6ops/s 6.2mb/s 148.6ms/op 8877us/op-cpu
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite1 6ops/s 5.4mb/s 167.6ms/op 9260us/op-cpu

IO Summary: 249 ops 22.9 ops/s, 0/23 r/w 22.5mb/s, 347543uscpu/op
::::::::::::::
randomread.stats
::::::::::::::
Flowop totals:
rand-rate 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
rand-read1 11269ops/s 88.0mb/s 0.1ms/op 67us/op-cpu

IO Summary: 112852 ops 11269.1 ops/s, 11269/0 r/w 88.0mb/s, 863uscpu/op
::::::::::::::
randomwrite.stats
::::::::::::::
Flowop totals:
rand-rate 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
rand-write1 8569ops/s 66.9mb/s 0.1ms/op 95us/op-cpu

IO Summary: 85813 ops 8569.1 ops/s, 0/8569 r/w 66.9mb/s, 1139uscpu/op
::::::::::::::
singlestreamread.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqread 45ops/s 45.1mb/s 21.3ms/op 7863us/op-cpu

IO Summary: 472 ops 45.2 ops/s, 45/0 r/w 45.1mb/s, 216977uscpu/op
::::::::::::::
singlestreamwrite.stats
::::::::::::::
Flowop totals:
limit 0ops/s 0.0mb/s 0.0ms/op 0us/op-cpu
seqwrite 25ops/s 25.2mb/s 36.5ms/op 8801us/op-cpu

IO Summary: 280 ops 25.3 ops/s, 0/25 r/w 25.2mb/s, 341069uscpu/op