Monday, October 19, 2009

EON ZFS Storage 0.59.4 based on snv_124 released!

Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been released on Genunix! Many thanks to Genunix.org for hosting the downloads and serving the OpenSolaris community.

EON ZFS storage is available in 32- and 64-bit, CIFS and Samba versions:
EON 64-bit x86 CIFS ISO image version 0.59.4 based on snv_124
EON 64-bit x86 Samba ISO image version 0.59.4 based on snv_124
EON 32-bit x86 CIFS ISO image version 0.59.4 based on snv_124
EON 32-bit x86 Samba ISO image version 0.59.4 based on snv_124
New/Changes/Fixes:
- initialization of ntpd and nscd at boot time moved to /mnt/eon0/.exec
- added /mnt/eon0/.disable for the K99local stop, for a cleaner shutdown
- added /mnt/eon0/.purge to allow removing drivers and binaries not needed by your image
- new version of install.sh: fixes a bug with virtual disks and multiple runs, and improves error checking of stages
- new transporter.sh CLI to automate upgrades, backups or downgrades to backed-up versions
- EON rebooting at grub (since snv_122) in ESXi, Fusion and various versions of VMware Workstation. This is related to bug 6820576. Workaround: at grub, press e and append "-B disable-pcieb=true" to the end of the kernel line, as in the example below.
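
For example (a sketch only; the exact kernel path varies by image and release), the edited grub kernel line would look something like:

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B disable-pcieb=true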

21 comments:

Unknown said...

Thanks a lot for this new release!
Not sure if this bug was fixed in this release, but I'm experiencing this strange and annoying behavior running the previous release of EON.

Here is what happens:

[root@storage ~]# updimg.sh /mnt/eon0/boot/x86.eon
Updating files in /mnt/eon0/.backup to x86.eon
backup in /mnt/eon0/boot/x86.eon.1
/mnt/eon0/.backup: OK
gzcat /mnt/eon0/boot/x86.eon > /tmp/x86.22345
gzcat: stdout: No space left on device
[root@storage ~]# df -h /tmp
Filesystem size used avail capacity Mounted on
swap 218M 184K 218M 1% /tmp
[root@storage ~]# umount -f /mnt/upd/
[root@storage ~]# lofiadm -d /dev/lofi/1
[root@storage ~]# rm -rf /tmp/x86.22345


[root@storage ~]# updimg.sh /mnt/eon0/boot/x86.eon
Updating files in /mnt/eon0/.backup to x86.eon
backup in /mnt/eon0/boot/x86.eon.1
/mnt/eon0/.backup: OK
gzcat /mnt/eon0/boot/x86.eon > /tmp/x86.22506
gzcat: stdout: No space left on device
[root@storage ~]# umount -f /mnt/upd/
[root@storage ~]# lofiadm -d /dev/lofi/1
[root@storage ~]# rm -rf /tmp/x86.22506

[root@storage ~]# updimg.sh /mnt/eon0/boot/x86.eon
Updating files in /mnt/eon0/.backup to x86.eon
backup in /mnt/eon0/boot/x86.eon.1
/mnt/eon0/.backup: OK
gzcat /mnt/eon0/boot/x86.eon > /tmp/x86.22662
gzcat: stdout: No space left on device
[root@storage ~]# umount -f /mnt/upd/
[root@storage ~]# lofiadm -d /dev/lofi/1
[root@storage ~]# rm -rf /tmp/x86.22662
[root@storage ~]# df -h /tmp
Filesystem size used avail capacity Mounted on
swap 432M 184K 432M 1% /tmp

[root@storage ~]# updimg.sh /mnt/eon0/boot/x86.eon
Updating files in /mnt/eon0/.backup to x86.eon
backup in /mnt/eon0/boot/x86.eon.1
/mnt/eon0/.backup: OK
gzcat /mnt/eon0/boot/x86.eon > /tmp/x86.22819
lofiadm -a /tmp/x86.22819 /dev/lofi/1
mounting ... /dev/lofi/1 /mnt/upd
copying /etc/svc/repository.db
Press enter to continue after adding drivers
umounting ... /mnt/upd
lofiadm -d /dev/lofi/1
mv -f /mnt/eon0/boot/x86.eon.0 /mnt/eon0/boot/x86.eon.1
mv -f /mnt/eon0/boot/x86.eon /mnt/eon0/boot/x86.eon.0
gzip -f -9 -c /tmp/x86.22819 > /mnt/eon0/boot/x86.eon
/mnt/eon0/boot/x86.eon: OK

Not sure if that's exclusively an EON bug or a generic OpenSolaris bug.

Thanks,
Dmitry

Andre Lue said...

dmitry,

It looks like a case of running out of memory (RAM) while the x86.eon image is being unpacked.

Kindly paste me the output of:
swap -sh
lgrpinfo
ls -al /mnt/eon0/boot/x86.eon
and also, if you have it when the error occurs:
ls -al /tmp/x86.eon.xxxx

Running updimg.sh with ZFS swap present should solve the problem.
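
For example, a minimal sketch of adding a 4GB ZFS swap volume (the pool name tank is an illustration; substitute your own pool and size):

zfs create -V 4G tank/swap
swap -a /dev/zvol/dsk/tank/swap
swap -sh

Since EON runs from RAM, one option is to add the swap -a line to /mnt/eon0/.exec so it runs on each boot.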

Denis J. Cirulis said...

Hi, Andre. Something strange with 0.59.4. I create a large raidz2 pool like this: zpool create -f tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 spare c0t7d0

The pool is usable, but when I reboot the appliance, zpool list shows "no pools available". What am I doing wrong?

Andre Lue said...

Denis,

If the hostid is preserved this shouldn't be needed, but if the hostids match and the pool still isn't automounting, the easy workaround is to uncomment the zpool import -f -a line in /mnt/eon0/.exec, shown below.

I'm still trying to figure out why the pool sometimes does not get mounted even when the system hostid and zpool hostid match. I suspect this is more of an OpenSolaris thing to fix, though.
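
For example, the uncommented line in /mnt/eon0/.exec should read (note it must include the word "import"):

/usr/sbin/zpool import -f -a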

Unknown said...

Hi Andre,

Thanks for quick response.

Here is the output:
[root@storage ~]# swap -sh
total: 41M allocated + 14M reserved = 55M used, 165M available
[root@storage ~]# lgrpinfo
lgroup 0 (root):
Children: none
CPUs: 0 1
Memory: installed 4.0G, allocated 3.8G, free 184M
Lgroup resources: 0 (CPU); 0 (memory)
Load: 0.00218
Latency: 0
[root@storage ~]# ls -al /mnt/eon0/boot/x86.eon
-r--r--r-- 1 root root 101476682 Oct 20 11:48 /mnt/eon0/boot/x86.eon

When it happens next time, I'll post the output of ls -al /tmp/x86.eon.xxxx.

Is it happening mainly because of low memory (only 4 GB) and ZFS aggressively consuming as much as it can? Is there any way to reserve /tmp space to be, let's say, 400 MB?
Or is having swap on a ZFS pool a better solution, in your opinion?

Thanks,
Dmitry

Andre Lue said...

dmitry,

4GB is plenty. Is the storage under heavy use when this occurs? Does the "No space left" error occur intermittently (meaning it succeeds at times without you doing anything different)? You are using the 64-bit Samba image, correct?

Some ZFS swap is definitely recommended for better performance. 4GB or larger in your case.

Denis J. Cirulis said...

Andre,
how can I set the hostid?
My /etc/hostid contains:

# DO NOT EDIT
"_I__gg772f"

And if I run the hostid command I get:

0088ffa7

Andre Lue said...

Denis,

hostid tells you the current id. It is dynamically set on each boot unless you preserve it by running setup.

To see the pool hostid, run zdb -v and convert the hostid="8 digit decimal" value to hex, as sketched below. This should match the output of hostid for automounting, but there are cases where it matches and automount still does not occur. In that case, simply use zpool import -f -a in /mnt/eon0/.exec.
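
For example, a sketch of the check (the decimal value is hypothetical, chosen to correspond to the hex hostid 0088ffa7 above; exact zdb output formatting may differ):

[root@storage ~]# zdb -v | grep hostid
hostid=8978343
[root@storage ~]# printf '%x\n' 8978343
88ffa7
[root@storage ~]# hostid
0088ffa7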

Unknown said...

It occurs every time I run the updimg.sh script. For some reason, /tmp space gets reduced over time to less than 100MB. I was assuming it's because ZFS aggressively allocates RAM for its cache. Yes, I use the 64-bit Samba version.
I'll set up ZFS swap when I have a chance, so that it does not happen any more.
I did not have the same issue with any previous version, though.

Thanks again,
Dmitry

Andre Lue said...

dmitry,

Hmmm, that sounds weird, or like a memory leak. Can you run the following and tell me if there are any cores?
du -ak / | grep core

Feel free to pastebin the output or start a thread in the OpenSolaris forums with more details on the problem, or steps to recreate what you're doing.

dimsoft said...

Is a web interface planned, like the one in the Sun Open Storage 7000?

Andre Lue said...

dimsoft,

There are some works in progress:
napp-it
webmin (just needs a ZFS module)

but nothing on a scale comparable to what's in the 7000, aka FishWorks or Amber Road.

Unknown said...

Thanks for your great work here. I have a little problem, maybe you can help:

I'm trying to set my local timezone: I edited TZ=xxx in /etc/TIMEZONE, then added this file to /mnt/eon0/.backup and ran updimg.sh, but after a restart the timezone is still the old one and /etc/TIMEZONE contains the old TZ value again. When I edit the file and run
. /etc/TIMEZONE
export TZ
the timezone is set correctly for the current user/session.

Also, is it normal that I get an IP via DHCP after every restart, even if I set a manual IP via /usr/bin/setup?

I'm sure it's a mistake I'm making somewhere; I'm quite new to Solaris, so please be gentle :-)

dimsoft said...

Will you show how to create an iSCSI target?

Andre Lue said...

tralafiti,

Based on your description you did everything right, but it doesn't sound like /etc/TIMEZONE was preserved in the new image, since it is unchanged. Try adding /etc/default/init, to which /etc/TIMEZONE is symlinked, to .backup; a sketch of the full fix follows below.

Also, try this:
echo $TZ (should say GMT by default)
then try changing it at the CLI level and tell me how that fares:
export TZ=your_TZ
date
Observe that the time is correct for whichever TZ you set (example: TZ=US/Pacific).
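
A sketch of the full fix (assuming /mnt/eon0/.backup is the newline-separated list of files updimg.sh preserves in the image):

echo /etc/default/init >> /mnt/eon0/.backup
updimg.sh /mnt/eon0/boot/x86.eon

After a reboot, /etc/TIMEZONE (a symlink to /etc/default/init) should keep your TZ setting.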

Unknown said...

Adding /etc/default/init to .backup did the job, thank you!

Unknown said...

My zpool automounting stopped working all of a sudden. The hostid matches on the host and the zpool, and I even destroyed and recreated the zpool to no avail. Adding "zpool import -f -a" to "/mnt/eon0/.exec" works, but I would prefer to know why the automounter fails. Is this problem specific to snv_124?

Cheers,
Patrik

Andre Lue said...

Patrik,

Since your hostid matches and it does not automount, the zpool import -f -a is the current workaround. I haven't been able to get an answer on why this happens.

geoff matters said...

Automount also fails for me, although the hostids from "hostid" and "zdb -v" match.

A note: the commented-out command in .exec is "/usr/sbin/zpool -a", missing the "import".

In my case "-f" is not necessary (no hostid mismatch), and I think it best in this case to NOT include -f in .exec... should a circumstance arise in which something requires forcing, I would rather have it fail and require intervention than have it forced automatically.

mikeathome said...

Andre,
I updated my 0.58.9_snv104 NAS to 0.59.4_snv124. I imported and upgraded the zpool; it worked, BUT it is not persistent: after a reboot the pool is gone and needs to be imported again.

I tried updimg.sh; it didn't work.

P.S.
I had to do a FRESH install, since the USB stick failed (I could not follow your upgrade procedure).

Andre Lue said...

mike,

In snv_124 the zvolinit devices-local service was deprecated, so to deal with this I added zpool import -a to /mnt/eon0/.exec.

However, there was a typo, as mentioned by geoff above:
"/usr/sbin/zpool -a" is missing the "import"

so just make sure the line reads
/usr/sbin/zpool import -a

This is fixed in version 0.59.5, which is waiting to be linked on Genunix.org.