Thursday, February 5, 2009

FreeBSD ZFS Running Strong, Rocking Hard

I decided about a month ago to back up my current Ubuntu-based file server and rebuild it with ZFS and raidz. Tonight, I finally got around to doing it. If you haven't heard of ZFS or raidz, be ready to have your mind blown.

I've been a FreeBSD user on and off since about 2.6. Initially, I only chose it because I wanted the help of all of my FreeBSD guru buddies on IRC (I really should drop in and say hi to those guys, it's been too long). It really grew on me, though. I feel extremely comfortable with its kernel, filesystem layout, and ports system.

That's why I was so happy to hear that it supported ZFS out of the box in 7.0. However, after a bit of research and reading about all the tuning necessary and how out of date the FreeBSD 7.1 branch of ZFS was, I thought I'd go to the source instead. I decided to try OpenSolaris.

Unfortunately, OpenSolaris 2008.11 refused to boot. It would get to the copyright notice and just lock up solid. I messed around with it and tried some kernel flags but nothing seemed to help. In the end I went back to my old friend FreeBSD.

The last version I used was 5.0, as a file server in my dorm room. One of the best things about FreeBSD is that it has a winning formula, and it hasn't changed much in all the years I've used it. I just picked it up and everything was right where I remembered it being. Microsoft could learn from this example of strong consistency (cough, Office 2007, Vista, Windows 7, cough).

The only issue I ran into was a lack of support for my integrated NIC. I didn't feel too bad about that, though. The server is running on an old ASRock 939Dual-SATA2 motherboard with a Realtek RTL8201CL onboard, which is pretty much trash anyway. So I went over to my box of PCI cards, grabbed an old Intel 82558 Pro NIC, and off I went.

After that it was a hop, skip, and a jump to getting ZFS going. I added the standard lines to my loader.conf to tune for conservative memory usage and sprinkled in vfs.zfs.vdev.cache.size="5M" to compensate for my meager one gigabyte of RAM. Then I simply added zfs_enable="YES" to my rc.conf and I was off. One command and less than a second later, it was finished:
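For reference, the tuning looked roughly like this. Apart from the vdev cache line, the specific values are the typical low-memory ZFS advice from the FreeBSD wiki of that era, not necessarily my exact numbers, so treat this as a sketch:

```
# /boot/loader.conf -- conservative ZFS tuning for a 1 GB box
# (kmem/ARC values are period-typical suggestions, adjust to taste)
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="128M"
vfs.zfs.vdev.cache.size="5M"
```

```
# /etc/rc.conf -- start ZFS (mounts pools, shares, etc.) at boot
zfs_enable="YES"
```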

# zpool create pool raidz2 ad4 ad6 ad8 ad10 ad12 ad16

I had a full 2TB, six-drive, double-parity raidz2 array ready to go. Wow.
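The capacity math is simple: raidz2 spends two drives' worth of space on parity, and the rest is usable. A quick sketch (assuming 500 GB drives, which the post doesn't state but which the 2TB total implies):

```python
# Rough usable capacity of a raidz2 vdev: two drives' worth of space
# goes to parity; the rest is usable (ignoring metadata overhead).
def raidz2_usable_gb(drive_count: int, drive_size_gb: int) -> int:
    return (drive_count - 2) * drive_size_gb

# Six 500 GB drives (assumed size) -> 2000 GB, i.e. about 2 TB usable.
print(raidz2_usable_gb(6, 500))
```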

For an even bigger shock, take a walk down memory lane with me and check out what I had to go through to build my RAID-5 array in FreeBSD 5.0. What a pain in the ass that setup was. The funniest part is that it doesn't even take into account how software RAID in 5.0 would randomly crash, or how the RAID service had to be started manually from the command line after each boot.
