Hardware RAID question: OS that supports expanded volumes

ekological

Hi guys,

I'm trying to build a file server and settle on an OS. I have a 3ware 9650SE 16-port controller currently connected to four 2TB WD enterprise drives. I'm nowhere near the capacity that the new file server will offer and want to put off purchasing drives until I need them (the drives will inevitably get cheaper, etc.), so I would like to add drives one or two at a time.

I played around with OpenSolaris because it's free and the benefits of ZFS intrigued me, but it doesn't seem that you can easily add drives. I first configured three drives in a RAID5 and OpenSolaris recognized it perfectly fine. I then added another drive and migrated the RAID5 to use all four drives. After the procedure was done (it took 3 days), I could see that the physical size of the RAID volume was bigger, but the usable size was still the same as before adding the fourth drive. I think regular Solaris has an autoexpand feature, but it's not implemented in OpenSolaris.

Anyway, I was wondering if anyone out there has any "been there, done that" experience with starting off a hardware RAID volume with a small number of drives, adding new drives over time, and actually being able to use the new space.

TIA,
Chester
 
Linux + XFS file system.

The problem with growing the size of the array is that you need to resize the partition and then grow the file system afterwards to be able to use the new space.

Usually we are talking 2TB+ partitions, so GPT is used, and resizing those partitions is a PITA; not to mention most OSes don't have the tools necessary to do it.

The way I chose to address that is to run Linux with XFS directly on the raw device (the array). I also use LVM in the middle, but it's there only to allow me to carve up the space any way I want. With no partitions to deal with, being able to use new space after an array OCE is just one command away (xfs_growfs).
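Concretely, with LVM in the middle it's a couple of commands rather than one; skip LVM and the xfs_growfs line really is all you need. Device, volume group and LV names below are just examples:

Code:
# rescan so the kernel notices the array's new size (assuming the array shows up as /dev/sdb)
echo 1 | sudo tee /sys/block/sdb/device/rescan
# grow the LVM physical volume, then the logical volume, into the new space
sudo pvresize /dev/sdb
sudo lvextend -l +100%FREE /dev/vg_storage/lv_storage
# finally grow the mounted XFS filesystem to fill the logical volume (works online)
sudo xfs_growfs /media/storage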
 

I have used this method many times with great success. Just remember that without a partition the drive will appear "blank". This is not a problem, just something to remember when fdisk'ing/mkfs'ing/etc. at a later date.
 
I don't see why you wouldn't be able to do this on Solaris if you wanted to. I'm not a Solaris expert, but I'm sure it's possible if you do a bit of research.

Though I too am using XFS on top of LVM and am very satisfied with it.
 
The problem with growing the size of the array is that you need to resize the partition and then grow the file system afterwards to be able to use the new space.
Yeah, that is usually a problem...

I don't know of any OS that can auto-expand partitions to fit newly available drive space. But I don't really know the *nix side of things, so there might be something out there that suits that specific need.

On the Windows side, though, if memory serves me right you can expand partitions from within the OS starting with Vista/Server 2008, so that might be of help.

And you also have *cough* WHS *cough*, where Drive Extender takes care of extra volumes as if they were a single drive. However, RAID is not officially supported within WHS, so something might go (VERY) wrong while you're expanding your array, or adding the extra space as a new volume to the DE pool...

Cheers.

Miguel
 
Windows has no problem with expanding array sizes. I've expanded volumes multiple times.
 
Linux+XFS will allow you to do filesystem expansion while the disks are mounted and online, which is nice.
 
Thanks for the advice, guys. Sadly, I searched and searched and it doesn't appear to be easy on OpenSolaris. Solaris...yes, just not OpenSolaris. I already installed Ubuntu desktop and am giving it a whirl. Samba performance is pretty good and I managed to run SqueezeCenter on the new server to stream music to my audio device. I'm currently migrating the 4TB RAID5 volume to a 6TB RAID5 volume =) Started this morning and it's 16% done. DOH!
 
Check the /proc/sys/dev/raid/speed_limit_max tunable; if it's too low it'll cripple your rebuild performance.
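For reference, checking and bumping it is quick (it's the Linux md/software-RAID resync cap, in KB/s):

Code:
# show the current cap (KB/s)
cat /proc/sys/dev/raid/speed_limit_max
# raise it, e.g. to ~500 MB/s; takes effect immediately but doesn't survive a reboot
echo 500000 | sudo tee /proc/sys/dev/raid/speed_limit_max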

Also, on OpenSolaris, are you using ZFS or some other filesystem? I still don't see why this wouldn't be trivial to do; every other modern filesystem has an easy way to do this, and my understanding of ZFS was that it was basically handled automatically. Can you give a bit more detail about why you're running into issues? Maybe we can find a solution.
 
It may not be directly what you asked for, but FlexRAID might give you what you are looking for; it is software RAID.
 
I just checked and the value is 200000. Thanks for the tip.

As for OpenSolaris, I was using ZFS. I asked on the ZFS and OpenSolaris forums and was greeted with a bunch of replies insulting me for spending $$$$ on a hardware RAID card, because the best approach is supposedly to use either the motherboard SATA ports or cheap adapters and rely on RAID-Z software RAID. Then others said that I have to think differently because OpenSolaris was designed for enterprise use, and that someone looking to add a drive at a time was not thinking in the enterprise frame of mind. :rolleyes: Their suggestion was to add drives three at a time, creating a new RAID-Z vdev and adding it to the existing zpool. I guess for up to six drives it's about even in terms of usable space (I would lose two drives under their scheme of two RAID-Z vdevs of three drives each, and with six drives I would probably switch over to RAID6 anyway).
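For what it's worth, the scheme they were pushing looks something like this (pool name and device IDs are just placeholders):

Code:
# start with a pool made of one three-disk RAID-Z vdev
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
# later, grow the pool by adding a second three-disk RAID-Z vdev
zpool add tank raidz c1t3d0 c1t4d0 c1t5d0
# check the new capacity
zpool list tank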

I'm relatively new to the *nix administration side of things (I usually develop scripts and write code), so I'm not sure why I couldn't do what I wanted to do. I used the "format" command to see the drive IDs, and after expanding the LUN I saw the new size; however, when I did "zpool list" or something like that, the partition (for lack of the correct term, if I got that wrong) was still the old size.

Here's an example of a thread from somewhere:

http://www.opensolaris.org/jive/thread.jspa?threadID=109021&tstart=0

It doesn't seem like the person got his issue resolved and much of what was discussed was way over my head.

Chester

 
The autoexpand property definitely seems to be available in OpenSolaris; there are lots of mentions of it in the mailing lists and the documentation. It's also a pretty core feature, and probably not something Sun would want to (or could easily) extract from their open OS. What happens when you try 'zpool set autoexpand=on tank'? From a quick search around the mailing lists, it appears that before this property was added (fairly recently; the fix was proposed in 2008), the expansion was automatic (since sometime in 2006).
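Something like this is what I'd expect to work, assuming your build has the property (pool and device names are placeholders):

Code:
# see whether autoexpand is already on
zpool get autoexpand tank
# enable it so LUN growth gets picked up
zpool set autoexpand=on tank
# on builds that support it, you can also expand a specific device by hand
zpool online -e tank c1t0d0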

It *should* just work, from what I can tell. But like I said, I'm not a Solaris expert and have used it only briefly.
 
I have done OCEs on the same controller as you under Windows Server 2008. It just requires a restart after every expansion so that Disk Management can see the additional space, and then I can simply extend the partition to match.
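If you'd rather not click through Disk Management, diskpart can do the same thing (the volume number is whatever 'list volume' reports for the array):

Code:
diskpart
DISKPART> rescan
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend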
 
I have thought about Windows Server 2003 or 2008, as power consumption might be a little better. The only reason I've stayed away is the system crashes on my workstation that would necessitate a rebuild/verify of the RAID volume, which is how this whole exercise started to begin with. I suppose a Windows server OS running on a server motherboard with ECC RAM and really nothing else might be more stable. Then there's the cost of licenses, which isn't a problem with Ubuntu or OpenSolaris. But I'm glad to know it's an option, and, being native Windows, performance might be better than running things over Samba.
 
Hey guys,

The migration finished and I tried to grow the filesystem using:

Code:
sudo xfs_growfs /media/sdb1

It just seems to display information without doing anything.

Code:
meta-data=/dev/sdb1              isize=256    agcount=5, agsize=244138800 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=976557047, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

I tried to use the graphical gparted, which required me to unmount the filesystem and then let me resize, but it appeared to do nothing. Any ideas? Did I create the initial XFS partition incorrectly? I used the gparted app and created the original partition as a primary partition. gparted now shows 1.82TB unallocated.

Thanks,
Chester
 
Yes, the best method for this is without a partition, so the whole device (/dev/sdb) mounted at /media/sdb. Then, once you grew the array, you'd simply do xfs_growfs /media/sdb and it would grow to the maximum extent of the array.
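A minimal sketch of that whole-device setup, assuming the array shows up as /dev/sdb and you mount it at /media/sdb (and remembering that mkfs wipes whatever is on the device):

Code:
# filesystem directly on the raw array device -- no partition table at all
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /media/sdb
sudo mount /dev/sdb /media/sdb
# ...later, once the controller finishes an OCE and the kernel sees the new size:
sudo xfs_growfs /media/sdb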

Do you already have data on the array?
 
How do you mount /dev/sdb?

There is data on the array, as this is a test to make sure I can do everything I need to do before I move everything over and rely on this file server for my needs.

Thanks,
Chester

 
Don't use partitions; you're only adding an extra layer and asking for trouble. Use XFS on the whole raw disk.
 
You need to delete the partition and format the array with:

Code:
[root@localhost ~]# mkfs.xfs /dev/sdb

Notice the missing "1", then you're all set! Remember, this will delete all your data and you'll be starting over.
 
Hold on, if you're starting over you should properly tune your XFS filesystem; you need to specify sunit and swidth values appropriate to your array.
 
By all means, I'm all ears =) Heck, I used the default stripe size (256K) for the volume. The main use for this server is to hold Blu-ray movies, losslessly compressed audio and pictures from my dSLR (12MB files, though that might change to larger files if I get a better camera). Any input and insight is greatly welcomed.

TIA,
Chester

 
Like I said, the specific options depend on the type of array, stripe size and number of disks, but the general formula is:
Code:
 mkfs.xfs -d su=<stripe_size>,sw=<nr_data_disks> -l version=2,su=<stripe_size> /dev/sdX
where nr_data_disks is the number of data disks in the array, so for RAID6 it would be the number of disks minus 2, for RAID5 the number of disks minus 1, etc.
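For example, assuming your current layout (a 4-drive RAID5 with the 256K stripe you mentioned, i.e. 3 data disks), that works out to something like:

Code:
mkfs.xfs -d su=256k,sw=3 -l version=2,su=256k /dev/sdb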
 
Also, I forgot to mention you should add the noatime option to your mount entry in /etc/fstab.
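Something along these lines (device and mount point are just placeholders):

Code:
# /etc/fstab entry for a whole-device XFS array, mounted with noatime
/dev/sdb   /media/sdb   xfs   defaults,noatime   0   0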
 
You don't *need* to reformat to add sunit and swidth tuning values; they can be set at mount time.
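If I remember the XFS mount options right, sunit/swidth are given in 512-byte sectors there, so a 256K stripe across 3 data disks would be something like (device and mount point hypothetical):

Code:
# 256K stripe = 512 sectors; swidth = sunit x 3 data disks = 1536 sectors
sudo mount -o sunit=512,swidth=1536 /dev/sdb /media/sdb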
 
Don't use partitions; you're only adding an extra layer and asking for trouble. Use XFS on the whole raw disk.

Until you add a new disk and you can't tell which one is new because there's no partition on it... it's easier to tell quickly whether there's already data on a disk when it has a partition.
 
In his case this will not be an issue, as the OS will never see the individual drives. Most of the time you'd be installing a new drive anyhow, which also will have no partitions.

However the point is very valid and it does get confusing.
 
I'm curious since I haven't done anything yet. Is there a way to expand things with the way I have it, partitions and all? I would like to see if I could make this setup work, just as an exercise.
 
Did you do a GPT partition? I haven't checked in a while, but last time I looked there were no Linux tools to resize those partitions.
 
Until you add a new disk and you can't tell which one is new because there's no partition on it... it's easier to tell quickly whether there's already data on a disk when it has a partition.

You can always tell; granted, not with fdisk, but you can see what's mounted, check fstab, etc. Also, this will be a file server, and the raw device is a RAID array, not a single drive, so even with fdisk you can quickly see the size and tell which is which.
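For example (device name hypothetical), the filesystem signature is still readable straight off the bare device:

Code:
# blkid reports the filesystem type/UUID even with no partition table
sudo blkid /dev/sdb
# or just check what's mounted
mount | grep sdb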
 
I'm curious since I haven't done anything yet. Is there a way to expand things with the way I have it, partitions and all? I would like to see if I could make this setup work, just as an exercise.

Yes, you can. You have to delete the partition and then create a new one at the exact same starting point/block, only with a later ending point.
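A very rough sketch of that with parted, at your own risk and with backups, since the new partition must start at exactly the same sector as the old one (the 40s below is only an example; use whatever 'print' shows):

Code:
sudo parted /dev/sdb
(parted) unit s
(parted) print                      # note the exact Start sector of partition 1
(parted) rm 1
(parted) mkpart primary 40s 100%    # reuse the Start sector from the print output
(parted) quit
# re-read the partition table, remount, then grow the filesystem into the new space
sudo partprobe /dev/sdb
sudo mount /dev/sdb1 /media/sdb1
sudo xfs_growfs /media/sdb1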
 
Maybe this is a dumb question. If you delete the partition and recreate one, this deletes the data, correct? I already rebuilt the array as a 3-disk array and created an XFS filesystem as outlined above, so I'm no longer growing from 4TB to 6TB; I was just curious.

Thanks,
Chester

 
Deleting a partition doesn't delete the data; if you successfully recreate a valid partition with the same starting point, the data will be accessible again.
 