ARECA Owner's Thread (SAS/SATA RAID Cards)

ix is split internally.

You hit a SAS Tunneling Protocol (STP) limit long before you have a bandwidth issue due to expanders.
 
Odditory,

Did you use the Hitachi utility to change any power management parameters on each hard drive? If so, what values did you use?

I'm trying to use the HDD power management on the Areca to spin up my array on demand. It works fine with the 1-minute setting on the controller, but when I set it to something realistic the drives don't spin up in time and I get timeout errors in the log. I've tried every value up to 2.5 seconds on the Stagger Power On Control to no avail. The only thing I can think of is that I need to set something on each drive.
 
No, I haven't changed any values with the Hitachi Feature Tool. I've already reported this phenomenon to Areca. I don't think the drives are doing anything wrong; they certainly don't take 8x longer than other drives to spin up. I think the Areca card is handling the process wrong and the "timeout" is a false positive, and setting the spin-up stagger value abnormally high is just a workaround for the bug rather than a solution.

I tested this by putting a Hitachi 2TB into a SATA dock and timing how fast it appeared in Windows Disk Management, and saw pretty much the same result as doing the same test with a Hitachi 1TB (7200rpm) or Western Digital 1TB (7200rpm), both of which the Areca 1680 spins up just fine even at the lowest 0.4-second stagger setting.

As I said before, on one 1680 controller with 12x 2TB Hitachis I get away with a 1.0-second spin-up stagger without timeouts; on another 1680 with 20x 2TB Hitachis I can't go lower than 2.0 seconds. I'll report back with what Areca tech support has to say.
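
To put some rough numbers on why a bigger array needs a longer stagger window, here's a quick back-of-the-envelope sketch in Python. The 10-second single-drive spin-up time is an illustrative assumption, not an Areca spec; the point is just that the total spin-up window grows with drive count times stagger, so a fixed controller timeout gets easier to overrun as you add drives.

```python
# Rough sketch: total staggered spin-up window for an array.
# The 10 s single-drive spin-up time is an assumed figure, not an Areca spec.

def spinup_window(drive_count: int, stagger_s: float, spin_time_s: float = 10.0) -> float:
    """Time from power-on until the last drive is ready.

    stagger_s   -- 'Stagger Power On Control' value on the controller
    spin_time_s -- assumed time a single drive needs to reach speed
    """
    return (drive_count - 1) * stagger_s + spin_time_s

# The two configurations reported above:
print(spinup_window(12, 1.0))  # ~21 s (worked without timeouts)
print(spinup_window(20, 1.0))  # ~29 s (hypothetically, if 1.0 s were kept)
print(spinup_window(20, 2.0))  # ~48 s (the setting that avoided timeouts)
```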
 
I think I'm missing something here.

I expanded my RAID 6 raidset and added a hot spare, so I now have 7+2p+1hs.

I have several volumes I would like to expand, but I don't see where I can do that.
When I go to Modify Volume and choose the one I'd like to expand, I can enter a new size, but it's already at the max even though I have several TB available.

help?
 
I have several volumes I would like to expand, but I don't see where I can do that.
help?

From the 1680 manual, p.70:
"Only the last volume can expand capacity"

Which I think means only the most recently created volume in a given raidset (physically the "last" volume in the raidset) can be expanded.
 
Well, I finally have the server I'm going to use for data backups back in my hands after it was on emergency loan to a client. What a pain, having to start from scratch with the system, but the client is happy.

I'll be buying an Areca 1680i, the HP SAS Expander, and 20 of the Hitachi 2TB drives. I'll have the drives in two RAID 6 arrays. The case is a Norco 4220.

It sounds like a number of folks have multiple Areca cards, so I'm wondering if anyone here wants to sell a working Areca 1680i, preferably with everything that comes with it new (cables, manual). I just need the two-port internal model, not the "x" series.

If you have one that you want to sell please send me a PM. Thanks.
 
I'm going to sell mine when Areca releases their new 1880 line, but they're taking their sweet time about it as always.
 
I'm currently running an Areca 1220 with 8 Seagate 7200.11 1.5TB CC1H drives in RAID 6. It works pretty well, but I'm out of space, so it's time to upgrade. I'm using two StarTech 5-bay SATA backplanes to hold my drives. I'm really happy with those too, but I only have 2 bays left.

I've read every page of this thread and the HP SAS Expander thread and I've narrowed my options down to the following:

Norco 4220 + HP SAS Expander + 1680X card.
Norco 4220 + HP SAS Expander + 1880X card.

Here's my question: I want to keep using the 10 StarTech bays in my current server as well as the Norco 4220. I'll put the 1680X in my current server and the expander in either the Norco or my current server. I'd need an x1-to-x4/x8 adapter to put it in my current server (the Tyan S2865 AG2NRF only has PCIe x16/x1/x1). If I put the expander in the Norco, I'd need to run cables from the expander back to my current server as well as to the Norco bays inside. Does anyone see an issue with that?

The only other parts question is whether to get a 1680X now or wait for an 1880X. I'll need to use the array roaming feature because I don't have enough spare disk space to offload my data and create a new array. I assume that even though the chipset has changed, I'll be able to just plug my existing 1220 array into an 1880 and it will come up. Thoughts?
 
Does anyone use the other HDD Power Management features besides Stagger Power On Control? Any comments on Time To Hdd Low Power Idle or Time To Hdd Low RPM Mode?
 
Has anyone had the backup battery die on their card? I've got a 6120 that won't charge the battery past 96% (1.5 years old, backing a 1GB SODIMM).

I know it's a 3.6V lithium 1100mAh pack. It's so tempting to just cut open the pack and replace whatever cells are in there, unless I can find an equivalent pack at a RadioShack kind of store ;)
 
So, recently I found out how much I dislike my 1680ix-24.

I've been running 12x WD 1TB FALS (TLER on) for about 9 months now. Everything has been fine.

I was looking to expand, did some research (mainly on this forum), and decided on the Hitachi 7K2000 drives. I bought six of them last week, threw them in, and a day or so later I had a fresh RAID 5 ready for data. Before I started filling it up I decided to benchmark it. I tried HD Tach, and the line is all over the place, flying up to 400MB/s and falling to 70MB/s wildly. HD Tune shows reads ranging from around 350MB/s down to ~50MB/s. Writing to the array averages about 130MB/s. So I played with the caches and tried some things, and nothing helped. I figured it was either the controller or a bad drive. I removed the drives from the 1680ix-24, dropped them into my other box that has my old ARC-1220, and remade the RAID 5. Wouldn't you know, the read line is a nice smooth 350MB/s with a 590MB/s burst, and writes average 250MB/s.

I am so sick of this finicky card. When I bought it I knew it was picky, but I never guessed it would be this bad.
 
I had no issues with my 1680ix-24 except when it came to SAS expanders. I got 800MB/s all day and night with it.
 
I've noticed with my 1680LP that it works great with one RAID 5 array, but once you add another array (RAID 0 or 5) the performance drops significantly. If I'm streaming from the RAID 5 array while copying files between the RAID 0 array and the RAID 5 array, performance drops as low as ~10MB/s.

I've come to the conclusion that Areca cards bench great but handle multiple simultaneous requests poorly. I'm thinking of switching back to 3ware. The 3ware doesn't come anywhere close to the Areca's benchmark numbers, but its performance is consistent. With 3ware, I could stream to multiple clients and copy from one array to another and still get over 100MB/s copy speed with no drops below 100.

My 1680LP is connected to an HP SAS Expander. The RAID 5 array is five WD 1.5TB GP drives and the RAID 0 array is two WD 1TB RE2 drives, the same config as I had on the 3ware 9650. Caching is turned on; no battery backup is connected. Maybe the expander is the culprit, but I don't have time to test it.
 
Does anyone use the other HDD Power Management features besides Stagger Power On Control? Any comments on Time To Hdd Low Power Idle or Time To Hdd Low RPM Mode?

Both work, and spin-down does too. Low power idle gives minimal savings over full-power idle, probably ~1.5 W/drive (Hitachi 2TB); low RPM saves ~3.5 W/drive. Spun down, of course, a drive consumes <0.5 W. I have the low-power-idle/low-RPM/spin-down timings set to 2-10-30 on my Areca 1680ix-12.
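
For a rough sense of what those per-drive figures add up to at the array level, here's a small Python sketch. The ~5 W full-power-idle baseline is an assumed figure for illustration, not a measured Hitachi 7K2000 number; the per-mode savings are the values quoted above.

```python
# Quick estimate of array-level savings from the per-drive figures above.
# FULL_IDLE_W is an assumed baseline for illustration, not a drive spec.

DRIVES = 12                    # e.g. a 1680ix-12 fully populated
FULL_IDLE_W = 5.0              # assumed per-drive full-power idle draw
LOW_IDLE_SAVING_W = 1.5        # reported saving vs. full-power idle
LOW_RPM_SAVING_W = 3.5         # reported saving vs. full-power idle
SPINDOWN_W = 0.5               # reported upper bound while spun down

print("low-power idle saves ~%.0f W" % (DRIVES * LOW_IDLE_SAVING_W))          # ~18 W
print("low RPM saves       ~%.0f W" % (DRIVES * LOW_RPM_SAVING_W))            # ~42 W
print("spindown saves      ~%.0f W" % (DRIVES * (FULL_IDLE_W - SPINDOWN_W)))  # ~54 W
```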
 
Areca is one of the fastest when it comes to multithreaded read & write patterns:

[benchmark chart from the Xbit Labs roundup linked below]


http://www.xbitlabs.com/articles/storage/display/6-sas-raid-controllers-roundup.html
 
Yeah, on one array. That's exactly what I'm talking about. Areca makes great benchmarking cards, but in practice they aren't as good. Add a couple of arrays on the same card and then run the tests.
 
Sorry, but I don't agree with you; Arecas are the fastest cards I know.

I have a couple of setups with multiple arrays, and they perform really well. If you don't like Areca, well, you should probably try something like 3ware; you'd probably commit suicide! :D

Guys, I'm having problems with the Hitachi disks too, the 2TB ones. Has anyone tried disabling NCQ? If I disable NCQ I don't get any timeouts on my disks, but instead of fast I/O I get 15MB/s... that's weird, it shouldn't happen.

Cheers,
 
Sorry, but I don't agree with you; Arecas are the fastest cards I know.

I have a couple of setups with multiple arrays, and they perform really well. If you don't like Areca, well, you should probably try something like 3ware; you'd probably commit suicide! :D

Guys, I'm having problems with the Hitachi disks too, the 2TB ones. Has anyone tried disabling NCQ? If I disable NCQ I don't get any timeouts on my disks, but instead of fast I/O I get 15MB/s... that's weird, it shouldn't happen.

Cheers,

I had to set the staggered spin-up higher for my Hitachis.
 
I had no issues with my 1680ix-24 except when it came to SAS expanders. I got 800MB/s all day and night with it.

+1.

@Kritter: sounds like some sort of anomaly, because my 1680ix-24 has been nothing but consistent. The only reason performance would vary like that is if there were some sort of conflict, like you'd already created a partition and some Windows process was accessing it, stuff like that.
 
like you'd already created a partition and some Windows process was accessing it, stuff like that.

I benchmarked it before initialization, after initialization, and after partitioning/formatting. Same results: the performance curve is all over the place, whereas the 12x WD 1TB array on the same controller gives me a nice steady line.

But I gave up on getting them to work on the 1680ix. They work well on the 1220, and I don't need more than 14TB at the moment.
 
Is anyone having problems with slow background init on firmware 1.48? I have a 1680ix-12 with 2GB cache.

The old config was 12x 750GB drives in RAID 6, and it might have taken 24 hours to init, if even that. That was on firmware 1.46.

The new config has 4x 750GB drives and 8x 2TB drives, all in RAID 6. The 4x 750GB set took 30 hours; the 8x 2TB set looks like it will take 4 days in total.

A few weeks ago, a drive dropped out of the old config and it went right back in. The rebuild took 6 or 7 hours. This was with firmware 1.48.

Does this match anyone's experiences?
 
As for the actual differences, I'm not sure, but StorPort is much newer and is replacing SCSIport, so you should probably use that.
 
About 9 hours for a RAID 6 init of 8x 2TB Hitachi drives.
18+ hours for the same RAID 6 if you leave a certain option disabled.

The option, I think, was "Disk Write Cache Mode". You may want to turn that on while you initialize the RAID.

Firmware 1.48, 1680LP, with a BBU.
 
About 9 hours for a RAID 6 init of 8x 2TB Hitachi drives.
18+ hours for the same RAID 6 if you leave a certain option disabled.

The option, I think, was "Disk Write Cache Mode". You may want to turn that on while you initialize the RAID.

Firmware 1.48, 1680LP, with a BBU.

I had the disk caches on. I leave them on since I haven't bought a BBU yet.

Firmware 1.48, 8x Hitachi 7K2000 2TB, RAID 5 init:

1680ix-24 4GB: 41 hours

1220 512MB: 5.5 hours

Yes, 41 hours on the faster card.

OK, so it's not just me. The WD20EADS* are a lot slower than the 7K2000, so I guess it makes sense. Maybe it's the expander; I have one on the 1680ix-12 and Kritter has one on the 1680ix-24, while xnoodle doesn't have one on the 1680LP. I wonder what they broke?

Now that it's done, I have no problem maxing gigabit Ethernet with iSCSI using Fedora/tgtd, so I have no complaints. The Intel 82574L NICs, on the other hand... :mad: That was an hour of my life I want back.

*I did have to jumper these to SATA150 to get them working without timeouts, just in case anyone was wondering.
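
As a back-of-the-envelope check on those init times: a foreground init has to touch the full surface of every member drive, so the reported times imply a per-drive write rate roughly like the sketch below. This is a simplification that ignores parity reads and overlap, purely for illustration.

```python
# Rough implied per-drive rate from the reported RAID 5 init times.
# Assumes init throughput ~= drive capacity / elapsed time (a simplification).

TB = 1e12  # decimal terabytes, as drive makers count them

def implied_rate_mb_s(capacity_tb: float, hours: float) -> float:
    return (capacity_tb * TB / 1e6) / (hours * 3600)

# 8x 2TB Hitachi 7K2000, RAID 5, firmware 1.48:
print(f"1680ix-24: {implied_rate_mb_s(2, 41):.0f} MB/s per drive")   # ~14 MB/s
print(f"1220:      {implied_rate_mb_s(2, 5.5):.0f} MB/s per drive")  # ~101 MB/s
```

The 1220 figure is in the ballpark of a 7K2000's sustained write speed, which suggests the expander path rather than the drives is where the time goes.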
 
I had the disk caches on. I leave them on since I haven't bought a BBU yet.



OK, so it's not just me. The WD20EADS* are a lot slower than the 7K2000, so I guess it makes sense. Maybe it's the expander; I have one on the 1680ix-12 and Kritter has one on the 1680ix-24, while xnoodle doesn't have one on the 1680LP. I wonder what they broke?

Now that it's done, I have no problem maxing gigabit Ethernet with iSCSI using Fedora/tgtd, so I have no complaints. The Intel 82574L NICs, on the other hand... :mad: That was an hour of my life I want back.

*I did have to jumper these to SATA150 to get them working without timeouts, just in case anyone was wondering.

I think the expander chip might have a big effect on build performance. I just got a newer revision of the Adaptec 52445 controller, which has a completely different expander chip. Previously it took 40 hours to initialize 10x WD20EADS; the new revision card looks like it will do it in about 24 hours.

What were the problems you had with the 82574Ls? I have a board with them built in; it seems jumbo frames are deliberately disabled due to some sort of bug in the silicon.
 
I think the expander chip might have a big effect on build performance. I just got a newer revision of the Adaptec 52445 controller, which has a completely different expander chip. Previously it took 40 hours to initialize 10x WD20EADS; the new revision card looks like it will do it in about 24 hours.

My 1680ix-12 is from 2008. However, I was surprised to see my 4x 750GB array take as long as it did when the old 12x 750GB array took less time. Maybe it's from having the WD20EADS on the same expander? Not sure.

What were the problems you had with the 82574Ls? I have a board with them built in; it seems jumbo frames are deliberately disabled due to some sort of bug in the silicon.

Literally billions of errors and a loss of network connectivity when passing much more than a ping. This is with kernel 2.6.32; the original 2.6.30 seemed to be OK. The driver from Intel's website behaved the same way. The fix ended up being to pass pcie_aspm=off to the kernel at startup. The 82574L has a problem with PCIe power states, and the driver doesn't turn them off like it should.

I have a pair of them on my X8SIL-F. Pretty good board otherwise.
 
My 1680ix-12 is from 2008. However, I was surprised to see my 4x 750GB array take as long as it did when the old 12x 750GB array took less time. Maybe it's from having the WD20EADS on the same expander? Not sure.



Literally billions of errors and a loss of network connectivity when passing much more than a ping. This is with kernel 2.6.32; the original 2.6.30 seemed to be OK. The driver from Intel's website behaved the same way. The fix ended up being to pass pcie_aspm=off to the kernel at startup. The 82574L has a problem with PCIe power states, and the driver doesn't turn them off like it should.

I have a pair of them on my X8SIL-F. Pretty good board otherwise.

I got confused; the chips I have are 82573Ls. ASPM needs to be disabled to get stable jumbo frames. After reading about it for a while, I thought it had to be disabled in the NIC ROM built into the motherboard BIOS, which isn't very easy for an end user to fix. I wasn't aware that disabling it at the driver level helps. The v1.0.2-k2 driver included with CentOS 5.4 detects that ASPM is enabled and then refuses to allow jumbo frames. I'll have to give that kernel option a try.
 
I got confused; the chips I have are 82573Ls. ASPM needs to be disabled to get stable jumbo frames. After reading about it for a while, I thought it had to be disabled in the NIC ROM built into the motherboard BIOS, which isn't very easy for an end user to fix. I wasn't aware that disabling it at the driver level helps. The v1.0.2-k2 driver included with CentOS 5.4 detects that ASPM is enabled and then refuses to allow jumbo frames. I'll have to give that kernel option a try.

The 82574L sounds like it's buggier: I needed to disable ASPM before it would work at all!

I've actually had ASPM disabled in the BIOS from day one. In my somewhat limited experience, Linux ignores a lot of BIOS options; for example, it still detects serial and parallel ports that the BIOS has disabled. It doesn't surprise me that it ignores the ASPM setting.
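
If you want to confirm what the running kernel actually applied (rather than what the BIOS menu claims), the active ASPM policy is exposed in sysfs on kernels built with PCIe ASPM support. A minimal Python sketch, assuming the standard sysfs path:

```python
# Minimal sketch: check what ASPM policy the running kernel is using,
# regardless of what the BIOS menu claims. The sysfs path is the usual
# one on kernels with PCIe ASPM support; adjust if your distro differs.

from pathlib import Path

policy = Path("/sys/module/pcie_aspm/parameters/policy")
if policy.exists():
    # The active policy is shown in brackets, e.g. "[default] performance powersave"
    print("ASPM policy:", policy.read_text().strip())
else:
    print("pcie_aspm module parameters not exposed on this kernel")
```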
 
TIP #1: If you've connected a SAS expander to an Areca 1680 series card and it's not seeing all the drives, try setting "SES2" to "Disabled". The HP SAS Expander is one example.

Can this be done in the BIOS, and if so, where? I have a 1680i with 1.47 firmware.
 
I had the disk caches on. I leave them on since I haven't bought a BBU yet.



OK, so it's not just me. The WD20EADS* are a lot slower than the 7K2000, so I guess it makes sense. Maybe it's the expander; I have one on the 1680ix-12 and Kritter has one on the 1680ix-24, while xnoodle doesn't have one on the 1680LP. I wonder what they broke?

Now that it's done, I have no problem maxing gigabit Ethernet with iSCSI using Fedora/tgtd, so I have no complaints. The Intel 82574L NICs, on the other hand... :mad: That was an hour of my life I want back.

*I did have to jumper these to SATA150 to get them working without timeouts, just in case anyone was wondering.

My times are with the HP SAS expander connected via the only SFF-8087 port on my 1680LP.
 
Can this be done in the BIOS, and if so, where? I have a 1680i with 1.47 firmware.

No, this has to be done in the web GUI; the option doesn't exist in the boot-time BIOS menu. However, you may not have to for much longer: a few weeks ago I sent Areca an HP expander to analyze the SES2 incompatibility, and lo and behold, today I received a beta firmware file that is supposed to fix it. I'm going to test it now and report back.
 
I've noticed with my 1680LP that it works great with one RAID 5 array, but once you add another array (RAID 0 or 5) the performance drops significantly. If I'm streaming from the RAID 5 array while copying files between the RAID 0 array and the RAID 5 array, performance drops as low as ~10MB/s.

Unfortunately, I've found the same thing. I have 4x 750GB RAID 6 and 8x 2TB RAID 6 raidsets. When benchmarking the second array alone, I have no problem maxing gigabit iSCSI. If I do something like run dd on the first array, my iSCSI performance drops to 10MB/s or worse.

This is on a 1680ix-12 with 2GB cache running 1.48. The OS is Fedora 12.

I'm thinking of replacing the 1680 with a 1212 and a 1222, but I don't know if it'll solve the problem.
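
If anyone wants to reproduce the two-array contention in a controlled way, here's a minimal Python sketch that reads sequentially from both arrays at once and reports per-stream throughput. The device paths are placeholders; point them at a large file or raw volume on each array, and note that for anything rigorous you'd want to bypass the page cache (O_DIRECT, or a tool like fio) rather than rely on this quick-and-dirty version.

```python
# Hypothetical repro sketch: read from both arrays at once and report
# per-stream throughput, to see whether one stream collapses when the
# other starts. Device paths are placeholders; run as root for raw devices.

import threading, time

CHUNK = 8 * 1024 * 1024   # 8 MiB sequential reads
DURATION = 30             # seconds per stream

def stream(path: str, label: str) -> None:
    done = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:
        while time.time() - start < DURATION:
            buf = f.read(CHUNK)
            if not buf:
                break
            done += len(buf)
    secs = time.time() - start
    print(f"{label}: {done / secs / 1e6:.0f} MB/s over {secs:.0f}s")

threads = [
    threading.Thread(target=stream, args=("/dev/sdb", "array #1")),  # placeholder path
    threading.Thread(target=stream, args=("/dev/sdc", "array #2")),  # placeholder path
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```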
 
I got rid of the Areca. I tried every setting possible and nothing fixed it.
 