RAID 1 gets degraded every few months, a rebuild fixes it... but why?

gerbiaNem · 2[H]4U · Joined Mar 6, 2005 · Messages: 2,169
So I have two 3TB Seagates in a RAID 1 on an Intel RSTe X79 board. They have the latest firmware, so the known dead-drive bug should be a non-issue.

I have read patrol enabled to check for errors. I know it can slow down performance, but that's not a big deal.

Every few months, the system shows the array as degraded and hard locks right after it happens. I also hear a click, which I'm not sure originates from my UPS or my sound card (a Xonar Essence STX, which makes a loud click when it turns on).

I restart the system, reset the drive to normal, the array rebuilds, and all is fine for a few months. I'm slightly overclocked to 4.2 GHz on my 3930K; everything else is completely stable.

I know the drives aren't "RAID" drives, so power-save disconnects could be causing the issue, but I want some opinions on whether this is a common occurrence. I'm not sure RAID-rated enterprise drives are really necessary for a software-based RAID.
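If you want to rule out the power-save disconnect theory, you can check the drive's Advanced Power Management setting. A minimal sketch, assuming hdparm is available and the drive shows up as /dev/sda (adjust for your system):

```shell
# Query the current APM level; values of 127 or below permit
# spindown/head parking, which can drop a drive out of an array:
hdparm -B /dev/sda

# Set APM to 254 (maximum performance, no spindown). This is not
# persistent across reboots on most setups, so re-apply at boot:
hdparm -B 254 /dev/sda
```

On Windows with Intel RST you'd instead disable link power management / drive spindown in the power plan, but the idea is the same.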

Thanks for any information.
 
Seagate Barracudas? They don't have CCTL (Seagate's version of TLER), so maybe you are requesting a corrupt file and running into a URE. Without CCTL, the drive will keep retrying that block indefinitely; with CCTL it would time out after a few tries, mark the block bad, and move on.

Clicking sound, I have no idea.
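For what it's worth, CCTL/TLER is exposed through ATA as SCT Error Recovery Control, and some desktop drives do support it even if it's off by default. A minimal sketch for checking and setting it, assuming smartmontools is installed and the drive is /dev/sda (yours may differ):

```shell
# Show current SCT ERC read/write timeouts, if the drive supports it:
smartctl -l scterc /dev/sda

# Set read and write recovery timeouts to 7.0 seconds each
# (the values are in tenths of a second). This setting does not
# survive a power cycle, so it needs to be re-applied at boot:
smartctl -l scterc,70,70 /dev/sda
```

If the drive reports "SCT Error Recovery Control command not supported", then it really will hang on a bad sector until the controller gives up and drops it.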
 

This makes sense to me, too. The disk gets a little busy and gets kicked out of the array. The rebuild doesn't show any issues, but it happens anyway. Without the right kind of drives, this will keep happening until one drive fails completely. You would have to switch to different drives (Seagate ES.2 or Constellation) to resolve the issue.
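Before writing a drive off, it's worth checking whether SMART actually shows surface trouble. A hedged sketch, again assuming smartmontools and /dev/sda:

```shell
# Pull the attributes that usually precede real failures; nonzero
# reallocated or pending sector counts mean the drive is degrading:
smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'

# Run a long self-test to scan the whole surface (takes hours on 3TB):
smartctl -t long /dev/sda
```

If those counters stay at zero across the degrade/rebuild cycles, the drives themselves are probably fine and the dropouts are timeout- or controller-related.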
 
Happens to me on my 3930K and RIVE with two 2TB Hitachi Deskstars. Even a crash will trigger a rebuild, and every so often I have to reset one of the drives.

I don't think it's the CPU or motherboard, just something with HDDs. I have two other RAID arrays (RAID 0) with SSDs and they never have issues.
 
So I have an update.

I removed my second GTX 670 from my machine, and I've been running with only one for a few days now with no issues whatsoever (*knock on wood*). I noticed that when my array would fail, the two monitors running off the second card would show slight artifacts (black lines), but I never suspected the video card might be causing it.

The problem got so bad that even my primary RAID 0 array started having issues. Maybe the southbridge was struggling to keep up with all of the bandwidth? I'm going to run like this for a week to confirm that's what was happening. I initially tried upping the PCH voltage with no success.
 

How's your power? Do you use an in-line UPS? How old is the power supply, and what wattage?
 