Quote:
Originally Posted by blue68f100
It's been a while since I have done anything with the 4100. Normally after replacing a HD...
As I mentioned in my original post, I successfully replaced a failed (crashed) drive in this unit last December. At the time, I was impressed with how easy it was to accomplish. I wish it was that simple now.
Quote:
Originally Posted by blue68f100
Drive 3 has a different starting point than the other 2 drives. This normally occurs if OS updates were done after the RAID5 array was built. OS v2-v3-v4 all calculate the starting point differently; this is a time bomb waiting to happen. If this is the case, just back up the data while it is in degraded mode (3 of 4 drives).
I believe that Drive 3 has a different starting point because it has a different capacity than the other three drives. This is the drive that was replaced last December (it was a failed 80G drive that was replaced with a new 120G drive).
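For what it's worth, in a generic RAID5 (I can't speak to exactly how SnapOS lays out its metadata), each member only contributes the smallest drive's capacity, so the 120G replacement should be treated as an 80G member and the extra 40G simply goes unused. A quick sanity check of the arithmetic:

```python
# Four-drive RAID5 with one mismatched (larger) replacement drive.
drives_gb = [80, 80, 80, 120]

# Each member contributes only the smallest drive's capacity;
# the extra 40G on the 120G drive is unused.
usable_per_drive = min(drives_gb)

# One drive's worth of capacity is consumed by parity.
array_capacity = usable_per_drive * (len(drives_gb) - 1)

print(array_capacity)  # 240 (GB)
```

This matches the general RAID5 rule, not anything SnapOS-specific.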
The OS was updated prior to the array being built and was not updated after the array was (re)built.
Quote:
Originally Posted by blue68f100
Now you say it fails at 5%. This is the point on the HD's where the table is kept. So your orig/bad drive is toast. I use Spinrite to check hd's out.
If I pull the drives and use Spinrite (which I'll have to buy), won't that result in an even worse "orphan" situation where all four drives are orphaned from the array? How would I tell the server, "No, you stupid hunk of silicon, these ARE the original drives! Rebuild the array, already!!"
The sad part here is that I can't even get the server running in degraded mode with all four of the original drives, in the original installed order. At this point, I'd settle for that.
Quote:
Originally Posted by blue68f100
Read the FAQ's. Also verify the 4100 MB has the MOD done to it. Ref. to Sticky thread "Attention 4100 users" at the top of the threads.
I've practically memorized the FAQs from reading them so many times. I've even read the Mirror Repair for Orphan process in the wiki, but that seems to involve a failed "orphan" drive.
My mainboard is a -001 model. I don't think the mod was done to it (it has been two years since I checked and my memory of that specific item is a bit fuzzy), but my understanding is that the mod "problem" shows up on a restart/reboot after the drives have been upgraded. Mine has been an operating RAID5 setup for over two years with multiple restarts and one disk hard-crash (the drive made a sound similar to a jet engine when powered up) and a replacement with a mis-matched (larger capacity, obviously) drive. Since that replacement, the unit has seen at least five reboots. (It's worth noting that this current situation was -not- precipitated by a reboot.)
Quote:
Originally Posted by blue68f100
Once you break a Raid5 4 disk array below 3 HD all data is lost. Your only option is a recovery service.
This is the frustrating thing: I -haven't- broken the array at all. I have the exact same four drives, in the exact same order (yes, I labeled them), in the exact same server. All four drives appear to be in good working order... unless I'm misunderstanding what is meant by "orphan". In my mind that indicates a "good" drive that has lost sync with the array, not a "failed" drive. Either way, as it currently stands, the array won't mount, even in degraded mode. But if you look at the info, it looks like the array has (at least) three working drives ...
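That expectation matches how RAID5 degraded mode works in general: the array keeps XOR parity, so any single missing member can be reconstructed from the remaining three. A toy sketch of the principle (byte strings standing in for stripe chunks — this is generic RAID5 math, not the SnapOS on-disk format):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (RAID5 parity math)."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

# One stripe on a 4-drive RAID5: three data chunks plus one parity chunk.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# With one drive missing (say the one holding data[1]), the survivors
# plus parity recover it exactly -- that's what degraded mode relies on.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"BBBB"
```

The catch, of course, is that the controller has to agree to assemble the remaining members in the first place.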
At this point my goal is to get the array running (in degraded mode) and copy all my data off.
Unless there is another suggestion, my next plan is to:
1. co de config individual 10000 10010 10018
2. reboot
3. co de config raid 10000 10008 10010 10018
4. pray the array mounts
5. copy the data off and restart with four "new" drives
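If the array does mount, it may be worth verifying the copy before wiping anything. A minimal sketch of a tree-checksum comparison (the mount points are placeholders I made up, assuming both the Snap share and the destination are mounted locally):

```python
import hashlib
import os

def tree_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            hashes[os.path.relpath(path, root)] = digest
    return hashes

# Hypothetical mount points -- substitute your real source and destination:
# assert tree_hashes("/mnt/snap4100") == tree_hashes("/mnt/backup")
```

Reading whole files into memory is fine for a spot check; for very large files you'd hash in chunks instead.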
I have no idea if this will work, but I'm running out of ideas. It looks like the Guardian OS has provisions to accept an "orphan" drive back into an array, but I'm running SnapOS. When I issue the co de config individual command, do the drives end up as "orphans" or do they end up as JBOD? Can I do a co de config raid on a mixed set of orphan/JBOD drives? (Remember, these are the same drives in the same order as the working array.)