The RAID_CRACKED drive can't be mounted in degraded mode because it fails the disk check at 5%
As I dig into this more and more, it looks like I lost the drive table from disk1 (the failed disk). Disk2 is fine, but it's now an "orphan" - ergo, two drives out and the array is dead. The big question is: how do I get the orphaned disk2 back into the array?
I did try a "resync 60000" but that did not seem to change anything. I don't see a /force option for that command, but I'll give it a shot.
I will try a "config raid 10000 10008 10010 10018" without breaking the array first, but I suspect the server won't allow me to do this.
If it turns out that the array is well and truly dead (which boggles my mind - how can a single drive failure kill a RAID5 setup so easily!?) then I may pony up the extra $30 and buy another 120G unit and upgrade the whole thing to a 480G array.
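For anyone wondering why a second lost disk is unrecoverable: RAID5 stores one parity block per stripe, computed as the XOR of the data blocks, so it can reconstruct exactly one missing disk. Here's a minimal, generic sketch of that math (nothing Snap-specific; block contents are made up for illustration):

```python
# RAID5 parity illustration: parity = XOR of all data blocks in a stripe.
# With parity plus all-but-one data blocks, the missing block is recoverable.
# Lose two blocks (e.g. disk1's drive table AND the orphaned disk2) and
# there is no longer enough information to reconstruct anything.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three hypothetical data blocks on three disks, parity on a fourth.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing one disk: rebuild its block from the survivors + parity.
recovered = xor_blocks([data[1], data[2], parity])
assert recovered == data[0]  # single-disk loss is fully recoverable
```

With two blocks missing, the remaining XOR equations have more unknowns than constraints, which is why the array goes from "degraded" straight to "dead".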
Of course, my trust in the Snap server is gone now that I've seen complete data loss from a single drive failure. And since the 4100 isn't compatible with Vista, it might just be a better strategy to follow the lead of many others in this forum, dump the 4100 completely, and start looking for a used 4500 or 18000.