Re: Trying to replace failed drive in SNAP 4500
It has been my experience that most new drives come with a signature already written to them. That way, the operating system (most commonly Windows) will show the new disk in Disk Management as a nice healthy basic disk. Then you format it with your filesystem, and away you go.
I have recently been playing with a 4200 and various disks, and I came upon an interesting discovery (GOS 5.2.067):
A healthy GOS disk going from position 4 to position 3 showed up as FAILED in the Disk/Unit management screen in the GOS. Moving it back to position 4 did not correct the issue.
I took the drive, mounted it in an external drive enclosure, and ran it through a battery of disk diagnostics under Windows. Everything checked out perfectly.
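For reference, one quick sanity check you can run from a Windows command prompt is pulling the SMART status via WMIC (the drive shows up there even though Windows can't read the Snap's filesystem). This is just a sketch of one such check, not the full set of diagnostics I ran:

    C:\> wmic diskdrive get model,serialnumber,status

A healthy drive reports "OK" in the Status column.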
In the end, the only way to get the drive to go back into the Snap as a healthy disk was to 'clean' it using Diskpart under Windows, which removes the signature that an operating system puts on a disk. This allowed the Snap to say, 'oh, lookie ... a new disk for me to use. I'll put a signature on it, and sync it into the boot RAID config ...' etc., etc.
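For anyone wanting to try this, the Diskpart session goes roughly like this (disk 2 below is just an example from my setup; double-check the output of 'list disk' first, because 'clean' wipes the signature and partition table of whichever disk is selected):

    C:\> diskpart
    DISKPART> list disk        <- identify the Snap drive by its size
    DISKPART> select disk 2    <- substitute YOUR disk number here
    DISKPART> clean            <- zeroes the signature/partition table
    DISKPART> exit

After that, the drive looks factory-fresh to the Snap.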
Also of interest: putting a freshly cleaned disk into a running Snap was not enough to get the GOS to recognize that a device replacement had occurred. I was dismayed to discover that I had to reboot the server before the fresh disk would be integrated into the system for use.
That's my story ... YMMV.
__________________
=================================================
SnapServer 510 (4x 2TB)
SnapServer 520 (4x 2TB), S50 12x 2TB
DL185 G5, 12x 450GB SAS 15K, FreeNAS 9.2
Lenovo IX4-300D, 4x 3TB WD RED