11-06-2015, 12:00 PM   #3
Re: 2TB Drives in a 550

Put the original disk in slot 1 (sda).

Boot the system with the other drives out and wait for the system to fully come up.

Insert the other drives one at a time and make sure that the system recognizes each one:

Log in over ssh:

1) At the login prompt use the account 'admin'.
2) The default password is 'admin', or whatever you changed it to.
I recommend that you change the password; otherwise you won't be able to go any further.
3) At the CLI prompt issue the command: osshell
This will drop you to a real BASH prompt.
4) Issue the command: su -
This is where you will need to have changed the password: the root password is the same as the 'admin' password, unless you haven't changed it - then it's a default password known only to the developers.
Once you're in BASH as root you can totally wipe those Red drives. They likely have some sort of formatting on them that the system doesn't really recognize.

Verify that the drives are detected:

cat /etc/devices

It should look like this:

-sh-3.1# cat /etc/devices
/dev/sda SCSI6:0-0 976762584 976 GB WDC-WD10EALS-00 0/0 WD-WCATR0306500
/dev/sdb SCSI6:0-1 976762584 976 GB WDC-WD10EALS-00 0/1 WD-WCATR0314773
/dev/sdc SCSI6:0-2 976762584 976 GB WDC-WD10EALS-00 0/2 WD-WCATR0312087
/dev/sdd SCSI6:0-3 976762584 976 GB WDC-WD10EALS-00 0/3 WD-WCATR0307303

This should list the HDD by slot and serial number.
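If you just want the slot and serial for each drive, awk can strip out the rest. This is a sketch, not a GuardianOS feature: the field positions match the sample output above, and the here-doc stands in for `cat /etc/devices` so it can be run anywhere.

```shell
# Sketch: print device, slot, and serial (fields 1, 7, and 8 of the
# /etc/devices format shown above). On the box, pipe `cat /etc/devices`
# into the awk command instead of using the sample here-doc.
awk '{ printf "%s slot %s serial %s\n", $1, $7, $8 }' <<'EOF'
/dev/sda SCSI6:0-0 976762584 976 GB WDC-WD10EALS-00 0/0 WD-WCATR0306500
/dev/sdb SCSI6:0-1 976762584 976 GB WDC-WD10EALS-00 0/1 WD-WCATR0314773
EOF
```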

Then you can wipe them. Be VERY careful not to wipe sda or you will be out of luck!

dd if=/dev/zero of=/dev/sd? bs=512M count=1

Replace the ? in the line above with b, c, and d (run the commands one at a time, not simultaneously).

This will wipe the MBR and the first 512 MB of the drive. If you really want to get punchy, you can leave off the count=1 part and it will instead zero out the whole drive; if you choose that option, be prepared for each drive to take several hours. Don't bother doing multiple passes - people who recommend writing zeros several times don't really understand how magnetic media works (hint: it's BS - no one has ever recovered overwritten data, according to acquaintances who work at DriveSavers, probably the premier data recovery company in the world).
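The per-drive dd runs above can be wrapped in a loop. This is my own sketch, not part of GuardianOS: the DRY_RUN flag and the sda guard are additions for safety, and as written it only prints the commands - you'd set DRY_RUN to 0 on the Snap box itself once you're sure of the device names.

```shell
# Sketch: wipe the start of each data drive in turn, refusing sda.
# DRY_RUN=1 (the default here) only prints the dd commands; set it to 0
# on the box itself when you are certain of the device names.
DRY_RUN=1
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    case "$dev" in
        /dev/sda) echo "refusing to touch $dev (OS drive)"; exit 1 ;;
    esac
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: dd if=/dev/zero of=$dev bs=512M count=1"
    else
        dd if=/dev/zero of="$dev" bs=512M count=1
    fi
done
```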

After you zero the MBRs, you can clone the OS - this is an automatic process that happens on a hotadd:

hotremove /dev/sdb
hotremove /dev/sdc
hotremove /dev/sdd

hotadd /dev/sdb
hotadd /dev/sdc
hotadd /dev/sdd
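The six commands above can also be written as two loops. hotremove and hotadd are GuardianOS commands that only exist on the Snap box, so this sketch just echoes what it would run; drop the echo when running it on the box itself.

```shell
# Sketch: the same hotremove/hotadd sequence as loops. The echo is there
# so this can be run safely anywhere; remove it on the Snap box itself.
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    echo hotremove "$dev"
done
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    echo hotadd "$dev"
done
```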

Verify that the partitions are cloning:

cat /proc/mdstat

You should see md100 and md101 being rebuilt. These are both RAID1 and contain the rootfs and a swap partition, respectively. If it doesn't work, you may want to physically pull the drives and reinsert them; if you do it that way, it may be wise to wait a minute or so between drives. The partitions are small and should clone very quickly.

Once the drives have cloned you should see something like this:

-sh-3.1# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md101 : active raid1 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      8387520 blocks [4/4] [UUUU]
md100 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      4194240 blocks [4/4] [UUUU]
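If you want to script the "is it done yet" check, you can count the md lines that report all four members present ([4/4]). This is a sketch of my own: the sample text below stands in for /proc/mdstat so it runs anywhere; on the box you'd read /proc/mdstat instead.

```shell
# Sketch: the clone is complete when both md100 and md101 report [4/4].
# Sample text stands in for /proc/mdstat so this can run anywhere.
mdstat='md101 : active raid1 sda5[0] sdd5[3] sdc5[2] sdb5[1]
      8387520 blocks [4/4] [UUUU]
md100 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      4194240 blocks [4/4] [UUUU]'
done_count=$(printf '%s\n' "$mdstat" | grep -c '\[4/4\]')
if [ "$done_count" -eq 2 ]; then
    echo "clone complete"
else
    echo "still rebuilding"
fi
```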

Then pull disk 1 (sda), put in your new drive and repeat the above dd process and hotremove/hotadd procedure.

Once it's all done you should be able to go into the UI and build a new RAID array using the new capacity.

If this does not work, then the BIOS in the system may not correctly recognize the drive geometry, and unfortunately you're not going to be able to do anything about that without an upgrade to the last build of Guardian OS for the 500 series, which was GOS 6.5.029. That build upgraded the BIOS in these systems and definitely recognizes 2TB drives - but the 5.2 builds should as well. 3TB and larger drives were never validated on this series.
Trod