08-20-2007, 09:11 AM | #1 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Nas 4100 Panic
I need some help with my Quantum 4100 with four 100 GB disks. I filled the disks in my unit to over 90%, which I now realize was a bad idea!
The unit gets:

PANIC: ASSERT(fcbp->fcb_magic == FCB_MAGIC)
07/24/2007 20:16:11 109 F SYS | Call Stack : $00247F81 $002493A0 $0024A64F $00249892 $00249B90 $00241853 $002391FC $0023652F $001041C3

This happens when the unit tries to mount the RAID device 60000 (RAID5) and run fsck:

07/24/2007 21:16:30 110 I L01 | File System Check : Executing fsck /dev/rraid0 /force /fix /fixfatal
07/24/2007 21:16:30 110 D SYS | Propagate on /pri2/os_private: Success - 8 files, 0 dirs; Errors - 0 files, 0 dirs
07/24/2007 21:16:30 110 W L01 | File System Check : partition is NOT clean.
07/24/2007 21:16:30 110 D SYS | Fsck - Using primary superblock
07/24/2007 21:16:30 110 D SYS | 62227408 bytes pre-allocated
07/24/2007 21:16:30 110 D SYS | Memory allocation for i-node cache: 90% of free RAM
07/24/2007 21:16:31 110 D L01 | File System Check : Failed to allocate 54578690 bytes for rcd_lncntp!!!
07/24/2007 21:16:31 110 D SYS | -- Swap-based Fsck --
07/24/2007 21:16:31 110 D SYS | 11569 i-node cache blocks, cache hash table: 4093 entries
07/24/2007 21:16:31 110 D SYS | 256 i-nodes per generic cache block
07/24/2007 21:16:31 110 D SYS | 170 i-nodes per directory cache block
07/24/2007 21:16:31 110 I L01 | File System Check : ** Phase 1 - Check blocks and sizes
07/24/2007 21:24:34 110 W L01 | File System Check : 30627695 Dup I=6501122
07/24/2007 21:35:27 110 I L01 | File System Check : ** Phase 1b - Rescan for more duplicate blocks

I also tried to mount the drives manually:

config device automount disable
reboot unit
config devices fsck 60000 /fix /fixfatal /altsb
config device mount 60000

07/27/2007 10:04:06 121 D L01 | File System Check : Failed to allocate 54578690 bytes for rcd_lncntp!!!
07/27/2007 10:04:06 121 D SYS | -- Swap-based Fsck --
07/27/2007 10:04:06 121 D SYS | 11569 i-node cache blocks, cache hash table: 4093 entries
07/27/2007 10:04:06 121 D SYS | 256 i-nodes per generic cache block
07/27/2007 10:04:06 121 D SYS | 170 i-nodes per directory cache block
07/27/2007 10:04:06 121 I L01 | File System Check : ** Phase 1 - Check blocks and sizes
07/27/2007 10:11:55 121 W L01 | File System Check : 30627695 Dup I=6501122
07/27/2007 10:22:35 121 I L01 | File System Check : ** Phase 1b - Rescan for more duplicate blocks
07/27/2007 10:30:38 121 W L01 | File System Check : 30627695 Dup I=6501122
07/27/2007 10:30:38 121 I L01 | File System Check : ** Phase 2 - Check pathnames
07/27/2007 10:30:42 121 D SYS | Stack reference = 0x003FC6A8
07/27/2007 10:30:42 121 D SYS | Detected a unit with 128M of memory.
07/27/2007 10:30:42 121 D SYS | Dumping 0x7FFFE00 bytes of memory from address 0x0, file offset 0x200 ...
07/27/2007 10:30:42 121 D SYS | Done.

System Initialization : Server v3.4.803  Build Date: Jan 15 2003 18:04:19
Boot Count: 111  Executable built by KEVIN
Hardware platform: 2.2.1  Model: 2 (128 MBytes)  S/N: xxxxxx

Please help!

Best regards,
Mats
08-20-2007, 11:25 AM | #2 |
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
|
Re: Nas 4100 Panic
Have you tried a reset to factory, or just reboots? You will need Assist to do the initial setup if you reset to factory. No data is lost during this, just the password and network settings. I also see you're running version 3.4.803.
Do you have your data backed up? Yes, these units do not like to be filled up, and it seems to happen quite regularly, which suggests they use storage space for cache. Can you post the results of "co de info"? That will give the drive info, if you can get it; I would like to verify that you have not lost a drive. I have a couple of thoughts to try. One: if you have a copy of SpinRite, run it on all drives. Since it is not OS dependent and works at the controller level, no data is lost, and several users have posted good results with it. Then see if the unit will come up without going into panic. The other option is only if nothing else works, but I want more info before I suggest it.
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5, 1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5, 1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, Using SATA converts from Andy Link to SnapOS FAQ's http://forums.procooling.com/vbb/showthread.php?t=13820 |
08-21-2007, 05:40 AM | #3 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Re: Nas 4100 Panic
Here is the output from the "info device" command:
Logical Device: 10006  Position: 0  JBOD  Size (KB): 32296  Free (KB): 25608  Private  Mounted  Label: Private  Contains system files only
  Unique Id: 0x7281F56115D4DD45  Mount: /priv  Index: 12  Order: 0
  Partition: 10006  Physical: 10007  FS  Size (KB): 32768  Starting Blk: 515  Private
  Physical: 10007  Drive Slot: 0  IDE  Size (KB): 97685504  Fixed

Logical Device: 1000E  Position: 0  JBOD  Size (KB): 32296  Free (KB): 22784  Private  Mounted  Label: Private  Contains system files only
  Unique Id: 0x4A0AE8356FB6E544  Mount: /pri2  Index: 13  Order: 1
  Partition: 1000E  Physical: 1000F  FS  Size (KB): 32768  Starting Blk: 515  Private
  Physical: 1000F  Drive Slot: 1  IDE  Size (KB): 97685504  Fixed

Logical Device: 60000  Position: 1  RAID  Size (KB): 291086832  Free (KB): 0  Public  Unmounted  Label: RAID5  Large data protection disk
  Unique Id: 0x179FD7F07A222E0B  Mount: /0  Index: 0  Order: 255
  Partition: 10000  Physical: 10007  R 60000  Size (KB): 97028944  Starting Blk: 81942  Public
  Physical: 10007  Drive Slot: 0  IDE  Size (KB): 97685504  Fixed
  Partition: 10008  Physical: 1000F  R 60000  Size (KB): 97028944  Starting Blk: 81942  Public
  Physical: 1000F  Drive Slot: 1  IDE  Size (KB): 97685504  Fixed
  Partition: 10010  Physical: 10017  R 60000  Size (KB): 97028944  Starting Blk: 81942  Public
  Physical: 10017  Drive Slot: 2  IDE  Size (KB): 97685504  Fixed
  Partition: 10018  Physical: 1001F  R 60000  Size (KB): 97028944  Starting Blk: 94169  Public
  Physical: 1001F  Drive Slot: 3  IDE  Size (KB): 117246464  Fixed

Is this the information you asked about? Otherwise I can run "co de info" tonight. I have done reboots but not a "reset to factory"; instructions please. I had one spare 120 GB drive and dd'd one disk at a time, then put each clone back in the unit. All drives were successfully cloned and are working in the unit. Will SpinRite do a better job and find some errors? I have upgraded the memory to 256 MB, with no change. Is it possible to manually fix this duplicate i-node? I have a backup of some of the content, but not all of it.
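For reference, the dd cloning step mentioned above is just a raw block-for-block copy done on another machine. The Python sketch below shows the same idea, on the assumption that the clone is made on a separate Linux box; SRC and DST are placeholder device paths invented for this example, so double-check them before running anything like this, because writing to the wrong device destroys data.

    import os

    SRC = "/dev/sdb"          # placeholder: the original Snap drive
    DST = "/dev/sdc"          # placeholder: the spare drive (will be overwritten)
    CHUNK = 4 * 1024 * 1024   # copy in 4 MiB pieces

    def raw_clone(src_path, dst_path):
        """Block-for-block copy, the same effect as dd if=src of=dst bs=4M."""
        copied = 0
        with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                dst.write(chunk)
                copied += len(chunk)
            dst.flush()
            os.fsync(dst.fileno())
        return copied

    if __name__ == "__main__":
        total = raw_clone(SRC, DST)
        print(f"Copied {total} bytes from {SRC} to {DST}")

A straight raw copy onto an equal-sized or larger drive keeps the partition layout the Snap expects, which seems to be why the 120 GB spare worked here.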
08-21-2007, 06:03 AM | #4 |
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
|
Re: Nas 4100 Panic
Drive 4 has a different starting point than the rest, and that causes all kinds of problems. It has also been replaced at some point (120 gig, not 100 gig).
What revision MB is this? Check the sticky at the top of the 4100 threads; an unmodified board will also cause all kinds of problems. dd means it could read the data, but it does not do any repair. SpinRite can repair things if it finds them, and your problem, a bad i-node, is the kind of thing SpinRite can repair. Since you have copied all drives to new ones, run SpinRite on your copies, preserving the originals.
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5, 1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5, 1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, Using SATA converts from Andy Link to SnapOS FAQ's http://forums.procooling.com/vbb/showthread.php?t=13820 |
08-21-2007, 12:06 PM | #5 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Re: Nas 4100 Panic
This is the output after a "reset to factory":
08/21/2007 10:18:23 0 I L00 | File System : Opened FDB for device 0x1000E
08/21/2007 10:18:23 0 D SYS | Scheduled ACL Set and Propagate at /pri2/os_private for FDB_ID_13
08/21/2007 10:18:23 0 I L01 | File System Check : Executing fsck /dev/rraid0 /force /fix /fixfatal
08/21/2007 10:18:23 0 D SYS | Propagate on /pri2/os_private: Success - 8 files, 0 dirs; Errors - 0 files, 0 dirs
08/21/2007 10:18:23 0 W L01 | File System Check : partition is NOT clean.
08/21/2007 10:18:23 0 D SYS | Fsck - Using primary superblock
08/21/2007 10:18:24 0 D SYS | 151072288 bytes pre-allocated
08/21/2007 10:18:24 0 D SYS | Memory allocation for i-node cache: 90% of free RAM
08/21/2007 10:18:25 0 D L01 | File System Check : Failed to allocate 109157380 bytes for rcd_backlinks!!!
08/21/2007 10:18:27 0 D SYS | -- Swap-based Fsck --
08/21/2007 10:18:27 0 D SYS | 31091 i-node cache blocks, cache hash table: 12281 entries
08/21/2007 10:18:27 0 D SYS | 256 i-nodes per generic cache block
08/21/2007 10:18:27 0 D SYS | 170 i-nodes per directory cache block
08/21/2007 10:18:27 0 I L01 | File System Check : ** Phase 1 - Check blocks and sizes
08/21/2007 10:18:29 0 D SYS | AFP: Allocated 63 volumes, 16384 files, 256 users
08/21/2007 10:18:29 0 D SYS | AFP: initialization complete
08/21/2007 10:19:13 0 I NET | DHCP: T1 length was 0 is now 180000
08/21/2007 10:19:13 0 I NET | DHCP: T2 length was 0 is now 270000
08/21/2007 10:19:13 0 I NET | DHCP/BOOTP: Setting IP address to 10.10.1.250
08/21/2007 10:19:13 0 D SYS | Update IP...
08/21/2007 10:19:13 0 D SYS | BOOTP: DNS = 15010A0A
08/21/2007 10:19:35 0 D SMB | SMB : Becoming master browser for WORKGROUP
08/21/2007 10:24:12 0 I SYS | System Database : SDB has been written to flash at 2007/08/21 10:24:12.
08/21/2007 10:24:13 0 D SYS | fsd: The SDB is being burned... Complete!
08/21/2007 10:24:14 0 D SYS | fsd: The SDB Shadow is being burned... Complete!
08/21/2007 10:26:31 0 W L01 | File System Check : 30627695 Dup I=6501122
08/21/2007 10:28:36 0 D SYS | Failed to find Credential in list when expected.
08/21/2007 10:36:12 0 I SYS | System Database : SDB has been written to flash at 2007/08/21 10:36:12.
08/21/2007 10:36:13 0 D SYS | fsd: The SDB is being burned... Complete!
08/21/2007 10:36:14 0 D SYS | fsd: The SDB Shadow is being burned... Complete!
08/21/2007 10:37:25 0 I L01 | File System Check : ** Phase 1b - Rescan for more duplicate blocks
08/21/2007 10:45:42 0 W L01 | File System Check : 30627695 Dup I=6501122
08/21/2007 10:45:42 0 I L01 | File System Check : ** Phase 2 - Check pathnames
08/21/2007 10:45:46 0 D SYS | Stack reference = 0x003F55A4
08/21/2007 10:45:46 0 D SYS | Detected a unit with 256M of memory.
08/21/2007 10:45:46 0 D SYS | Dumping 0x7FFFE00 bytes of memory from address 0x0, file offset 0x200 ...
08/21/2007 10:45:46 0 D SYS | Done.

The sticker on the MB says -003.A. Can SpinRite really repair i-nodes on my disks? If so, I will buy the software!
08-21-2007, 04:06 PM | #6 |
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
|
Re: Nas 4100 Panic
SpinRite has in the past, but it's not a sure thing when it comes to Snaps.
-003.A means it has been modified, but you need to check: a few users have found that theirs were not actually modified even though they were marked as done. So you need to confirm both areas. If yours started out as 240 gig or greater, it should have been done.
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5, 1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5, 1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, Using SATA converts from Andy Link to SnapOS FAQ's http://forums.procooling.com/vbb/showthread.php?t=13820 |
08-22-2007, 06:56 AM | #7 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Re: Nas 4100 Panic
I have bought the SpinRite software and run it on the disk, but NO errors were found.
I have confirmed that the two modifications are done. Any more suggestions?
08-22-2007, 07:38 AM | #8 |
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
|
Re: Nas 4100 Panic
Snaps that have repaired things can take over an hour to come online if there is no panic, so if you get no panic, give it some time. Looking at the last log, I did not see the panic after the reset to factory.
You need to run it on all the disks, not just one. Run maintenance mode on all the disks; all it takes is one marker being off on one drive to kill the array. Looking through the log, I'm unable to determine which drive the bad i-node is on. (A small sketch for filtering those fsck warnings out of a saved log follows this post.)
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5, 1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5, 1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, Using SATA converts from Andy Link to SnapOS FAQ's http://forums.procooling.com/vbb/showthread.php?t=13820 |
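To make it easier to track which i-nodes fsck is complaining about between boots, here is a minimal Python sketch that filters the "Dup I=" / "Bad I=" warnings out of a saved copy of the Snap debug log. The file name snaplog.txt is only a placeholder for this example; save the log output to a plain text file first, however you normally capture it.

    import re
    import sys

    # Placeholder file name; pass the real one as the first argument.
    logfile = sys.argv[1] if len(sys.argv) > 1 else "snaplog.txt"

    # Matches the fsck complaints quoted in this thread, e.g.
    #   "File System Check : 30627695 Dup I=6501122"
    #   "File System Check : -853277728 Bad I=15502055"
    pattern = re.compile(r"File System Check\s*:.*\b(Dup|Bad) I=(\d+)")

    counts = {}
    with open(logfile, "r", errors="replace") as fh:
        for line in fh:
            m = pattern.search(line)
            if m:
                kind, inode = m.group(1), int(m.group(2))
                counts[(kind, inode)] = counts.get((kind, inode), 0) + 1

    # One line per offending i-node, with how often fsck flagged it.
    for (kind, inode), n in sorted(counts.items()):
        print(f"{kind} i-node {inode}: flagged {n} time(s)")

Comparing the output from two boots should show whether the same i-nodes are flagged each time or whether the numbers move around, as is reported later in the thread.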
08-22-2007, 02:34 PM | #9 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Re: Nas 4100 Panic
I ran the repair mode in SpinRite on all 4 disks (my bad English in the earlier reply).
After each SpinRite session I remounted the drive in the unit and started it; the unit ran its disk check, but panicked right after "File System Check : ** Phase 2 - Check pathnames". I'll run SpinRite on all disks in maintenance mode as soon as I have rigged a fan to cool the disks off; at the end of the repair mode the disks were near 60 °C.
08-24-2007, 08:12 AM | #10 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Re: Nas 4100 Panic
Now I have run SpinRite in maintenance mode on all 4 disks: no errors.
Still, my unit panics right after "File System Check : ** Phase 2 - Check pathnames". Is there any program/command to remove that duplicated i-node? Or to skip the disk check on mount? When I run "config devices fsck 60000 /fix /fixfatal /altsb" and "config device mount 60000" I get: File System Check : partition is clean. Update FDB 0x60000... but it still panics in Phase 2.
__________________
http://www.futurewave.se |
08-24-2007, 06:55 PM | #11 |
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
|
Re: Nas 4100 Panic
We have seen this many times before; some are lucky and SpinRite is able to correct their problem. If the data is MUST HAVE, send the drives to a recovery service.
Sorry, I have run out of things to try. The problem seems to be with rcd_backlinks; is this a folder on your shares? If so, you may be able to remove that share/folder and get it to boot.

Last-ditch effort: if you make full image files of the drives so you can restore them to their original state, there is only one thing left to try. BUT THIS IS VERY RISKY AND ALL DATA CAN BE LOST. IT IS CRUCIAL THAT YOU HAVE IMAGE FILES OF ALL HDs. I AM NOT RESPONSIBLE FOR DATA LOSS. Start failing one drive at a time and see if the unit will run in degraded mode. My first pick would be drive 4, due to its different starting point. The only way to recover if this fails is that you MUST RESTORE THE OTHER 3 DRIVES.

This seems to happen every time someone overfills the drives. I wish I had a solution for you.
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5, 1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5, 1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, Using SATA converts from Andy Link to SnapOS FAQ's http://forums.procooling.com/vbb/showthread.php?t=13820 |
08-25-2007, 04:55 AM | #12 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Re: Nas 4100 Panic
rcd_backlinks could be a file on my unit, but I don't know.
Is it possible to make images of the drives to files with dd and be 110% sure that I can restore them back to the disks? Why do I ask? I have plenty of storage, but only one 120 GB drive to spare.
__________________
http://www.futurewave.se |
08-25-2007, 07:19 PM | #13 |
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
|
Re: Nas 4100 Panic
Since we use a RAW copy, my guess is it will only work if you can do a RAW copy to a file. I have not done a complete copy to a file, and the answer may well be NO. Normal cloning interprets the drive data but will not copy the MBR back when restoring a drive; the MBR must be done as a separate step. That is because the values in the MBR change when the drive size changes. With a RAW copy, the drives must be identical in size. There should be something that works, but now is not the time to be experimenting.
If you decide to try it, I believe there is a verify cmd that can be used. (One way of doing the image-and-verify outside the Snap is sketched after this post.)
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5, 1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5, 1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, Using SATA converts from Andy Link to SnapOS FAQ's http://forums.procooling.com/vbb/showthread.php?t=13820 |
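On the question of imaging the drives to files and verifying them: the sketch below is only one illustration of a raw image-to-file copy with a checksum check afterwards, done on a separate Linux machine rather than with anything Snap-specific. The device and image paths are placeholders made up for this example, and this is a sketch of the idea, not a tested recovery procedure.

    import hashlib

    SRC = "/dev/sdb"                 # placeholder: drive to image
    IMG = "/mnt/backup/disk0.img"    # placeholder: destination image file
    CHUNK = 4 * 1024 * 1024          # read/write in 4 MiB pieces

    def image_and_hash(src_path, img_path):
        """Raw-copy the drive into an image file, hashing the data as it is read."""
        h = hashlib.sha256()
        with open(src_path, "rb") as src, open(img_path, "wb") as img:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                img.write(chunk)
                h.update(chunk)
        return h.hexdigest()

    def sha256_of(path):
        """SHA-256 of an existing file or device, for the verify step."""
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            while True:
                chunk = fh.read(CHUNK)
                if not chunk:
                    break
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        written = image_and_hash(SRC, IMG)
        stored = sha256_of(IMG)
        print("image verified" if written == stored else "MISMATCH - do not trust this image")

Restoring would be the same raw copy in the other direction onto an identically sized drive, and re-hashing the drive after the restore gives the same check in reverse; that lines up with the point above that a RAW copy needs drives of identical size.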
09-02-2007, 02:43 PM | #14 |
Cooling Neophyte
Join Date: Jul 2007
Location: Sweden
Posts: 8
|
Re: Nas 4100 Panic
I bought a 500 GB disk and made copies of all the disks; everything went well. I then ran fdisk on disk3 (slot 4) to remove its data.
On the next disk check the unit crashed, but it stayed alive; however, no RAID5 (60000) came online. I made disk3 a spare, and the resynchronize went well, 100%. An "info device" now shows:

Logical Device: 60000  Position: 1  RAID  Size (KB): 291086832  Free (KB): 0  Public  Unmounted  Label: RAID5  Large data protection disk
  Unique Id: 0x179FD7F07A222E0B  Mount: /0  Index: 0  Order: 255
  Partition: 10000  Physical: 10007  R 60000  Size (KB): 97028944  Starting Blk: 81942  Public
  Physical: 10007  Drive Slot: 0  IDE  Size (KB): 97685504  Fixed
  Partition: 10008  Physical: 1000F  R 60000  Size (KB): 97028944  Starting Blk: 81942  Public
  Physical: 1000F  Drive Slot: 1  IDE  Size (KB): 97685504  Fixed
  Partition: 10010  Physical: 10017  R 60000  Size (KB): 97028944  Starting Blk: 81942  Public
  Physical: 10017  Drive Slot: 2  IDE  Size (KB): 97685504  Fixed
  Partition: 10018  Physical: 1001F  R 60000  Size (KB): 97028944  Starting Blk: 110471  Public
  Physical: 1001F  Drive Slot: 3  IDE  Size (KB): 117246464  Fixed

So drive 4 now has a different starting block again, still not the same as the others. An "info log T" shows:

09/02/2007 13:27:32 13 W L01 | File System Check : -853277728 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : -287794079 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : -1564659590 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : 1756769933 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : 82218016 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : -1150483072 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : 1026962167 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : -441465043 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : 144775390 Bad I=15502055
09/02/2007 13:27:32 13 W L01 | File System Check : Excessive bad blks I=15502055 (Skipping)
09/02/2007 13:27:32 13 W L01 | File System Check : 1773729310 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : 997438886 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : -1738855517 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : 1004617867 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : 1134175325 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : -30598314 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : 1795435894 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : 65392884 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : -1921967036 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : 968070673 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : -1340734065 Bad I=15502058
09/02/2007 13:27:32 13 W L01 | File System Check : Excessive bad blks I=15502058 (Skipping)
09/02/2007 13:27:32 13 W L01 | File System Check : ACL i-node 15502059: bad size: -827165116
09/02/2007 13:27:32 13 W L01 | File System Check : 639768228 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : -1745661366 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : -811690114 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : 207781844 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : -1965266637 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : -918168199 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : -229380607 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : 1315174787 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : 1774512583 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : 308796889 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : 1844649938 Bad I=15502059
09/02/2007 13:27:32 13 W L01 | File System Check : Excessive bad blks I=15502059 (Skipping)
09/02/2007 13:27:32 13 W L01 | File System Check : 1166672694 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : 515319961 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : 1541279875 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : -1031498671 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : 307001954 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : -255184727 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : -1600122368 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : -1194920832 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : 1361719748 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : 1558813932 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : 201836336 Bad I=15502072
09/02/2007 13:27:32 13 W L01 | File System Check : Excessive bad blks I=15502072 (Skipping)

But it still crashes at about 25%, in Phase 2. The bad I= values change if I restart the unit; if I restart over and over, is it possible to recover some data? Or should I restore the disks and wipe a different disk instead?
__________________
http://www.futurewave.se |
09-03-2007, 06:46 AM | #15 | |
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
|
Re: Nas 4100 Panic
It looks like the HD you replaced was not the one, since it still failed after the resync. I did not want a replacement drive installed. Now re-image all HDs (or maybe just the one that was replaced) and remove a different HD. If it reports a broken RAID5, you will have to re-image all drives back to their original state. I would not put in the replacement drive; I want it to run in degraded mode. That way you are only dealing with one variable at a time. Besides, when you get to drive 1 it must be identical; an oversize drive will not work.
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5, 1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5, 1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, Using SATA converts from Andy Link to SnapOS FAQ's http://forums.procooling.com/vbb/showthread.php?t=13820 |