#301
Cooling Neophyte
Join Date: Sep 2006
Location: Seattle
Posts: 9
Cool. I found the .sup that was retrieved while the box was still under its support agreement. So next I will see if I can bump it to the full 256MB SIMM, upgrade the OS from the admin page, and format the RAID 5. Will let you know the results ASAP.
#302
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
Quote:
#303
Cooling Neophyte
Join Date: Aug 2006
Location: central US
Posts: 67
Quote:
It took about 24-26 hours to build the RAID, but when it was done I had just under 900GB of RAID 5. I am having some permissions problems, but that's probably my ignorance. I'll learn it. I copied about 100GB to it with no problems at all. It just looks like one huge disk.

Oh, there was one thing... I seem to be missing the pictures in the help section. I don't know why, or which .sup file they come from. Not that big of a deal.
#304
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
Quote:
#305
Cooling Neophyte
Join Date: Aug 2006
Location: central US
Posts: 67
Quote:
#306
Cooling Neophyte
Join Date: Sep 2006
Location: Shanghai
Posts: 4
I need a copy of 3.4.805 for my Snap 2000 also. Thanks in advance.
#307
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
Radio,
Quote:
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5
1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5
1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, using SATA converters from Andy
Link to SnapOS FAQ's: http://forums.procooling.com/vbb/showthread.php?t=13820
#308
Cooling Neophyte
Join Date: Aug 2006
Location: Minnesota
Posts: 52
Can someone help me out? I have a Snap 4000 with ver 3.4.803. Can someone please send me a copy of 3.4.805? eschwa@charter.net. Thanks.
#309
Cooling Neophyte
Join Date: Sep 2006
Location: Seattle
Posts: 9
All the single drives reported OK initially, but on creation of the RAID5 config, the error log reports (for the RAID5 portion):
E  File System Check : FSCK fatal error = 12  Disk 60000  9/15/2006 5:23:36 PM
E  File System Check : No valid file system - format device.  Disk 60000  9/15/2006 5:23:36 PM
W  File System : 1 member(s) missing in logical device (original ID: 60000)  Disk 60000  9/15/2006 5:23:36 PM
E  File System : Logical set member 0 not found. Original device ID: 60000  Disk 60000  9/15/2006 5:23:36 PM
W  File System : Device 0x00010008 busy - abort devSet  Disk 60000  9/15/2006 4:08:32 PM
W  File System : Device 0x00010008 busy - abort devSet  Disk 60000  9/15/2006 4:07:37 PM

The server was apparently automatically validating/formatting after installing the new drives when I powered it up. So when I tried to execute the "co de format 10000 /reinit" (and other devices) it didn't allow this, so I waited for the formatting to complete and then created a RAID 5 configuration. That's when it reported the errors above.

So now I've removed the RAID 5 configuration (since maybe I should have formatted even though it already had succeeded at that?) and after this is removed, I will try to execute the /reinit command again.
#310
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
Remove the drives, connect them to a PC, and use SeaTools to check the SMART data. These being the new .10s, they have no track record. It's hard to conceive that Seagate would release anything that was not ready, and most have higher QC at the start of a new production run. SeaTools may have a utility to reduce the HD size.

I still think they are too big for the Snap OS/HW. It should take no more than ~8 hrs to build an array that size, with 4x250gig reportedly taking 5-6 hrs (quick scaling math below). Phoenix, do you think it may be a problem with v3? Would v4 be in order?
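That ~8 hr figure is just linear scaling of the reported 250gig build times, nothing measured on a 320gig set:

Code:
# Rough linear scaling of reported RAID 5 build times
# (assumes build time scales with capacity, which is only a rough guess).
hours_for_250 = (5, 6)        # reported range for 4 x 250gig drives
scale = 320 / 250
print([round(h * scale, 1) for h in hours_for_250])   # -> [6.4, 7.7] hours for 4 x 320gig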
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5
1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5
1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, using SATA converters from Andy
Link to SnapOS FAQ's: http://forums.procooling.com/vbb/showthread.php?t=13820
#311
Cooling Neophyte
Join Date: Sep 2006
Location: Seattle
Posts: 9
I noticed these same errors when I initially upgraded the OS with the 30GB drives (which were already working). So when I saw the errors for the 320GB Seagates I didn't lose hope. Anyway, when I removed the RAID5 configuration, I noticed that each single drive had 7 files on it after I let the auto-validation/format complete.
Then... I clicked the Help link in the Admin, but it wasn't available at all. I wondered what other parts of the OS might be missing (since it probably installs to a hidden system folder on the hard drive, eh?). So I re-installed the SnapOS from the Admin console. This caused the OS to be installed and each drive to have 11 files (including help, etc.), and apparently included something that allowed the "Disk 60000" reported in the error (probably actually a reference to the RAID controller?) to identify the file system using a driver that exists in the system folder, which was now installed on the hard drives.

Re-upgrading the OS rebuilt the drives again (approx 10 minutes per drive). Then I ran the /reinit on drive 4 just for kicks, which left it with 7 files while the others still had their 11.

Anyway, I then clicked "Create RAID5 Configuration" in the Admin and it completed successfully in just under 2 hours. It reports 898,211MB and is rebuilding the "backup disk" (which seems to take quite a while, as it is 1% complete after 20 minutes).
#312
Cooling Neophyte
Join Date: Sep 2006
Location: Seattle
Posts: 9
Here are my results. Some useful steps and a couple of errors on my part that might reveal inner workings.
HD Upgrade: 4x30GB Quantum Fireball ==> 4x320GB 7200.10 ATA-100 Seagate
OS Upgrade: 3.1.608 ==> 3.4.807
==========

1. On version 3.1.608 there was no admin option available to upgrade the OS, so I used the command line tool called OSUpgrade.exe. To get OSUpgrade.exe, I ran the SnapOS 4.0.860 PATCH executable (the download available publicly on Adaptec's knowledgebase). This extracted files to a folder; one of them was UTIL_ZIP.exe. I ran the extracted UTIL_ZIP.exe to unzip the OSUpgrade.exe utility and copied it to c:\snap.

2. Opened a DOS prompt in Win2k ("Start/Run/cmd", CD \snap).

3. Entered the command ---> osupdate <snap-server-name> <my_password> "c:\snap\snapos_34807.sup"

4. I noticed failures were reported after installation of the OS (fsck error=12), even though the Snap still contained the 30GB drives that were working fine with the previous OS. I knew this was not likely a file system problem with the hard drives, but considered that it was perhaps from a corrupted OS installation or a bad .sup file (stay tuned).

5. I installed the new 7200.10 Seagate drives (being careful to set one drive as master and the other as slave on each IDE channel, just like the 30GB drives had been set).

6. Format the drives.

6a. I powered on the Snap server. NOTE: My Snap server began validating the newly inserted drives, so access to the Snap UI takes a bit and formatting/validation has to complete first. Skip 6b if your Snap server appears to begin formatting the drives.

6b. If your Snap server does not begin validation, perhaps due to a configuration setting (e.g., being set to auto-repair disk errors on startup), you may need to manually format the drives. If so, navigate to http://<your-snap-server-name>/config/debug and run the command

co de info

Then use the four devices in the results that have the label "Single disk" and, for each, execute a reinit. For example:

co de format 10000 /reinit
co de format 10008 /reinit
co de format 10010 /reinit
co de format 10018 /reinit

7. During this process I saw the same fsck error (file system not found - format) from step 4. This is when I realized that my Snap server was already validating/formatting. Each 320GB drive took approx 10 minutes to format. FYI - whether I attempted the short or long version of the first command (10000 /reinit) I got an error (but now figure that it occurs if the referenced device is already busy).

--DO NOT CREATE A RAID 5 CONFIGURATION YET--

7a. When it finished formatting, I WRONGLY selected Disk Utilities / Configure Disks / Create a Disk Configuration / RAID 5 / check-marked each of the 4 drives / clicked Next. After this finished, it reported a single 320GB drive AND a "large partition RAID5" of 910GB. And the RAID5 icon was red, with a report of the same fsck error=12, this time on the "RAID" drive.

7b. I removed the RAID5 configuration, which began a reformat of all drives (completed successfully; all sizes are reported again at 320GB).

--CONTINUE--

FYI - I believe the fsck error I got was caused by the OS being unable to install parts of itself on my new drives (especially seeing as they didn't even exist when I'd run the OS upgrade in step 3).

8. I flailed around a bit before figuring this out (though it is QUITE sufficiently covered in this forum :) ). But at this point, I just needed to re-install the SnapOS, this time using the cool new Admin option in 3.4.807 available at: http://<your-snap-server-name>/config?Func=OSUpdateSend

9. The OS upgrade window said to wait for a restart and was kinda weird when the page timed out, but I followed instructions and didn't close the browser or navigate anywhere, just refreshed the page. It finally reported success and I clicked OK.

10. I noticed that help pages were now available and realized that the files must get stored in a hidden system folder on the drive(s).

11. For kicks, I ran

co de format 10018 /reinit

which completed successfully but left the drive with 7 files instead of its original 11.

12. When it finished formatting, I selected Disk Utilities / Configure Disks / Create a Disk Configuration / RAID 5 / check-marked each of the 4 drives / clicked Next. After this finished, it reported a single "large partition RAID5" of 910GB. And the RAID5 icon is GREEN, with no reports of errors. It is now 6% complete with "Rebuilding the backup disk", which probably is normal for RAID5 performance, or partly because I left the 128MB DRAM in there... hmm..

HATS OFF TO PHOENIX, KLOCKWORK, BLUE, AND KLINGLER WOOT!
#313
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
David, DC4,
As I alluded to in my previous message, while the OS (for the most part) is in flash RAM for the 4000, there is "part" of it being stored in a hidden directory on the disk(s). I am not sure if it is on all drives or just the first drive. Speculations, David? Anywise, as near as I could tell for 100% sure, it "looked" like just help type files in this hidden directory on the drive, but I have always been suspect that there is more of the OS there than just this, with no way to verify for sure (for me at least). I suspect some of his problems were from not having the whole OS available, but when he got the OS installed fully, it made correction possible.

Next, another one of my "speculations" that I cannot prove. I suspect the format on the drives, as you hinted at David, can be different from one version of the OS to another, at least with the RAID configurations. Now I do not mean just from v2 to v3 or v3 to v4, I mean even within main versions, say v3.3.x to v3.4.x. After reading a zillion messages here on the forum, it seems we get weird, never explained problems, particularly with the RAID configs, that magically go away when an OS change is done and the drives get reinitialized/formatted. Thinking about it, the Linux/Unix that the SNAP OS is "based" on does not support the type of RAID configurations and boundaries we see with the SNAPs, at least to my knowledge (I could be wrong). This leads me to speculate that Meridian/Quantum/Adaptec wrote their own RAID routines into the OS, and to believe that some of those "unknown" changes from one version to another may also contain changes in these "proprietary" portions of the OS. As I said, I can't prove it, but it sure seems funny to me how many RAID issues we see here and there happen after OS changes (even small ones), and then magically disappear after the drives get a reinit/format.

Now before you go off and think I am a nut, think on this a moment. The 4000 and some of the 2x00 units operate on a software RAID. This means it is all in the OS. Do you really think a version change in the OS might not affect this? Next, many of the OS changes around here are to change from a non-LBA48 OS to an LBA48-supported OS. Ya think maybe messing with the LBA48 stuff just might have an effect on how a drive is formatted and/or how a RAID might work on it? Just some things to think about, nothing I can prove.

DC4, I would absolutely love it if you would be the test guy for us since you have the drives. Can you provide the following information please:

1. Model number of your 4000? Specifically if it ends with -01, -02, -03, or -04 (David, I still suspect there are some differences in the revisions).
2. List the basics of your 4000 (OS, Hardware, BIOS)?
3. The exact model hard drives you are using (Seagate STxxxxxx).
4. The time it took for the RAID 5 to build...

Then do a little experimenting for us/me (you may save me a lot of money I don't have to spend). After the RAID is built, tell us the specs on the drives:

1. Actual formatted capacity (each)
2. Actual RAID 5 capacity

Then store a crapload of data on the array. Not just a little tiny bit, I mean like a couple hundred gigs (at least 100GB minimum). Then, after all this test data is on there, unplug one of the drives (unit powered down for safety to hardware). Check the logs and see what it actually reports after you power it back on. Let's see if it reports properly. Then, wipe that drive you removed from the array (put it in a PC and low level it). If you do not have a utility, let me know, but I am guessing Seagate has an init utility on their website that will do just fine. Then put the drive back in, and let the SNAP reinitialize/format the drive. Report how long it took for this format to take place. Then when it is done, try to put the drive back into the array and let it rebuild (let's see if it rebuilds the array properly). Then report how long it took for this rebuild of the array (and let us know how much test data you used).

In short form:

1. Store test data (report amount of test data used)
2. Remove a drive from the array (power the SNAP down first)
3. Check SNAP logs to see if it reports the loss properly (and let us know too)
4. Wipe (low level format) the drive you removed from the array (on a PC)
5. Reinstall the drive into the SNAP and let it initialize (report time to initialize)
6. Place the drive back into the RAID 5 array and let it rebuild (report time to rebuild)

DC4, I know this is a lot of effort, but you will probably be saving me some money, helping us out with needed information, and even helping yourself as well in the process. Think about it this way.... You have your SNAP working now, but.... If/when a drive fails down the road some day, wouldn't you like to know for sure you are going to be able to replace that drive and rebuild the array without losing your real, and probably valuable, data in the process? I know I would, and there has been some question about this with these larger hard drives...

David, I think I answered your question(s) here, but if not, say so. But, I do not think you need to go above 3.4.805 to use these 320GB drives. Version 4 may have some other goodies we do not know of (more efficient, faster, better recovery, etc.), but I doubt it is required. I wish we knew what the changes were between 3.4.805 and .807, as well as the changes between 3.4.80x and 4.0.860 beyond the MS AD stuff. I bet there are some little tweaks and fixes, but with its extra memory usage, I just don't see it worth doing without knowing for sure. It would have been nice if we could try it, and go backwards in OS if we wanted to go back, but alas, no can do, or so I have heard. I also wonder if the 4.0.860 would allow us to go past the 1.2TB limit we know of (keep in mind, so far all the actual people trying it were using 3.4.80x).

I do think if DC4 follows this, we may in fact get some of our questions answered and be helpful down the road. As a side note, I do not think the hardware is making any difference here on the limits or in the problems we are seeing/saw here. I am however convinced that if you are going to use RAID, at least RAID 5, and you change the OS or the hard drives in a 4000 (and maybe some 2x00 units), you need to reinstall your OS and reinitialize the drives afterwards.

There is my whole nickel's worth...
#314
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
And you thought I was the only one writing books ...
I have discovered in many cases there is a problem with XP and the older versions of Assist. I think that some of the files in the OS do not work with XP; Win98 is required. In most cases the manual says to use Assist, and you used a utility that came with v4 to install a v3 OS. Is it possible that it was designed only for the patch?

When you fail the test drive, make it DRIVE 1. This is the critical one. With a 2x00 model in RAID 1 we have an advantage, in that the data set is complete with no speed change. We can always move the drive to another position. If it is in the 10000 position we can take it out of RAID with no data loss.
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5
1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5
1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, using SATA converters from Andy
Link to SnapOS FAQ's: http://forums.procooling.com/vbb/showthread.php?t=13820

Last edited by blue68f100; 09-16-2006 at 10:55 AM.
#315
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
Has anyone confirmed whether the 4100 is using a hardware RAID controller, or dedicated controllers with software RAID?
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5
1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5
1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, using SATA converters from Andy
Link to SnapOS FAQ's: http://forums.procooling.com/vbb/showthread.php?t=13820
#316
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
Quote:

Also, you didn't answer my question:

Quote:
#317
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
Quote:
For someone that has a 4100: look at the main chips on the circuit board, get the numbers, and then look them up with some internet searches. This will tell you for sure... In most (not all) cases, the chip doing the XOR operations for RAID 5 is a larger, CPU-type chip. It is usually just that, a general CPU or a dedicated XOR CPU chip. Thus, not hard to find or look up... Without this dedicated XOR chip, it cannot be doing hardware RAID 3, 5, 6, or any RAID using parity with XOR calculations.

Now for the catch-22... All of this is a matter of perception. In a workstation or server or whatever type PC where we refer to hardware and software RAID, what is being talked about is whether the controller does the calculations and work for the RAID, or whether the calculations and work for the RAID are being offloaded (at least in part) to the PC memory and processor through software (the driver). There is some stuff in here for the other RAID modes, but where it becomes serious is with RAID modes that use parity XOR calculations (RAID 3, 5, 6, etc). Depending on the number, speed, and size of the drives being used, this can be a lot of CPU load. So, in a PC, a "hardware" RAID is preferred so as not to drag down the PC, plus the "hardware" RAID is usually faster as well (and more reliable). Fact is though, this "hardware" RAID is still doing it in software to a minor extent, just not in a driver, but rather in firmware on the controller board, sort of. This is not exact, but this description keeps it simple.

Now in the case of a SNAP (or some other type NAS units), the unit is not doing much of anything else other than maybe some web interface stuff and the like. The CPU on the SNAP/NAS motherboard is not really being used for anything other than this minor web interface stuff and the RAID work. The software RAID driver is built into the OS, and so, in a way, the SNAP motherboard is like a hardware RAID controller board. Maybe not quite as efficient, but very similar. I guess a way to look at it is that the SNAP controller boards are a hybrid hardware RAID, sort of. In other words, it may all be a moot point, so long as the software (OS) is effective, efficient, and reliable.

The only real concern is the CPU and memory on board. Some of these units are getting along in years, made back when the largest hard drives around were like 30 GB and less. The CPU and memory expansion abilities may not be enough to do the heavy lifting for some of the much larger arrays we can create nowadays with much larger and faster drives. But then, with the limitations we are finding in the OS, well....
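For anyone who has not seen the parity math spelled out, here is a tiny Python sketch of the XOR idea. It is purely an illustration of why RAID 5 needs a CPU or a dedicated XOR chip to do the work, not anything taken from the Snap OS itself:

Code:
# Toy illustration of RAID 5 parity: parity block = XOR of the data blocks.
import os

d0, d1, d2 = (os.urandom(16) for _ in range(3))      # blocks from three member drives
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Pretend drive 1 died: rebuild its block from the survivors plus parity.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
assert rebuilt == d1
print("lost block rebuilt from parity")

Every byte written means redoing that XOR across the stripe, which is exactly the load that either the main CPU (software RAID) or a dedicated XOR chip (hardware RAID) has to carry.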
Last edited by Phoenix32; 09-16-2006 at 05:30 PM.
#318
Cooling Neophyte
Join Date: Sep 2006
Location: Seattle
Posts: 9
It will take some time to perform the data destruction test, but I agree that this is worth the effort if it will help. And after all, the whole point of building a RAID5 is to have certainty that your test scenario results in 0 data loss. So I should place a large amount of data on the RAID and then remove drive 0 (device 10000), correct? I suppose this will be the physical drive located on IDE channel 0 "master"?

Before I do this, let me relay the answers to your other questions:

1. Model: 4000-2 (4 drive IDE: "Laser")
2. OS: 3.4.807 (US), Hardware: 2.0.1, BIOS: 2.0.252, 128MB RAM (bumped from the installed 64MB)
3. Hard Drives: 4x ST3320620A 320GB 7200.10 Seagate Barracuda 7200 RPM IDE Ultra ATA100. These are $94.99 each at http://www.newegg.com/Product/Produc...82E16822148139
4. Ready to use in approx 1 hr 50 minutes (drives formatted). RAID5 completed in 23 hours (built the RAID5 backup disk).

Only catch: when a person installs fresh hard drives and doesn't have an OS that supports the larger size drives, they must install the OS once to allow formatting the drives... and install it again (a second upgrade of the same version) to allow the OS to put those files Phoenix mentioned on the drive(s).

1. Debug info reports each drive is formatted with 312570880 KB (298GB if we divide the KB by 1024 to get MB, and by another 1024 to get GB)
2. Debug info reports the RAID5 "large data protection disk" is at 919768520 KB (894GB)
2a. Interestingly, View Disk Status reports the RAID5 - Large data protection disk at Total<MB>=898,211 Free<MB>=897,313, which seems to be lower (876GB) (quick math check at the end of this post)

** So I will let you know if this actually has the capability to allow a person to store a single 800GB video stream... where even if one of the drives blows up, it can still show the entire video without a hiccup.
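Here is the unit math in Python for anyone double-checking those figures (just 1024-step conversions of the numbers above, nothing Snap-specific):

Code:
# Convert the sizes the Snap reports (the KB/MB figures are binary, 1024-style units).
per_drive_kb = 312_570_880      # single-disk size from "co de info"
raid_kb      = 919_768_520      # RAID 5 size from "co de info"
raid_mb      = 898_211          # Total<MB> from View Disk Status

print(per_drive_kb / 1024**2)        # ~298.1 GB per drive
print(raid_kb / 1024**2)             # ~877.2 GB usable RAID 5
print(raid_mb / 1024)                # ~877.2 GB -- same size, just reported in MB
print(3 * per_drive_kb / 1024**2)    # ~894.3 GB = naive 3-of-4-drives expectation

So the two RAID figures from debug info and View Disk Status are actually the same size in different units; the gap against the naive 3 x 298GB number is presumably overhead/reserved space, though that part is a guess on my part.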
#319
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
Quote:
Quote:
Quote:
-04 model of the 4000
v3.4.805 OS (will probably go up to 3.4.807 for any possible bug fixes)
Hardware 2.0.1
BIOS 2.0.282
256MB SDRAM

Quote:
Quote:
Quote:
Quote:
THANK YOU for taking the time to test this all out! It should help you, me, and everyone else down the line when these types of questions come up. It sure helps to know that the damn thing is going to work as it should with these larger drives. I am still puzzled by the amount of time it is taking to create the RAID array. David? Anyone? Ideas?
#320
Cooling Savant
Join Date: Aug 2004
Location: UK
Posts: 909
I always thought it was 1024 every time.....
http://en.wikipedia.org/w/index.php?title=Byte

It's just that with it being base 10, it doesn't come out at 1024..... But using 1024 only gives a 2.4% error margin.
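A quick check of that error margin, using the drives in this thread as the example (my numbers, not anything from the Snap):

Code:
# Difference between the drive label (base 10) and 1024-based reporting.
labeled_gb = 320                          # "320 GB" on the drive label = 320e9 bytes
binary_gb  = labeled_gb * 1e9 / 2**30     # same bytes counted in 1024 steps
print(round(binary_gb, 1))                # ~298.0, which is what the Snap reports per drive
print(round(1.024**3 - 1, 3))             # 0.074 -> the 2.4% per step compounds to ~7.4% at GB scale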
__________________
Snap Server Help Wiki - http://wiki.procooling.com/index.php/Snap_Server
Snap Server 2200 v3.4.807 2x 250GB Seagate Barracuda 7200.9 w/ UNIDFC601512M Replacement Fan
"Did you really think it would be that easy??"
Other NAS's:
1x NSLU2 w/ 512mb Corsair Flash Voyager Running Unslung 6.8b
1x NSLU2 w/ 8Gb LaCie Carte Orange Running Debian/NSLU2 Stable 4.0r0
250GB LaCie Ethernet Disk Running Windows XP Embedded
#321
Cooling Neophyte
Join Date: Aug 2006
Location: central US
Posts: 67
I'm posting this here because it seems it's where everyone is looking.
I have successfully built a RAID 5 with 4x 320GB disks, but it will not rebuild after an error (power outage); detailed description here: http://forums.procooling.com/vbb/showthread.php?t=13488

My best guess is that I need more than 64MB because of the large disk size. Can anyone offer more than a hunch? Is there any data/evidence available on this? Does anyone have any experience with this error message:

File System Check : FSCK fatal error = 8  Disk 60000 RAID 5  9/17/2006 2:00:45 AM
File System Check : Failed to allocate 10930276 bytes for update bitmap!!!  Disk 60000
#322
Thermophile
Join Date: Jul 2005
Location: Plano, TX
Posts: 3,135
Your problems are all related to the 300+ gig drives.

64 meg is not enough RAM for sure; min 128, preferred 256 meg PC100/133 DIMMs would be in order. I feel it will make it run faster, but I do not have any faith in it correcting the problem.
__________________
1 Snap 4500 - 1.0T (4 x 250gig WD2500SB RE), Raid5
1 Snap 4500 - 1.6T (4 x 400gig Seagates), Raid5
1 Snap 4200 - 4.0T (4 x 2gig Seagates), Raid5, using SATA converters from Andy
Link to SnapOS FAQ's: http://forums.procooling.com/vbb/showthread.php?t=13820
#323
Cooling Neophyte
Join Date: Aug 2006
Location: central US
Posts: 67
Quote:
I'm not sure I understand what you mean. Are you saying you can't use drives over 300GB? Or are you saying that the problem is low memory because of a large drive? Is this from experience or just a guess? (I ask because I see all sorts of guesses passed off as fact: "certain drives don't work well", "can't back up to an older OS", etc.)

Why is 64M not enough? If putting more in doesn't solve the problem, then 64M may very well be just fine. I'm not worried about speed. Is there any evidence or past experience that would suggest problems rebuilding with 64M? Without experience, all I have to go on is the error message, and the word "bitmap" and the phrase "can not allocate". This all suggests cramped memory to me (some rough numbers at the end of this post).

Hummmm, makes me wonder though, are these machines using virtual memory? If it can't use the disk, it may be limited to physical RAM. Also, how will this rebuild if it was in the middle of writing to 4 disks when the power failed? All of a sudden the RAID 5 doesn't seem so safe.
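Some very rough numbers on that failed allocation (pure arithmetic; I don't actually know the Snap's bitmap layout or allocator, so treat this as a back-of-envelope guess):

Code:
# Back-of-envelope on "Failed to allocate 10930276 bytes for update bitmap".
alloc_bytes = 10_930_276        # from the error log
raid_kb     = 919_768_520       # RAID 5 size reported earlier in the thread for the same 4x320GB layout

print(alloc_bytes / 1024**2)               # ~10.4 MB requested in one chunk
print(raid_kb * 1024 / (alloc_bytes * 8))  # ~10771 bytes (~10.5 KB) of array per bitmap bit, IF it is one bit per region

Asking for a single ~10 MB chunk on a 64 MB box that is also running the OS and caches at least makes the cramped-memory theory plausible, even if it doesn't prove it.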
#324
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
Quote:
1. It is possible that 64 megs of RAM is just not enough for an array of that size in a SNAP OS environment, and more so because, if I remember right, you are using the v4 OS (the memory hog of the SNAP OSes). I do admit I am doubtful here, but it is possible. RAM size should make a difference as to how fast it works, initializes, rebuilds, etc., but it should not affect whether it can or cannot do the array. In most OS environments, if the data being used was too large for the available memory, it would just parse it down into enough chunks to fit. It would slow it down dramatically, but not stop it from working. In the SNAP OS, who knows? So back to it being the possible problem, but....

2. As speculated in one of my earlier posts, there is the issue of how the different SNAP OSes initialize a drive. I do not recall your original steps in making this all work, but is it possible your drives got initialized on one SNAP OS and you are now trying to rebuild the array in a different OS? Not to mention what OS the RAID 5 was built on? So who knows, maybe?

3. Then there is the issue of SNAP OS limits, which we have been working on for a while now. Does 4x320GB really work? Is it too much? Sure, it "seems" to format and build fine. But what about this extraordinary amount of time it seems to take building the array? And will it actually rebuild the array? Maybe, just maybe, even though it "seems" to work, 4x320GB might just be a little too much, just over the limit of what the OS can actually handle? Part of what we are trying to determine here with DC4.

4. Maybe you have a hardware problem. Maybe a couple of drives got scrambled data, not just one. Remember, you can only lose one drive in the array and do a recovery. Now I do not mean drive FAILURE, I mean just scrambled data from the power outage you had...

Best I can suggest is to look at and investigate these things, watch and see what happens with the test DC4 is doing, and go from there.... I know it ain't much, but that's all I got....
#325
Thermophile
Join Date: May 2006
Location: Yakima, WA
Posts: 1,282
Quote: