05-05-2010, 05:51 AM   #13
HarryC
Cooling Neophyte
 
Join Date: Apr 2010
Location: phelps
Posts: 21
Default Re: Large >1TB drives in GOS unit

Enough about who's the expert. This discussion should be about what has been tested and actually works, and what doesn't.
As I have said previously, we now have three 550s undergoing reliability testing (a few weeks, not long term). I agree long-term testing would be great, but do you want me to report the results in two years? I will, but most would like some results much sooner.
I just checked one 550:
Guardian OS 4.3.007
BIOS Adapt108
Memory 1024 MB
CPU speed 2394 MHz
Drives are reported as 976GB WDC-WD1001FALS
Ethernet Bonding is set to Load Balance
Dedicated switched Gigabit to servers
The iSCSI throughput isn't bad, but it could be better. I have additional memory for the 550s, which will probably improve their performance; that will be the next stage of testing. The host machines, DL380 G5s with two quad-core 3.15s, have NICs that can offload iSCSI in hardware, but it is somewhat unclear whether VMware 4.x uses that or does it in software. Does anyone know for sure? The VMware build is HP's version, supposedly optimized for their systems. I'll post performance figures when initial testing is done and I've tried the extra memory. While the performance isn't bad, it isn't near what our Promise arrays do on direct U320 SCSI channels. Obviously gigabit iSCSI physically can't run that fast, but it is so flexible for a SAN.
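So we're comparing apples to apples when people post numbers: here is a minimal sketch of the kind of sequential-write test I mean, in Python. This is my own crude tool, not anything from Snap or HP, and the target path in the usage comment is hypothetical — point it at a file on the iSCSI-backed volume you want to measure.

```python
import os
import time

def write_throughput(path, total_mb=256, block_kb=1024):
    """Sequentially write total_mb megabytes in block_kb-sized blocks
    and return throughput in MB/s. Crude, but enough for comparing
    one setup against another."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out of the page cache
    elapsed = time.time() - start
    os.remove(path)           # clean up the test file
    return total_mb / elapsed

# Example usage (the mount point below is hypothetical):
# print("%.1f MB/s" % write_throughput("/mnt/iscsi_lun/testfile"))
```

The fsync matters: without it you mostly measure the host's RAM, not the array.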
Who else has tried large drives in RAID arrays? What we can use is info on drive models, the kind of RAID array (hardware controller or software RAID), the number of drives used, and hot spares. I also like to yank a running drive to see how fast the rebuild is; some are very fast, while I've had other setups take 40+ hours to recover a 1TB drive.
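For the software-RAID (Linux md) case, rebuild progress shows up in /proc/mdstat, so it's easy to log how long that 1TB recovery actually takes. Here's a small sketch that pulls the percentage and estimated finish time out of that text — my own helper, assuming the usual md recovery line format, nothing GOS-specific:

```python
import re

def rebuild_progress(mdstat_text):
    """Parse /proc/mdstat output and return (percent_done, finish_minutes)
    for an in-progress recovery, or None if no rebuild is running."""
    m = re.search(r"recovery\s*=\s*([\d.]+)%.*finish=([\d.]+)min",
                  mdstat_text)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))

# Typical usage on a live box:
# with open("/proc/mdstat") as f:
#     print(rebuild_progress(f.read()))
```

Polling that once a minute and dumping it to a file gives you a clean record of the whole rebuild, instead of eyeballing it.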
All this will help the community as we move forward.
HarryC