If I've understood your question correctly:
If you attach 2*40GB HDs to a normal IDE or SCSI controller, you are going to end up with at least two drive letters (not allowing for NT-type configs).
With a RAID controller (depending on how you set it up) the data is striped across all disks, so as far as the OS is concerned the 2*40GB HDs look like one 80GB HD. This is a very basic explanation. Striping also gives you a performance boost: in theory, the more drives you use, the bigger the boost, although in practice the available bandwidth tends to max out after a certain point.
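To make the striping idea concrete, here's a rough sketch of how one logical address space gets mapped onto several physical disks (RAID 0 style). The names and the 64KB stripe size are just illustrative assumptions, not any particular controller's behaviour:

```python
STRIPE_SIZE = 64 * 1024   # bytes per stripe on each disk (assumed typical default)
NUM_DISKS = 2             # e.g. the 2*40GB example above

def locate(logical_byte):
    """Return (disk_index, byte_offset_on_that_disk) for a logical address."""
    stripe_number = logical_byte // STRIPE_SIZE
    offset_in_stripe = logical_byte % STRIPE_SIZE
    disk = stripe_number % NUM_DISKS             # stripes rotate round-robin
    stripe_on_disk = stripe_number // NUM_DISKS  # how deep on that disk
    return disk, stripe_on_disk * STRIPE_SIZE + offset_in_stripe

# The OS sees one 80GB volume; consecutive stripes land on alternating
# disks, which is where the read/write performance boost comes from.
print(locate(0))            # (0, 0)     -> first stripe on disk 0
print(locate(64 * 1024))    # (1, 0)     -> next stripe on disk 1
print(locate(128 * 1024))   # (0, 65536) -> back to disk 0, second stripe
```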
Adaptec have a good white paper on RAID levels here.
Adaptec do entry-level SCSI RAID controllers starting from $500. You're then going to have to buy SCSI drives, which tend to cost about 25% more than IDE.
I'm not sure the cost is worth it for individual use. I bought my server setup to allow me to work with hardware similar to my clients'.
My server setup is:
PIII 500MHz
256MB RAM
6*9.1GB UW SCSI (RAID 5)
Compaq 3200 SCSI RAID controller with 64MB of cache
Adaptec 2940UW SCSI controller
1 * Plextor 32x CD-ROM
1 * Plextor 40x CD-ROM
1 * 35/70GB HP DLT tape drive
Running Novell Netware 5.1
So in my server setup, one of the six HDs can fail but the server can still access the storage, just at reduced performance. I replace the failed drive, whereupon the new disk is repopulated automatically, its data rebuilt from the parity information on the other five disks. Because of the way RAID 5 works you tend to lose about one disk's worth of storage, so I only get 5*9.1GB instead of 6*9.1GB.
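A toy illustration of why RAID 5 can survive one failed disk and why you lose one disk's worth of capacity. This is only a sketch of the XOR parity idea; a real controller works on stripes and rotates the parity across disks:

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Six disks: five hold data for this stripe, one holds the parity.
data = [b"disk0", b"disk1", b"disk2", b"disk3", b"disk4"]
parity = xor_blocks(data)            # the "lost" sixth disk's worth

# Disk 2 dies: rebuild its contents from the survivors plus parity.
survivors = data[:2] + data[3:]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[2]

# Usable space = (N - 1) * disk size, hence 5*9.1GB out of 6*9.1GB.
print((6 - 1) * 9.1, "GB usable")
```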
Sorry, another long post.
Hope this helps.