Originally Posted by SEMC
Pain, massive unrelenting pain. Wow, torn completely apart in not one, but two threads. Yay, I think my humor might be recovering soon.
We don't mean to totally destroy you, SEMC, just discipline you (somebody hand me the whips again).
I get the feeling that we should be whipping your lecturers for not teaching you properly. It is not your fault that some of the information you were given is faulty. After all, a round earth - what sort of nonsense is that - everybody knows the earth is flat.
You know, the best lesson you can learn from this, SEMC, is to always question. I once heard a very good definition of "due diligence" (a term sometimes used in the financial markets): all that "due diligence" means is being able to distinguish fact from opinion.
Very powerful statement that - being able to distinguish fact from opinion. Remember that and it will serve you well.
My suggestion for you? Go back through your notes and cross-check the information they have given you against other sources.
Oh and btw, RAID 3 requires a minimum of 3 disks (it can be more). If we assume a 3-member RAID 3 raidset, then data is striped onto the first 2 disks. A block of parity is then computed for the data stripes that were written on the first 2 disks, and that parity block is written onto the 3rd "parity disk" (it is not just a few bits - it is a complete block of parity). The downside of RAID 3 is that the parity disk becomes a bottleneck due to the amount of data continually being written to or read from it.
RAID 5 (min 3 disks) distributes the parity across all disks in the raidset so that the parity disk is not a bottleneck.
If one disk in a RAID 3 or a RAID 5 raidset fails you can rebuild the data from the remaining disks without restoring from backup (I think you said that if the parity disk on RAID 3 failed you had to restore from backup - this is not the case).
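To see why losing any one disk - including the parity disk - is recoverable without a backup, here is a minimal sketch of the XOR parity scheme that RAID 3 and RAID 5 use. The disk contents are made-up example bytes, assuming a hypothetical 3-member raidset with two data disks and one parity disk.

```python
# Sketch of XOR parity in a 3-member raidset: 2 data disks + 1 parity disk.
# The byte values are invented for illustration.
data_disk_1 = bytes([0x0F, 0xA5, 0x3C, 0x00])   # stripe written to disk 1
data_disk_2 = bytes([0xF0, 0x5A, 0xC3, 0xFF])   # stripe written to disk 2

# The parity block is the byte-wise XOR of the data stripes.
parity_disk = bytes(a ^ b for a, b in zip(data_disk_1, data_disk_2))

# If ANY one disk fails - data or parity - XOR the survivors to rebuild it.
# Here we pretend data disk 1 died:
rebuilt_disk_1 = bytes(p ^ b for p, b in zip(parity_disk, data_disk_2))
assert rebuilt_disk_1 == data_disk_1   # rebuilt without restoring from backup
```

If the parity disk itself fails, the same property applies: recompute it from the surviving data disks. That is why a single-disk failure in RAID 3 or RAID 5 never forces a restore from backup.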
Whilst RAID 0 does introduce additional risk, the risk is not doubled. The additional risk is calculated from MTBF (mean time between failures) information, not by simple addition (for a mathematical explanation of how to compute the increased probability of failure when adding an extra disk to a RAID 0 raidset, see this post). The risk of using RAID 0 is the same as using JBOD (just a bunch of disks) to store information on. The only additional penalty of RAID 0 over JBOD is that restoring the data takes longer (one smaller disk vs a virtual larger disk made up of smaller disks). The performance gain of RAID 0 (striping), weighed against the increased risk of a single disk failure (particularly given the high reliability figures for current disks), makes RAID 0 an attractive alternative for some non-critical applications. The most data that can be lost is that which has changed since the last backup.
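As a quick illustration of why the risk is not simple addition: a RAID 0 array is lost if any one of its disks fails, so the combined failure probability is 1 - (1 - p)^n, not n * p. The per-disk probability below is an invented figure standing in for whatever the vendor's MTBF data implies.

```python
# Illustrative failure-probability math for an n-disk RAID 0 raidset.
# p is a hypothetical per-disk probability of failing within some period;
# in practice you would derive it from the vendor's MTBF figure.
def raid0_failure_prob(p: float, n: int) -> float:
    """The array is lost if ANY of the n disks fails (no redundancy)."""
    return 1 - (1 - p) ** n

p = 0.02                                  # assume 2% chance one disk fails this year
print(raid0_failure_prob(p, 1))           # 0.02   -> single disk (same as JBOD per disk)
print(raid0_failure_prob(p, 2))           # 0.0396 -> higher risk, but less than 2x 0.02
```

For small p the result is close to n * p, which is why doubling looks roughly right, but the exact figure is always a little less than the simple sum.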
High-performance features like faster rotational spindles, shorter seek times, and larger caches are initially high-cost features due to the recoupment of development costs (so-called NRE costs). Those features appear first on disks targeted at performance users - typically server farms and database systems. These users demand performance and are not as price sensitive as consumers. Consumers are normally not as demanding but are very price sensitive, so high-volume, lower-specification devices are released for consumer applications to gain market share. Eventually higher-performance hardware filters down to the consumer level once NRE costs are recouped from the high-margin (i.e. professional) market. This is why you do not see all the goodies that SCSI disks have in consumer (i.e. IDE) disks. It would cost too much to implement and thus raise the price beyond the consumer budget.
Laptops rarely support RAID due to weight, power, heat and cost considerations. Sager's inclusion of a RAID architecture is a unique selling point for them which appeals to high-end users. However, most games are not dependent on disk activity to run (they are primarily memory resident). Games are, however, dependent on GPU performance, video memory speed, CPU speed and associated main memory speed (the infamous FSB and memory latency debate - dual-channel Intel vs single-channel SiS chips, for example).
RAID 0 implemented properly will give a gain in data throughput (i.e. how much can be read or written in a given timeframe) of between 1.4 and 1.8 times, irrespective of the disk interface, thanks to caches and hardware RAID controllers.
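Put as back-of-the-envelope arithmetic, that 1.4x-1.8x range over a 2-disk stripe corresponds to an effective efficiency of roughly 0.7-0.9 per disk; the single-disk rate below is an invented number, purely for illustration.

```python
# Rough illustration of RAID 0 throughput scaling, using made-up numbers.
# A 2-disk stripe does not quite double throughput: controller, cache and
# striping overheads eat part of the gain, hence the 1.4x-1.8x range above.
single_disk_mb_s = 60.0        # hypothetical sustained rate of one disk, MB/s
striping_efficiency = 0.8      # assumed overhead factor (0.7-0.9 matches 1.4x-1.8x)

raid0_mb_s = 2 * single_disk_mb_s * striping_efficiency
print(raid0_mb_s / single_disk_mb_s)   # ~1.6x gain over a single disk
```

The exact multiplier depends on the controller, stripe size and workload, which is why it is quoted as a range rather than a single figure.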
Hope this helps.