Originally Posted by qhn
Are we saying that up to certain point, the more the worse in throughput?
No. Although at a certain point, adding more drives does nothing to improve throughput; it just provides more storage. RAID 0 is to HDDs/SSDs what SLI is to NVIDIA video cards. However, RAID controllers, for the most part, aren't designed for SSDs yet; they have saturation limits. It might take 40 to 50 HDDs to saturate a controller (measuring from the outer edge of the platter to the inner edge), while the same saturation may only take 6 to 8 SSDs, solely because SSDs have a more consistent, almost flat, read/write curve. Much like a 3-way GTX 285 2GB system with an i7-975 and 12GB of Dominator GT 2GHz RAM @ 7-8-7-20 is completely impractical for the average home user driving a 1440x900 LCD, 16 X25-Es or 24 Samsung SSDs are impractical in the same way. What setups like that do show is the headroom systems have for high-performance gaming (SLI/i7) or server applications (SSDs), and how far the rest of the technology still has to evolve to catch up with these ultra-performance goodies.
If you have a controller whose throughput is, say, 2 GB/s, and an SSD does 200 MB/s read/write (for the sake of argument), then assuming perfect RAID 0 scaling it would only take about 10 SSDs to fully saturate the RAID controller. Adding more SSDs or HDDs past saturation doesn't hinder performance so much as it just adds more space; a controller doesn't really have a "storage size limit" so much as a bandwidth limit. Granted, the X25-E is going to have much higher per-drive IOPS than the Samsung drives, due to a different architecture (SLC vs. MLC) and a different intent. The Samsung is designed as a consumer-level drive, so IOPS and the ability to send and receive operations by the thousands are sacrificed for a "gotta install fast, gotta open files fast" mentality. Odds are the Samsung drives are not going to be RAID'd in a serious, server-like fashion the way the Intel X25-E would. The X25-E, while smaller in capacity due to the SLC architecture, has an extremely fast read rate, a pretty fast write rate, and very high IOPS. Servers have to quickly read, dispatch, and receive information at very high rates and at random sizes, then write it back quickly while doing heavy random writes. That is extremely hard on a drive, and in a server environment an MLC drive might last only a few months, even with the best TRIM algorithm available. That's down to the MLC architecture, which stores two bits per cell to add space at the cost of performance and longevity. (One should note, though, that most people using an MLC drive are laptop users, home users, and maybe small businesses.)
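To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The controller and SSD figures are the hypothetical ones from above; the HDD outer/inner-track rates are assumptions I picked just for illustration:

```python
import math

def drives_to_saturate(controller_bw_mb, drive_bw_mb):
    """How many drives, striped in perfect RAID 0, it takes to hit the
    controller's bandwidth ceiling. Assumes ideal scaling with no RAID
    overhead, which real controllers never quite deliver."""
    return math.ceil(controller_bw_mb / drive_bw_mb)

CONTROLLER_BW = 2000             # hypothetical 2 GB/s controller
SSD_BW = 200                     # flat ~200 MB/s SSD, per the example above
HDD_OUTER, HDD_INNER = 120, 60   # assumed HDD rates at outer vs. inner tracks

print(drives_to_saturate(CONTROLLER_BW, SSD_BW))     # 10 SSDs
# An HDD only hits its peak on the outer tracks, so to guarantee
# saturation across the whole platter you size for the inner-track rate:
print(drives_to_saturate(CONTROLLER_BW, HDD_OUTER))  # 17 HDDs, best case
print(drives_to_saturate(CONTROLLER_BW, HDD_INNER))  # 34 HDDs, worst case
```

The gap between the HDD best and worst cases is exactly why the SSD's flat curve matters: you can't count on the outer-edge rate, so you end up throwing far more spindles at the controller.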
If you were to look at a "normal" MLC drive (we'll omit the new X25-M, which, other than in write MB/s, is about as fast as the current G1 X25-E), the read IOPS may be several thousand, but the write IOPS may be 100 to 200. That's partly the controller, but also what the market thinks of MLC as a "performance good". The new Kingston V+ series, advertised as a mainstream option, may get 6,000 random read IOPS, but its random write rate is 100 to 300 IOPS, which is not significantly more than the fastest, highest-capacity 7200rpm HDDs. Its sequential numbers, though, are 220 MB/s read and 180 MB/s write. For a home user, we want Firefox, our media players, and our office applications to open quickly, and our songs ripped to the drive quickly, and none of that is particularly IOPS-demanding. On top of that, with Intel and Micron finishing a three-bit-per-cell SSD, you get more space, fewer IOPS, and a shorter lifespan, but "greater value" for the consumer. What this means is that in the laptop world you'll see 3 to 4 types of SSDs: the absolute-value, high-bits-per-cell SSD designed to be cheap, light in use, and good for the traveller, home user, or anyone who isn't heavy on writes; the performance MLC, with two bits per cell, a fairly high capacity, fast reads and writes, decent IOPS (usually about the same as the previous generation's SLC), that can handle light workstation loads; and the SLC, bandwidth-saturating, controller-ravaging beast that may find its way into a few notebooks owned by users who don't care about anything but raw performance.
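For a sense of why those random write numbers barely beat an HDD, convert the IOPS figures to throughput at the usual 4 KB benchmark block size. The drive figures here are the rough ones quoted above, plus an assumed ~100 random IOPS for a fast 7200rpm HDD; none are measured results:

```python
def iops_to_mbps(iops, block_kb=4):
    """Random I/O rate -> throughput, assuming a fixed block size.
    4 KB is the usual random-I/O benchmark block; real workloads vary."""
    return iops * block_kb / 1024.0

print(iops_to_mbps(6000))  # V+-class random reads:     ~23.4 MB/s
print(iops_to_mbps(300))   # V+-class random writes:    ~1.2 MB/s
print(iops_to_mbps(100))   # assumed fast 7200rpm HDD:  ~0.4 MB/s
```

So on random writes the mainstream MLC drive is only a few times faster than a spinning disk, even while its sequential numbers are in a different league.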
Controllers have a LONG way to go before they catch up to what SSDs can do as a RAID'd entity, much like it took years for HDDs to start falling by the wayside and let SSDs finally balance out the CPU/GPU/RAM vs. storage dichotomy that has existed for the past 4 to 5 years (the first group's speed evolving at a much higher rate than HDDs' speed improvements).
Just my .02