Originally Posted by ezfeeling
Interesting read so far
Well, ready or not, here comes my BS.
First, to Karl's "scientific purpose" point. I can't see how Intel has survived on that front. Most research labs, when they need computing power, use Unix workstations. The only real obvious advantage of Itanium is that instead of the usual Unix-based system it will be running Win64.
Actually, the first OS available for Itanium wasn't Win64. And it isn't even widely used on Itanium at that. Most Itanium servers run the HP-UX flavor of Unix.
|2) 64-bit. Do we need it or not? The answer is yes. Now, before you guys tell me that no housewife needs 64-bit power and stuff, hear me out first. As most of you already know, CPUs are pretty much at the limit of what the electronics allow speed-wise, as proven by the serious slowdown of the speed war recently. How then can you grind more performance out of a nearly used-up platform? Better chip design! Currently, there are two ways to make this happen.
Doubling the path to 64 bits does NOT mean more speed. You could make an 8-bit processor run at 3 GHz. It wouldn't be able to address much memory, but it would be fast as hell!
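To put rough numbers on that, here's a minimal C sketch, conflating word width with address width the same way the argument above does. The numbers are just the arithmetic, not any real chip's spec:

[code]
#include <stdio.h>
#include <math.h>

/* How much memory can N-bit addresses reach? 2^N bytes, and note
 * that clock speed never enters into it. Compile with -lm. */
int main(void)
{
    int widths[] = { 8, 16, 32, 64 };
    for (int i = 0; i < 4; i++)
        printf("%2d-bit -> %.3g addressable bytes\n",
               widths[i], pow(2.0, widths[i]));
    return 0;
}
[/code]

An 8-bit address space tops out at 256 bytes no matter how many gigahertz you feed it. Going wider is about reach, not pace.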
|a) Going RISC. By specifying what the CPU should be doing, you can reduce the complexity of the CPU and greatly improve what it does, and it does it a lot faster. Maybe not far into the future we will be seeing some sort of multi-processor solution similar to what they did with the North/Southbridge split, dedicating one processor to certain things and another to other chores. After all, the Pentium was originally a 2-CPU CPU.
But going RISC means serious drawbacks for developers. EVERYTHING has to be timed "just right" on a RISC system for it to perform well; there's little margin for error if you want an app to be efficient. If you need an example of just how difficult that can be with "everyday" apps, look no further than the PowerPC.
Bottom line: it's been done before, and the shortcomings far outweigh the benefits, at least for home computing. CISC has always won out because new (and better) iterations of a particular CPU can retain compatibility, since the underlying instruction set is already there. I don't think RISC is a long-term option at all. For a taste of the kind of hand-tuning I mean, see the sketch below.
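Here's a minimal C sketch of what "timing it just right" means on a simple in-order pipeline, where a value loaded from memory isn't ready the cycle after the load. The function names and the two-way unroll are my own illustration, not anyone's official recipe:

[code]
#include <stdio.h>

/* Naive: each add waits on the load right before it, so a simple
 * in-order pipeline stalls on every iteration. */
long sum_naive(const long *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];           /* load a[i], then immediately use it */
    return s;
}

/* Scheduled: two independent accumulators give the pipeline a second
 * load to chew on while the first one is still in flight. */
long sum_scheduled(const long *a, int n)
{
    long s0 = 0, s1 = 0;
    int i = 0;
    for (; i + 1 < n; i += 2) {
        s0 += a[i];          /* these two chains don't depend on */
        s1 += a[i + 1];      /* each other, so they can overlap  */
    }
    if (i < n)
        s0 += a[i];          /* leftover element when n is odd */
    return s0 + s1;
}

int main(void)
{
    long v[5] = { 1, 2, 3, 4, 5 };
    printf("%ld %ld\n", sum_naive(v, 5), sum_scheduled(v, 5)); /* 15 15 */
    return 0;
}
[/code]

On an out-of-order CISC chip the first version is usually fine; on a classic in-order RISC design, the compiler (or the programmer) has to find that second independent chain itself, for every hot loop in the program.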
|b) Increase basic CPU efficiency, which pretty much means improving on the core design. A very basic formula for CPU efficiency, which is also the basis for Intel's iCOMP, is efficiency = bus speed * data passed to the PC per cycle. DDR was designed to overcome the fact that bus speed was not keeping up with CPU speed in terms of development. By making the CPU 64-bit, you can in theory process twice as much information per cycle. That would definitely increase the design overhead of the chip, but AMD has proven they can handle the task, adding complexity while raising the performance bar, giving us a chip that outperforms the AthlonXP at the 32-bit game. If that doesn't gear you up to look forward to what it can do 64-bit-wise, I really don't know what can.
Like anything else, iCOMP certainly doesn't tell the whole story. And no, 64-bit means you can PROCESS twice the information per clock cycle, not necessarily do twice as much work per clock cycle. I suppose on a basic level you're right: you could, in THEORY, do 2x as much work per cycle, but in practice it simply doesn't happen. The sketch below puts numbers on both halves of that.
Again, not until there's a "killer app" designed to do so.
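A minimal C sketch of both points, with made-up bus numbers plugged into the quoted formula, plus a toy 64-bit add. A 32-bit chip has to split that add into two instructions plus carry handling, which is where the "twice the information per cycle" gain comes from, and also why it doesn't automatically mean twice the work:

[code]
#include <stdint.h>
#include <stdio.h>

/* What a 32-bit machine effectively does for ONE 64-bit add:
 * two 32-bit adds plus carry propagation. */
void add64_on_32bit(uint32_t alo, uint32_t ahi,
                    uint32_t blo, uint32_t bhi,
                    uint32_t *rlo, uint32_t *rhi)
{
    *rlo = alo + blo;              /* low halves first...        */
    uint32_t carry = (*rlo < alo); /* ...did the low add wrap?   */
    *rhi = ahi + bhi + carry;      /* high halves plus the carry */
}

int main(void)
{
    /* The quoted formula with invented numbers: a 100 MHz bus
     * moving 4 vs 8 bytes per cycle. Peak figures, not measured. */
    printf("32-bit path: %d MB/s peak\n", 100 * 4);
    printf("64-bit path: %d MB/s peak\n", 100 * 8);

    uint64_t a = 0x00000001FFFFFFFFULL, b = 1;

    /* Native 64-bit: one add instruction. */
    printf("native: %llx\n", (unsigned long long)(a + b));

    /* Emulated on 32 bits: two adds and a carry check. */
    uint32_t lo, hi;
    add64_on_32bit((uint32_t)a, (uint32_t)(a >> 32),
                   (uint32_t)b, (uint32_t)(b >> 32), &lo, &hi);
    printf("split : %llx\n", ((unsigned long long)hi << 32) | lo);
    return 0;
}
[/code]

Both paths print 200000000; the 64-bit chip just gets there in one instruction instead of two-plus-a-carry. But most code isn't wall-to-wall 64-bit arithmetic, which is exactly why the theoretical 2x doesn't show up in practice.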
|There are a lot more reasons why 64-bit is necessary, just like the 16->32-bit move was inevitable, as it opens a new direction of chip design for home users. Do we really need it? Aren't computers powerful enough already? Well, 640K was more than plenty and a 1MB HD was a dream come true at one point. This is more a matter of deciding to start now or start later.
I totally agree with you here.
A lot of people take what Billy Boy Gates said about "640K being enough for anybody" out of context. He didn't say it would still be the case 10 years later. He also said it at a time when home computers had 64K at best.
Like I said, I agree 64-bit is important. But no one has been developing towards that goal, which means we have a boatload of early adopters who simply won't see the benefit of 64-bit computing and are overpaying for the privilege now.
When 64-bit hits the big time, the Itaniums and Opterons available today are going to be nearly worthless. I don't fault AMD or Intel. I fault software developers for not being hungry enough and for being complacent with the technology available today.
This is a lot different from 15 years ago. Developers certainly were hungry enough when they knew how much better Desqview, Windows 3.0 and OS/2 would run in a 32-bit versus a 16-bit environment. Software was just WAITING for the hardware to become available. Today it's the opposite: there's hardware available, but no software. And that's why I feel 64-bit is a WASTE right now. Without software, the hardware is useless. And adding patches to games that marginally improve performance is NOT the way to make a case for 64-bit.
| No killer app for 64-bit yet? Well... if you don't have the platform yet, how do you design a killer app for it? Think, Karl, think, and think really, really hard!
How did Quarterdeck develop Desqview for a 32-bit environment when all that was available was 16-bit?
How did Microsoft do it with Windows 1.0 or 2.0? Microsoft pitched to Intel, "Boy, it sure would be nice if we had a 32-bit CPU that would do X," and wouldn't you know it, a year later out came Windows/386.
How did IBM do it with the first releases of OS/2?
|3) Are we still discussing whether AMD should design their own chipset? As most of you guys have already pointed out to Karl, AMD doesn't have the resources to spend designing their own chipset. Granted, it is a much better idea to design your own chipset to take full advantage of the CPU. Here's something to consider, though: AMD has never really concentrated on making motherboard chipsets the way Intel has all the way back to the 80286 days. Asking AMD to divert more resources to designing a new chipset, something AMD isn't familiar with at this point, while other chipmakers are already making good-enough chipsets that let AMD show how it kicks Intel's butt? Staying out of that business is, to take Karl's words, "a good business move". After all, there's a good ol' saying: "If it ain't broke, don't fix it."
Making companies like Dell go to multiple manufacturers for an architecture solution only limits their options. Sure, nForce is a good chipset (but anything from VIA is only good for me to poop on). But that adds a layer of complexity to any architectural design. That's one of the reasons it's kept out of the mass market. It adds complexity and *uncertainty* to any engineering design. nForce might be the killer chipset today, but what happens if VIA makes an even better one next year? More work for Dell to secure product. It's a layer of complexity that most integrators simply don't want or need, and dealing with multiple vendors adds cost to their product.
Look at your phone bill. I use Verizon for my local, long distance and cellular phone service. I get one bill, I have one relationship to worry about, and I don't have to deal with the hassle of differing due dates, differing rates, differing places to mail payment, etc. That's a simple version of what a business has to go through. When you're talking about 16 models of PCs across two different vendors, that's a *SIGNIFICANT* amount of work!
Intel makes the motherboards, CPUs and chipsets for Dell. One-stop shopping, for one price. Shopping around costs money, and it would literally cost more for Dell to offer AMD, and that cost would simply be passed on to consumers.