My standard answer to the question "how much RAM is enough?": if your CPU can self-test your RAM in less than one second, you don't have enough RAM. Put another way, if you had a CPU that could access one byte of memory per clock cycle, then a 1 GHz CPU should have 1 GB of RAM. Mostly that's a joke answer, but it tends to be correct more often than not.
re: setting your swap file size to zero and Windows continuing to page. Well, first of all, it's a paging file, not a swap file. As others have pointed out, Windows never swaps entire processes out. No swapping-based operating system has been written in over 30 years; all modern virtual-memory OSs use paging.
Secondly, there's more than one kind of paging activity, which is why you still see paging happening even when your pagefile is set to zero. Windows demand-pages executables into memory: every page of memory that a program's executable code occupies is paged in from the program file on disk. That kind of paging activity is always happening, regardless of the size of your paging file. The paging file is only used for memory pages that don't correspond to executable files (programs, DLLs, etc.), i.e., raw data. And most of the time, if your RAM is big enough, your paging file will be empty; Windows won't use it if it doesn't need to.
There are documented cases where having "too much" RAM will slow a PC down, but it's not common on today's CPUs. Basically, the CPU has a fixed set of on-chip resources for managing its in-core page tables. These are lookup tables that the processor's memory manager uses to map virtual addresses to physical addresses; every page of physical memory must have a slot in a lookup table in order for the memory manager to reference it. Some older CPUs had only enough in-core page-table resources to address 512 MB of RAM. While you could install more RAM into such a system, the machine would slow down, because it would have to page its page tables(!!) out of the CPU into RAM to manage the full address space. Any time a CPU has to go out to RAM instead of using its on-chip resources, performance suffers.
re: RAMdisks - most of the time they are a waste. But there are two problems that frequently occur in badly written Windows apps that make RAMdisks useful.
1) The app is designed to write its data into temporary files. These are apps whose authors never considered the possibility that they would run on computers with enough RAM to hold all their temporary data at once, so they use temp files even when they don't need to. SuperPi is an example of this kind of program; it writes a few megabytes of temp files as the number of digits increases, even though all of its temp data would easily fit into 512 MB of RAM. A well-written program would check how much memory it can safely allocate on the current system, and only fall back to temp files if it absolutely had to.
2) Usually a filesystem cache is more effective than a RAMdisk. But the Win2K/XP/etc. filesystem cache doesn't give users enough control over the size of the cache or how it gets used. As a result, Windows generally doesn't cache as much as it could, and definitely not as much as it should. The reason third-party programs (like a CD accelerator that performs readahead caching) are effective on Windows is that the native Windows filesystem cache sucks.

Back in the DOS days, SmartCache was really great because you could tune the sizes, writeback parameters, etc. to do whatever you wanted. I could compile a huge set of source code and all of the source files and object files would get cached; if I edited one file and reran the compile, the disk wouldn't even spin, because the compiler and all the files were still in the RAM cache. But on Windows XP, even though I have over a gig free, it keeps going back to the disk. In situations like this it's worthwhile (for productivity reasons) to use a RAMdisk, although *ideally* you'd be better off with a filesystem cache that was worth a damn.
Also, so far I've seen some pretty piss-poor RAMdisks for Windows. They're just too slow at copying data from memory to memory. Since you can pretty much guarantee that you're working with aligned blocks of 512 bytes or more, you should be using SSE or other streaming instructions to pipeline as large a memory transfer as possible. The people writing code for Windows these days just don't seem to be anywhere near as good as in the bad old days of DOS, when good utilities were written in hand-optimized assembly language. People rely too much on ultra-high-level languages. Those are great for user-friendly GUIs, but system code needs to be written in a system programming language...