The difference in RAM speeds


I am building a new PC with the Intel i7-930 processor. I want to use 12 GB of RAM in it (6 x 2 GB sticks).

Here is some of the RAM I am looking at: G.SKILL 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Triple Channel Kit Desktop, and I would buy two of these 3-stick kits.

They are rated at 1600, however some of the user reviews on that product say they could not get it to run at 1600, and the RAM's manufacturer commented that "I7 doesn't support over DDR3 1066".

So I am curious. I do not know much about this; I know that more GB of RAM is generally better, but as for the speeds, I am not sure how much of a difference they make.

So can someone explain to me what the difference in performance may be between 1066 and 1600 RAM?

Best Answer

If some parts of the processor/memory subsystem can run at a clock of 1600 but others are limited to 1066, then it will all run at 1066 (the speed of the slowest component). There is therefore usually little to gain from having some components that can run faster, though they are unlikely to make things slower either.
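As a trivial sketch of that rule (the rated speeds are just the figures from the question, not measurements of any real system):

```python
# The memory bus settles on the fastest speed that every component
# supports, i.e. the minimum of the rated speeds (illustrative values).
rated_mhz = {"RAM": 1600, "memory controller": 1066}
effective = min(rated_mhz.values())
print(effective)  # 1066
```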

If everything can negotiate the higher speed, then tasks whose main bottleneck is main-memory bandwidth will run quicker, as more data can be shuffled over the bus in a given amount of time. In reality, most tasks don't saturate the processor<->memory bus most of the time: tight inner loops usually operate on datasets that fit in the processor's cache, so main memory doesn't need to be touched for long stretches. Doubling the memory clock will therefore not double your system's performance; it will improve it slightly, but other bottlenecks will minimise the benefit.
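The cache-vs-RAM distinction can be seen with a rough microbenchmark like the one below. This is a sketch, not a rigorous benchmark: the buffer sizes are assumptions about typical cache sizes, and Python's interpreter overhead masks much of the gap you would see in a lower-level language.

```python
# Sketch: sum the same total number of elements twice, once over a small
# buffer that stays resident in cache and once over a large buffer that
# must stream from main memory. Sizes are illustrative assumptions.
import array
import time

def sum_repeatedly(data, passes):
    """Sum the buffer `passes` times; total work is len(data) * passes."""
    total = 0
    for _ in range(passes):
        total += sum(data)
    return total

SMALL = 1 << 12   # 4096 x 4-byte ints = 16 KB: fits comfortably in cache
LARGE = 1 << 22   # ~16 MB: larger than most caches, streams from RAM

small_buf = array.array("i", range(SMALL))
large_buf = array.array("i", range(LARGE))

passes_small = LARGE // SMALL  # equal total element count for both runs

t0 = time.perf_counter()
sum_repeatedly(small_buf, passes_small)
t_small = time.perf_counter() - t0

t0 = time.perf_counter()
sum_repeatedly(large_buf, 1)
t_large = time.perf_counter() - t0

print(f"cache-resident: {t_small:.4f}s  RAM-streaming: {t_large:.4f}s")
```

If the memory-bound run is only modestly slower, that illustrates the point above: the CPU hides a lot of the memory latency, so a faster memory clock helps only the fraction of time actually spent waiting on RAM.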

There is one issue that might mean you are better off getting the slower memory: running at a different speed can change the latency timings and voltage range the modules support, so if you get faster RAM, make sure it is also rated as compatible with the slower speed, just in case.

In the days of yore, matching clock speeds could be more important. Some old 486DX4 chips would run at 33x2 if they found a 33MHz bus, or 25x3 if they found a 25MHz bus; depending on what you were running and how much cache the particular chip had, one or the other would be better. Sometimes (a Mandelbrot calculation loop, for example) the 25x3 configuration would be faster, as the CPU could operate on register values and cached data at 75MHz rather than 66MHz, but for some tasks (say, a video encode operation) the 33x2 configuration would win, as it could perform bulk access to/from main memory (or off-chip cache) at a 33MHz signalling rate instead of 25MHz.

There are similar effects at play with modern CPUs, but they are nowhere near as pronounced, so unless you are a hard-core speed freak for whom every 0.1% counts, don't worry about it. Modern CPUs have much finer-grained control of their external<->internal multipliers, so the difference won't be nearly as large as the 33/25 split; and with their on-board memory controllers, more intelligent pipelines with duplicated core blocks and out-of-order execution, and multiple cores, they can be far brighter about doing other things while waiting for one particular operation's data to arrive from off-chip.