I completed two quick tests with an older Maze3 waterblock on the A64 3700+ at stock speed and voltage using the same water-cooling loop as before.
Personally, I find these results rather disturbing. I would have guessed that these four waterblocks would show a measurable difference when tested on a modern CPU; in reality, it appears they don’t. Admittedly, the CPU temperature is not “accurate,” but overall the test conditions were held relatively consistent (much more so than a typical user would ever experience under normal operating conditions).
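To put “measurable” in perspective, here is a back-of-envelope sketch (with purely hypothetical numbers for heat load and temperatures – not my actual test data) of how a 1 °C reporting step on a live CPU translates into a C/W floor below which waterblock differences simply vanish:

```python
# Back-of-envelope sketch (hypothetical numbers, not actual test data):
# how coarse temperature reporting limits the waterblock-to-waterblock
# differences a live-CPU test can resolve.

def thermal_resistance(t_cpu_c, t_water_c, power_w):
    """CPU-to-water thermal resistance in C/W."""
    return (t_cpu_c - t_water_c) / power_w

# Assumed readings for two waterblocks on the same loop.
power_w = 90.0            # assumed CPU heat load, W
block_a = thermal_resistance(t_cpu_c=42.0, t_water_c=30.0, power_w=power_w)
block_b = thermal_resistance(t_cpu_c=43.0, t_water_c=30.0, power_w=power_w)

# Smallest resistance difference a 1 C reporting step can distinguish.
diode_step_c = 1.0
resolvable_cw = diode_step_c / power_w

print(f"Block A: {block_a:.3f} C/W, Block B: {block_b:.3f} C/W")
print(f"Delta:   {block_b - block_a:.3f} C/W")
print(f"Resolution floor at {power_w:.0f} W: ~{resolvable_cw:.3f} C/W")
```

With these made-up numbers, a single 1 °C step already corresponds to roughly 0.011 C/W at 90 W, so two blocks have to differ by more than that before the diode can even register it.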
Question: Why does the IHS appear to have such a large effect on waterblock performance – essentially leveling the field between newer high-performance WBs and older mediocre WBs?
I have been using a similar CPU test bed (A64 3200+) for well over a year to test heatsink fans and occasionally a water-cooling system – but I never used it to compare individual waterblocks. My experience, after testing well over a dozen HSFs and half a dozen complete water-cooling systems, is that there has almost ALWAYS been a measurable difference between various products. Again, while not accurate, the data was repeatable (I can go back and re-test an HSF I tested 6 months ago and get essentially the same numbers), which seems useful for comparative reviews.
Adding a calibrated thermocouple to the side of the A64 IHS provided additional data to go along with the relatively useless internal diode temperature. Even though it’s not representative of the CPU core temperature, the IHS thermocouple almost ALWAYS reported a measurable difference between different products.
Prior to actually doing any waterblock testing/reviews, I became convinced (from all the discussion on various forums) that the only “right way” to test waterblocks was with a custom-built test bench, and that no serious enthusiast would ever consider test results collected from a CPU on a live computer with MBM5! (I also had a good bit of the hardware needed and an interest in learning by doing, which led to my building a waterblock test bench.) Once it was built, I opted not to do waterblock tests on the CPU test platform. The thought did cross my mind, but I just assumed I would see results similar to what I typically saw when testing an HSF or water-cooling system (without the more accurate numbers the test bench produces).
However, I do remember playing around with different waterblocks several years ago on an AMD 1400 and then an XP-2400+ (both with exposed dies) and seeing measurable differences. It’s the newer CPUs with IHSs that I never got around to testing waterblocks on – until now. And it appears that somehow the IHS is having a huge effect on waterblock performance.
AFAIK, the IHS serves two main purposes: (1) it spreads heat away from a relatively small area over the core to a much larger area that contacts a heatsink or waterblock base, and (2) it provides mechanical protection for the rather fragile/brittle silicon core. Because most IHSs are thin copper, there still exists a relative hot-spot over the core area (thermal modeling clearly shows this).
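As a rough illustration of why the hot-spot persists, the following sketch applies the common 45-degree spreading rule of thumb to a hypothetical 2 mm copper lid over a 10 x 10 mm core (all dimensions are assumed for illustration, not measured from the A64):

```python
# Crude sketch of why a thin copper IHS still leaves a hot-spot over the
# core. Uses the common "45-degree spreading" rule of thumb and assumed
# dimensions -- an illustration, not a thermal model.

K_COPPER = 390.0                  # W/(m*K), nominal bulk copper conductivity

core_w, core_l = 0.010, 0.010     # assumed core footprint, m (10 x 10 mm)
ihs_w,  ihs_l  = 0.030, 0.030     # assumed IHS footprint, m (30 x 30 mm)
ihs_t          = 0.002            # assumed IHS thickness, m (2 mm)

# Through-plane conduction resistance of the lid directly over the core.
core_area = core_w * core_l
r_through = ihs_t / (K_COPPER * core_area)

# 45-degree rule: heat spreads laterally by roughly one plate thickness
# per side before reaching the top surface of the lid.
eff_w = min(core_w + 2 * ihs_t, ihs_w)
eff_l = min(core_l + 2 * ihs_t, ihs_l)
eff_area = eff_w * eff_l

print(f"Through-plane R over core: ~{r_through:.3f} C/W")
print(f"Effective heated footprint: ~{eff_area * 1e6:.0f} mm^2 "
      f"of {ihs_w * ihs_l * 1e6:.0f} mm^2 lid area")
```

Even with these assumptions, the heat reaching the top of the lid is concentrated over only about a fifth of the IHS footprint, which is consistent with the hot-spot the thermal models show.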
Question: How does all this impact previous thinking on WB testing? Are thermal die simulators (without an IHS) no longer useful? To be valid (more useful?), should future thermal die simulators incorporate an IHS? Or should we abandon thermal die testing and go back to live CPUs? (Which one? How to measure temps and power?)
IMHO, thermal die simulators and live CPU testing each have their place. Thermal die testing (with or without an IHS) produces data that should be of particular interest to waterblock designers and hard-core users who run their CPUs without the IHS. Live CPU testing may be of more interest to the general water-cooling community, most of whom have no desire to remove the CPU’s IHS.
I come away from all this a bit discouraged and with more questions than answers…