Quote:
However, just because it isn't science doesn't make the data useless.
If the testing isn't done scientifically, then any interpretation based on the collected data is worthless. CONTROL variables as much as possible, vary one at a time, and collect many replicates. That is the basis of testing. If several parameters vary at once, the results are of little use. A bit of careful thought and test design can save a LOT of work. There are people who have used relatively simple tools and gotten useful numbers because they designed their tests carefully, people without any formal technical training. I don't care WHO does the testing; only what comes out of their efforts.
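To make the "control everything, vary one thing, replicate" idea concrete, here is a rough sketch of what I mean (Python, with made-up block names and numbers; it illustrates the test structure, not anyone's actual rig or data):

Code:
import random

# Sketch of a one-factor-at-a-time comparison: everything except the
# waterblock is held fixed, and each block is remounted and re-measured
# several times so mounting scatter shows up in the data instead of
# being mistaken for a real performance difference.

FIXED = {                       # held constant for every single run
    "pump_setting": "same pump, same speed",
    "flow_rate_lpm": 4.0,
    "heat_load_w": 70,          # same CPU, same load program
    "ambient_c": 25.0,          # measured and held steady (or corrected for)
}

BLOCKS = ["Block A", "Block B", "Block C"]   # the ONE variable
REPLICATES = 5                               # remount + re-measure each time

def measure_delta_t(block: str) -> float:
    """Stand-in for a real measurement of the CPU-to-coolant temperature
    rise in degrees C; here it is just simulated noise around a nominal value."""
    nominal = {"Block A": 12.0, "Block B": 12.5, "Block C": 13.2}[block]
    return nominal + random.gauss(0, 0.4)    # pretend mount-to-mount scatter

results = {b: [measure_delta_t(b) for _ in range(REPLICATES)] for b in BLOCKS}
for block, temps in results.items():
    print(block, [round(t, 1) for t in temps])

The point isn't the code; it's that the thing you want to rank is the only thing allowed to change, and you collect enough repeats to see how big your own noise is.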
At this time, ranking waterblock performance with in-socket thermistors is akin to determining the outside weather by sticking your tongue on a window of your house. Sure, the tongue is sensitive to changes in temperature, and sure, the window temperature is pretty strongly related to the outside temperature. And this method is just fine for your three-year-old. But your TV meteorologist?
Hardware reviewers are supposedly the experts, right? They are supposedly more knowledgeable and more experienced in the topic at hand than any of their readers. Otherwise, why are they the reviewer instead of the reader? So if testing cooling gear is a technical issue (a rather difficult one, actually), why would you not expect said reviewer to show some degree of technical expertise?
No apologies from me; I am not the "average" reader. I read reviews (when I do) with an eye to (1) how they were done and (2) how generally applicable the results are. If 6 months' worth of testing culminates in a review that's ONLY relevant for the watercooling system (and motherboard/CPU) the test was done in, then it isn't of much use to me or anyone else.
If the error in your temperature measurements is large, then the numbers themselves aren't of much use, and impressions of things like mounting, ease of use, and value become what is really important in the review. No complaints from me here; blocks do perform closely, so value and quality of manufacturing are important in making buying decisions. But the fact is that people DON'T base buying decisions on that part of a review. They base them on numbers that have no statistical significance instead. And that's unfortunate.

Ask the reviewer and they'll happily tell you (if they are honest) that the numbers aren't set in stone and the tests weren't as accurate as they'd like. But that doesn't matter, because the results are in a graph and look completely objective. Why not propagate the error? That would be the honest thing to do. Or at least test three times and plot the average plus the standard deviation from the mean, rather than a single number.
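And that last bit is trivial to do. A minimal sketch of "test three times, report mean and standard deviation" (the readings below are invented, purely for illustration):

Code:
import statistics

# Hypothetical replicate readings: temperature rise over coolant (deg C)
# for two blocks, three mounts each. The numbers are made up.
readings = {
    "Block A": [12.1, 12.6, 11.9],
    "Block B": [12.4, 12.2, 12.8],
}

for block, temps in readings.items():
    mean = statistics.mean(temps)
    sdev = statistics.stdev(temps)    # sample standard deviation
    print(f"{block}: {mean:.2f} +/- {sdev:.2f} C  (n={len(temps)})")

# If the error bars of two blocks overlap like this, the single-number
# ranking between them has no statistical significance.

If that were plotted instead of a single bar per block, readers could see for themselves which differences actually mean something.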