11-30-2003, 01:48 AM   #94
Cathar
Thermophile
 
Join Date: Sep 2002
Location: Melbourne, Australia
Posts: 2,538

Hmmm, reviewers working with manufacturers is the way that testing should be done, IMO.

This is not bias, nor is it an indication of bias. There are things that can go wrong with testing. If there is an issue, it is good to raise it with the manufacturer before publishing (witness the recent RBX review at OC.com).

This is not biasing the review though. A manufacturer will have done its own testing, and knows best what to expect from its product. I had a reviewer recently quiz me about the poor results he was getting compared to what he had been hearing from others about the Cascade. We went over all the potential issues together via email. In the end it came to a stalemate: whatever I suggested might be wrong, the reviewer insisted was not the problem.

I gave up and said basically that whatever he was seeing was what he was seeing, and so be it.

Two weeks later the reviewer found out that it was a mounting issue. This is why mounting for reviews is such a hobby horse of mine. It also didn't help that the reviewer was mounting the block on a P4 without the standard mounting bracket; he had manufactured his own. He neglected to tell me about that too.

Now I didn't even know that this reviewer had the block for review purposes until he contacted me. He had acquired it from someone else, as often seems to be the case with Cascade reviews.

Where I'm going with this anecdote is that the interaction between mfgr and reviewer was not a form of bias. I knew what the block was capable of, and the reviewer didn't get results that remotely matched. Over time the issue was resolved by the reviewer, independently, after I had given up, but ultimately by listening to my concerns.

Just because a mfgr wants to work with a reviewer to determine if the rough scale of measure is close to what's expected (no mfgr expects the results to exactly match theirs, due to the different test beds; just to be within reason), that does not indicate bias. It is just ensuring that no silly mistakes have been made before the review is published and the mfgr suffers the consequences of someone else's mistake.

In the end, it is the reviewer's decision whether or not to listen to the mfgr. It is the reviewer's decision to post data/results.

Bias only occurs if the reviewer's results are modified or omitted at the request of the mfgr. Until a review is published, how can anyone sit back and accuse anyone of bias?

P.S. - I still don't know if the review that I was talking about has been published or not. I don't believe it has. The reviewer hasn't contacted me again, and I don't see any indication of it via Google.

Last edited by Cathar; 11-30-2003 at 02:00 AM.