Quote:
Originally posted by pHaestus
...
As for Ben he cares about googling and blue sky and his post per day count. I wouldn't make big purchases based upon that.
...
Add the ability to measure water temps to 0.01C res with decent reproducibility. Add a bigger pump to offset that. Fiddle with how to control water temps. Add a die simulator and related hardware. Etc etc.
...
To me it is irresponsible to start some "Alliance" and make these recommendations that tons of money must be invested to get some "magical" accuracy.
...
Do the best with what you have/can afford, be honest about your testbed's limitations, and upgrade as needed/as available. "I'm waiting to find that used with traceable certs" is a valid excuse for not going platinum RTD in my mind.
I couldn't care less about my post count, actually.
[edit: rambling removed]
I'm sorry for my ramblings about that ultra high accuracy bit. I was just trying to figure out a (relatively) cheap way of measuring that secondary loss. Obviously, as Bill himself pointed out, it's beyond our capabilities/means. I certainly didn't mean to imply that we would need to spend huge amounts of money (used or new).
Going over some theoretical figures, I'm trying to work out how each individual error propagates into the result: the C/W. One thing that's stumping me right now (a brain fart): if I have a die-to-water temp differential error of, say, +/- 0.2 C, and a 70 W source measured to +/- 2%, what's the error margin on my C/W?
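Here's my tentative stab at it, using the textbook propagation-of-uncertainty rule, assuming the two errors are independent and plugging in a purely made-up nominal differential of 7 C (i.e. a 0.1 C/W block) just to put numbers on it:

\[
\frac{C}{W} = \frac{\Delta T}{P}, \qquad
\frac{\delta(C/W)}{C/W} = \sqrt{\left(\frac{\delta(\Delta T)}{\Delta T}\right)^{2} + \left(\frac{\delta P}{P}\right)^{2}}
\]

With \(\Delta T = 7 \pm 0.2\;\mathrm{C}\) and \(P = 70\;\mathrm{W} \pm 2\%\):

\[
\frac{\delta(C/W)}{C/W} = \sqrt{(0.2/7)^{2} + (0.02)^{2}} \approx 3.5\%
\]

so roughly 0.100 +/- 0.0035 C/W, or about +/- 4.9% if you just add the relative errors as a worst case. If that's right, the +/- 0.2 C only turns into a percentage once you fix the nominal delta-T, so the error margin on C/W shrinks as the differential grows. Someone correct me if I'm botching this.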