http://www.rit.edu/~vwlsps/uncertain...rtainties.html
http://www.rit.edu/~vwlsps/uncertain...tiespart1.html
http://www.rit.edu/~vwlsps/uncertain...tiespart2.html
Fairly general overview. I have a set of old handwritten analytical chemistry lecture notes I actually refer to though for most basic stats.
Quote:
One thing that's stumping me right now (a brain fart): if I have a temperature differential (die to water) error of, say, +/- 0.2 deg C, and a 70 Watt source (measured at +/- 2%), then what's my C/W's error margin?
Think about it logically. The upper limit on the calculated C/W occurs when you overestimate the temperature differential by the largest amount and underestimate the wattage by the largest amount. The lower limit is the smallest temp differential divided by the largest wattage. So say you have a true 0.2 C/W; that's a 14 C delta T. The range of C/W you can get from your error is 13.8/71.4 = 0.19 to 14.2/68.6 = 0.21, so your C/W value is +/- 0.01. That's actually pretty good; you'll see bigger differences than that from wb mount to wb mount.
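The worst-case arithmetic above is easy to sketch in code. Here's a minimal Python version, using the example's numbers (14 C delta T +/- 0.2 C, 70 W +/- 2%); the function name and structure are just illustrative:

```python
# Worst-case error bounds for thermal resistance C/W = delta_T / watts.

def cw_bounds(dt, dt_err, watts, watts_pct_err):
    """Return (low, high) worst-case bounds on C/W.

    dt, dt_err        -- temperature differential and its absolute error (C)
    watts, watts_pct_err -- power and its fractional error (e.g. 0.02 for 2%)
    """
    w_err = watts * watts_pct_err
    low = (dt - dt_err) / (watts + w_err)   # smallest delta T over largest wattage
    high = (dt + dt_err) / (watts - w_err)  # largest delta T over smallest wattage
    return low, high

low, high = cw_bounds(14.0, 0.2, 70.0, 0.02)
print(round(low, 3), round(high, 3))  # prints 0.193 0.207
```

That spread of roughly 0.19 to 0.21 around a nominal 0.2 C/W is the +/- 0.01 quoted above.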
Digitec and my YSI probes are guaranteed to within 0.3 C accuracy out of the box. My Fluke is probably a little worse. This number is a worst-case statement from the manufacturer that the reading will be within this range of the true temperature; it is not a measure of how tightly the instrument tracks a change in temperature. I would expect the Digitec to be offset by as much as 0.3 C, but when using it to monitor changes in temperature, the error in the delta T is much smaller, meaning the linearity of its response is quite good. I can't necessarily say the same about my thermocouples, so I don't care for them as much.
What I'm saying, I guess, is that I have bigger error bars if I want to compare my numbers with your numbers than if I compare a block I test today against a block I test next month on the same setup.