The difference between sensor error and uncertainty
QUESTION: What is the difference between sensor measurement error and the sensor uncertainty as published in a sensor’s datasheet?
ANSWER: Sensor error is the deviation of the measured value from the actual value of the property being measured. When measuring atmospheric air temperature, it is the sensor reading minus the actual air temperature.
The hard part is determining the actual air temperature so that the sensor error can be calculated. All sensors produce measurement error, even those used to establish the reference temperature. Measurement error is made up not only of the sensor's deviation but is also influenced by how quickly the measured property changes relative to the sensor's ability to react to those changes, and this is where things can get very complicated.
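As a minimal sketch of the definition above, the values below are assumed: a reference thermometer supplies the "actual" temperature and a second sensor is under test.

```python
# Hypothetical readings (assumed values, for illustration only):
reference_temp_c = 20.00   # "actual" air temperature from a reference thermometer
sensor_reading_c = 20.35   # reading from the sensor under test

# Sensor error = measured value minus actual value
sensor_error_c = sensor_reading_c - reference_temp_c
print(f"Sensor error: {sensor_error_c:+.2f} °C")  # → +0.35 °C
```

Note that the reference thermometer itself carries its own error, so in practice this calculation only bounds the sensor error to within the reference's uncertainty.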
How is sensor datasheet uncertainty determined?
In simple terms, the measurement uncertainty stated in a datasheet is the range spanned by the 95 measurements closest to the mean of 100 measurements. So if those 95 measurements fall within plus or minus 0.5 °C of the 100-measurement mean, then the sensor uncertainty written in the datasheet will be ±0.5 °C.
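The "95 closest of 100" idea can be sketched as follows. The bath temperature and noise level are assumed values chosen only to make the simulation concrete.

```python
import random
import statistics

random.seed(42)

# Assumed setup: 100 repeated readings of a bath held at 25.0 °C,
# with simulated Gaussian sensor noise of 0.25 °C (1 sigma).
readings = [25.0 + random.gauss(0, 0.25) for _ in range(100)]
mean = statistics.mean(readings)

# Sort readings by distance from the mean and keep the 95 closest,
# discarding the 5 farthest outliers.
deviations = sorted(abs(r - mean) for r in readings)
closest_95 = deviations[:95]

# The datasheet-style uncertainty is the half-width that just
# contains those 95 readings.
uncertainty = closest_95[-1]
print(f"mean = {mean:.2f} °C, uncertainty ≈ ±{uncertainty:.2f} °C")
```

With 0.25 °C noise, the reported figure lands near ±0.5 °C, matching the 2σ interpretation described further below.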
In practice, the determination of the ±0.5 °C is rather more mathematical: it is calculated using standard deviation equations based on scientific and statistical methods that have been standardized worldwide. This ensures that manufacturers and scientists are all on the same page when specifying sensor uncertainty. Yet despite this, variation still occurs, and other factors, such as the sensor time constant τ (tau), the time needed to respond to 63.2% of a step change, can shift and expand the sensor uncertainty in many applications, such as atmospheric and climatic air temperature measurement.
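The 63.2% figure comes from modelling the sensor as a first-order system: one time constant after a step change, the reading has covered a fraction 1 − e⁻¹ ≈ 0.632 of that step. The temperatures and τ below are assumed values for illustration.

```python
import math

# Assumed first-order sensor model after a step change in air temperature.
t_initial = 10.0   # °C, sensor reading before the step (assumed)
t_final = 20.0     # °C, new air temperature after the step (assumed)
tau = 30.0         # s, assumed sensor time constant

def reading(t):
    """Sensor reading t seconds after the step change (first-order response)."""
    return t_final + (t_initial - t_final) * math.exp(-t / tau)

# After exactly one time constant, the sensor has covered 63.2% of the step.
fraction = (reading(tau) - t_initial) / (t_final - t_initial)
print(f"response after one tau: {fraction:.1%}")  # → 63.2%
```

In fast-changing air, a sensor with a large τ lags the true temperature, which is one way the field uncertainty can exceed the steady-state datasheet figure.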
The sensor datasheet uncertainty (or probable error of measurement) is also called the 95% uncertainty. In technical terms it is the two-standard-deviation, or 2σ (two-sigma), variation. It is the result of extensive sensor testing under steady-state conditions, where, for example, a liquid's temperature is held constant in a well-insulated container and the sensor is given plenty of time to equalize with the liquid before a reading is taken.
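The 2σ calculation itself can be sketched as below; the bath temperature and noise level are assumed values standing in for a real steady-state test run.

```python
import random
import statistics

random.seed(7)

# Assumed steady-state test: 100 readings of a constant-temperature bath,
# with simulated Gaussian sensor noise of 0.25 °C (1 sigma).
readings = [25.0 + random.gauss(0, 0.25) for _ in range(100)]

sigma = statistics.stdev(readings)   # sample standard deviation of the readings
uncertainty_95 = 2 * sigma           # 2 sigma covers ~95% of a normal distribution
print(f"sigma = {sigma:.3f} °C  ->  95% uncertainty ≈ ±{uncertainty_95:.2f} °C")
```

Because the readings are taken at equilibrium, this figure excludes any dynamic (time-constant) effects; it describes only the sensor's steady-state scatter.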