Key points
The application, design and specification of new instruments involve careful consideration of a large number of parameters. When it comes to the measurement itself, focus is typically placed on the measurement range and accuracy of the instrument, without considering the impact on the overall measurement system or the process being measured.
The measurement system starts with the instrument, but more often than not in an industrial setting the instrument’s signal passes through a transmitter, an input card and a PLC before the final result is displayed on a computer screen; together, all of these pieces of equipment make up the measurement system.
The measurement system is also subject to additional uncertainties from the reference standard used during calibration and from its loop tolerance, which is, for all intents and purposes, its allowable uncertainty.
Measurement System: Accuracy vs Uncertainty
Before going any further, let’s take a moment to consider the difference between the terms accuracy and uncertainty in the context of measurement. The term accuracy refers to the difference between a measured value and the true value.
This definition is itself arguably flawed, as a true value can never be perfectly measured. The term uncertainty refers to the fact that no measurement is perfect; there is always a degree of error/uncertainty based on the limit of the measurement’s capability.
When describing measurements, the difference between these terms doesn’t really matter; both are used to describe the limit of the measurement’s capability. For example, one could state that a Class A 100Ω Platinum Resistance Temperature Detector has an accuracy of +/-0.35°C over the range 0 – 100°C, or that it has an uncertainty of +/-0.35°C over the range 0 – 100°C.
It’s worth noting that whichever term is used to express the limitation of measurement capability, a worst-case scenario should always be assumed. To illustrate this point further, let us return to the example above of the Class A Pt100, whose uncertainty varies depending on the temperature:
| Temperature (°C) | Uncertainty (°C) |
| --- | --- |
| -200 | +/-0.55 |
| -100 | +/-0.35 |
| 0 | +/-0.15 |
| 100 | +/-0.35 |
| 200 | +/-0.55 |
| 300 | +/-0.75 |
| 400 | +/-0.95 |
| 500 | +/-1.15 |
| 600 | +/-1.35 |
Whilst it has an uncertainty of +/-0.35°C over the range 0 – 100°C, if it were measuring a process that varied from 0 – 200°C its uncertainty would increase to +/-0.55°C. This is not true of the measurement all the time; at least some of the time it would be operating with an uncertainty of +/-0.35°C or less. Nevertheless, when characterising the limitations of the measurement over a 0 – 200°C range, it would be ill-advised not to assume the worst-case scenario from the perspective of good design practice.
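The values in the table follow the IEC 60751 Class A tolerance formula, +/-(0.15 + 0.002 × |t|) °C. A minimal sketch of applying the worst-case principle over a process range (since the tolerance grows with |t|, it is enough to check the two endpoints):

```python
def pt100_class_a_uncertainty(temp_c: float) -> float:
    """IEC 60751 Class A tolerance for a Pt100: +/-(0.15 + 0.002 * |t|) degC."""
    return 0.15 + 0.002 * abs(temp_c)

def worst_case_uncertainty(low_c: float, high_c: float) -> float:
    """Worst-case uncertainty over a process range; the tolerance grows with
    |t|, so only the two endpoints need checking."""
    return max(pt100_class_a_uncertainty(low_c), pt100_class_a_uncertainty(high_c))

print(round(worst_case_uncertainty(0, 100), 2))  # 0.35 degC
print(round(worst_case_uncertainty(0, 200), 2))  # 0.55 degC, as in the table
```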
Instrument Uncertainty vs Measurement System Uncertainty
The above example of the Pt100 considers instrument uncertainty only! The chances are that this probe is connected to, or integrated with, a transmitter, which will likely add an additional uncertainty (typically 0.1% of the configured measurement range for a 4-20mA transmitter), and the transmitter is likely connected to a PLC or DCS (input cards typically add a further 0.1% of uncertainty).
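As a rough illustration of how these contributions stack up, here is a minimal sketch that combines independent uncertainty components in quadrature (the root-sum-square approach of the GUM); the 0 – 150°C configured span is a hypothetical figure:

```python
import math

def combined_uncertainty(*components: float) -> float:
    """Root-sum-square combination of independent uncertainty components."""
    return math.sqrt(sum(u ** 2 for u in components))

span_degc = 150.0                # hypothetical configured range of 0 - 150 degC
sensor = 0.35                    # Class A Pt100 worst case over 0 - 100 degC
transmitter = 0.001 * span_degc  # 0.1% of configured span = 0.15 degC
input_card = 0.001 * span_degc   # 0.1% of configured span = 0.15 degC

print(round(combined_uncertainty(sensor, transmitter, input_card), 2))  # ~0.41 degC
```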
These uncertainties are largely inescapable; they can be mitigated by buying the most expensive and most accurate (i.e. least uncertain) measurement sensors, which in the case of the Pt100 would be a 1/10 DIN sensor with an uncertainty of +/-0.08°C over the range 0 – 100°C.
A transmitter that communicates digitally (e.g. FOUNDATION Fieldbus) rather than via a 4-20mA analogue signal would remove the uncertainties associated with analogue communication between the transmitter and the PLC.
The uncertainties don’t end there though, because the measurement system must be maintained in a qualified state via calibration. The reference standard can have a huge impact on measurement system uncertainty: since independent uncertainties combine in quadrature (root-sum-square), a reference standard with the same uncertainty as the instrument being calibrated can add up to an additional ~40% uncertainty to the overall measurement system.
If the reference standard has half the uncertainty of the instrument being calibrated, it adds only around 12% additional uncertainty. A common recommendation is that a reference standard should be at least twice as accurate as the instrument it is being used to calibrate.
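A minimal sketch of that arithmetic, assuming quadrature combination of the instrument’s and reference standard’s uncertainties:

```python
import math

def calibration_uncertainty_increase(instrument_u: float, reference_u: float) -> float:
    """Fractional growth of the combined uncertainty when the reference
    standard's uncertainty is combined in quadrature with the instrument's."""
    return math.sqrt(instrument_u ** 2 + reference_u ** 2) / instrument_u - 1.0

print(round(calibration_uncertainty_increase(1.0, 1.0) * 100))   # ~41%: equal uncertainty
print(round(calibration_uncertainty_increase(1.0, 0.5) * 100))   # ~12%: half the uncertainty
print(round(calibration_uncertainty_increase(1.0, 0.25) * 100))  # ~3%: a 4:1 ratio
```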
The loop tolerance (i.e. the calibration tolerance) adds further uncertainty. Whilst this can be mitigated to a point by using tighter tolerances, the trade-off is an increased risk of calibration failure (discussed later in this article).
Does it matter?
So far this might be starting to sound like something that belongs in a textbook, with no real-world application! In a GMP manufacturing setting, this couldn’t be further from the truth! Proper consideration of measurement system uncertainty is the key to:
- Equipment Capability
- Measurement System Reliability
- Conformity Assessment
Measurement System: Equipment Capability
It is common for regulatory bodies to require that instruments used in process measurement be fit for their intended purpose and capable of producing valid results (e.g. FDA 21 CFR). To produce valid results, the measurement system uncertainty must be proportionately smaller than the difference between the upper and lower limits within which the process operates, and the system must be able to provide measurements across the entire process range.
The Test Uncertainty Ratio (TUR), the ratio of the process tolerance to the measurement system uncertainty, is commonly used to evaluate the suitability of a measurement in its intended process application.
A Test Uncertainty Ratio of 4:1 (or greater) is widely accepted in industry as the ideal (although it is not always achievable). Understanding and quantifying measurement system uncertainty is key to controlling process variability.
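A minimal sketch of a TUR check, using one common formulation (half the width of the process limits divided by the +/- measurement system uncertainty); the process limits are hypothetical and the ~0.41°C figure is the loop uncertainty combined earlier:

```python
def test_uncertainty_ratio(lower_limit: float, upper_limit: float,
                           system_uncertainty: float) -> float:
    """TUR as half the process tolerance band divided by the +/- measurement
    system uncertainty (other formulations exist)."""
    return ((upper_limit - lower_limit) / 2.0) / system_uncertainty

# Hypothetical process limits of 20 - 24 degC:
tur = test_uncertainty_ratio(20.0, 24.0, 0.41)
print(round(tur, 1), "meets 4:1" if tur >= 4.0 else "below the 4:1 ideal")  # 4.9 meets 4:1
```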
Measurement System Reliability
A reliable measurement system will maintain its qualified state (i.e. remain within loop tolerance) between calibrations. If the loop tolerance is smaller than the measurement system uncertainty, the loop will frequently fail calibration no matter how short the interval between calibrations.
Typical best practice is to design instrument loops so they can statistically pass calibration 95% or more of the time by ensuring the loop tolerance is at least 2 times the instrument loop uncertainty (the combined uncertainty of the sensor, transmitter and PLC input card). Assuming normally distributed errors, a tolerance of twice the standard uncertainty covers roughly 95% of readings.
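A minimal sketch of that statistic, assuming the loop error is normally distributed with the combined loop uncertainty as its (k=1) standard deviation:

```python
import math

def pass_probability(loop_tolerance: float, loop_uncertainty: float) -> float:
    """Probability a reading falls within +/-loop_tolerance when the error is
    normal with standard deviation loop_uncertainty."""
    return math.erf((loop_tolerance / loop_uncertainty) / math.sqrt(2.0))

print(round(pass_probability(2.0, 1.0), 3))  # ~0.954: tolerance = 2x uncertainty
print(round(pass_probability(1.0, 1.0), 3))  # ~0.683: tolerance = uncertainty
```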
Conformity Assessment
In an industrial measurement context, conformity assessment is about practically demonstrating that a system is operating within specified limits (e.g. environmental, process safety, regulatory commitments, etc.).
If a process measurement displays the same value as its upper or lower limit, there is a 50% chance that the true value has exceeded that limit (due to the measurement uncertainty). Without knowing the measurement system uncertainty, it’s impossible to know at what point in the measurement range the process risks breaching its specified limits. It’s not unusual for regulatory bodies to ask whether measurement uncertainty has been considered as a process measurement approaches a critical limit.
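A minimal sketch of that 50% figure, assuming a normally distributed error centred on the reading; the readings, limit and uncertainty values are hypothetical:

```python
import math

def prob_exceeds_upper_limit(reading: float, limit: float, uncertainty: float) -> float:
    """Probability the true value lies above an upper limit, assuming a
    normal error distribution centred on the reading (k=1 uncertainty)."""
    z = (limit - reading) / uncertainty
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

print(prob_exceeds_upper_limit(24.0, 24.0, 0.41))            # 0.5 at the limit itself
print(round(prob_exceeds_upper_limit(23.0, 24.0, 0.41), 3))  # ~0.007 well inside it
```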
Overall measurement system uncertainty is not the easiest subject, and there isn’t a huge amount of user-friendly information available about its application. The key points to remember are:
- Every measurement has an associated uncertainty.
- When considering measurement uncertainty, examine the entire measurement system and not just the instrument.
- If the measurement system uncertainty is large relative to the process limits, the displayed value might be within the limits while the actual value is outside them.
This article was never intended to be a complete guide to evaluating measurement uncertainty, but a primer to introduce the concepts and their value in industry. For an introduction to how to evaluate measurement uncertainty, “Measurement System Uncertainty: Part 2 – How?” will be coming in a subsequent issue of Process Industry Informer.