
A World Under Pressure: Pressure Measurements For Process Industries

Introduction

Pressure is the second most measured parameter after temperature in process industries. It is often important to measure pressure accurately, to control the quality of the final product, and quickly, to control dynamic processes.

Other important considerations are the challenging environments of many applications, the safety requirements of use in explosive atmospheres and the cost of maintaining calibration records. The choice of suitable pressure measurement products can be daunting.  

This article considers the different technologies that are available and what advantages each brings, as well as how those performance advantages suit particular applications.

Neculai Moisoi

Senior Metrologist, Druck

Ian Abb

Industrial Product Manager, Druck

This article will also describe the typical uses of pressure sensors in process industries and the challenges that users face.

Fundamental types of pressure sensors

The pressure sensor industry underwent rapid development after the invention of the bonded strain gauge in 1938 by E. E. Simmons of the California Institute of Technology and A. C. Ruge of the Massachusetts Institute of Technology. Whilst many form factors are available, a generic appearance of a pressure sensor is presented below.


Figure 1: A generic pressure sensor with an isolation diaphragm

Types by sensing principle

Piezo-resistive sensors are the most widely used type of sensor, due to the variety of applications that they suit, their high levels of accuracy and their normally robust construction.

Most piezo-resistive sensors are based on a Wheatstone bridge on a silicon substrate, where each resistor in the bridge changes its value with the applied strain/pressure; this signal can then be conditioned into a variety of electrical outputs.
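
As an illustration of this ratiometric behaviour, here is a minimal sketch assuming an idealised full bridge whose fractional resistance change reaches a hypothetical 0.2 % at full-scale pressure; the figures are illustrative, not taken from any specific sensor.

```python
# Minimal sketch of an idealised piezo-resistive Wheatstone bridge.
# All figures are illustrative, not from any specific sensor datasheet.

def bridge_output_mv(pressure_mbar, full_scale_mbar=1000.0,
                     excitation_v=5.0, full_scale_ratio=0.002):
    """Differential bridge output (mV) for a given applied pressure.

    Assumes a linear full bridge whose fractional resistance change
    reaches full_scale_ratio (0.2 %) at full-scale pressure, so that
    output = excitation * (delta_R / R).
    """
    delta_r_over_r = full_scale_ratio * pressure_mbar / full_scale_mbar
    return excitation_v * delta_r_over_r * 1000.0  # V -> mV

for p in (0, 250, 500, 1000):
    print(f"{p:5d} mbar -> {bridge_output_mv(p):5.1f} mV")
```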

There are a few applications that require parameters outside the capabilities of piezo-resistive sensors; however, for most industrial applications piezo-resistive sensors are the preferred choice.

Capacitive sensors also come in a wide variety of shapes and forms, generally of very simple construction, where a thin diaphragm forms one plate of a capacitor; the applied pressure causes elastic deformation/movement of the diaphragm and hence a change in the electrical capacitance.

Due to their high sensitivity, they make good sensors for measuring pressures lower than 20 mbar (20 hPa), but caution should be used as they are typically sensitive to vibration and shock.

Inductive sensors take a similar approach to capacitive sensors: a sensing (elastic) capsule moves a core element inside a linear variable differential transformer, so the change in inductance is proportional to the applied pressure.

A large variety of sensing capsules can be used with inductive sensors, meaning that they can cover different ranges, but care must be taken with electrical noise and/or shock and vibration.

Resonant pressure transducers are among the most accurate silicon-based sensors on the market. Their operating principle is based on the change in frequency of a resonator as stress is applied to it, generally through a silicon diaphragm connected to the ends of the resonator.

They are generally robust sensors and come in different form factors with various degrees of protection against environmental factors. However, their mechanical principle can make them vulnerable to mechanical waves, especially if these waves excite different frequency modes of the resonator.

Types by output

The two main types of pressure sensor output are continuous (analogue) quantities and discrete (digital) signals.

The continuous output types are mV, V, mA and Hz, and each has its own advantages and disadvantages. With this in mind, each should be chosen based on the application and environment. As an example, a mV output is desirable if the signal has to be modified in a bespoke application without the need to send it over long distances, while a V or mA output can be sent over relatively long distances, as it is less likely to be affected by the operating environment than a mV output.
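
As a simple illustration, the sketch below linearly scales a 4-20 mA or 0-5 V reading back into pressure for a hypothetical 0-1000 mbar sensor; the ranges are assumptions chosen for the example, not a particular product's specification.

```python
# Sketch of linear scaling for analogue outputs, assuming a hypothetical
# 0-1000 mbar sensor with either a 4-20 mA or a 0-5 V output.

def pressure_from_current(i_ma, p_min=0.0, p_max=1000.0,
                          i_min=4.0, i_max=20.0):
    """Convert a 4-20 mA loop reading into pressure (mbar)."""
    return p_min + (i_ma - i_min) * (p_max - p_min) / (i_max - i_min)

def pressure_from_voltage(v, p_min=0.0, p_max=1000.0,
                          v_min=0.0, v_max=5.0):
    """Convert a 0-5 V output into pressure (mbar)."""
    return p_min + (v - v_min) * (p_max - p_min) / (v_max - v_min)

print(pressure_from_current(12.0))   # mid-scale current -> 500.0 mbar
print(pressure_from_voltage(1.25))   # quarter-scale voltage -> 250.0 mbar
```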

Digital output pressure sensors are becoming more popular as they can be easily integrated into computerised systems, can share the same set of wires between multiple sensors (Modbus, Profibus, CAN bus), can be used in a “plug & play” configuration (RS232, USB), or can even be connected without wires (wireless, Bluetooth).
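
As a purely hypothetical sketch, the snippet below reads one line from an RS232-style sensor using pyserial; the port name, baud rate and the assumption that the sensor streams plain ASCII pressure values are illustrative and do not describe any specific product's protocol.

```python
# Hypothetical RS232 read using pyserial (pip install pyserial).
# Port, baud rate and message format are assumptions for illustration.
import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    raw = port.readline().decode("ascii", errors="ignore").strip()
    if raw:  # e.g. a sensor streaming lines such as "998.72"
        pressure_mbar = float(raw)
        print(f"Pressure: {pressure_mbar:.2f} mbar")
```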

Pressure sensor metrological characteristics

Pressure sensors’ metrological characteristics can vary substantially from manufacturer to manufacturer and it is very important to understand these characteristics to ensure a suitable sensor is chosen for the intended application.

From a pure metrological perspective, some of the parameters (e.g. repeatability, precision, accuracy) have a qualitative definition; however, through the years they have been used as quantitative parameters, so we will use them in the same way. Below we focus on the main metrological parameters:

Signal Offset

Signal offset is the error of the sensor at the minimum pressure (Figure 2 shows the signal offset for a pressure sensor with a range of 0 to 1000 mbar and an output of 0 to 5 V). From a practical point of view, it is important to know whether the sensor offset can be adjusted (“re-zeroed”), as many sensors drift over time and the ability to “re-zero” is therefore desirable.

Together with the offset adjustment, many sensors have the capability to adjust the span (output at maximum pressure minus output at minimum pressure), which also will help to correct drift over time. Measuring and resetting such offsets requires a calibration and maintenance plan to ensure performance remains within required limits.
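
A minimal sketch of such a two-point (zero and span) correction is shown below; it assumes the sensor has been compared against a reference at its minimum and maximum pressures, and the readings are illustrative.

```python
# Sketch of a two-point (zero and span) correction, assuming the sensor
# has been checked against a reference at minimum and maximum pressure.

def make_correction(reading_at_zero, reading_at_fs,
                    true_zero=0.0, true_fs=1000.0):
    """Return a function that corrects raw readings (mbar) for offset
    and span drift using two calibration points."""
    gain = (true_fs - true_zero) / (reading_at_fs - reading_at_zero)
    def correct(raw):
        return true_zero + (raw - reading_at_zero) * gain
    return correct

# Illustrative drift: the sensor reads 3 mbar at zero and 1005 mbar at span.
correct = make_correction(reading_at_zero=3.0, reading_at_fs=1005.0)
print(correct(3.0))     # ~0.0 mbar after correction
print(correct(1005.0))  # ~1000.0 mbar after correction
```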


Figure 2: Pressure sensor offset


Sensitivity

The sensor sensitivity is the ratio of the output signal change to the pressure change. If we consider the graph above, where the output changes by 5 V while the pressure changes by 1000 mbar, then the sensitivity is 5 mV/mbar. This is an important parameter for the way the signal is used in the application, as well as for determining how sensor performance will be affected by electrical noise.
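
The short calculation below reproduces that figure and, for an assumed 0.5 mV of electrical noise on the signal line (an illustrative value), shows the equivalent pressure noise it would introduce.

```python
# Worked example using the article's 0-1000 mbar / 0-5 V sensor.
span_v = 5.0           # output span in volts
span_mbar = 1000.0     # pressure span in mbar
sensitivity_mv_per_mbar = span_v / span_mbar * 1000.0
print(sensitivity_mv_per_mbar)             # 5.0 mV/mbar

noise_mv = 0.5         # assumed electrical noise on the signal line
print(noise_mv / sensitivity_mv_per_mbar)  # ~0.1 mbar equivalent noise
```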

Precision

Precision is generally the term used to describe a sensor's behaviour in terms of its repeatability, linearity error and signal hysteresis. Traditionally, some manufacturers have used the term “accuracy” to describe this parameter.

However, as a rule of thumb, whatever the name for this parameter, the best approach is to understand what its constituent parts are. Precision as a parameter does not tell us how accurately we are measuring pressure, but rather how the sensor itself behaves. For example: is it repeatable? Is it linear? Is there pressure or temperature hysteresis?

Repeatability is the closeness of the agreement between the results of successive measurements of the same pressure carried out under the same conditions of measurement over a relatively short period of time. Often the repeatability is determined as the standard deviation of the repeated measurements or as the range (maximum minus minimum).
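
A minimal sketch of estimating repeatability from a short series of repeated readings (illustrative values in mbar) follows.

```python
# Repeatability from repeated readings at the same applied pressure.
import statistics

readings = [500.1, 500.3, 499.9, 500.2, 500.0, 500.1]  # mbar, illustrative
sd = statistics.stdev(readings)            # standard deviation
spread = max(readings) - min(readings)     # maximum minus minimum
print(f"std dev: {sd:.3f} mbar, range: {spread:.3f} mbar")
```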

Linearity error is determined as the difference between the value measured by the sensor and a theoretical line (determined either as the BSL – best straight line fit – or the TSL – terminal straight line fit), which assumes linear behaviour of the sensor. Figure 3 represents the linearity error for a BSL case; in order to characterise the sensor, the maximum error is quoted (as the worst-case scenario).


Figure 3: Pressure sensor linearity error
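
The sketch below illustrates the idea using a least-squares fit as a stand-in for the best straight line, with illustrative calibration points for a hypothetical 0-2000 mbar sensor; the worst-case residual is reported as the linearity error.

```python
# Best-straight-line style linearity check (least-squares fit used here
# as a stand-in for the BSL); data are illustrative.
import numpy as np

applied = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])    # mbar
measured = np.array([0.4, 500.9, 1001.1, 1500.6, 1999.8])   # mbar

slope, intercept = np.polyfit(applied, measured, 1)  # fitted straight line
residuals = measured - (slope * applied + intercept)
print(f"linearity error: {np.max(np.abs(residuals)):.2f} mbar")  # worst case
```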

Pressure or temperature hysteresis error is the difference between two separate measurements taken at the same point, one where the value is increasing and one where the value is decreasing. The size of the hysteresis varies based on both the pressure sensor technology and the physical construction of the sensor.


Figure 4: Pressure hysteresis error
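
A minimal sketch of estimating pressure hysteresis from rising and falling sweeps over the same points (illustrative values for a hypothetical 0-100 mbar sensor) is shown below.

```python
# Hysteresis from paired rising/falling readings at the same pressures.
applied = [0.0, 25.0, 50.0, 75.0, 100.0]         # mbar, illustrative
rising  = [0.02, 25.04, 50.08, 75.06, 100.03]    # readings, pressure rising
falling = [0.03, 25.09, 50.15, 75.10, 100.03]    # readings, pressure falling

hysteresis = max(abs(up - down) for up, down in zip(rising, falling))
print(f"max hysteresis error: {hysteresis:.2f} mbar")
```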


Generally, the three parameters described above are included in one specification, which defines the acceptable limits for precision (example: precision is ±0.1 % of full scale).

Accuracy

Accuracy should be associated with the specified measurement error, including the impact of systematic error, random error and drift (in cases where accuracy is specified over a period of time). The accuracy of a pressure sensor or of a measurement is obtained as part of the measurement uncertainty evaluation and includes many factors, including the standard and/or the Unit Under Test (UUT) uncertainties, precision, etc. Evaluating the uncertainty of measurement/calibration requires specialist knowledge, so here we will focus on how to interpret accuracy.

Every measurement should be accompanied by its measurement uncertainty (either through an accuracy statement in the data sheet or an uncertainty in a calibration certificate). Most of the time, the accuracy is evaluated as an expanded uncertainty, which is assumed to follow a normal distribution with a coverage factor k = 2. In simple terms, as in the example below, the true value of the measured quantity x is found with 95 % probability within the range (x − U, x + U).

 


Figure 5: Accuracy representation for a pressure value X
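
The sketch below shows how such a statement is interpreted in practice; the measured value and the expanded uncertainty are illustrative numbers, not taken from any particular certificate.

```python
# Interpreting an expanded uncertainty U (coverage factor k = 2, ~95 %).
x = 800.0   # measured pressure, mbar (illustrative)
U = 0.5     # expanded uncertainty, mbar (illustrative)

print(f"true value within ({x - U:.1f}, {x + U:.1f}) mbar "
      f"with ~95 % probability")

# If only a standard uncertainty u is stated, U = k * u with k = 2:
u = 0.25
print(2 * u)   # 0.5 mbar
```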


When comparing precision with accuracy for a pressure sensor: precision tells us how the sensor behaves, while accuracy (which includes the precision factors) tells us how accurate our measurement is, or what boundaries contain the true value of the measurement.

Long Term Stability (Drift)

Long term stability of an instrument is often referred to by its opposite quantity, namely the long-term drift, as per the following definitions:

  • Stability is the property of a measuring instrument whereby its metrological properties remain constant in time
  • Instrumental drift is the continuous or incremental change over time in indication, due to changes in the metrological properties of a measuring instrument

Most of the time, the drift of an instrument follows a given mathematical model over time, but due to the part-to-part variation within any given model, the drift is expressed as a tolerance range (for example, D = ±10 Pa) and should be included in the overall accuracy of the instrument.

Pressure sensors generally exhibit some form of drift over time, so it is important that systems are designed with the potential to be adjusted for both offset and span drift, and that a calibration and maintenance programme is adopted.
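
As a rough sketch, assuming for illustration that drift accumulates approximately linearly, a drift figure can be turned into a maximum recalibration interval as follows; both numbers are hypothetical.

```python
# Drift-based recalibration interval, assuming roughly linear drift.
drift_per_year = 0.5   # mbar/year, hypothetical datasheet figure
drift_budget   = 1.0   # mbar allowed for drift between calibrations

print(f"recalibrate at least every {drift_budget / drift_per_year:.1f} years")
```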

Influence quantities

Influence quantities are any external (i.e. not included in the input/output) quantities, which can influence the performance of a pressure sensor. Most of the time, the pressure sensor manufacturers provide the range for the influence quantity and its effect on the metrological characteristics of the sensor.

The influence of external factors is mostly given as a tolerance range, which should be taken into account when evaluating the instrument accuracy. For example, when considering the temperature effect on a pressure sensor, the influence quantity (temperature) might be defined as -10 to +50 °C and its effect as a tolerance of ±0.75% of full scale.
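
The short sketch below converts that temperature-effect band into pressure units for a hypothetical 0-1000 mbar sensor and compares it with an assumed precision of ±0.1 % of full scale; both specifications are illustrative.

```python
# Temperature effect versus precision for a hypothetical 0-1000 mbar sensor.
full_scale_mbar = 1000.0
temp_effect_fs  = 0.0075   # +/-0.75 % of full scale over -10 to +50 degC
precision_fs    = 0.001    # +/-0.1 % of full scale (assumed precision spec)

print(temp_effect_fs * full_scale_mbar)  # +/-7.5 mbar from temperature
print(precision_fs * full_scale_mbar)    # +/-1.0 mbar from precision
# Here the temperature effect dominates the error budget by roughly 7x.
```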

The influence quantities and their effects should be supplied by manufacturers through their datasheets, and they differ based on what the measured quantity is and the type of sensor. The most common influence quantities are: temperature, humidity, atmospheric pressure, electromagnetic fields, vibrations and noise.

Based on the application, the stated specifications for influencing quantities should be closely examined, as in some cases the effects induced are much greater than the claimed precision and/or accuracy.

Calibration of pressure sensors

In years past, pressure sensors would be taken out of the system into which they were incorporated and calibrated in a metrological laboratory. However, this comes with an associated cost, as either spare sensors need to be installed to prevent downtime, or the system simply stops and the asset is down until the sensors are returned from calibration/testing.

In modern times, most sensors are calibrated “on the spot” using pressure calibrators, some of which have the capability to generate pressure and measure the sensor output (in mV, V, mA, etc.) at the same time. Also, many of the calibrators can evaluate the calibration uncertainty (often referred to as “accuracy”) and store/transmit the data automatically through a data management system.

It is always advisable to have such a system, as it keeps all calibration data secure, which makes it easier to manage the assets, reduces reporting errors and helps to conform with ISO certifications.

The method of calibration is a direct comparison and, besides the operational checks that need to be performed beforehand (choosing the right fittings for the pressure connections, ensuring a leak-free system, safety precautions, etc.), the calibrator should, as a rule of thumb, be at least four times more accurate than the sensor being calibrated.


Figure 6: Remote calibration of a pressure sensor by using a modular calibrator
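
A rough sketch of that 4:1 rule of thumb is shown below; both accuracy figures are illustrative and would in practice come from the relevant datasheets at the conditions of the test.

```python
# 4:1 rule of thumb for calibrator selection (illustrative figures, mbar).
sensor_accuracy     = 1.0   # e.g. +/-0.1 % FS on a 1000 mbar sensor
calibrator_accuracy = 0.2   # hypothetical calibrator specification

ratio = sensor_accuracy / calibrator_accuracy
verdict = "OK" if ratio >= 4 else "calibrator not accurate enough"
print(f"test accuracy ratio {ratio:.1f}:1 -> {verdict}")
```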

Choosing the right sensor for the application

Choosing the right sensor for an application is about matching the requirements of the application to the particular parameter that the user is interested in.  

For example, in a leak test application absolute accuracy is a secondary consideration to noise. Whether the reading has changed is key, so if the sensor misreads 10 bar as 9 bar the consequences are not profound; high resolution is of greater importance, in order to see a small change in pressure.

As another example, in a control loop the speed of response is critical. If the sensor is outputting the pressure that was present 100 ms in the past, it will be very difficult to optimise a dynamic process.

Of course, there are some applications, like fiscal transfer and delivering a mass of gas to a process, where the overall accuracy is the most important factor.  There is a 1:1 relationship between pressure error and the error on the mass, or to put this another way, a 1% error in the pressure reading is a 1% error in the bill.  

In this situation it is important to take not only the headline “accuracy” figure into account, but also the performance over the operating temperature range and then to include the stability figure to set a recalibration period in order to maintain the overall accuracy at all times.  
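
For an ideal gas at fixed volume and temperature, the mass contained is directly proportional to pressure, which is where the 1:1 relationship comes from; the sketch below works through the arithmetic with illustrative numbers.

```python
# Why a pressure error maps 1:1 onto a gas-mass error: for an ideal gas,
# m = p * V / (R_specific * T), so mass is proportional to pressure.
V = 10.0             # m^3, vessel volume (assumed)
T = 293.15           # K, gas temperature (assumed)
R_SPECIFIC = 287.05  # J/(kg*K), dry air

def gas_mass(p_pa):
    return p_pa * V / (R_SPECIFIC * T)

true_p = 10e5                      # 10 bar in Pa
m_true = gas_mass(true_p)
m_read = gas_mass(true_p * 1.01)   # 1 % pressure over-reading
print((m_read - m_true) / m_true)  # -> ~0.01, i.e. a 1 % error in the bill
```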

Manufacturers tend to put as much detail as they can in their technical and marketing literature, but must balance the amount of information given with making the information easily accessible and understandable.

For increased certainty of best matching requirements to a particular application, it is often necessary to engage a pressure sensor supplier’s design engineering team and in some critical cases, it is possible to engage in a partnership in order to design a custom solution for a specific application.  
