Techniques for Measurement Error Reduction: A Review


1Niranjan and 2Neha Rani

1Department of EEE, 2Department of ECE, RVS College of Engineering and Technology, Jamshedpur, India

Abstract— A variety of techniques has evolved for the calculation and reduction of measurement errors. This paper throws some light on methods for error suppression. Accurate measurements are required for process control, troubleshooting and quality assurance, yet controlling error is a difficult task that depends on a list of factors; in particular, errors are uncertain under test conditions. Techniques such as evaluation of system performance by cross-correlation in the presence of pseudorandom Gaussian noise are among those used to minimize errors. This paper presents a review of these techniques.

Keywords— Noise; signal conditioning; DMM; FPGA; LAN; LXI.


  1. INTRODUCTION

    Test engineers face a unique set of challenges when measuring temperature, strain, pressure and flow in or near a test article in electrically noisy or hazardous environments. The nature of data acquisition, the calibration of measurement devices and several other factors play an important role in minimizing error. The centralized approach involves placing data acquisition instrumentation in a control room located some distance from the test article. Programmable internal signal conditioning on each channel eliminates the need for external cabling, which can otherwise result in lower measurement quality, increased setup time, and additional maintenance.

    A distributed topology protects against adjacent-channel over-voltage and the noise interference that occurs when measurements share common filter circuitry, as in a scanning DMM architecture [1]. Measurement noise can be further minimized by using statistical tools such as auto-correlation. In this paper we present a review of these techniques.
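The auto-correlation idea can be illustrated with a short numerical sketch (signal parameters are illustrative, not from the paper): averaging the product of a noisy record with a delayed copy of itself suppresses the uncorrelated noise and exposes a buried periodic component.

```python
# Illustrative sketch: auto-correlation averages away uncorrelated noise
# and exposes a periodic component hidden in a noisy record.
import numpy as np

rng = np.random.default_rng(3)
n = 8000
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / 50) + rng.standard_normal(n)  # sine buried in noise

def autocorr(x, lag):
    """Biased auto-correlation estimate R_xx(lag)."""
    return np.mean(x[: len(x) - lag] * x[lag:])

# At a lag of one full period the sine contributes about +A^2/2 = +0.125;
# at half a period it contributes about -0.125. The noise terms average
# toward zero, so the hidden periodicity shows up clearly.
r_full, r_half = autocorr(x, 50), autocorr(x, 25)
```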

    The organization of the paper is as follows. The next section describes distributed measurement systems and their advantages over the centralized measurement approach. In the third section we present the tools required for noise estimation, and finally the conclusions are presented in the fourth section.

  2. DISTRIBUTED DATA ACQUISITION

    Distributed measurements offer numerous advantages over traditional centralized methods and have become increasingly popular, especially in data acquisition applications. Advances in electronic component designs and packaging, along with the introduction of the LXI (LAN extensions for Instrumentation) interface standard, have provided the basis for a powerful new generation of instrumentation.

    The strategic placement of data acquisition instrumentation around or near the test article can result in significant advantages. The benefits of this approach include [3]:

    1. Quick setup

    2. Simplified calibration and maintenance

    3. Excitation source closer to bridges

    4. Reduced cabling costs and noise

    5. Minimized debugging

    6. Improved transportability

    The above benefits encompass the entire operational life of a project, including installation, maintenance, support, and calibration. Cost savings begin at the time of installation by greatly reducing the cost of cabling and the associated installation, debugging and testing. Simplified calibration and excitation further improve system performance. Reducing the effects of noise on transducer cables and simplifying calibration increase overall accuracy.

    Distributed measurement systems must also include a convenient mechanism for in-place self-calibration, as well as NIST-traceable calibration. Complete end-to-end internal self-calibration provides significant accuracy improvements over other test configurations. Self-calibration compensates for circuitry drift that has occurred since the last full calibration and is relatively easy to perform.

    While distributed measurement techniques can simplify setup and improve overall performance, they also introduce a new set of challenges, including synchronization, timing, and data management. LXI (LAN extensions for Instrumentation) has emerged as the next-generation instrumentation interface and resolves many of these issues. LXI is based on Ethernet, the most commonly used open-platform communications interface. LAN synchronization that incorporates the IEEE-1588 Precision Time Protocol (PTP) [2] enables multiple devices to be synchronized over a single LAN Ethernet connection. PTP defines a precision clock synchronization protocol for networked measurement and control systems, designed to enable the synchronization of systems that include clocks of different precision, resolution and stability. Where the most accurate and deterministic synchronization between devices is required, the LXI standard also defines a high-performance hardware trigger interface, referred to as the Trigger Bus, which links all devices in the test system for both triggering and clock-signal distribution.

    1. System Calibration:

      It is critical for test engineers to confidently rely on the integrity of the data produced by their measurement devices. This confidence is primarily achieved through instrument calibration and traceable verification standards. Specifically, a traceable source is used by the instrument undergoing calibration to adjust and verify the quality of the measurement. This has often been viewed as a painful but necessary process involving system disassembly and downtime.

      In most cases, test engineers are required to disassemble test stations and send each instrument to its respective vendor's factory for calibration. Some costly alternatives include ordering spare instruments for each test station, hiring an outside calibration service, or constructing an in-house calibration laboratory. Reducing these costs and alleviating the downtime associated with the calibration process ultimately benefits all test and measurement applications. VTI's integrated data acquisition and signal conditioning systems are designed to simplify the calibration process and have added features that guarantee measurement accuracy. By taking advantage of the distributed measurement approach and the benefits of LXI, and by designing instruments with on-board precision voltage references, calibration becomes more convenient and reliable. The LXI standard allows vendors to embed an easy-to-use calibration process directly into the instrument's firmware, allowing the end user to execute a complete calibration in minutes, at the click of a button.

      A fully integrated web interface streamlines the calibration process, making it more convenient and cost effective. To perform a complete NIST-traceable calibration, a host computer and precision voltmeter are required. Simply connect the voltmeter to the instrument utilizing banana jacks, access the web interface using a standard Internet browser, and click the button that commands the instrument to perform the automatic factory calibration.

    2. LAN Synchronization:

      LAN synchronization, incorporating the IEEE-1588 Precision Time Protocol (PTP), highlights another fundamental advantage of LXI Class B devices that is ideal for distributed measurements. This completely over-the-wire approach provides an ideal mechanism to synchronize multiple instruments separated by hundreds or thousands of meters. PTP defines a precision clock synchronization protocol for networked measurement and control systems. The protocol is designed to enable the synchronization of systems that include clocks of different precision, resolution and stability. Sub-microsecond accuracy can be achieved with minimal network and local clock computing resources, and with little administrative attention from the user. There are several ways in which PTP can be implemented, ranging from user-level software control, to kernel-level driver modifications, to hardware implementations utilizing dedicated FPGA devices. The highest level of precision is obtained when hardware assists in the time stamping of incoming and outgoing network packets or frames, which can keep delay fluctuations within the nanosecond range. PTP provides multiple-device synchronization while eliminating the need for external cabling between devices. This approach is less accurate than hardware triggering; however, Gigabit Ethernet can provide synchronization in the hundreds-of-nanoseconds range, which may be suitable for the slower acquisition rates common with thermocouple measurements. Below are examples of PTP implementations and the associated relative synchronization accuracy comparisons.

      Fig. 1. Networking of IEEE-1588 via software

      Table 1. Slave delta of various IEEE-1588 mechanisms
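As a minimal sketch of the PTP message exchange described above (function and variable names are illustrative, not from the standard's API), the slave's clock offset and the path delay can be computed from the four Sync/Delay_Req timestamps, assuming a symmetric network path:

```python
# Sketch of the IEEE-1588 offset/delay computation from the Sync /
# Delay_Req timestamp exchange. Real PTP stacks add filtering and
# servo control on top of this arithmetic.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives Sync,
    t3: slave sends Delay_Req, t4: master receives Delay_Req.
    Assumes the network path delay is symmetric."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Example: slave clock 500 ns ahead of the master, one-way delay 100 ns.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=600e-9, t3=1000e-9, t4=600e-9)
```

The two linear combinations separate the clock offset from the propagation delay precisely because the offset adds to one direction of travel and subtracts from the other.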

    3. Hardware Synchronization:

      IEEE-1588 provides incredible advances in over-the-wire synchronization; however, there will always be instances where additional accuracy is required. The most accurate and deterministic synchronization mechanism between multiple devices involves the implementation of a hardware trigger interface. As a result of this requirement, the LXI standard defines a high-performance trigger interface referred to as the Trigger Bus. The LXI Trigger Bus is required in LXI Class A devices and provides the link between all devices in the test system for both triggering and clock signal distribution.

      Deterministic trigger generation and propagation between multiple devices is accomplished with an eight-channel, multipoint low-voltage differential signaling (M-LVDS) interface. This architecture permits individual lines to be configured as a source and/or receiver while supporting external time-based or software-generated triggering and clock distribution. Common topologies, including star, daisy-chain, and hybrid configurations, are supported; they provide the flexibility to distribute the trigger lines as dictated by the application requirements. Additional flexibility is realized with the addition of a star hub, which permits very tight trigger tolerances to be maintained throughout a large distribution network.

  3. TOOLS FOR NOISE ESTIMATION

      1. Variance of Estimates of Correlation Functions:

    Estimates of correlation functions obtained by averaging over a finite interval have statistical errors that depend on the statistical properties of the signals and on the length of the averaging time. The mathematical expression for the magnitude of the errors is a complicated function of the signal statistics and the averaging time, and for large T the expression is given by

    $$\operatorname{var}\big(\hat{R}_{XY}(\tau)\big) \approx \frac{1}{T}\int_{-\infty}^{\infty}\big[R_{XX}(u)\,R_{YY}(u) + R_{XY}(u+\tau)\,R_{YX}(u-\tau)\big]\,du$$
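A short numerical sketch (illustrative signals, not from the paper) of the 1/T behaviour implied by this expression: the spread of a finite-record cross-correlation estimate shrinks roughly in proportion to the record length.

```python
# Numerical sketch: the variance of a finite-record cross-correlation
# estimate falls roughly as 1/T (here, 1/n samples).
import numpy as np

rng = np.random.default_rng(0)

def rxy_estimate_var(n, trials=300):
    """Variance, over repeated experiments, of the lag-zero
    cross-correlation estimate R_XY(0) computed from n samples."""
    estimates = []
    for _ in range(trials):
        x = rng.standard_normal(n)
        y = x + 0.5 * rng.standard_normal(n)  # y is x plus independent noise
        estimates.append(np.mean(x * y))      # finite-record estimate of R_XY(0)
    return np.var(estimates)

# A record ten times longer gives roughly ten times less estimator variance.
v_short, v_long = rxy_estimate_var(500), rxy_estimate_var(5000)
```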

    Fig. 2. Hardware distribution network


    Many measurements of noise and random phenomena involve averaging. The properties of the noise that we wish to measure may be simple quantities like the mean value and the R.M.S. value, or we may be interested in much more complicated functions of two variables like cross-correlation functions. In the measurement of these quantities, averaging is used. The mathematical theory suggests that averages (integrals) should be taken over infinite time to get precise results, but averages taken over finite intervals can yield results sufficiently accurate for practical purposes [4]. The result of an experiment designed to measure some parameter of a random signal is called a statistical estimate. An estimate of the mean value of a signal is written with a caret over the symbol, indicating that it is an estimate. The estimate of the mean value of the variable x(t) measured over a period of T seconds would be written as




          $$\hat{\mu}_x = \frac{1}{T}\int_0^T x(t)\,dt$$
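The finite-time average above can be checked numerically; in this sketch (parameters are illustrative) the variance of the mean estimate falls in inverse proportion to the record length.

```python
# Sketch: the variance of the finite-time mean estimate shrinks in
# inverse proportion to the averaging time.
import numpy as np

rng = np.random.default_rng(1)

def mean_estimate_var(n, trials=400):
    """Variance, over repeated experiments, of the sample-mean estimate
    of a zero-mean, unit-variance noise record of n samples."""
    return np.var([rng.standard_normal(n).mean() for _ in range(trials)])

# Four times the record length gives roughly a quarter of the variance.
v1, v4 = mean_estimate_var(250), mean_estimate_var(1000)
```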

          The error of the estimate of the mean value of a Gaussian random signal will itself be a Gaussian random variable, and in the case of band-limited Gaussian noise the variance is given by

          $$\operatorname{var}(\hat{\mu}_x) \approx \frac{\sigma_x^2}{2BT}$$

          where B is the noise bandwidth and σx is the R.M.S. value of the fluctuating component. In the particular case in which x and y are Gaussian band-limited signals with identical bandwidths, the corresponding expressions for correlation measurements take a similarly simple form.

      2. Mean Square Value Estimates:

          The measurement of the mean square value of a random signal is subject to statistical errors similar to those in the mean value estimate. In the case of bandwidth-limited white noise with zero mean value, the variance of the estimate of the mean square value is given approximately by

          $$\operatorname{var}(\hat{\sigma}_x^2) \approx \frac{\sigma_x^4}{BT}$$

          For a 1 percent maximum error with 95 percent confidence, this demands that the bandwidth-time product BT must exceed 4×10⁴. For small errors, the percentage error in the root mean square value is half the percentage error in the mean square value; e.g., a measurement that gives a ±1 percent error in the mean square value implies an accuracy of ±½ percent in terms of the R.M.S. measurement.

      3. Measurements with Noise as a Test Signal:

          Random noise can be used as a test signal in many practical situations. There are two quite separate circumstances under which it is appropriate to use a random test signal. In the first, a random signal can be used to simulate the real-life operating conditions of a practical system in order to determine its overall behavior. The second use of random signals is as an alternative to sine waves in order to collect data about the dynamic behavior of the system. A random signal is often the most appropriate test signal to use in order to simulate the actual working conditions of a system. This is especially true when nonlinear effects are present; under these conditions, knowledge of the response to a sinusoidal signal does not enable us to predict accurately the response to any other waveform. The main areas of application for statistical measurements using random test signals follow [5]:

          1. For simulating actual working conditions, especially when nonlinear effects are present

          2. In slow systems having long time constants

          3. In noisy situations which would require very long averaging times even with sine waves

          4. For short-lived or expensive test runs

  4. CONCLUSION

    All physical measurements are ultimately limited in accuracy by the presence of background noise, which introduces an unpredictable error. In communication and radar applications, a sine wave buried in noise is detected by cross-correlation with a reference sine wave. With the help of mathematical tools such as averages, mean values and correlation functions, one is able to control these errors. In this paper we have discussed some of the mathematical and architectural strategies for the reduction of measurement noise.
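The cross-correlation detection scheme for a sine wave buried in noise can be sketched numerically (frequency, amplitude and record length are illustrative): correlating the noisy record with a reference sine of the known frequency recovers the amplitude even when the noise power far exceeds the signal power.

```python
# Sketch of sine-wave detection by cross-correlation with a reference sine.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
t = np.arange(n) / 1000.0                    # 20 s record at 1 kHz
signal = 0.1 * np.sin(2 * np.pi * 5.0 * t)   # weak 5 Hz sine, amplitude 0.1
noisy = signal + rng.standard_normal(n)      # noise power far exceeds signal power

ref = np.sin(2 * np.pi * 5.0 * t)            # in-phase reference sine
corr = np.mean(noisy * ref)                  # cross-correlation at zero lag

# For an in-phase sine of amplitude A, corr -> A/2 while the noise term
# averages toward zero, so 2*corr recovers the buried amplitude.
amplitude_estimate = 2.0 * corr
```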

[1] John Semancis, "Techniques to Reduce Measurement Error in Challenging Environments", VTI Instruments Corp.

[2] IEEE, "Analysis on LAN Synchronization".

[3] John Semancis, "Techniques to Reduce Measurement Error in Challenging Environments" (PDF), VTI Instruments Corp.

[4] Ibid., p. 243.

[5] Oliver and Cage, Electronic Measurements and Instrumentation, McGraw-Hill.
