VPK Analyser

DOI : 10.17577/IJERTCONV8IS11051


Ananya K S, Shuchitha A, Bhumika L Raju, Padmini Dhruvaraj

Department of Electronics and Communication, JSSATE, Bengaluru-560060

Suguna G C

Assistant Professor, Department of Electronics and Communication, JSSATE, Bengaluru-560060

Abstract – Currently, various advanced and sophisticated machines are used to diagnose human health, but this form of analysis is expensive and can be painful. In ancient times, the wrist pulse was used to find the root cause of a disease; this method was practised in Indian Ayurveda and Traditional Chinese Medicine (TCM). According to Ayurveda, human health depends mainly on three essential components, and the symptom of a disease is an imbalance in any of these components. Variation in the health condition of a subject can be determined from wrist pulse parameters such as pulse width, transition period, pulse strength, rhythm, amplitude, shape and speed. The prime motive behind this work is to design and develop a device that measures the Vata, Pitta and Kapha components of a subject simultaneously at various quantified applied pressures, with the system connected to the cloud through IoT. This intended outcome gains sufficient momentum if the diagnostic system envisioned above is of minimal structural dimensions, thereby satisfying the consumer demand for low cost and portability. Flexible devices attached to the human skin are valuable for the detection of various health conditions owing to their ability to monitor slight changes in vital and arterial signals. This allows early detection of potential diseases and thereby promotes timely treatment.

Key Words: Vata, Pitta, Kapha, Ayurveda, Wrist Pulse.

  1. INTRODUCTION

    Ayurveda literally means Ayur, life, and Veda, knowledge. According to Ayurveda, the whole body is constituted from five basic elements, namely Ether, Air, Fire, Water and Earth. These five elements manifest in the human body in terms of three basic principles called Vata, Pitta and Kapha.

    Vata, Pitta and Kapha are the three elements responsible for governing all the biological, physiological and psychological activities of living organisms. The healthy or unhealthy status of an individual is described in Ayurveda in terms of the balance existing between these three elements. Under healthy conditions, the movement associated with each element is reflected in the pulse observation following its fundamental characteristics. An imbalance of these three elements, resulting from either an excess or a deficiency of any of them, leads to an unhealthy status.

  2. HARDWARE IMPLEMENTATION

      1. Pulse Acquisition Circuit

        Fig 2.1: Circuit diagram to acquire pulse signals

        The pulse signal from the human body is acquired using MPXM2053 sensors, amplified with an INA128 instrumentation amplifier, and displayed on the CRO. The output shows that peak voltages are obtained when the pulse is sensed, and that the output varies for different DC voltages.

      2. MPXM2053D Sensor

        The MPXM2053 sensors are silicon piezoresistive pressure sensors. They provide a linear, accurate output voltage that is directly proportional to the applied pressure. Laser trimming with temperature compensation is used to achieve offset calibration and a precise span.

        Fig 2.2: MPXM2053D Sensor with its pin configuration

        1. Features Of MPXM2053D Sensor

          • Gauge Ported and Non Ported Options

          • Available in Easy-to-Use Tape & Reel

          • Easy-to-Use Chip Carrier Package Options

          • Temperature Compensated Over 0°C to +85°C

          • Ratiometric to Supply Voltage

          • Differential and Gauge Pressure Options

        2. Specifications of MPXM2053D

          • Operating Pressure : 7 psi

          • Pressure Type : Absolute

          • Operating Supply Voltage : 10V

          • Maximum Operating Temperature : 125 °C

          • Minimum Operating Temperature : -40 °C

          • Operating Supply Current : 6mA

      3. INA128 Amplifier

    The INA128 is a low-power, general-purpose instrumentation amplifier offering excellent accuracy. Its small size and three-op-amp design make it suitable for a wide range of applications. Current-feedback input circuitry provides wide bandwidth even at high gain (200 kHz at G = 100), and a single external resistor sets the gain according to the industry-standard gain equation G = 1 + 50 kΩ/RG. The INA128 is available in SO-8 surface-mount and 8-pin plastic DIP packages, specified over the temperature range of -40°C to +85°C, and is also obtainable in a dual configuration.

    Fig 2.3: INA128 pin configuration

        1. Features of INA128

          • Low Input Bias Current: 5 nA maximum

          • Low offset voltage: 50 µV maximum

          • Inputs protected to ±40 V

          • Low drift: 0.5 µV/°C maximum

          • Low quiescent current: 700 µA

          • High CMR: 120 dB minimum

          • Packages: 8-pin plastic DIP, SO-8

          • Wide range of supply voltage: ±2.25 V to ±18 V

  3. FILTERS

    3.1 Notch Filter

    A Notch Filter (Band Stop or Band Reject Filter) passes signals above and below a specific frequency band, called the stop band, and rejects or attenuates signals within that band.

    Notch filters remove narrow bands of frequencies. In the output plot of Fig 3.4 it can be seen that the notch filter passes the frequency components below and above the stop band. A small amount of phase lag is usually introduced at the gain crossover frequency. Notch filters can be used to remove fixed-frequency noise, such as line-frequency interference (50-60 Hz), and to suppress resonances in the system.

    Fig 3.1: Circuit representation of Notch Filter

    3.2 Low Pass Butterworth Filter

    A low pass filter removes the high frequency components before the signal is converted to digital form, so that they are not misidentified as part of the pulse. The frequency range of the wrist pulse pressure signal is 0 Hz to 20 Hz. Here a low pass filter with a 97 Hz cutoff frequency has been designed so that the power spectrum of the pulse in higher frequency bands can also be investigated under unhealthy conditions. A second order low pass Butterworth filter has been included along with an offset correction circuit.

    The roll-off rate of the filter is determined by the number of poles in the circuit, which in turn depends on the number of reactive elements (capacitors or inductors) in the circuit.

    The amplitude response of an nth order Butterworth filter is given by:


    Vout / Vin = 1 / √[1 + (f/fc)^(2n)]

    where n is the number of poles (the order) of the filter, f is the operating frequency of the circuit and fc is its cutoff frequency. The flatness of the passband response improves as the number of poles increases.

    The second order low pass Butterworth filter is built by adding a second RC network to the first order filter. Beyond the cut-off frequency, the gain of the second order filter rolls off much more rapidly.


    Fig 3.2: Circuit representation of second order Butterworth low pass filter

    In a second order filter, the cut-off frequency is inversely proportional to the values of the resistors and capacitors in the two RC sections of the filter. It is calculated using:

    fc = 1 / (2π√(R1R2C1C2))

    The gain rolls off at 40 dB/decade, i.e. the slope of the response is -40 dB/decade. The transfer function of the filter is given as:

    Vout / Vin = Amax / √[1 + (f/fc)^4]

    Fig 3.3: Filter circuit implemented in Multisim

    Fig 3.4: Bode plot output obtained in Multisim
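    To complement the Multisim simulation, the analog filter chain can also be prototyped digitally before the circuit is built. The following is a minimal Python/SciPy sketch, not part of the original design: the 500 Hz sampling rate and the notch Q factor are assumptions, while the 50 Hz notch frequency and the 97 Hz second order Butterworth cutoff follow the values given in this section.

    # Digital prototype of the filtering stage: 50 Hz notch + 97 Hz
    # second order Butterworth low pass (sampling rate is an assumption).
    import numpy as np
    from scipy import signal

    fs = 500.0          # assumed sampling rate in Hz
    f_notch = 50.0      # power-line interference frequency
    f_cutoff = 97.0     # low pass cutoff from the design above

    # Notch filter to suppress line-frequency noise
    b_notch, a_notch = signal.iirnotch(w0=f_notch, Q=30.0, fs=fs)

    # Second order Butterworth low pass (roll-off of 40 dB/decade)
    b_lp, a_lp = signal.butter(N=2, Wn=f_cutoff, btype='low', fs=fs)

    # Example: clean up a synthetic 1.2 Hz pulse corrupted by 50 Hz hum
    t = np.arange(0, 10, 1 / fs)
    pulse = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
    filtered = signal.filtfilt(b_lp, a_lp,
                               signal.filtfilt(b_notch, a_notch, pulse))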

  4. SOFTWARE IMPLEMENTATION

      1. Discrete Wavelet Transform (DWT):

        The Filter Bank (FB) implementation of the DWT (up to three levels) is shown in Fig 4.1. The original signal x(n) is passed through both a high pass filter g(n) and a low pass filter h(n); these filters must satisfy certain mathematical properties so that the original signal can be reconstructed. The output of each filter is down-sampled by a factor of two. The outputs of the high pass filter g(n) and the low pass filter h(n) are called the detail coefficients (Cd) and the approximation coefficients (Ca), respectively: Ca represents the low frequency components and Cd represents the high frequency components. In the DWT, to obtain the next level of decomposition, Ca is again passed through both the low pass and high pass filters and down-sampled, giving the next level of Cd and Ca, and so on.

        The result of the DWT is a series of coefficients: one set of approximation coefficients and J sets of detail coefficients, where J is the final decomposition level. These coefficients form an orthogonal basis, and to reconstruct the original signal the inverse wavelet transform (IWT) is applied.

        Down-sampling plays an important role in the decomposition process. Each time the signal passes through the filter pair its length is halved; hence at levels 1, 2, 3, …, the length of the signal becomes 1/2, 1/4, 1/8, … of its original length, and the same pair of filters can therefore be reused at every level without redundancy.

        The noise can be separated from the signal because, in the post-decomposition results, the noise and the signal manifest differently when the noisy pulse signal is decomposed and a proper threshold is applied at each level.

        Wavelet-based denoising generally entails three steps:

        1. Decomposing: the mother wavelet and the decomposition level J are selected, and the decomposition coefficients are computed at each level.

        2. Thresholding: a threshold value is calculated for each level (either for each level separately or for the whole set of coefficients together) and applied to the coefficients at that level.

        3. Reconstruction: the decomposition is reversed to reconstruct the signal from the modified coefficients.

          Fig 4.1: FB implementation of DWT
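        The three denoising steps above map directly onto the PyWavelets API. The sketch below is illustrative only: the Results section names the sym wavelet family and a threshold of 0.04, so the specific 'sym4' order and the use of soft thresholding are assumptions.

        # Wavelet denoising sketch: decompose, threshold, reconstruct.
        import pywt

        def dwt_denoise(x, wavelet='sym4', threshold=0.04):
            # 1. Decomposing: compute coefficients up to the maximum level
            level = pywt.dwt_max_level(len(x), pywt.Wavelet(wavelet).dec_len)
            coeffs = pywt.wavedec(x, wavelet, level=level)

            # 2. Thresholding: soft-threshold every detail coefficient set,
            #    leaving the approximation coefficients untouched
            coeffs[1:] = [pywt.threshold(c, value=threshold, mode='soft')
                          for c in coeffs[1:]]

            # 3. Reconstruction: inverse transform with modified coefficients
            return pywt.waverec(coeffs, wavelet)[:len(x)]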

      2. Evaluation Parameters

        Mean Squared Error Evaluation: The mean squared error indicates how well the regression line fits the data; the smaller the MSE, the smaller the magnitude of the error and the better the fit. It is computed as

        MSE = (1/N) Σ (fi - yi)²

        where N is the number of data points, fi is the value returned by the model and yi is the actual value for data point i.

        Peak signal-to-noise ratio: PSNR is the ratio of the maximum possible power of a signal to the power of the corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed on a decibel scale.

        PSNR is usually defined as:

        PSNR = 10 log10(MAXI² / MSE) = 20 log10(MAXI / √MSE)

        where MAXI is the maximum possible value of the signal (the maximum pixel value in the case of an image). When samples are represented using linear PCM with B bits per sample, MAXI is 2^B - 1.
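        For illustration, both quality measures can be computed directly from the original and reconstructed signals; in this minimal NumPy sketch the signal's own peak value is used as MAXI, which is an assumption rather than a PCM bit depth.

        # Quality measures for the denoised (reconstructed) signal.
        import numpy as np

        def mse(y_true, y_pred):
            return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

        def psnr(y_true, y_pred):
            max_i = np.max(np.abs(y_true))          # assumed peak value
            return 10 * np.log10(max_i ** 2 / mse(y_true, y_pred))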

      3. Support Vector Machine Based Classifier

    A support vector machine (SVM) is a supervised machine learning algorithm used for classification or regression of data groups. A technique called the kernel trick is used to transform the data, and an optimal boundary between the possible outputs can then be found based on these transformations. In essence, complex data transformations are performed, and the algorithm then works out how the data can be separated according to the labels or outputs that have been defined.

    In supervised learning, both the input data and the output data, labelled for classification, are provided; this labelled classification serves as a learning source for processing future data. Support vector machine classification separates two data groups: the algorithm constructs hyperplanes that sort the groups according to their patterns. For classification or regression of data groups, the support vector machine constructs a hyperplane, or a set of hyperplanes, in a high or infinite dimensional space, and builds a learning model that assigns new examples to one of the groups. By these functions, SVMs act as binary, linear, non-probabilistic classifiers; for probabilistic classification, the Platt scaling method can be used.

    Like other supervised learning machines, an SVM requires labelled data for training, so the groups of samples used for classification must be labelled; once trained on numerous examples, the model can then classify new, unseen data. The best separation of the data is achieved when the margin around the separating hyperplane is maximized.

    SVMs originate from the work of Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1963. They have since been used in image, hypertext and text classification.

    SVM is basically a linear, binary classifier based on statistical learning theory. It aims to identify an optimal decision boundary, the separating hyperplane defined by

    f(x) = wᵀx + b = 0

    between two groups, by maximizing the margin between the boundaries of the two classes, defined by

    f(x) = wᵀx + b = ±1

    For this reason SVM has been shown to perform better on unknown data sets and hence to give higher generalization capability; it has also been shown to work well with a small number of training samples. The training samples of the two groups that lie on their separating boundaries are identified as support vectors, and out of all the training samples only the support vectors are needed to define the decision boundary. The weight vector that defines the decision boundary is obtained from the equations below, where xn and yn are the training samples and target outputs respectively, αn are the Lagrange multipliers obtained as the solution of the optimization, and nsv is the number of support vectors:

    w = Σ αn yn xn

    b = (1/nsv) Σ (yn - wᵀxn)

    where both sums run over the nsv support vectors. For testing a new data point z, f(z) = wᵀz + b is computed and z is classified as class 1 if the value is positive and as class 2 otherwise.
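    To make the role of the support vectors concrete, the short NumPy sketch below evaluates w, b and the decision function from a handful of made-up support vectors, labels and Lagrange multipliers; all the numbers are purely illustrative.

    # Decision function from support vectors: w = sum(alpha_n * y_n * x_n)
    import numpy as np

    x_sv = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.5]])  # illustrative support vectors
    y_sv = np.array([1, -1, 1])                            # their class labels
    alpha = np.array([0.4, 0.7, 0.3])                      # Lagrange multipliers (made up)

    w = (alpha * y_sv) @ x_sv                   # weight vector
    b = np.mean(y_sv - x_sv @ w)                # offset averaged over support vectors

    z = np.array([2.5, 3.0])                    # new sample to classify
    label = 1 if w @ z + b > 0 else 2           # class 1 if positive, else class 2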

    For classification problems requiring a nonlinear separating boundary, SVM makes use of kernel functions

    K(xi, xj) = φ(xi) · φ(xj),

    which map the input space into a higher dimensional kernel space through a non-linear function φ(·) without increasing the computational complexity. The dimensionality of the data is linked to its linear separability: the support vectors obtained in this higher dimensional kernel space allow the two groups to be classified by the linear decision boundary generated by the SVM. For cases that are not linearly separable, SVM provides a soft margin approach, in which some error is allowed while deciding the separating hyperplane; to control the amount of such error, a regularization parameter C is introduced as a penalty in the optimization problem of maximizing the margin.

    w = Σ αn yn φ(xn)

    b = (1/nsv) Σ (yn - wᵀφ(xn))

    where K is a non-linear kernel function, φ(·) is its associated mapping and the sums run over the nsv support vectors. For unknown test data z, f(z) = Σ αn yn K(xn, z) + b is computed to decide the class associated with z.

        1. Linear SVM

          A training dataset of n points of the form

          (x1, y1), …, (xn, yn)

          is taken, where each yi is either +1 or -1, indicating the group to which the point xi belongs, and each xi is a p-dimensional real vector. The group of points xi for which yi = 1 and the group for which yi = -1 are divided by the "maximum-margin hyperplane", which is chosen so that the distance between the hyperplane and the nearest point xi from either group is maximized.

          Fig 4.3(a): Maximum-margin hyperplane and margins for an SVM trained with samples from two classes.

          Any hyperplane can be defined as the set of points x satisfying

          w · x - b = 0,

          where w is the normal vector to the hyperplane. This is much like the Hesse normal form, except that w is not necessarily a unit vector. The parameter b/‖w‖ determines the offset of the hyperplane from the origin along the normal vector w.

        2. Non-Linear SVM

          In 1963, Vapnik proposed the maximum-margin hyperplane algorithm, which constructs a linear classifier. In 1992, Bernhard E. Boser, Isabelle M. Guyon and Vladimir N. Vapnik suggested a way to construct nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function.

          This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformed space may be high dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space.

          Fig 4.3(b): Kernel Machine

          It is noted that the generalization error of support vector machines tends to grow when working in a higher dimensional feature space, although the error decreases as the number of samples increases.

          Some common kernels include:

          • Polynomial (homogeneous): k(xi, xj) = (xi · xj)^d

          • Polynomial (inhomogeneous): k(xi, xj) = (xi · xj + 1)^d

          • Gaussian radial basis function: k(xi, xj) = exp(-γ‖xi - xj‖²), for γ > 0
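    The kernels listed above are available directly in scikit-learn. The sketch below compares the same four kernels mentioned in the Results section on synthetic data; the feature matrix and labels are placeholders standing in for the extracted pulse features.

    # Comparing SVM kernels with scikit-learn (synthetic placeholder data).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                           # placeholder pulse features
    y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)    # placeholder labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for kernel in ('linear', 'poly', 'sigmoid', 'rbf'):
        clf = SVC(kernel=kernel, gamma='scale').fit(X_tr, y_tr)
        print(kernel, accuracy_score(y_te, clf.predict(X_te)))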

    4.4 Evaluation Parameters

    Precision score: It is the ratio tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. Precision is intuitively the ability of the classifier not to label a negative sample as positive. The best value is 1 and the worst value is 0.

    Accuracy: It is the fraction of samples that are classified correctly, i.e. (tp + tn) / (tp + tn + fp + fn). Low accuracy indicates a systematic difference between the predicted results and the true values.

    F1 Score: It is the harmonic mean of precision and recall; its best value is 1, corresponding to perfect precision and recall. The F1 score is also known as the Sørensen-Dice coefficient or Dice similarity coefficient (DSC).

    Recall score: It is the ratio tp / (tp + fn), where tp is the number of true positives and fn the number of false negatives. Recall is intuitively the ability of the classifier to find all the positive samples. The best value is 1 and the worst value is 0.
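    All four measures can be obtained from scikit-learn once the classifier's predictions are available; the label arrays in this short sketch are placeholders for illustration.

    # Evaluation of classifier predictions (placeholder label arrays).
    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score)

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]       # actual labels (illustrative)
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]       # classifier output (illustrative)

    print('accuracy :', accuracy_score(y_true, y_pred))
    print('precision:', precision_score(y_true, y_pred))
    print('recall   :', recall_score(y_true, y_pred))
    print('f1 score :', f1_score(y_true, y_pred))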

  5. RESULT

    Fig 5.1 shows the CRO output of the wrist pulse signals, which are acquired using the sensors and amplified using the INA128 amplifier. Since the pulse signal contains a lot of noise, it is passed through a notch filter and a second order low pass filter. The output of the low pass filter is given to a DAQ, and the digitized pulse output is fed to the DWT denoising stage.

    Fig 5.2 shows the DWT output. The acquired wrist pulse signal is denoised using the discrete wavelet transform with multilevel decomposition, using a wavelet from the sym family. We have taken the maximum level of decomposition and a threshold value of 0.04, and MSE and PSNR are then calculated for the reconstructed signal. The higher the PSNR, the better the signal; since the PSNR obtained is around 60 dB for all the data sets, we can infer that we have obtained a well denoised signal with very little residual noise. A lower mean squared error indicates a signal with less distortion, and since the MSE obtained is low, we can infer that the denoised signal is of good quality.

    In Fig 5.2(d), we have considered eight different wrist pulse data sets and tabulated the parameter values obtained.

    The output of the denoising stage is given to the SVM classifier. Fig 5.3 shows the classification of the data in linear form, while Fig 5.4 shows the classification in non-linear form.

    The accuracy obtained for the linear SVM, using a one-vs-one decision function, is around 50%. The classification of the data here is not sufficient, so it is not recommended, whereas we get 85% accuracy with the non-linear SVM using the RBF kernel. We worked with different kernels, namely poly, linear, sigmoid and RBF, and found that RBF gives better accuracy than the rest.

    To get better accuracy we also perform feature selection using ANOVA, which lets us identify the significant features and use only those in our code to obtain a better result.

    Fig 5.6 shows the results of the ANOVA analysis. Here the F value is the ratio of the variance between the group means to the variance within the groups, and the P value is the probability of obtaining a result at least as extreme as the one actually observed.

    Since the P-value obtained from the ANOVA analysis is below 0.05 (we obtained 0.0), the result is significant, and it is concluded that there are significant differences among the treatments. Although the ANOVA analysis makes clear that the treatment differences are statistically significant, it does not tell us which treatments differ significantly from each other. We therefore perform a multiple pairwise comparison (post-hoc) analysis using the Tukey HSD test to find the pairs of significantly different treatments.

    The result of the multiple comparison of means using Tukey HSD allowed us to identify the presence or absence of significant differences between each pair of treatments.

    The treatments that rejected the null hypothesis, indicating statistically significant differences from each other, are RATIO, SVA, PHR, PR and PT1. These treatments were used to develop the desired model.
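    A minimal sketch of this feature selection step is shown below, using SciPy's one-way ANOVA followed by the Tukey HSD test from statsmodels; the feature values and group labels are synthetic placeholders standing in for the extracted pulse parameters such as RATIO, SVA, PHR, PR and PT1.

    # ANOVA F-test per feature, followed by Tukey HSD post-hoc comparison.
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(1)
    groups = np.repeat(['vata', 'pitta', 'kapha'], 20)        # placeholder classes
    feature = np.concatenate([rng.normal(loc, 1.0, 20)        # placeholder feature
                              for loc in (0.0, 0.5, 1.2)])

    # One-way ANOVA: is the between-group variance significant?
    f_value, p_value = stats.f_oneway(feature[groups == 'vata'],
                                      feature[groups == 'pitta'],
                                      feature[groups == 'kapha'])
    print('F =', f_value, 'p =', p_value)

    # Tukey HSD: which pairs of groups differ significantly (alpha = 0.05)?
    print(pairwise_tukeyhsd(endog=feature, groups=groups, alpha=0.05))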

    5.1 Output Figures

    Fig 5.1: CRO Output of acquiring pulse signal

    Fig 5.2(a): Decomposed Wavelets using DWT

    Fig 5.2(b): Reconstructed Denoised Signal

    Fig 5.2(c): Parameters calculated

    Fig 5.2(d): DWT parameters for 8 different pulse data

    Fig 5.3(a): Linear SVM

    Fig 5.3(b): Evaluation parameters for linear SVM

    Fig.5.4(a): Non-linear SVM

    Fig 5.4(b): Evaluation parameters for non-linear SVM output

    Fig 5.5: Comparison between linear and non-linear SVM

    Fig 5.6: ANOVA results

  6. CONCLUSION

The VPK analyser can be seen as an advancement of existing health-care systems. The device can be used for preliminary or remote diagnosis, which can help reduce the difficulties faced by patients seeking consultation.

The DWT denoising technique used provides time and frequency information simultaneously, and the wavelets can be adjusted and adapted easily.

The support vector machine classifies the classes efficiently and supports the analysis of healthy and unhealthy patients. The model can be improved using neural networks, and there is further scope for perfecting it.

A device built using the above techniques will be non-invasive as well as effective. Diseases can be detected early using this method, and the device can be made easily available because of its cost effectiveness.

Existing pulse measuring devices measure and analyse only a single pulse component at a time, and earlier approaches are expensive and invasive in nature. To overcome this, we have aimed to develop a wrist pulse analyser that can measure all three pulses (Vata, Pitta, Kapha) simultaneously and upload the information to the cloud so that further analysis and any necessary action can be initiated by experts. The developed device is cost-effective and non-invasive.

As a whole, the VPK analyser can be seen as a boon for maintaining good general health in society, since its cost effectiveness makes it easily available.

