Scan Power Reduction for Linear Test Compression Schemes by Modified Reseeding Technique

DOI: 10.17577/IJERTCONV1IS06072




M. SHALINI

(VLSI Design)

Srinivasan Engineering College, Perambalur, India
mshalu91@gmail.com

ABSTRACT- On-chip test compression schemes based on XOR networks are widely used due to their high compression ratio and efficient decompression mechanism. Compression reduces test data volume, and the on-chip decompressor reduces test application time. Such schemes rely on a high proportion of unspecified bits in the test cubes. Because the unspecified bits are filled through the decompressor, pre-processing of the test cubes is difficult. Since different seeds have different power effects, significant power reductions can be achieved by selecting a power-optimal seed. This work investigates this flexibility in the seed space and presents a mathematical and algorithmic framework for a power-aware linear test compression scheme. The technique incurs no hardware overhead and can be easily integrated into an industrial test compaction/compression flow. Experimental results confirm that the technique delivers significant scan power reduction with negligible impact on the compression ratio.

Index Terms: Scan power reduction, scan slice transformation, seed selection, test compression.

      1. INTRODUCTION

Today's circuits exhibit a growing volume of test data. Increasing test data volumes result in longer test application times and require automatic test equipment (ATE) with larger test storage and higher bandwidth, significantly boosting test costs. To overcome this, test compression techniques are used to reduce the test data volume, the tester bandwidth, and the test application time with no harmful effect on test quality. However, circuits with a large number of scan cells consume more scan power due to the high switching density of the scan cells.

Previously, only limited work has addressed power reduction for non-compressed test vectors together with the attainment of a compression goal. Attaining either target requires suitable usage of the unspecified bits (don't cares) in the test cubes. Identifying a don't care usage condition that fulfils both constraints delivers a power-aware compression scheme.

XOR networks have been used in large industrial scan designs due to their high compression ratio and efficient decompression mechanism. However, such schemes specify the don't cares through a linear transformation of seed vectors, which prevents pre-processing of the original test cubes, so the scan power level is highly increased. A substantial reduction can be achieved by a sensible seed selection if there are multiple legal seeds. Following this observation, a power-aware test compression technique was developed in previous work, which identifies seeds giving low levels of switching activity between scan slices. The XOR network can be represented as a set of linear equations, where power reduction is achieved by matching the value of each linear equation with the corresponding bit in the adjacent scan slice. It was observed that inconsistent subgroups in the linear system are the main cause of scan slice mismatch. The slice matching problem can therefore be solved by removing the equations that make the system inconsistent, and a technique is used to sequentially identify the equation removal decisions that improve system consistency. However, this technique leads only to a locally optimal solution, as the sequential system relaxation fails to pinpoint the best set of exclusion targets.

We use a methodology which finds power-friendly seeds through a system relaxation process. A mathematical transformation embeds the compression restrictions into a linear system, which jointly addresses power reduction and compression; in addition, the seed space is reduced. A transformation technique is identified to simultaneously find multiple inconsistent subgroups in the linear system. Compared to the sequential exclusion strategy used previously, this transformation increases the capability of the inconsistent subgroup identification process. Identifying multiple inconsistent subgroups simultaneously enables a minimum number of removals, and this minimum exclusion during the system relaxation process leaves a larger number of satisfied equations. Such a low-power compression scheme fully utilizes the flexibility of the seed.

We use an automatic test pattern generation (ATPG) flow with a concurrent compaction and compression scheme. The optimized ATPG flow ensures, through cube compaction, the delivery of a small number of seeds for the cubes. This further leads to a reduction in both average and peak scan power. The scheme has little effect on fault dropping, and the methodology results in scan power reduction. The work can be extended by providing more bits in compressed form to the XOR network. Scan power is high due to the high transition density between test slices; to reduce this power, the transitions are compared against all bits of the test slice. A drastic power reduction is expected from this exploration.

      2. RELATED WORK

Many techniques have been introduced for test data volume reduction; linear compression schemes form one such category. These techniques utilize the flexibility in don't care filling and compress the unfilled test cubes into short seed vectors. Due to the high unspecified bit ratio, linear compression schemes result in high compression ratios.

Other compression techniques use nonlinear code-based approaches. The compression ratio of nonlinear code-based approaches is lower than that of linear compression techniques. Many works process the unspecified bits so as to reduce the switching densities. These techniques can be easily applied within the ATPG algorithm without any hardware overhead.

Another approach focuses on reducing test power through changes to the scan architecture. To achieve power reduction there must be suitable usage of the unspecified bits. There are many schemes, such as capture power reduction and DFT architectures. Power reduction can be attained, but the hardware cost increases due to the insertion of gating circuitry. Alternatively, instead of DFT, X-filling is performed to reduce the power, with the X-bits filled based on their power effect.

A power-aware compression scheme based on scan chain partitioning has also been introduced. It groups the scan cells so as to achieve an easy encoding of the test data. In further work, a novel seed selection methodology is used to solve the power reduction problem through the removal of some equations.

Achieving power reduction under compression needs an appropriate don't care utilization to maximally fulfil the constraints of both goals. One such work uses a capture power reduction technique dedicated to a nonlinear encoding compression scheme.

        Linear constraint propagation is performed during the X-filling process to guarantee the compressibility of the filled test cubes.

Fig. 1. Scan architecture using XOR network-based decompression hardware.

Fig. 2. Example XOR network and its matrix representation.

      3. PRELIMINARIES

Fig. 1 shows the scan test architecture using on-chip decompression hardware based on an XOR network. The XOR network provides the decompression mechanism for the compressed test data.

The input test data is compressed into seeds, which are decompressed by the XOR network. The decompression process does not depend on the response-compaction mechanism. The fixed-length to fixed-length decompression provided by the XOR network-based scheme avoids the need for synchronization between the ATE and the decompression hardware.

Test slices can be constructed from very short seeds, which results in significant compression ratios. An example XOR network is shown in Fig. 2; it decompresses a 4-bit seed into a 9-bit test slice. The linear transformation can be represented as a matrix multiplication over the finite field GF(2). A seed is found by solving the set of linear equations corresponding to all the specified bits in the test slice. The set of linear equations to be solved varies, as the specified bit sets of the test slices typically differ. The linear compression scheme can be embedded into the test generation flow to achieve concurrent compression and compaction. The ATPG algorithm starts by producing an unfilled test cube. Every newly produced test cube is compacted with the previous cubes. When a compacted cube reaches a compression threshold, a seed is identified for the cube, and the unspecified bits are then completely specified according to the seed.
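
As an illustration of this matrix view, the following minimal Python sketch uses a hypothetical 9 x 4 XOR-network matrix (standing in for, but not identical to, the network of Fig. 2) to expand a 4-bit seed into a 9-bit test slice and to solve, by Gaussian elimination over GF(2), for a seed that reproduces the specified bits of a partially specified test cube.

```python
# Sketch of XOR-network decompression and seed solving over GF(2).
# The 9x4 matrix A is a hypothetical example, not the exact network of Fig. 2:
# row i lists which seed bits feed the XOR gate driving test-slice bit i.
A = [
    [1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0],
    [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
    [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 0],
]

def decompress(seed):
    """Expand a 4-bit seed into a 9-bit test slice: slice = A * seed over GF(2)."""
    return [sum(a & s for a, s in zip(row, seed)) % 2 for row in A]

def solve_seed(cube):
    """Find one seed reproducing the specified bits ('0'/'1') of a 9-bit test cube.
    Don't-care positions are marked 'X' and impose no equation."""
    rows = [A[i] + [int(b)] for i, b in enumerate(cube) if b in "01"]
    n = len(A[0])
    pivots, r = [], 0
    for c in range(n):                      # Gauss-Jordan elimination over GF(2)
        p = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    if any(not any(row[:-1]) and row[-1] for row in rows):
        return None                         # inconsistent: cube not compressible
    seed = [0] * n                          # free (independent) bits default to 0
    for i, c in enumerate(pivots):
        seed[c] = rows[i][-1]
    return seed

cube = ['1', 'X', 'X', '0', 'X', '1', 'X', 'X', 'X']   # a test cube with don't cares
seed = solve_seed(cube)
print(seed, decompress(seed))
```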

      4. POWER-AWARE TEST COMPRESSION

Although the linear compression scheme gives appreciable benefits in test volume reduction, the way the unspecified bits are filled results in highly random switching and thus in appreciable scan power consumption.

The scan phase requires higher power consumption than the capture phase, which causes heating and increases the risk of over-screening good chips due to the exceedingly high scan-mode dynamic IR-drop. Fig. 3(a) shows the scan-mode dynamic IR-drop measured from an industrial chip. In the capture phase, the state switching typically induces less flip-flop toggling and lower power consumption.

Since scan power consumption is a drawback of scan-based testing, a power-aware test compression technique is needed. It is not obvious how the adjacent filling of previous work can be used in connection with compression. To solve this problem, the flexibility in the seed identification process has to be exploited.

The idea is that a test slice can be constructed from multiple alternative seeds. Since a seed is a solution of the linear system defined by the specified bits of the test slice, the seed space depends on the constraints in the linear system.

As shown in Fig. 4, consider that the XOR network in Fig. 2 is used as the decompression hardware. The test cube produced from Seed 1 has 17 transitions, whereas the one produced from Seed 2 has only 9 transitions. Choosing Seed 2 for compression therefore gives roughly a 50% reduction in transitions with no effect on the compression ratio.
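
The selection criterion itself is simple to state; the sketch below, with illustrative slice values rather than the exact ones of Fig. 4, counts the scan transitions each candidate seed's decompressed slice would introduce against the adjacent slice and keeps the cheaper one.

```python
# Sketch: comparing two legal seeds by the scan transitions their decompressed
# slices create. The slice values are illustrative, not the exact ones of Fig. 4.

def transitions(prev_slice, new_slice):
    """Bit positions that toggle between two adjacent 9-bit test slices."""
    return sum(a != b for a, b in zip(prev_slice, new_slice))

prev_slice   = [0, 0, 1, 1, 0, 1, 0, 0, 1]   # slice already shifted into the chain
slice_seed_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0]   # decompressed from candidate Seed 1
slice_seed_2 = [0, 0, 1, 0, 0, 1, 0, 1, 1]   # decompressed from candidate Seed 2

for name, s in [("Seed 1", slice_seed_1), ("Seed 2", slice_seed_2)]:
    print(name, "->", transitions(prev_slice, s), "transitions")
# The seed with the smaller count is chosen; compression is unaffected because
# both candidates reproduce the same specified bits.
```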

Introducing a seed selection technique affects fault dropping in unpredictable ways. Achieving a power-aware test compression scheme therefore needs an efficient method for finding the power-optimal seed.

      5. ALGORITHMIC FRAMEWORK

Scan power is caused by the transitions between adjacent test slices in the test cube. Achieving power reduction therefore requires matching each test slice with its neighbours. Slice matching can be performed from one end of the test cube to the other. Each matching step specifies one test slice, which is then used as the reference value for the next matching step.

Fig. 3. High dynamic IR-drop during scan phase. (a) Dynamic IR-drop in shift cycle. (b) Shift IR-drop versus capture IR-drop.

Fig. 4. Power reduction through seed selection.

Fig. 5. Matrix representation of slice matching.

A. Problem Formulation

To apply slice matching without affecting compressibility, the constraints imposed by the compression scheme must be identified. Consider two adjacent slices to be maximally matched. The matching requires the identification of a seed that fulfils the following two conditions.

1. All the specified bits must be reproduced from the seed, for the correctness of compression.

2. For the don't care bits, the filling values produced from the seed should match the corresponding values in the other test slice.

If both conditions can be satisfied, the two test slices are perfectly matched, with zero scan power between them. In all likelihood, though, the system is inconsistent and no zero-scan-power solution exists.
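
A minimal sketch of how the two conditions translate into one GF(2) system, assuming a hypothetical 9 x 4 decompressor matrix and illustrative slices: each specified bit contributes a hard equation (Condition 1), and each don't care contributes a matching equation whose right-hand side comes from the neighbouring slice (Condition 2).

```python
# Sketch: assembling the slice-matching system over GF(2).
# A is a hypothetical 9x4 decompressor matrix (one row per test-slice bit).
A = [
    [1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0],
    [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
    [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 0],
]

def build_matching_system(cube_slice, reference_slice):
    """Return augmented rows [a0..a3 | b] for matching cube_slice to reference_slice.

    cube_slice      : current slice as '0'/'1'/'X' characters
    reference_slice : fully specified neighbouring slice (list of ints)
    """
    hard, soft = [], []
    for i, bit in enumerate(cube_slice):
        if bit in "01":
            hard.append(A[i] + [int(bit)])            # Condition 1: must hold
        else:
            soft.append(A[i] + [reference_slice[i]])  # Condition 2: match neighbour
    return hard, soft

cube = ['1', 'X', 'X', '0', 'X', '1', 'X', 'X', 'X']
ref  = [0, 0, 1, 1, 0, 1, 0, 0, 1]
hard, soft = build_matching_system(cube, ref)
print(len(hard), "hard equations,", len(soft), "matching equations")
# If hard + soft is consistent, a zero-transition seed exists; otherwise some
# matching equations must be dropped (system relaxation, Section 5-C).
```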

B. Seed Space Transformation

Fig. 6. Extracting independent bits of seed.

If the linear equation system for the slice matching problem is inconsistent, solving it directly is both expensive and difficult to handle, as it involves a coefficient matrix of considerable size and the fulfilment of strict constraints. We therefore use a mathematical transformation which maps the linear system to a much smaller system constructed over only the independent bits of the seed, with the constraints of Condition 1 implicitly embedded, thus enabling a more efficient mathematical treatment of the original problem.

The constraints given by Condition 1 can be extracted from the equations corresponding to the specified bits. To achieve this, an intermediate subsystem is constructed and converted to reduced row-echelon form using Gauss-Jordan elimination. Fig. 6 shows the matrix conversion for slice matching. In the reduced row-echelon form, the coefficient matrix columns that contain a leading 1 correspond to dependent variables, and the remaining columns refer to independent ones. The slice matching problem thus reduces to finding the independent bit pattern that satisfies the reduced system.
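
The reduction step can be sketched as follows, assuming hypothetical Condition 1 equations over a 4-bit seed: Gauss-Jordan elimination over GF(2) brings the subsystem to reduced row-echelon form, and the columns without a leading 1 identify the independent seed bits over which the smaller matching system is built.

```python
# Sketch: Gauss-Jordan elimination over GF(2) and extraction of independent bits.

def rref_gf2(rows, n_vars):
    """Reduce augmented rows [coeffs | rhs] to reduced row-echelon form over GF(2).
    Returns (rows, pivot_columns); columns not in pivot_columns are independent."""
    rows = [row[:] for row in rows]
    pivots, r = [], 0
    for c in range(n_vars):
        p = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if p is None:
            continue                      # no leading 1 in this column: independent bit
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:     # clear the column above and below the pivot
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    return rows, pivots

# Hypothetical Condition 1 equations over a 4-bit seed (coefficients | rhs).
hard = [
    [1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
]
reduced, pivot_cols = rref_gf2(hard, 4)
independent = [c for c in range(4) if c not in pivot_cols]
print("dependent seed bits:", pivot_cols, "independent seed bits:", independent)
```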

Fig. 7. Constructing a reduced system in the independent bit space.

          C. Power-Optimal Seed Identification

As the size of the seed space grows with the independent bit dimension, efficient schemes for identifying the power-optimal seed, i.e., the seed that best satisfies the reduced linear system, become highly desirable.

1. Basic Idea: System Relaxation: Dropping a set of equations may convert the inconsistent system into a consistent one; the difficulty lies in finding the equations to be dropped. To make correct dropping decisions that improve consistency, it is essential to examine the root cause of the inconsistency. If the column on the right side is appended to the coefficient matrix as its last column, the augmented matrix is obtained. To analyse the root cause, we use the concept of a primitive inconsistent group.

Definition 1: A group of M linear equations is a primitive inconsistent group if it satisfies the conditions below.

1. The linear system created with these equations is inconsistent.

2. Removing any one equation from the group leaves a consistent system.

2. Primitive Inconsistent Group Characteristics: The technique finds primitive inconsistent groups by searching for algebraic characteristics in the augmented matrix of the original system. This significantly reduces the cost and speeds up the identification process.

The algebraic characteristics of primitive inconsistent groups can be described by the following lemmas.

Lemma 1: The coefficient rows of any proper subset of the equations in a primitive inconsistent group are linearly independent.

Lemma 2: The row sum of the augmented matrix of a primitive inconsistent group is a unit vector: the coefficient part of the row sum is all zeros, and the right-most bit of the row sum is 1.
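
Lemma 2 suggests a cheap algebraic test, sketched below on a small hypothetical augmented matrix over GF(2): XOR-summing the rows of a primitive inconsistent group cancels every coefficient and leaves only the right-most 1.

```python
# Sketch: checking the Lemma 2 characteristic of a candidate equation group.
# The augmented rows below (coefficients | rhs) are a hypothetical example in
# which the three equations contradict each other: x0+x1=1, x1+x2=0, x0+x2=0.

def row_sum_gf2(rows):
    """XOR-sum of a set of augmented rows over GF(2)."""
    acc = [0] * len(rows[0])
    for row in rows:
        acc = [a ^ b for a, b in zip(acc, row)]
    return acc

def looks_primitive_inconsistent(rows):
    """Lemma 2 test: coefficient part of the row sum is all zeros, right-most bit is 1."""
    s = row_sum_gf2(rows)
    return not any(s[:-1]) and s[-1] == 1

group = [
    [1, 1, 0, 1],   # x0 + x1 = 1
    [0, 1, 1, 0],   # x1 + x2 = 0
    [1, 0, 1, 0],   # x0 + x2 = 0
]
print(looks_primitive_inconsistent(group))   # True: row sum is (0, 0, 0 | 1)
```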

3. Primitive Inconsistent Group Identification: We propose a transformation algorithm to generate a matrix with the aforementioned characteristics. The proposed transformation is attained by performing a sequence of elementary row operations on the matrix. Nonetheless, it differs from traditional transformation techniques based on elementary row operations, such as Gauss-Jordan elimination, as only a forward propagation of the rows in the matrix is performed.

Fig. 9. Primitive inconsistent group identification through forward propagation.

Fig. 9 presents an example of the matrix transformation process. For each row, the transformation process first finds the column that contains the leading 1 of the row. If the leading 1 of the row is in the last column, the process skips this row and proceeds to find the leading 1 of the next row. Otherwise, it checks whether the found column contains additional 1s in the rows below. If any such additional 1 is found, the transformation process eliminates it by adding the row to the row containing this additional 1. In the example given in Fig. 9, the leading 1 of the first row is in the third column, and the second and seventh rows also have a 1 in the third column. These two additional 1s are eliminated by adding the first row to the second and seventh rows, respectively.
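
A sketch of this forward-propagation pass on augmented GF(2) rows is given below; the example system is hypothetical, and the bookkeeping of which original equations were folded into each row is added here only to make the identified group explicit.

```python
# Sketch: forward propagation over the augmented matrix (coefficients | rhs),
# tracking which original equations were added into each row. A row reduced to
# the unit vector (all-zero coefficients, rhs = 1) exposes a primitive
# inconsistent group made of the equations accumulated in it (Lemma 2).
# The matrix below is a hypothetical example, not the one of Fig. 9.

def forward_propagate(rows):
    rows = [row[:] for row in rows]
    members = [{i} for i in range(len(rows))]     # original equations folded into row i
    groups = []
    for r, row in enumerate(rows):
        lead = next((c for c, v in enumerate(row) if v), None)
        if lead is None:
            continue                              # all-zero row: nothing to propagate
        if lead == len(row) - 1:
            groups.append(sorted(members[r]))     # leading 1 in last column: group found
            continue
        for i in range(r + 1, len(rows)):         # clear additional 1s below the row
            if rows[i][lead]:
                rows[i] = [x ^ y for x, y in zip(rows[i], row)]
                members[i] |= members[r]
    return groups

system = [
    [1, 1, 0, 0, 1],   # x0 + x1 = 1
    [0, 1, 1, 0, 0],   # x1 + x2 = 0
    [1, 0, 1, 0, 0],   # x0 + x2 = 0
    [0, 0, 0, 1, 1],   # x3 = 1 (consistent, uninvolved)
]
print(forward_propagate(system))   # -> [[0, 1, 2]]
```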

Fig. 10. Iterative primitive inconsistent group identification through system perturbation.

The system in Fig. 10 contains three primitive inconsistent groups, whereas the first transformation pass can only identify two of them. To find all the primitive inconsistent groups, one needs to perturb the system so as to re-enable the transformation process. The system perturbation consists of removing the groups already identified by dropping their shared equations. If primitive inconsistent groups are still present, the remaining system remains inconsistent.
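
The iterative scheme of Fig. 10 can be sketched as the relaxation loop below. For brevity, the sketch finds primitive inconsistent groups by brute force over small subsets using the Lemma 2 row-sum test rather than by the forward-propagation transformation, which is adequate only for tiny illustrative systems; the dropping of shared equations and the repetition until consistency follow the description above.

```python
# Sketch: iterative relaxation in the spirit of Fig. 10. Primitive inconsistent
# groups are found by brute force (row XOR-sum equal to the unit vector, Lemma 2,
# keeping only minimal groups); one equation per pass is dropped, preferring the
# equation shared by the most groups, until the system is consistent.
from itertools import combinations

def row_sum_is_unit(rows):
    acc = [0] * len(rows[0])
    for row in rows:
        acc = [a ^ b for a, b in zip(acc, row)]
    return not any(acc[:-1]) and acc[-1] == 1

def primitive_groups(system, max_size=4):
    found = []
    for k in range(1, max_size + 1):
        for idx in combinations(range(len(system)), k):
            if any(set(g) < set(idx) for g in found):
                continue                      # keep only minimal (primitive) groups
            if row_sum_is_unit([system[i] for i in idx]):
                found.append(idx)
    return found

def relax(system):
    """Drop equations until no primitive inconsistent group remains."""
    system = [row[:] for row in system]
    dropped = 0
    while True:
        groups = primitive_groups(system)
        if not groups:
            return system, dropped
        counts = {}
        for g in groups:                      # perturbation: prefer shared equations
            for i in g:
                counts[i] = counts.get(i, 0) + 1
        victim = max(counts, key=counts.get)
        del system[victim]
        dropped += 1

system = [
    [1, 1, 0, 1],   # x0 + x1 = 1
    [0, 1, 1, 0],   # x1 + x2 = 0
    [1, 0, 1, 0],   # x0 + x2 = 0
    [1, 0, 1, 1],   # x0 + x2 = 1 (conflicts with the previous equation)
]
consistent, dropped = relax(system)
print("dropped", dropped, "equation(s); remaining:", consistent)
```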

      6. CONCURRENT COMPACTION/COMPRESSION FOR LOW POWER SCAN

Achieving full fault coverage with linear compression schemes requires incorporating such schemes into the ATPG flow. Here, concurrent compaction and compression are performed. To exploit the power reduction benefit of the scheme, sufficient freedom in the seed space of the cubes being compressed must be preserved, which requires the ATPG algorithm to perform more careful target selection.

We thus use a power-aware compaction/compression flow. A newly generated cube can be compacted with multiple candidate cubes; it is therefore possible to select the compaction candidate that impairs the power reduction potential the least. Since the system rank of the resulting cube is obtained during the compressibility check, the compaction step always selects the compaction candidate that yields the minimum rank. Such a scheme delivers additional scan power reduction. Cubes whose rank rises above the rank threshold are compressed into seeds, and the resulting fully specified tests are added to the test list.
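
A sketch of the power-aware compaction choice, under the assumption of a hypothetical decompressor matrix and illustrative cubes: among the compatible compaction candidates, the merge with the minimum GF(2) system rank is selected, since it stays within the rank threshold while leaving the largest seed space for the later power-optimal seed search.

```python
# Sketch: power-aware compaction candidate selection. Among the cubes a new cube
# can be merged with, pick the one whose merged system has minimum rank over GF(2),
# leaving the most freedom for later seed selection. The decompressor matrix A and
# the cubes are illustrative; rank_threshold plays the role of the compression limit.
A = [
    [1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0],
    [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
    [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 0, 0],
]

def merge(c1, c2):
    """Merge two test cubes, or return None if their specified bits conflict."""
    out = []
    for a, b in zip(c1, c2):
        if a == 'X':
            out.append(b)
        elif b == 'X' or a == b:
            out.append(a)
        else:
            return None
    return out

def rank_gf2(cube):
    """Rank of the coefficient rows selected by the cube's specified bits."""
    rows = [A[i][:] for i, b in enumerate(cube) if b in "01"]
    rank = 0
    for c in range(len(A[0])):
        p = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if p is None:
            continue
        rows[rank], rows[p] = rows[p], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

new_cube   = ['1', 'X', 'X', 'X', 'X', '0', 'X', 'X', 'X']
candidates = [['X', '1', 'X', '0', 'X', 'X', 'X', 'X', 'X'],
              ['X', 'X', '1', 'X', '1', 'X', '0', 'X', '1']]
rank_threshold = 3
merged = [(rank_gf2(m), m) for m in (merge(new_cube, c) for c in candidates) if m]
best_rank, best = min((x for x in merged if x[0] <= rank_threshold),
                      default=(None, None))
print(best_rank, best)
```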

7. EXPERIMENTAL RESULTS

The work has been implemented in MODELSIM SE 6.2C for effectiveness validation. We first compare the technique with the previous power reduction technique based on combinational linear decompression schemes, and report the impacts on scan power and compression ratio. To perform the comparison, XOR-based decompression hardware is used. To explore the impact of the compression level on scan power and compression ratio, the power-aware process is repeated multiple times, with varying rank thresholds applied during the compression process. Although the effectiveness of the scheme decreases at higher rank thresholds, a 20%-40% reduction in scan power can still be attained even with highly aggressive compaction.

      8. CONCLUSION

Using the power-aware test compression scheme, appreciable power reduction is achieved. An XOR network-based on-chip test compression scheme is used to transform the compressed seed into decompressed test slices. Across the test slices, more power is consumed due to the high number of transitions between them. To reduce these transitions, the power-optimal seed is selected as the input seed during the compression phase, and each transition is evaluated against the nearby bits, so that a power reduction is achieved. This work can be further explored by comparing the transitions of a test slice against all bits; an appreciable power reduction is expected compared to the previous work.

REFERENCES

  1. J. Saxena, K. M. Butler, V. B. Jayaram, S. Kundu, N. V. Arvind, P. Sreeprakash, and M. Hachinger, A case study of IR-drop in structured at-speed testing, in Proc. ITC, 2003, pp. 1098-1104.

  2. I. Bayraktaroglu and A. Orailoglu, Concurrent application of compaction and compression for test time and data volume reduction in scan designs, IEEE Trans. Comput., vol. 52, no. 11, pp. 1480-1489, Nov. 2003.

  3. C. Krishna, A. Jas, and N. Touba, Test vector encoding using partial LFSR reseeding, in Proc. ITC, 2001, pp. 885-893.

  4. W. Rao, I. Bayraktaroglu, and A. Orailoglu, Test application time and volume compression through seed overlapping, in Proc. DAC, 2001, pp. 732-737.

  5. K. Balakrishnan and N. Touba, Improving linear test data compression, IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 14, no. 11, pp. 1227-1237, Nov. 2006.

  6. M. Chen and A. Orailoglu, Scan power reduction in linear test data compression scheme, in Proc. ICCAD, 2009, pp. 78-82.

  7. J. Rajski, J. Tyszer, M. Kassab, and N. Mukherjee, Embedded deterministic test, IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 23, no. 5, pp. 776-792, May 2004.

  8. A. Jas, J. Ghosh-Dastidar, and N. Touba, Scan vector compression/decompression using statistical coding, in Proc. DAC, 1999, pp. 25-29.

  9. A. Chandra and K. Chakrabarty, System-on-a-chip test-data compression and decompression architectures based on Golomb codes, IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 20, no. 3, pp. 335-368, Mar. 2001.

  10. Z. Wang and K. Chakrabarty, Test data compression for IP embedded cores using selective encoding for scan slices, in Proc. ITC, 2005, pp. 581-590.

  11. P. T. Gonciari, B. M. Al-Hashimi, and N. Nicolici, Variable-length input Huffman coding for system-on-a-chip test, IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 22, no. 6, pp. 783-796, Jun. 2003.

  12. N. Touba, Survey of test vector compression techniques, IEEE Design & Test Comput., vol. 23, no. 4, pp. 294-303, Aug. 2006.

  13. P. Girard, Survey of low-power testing of VLSI circuits, IEEE Design & Test Comput., vol. 19, no. 3, pp. 80-90, Aug. 2002.

  14. K. M. Butler, J. Saxena, A. Jain, T. Fryars, J. Lewis, and G. Hetherington, Minimizing power consumption.
