V. Kamakoti

Indian Institute of Technology Madras, Chennai, Tamil Nadu, India


Publications (73) · 5.82 Total impact

  • ABSTRACT: Conventional ATPG tools help in detecting only the equivalence class to which a fault belongs and not the fault itself. This paper presents PinPoint, a technique that further divides the equivalence class into smaller sets based on the capture power consumed by the circuit under test in the presence of different faults in it, thus aiding in narrowing down on the fault. Applying the technique on ITC benchmark circuits yielded significant improvement in diagnostic resolution.
    Test Symposium (ETS), 2013 18th IEEE European; 01/2013
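The refinement idea above can be sketched in a few lines: faults in one ATPG equivalence class are indistinguishable from pass/fail data alone, but each fault perturbs the capture power differently, so binning faults by estimated power splits the class into smaller candidate sets. The fault names, power values, and bin width below are illustrative, not taken from the paper.

```python
# Toy sketch of PinPoint-style refinement: partition an equivalence
# class of faults by their estimated capture power (values are made up).
from collections import defaultdict

def refine_by_capture_power(equiv_class, capture_power, bin_width=0.05):
    """Partition faults into subsets whose estimated capture power
    falls into different bins of width `bin_width`."""
    buckets = defaultdict(list)
    for fault in equiv_class:
        key = round(capture_power[fault] / bin_width)  # quantize power
        buckets[key].append(fault)
    return list(buckets.values())

equiv_class = ["f1", "f2", "f3", "f4"]
power = {"f1": 0.30, "f2": 0.31, "f3": 0.52, "f4": 0.90}
subsets = refine_by_capture_power(equiv_class, power)
# f1 and f2 draw near-identical power and stay together; f3 and f4
# separate, improving diagnostic resolution within the class.
```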
  • ABSTRACT: With increasing computing power in mobile devices, conserving battery power (or extending battery life) has become crucial. This, together with the fact that most applications running on these mobile devices are increasingly error tolerant, has created immense interest in stochastic (or inexact) computing. In this paper, we present a framework wherein the devices can operate at varying error-tolerant modes while significantly reducing the power dissipated. Further, in very deep sub-micron technologies, temperature plays a crucial role in both performance and power. The proposed framework presents a novel layered synthesis optimization coupled with temperature-aware supply and body-bias voltage scaling to operate the design at various “tunable” error-tolerant modes. We implement the proposed technique on an H.264 decoder block in an industrial 28nm low-leakage technology node, and demonstrate reductions in total power varying from 30% to 45% while changing the operating mode from exact computing to inaccurate/error-tolerant computing.
    Quality Electronic Design (ASQED), 2013 5th Asia Symposium on; 01/2013
  • ABSTRACT: Buffers in on-chip networks constitute a significant proportion of the power consumption and area of the interconnect, and hence reducing them is an important problem. Application-specific designs have nonuniform network utilization, thereby requiring a buffer-sizing approach that tackles the nonuniformity. Also, congestion effects that occur during network operation need to be captured when sizing the buffers. Many networks-on-chip (NoCs) are designed to operate in multiple voltage/frequency islands, with interisland communication taking place through frequency converters. To this end, we propose a two-phase algorithm to size the switch buffers in NoCs considering support for multiple-frequency islands. Our algorithm considers both the static and dynamic effects when sizing buffers. We analyze the impact of placing frequency converters (FCs) on a link, as well as pack-and-send units that effectively utilize network bandwidth. Experiments on many realistic system-on-chip (SoC) benchmarks show that our algorithm results in a 42% reduction in the amount of buffering when compared to a standard buffering approach.
    Journal of Electrical and Computer Engineering. 01/2012; 2012.
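The static phase of such a two-phase flow can be illustrated with a simple sizing rule: give each link a buffer depth proportional to its bandwidth utilization, clamped to a minimum needed for flow control. The link names, capacities, and the linear rule are assumptions for illustration, not the paper's algorithm (whose dynamic phase additionally uses simulation to capture congestion).

```python
# Illustrative static buffer sizing for an application-specific NoC:
# nonuniform link utilization leads to nonuniform buffer depths.
import math

def static_buffer_sizes(link_demand_mbps, link_capacity_mbps,
                        min_depth=2, max_depth=16):
    """Return a buffer depth (in flits) per link based on utilization."""
    sizes = {}
    for link, demand in link_demand_mbps.items():
        utilization = demand / link_capacity_mbps
        depth = math.ceil(min_depth + utilization * (max_depth - min_depth))
        sizes[link] = min(max(depth, min_depth), max_depth)
    return sizes

demand = {"cpu->mem": 900.0, "dsp->mem": 300.0, "io->cpu": 50.0}
sizes = static_buffer_sizes(demand, link_capacity_mbps=1000.0)
# Hot links get deep buffers; lightly used links stay near the minimum,
# which is where most of the 42%-style savings over uniform sizing come from.
```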
  • Source
    Journal of Low Power Electronics 01/2012; 8(5):684-695.
  • ABSTRACT: We consider the problem of reducing active-mode leakage power by modifying the post-synthesis netlists of combinational logic blocks. The stacking effect is used to reduce leakage power, but instead of a separate signal, one of the inputs to the gate itself is used. The approach is studied on multiplier blocks. It is found that a significant number of nets have high probabilities of being constant at 0 or 1. In specific applications, such as those having a high peak-to-average ratio, like audio and other signal-processing applications, this effect is more pronounced. We show how these signals can be used to put gates to sleep, thus saving significant leakage power.
    IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2011, 4-6 July 2011, Chennai, India; 01/2011
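The precondition the abstract relies on can be sketched directly: under independence assumptions, signal probabilities propagate through a gate-level netlist, and nets whose probability of being 1 sits near 0 or 1 are candidates for driving stacking (sleep) logic. The tiny netlist, input biases, and threshold below are hypothetical.

```python
# Minimal signal-probability propagation to find near-constant nets
# (candidates for stacking-based leakage reduction).
def prob_and(pa, pb):
    return pa * pb                    # P(a AND b), inputs independent

def prob_or(pa, pb):
    return pa + pb - pa * pb          # P(a OR b)

def prob_not(pa):
    return 1.0 - pa

# Inputs biased toward 0, as in high peak-to-average-ratio signals
p_a, p_b, p_c = 0.1, 0.1, 0.5
p_n1 = prob_and(p_a, p_b)             # almost always 0
p_n2 = prob_or(p_n1, p_c)
sleep_candidates = [name for name, p in [("n1", p_n1), ("n2", p_n2)]
                    if p < 0.05 or p > 0.95]
```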
  • ABSTRACT: This work presents a hardware implementation of an FIR filter that is self-adaptive; that responds to arbitrary frequency response landscapes; that has built-in coefficient error tolerance capabilities; and that has a minimal adaptation latency. This hardware design is based on a heuristic genetic algorithm. Experimental results show that the proposed design is more efficient than non-evolutionary designs even for arbitrary response filters. As a byproduct, the paper also presents a novel flow for the complete hardware design of what is termed as an Evolutionary System on Chip (ESoC). With the inclusion of an evolutionary process, the ESoC is a new paradigm in modern System on Chip (SoC) designs. The ESoC methodology could be a very useful structured FPGA/ASIC implementation alternative in many practical applications of FIR filters.
    Applied Soft Computing. 01/2011;
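A toy version of the evolutionary idea: a genetic algorithm searches for FIR coefficients that minimize error against a desired response. For brevity, the fitness here matches a target set of tap coefficients directly rather than a frequency-response landscape, and the population size, mutation scale, and target taps are all illustrative, not the paper's design.

```python
# Toy GA evolving 5 FIR tap coefficients toward a target response.
import random

random.seed(0)
TARGET = [0.1, 0.25, 0.3, 0.25, 0.1]   # illustrative 5-tap target

def fitness(coeffs):
    # Negated squared error: higher is better
    return -sum((c - t) ** 2 for c, t in zip(coeffs, TARGET))

def evolve(pop_size=40, generations=200, mut=0.05):
    pop = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]           # one-point crossover
            idx = random.randrange(len(child))
            child[idx] += random.gauss(0, mut)  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the paper's setting the same loop runs in hardware against a measured frequency response, which is what makes the filter self-adaptive.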
  • K. Seth, V. Kamakoti, S. Srinivasan
    ABSTRACT: The H.264 encoded video is highly sensitive to loss of motion vectors during transmission. Several statistical techniques have been proposed for recovering such lost motion vectors. These use only the motion vectors that belong to the macroblocks horizontally or vertically adjacent to the lost macroblock to recover the latter. Intuitively, this is one of the main reasons why these techniques yield inferior solutions in scenarios where there is non-linear motion. This paper proposes B-spline based statistical techniques that comprehensively address the motion vector recovery problem in the presence of different types of motion, including slow, fast/sudden, continuous and non-linear movements. Testing the proposed algorithms with different benchmark video sequences shows an average improvement of up to 2 dB in the Peak Signal-to-Noise Ratio of some of the recovered videos over existing techniques. A 2 dB improvement in PSNR is very significant from an application point of view.
    IEEE Transactions on Broadcasting 01/2011; · 2.09 Impact Factor
  • Kavish Seth, V. Kamakoti, S. Srinivasan
    ABSTRACT: Error concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. A subclass of error concealment in video communication is known as motion vector recovery (MVR). MVR techniques try to retrieve the lost motion information in compressed video streams based on the available information in the locality (both spatial and temporal) of the lost data. This paper surveys the activities and practice in the area of MVR-based error concealment over the last two decades, and presents a performance comparison of the prominent MVR techniques.
    IETE Technical Review. 01/2011;
  • Advanced Parallel Processing Technologies - 9th International Symposium, APPT 2011, Shanghai, China, September 26-27, 2011. Proceedings; 01/2011
  • ABSTRACT: Buffers in on-chip networks constitute a significant proportion of the power consumption and area of the interconnect. Hence, reducing the buffering overhead of networks-on-chip (NoCs) is an important problem. For application-specific designs, the network utilization across the different links and switches is non-uniform, thereby requiring a buffer-sizing approach that tackles the non-uniformity. Moreover, congestion effects that occur during network operation need to be captured when sizing the buffers. To this end, we propose a two-phase algorithm to size the switch buffers in NoCs. Our algorithm considers both the static (based on bandwidth and latency requirements) and dynamic (based on simulation) effects when sizing buffers. Our experiments show that applying the algorithm results in a 42% reduction in the amount of buffering required to meet the application constraints when compared to a standard buffering approach.
    IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2011, 4-6 July 2011, Chennai, India; 01/2011
  • K. Seth, V. Kamakoti, S. Srinivasan
    ABSTRACT: This study proposes a novel motion vector recovery (MVR) algorithm for the H.264 video coding standard, which takes into account the change in the motion vectors (MVs) in different directions. Existing algorithms for MVR are confined to using the horizontal or vertical directions to recover the lost MVs. However, in the presence of non-linear movements or a fast/sudden motion of any object in a scene of the given input video, the MVs recovered by these algorithms turn out to be inaccurate. The proposed directional interpolation-based technique can interpolate the MVs in any direction based on the tendency of motion around the lost macroblock, thus making it suitable for handling non-linear or fast motions. Testing the proposed technique with different benchmark video sequences shows an average improvement of 1-2 dB in the peak signal-to-noise ratio of the recovered video over existing techniques.
    IET Image Processing 05/2010; · 0.90 Impact Factor
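The directional idea can be sketched as follows: for a lost macroblock, pick the direction along which the two opposite neighboring MVs agree best, then interpolate the lost MV between them. The neighbor MVs, the L1 disagreement measure, and the simple midpoint interpolation are illustrative stand-ins for the paper's technique.

```python
# Hedged sketch of direction-aware MV recovery for a lost macroblock.
def interpolate_lost_mv(neighbors):
    """`neighbors` maps a direction name to the (mv_before, mv_after)
    pair of motion vectors on opposite sides of the lost macroblock."""
    def disagreement(pair):
        (x1, y1), (x2, y2) = pair
        return abs(x1 - x2) + abs(y1 - y2)   # L1 distance between the pair

    # Choose the direction with the most consistent motion tendency
    best_dir = min(neighbors, key=lambda d: disagreement(neighbors[d]))
    (x1, y1), (x2, y2) = neighbors[best_dir]
    return best_dir, ((x1 + x2) / 2, (y1 + y2) / 2)

neighbors = {
    "horizontal": ((4, 0), (-3, 1)),   # motion reverses: unreliable
    "vertical":   ((2, 5), (2, 6)),    # consistent downward motion
    "diagonal":   ((1, 1), (5, -2)),
}
direction, mv = interpolate_lost_mv(neighbors)
```

A horizontal/vertical-only scheme would be forced to average the inconsistent horizontal pair here; choosing the direction by motion tendency is what handles non-linear or fast motion.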
  • Kavish Seth, V. Kamakoti, S. Srinivasan
    ABSTRACT: This paper proposes a fast statistical approach to recover lost motion vectors in the H.264 video coding standard. Unlike other video coding standards, the motion vectors of H.264 cover a smaller area of the video frame being encoded. This leads to a strong correlation between neighboring motion vectors, thus making the H.264 standard amenable to statistical analysis for recovering lost motion vectors. This paper proposes a Pearson Correlation Coefficient based matching algorithm that speeds up the recovery of lost motion vectors with very little compromise in the visual quality of the recovered video. To the best of our knowledge, this is the first attempt that employs the correlation coefficient for motion vector recovery. Experimental results obtained by employing the proposed algorithm on standard benchmark video sequences show that it yields comparable quality of recovered video with significantly less computation than the best reported in the literature, thus making it suitable for real-time applications.
    Proc SPIE 02/2010;
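The matching step can be illustrated with a minimal sketch: among candidate motion vectors, keep the one whose predicted boundary pixels correlate best (Pearson r) with the received neighboring pixels. The pixel rows and candidate MVs below are made up; a real implementation would extract them from the decoded frame.

```python
# Pearson-correlation matching between candidate MVs' predicted
# boundary pixels and the known neighbor boundary (values are made up).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

reference = [10, 12, 15, 20, 26, 30]          # known neighbor boundary pixels
candidates = {
    (0, 1):  [11, 13, 16, 21, 25, 31],        # tracks the reference trend
    (3, -2): [30, 25, 20, 16, 12, 10],        # reversed trend
}
best_mv = max(candidates, key=lambda mv: pearson(reference, candidates[mv]))
```

Correlation is cheap to compute relative to pixel-domain boundary-matching error over many candidates, which is the source of the claimed speed-up.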
  • Lavanya Jagan, V. Kamakoti
    ABSTRACT: In the highly dynamic semiconductor manufacturing industry, migration to the next process technology node aims to create a smaller, faster, cheaper chip. To achieve this, integrated circuit (IC) designs are becoming more complex and new manufacturing materials/processes are being introduced. Additionally, demands for sustainable yield and shorter time to volume production have to be met, as they govern the profitability of the semiconductor industry. As a result, rapid yield improvement techniques are crucial to overcoming huge yield losses. Any yield improvement technique typically involves detecting the yield loss (testing) and identifying its root cause (diagnosis). This paper presents a survey of the existing research work reported in the literature on test and diagnostics of nanometer chips and presents interesting open issues.
    IETE Technical Review. 01/2010;
  • ABSTRACT: The usage of more advanced, less mature processes during manufacturing of semiconductor devices has increased the need for performing unconventional types of testing, like temperature testing, in order to maintain the same high quality levels. However, performing temperature testing is costly. This paper proposes a viable low-cost alternative to temperature testing that quantifies the impact of temperature variations on the test quality and also determines optimal test conditions. The proposed test flow is empirically validated on an industrial-standard die. The results obtained show that the majority of the defects that were originally detected by temperature testing are also detected by the proposed test flow, thereby reducing the dependence on temperature testing to achieve zero-defect quality. Details of an interesting defect behavior at cold test conditions are also presented.
    VLSI Design 2010: 23rd International Conference on VLSI Design, 9th International Conference on Embedded Systems, Bangalore, India, 3-7 January 2010; 01/2010
  • ABSTRACT: Volume Yield Diagnostics (VYD) is crucial to diagnose critical systematic yield issues from the reports obtained by testing thousands of chips. This paper presents an efficient clustering technique for VYD that has been shown to work successfully both in a simulation environment and on real industrial failure data.
    VLSI Design, 2009 22nd International Conference on; 02/2009
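A toy sketch of the clustering idea: failing dies with similar failure signatures (here, sets of failing test IDs) are grouped, so a systematic yield issue shows up as one large cluster. The Jaccard similarity measure, the greedy assignment, and the threshold are assumptions for illustration, not the paper's technique.

```python
# Greedy clustering of die failure signatures for volume diagnostics.
def jaccard(a, b):
    """Similarity between two sets of failing test IDs."""
    return len(a & b) / len(a | b)

def cluster(signatures, threshold=0.5):
    clusters = []
    for die, sig in signatures.items():
        for c in clusters:
            if jaccard(sig, c["sig"]) >= threshold:
                c["dies"].append(die)
                c["sig"] |= sig          # grow the cluster signature
                break
        else:
            clusters.append({"sig": set(sig), "dies": [die]})
    return clusters

sigs = {
    "die1": {"t1", "t2", "t3"},   # die1 and die2 share a failure mode
    "die2": {"t1", "t2"},
    "die3": {"t9"},               # unrelated failure
}
clusters = cluster(sigs)
```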
  • K. Shyamala, J. Vimalkumar, V. Kamakoti
    J. Low Power Electronics. 01/2009; 5:429-438.
  • Source
    ABSTRACT: With the emergence of complex high-performance microprocessors, functional test generation has become a crucial design step. Constraint-based test generation is a well-studied directed behavioral-level functional test generation paradigm. The paradigm involves conversion of a given circuit model into a set of constraints and employing constraint solvers to generate tests for it. However, automatic extraction of constraints from a given behavioral hardware description language (HDL) model has remained a challenging open problem. This paper proposes an approach for automatic extraction of word-level model constraints from a behavioral Verilog HDL description. The scenarios to be tested are also expressed as constraints. The model and the scenario constraints are solved together using an integer solver to arrive at the necessary functional test. The effectiveness of the approach is demonstrated by automatically generating the constraint models for: 1) an exclusive-shared-invalid multiprocessor cache coherency model and 2) the 16-bit DLX architecture, from their respective Verilog-based behavioral models. Experimental results that generate test vectors for high-level scenarios like pipeline hazards, cache misses, etc., spanning multiple time-frames are presented.
    IEEE Transactions on Very Large Scale Integration (VLSI) Systems 05/2008; · 1.22 Impact Factor
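The paradigm can be sketched at toy scale: the design and the scenario are both expressed as constraints over word-level variables, and a solver finds an assignment that constitutes a functional test. A real flow uses an integer constraint solver over constraints extracted from Verilog; this sketch uses exhaustive search over 4-bit values, and the "adder carry-out" scenario is an illustrative stand-in.

```python
# Toy constraint-based test generation for a 4-bit adder model.
from itertools import product

WIDTH = 4
MAX = (1 << WIDTH) - 1

def model(a, b):
    """Model constraint: word-level behavior of the adder."""
    s = a + b
    return s & MAX, (s >> WIDTH) & 1   # (sum, carry-out)

def scenario(a, b):
    """Scenario constraint: exercise the carry-out (overflow) path."""
    _, carry = model(a, b)
    return carry == 1

# "Solve" model + scenario together by enumerating the input space
tests = [(a, b) for a, b in product(range(MAX + 1), repeat=2) if scenario(a, b)]
first_test = tests[0]   # any satisfying assignment is a functional test
```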
  • ABSTRACT: In sub-70 nm technologies, leakage power becomes a significant component of the total power. Designers address this concern by extensive use of adaptive voltage scaling techniques to reduce dynamic as well as leakage power. Low-power scan test schemes that have evolved in the past primarily address dynamic power reduction, and are less effective in reducing the total power. This paper proposes a Power-Managed Scan (PMScan) scheme which exploits the presence of adaptive voltage scaling logic to reduce test power. Some practical implementation challenges that arise when the proposed scheme is employed on industrial designs are also discussed. Experimental results on benchmark circuits and industrial designs show that employing the proposed technique leads to a significant reduction in dynamic and leakage power. The proposed method can also be used as a vehicle to trade-off test application time with test power by suitably adjusting the scan shift frequency and scan-mode power supplies.
    Journal of Low Power Electronics 01/2008; 4:101-110.
  • ABSTRACT: In sub-70 nm technologies, leakage power becomes a significant component of the total power. Designers address this concern by extensive use of adaptive voltage scaling techniques to reduce dynamic as well as leakage power. Low-power scan test schemes that have evolved in the past primarily address dynamic power reduction, and are less effective in reducing the total power. We propose a power-managed scan (PMScan) scheme which exploits the presence of adaptive voltage scaling logic to reduce test power. We also discuss some practical implementation challenges that arise when the proposed scheme is employed on industrial designs. Experimental results on benchmark circuits and industrial designs show a significant reduction in dynamic and leakage power. The proposed method can also be used as a vehicle to trade-off test application time with test power by suitably adjusting the scan shift frequency and scan-mode power supplies.
    Test Conference, 2007. ITC 2007. IEEE International; 11/2007
  • Source
    ABSTRACT: Process variation is an increasingly dominant phenomenon affecting both power and performance in sub-100 nm technologies. Cost considerations often do not permit over-designing the power supply infrastructure for test mode, considering the worst-case scenario. Test application must not over-exercise the power supply grids, lest the tests damage the device or lead to false test failures. The problem of debugging a delay test failure can therefore be highly complex. We argue that false delay test failures can be avoided by generating "safe" patterns that are tolerant to on-chip variations. A statistical framework for power-safe pattern generation is proposed, which uses process variation information, power grid topology and regional constraints on switching activity. Experimental results are provided on benchmark circuits to demonstrate the effectiveness of the framework.
    Test Conference, 2007. ITC 2007. IEEE International; 11/2007
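The screening idea behind "safe" patterns can be sketched as a regional check: each pattern's estimated switching activity per power-grid region must stay within that region's budget, derated by a margin for on-chip variation. The region names, budgets, activities, and margin below are made up; the paper's framework derives such constraints statistically from grid topology and variation data.

```python
# Illustrative power-safety screen over per-region switching activity.
def is_power_safe(activity_by_region, budget_by_region, variation_margin=0.1):
    """True if the pattern's switching stays within budget in every
    power-grid region, after derating budgets for process variation."""
    return all(
        activity_by_region[r] <= budget_by_region[r] * (1 - variation_margin)
        for r in budget_by_region
    )

budgets = {"north": 100.0, "south": 80.0}
patterns = {
    "p1": {"north": 85.0, "south": 60.0},
    "p2": {"north": 95.0, "south": 60.0},   # exceeds derated north budget
}
safe = [p for p, act in patterns.items() if is_power_safe(act, budgets)]
```

A pattern like "p2" passes a nominal check but could cause a false delay-test failure on a slow-corner die, which is exactly what the derating margin guards against.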