Chapter

Ultra-Low-Power Strategy for Reliable IoE Nanoscale Integrated Circuits

Abstract

Ultra-low-power strategies are of major importance in today's integrated circuits designed for Internet of Everything (IoE) applications, as all portable devices strive for ever-longer battery life. Dynamic voltage and frequency scaling techniques can be rewarding, and the drastic power savings obtained in subthreshold operation make this an important technique for battery-operated devices. However, unpredictability in nanoscale chips is high, and working at reduced supply voltages makes circuits more vulnerable to operation-induced delay-faults and transient-faults. The goal is to implement an adaptive voltage scaling (AVS) strategy that can work at subthreshold voltages to considerably reduce power consumption. The proposed strategy uses aging-aware local and global performance sensors to enhance reliability and fault tolerance, allowing circuits to be dynamically optimized during their lifetime while preventing the occurrence of errors. SPICE simulations in a 65 nm CMOS technology demonstrate the results.
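The closed-loop behavior described in the abstract can be sketched as a simple control loop in which sensor flags decide whether the supply voltage is raised or lowered. The sketch below is a minimal illustration only: the step sizes, voltage bounds, and the `avs_step` function are assumptions made for this example, not the authors' actual controller.

```python
# Illustrative sketch of an adaptive voltage scaling (AVS) control loop driven
# by performance-sensor flags. All names, step sizes and thresholds here are
# assumptions for illustration, not the design proposed in the chapter.

VDD_MIN, VDD_MAX, STEP = 0.30, 1.00, 0.05  # volts; subthreshold floor assumed

def avs_step(vdd, sensor_warning, sensor_error):
    """Return the next supply voltage given local/global sensor outputs.

    sensor_warning: predictive (guard-band) flag, fired before a real error.
    sensor_error:   actual timing-error flag; must never be tolerated.
    """
    if sensor_error:                      # timing failure: back off immediately
        return min(vdd + 2 * STEP, VDD_MAX)
    if sensor_warning:                    # margin shrinking: raise slightly
        return min(vdd + STEP, VDD_MAX)
    return max(vdd - STEP, VDD_MIN)       # healthy margin: scale down, save power

vdd = 1.00
for warn, err in [(False, False), (False, False), (True, False), (False, True)]:
    vdd = avs_step(vdd, warn, err)
# after the error event, vdd is driven back up to VDD_MAX
```

The two-flag design mirrors the abstract's split between predictive (aging-aware) sensing and actual error detection: a warning raises the voltage by one step, while a real error forces a larger correction.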


... Moreover, many solutions exist for single-output converters, but to reduce power consumption in today's IoT chips, aggressive power-reduction techniques are being increasingly used and require multi-domain power-supply outputs [15]. As explained in [16], to achieve ultra-low energy consumption in these new IoT chips, Adaptive Voltage Scaling (AVS) and Dynamic Voltage and Frequency Scaling (DVFS) techniques should be used to work at different power-supply voltage (VDD) and clock-frequency levels. ...
... As many systems have idle periods, such periods can be used to perform self-test and inform the system manager whether everything is OK, whether system maintenance is required, or even whether the system must be shut down because safe operation is at risk. Safety-critical applications [22] may justify adding redundancy and voting, or the deployment of built-in sensors in IoT applications [23,24]. ...
Chapter
Hardware/Software (hw/sw) systems have changed the human way of living. Internet of Things (IoT) and Artificial Intelligence (AI), now two dominant research themes, are intended and expected to change it further. Hopefully, for the good. In this book chapter, relevant challenges associated with the development of a “society” of intelligent smart objects are highlighted. Humans and smart objects are expected to interact: humans with natural intelligence (people) and smart objects (things) with artificial intelligence. The Internet, the platform of globalization, has connected people around the world and will progressively become the platform for connecting “things”. Will humans be able to build up an IoT that benefits them, while keeping a sustainable environment on this planet? How will designers guarantee that the IoT world will not run out of control? What are the standards? How can they be implemented? These issues are addressed in this chapter from the engineering and educational points of view. In fact, when dealing with “decision-making systems”, design and test should guarantee not only correct and safe operation, but also the soundness of the decisions such smart objects take during their lifetime. The concept of Design for Accountability (DfA) is thus proposed, and some initial guidelines are outlined.
Article
Full-text available
Multiple-output DC–DC converters are essential in a multitude of applications where different DC output voltages are required. The interest in and importance of this type of multiport configuration is also reflected in the fact that many electronics manufacturers currently develop integrated solutions. Traditionally, the different output voltages required are obtained by means of a transformer with several windings, which also provide electrical isolation. However, the current trend in the development of multiple-output DC–DC converters follows general goals, such as low losses, high power density, and high efficiency, as well as the development of new architectures and control strategies. Certainly, simple structures with a reduced number of components and power switches will be one of the new trends, especially to reduce size. In this sense, the incorporation of Wide-Band-Gap (WBG) devices, particularly Gallium Nitride (GaN) and Silicon Carbide (SiC), will establish future trends, advantages, and disadvantages in the development and applications of multiple-output DC–DC converters. In this paper, we present a review of the most important topics related to multiple-output DC–DC converters based on their main topologies and configurations, applications, solutions, and trends. A wide variety of configurations and topologies of multiple-output DC–DC converters are shown (more than 30): isolated and non-isolated, single- and multiple-switch, and based on soft- and hard-switching techniques, which are used in many different applications and solutions.
Patent
Full-text available
System and method are provided for continually monitoring reliability, or aging, of a digital system and for issuing a warning signal if digital system operation degrades past a specified threshold.
Article
Full-text available
Considering the variety of studies reported in the low-power design era, the subthreshold design trend in Very Large Scale Integrated (VLSI) circuits has experienced significant development in recent years. The growing need for the lowest possible power consumption has been the primary motivation for increased research in this area, although other goals, such as the lowest energy-delay product, have also been achieved through sub-threshold design. There are, however, few extensive studies that provide comprehensive design insight to keep up with the rapid pace and large-scale adoption of sub-threshold digital design methodology. This paper presents a complete review of recent studies in this field and explores all aspects of sub-threshold design methodology. Moreover, near-threshold design and low-power pipelining are also considered to provide a general review of sub-threshold applications. At the end, a discussion about future directions in ultralow-power design is also included.
Article
Full-text available
Portable/implantable biomedical applications usually exhibit stringent power budgets for prolonging battery lifetime, but loose operating-frequency requirements due to small bio-signal bandwidths, typically below a few kHz. The use of sub-threshold digital circuits is ideal in such scenarios to achieve optimized power/speed tradeoffs. This paper discusses the design of a sub-threshold standard cell library using a standard 0.18-µm CMOS technology. A complete library of 56 standard cells is designed, and the methodology covers schematic design, transistor width scaling and layout design, as well as timing, power and functionality characterization. A performance comparison between our sub-threshold standard cell library and a commercial standard cell library is performed using a 5-stage ring oscillator and an ECG-designated FIR filter. Simulation results show that our library achieves a total power saving of 95.62% and a leakage power reduction of 97.54% when compared with the same design implemented with the commercial standard cell library (SCL).
Article
Full-text available
The IEEE 802.15.4 standard relaxes the requirements on the receiver front-end, making subthreshold operation a viable solution. The specification is discussed and guidelines are presented for a small-area, ultra-low-power design. A subthreshold-biased low-noise amplifier (LNA) has been designed and fabricated for the 2.4-GHz IEEE 802.15.4 standard using a standard low-cost 0.18-µm RF CMOS process. The single-stage LNA saves chip area by using only one inductor. The measured gain is more than 20 dB with an S11 of −19 dB while drawing 630 µA of dc current. The measured noise figure is 5.2 dB.
Conference Paper
Full-text available
Temperature, voltage, and current sensors monitor the operation of a TCP/IP offload accelerator engine fabricated in 90nm CMOS, and a control unit dynamically changes frequency, voltage, and body bias for optimum performance and energy efficiency. Fast response to droops and temperature changes is enabled by a multi-PLL clocking unit and on-chip body bias. Adaptive techniques are also used to compensate performance degradation due to device aging, reducing the aging guardband.
Conference Paper
Full-text available
With increasing clock frequencies and silicon integration, power-aware computing has become a critical concern in the design of embedded processors and systems-on-chip. One of the more effective and widely used methods for power-aware computing is dynamic voltage scaling (DVS). In order to obtain the maximum power savings from DVS, it is essential to scale the supply voltage as low as possible while ensuring correct operation of the processor. The critical voltage is chosen such that under a worst-case scenario of process and environmental variations, the processor always operates correctly. However, this approach leads to a very conservative supply voltage, since such a worst-case combination of different variabilities is very rare. In this paper, we propose a new approach to DVS, called Razor, based on dynamic detection and correction of circuit timing errors. The key idea of Razor is to tune the supply voltage by monitoring the error rate during circuit operation, thereby eliminating the need for voltage margins and exploiting the data dependence of circuit delay. A Razor flip-flop is introduced that double-samples pipeline stage values, once with a fast clock and again with a time-borrowing delayed clock. A metastability-tolerant comparator then validates latch values sampled with the fast clock. In the event of a timing error, a modified pipeline misspeculation recovery mechanism restores correct program state. A prototype Razor pipeline was designed and analyzed in a 0.18 μm technology. Razor energy overhead during normal operation is limited to 3.1%. Analyses of a full-custom multiplier and a SPICE-level Kogge-Stone adder model reveal that substantial energy savings are possible for these devices (up to 64.2%) with little impact on performance due to error recovery (less than 3%).
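Razor's double-sampling idea can be illustrated with a toy timing model: a main flip-flop samples at the clock edge and a shadow latch samples after an extra delay, so a value caught only by the shadow latch flags a recoverable timing error. The function name and timing values below are illustrative assumptions, not the paper's circuit.

```python
# Toy model of Razor-style double sampling: the main flip-flop captures at the
# clock edge, a shadow latch captures after an extra delay, and a mismatch
# between the two flags a timing error. Values are illustrative only.

def razor_sample(arrival, period, shadow_delay):
    """Return (main_value_valid, error) for one pipeline stage.

    arrival:      settle time of the combinational logic, in ns
    period:       clock period, in ns
    shadow_delay: extra time the shadow latch waits beyond the edge, in ns
    """
    main_ok = arrival <= period                   # main FF caught settled data
    shadow_ok = arrival <= period + shadow_delay  # shadow latch caught it
    error = shadow_ok and not main_ok             # mismatch => recoverable error
    return main_ok, error

# Fast path: no error. Slightly late path: detected and correctable.
print(razor_sample(arrival=4.0, period=5.0, shadow_delay=1.0))  # (True, False)
print(razor_sample(arrival=5.5, period=5.0, shadow_delay=1.0))  # (False, True)
```

Note that a transition arriving even later than the shadow window would escape detection entirely, which is why Razor-style schemes must bound how far the supply voltage is allowed to drop.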
Article
Full-text available
This paper presents methods for efficient energy-performance optimization at the circuit and micro-architectural levels. The optimal balance between energy and performance is achieved when the sensitivity of energy to a change in performance is equal for all the design variables. The sensitivity-based optimizations minimize energy subject to a delay constraint. Energy savings of about 65% can be achieved without delay penalty with equalization of sensitivities to sizing, supply, and threshold voltage in a 64-bit adder, compared to the reference design sized for minimum delay. Circuit optimization is effective only in the region of about ±30% around the reference delay; outside of this region the optimization becomes too costly either in terms of energy or delay. Using optimal energy-delay tradeoffs from the circuit level and introducing more degrees of freedom, the optimization is hierarchically extended to higher abstraction layers. We focus on the micro-architectural optimization and demonstrate that the scope of energy-efficient optimization can be extended by the choice of circuit topology or the level of parallelism. In a 64-bit ALU example, parallelism of five provides a three-fold performance increase, while requiring the same energy as the reference design. Parallel or time-multiplexed solutions significantly affect the area of their respective designs, so the overall design cost is minimized when optimal energy-area tradeoff is achieved.
Chapter
Human-Computer Interaction (HCI) applications need reliable hardware, and the development of today's sensors and cyber-physical systems for HCI applications is critical. Moreover, such hardware is becoming more and more self-powered, and mobile devices are today important platforms for HCI applications. While battery-operated devices strive for ever-longer battery life, aggressive low-power techniques are used in today's hardware systems to accomplish such a mission. Techniques like Dynamic Voltage and Frequency Scaling (DVFS) and the use of subthreshold power-supply voltages can effectively achieve substantial power savings. However, working at reduced power-supply voltages and reduced clock frequencies imposes additional challenges on the design and operation of devices. Today's chips face several parametric variations, such as PVTA (Process, power-supply Voltage, Temperature and Aging) variations, which can affect circuit performance and reliability. This paper presents a performance-sensor solution to be used in cyber-physical systems to improve the reliability of today's chips, guaranteeing error-free operation even with the use of aggressive low-power techniques. In fact, this performance sensor allows optimizing the trade-off between power and performance, avoiding the occurrence of errors. In order to be easily used and adopted by industry, the performance sensor is a non-intrusive global sensor, which uses two dummy critical paths to sense performance at the power-supply voltage and clock frequency in use, and under the existing PVTA variations. The novelty of this solution lies in the new sensor architecture, which allows operation at subthreshold VDD levels. This feature makes this global sensor a unique solution to control DVFS, even at subthreshold voltages, avoiding performance errors and allowing circuit operation and performance to be optimized.
Simulations using a SPICE tool allowed characterizing the new sensor for operation at sub-threshold voltages, and results are presented for a 65 nm CMOS technology using Predictive Technology Models (PTM). The results show that the sensor's sensitivity increases as PVTA degradations increase, even when working at subthreshold voltages.
Chapter
The Internet of Things (IoT) paradigm is enabling easy access to, and interaction with, a wide variety of devices, some of them self-powered, equipped with microcontrollers, sensors and sensor networks. Low-power and ultra-low-power strategies have, as never before, a huge importance in today's CMOS integrated circuits, as all portable devices strive for ever-longer battery life in ever-smaller form factors. The solution is to use clever power-management strategies and drastically reduce power consumption in IoT chips. Dynamic Voltage and Frequency Scaling techniques can be rewarding, and operation at subthreshold power-supply voltages can effectively achieve significant power savings. However, reducing the power-supply voltage reduces performance and consequently increases delay, which in turn makes the circuit more vulnerable to operation-induced delay-faults and transient-faults. What is the best compromise between power, delay and performance? This paper proposes an automatic methodology and tool to perform power-delay analysis in CMOS gates and circuits, to automatically identify the best compromise between power and delay. By instantiating the HSPICE simulator, the proposed tool can automatically perform analyses such as power-delay product, energy-delay product, power dissipation, or even dynamic and static power dissipation. The optimum operating point with respect to the power-supply voltage is defined, for each circuit or sub-circuit and considering subthreshold operation or not, as the minimum power-supply voltage at which delays do not increase excessively, implementing a compromise between delay and power consumption. The algorithm is presented, along with CMOS circuit examples, and all the analysis results are shown for typical benchmark circuits. Results indicate that subthreshold voltages can be a good compromise between reducing power and increasing delay.
Chapter
The low-power quest in CMOS integrated circuits is pushing power-supply voltages into subthreshold levels. The drastic power savings obtained in subthreshold voltage operation make this an important technique for battery-operated devices. However, when working at subthreshold power-supply voltages the operating frequency has to be reduced, making Dynamic Voltage and Frequency Scaling (DVFS) methodologies hard to implement. In fact, existing solutions use wide safety margins, and DVFS is typically implemented with static, pre-defined steps, both for the supply voltage and for the clock frequency. But changes in VDD and in clock frequency impose additional challenges, as delay-faults may arise, especially in nanometer technologies. Moreover, when a PVTA (Process, power-supply Voltage, Temperature and Aging) variation occurs, circuit performance is affected and circuits are more prone to delay-faults, especially when cumulative degradations pile up. This paper presents an improved, low-power version of the Scout Flip-Flop, a performance sensor for tolerance and predictive detection of delay-faults in synchronous digital circuits, which can now operate at subthreshold power-supply voltage levels. The sensor is based on a master-slave Flip-Flop (FF), the Scout FF, with built-in sensor functionality to locally identify critical operations, denoted here as being in the imminence of an error, a performance error. The novelty of this solution lies in the new architecture for the sensor functionality, which allows operation at subthreshold VDD levels. This feature makes the Scout FF a unique solution to control DVFS and avoid delay-fault errors, allowing circuit operation and performance to be optimized. To accomplish this, two distinct guard-band windows are created: a tolerance window and a detection window.
Simulations using a SPICE tool allowed characterizing the new sensor and flip-flop for operation at sub-threshold voltages, and results are presented for a 65 nm CMOS technology using Predictive Technology Models (PTM). The results show that the improved Scout version is effective in tolerance and predictive error detection while working at subthreshold voltages.
Article
This paper presents the Scout Flip-Flop, a new performance Sensor for toleranCe and predictive detectiOn of delay-faUlTs in synchronous digital circuits. The sensor is based on a new master-slave Flip-Flop (FF), the Scout FF, with built-in functionality to locally (inside the FF) create two distinct guard-band windows: (1) a tolerance window, to increase tolerance to late transitions, making the Scout's master latch transparent during an additional predefined period after the clock trigger; and (2) a detection window, which starts before the clock edge trigger and persists during the tolerance window, to signal that performance and circuit functionality are at risk. When a PVTA (Process, power-supply Voltage, Temperature and Aging) variation occurs, circuit performance is affected and a delay-fault may occur. Hence, the existence of a tolerance window introduces extra time slack by borrowing time from subsequent clock cycles. Moreover, as the predictive-error detection window starts prior to the clock edge trigger, it provides an additional safety margin and may be used to trigger corrective actions before a real error occurs, such as clock-frequency reduction. Both the tolerance and detection windows are defined by design and are sensitive to performance errors, increasing in size under worse PVTA conditions. Extensive SPICE simulations allowed characterizing the new flip-flop, and simulation results are presented for 65nm CMOS technology, using Berkeley Predictive Technology Models (PTM), showing Scout's effectiveness in tolerance and predictive error detection.
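The interplay of the Scout FF's two windows can be sketched as a toy classifier of data-arrival times: transitions inside the detection window raise a predictive warning, while transitions just after the clock edge are absorbed by the tolerance window. The window sizes and the function name below are illustrative assumptions, not the paper's design values.

```python
# Toy timing check for the two Scout FF guard-band windows: a detection window
# opening before the clock edge and a tolerance window extending after it.
# All numeric values and names are illustrative assumptions.

def scout_classify(arrival, period, det_window, tol_window):
    """Classify a data transition arriving `arrival` ns after the last edge.

    Returns 'safe', 'warning' (predictive error: inside the detection window),
    'tolerated' (late but absorbed by time borrowing), or 'error'.
    """
    if arrival < period - det_window:
        return 'safe'
    if arrival <= period:
        return 'warning'            # inside detection window: act pre-emptively
    if arrival <= period + tol_window:
        return 'tolerated'          # master latch still transparent: time borrowed
    return 'error'

print(scout_classify(4.8, period=5.0, det_window=0.5, tol_window=0.5))  # warning
print(scout_classify(5.3, period=5.0, det_window=0.5, tol_window=0.5))  # tolerated
```

Because the detection window opens before the edge, a warning fires one step before the transition would even need tolerating, which is what allows corrective action (e.g., lowering the clock frequency) before a real failure.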
Article
Carrizo (CZ, Fig. 4.8.7) is AMD's next-generation mobile performance accelerated processing unit (APU), which includes four Excavator (XV) processor cores and eight Radeon™ graphics core next (GCN) cores, implemented in a 28nm HKMG planar dual-oxide FET technology featuring 3 Vts of thin-oxide devices and 12 layers of Cu-based metallization. This 28nm technology is a density-focused version of the 28nm technology used by Steamroller (SR) [1] featuring eight 1× metals for dense routing, one 2× and one 4× for low-RC routing and two 16x metals for power distribution.
Article
The circuit failure prediction technique is a different aging-sensor approach, proposed to address the issue of aging-aware power or frequency tuning with predictive fault detection. The idea behind this approach is to anticipate system failure before it really occurs, using an early signal capture at selected critical memory cells. Its major application has been to reduce the pessimistic worst-case delay margins that accommodate process, power-supply voltage, and temperature (PVT) variations, which significantly limit system performance. This concept has been improved and used to monitor performance errors during a product's lifetime. Another improved local sensor architecture has also been proposed to perform constant monitoring of heterogeneous voltage and temperature aging (VTA) variations. The main advantages of this sensor are that it adapts itself to VTA variations, enhancing its error-prediction capability as variations increase, and that it performs constant monitoring.
Conference Paper
This paper presents a new approach on aging sensors for synchronous digital circuits. An adaptive error-prediction flip-flop architecture with built-in aging sensor is proposed, performing on-line monitoring of long-term performance degradation of CMOS digital systems. The main advantage is that the sensor's performance degradation works in favor of the predictive error detection. The sensor is out of the signal path. Performance error prediction is implemented by the detection of late transitions at flip-flop data input, caused by aging (namely, due to NBTI), or to physical defects activated by long lifetime operation. Such errors must not occur in safety-critical systems (automotive, health, space). A sensor insertion algorithm is also proposed, to selectively insert them in key locations in the design. Sensors can be always active or at pre-defined states. Simulation results are presented for a balanced pipeline multiplier in 65 nm CMOS technology, using Berkeley Predictive Technology Models (PTM). It is shown that the impact of aging degradation and/or PVT (Process, power supply Voltage and Temperature) variations on the sensor enhance error prediction.
Conference Paper
The purpose of this paper is to present a predictive error-detection methodology based on monitoring the long-term performance degradation of semiconductor systems. Delay variation is used to sense timing degradation due to aging (namely, due to NBTI) or to physical defects activated by long lifetime operation, which may occur in safety-critical systems (automotive, health, space). Errors are prevented by detecting abnormal (but not fatal) propagation delays in critical paths. A monitoring procedure and a programmable aging sensor are proposed. The sensor is selectively inserted in key locations in the design and can be activated either on the user's request or in pre-defined situations (e.g., at power-up). The sensor is optimized to exhibit low sensitivity to PVT (Process, power supply Voltage and Temperature) variations. Sensor limitations are analysed. A new sensor architecture and a sensor insertion algorithm are proposed. Simulation results are presented with an ST 65 nm sensor design.
Conference Paper
Dynamic voltage scaling (DVS) is a popular approach for energy reduction of integrated circuits. Current processors that use DVS typically have an operating voltage range from full to half of the maximum Vdd. However, it is possible to construct designs that operate over a much larger voltage range: from full Vdd to subthreshold voltages. This possibility raises the question of whether a larger voltage range improves the energy efficiency of DVS. First, from a theoretical point of view, we show that for subthreshold supply voltages leakage energy becomes dominant, making "just in time completion" energy inefficient. We derive an analytical model for the minimum energy optimal voltage and study its trends with technology scaling. Second, we use the proposed model to study the workload activity of an actual processor and analyze the energy efficiency as a function of the lower limit of voltage scaling. Based on this study, we show that extending the voltage range below 1/2 Vdd will improve the energy efficiency for most processor designs, while extending this range to subthreshold operation is beneficial only for very specific applications. Finally, we show that operation deep in the subthreshold voltage range is never energy-efficient.
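The minimum-energy argument above can be reproduced with a back-of-envelope model: dynamic energy scales as C·VDD², while leakage energy is the leakage current integrated over an ever-longer cycle time as VDD drops, so total energy per operation has an interior minimum. All device constants below are illustrative assumptions, chosen only to make the trade-off visible, not values from the paper.

```python
# Numeric sketch of the minimum-energy supply voltage: below some VDD, leakage
# energy (leakage power times a cycle time that grows exponentially in
# subthreshold) dominates the C*VDD^2 switching energy. Constants are assumed.
import math

C, I0, VT, n, UT = 2e-13, 1e-7, 0.35, 1.5, 0.026   # assumed device constants
DEPTH, N_GATES = 30, 1000                          # logic depth; idle leaking gates

def energy_per_op(vdd):
    # crude exponential on-current model, used across the whole range
    ion = I0 * math.exp((vdd - VT) / (n * UT))
    t_cycle = DEPTH * C * vdd / ion                # cycle time tracks gate delay
    i_leak = I0 * math.exp(-VT / (n * UT))         # off-current per gate
    e_dyn = C * vdd ** 2                           # switching energy per op
    e_leak = N_GATES * i_leak * vdd * t_cycle      # leakage integrated over a cycle
    return e_dyn + e_leak

# Sweep candidate supply voltages and locate the minimum-energy point.
vdds = [0.15 + 0.01 * k for k in range(46)]        # 0.15 V .. 0.60 V
v_opt = min(vdds, key=energy_per_op)
```

Consistent with the abstract's conclusion, the optimum in this toy model sits well above the lowest operable voltage: pushing VDD deep into subthreshold inflates the cycle time and lets leakage energy dominate.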
Article
Negative Bias Temperature Instability (NBTI) is one of the most critical device reliability issues facing scaled CMOS technology. In order to better understand the characteristics of this mechanism, accurate and efficient means of measuring its effects must be explored. In this work, we describe an on-chip NBTI degradation sensor using two delay-locked loops (DLL). The increase in PMOS transistor threshold due to NBTI stress is translated into the control voltage of a DLL for high sensing gain. Measurements from a 0.13μm test chip show a maximum gain of 16X in the operating range of interest, with microsecond order measurement times for minimal unwanted recovery. The proposed NBTI sensor also supports various DC and AC stress modes.
Article
The implementation of complex functionality in low-power (LP) nano-CMOS technologies must be carried out in the presence of enhanced susceptibility to PVT (Process, power supply Voltage and Temperature) variations. VT variations are environmental or operation-dependent parametric disturbances. Power constraints (in normal and test mode) are critical, especially for high-performance digital systems. Both dynamic and leakage power induce variable (in time and space) thermal maps across the chip. PVT variations lead to timing variations. These should be accommodated without losing performance. Dynamic, on-line time management becomes necessary. The purpose of this paper is to present a VT-aware time management methodology which leads to improved PVT tolerance, without compromising performance or testability. First, the methodology is presented, highlighting its characteristics and limitations. Its underlying principle is to introduce additional tolerance to VT variations, by time borrowing, dynamically controlling the time of the clock edge trigger driving specific memory cells (referred to as critical memory cells, CME). VT variations are locally sensed, and dynamic delay insertion in the clock signal driving CME is performed, using Dynamic Delay Buffer (DDB) cells. Then, methodology automation, using the proprietary DyDA tool, is explained. The methodology is proved to be efficient, even in the presence of process variations. Finally, it is shown that VT tolerance insertion does not necessarily reduce delay fault detection, as multi-VDD or multi-frequency self-test can be used to recover detection capability.
Conference Paper
Digital circuits operating in the subthreshold region provide the minimum energy solution for applications with strict energy constraints. This paper examines the effect of sizing on energy for subthreshold circuits. We show that minimum sized devices are theoretically optimal for reducing energy. A fabricated 0.18 μm test chip is used to compare normal sizing and sizing for minimum VDD. Measurements show that existing standard cell libraries offer a good solution for minimizing energy in subthreshold circuits.
Article
A subthreshold current reduction logic, the dual-VT self-timed (DVST) logic, is developed for possible application to multigigabit synchronous DRAM. Minimizing subthreshold current is a critical problem in low-voltage CMOS logic. DVST logic has potential advantages over conventional dual-VT logic in terms of circuit delay, subthreshold current, operating voltage, and area consumption. A detailed comparison of the conventional logic and the DVST logic is carried out by SPICE simulation. Methods are developed for determining the threshold voltages of low- and high-VT MOS transistors, for optimizing the widths of the MOS transistors in the circuit, and for determining the delay time of the resetting signal. Examples of basic logic blocks and inverter chains are illustrated with their simulation results. The DVST logic circuit, in which the subthreshold leakage current path is blocked by a large high-VT MOS transistor, can reduce the subthreshold current to the same level as high-VT logic. It can operate two times faster than the conventional dual-VT logic at a 1.0-V supply voltage by removing the limitation in γ of the dual-VT logic. In the voltage range of 0.8-1.5 V it operates at even higher speed than the low-VT logic, and only below a 0.7-V supply voltage is it exceeded by the low-VT logic. Its application to synchronous DRAM, especially in the wave pipeline architecture of the data path, is described.
Article
Analog circuits based on the subthreshold operation of CMOS devices are very attractive for ultralow-power, high-gain, and moderate-frequency applications. In this paper, the analog performance of 100 nm dual-material gate (DMG) CMOS devices in the subthreshold regime of operation is reported for the first time. The analog performance parameters, namely drain current (Id), transconductance (gm), transconductance generation factor (gm/Id), Early voltage (VA), output resistance (Ro) and intrinsic gain, are systematically investigated for both the DMG n-MOS and p-MOS devices with the help of extensive device simulations. The effects of different capacitances on the unity-gain frequency are also studied. The DMG CMOS devices are found to have significantly better performance compared to their single-material gate (SMG) counterparts. More than 70% improvement in voltage gain is observed for the CMOS amplifiers when dual-material gates, instead of single-material gates, are used in both the n- and p-channel devices.
Article
Subthreshold circuit design is promising for future ultralow-energy sensor applications as well as highly parallel high-performance processing. Device scaling has the potential to increase speed in addition to decreasing both energy and cost in subthreshold circuits. However, no study has yet considered whether device scaling to 45 nm and beyond will be beneficial for subthreshold logic. We investigate the implications of device scaling on subthreshold logic and SRAM and find that the slow scaling of gate-oxide thickness leads to a 60% reduction in Ion/Ioff between the 90- and 32-nm device generations. We highlight the effects of this device degradation on noise margins, delay, and energy. We subsequently propose an alternative scaling strategy and demonstrate significant improvements in noise margins, delay, and energy in sub-Vth circuits. Using both optimized and unoptimized subthreshold device models, we explore the robustness of scaled subthreshold SRAM. We use a simple variability model and find that even small memories become unstable at advanced technology nodes. However, the simple device optimizations suggested in this paper can be used to improve nominal read noise margins by 64% at the 32-nm node.
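The Ion/Ioff degradation argument can be checked with a one-line estimate: in subthreshold, the on/off ratio across VDD volts of gate drive is roughly 10^(VDD/S), where S is the subthreshold swing in volts per decade, so any worsening of the swing at scaled nodes erodes the ratio exponentially. The swing values below are illustrative assumptions, not the paper's measured data.

```python
# Back-of-envelope check of the subthreshold Ion/Ioff argument:
# Ion/Ioff ~ 10^(VDD/S), with S the subthreshold swing in V/decade.
# Swing values are illustrative assumptions for two technology generations.

def on_off_ratio(vdd, swing):
    """Ion/Ioff for a device swept across `vdd` volts of gate drive."""
    return 10 ** (vdd / swing)

r_90nm = on_off_ratio(0.30, 0.085)   # assumed 85 mV/dec swing at 90 nm
r_32nm = on_off_ratio(0.30, 0.105)   # assumed degraded 105 mV/dec swing at 32 nm
loss = 1 - r_32nm / r_90nm           # fractional Ion/Ioff reduction
```

Even a 20 mV/dec degradation in swing cuts the on/off ratio by most of an order of magnitude at a 0.3 V supply, which is the mechanism behind the noise-margin and delay penalties the abstract reports.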
Article
This paper presents the comparative study on the device design method of the subthreshold slope and the threshold voltage control in fully-depleted silicon-on-insulator MOSFETs under sub-100-nm regime. As for the threshold voltage adjustment method, the combination of the back gate bias and the gate work function controls is found to provide the superior short channel effects, the suppression of the threshold voltage fluctuation due to the SOI thickness variation, and the current drive improvement. As for the subthreshold slope, the importance and the necessity of buried oxide engineering are pointed out from the viewpoint of both the substrate capacitance and short-channel effects. It is shown, consequently, that the optimization of the thickness and the permittivity of buried oxides have a significant impact on the control of the subthreshold slope under sub-100-nm regime. When the gate length is less than 100 nm, the subthreshold slope has a minimum value at the buried oxide thickness of around 40 nm, irrespective of the SOI thickness. It is also shown that the reduction in the permittivity of the buried oxides under a constant buried oxide capacitance improves the subthreshold slope.
Article
In this paper, we propose MOSFETs that are suitable for subthreshold digital circuit operation. A MOSFET subthreshold circuit uses the subthreshold leakage current as the operating current to achieve ultralow power consumption when speed is not of utmost importance. We derive the theoretical limits of delay and energy consumption in MOSFET subthreshold circuits, and show that devices with an ideal subthreshold slope are optimal for subthreshold operation due to their smaller gate capacitance as well as their higher current. The analysis suggests that a double-gate (DG) MOSFET is promising for subthreshold operation due to its near-ideal subthreshold slope. The results of our investigation into the optimal device characteristics for DG-MOSFET subthreshold operation show that devices with longer channel length (compared to the minimum gate length) can be used for robust subthreshold operation without any loss of performance. In addition, it is shown that the source and drain structure of the DG-MOSFET can be simplified for subthreshold operation, since the source and drain need not be raised to reduce parasitic resistance.
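The "near-ideal subthreshold slope" argument rests on the standard relation S = n·(kT/q)·ln(10): a body factor n of 1 (which a double-gate structure approaches) gives about 60 mV/decade at room temperature, while bulk devices sit well above it. A minimal sketch using standard physical constants:

```python
import math

# Sketch: subthreshold swing S = n * (kT/q) * ln(10). An ideal device
# (body factor n = 1) reaches ~60 mV/decade at 300 K, which is what
# makes near-ideal-slope DG-MOSFETs attractive for subthreshold logic.

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def subthreshold_swing_mv(n: float, temp_k: float = 300.0) -> float:
    """Subthreshold swing in mV/decade for body factor n."""
    return n * (K_B * temp_k / Q_E) * math.log(10) * 1000.0

print(f"ideal (n=1.0):   {subthreshold_swing_mv(1.0):.1f} mV/dec")
print(f"bulk-like (n=1.5): {subthreshold_swing_mv(1.5):.1f} mV/dec")
```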
Article
This paper examines energy minimization for circuits operating in the subthreshold region. Subthreshold operation is emerging as an energy-saving approach to many energy-constrained applications where processor speed is less important. In this paper, we solve equations for total energy to provide an analytical solution for the optimum VDD and VT to minimize energy for a given frequency in subthreshold operation. We show the dependence of the optimum VDD for a given technology on design characteristics and operating conditions. This paper also examines the effect of sizing on energy consumption for subthreshold circuits. We show that minimum sized devices are theoretically optimal for reducing energy. A fabricated 0.18-μm test chip is used to compare normal sizing and sizing to minimize operational VDD and to verify the energy models. Measurements show that existing standard cell libraries offer a good solution for minimizing energy in subthreshold circuits.
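The minimum-energy point described above arises because dynamic energy (C·VDD²) falls with supply voltage while subthreshold delay grows exponentially, so leakage integrated over a lengthening cycle eventually dominates. A numeric sketch of that trade-off; every device parameter here is an illustrative placeholder, not the paper's fitted 0.18-μm values:

```python
import math

# Sketch of the energy-minimization argument: per-operation energy is
# dynamic (C*VDD^2) plus leakage (I_leak * VDD * cycle time). Because
# subthreshold delay grows exponentially as VDD drops, total energy
# has an interior minimum. All parameters are assumed/illustrative.

VT = 0.35            # V, threshold voltage (assumed)
N_VT = 1.5 * 0.026   # n * thermal voltage at room temperature, V
I0 = 1e-6            # A, drain current at Vgs = VT (assumed)
C = 1e-12            # F, switched capacitance per operation (assumed)
LDP = 20             # logic depth: leakage integrates over the cycle
W_RATIO = 5          # total width / switching width: idle gates leak too

def delay(vdd: float) -> float:
    """Subthreshold gate delay: C*V / Ion, Ion exponential in Vgs."""
    ion = I0 * math.exp((vdd - VT) / N_VT)
    return C * vdd / ion

def energy(vdd: float) -> float:
    """Per-operation energy: dynamic CV^2 plus leakage over one cycle."""
    ileak = I0 * math.exp(-VT / N_VT)  # off current at Vgs = 0
    return C * vdd**2 + W_RATIO * ileak * vdd * LDP * delay(vdd)

vdds = [0.10 + 0.005 * i for i in range(101)]  # sweep 0.10 .. 0.60 V
v_opt = min(vdds, key=energy)
print(f"minimum-energy VDD ~ {v_opt:.3f} V")
```

With these placeholder numbers the sweep finds an optimum well below the threshold voltage, consistent with the paper's point that the optimum VDD depends on technology and design characteristics rather than being fixed.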
Article
In this work, a new low-voltage low-power CMOS voltage reference independent of temperature is presented. It is based on subthreshold MOSFETs and on compensating a PTAT-based variable with the gate-source voltage of a subthreshold MOSFET. The circuit, designed with a standard 1.2-μm CMOS technology, exhibits an average voltage of about 295 mV with an average temperature coefficient of 119 ppm/°C in the range -25 to +125°C. A brief study of gate-source voltage behavior with respect to temperature in subthreshold MOSFETs is also reported.
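A temperature coefficient quoted in ppm/°C can be translated into an absolute output drift over the stated range, which makes the figures above easier to interpret. A minimal sketch using the box-method interpretation of TC and the numbers quoted in the abstract (295 mV average output, 119 ppm/°C, -25 to +125 °C):

```python
# Sketch: converting a box-method temperature coefficient (ppm/degC)
# into worst-case absolute output drift over the stated range.
# Uses the figures quoted in the abstract; the box-method
# interpretation of the TC figure is an assumption.

def drift_mv(v_avg_mv: float, tc_ppm: float,
             t_min: float, t_max: float) -> float:
    """Output drift (mV) implied by a box-method TC figure."""
    return v_avg_mv * tc_ppm * 1e-6 * (t_max - t_min)

print(f"total drift ~ {drift_mv(295.0, 119.0, -25.0, 125.0):.2f} mV")
```

This works out to roughly 5 mV of drift across the full 150 °C span, i.e., under 2% of the nominal 295 mV output.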
Article
This paper investigates the effect of lowering the supply and threshold voltages on the energy efficiency of CMOS circuits. Using a first-order model of the energy and delay of a CMOS circuit, we show that lowering the supply and threshold voltage is generally advantageous, especially when the transistors are velocity saturated and the nodes have a high activity factor. In fact, for modern submicron technologies, this simple analysis suggests optimal energy efficiency at supply voltages under 0.5 V. Other process and circuit parameters have almost no effect on this optimal operating point. If there is some uncertainty in the value of the threshold or supply voltage, however, the power advantage of this very low voltage operation diminishes. Therefore, unless active feedback is used to control the uncertainty, in the future the supply and threshold voltage will not decrease drastically, but rather will continue to scale down to maintain constant electric fields.
Article
This paper discusses the motivation, opportunities, and problems associated with implementing digital logic at very low voltages, including the challenge of making use of the available real estate in 3D multichip modules, the energy requirements of very large neural networks, energy optimization metrics and their impact on system design, modeling problems, circuit design constraints, possible fabrication process modifications to improve performance, and barriers to practical implementation.
Detecting Soft Errors in Stencil-Based Computations
  • V C Sharma
  • G Gopalakrishnan
  • G Bronevetsky
Embedded Integrated Circuit Aging Sensor System
  • C R Gauthier
  • P R Trivedi
  • G S Yee