Figure 4 - uploaded by Fajingbesi Fawwaz Eniola
Source publication
In modern computing, multitasking is one of the most valuable capabilities. An un-pipelined CPU, following the basic fetch-execute cycle, processes instructions strictly one after another, so tasks take longer to complete. With a pipelined computer architecture, substantial improvements in size and speed become achievable. This work investigates the pos...
Similar publications
The reciprocal square root is an important computation for which many very sophisticated algorithms exist (see, for example, \cite{863046,863031} and the references therein). In this paper we develop a simple differential compensation (much like those developed in \cite{borges}) that can be used to improve the accuracy of a naive calculation. The app...
Citations
... Currently, VLSI digital system design is overburdened with many complex features. Multitasking and parallelism make systems slower and more power-hungry, and to meet customer requirements designers have to compromise on the critical factors [1]. ...
Pipelining is a technique that exploits parallelism among the instructions in a sequential instruction stream to increase throughput and lessen the total time to complete the work. The major objective of this architecture is a low-power, high-performance structure that fulfils all the requirements of the design. Critical factors such as power, frequency, area, and propagation delay are analysed on a Spartan 3E XC3E 1600e device with the Xilinx tool. In this paper, a 32-bit MIPS RISC processor is implemented with 6-stage pipelining to optimize the critical performance factors. The fundamental functional blocks of the processor include input/output blocks, configurable logic blocks, block RAM, and a digital clock manager, and each block can connect to multiple sources for routing. Auxiliary units further enhance the performance of the processor. A comparative study evaluates the designed model in terms of area, power, and frequency, and MATLAB 2D/3D graphs represent the relationships among the pipeline's parameters. The pipeline model consumes very little power (0.129 W), with a short path delay (11.180 ns) and low LUT utilization (421), and achieves a higher frequency (285.583 MHz), giving better results than other models.
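The throughput gain that pipelining provides can be illustrated with a simple cycle-count model (a sketch only; the 6-stage depth matches the abstract above, but the instruction count and the no-stall assumption are illustrative, not results from the paper):

```python
def unpipelined_cycles(n_instr, stages):
    # Without pipelining, each instruction occupies every stage
    # before the next instruction can begin.
    return n_instr * stages

def pipelined_cycles(n_instr, stages):
    # With pipelining, the pipeline fills once, then retires one
    # instruction per cycle (assuming no hazard stalls).
    return stages + (n_instr - 1)

n, s = 100, 6  # e.g. 100 instructions on a 6-stage pipeline
speedup = unpipelined_cycles(n, s) / pipelined_cycles(n, s)
print(f"speedup = {speedup:.2f}x")  # prints "speedup = 5.71x"
```

As the instruction count grows, the speedup approaches the stage count (here 6x), which is why deeper pipelines raise throughput even though each individual instruction still takes at least as long.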
RISC-V architecture is gaining importance in the semiconductor industry and academia. With the availability of an open instruction set, custom processor designs become possible, but the RTL needs extensive verification. Simulation-based methods are widespread, but exhaustive test generation is required. The paper reports the design and SystemVerilog verification of a five-stage RISC-V processor. The Mentor Questa simulator is used to verify the design; the code coverage reported is 80%.
The main aim is to implement a 128-bit RISC processor on FPGA using pipelining techniques, based on the von Neumann architecture. With the increasing use of FPGAs in embedded applications, there is a need to support processor designs on FPGA. The proposed processor is a soft processor with a simple instruction set that can be modified for each use, thanks to the reconfigurable nature of FPGAs. Its prominent feature is pipelining, which improves performance considerably: one instruction is executed per clock cycle. Given the rapid innovation in processor development and the growing popularity of open-source projects such as the RISC-V ISA (instruction set architecture), there is also a need to quickly understand and upgrade these designs, which can easily be done on FPGA with a trade-off in speed and size compared to commercial ASIC processors; this motivates our study of these systems. In this paper, a 128-bit pipelined RISC processor is implemented on FPGA.
Keywords: RISC (reduced instruction set computer); FPGA (field-programmable gate array); ISA (instruction set architecture); ASIC (application-specific integrated circuit)
In cloud computing, task scheduling is an open research challenge addressed by various algorithms such as particle swarm optimization (PSO), the firefly algorithm, ant colony optimization (ACO), and the genetic algorithm (GA). PSO is inspired by the movement of birds, ACO by the behaviour of ants, and GA by the process of natural evolution. This paper presents a hybrid of PSO, ACO, and GA for task scheduling on the virtual machines of a cloud, called the ant particle swarm genetic algorithm (APSGA). Here, GA and PSO iterate to select tasks on the basis of fitness value, and ACO then distributes the tasks to specific virtual machines. The approach improves parameters such as CPU utilization, makespan, and execution time: the proposed algorithm reduces makespan by 27.1%, 19.45%, and 21.24% compared to PSO, ACO, and GA, respectively, while also improving CPU utilization and execution time.
RISC-V is a free and open instruction set architecture (ISA) based on reduced instruction set computer (RISC) principles. The RISC-V ISA enables a new phase in the field of processors through an open standard association. RISC-V addressing comes in 32-bit and 64-bit variants; the essential base is the 32-bit integer instruction set, RV32I, which efficiently supports operating system environments and also suits embedded system applications. In this paper, a survey is carried out of 5-stage in-order pipeline implementations and of ways to overcome the pipelining hazards (structural, data, and control hazards) on RISC-V processors. Being open-source and free, RISC-V is adopted in much commercial and academic research and many projects.
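The most common of the hazards named above, the read-after-write (RAW) data hazard, can be detected with a simple check on adjacent instructions (a sketch under illustrative assumptions; the instruction encoding as (destination, sources) tuples and the register names are invented for the example):

```python
def raw_hazard(producer, consumer):
    # A RAW hazard exists when the consumer reads a register that
    # the producer has not yet written back; a 5-stage pipeline
    # resolves it by forwarding the result or by stalling.
    dest, _srcs = producer
    _dest, srcs = consumer
    return dest in srcs

add = ("x3", ("x1", "x2"))   # add x3, x1, x2
sub = ("x5", ("x3", "x4"))   # sub x5, x3, x4  (reads x3 just written)
print(raw_hazard(add, sub))  # prints True
```

A real hazard unit also tracks pipeline stage distance (forwarding from EX/MEM vs. MEM/WB) and the special case of loads, where even forwarding cannot avoid a one-cycle stall.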
Natural language interfaces are gaining popularity as an alternative interface for non-technical users. Natural language interface to database (NLIDB) systems have recently attracted considerable interest: they accept a user's query in natural language (NL), convert it to an SQL query, and execute that SQL query to extract the resultant data from the database. This Text-to-SQL task is a long-standing open problem, and the standard approach towards solving it is to implement a sequence-to-sequence model. In this paper, I recast the Text-to-SQL task as a machine translation problem using sequence-to-sequence-style neural network models. To this end, I introduce a parallel corpus that I developed from the WikiSQL dataset. Although much work has been done in this area with sequence-to-sequence-style models, most state-of-the-art models use semantic parsing or a variation of it, and none of them exceeds 90% accuracy. In contrast, my model is based on a very simple architecture: the open-source neural machine translation toolkit OpenNMT, which implements a standard SEQ2SEQ model. Although its performance on the test and development datasets does not surpass the said models, its training accuracy is higher than that of any existing NLIDB system, to the best of my knowledge.