Conference Paper

FPGA Implementation of a Data-Driven Stochastic Biochemical Simulator with the Next Reaction Method

Keio Univ., Yokohama
Conference: Field Programmable Logic and Applications, 2007. FPL 2007. International Conference on
Source: IEEE Xplore


This paper introduces a scalable FPGA implementation of a stochastic simulation algorithm (SSA) called the Next Reaction Method. Previous hardware implementations of SSAs achieved high throughput on reconfigurable devices such as FPGAs, but lacked scalability. The design presented here can accommodate growing biochemical models and exploit the increasing capacity of FPGAs. An interconnection network between the arithmetic circuits and multiple simulation circuits enables data-driven, multi-threaded simulation. A speedup of approximately 8 times was obtained compared to execution on a 2.80 GHz Xeon.
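For readers unfamiliar with the algorithm being accelerated, a minimal software sketch of the Next Reaction Method (Gibson-Bruck) is shown below. The function name, data layout, and the toy A → B → C model are illustrative assumptions, not taken from the paper; the hardware design parallelizes this loop, while the sketch runs it serially.

```python
import math
import random

def next_reaction_method(state, reactions, t_end, seed=0):
    """Simulate with the Next Reaction Method (Gibson-Bruck), serial sketch.

    state     -- dict mapping species name to molecule count (mutated in place)
    reactions -- list of (propensity_fn, stoichiometry) pairs, where
                 stoichiometry maps species name to its change on firing
    Returns the list of (time, reaction_index) firing events.
    """
    rng = random.Random(seed)
    t = 0.0
    a = [prop(state) for prop, _ in reactions]
    # one putative absolute firing time per reaction
    tau = [rng.expovariate(ai) if ai > 0 else math.inf for ai in a]
    events = []
    while True:
        mu = min(range(len(reactions)), key=tau.__getitem__)
        if tau[mu] >= t_end:
            break
        t = tau[mu]
        for species, delta in reactions[mu][1].items():
            state[species] += delta
        events.append((t, mu))
        for i, (prop, _) in enumerate(reactions):
            a_new = prop(state)
            if i == mu:
                tau[i] = t + rng.expovariate(a_new) if a_new > 0 else math.inf
            elif a_new != a[i]:
                if a_new == 0:
                    tau[i] = math.inf
                elif a[i] == 0:
                    tau[i] = t + rng.expovariate(a_new)
                else:
                    # reuse the old random variate by rescaling, not redrawing
                    tau[i] = t + (a[i] / a_new) * (tau[i] - t)
            a[i] = a_new
    return events

# Toy model (hypothetical): A -> B -> C with mass-action propensities
state = {"A": 30, "B": 0, "C": 0}
reactions = [
    (lambda s: 1.0 * s["A"], {"A": -1, "B": +1}),
    (lambda s: 0.5 * s["B"], {"B": -1, "C": +1}),
]
events = next_reaction_method(state, reactions, math.inf)
```

The key property exploited by hardware implementations is visible in the inner loop: after a firing, unaffected reactions keep their stored times, and affected ones are rescaled rather than redrawn, so per-step work is proportional to the dependency fan-out rather than the model size.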


Available from:
  • Source
    • "The network is a hierarchical structure using specialized routers called "concentrators" and "distributors" as shown in Fig. 1. Compared to the simple multiplexer [4], the concentrator has FIFOs to prevent latency in packet transfer. This gives large flexibility to the interconnection network when transferring data packets of variable length via a common data path."
    ABSTRACT: Stochastic simulation of biochemical reaction networks is widely used by life scientists to capture stochastic behavior in cellular processes. The stochastic algorithm has loop- and thread-level parallelism, making it well suited to application-specific hardware for high performance at low cost. We have implemented and evaluated an FPGA-based stochastic simulator guided by theoretical analysis of the algorithm. This paper introduces an improved architecture for accelerating a stochastic simulation algorithm called the Next Reaction Method. The new architecture scales to FPGAs of various sizes. With a mid-range FPGA, 5.38 times higher throughput was obtained compared to software running on a Core 2 Quad Q6600 at 2.40 GHz.
    Full-text · Conference Paper · Oct 2008
  • Source
    • "However, the increasing size and floating-point capabilities of FPGAs are allowing larger event-driven Monte-Carlo simulations, using the exponential distribution to model waiting times. One example of this is the modelling of biochemical systems, where chemical reactions within cells are simulated to estimate changes in concentration over time [1]. Another example is in finance, where the time between loan defaults is simulated, allowing the expected value of a portfolio of loans to be estimated at future points in time [2]. "
    ABSTRACT: The exponential distribution is a key distribution in many event-driven Monte-Carlo simulations, where it is used to model the time between random events in the system. This paper shows that each bit of a fixed-point exponential random variate is an independent Bernoulli variate, allowing the bits to be generated in parallel. This parallelism is of little interest in software, but is particularly well suited to FPGA generators, where huge numbers of independent uniform bits can be cheaply generated per cycle. Two generation architectures are developed using this approach, one using only logic elements to generate individual bits, and another using block-RAMs to group multiple bits together. The two methods are evaluated at three different quality-resource trade-offs, and when compared to existing methods have both higher performance and better resource utilisation. The method is particularly useful for very high performance applications, as extremely high-quality 36-bit exponential variates can be generated at 600 MHz in the Virtex-4 architecture, using just 880 slices and no block-RAMs or embedded DSP blocks.
    Preview · Conference Paper · Oct 2008
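The bit-level parallelism described in the abstract above can be sketched in software. For X ~ Exp(λ), the fixed-point bit at weight w = 2^j equals 1 with probability 1/(1 + e^(λw)), and memorylessness makes the bits mutually independent. The serial function below is an illustrative sketch under assumed bit widths and names, not the paper's FPGA generator; on hardware, all bits would be drawn in the same cycle from independent uniform bits.

```python
import math
import random

def exp_variate_bitwise(lam, rng, int_bits=8, frac_bits=12):
    """Sample Exp(lam) by drawing each fixed-point bit independently.

    The bit at weight w = 2**j is a Bernoulli variate with
    P(bit = 1) = 1 / (1 + exp(lam * w)); summing the set bits'
    weights yields a truncated fixed-point exponential variate.
    """
    x = 0.0
    for j in range(-frac_bits, int_bits):
        w = 2.0 ** j
        if rng.random() < 1.0 / (1.0 + math.exp(lam * w)):
            x += w
    return x

rng = random.Random(42)
sample = exp_variate_bitwise(1.0, rng)  # one Exp(1) variate
```

The truncation to `frac_bits` fractional and `int_bits` integer bits introduces a small bias, which the quality-resource trade-offs mentioned in the abstract control by choosing the bit widths.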