The concept of elementary (flux) modes provides a rigorous description of pathways in metabolic networks. Finding the elementary modes with the minimum number of reactions (shortest elementary modes) is an interesting problem with potential uses in various applications. However, this problem is NP-hard. This work is an initial step towards analyzing the problem from the viewpoint of parameterized computation. Taking the number of reactions in an elementary mode as the natural parameter, we prove that finding the shortest elementary modes in metabolic networks is W[1]-hard.
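The steady-state condition behind flux modes is easy to make concrete. The following is a minimal sketch with an invented two-metabolite toy network (not taken from the paper); a flux vector is a candidate mode when it balances every internal metabolite and respects irreversibility:

```python
import numpy as np

# Hypothetical toy network: metabolites A, B; irreversible reactions
# R1: -> A,   R2: A -> B,   R3: B -> .
# Stoichiometric matrix S (rows = metabolites, columns = reactions).
S = np.array([
    [1, -1,  0],   # A
    [0,  1, -1],   # B
])

def is_steady_state_mode(S, v, tol=1e-9):
    """v is a candidate flux mode if S @ v = 0 (steady state) and all
    irreversible fluxes (here: every reaction) are non-negative."""
    v = np.asarray(v, dtype=float)
    return bool(np.allclose(S @ v, 0, atol=tol) and np.all(v >= -tol))

print(is_steady_state_mode(S, [1, 1, 1]))   # True: the single pathway
print(is_steady_state_mode(S, [1, 0, 1]))   # False: metabolite A accumulates
```

An elementary mode is such a vector whose support is additionally minimal, which is the property that makes the shortest-mode question combinatorially hard.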


The paper surveys parameterized algorithms and complexity results for computational tasks on biopolymer sequences, including the problems of longest common subsequence, shortest common supersequence, pairwise sequence alignment, multiple sequence alignment, structure–sequence alignment and structure–structure alignment. Algorithmic techniques, built on the structural-unit level as well as on the residue level, are discussed.
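The first problem in that list, longest common subsequence, admits a textbook dynamic program; a compact sketch in its plain, unparameterized form (the example strings are invented, not from the survey):

```python
# Row-by-row LCS dynamic program using O(|b|) memory:
# cur[j] = LCS length of the prefix of `a` processed so far and b[:j].
def lcs_length(a, b):
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

print(lcs_length("ACCGGTC", "AGGTC"))   # 5 (e.g. "AGGTC")
```

The parameterized algorithms the survey covers improve on this quadratic baseline under parameters such as alignment distance or structure size.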

Most interesting real-world optimization problems are very challenging from a computational point of view. In fact, quite often, finding an optimal or even a near-optimal solution to a large-scale optimization problem may require computational resources far beyond what is practically available. There is a substantial body of literature exploring the computational properties of optimization problems by considering how the computational demands of a solution method grow with the size of the problem instance to be solved (see e.g. Chapter 11 or Aho et al., 1979). A key distinction is made between problems whose required computational resources grow polynomially with problem size and those for which the required resources grow exponentially. Problems in the former category are called efficiently solvable, whereas problems in the latter category are deemed intractable, because the exponential growth in required computational resources renders all but the smallest instances of such problems unsolvable.
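The gap the passage describes can be made tangible with a throwaway step-count comparison (the cubic and exponential cost functions are illustrative stand-ins, not from the text):

```python
# Compare how many basic steps a cubic-time and an exponential-time
# method would need as the instance size n grows.
def poly_steps(n):
    return n ** 3

def exp_steps(n):
    return 2 ** n

for n in (10, 30, 60):
    print(n, poly_steps(n), exp_steps(n))
```

At n = 60 the exponential count exceeds 10^18 steps, i.e. decades of work even at a billion steps per second, while the cubic count remains trivially small.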

Microbial cells operate under governing constraints that limit their range of possible functions. With the availability of annotated genome sequences, it has become possible to reconstruct genome-scale biochemical reaction networks for microorganisms. The imposition of governing constraints on a reconstructed biochemical network leads to the definition of achievable cellular functions. In recent years, a substantial and growing toolbox of computational analysis methods has been developed to study the characteristics and capabilities of microorganisms using a constraint-based reconstruction and analysis (COBRA) approach. This approach provides a biochemically and genetically consistent framework for the generation of hypotheses and the testing of functions of microbial cells.
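A workhorse of the COBRA toolbox sketched above is flux balance analysis, which the abstract does not spell out: maximize a target flux subject to steady state and capacity bounds. A minimal sketch on an invented three-reaction network (the network and bounds are hypothetical), using an off-the-shelf LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1: -> A,  R2: A -> B,  R3: B -> (all irreversible).
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
c = np.array([0, 0, -1])               # maximize v3  ->  minimize -v3
bounds = [(0, 10), (0, 10), (0, 10)]   # irreversibility + capacity caps

# Steady state S v = 0 enters as the equality constraint of the LP.
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux distribution, here [10, 10, 10]
```

The linear program is what makes the constraint-based approach tractable at genome scale: no kinetic parameters are needed, only stoichiometry and bounds.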

The concept of elementary (flux) modes provides a rigorous description of pathways in metabolic networks and has proved valuable in a number of applications. However, the computation of elementary modes is a hard computational task that has given rise to several variants of algorithms in recent years. This work brings substantial progress on this issue. The authors start with a brief review of results obtained in previous work regarding (a) a unified framework for elementary-mode computation, (b) network compression and redundancy removal and (c) the binary approach, by which elementary modes are determined as binary patterns, reducing the memory demand drastically without loss of speed. The authors then address further issues. First, a new way to perform the elementarity tests required during the computation of elementary modes is proposed, which empirically improves the computation time in large networks significantly. Second, a method to compute only those elementary modes in which certain reactions are involved is derived. Relying on this method, a promising approach for computing elementary modes in a completely distributed manner, by decomposing the full problem into arbitrarily many sub-tasks, is presented. The new methods have been implemented in the freely available software tools FluxAnalyzer and Metatool, and benchmark tests in realistic networks emphasise the potential of the proposed algorithms.

Contents: 1. Foundations. 1.1 Keep the Parameter Fixed. 1.2 Preliminaries and Agreements. 1.3 Parameterized Complexity: a Brief Overview (1.3.1 Basic Theory; 1.3.2 Interpreting Fixed-Parameter Tractability). 1.4 Vertex Cover, an Illustrative Example (1.4.1 Parameterize; 1.4.2 Specialize; 1.4.3 Generalize; 1.4.4 Count or Enumerate).
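The Vertex Cover example such introductions use is typically the bounded search tree: some endpoint of any uncovered edge must join the cover, so branch on both choices, giving O(2^k) branches. A minimal sketch on an invented toy graph:

```python
# Decide whether the edge list has a vertex cover of size <= k.
# Classic FPT bounded-search-tree algorithm: O(2^k * m) time.
def has_vertex_cover(edges, k):
    edges = list(edges)        # remaining uncovered edges
    if not edges:
        return True            # everything covered
    if k == 0:
        return False           # edges remain but budget exhausted
    u, w = edges[0]
    for chosen in (u, w):      # one endpoint must be in any cover
        rest = [e for e in edges if chosen not in e]
        if has_vertex_cover(rest, k - 1):
            return True
    return False

# Triangle plus a pendant edge: needs 2 vertices, e.g. {1, 3}.
g = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(has_vertex_cover(g, 2))   # True
print(has_vertex_cover(g, 1))   # False
```

The running time depends exponentially on the parameter k only, not on the graph size, which is exactly the fixed-parameter tractability the chapter illustrates.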

This is not a book about algorithms. Certainly, it is full of algorithms, but that’s not what this book is about. This book is about possibilities. Its purpose is to present you not only with the prerequisite mandatory knowledge of the available problem-solving techniques, but more importantly to expand your ability to frame new problems and to think creatively — in essence, to solve the problem of how to solve problems, a talent that has become a lost art. Instead of devoting the necessary time and critical thinking required to frame a problem, to adjust our representation of the pieces of the puzzle, we have become complacent and simply reach for the most convenient subroutine, a magic pill to cure our ills. The trouble with magic is that, empirically, it has a very low success rate, and often relies on external devices such as mirrors and smoke. As with magic, most of the seemingly successful applications of problem solving in the real world are illusory, mere specters of what could have been achieved.

This paper presents an O(1.2738^k + kn)-time polynomial-space algorithm for Vertex Cover, improving the previous O(1.286^k + kn)-time polynomial-space upper bound by Chen, Kanj, and Jia. Most previous algorithms rely on exhaustive case-by-case branching rules and an underlying conservative worst-case-scenario assumption. The contribution of the paper lies in the simplicity, uniformity, and obliviousness of the algorithm presented. Several new techniques, as well as generalizations of previous techniques, are introduced, including general folding, struction, tuples, and local amortized analysis. The algorithm also improves the O(1.2745^k k^4 + kn)-time exponential-space upper bound for the problem by Chandran and Grandoni.
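Of the techniques listed, folding is the easiest to sketch. The following is a minimal formulation of the classic degree-2 folding rule (not the paper's generalized "general folding", and the helper name `fold` is invented): a vertex v with exactly two non-adjacent neighbors u and w can be merged with them into one new vertex, and the optimal cover size drops by exactly one.

```python
# adj: dict mapping vertex -> set of neighbors (mutated in place).
def fold(adj, v):
    u, w = sorted(adj[v])
    assert w not in adj[u], "folding requires non-adjacent neighbors"
    merged = (adj[u] | adj[w]) - {u, v, w}   # outside neighborhood
    vnew = max(adj) + 1                      # fresh vertex label
    for x in (u, v, w):                      # delete u, v, w
        for y in adj.pop(x):
            adj.get(y, set()).discard(x)
    adj[vnew] = merged                       # insert folded vertex
    for y in merged:
        adj[y].add(vnew)
    return vnew

# Path 1-2-3-4-5: folding vertex 3 merges {2, 3, 4} into a new vertex
# adjacent to 1 and 5; the optimum cover size correctly drops from 2 to 1.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
fold(path, 3)
print(path)   # {1: {6}, 5: {6}, 6: {1, 5}}
```

Reductions like this shrink the instance without branching, which is how the paper avoids the long case analyses of earlier algorithms.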

Constraint-based approaches recently brought new insight into our understanding of metabolism. By making very simple assumptions such as that the system is at steady-state and some reactions are irreversible, and without requiring kinetic parameters, general properties of the system can be derived. A central concept in this methodology is the notion of an elementary mode (EM for short) which represents a minimal functional subsystem. The computation of EMs still forms a limiting step in metabolic studies and several algorithms have been proposed to address this problem leading to increasingly faster methods. However, although a theoretical upper bound on the number of elementary modes that a network may possess has been established, surprisingly, the complexity of this problem has never been systematically studied.

The prospect of understanding the relationship between the genome and the physiology of an organism is an important incentive to reconstruct metabolic networks. The first steps in the process can be automated and it does not take much effort to obtain an initial metabolic reconstruction from a genome sequence. However, such a reconstruction is certainly not flawless and correction of the many imperfections is laborious. It requires the combined analysis of the available information on protein sequence, phylogeny, gene-context and co-occurrence but is also aided by high-throughput experimental data. Simultaneously, the reconstructed network provides the opportunity to visualize the "omics" data within a relevant biological functional context and thus aids the interpretation of those data.

This chapter surveys the use of fixed-parameter algorithms in phylogenetics. A central computational problem in this field is the construction of a likely phylogeny (genealogical tree) for a set of species based on observed differences in the phenotype, differences in the genotype, or given partial phylogenies. Ideally, one would like to construct so-called perfect phylogenies, which arise from an elementary evolutionary model, but in practice one must often be content with phylogenies whose "distance from perfection" is as small as possible. The computation of phylogenies also has applications in seemingly unrelated areas such as genomic sequencing and finding and understanding genes. The numerous computational problems arising in phylogenetics are often NP-complete, but for many natural parametrizations they can be solved using fixed-parameter algorithms.
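For two-state characters, the classical criterion behind perfect phylogenies is the four-gamete test: the characters admit a perfect phylogeny iff no pair of character columns exhibits all four combinations 00, 01, 10, 11. A sketch with an invented toy character matrix (this states the standard binary-character criterion, not a result specific to the chapter):

```python
from itertools import combinations

def admits_perfect_phylogeny(matrix):
    """matrix: rows = species, columns = binary characters.
    Pairwise four-gamete test over all column pairs."""
    cols = list(zip(*matrix))
    return all(len(set(zip(a, b))) < 4 for a, b in combinations(cols, 2))

species = [
    (0, 0, 1),
    (0, 1, 1),
    (1, 0, 0),
]
print(admits_perfect_phylogeny(species))                         # True
print(admits_perfect_phylogeny([(0, 0), (0, 1), (1, 0), (1, 1)]))  # False
```

When the test fails, the "distance from perfection" mentioned above measures how far the data are from satisfying it, and it is this distance that serves as the parameter in several of the surveyed fixed-parameter algorithms.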

We show that the parameterized problem Perfect Code belongs to W[1]. This result closes an old open question, because it was often conjectured that Perfect Code could be a natural problem having complexity degree intermediate between W[1] and W[2]. This result also shows W[1]-membership of the parameterized problem Weighted Exact CNF Satisfiability.
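For reference, a perfect code in a graph is a vertex set D that dominates every vertex exactly once, counting each vertex as dominating itself. A small checker illustrating the problem definition only (the 6-cycle example is invented; this is not the membership proof):

```python
def is_perfect_code(adj, D):
    """adj: dict vertex -> set of neighbors. D is a perfect code iff
    every closed neighborhood {v} ∪ N(v) meets D in exactly one vertex."""
    D = set(D)
    return all(len(({v} | adj[v]) & D) == 1 for v in adj)

# 6-cycle: a pair of opposite vertices forms a perfect code.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(is_perfect_code(cycle, {0, 3}))   # True
print(is_perfect_code(cycle, {0, 2}))   # False: vertex 1 dominated twice
```

The parameterized question asks for a perfect code of size exactly k, and the "exactly one" constraint is what made its position between W[1] and W[2] unclear for so long.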

J. Chen, I. A. Kanj, G. Xia. Improved parameterized upper bounds for Vertex Cover.

Parameterized Complexity