Information Processing Letters

Published by Elsevier
Online ISSN: 0020-0190
Publications
Conference Paper
A workflow definition that contains errors can cause serious problems for an enterprise, especially when it is involved in mission-critical business processes. Concurrency among workflow processes is one of the major sources of such invalid definitions, so the conflicts that concurrent workflow processes can cause should be considered carefully when those processes are defined. However, it is very difficult to ascertain whether a workflow process is free from conflicts without experimental executions at runtime, which would be tedious and time-consuming work for process designers. Analyzing the conflicts inherent in a concurrent workflow definition prior to runtime would therefore be very helpful to business process designers and other users of workflow management systems. The authors propose a set-based constraint system to analyze possible read-write and write-write conflicts between activities that read and write shared variables in a workflow process definition. The system is composed of two phases. In the first phase, it generates set constraints from a structured workflow definition; in the second, it finds the minimal solution of those set constraints.
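As a toy illustration of the kind of conflict this analysis targets (activity names and read/write sets are hypothetical; this is not the authors' set-based constraint system, which works on the structured workflow definition itself), read-write and write-write conflicts can be reported directly from the shared variables each concurrently runnable activity reads and writes:

    # Sketch: detect read-write and write-write conflicts between concurrent
    # activities from their read/write sets (hypothetical example data).
    def find_conflicts(activities, concurrent_pairs):
        """activities: dict name -> {'read': set, 'write': set}
        concurrent_pairs: iterable of (a, b) names of activities that may overlap."""
        conflicts = []
        for a, b in concurrent_pairs:
            ra, wa = activities[a]['read'], activities[a]['write']
            rb, wb = activities[b]['read'], activities[b]['write']
            for v in (wa & wb):
                conflicts.append(('write-write', a, b, v))
            for v in (ra & wb) | (rb & wa):
                conflicts.append(('read-write', a, b, v))
        return conflicts

    activities = {
        'approve_order': {'read': {'order'}, 'write': {'status'}},
        'cancel_order':  {'read': {'status'}, 'write': {'status', 'order'}},
    }
    print(find_conflicts(activities, [('approve_order', 'cancel_order')]))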
 
Conference Paper
The authors show that high-resolution images can be encoded and decoded efficiently in parallel. They present an algorithm based on the hierarchical multi-level progressive (MLP) method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.
 
Conference Paper
Schnorr and Shamir, and independently Kunde, have shown that sorting N = n^2 inputs into snake-like order on an n × n mesh requires 3n − o(n) steps. Using a less restrictive, more realistic model, the author shows that sorting N = n^2 integers in the range [1…N] can be performed in 2n + o(n) steps on an n × n mesh.
 
Conference Paper
We consider two general precedence-constrained scheduling problems that have wide applicability in the areas of parallel processing, high performance compiling, and digital system synthesis. These problems are intractable, so it is important to be able to compute tight bounds on their solutions. A tight lower bound on makespan scheduling can be obtained by replacing precedence constraints with release and due dates, giving a problem that can be efficiently solved. We demonstrate that recursively applying this approach yields a bound that is provably tighter than other known bounds, and experimentally shown to achieve the optimal value at least 86.5% of the time over a synthetic benchmark. We compute the best known lower bound on weighted completion time scheduling by applying the recent discovery of a new algorithm for solving a related scheduling problem. Experiments show that this bound significantly outperforms the linear programming-based bound. We have therefore demonstrated that combinatorial algorithms can be a valuable alternative to linear programming for computing tight bounds on large scheduling problems.
 
Conference Paper
A multilayer feedforward neural network is proposed to solve sorting problems. The network has O(n^2) neurons and O(n^2) links. The number of layers is fixed regardless of input size. Thus, the computation time of the network is independent of input size, and the sorting network has a time complexity of O(1).
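One standard way such constant-depth, O(n^2)-unit constructions sort is rank-based: compare all pairs in one stage, sum each comparison row to obtain ranks in a second stage, and route elements to their rank positions in a third. The sketch below mirrors that three-stage structure sequentially in plain Python; it is illustrative only and not necessarily the authors' network.

    # Rank-based sorting in three conceptual "layers":
    # 1) all-pairs comparisons, 2) row sums give ranks, 3) place by rank.
    def rank_sort(x):
        n = len(x)
        # Layer 1: comparison matrix, breaking ties by index so ranks are distinct.
        c = [[1 if (x[j], j) < (x[i], i) else 0 for j in range(n)] for i in range(n)]
        # Layer 2: rank of x[i] = number of elements smaller than it.
        ranks = [sum(row) for row in c]
        # Layer 3: route each element to its rank position.
        out = [None] * n
        for i, r in enumerate(ranks):
            out[r] = x[i]
        return out

    print(rank_sort([3, 1, 2, 1]))   # [1, 1, 2, 3]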
 
Conference Paper
An effective way to process subsequence matching in time-series databases was proposed. An R*-tree was constructed on query window points, for effective processing of the window-join. The proposed method accessed each R*-tree page built on data windows exactly once without incurring any index-level false alarms. Therefore, in terms of the number of disk accesses, the method proved to be optimal. Performance evaluation through extensive experiments showed the superiority of the proposed method over the previous one.
 
Conference Paper
We consider strongly connected, directed networks of identical synchronous, finite-state processors with in- and out-degree uniformly bounded by a network constant. Via a straightforward extension of R. Ostrovsky and D. Wilkerson's backwards communication algorithm (1995), we exhibit a protocol which solves the global topology determination problem, the problem of having a root processor map the global topology of a network of unknown size and topology, with running time O(ND), where N is the number of processors and D is the diameter of the network. A simple counting argument suffices to show that the global topology determination problem has time complexity Ω(N log N), which makes the presented protocol asymptotically time-optimal for many large networks.
 
Conference Paper
In this paper, packet loss for the Gilbert model with loss-rate feedback is analyzed. Based on the historical loss-rate feedback, an iterative algorithm is derived to compute the conditional version of P(m, n), the probability of m lost packets within a block of n packets. Simulations validate the analytical results, which are useful for any joint source-channel coding system for which FEC protection with loss-rate feedback is suitable.
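The derivation itself is analytical; as a sanity check, the (unconditional) P(m, n) of a two-state Gilbert model can be estimated by simulation. The sketch below uses assumed transition parameters and starts in the good state; it is not the paper's iterative algorithm.

    import random

    # Monte Carlo estimate of P(m, n) for a two-state Gilbert loss model:
    # packets sent while in the 'bad' state are lost. Parameters are illustrative.
    def estimate_p(m, n, p_gb=0.05, p_bg=0.4, trials=100_000, seed=1):
        """p_gb: P(good -> bad), p_bg: P(bad -> good); chain starts in 'good'."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            state_bad = False
            losses = 0
            for _ in range(n):
                state_bad = (rng.random() < p_gb) if not state_bad else (rng.random() >= p_bg)
                losses += state_bad
            hits += (losses == m)
        return hits / trials

    print(estimate_p(2, 10))  # estimated probability of exactly 2 losses in a block of 10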
 
Conference Paper
An increasing proportion of the effort of skilled programmers is devoted to servicing the legacy of software. The techniques and tools currently in use to tackle the problem take good advantage of the results of past research into programming theory. I suggest that new generations of tools will be based on concepts and principles developed by basic research of the present and by future research directed at currently outstanding challenges. These points are illustrated by examples drawn from my personal experience. They show that academic research and education can contribute to industrial development and production in an atmosphere of mutual respect for their different allegiances and timescales, and in recognition of convergence of their long-term goals.
 
Conference Paper
The existence of an oracle A such that ⊕P^A is not contained in PP^{PH^A} is proved. This separation follows in a straightforward manner from a circuit complexity result, which is also proved: to compute the parity of n inputs, any constant-depth circuit consisting of a single threshold gate on top of ANDs and ORs requires size exponential in n.
 
Conference Paper
Two sequences of items sorted in increasing order are given: a sequence A of size n and a sequence B of size m. It is required to determine, for every item of A, the smallest item of B (if one exists) that is larger than it. The paper presents two parallel algorithms for the problem. The first algorithm requires O(log m + log n) time using n processors on an EREW PRAM. On an EREW PRAM with p (p ≤ min{m, n}) processors, the second algorithm runs in O(log n + n/p) time when m ≤ n, or in O(log m + (n/p) log(2m/n)) time when m > n. The second algorithm is optimal.
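For reference, here is a sequential baseline for the task (a single two-pointer merge over the two sorted sequences, O(n + m) time); the paper's contribution is performing this work optimally in parallel on an EREW PRAM, which this sketch does not attempt.

    # For each a in A (sorted), find the smallest b in B (sorted) with b > a,
    # or None if no such element exists. Simple O(n + m) sequential baseline.
    def successors(A, B):
        res = []
        j = 0
        for a in A:
            while j < len(B) and B[j] <= a:
                j += 1
            res.append(B[j] if j < len(B) else None)
        return res

    print(successors([1, 4, 7], [2, 4, 6]))  # [2, 6, None]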
 
Article
In 1996, Jakobsson, Sako, and Impagliazzo, and independently Chaum, introduced the notion of designated verifier signatures to solve some of the intrinsic problems of undeniable signatures. The generalization of this concept to multi-designated verifiers signatures was formally investigated by Laguillaumie and Vergnaud. Recently, Laguillaumie and Vergnaud proposed the first multi-designated verifiers signature scheme that protects the anonymity of signers without encryption. In this paper, we show that their scheme is insecure against rogue-key attacks.
 
Article
The input of the Edge Multicut problem consists of an undirected graph G and pairs of terminals {s1,t1},…,{sm,tm}; the task is to remove a minimum set of edges such that si and ti are disconnected for every 1⩽i⩽m. The parameterized complexity of the problem, parameterized by the maximum number k of edges that are allowed to be removed, is currently open. The main result of the paper is a parameterized 2-approximation algorithm: in time f(k)⋅n^{O(1)}, we can either find a solution of size 2k or correctly conclude that there is no solution of size k. The proposed algorithm is based on a transformation of the Edge Multicut problem into a variant of the parameterized Max-2SAT problem, where the parameter is related to the number of clauses that are not satisfied. It follows from previous results that the latter problem can be 2-approximated in fixed-parameter time; on the other hand, we show here that it is W[1]-hard. Thus the additional contribution of the present paper is introducing the first natural W[1]-hard problem that is constant-ratio fixed-parameter approximable.
 
Article
We consider the conjectured O(N^{2+ϵ}) time complexity of multiplying any two N×N matrices A and B. Our main result is a deterministic Compressed Sensing (CS) algorithm that both rapidly and accurately computes A⋅B provided that the resulting matrix product is sparse/compressible. As a consequence of our main result we increase the class of matrices A, for any given N×N matrix B, which allows the exact computation of A⋅B to be carried out using the conjectured O(N^{2+ϵ}) operations. Additionally, in the process of developing our matrix multiplication procedure, we present a modified version of Indyk's recently proposed extractor-based CS algorithm [P. Indyk, Explicit constructions for compressed sensing of sparse signals, in: SODA, 2008] which is resilient to noise.
 
Article
We present an algorithm to find a Hamiltonian cycle in a proper interval graph in O(m+n) time, where m is the number of edges and n is the number of vertices in the graph. The algorithm is simpler and shorter than previous algorithms for the problem.
 
Article
In this paper the minmax (regret) versions of some basic polynomially solvable deterministic network problems are discussed. It is shown that if the number of scenarios is unbounded, then the problems under consideration are not approximable within log^{1−ϵ} K for any ϵ>0 unless NP ⊆ DTIME(n^{polylog n}), where K is the number of scenarios.
 
Article
Morpion Solitaire is a pencil-and-paper game for a single player. A move in this game consists of putting a cross at a lattice point and then drawing a line segment that passes through exactly five consecutive crosses. The objective is to make as many moves as possible, starting from a standard initial configuration of crosses. For one of the variants of this game, called 5D, we prove an upper bound of 121 on the number of moves. This is done by introducing line-based analysis, and improves the known upper bound of 138 obtained by potential-based analysis.
 
Article
We present a new implementation of the Kou, Markowsky and Berman algorithm for finding a Steiner tree for a connected, undirected distance graph with a specified subset S of the set of vertices V. The total distance of all edges of this Steiner tree is at most 2(1-1/l) times that of a Steiner minimal tree, where l is the minimum number of leaves in any Steiner minimal tree for the given graph. The algorithm runs in O(|E|+|V|log|V|) time in the worst case, where E is the set of all edges and V the set of all vertices in the graph.
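For orientation, the sketch below shows the original Kou-Markowsky-Berman construction (metric closure on the terminals, MST of the closure, expansion of its edges into shortest paths, a second MST, and pruning of non-terminal leaves), written with the networkx library for the graph primitives. It illustrates the method only; the paper's contribution is a faster implementation of this idea, which this sketch does not reproduce.

    import itertools
    import networkx as nx

    # Kou-Markowsky-Berman style Steiner tree heuristic (illustrative sketch).
    def kmb_steiner_tree(G, terminals):
        # 1. Metric closure restricted to the terminals.
        closure = nx.Graph()
        for s, t in itertools.combinations(terminals, 2):
            closure.add_edge(s, t, weight=nx.shortest_path_length(G, s, t, weight='weight'))
        # 2. MST of the closure; 3. expand its edges into shortest paths in G.
        H = nx.Graph()
        for s, t in nx.minimum_spanning_tree(closure, weight='weight').edges():
            path = nx.shortest_path(G, s, t, weight='weight')
            for u, v in zip(path, path[1:]):
                H.add_edge(u, v, weight=G[u][v]['weight'])
        # 4. MST of the expanded subgraph; 5. prune non-terminal leaves.
        T = nx.minimum_spanning_tree(H, weight='weight')
        leaves = [v for v in T if T.degree(v) == 1 and v not in terminals]
        while leaves:
            T.remove_nodes_from(leaves)
            leaves = [v for v in T if T.degree(v) == 1 and v not in terminals]
        return T

    G = nx.Graph()
    G.add_weighted_edges_from([('a', 'x', 1), ('x', 'b', 1), ('x', 'c', 1), ('a', 'b', 3)])
    T = kmb_steiner_tree(G, {'a', 'b', 'c'})
    print(sorted(T.edges()), sum(d['weight'] for _, _, d in T.edges(data=True)))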
 
Article
In this paper we demonstrate a fast correlation attack on the recently proposed stream cipher LILI-128. The attack has complexity around 2^71 bit operations assuming a received sequence of length around 2^30 bits and a precomputation phase of complexity 2^79 table lookups. This complexity is significantly lower than 2^112, which was conjectured by the inventors of LILI-128 to be a lower bound on the complexity of any attack.
 
Article
To prove that a block cipher is secure against differential cryptanalysis, one should show that its maximum differential probability is a small enough value. We prove that the maximum differential probability of a 13-round Skipjack-like structure is bounded by p^4, where p is the maximum differential probability of the round function.
 
Article
A set of items has to be assigned to a set of bins of size one. If necessary, the size of a bin can be extended. The objective is to minimize the total size, i.e., the sum of the sizes of the bins. The Longest Processing Time heuristic is applied to this NP-hard problem. For this approximation algorithm we prove a worst-case bound, which is shown to be tight when the number of bins is even.
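A sketch of the Longest Processing Time heuristic in this setting: sort the items by non-increasing size, assign each to the currently least-loaded of the m unit-size bins, and pay the extension wherever a bin's load exceeds one. Item sizes, the bin count, and the tie-breaking below are illustrative.

    import heapq

    # LPT for bin size extension: m bins of size 1; the total cost is the sum of
    # max(load, 1) over the bins after the greedy assignment.
    def lpt_total_size(items, m):
        loads = [0.0] * m
        heap = [(load, i) for i, load in enumerate(loads)]
        heapq.heapify(heap)
        for size in sorted(items, reverse=True):   # longest first
            load, i = heapq.heappop(heap)          # least-loaded bin
            load += size
            loads[i] = load
            heapq.heappush(heap, (load, i))
        return sum(max(load, 1.0) for load in loads)

    print(lpt_total_size([0.9, 0.8, 0.4, 0.3, 0.2], 2))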
 
Article
In this paper we present an attack upon the Needham-Schroeder public-key authentication protocol. The attack allows an intruder to impersonate another agent.
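For context, the attack is a man-in-the-middle interleaving of two protocol runs; the schematic trace below is a rough rendering, with I the intruder, I(A) denoting I masquerading as A, and {·}_pub(X) denoting encryption under X's public key.

    1. A -> I    : {Na, A}_pub(I)      A opens a run with I
    2. I(A) -> B : {Na, A}_pub(B)      I re-encrypts A's message for B, posing as A
    3. B -> I(A) : {Na, Nb}_pub(A)     B answers "A"; I cannot decrypt this message ...
    4. I -> A    : {Na, Nb}_pub(A)     ... so I forwards it to A within the first run
    5. A -> I    : {Nb}_pub(I)         A decrypts and innocently returns Nb to I
    6. I(A) -> B : {Nb}_pub(B)         I completes the second run; B believes it ran the protocol with A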
 
Article
In 2012 Bóna showed the rather surprising fact that the cumulative numbers of occurrences of the classical patterns $231$ and $213$ are the same on the set of permutations avoiding $132$, even though the pattern-based statistics $231$ and $213$ do not have the same distribution on this set. Here we show that if the symbols playing the role of $1$ and $3$ in the occurrences of $231$ and $213$ are required to be adjacent, then the resulting statistics are equidistributed on the set of $132$-avoiding permutations. Expressed in terms of vincular patterns, we actually prove the following more general results: the statistics based on the patterns $b-ca$, $b-ac$ and $ba-c$, together with other statistics, have the same joint distribution on $S_n(132)$, and so do the patterns $bc-a$ and $c-ab$; and, up to trivial transformations, these are the only statistics based on proper (neither classical nor adjacent) vincular patterns of length three that are equidistributed on a set of permutations avoiding a classical pattern of length three.
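The first statement (equal cumulative numbers of occurrences of 231 and 213 over all 132-avoiding permutations) is easy to check by brute force for small n; a quick sketch:

    from itertools import combinations, permutations

    # Count occurrences of a classical length-3 pattern in a permutation.
    def occurrences(perm, pattern):
        return sum(1 for c in combinations(perm, 3)
                   if [sorted(c).index(v) + 1 for v in c] == list(pattern))

    # Total occurrences of 231 and 213 over all 132-avoiding permutations of [n].
    for n in range(2, 7):
        avoiders = [p for p in permutations(range(1, n + 1)) if occurrences(p, (1, 3, 2)) == 0]
        t231 = sum(occurrences(p, (2, 3, 1)) for p in avoiders)
        t213 = sum(occurrences(p, (2, 1, 3)) for p in avoiders)
        print(n, t231, t213)    # the two totals agree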
 
Article
Let S denote a set of N records whose keys are distinct nonnegative integers less than some initially specified bound M. This paper introduces a new data structure, called the y-fast trie, which uses Θ(N) space and Θ(log log M) time for range queries on a random access machine. We will also define a simpler but less efficient structure, called the x-fast trie.
 
Article
We present here a new model of computation: the Self-Modifying Finite Automaton (SMFA). This is similar to a standard finite automaton, but changes to the machine are allowed during a computation. It is shown here that a weak form of this model has the power to recognize an important class of context-free languages, the metalinear languages, as well as some significant non-context-free languages. Less restricted forms of SMFAs accept even more.
 
Article
The (2,1)-total labelling number of a graph G is the width of the smallest range of integers that suffices to label the vertices and the edges of G such that no two adjacent vertices, or two adjacent edges, have the same label, and the difference between the labels of a vertex and its incident edges is at least 2. Let T be a tree with maximum degree Δ⩾4, and let DΔ(T) denote the set of integers k for which there exist two distinct vertices of maximum degree at distance k in T. It was known that . In this paper, we prove that if 1∉DΔ(T) or 2∉DΔ(T), then . The result is best possible in the sense that, for any fixed integer k⩾3, there exist infinitely many trees T with Δ⩾4 and k∉DΔ(T) such that .
 
Article
The approximability of the following optimization problem is investigated: given a connected graph G = (V, E), find a maximally balanced connected partition for G, i.e. a partition (V1, V2) of V into disjoint sets V1 and V2 such that both subgraphs of G induced by V1 and V2 are connected, maximizing an objective function "balance", . We prove that for any ε > 0 it is NP-hard (even for bipartite graphs) to approximate the maximum balance of a connected partition of G = (V, E) with an absolute error guarantee of |V|^{1−ε}. On the other hand, we give a polynomial-time approximation algorithm that solves the problem within even when the vertices of G are weighted. A variation of the problem is equivalent to the Maximally Balanced Spanning Tree Problem studied by Galbiati, Maffioli and Morzenti (1995). Our simple polynomial-time algorithm approximates the solution of that problem within 1.072.
 
Article
Signcryption is a new paradigm in public key cryptography. A remarkable property of a signcryption scheme is that it fulfills both the functions of public key encryption and digital signature, with a cost significantly smaller than that required by signature-then-encryption. The purposes of this paper are to demonstrate how to specify signcryption schemes on elliptic curves over finite fields, and to examine the efficiency of such schemes. Our analysis shows that when compared with signature-then-encryption on elliptic curves, signcryption on the curves represents a 58% saving in computational cost and a 40% saving in communication overhead.
 
Article
In this paper we analyze the complexity of recovering cryptographic keys when messages are encrypted under various keys. We suggest key-collision attacks, which show that the theoretic strength of a block cipher (in ECB mode) cannot exceed the square root of the size of the key space. As a result, in some circumstances, some keys can be recovered while they are still in use, and these keys can then be used to substitute messages by messages more favorable to the attacker (e.g., transfer $1000000 to bank account 123-4567890). Taking DES as our example, we show that one key of DES can be recovered with complexity 2^28, and one 168-bit key of (three-key) triple-DES can be recovered with complexity 2^84. We also discuss the theoretic strength of chaining modes of operation, and show that in some cases they may be vulnerable to such attacks.
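The square-root bound is a birthday-paradox count over the key space, and it matches the figures quoted above:

    sqrt(2^56)  = 2^28    (single DES, 56-bit key)
    sqrt(2^168) = 2^84    (three-key triple-DES, 168-bit key)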
 
Article
A cryptographic implementation is proposed for access control in a situation where users and information items are classified into security classes organized as a rooted tree, with the most privileged security class at the root. Each user stores a single key of fixed size corresponding to the user's security class. Keys for security classes in the subtree below the user's security class are generated from this key by iterative application of one-way functions. New security classes can be defined without altering existing keys. The scheme proposed here is based on conventional cryptosystems (as opposed to public key cryptosystems).
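A minimal sketch of the iterated one-way derivation, with SHA-256 from hashlib standing in for the one-way function; the class identifiers and the exact derivation rule are illustrative assumptions, not the paper's construction. A user holding the key of a class can derive the key of any descendant class by walking down the tree, while keys higher up remain unreachable.

    import hashlib

    # Derive the key of a descendant security class from an ancestor's key by
    # iterating a one-way function along the path in the class tree.
    def child_key(parent_key: bytes, child_id: str) -> bytes:
        return hashlib.sha256(parent_key + child_id.encode()).digest()

    def derive(ancestor_key: bytes, path: list) -> bytes:
        key = ancestor_key
        for cls in path:            # e.g. ['finance', 'payroll']
            key = child_key(key, cls)
        return key

    root_key = b'\x00' * 32         # key of the most privileged class (illustrative)
    payroll_key = derive(root_key, ['finance', 'payroll'])
    # Holding only finance's key still suffices to reach payroll's key:
    assert derive(child_key(root_key, 'finance'), ['payroll']) == payroll_key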
 
Article
An N-superconcentrator is a directed, acyclic graph with N input nodes and N output nodes such that every subset of the inputs and every subset of the outputs of the same cardinality can be connected by node-disjoint paths. It is known that linear-size and bounded-degree superconcentrators exist. Here it is proved that such superconcentrators exist (by a random construction of certain expander graphs as building blocks) having density 28 (where the density is the number of edges divided by N). The best known density before this paper was 34.2 [U. Schöning, Construction of expanders and superconcentrators using Kolmogorov complexity, J. Random Structures Algorithms 17 (2000) 64–77] or 33 [L.A. Bassalygo, Personal communication, 2004].
 
[Figure captions: distribution of blocksets of the ORL database for two, four, and seven blocks per image, showing the three main PCA components for each blockset (face images normalized and resized to 56 × 46); experimental comparisons on the ORL database and for different numbers of blocks per image.]
Article
Direct extension of (2D) matrix-based linear subspace algorithms to kernel-induced feature space is computationally intractable and also fails to exploit local characteristics of input data. In this letter, we develop a 2D generalized framework which integrates the concept of kernel machines with 2D principal component analysis (PCA) and 2D linear discriminant analysis (LDA). In order to remedy the mentioned drawbacks, we propose a block-wise approach based on the assumption that data is multi-modally distributed in so-called block manifolds. The proposed methods, namely block-wise 2D kernel PCA (B2D-KPCA) and block-wise 2D generalized discriminant analysis (B2D-GDA), attempt to find local nonlinear subspace projections in each block manifold or alternatively search for linear subspace projections in the kernel space associated with each blockset. Experimental results on the ORL face database attest to the reliability of the proposed block-wise approach compared with related published methods.
 
Chapter
In this paper, two simple task migration schemes are first proposed for 2D mesh multicomputers supporting X-Y wormhole routing under the one-port communication model. We then propose a hybrid task migration scheme that attempts to minimize the total transmission latency. Finally, we compare all of the proposed task migration schemes through performance analysis.
 
Article
In this paper we present a parameterized algorithm that solves the Convex Recoloring problem for trees in O(k256∗poly(n)). This improves the currently best upper bound of O(kk(k/log k)∗poly(n)) achieved by Moran and Snir.
 
Article
The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are derived. The exploration–exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given.
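For reference, the iteration being analyzed is the standard particle swarm update, whose inertia weight w and acceleration coefficients c1, c2 are exactly the parameters such selection guidelines concern. The one-particle, one-dimensional sketch below uses illustrative coefficient values, not the guideline values derived in the paper.

    import random

    # One dimension of the standard PSO update; w, c1, c2 are the parameters
    # that the dynamic-system analysis gives selection guidelines for.
    def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
        v = w * v + c1 * rng.random() * (pbest - x) + c2 * rng.random() * (gbest - x)
        return x + v, v

    x, v, pbest, gbest = 5.0, 0.0, 2.0, 0.0   # illustrative values
    for _ in range(3):
        x, v = pso_step(x, v, pbest, gbest)
        print(round(x, 3), round(v, 3))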
 
Article
In this paper we contribute to the understanding of the geometric properties of 3D drawings. Namely, we show how to construct a 3D straight-line grid drawing of any 4-colorable graph in O(n^2) volume. Moreover, we prove that every bipartite graph needs at least volume.
 
Article
The recent interest in three-dimensional graph drawing has been motivating studies on how to extend two-dimensional techniques to higher dimensions. A common 2D approach for computing an orthogonal drawing separates the task of defining the shape of the drawing from the task of computing its coordinates. First results towards finding a three-dimensional counterpart of this approach are presented by G. Di Battista, et al. [Graph Drawing (Proc. GD'00), Lecture Notes in Comput. Sci., vol. 1984, Springer, Berlin, 2001; Theoret. Comput. Sci. 289 (2002) 897], where characterizations of orthogonal representations of paths and cycles are studied. In this paper we show that the characterization for cycles given by G. Di Battista, et al. [Theoret. Comput. Sci. 289 (2002) 897] does not immediately extend to even seemingly simple graphs.
 
Article
In this note, we prove a simple theorem that provides a lower bound on the size of nondeterministic finite automata which accept a given regular language.
 
Article
We show that, for almost all N-variable Boolean functions f, at least N/4 − O(√N log N) queries are required to compute f in the quantum black-box model with bounded error.
 
Article
Edison-80, a superset of the programming language Edison, was implemented on an Intel development system. It aims to provide software designers with a programming environment that combines the benefits of the abstraction of high-level languages with those of machine orientation, and that is powerful enough to support the development of nontrivial system software with a spectrum of facilities, including parallel processing. Modifications (e.g., a separate compilation facility) are introduced into Edison-80 to make it more powerful than the original language. Although it is an interpreted language, the resulting Edison-80 object code is downloadable and compatible with the existing compiled languages on the system, so that they can be linked together and run on user systems. Implementation experiences are reported.
 
Article
It is shown that, for any equation $X =_{RS} t_X$ in the LLTS-oriented process calculus $\text{CLL}_R$, if $X$ is strongly guarded in $t_X$, then the recursive term $\langle X \mid X = t_X \rangle$ is the greatest solution of this equation w.r.t. Lüttgen and Vogler's ready simulation.
 
Article
In this paper, we propose a fast iterative modular multiplication algorithm for calculating the product AB modulo N, where N is a large modulus in number-theoretic cryptosystems such as RSA. Our algorithm requires additions on average for an n-bit modulus if k carry bits are dealt with in each loop. For a 512-bit modulus, the fastest known modular multiplication algorithm, that of Chen and Liu, requires 517 additions on average; compared to Chen and Liu's algorithm, ours reduces the number of additions by 26% for a 512-bit modulus.
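For orientation, the textbook baseline that iterative schemes of this kind refine is interleaved (shift-and-add) modular multiplication, where the intermediate result is reduced modulo N in every loop iteration; the sketch below is that generic baseline, not the authors' carry-handling algorithm.

    # Textbook interleaved modular multiplication: process the multiplier A from
    # its most significant bit down, doubling and reducing modulo N each step.
    def modmul(A, B, N):
        result = 0
        for bit in bin(A)[2:]:              # MSB first
            result = (result << 1) % N      # shift (double), then reduce
            if bit == '1':
                result = (result + B) % N   # conditional add, then reduce
        return result

    A, B, N = 0xDEADBEEF, 0xCAFEBABE, (1 << 61) - 1
    assert modmul(A, B, N) == (A * B) % N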
 
Article
We derive a simple, efficient algorithm for computing the Abelian periods of a string once all of its Abelian squares are known; an efficient algorithm for the latter problem was given by Cummings and Smyth in 1997. Along the way we give an alternative algorithm for Abelian squares, and we also obtain a linear-time algorithm finding all 'long' Abelian periods. The aim of the paper is a (new) reduction of the problem of finding all Abelian periods to the (already solved) problem of finding all Abelian squares, which provides new insight into both connected problems.
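For concreteness, an Abelian square is a factor uv in which v is a permutation of u, i.e. the two halves have equal Parikh vectors. The naive quadratic check below illustrates the notion only; it is neither the Cummings-Smyth algorithm nor the reduction described above.

    from collections import Counter

    # Report all Abelian squares in s as (start, half_length) pairs:
    # s[i:i+k] and s[i+k:i+2k] use the same multiset of letters.
    def abelian_squares(s):
        hits = []
        for i in range(len(s)):
            for k in range(1, (len(s) - i) // 2 + 1):
                if Counter(s[i:i + k]) == Counter(s[i + k:i + 2 * k]):
                    hits.append((i, k))
        return hits

    print(abelian_squares("abaab"))   # [(1, 2), (2, 1)]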
 
Article
In a graph $G=(V,E)$, a bisection $(X,Y)$ is a partition of $V$ into sets $X$ and $Y$ such that $|X|\le |Y|\le |X|+1$. The size of $(X,Y)$ is the number of edges between $X$ and $Y$. In the Max Bisection problem we are given a graph $G=(V,E)$ and are required to find a bisection of maximum size. It is not hard to see that $\lceil |E|/2 \rceil$ is a tight lower bound on the maximum size of a bisection of $G$. We study parameterized complexity of the following parameterized problem called Max Bisection above Tight Lower Bound (Max-Bisec-ATLB): decide whether a graph $G=(V,E)$ has a bisection of size at least $\lceil |E|/2 \rceil+k,$ where $k$ is the parameter. We show that this parameterized problem has a kernel with $O(k^2)$ vertices and $O(k^3)$ edges, i.e., every instance of Max-Bisec-ATLB is equivalent to an instance of Max-Bisec-ATLB on a graph with at most $O(k^2)$ vertices and $O(k^3)$ edges.
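One standard way to see the $\lceil |E|/2 \rceil$ lower bound mentioned above (not necessarily the authors' argument) is by averaging: a uniformly random bisection cuts each edge $uv$ with probability at least $1/2$, so the expected size of a random bisection is at least $|E|/2$, and hence some bisection has size at least $\lceil |E|/2 \rceil$. Concretely, with $n=|V|$,

    $\Pr[uv \text{ is cut}] = \frac{2\,\lceil n/2\rceil\,\lfloor n/2\rfloor}{n(n-1)} \ge \frac{1}{2}.$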
 
Article
Analyzing sequence composition is a basic task in genomic research. To efficiently compute the shortest absent words in a genomic sequence, we present a linear-time algorithm which first estimates the length of the shortest absent words by a probabilistic method and then, based on this estimate, finds all shortest absent words in the sequence. Our algorithm needs to scan the genomic sequence only once, without the space requirements of index structures such as suffix trees and suffix arrays. Experimental results show that our algorithm needs only 1.5 minutes to compute the shortest absent words in the human genome, and it is therefore more efficient than existing algorithms.
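The underlying notion is easy to state: a shortest absent word is a string of minimal length k that never occurs in the sequence. The naive sketch below rescans the sequence once per candidate length and is meant only to pin down the definition; it is not the single-scan, probabilistically guided algorithm of the paper.

    from itertools import product

    # Find the shortest absent words over the DNA alphabet: the smallest k such
    # that some k-mer never occurs in s, together with all such k-mers.
    def shortest_absent_words(s, alphabet="ACGT"):
        k = 1
        while True:
            present = {s[i:i + k] for i in range(len(s) - k + 1)}
            absent = [''.join(w) for w in product(alphabet, repeat=k)
                      if ''.join(w) not in present]
            if absent:
                return k, absent
            k += 1

    print(shortest_absent_words("ACGTACGTAAGG"))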
 
Article
We bridge the gap between compositional evaluators and abstract machines for the lambda-calculus, using closure conversion, transformation into continuation-passing style, and defunctionalization of continuations. This article is a follow-up to our article at PPDP 2003, where we consider call by name and call by value; here, however, we consider call by need. We derive a lazy abstract machine from an ordinary call-by-need evaluator that threads a heap of updatable cells. In the resulting abstract machine, the continuation fragment for updating a heap cell naturally appears as an ‘update marker’, an implementation technique that was invented for the Three Instruction Machine and subsequently used to construct lazy variants of Krivine's abstract machine. Tuning the evaluator leads to other implementation techniques such as unboxed values. The correctness of the resulting abstract machines is a corollary of the correctness of the original evaluators and of the program transformations used in the derivation.
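The "heap of updatable cells" corresponds to ordinary memoized thunks: a suspended computation is run at most once, after which the cell is overwritten with its value (the role an update marker plays in the derived machine). A tiny illustrative sketch of that mechanism, not the derived abstract machine itself:

    # A call-by-need cell: the suspended computation runs at most once, and the
    # cell is then "updated" with the result.
    class Thunk:
        def __init__(self, compute):
            self._compute = compute
            self._forced = False
            self._value = None

        def force(self):
            if not self._forced:                 # evaluate on first demand only
                self._value = self._compute()    # run the suspended computation
                self._forced = True
                self._compute = None             # update the cell with the result
            return self._value

    t = Thunk(lambda: print("evaluating") or 42)
    print(t.force(), t.force())    # "evaluating" is printed once; prints 42 42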
 
Article
A well-known problem in Petri net theory is to formalise an appropriate causality-based concept of process or run for place/transition systems. The so-called individual token interpretation, where tokens are distinguished according to their causal history, giving rise to the processes of Goltz and Reisig, is often considered too detailed. The problem of defining a fully satisfying more abstract concept of process for general place/transition systems has so far not been solved. In this paper, we recall the proposal of defining an abstract notion of process, here called BD-process, in terms of equivalence classes of Goltz-Reisig processes, using an equivalence proposed by Best and Devillers. It yields a fully satisfying solution for at least all one-safe nets. However, for certain nets which intuitively have different conflicting behaviours, it yields only one maximal abstract process. Here we identify a class of place/transition systems, called structural conflict nets, where conflict and concurrency due to token multiplicity are clearly separated. We show that, in the case of structural conflict nets, the equivalence proposed by Best and Devillers yields a unique maximal abstract process only for conflict-free nets. Thereby BD-processes constitute a simple and fully satisfying solution in the class of structural conflict nets.
 
Article
A property P of a language is said to be definable by abstract interpretation if there is a monotonic map abs from the domain of standard semantics to an abstract domain A of finite height, and a partition of the abstract domain into two parts A_P and A_non-P, such that any value has property P if and only if abs maps it to an element of A_P. Head-strictness is a property of functions over lists which asserts, roughly speaking, that whenever the function looks at the tail of a list, it looks at the head of the tail. We prove that head-strictness is not definable by abstract interpretation. We then present a non-monotonic abstract interpretation for head-strictness and prove its safety.
 
Article
We relate two models of security protocols, namely the linear logic or multiset rewriting model, and the classical logic, Horn clause representation of protocols. More specifically, we show that the latter model is an abstraction of the former, in which the number of repetitions of each fact is forgotten. This result formally characterizes the approximations made by the classical logic model.
 
Article
In this paper we consider the application of accelerated techniques in order to increase the rate of convergence of the diffusive iterative load balancing algorithms. In particular, we compare the application of Semi-Iterative, Second Degree and Variable Extrapolation techniques on the basic diffusion method for various types of network graphs.
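The accelerated variants build on the basic first-order diffusion scheme, in which each node exchanges a fixed fraction of the load difference with every neighbor in each iteration. The sketch below runs that baseline on an assumed four-node ring with an illustrative diffusion parameter; it does not implement the Semi-Iterative, Second Degree, or Variable Extrapolation techniques themselves.

    # First-order diffusive load balancing: each node exchanges a fraction
    # alpha of the load difference with every neighbor, per iteration.
    def diffuse(load, neighbors, alpha=0.25, iters=20):
        for _ in range(iters):
            new = load[:]
            for i, nbrs in enumerate(neighbors):
                new[i] += alpha * sum(load[j] - load[i] for j in nbrs)
            load = new
        return load

    ring = [(1, 3), (0, 2), (1, 3), (0, 2)]      # 4-node ring topology
    print([round(x, 3) for x in diffuse([8.0, 0.0, 0.0, 0.0], ring)])
    # loads converge toward the average (2.0 on each node)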
 