Article

Efficacy of the Metropolis Algorithm for the Minimum-Weight Codeword Problem Using Codeword and Generator Search Spaces


Abstract

This article studies the efficacy of the Metropolis algorithm for the minimum-weight codeword problem. The input is a linear code $C$ given by its generator matrix, and the task is to compute a nonzero codeword of $C$ of least weight. In particular, we study the Metropolis algorithm on two possible search spaces for the problem: 1) the codeword space and 2) the generator space. The former, the space of all codewords of the input code, is the most natural choice and has been used in previous work on this problem. The latter, the space of all generator matrices of the input code, is studied for the first time in this article. We show that for an appropriately chosen temperature parameter the Metropolis algorithm mixes rapidly on either of these search spaces. Experimentally, we demonstrate that the Metropolis algorithm performs favorably compared with previous attempts, and that when using the generator space it outperforms the previous algorithms in most cases. We also provide theoretical and experimental justification for why the generator space is a worthwhile search space for this problem.
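To make the codeword-space variant concrete, here is a minimal sketch of a Metropolis search over messages in $F_2^k$, scored by the Hamming weight of the encoded codeword. The single-bit-flip neighborhood, the temperature T, and the iteration budget are illustrative assumptions, not the exact parameterization studied in the article.

```python
import numpy as np

def metropolis_min_weight(G, T=0.5, iters=20000, seed=None):
    """Metropolis search for a low-weight codeword of the binary code
    generated by G (a k x n 0/1 matrix). Sketch only: states are nonzero
    messages m in F_2^k, a neighbor flips one coordinate of m, and the
    cost of a state is the Hamming weight of the codeword mG (mod 2)."""
    rng = np.random.default_rng(seed)
    k, _ = G.shape
    m = rng.integers(0, 2, size=k)
    if not m.any():
        m[0] = 1                      # keep the chain on nonzero messages
    weight = lambda x: int(((x @ G) % 2).sum())
    cur_w = weight(m)
    best, best_w = m.copy(), cur_w
    for _ in range(iters):
        cand = m.copy()
        cand[rng.integers(k)] ^= 1    # flip one message bit
        if not cand.any():
            continue                  # the zero codeword is not a valid state
        cand_w = weight(cand)
        # Metropolis rule: accept improvements; accept worse moves
        # with probability exp(-(cand_w - cur_w) / T).
        if cand_w <= cur_w or rng.random() < np.exp((cur_w - cand_w) / T):
            m, cur_w = cand, cand_w
            if cur_w < best_w:
                best, best_w = m.copy(), cur_w
    return best_w, (best @ G) % 2
```

A sketch of the corresponding generator-space move appears with the related conference paper further down this page.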


... Artificial intelligence based on optimisation algorithms, and specifically on meta-heuristic search methods, has proved very handy and powerful for solving computationally hard real-world problems [13]. Meta-heuristic search methods have been successfully and widely applied to minimum distance problems in algebraic coding theory; see, for example, [1,[6][7][8]]. These search schemes allow one to obtain optimal results much faster than standard linear search methods. ...
... Let $D_{2n}$ be the dihedral group of order $2n$ for $n > 2$, with the ordering given in Eq. (1). Then the construction matrix $\phi(\sigma(w))$ of $w \in F_4 \phi D_{2n}$ has the following forms: ...
... The dual of this code is the $[14, 1, 14]_4$ quasi-cyclic code of degree 2, and its generator matrix has the following form: ...
Article
Full-text available
Construction of maximal entanglement-assisted quantum error correction (EAQEC) codes is one of the fundamental problems of quantum computing and quantum information. The objective of this paper is twofold: first, to obtain all possible construction matrices of the linear codes over the skew group ring $F_4 \phi G$, where $G$ is a cyclic or dihedral group of finite order; and second, to obtain some good maximal EAQEC codes over the finite field $F_4$ by using skew construction matrices. Additionally, to speed up the computational search, we employ a nature-inspired heuristic optimisation algorithm, the virus optimisation (VO) algorithm. With our method, we obtain a number of good maximal EAQEC codes over $F_4$ in a reasonably short time. In particular, we improve the lower bounds of 18 maximal EAQEC codes that exist in the literature. Moreover, some of our EAQEC codes turn out to be maximum distance separable (MDS) codes. Also, by using our construction matrices, we provide counterexamples to Theorems 4 and 5 of Lai et al. (Quantum Inf Process 13(4):957–990, 2014) on the non-existence of maximal EAQEC codes with parameters [[n, 1, n; n − 1]] and [[n, n − 1, 2; 1]] for even length n. We also give a counterexample to a theorem of Lai and Ashikhmin (IEEE Trans Inf Theory 64(1):622–639, 2018), which states that there is no entanglement-assisted stabilizer code with parameters $[[4, 2, 3; 2]]_4$.
... Most papers in the literature use codewords as the search space for the minimum distance problem. Recently, generator matrices were considered as a search space, which turned out to be a better approach than using codewords; see [1] for details. In this work, we also consider generator matrices as the search space. ...
... Therefore, it is natural to use generator matrices in the minimum-weight codeword problem. To the best of our knowledge, [1,6] and [7] are the only papers in which generator matrices are used in place of codewords as a search space. ...
... Many papers in the literature use codewords as the search space for calculating the minimum distance of a given code. In the algorithms presented here, however, generator matrices are used as the search space, as suggested in [1]. ...
Article
Finding the minimum distance of linear codes is one of the main problems in coding theory. The importance of the minimum distance comes from the error-correcting and error-detecting capability of the codes in question. The problem has been proven NP-hard: a candidate solution can be guessed and verified in polynomial time, but no particular rule is known for making the guess, and several meta-heuristic approaches in the literature have been used to attack it. In this paper, the swarm-based optimization techniques bat and firefly are applied to the minimum distance problem by integrating algebraic operators into these algorithms.
... Note that MA is a widely used local search-based metaheuristic [42]. As per the literature, it is successful in finding good solutions for many optimization problems [11,[41][42][43][44][45][46][47][48][49][50]. Further details about MA are discussed in Section 4. ...
... Based on the problem definition, one has to define an appropriate neighborhood structure and a fitness/cost function. For the basic definitions of an MC and its mixing time, refer to the standard textbooks and papers in the literature [41,48,49,51,52]. ...
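Those two ingredients are all a Metropolis-style chain needs. Below is a minimal generic sketch of one MA transition; the function names and the minimization convention are assumptions for illustration, not definitions from the cited papers.

```python
import math
import random
from typing import Callable, Tuple, TypeVar

S = TypeVar("S")  # a search-space state

def metropolis_step(state: S, cost: float,
                    neighbor: Callable[[S], S],     # samples a random neighbor
                    fitness: Callable[[S], float],  # cost to minimize
                    T: float) -> Tuple[S, float]:
    """One MA transition built from the two problem-specific ingredients:
    a neighborhood sampler and a fitness/cost function."""
    cand = neighbor(state)
    c = fitness(cand)
    if c <= cost or random.random() < math.exp((cost - c) / T):
        return cand, c        # accept (always, when not worse)
    return state, cost        # reject and stay put
```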
Article
Full-text available
The structural properties of the search graph play an important role in the success of local search-based metaheuristic algorithms. Magnification is one such structural property. This study establishes a relationship between the magnification of a search graph and the mixing time of the Markov chain (MC) induced on that search space by local search-based metaheuristics. The result shows that the mixing time of the ergodic reversible Markov chain induced by local search-based metaheuristics is inversely proportional to the magnification. This indicates that it is desirable to use a search space with large magnification for the optimization problem at hand rather than an arbitrary search space: the performance of local search-based metaheuristics may be good on such spaces, since the mixing time of the underlying Markov chain is inversely proportional to the magnification of the search space. Using these relations, this work shows that the MC induced by the Metropolis Algorithm (MA) mixes rapidly if the search graph has large magnification; that is, for any combinatorial optimization problem, the Markov chains associated with the MA mix rapidly, in polynomial time, if the underlying search graph has large magnification. The usefulness of the obtained results is illustrated using the 0/1-Knapsack Problem, a well-studied, NP-complete combinatorial optimization problem. Using the theoretical results obtained, this work shows that the Markov chains associated with local search-based metaheuristics such as the random walk and the MA for the 0/1-Knapsack Problem mix rapidly.
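The "inversely proportional" claim is, in the standard Markov chain literature, routed through conductance. A plausible sketch of the chain of bounds uses Sinclair's conductance bound, with $\Phi$ the conductance, $\pi_{\min}$ the smallest stationary probability, and $\mu$ the magnification; the constant $c$ and the exact dependence on $\mu$ are assumptions here, not the paper's precise statement:

$$\tau(\varepsilon) \;\le\; \frac{2}{\Phi^{2}}\left(\ln \pi_{\min}^{-1} + \ln \varepsilon^{-1}\right), \qquad \Phi \;\ge\; c\,\mu \;\;\Longrightarrow\;\; \tau(\varepsilon) \;=\; O\!\left(\frac{\ln \pi_{\min}^{-1} + \ln \varepsilon^{-1}}{\mu^{2}}\right).$$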
... Metaheuristic optimization algorithms have been used widely and successfully for many engineering problems whose solution steps take an exhaustively long time [6,15,16,18,22]. In algebraic coding theory, optimization algorithms were previously used only for the minimum distance problem [1,2,3,4]. Searching for self-dual codes with new parameters is one of the main problems in algebraic coding theory, and linear search was the only search tool for this problem. Although linear search for self-dual codes achieves good results for small search spaces, it becomes very time-consuming as the search space grows. ...
... Metaheuristic algorithms have been very useful and powerful tools for tackling real-world problems that were previously difficult or impossible to solve [6]. In algebraic coding theory, optimization algorithms were used only for the minimum distance problem; see, for example, [1,2,3,4]. ...
Preprint
Full-text available
In this paper, a virus optimization algorithm, one of the metaheuristic optimization techniques, is employed for the first time for the problem of finding extremal binary self-dual codes. We present a number of generator matrices of the form $[I_{36} \ | \ \tau_3(v)]$, where $I_{36}$ is the $36 \times 36$ identity matrix, $v$ is an element of the group matrix ring $M_3(\mathbb{F}_2)G$, and $G$ is a finite group of order 12, which we then employ together with the virus optimization algorithm and the genetic algorithm to search for extremal binary self-dual codes of length 72. We find that the virus optimization algorithm produces more extremal binary self-dual codes than the genetic algorithm. Moreover, by employing the above constructions together with the virus optimization algorithm, we obtain 39 Type I and 19 Type II codes of length 72 with parameters in their weight enumerators that were not previously known in the literature.
... Definition 6 (Magnification [46], [47]). To find a lower bound on the magnification, we make use of the canonical path method [46], [47]. In the canonical path method, we first need to define a unique path (say $\gamma_{U,V}$) between any two search space elements $U$ and $V$. ...
Article
Full-text available
We present a day-ahead scheduling strategy for an Energy Storage System (ESS) in a microgrid using two algorithms: the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The scheduling strategy aims to minimize the cost paid by consumers in a microgrid subject to dynamic pricing. We define an objective function for the optimization problem, present its search space, and study its structural properties. We prove that the search space has a magnification of at least 50 × (B_c − B_d + 1), where B_c and B_d are the maximum depths of charge and discharge in an hour (in percentage) of the ESS, respectively. In a simulation involving load, energy generation, and grid price forecasts for three microgrids of different sizes, we obtain ESS schedules that provide average cost reductions of 11.31% (using GA) and 14.31% (using PSO) over the ESS schedule obtained using the Net Power Based Algorithm.
Article
Finding the minimum distance of linear codes is an NP-hard (non-deterministic polynomial-time hard) problem, and different approaches have been used in the literature to solve it. While some methods focus on finding the true distance using exact algorithms, others use optimization algorithms to find lower or upper bounds on the distance. In this study, we focus on the latter approach. We first give the swarm intelligence background of the artificial bee colony algorithm, explain its algebraic variant, and call it the algebraic artificial bee colony algorithm (A-ABC). Moreover, we develop the A-ABC algorithm by integrating it with the algebraic differential mutation operator, and call the result the mutation-based algebraic artificial bee colony algorithm (MBA-ABC). We apply both the A-ABC and MBA-ABC algorithms to the problem of finding the minimum distance of linear codes. The results indicate that the MBA-ABC algorithm has superior performance compared with the A-ABC algorithm when finding the minimum distance of Bose–Chaudhuri–Hocquenghem (BCH) codes (a special type of linear code).
Article
In this paper, a genetic algorithm, one of the evolutionary optimization methods, is used for the first time for the problem of computing extremal binary self-dual codes. We present a comparison of the computational times of the genetic algorithm and a linear search for search spaces of different sizes, and show that the genetic algorithm is capable of computing binary self-dual codes significantly faster than the linear search. Moreover, by employing a known matrix construction together with the genetic algorithm, we are able to obtain new binary self-dual codes of lengths 68 and 72 in a significantly short time. In particular, we obtain 11 new binary self-dual codes of length 68 and 17 new binary self-dual codes of length 72.
Article
Full-text available
We show that the minimum distance of a linear code (or, equivalently, the weight of the lightest codeword) is not approximable to within any constant factor in random polynomial time (RP), unless NP equals RP. Under the stronger assumption that NP is not contained in RQP (random quasi-polynomial time), we show that the minimum distance is not approximable to within the factor $2^{\log^{1-\epsilon} n}$ for any $\epsilon > 0$, where $n$ denotes the block length of the code. Our results hold for codes over every finite field, including the special case of binary codes. In the process we show that the nearest codeword problem is hard to solve even under the promise that the number of errors is (a constant factor) smaller than the distance of the code; this is a particularly meaningful version of the nearest codeword problem. Our results strengthen (though using stronger assumptions) a previous result of A. Vardy (1997), who showed that the minimum distance is NP-hard to compute exactly. Our results are obtained by adapting proofs of analogous results for integer lattices due to M. Ajtai (1998) and D. Micciancio (1998). A critical component in the adaptation is our use of linear codes that perform better than random (linear) codes.
Conference Paper
Full-text available
Finding the minimum distance of linear codes is in general an NP-hard problem; we propose an efficient algorithm to attack it. The principle of this approach is to search for codewords locally around the all-zero codeword perturbed by a level of noise (in other words, the maximum noise that a soft-in decoder can correct), anticipating that the nearest resulting nonzero codewords will most likely contain the minimum-Hamming-weight codeword, whose Hamming weight equals the minimum distance of the linear code. Numerous results show that the proposed algorithm is valid for general linear codes and is very fast compared with all other known techniques, making it a good computational tool. Compared with Joanna's work, we show that our algorithm has low complexity and a fast execution time. For some linear RQ, QDC, and BCH codes with unknown minimum distance, we give a good (true) estimate of the minimum distance for lengths less than 439.
Article
Full-text available
Computer science uses pseudo-random sequences on a daily basis. They are used in games, in network communication protocols, and above all in cryptographic protocols. Number theory is the basis of the majority of pseudo-random generators. In this paper, we propose a new construction of a pseudo-random generator based on the syndrome decoding problem of the rational binary Goppa code, with the parameters of the code generated randomly. The generator is proven secure, is effective, and is simple to implement. It provides an alternative to number-theoretic generators and is distinguished by its good performance.
Article
Full-text available
The evaluation of the minimum distance of linear block codes remains an open problem in coding theory, and it is not easy to determine its true value with classical methods; for this reason, the problem has been attacked in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to attack the hardness of this problem. The first approach is based on genetic algorithms and yields good results compared with other work also based on genetic algorithms. The second approach is based on a new randomized algorithm which we call the Multiple Impulse Method (MIM), whose principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the nearest resulting nonzero codewords will most likely contain the minimum-Hamming-weight codeword, whose Hamming weight equals the minimum distance of the linear code.
Chapter
Full-text available
Simulated annealing is a popular local search meta-heuristic used to address discrete and, to a lesser extent, continuous optimization problems. The key feature of simulated annealing is that it provides a means to escape local optima by allowing hill-climbing moves (i.e., moves which worsen the objective function value) in hopes of finding a global optimum. A brief history of simulated annealing is presented, including a review of its application to discrete and continuous optimization problems. Convergence theory for simulated annealing is reviewed, as well as recent advances in the analysis of finite time performance. Other local search algorithms are discussed in terms of their relationship to simulated annealing. The chapter also presents practical guidelines for the implementation of simulated annealing in terms of cooling schedules, neighborhood functions, and appropriate applications.
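The hill-climbing escape mechanism described here is usually formalized by the Metropolis acceptance rule, shown below together with a common geometric cooling schedule; the schedule and the constant $\alpha$ are illustrative choices, not ones the chapter prescribes:

$$P\bigl(\text{accept } y \mid x, T\bigr) \;=\; \min\!\bigl(1,\; e^{-(f(y) - f(x))/T}\bigr), \qquad T_{t+1} \;=\; \alpha\, T_t, \quad 0 < \alpha < 1,$$

where $f$ is the objective to be minimized. At high $T$ nearly every move is accepted, while as $T \to 0$ the rule degenerates to strict hill-descent.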
Article
Full-text available
correcting codes among codes of short lengths. The error-correcting capability of a code is directly related to its minimum distance. It is computationally very difficult to determine the true minimum distance of BCH codes. We analyze and compare the behaviour of different heuristic search techniques when applied to the problem of finding the true minimum weight of BCH codes. A basic genetic algorithm significantly outperformed hill-climbing, tabu search, and hybrid techniques.
Article
Many real-world optimization problems cannot be modeled by a well-described objective function to apply methods from mathematical optimization theory. Then randomized search heuristics are applied – often with good success. Although heuristical by nature, they are algorithms and can be analyzed like all randomized algorithms, at least in principle. Two fundamental results of this kind are presented to show how such a theory can be developed.
Article
We study the performance of the Metropolis algorithm for the problem of finding a codeword of weight less than or equal to $M$, given a generator matrix of an $[n, \kappa]$ binary linear code. The algorithm uses the set $S_\kappa$ of all $\kappa \times \kappa$ invertible matrices as its search space, where two elements are considered adjacent if one can be obtained from the other via an elementary row operation (i.e., by adding one row to another or by swapping two rows). We prove that the Markov chains associated with the Metropolis algorithm mix rapidly for suitable choices of the temperature parameter $T$. We ran the Metropolis algorithm on a number of codes and found that it performed very well in comparison with previously known experimental results.
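A minimal sketch of the generator-space move this paper describes: one step applies a random elementary row operation over $F_2$ (row addition or row swap), which preserves the row space, so every state generates the same code. The helper names and the lightest-row scoring are assumptions for illustration; these plug directly into the generic `metropolis_step` shown earlier on this page.

```python
import numpy as np

def random_row_op(A, rng):
    """One generator-space move: a random elementary row operation over F_2
    (add one row to another, or swap two rows)."""
    B = A.copy()
    i, j = rng.choice(A.shape[0], size=2, replace=False)
    if rng.random() < 0.5:
        B[i] = (B[i] + B[j]) % 2      # row addition
    else:
        B[[i, j]] = B[[j, i]]         # row swap
    return B

def lightest_row_weight(A):
    """Illustrative fitness: Hamming weight of the lightest row of A."""
    return int(A.sum(axis=1).min())
```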
Conference Paper
Linear codes with good algebraic structures have been used in a number of cryptographic or information-security applications, such as wire-tap channels of type II and secret sharing schemes. For a code-based secret sharing scheme, the problem of determining the minimal access sets is reduced to finding the minimal codewords of the dual code. It is well known that the latter problem is a hard problem for an arbitrary linear code. Constant weight codes and two-weight codes have been studied in the literature, for their applications to secret sharing schemes. In this paper, we study a class of three-weight codes. Making use of the finite projective geometry, we will give a sufficient and necessary condition for a linear code to be a three-weight code. The geometric approach that we will establish also provides a convenient method to construct three-weight codes. More importantly, we will determine the minimal codewords of a three-weight code, making use of the geometric approach.
Article
Tabu search is a deterministic combinatorial optimization technique. In this paper an implementation in an error-correcting code context is presented and then used to investigate the minimum distances of some linear block (BCH) codes. Of the two search strategies (both are implementation aspects of tabu search), the one involving sets of moves and a ‘back-tracking’ facility is found to give better upper bounds for the minimum distances. Computational results obtained using tabu search show that it is a useful and effective optimization technique for providing good minimum distance values of linear block codes. A limited comparison with recent results, obtained using simulated annealing, reveals that tabu search may give lower minimum distances in much shorter execution times.
Article
Shamir's scheme for sharing secrets is closely related to Reed-Solomon coding schemes. Decoding algorithms for Reed-Solomon codes provide extensions and generalizations of Shamir's method.
Article
In principle, every linear code can be used to construct a secret sharing scheme. However, in general, determining the access structure of the scheme is very hard. On the other hand, finding error correcting codes that produce secret sharing schemes with efficient access structures is also difficult. In this paper, we study a set of minimal codewords for certain classes of binary linear codes, and then determine the access structure of secret sharing schemes based on these codes. Furthermore, we prove that the secret sharing schemes obtained are democratic in the sense that every participant is involved in the same number of minimal access sets.
Article
In practical terms all coded electronic signals are prone to corruption during transmission but may be corrected by using error-correcting codes. The minimum distance of a code is important because it is the major parameter affecting the error-correcting performance of the code. In this paper a recent heuristic combinatorial optimisation algorithm, called ant colony optimisation (ACO), is applied to the problem of determining minimum distances of error-correcting codes. The ACO algorithm is motivated by analogy with natural phenomena, in particular the ability of a colony of ants to 'optimise' their collective endeavours. In this paper the biological background for ACO is explained and its computational implementation is presented in an error-correcting code context. The particular implementation of ACO makes use of a tabu search (TS) improvement phase to give a computationally enhanced algorithm (ACOTS). Two classes of codes are then used to show that ACOTS is a useful and viable optimisation technique for investigating minimum distances of error-correcting codes.
Conference Paper
We show a simple and efficient construction of a pseudorandom generator based on the intractability of an NP-complete problem from the area of error-correcting codes. The generator is proved as secure as a hard instance of the syndrome decoding problem. Each application of the scheme generates a linear number of bits in only quadratic computing time. 1 Introduction A pseudo-random generator is an algorithm producing strings of bits that look random. The concept of "randomly looking" has been ...
Book
Contents: Synopsis. 1 Preliminaries (1.1 Some basic definitions; 1.2 Notions of tractability; 1.3 An extended model; 1.4 Counting, generation and self-reducibility; 1.5 An interesting class of relations). 2 Markov chains and rapid mixing (2.1 The Markov chain approach to generation problems; 2.2 Conductance and the rate of convergence; 2.3 A characterisation of rapid mixing). 3 Direct Applications (3.1 Some simple examples; 3.2 Approximating the permanent; 3.3 Monomer-dimer systems; 3.4 Concluding remarks). 4 Indirect Applications (4.1 A robust notion of approximate counting; 4.2 Self-embeddable relations; 4.3 Graphs with specified degrees). Appendix: Recent developments.
Article
In this paper we show how to divide data D into n pieces in such a way that D is easily reconstructable from any k pieces, but even complete knowledge of k - 1 pieces reveals absolutely no information about D. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
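A minimal sketch of the (k, n) splitting described here, over a prime field GF(p): the secret is the constant term of a random polynomial of degree k − 1, shares are its evaluations, and Lagrange interpolation at 0 recovers the secret. The particular prime, the integer encoding of D, and the helper names are assumptions for illustration.

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for this demo (an assumption)

def split(secret, n, k, p=P):
    """Split `secret` into n shares, any k of which reconstruct it:
    evaluate a random degree-(k-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 recovers the constant term (secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```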
Article
A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two-dimensional rigid-sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four-term virial coefficient expansion.
Article
The fact that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete is shown. This strongly suggests, but does not rigorously imply, that no polynomial-time algorithm exists for either of these problems.
Article
The applicability of techniques in coding theory to problems in cryptography is illustrated by examples concerning secret-sharing schemes with tailored access privileges, the design of perfect local randomizers, the construction of t-resilient functions, and the quantization of the nonlinearity of Boolean functions. Some novel coding concepts, in particular the notions of minimal codewords in linear codes and of a partition of the space of n-tuples based on nonlinear systematic codes akin to the coset partition for linear codes, are shown to be necessary to treat the cryptographic problems considered. The concepts of dual codes and dual distance, as well as the relation between codes and orthogonal arrays, are seen to play a central role in these applications of coding theory to cryptography. 1 Introduction Coding theory, which had its inception in the late 1940s, is now generally regarded as a mature science. Cryptography on the other hand, at least in the public sector, is ...
Article
The use of a linear code to "split" secrets into equal-size shares is considered. The determination of which sets of shares can be used to obtain the secret leads to the apparently new notion of minimal codewords in a linear code. It is shown that the minimal codewords in the dual code completely specify the access structure of the secret-sharing scheme, and conversely. 1. Introduction In an (S, T) threshold secret-sharing scheme as introduced by Shamir [1], a q-ary secret is "split" into S q-ary shares in such a manner that any T shares uniquely determine the secret but any T - 1 or fewer shares provide no information about the secret. Shamir constructed such (S, T) threshold schemes (where 1 ≤ T ≤ S < q) by taking the secret to be the constant term in a monic polynomial of degree T over the finite field GF(q) whose T - 1 other coefficients are selected uniformly at random; the S shares are the values of this polynomial at any S specified and distinct nonzero elements of GF(q). McEl...
Article
This chapter focuses on approximation algorithms, which are algorithms of the second kind with a provably good worst-case ratio between the value of the solution found by the algorithm and the true optimum. It devotes results that follow from conjectural forms of the probabilistically checkable proof (PCP) theorem, such as the Unique Games conjecture. The chapter focuses on techniques used to prove inapproximability results, and reviews what is known for various fundamental problems. The chapter discusses integrality gap results for various optimization problems. After approaching the field from the perspectives of techniques and of results for specific problems, it discusses a number of alternative questions that have been pursued. The chapter also discusses the study of complexity classes of combinatorial optimization problems, of relations between average-case complexity and inapproximability, and of the issue of witness length in PCP constructions.