The Theory of Error Correcting Codes
... Boolean functions have applications in propositional logic, electrical engineering, game theory, reliability, combinatorics, and linear programming [1]. They are also very important in complexity theory [2,3], coding theory and cryptography [4], the social sciences [5,6], and medicine and biology [7,8]. Boolean functions are a useful tool for modelling a large number of processes in nature, logic, engineering, and science. ...
... Denote by µ_f the vector composed of the coefficients of the ANF of f(x) given in expression (4), that is, ...
... The following example shows this relation. Example 5: Consider again Example 4. We have that b({s_τ}) = (3, 4, 6, 7, 8, 9, 10, 11, 12). We can identify this set of indices with the set of minterms of a Boolean function f(x) of 4 variables, that is, we have that ...
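To make the minterm-to-ANF relation above concrete, the following is a minimal sketch (not taken from the paper; the function name moebius_transform is chosen here): the binary Möbius transform converts the truth table determined by a minterm set into the ANF coefficient vector, and applying it twice recovers the truth table.

```python
# Hedged sketch: fast binary Moebius (ANF) transform of a truth table.
def moebius_transform(tt):
    """Map a truth table (list of 0/1, length 2**n) to ANF coefficients."""
    t = list(tt)
    step = 1
    while step < len(t):
        for i in range(0, len(t), 2 * step):
            for j in range(i, i + step):
                t[j + step] ^= t[j]      # XOR-fold lower half onto upper half
        step *= 2
    return t

# The minterm set of Example 5, read as the support of a 4-variable function.
support = {3, 4, 6, 7, 8, 9, 10, 11, 12}
truth_table = [1 if i in support else 0 for i in range(16)]
anf = moebius_transform(truth_table)     # coefficient vector of the ANF
print(anf)
assert moebius_transform(anf) == truth_table   # the transform is an involution
```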
Boolean functions and binary sequences are main tools used in cryptography. In this work, we introduce a new bijection between the set of Boolean functions and the set of binary sequences with period a power of two. We establish a connection between them which allows us to study some properties of Boolean functions through binary sequences and vice versa. Then, we define a new representation of sequences, based on Boolean functions and derived from the algebraic normal form, named reverse-ANF. Next, we study the relation between such a representation and other representations of Boolean functions as well as between such a representation and the binary sequences. Finally, we analyse the generalized self-shrinking sequences in terms of Boolean functions and some of their properties using the different representations.
... 31] (see Definition 25 in the Appendix), then we have the restrictions q > g and m ≥ t + N (r − t). Furthermore, if we choose as local code a doubly extended Reed-Solomon code [11,Ch. 11,Sec. ...
... generates a doubly extended Reed-Solomon code [11,Ch. 11,Sec. ...
... q is also MDS. Such a generator matrix A exists for any MDS code (let A be systematic in the first h + δ − 1 coordinates and apply [11,Ch. 11,Th. ...
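As a concrete companion to these excerpts, here is a hedged sketch (with small parameters chosen here, not from the paper) of a doubly extended Reed-Solomon generator matrix in the style of [11, Ch. 11]: rows 1, x, ..., x^{k−1} evaluated at all elements of GF(p), plus the extra column (0, ..., 0, 1)^T, verified to be MDS by brute force.

```python
# Hedged sketch: doubly extended Reed-Solomon generator matrix over GF(7),
# with the MDS property checked by testing that every k x k minor is nonzero.
from itertools import combinations

p, k = 7, 3
cols = [[pow(x, i, p) for i in range(k)] for x in range(p)] + [[0] * (k - 1) + [1]]
n = len(cols)                              # n = p + 1 columns

def det_mod_p(m, p):
    """Determinant mod p by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0] % p
    return sum((-1) ** j * m[0][j]
               * det_mod_p([r[:j] + r[j + 1:] for r in m[1:]], p)
               for j in range(len(m))) % p

assert all(det_mod_p([[cols[c][r] for c in sel] for r in range(k)], p) != 0
           for sel in combinations(range(n), k))
print(f"[{n}, {k}] doubly extended RS code over GF({p}) is MDS")
```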
In this work, we introduce maximally recoverable codes with locality and availability. We consider locally repairable codes (LRCs) where certain subsets of t symbols each belong to N local repair sets, which are pairwise disjoint after removing the t symbols, and which are of size and can correct erasures locally. Classical LRCs with N disjoint repair sets and LRCs with N-availability are recovered when setting and , respectively. Allowing enables our codes to reduce the storage overhead for the same locality and availability. In this setting, we define maximally recoverable LRCs (MR-LRCs) as those that can correct any globally correctable erasure pattern given the locality and availability constraints. We provide three explicit constructions, based on MSRD codes, each attaining the smallest finite-field sizes for some parameter regime. Finally, we extend the known lower bound on finite-field sizes from classical MR-LRCs to our setting.
... where μ is the Möbius function [8]. ...
... Remark 2 [8,9]: The number of irreducible polynomials of degree t over F_q is given by N_q(t) = (1/t) ∑_{d | t} μ(d) q^{t/d}. ...
... By using [8, Ch. 12, §3, (10)] we can rewrite this equation in the following form: ...
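For reference, here is a small sketch of the counting formula quoted above (pure Python, with names chosen here); the sanity checks use the well-known irreducible polynomials over GF(2).

```python
# Hedged sketch: N_q(t) = (1/t) * sum_{d | t} mu(d) * q^(t/d), the count of
# monic irreducible polynomials of degree t over F_q.
def mobius(n):
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0                 # squared prime factor => mu = 0
            result = -result
        d += 1
    return -result if n > 1 else result

def num_irreducible(q, t):
    return sum(mobius(d) * q ** (t // d) for d in range(1, t + 1) if t % d == 0) // t

print(num_irreducible(2, 2))   # 1: only x^2 + x + 1
print(num_irreducible(2, 3))   # 2: x^3 + x + 1 and x^3 + x^2 + 1
print(num_irreducible(3, 2))   # 3
```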
We consider a subclass of p-ary self-reversible generalized (L, G) codes with a locator set , where p is a prime number. The numerator of a rational function is the formal derivative of the denominator . The Goppa polynomial of degree 2t, with t odd, is either an irreducible self-reversible polynomial of degree 2t, or a non-irreducible self-reversible polynomial of degree 2t of the form , where is any irreducible non-self-reversible polynomial of degree t. Estimates for the minimum distance and redundancy are obtained for codes from this subclass. It is shown that among these codes there are codes lying on the Gilbert–Varshamov bound. As a special case, binary codes from this subclass, which also contain codes lying on the Gilbert–Varshamov bound, are considered.
... In general, there are upper bounds on the cardinalities or minimum distances of codes. Optimal codes attaining these bounds are particularly interesting, see [29]. The Singleton bound for general codes asserts M ≤ q^{n−d+1}, and codes attaining this bound are called (nonlinear) maximum distance separable (MDS) codes. Reed-Solomon codes are well-known linear MDS codes, see [29]. For a linear code C ⊂ F_q^n, we denote by A_i(C) the number of codewords of weight i, 0 ≤ i ≤ n. ...
... 3), the corresponding Solomon-Stiffler code is a binary linear [23, 5, 11]_2 code with maximum weight 16. Then a minimal [29, 5, 11]_2 code with maximum weight 22 is constructed from this [23, 5, 11]_2 code via Theorem 2.1. This minimal code violates the Ashikhmin-Barg condition. ...
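The two codes in this excerpt make a compact worked example of the Ashikhmin-Barg sufficient condition for minimality, w_min/w_max > (q−1)/q; the check below is a hedged sketch written for this survey, not code from the paper.

```python
# Hedged sketch: the Ashikhmin-Barg sufficient condition for minimality.
from fractions import Fraction

def satisfies_ab(w_min, w_max, q):
    """True iff w_min / w_max > (q - 1) / q (exact rational arithmetic)."""
    return Fraction(w_min, w_max) > Fraction(q - 1, q)

# [23, 5, 11]_2 Solomon-Stiffler code, maximum weight 16: 11/16 > 1/2.
print(satisfies_ab(11, 16, 2))   # True  -> condition satisfied
# Derived minimal [29, 5, 11]_2 code, maximum weight 22: 11/22 = 1/2.
print(satisfies_ab(11, 22, 2))   # False -> condition violated
```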
In recent years, there have been many constructions of minimal linear codes violating the Ashikhmin-Barg condition from Boolean functions, linear codes with few nonzero weights, or partial difference sets. In this paper, we first give a general method to transform a minimal code satisfying the Ashikhmin-Barg condition into a minimal code violating it. Then we give a construction of a minimal code satisfying the Ashikhmin-Barg condition from an arbitrary projective linear code. Hence an arbitrary projective linear code can be transformed into a minimal code violating the Ashikhmin-Barg condition. We then give infinitely many families of minimal codes violating the Ashikhmin-Barg condition. The weight distributions of the minimal codes violating the Ashikhmin-Barg condition constructed in this paper are determined. Many minimal linear codes violating the Ashikhmin-Barg condition, with minimum weights close to the optimal or best known minimum weights of linear codes, are constructed in this paper. Moreover, many infinite families of self-orthogonal binary minimal codes violating the Ashikhmin-Barg condition are also given.
... The weight distribution of a linear code C of length n is the sequence (A_0, A_1, …, A_n), where A_i is the number of codewords of weight i in C (see [25]). Let C be an [n, k] linear code over a finite field F_q with weight distribution ...
... Longer codes tend to have fewer codewords at the minimum weight, but more codewords at higher weights. The [50, 25, 13] code shows a much more spread-out distribution compared to the shorter codes, with significant numbers of codewords at higher weights. ...
... However, our study includes a broader range of code parameters, providing a more comprehensive view of ternary cyclic codes. The observed symmetry in weight distributions confirms the theoretical expectations for linear codes, as described by MacWilliams and Sloane [25]. This symmetry can be exploited in applications such as coded modulation and cryptography. ...
Linear cyclic ternary codes defined over the Galois field GF(3) exhibit several advantages over their binary counterparts. For instance, they provide an extra option for each pulse, resulting in a larger set of available codes at any given length. This paper presents a comprehensive study of classes of linear cyclic ternary codes of length 25 ≤ n ≤ 50. While binary codes have been extensively studied, the properties and applications of longer ternary codes remain less explored. This study addresses this gap by providing an in-depth characterization of these codes for the stated lengths. Using computational methods implemented in the Magma software, a diverse set of linear cyclic ternary codes over GF(3) was generated and analyzed. The paper provides a multifaceted characterization framework that integrates algebraic, combinatorial, and geometric perspectives, offering a holistic understanding of these codes. This study contributes to the theoretical advancement of non-binary codes and their practical applications in error correction, cryptography, and communication systems.
... 2. The 2-designs we obtained from the [32, 16, 9] code are particularly noteworthy, as they have parameters not previously reported in the literature for designs derived from ternary codes of this length. ...
... In this section, we provide a comprehensive characterization of the generated linear cyclic ternary codes based on the collective results from our analysis of their properties, associated designs, and constructed lattices. Table 3 summarizes the key characteristics of the studied codes: a [32, 16, 9] code with a symmetric weight distribution peaking at w = 17, an associated 2-(32, 9, 16) design, and a lattice with minimum norm 9 and kissing number 256; a [33, 16, 9] code with a symmetric weight distribution peaking at w = 17, an associated 1-(33, 9, 264) design, and a lattice with minimum norm 9 and kissing number 264; and a code of dimension 17 and minimum distance 9, likewise with a symmetric weight distribution. The weight distribution of C is symmetric around ⌊n/2⌋. ...
In this study, we investigate the relationships between code parameters and lattice properties, providing new insights into the structure of ternary codes from a geometric perspective. Our findings extend the existing knowledge of ternary cyclic codes, particularly for lengths exceeding 25. We construct several new codes with favorable parameters, obtain previously unreported combinatorial designs, and characterize lattices with unique properties. The results demonstrate that ternary cyclic codes exhibit high structural regularity and often produce interesting designs and lattices with properties distinct from their binary counterparts. The research reveals strong interconnections between coding theory, combinatorial design theory, and lattice theory in the context of ternary codes. We provide a multifaceted characterization framework that integrates algebraic, combinatorial, and geometric perspectives, offering a holistic understanding of these codes. This study contributes to the theoretical advancement of non-binary codes and opens new avenues for their practical applications in error correction, cryptography, and communication systems.
... The use of random IQP circuits for the verification protocol is however problematic due to the anticoncentration property [26,28]. To address this issue, the Shepherd-Bremner scheme employs an obfuscated quadratic-residue code (QRC) to construct the pair (U IQP , s) [29]. While the Shepherd-Bremner scheme was experimentally attractive, it suffered from a drawback as its cryptographic assumptions were nonstandard and lacked sufficient study compared to TCF-based protocols. ...
... The first explicit construction recipe of (H, s) for the case θ = π/8 was given by Shepherd and Bremner [24]. In their construction, H_s is built from a specific error-correcting code, the quadratic-residue code (QRC) [29], which guarantees that the correlation function is always 1/√2, a value sufficiently far from zero, as desired. Formally, let H^{QRC}_{n,m,q} = {(H, s)} be a family of pairs of an IQP matrix H ∈ F_2^{m×n} and a secret s such that H_s generates a QRC of length q (up to row permutations) and H has full column rank. ...
... Below, we propose a parameter regime that can invalidate the attack in Ref. [30]. Given the length q of the QRC, we have r = (q + 1)/2 and m_1 = q [29]. So, the first formula in Eq. (21) gives n ≥ (q − 1)/2 + 2λ, and the second formula gives the range of the number of redundant rows: n − (q + 1)/2 ≤ m_2 ≤ 2n − 2λ − q. ...
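The bookkeeping in this excerpt is easy to script; the following sketch (function name and sample numbers chosen here, not from the paper) reproduces the quoted constraints for a given QRC length q and security parameter λ.

```python
# Hedged sketch of the quoted parameter constraints: r = (q + 1)/2, m1 = q,
# n >= (q - 1)/2 + 2*lam, and n - (q + 1)/2 <= m2 <= 2n - 2*lam - q.
def qrc_m2_range(q, lam, n):
    # r = (q + 1) // 2 and m1 = q are fixed by the QRC; n is the free choice.
    assert n >= (q - 1) // 2 + 2 * lam, "n below the quoted lower bound"
    return n - (q + 1) // 2, 2 * n - 2 * lam - q

# Example with q = 127 and lam = 20: the m2 interval widens as n grows, and
# collapses to a single point at the minimal n = (q - 1)/2 + 2*lam = 103.
print(qrc_m2_range(127, 20, 103))   # (39, 39)
print(qrc_m2_range(127, 20, 110))   # (46, 53)
```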
Sampling problems demonstrating beyond-classical computing power with noisy intermediate-scale quantum devices have been experimentally realized. In those realizations, however, our trust that the quantum devices faithfully solve the claimed sampling problems is usually limited to simulations of smaller-scale instances and is, therefore, indirect. The problem of verifiable quantum advantage aims to resolve this critical issue and provides us with greater confidence in a claimed advantage. Instantaneous quantum polynomial-time (IQP) sampling has been proposed to achieve beyond-classical capabilities with a verifiable scheme based on quadratic-residue codes (QRC). Unfortunately, this verification scheme was recently broken by an attack proposed by Kahanamoku-Meyer. In this work, we revive IQP-based verifiable quantum advantage by making two major contributions. Firstly, we introduce a family of IQP sampling protocols called the stabilizer scheme, which builds on results linking IQP circuits, the stabilizer formalism, coding theory, and an efficient characterization of IQP-circuit correlation functions. This construction extends the scope of existing IQP-based schemes while maintaining their simplicity and verifiability. Secondly, we introduce the hidden structured code (HSC) problem as a well-defined mathematical challenge that underlies the stabilizer scheme. To assess classical security, we explore a class of attacks based on secret extraction, including Kahanamoku-Meyer's attack as a special case. We provide evidence of the security of the stabilizer scheme, assuming the hardness of the HSC problem. We also point out that the vulnerability observed in the original QRC scheme is primarily attributed to inappropriate parameter choices, which can be naturally rectified with proper parameter settings.
... In its most foundational form, the decoding problem is one of solving a linear system of parity-check equations of the form Hx = σ [17]. Decoders for qLDPC codes frequently rely on Gaussian elimination to directly solve this equation. ...
... Classically, an [n, k, d] code encodes k bits' worth of logical information into n physical bits, where n > k in order to introduce redundancy that protects the bulk from errors (i.e. bit flips), and the code distance d is defined as the minimum Hamming distance between two codewords, or equivalently the minimum number of physical errors needed to form a logical error [17]. For example, the [3, 1, 3] repetition code encodes a single logical bit as 0 ↦ 000 and 1 ↦ 111. ...
... However, it does not suffice to find any arbitrary solution: we wish to find the most likely error consistent with the syndrome. Decoding algorithms therefore attempt to find or approximate optimal solutions and may or may not utilise Gaussian elimination to this end [8], [12], [17]–[19]. ...
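As background for the excerpts above, here is a minimal offline sketch of the baseline step they describe, solving H x = σ over GF(2) by plain Gaussian elimination; it is written for this survey and is not the paper's online LUP variant.

```python
# Hedged sketch: one-shot Gaussian elimination over GF(2) for H x = sigma.
def solve_gf2(H, sigma):
    """Return one solution x of H x = sigma over GF(2), or None."""
    m, n = len(H), len(H[0])
    A = [row[:] + [s] for row, s in zip(H, sigma)]   # augmented matrix
    pivots, r = [], 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if A[i][c]), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(m):
            if i != r and A[i][c]:
                A[i] = [a ^ b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    if any(row[-1] for row in A[r:]):                # inconsistent system
        return None
    x = [0] * n
    for i, c in enumerate(pivots):                   # free variables stay 0
        x[c] = A[i][-1]
    return x

# Syndrome of a single bit error under the [7,4] Hamming parity-check matrix.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(solve_gf2(H, [1, 1, 0]))   # one valid (not necessarily minimum-weight) error
```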
Decoders for quantum LDPC codes generally rely on solving a parity-check equation with Gaussian elimination, with the generalised union-find decoder performing this repeatedly on growing clusters. We present an online variant of the Gaussian elimination algorithm which maintains an LUP decomposition in order to process only new rows and columns as they are added to a system of equations. This is equivalent to performing Gaussian elimination once on the final system of equations, in contrast to the multiple rounds of Gaussian elimination employed by the generalised union-find decoder. It thus significantly reduces the number of operations performed by the decoder. We consider the generalised union-find decoder as an example use case and present a complexity analysis demonstrating that both variants take time cubic in the number of qubits in the general case, but that the number of operations performed by the online variant is lower by an amount which itself scales cubically. This analysis is also extended to the regime of 'well-behaved' codes in which the number of growth iterations required is bounded logarithmically in error weight. Finally, we show empirically that our online variant outperforms the original offline decoder in average-case time complexity on codes with sparser parity-check matrices or greater covering radius.
... Proof of Corollary 3.5. By standard arguments for the sphere-packing bound and GV bound [26], the size of any code does not exceed q^n / |Ball_{t,b}(x)|, and there exists a code of size at least q^n / |Ball_{2t,b}(x)|. We then simply apply the bound of Theorem 3.1. ...
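For intuition only, the sketch below instantiates the two quoted inequalities with ordinary Hamming balls standing in for the burst balls Ball_{t,b}(x) of the paper (the burst-ball sizes themselves are what Theorem 3.1 bounds).

```python
# Hedged illustration: sphere packing gives |C| <= q^n / V(n, t), while
# Gilbert-Varshamov guarantees some code with |C| >= q^n / V(n, 2t); ordinary
# Hamming balls V(n, r) are used here in place of the paper's burst balls.
from math import comb

def hamming_ball(q, n, r):
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

q, n, t = 2, 15, 1
print(q ** n // hamming_ball(q, n, t))            # packing bound: 2048
print(-(-(q ** n) // hamming_ball(q, n, 2 * t)))  # GV guarantee (ceiling): 271
```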
... by taking advantage of the classic primitive narrow-sense [n, k, d]_q BCH codes [26], we have ...
We study optimal reconstruction codes over the multiple-burst substitution channel. Our main contribution is establishing a trade-off between the error-correction capability of the code, the number of reads used in the reconstruction process, and the decoding list size. We show that over a channel that introduces at most t bursts, we can use a length-n code capable of correcting errors, with reads, and decoding with a list of size , where . In the process of proving this, we establish sharp asymptotic bounds on the size of error balls in the burst metric. More precisely, we prove a Johnson-type lower bound via Kahn's Theorem on large matchings in hypergraphs, and an upper bound via a novel variant of Kleitman's Theorem under the burst metric, which might be of independent interest. Beyond this main trade-off, we derive several related results using a variety of combinatorial techniques. In particular, along with tools from recent advances in discrete geometry, we improve the classical Gilbert-Varshamov bound in the asymptotic regime for multiple bursts, and determine the minimum redundancy required for reconstruction codes with polynomially many reads. We also propose an efficient list-reconstruction algorithm that achieves the above guarantees, based on a majority-with-threshold decoding scheme.
... The study of finite metric spaces equipped with the Hamming distance has been central to coding theory and combinatorics since the seminal work of Hamming [1]. The space E_q^n, consisting of all n-dimensional vectors over an alphabet of size q, forms the natural ambient space for error-correcting codes and has been extensively studied from various perspectives [2,3]. ...
... The notion of rank in coding theory typically refers to the rank of matrices over finite fields, as studied extensively in rank-metric codes [5,4]. The Hamming distance has been fundamental since [1], with classical bounds including the Plotkin bound [6] and sphere-packing bounds [2]. ...
We introduce a novel concept of rank for subsets of finite metric spaces E_q^n (the set of all n-dimensional vectors over an alphabet of size q) equipped with the Hamming distance, where the rank R(A) of a subset A is defined as the number of non-constant columns in the matrix formed by the vectors of A. This purely combinatorial definition provides a new perspective on the structure of finite metric spaces, distinct from traditional linear-algebraic notions of rank. We establish tight bounds for R(A) in terms of D_A, the sum of Hamming distances between all pairs of elements in A. Specifically, we prove that 2qD_A / ((q−1)|A|²) ⩽ R(A) ⩽ D_A / (|A|−1) when |A|/q ⩾ 1, with a modified lower bound for the case |A|/q < 1. These bounds show that the rank is constrained by the metric properties of the subset. Furthermore, we introduce the concept of metrically dense subsets, which are subsets that minimize rank among all isometric images. This notion captures an extremal property of subsets that represent their distance structure in the most compact way possible. We prove that subsets with uniform column distribution are metrically dense, and as a special case, establish that when q is a prime power, every linear subspace of E_q^n is metrically dense. This reveals a fundamental connection between the algebraic and metric structures of these spaces.
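The definitions in this abstract are simple enough to check numerically; the sketch below (example set and function names chosen here) computes R(A) and D_A for a small subset of E_3^4 and verifies the stated bounds in the |A|/q ⩾ 1 case.

```python
# Hedged sketch of the paper's definitions: R(A) counts non-constant columns of
# the matrix whose rows are the vectors of A, and D_A sums Hamming distances
# over all unordered pairs; the claimed bounds are then checked numerically.
from itertools import combinations

def rank_R(A):
    return sum(1 for col in zip(*A) if len(set(col)) > 1)

def D(A):
    return sum(sum(a != b for a, b in zip(u, v)) for u, v in combinations(A, 2))

A = [(0, 0, 1, 2), (0, 1, 1, 0), (0, 2, 1, 1)]   # subset of E_3^4 (q = 3)
q, m = 3, len(A)
R, DA = rank_R(A), D(A)
print(R, DA)                                      # 2, 6
assert 2 * q * DA / ((q - 1) * m ** 2) <= R <= DA / (m - 1)   # |A|/q >= 1 case
```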
... These codes are optimal for error detection and correction, as they attain the largest possible minimum distance for a given length and dimension. MDS codes also have applications in combinatorial designs and are closely related to finite geometry [19,11]. A k-dimensional linear MDS code over F_q is equivalent to an arc in the projective space PG(k − 1, q). ...
... GRS codes are the only known class of MDS codes over F_q that exist for all lengths n ≤ q + 1 and dimensions k ≤ n. The MDS conjecture was posed by Segre [24] in the context of finite projective geometry (for details, see [5,19]). Generalized Reed-Solomon (GRS) codes have found several practical applications due to their algebraic structures and properties. ...
We investigate two classes of extended codes and provide necessary and sufficient conditions for these codes to be non-GRS MDS codes. We also determine the parity check matrices for these codes. Using the connection of MDS codes with arcs in finite projective spaces, we give a new characterization of o-monomials.
... For further discussion, we begin by reviewing fundamental concepts of an error-correcting code. Excellent references on this topic are MacWilliams and Sloane (1977), Stinson (2006) and Hedayat et al. (2012). Error-correcting codes are used to detect and correct errors that occur during data transmission over noisy communication channels. ...
... It is a (16, 256, 6)_2 code with the property that it has dual distance 6, and it offers an advantage over linear codes in that any binary linear code of length 16 with minimum distance 6 can contain at most 128 codewords. For the development of families of nonlinear codes that generalize the Nordstrom-Robinson code we refer to MacWilliams and Sloane (1977). ...
Orthogonal arrays are arguably one of the most fascinating and important statistical tools for efficient data collection. They have a simple, natural definition, desirable properties when used as fractional factorials, and a rich and beautiful mathematical theory. Their connections with combinatorics, finite fields, geometry, and error-correcting codes are profound. Orthogonal arrays have been widely used in agriculture, engineering, manufacturing, and high-technology industries for quality and productivity improvement experiments. In recent years, they have drawn rapidly growing interest from various fields such as computer experiments, integration, visualization, optimization, big data, machine learning/artificial intelligence through successful applications in those fields. We review the fundamental concepts and statistical properties and report recent developments. Discussions of recent applications and connections with various fields are presented.
... First, we review fundamental concepts and key results related to orthogonal arrays and linear codes. See [24,Chapter 7] and [18,Chapter 5] for details. ...
... for which see [24, Theorem 11, Chapter 7]. Let C be the BCH code in GF(q)^k, defined as above. ...
A weighted t-design in is a finite weighted set that exactly integrates all polynomials of degree at most t with respect to a given probability measure. A fundamental problem is to construct weighted t-designs with as few points as possible. Victoir (2004) proposed a method to reduce the size of weighted t-designs while preserving the t-design property by using combinatorial objects such as combinatorial designs or orthogonal arrays with two levels. In this paper, we give an algebro-combinatorial generalization of both Victoir's method and its variant by the present authors (2014) in the framework of Euclidean polynomial spaces, enabling us to reduce the size of weighted designs obtained from the classical product rule. Our generalization allows the use of orthogonal arrays with arbitrary levels, whereas Victoir only treated the case of two levels. As an application, we present a construction of equi-weighted 5-designs with points for product measures such as Gaussian measure on or equilibrium measure on , where d is any integer at least 5. The construction is explicit and does not rely on numerical approximations. Moreover, we establish an existence theorem of Gaussian t-designs with N points for any , where for fixed sufficiently large prime power q. As a corollary of this result, we give an improvement of a famous theorem by Milman (1988) on isometric embeddings of the classical finite-dimensional Banach spaces.
... Block cipher LED [17], and hash function PHOTON [16] used this recursive based construction. In contrast, non-recursive methods encompass various techniques, such as the use of Cauchy matrices [18,24], Vandermonde matrices [22,31], Hadamard matrices [33], and their generalizations [28]. Furthermore, inspired by the use of circulant MDS matrix in the diffusion layer of AES, various authors [5,19,20,23] have explored the construction of circulant MDS matrices. ...
... A code that meets this bound is known as a maximum distance separable (MDS) code. An alternative definition of an MDS code, as given in MacWilliams and Sloane [24], is the following. This definition leads to the following characterization of an MDS matrix. ...
In 1998, Daemen et al. introduced a circulant Maximum Distance Separable (MDS) matrix in the diffusion layer of the Rijndael block cipher, drawing significant attention to circulant MDS matrices. This block cipher is now universally acclaimed as the AES block cipher. In 2016, Liu and Sim introduced cyclic matrices by modifying the permutation of circulant matrices and established the existence of the MDS property for orthogonal left-circulant matrices, a notable subclass of cyclic matrices. While circulant matrices have been well studied in the literature, the properties of cyclic matrices have not. Back in 1961, Friedman introduced g-circulant matrices, which form a subclass of cyclic matrices. In this article, we first establish a permutation equivalence between a cyclic matrix and a circulant matrix. We explore properties of cyclic matrices similar to those of g-circulant matrices. Additionally, we determine the determinant of g-circulant matrices of order and prove that they cannot be simultaneously orthogonal and MDS over a finite field of characteristic 2. Furthermore, we prove that this result holds for any cyclic matrix.
... , L−1. RSSSs can also be constructed based on explicit MDS codes [43], [44]. RSSSs based on nested codes were proposed in [45], which focuses on secure multiplication computation. ...
... Method (i). We synthesize 2k times to generate all probability vectors in Y and Y^− according to Remark 3. In decoding the first entry of X, we simply mix the shares from Y and Y^− according to (43), and then sequence once to retrieve A ⊗ Y. Finally, the first entry of X is obtained by (39). ...
Emerging DNA storage technologies use composite DNA letters, where information is represented by a probability vector, leading to higher information density and lower synthesis costs. However, this approach faces the problem of information leakage when the DNA vessels are shared among untrusted vendors. This paper introduces an asymptotic ramp secret sharing scheme (ARSSS) for secret information storage using composite DNA letters. This innovative scheme, inspired by secret sharing methods over finite fields and enhanced with a modified matrix-vector multiplication operation for probability vectors, achieves asymptotic information-theoretic data security for a large alphabet size. Moreover, this scheme reduces the number of reading operations for DNA samples compared to traditional schemes, and therefore lowers the complexity and cost of DNA-based secret sharing. We further explore the construction of the scheme, starting with a proof of the existence of a suitable generator, followed by practical examples. Finally, we demonstrate efficient constructions to support large information sizes, which utilize multiple vessels for each secret share rather than a single vessel.
... In addition to summarizing classical results such as those on linear codes, a survey of recent techniques and definitions related to the joint contributions of this document is included, which will help the reader reach a good understanding. For a thorough introduction to coding theory, the reader is referred to [6], [8], [45], [50] and [58]. ...
... The latter are defined in the standard way in [43] and [50]. ...
The work in this thesis has culminated in four papers, three of which are already published.
The first is entitled, "\textit{On some classes of linear codes over and their covering radii}", Journal of Applied Mathematics and Computing, 2016.
In this paper, we have constructed simplex codes and MacDonald codes of type and over .
We have also examined the covering radii of these codes. In addition, we have examined the binary images of simplex codes of type , which attain the Gilbert bound.
The second is entitled "\textit{Simplex and MacDonald codes over }", Journal of Applied Mathematics and Computing, 2016. In this paper, we have introduced a new homogeneous weight and its associated Gray map for the ring . We have also constructed simplex and MacDonald codes of type and over this ring, and have studied many of their characteristics, such as their binary images.
The third is entitled "Codes over and their covering radii", Journal of Algebra, Number Theory: Advances and Applications, Volume 16, Number 1, pp. 25-39, 2016. In this paper, we have constructed first-order Reed-Muller codes over , starting from simplex codes of type over this ring, and calculated the exact value of the covering radius of these codes.
The fourth is entitled "\textit{Secret Sharing Schemes Based on Gray Images of Linear Codes over }", International Conference on Coding and Cryptography ICCC, USTHB, Algiers, Algeria, 2015 (presented by K. Chatouh). In this paper, secret sharing schemes are obtained from a class of linear codes that are the Gray images of simplex and MacDonald codes of type and over , with and .
The motivation for studying such codes comes from the fact that they have few weights. They also find other applications, such as in PSK modulation and for certain cryptographic purposes.
... Example 2. Let m = 6 and n = 2^m − 1; then t = 3. The binary cyclic code C_1 with defining set Z_1 has parameters [63, 33, 7] and its dual C_1^⊥ has parameters [63, 30, 12]. Similarly, C_2 with defining set Z_2 has parameters [63, 31, 9] and its dual C_2^⊥ has parameters [63, 32, 8]. ...
... ([30]) Let C be a binary cyclic code of length n with defining set Z. If there are integers δ and h with 2 ≤ δ ≤ n such that {h + i (mod n) : 0 ≤ i ≤ δ − 2} ⊂ Z, then the minimum distance d(C) of the code C is at least δ. ...
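The BCH bound quoted above can be sanity-checked on the smallest interesting cyclic code; the sketch below (written for this survey) uses the [7, 4] binary cyclic Hamming code with defining set Z = {1, 2, 4}, whose consecutive run {1, 2} gives δ = 3, and confirms by brute force that d(C) = 3 ≥ δ.

```python
# Hedged sketch: BCH bound check on the [7, 4, 3] binary cyclic Hamming code.
from itertools import product

n, g = 7, [1, 1, 0, 1]          # g(x) = 1 + x + x^3, generator polynomial
k = n - (len(g) - 1)            # k = 4

def encode(msg):
    """Codeword from message via polynomial multiplication msg(x)*g(x) mod 2."""
    c = [0] * n
    for i, mi in enumerate(msg):
        for j, gj in enumerate(g):
            c[i + j] ^= mi & gj
    return c

weights = {sum(encode(m)) for m in product([0, 1], repeat=k) if any(m)}
print(min(weights))             # 3: meets the BCH bound delta = 3 with equality
```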
Binary cyclic codes are worth studying due to their applications and theoretical importance. It is an important problem to construct infinite families of cyclic codes with large minimum distance d and dual distance . In recent years, much research has been devoted to improving the lower bound on d, and some constructions have exceeded the square-root bound. The constructions presented recently seem to indicate that as the minimum distance increases, the minimum distance of the dual code decreases. In this paper, we focus on new constructions of binary cyclic codes with length , dimension near n/2, and both relatively large minimum distance and dual distance. When m is even, we construct a family of binary cyclic codes with parameters , where and . Both the minimum distance and the dual distance are significantly better than in previous results. When m is the product of two distinct primes, we construct some cyclic codes with dimension k = (n+1)/2 and where the lower bound on the minimum distance is much larger than the square-root bound. When m is odd, we present two families of binary cyclic codes with and , respectively, which shows that can asymptotically reach 2n. To the best of our knowledge, except for the punctured binary Reed-Muller codes, there is no other construction of binary cyclic codes that reaches this bound.
... 2. The odd graphs O n . For more information on concepts from graph theory we refer the reader to [15], and for concepts from coding theory we refer the reader to [27] and [34]. ...
... The minimum Hamming distance [6] between codewords is the basis of error-detection theory. For a code C with minimum distance d, up to ⌊(d−1)/2⌋ errors can be corrected and up to d − 1 errors can be detected. ...
In this paper, we explore applications of combinatorics on words across various domains, including data compression, error detection, cryptographic protocols, and pseudorandom number generation. We examine the theoretical foundations enabling these applications, emphasizing key mathematical relationships and algorithms. In data compression, we discuss the Lempel-Ziv family of algorithms and Lyndon factorization, with the number of Lyndon words of length n over an alphabet of size k given by (1/n) ∑_{d | n} μ(d) k^{n/d}. We address cryptographic protocols and pseudorandom number generation, highlighting the role of pseudorandomness theory and complexity measures. We also explore de Bruijn sequences, topological entropy, and synchronizing words in their practical contexts, demonstrating their contributions to optimizing information storage, ensuring data integrity, and enhancing cybersecurity.
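The Lyndon-word count in this abstract follows the same Möbius pattern as the irreducible-polynomial count earlier in this section; the sketch below (written for this survey) cross-checks the formula against a brute-force enumeration for n = 6, k = 2.

```python
# Hedged sketch: L_k(n) = (1/n) * sum_{d | n} mu(d) * k^(n/d), cross-checked
# against a direct enumeration of binary Lyndon words of length 6.
from itertools import product

def mobius(n):                   # same Moebius helper as in the earlier sketch
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def lyndon_count(n, k):
    return sum(mobius(d) * k ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def is_lyndon(w):
    """A Lyndon word is strictly smaller than all of its proper rotations."""
    return all(w < w[i:] + w[:i] for i in range(1, len(w)))

n, k = 6, 2
assert lyndon_count(n, k) == sum(is_lyndon("".join(w))
                                 for w in product("01", repeat=n))
print(lyndon_count(n, k))        # 9
```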
... Let a_0, a_1, …, a_{t/2−1}, a′_0, a′_1, …, a′_{t/2−1} ∈ C_i^{(N, q²)} such that a_m ∈ C_{2(j+Nm)}^{(q−1, q²)} and a′_m ∈ C_{2(j+e+Nm)+1}^{(q−1, q²)}, for m = 0, 1, …, t/2 − 1. Thus, in a quite similar way as before, and by using now (10), (11), and (14), we have ...
Cyclic codes are an important family of codes for data storage systems, cryptography, consumer electronics, and network coding for error control in digital communications. This kind of linear code is also important due to its efficient encoding and decoding algorithms. Because of this, cyclic codes have been studied for many years; however, their complete weight distributions are known only in a few cases. The complete weight distribution has a wide range of applications in many research fields, as the information it contains is of vital use in practical applications. Unfortunately, obtaining these distributions is in general a very hard problem that normally involves the evaluation of sophisticated exponential sums, which leaves this problem open for most cyclic codes. In this paper we determine, for any finite field F_q, the explicit factorization of any polynomial of the form , where c ∈ F_q^*. Then we use this result to obtain, without the need to evaluate any kind of exponential sum, the complete weight distributions of a family of irreducible cyclic codes of dimension two over any finite field. As an application of our findings, we employ the complete weight distributions of some of the irreducible cyclic codes presented here to construct systematic authentication codes, showing that they are optimal or almost optimal.
... In particular, a linear code C is self-dual if C = C⊥. Self-dual linear codes have various connections with combinatorics and lattice theory [3], [13]. In practice, self-dual linear codes also have important applications in cryptography [14], [15]. ...
It is well known that MDS, AMDS, and self-dual codes have good algebraic properties and are applied in communication systems, data storage, quantum codes, and so on. In this paper, we focus on a class of generalized Roth-Lempel linear codes and give an equivalent condition for them or their duals to be non-RS MDS, AMDS, or non-RS self-dual, together with some corresponding examples.
... Studies on error-detecting and error-correcting codes began with Richard Hamming's pioneering 1950 publication, "Error detecting and error correcting codes" [209–211], and have continued to this day. These concepts are of fundamental importance not only in classical communication but also in fields like artificial intelligence (AI). ...
Accuracy, Noise, and Scalability in Quantum Computation: Strategies for the NISQ Era and Beyond
Mehmet Keçeci
ORCID: https://orcid.org/0000-0001-9937-9839, İstanbul, Türkiye
Received: 26.05.2025
Abstract:
Quantum computers promise to revolutionize science and technology by offering the potential to solve complex problems intractable with classical approaches. However, realizing this potential hinges on effectively managing the noise and errors inherent in quantum systems, which threaten computational accuracy. This work has explored a broad spectrum, from the fundamentals of quantum computation to strategies for enhancing the performance of devices in the Noisy Intermediate-Scale Quantum (NISQ) era, with a particular focus on the critical role of quantum error correction (QEC) codes and the decoder algorithms developed for them. While various methods exist for characterizing and manipulating quantum states, the scalability of these methods becomes a significant issue as the number of qubits increases. The measurement process itself also requires careful planning as it perturbs the quantum state. QEC codes, especially topological codes like surface codes, developed to overcome these challenges, form the foundation of fault-tolerant quantum computation. The success of a QEC code largely depends on the performance of its decoder algorithm, which analyses error syndromes to detect and correct the most probable errors. Alongside classical approaches like Minimum-Weight Perfect Matching (MWPM) and Union-Find, newer and potentially more powerful methods such as Maximum-Likelihood Decoders (MLD) and Neural Network-based Decoders (NNbD) are active areas of research. A prominent aspect of this study is the demonstration that, even with limited classical computing resources, the theoretical scalability of quantum error correction mechanisms can be pushed to remarkable limits using sophisticated simulation techniques and algorithmic ingenuity. Notably, striking results such as the simulation and verification of surface code error correction algorithms for systems of 25 million theoretical qubits have been achieved on a personal computer. Furthermore, the graphical visualization of error correction solutions for systems exceeding 100,000 theoretical qubits underscores the analysability of such complex systems. These findings indicate that error correction principles are theoretically applicable to very large systems and that classical simulations continue to be a valuable tool in this exploratory journey. In the future, key objectives will include the development of more efficient and scalable decoders, the discovery of new QEC codes, the creation of realistic noise models, advancements in hardware-software co-design, and the execution of complex algorithms on logical qubits. Quantum error correction will continue to play a central role on the path to fault-tolerant quantum computation, and theoretical and simulational work in this area will offer significant contributions to the realization of practical quantum computers. Large-scale simulation achievements driven by the creativity of individual researchers, as highlighted here, bolster hopes for the future of the field.
Keywords: Quantum Computing, Decoder, Simulation, Scalability, Qubit, Quantum Error Correction, QEC, Stabilizer Codes, Topological Codes, Surface Code, Fault-Tolerant Quantum Computation, Quantum Noise.
Note: Citations and numbering are in continuation of the previous article [242–244].
... If S = 0, no error exists. If S ≠ 0, the binary number S gives the position of the erroneous bit, which can be flipped to correct the error [8]. ...
This research paper explores the application of the Hamming Code algorithm as an efficient error detection and correction technique in computer network communication systems. We examine the principles, algorithm rules, implementation strategies, and performance of Hamming Code compared to other error detection techniques. Through algorithm analysis, validation rules, programming demonstration, and experimental evaluations, we establish that the Hamming Code offers reliable data transmission with minimal redundancy and computational efficiency. The algorithm is suitable for real-time network communications and is widely applicable in various digital communication systems.
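The syndrome rule quoted above is concrete enough to demonstrate end to end; the sketch below (a hedged illustration written for this survey, not the paper's implementation) uses the [7, 4] Hamming code whose parity-check columns are the binary representations of 1 through 7.

```python
# Hedged sketch: single-error correction with the [7, 4] Hamming code, where
# the syndrome read as a binary number is the 1-indexed error position.
H = [[(j >> b) & 1 for j in range(1, 8)] for b in range(2, -1, -1)]

def syndrome(word):
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def correct(word):
    s = syndrome(word)
    pos = int("".join(map(str, s)), 2)       # binary syndrome -> bit position
    if pos:
        word = word[:]
        word[pos - 1] ^= 1                   # flip the erroneous bit
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]
received = codeword[:]; received[4] ^= 1     # inject an error at position 5
print(syndrome(received), correct(received) == codeword)   # [1, 0, 1] True
```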
... Recall that the Hamming distance d(u, v) between two codewords u, v is the number of coordinates where u and v differ. The expression A_q(n, d) denotes the maximum number of possible codewords in a q-ary block code of length n and minimum Hamming distance d, see [19]. The determination of (the asymptotic behavior of) A_q(n, d) is one of the most central problems in coding theory. ...
We study the problem of finding the largest number T(n, m) of ternary vectors of length n such that for any three distinct vectors there are at least m coordinates where they pairwise differ. For m = 1, this is the classical trifference problem which is wide open. We prove upper and lower bounds on T(n, m) for various ranges of the parameter m and determine the phase transition threshold on m = m(n) where T(n, m) jumps from constant to exponential in n. By relating the linear version of this problem to a problem on blocking sets in finite geometry, we give explicit constructions and probabilistic lower bounds. We also compute the exact values of this function and its linear variation for small parameters.
... One of the most well-known theoretical bounds in coding theory is the sphere-packing bound [22] or Hamming bound ( [15], [18]) providing a limit on the number of codewords a code can contain while still maintaining a certain minimum distance. Codes attaining the Hamming bound are called perfect codes. ...
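To make the notion of perfection concrete, here is a hedged sketch (the parameters are the textbook examples; the code is written for this survey) evaluating the Hamming bound and showing that the [7, 4, 3] Hamming and [23, 12, 7] Golay codes meet it exactly.

```python
# Hedged sketch: Hamming (sphere-packing) bound |C| <= q^n / V(n, t) with
# t = (d - 1) // 2; a code is perfect when it meets the bound with equality.
from math import comb

def hamming_bound(q, n, d):
    t = (d - 1) // 2
    return q ** n // sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

print(hamming_bound(2, 7, 3))    # 16 = 2^4: the [7, 4, 3] Hamming code is perfect
print(hamming_bound(2, 23, 7))   # 4096 = 2^12: the binary Golay code is perfect
```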
Projective metrics on vector spaces over finite fields, introduced by Gabidulin and Simonis in 1997, generalize classical metrics in coding theory like the Hamming metric, rank metric, and combinatorial metrics. While these specific metrics have been thoroughly investigated, the overarching theory of projective metrics has remained underdeveloped since their introduction. In this paper, we present and develop the foundational theory of projective metrics, establishing several elementary key results on their characterizing properties, equivalence classes, isometries, constructions, connections with the Hamming metric, associated matroids, sphere sizes and Singleton-like bounds. Furthermore, some general aspects of scale-translation-invariant metrics are examined, with particular focus on their embeddings into larger projective metric spaces.
... These examples not only showcase the versatility and effectiveness of the proposed methods but also provide a deeper insight into the process of constructing and analyzing quasi-self-dual codes. For a broader understanding of the general principles of coding theory that underpin these constructions, we refer the reader to [15,16]. ...
This paper establishes an extended theoretical framework centered on the duality of codes constructed over a special class of non-unital, commutative, local rings of order p², where p is a prime satisfying p ≡ 1 (mod 4) or p ≡ 3 (mod 4). The work expands the traditional scope of coding theory by developing and adapting a generalized recursive approach to produce quasi-self-dual and self-dual codes within this algebraic setting. While the method for code generation is rooted in the classical build-up technique, the primary focus is on the duality properties of the resulting codes, especially how these properties manifest under different congruence conditions on p. Computational examples are provided to illustrate the effectiveness of the proposed methods.
... , X_m] at all points in the affine space A^m(F_q). An extensive literature on binary and q-ary Reed-Muller codes can be found in [1,22]. Due to their nice algebraic properties, these codes have been extensively studied and are still an active area of research. ...
Affine Cartesian codes were first discussed by Geil and Thomsen in 2013 in a broader framework and were formally introduced by L\'opez, Renter\'ia-M\'arquez and Villarreal in 2014. These are linear error-correcting codes obtained by evaluating polynomials at the points of a Cartesian product of subsets of a given finite field. They can be viewed as a vast generalization of Reed-Muller codes. In 1970, Delsarte, Goethals and MacWilliams gave a characterization of the minimum weight codewords of Reed-Muller codes and also a formula for their number. Carvalho and Neumann in 2020 considered affine Cartesian codes in a special setting where the subsets in the Cartesian product are nested subfields of the given finite field, and gave a characterization of their minimum weight codewords. We use this to give an explicit formula for the number of minimum weight codewords of affine Cartesian codes in the case of nested subfields. This is seen to unify the known formulas for the number of minimum weight codewords of Reed-Solomon codes and Reed-Muller codes.
... We begin with some notation and definitions from [23]. Let F_q denote a finite field with q elements, where q is a power of a prime p. ...
In 2014, Gupta and Ray proved that circulant involutory matrices over the finite field with 2^m elements cannot be maximum distance separable (MDS). This non-existence also extends to circulant orthogonal matrices of order 2^d × 2^d over finite fields of characteristic 2. These findings inspired many authors to generalize the circulant property for constructing lightweight MDS matrices with practical applications in mind. Recently, in 2022, Chatterjee and Laha initiated a study of circulant matrices by considering semi-involutory and semi-orthogonal properties. Expanding on their work, this paper establishes a link between the trace of associated diagonal matrices and the MDS property of matrices over the finite field F_{2^m}. Given that existing constructions of circulant MDS matrices rely on exhaustive search methods, our result introduces a necessary condition for a circulant semi-orthogonal (or semi-involutory) matrix to be MDS. Specifically, we prove that for circulant semi-orthogonal matrices of even order and for circulant semi-involutory matrices, if the trace of the associated diagonal matrices is non-zero, the matrix cannot be MDS.
... • In [24], the authors proved the classical Sidel'nikov bound and the Carlitz-Uchiyama bound on the minimum distances of Euclidean duals of binary primitive BCH codes with odd designed distances. ...
The task of constructing infinite families of self-dual codes with unbounded lengths and minimum distances exhibiting square-root lower bounds is extremely challenging, especially when it comes to cyclic codes. Recently, the first infinite families of Euclidean self-dual binary and nonbinary cyclic codes of unbounded length, whose minimum distances have a square-root lower bound or a lower bound better than square-root, were constructed in \cite{Chen23}. Let q be a power of a prime number and . In this paper, we first improve the lower bounds on the minimum distances of Euclidean and Hermitian duals of BCH codes with length over and over in \cite{Fan23,GDL21,Wang24} for designed distances in some ranges, respectively, where . Then, based on the matrix-product construction and some lower bounds on the minimum distances of BCH codes and their duals, we obtain several classes of Euclidean and Hermitian self-dual codes whose minimum distances have square-root or square-root-like lower bounds. Our lower bounds on the minimum distances of Euclidean and Hermitian self-dual cyclic codes improve many results in \cite{Chen23}. In addition, our lower bounds on the minimum distances of the duals of BCH codes are almost or q times those of the existing lower bounds.
... The automorphism group Aut(C) of a code C is the set of all automorphisms of the code. For RM codes, the automorphism group is known to be the general affine group GA(m) [12], which consists of all affine bijections over F_2^m. An affine bijection is defined as the mapping z ↦ Az + b, where A is an invertible binary m × m matrix and b ∈ F_2^m. ...
By exploiting the rich automorphisms of Reed–Muller (RM) codes, the recently developed automorphism ensemble (AE) successive cancellation (SC) decoder achieves a near-maximum-likelihood (ML) performance for short block lengths. However, the appealing performance of AE-SC decoding arises from the diversity gain that requires a list of SC decoding attempts, which results in a high decoding complexity. To address this issue, this paper proposes a novel quasi-optimal path convergence (QOPC)-aided early termination (ET) technique for AE-SC decoding. This technique detects strong convergence between the partial path metrics (PPMs) of SC constituent decoders to reliably identify the optimal decoding path at runtime. When the QOPC-based ET criterion is satisfied during the AE-SC decoding, only the identified path is allowed to proceed for a complete codeword estimate, while the remaining paths are terminated early. The numerical results demonstrated that for medium-to-high-rate RM codes in the short-length regime, the proposed QOPC-aided ET method incurred negligible performance loss when applied to fully parallel AE-SC decoding. Meanwhile, it achieved a complexity reduction that ranged from 35.9% to 47.4% at a target block error rate (BLER) of 10^−3, where it consistently outperformed a state-of-the-art path metric threshold (PMT)-aided ET method. Additionally, under a partially parallel framework of AE-SC decoding, the proposed QOPC-aided ET method achieved a greater complexity reduction that ranged from 81.3% to 86.7% at a low BLER that approached 10^−5 while maintaining a near-ML decoding performance.
... Basics on linear block coding theory may be found in any of [3,27,29,33] and many others. The notation [n, r, d] is used here for a linear block code of length n, dimension r, and (minimum) distance d. ...
Linear block and convolutional codes are designed using unit schemes, and families of these with required length, rate, distance, and type are mined. Properties of the codes, such as type and distance, follow from the types of units used, and thus required codes are built from specific units.
Orthogonal units, units in group rings, Fourier/Vandermonde units, and related units are used to construct and analyse linear block and convolutional codes, and to construct these to predefined length, rate, distance, and type. Series of self-dual, dual-containing, quantum error-correcting, and linear complementary dual codes are constructed for both linear block and convolutional codes.
Low density parity check linear block and linear convolutional codes are constructed from unit schemes.
... • Every square submatrix of M is non-singular [MS77]. ...
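This characterization yields a direct, if exponential, MDS test; the sketch below (written for this survey, over a prime field GF(13) for readability rather than the characteristic-2 fields of the paper) checks every square submatrix of a 4 × 4 Cauchy matrix, a standard family whose square submatrices are all non-singular.

```python
# Hedged sketch: brute-force MDS test via the quoted submatrix criterion.
from itertools import combinations

def det_mod(m, p):
    """Determinant mod p by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0] % p
    return sum((-1) ** j * m[0][j]
               * det_mod([r[:j] + r[j + 1:] for r in m[1:]], p)
               for j in range(len(m))) % p

def is_mds(M, p):
    n = len(M)
    return all(det_mod([[M[r][c] for c in cs] for r in rs], p) != 0
               for k in range(1, n + 1)
               for rs in combinations(range(n), k)
               for cs in combinations(range(n), k))

# 4 x 4 Cauchy matrix M[i][j] = 1 / (x_i + y_j) over GF(13): every square
# submatrix of a Cauchy matrix is again Cauchy, hence non-singular.
p, xs, ys = 13, [0, 1, 2, 3], [4, 5, 6, 7]
M = [[pow(x + y, -1, p) for y in ys] for x in xs]
print(is_mds(M, p))   # True
```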
In this paper, we study MDS matrices that are specifically designed to prevent the occurrence of related differentials. We investigate MDS matrices with a Hadamard structure and demonstrate that it is possible to construct 4 × 4 Hadamard matrices that effectively eliminate related differentials. Incorporating these matrices into the linear layer of AES-like block ciphers/hash functions significantly mitigates the attacks that exploit the related-differentials property. The central contribution of this paper is to identify crucial underlying relations that determine whether a given 4 × 4 Hadamard matrix exhibits related differentials. By satisfying these relations, the matrix ensures the presence of related differentials, whereas failing to meet them leads to the absence of such differentials. This offers effective mitigation of recently reported attacks on reduced-round AES. Furthermore, we propose a faster search technique to exhaustively verify the presence or absence of related differentials in 8 × 8 Hadamard matrices over finite fields of characteristic 2, which requires checking only a subset of involutory matrices in the set. Although most existing studies on constructing MDS matrices primarily focus on lightweight hardware/software implementations, our research additionally introduces a novel perspective by emphasizing the importance of MDS matrix construction in relation to resistance against differential cryptanalysis.
... In many mathematical problems, the challenge lies in finding optimal solutions within a specific search space, regardless of their generalization beyond it. For instance, many problems require identifying high-quality solutions within particular dimensions, where cross-dimensional generalization is not the primary concern (Grochow, 2019; MacWilliams & Sloane, 1977). The observed improvement in search performance on the validation set using EvoTune is thus a promising indicator of its potential to address such challenging mathematical problems. ...
Discovering efficient algorithms for solving complex problems has been an outstanding challenge in mathematics and computer science, requiring substantial human expertise over the years. Recent advancements in evolutionary search with large language models (LLMs) have shown promise in accelerating the discovery of algorithms across various domains, particularly in mathematics and optimization. However, existing approaches treat the LLM as a static generator, missing the opportunity to update the model with the signal obtained from evolutionary exploration. In this work, we propose to augment LLM-based evolutionary search by continuously refining the search operator - the LLM - through reinforcement learning (RL) fine-tuning. Our method leverages evolutionary search as an exploration strategy to discover improved algorithms, while RL optimizes the LLM policy based on these discoveries. Our experiments on three combinatorial optimization tasks - bin packing, traveling salesman, and the flatpack problem - show that combining RL and evolutionary search improves discovery efficiency of improved algorithms, showcasing the potential of RL-enhanced evolutionary strategies to assist computer scientists and mathematicians for more efficient algorithm design.
This paper investigates perfect state transfer in Grover walks, a model of discrete-time quantum walks. We establish a necessary and sufficient condition for the occurrence of perfect state transfer on graphs belonging to an association scheme. Our focus includes specific association schemes, namely the Hamming and Johnson schemes. We characterize all graphs on the classes of Hamming and Johnson schemes that exhibit perfect state transfer. Furthermore, we study perfect state transfer on distance-regular graphs. We provide complete characterizations for exhibiting perfect state transfer on distance-regular graphs of diameter 2 and diameter 3, as well as integral distance-regular graphs.
Graph theory is applied in many areas of science, such as computing, chemistry, the biosciences, networks, security systems, and decision-making in power-system studies, making it an important tool in many fields. This expository article presents the study of a particular parameter, known as the packing number, within a special family of graphs called token graphs, and shows how it makes it possible to address, and contribute to the solution of, problems in the theory of error-correcting codes.
Accelerating Quantum Algorithm Simulations on Multiprocessor Architectures: Optimization Techniques with Cython, Numba, and Jax
Mehmet Keçeci
ORCID: https://orcid.org/0000-0001-9937-9839, İstanbul, Türkiye
Received: 03.06.2025
Abstract:
Although the theoretical potential of quantum algorithms is revolutionary, especially for solving complex problems, the practical development and validation of these algorithms relies largely on simulations run on classical computers. However, because the Hilbert space of quantum systems grows exponentially, the computational cost of simulations rises rapidly with increasing qubit counts, creating a serious performance bottleneck. To overcome this challenge, it is critical to exploit the parallel computing capabilities offered by modern multiprocessor architectures. This study examines the use of advanced optimization tools such as Cython, Numba, and Jax to improve the performance of quantum algorithm simulations developed in the Python programming language, and the effectiveness of these tools in multiprocessor environments. Cython translates Python code to C or C++, offering the advantages of static typing and compilation; it thereby removes Python's interpretation overhead and, under certain conditions, bypasses the GIL restriction to enable true thread-level parallelism. Numba's JIT compiler translates Python functions, especially numerically intensive loops operating on NumPy arrays, into machine code at runtime, achieving significant speedups; it also offers automatic parallelization capabilities through directives such as `@njit(parallel=True)`. Jax, in addition to optimization via automatic differentiation and the XLA compiler, supports a data-parallelism model through its `pmap` function, making it easy to distribute operations across multiple CPU cores or accelerators such as GPUs/TPUs, and enables automatic vectorization with `vmap`. The integration and effective use of these optimization techniques can substantially reduce the execution times of quantum algorithm simulations. The study considers the individual and combined effects of these tools, particularly in the simulation of computationally intensive tasks such as quantum error correction codes and variational quantum algorithms. The performance gains obtained will allow larger and more complex quantum systems to be studied with classical resources, contributing to progress in quantum computing research. In conclusion, the compilation and parallelization strategies offered by Cython, Numba, and Jax provide powerful and flexible solutions for running quantum simulations efficiently on multiprocessor architectures.
Keywords:
Quantum Algorithm Simulation, Multi-Processor Architectures, Performance Optimization, Parallel Computing, Cython, Numba, Jax, Python, High-Performance Computing, Quantum Computing.
Note: Citations and numbering are in continuation of the previous articles [312–321].
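As a concrete illustration of the kind of kernel the abstract above describes, the following minimal sketch applies a single-qubit gate to an n-qubit state vector using Numba's `@njit(parallel=True)`; it is an assumed, simplified example rather than code from the article, and the gate (a Hadamard) and the qubit count are arbitrary choices. An analogous Jax version would express the same pairwise update with `vmap` over pair indices.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def apply_single_qubit_gate(state, g00, g01, g10, g11, target, n):
    stride = 1 << target                       # distance between paired amplitudes
    for i in prange(1 << (n - 1)):             # independent pairs -> safe to parallelize
        # index of the amplitude whose target bit is 0 within the pair
        lo = ((i >> target) << (target + 1)) | (i & (stride - 1))
        hi = lo | stride
        a, b = state[lo], state[hi]
        state[lo] = g00 * a + g01 * b
        state[hi] = g10 * a + g11 * b

n = 20                                          # 2^20 amplitudes
state = np.zeros(1 << n, dtype=np.complex128)
state[0] = 1.0
h = 1 / np.sqrt(2)
for q in range(n):                              # H on every qubit -> uniform superposition
    apply_single_qubit_gate(state, h, h, h, -h, q, n)
print(np.allclose(np.abs(state) ** 2, 1 / (1 << n)))
```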
Abstract: This study focuses on improving the performance of quantum error correction (QEC) codes, which play a critical role in overcoming decoherence and errors, among the greatest obstacles facing quantum computing. In particular, it comprehensively examines how the choice of metric and algorithmic optimizations affect error-correction efficiency in large-scale surface codes (toric codes). One of the study's main findings is that the type of metric used to define the distance between physical qubits (Euclidean, Minkowski, Manhattan, and potentially Riemannian) significantly affects both the run time and the accuracy of error-correction algorithms. Simulations performed at high qubit counts (up to 250,000 qubits) showed that the Euclidean metric generally offers a good balance, while the Minkowski metric provides flexibility through different p-values. Despite the potentially high accuracy of a Riemannian metric, the difficulty of integrating it into existing algorithms and its computational cost limit its practical use. In the context of algorithmic optimizations, the performance of the Minimum-Weight Perfect Matching (MWPM) and Union-Find algorithms was compared, with MWPM generally observed to give better results at high qubit counts. To further increase the efficiency of these algorithms, the open-source BlossomV library was recompiled and optimized with modern high-performance languages such as C++ (using the C++20/C++23 standards) and Rust. Thanks to these compilation optimizations, striking speedups of up to ~190x in solution times for large systems were obtained, particularly in trials with the g++ compiler. The study also discusses, at a theoretical level, the specific error types faced by advanced qubit designs such as cat qubits (especially phase errors) and how these errors might potentially be addressed through fundamental physical principles such as the Aharonov-Bohm effect. In conclusion, this research emphasizes that on the road to large-scale, fault-tolerant quantum computers, metric choice, algorithm design, and software optimization must be treated synergistically. The findings will contribute to shaping future QEC strategies and quantum hardware-software co-design. Note: Citations and numbering are in continuation of the previous articles [312–320].
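To make the metric comparison tangible, here is a small hedged sketch (the coordinates and p-values are invented for illustration) of how a matching-based decoder weighs candidate pairings of syndrome defects under the Manhattan, Euclidean, and general Minkowski metrics. A decoder such as MWPM would then seek a perfect matching of minimum total weight over such a matrix.

```python
import numpy as np

def minkowski(a, b, p):
    # Minkowski distance; p=1 is Manhattan, p=2 is Euclidean
    return np.sum(np.abs(np.asarray(a) - np.asarray(b)) ** p) ** (1 / p)

defects = [(0, 0), (3, 4), (6, 1)]              # hypothetical defect coordinates
for p, name in [(1, "Manhattan"), (2, "Euclidean"), (4, "Minkowski p=4")]:
    W = [[minkowski(a, b, p) for b in defects] for a in defects]
    print(name, np.round(W, 2))
```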
This study, aiming to develop effective strategies against noise, one of the greatest obstacles to fault-tolerant quantum computing, examines in depth the optimization of recursion performance in quantum error correction (QEC) algorithms and the tolerance of these algorithms under extreme-noise conditions. Although the non-abelian statistics and topological protection of Majorana fermions make them promising qubit candidates for quantum computers, realizing this potential depends largely on scalable and noise-resistant QEC mechanisms. Our research focuses on the recursion-depth limitations faced by algorithms frequently used in surface codes, such as Union-Find, UFNS, and MWPM. To overcome these limitations, recursion-optimization techniques such as path compression, union by rank, and iterative implementation were applied, achieving notable reductions in the number of recursive calls and preventing the system from hitting its recursion limits. These optimizations made it possible to simulate and analyze high-qubit-count systems such as planar and cubic lattices. Within the scope of the thesis, in addition to the noise definitions found in the literature, the concept of "extreme noise" (the simultaneous presence of at least two distinct high-noise sources, p ≥ 0.8-0.9) was defined, and the performance of QEC algorithms in these demanding scenarios was evaluated in terms of correction times and resource requirements. The findings showed that, particularly at high qubit counts, optimized versions of the Union-Find algorithm can be more efficient than MWPM (within certain ranges). It is concluded that inherently noiseless or low-noise systems such as Majorana Zero Modes (MZMs) hold greater potential for a paradigm-shifting advance described as a "quantum leap". Although significant steps have been taken with current qubit technologies and error-correction methods, it is anticipated that reaching true quantum advantage may require innovative approaches such as MZM-based platforms and more sophisticated QEC strategies tailored to them. This work offers concrete solutions for improving the practical applicability of QEC algorithms, sheds light on the effects of extreme-noise conditions on quantum systems, and provides critical insights for the design of future fault-tolerant quantum computers.
Keywords: Quantum Error Correction, Recursion Optimization, Extreme Noise, Majorana Fermions, Surface Codes, Quantum Leap, Union-Find Algorithm, Fault Tolerance, Topological Quantum Computing, MWPM, Union-Find, UFNS.
Note: Citations and numbering are in continuation of the previous articles [307–314].
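The three recursion optimizations named in the abstract are standard and easy to sketch; the following minimal Python class (an illustration, not the thesis code) combines union by rank, path compression, and a fully iterative find() that avoids Python's recursion limit altogether.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:       # iterative walk, no recursion
            root = self.parent[root]
        while self.parent[x] != root:          # second pass: path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:      # union by rank keeps trees flat
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(10)
for a, b in [(0, 1), (1, 2), (3, 4)]:
    uf.union(a, b)
print(uf.find(2) == uf.find(0), uf.find(4) == uf.find(0))  # True False
```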
Quantum computers promise a revolution in science and technology through their potential to solve complex problems where classical approaches fall short. However, fully realizing this potential depends on overcoming fundamental challenges such as the scalability of systems at high qubit counts and effective error management against the noise inherent in quantum systems. This study addresses scalability and error-management problems in high-qubit-count quantum computing systems, examining three main solution axes in depth: surface codes, topological materials, and hybrid algorithmic approaches. Surface codes are a topological approach, prominent among quantum error correction (QEC) strategies, that aims to build fault-tolerant logical qubits from physical qubits. In this context, the efficiency of decoding algorithms such as BlossomV, Minimum-Weight Perfect Matching (MWPM), and Union-Find plays a critical role in resolving the error syndromes of systems reaching millions of qubits. Our study analyzes the performance, wall time, and generated data volume of these algorithms for different qubit counts and error models. Another promising direction for scalability and error management is the use of topological materials such as Majorana Zero Modes (MZMs) and Weyl semimetals (WSMs). Because these materials exhibit intrinsically lower noise levels and can preserve quantum coherence for longer, they have the potential to provide qubit platforms that require less complex error-correction mechanisms, or none at all. In particular, the chargeless and massless nature of Weyl fermions may enable more stable qubits by reducing problems such as the corona effect. Finally, starting from the fact that no single algorithm is optimal for all problem types and noise regimes, hybrid algorithmic approaches and an "Algorithm Pool" concept are proposed. This approach integrates error-mitigation techniques such as Zero-Noise Extrapolation (ZNE) and Probabilistic Error Cancellation (PEC), as well as fast classical matrix-multiplication algorithms such as Coppersmith-Winograd, to accelerate computation-intensive subtasks such as surface-code decoding. AI-assisted on-the-fly algorithm selection can further increase the efficiency of these hybrid systems. This integrated approach aims to pave the way for fault-tolerant and scalable quantum computing by both shortening run times and minimizing error rates at high qubit counts.
Keywords: Quantum Computing, Scalability, Error Management, Surface Codes, Topological Materials, Weyl Semimetals, Majorana Fermions, Hybrid Algorithms, Quantum Error Correction.
Note: Citations and numbering are in continuation of the previous articles [307–301].
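The "Algorithm Pool" concept can be sketched as a simple dispatcher; in the toy example below, the decoder stubs, the feature set (qubit count and error rate), and the switching threshold are all hypothetical placeholders, not values from the study.

```python
def mwpm_decoder(syndrome):
    return "mwpm-correction"                    # placeholder stub

def union_find_decoder(syndrome):
    return "union-find-correction"              # placeholder stub

POOL = [
    # (predicate over job features, decoder)
    (lambda n, p: n > 100_000, union_find_decoder),  # cheap at very large n
    (lambda n, p: True, mwpm_decoder),               # accurate default
]

def dispatch(n_qubits, error_rate, syndrome):
    # pick the first decoder whose predicate accepts the job's features
    for pred, decoder in POOL:
        if pred(n_qubits, error_rate):
            return decoder(syndrome)

print(dispatch(250_000, 0.01, syndrome=[]))     # -> union-find branch
print(dispatch(1_000, 0.01, syndrome=[]))       # -> MWPM branch
```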
The reliability and effectiveness of quantum algorithms, especially on today's noise-sensitive NISQ devices, are closely tied to comprehensive error-mitigation strategies and well-defined performance metrics. This study provides a comparative analysis of different error-mitigation techniques, including zero-noise extrapolation (ZNE), probabilistic error cancellation (PEC), and Clifford data regression (CDR). Experimental and simulation-based investigations showed that ZNE can deliver a remarkable improvement of roughly 741-fold in physical error rates (reducing them from 0.07% to 0.0001%) and that these techniques can be adapted to different error regimes. The study strongly emphasizes that the ansatz structures used in optimizing quantum circuits, together with the physical placement of qubits and the connectivity between them, directly affect error rates. In light of these observations, to reach lower error levels and improve algorithmic performance, the study proposes self-adaptive, convolutional smart ansatz structures equipped with machine-learning capabilities that learn from a database of past low-error successful instances. Developing and optimizing such ansätze dynamically with AI tools will play a critical role in managing errors proactively. The presented findings and proposed metrics lay a foundation for future automated quantum-algorithm design strategies, which are planned to be evaluated in light of more advanced topological error-correction codes and mathematical frameworks such as the Künneth theorem.
Keywords: Künneth Theorem, Quantum Algorithms, Error Mitigation, Ansatz Optimization, ZNE, Zero-Noise Extrapolation, Qubit Architecture, Machine-Learning Integration, Majorana Fermions, JupyterLab, Circuit Depth, Topological Codes, Self-Adaptation, Convolution, Artificial Intelligence.
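For readers unfamiliar with ZNE, the following compact sketch shows the basic mechanism: measure an observable at artificially amplified noise levels, fit a polynomial, and extrapolate to zero noise. The noisy-expectation model below is a synthetic stand-in for hardware data, not the study's measurements.

```python
import numpy as np

def noisy_expectation(scale, e0=1.0, rate=0.07):
    # synthetic model: the signal decays exponentially with the noise scale
    return e0 * np.exp(-rate * scale)

scales = np.array([1.0, 1.5, 2.0, 3.0])        # noise amplification factors
values = noisy_expectation(scales)
coeffs = np.polyfit(scales, values, deg=2)     # Richardson-style polynomial fit
zne_estimate = np.polyval(coeffs, 0.0)         # extrapolate to scale -> 0
print(zne_estimate, noisy_expectation(0.0))    # mitigated estimate vs ideal value
```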
Let , where and for with , and , where p is an odd prime and e is a positive integer. In this article, we have shown that if , then any linear code over is equivalent to a Euclidean linear complementary dual (LCD) code, and if , then the code is equivalent to an -Galois LCD code.
In this paper, by treating Reed-Muller (RM) codes as a special class of low-density parity-check (LDPC) codes, and assuming that sub-blocks of the parity-check matrix are randomly interleaved with each other as in Gallager's codes, we present a short proof that RM codes are entropy-achieving as source codes for Bernoulli sources and capacity-achieving as channel codes for binary memoryless symmetric (BMS) channels, also known as memoryless binary-input output-symmetric (BIOS) channels, in terms of bit error rate (BER) under maximum-likelihood (ML) decoding.
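As a concrete handle on the objects involved (illustrative; the paper's interleaving argument is not reproduced here), the following small sketch builds a generator matrix of RM(r, m) from Kronecker powers of [[1,0],[1,1]], keeping the rows whose index has Hamming weight at least m − r, and checks it against a parity-check matrix obtained from the dual code RM(m − r − 1, m).

```python
import numpy as np

def rm_generator(r, m):
    # RM(r, m) is spanned by the rows of [[1,0],[1,1]]^(kron m) whose row
    # index i satisfies wt(i) >= m - r in binary.
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, F)
    keep = [i for i in range(1 << m) if bin(i).count("1") >= m - r]
    return G[keep] % 2

G = rm_generator(1, 3)           # RM(1,3): the [8,4,4] extended Hamming code
H = rm_generator(3 - 1 - 1, 3)   # dual code RM(1,3); self-dual, so H = G here
print(G)
print((G @ H.T % 2 == 0).all())  # every codeword satisfies the parity checks
```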