Coding Theory - Science topic
Explore the latest questions and answers in Coding Theory, and find Coding Theory experts.
Questions related to Coding Theory
Dear Professors @Anuradha Sharma and @Varsha Chauhan,
I would be very glad if you would accept my invitation to discuss my recent note "Disproving some theorems in Sharma and Chauhan et al. (2018, 2021)", which concerns some incorrect theorems (and strengthens others) in your previous publications. I would also like to discuss your recent note titled "Some Remarks on the Results Derived by Ramy Takieldin and Patrick Solé (2025)".
I was particularly surprised to read your statement that my note is "factually incorrect or lacks adequate substantiation". This is especially puzzling since both you and Prof. Varsha Chauhan were quite willing to be included as coauthors on this note after you had reviewed all of its results and theorems (and before you knew that a copy of it had already been uploaded to arXiv). Given this background, I am curious how you now view the results as incorrect. I would greatly appreciate it if you could clarify which specific results in my note you believe to be erroneous. I hope we can begin this discussion, and I look forward to understanding your perspective.
Let's start with your statement that "the class of Λ-multi-twisted codes examined in Theorem 7 of [4] lacks significant relevance from a coding theory perspective". I would like to clarify that our note does not aim to study this particular class of MT codes. Rather, the aim was to demonstrate the errors made in your previous publication when you studied this class. Specifically, the class of MT codes discussed in Theorem 7 of our note is broader than the one you considered in Theorem 5.2 and Corollary 5.1 of your paper, because we removed your restriction on λ and relaxed the coprimality condition. If the class we considered indeed "lacks significant relevance from a coding theory perspective", then why did you choose to study a subclass of it in your FFA paper? I emphasize that the primary aim of our note is not to conduct a detailed study of this class, but rather to clarify what is correct and what is not in your FFA paper.
I look forward to your response and hope you will agree to start this discussion with me. I initially attempted to have this discussion as a comment on your note, but unfortunately, ResearchGate has recently disabled this feature.
I am looking to apply for PhD programs in the fields of Coding Theory, Cryptography, Information Theory, Communication Engineering, and Machine Learning.
I welcome any inquiries and collaboration opportunities related to these research areas.
Email: zhyueyue1998@gmail.com
Pseudocode:
1) Original: O
2) Epigenetic Information Polish: EIP
3) Recessive Privilege Distribution: RPD
4) Fertility Enhancement: FE
5) Genetically Modified Organism: GMO
6) Same Person: SP
O + EIP + RPD + FE = GMO, but SP.
Preprint Genetic Individuality
Dear colleagues,
As is well known, we can deliver messages in different languages and character systems, including Chinese, English, Latin, binary and decimal codes (with a converter), and many others. However, what is the simplest character system that can carry the richest message? In other words, if I want to store a message with characters that are as simple and as few as possible, what language should I use?
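A small Python sketch of one way to frame this: any message can be carried by just a two-symbol alphabet, and Shannon entropy bounds the average number of binary symbols needed per character (the sample string is an arbitrary illustration):

import math
from collections import Counter

# Any text can be flattened to a two-symbol (binary) alphabet;
# the price of a simpler alphabet is a longer message.
msg = "hello world"
bits = "".join(f"{b:08b}" for b in msg.encode("utf-8"))

# Shannon entropy: a lower bound on the average bits needed per character.
freq = Counter(msg)
H = -sum(c / len(msg) * math.log2(c / len(msg)) for c in freq.values())
print(len(bits), "binary symbols;", f"entropy = {H:.2f} bits/character")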
Thank you so much.
Wei
I would like to discuss semantic density and semantic gravity in relation to physics concepts.
In my research I use thematic content analysis to describe legal documents. After building our coding framework, we assigned a sample of the data to 3 independent raters. I asked them to code the data, and upon receiving their codes back, I calculated a simple agreement score from the number of matching codes relative to the total number of codes present. The simple percentage agreement between the 3 raters is 88%. I tried to calculate Krippendorff's alpha using ReCal3 (online), Real Statistics (Excel add-in), and an R script (the icr package).
I always get aberrant values: -0.46 or -0.26, or a really low alpha like 0.10 or 0.11.
Why do you think this happens even though I have 88% agreement?
My tables are composed of 3 rows (corresponding to the raters) and 26 columns (corresponding to the codes). The data is nominal; it only reflects the presence (1) or absence (0) of the category in each coder's results. The table is in the attached file.
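To illustrate why this can happen, here is a small from-scratch Python sketch of Krippendorff's alpha for nominal data with no missing values; the toy table at the end is hypothetical (not your data) and shows how a rare category that never coincides across raters can push alpha below zero despite roughly 92% raw agreement:

import numpy as np

def kripp_alpha_nominal(data):
    # data: raters x units array of nominal labels, no missing values
    data = np.asarray(data)
    m, N = data.shape
    cats = list(np.unique(data))
    C = len(cats)
    o = np.zeros((C, C))  # coincidence matrix
    for u in range(N):
        for a in range(m):
            for b in range(m):
                if a != b:
                    o[cats.index(data[a, u]), cats.index(data[b, u])] += 1 / (m - 1)
    n_c = o.sum(axis=1)
    n = n_c.sum()
    D_o = o.sum() - np.trace(o)                   # observed disagreement
    D_e = (n ** 2 - (n_c ** 2).sum()) / (n - 1)   # expected disagreement
    return 1 - D_o / D_e

# hypothetical table: the rare category (1) never coincides across raters
rows = np.zeros((3, 26), dtype=int)
rows[0, 0] = rows[1, 1] = rows[2, 2] = 1
print(kripp_alpha_nominal(rows))   # about -0.027 despite ~92% raw agreement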
Thank you
I have been pondering the relationship between these two important topics of our data-driven world for a while. I have bits and pieces, but I have been hoping to find a neat and systematic set of connections that would somehow (surprisingly) bind them and fill the empty spots I have carried in my mind for the last few years.
In the past, while dealing with a multi-class classification problem (not so long ago), I came to realize that multiple binary classifications are a viable way to address it through error-correcting output coding (ECOC) - a well-known coding technique in the literature whose construction requirements are a bit different from those of classical block or convolutional codes. I would like to remind you that grouping multiple classes into two superclasses (a.k.a. class binarization) can be done in various ways. You can group them totally at random, independently of the problem at hand, or based on a set of problem-dependent constraints derived from the training data. The approach I like most sits at the intersection of information theory and machine learning. To be more precise, class groupings can be chosen to maximize the resultant mutual information and hence the class separation. In fact, the main objective of this method is to maximize class separation so that the binary classifiers are exposed to less noisy data and hopefully achieve better performance. On the other hand, the ECOC framework calls for coding theory and efficient encoder/decoder architectures that can handle the classification problem efficiently. The nature of the problem, though, is not something we usually come across in communication theory and classical coding applications. Binarization of classes implies different noise and defect structures being inserted into the so-called "channel model", which is not common in classical communication scenarios. In other words, the solution itself changes the nature of the problem at hand. Also, the way we choose the classifiers (margin-based, etc.) will affect the characterization of the noise that impacts the detection (classification) performance. I do not know if it can be determined, but what is the capacity of such a channel? What is the best code structure that addresses these requirements? Even more interestingly, can the recurrent issues of classification (such as overfitting) be solved with coding? Maybe we can maintain a trade-off between training and generalization errors with an appropriate coding strategy?
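For concreteness, a minimal sketch of the ECOC idea using scikit-learn's OutputCodeClassifier with random class binarizations; the dataset, base learner, and code_size are arbitrary illustrative choices:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

# Multi-class classification reduced to several binary problems via a
# random error-correcting output code; code_size sets the codeword
# length relative to the number of classes.
X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0)
print(ecoc.fit(Xtr, ytr).score(Xte, yte))

Note that scikit-learn draws the code matrix at random; a mutual-information-driven binarization of the kind described above would replace that random matrix with a problem-dependent one.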
Similar trends can be observed in the estimation theory realm. Parameter estimation, or likewise "regression" (including model fitting, linear programming, density estimation, etc.), can be thought of as the problem of finding the "best parameters" or "best fit", which are the ultimate targets to be reached. The errors due to the methods used, the collected data, etc., are problem-specific and usually dependent. For instance, density estimation is a hard problem in itself, and kernel density estimation is one method of estimating probability density functions. Various kernels and data transformation techniques (such as Box-Cox) are used to normalize data and propose new estimation methods that meet today's performance requirements. To measure how well we do, or how different two distributions are, we again resort to information theory tools (such as the Kullback-Leibler (KL) divergence and the Jensen-Shannon divergence) and use the concepts and techniques therein (including entropy, etc.) from a machine learning perspective. Such an observation separates the typical problems posed in the communication theory arena from those of the machine learning arena, requiring a distinct and careful treatment.
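As a small illustration of the density-comparison step, a Python sketch that fits two kernel density estimates and compares them with the KL divergence on a common grid (Gaussian kernels and all parameters are illustrative assumptions):

import numpy as np
from scipy.stats import gaussian_kde
from scipy.special import rel_entr

# Fit two kernel density estimates and compare them with KL(p || q).
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 1000)
b = rng.normal(0.5, 1.2, 1000)
grid = np.linspace(-6, 7, 500)
p = gaussian_kde(a)(grid)
q = gaussian_kde(b)(grid)
p /= p.sum()
q /= q.sum()                 # discretize to probability vectors
print(rel_entr(p, q).sum())  # KL divergence in nats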
Last but not least, I think there is a deep-rooted relationship between deep learning methods (and many machine learning methods per se) and the core concepts of information and coding theory. Since the hype around deep learning appeared, I have observed many studies applying deep learning methods (autoencoders, etc.) to the decoding of specific codes (polar, turbo, LDPC, etc.), claiming efficiency, robustness, and so on thanks to the parallel implementation and model-deficit nature of neural networks. However, I am wondering about the other way around. I wonder if, say, back-propagation can be replaced with more reasonable and efficient techniques already well known in the information theory world. Perhaps rate-distortion theory has something to say about the optimal number of layers we ought to use in deep neural networks. Belief propagation, turbo equalization, list decoding, and many other known algorithms and models may apply quite well to known machine learning problems and will perhaps promise better and more efficient results in some cases. I know a few folks have already begun investigating neural-network-based encoder and decoder designs for feedback channels. There are many open problems, in my opinion, regarding the explicit design of encoders and the use of such networks without feedback. A few recent works have considered various application areas, such as molecular communications and coded computation, where a deep learning background can be applied to secure performance that otherwise could not be achieved using classical methods.
In the end, I just wanted to toss out a few short notes here to instigate further discussion and thought. This interface will attract more attention as we see the connections clearly and bring out new applications down the road...
If we represent each of the small cubes on each face of a cube by a base-6 digit (one of the 6 colors), can we then represent the configuration of the cube by a 54-digit number (9 × 6)?
Interestingly, the operations could also be represented in terms of base-6 numbers (one digit for each of the 6 face rotations). Of course, in our case, things are not commutative. How could this be related to diagonalization? The fully solved case reminds one of the Jordan normal form and Jordan blocks.
Could the above be related to problems in coding theory?
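As a quick sanity check of the counting idea, a tiny Python sketch that packs a 54-sticker configuration (colors labeled 0-5) into a single base-6 integer; the solved-cube state below is a hypothetical labeling:

# 54 stickers, 6 colors -> one base-6 integer with 54 digits
state = (0,) * 9 + (1,) * 9 + (2,) * 9 + (3,) * 9 + (4,) * 9 + (5,) * 9
n = 0
for s in state:
    n = n * 6 + s   # append one base-6 digit
print(n)            # decode by repeatedly dividing by 6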
Even as "Explainable AI" is all the rage, coding/transformation fidelity is a critical success factor. Whether you are using frequency bands from Fourier transforms, statistical features of wavelet decompositions, or various filters in convolutional networks, researchers must be able to perform the reverse coding/transformation to see whether they have retained sufficient information for classification. Without this, they are only guessing via network-architecture trial and error. These tools are sorely lacking in Keras on TensorFlow in Python, so I wrote my own. I would like to see them generalized and made into public libraries.
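As an illustration of the kind of reverse-transformation check described above (a generic sketch, not the author's tool), a numpy example that truncates a Fourier representation, inverts it, and measures the reconstruction error:

import numpy as np

# Round-trip fidelity check: transform, discard coefficients, invert.
rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
X = np.fft.rfft(x)
X[64:] = 0                         # keep only the lowest 64 frequency bins
x_rec = np.fft.irfft(X, n=x.size)
err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2f}")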
Question: Who can point me to current work in this area, or can give advice on next steps in my effort?
Please see this public Jupyter Notebook for an example:
My research title is: "The Effect of an Augmented Reality Tool in Enhancing Students' Achievement and Creative Thinking Skills".
I spent a lot of time reading several learning theories in order to choose the best ones to fit my research variables, and came up with: dual coding theory, the information processing system, social constructivism, cognitive constructivism, Mayer's cognitive theory of multimedia learning, Guilford's theory, and situated learning theory.
I am so confused.
In field theory, a primitive element of a finite field GF(q) is a generator of the multiplicative group of the field. In other words, α ∈ GF(q) is called a primitive element if it is a primitive (q − 1)th root of unity in GF(q); this means that each non-zero element of GF(q) can be written as α^i for some integer i.
For example, 2 is a primitive element of the field GF(3) and GF(5), but not of GF(7) since it generates the cyclic subgroup {2, 4, 1} of order 3; however, 3 is a primitive element of GF(7).
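A brute-force Python check of the definition for prime fields reproduces these examples:

def is_primitive(a, p):
    # a is primitive in GF(p) iff its powers hit all p - 1 nonzero elements
    seen, x = set(), 1
    for _ in range(p - 1):
        x = (x * a) % p
        seen.add(x)
    return len(seen) == p - 1

assert is_primitive(2, 3) and is_primitive(2, 5)
assert not is_primitive(2, 7) and is_primitive(3, 7)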
Hi
I have published this paper recently
In that paper, we did an abstracted simulation to get an initial result. Now, I need to do a detailed simulation on a network simulator.
So, I need a network simulator that implements or supports MQTT-SN, or an implementation of MQTT-SN that would work in a network simulator.
Any hints please?
Could someone give the names of some basic papers on DNA storage systems from a coding theory angle? I want to learn from the basic level. Thanks in advance.
I want to start my PhD in applied mathematics and continue my master's work on coding theory, but this time I also want to work in a non-mathematical field together with coding theory.
Can anyone suggest some topic names related to my previous work on coding theory?
In the book "Great Software Debates", Alan M. Davis states in the chapter "Requirements", sub-chapter "The Missing Piece of Software Development"
Students of engineering learn engineering and are rarely exposed to finance or marketing. Students of marketing learn marketing and are rarely exposed to finance or engineering. Most of us become specialists in just one area. To complicate matters, few of us meet interdisciplinary people in the workforce, so there are few roles to mimic. Yet, software product planning is critical to the development success and absolutely requires knowledge of multiple disciplines.
What are these (multiple) disciplines?
Anyone with insights, please help illuminate. Thank you.
In their paper "A new upper bound on the minimal distance of self-dual codes", Conway and Sloane wrote (Theorem 5 iv):
"If we write
W(x,y) =
\sum\limits_{j=0}^{\lfloor\frac{n}{8}\rfloor}{a_j(x^2+y^2)^{\frac{n}{2}-4j}(x^2y^2(x^2-y^2)^2)^j}
using Gleason's theorem, for suitable INTEGERS a_j, then..."
My question is about these coefficients. In the books on coding theory that I have read, the coefficients are presented as complex, real, or rational numbers. Has somebody proved that these coefficients are integers?
In the literature, all I am seeing is the Hamming code with minimum Hamming distance 3. Theoretically, a code with minimum distance d can detect d−1 errors and can correct ⌊(d−1)/2⌋ errors. So a minimum distance of 3 can detect 2 errors and correct 1 error.
So here is my question: Is it possible to generate a Hamming code with a minimum distance of 5, which could correct 2-bit errors? If yes, could anyone please give me suggestions on how? If not, please give me an explanation.
Thanks in advance.
Hi
I have a question regarding the construction of high-rate LDPC codes. My research field is not coding, but at one point in my research I need an explicit way of constructing high-rate LDPC codes with girth 6 or 8. I think high rate is not an important factor in coding, but in compressed sensing it is of great importance, since it underpins the main assumption of compressed sensing.
I would like to know whether cyclic or quasi-cyclic LDPC codes of girth 6 or 8 can provide high rates or not. Any suggestion is appreciated!
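In case it helps to have the construction concrete, here is a minimal Python sketch of the usual circulant-expansion (QC) construction; the base matrix of shift values is an arbitrary toy choice and is not girth-optimized:

import numpy as np

def circulant(shift, size):
    # identity matrix with columns cyclically shifted by `shift`
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def qc_ldpc_H(base, size):
    # expand a base matrix of shift values (-1 means an all-zero block)
    blocks = [[np.zeros((size, size), dtype=int) if s < 0 else circulant(s, size)
               for s in row] for row in base]
    return np.block(blocks)

# toy 2x4 base matrix with 5x5 circulants: H is 10x20, so rate >= 1/2
H = qc_ldpc_H([[0, 1, 2, -1], [0, 2, 4, 3]], 5)

The design burden is then choosing the shift values so that the expanded Tanner graph avoids 4-cycles (girth 6) or also 6-cycles (girth 8).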
Thanks
Mahsa
I am working on the radical theory of rings and modules. I would love to improve my research on the applications of the prime radical or the Jacobson radical.
In quasi-cyclic codes, we define H by adopting one of the several techniques proposed in the literature, and the null space of H over GF(q) gives a QC-LDPC code C.
The answer should be in [T. Kasami, N. Tokura, S. Azumi, On the weight enumeration of weights less than 2.5d of Reed-Muller codes, Inform. Contr. 30:4, 380-395, 1976. https://doi.org/10.1016/S0019-9958(76)90355-7], but I see contradictory information there. The final table states that there is nothing between 2d and 2.125d. But from Lemma 1 it can be seen that 2d + sqrt(d) is possible. For example, the Boolean function 1 + x1x2 + x3x4 + x5x6 + x7x8 + x9x0 of degree 2 has weight 2.0625d (d = 256). Maybe I am misunderstanding something. My goal is to give a correct reference for weights less than 2.5d in RM(m,r)...
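For what it is worth, a brute-force Python check of the degree-2 example (five disjoint quadratic terms in ten variables) confirms the stated weight:

from itertools import product

# weight of f = 1 + x1x2 + x3x4 + x5x6 + x7x8 + x9x10 over all 2^10 inputs
wt = sum((1 + sum(x[2 * i] * x[2 * i + 1] for i in range(5))) % 2
         for x in product((0, 1), repeat=10))
print(wt, wt / 256)   # 528 and 2.0625, i.e. 2d + sqrt(d) with d = 256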
Dear all
I am writing a paper in the field of LDPC codes. I have a problem simulating my proposed parity-check matrix and obtaining the chart of bit error rate vs. SNR values. I need an expert co-author in this field to finish my paper. Can anyone help me as a co-author by simulating the codes in my paper? Please contact me and I will explain more details.
This question is still unanswered for me. For example, CDs use Reed-Solomon codes for this purpose. It would be great if someone could refer me to a technical paper on this particular topic.
Thanks
Let us explore the following number-theoretic question: suppose we have what we shall refer to as a Prime-Indexical Base Number System, such that the sum of any two primes is the next prime in the series of primes. So such a system would contain all and only prime numbers. For instance, 1+2=3, 3+2=5, 5+2=7; but 5+3=11, and 7+3=11 also. Furthermore, 11+13=29 and 29+31=61, and so on. Can we render a formula that would make such a number system consistent?
Hello everyone! I am currently searching for algebraic structures which I will use for constructing codes for my research. I would appreciate any idea that you will be able to share regarding my question. Thanks.
Complete question: It is known that the nilradical N(R) of a commutative ring R is contained in the Jacobson radical J(R) of R. Now, if J(R) = N(R) and N(R) is nontrivial, is there any significant use of this in establishing an algebraic structure to be applied in coding theory?
Hi all,
for my current research project, I use ATLAS.ti for the qualitative analysis of my interview data. With the help of the CAT add-on, intercoder reliability can be assessed. I have never done this before and I am not sure which option to pursue:
i) should the second coder use my existing codebook, which emerged from open coding the interviews in the first place or
ii) should I let the second coder simply open-code the interviews as well?
Any best-practice hints on this would be greatly appreciated! :)
Best wishes
Jens
The Gray map f from F_2 + uF_2 to F_2^2 is f(x + uy) = (y, x + y).
I.e., compare all the possible codewords.
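Purely as an illustration (a hypothetical toy code, not the code from the question), a Python sketch that applies this Gray map coordinatewise and then finds the minimum distance by comparing all codewords:

import itertools

def gray(x, y):
    # Gray image of x + u*y in F2 + uF2: the pair (y, x + y) over F2
    return (y, (x + y) % 2)

def gray_image(word):
    # apply the Gray map coordinatewise; doubles the length
    out = []
    for x, y in word:
        out.extend(gray(x, y))
    return out

def min_distance_brute(codewords):
    # minimum Hamming distance by checking every pair of codewords
    best = None
    for c1, c2 in itertools.combinations(codewords, 2):
        d = sum(a != b for a, b in zip(c1, c2))
        best = d if best is None else min(best, d)
    return best

# toy length-2 repetition-style code over F2 + uF2, elements given as (x, y)
ring_code = [[(0, 0), (0, 0)], [(0, 1), (0, 1)],
             [(1, 0), (1, 0)], [(1, 1), (1, 1)]]
print(min_distance_brute([gray_image(w) for w in ring_code]))   # 2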
I have been applying in-vivo codes to various established definitions in two fields of study in order to determine whether theories derived from the code analysis can be grouped into the two fields of study. What has emerged is 141 codes that have been grouped into 11 logical categories.
These categories can be linked to some prominent theories, such as Systems Theory and Institutional Theory, but others cannot.
I need assistance with:
- Are there any principles in linking coding outputs to established theories?
- Are there any principles in handling code categories that do not logically link to established theories?
Thanks.
Since their inception, almost all of the attacks on the McEliece cryptosystem and its variant, the Niederreiter cryptosystem, have been unsuccessful. They are believed to resist even quantum attacks. These two are code-based public-key cryptosystems whose private key consists of a generator matrix of a Goppa code. All other modified versions of these cryptosystems, proposed by replacing Goppa codes with other codes, have been proven insecure. In view of this, what structural advantage do Goppa codes enjoy over other families of codes that makes them so strong for cryptographic applications?
I'm implementing successive cancellation decoding of a polar code in Matlab; however, I can't get any good results.
I'm following the algorithm in the paper "List Decoding of Polar Codes". However, the result is always zeros, and I don't know why.
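Since I cannot see your Matlab code, here is a minimal Python sketch of the standard SC recursion (min-sum f-function, natural bit order, LLR convention: positive means bit 0) that may help you cross-check intermediate values; the frozen set in the demo is a toy choice, not a properly constructed one:

import numpy as np

def polar_transform(u):
    # polar encoding x = u * F^{kron n} over GF(2), F = [[1, 0], [1, 1]]
    n = len(u)
    if n == 1:
        return u
    return np.concatenate([polar_transform((u[:n // 2] + u[n // 2:]) % 2),
                           polar_transform(u[n // 2:])])

def sc_decode(llr, frozen):
    # llr: channel LLRs; frozen: boolean mask (True = frozen to 0)
    n = len(llr)
    if n == 1:
        return np.array([0 if (frozen[0] or llr[0] >= 0) else 1])
    a, b = llr[:n // 2], llr[n // 2:]
    # f-function (min-sum): LLR of the XOR of the two halves
    u1 = sc_decode(np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b)),
                   frozen[:n // 2])
    s = polar_transform(u1)   # partial sums from the decoded first half
    # g-function: combine using the now-known partial sums
    u2 = sc_decode(b + (1 - 2 * s) * a, frozen[n // 2:])
    return np.concatenate([u1, u2])

# noiseless round trip with a toy frozen set: decoder must recover u exactly
frozen = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
u = np.zeros(8, dtype=int)
u[~frozen] = [1, 0, 1, 1]
llr = (1 - 2 * polar_transform(u)) * 4.0   # BPSK-style LLRs, no noise
assert np.array_equal(sc_decode(llr, frozen), u)

A common source of all-zero outputs is a sign-convention mismatch between the channel LLRs and the hard decision at the leaves, so a noiseless round trip like the one above is a useful first test.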
I have to calculate the minimum distance of a finite-length LDPC code to compare its performance with other codes.
Conference Paper An Efficient Analysis of Finite-Length LDPC Codes
It appears to be based on algorithms in the literature such as the one by Minn, Zeng and Bhargava.
In real life, which one should we apply first? To transmit digital information we apply coding theory, and for security we apply cryptography. Which one should we apply first? Is cryptography applied before coding, after coding, or both?
Where should I start in writing a simple Matlab code for LDPC?
Any suggestions or comparison plots in Matlab?
It is a classification of JavaScript code into malicious and non-malicious classes.
From a Makefile, can I get the execution sequence of the software?
Example: from its Makefile, can I get the build details of the Linux kernel?
How can Vedic mathematics solve the problems associated with error-correcting codes?
Is there any application of finite fields to solve the 'key equation solver' block of an RS code?
I think it's related to the problem of computing the smallest-weight basis of a matroid.
What is the best software package available for carrying out computations related to algebraic coding theory, especially codes over rings?
I am working on the security of steganographic systems. When I was reading this interesting paper about steganographic codes, I didn't understand why, in Section III, they say that "for the security of steganographic systems, we hope there are enough stego-codes, especially binary codes".
I am implementing Reed-Solomon codes in Verilog and I need some help in designing the Euclidean block.
Are there any books or papers that describe a method to evaluate the performance of a scrambling code? In other words, the properties of scrambling codes.
Can anyone please suggest the best reference book on Ring Theory that is useful for a budding researcher who is interested in algebraic coding theory, codes over rings in particular?
For a given sequence, a cyclic code can be obtained by defining the generator polynomial of the code as the minimal polynomial of the sequence. Recently, this method has been used as another way to construct optimal codes. However, it seems that it is difficult to obtain optimal codes from this approach.
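For reference, the "minimal polynomial of the sequence" can be computed with the Berlekamp-Massey algorithm; a compact Python sketch over GF(2) follows (extending to GF(q) means replacing the XOR/AND arithmetic with field operations):

def berlekamp_massey(s):
    # minimal LFSR of a binary sequence s; returns (connection poly, length L)
    # with c[0] = 1 and s[i] = c[1]s[i-1] xor ... xor c[L]s[i-L] for i >= L
    n = len(s)
    c, b = [1] + [0] * n, [1] + [0] * n
    L, m = 0, -1                 # m: position of the last length change
    for i in range(n):
        d = s[i]                 # discrepancy
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t, shift = c[:], i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, b, m = i + 1 - L, t, i
    return c[:L + 1], L

# the period-2 sequence 0,1,0,1 has minimal polynomial x^2 + 1
assert berlekamp_massey([0, 1, 0, 1]) == ([1, 0, 1], 2)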
Recall that the period of a polynomial p(x) is the smallest n such that p(x) divides x^n − 1. The average period is known to be of the order of q^n. I am interested in the order of magnitude of the tail of the distribution: say, how many polynomials have period at most Kn, for some fixed constant K?
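For small degrees the period can be found by brute force; a Python sketch over GF(2) using a bitmask representation (assuming p(0) = 1, so that the period exists):

def poly_period(p_mask, deg):
    # smallest n with x^n = 1 modulo p(x) over GF(2);
    # p_mask: bit i holds the coefficient of x^i, with p(0) = 1
    r, n = 1, 0           # r is the residue of x^n mod p(x)
    while True:
        r <<= 1           # multiply by x
        if (r >> deg) & 1:
            r ^= p_mask   # reduce modulo p(x)
        n += 1
        if r == 1:
            return n

# p(x) = x^3 + x + 1 is primitive, so its period is 2^3 - 1 = 7
assert poly_period(0b1011, 3) == 7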
By systematic I mean of size 2^k, with a window of k bits in which you can see all the binary k-tuples.
What are the advantages of Joint weight enumerators in Coding Theory?
On Z_4, it is defined with respect to the Euclidean metric, while on other rings, like F_2 + uF_2, it is defined with respect to the Lee metric. Why is this so? Can anyone please explain or suggest some literature?
In general, I am interested in algebraic coding theory. Is there any example of a digital communication architecture/hardware that uses block codes or convolutional codes over rings? Personally, I am planning to pursue doctoral studies in coding theory, and I also want to learn the applied side of coding theory.
I'm working on a problem where I need an error-correcting code of length at least 5616 bits (and preferably not much longer) that can achieve correct decoding even if 40% of the received word is erroneous. I have looked into some basic coding theory textbooks, but I have not found anything that suits my purpose. Does such a code exist? If it does, what kind of code is it, and can it be efficiently realized? If it doesn't, why not?
Any insight into the problem will be much appreciated.