Science topic

Coding Theory - Science topic

Explore the latest questions and answers in Coding Theory, and find Coding Theory experts.
Questions related to Coding Theory
  • asked a question related to Coding Theory
Question
11 answers
Burning offer
I think it would be wiser to unite all the flora of the world first. In today's world, where species mobility and globalization have increased, limited local or national floras are no longer sufficient. To this end, we should combine the family identification keys and number the species like license plates, so that each species has a code number. Even if the name changes, the code remains the same. Endemics should be lettered with their own country code, and continents with their own code: (endemic, Asia, Turkey, 1750) becomes ASTUR11750, or (America, USA, 18420) becomes AMUSA18420. This code can then be associated with local names.
Relevant answer
Answer
Yes, they are necessary. However, current digital methods can reduce the time consumed and improve the efficiency of identification.
  • asked a question related to Coding Theory
Question
10 answers
I would like to discuss semantic density and semantic gravity in relation to physics concepts.
Relevant answer
Answer
Yes, an amazing topic and a great question. I was also trying to get some data, and it would be really helpful for one of my research projects if you come across any details or data. Meanwhile, I went through the details shared by Bruno De Lièvre, and yes, the thesis is of great help to me.
  • asked a question related to Coding Theory
Question
8 answers
In my research I use thematic content analysis to describe legal documents. After building our coding framework, we assigned a sample of the data to 3 independent raters. I asked them to code the data, and upon receiving their codes back, I calculated a simple agreement score by dividing the number of matching codes by the total number of codes present. The simple percentage agreement between the 3 raters is 88%. I tried to calculate Krippendorff's alpha using ReCal3 (online), Real Statistics (Excel add-in), and an R script (the icr package).
I always get aberrant values: -0.46 or -0.26, or a really low alpha like 0.10 or 0.11.
Why do you think this happens even though I have 88% agreement?
My tables consist of 3 rows (corresponding to the raters) and 26 columns (corresponding to the codes). The data are nominal; they only reflect the presence (1) or absence (0) of the category for each coder. The table is in the attached file.
Thank you
Relevant answer
Answer
I personally think that very high rates of agreement are all that really matters. As David Morse notes, various coefficients of agreement can be highly sensitive to marginals. Unfortunately, too many reviewers blindly demand kappa or alpha, despite near perfect agreement among coders.
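A minimal sketch (my own illustration, not data from the thread) of why high raw agreement can coexist with near-zero or even negative chance-corrected agreement: when one category dominates the marginals, the agreement expected by chance is itself very high. Cohen's kappa for two raters shows the same effect that Krippendorff's alpha does:

```python
def percent_agreement(a, b):
    """Fraction of items on which two raters assign the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two raters over nominal codes."""
    n = len(a)
    po = percent_agreement(a, b)
    # Expected agreement computed from each rater's marginal distribution
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

# 26 binary codes, almost all "present": raw agreement is above 90%,
# yet kappa comes out negative because chance agreement is ~93%.
rater1 = [1] * 24 + [0, 1]
rater2 = [1] * 24 + [1, 0]
```

Here `rater1`/`rater2` are invented vectors chosen only to reproduce the "high agreement, low alpha" symptom; with skewed presence/absence marginals, this behaviour is expected rather than a computation error.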
  • asked a question related to Coding Theory
Question
3 answers
I have been pondering the relationship between these two important topics of our data-driven world for a while. I have bits and pieces, but I have been hoping to find a neat and systematic set of connections that would somehow (surprisingly) bind them and fill the empty spots I have carried in my mind for the last few years.
In the past, while dealing with a multi-class classification problem (not so long ago), I came to realize that multiple binary classifications are a viable way to address it through error-correcting output codes (ECOC), a well-known coding technique whose construction requirements differ a bit from those of classical block or convolutional codes. I would like to remind you that grouping multiple classes into two superclasses (a.k.a. class binarization) can be done in various ways. You can group them totally randomly, independent of the problem at hand, or based on a set of problem-dependent constraints derived from the training data. The approach I like most sits at the intersection of information theory and machine learning. To be more precise, class groupings can be chosen to maximise the resulting mutual information, and thereby the class separation. In fact, the main objective of this method is to maximise class separation so that the binary classifiers are exposed to less noisy data and hopefully achieve better performance. On the other hand, the ECOC framework calls for coding theory and efficient encoder/decoder architectures that can handle the classification problem. The nature of the problem is not something we usually come across in communication theory and classical coding applications, though. Binarization of classes introduces noise and defect structures into the so-called "channel model" that are not common in classical communication scenarios. In other words, the solution itself changes the nature of the problem at hand. Also, the way we choose the classifiers (margin-based, etc.) will affect the characterization of the noise that impacts the detection (classification) performance. I do not know if it is possible, but what is the capacity of such a channel? What is the best code structure that addresses these requirements?
Even more interestingly, can the recurrent issues of classification (such as overfitting) be solved with coding? Maybe we can maintain a trade-off between training and generalization errors with an appropriate coding strategy?
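As a toy illustration of the ECOC idea sketched above (my own sketch, with an arbitrary code matrix rather than one optimised for class separation or mutual information), decoding by minimum Hamming distance tolerates errors made by individual binary classifiers:

```python
# Hypothetical 4-class ECOC code matrix: rows = classes, columns = the
# target labels of 7 binary classifiers (pairwise Hamming distance 4 here).
CODE = [
    (0, 0, 0, 1, 1, 1, 1),
    (0, 1, 1, 0, 0, 1, 1),
    (1, 0, 1, 0, 1, 0, 1),
    (1, 1, 0, 1, 0, 0, 1),
]

def ecoc_decode(bits):
    """Return the class whose codeword is nearest in Hamming distance."""
    dist = lambda cw: sum(a != b for a, b in zip(cw, bits))
    return min(range(len(CODE)), key=lambda c: dist(CODE[c]))

# One of the seven binary classifiers misfires on a class-2 sample;
# minimum-distance decoding still recovers class 2.
noisy = list(CODE[2])
noisy[0] ^= 1
```

With pairwise distance 4 between codewords, any single classifier error is corrected; in practice the "channel" is not symmetric, since the binary subproblems have different difficulties, which is exactly the non-classical channel model the question points at.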
Similar trends can be observed in the estimation theory realm. Parameter estimation, or likewise "regression" (including model fitting, linear programming, density estimation, etc.), can be thought of as the problem of finding the "best parameters" or "best fit", the ultimate targets to be reached. The errors due to the methods used, the collected data, etc. are problem-specific and usually dependent. For instance, density estimation is a hard problem in itself, and kernel density estimation is one approach to estimating probability density functions. Various kernels and data transformation techniques (such as Box-Cox) are used to normalize data and propose new estimation methods that meet today's performance requirements. To measure how well we do, or how different two distributions are, we again resort to information theory tools (such as the Kullback-Leibler (KL) divergence and the Jensen-Shannon divergence) and use the concepts and techniques therein (including entropy) from a machine learning perspective. Such an observation separates the typical problems posed in the communication theory arena from those of the machine learning arena, requiring a distinct and careful treatment.
Last but not least, I think there is a deep-rooted relationship between deep learning methods (and many machine learning methods per se) and the core concepts of information and coding theory. Since the hype for deep learning appeared, I have observed many studies applying deep learning methods (autoencoders, etc.) to decoding specific codes (polar, turbo, LDPC, etc.), claiming efficiency, robustness, etc. thanks to the parallel implementation and model-free nature of neural networks. However, I am wondering about the other direction. I wonder if, say, back-propagation can be replaced with more principled and efficient techniques well known in the information theory world to date. Perhaps rate-distortion theory has something to say about the optimal number of layers we ought to use in deep neural networks. Belief propagation, turbo equalization, list decoding, and many other known algorithms and models may apply quite well to known machine learning problems and will perhaps promise better and more efficient results in some cases. I know a few folks have already begun investigating neural-network-based encoder and decoder designs for feedback channels. There are many open problems, in my opinion, regarding the explicit design of encoders and the use of such networks without feedback. A few recent works have considered application areas such as molecular communications and coded computation, where a deep learning background can be applied to achieve performance that otherwise cannot be reached using classical methods.
In the end, I just wanted to toss out a few short notes here to instigate further discussion and thought. This interface will attract more attention as we see the connections clearly and bring out new applications down the road...
Relevant answer
Answer
I've been having similar thoughts on the two topics. As a matter of fact, I'd like to think about learning in the more general sense, not limited to machines. But when I put keywords like 'coding theory', 'learning', etc. into Google, most results are just about applying some information-theoretic techniques in machine learning, while I'm looking for a deeper connection that would help me understand learning better. And your post is seemingly the closest thing to what I want.
To briefly summarise my idea, I think we can treat learning as encoding, similar to the last point brought up in your post. I have to admit my ignorance, but I haven't found any works studying learning within the framework of coding theory, rather than just borrowing some convenient tools. You may have dug into the literature more since your post; please direct me to the right works/authors if you have found relevant materials.
I don't have a background in information theory, but I guess I know some naive basics of it. Many artificial neural networks can perform a denoising or pattern-completion task -- isn't that impossible from an information-theoretic point of view? How can an output ever be the 'denoised' version of a noisier input? Of course this is a naive question, but it led me to realise that learning/training is like encoding and testing/responding is like decoding. Then I had to accept that a learning system with all its training data forms an information pathway with a long (even permanent) lifespan, which should be shorter than the rate of change of the regularities underlying the data. Specifically, learning is a process by which the system compresses the aggregated noise in the training data (coding types other than compression would be more fun, but I'm not discussing them here); it treats this as information and incorporates it into its learnable parameters (which live longer than any individual datum), and as a successful outcome the system becomes capable of denoising a test sample, which is in some sense similar to decoding an encrypted message with the correct codebook. In other words, I can think of learning as a procedure by which the system minimises its lifetime entropy by data fitting. This idea is evidently hinted at by the common use of error minimisation, in terms of minimising log-likelihoods, in machine learning, but it was clearly spelt out in Smolensky's Harmonium, which differs slightly from Hinton's restricted Boltzmann machine in its optimisation goal (involving entropy). Unfortunately I'm not experienced enough to explain the technical details.
From my perspective, this research direction is extremely important and relevant when it comes to continual learning. In a more classical, static data-fitting or machine learning scenario, the learning system could in theory embrace all the training data at the same time. Minimising lifetime system entropy is then equivalent to reducing system uncertainty with respect to the training data at the exact moment it encounters the data. However, this is clearly an unrealistic assumption for humans and for many AI applications. A more realistic data stream is dynamic, and at each moment the system can only partially observe the data. Evidently, if an artificial neural network tries only to optimise itself with respect to this imperfect information, it suffers from catastrophic forgetting. So people start to tweak the learning rules or the regularisers, etc., in order to fix the problem. I do similar things too, but I feel a lack of theoretical guidance, as I believe there should be some information-theoretic quantification of the difficulty of continual learning tasks (there are some preliminary but arbitrary classifications now), at least for artificial tasks.
In summary, I believe an updated version of coding theory is needed for studying continual learning, because in this scenario the channel capacity of a learning system is affected not only by its instantaneous parameter (including structure) configuration, but also by an integral of these parameters over time.
  • asked a question related to Coding Theory
Question
4 answers
If we represent each of the small squares on each face of a cube by a base-6 digit (one of the 6 colors), can we then represent the configuration of the cube by a 54-digit number (9 stickers × 6 faces)?
Interestingly, the sequences of moves could also be represented in base 6 (representing the 6 face turns). Of course, in our case, things are not commutative. How could this be related to diagonalization? The fully solved case reminds one of Jordan's normal form and Jordan blocks.
Could the above be related to problems in coding theory ?
Relevant answer
Answer
Dear Ramchandran
There exist mathematical approaches (see the link above), but I think you want an algorithm to solve the Rubik's cube?
best regards
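On the question's first point, a purely illustrative sketch (ignoring which sticker configurations are actually reachable by legal moves): 54 stickers, each one of 6 colours, can indeed be packed into a single base-6 integer and unpacked again.

```python
def encode_state(stickers):
    """Pack 54 sticker colours (each an int 0..5) into one base-6 integer."""
    assert len(stickers) == 54 and all(0 <= s <= 5 for s in stickers)
    n = 0
    for s in stickers:
        n = n * 6 + s          # append one base-6 digit
    return n

def decode_state(n):
    """Recover the 54 sticker colours from the base-6 integer."""
    stickers = []
    for _ in range(54):
        n, s = divmod(n, 6)    # peel off digits, least significant first
        stickers.append(s)
    return stickers[::-1]
```

Note that 6^54 vastly overcounts the roughly 4.3 × 10^19 legal cube states, so this numbering is an injective labelling of configurations, not a group-theoretic one.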
  • asked a question related to Coding Theory
Question
4 answers
Even as "Explainable AI" is all the rage, coding/transformation fidelity is a critical success factor. Whether you are using frequency bands from Fourier transforms, statistical features of wavelet decompositions, or various filters in convolutional networks, researchers must be able to perform the reverse coding/transformation to see if they have retained sufficient information for classification. Without this, they are only guessing via network-architecture trial and error. These tools are sorely lacking in Keras on TensorFlow in Python, so I wrote my own. I would like to see these generalized and made into public libraries.
Question: Who can point me to current work in this area, or can give advice on next steps in my effort?
Please see this public Jupyter Notebook for an example:
Relevant answer
Answer
Decoding deep convolution networks is indeed a very fascinating topic https://arxiv.org/pdf/1901.07647.pdf
  • asked a question related to Coding Theory
Question
3 answers
My research title is: "THE EFFECT OF AN AUGMENTED REALITY TOOL IN ENHANCING STUDENTS' ACHIEVEMENT AND CREATIVE THINKING SKILLS".
I spent a lot of time reading several learning theories in order to choose the ones that best fit my research variables, and came up with: dual coding theory, information processing theory, social constructivism, cognitive constructivism, Mayer's cognitive theory of multimedia learning, Guilford's theory, and situated learning theory.
I am so confused.
Relevant answer
Answer
Hi. I want to give you a recommendation to help illuminate the basic theory of situated learning, which also relates to constructivism. Have you ever read the theory of John Dewey?
  • asked a question related to Coding Theory
Question
12 answers
In field theory, a primitive element of a finite field GF(q) is a generator of the multiplicative group of the field. In other words, α ∈ GF(q) is called a primitive element if it is a primitive (q − 1)th root of unity in GF(q); this means that each non-zero element of GF(q) can be written as α^i for some integer i.
For example, 2 is a primitive element of the fields GF(3) and GF(5), but not of GF(7), since it generates the cyclic subgroup {2, 4, 1} of order 3; however, 3 is a primitive element of GF(7).
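The definition can be checked directly for prime fields GF(p) by brute force (a small sketch of mine; for GF(q) with q a proper prime power one would need polynomial arithmetic instead of integer arithmetic mod p):

```python
def is_primitive(a, p):
    """True if a generates the full multiplicative group of GF(p), p prime."""
    powers, x = set(), 1
    for _ in range(p - 1):
        x = x * a % p          # walk through a, a^2, a^3, ... mod p
        powers.add(x)
    return len(powers) == p - 1  # a is primitive iff it hits all p-1 nonzero elements
```

This reproduces the examples in the question: 2 is primitive mod 3 and mod 5 but only generates {2, 4, 1} mod 7, while 3 is primitive mod 7.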
Relevant answer
Answer
You can use Turbo Pascal for simple programming. If you have a problem, you can call me or send me a message and I will help.
  • asked a question related to Coding Theory
Question
5 answers
Hi
I have published this paper recently
In that paper, we did an abstracted simulation to get an initial result. Now, I need to do a detailed simulation on a network simulator.
So, I need a network simulator that implements or supports MQTT-SN, or some implementation of MQTT-SN that would work in a network simulator.
Any hints please?
Relevant answer
Answer
Hello,
Any network simulator, e.g., Netsim, ns2 or any IoT simulator.
  • asked a question related to Coding Theory
Question
4 answers
Could someone give the names of some basic papers on DNA storage systems from a coding theory angle? I want to learn from the basics. Thanks in advance.
Relevant answer
Answer
Send me your email ID, or write an email to me at abhay@iitism.ac.in.
  • asked a question related to Coding Theory
Question
2 answers
I want to start my PhD in applied mathematics. I want to continue my master's work on coding theory, but this time I also want to work in a non-mathematical field together with coding theory.
Can anyone suggest some topics related to my previous work on coding theory?
Relevant answer
Answer
I suggest the following general topic, which I think could be suitable for your PhD:
Efficient non-binary LDPC encoder and decoder design.
  • asked a question related to Coding Theory
Question
11 answers
In the book "Great Software Debates", Alan M. Davis states in the chapter "Requirements", sub-chapter "The Missing Piece of Software Development"
Students of engineering learn engineering and are rarely exposed to finance or marketing. Students of marketing learn marketing and are rarely exposed to finance or engineering. Most of us become specialists in just one area. To complicate matters, few of us meet interdisciplinary people in the workforce, so there are few roles to mimic. Yet, software product planning is critical to the development success and absolutely requires knowledge of multiple disciplines.
What are these (multiple) disciplines?
Please anyone with insights should help illuminate. Thank You
Relevant answer
Answer
Obviously in both. For specific software such as C, C++, Java, SQL, or Oracle, intra-disciplinary coding is most common. But connecting a GUI tool such as Microsoft Visual Studio to a database such as Oracle to create a good, user-friendly application is interdisciplinary.
  • asked a question related to Coding Theory
Question
6 answers
In their paper "A new upper bound on the minimal distance of self-dual codes", Conway and Sloane wrote (Theorem 5 iv):
"If we write
W(x,y) =
\sum\limits_{j=0}^{\lfloor\frac{n}{8}\rfloor}{a_j(x^2+y^2)^{\frac{n}{2}-4j}(x^2y^2(x^2-y^2)^2)^j}
using Gleason's theorem, for suitable INTEGERS a_j, then..."
My question is about these coefficients. In the books on Coding Theory that I read the coefficients are presented as complex, real or rational numbers. Has somebody proved that these coefficients are integers?
Relevant answer
Answer
Dear Stefka Bouyuklieva,
Direct calculations agree with the authors' claims in the mentioned paper.
See the attached file, which shows that some of the factors have integer coefficients; similar calculations work for all the other coefficients.
Best regards
  • asked a question related to Coding Theory
Question
3 answers
In the literature, all I am seeing is the Hamming code with minimum Hamming distance 3. Theoretically, a code with minimum distance d can detect d − 1 errors and correct ⌊(d − 1)/2⌋ errors. So a minimum distance of 3 can detect 2 errors and correct 1 error.
So here is my question: Is it possible to generate a Hamming-like code with a minimum distance of 5 which can correct 2-bit errors? If yes, could anyone please give me a suggestion about that? If not, then please give me an explanation.
Thanks in advance.
Relevant answer
Answer
Sourav, you are right: a Hamming code can correct single errors. There are other codes that can correct two or even more errors. They have all kinds of names and are constructed in different ways (they are not Hamming codes). Greetings, Kees
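To make the single-error case concrete, here is my own sketch of the binary Hamming(7,4) code, using the arrangement in which column j of the parity-check matrix is the binary representation of j, so the syndrome directly names the error position:

```python
def syndrome(word):
    """XOR of the 1-based positions holding a 1; zero for a valid codeword."""
    s = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            s ^= pos
    return s

def correct_single_error(word):
    """Fix at most one flipped bit: a nonzero syndrome is the error position."""
    w = list(word)
    s = syndrome(w)
    if s:
        w[s - 1] ^= 1
    return w

codeword = [0, 0, 1, 0, 1, 1, 0]   # 1s at positions {3, 5, 6}: 3 ^ 5 ^ 6 == 0
received = codeword[:]
received[1] ^= 1                   # channel flips bit 2
```

With minimum distance 3 this corrects exactly one error; for guaranteed 2-error correction (minimum distance 5), one moves to other families such as 2-error-correcting BCH codes, in line with the answer above.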
  • asked a question related to Coding Theory
Question
2 answers
Hi
I have a question regarding constructions of high-rate LDPC codes. My research field is not coding, but somewhere in my research I need an explicit way of constructing high-rate LDPC codes with girth 6 or 8. I think high rate is not an important factor in coding, but in compressed sensing it is of great importance, since it underlies the main assumption of compressed sensing.
I would like to know whether Cyclic or Quasi-cyclic LDPC codes of girth 6 or 8 can provide high-rate or not? Any suggestion is appreciated!
Thanks
Mahsa
Relevant answer
Answer
Thanks! It was helpful!
  • asked a question related to Coding Theory
Question
3 answers
I am working on the radical theory of rings and modules. I would love to improve my research on the applications of the prime radical or the Jacobson radical.
Relevant answer
Answer
The following was suggested by Wiwat Wanicharpichat in answer to a related question:
Ricardo Alfaro and Andrei Kelarev, Recent Results on Ring Constructions for Error-Correcting Codes, Contemporary Mathematics, 376, 2005, pp. 1-10.
  • asked a question related to Coding Theory
Question
3 answers
In quasi cyclic codes we define H by adopting one of the several techniques proposed in literature and the null space over GF(q) of H gives a QC-LDPC code C.
Relevant answer
Answer
Finding the null space of a parity-check matrix is routine linear algebra. Maybe if you want the generator matrix of the dual in QC form there is some theory to work out...
Or is your question: how to find good QC-LDPC codes over GF(q)?
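For concreteness, a small sketch of my own (plain Gaussian elimination over GF(2), with no attempt at the sparse techniques one would use at real LDPC block lengths) of reading a binary code off as the null space of H:

```python
def nullspace_gf2(H):
    """Basis of the null space of a binary matrix H over GF(2)."""
    rows = [list(r) for r in H]
    m, n = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n):                      # forward + backward elimination
        pivot = next((i for i in range(r, m) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(m):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [0] * n
        v[free] = 1                         # one basis vector per free column
        for row, pc in zip(rows, pivots):
            if row[free]:
                v[pc] = 1
        basis.append(v)
    return basis

# Toy example: H is the parity-check matrix of the length-3 repetition code
H = [[1, 1, 0],
     [0, 1, 1]]
```

For GF(q) with q > 2 the XORs become arithmetic mod q with explicit inverses, but the structure of the computation is the same.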
  • asked a question related to Coding Theory
Question
2 answers
The answer should be in [T. Kasami, N. Tokura, S. Azumi, On the weight enumeration of weights less than 2.5d of Reed-Muller codes, Inform. Contr. 30:4, 380-395, 1976. https://doi.org/10.1016/S0019-9958(76)90355-7 ], but I see contradictory information there. The final table states that there is nothing between 2d and 2.125d. But from Lemma 1 it can be seen that 2d + sqrt(d) is possible. For example, the Boolean function (1 + x1 x2 + x3 x4 + x5 x6 + x7 x8 + x9 x0) of degree 2 has weight 2.0625d (d = 256). Maybe I am misunderstanding something. My goal is to make a correct reference about weights less than 2.5d in RM(m,r)...
Relevant answer
Answer
It seems that the mistake in the paper has been localized. One sequence is indeed missing from the final table. The mistake is that the claim "the second term is zero except ..." after (23) is wrong. As a result, one sequence is missing from the final table, which becomes essential if m − r > 7 (see the example in the starting post).
Another problem is that the paper does not contain the complete proofs, which are in another paper by the same authors with the same title in Rept. of Faculty of Eng. Sci., Osaka Univ., Japan, 1974.
I think that the results can be used after including the missing sequence and carefully checking all the proofs.
It should be said, however, that the authors did a big job with the classification of the polynomials of small weight. The full text with the proofs is a 225-page preprint.
The notes above are the result of discussions with several mathematicians. They are not of general interest but may be useful if you plan to use the results of the discussed paper.
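The specific example in the question can be verified by brute force (a quick sketch of mine; here d = 2^(m−r) = 256 for m = 10, r = 2, reading x9 x0 as the last disjoint pair of the ten variables):

```python
from itertools import product

def weight_example():
    """Hamming weight of f = 1 + x1x2 + x3x4 + x5x6 + x7x8 + x9x0 on GF(2)^10."""
    w = 0
    for x in product((0, 1), repeat=10):
        f = 1
        for i in range(0, 10, 2):      # five disjoint quadratic terms
            f ^= x[i] & x[i + 1]
        w += f
    return w
```

The count comes out to 528 = 2 · 256 + 16 = 2d + sqrt(d), i.e. 2.0625d, which is exactly the weight the question says falls in the gap claimed by the final table.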
  • asked a question related to Coding Theory
Question
6 answers
Dear all
I am writing a paper in the field of LDPC codes. I have a problem simulating my proposed parity-check matrix and obtaining the chart of bit error rate vs. SNR values. I need an expert co-author in this field to finish my paper. Can anyone help me as a co-author by running the simulations? Please contact me and I will explain more details.
Relevant answer
Answer
Dear Dr.Kaining Ham
Thank you for your comment.
  • asked a question related to Coding Theory
Question
5 answers
This question is still unanswered for me. For example, CDs use Reed-Solomon codes for this purpose. It would be great if someone could refer me to a technical paper on this particular topic.
Thanks
Relevant answer
Answer
I think LDPC codes with bit-flipping decoding are currently under investigation. Please see the latest issues of IEEE JSAC.
  • asked a question related to Coding Theory
Question
5 answers
Let us explore the following number-theoretic question: suppose we have what we shall refer to as a Prime-Indexical Base Number System, such that the sum of any two primes is the next prime in the series of primes. Such a system would contain all and only prime numbers. For instance, 1+2=3, 3+2=5, 5+2=7; but 5+3=11, and 7+3=11 also. Furthermore, 11+13=29 and 29+31=61, and so on. Can we render a formula that would make such a number system consistent?
Relevant answer
Answer
I don't have any proof for what I say, but it feels right to me.
It is impossible to make a consistent prime-indexical base number system, because there exists no closed-form relation for the primes, i.e., for predicting the nth prime. So eventually we need to know the gaps between the primes to build a mechanism for the system; hence my hypothesis.
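One consistent reading of the question's examples (my own interpretation, not something stated in the thread: define a ⊕ b as the smallest prime greater than or equal to a + b) does reproduce every sum listed:

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_add(a, b):
    """Hypothetical closed operation: the smallest prime >= a + b."""
    n = a + b
    while not is_prime(n):
        n += 1
    return n
```

Under this reading the system is well defined (the operation is closed on the primes), but it is not associative in general, which is one precise sense in which "consistency" as an ordinary arithmetic fails.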
  • asked a question related to Coding Theory
Question
4 answers
Hello everyone! I am currently searching for algebraic structures which I will use for constructing codes for my research. I would appreciate any idea that you will be able to share regarding my question. Thanks.
Complete question: It is known that the nilradical N(R) of a commutative ring R is contained in the Jacobson radical J(R) of R. Now, if J(R) = N(R) and N(R) is nontrivial, is there any significant use of this in establishing an algebraic structure to be applied in coding theory?
Relevant answer
Thanks for this, Prof. Wanicharpichat!
  • asked a question related to Coding Theory
Question
2 answers
Hi all,
for my current research project, I use ATLAS.ti for the qualitative analysis of my interview data. With the help of the CAT add-on, intercoder reliability can be assessed. I have never done this before and I am not sure which option to pursue:
i) should the second coder use my existing codebook, which emerged from open coding the interviews in the first place, or
ii) should I let the second coder simply open-code the interviews as well?
Any best-practice hints on this would be greatly appreciated! :)
Best wishes
Jens
Relevant answer
Answer
Hi Jens,
I am not used to ATLAS.ti at all, which may be specific, but if coding for this add-on is like any other software, I think the most important thing is communication between the developers.
I would say it depends on whether all developers are going to touch all of the code, or whether you can partition it (each developer working on a distinct portion) and rely on interfaces to make the parts interoperable.
Regards,
Maxime
  • asked a question related to Coding Theory
Question
2 answers
The Gray map f : F_2 + uF_2 → F_2^2 is f(x + uy) = (y, x + y).
Relevant answer
Answer
You can read papers on rings of this type: F_2 + uF_2, F_2 + uF_2 + u^2F_2, F_2 + uF_2 + u^2F_2 + u^3F_2, and so on (i.e., the chain-ring idea).
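The stated map is small enough to write out completely (a sketch of mine; bits are in {0, 1}, and the element x + uy ∈ F_2 + uF_2 is represented by the pair (x, y)):

```python
def gray_map(x, y):
    """Gray map on F_2 + uF_2: sends x + u*y to (y, x + y), addition in F_2."""
    return (y, x ^ y)

# The four ring elements 0, 1, u, 1+u map to 00, 01, 11, 10 respectively
images = {(x, y): gray_map(x, y) for x in (0, 1) for y in (0, 1)}
```

This is the classical isometry: the Lee weights 0, 1, 2, 1 of 0, 1, u, 1+u become the Hamming weights of their images 00, 01, 11, 10.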
  • asked a question related to Coding Theory
Question
3 answers
i.e. compare all the possible codewords.
Relevant answer
Answer
I think that for a code of short length, ML decoding by comparing all possible codewords is feasible.
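A brute-force sketch of my own (exponential in the dimension k, so only sensible for short codes) of exactly this exhaustive ML decoding: under a binary symmetric channel, maximum likelihood reduces to choosing the codeword at minimum Hamming distance from the received word.

```python
from itertools import product

def encode(msg, G):
    """Multiply a message row vector by generator matrix G over GF(2)."""
    n = len(G[0])
    return [sum(m * G[i][j] for i, m in enumerate(msg)) % 2 for j in range(n)]

def ml_decode(received, G):
    """Compare the received word against all 2^k codewords; keep the nearest."""
    best, best_d = None, None
    for msg in product((0, 1), repeat=len(G)):
        cw = encode(msg, G)
        d = sum(a != b for a, b in zip(cw, received))
        if best_d is None or d < best_d:
            best, best_d = cw, d
    return best
```

For the length-3 repetition code (G a single all-ones row), this is just majority voting on the received bits.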
  • asked a question related to Coding Theory
Question
3 answers
Recently, some researchers have worked on the infinite structure of matroids.
The independence axioms of matroids play a primary role in the definition of codes over GF(q); we can see this point in representable matroids. Now, can we give a sensible definition of this point for the codes related to infinite matroids?
Relevant answer
Answer
Hi Hossein.
I haven't thought much about this issue, but I found the attached paper on the arXiv.
Section 2.6 seems to touch on the issue of infinite matroids being representable over a field.
If that field is GF(q), one is close to your question, I assume.
  • asked a question related to Coding Theory
Question
5 answers
I have been applying in-vivo codes to various established definitions in two fields of study, in order to determine whether theories derived from the code analysis can be grouped into the two fields. What has emerged is a set of 141 codes, grouped into 11 logical categories.
These categories can be linked to some prominent theories such as System Theory and Institution Theory but others cannot. 
I need assistance with:
  1. Are there any principles in linking coding outputs to established theories?
  2. Are there any principles in handling code categories that do not logically link to established theories?
Thanks. 
Relevant answer
Answer
I think your general goal is quite coherent: first inductively coding your material as closely to the participants' own words as possible, and then comparing that coding system to a deductive framework based on existing theory.
As reasonable as this might be, I'm not aware of examples of anyone else following the same path. In terms of reporting your findings, you could create a table with 11 rows and 2 columns, where the cells that match existing theory have small amounts of text to explain the linkage, and then blank cells when your code categories do not match existing theory. The core of your Results section would then be to describe the innovative content that you have found.
As far as Grounded Theory goes, you have certainly followed the recommendation of going from open (initial) coding into categorization. The strict use of in vivo codes is a bit unusual, but certainly understandable for your purposes. Beyond that, the goal of comparing a set of categories (which is less than a theory) to preexisting theories is not a very good fit to GT. So I personally would say that you used GT methods in ways that ultimately serve a different purpose.
Of course, you did not mention GT in your original post, and I'm not sure why you want to claim it. The problem being that many GT proponents have a tendency to argue over boundaries -- what is and is not acceptable as GT. Since your work is also compatible with Braun & Clarke's (2006) thematic analysis, I would recommend aligning yourself with that widely used approach to qualitative analysis.
  • asked a question related to Coding Theory
Question
3 answers
Since their inception, almost all attacks on the McEliece cryptosystem and its variant, the Niederreiter cryptosystem, have been unsuccessful. They are believed to resist even quantum attacks. These two are code-based public-key cryptosystems whose private key contains a generator matrix of a Goppa code. All modified versions of these cryptosystems that were proposed by replacing Goppa codes with other codes have been proven insecure. In view of this, what structural advantage do Goppa codes enjoy over other families of codes that makes them so strong for cryptographic applications?
Relevant answer
Answer
@AR Reddy: the Goppa codes used in crypto are introduced by elementary means, i.e., explicit generator matrices. While they can be described by means of AG codes, that is hardly relevant here. Historically they were introduced **before** general AG codes on curves. Also, I don't understand your comment on the size of the key, which depends only on the parameters n, k, q and not on the choice of the family of codes...
  • asked a question related to Coding Theory
Question
9 answers
I'm implementing successive cancellation decoding of a polar code in Matlab; however, I can't get any good results.
I'm following the algorithm in the paper "List Decoding of Polar Codes". However, the result is always zeros, and I don't know why.
Relevant answer
Answer
Hi,
I just see your post, I can help you with that in case you are still interested. 
  • asked a question related to Coding Theory
Question
3 answers
I have to calculate the minimal distance of a finite-length LDPC code to compare its performance with other codes.
Relevant answer
Answer
You can find the minimum Hamming distance of LDPC codes using their weight distribution function. For the computation of the weight distribution function of low-density parity-check (LDPC) code ensembles, please check [1], where it is shown how the minimum distance of LDPC codes depends on the degree distribution pair of the code.
[1] C. Di, T. Richardson and R. Urbanke, “Weight distribution of low-density parity-check
codes,” IEEE Trans. Inform. Theory, vol. 52, no. 11, pp. 4839–4855, November 2006. 
  • asked a question related to Coding Theory
Question
3 answers
It appears to be based on algorithms in the literature such as the one by Minn, Zeng and Bhargava.
Relevant answer
Answer
MathWorks help documents refer to the following references. I will check the Matlab code and let you know which part has changed.
T. M. Schmidl and D. C. Cox, "Low-Overhead, Low-Complexity Synchronization for OFDM," IEEE International Conference on Communications, vol. 3, 1996, pp. 1301-13.
Schmidl, T. M.; Cox, D. C., "Robust Frequency and Timing Synchronization for OFDM," IEEE Transactions on Communications, vol. 45, no. 12, pp. 1613-1621, Dec. 1997.
IEEE Std 802.11a, "Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," 1999.
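For reference, the Schmidl-Cox timing metric M(d) = |P(d)|^2 / R(d)^2 is easy to reproduce outside the toolbox. A noiseless sketch (assumed parameters, not from the thread: half-length L = 32 and a preamble of two identical halves placed at an arbitrary offset):

```python
# Schmidl-Cox timing metric for a preamble whose two halves of length L
# are identical. At the true preamble start, the two correlation windows
# coincide, so P(d) = R(d) and M(d) = 1 exactly in the absence of noise.
import random

random.seed(1)

def cgauss():
    return complex(random.gauss(0, 1), random.gauss(0, 1))

L = 32
offset = 40                                  # assumed true preamble position
half = [cgauss() for _ in range(L)]
r = ([cgauss() for _ in range(offset)]       # leading data
     + half + half                           # repeated-half preamble
     + [cgauss() for _ in range(40)])        # trailing data

def metric(r, L):
    M = []
    for d in range(len(r) - 2 * L):
        P = sum(r[d + m].conjugate() * r[d + m + L] for m in range(L))
        R = sum(abs(r[d + m + L]) ** 2 for m in range(L))
        M.append(abs(P) ** 2 / R ** 2)
    return M

M = metric(r, L)
```

With a cyclic prefix the metric forms a plateau rather than a sharp peak, which is why the 1997 paper averages over the plateau.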
  • asked a question related to Coding Theory
Question
5 answers
I have to calculate the minimal distance of a finite-length LDPC code to compare its performance with other codes.
Relevant answer
Answer
Dear Wassila Belferdi,
   If you have the parity-check matrix H of an LDPC code with constant column weight and girth at least 6, then the column weight of H plus 1 gives a lower bound on the minimum distance.
Good luck
Shigui
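For short codes, the minimum distance can also be computed exactly by brute force: enumerate the null space of H and take the smallest nonzero weight. A sketch (using the (7,4) Hamming code's parity-check matrix rather than an LDPC matrix, purely because it is small and its minimum distance, 3, is known):

```python
# Brute-force minimum distance from a parity-check matrix H over GF(2):
# the smallest Hamming weight of a nonzero x with H x^T = 0.
# Only feasible for short lengths (2^n candidate vectors).
from itertools import product

def min_distance(H):
    n = len(H[0])
    best = None
    for x in product([0, 1], repeat=n):
        if not any(x):
            continue
        if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H):
            w = sum(x)
            best = w if best is None else min(best, w)
    return best

# (7,4) Hamming code: the columns of H are all nonzero 3-bit vectors
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

For realistic LDPC lengths this enumeration is hopeless, which is why ensemble results like [1] in the other thread are used instead.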
  • asked a question related to Coding Theory
Question
4 answers
I need this for a cotton-harvesting robot.
Relevant answer
Answer
  • asked a question related to Coding Theory
Question
6 answers
In real life, which one should we apply first? To transform digital information we apply coding theory, and for security we apply cryptography. Is cryptography applied before coding, after coding, or both?
Relevant answer
Answer
You have two main types of coding:
1- Source coding: plain data is compressed by reducing or eliminating the redundancy.
2- Channel coding: redundant bits are added to your compressed data to enable error detection and/or correction (there is a trade-off between error-detection/correction capability and the final compression ratio).
To secure your data with a key, encryption can be applied. In this case, it should be applied right after source coding (i.e., you secure only the useful, compressed data). Then, channel coding is applied to the encrypted data to enable error correction.
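The order above can be sketched end to end. In this toy Python pipeline (my stand-ins, not from the answer: zlib for source coding, a keyed XOR stream as a placeholder cipher that is NOT secure, and a 3x repetition code for channel coding), a single channel bit error is corrected before decryption and decompression:

```python
# Source coding -> encryption -> channel coding, then the reverse pipeline.
# zlib stands in for source coding; the keyed XOR stream is a toy cipher
# (NOT secure); the 3x repetition code is a stand-in for a real channel code.
import random
import zlib

def xor_stream(data, key):
    rng = random.Random(key)            # toy keystream, same for enc/dec
    return bytes(b ^ rng.randrange(256) for b in data)

def rep3_encode(data):
    bits = [(b >> i) & 1 for b in data for i in range(8)]
    return [bit for bit in bits for _ in range(3)]   # repeat each bit 3x

def rep3_decode(bits):
    voted = [1 if sum(bits[i:i + 3]) >= 2 else 0     # majority vote per triple
             for i in range(0, len(bits), 3)]
    return bytes(sum(voted[j + i] << i for i in range(8))
                 for j in range(0, len(voted), 8))

message = b"compress first, then encrypt, then add channel coding"
tx = rep3_encode(xor_stream(zlib.compress(message), key=42))
tx[17] ^= 1                             # one bit error on the channel
rx = zlib.decompress(xor_stream(rep3_decode(tx), key=42))
```

Reversing the order (encrypting after channel coding) would let channel errors corrupt the ciphertext before any correction, and compressing after encryption is ineffective because ciphertext has no redundancy left.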
  • asked a question related to Coding Theory
Question
7 answers
Where should I start writing simple Matlab code for LDPC?
Any suggestions or comparison plots in Matlab?
Relevant answer
Dear Sardar, please find a complete Matlab/Simulink implementation of an LDPC code in the paper found at this link: https://www.researchgate.net/publication/258019751_Implementation_for_Two-Stage_Hybrid_Decoding_for_Low_Density_Parity_Check_%28LDPC%29_Codes
In the paper you will find the algorithm, and you can start your own implementation. If you have any further questions, you can contact my coworker, Eng. Hend Oraby,
at the email address: hend.orabi@yahoo.com
Wish you success
  • asked a question related to Coding Theory
Question
13 answers
It is a classification of JavaScript code into malicious and non-malicious classes.
Relevant answer
Answer
Hi,
There could be many reasons. One may be that SVMs generally need pre-processed data, as ANNs do, e.g., features normalized to a range such as [-1, 1], whereas RF can handle raw data better.
Check the guide to SVM usage, where they show that poor usage can make a large difference (>20%) in performance.
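The normalization step mentioned above is easy to do by hand. A small sketch of per-feature min-max scaling to [-1, 1] (pure Python, toy data; the ranges are fit on training data only and then reused on any other split):

```python
# Per-feature min-max scaling to [-1, 1], the kind of pre-processing an
# SVM typically benefits from. Fit ranges on training data; reuse them
# unchanged when scaling validation or test data.
def fit_minmax(rows):
    return [(min(col), max(col)) for col in zip(*rows)]

def scale(rows, ranges):
    return [[0.0 if hi == lo else 2.0 * (v - lo) / (hi - lo) - 1.0
             for v, (lo, hi) in zip(row, ranges)]
            for row in rows]

train = [[0.0, 100.0], [5.0, 300.0], [10.0, 200.0]]
ranges = fit_minmax(train)
scaled = scale(train, ranges)
```

Fitting the ranges on the full dataset instead of the training split would leak test-set statistics into the model.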
  • asked a question related to Coding Theory
Question
5 answers
From a Makefile, can I get the execution sequence of the software?
For example, from the Makefile, can I get the details of the Linux kernel?
Relevant answer
No. At best, the Makefile will describe the dependencies between the code (.c) and header (.h) files.
If you want to know about the software itself, you'll need to study the source code. Even better, you could study its documentation.
If the documentation is not available, you can run doxygen over the source code; it will extract the documentation, if any exists, and describe the relationships between entities in the code.
  • asked a question related to Coding Theory
Question
2 answers
How can Vedic mathematics solve the problems associated with error-correcting codes?
Is there any application in finite fields to solve the 'key equation solver' block of an RS code?
Relevant answer
Answer
I doubt RS over GF(256) can be decoded mentally. However, there are well-known papers, by Vera Pless for instance, about decoding the Golay code by hand.
  • asked a question related to Coding Theory
Question
7 answers
I think it's related to the problem of computing the smallest-weight basis of a matroid.
Relevant answer
Answer
Assuming you are only interested in algorithms with feasible complexity :-) I hazard a guess that this is very difficult. A related problem, deciding whether a code has a word with weight below a given bound, is proven to be NP-complete. I have a vague recollection that someone (Alex Vardy?) proved that another related problem, minimizing the trellis of a linear block code, is similarly nasty, but I may be wrong about that.
  • asked a question related to Coding Theory
Question
7 answers
What is the best software package available for carrying out computations related to algebraic coding theory, especially codes over rings?
Relevant answer
Answer
  • asked a question related to Coding Theory
Question
3 answers
I am working on the security of steganographic systems. When I was reading this interesting paper about steganographic codes, I didn't understand why, in Section III, they say that "for the security of steganographic systems, we hope there are enough stego-codes, especially binary codes."
Relevant answer
Answer
@Tanjona: It could be interesting to invite the authors to participate in this conversation, or to contact them directly by private email.
Let us know when you find out more! ;-)
  • asked a question related to Coding Theory
Question
3 answers
I am implementing Reed-Solomon codes in Verilog and I need some help designing the Euclidean block.
Relevant answer
Answer
The theory is in MacWilliams and Sloane; check also McEliece's The Theory of Information and Coding. That being said, the implementation is probably a pain.
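Before committing to Verilog, it can help to prototype the Euclidean key-equation block in software. Below is a hedged sketch over the toy field GF(7) (my assumptions, not from the thread: primitive element alpha = 3, t = 1, an all-zero transmitted codeword, and a single symbol error, so the error value can be read off S1 directly):

```python
# Extended-Euclidean key-equation solver sketch for Reed-Solomon decoding
# over the toy field GF(7). Run Euclid on (x^{2t}, S(x)) until the
# remainder has degree < t; the last Bezout coefficient is the error
# locator sigma(x), with sigma(alpha^{-i}) = 0 at each error position i.
# Polynomials are coefficient lists, lowest order first.
P, ALPHA, N, T = 7, 3, 6, 1

def pdeg(a):
    d = len(a) - 1
    while d > 0 and a[d] == 0:
        d -= 1
    return d

def pdivmod(a, b):
    """Polynomial division a = q*b + r over GF(P)."""
    a = a[:]
    q = [0] * len(a)
    db = pdeg(b)
    inv = pow(b[db], P - 2, P)           # inverse of the leading coefficient
    while any(a) and pdeg(a) >= db:
        da = pdeg(a)
        c = a[da] * inv % P
        q[da - db] = c
        for i, bc in enumerate(b[:db + 1]):
            a[i + da - db] = (a[i + da - db] - c * bc) % P
    return q, a

def psub_qmul(t0, t1, q):
    """t0 - q*t1 over GF(P)."""
    out = t0[:] + [0] * (len(q) + len(t1))
    for i, qc in enumerate(q):
        for j, tc in enumerate(t1):
            out[i + j] = (out[i + j] - qc * tc) % P
    return out

def key_equation(S):
    """Euclid on (x^{2t}, S(x)) until deg(r) < t; returns sigma(x)."""
    r0, r1 = [0] * (2 * T) + [1], S[:]
    t0, t1 = [0], [1]
    while any(r1) and pdeg(r1) >= T:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, psub_qmul(t0, t1, q)
    return t1                            # error locator, up to a scalar

def peval(a, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(a)) % P

# Received word r(x) = 5*x^4: a single error of value 5 at position 4.
S = [peval([0, 0, 0, 0, 5], pow(ALPHA, j, P)) for j in (1, 2)]   # [S1, S2]
sigma = key_equation(S)
pos = next(i for i in range(N)
           if peval(sigma, pow(ALPHA, (N - i) % N, P)) == 0)
err = S[0] * pow(ALPHA, (N - pos) % N, P) % P                    # e = S1*alpha^{-pos}
```

The same divide/update loop is what the hardware Euclidean block iterates; in GF(2^m) the mod-P arithmetic is replaced by XOR-based field operations, and for t > 1 the error values come from Forney's formula rather than from S1.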
  • asked a question related to Coding Theory
Question
1 answer
Are there any books or papers that describe a method to evaluate the performance of a scrambling code? In other words, the properties of scrambling codes.
Relevant answer
Answer
  • asked a question related to Coding Theory
Question
8 answers
Why not any polynomial?
Relevant answer
Answer
@Rama: in particular, one nice property of a Galois ring is that it is local, which would not be the case with a reducible polynomial.
  • asked a question related to Coding Theory
Question
3 answers
Can anyone please suggest the best reference book on ring theory for a budding researcher who is interested in algebraic coding theory, and codes over rings in particular?
Relevant answer
Answer
Z. X. Wan's book Quaternary Codes; also some parts of Huffman and Pless, Fundamentals of Error-Correcting Codes. Eventually, my own book Codes over Rings (World Scientific) contains
interesting material, but it is more advanced.
  • asked a question related to Coding Theory
Question
4 answers
For a given sequence, a cyclic code can be obtained by defining the generator polynomial of the code as the minimal polynomial of the sequence. Recently, this method has been used as another way to construct optimal codes. However, it seems difficult to obtain optimal codes from this approach.
Relevant answer
Answer
I think what Ding is doing is essentially constructing cyclic codes by their Mattson-Solomon polynomial (a polynomial over an extension field). This is different from identifying, up to reciprocation, the parity-check polynomial of the code with the characteristic polynomial of the sequence, both polynomials being over the base field.
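On the sequence-to-polynomial direction discussed above: recovering the minimal (connection) polynomial of a sequence is exactly what the Berlekamp-Massey algorithm computes. A sketch over GF(2) (conventions assumed: C[0] = 1, coefficients lowest order first):

```python
# Berlekamp-Massey over GF(2): returns the linear complexity L and a
# connection polynomial C (list of bits, C[0] = 1) such that
# XOR_{i=0}^{L} C[i] * s[n-i] = 0 for all n >= L.
def berlekamp_massey(s):
    C, B = [1], [1]
    L, m = 0, 1
    for n in range(len(s)):
        # discrepancy between the predicted and the actual bit
        d = s[n]
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
        else:
            T = C[:]
            # C(x) <- C(x) + x^m * B(x)
            C += [0] * (m + len(B) - len(C))
            for i, b in enumerate(B):
                C[i + m] ^= b
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
            else:
                m += 1
    return L, C
```

For an LFSR output of linear complexity L, 2L consecutive bits suffice for the algorithm to recover the recurrence.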
  • asked a question related to Coding Theory
Question
4 answers
Recall that the period of a polynomial p(x) is the smallest n such that p divides x^n - 1. The average is known to be of the order of q^n. I am interested in the order of magnitude of the tail of the distribution, say, how many polynomials have period at most Kn for some fixed constant K.
Relevant answer
Answer
Hi Selda,
Thanks for the tip. Part of the difficulty is that I want to count all polynomials, not just the irreducible ones.
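For small degrees the tail can at least be explored empirically over all polynomials, reducible ones included. A sketch (my assumptions: GF(2), the polynomial represented as an integer bitmask with bit i the coefficient of x^i, and a nonzero constant term so the period exists) that computes the period by repeated multiplication by x:

```python
# Period of a binary polynomial p(x): the smallest n >= 1 with
# x^n = 1 (mod p(x)), i.e. p | x^n - 1. Brute force; fine for small degrees.
def period(p):
    assert p & 1, "constant term must be nonzero"
    deg = p.bit_length() - 1
    r, n = 1, 0
    while True:
        r <<= 1              # multiply by x
        if r >> deg:         # reduce modulo p(x) when the degree hits deg
            r ^= p
        n += 1
        if r == 1:
            return n
```

For a product of coprime factors the period is the lcm of the factor periods, e.g. (x^3+x+1)(x^2+x+1) = x^5+x^4+1 has period lcm(7, 3) = 21, which is how reducible polynomials enter the count.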
  • asked a question related to Coding Theory
Question
4 answers
By systematic I mean size 2^k with a window of k bits where you can watch all the binary k-tuples.
Relevant answer
Answer
@Singh: by systematic I mean size 2^k with a window of k bits where you can watch all the binary k-tuples. I know this old paper fairly well, and it deals with constant-weight codes, not systematic ones.
  • asked a question related to Coding Theory
Question
2 answers
What are the advantages of Joint weight enumerators in Coding Theory?
Relevant answer
Answer
They might serve as an intermediate step to compute a weight enumerator of a multilevel construction. E.g., over F4, construct a code as C + wD, with C and D binary. Then knowing the joint weight enumerator of C and D will give you the complete weight enumerator of C + wD.
  • asked a question related to Coding Theory
Question
1 answer
Over Z_4, it is defined with respect to the Euclidean metric, while over other rings, like F2 + uF2, it is defined with respect to the Lee metric. Why is that so? Can anyone please explain or suggest some literature?
Relevant answer
Answer
Well, historically, Type II codes are the so-called doubly even self-dual binary codes. (There are five types, the fifth being trivial; search for the Gleason-Pierce-Turyn theorem on divisible codes.) Then Conway and Sloane defined Type II lattices by analogy. So I introduced Type II Z4 codes because Construction A yields Type II lattices (see my paper with Bonnecaze, Mourrain, and Bachoc); in that setting the Euclidean metric is natural. Over other rings, we want to produce binary Type II codes, hence the metric used for F2 + uF2 or even F4.
  • asked a question related to Coding Theory
Question
3 answers
In general, I am interested in algebraic coding theory. Is there any example of a digital communication architecture/hardware that uses block codes or convolutional codes over rings? Personally, I am planning to pursue doctoral studies on coding theory, and I also want to learn the applied side of it.
Relevant answer
Answer
The motivation for block codes over rings comes from spreading sequences for CDMA. The motivation for convolutional codes over rings comes from image processing; see my paper with Billy Sison.
  • asked a question related to Coding Theory
Question
12 answers
I'm working on a problem where I need an error-correcting code of length at least 5616 bits (and preferably not much longer) that can achieve correct decoding even if 40% of the received word is erroneous. I have looked into some basic coding theory textbooks, but I have not found anything that suits my purpose. Does such a code exist? If it does, what kind of code is it, and can it be efficiently realized? If it doesn't, why not?
Any insight into the problem will be much appreciated.
Relevant answer
Answer
I agree with Patrick. Correcting 40% random errors cannot be achieved at that length with any useful rate: the capacity of a binary symmetric channel with crossover probability 0.4 is 1 - H(0.4), roughly 0.03, so even an ideal code would carry only about 3% information. If the errors are not random, that is a different story, but in that case we need a model for the channel (such as a bursty channel), as other people have pointed out.
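To attach numbers to this: if the 40% errors are modeled as a binary symmetric channel with crossover probability 0.4 (an assumption about the asker's setting), the achievable rate is capped by the channel capacity, which a quick calculation makes concrete:

```python
# Capacity of a binary symmetric channel with crossover probability p:
# C = 1 - H(p), where H is the binary entropy function. Even an ideal
# code of length 5616 could then carry at most about C * 5616 info bits
# with vanishing error probability.
import math

def bsc_capacity(p):
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy H(p)
    return 1 - h

C = bsc_capacity(0.4)
max_info_bits = C * 5616
```

So a noiseless-limit budget of roughly 163 information bits in 5616; against worst-case (adversarial) 40% errors, unique decoding is impossible outright, since it would require relative distance above 0.8.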