Science topic

Information Theory - Science topic

An interdisciplinary study dealing with the transmission of messages or signals, or the communication of information. Information theory does not directly deal with meaning or content, but with physical representations that have meaning or content. It overlaps considerably with communication theory and CYBERNETICS.
Questions related to Information Theory
  • asked a question related to Information Theory
Question
3 answers
I am looking to apply for PhD programs in the fields of Coding Theory, Cryptography, Information Theory, Communication Engineering, and Machine Learning.
I welcome any inquiries and collaboration opportunities related to these research areas.
Relevant answer
Answer
Thank you very much for your guidance! I will contact Professor Philippe Mary right away!
  • asked a question related to Information Theory
Question
7 answers
Comments on “Information = Comprehension × Extension”
Resources
Inquiry Blog • Survey of Pragmatic Semiotic Information
OEIS Wiki • Information = Comprehension × Extension
C.S. Peirce • Upon Logical Comprehension and Extension
Relevant answer
Answer
Information = Comprehension × Extension • Comment 7
Let's stay with Peirce's example of inductive inference a little longer and try to clear up the more troublesome confusions tending to arise.
Figure 2 shows the implication ordering of logical terms in the form of a lattice diagram.
Figure 2. Disjunctive Term u, Taken as Subject
Figure 4 shows an inductive step of inquiry, as taken on the cue of an indicial sign.
Figure 4. Disjunctive Subject u, Induction of Rule v ⇒ w
One final point needs to be stressed. It is important to recognize the disjunctive term itself — the syntactic formula “neat, swine, sheep, deer” or any logically equivalent formula — is not an index but a symbol. It has the character of an artificial symbol which is constructed to fill a place in a formal system of symbols, for example, a propositional calculus. In that setting it would normally be interpreted as a logical disjunction of four elementary propositions, denoting anything in the universe of discourse which has any of the four corresponding properties.
The artificial symbol “neat, swine, sheep, deer” denotes objects which serve as indices of the genus herbivore by virtue of their belonging to one of the four named species of herbivore. But there is in addition a natural symbol which serves to unify the manifold of given species, namely, the concept of a cloven‑hoofed animal.
As a symbol or general representation, the concept of a cloven‑hoofed animal connotes an attribute and connotes it in such a way as to determine what it denotes. Thus we observe a natural expansion in the connotation of the symbol, amounting to what Peirce calls the “superfluous comprehension” or information added by an “ampliative” or synthetic inference.
In sum we have sufficient information to motivate an inductive inference, from the Fact u ⇒ w and the Case u ⇒ v to the Rule v ⇒ w.
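For readers who want the bare schema, the inductive step just described can be written as a single rule of inference. This is only a compact restatement of the text above, using the same letters u, v, w; it adds nothing beyond what the prose says.

```latex
% Peirce's inductive step, as summarized above:
%   Case:  u => v   (the disjunctive subject u falls under v)
%   Fact:  u => w   (u is found to fall under w)
%   Rule:  v => w   (the rule inferred by induction)
\[
  \frac{u \Rightarrow v \qquad u \Rightarrow w}{v \Rightarrow w}
  \quad \text{(induction)}
\]
```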
Reference —
Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”, Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project, Indiana University Press, Bloomington, IN, 1982.
  • asked a question related to Information Theory
Question
6 answers
Information = Comprehension × Extension • Preamble
Eight summers ago I hit on what struck me as a new insight into one of the most recalcitrant problems in Peirce’s semiotics and logic of science, namely, the relation between “the manner in which different representations stand for their objects” and the way in which different inferences transform states of information.  I roughed out a sketch of my epiphany in a series of blog posts then set it aside for the cool of later reflection.  Now looks to be a choice moment for taking another look.
A first pass through the variations of representation and reasoning detects the axes of iconic, indexical, and symbolic manners of representation on the one hand and the axes of abductive, inductive, and deductive modes of inference on the other.  Early and often Peirce suggests a natural correspondence between the main modes of inference and the main manners of representation but his early arguments differ from his later accounts in ways deserving close examination, partly for the extra points in his line of reasoning and partly for his explanation of indices as signs constituted by convening the variant conceptions of sundry interpreters.
Resources
Inquiry Blog • Survey of Pragmatic Semiotic Information
OEIS Wiki • Information = Comprehension × Extension
C.S. Peirce • Upon Logical Comprehension and Extension
Relevant answer
Answer
Information = Comprehension × Extension • Selection 6
Selection 1 opens with Peirce proposing, “The information of a term is the measure of its superfluous comprehension”, and it closes with his offering the following promise.
❝I am going, next, to show that inference is symbolization and that the puzzle of the validity of scientific inference lies merely in this superfluous comprehension and is therefore entirely removed by a consideration of the laws of information.❞
Summing up his account to this point, Peirce appears confident he's kept his promise. Promising on our own account to give it another pass, we'll let him have the last word — for now.
❝We have now seen how the mind is forced by the very nature of inference itself to make use of induction and hypothesis.
❝But the question arises how these conclusions come to receive their justification by the event. Why are most inductions and hypotheses true? I reply that they are not true. On the contrary, experience shows that of the most rigid and careful inductions and hypotheses only an infinitesimal proportion are never found to be in any respect false.
❝And yet it is a fact that all careful inductions are nearly true and all well-grounded hypotheses resemble the truth; why is that? If we put our hand in a bag of beans the sample we take out has perhaps not quite but about the same proportion of the different colours as the whole bag. Why is that?
❝The answer is that which I gave a week ago. Namely, that there is a certain vague tendency for the whole to be like any of its parts taken at random because it is composed of its parts. And, therefore, there must be some slight preponderance of true over false scientific inferences. Now the falsity in conclusions is eliminated and neutralized by opposing falsity while the slight tendency to the truth is always one way and is accumulated by experience. The same principle of balancing of errors holds alike in observation and in reasoning.❞
(Peirce 1866, pp. 470–471)
Reference —
Peirce, C.S. (1866), “The Logic of Science, or, Induction and Hypothesis”, Lowell Lectures of 1866, pp. 357–504 in Writings of Charles S. Peirce : A Chronological Edition, Volume 1, 1857–1866, Peirce Edition Project, Indiana University Press, Bloomington, IN, 1982.
Resources —
Inquiry Blog • Survey of Pragmatic Semiotic Information
OEIS Wiki • Information = Comprehension × Extension
C.S. Peirce • Upon Logical Comprehension and Extension
  • asked a question related to Information Theory
Question
1 answer
Epigenetic information polish then recessive privilege distribution. WARNING: Genetic engineering is DANGEROUS. Hopefully regularly polishing the epigenome will cure aging and other diseases, plus prevent side effects of genetic engineering. Also, hopefully and theoretically after the potential genetic engineering to provide recessive traits and recessive genes, if the surgery is simple enough, subjects will keep their genetic signatures.
See my profile or Substack:
Relevant answer
Answer
Your highlight "Epigenetic Information Polish then Recessive Privilege Distribution" is an intriguing concept that combines ideas from epigenetics, genetics, and possibly social theory. Let me break down my thoughts on this:
  1. "Epigenetic Information Polish": This part suggests refining or optimizing epigenetic information. It could imply:
  • Cleaning up or correcting aberrant epigenetic marks
  • Enhancing beneficial epigenetic patterns
  • Fine-tuning epigenetic profiles for specific outcomes
This is an interesting idea, as epigenetic modifications are more malleable than genetic changes and could potentially be "polished" or optimized.
  1. "Recessive Privilege Distribution": This is a more complex and potentially controversial term. It could be interpreted in several ways:
  • In genetics: Focusing on the expression or benefits of recessive traits
  • In a social context: Redistributing advantages typically associated with dominant traits or social positions to those with recessive or less prominent characteristics
The combination of these concepts is thought-provoking. It could suggest a process where:
  1. Epigenetic information is first optimized or corrected
  2. Then, this refined epigenetic state is used to influence the expression of recessive traits in a way that confers some form of advantage or "privilege"
This idea raises several questions and potential implications:
  1. Ethical considerations: How would we determine which epigenetic patterns to "polish" and which recessive traits to privilege?
  2. Technical feasibility: While epigenetic modification is possible, precisely controlling the expression of recessive traits is extremely complex.
  3. Long-term effects: How would such interventions affect future generations and overall genetic diversity?
  4. Social implications: If applied in a societal context, how might this concept interact with or challenge existing social structures?
  5. Scientific basis: While intriguing, this concept would need substantial research to establish its biological validity and potential applications.
It's an innovative and provocative idea that bridges biological concepts with potentially broader implications. However, it would require careful definition and extensive research to move from a conceptual stage to any practical application.
Would you like to elaborate on what you envision for this concept or explore any specific aspect of it in more detail?
  • asked a question related to Information Theory
Question
1 answer
Relevant answer
Answer
Genetic engineering, while offering immense potential for improving human health and well-being, also presents significant ethical and practical challenges. If not approached with extreme caution, it could inadvertently lead to the development of diseases or even a scenario where robots become dominant.
Here are some ways cautious genetic engineering could prevent these outcomes:
Preventing Diseases:
  • Targeted Interventions: By precisely altering specific genes linked to diseases, genetic engineering can potentially eliminate or mitigate their effects. However, it's crucial to ensure that these interventions don't have unintended consequences, such as creating new genetic disorders.
  • Gene Therapy: Gene therapy, which involves introducing functional genes into cells, can treat genetic diseases. However, it's essential to carefully select target cells and ensure that the introduced genes integrate into the genome safely and effectively.
  • Germline Editing: While controversial, germline editing (modifying genes in embryos) could potentially eliminate genetic diseases from future generations. However, this approach raises serious ethical concerns and requires rigorous safety testing to prevent unintended consequences.
Preventing Robot Domination:
  • Ethical Guidelines: Developing and implementing strict ethical guidelines for AI and robotics research can help prevent the creation of autonomous systems that pose a threat to humanity. These guidelines should address issues such as safety, accountability, and the potential for misuse.
  • Human Oversight: Ensuring that humans maintain control over AI and robotic systems is essential. This can be achieved through careful design, robust safety measures, and mechanisms for human intervention in case of emergencies.
  • Preventing Sentience: While the development of sentient AI remains a distant possibility, it's important to consider the potential risks and take steps to prevent the creation of artificial beings that could pose a threat to humanity.
Key Considerations:
  • Risk Assessment: Thoroughly assessing the potential risks and benefits of genetic engineering projects is crucial. This includes considering the long-term consequences and the potential for unintended side effects.
  • Transparency and Accountability: Ensuring transparency in genetic engineering research and development can help build public trust and accountability. This includes open communication about the goals, methods, and potential risks of these projects.
  • International Cooperation: Collaborating with international partners can help establish global standards and guidelines for genetic engineering, ensuring that research and development are conducted responsibly and ethically.
  • asked a question related to Information Theory
Question
2 answers
Relevant answer
Answer
Epigenetics refers to the study of changes in gene expression that do not involve alterations to the underlying DNA sequence. This field has garnered significant attention for its potential to influence aging, combat diseases, and mitigate unwanted side effects of genetic engineering.
Aging is associated with various epigenetic changes, such as DNA methylation and histone modifications, which can lead to altered gene expression and contribute to age-related diseases like cancer and neurodegenerative disorders. By targeting these epigenetic modifications, researchers believe it may be possible to reverse or slow down the aging process. For instance, interventions that modify epigenetic markers could potentially restore youthful gene expression patterns, thereby improving cellular function and longevity.
Epigenetic therapies hold promise for treating a range of diseases. By understanding the specific epigenetic alterations associated with conditions like cancer, researchers can develop targeted therapies that either activate or repress certain genes without changing the genetic code itself. This approach could lead to more effective treatments with fewer side effects compared to traditional genetic engineering methods, which often involve irreversible changes to the genome.
One of the significant concerns with genetic engineering is the potential for unintended consequences, such as off-target effects or the activation of harmful genes. Epigenetic modifications can provide a more flexible approach to gene regulation, allowing for temporary changes that can be reversed if necessary.
This flexibility could help in fine-tuning therapeutic interventions, reducing the risk of adverse effects associated with permanent genetic alterations.
  • asked a question related to Information Theory
Question
4 answers
Relevant answer
Answer
“Which epistemology do you associate with biology? Why?”
- Epistemology is quite directly associated with biology, since every point/step of epistemology's most fundamental result, the “Scientific method”, applies; see https://en.wikipedia.org/wiki/Scientific_method
“…An iterative,[43] pragmatic[12] scheme of the four points above is sometimes offered as a guideline for proceeding:[47]
Define a question
Gather information and resources (observe)
Form an explanatory hypothesis
Test the hypothesis by performing an experiment and collecting data in a reproducible manner
Analyze the data
Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
Publish results
Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again. ……”
- all/every living being, even bacteria, uses and performs these steps in its life and behavior.
Cheers
  • asked a question related to Information Theory
Question
7 answers
Fungi.
All of them decompose other dead organisms: "Fungi are important decomposers, especially in forests" ( https://education.nationalgeographic.org/resource/decomposers/ 19 Oct 2023). Decomposing the dead is literally the ultimate form of predation.
"The earliest life forms we know of were microscopic organisms (microbes) that left signals of their presence in rocks about 3.7 billion years old" ( https://naturalhistory.si.edu/education/teaching-resources/life-science/early-life-earth-animal-origins ).
Microbes are sometimes fungi:
"They(microbes) include bacteria, archaea, fungi, protists, some green algae, and viruses" ( https://www.energy.gov/science/doe-explainsmicrobiology ).
Relevant answer
Answer
Yes, good point Phil Geis . We still have very little clue as to how life kicked off. A fascinating subject I love talking and thinking about, far more interesting than semantics of predator or parasite terminology.
  • asked a question related to Information Theory
Question
3 answers
Relevant answer
Answer
In my view, the most accurate and practical theories in any scientific field, particularly gerontology, are heavily influenced by temporal, spatial, and cultural contexts. It is challenging to propose a definitive theory on gerontology, as the perspectives and experiences of elderly individuals vary across different decades.
  • asked a question related to Information Theory
Question
3 answers
Why does information theory explain aging, evolution vs creationism, critical rationalism, computer programming and much more?
Perhaps information has a very open definition and is thus very robust.
Relevant answer
Answer
All that is cut and paste salesmanship. How about some concrete examples of each?
  • asked a question related to Information Theory
Question
2 answers
Hi researchers,
I need links to computer and network journals that publish review papers with acceptable fees.
Relevant answer
Priyanka Gupta, thank you for your reply.
  • asked a question related to Information Theory
Question
2 answers
Relevant answer
Answer
Like death, which is the separation of the soul from the body, the separation of the mind is the dreaming state, while the separation of reason is deep sleep.
  • asked a question related to Information Theory
Question
1 answer
Which theory best explains aging; error, damage, or information? Why specifically?
Information theory best explains aging, because genetic errors are difficult to define and damage may even be necessary for growth, which undermines the competing error and damage theories.
"The Information Theory of Aging (ITOA) states that the aging process is driven by the progressive loss of youthful epigenetic information, the retrieval of which via epigenetic reprogramming can improve the function of damaged and aged tissues by catalyzing age reversal" ( 16 mar 2024 https://www.researchgate.net/publication/376583494_The_Information_Theory_of_Aging ).
Relevant answer
Answer
Damage Theory:
According to this theory, aging is the result of cumulative damage caused by external and internal factors, such as free radicals, radiation, and toxins. This damage affects cells and tissues, leading to a gradual decline in bodily functions.
Critique: This theory is supported by extensive experimental evidence, but it fails to explain why some organisms live much longer than others despite similar levels of damage.
  • asked a question related to Information Theory
Question
2 answers
Relevant answer
Answer
These three areas are quite different, although they can touch on related ideas in some ways. Here's a breakdown:
  • Information theory: This is a branch of applied mathematics that focuses on quantifying, storing, and transmitting information. It uses concepts from probability and statistics to analyze how efficiently information can be communicated through channels with noise or limitations.
  • Concrete concepts: This refers to ideas that are well-defined, specific, and easy to grasp. They are not abstract or theoretical. Examples include the concept of a chair, the number 5, or the color red.
  • Critical rationalism: This is a philosophical approach to knowledge acquisition. It emphasizes the importance of testing and criticizing ideas to see if they hold up under scrutiny. It rejects the notion of absolute certainty and suggests that knowledge is always provisional, open to revision based on new evidence.
There might be some connections:
  • Information theory and concrete concepts: Information theory can be used to analyze how efficiently concrete concepts are communicated. For example, a simple concept like "red" might require fewer bits to transmit than a more complex idea.
  • Critical rationalism and information theory: Critical rationalism can be used to evaluate the quality of information itself. If information is incomplete, contradictory, or not well-sourced, then a critical rationalist approach would be to question its validity.
Overall, information theory is a mathematical framework, concrete concepts are specific ideas, and critical rationalism is a way of approaching knowledge. They are all valuable tools in different areas.
  • asked a question related to Information Theory
Question
1 answer
I am ready. I’m an autistic antiracism educator.
Relevant answer
Answer
First, we need to know whether you are capable of that or not; and if you are, you don't need anyone's recommendation. Anyway, good luck.
  • asked a question related to Information Theory
Question
10 answers
It has been known for almost a century that when animals learn new routines, the synaptic strength within the brain, especially within the neocortex, is systematically altered (Hebb 1949; Kandel 2006). Enhancement of synaptic strength has been demonstrated for human subjects learning a new language. A group of Japanese university students, who were moderately bilingual, were enrolled in a 4-month intensive language course to improve their English (Hosoda et al. 2013). During this period, they learned ~ 1000 new English words which they used in various spoken and written contexts. The learning was followed by a weekly test. To learn the 1000 words, it is estimated that 0.0006 bits per second of information were transmitted over the 4-month period [1.5 bits per letter x 4 letters/word x 1000 words/16 weeks], a rate that (not surprisingly) falls well short of the 40 bits per second transmitted by a competent communicator of English (Reed and Durlach 1998); hence learning takes longer than the execution of a learned act. Additionally, it was discovered that the pathway between Broca’s area and Wernicke’s area was enhanced in the students as evidenced by diffusion tensor imaging (Hosoda et al. 2013). Such enhancement during learning has been attributed to increased myelination and synaptogenesis (Blumenfeld-Katzir et al. 2011; Kalil et al. 2014; Kitamura et al. 2017). A central reason for this understanding is that the minimal circuit for language learning has long been known to exist between Wernicke’s and Broca’s areas of the human brain based on lesion, stimulation, and neural recording experiments (Kimura 1993; Ojemann 1991; Metzger et al. 2023; Penfield and Roberts 1966).
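As a check on the bracketed estimate above, here is the arithmetic spelled out, assuming the 4 months are counted as 16 weeks of continuous (24-hour) seconds; this is only a back-of-the-envelope restatement, not the authors' exact procedure.

```latex
\[
  \frac{1.5\ \tfrac{\text{bits}}{\text{letter}} \times 4\ \tfrac{\text{letters}}{\text{word}} \times 1000\ \text{words}}
       {16\ \text{weeks} \times 7\ \tfrac{\text{days}}{\text{week}} \times 24\ \tfrac{\text{h}}{\text{day}} \times 3600\ \tfrac{\text{s}}{\text{h}}}
  \;=\; \frac{6000\ \text{bits}}{9\,676\,800\ \text{s}}
  \;\approx\; 6 \times 10^{-4}\ \text{bits/s}.
\]
```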
With the use of modern methods (e.g., wide-field two-photon calcium imaging and optogenetic activation and inhibition), we can now delineate, with a high degree of precision, minimal cortical circuits that are involved in the learning of new tasks in animals (e.g., Esmaeili, Tamura et al., 2021, see attached Fig. 1). The next step is to measure the changes in synaptic formation via learning to assess the amount of new information added to neocortex [which in humans has an estimated capacity to store 1.6 x 10^14 bits of information or 2 ^ (1.6 x 10^14) possibilities, Tehovnik, Hasanbegović 2024]. This will address whether the neocortex has an unlimited capacity for information storage or whether the addition of new information replaces the old information as related to previous learning that utilized the minimal circuit (the same will need to be done for corresponding cerebellar circuits that contain the executable code based on stored declarative information, Tehovnik, Hasanbegović, Chen 2024). We have argued that uniqueness across individual organisms is predicated on both genetics and learning history (thereby making the hard problem of consciousness irrelevant). Soon investigators will track the learning history of an individual organism to assess how the brain creates (and updates) a unique library of learning per organism thereby helping us understand how genetics and learning history created, for example, Einstein, Kasparov, and Pelé.
Figure 1: A minimal neocortical circuit is illustrated for mice trained to perform a delayed go-no-go licking task before (Novice) and after learning (Expert). As with the minimal circuit for language acquisition in humans, this circuit can now be subjected to detailed synaptic analysis by which to quantify how learning occurs at the synapses (Hebb 1949; Kandel 2006); this quantification can be used to estimate how many bits of information the new connections represent and then to compare the amount of new information added to the animal’s behavioral repertoire (Tehovnik, Hasanbegović, Chen 2024). Illustration from Fig. 8 of Esmaeili, Tamura et al. (2021).
Relevant answer
Answer
"[which in humans has an estimated capacity to store 1.6 x 10^14 bits of information or 2 ^ (1.6 x 10^14) possibilities, Tehovnik, Hasanbegović 2024]."
That is how computers work, but humans don't have "bits" of information, but rather fluctuations that evolve into experienceable forms. See DOT of consciousness as now the best theory of consciousness, after the pseudoscience IIT was debunked by the world community of brain researchers.
  • asked a question related to Information Theory
Question
3 answers
My answer: Yes, in order to interpret history, disincentives are the most rigorous guide. How? Due to the many assumptions of inductive logic, deductive logic is more rigorous. Throughout history, incentives are less rigorous because no entity (besides God) is completely rational and/or self-interested; thus what incentivizes an act is less rigorous than what disincentivizes the same action. And, as a heuristic, all entities (besides God) have a finite existence before their energy (eternal consciousness) goes to the afterlife (paraphrased from these sources: 1)
, thus interpretation through disincentives is more rigorous than interpreting through incentives.
Relevant answer
Answer
People's behavior in history is based on different motives, ideologies and personal views. Although motivational factors may influence decision making, individuals and groups often act within the context of their own authority and time.
  • asked a question related to Information Theory
Question
1 answer
I've been reading about Claude Shannon and Information Theory. I see he is credited with developing the concept of entropy in information theory, which is a measure of the amount of uncertainty or randomness in a system. Do you ever wonder how his concepts might apply to the predicted red giant phase of the Sun in about 5 billion years? Here are a few thoughts that don't include much uncertainty or randomness -
In about 5 billion years the Sun is supposed to expand into a red giant and engulf Mercury and Venus and possibly Earth (the expansion would probably make Earth uninhabitable in less than 1 billion years). It's entirely possible that there may not even be a red giant phase for the Sun. This relies on entropy being looked at from another angle - with the apparent randomness in quantum and cosmic processes obeying Chaos theory, in which there's a hidden order behind apparent randomness. Expansion to a Red Giant could then be described with the Information Theory vital to the Internet, mathematics, deep space, etc. In information theory, entropy is defined as a logarithmic measure of the rate of transfer of information. This definition introduces a hidden exactness, removing superficial probability. It suggests it's possible for information to be transmitted to objects, processes, or systems and restore them to a previous state - like refreshing (reloading) a computer screen. Potentially, the Sun could be prevented from becoming a red giant and returned to a previous state in a billion years (or far less) - and repeatedly every billion years - so Earth could remain habitable permanently. Time slows near the speed of light and near intense gravitation. Thus, even if it's never refreshed/reloaded by future Information Technology, our solar system's star will exist far longer than currently predicted.
All this might sound a bit unreal if you're accustomed to think in a purely linear fashion where the future doesn't exist. I'll meet you here again in 5 billion years and we can discuss how wrong I was - or, seemingly impossibly, how correct I was.
Relevant answer
Answer
"Expansion to a Red Giant could then be described with the Information Theory"
Expansion to a Red Giant IS described with Physics (entropy included). It's irreversible, insofar as while most stars go through a second visit to the red giant phase, their intermediate compact phase (Helium fusing core) is never the same as the previous compact phase (Hydrogen fusing core). You cannot return to the same initial conditions.
It's not that information theory is wrong, it's that it's absolutely peripheral to the physical processes that govern stellar evolution.
  • asked a question related to Information Theory
Question
2 answers
Professor Miguel Nicolelis (2019) has published a free copy of his contributions to BMI (brain-machine interfaces) emphasizing his twenty years of work starting in 1999 and continuing through 2015.* Until 2003, Nicolelis had no competitors, but shortly thereafter Andersen et al. (2003), Schwartz et al. (2004) and Donoghue et al. (2006) joined the field, and tried to eclipse him and his associates [as described in Tehovnik, Waking up in Macaíba, 2017]; they, however, failed to achieve the eclipse, since the information transfer rates of their devices were typically below 1 bit per second, averaging about 0.2 bits/sec, much like what Nicolelis’ devices were transferring (Tehovnik and Chen 2015; Tehovnik et al. 2013). By comparison, the cochlear implant transfers 10 bits/sec (Tehovnik and Chen 2015) and therefore has been commercialized with over 700,000 registered implant recipients worldwide (NIH Statistics 2019).
BMI technology is still largely experimental. Willett, Shenoy et al. (2021) have developed a BMI for patients that transfers up to 5 bits/sec for spontaneously generated writing, but it is unclear whether this high rate is due to the residual movements (Tehovnik et al. 2013) of the hand contralateral to the BMI implant. To date, the most ambitious BMI utilizes a digital bridge between the neocortex and the spinal cord below a partial transection to evoke a stepping response that still requires support of the body with crutches; but significantly, the BMI portion of the implant in M1 enhances the information transfer rate by a mere 0.5 bits per second, since most of the walking (86%, or 3.0 bits/sec of it) is induced by spinal cord stimulation in the absence of the cortical implant (Lorach et al. 2023). Accordingly, BMI falls short of the cochlear implant, and thus BMI developers are years away from a marketable device. The premature marketing by Nicolelis at the 2014 FIFA World Cup of his BMI technology (Tehovnik 2017b) should be a warning to Elon Musk (of Neuralink) that biology is not engineering, for if it were, a BMI chip would now be in every brain on the planet. See the figure that summarizes the information transfer rates for various devices including human language.
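For readers who want to see where bits-per-second figures like these can come from, below is a minimal sketch of the Wolpaw information-transfer-rate formula commonly used in the BMI literature. It is an illustration only; the function and example numbers are mine, and the cited authors may compute their rates differently.

```python
import math

def wolpaw_bits_per_trial(n_targets: int, accuracy: float) -> float:
    """Wolpaw ITR per selection: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    if accuracy >= 1.0:
        return math.log2(n_targets)          # perfect accuracy transfers log2(N) bits
    if accuracy <= 0.0:
        return 0.0                           # at or below the floor, report 0 in this sketch
    return (math.log2(n_targets)
            + accuracy * math.log2(accuracy)
            + (1.0 - accuracy) * math.log2((1.0 - accuracy) / (n_targets - 1)))

def itr_bits_per_second(n_targets: int, accuracy: float, selections_per_minute: float) -> float:
    return wolpaw_bits_per_trial(n_targets, accuracy) * selections_per_minute / 60.0

# Hypothetical example: a 4-target cursor task at 80% accuracy with 20 selections per minute
# gives roughly 0.32 bits/s, the same order of magnitude as the BMI rates quoted above.
print(round(itr_bits_per_second(4, 0.80, 20.0), 2))
```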
Relevant answer
Answer
The amount of information current-day BMI systems transfer varies depending on the type of BMI and the specific task being performed. Here's a breakdown:
Information transfer rate (bits/second):
  • EEG-based BMIs (non-invasive): These BMIs measure electrical activity from the scalp and generally have lower information transfer rates, ranging from 0.25 to 0.5 bits/second. This is enough for basic control tasks like cursor movement or simple word selection, but not for complex actions.
  • Invasive BMIs: These BMIs use electrodes implanted directly in the brain, providing access to more detailed neural signals. Information transfer rates can be higher, reaching up to 40 bits/second for simple tasks like motor control. However, this is still significantly slower than natural human communication rates, which can reach 40-100 bits/second for speech and even higher for complex forms of communication like writing.
Factors affecting information transfer rate:
  • Type of brain activity: Different brain areas and signals carry different amounts of information. Motor cortex activity used for cursor control is easier to decode than complex cognitive processes like thoughts or emotions.
  • Electrode technology: The number and placement of electrodes influence how much neural activity is captured. More electrodes and better placement can lead to higher information transfer rates.
  • Signal processing algorithms: Algorithms used to interpret and decode brain signals play a crucial role in extracting information. Advancements in machine learning and artificial intelligence are improving decoding accuracy and information transfer rates.
Current limitations:
  • Low information transfer rates: Compared to natural communication, current BMIs are still relatively slow and limited in the complexity of information they can transfer.
  • Accuracy and reliability: Decoding brain signals can be challenging, leading to errors and inconsistencies in control.
  • Ethical considerations: Invasive BMIs raise ethical concerns about privacy, security, and potential misuse of brain data.
Despite these challenges, BMI research is rapidly advancing, and information transfer rates are expected to improve significantly in the future. This could revolutionize various fields, including healthcare, rehabilitation, and human-computer interaction.
  • asked a question related to Information Theory
Question
1 answer
How useful is the heuristic that if both sides of a debate are unfalsifiable then they may be a false dichotomy? My answer: The heuristic that if both sides of a debate are unfalsifiable then they may be a false dichotomy is very useful because it is probably the case for practical reasons. Examples include but may not be limited to (evolutionism or creationism), (free will or determinism), (rationalism or empiricism).
Relevant answer
Answer
The world is not a collection of facts. Instead of using heuristics, read a philosopher such as, for instance, Wittgenstein or Heidegger over a long period, and that long period should extend over years.
  • asked a question related to Information Theory
Question
7 answers
Binary sequences 1100100101
Symbolized as binary: P(0)=1/2, P(1)=1/2, H(X) = -0.5*log2(0.5) - 0.5*log2(0.5) = -1
Symbolized as quaternary (2-bit) symbols:
P(11)=1/5, P(00)=1/5, P(10)=1/5, P(01)=2/5, H(X) = -(1/5)*log2(1/5)*3 - (2/5)*log2(2/5) = -1.9219
...
Is there a problem with my understanding?
If not, which result is the information entropy?
Relevant answer
Answer
Firstly, the calculated entropy in both cases is positive (entropy is always non-negative).
Secondly, the probability of each bit/symbol is calculated over a large number of occurrences.
Thirdly, entropy = 1 for the 'symbolized as binary' case means the entropy is at its maximum, because each bit is equiprobable. On the other hand, for the 'symbolized as quaternary symbols' case, the entropy is less than 2 because the calculation takes into account that the states are not equiprobable.
I hope it helps.
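To make the arithmetic concrete, here is a short script (my own illustration) that reproduces the two values in the question, with the correct positive sign. Note that the second value is bits per two-bit symbol, which is why it is bounded by 2 rather than 1.

```python
import math
from collections import Counter

def empirical_entropy(symbols) -> float:
    """Shannon entropy H = -sum p*log2(p) of the empirical symbol distribution, in bits/symbol."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

seq = "1100100101"

# Single bits: P(0) = P(1) = 1/2  ->  H = 1 bit per symbol (the maximum for a binary source)
print(empirical_entropy(seq))                          # 1.0

# Pairs of bits: P(11) = P(00) = P(10) = 1/5, P(01) = 2/5  ->  H ~ 1.9219 bits per pair
pairs = [seq[i:i + 2] for i in range(0, len(seq), 2)]  # ['11', '00', '10', '01', '01']
print(empirical_entropy(pairs))                        # ~1.9219
```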
  • asked a question related to Information Theory
Question
2 answers
The article has been submitted to IEEE-TIT. The preprint manuscript is "Post Shannon Information Theory". You can find it on this website. Please give it a fair review. Thank you.
Relevant answer
Answer
Anyone can evaluate it. English is not my native language, so I hope there is no ambiguity.
  • asked a question related to Information Theory
Question
3 answers
Dear all,
Why is forward selection search so popular and widely used in feature selection (FS) based on mutual information, such as MRMR, JMI, CMIM, and JMIM (see )? Why are other search approaches, such as the beam search approach, not used? If there is a reason for that, kindly reply to me.
Relevant answer
Answer
There are three main types of feature selection: filtering methods, wrapper methods, and embedded methods. Filtering methods use criteria-based metrics that are independent of the modeling process, such as mutual information, correlation, or the chi-square test, to score each feature (or a selection of features) against the target; other filtering methods include variance thresholding and ANOVA.
Wrapper methods use error rates, training models on subsets of features iteratively to select the critical ones. Subsets can be chosen by sequential forward selection, sequential backward selection, bidirectional selection, or randomly. Because they involve selecting features and training models repeatedly, wrapper methods are more computationally expensive than filtering methods. There are also heuristic, non-exhaustive search approaches such as branch-and-bound. In some cases filtering methods are applied before wrapper methods. Embedded methods include the use of decision trees or random forests to extract feature importances for deciding which features to select.
Overall, forward, backward, and bidirectional methods are stepwise strategies for searching for crucial features. Beam search, by contrast, is more of a graph-based heuristic optimization method, similar to best-first search, and tends to be applied to neural-network or tree optimization rather than directly as a feature selection method.
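As a concrete illustration of the forward-selection search the question asks about, here is a minimal mRMR-style greedy sketch using mutual information from scikit-learn. It is hypothetical code for illustration; published MRMR/JMI/CMIM/JMIM implementations differ in the redundancy or complementarity term and in how variables are discretized.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def forward_select(X, y, k):
    """Greedy forward selection: at each step add the feature maximizing relevance - redundancy."""
    relevance = mutual_info_classif(X, y, random_state=0)        # I(X_i ; y) for every feature
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best_f, best_score = None, -np.inf
        for f in remaining:
            # Average redundancy I(X_f ; X_s) against the already-selected features
            redundancy = (np.mean([mutual_info_regression(X[:, [s]], X[:, f], random_state=0)[0]
                                   for s in selected])
                          if selected else 0.0)
            score = relevance[f] - redundancy                    # mRMR-style criterion
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

X, y = make_classification(n_samples=300, n_features=10, n_informative=3, random_state=0)
print(forward_select(X, y, k=3))
```

The nested loop makes the cost visible: each greedy step scans all remaining features, which is also why exhaustive or wide beam searches become expensive as the feature count grows.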
  • asked a question related to Information Theory
Question
3 answers
The general consensus about the brain and various neuroimaging studies suggest that brain states indicate variable entropy levels for different conditions. On the other hand, entropy is an increasing phenomenon in nature from the thermodynamical point of view and biological systems contradict this law for various reasons. This can be also thought of as the transformation of energy from one form to another. This situation makes me think about the possibility of the existence of distinct energy forms in the brain. Briefly, I would like to ask;
Could we find a representation for the different forms of energy rather than the classical power spectral approach? For example, useful energy, useless energy, reserved energy, and so on.
If you find my question ridiculous, please don't answer, I am just looking for some philosophical perspective on the nature of the brain.
Thanks in advance.
Relevant answer
Answer
Hi,
The mitochondrion in cells is a powerhouse of energy. There are some articles on the topics of your interest:
Jeffery KJ, Rovelli C. Transitions in Brain Evolution: Space, Time and Entropy. Trends Neurosci. 2020;43(7):467-474. doi:10.1016/j.tins.2020.04.008
Lynn CW, Cornblath EJ, Papadopoulos L, Bertolero MA, Bassett DS. Broken detailed balance and entropy production in the human brain. Proc Natl Acad Sci U S A. 2021;118(47):e2109889118. doi:10.1073/pnas.2109889118
Carhart-Harris RL. The entropic brain - revisited. Neuropharmacology. 2018;142:167-178. doi:10.1016/j.neuropharm.2018.03.010
Sen B, Chu SH, Parhi KK. Ranking Regions, Edges and Classifying Tasks in Functional Brain Graphs by Sub-Graph Entropy. Sci Rep. 2019;9(1):7628. Published 2019 May 20. doi:10.1038/s41598-019-44103-8
Tobore TO. On Energy Efficiency and the Brain's Resistance to Change: The Neurological Evolution of Dogmatism and Close-Mindedness. Psychol Rep. 2019;122(6):2406-2416. doi:10.1177/0033294118792670
Raichle ME, Gusnard DA. Appraising the brain's energy budget. Proc Natl Acad Sci U S A. 2002;99(16):10237-10239. doi:10.1073/pnas.172399499
Matafome P, Seiça R. The Role of Brain in Energy Balance. Adv Neurobiol. 2017;19:33-48. doi:10.1007/978-3-319-63260-5_2
Engl E, Attwell D. Non-signalling energy use in the brain. J Physiol. 2015;593(16):3417-3429. doi:10.1113/jphysiol.2014.282517
Kang J, Jeong SO, Pae C, Park HJ. Bayesian estimation of maximum entropy model for individualized energy landscape analysis of brain state dynamics. Hum Brain Mapp. 2021;42(11):3411-3428. doi:10.1002/hbm.25442
  • asked a question related to Information Theory
Question
4 answers
The current technological revolution, known as Industry 4.0, is determined by the development of the following technologies of advanced information processing: Big Data database technologies, cloud computing, machine learning, Internet of Things, artificial intelligence, Business Intelligence and other advanced data mining technologies.
In connection with the above, I would like to ask you:
Which information technologies of the current technological revolution Industry 4.0 contribute the most to reducing the asymmetry of information between counterparties of financial transactions?
The above question concerns the asymmetry of information between financial transaction partners, such as between borrowers and the banks granting loans, which, before granting a loan, assess the creditworthiness of the potential borrower and the bank's level of credit risk associated with the specific credit transaction, and, inter alia, between financial institutions and the clients of their financial services.
Please reply
Best wishes
Relevant answer
Answer
Information asymmetry between the financial institution offering certain financial services and the client can be reduced through the increase in the use of ICT and Industry 4.0 information technologies for remote, web-based service and concluding transactions. In addition, customers can use social media portals where they share their experiences of using specific financial services.
Best wishes,
Dariusz Prokopowicz
  • asked a question related to Information Theory
Question
43 answers
How can currently needed information be obtained from Big Data database systems for the needs of specific scientific research and for carrying out economic, business, and other analyses?
Of course, the right data is important for scientific research. However, in the present era of digitalization of various categories of information and of the creation of various libraries, databases, and constantly expanding large data sets stored in database systems, data warehouses, and Big Data database systems, it is important to develop techniques and tools for filtering those large data sets so that, out of terabytes of data, only the information currently needed is extracted: for the purpose of scientific research conducted in a given field of knowledge, for obtaining answers to a given research question, and for business needs, e.g. after connecting these databases to Business Intelligence analytical platforms. I described these issues in my scientific publications presented below.
Do you agree with my opinion on this matter?
In view of the above, I am asking you the following question:
How can currently needed information be obtained from Big Data database systems for the needs of specific scientific research and for carrying out economic, business, and other analyses?
Please reply
I invite you to the discussion
Thank you very much
Dear Colleagues and Friends from RG
The issues of the use of information contained in Big Data database systems for the purposes of conducting Business Intelligence analyses are described in the publications:
I invite you to discussion and cooperation.
Best wishes
Relevant answer
Answer
Respected Doctor
Big data has three characteristics as follows:
1-Volume
It is the volume of data extracted from a source, which determines the value of the data and whether it can be classified as big data; by the year 2020, cyberspace was expected to contain approximately 40 zettabytes of data ready for analysis and information extraction.
2-Variety
It means the diversity of the extracted data, which helps users, whether researchers or analysts, choose the appropriate data for their field of research. It includes structured data in databases and unstructured data (such as images, clips, audio recordings, videos, SMS, call logs, and GPS map data), which require time and effort to prepare in a suitable form for processing and analysis.
3-Velocity
It means the speed of producing and extracting data and sending it to cover the demand for it. Speed is a crucial element in making a decision based on this data, and it is the time we take from the moment this data arrives to the moment the decision is made based on it.
There are many tools and techniques used to analyze big data, such as Hadoop, MapReduce, and HPCC, but Hadoop is one of the most famous of these tools. It distributes the big data across several devices and then distributes the processing to those devices to speed up the computation, after which the result is returned as a single package (a toy map/reduce sketch is given after the list below). Tools that deal with big data consist of three main parts:
1- Data mining tools
2- Data Analysis Tools
3- Tools for displaying results (Dashboard).
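As referenced above, here is a toy map/reduce word count in plain Python, meant only to illustrate the split-process-combine idea behind Hadoop; it is not Hadoop code, and the chunking is a stand-in for data blocks distributed across machines.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_chunk(chunk: str) -> Counter:
    """Map step: count words in one chunk of the data (one 'node' worth of work)."""
    return Counter(chunk.split())

def reduce_counts(a: Counter, b: Counter) -> Counter:
    """Reduce step: merge two partial counts into one."""
    return a + b

if __name__ == "__main__":
    chunks = ["big data needs big tools", "data tools need big clusters"]  # stand-ins for distributed blocks
    with Pool(2) as pool:                  # pretend each chunk is processed on a different device
        partial = pool.map(map_chunk, chunks)
    print(reduce(reduce_counts, partial))  # the single combined result returned to the user
```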
Its use also varies statistically according to the research objectives (improving education, effectiveness of decision-making, military benefit, economic development, health management ... etc.).
greetings
Senior lecturer
Nuha hamid taher
  • asked a question related to Information Theory
Question
36 answers
What are the important topics in the field: Data analysis in Big Data database systems?
What kind of scientific research dominates in the field of data analysis in Big Data database systems?
Please reply. I invite you to the discussion
Dear Colleagues and Friends from RG
The issues of the use of information contained in Big Data database systems for the purposes of conducting Business Intelligence analyses are described in the publications:
I invite you to discussion and cooperation.
Best wishes
Relevant answer
Answer
Dear B. Dr. Ravishankar,
Thank you for the answer. Yes, you have indicated a key aspect that determines many of the currently developed analytical applications of Big Data Analytics technology.
Thank you very much,
Best wishes,
Dariusz Prokopowicz
  • asked a question related to Information Theory
Question
3 answers
Question closed: an error was found.
  • asked a question related to Information Theory
Question
4 answers
I would like to have a deeper insight into Markov Chain, its origin, and its application in Information Theory, Machine Learning and automated theory.
Relevant answer
Answer
Yes, whilst a Markov chain is a finite state machine, it is distinguished by its transitions being stochastic, i.e. random, and described by probabilities.
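A minimal sketch of the point above: the states form a finite state machine, but each transition is drawn at random from a row-stochastic transition matrix. The states and probabilities here are made up for illustration.

```python
import numpy as np

states = ["Sunny", "Rainy"]
P = np.array([[0.9, 0.1],   # transition probabilities from "Sunny"
              [0.5, 0.5]])  # transition probabilities from "Rainy"

rng = np.random.default_rng(seed=42)
current = 0                                # start in "Sunny"
chain = [states[current]]
for _ in range(10):
    current = rng.choice(2, p=P[current])  # stochastic transition, unlike a deterministic FSM
    chain.append(states[current])
print(" -> ".join(chain))
```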
You can learn more about it here:
Kind Regards
Qamar Ul Islam
  • asked a question related to Information Theory
Question
1 answer
Question closed: an error was found.
Relevant answer
Answer
Good question
  • asked a question related to Information Theory
Question
16 answers
The future of marketing development in social media
Marketing in social media is still a rapidly developing area among the marketing techniques used on the Internet. On the one hand, some of the largest online technology companies have built their business concepts on social media marketing or are entering this field.
On the other hand, there are technology startups acquiring data from the Internet and processing information in Big Data database systems for the purpose of providing information services to other entities as support for strategic and operational management, including the planning of advertising campaigns.
Therefore, the question arises:
What tools for social media marketing will be developed in the future?
Please, answer, comments
I invite you to the discussion
Relevant answer
Answer
Nowadays, events are much more than mere gatherings. Instead, they are a place where you can promote your brand to spread your business ideas.
In many situations, you can meet like-minded people and form valuable relationships. However, you need to have a viable promotion plan, as well as a way for people to network at the event itself.
  • asked a question related to Information Theory
Question
113 answers
Hello Dear colleagues:
it seems to me this could be an interesting thread for discussion:
I would like to center the discussion around the concept of Entropy, addressing in particular the explanation-description-exemplification side of the concept.
i.e. What do you think is a good, helpful explanation of the concept of Entropy (at a technical level, of course)?
A manner (or manners) of explaining it, trying to settle the concept as clearly as possible; maybe first in a more general scenario, and next (if required) in a more specific one ....
Kind regards !
Relevant answer
Dear F. Hernandes
The Entropy (Greek - ἐντροπία-transformation, conversion, reformation, change) establishes the direct link between MICRO-scopic state (in other words orbital) of some (any) system and its MACRO-scopic state parameters (temperature, pressure, etc).
This is the Concept (from capital letter).
Its main feature: this is the ONLY entity in the natural sciences that shows the development trend of any self-sustained natural process. It is a state function; it isn’t a transition function. That is why the entropy is independent of the transition route; it depends only on the initial state A and the final state B of the system under consideration. Entropy has many senses.
In the mathematical statistics, the entropy is the measure of uncertainty of the probability distribution.
In the statistical physics, it presents the probability (the so-called *statistical sum*) of the existence of some (given) microscopic state (*statistical weight*) under the same macroscopic characteristics. This means that the system may have a different amount of information, the macroscopic parameters being the same.
In the information approach, it deals with the information capacity of the system. That is why, the Father of Information theory Claude Elwood Shannon believed that the words *entropy* and *information* are synonyms. He defined entropy as the ratio of the lost information to the whole of information volume.
In the quantum physics, this is the number of orbitals for the same (macro)-state parameters.
In the management theory, the entropy is the measure of uncertainty of the system behavior.
In the theory of the dynamic systems, it is the measure of the chaotic deviation of the transition routes.
In the thermodynamics, the entropy presents the measure of the irreversible energy loss. In other words, it presents system’s efficiency (capacity for work). This provides the additivity properties for two independent systems.
Gnoseologically, the entropy is the inter-disciplinary measure of the energy (information) devaluation (not the price, but rather the very devaluation).
This way, the entropy is many-sided Concept. This provides unusual features of entropy.
What is the entropy dimension? The right answer depends on the approach. It is a dimensionless figure in the information approach (Shannon defined it as the ratio of two uniform values; therefore it is dimensionless by definition). On the contrary, in the thermodynamics approach it has a dimension (energy over temperature, J/K).
Is entropy a parameter (a fixed number) or a function? Once again, the proper answer depends on the approach (point of view). It is a number in mathematical statistics (the logarithm of the number of admissible (unprohibited) system states, the well-known sigma σ). At the same time, it is a function in quantum statistics. Etc., etc.
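For concreteness, the two definitions most often contrasted above can be put side by side in their standard textbook forms, which also makes the dimension question explicit.

```latex
% Statistical/thermodynamic entropy (units J/K) versus Shannon entropy (dimensionless, in bits):
\[
  S = k_B \ln W
  \qquad\text{and}\qquad
  H(X) = -\sum_i p_i \log_2 p_i ,
\]
% where W is the number of admissible microstates compatible with the given macrostate,
% k_B is Boltzmann's constant, and p_i are the probabilities of the possible outcomes.
```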
So, be very cautious when you are operating with entropy.
Best wishes,
Emeritus Professor V. Dimitrov vasili@tauex.tau.ac.il
  • asked a question related to Information Theory
Question
7 answers
An interesting thing is the algorithm according to which specific search results appear in the Google search engine for a given keyword.
Formulas for this type of algorithm can be constructed in various ways, so that different search results can be obtained for the same keyword.
Added to this is the issue of promoting the search results of companies that have paid certain fees for a high level of positioning in the search results. Unfortunately, this is not an objective process of finding information available on the Internet but a formula based on commercial marketing. In this situation, a question arises about competitiveness, which is limited in this way.
In view of the above, I am asking you: Does Google's search engine algorithm restrict competition in the availability of information on the Internet?
Please, answer, comments. I invite you to the discussion.
Relevant answer
Answer
As part of the technological development of web browsers that has taken place since the late 1990s, the importance of the business and/or marketing factor in search algorithms has been growing.
Greetings,
Dariusz Prokopowicz
  • asked a question related to Information Theory
Question
22 answers
What kind of scientific research dominates in the field of the functionality and applications of smartphones?
Please provide your suggestions for a question, problem, or research thesis on the topic: Functionality and applications of smartphones.
Please reply.
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
Answer
Privacy... Smartphones are becoming some of our most trusted computing devices. People use them to store highly sensitive information including email, passwords, financial accounts, and medical records... Huang, Y., Chapman, P., & Evans, D. (2011, August). Privacy-Preserving Applications on Smartphones. In HotSec.
  • asked a question related to Information Theory
Question
3 answers
I have been pondering about the relationship between these two important topics of our data-driven world for a while. I have bits and pieces, but I have been looking forward to find a neat and systematic set of connections that would somehow (surprisingly) bind them and fill the empty spots I have drawn in my mind for the last few years.
In the past, while I was dealing with a multi-class classification problem (not so long ago), I came to realize that multiple binary classifications are a viable way to address this problem through error-correcting output coding (ECOC) - a well-known coding technique in the literature whose construction requirements are a bit different from those of classical block or convolutional codes. I would like to remind you that grouping multiple classes into two superclasses (a.k.a. class binarization) can be addressed in various ways. You can group them totally randomly, independently of the problem at hand, or based on a set of problem-dependent constraints that can be derived from the training data. The way I like most sits at the intersection of information theory and machine learning. To be more precise, class groupings can be done based on the resultant mutual information so as to maximise class separation. In fact, the main objective of this method is to maximise class separation so that your binary classifiers are exposed to less noisy data and hopefully achieve better performance. On the other hand, the ECOC framework calls for coding theory and efficient encoder/decoder architectures that can be used to handle the classification problem efficiently. The nature of the problem is not something we usually come across in communication theory and classical coding applications, though. Binarization of classes implies different noise and defect structures to be inserted into the so-called "channel model", which is not common in classical communication scenarios. In other words, the solution itself changes the nature of the problem at hand. Also, the way we choose the classifiers (such as margin-based, etc.) will affect the characterization of the noise that impacts the detection (classification) performance. I do not know if it is possible, but what is the capacity of such a channel? What is the best code structure that addresses these requirements? Even more interestingly, can the recurrent issues of classification (such as overfitting) be solved with coding? Maybe we can maintain a trade-off between training and generalization errors with an appropriate coding strategy?
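To make the ECOC idea tangible, here is a minimal scikit-learn sketch that reduces a multi-class problem to several binary classifiers plus a decoder. The random code matrix it uses is not the mutual-information-based class grouping described above; it only shows the mechanics.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# code_size > 1 adds redundant binary problems, playing the role of parity checks in a code.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000), code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("test accuracy:", round(ecoc.score(X_te, y_te), 3))
```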
Similar trends can be observed in the estimation theory realm. Parameter estimation, or likewise "regression" (including model fitting, linear programming, density estimation, etc.), can be thought of as the problem of finding the "best parameters" or "best fit", which are the ultimate targets to be reached. The errors due to the methods used, the collected data, etc. are problem-specific and usually dependent. For instance, density estimation is a hard problem in itself, and kernel density estimation is one approach to estimating probability density functions. Various kernels and data transformation techniques (such as Box-Cox) are used to normalize data and propose new estimation methods to meet today's performance requirements. To measure how well we do, or how different two distributions are, we again resort to information theory tools (such as the Kullback-Leibler (KL) divergence and the Jensen-Shannon function) and use the concepts/techniques therein (including entropy, etc.) from a machine learning perspective. Such an observation separates the typical problems posed in the communication theory arena from those in the machine learning arena, requiring a distinct and careful treatment.
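As a small illustration of the distance measures mentioned above, the snippet below computes the Kullback-Leibler and Jensen-Shannon divergences between two discretized distributions with SciPy; the numbers are arbitrary and not tied to any particular estimator or paper.

```python
import numpy as np
from scipy.stats import entropy                    # entropy(p, q) gives KL(p || q)
from scipy.spatial.distance import jensenshannon   # returns the JS *distance* (sqrt of the divergence)

p = np.array([0.10, 0.40, 0.50])   # e.g. a binned empirical density estimate
q = np.array([0.80, 0.15, 0.05])   # e.g. a fitted model evaluated on the same bins

print("KL(p||q) in bits:", entropy(p, q, base=2))   # asymmetric: KL(p||q) != KL(q||p)
print("KL(q||p) in bits:", entropy(q, p, base=2))
print("JS divergence in bits:", jensenshannon(p, q, base=2) ** 2)  # symmetric and bounded by 1
```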
Last but not least, I think there is a deep-rooted relationship between deep learning methods (and many machine learning methods per se) and the core concepts of information and coding theory. Since the hype around deep learning appeared, I have seen many studies applying deep learning methods (autoencoders, etc.) to the decoding of specific codes (polar, turbo, LDPC, etc.), claiming efficiency, robustness, and so on thanks to the parallel implementation and model-deficit nature of neural networks. However, I wonder about the other direction: can, say, back-propagation be replaced with more principled and efficient techniques already well known in the information theory world? Perhaps rate-distortion theory has something to say about the optimal number of layers we ought to use in deep neural networks. Belief propagation, turbo equalization, list decoding, and many other known algorithms and models may apply quite well to known machine learning problems and will perhaps deliver better and more efficient results in some cases. I know a few folks have already begun investigating neural-network-based encoder and decoder designs for feedback channels. In my opinion there are many open problems concerning the explicit design of encoders and the use of such networks without feedback. A few recent works have considered application areas such as molecular communications and coded computation, where a deep learning background can be applied to secure performance which could not otherwise be achieved using classical methods.
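A toy sketch of the "learning to decode" direction discussed above, purely for illustration and not reproducing any particular published design: a small MLP is trained to map noisy Hamming(7,4) codewords back to their 4-bit messages. The code, noise level and network size are arbitrary assumptions.

```python
# Toy "learning to decode" sketch: an MLP learns to map noisy Hamming(7,4)
# codewords back to the 4-bit message index. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Hamming(7,4) in systematic form: c = m @ G (mod 2)
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

messages = np.array([[int(b) for b in f"{i:04b}"] for i in range(16)])
codebook = messages @ G % 2                      # 16 codewords of length 7

def noisy_batch(n, flip_prob):
    """Pick random messages, encode them and pass them through a BSC."""
    idx = rng.integers(0, 16, size=n)
    x = codebook[idx].copy()
    flips = rng.random(x.shape) < flip_prob
    return (x ^ flips).astype(float), idx        # noisy codewords, message index

X_tr, y_tr = noisy_batch(50_000, flip_prob=0.05)
X_te, y_te = noisy_batch(10_000, flip_prob=0.05)

decoder = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=200, random_state=0)
decoder.fit(X_tr, y_tr)
print("message accuracy of learned decoder:", decoder.score(X_te, y_te))
```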
In the end, I just wanted to toss out a few short notes here to instigate further discussion and thought. This interface will attract more attention as we see the connections more clearly and bring out new applications down the road...
Relevant answer
Answer
I've been having similar random thoughts on the two topics. As a matter of fact, I'd like to think about learning in the more general sense, not limited to machines. But when I put keywords like 'coding theory', 'learning', etc. into Google, most results are just about applying some information-theoretic techniques to machine learning, while I'm looking for a deeper connection to help me understand learning better. Your post is seemingly the closest thing to what I want.
To briefly summarise my idea, I think we can treat learning as encoding, similar to the last point brought up in your post. I have to admit my ignorance, but I haven't found any works that study learning within the framework of coding theory, rather than just borrowing some convenient tools. You may have dug into the literature more since your post; please direct me to the right works or authors if you have found relevant material.
I don't have a background in information theory, but I know some of the basics. Many artificial neural networks can perform a denoising or pattern-completion task -- isn't that impossible from an information-theoretic point of view? Why can an output ever be the 'denoised' version of a noisier input? Of course this is a naive question, but it led me to realise that learning/training is like encoding and testing/responding is like decoding. Then I had to accept that a learning system together with all its training data forms an information pathway with a long (even permanent) lifespan, which should be shorter than the rate at which the regularities underlying the data change. Specifically, learning is a process by which the system compresses the aggregated, noisy training data (coding types other than compression would be more fun, but I'm not discussing them here); it treats the result as information and incorporates it into its learnable parameters (which live longer than any individual datum), and as a successful outcome the system becomes capable of denoising a test sample, which is in some sense similar to decoding an encrypted message with the correct codebook. In other words, I can think of learning as a procedure by which the system minimises its lifetime entropy through data fitting. This idea is implicitly present in the common use of error minimisation, i.e. minimising log-likelihoods, in machine learning, but it was clearly spelt out in Smolensky's Harmonium, which differs slightly from Hinton's restricted Boltzmann machine in its optimisation goal (which involves entropy). Unfortunately I'm not experienced enough to explain the technical details.
From my perspective, this research direction is extremely important and relevant when it comes to continual learning. In the more classical, static data-fitting or machine learning scenario, the learning system can in theory embrace all the training data at the same time. Minimising the system's lifetime entropy is then equivalent to reducing its uncertainty with respect to the training data at the exact moment it encounters the data. However, this is clearly an unrealistic assumption for humans and for many AI applications. A more realistic data stream is dynamic, and at each moment the system can only partially observe the data. Evidently, if an artificial neural network tries to optimise itself only with respect to this imperfect information, it suffers from catastrophic forgetting. So people start tweaking the learning rules, the regularisers, etc. in order to fix the problem. I do similar things too, but I feel a lack of theoretical guidance, as I believe there should be some information-theoretic quantification of the difficulty of continual learning tasks (there are some preliminary but arbitrary classifications now), at least for artificial tasks.
In summary, I believe an updated version of coding theory is needed for studying continual learning, because in this scenario the channel capacity of a learning system is determined not only by its instantaneous parameter (including structure) configuration, but also by an integral of these parameters over time.
  • asked a question related to Information Theory
Question
3 answers
I would like to know if there is an expression that shows the (maximum) channel capacity of a downlink multiuser MIMO channel when imperfect CSI is assumed.
Any references in this direction would be useful for me. Thanks!
Relevant answer
I can give you a conceptual answer and then you can build on it.
The ergodic channel capacity C is given by the Shannon formula
C = B log2(1 + r/N)
where
B is the bandwidth,
r is the received signal power at the input of the receiver,
N is the noise power.
We can express r in terms of the channel gain h and the transmitted signal power S such that r = S/h. In the case of imperfect estimation, the channel gain can be expressed as h + dh,
where dh is the error in determining h. Then
C/B = spectral efficiency = log2(1 + S/((h + dh) N)) = log2(1 + (S/(h N)) / (1 + dh/h)) = log2(1 + (r/N) / (1 + dh/h)).
So the noise is effectively increased by the relative channel error.
One can apply this formula to each MIMO sub-channel and sum over all the channels.
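A small numeric sketch of the back-of-the-envelope expression above (my own illustration; the SNR value and error levels are arbitrary):

```python
# Numeric sketch of the effect described above: the spectral efficiency
# log2(1 + SNR) with perfect CSI versus log2(1 + SNR / (1 + dh/h)) when the
# relative channel-estimation error dh/h inflates the effective noise.
import numpy as np

snr_db = 20.0
snr = 10 ** (snr_db / 10)

for rel_err in (0.0, 0.05, 0.1, 0.2, 0.5):
    se = np.log2(1 + snr / (1 + rel_err))
    print(f"dh/h = {rel_err:4.2f}  ->  spectral efficiency = {se:5.2f} bit/s/Hz")
```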
Best wishes
  • asked a question related to Information Theory
Question
3 answers
Hi, how can we calculate the entropy of chaotic signals? Is there a simple method or formula for doing this?
Relevant answer
  • asked a question related to Information Theory
Question
3 answers
Greetings,
I am working on my grad project, implementing an LDPC DVB-S2 decoder, and the best resources I have found explaining LDPC decoding unfortunately follow the 5G standard. If I follow along with these resources on the 5G implementation, what should I look out for so as not to get confused between the two implementations?
Thanks in advance!
Relevant answer
welcome,
Conceptually, the encoding and decoding techniques are the same for LDPC in the two applications. The difference may be in the code rate, which is k/n, where k is the message length and n is the code length. In addition, the block size may differ between the two standards.
You can adopt the method used in the 5G resources provided that it satisfies your required performance parameters.
Maybe the major differences are the block size and the encoding/decoding time in the two standards. So the computing platform required may have different ratings, and you have to take these differences into consideration from the very beginning. You can make this clear, to some extent, with simulation experiments.
Best wishes
  • asked a question related to Information Theory
Question
13 answers
Is there an equation connecting the wave function and the entropy of the quantum system?
Relevant answer
Answer
Quantum theory allows us to assign a finite value to the entropy and to calculate it as a function of Planck's constant. This entropy constant enters the calculation and allows us to quantify the predictions of quantum theory. The second law of thermodynamics establishes the existence of entropy as a function of the state of the thermodynamic system; that is, "the second law is the law of entropy." In an isolated system, the entropy either remains unchanged or increases (in non-equilibrium processes), reaching a maximum when thermodynamic equilibrium is established (the law of increasing entropy). The different formulations of the second law of thermodynamics found in the literature are specific consequences of the law of increasing entropy.
  • asked a question related to Information Theory
Question
4 answers
I have the following data set (attached) and I would like to calculate the mutual information and the joint entropy between multiple columns (e.g. A, B, D, E or C, D, E, F, G). I have gone through the R package entropy and other related packages, but as I am very new to information theory, I am having some trouble computing it.
I am specifically looking for R code or an online calculator to do this.
Relevant answer
Answer
Interesting question
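To make it concrete, here is a minimal sketch of one way to compute these plug-in estimates, in Python rather than the requested R (the logic carries over directly); the column names and toy data below are hypothetical stand-ins for the attached data set, and the estimator assumes discrete (or pre-binned) columns.

```python
# Sketch of plug-in estimates for the joint entropy of several discrete columns
# and the mutual information between two groups of columns:
#   I(A;B) = H(A) + H(B) - H(A,B)
import numpy as np
import pandas as pd

def joint_entropy(df, cols):
    """Plug-in (empirical) joint entropy, in bits, of the given columns."""
    counts = df.groupby(cols).size().to_numpy(dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(df, cols_a, cols_b):
    return (joint_entropy(df, cols_a) + joint_entropy(df, cols_b)
            - joint_entropy(df, cols_a + cols_b))

# Toy example with made-up columns A..E (placeholders for the real data set).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(0, 3, size=(500, 5)), columns=list("ABCDE"))

print("H(A,B,D,E) =", joint_entropy(df, ["A", "B", "D", "E"]))
print("I(A,B ; D,E) =", mutual_information(df, ["A", "B"], ["D", "E"]))
```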
  • asked a question related to Information Theory
Question
7 answers
Can we affirm that whenever one has a prediction algorithm, one can also get a correspondingly good compression algorithm for data one already has, and vice versa?
Relevant answer
Answer
There is a close relationship between compression and prediction. Prediction is a tool of compression: if your data contain redundancy, you can predict the redundant part from the context of the signal and remove it by simply subtracting the predicted signal from the real signal.
The difference is the compressed signal.
Prediction is a powerful concept for reducing the redundancy in a signal and consequently compressing it.
Prediction is used intensively in video codecs and other signal codecs.
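A minimal sketch of the idea described above, assuming a toy integer-valued signal: predict each sample by the previous one and compare the empirical entropy of the raw signal with that of the prediction residual.

```python
# Sketch of the prediction-as-compression idea: predict each sample from the
# previous one, keep only the residual, and compare empirical entropies.
import numpy as np

def empirical_entropy_bits(x):
    """Plug-in entropy (bits/symbol) of an integer-valued signal."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
# A slowly varying (hence redundant) integer signal, e.g. a random walk.
signal = np.cumsum(rng.integers(-2, 3, size=10_000))

residual = np.diff(signal)   # "signal minus its prediction" (previous sample)

print("entropy of raw signal:", empirical_entropy_bits(signal), "bits/symbol")
print("entropy of residual  :", empirical_entropy_bits(residual), "bits/symbol")
# The residual is far more compressible, and the original signal is recovered
# exactly by adding the predictions back (a cumulative sum), so nothing is lost.
```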
Best wishes
  • asked a question related to Information Theory
Question
4 answers
Please consider a set of pairs of probability measures (P, Q) with given means (m_P, m_Q) and variances (v_P, v_Q).
For the relative entropy (KL divergence) and the chi-square divergence, a pair of probability measures defined on a common two-element set (u_1, u_2) attains the lower bound.
For a general f-divergence, what is the condition on f such that a pair of probability measures defined on a common two-element set attains the lower bound?
Intuitively, I think the divergence between localized probability measures should be smaller.
Thank you for taking the time.
Relevant answer
Answer
  • asked a question related to Information Theory
Question
92 answers
Dear researchers,
Let's share our opinion about recent attractive topics on communication systems and the potential future directions.
Thanks.
Relevant answer
Answer
FANET
  • asked a question related to Information Theory
Question
6 answers
By definition, the capacity of a communication channel is given by the maximum of the mutual information between the input (X) and output (Y) of the channel, where the maximization is with respect to the input distribution, that is C=sup_{p_X(x)}MI(X;Y).
From my understanding (please correct me if I'm wrong), when we have a noisy channel, such that some of the input symbols may be confused in the output of the channel, we can draw a confusability graph of such a channel where nodes are symbols and two nodes are connected if and only if they could be confused in the output.
If we had to communicate using messages made out of single symbols only, then the largest number of messages that could be sent over such a channel would be α(G), the size of the largest independent set of vertices in the graph (in this case Shannon capacity of the graph equals independence number of that graph α(G)).
Does this mean that for such a channel, the maximum mutual information of the input and output of the channel (channel capacity) is α(G), and it is achieved by sending the symbols of the largest independent set?
Relevant answer
Answer
Hello Amirhossein Nouranizadeh, you have an interesting question to discuss, but first you probably need to think again about your introductory text.
" when we have a noisy channel, such that some of the input symbols may be confused in the output of the channel "
It's not exactly like this. First of all, any channel is noisy, otherwise it's not real, or you do not need to communicate because everything is known with no uncertainty.
There is no such thing as an error-free zone of the encoding alphabet and an error-free zone of the decoding alphabet. Otherwise it would mean that the job has not been done.
Take the case you want to transmit one bit b. It takes values 0 or 1.
Assume that the sender sends b and the receiver receives b' at the other end of the transmission channel.
The channel is noisy (otherwise it's not real...), so the probability that b' = b is less than 1: P(b' = b) < 1.
The probability of error is P(b' ≠ b) = 1 - P(b' = b) > 0.
Clear?
That's how life is.
Now instead of sending a single bit b, you send a vector v, and you receive a vector v' (in channel language they speak of "words", but a word is a vector, so it's the same thing).
Then the error probability is P(v' ≠ v) = 1 - P(v' = v).
You should not assume that there are immune vectors and contaminable vectors. If the channel coding is done properly, it spreads the risk evenly, usually.
There are other strategies. In the first Digital Mobile Communication Codec (GSM first generation) there was however a somewhat different structure:
- hierarchy of parameters (say projections of the vector v on subspaces V1, V2, V3, etc., for instance the first two characters of a word, then the following two, etc.)
- coding with a robustness hierarchy:
Protect V1 more than V2, protect V2 more than V3.
-decoding with hierarchy of error protection
Then decoding, after transmission of the sent v, gives at the receiver v', reconstructed from the projections v'1, v'2, v'3, where v'1 has a lower probability of error than v'2, which has a lower probability of error than v'3.
With such a scheme, you give more protection resources to what is more dramatic to lose.
Imagine it's the remote control of a car: going forward (+) or backward (-) is more crucial information than the precise geometric angle of the movement, so you protect it more.
I hope that with the above you get a concrete sense of what is happening at a sender, on a channel, and at a receiver.
Does it help you?
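Coming back to the confusability-graph part of the original question, here is a minimal brute-force sketch (my own illustration) that computes the independence number α(G) for a tiny alphabet, using Shannon's pentagon channel as the example; it only addresses the one-shot, single-symbol count asked about in the question.

```python
# For a confusability graph G, the largest number of single-symbol messages
# that can never be confused is the independence number alpha(G).
# Brute force is enough for tiny alphabets.
# Example: pentagon channel (symbol i can be confused with i +/- 1 mod 5).
from itertools import combinations

edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)}   # confusable symbol pairs

def is_independent(subset):
    return all((a, b) not in edges and (b, a) not in edges
               for a, b in combinations(subset, 2))

n = 5
alpha, best = 0, ()
for k in range(n, 0, -1):          # try the largest subsets first
    for subset in combinations(range(n), k):
        if is_independent(subset):
            alpha, best = k, subset
            break
    if alpha:
        break

print("alpha(G) =", alpha, "achieved e.g. by symbols", best)   # 2, e.g. (0, 2)
```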
  • asked a question related to Information Theory
Question
13 answers
In your opinion, how will applications of the technology for analysing large information collections in Big Data database systems be developed in the future?
In which areas of industry, science, research, information services, etc. will applications of this technology be developed in the future, in your opinion?
Please reply
I invite you to the discussion
I described these issues in my publications below:
I invite you to discussion and cooperation.
Best wishes
Relevant answer
Answer
Dear Shafagat Mahmudova, Len Leonid Mizrah, Reema Ahmad, Shah Md. Safiul Hoque, Natesan Andiyappillai, Omar El Beggar, Tiroyamodimo Mmapadi Mogotlhwane, Thank you very much for participating in this discussion and providing inspiring and informative answers to the above question: What will Big Data be like in the future? Thank you very much for the interesting information and inspiration to continue deliberations on the above-mentioned issues. This discussion confirms the importance of the above-mentioned issues and the legitimacy of developing research on this subject. I also believe that the Big Data Analytics analytical and database technology is one of the most developing technologies included in Industry 4.0. What do you think about it?
Thank you very much and best regards,
Dariusz Prokopowicz
  • asked a question related to Information Theory
Question
6 answers
The development of IT and information technologies increasingly affects economic processes taking place in various branches and sectors of contemporary developed and developing economies.
Information technology and advanced information processing are increasingly affecting people's lives and business ventures.
The current technological revolution, known as Industry 4.0, is determined by the development of the following technologies of advanced information processing: Big Data database technologies, cloud computing, machine learning, Internet of Things, artificial intelligence, Business Intelligence and other advanced data mining technologies.
In connection with the above, I would like to ask you:
How to measure the value added in the national economy resulting from the development of information and IT technologies?
Please reply
Best wishes
Relevant answer
Answer
Dear Tarandeep Anand, Reza Biria, Krishnan M S, Thank you very much for participating in this discussion and providing inspiring and informative answers to the above question: How to measure the value added in the national economy resulting from the development of information and IT technologies? Thank you very much for your inspiring, interesting and highly substantive answer.
Thank you very much and best regards,
Dariusz Prokopowicz
  • asked a question related to Information Theory
Question
6 answers
In information theory, the entropy of a variable is the amount of information contained in the variable. One way to understand the concept of the amount of information is to tie it to how difficult or easy it is to guess the value. The easier it is to guess the value of the variable, the less “surprise” in the variable and so the less information the variable has.
The Rényi entropy of order q (q ≥ 0, q ≠ 1) is defined by the equation
S_q = (1/(1 - q)) log(Σ_i p_i^q).
As the order q increases, the entropy weakens (it is non-increasing in q).
Why are we concerned about higher orders? What is the physical significance of the order when calculating the entropy?
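A minimal sketch illustrating the formula in the question (the toy distribution is chosen arbitrarily): the Rényi entropy of a fixed distribution computed for several orders q, showing how it weakens as q grows.

```python
# Renyi entropy of a fixed distribution for several orders q, showing that it
# is non-increasing in q; the q -> 1 limit recovers the Shannon entropy.
import numpy as np

def renyi_entropy(p, q):
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):                 # limit q -> 1 is the Shannon entropy
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** q)) / (1.0 - q)

p = np.array([0.5, 0.25, 0.15, 0.10])      # arbitrary toy distribution
for q in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(f"q = {q:4.1f}  ->  S_q = {renyi_entropy(p, q):.4f} bits")
```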
Relevant answer
Answer
You may look at other entropies; search for articles by Prof. Michèle Basseville on entropies of probability measures.
  • asked a question related to Information Theory
Question
3 answers
Normalized Mutual Information (NMI) and B3 are used as extrinsic clustering evaluation metrics when each instance (sample) has only one label.
What are the equivalent metrics when each instance (sample) has multiple labels?
For example, in the first image we see [apple, orange, pears], in the second image we see [orange, lime, lemon], in the third image we see [apple], and in the fourth image we see [orange]. Then, putting the first and last images in one cluster is good, while putting the third and fourth images in one cluster is bad.
Application: many popular datasets for object detection or image segmentation have multiple labels per image. If we use such data for classification (not detection and not segmentation), we have multiple labels for each image.
Note: my task is unsupervised clustering, not supervised classification. I know that for supervised classification we can use the top-5 or top-10 score, but I do not know what the equivalent would be for unsupervised clustering.
Relevant answer
Answer
  • asked a question related to Information Theory
Question
9 answers
In the question, Why is entropy a concept difficult to understand? (November 2019) Franklin Uriel Parás Hernández commences his reply as follows: "The first thing we have to understand is that there are many Entropies in nature."
His entire answer is worth reading.
It leads to this related question. I suspect the answer is yes, the common principle being degrees of freedom and dimensional capacity. Your views?
Relevant answer
Answer
Among all entropy definitions, the most difficult (I still don't understand it) but probably the most important one is the Kolmogorov-Sinai entropy.
The reason: Prof. Sinai and Acad. Kolmogorov were the main architects of most of the bridges connecting the world of deterministic (dynamical) systems with the world of probabilistic (stochastic) systems.
  • asked a question related to Information Theory
Question
14 answers
The information inside the volume of a black hole is proportional to its surface area.
However, what if information does not cross the horizon but rather is constrained to stay on the horizon's surface, progressively increasing the black hole's radius? What if the black hole is empty, and its force comes just from a spacetime distortion inside it? Reversing Einstein, what if the black hole's attraction is caused not by its mass, but just by the spacetime deformation inside it? This would explain the paradoxes of the holographic principle...
Thanks
Clues: material isn’t doomed to be sucked into the hole. Only a small amount of it falls in, while some of it is ejected back out into space.
Relevant answer
Answer
"The Schwarzschild radius is a physical parameter that shows up in the Schwarzschild solution to Einstein's field equations, corresponding to the radius defining the event horizon of a Schwarzschild black hole".
Call it a horizon, call it entropy as you wish; my question is the same: could it be that information from outside the black hole reaches just this external radius, accumulating on this surface rather than entering INSIDE the radius? If there is no information inside the radius, that would explain why the entropy of the black hole is proportional to the area of the horizon (as you call it), and not to the volume.
  • asked a question related to Information Theory
Question
3 answers
In compressive sensing (CS), we can use fewer measurements, M, to reconstruct an original N-dimensional signal, where M << N and the measurement matrix satisfies the restricted isometry property (RIP). Can we combine the concept of entropy in information theory with CS? Intuitively speaking, if the data are successfully reconstructed, no information is lost before and after CS. Can we claim that the entropy of the compressed measurements equals the entropy of the original signal, given that entropy stands for the information contained?
To make my problem easier to understand, I give an example below:
Suppose that in a data-gathering wireless sensor network we deploy N machines in an area. To quantify the amount of information collected by each machine, we assume a Gaussian source field, where the collection of data gathered by all machines is assumed to follow a multivariate Gaussian distribution ~N(\mu, \Sigma). The joint entropy of all the data is H(X). Now we use M measurements to reconstruct these data by CS. The joint entropy of these M measurements is H(Y). Can we say that H(X) equals H(Y)?
Thanks for your response.
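As a side note, here is a minimal sketch of the two differential entropies in the Gaussian setting described above (my own illustration with an arbitrary covariance and a random Gaussian measurement matrix); whether the two quantities should be regarded as "the same information" is exactly the point of the question.

```python
# Differential entropy of the original N-dimensional Gaussian data
# X ~ N(mu, Sigma) versus that of the compressed measurements Y = Phi X
# (also Gaussian, with covariance Phi Sigma Phi^T).
# h(Gaussian) = 0.5 * log((2*pi*e)^d * det(Sigma)), here in nats.
import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 8

# A random positive-definite covariance for the sensor field (illustrative only).
A = rng.normal(size=(N, N))
Sigma = A @ A.T + N * np.eye(N)

Phi = rng.normal(size=(M, N)) / np.sqrt(M)     # random Gaussian measurement matrix

def gaussian_entropy(cov):
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

print("h(X) =", gaussian_entropy(Sigma), "nats")
print("h(Y) =", gaussian_entropy(Phi @ Sigma @ Phi.T), "nats")
```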
Relevant answer
Let me approach the issue from another point of view. Assume N transmitters in an area. Not all N transmitters will be active at the same time. If we assume that M transmitters are active in a time interval T, then one needs at least M samples to detect the active transmitters. More samples would be redundant and would only help confirm the active sources.
This is from a physical point of view.
In principle you do not need more than M samples to determine the state of the N senders, since N - M of them are off. Compressive sensing is based on this fact.
Best wishes
  • asked a question related to Information Theory
Question
3 answers
While studying information theory, we do not consider any directionality of the channel.
Nothing changes if the receiver and the transmitter are interchanged (i.e. Lorentz reciprocity is obeyed).
However, suppose the channel is a non-reciprocal device, such as an isolator or a Faraday rotator, rather than a simple transmission cable. What are the consequences for information theory?
What would be the consequences for the Shannon entropy and for theorems such as the Shannon coding theorem, the Shannon-Hartley theorem, etc.? I have been googling terms like "non-reciprocal networks", but I have not been able to find anything. Any help will be appreciated.
Relevant answer
Dear Chetan,
welcome,
You have touched on an important point, which is channel reciprocity.
The Shannon channel capacity does not require reciprocity. It deals with a communication medium which carries an information signal from a source to a destination, and it gives the limit on the transmission speed in bits per second, which is called the channel capacity. So it handles the rate of data transmission in one direction. Such a transmission mode is called simplex.
There is also half duplex, which provides transmission in both directions but in different time slots. This is related to the physical capability of the channel.
So reciprocity is an additional property of the channel, independent of the channel capacity, which is defined only in one direction.
There is also full duplex; in this case one uses two channels, one for the forward and one for the backward direction.
This is my opinion about the two properties: the channel capacity and the reciprocity.
Best wishes
  • asked a question related to Information Theory
Question
4 answers
We very frequently use the cross-entropy loss in neural networks. Cross-entropy originally came from information theory, from entropy and the KL divergence.
My question is: if I want to design a new objective function, does it always need to be consistent with information theory?
For example, in my objective function I want to add a probability measure of something, say A, to the cross-entropy loss. A ranges from 0 to 1. So the objective function will look like this:
= A + (cross-entropy between actual and prediction)
= A + (-(actual)*log(prediction))
Say the above objective function works well for neural networks, but violates information theory in the sense that we are adding a probability value, A, to a loss value, namely the cross-entropy (-(actual)*log(prediction)).
So my question is: even if it violates loss evaluation from the viewpoint of information theory, is it acceptable as an objective function for neural networks?
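A tiny numeric sketch of the combined objective written above (framework-free on purpose; A and the toy vectors are arbitrary):

```python
# Toy numeric sketch of the combined objective:
#   loss = A + cross-entropy(actual, prediction), with A in [0, 1].
# In practice the same expression can serve as a custom loss in any
# deep-learning library as long as it remains differentiable.
import numpy as np

def cross_entropy(actual, prediction, eps=1e-12):
    prediction = np.clip(prediction, eps, 1.0)
    return -np.sum(actual * np.log(prediction))

actual = np.array([0.0, 1.0, 0.0])          # one-hot target
prediction = np.array([0.2, 0.7, 0.1])      # model output (softmax probabilities)
A = 0.3                                     # extra probability-valued penalty term

loss = A + cross_entropy(actual, prediction)
print("combined objective:", loss)
```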
Relevant answer
Answer
Dear Md Sazzad Hossain,
From my experience, one should define the objective function taking into account the formulation of the problem to be solved and, if necessary, adapt it to the mathematical conditions.
For more details and information about this subject, I suggest you see the links on the topic.
Best regards
  • asked a question related to Information Theory
Question
2 answers
When carrier aggregation and cross-carrier scheduling are applied in an LTE-Advanced system, a UE may support multiple Component Carriers (CCs), and control information on one CC can allocate radio resources on another CC. The search spaces of all CCs and the control information are transmitted only on a chosen CC. In this case, if the search spaces of the different CCs are not properly defined, a high blocking probability of the control information will be very harmful to system performance.
My question is: what is the cause of this blocking? Is it a shortage of control channel elements to serve the scheduled UEs, or something else?
My guess is that it is not, but I have no proof of this. Can any expert help?
For now, I assume that either self-overlapping or high mutual overlapping of the UEs' search spaces is the likely cause of blocking.
Relevant answer
Answer
The main factors behind blocked users are the number of available CCEs and the scheduler design used to fit users onto them: if the hash function yields candidate indices which are already occupied, the user cannot be scheduled and is therefore blocked. I hope this point makes the reason for blocking clear. Kindly refer to this to understand it clearly.
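A deliberately simplified Monte Carlo sketch of the blocking effect described in this answer (my own toy model; the random candidate positions stand in for the 3GPP search-space hash, and all numbers are illustrative rather than taken from the specification):

```python
# Simplified blocking simulation: each UE gets a few candidate CCE positions
# (a toy stand-in for the search-space hash); a UE is blocked when all of its
# candidates are already occupied by previously scheduled UEs.
import numpy as np

rng = np.random.default_rng(0)
n_cce, n_ue, agg_level, n_candidates, n_trials = 40, 12, 2, 6, 2000

blocked = 0
for _ in range(n_trials):
    occupied = np.zeros(n_cce, dtype=bool)
    for _ue in range(n_ue):
        starts = rng.integers(0, n_cce // agg_level, size=n_candidates) * agg_level
        for s in starts:                       # try each candidate in order
            if not occupied[s:s + agg_level].any():
                occupied[s:s + agg_level] = True
                break
        else:
            blocked += 1                       # no free candidate -> UE blocked

print("estimated blocking probability:", blocked / (n_trials * n_ue))
```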
  • asked a question related to Information Theory
Question
5 answers
Hi
I have published this paper recently
In that paper, we did an abstracted simulation to get an initial result. Now I need to do a detailed simulation in a network simulator.
So I need a network simulator that implements or supports MQTT-SN, or an implementation of MQTT-SN that would work in a network simulator.
Any hints, please?
Relevant answer
Answer
Hello,
Any network simulator, e.g., Netsim, ns2 or any IoT simulator.
  • asked a question related to Information Theory
Question
5 answers
Goal of the theory:
Informational efficiency is a natural consequence of competition, relatively free entry, and low costs of information. If there is a signal, not yet incorporated in market prices, that future values will be high, competitive traders will buy on that signal. In doing so, they bid the price up until it fully reflects the information in the signal.
Relevant answer
Answer
Thanks, Timira Shukla. But inbound is a method of attracting, engaging, and delighting people to grow a business that provides value and builds trust. As technology shifts, inbound guides an approach to doing business in a human and helpful way: a better way to market, a better way to sell, and a better way to serve your customers. Because when good-for-the-customer means good-for-the-business, your company can grow better over the long term.
But what strategy is best suited for delivering timely information?
  • asked a question related to Information Theory
Question
21 answers
Free access to information should prevail on the Internet.
This is the main factor in the dynamic development of many websites, new internet services, the growth of users of social media portals and many other types of websites.
In my opinion, all information that can be publicly disseminated should be available on the Internet without restrictions, universally and free of charge.
Please, answer, comments.
I invite you to the discussion.
Best wishes
Relevant answer
Answer
The possibility of publishing certain content, texts, banners, comments, etc. on the Internet, together with free access to information, are key determinants of the development of information services on the Internet. On the other hand, the largest online technology corporations earn their revenues mainly from paid marketing services. The Internet environment is therefore a kind of mix of free and paid information and marketing services, which are developed simultaneously, and in a mutually connected way, by various Internet companies.
Best wishes
  • asked a question related to Information Theory
Question
4 answers
Hello, for my research paper I need to select researchers who have written research papers/works/articles in journals about how they "see" a single person in Informatology or Information Science.
This is connected with my MA thesis, so answers to this question could help me with my choices. I appreciate every answer!
Relevant answer
Answer
Hi,
Maybe this paper could help:
Tang, R., & Solomon, P. (1998). Toward an understanding of the dynamics of relevance judgment: An analysis of one person's search behavior. Information Processing & Management, 34(2-3), 237-256.
  • asked a question related to Information Theory
Question
7 answers
Hi Francis
Greetings from India
Do you use information theory in your work?
What is the framework you are using for integrating the two?
Thanks in advance
Safeer
Relevant answer
Answer
Yes, I use it in my work.
  • asked a question related to Information Theory
Question
8 answers
Do we lose information when we project a manifold?
For example, do we lose information about a manifold, i.e. the Earth (globe), when we project it onto a chart in a book (using, say, the stereographic, Mercator or any other projection)?
Similarly, we should be losing information when we create a Bloch sphere for a two-state system in quantum mechanics, which is also a space projected from a higher dimension, i.e. 4 dimensions.
Also, is there a way to quantify this information loss, if there is any?
Relevant answer
Answer
When we project the Earth onto a chart in a book, if the transformation is a diffeomorphism we obtain a scaled-down copy of the Earth; some details may not be clearly visible in the chart, but this doesn't mean they are lost.
  • asked a question related to Information Theory
Question
11 answers
Apparently, in some countries, banks holding large collections of information on the achievements of human civilization, recorded on digital data carriers, are being founded, usually underground, in specially created bunkers capable of surviving climatic disasters and other calamities.
These are properly secured Big Data database systems, data warehouses and underground information banks, recorded digitally.
The underground bunkers themselves may survive various climatic and other calamities for perhaps hundreds or thousands of years.
But how long will the large collections of information stored on digital media in these Big Data systems and data warehouses survive?
Perhaps a better solution would be to record this data in analogue form on specially created discs?
Already in the 1970s, a certain amount of data concerning the achievements of human civilization was placed on the Pioneer 10 probe, which was sent into space, recently left the solar system, and will fly for thousands of years carrying its information about human civilization towards Alpha Centauri.
At that time, the data about the achievements of human civilization sent into the Universe was recorded on gold discs.
Is there a better form of data storage at the moment, given that this data should last for thousands of years?
Please reply
Best wishes
Relevant answer
Theoretically, thousands of years, unless unexpected disasters occur...
  • asked a question related to Information Theory
Question
5 answers
Given that:
1) Alice and Bob have access to a common source of randomness,
2) Bob's random values are displaced by some (nonlinear) function, i.e. B_rand = F(A_rand).
Are there protocols which allow the two to securely agree on the function (or its inverse) without revealing any information about it?
Relevant answer
Answer
Basically, there are three main steps for secure key generation based on physical-layer properties: 1) randomness extraction, 2) reconciliation, 3) privacy amplification.
We usually refer to key agreement as the reconciliation step, where error-correction techniques such as LDPC or Cascade can be used.
Because of the information leaked during reconciliation, privacy amplification can then be applied, by means of functions such as a universal hash function.
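A minimal sketch of the privacy-amplification step mentioned above, assuming a Toeplitz-matrix (2-universal) hash over GF(2); the input and output lengths are arbitrary placeholders, since in practice the output length is chosen from the estimated leakage during reconciliation.

```python
# Privacy-amplification sketch: compress a reconciled bit string with a random
# Toeplitz (2-universal) hash over GF(2). Sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 128                       # reconciled bits -> final key bits

key_in = rng.integers(0, 2, size=n_in)

# A Toeplitz matrix is fully defined by its first column and first row.
first_col = rng.integers(0, 2, size=n_out)
first_row = rng.integers(0, 2, size=n_in)
first_row[0] = first_col[0]
T = np.array([[first_col[i - j] if i >= j else first_row[j - i]
               for j in range(n_in)] for i in range(n_out)])

key_out = (T @ key_in) % 2                   # hashed (amplified) key, GF(2) product
print("final key:", "".join(map(str, key_out[:32])), "...")
```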
  • asked a question related to Information Theory
Question
15 answers
Black holes create event horizons, depending on the mass compressed into a narrow space (a point?). By analogy, could the quantity (mass?) of information in a narrow space lead to an "insight horizon", which is why we cannot see into it from the outside, and therefore why no 100 percent simulation of a real system filled with a lot of information can succeed?
The more factors we use to model a system, the closer we get to reality (e.g. ecosystems), but this process is asymptotic (reality is approximated asymptotically with every additional correct factor). Interestingly, it also seems to us that an object red-shifts into infinity as it approaches a black hole (also asymptotically).
Can we learn anything from this analogy? And if so, what?
Relevant answer