Science topic
Information Theory - Science topic
An interdisciplinary study dealing with the transmission of messages or signals, or the communication of information. Information theory does not directly deal with meaning or content, but with physical representations that have meaning or content. It overlaps considerably with communication theory and CYBERNETICS.
Questions related to Information Theory
I am looking to apply for PhD programs in the fields of Coding Theory, Cryptography, Information Theory, Communication Engineering, and Machine Learning.
I welcome any inquiries and collaboration opportunities related to these research areas.
Email: zhyueyue1998@gmail.com
Comments on “Information = Comprehension × Extension”
Resources
Inquiry Blog • Survey of Pragmatic Semiotic Information
OEIS Wiki • Information = Comprehension × Extension
C.S. Peirce • Upon Logical Comprehension and Extension
Information = Comprehension × Extension • Preamble
Eight summers ago I hit on what struck me as a new insight into one of the most recalcitrant problems in Peirce’s semiotics and logic of science, namely, the relation between “the manner in which different representations stand for their objects” and the way in which different inferences transform states of information. I roughed out a sketch of my epiphany in a series of blog posts then set it aside for the cool of later reflection. Now looks to be a choice moment for taking another look.
A first pass through the variations of representation and reasoning detects the axes of iconic, indexical, and symbolic manners of representation on the one hand and the axes of abductive, inductive, and deductive modes of inference on the other. Early and often Peirce suggests a natural correspondence between the main modes of inference and the main manners of representation but his early arguments differ from his later accounts in ways deserving close examination, partly for the extra points in his line of reasoning and partly for his explanation of indices as signs constituted by convening the variant conceptions of sundry interpreters.
Epigenetic information "polishing," then distribution of recessive traits. WARNING: genetic engineering is DANGEROUS. Hopefully, regularly polishing the epigenome will cure aging and other diseases, and also prevent side effects of genetic engineering. Also, hopefully and theoretically, after any genetic engineering to provide recessive traits and recessive genes, if the procedure is simple enough, subjects will keep their genetic signatures.
See my profile or Substack:
Fungi.
All of them decompose other dead organisms: "Fungi are important decomposers, especially in forests" ( https://education.nationalgeographic.org/resource/decomposers/ 19 Oct 2023). Decomposing the dead is literally the ultimate form of predation.
"The earliest life forms we know of were microscopic organisms (microbes) that left signals of their presence in rocks about 3.7 billion years old" ( https://naturalhistory.si.edu/education/teaching-resources/life-science/early-life-earth-animal-origins ).
Microbes are sometimes fungi:
"They [microbes] include bacteria, archaea, fungi, protists, some green algae, and viruses" ( https://www.energy.gov/science/doe-explainsmicrobiology ).
Why does information theory explain aging, evolution vs creationism, critical rationalism, computer programming and much more?
Perhaps information has a very open definition and is thus very robust.
Hi researchers
I need links to computer and networking journals that publish review papers with acceptable fees.
Which theory best explains aging; error, damage, or information? Why specifically?
Information theory best explains aging: genetic "errors" are difficult to define, and damage may even be necessary for growth, which undermines the rival error and damage theories.
"The Information Theory of Aging (ITOA) states that the aging process is driven by the progressive loss of youthful epigenetic information, the retrieval of which via epigenetic reprogramming can improve the function of damaged and aged tissues by catalyzing age reversal" ( 16 mar 2024 https://www.researchgate.net/publication/376583494_The_Information_Theory_of_Aging ).
It has been known for almost a century that when animals learn new routines, synaptic strength within the brain, especially within the neocortex, is systematically altered (Hebb 1949; Kandel 2006). Enhancement of synaptic strength has been demonstrated for human subjects learning a new language. A group of Japanese university students, who were moderately bilingual, were enrolled in a 4-month intensive language course to improve their English (Hosoda et al. 2013). During this period, they learned ~1000 new English words, which they used in various spoken and written contexts. The learning was followed by a weekly test. To learn the 1000 words, it is estimated that 0.0006 bits per second of information were transmitted over the 4-month period [1.5 bits per letter x 4 letters/word x 1000 words/16 weeks], a rate that (not surprisingly) falls well short of the 40 bits per second transmitted by a competent communicator of English (Reed and Durlach 1998); hence learning takes longer than the execution of a learned act. Additionally, it was discovered that the pathway between Broca's area and Wernicke's area was enhanced in the students, as evidenced by diffusion tensor imaging (Hosoda et al. 2013). Such enhancement during learning has been attributed to increased myelination and synaptogenesis (Blumenfeld-Katzir et al. 2011; Kalil et al. 2014; Kitamura et al. 2017). A central reason for this understanding is that the minimal circuit for language learning has long been known to exist between Wernicke's and Broca's areas of the human brain, based on lesion, stimulation, and neural recording experiments (Kimura 1993; Ojemann 1991; Metzger et al. 2023; Penfield and Roberts 1966).
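The bracketed estimate above is easy to verify; a quick check of the arithmetic (16 weeks taken as exactly 16 x 7 days):

```python
# Verify the estimated information-transfer rate for vocabulary learning.
bits_per_letter = 1.5
letters_per_word = 4
words = 1000
seconds = 16 * 7 * 24 * 3600  # 16 weeks in seconds

total_bits = bits_per_letter * letters_per_word * words  # 6000 bits
rate = total_bits / seconds
print(round(rate, 5))  # ~0.00062 bits/second, consistent with the ~0.0006 figure
```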
With the use of modern methods (e.g., wide-field two-photon calcium imaging and optogenetic activation and inhibition), we can now delineate, with a high degree of precision, minimal cortical circuits that are involved in the learning of new tasks in animals (e.g., Esmaeili, Tamura et al., 2021, see attached Fig. 1). The next step is to measure the changes in synaptic formation via learning to assess the amount of new information added to neocortex [which in humans has an estimated capacity to store 1.6 x 10^14 bits of information or 2 ^ (1.6 x 10^14) possibilities, Tehovnik, Hasanbegović 2024]. This will address whether the neocortex has an unlimited capacity for information storage or whether the addition of new information replaces the old information as related to previous learning that utilized the minimal circuit (the same will need to be done for corresponding cerebellar circuits that contain the executable code based on stored declarative information, Tehovnik, Hasanbegović, Chen 2024). We have argued that uniqueness across individual organisms is predicated on both genetics and learning history (thereby making the hard problem of consciousness irrelevant). Soon investigators will track the learning history of an individual organism to assess how the brain creates (and updates) a unique library of learning per organism thereby helping us understand how genetics and learning history created, for example, Einstein, Kasparov, and Pelé.
Figure 1: A minimal neocortical circuit is illustrated for mice trained to perform a delayed go-no-go licking task before (Novice) and after learning (Expert). As with the minimal circuit for language acquisition in humans, this circuit can now be subjected to detailed synaptic analysis by which to quantify how learning occurs at the synapses (Hebb 1949; Kandel 2006); this quantification can be used to estimate how many bits of information the new connections represent and then to compare the amount of new information added to the animal's behavioral repertoire (Tehovnik, Hasanbegović, Chen 2024). Illustration from Fig. 8 of Esmaeili, Tamura et al. (2021).
My answer: Yes, to interpret history, disincentives are the most rigorous guide. How? Because of the many assumptions of inductive logic, deductive logic is more rigorous. Throughout history, incentives are less rigorous because no entity (besides God) is completely rational and/or self-interested; thus, what incentivizes an act is less rigorous than what disincentivizes the same act. And, as a heuristic, all entities (besides God) have a finite existence before their energy (eternal consciousness) goes to the afterlife (paraphrased from these sources: 1)
2) )
; thus, interpretation through disincentives is more rigorous than interpretation through incentives.
I've been reading about Claude Shannon and Information Theory. I see he is credited with developing the concept of entropy in information theory, which is a measure of the amount of uncertainty or randomness in a system. Do you ever wonder how his concepts might apply to the predicted red giant phase of the Sun in about 5 billion years? Here are a few thoughts that don't include much uncertainty or randomness -
In about 5 billion years the Sun is supposed to expand into a red giant and engulf Mercury and Venus and possibly Earth (the expansion would probably make Earth uninhabitable in less than 1 billion years). It's entirely possible that there may not even be a red giant phase for the Sun. This relies on entropy being looked at from another angle - with the apparent randomness in quantum and cosmic processes obeying Chaos theory, in which there's a hidden order behind apparent randomness. Expansion to a Red Giant could then be described with the Information Theory vital to the Internet, mathematics, deep space, etc. In information theory, entropy is defined as a logarithmic measure of the rate of transfer of information. This definition introduces a hidden exactness, removing superficial probability. It suggests it's possible for information to be transmitted to objects, processes, or systems and restore them to a previous state - like refreshing (reloading) a computer screen. Potentially, the Sun could be prevented from becoming a red giant and returned to a previous state in a billion years (or far less) - and repeatedly every billion years - so Earth could remain habitable permanently. Time slows near the speed of light and near intense gravitation. Thus, even if it's never refreshed/reloaded by future Information Technology, our solar system's star will exist far longer than currently predicted.
All this might sound a bit unreal if you're accustomed to thinking in a purely linear fashion where the future doesn't exist. I'll meet you here again in 5 billion years and we can discuss how wrong I was - or, seemingly impossibly, how correct I was.
Professor Miguel Nicolelis (2019) has published a free copy of his contributions to BMI (brain-machine interfaces) emphasizing his twenty years of work starting in 1999 and continuing through 2015.* Until 2003, Nicolelis had no competitors, but shortly thereafter Andersen et al. (2003), Schwartz et al. (2004) and Donoghue et al. (2006) joined the field and tried to eclipse him and his associates [as described in Tehovnik, Waking up in Macaíba, 2017]; they, however, failed to achieve the eclipse, since the information transfer rate of their devices was typically below 1 bit per second, at an average of about 0.2 bits/sec, much like what Nicolelis' devices were transferring (Tehovnik and Chen 2015; Tehovnik et al. 2013). By comparison, the cochlear implant transfers 10 bits/sec (Tehovnik and Chen 2015) and therefore has been commercialized, with over 700,000 registered implant recipients worldwide (NIH Statistics 2019).
BMI technology is still largely experimental. Willett, Shenoy et al. (2021) have developed a BMI for patients that transfers up to 5 bits/sec for spontaneously generated writing, but it is unclear whether this high rate is due to the residual movements (Tehovnik et al. 2013) of the hand contralateral to the BMI implant. To date, the most ambitious BMI utilizes a digital bridge between the neocortex and the spinal cord below a partial transection to evoke a stepping response that still requires support of the body with crutches; but significantly, the BMI portion of the implant in M1 enhances the information transfer rate by a mere 0.5 bits per second, since most of the walking (86%, or 3.0 bits/sec of it) is induced by spinal cord stimulation in the absence of the cortical implant (Lorach et al. 2023). Accordingly, BMI falls short of the cochlear implant, and thus BMI developers are years away from a marketable device. The premature marketing by Nicolelis at the 2014 FIFA World Cup of his BMI technology (Tehovnik 2017b) should be a warning to Elon Musk (of Neuralink) that biology is not engineering, for if it were, a BMI chip would now be in every brain on the planet. See the figure that summarizes the information transfer rates for various devices, including human language.
How useful is the heuristic that if both sides of a debate are unfalsifiable, then they may be a false dichotomy? My answer: This heuristic is very useful because it is probably the case for practical reasons. Examples include, but may not be limited to: (evolutionism or creationism), (free will or determinism), (rationalism or empiricism).
Binary sequence: 1100100101
Symbolized as single bits: P(0)=1/2, P(1)=1/2, H(X) = -0.5*log2(0.5) - 0.5*log2(0.5) = 1
Symbolized as quaternary symbols (bit pairs):
P(11)=1/5, P(00)=1/5, P(10)=1/5, P(01)=2/5, H(X) = -3*(1/5)*log2(1/5) - (2/5)*log2(2/5) = 1.9219
...
Is there a problem with my understanding?
If not, which result is the information entropy?
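For reference, a quick numerical check of the two computations (both entropies come out positive, since the leading minus sign cancels the negative logarithms; they differ because they are entropies of different alphabets, per symbol vs. per pair):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Per-symbol model: P(0) = P(1) = 1/2
h_bits = entropy([0.5, 0.5])             # 1.0 bit per symbol

# Per-pair model for 11 00 10 01 01: P(11)=P(00)=P(10)=1/5, P(01)=2/5
h_pairs = entropy([1/5, 1/5, 1/5, 2/5])  # ~1.9219 bits per pair

print(h_bits, h_pairs)
```

Both are valid entropies of the same sequence under different symbolizations; note the per-pair value is at most twice the per-symbol value.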
The article has been submitted to IEEE-TIT. The preprint manuscript is "Post Shannon Information Theory." You can find it on this website. Please give it a fair review. Thank you.
Dear all,
Why is forward selection search so popular and widely used in FS based on mutual information, such as MRMR, JMI, CMIM, and JMIM (see )? Why are other search approaches, such as beam search, not used? If there is a reason for this, kindly reply to me.
The general consensus about the brain and various neuroimaging studies suggest that brain states exhibit variable entropy levels under different conditions. On the other hand, entropy is increasing in nature from the thermodynamic point of view, and biological systems appear to contradict this law for various reasons (being open systems, they export entropy to their surroundings). This can also be thought of as the transformation of energy from one form to another. This situation makes me think about the possibility of the existence of distinct energy forms in the brain. Briefly, I would like to ask:
Could we find a representation for the different forms of energy rather than the classical power spectral approach? For example, useful energy, useless energy, reserved energy, and so on.
If you find my question ridiculous, please don't answer, I am just looking for some philosophical perspective on the nature of the brain.
Thanks in advance.
The current technological revolution, known as Industry 4.0, is determined by the development of the following technologies of advanced information processing: Big Data database technologies, cloud computing, machine learning, Internet of Things, artificial intelligence, Business Intelligence and other advanced data mining technologies.
In connection with the above, I would like to ask you:
Which information technologies of the current technological revolution Industry 4.0 contribute the most to reducing the asymmetry of information between counterparties of financial transactions?
The above question concerns the asymmetry of information between financial transaction partners, for example between borrowers and the banks granting loans, where before granting a loan the bank assesses the creditworthiness of the potential borrower and the bank's credit risk associated with the specific credit transaction, and, inter alia, between financial institutions and the clients of their financial services.
Please reply
Best wishes
How can one obtain the information currently needed from Big Data database systems for specific scientific research and for economic, business, and other analyses?
Of course, the right data is important for scientific research. However, in the present era of digitization of various categories of information, with ever-expanding large data sets stored in database systems, data warehouses, and Big Data database systems, it is important to develop techniques and tools for filtering large data sets, so as to extract from terabytes of data only the information currently needed for the scientific research conducted in a given field of knowledge, for answering a given research question, and for business needs, e.g., after connecting these databases to Business Intelligence analytical platforms. I described these issues in my scientific publications presented below.
Do you agree with my opinion on this matter?
In view of the above, I am asking you the following question:
How can one obtain the information currently needed from Big Data database systems for specific scientific research and for economic, business, and other analyses?
Please reply
I invite you to the discussion
Thank you very much
Dear Colleagues and Friends from RG
The issues of the use of information contained in Big Data database systems for the purposes of conducting Business Intelligence analyses are described in the publications:
I invite you to discussion and cooperation.
Best wishes
What are the important topics in the field of data analysis in Big Data database systems?
What kind of scientific research dominates the field of data analysis in Big Data database systems?
Please reply. I invite you to the discussion
Dear Colleagues and Friends from RG
The issues of the use of information contained in Big Data database systems for the purposes of conducting Business Intelligence analyses are described in the publications:
I invite you to discussion and cooperation.
Best wishes
I would like to have a deeper insight into Markov chains: their origin and their applications in information theory, machine learning, and automata theory.
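As a minimal illustration (a hypothetical two-state chain, not from any particular source), two quantities that tie Markov chains to information theory, the stationary distribution and the entropy rate, can be computed in a few lines:

```python
import math

# Hypothetical two-state Markov chain: rows are P(next state | current state).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Stationary distribution by power iteration: pi = pi @ P until convergence.
pi = [0.5, 0.5]
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

def H(row):
    """Shannon entropy (bits) of one transition distribution."""
    return -sum(p * math.log2(p) for p in row if p > 0)

# Entropy rate of the chain: sum_i pi_i * H(P_i), in bits per step.
rate = sum(pi[i] * H(P[i]) for i in range(2))
print(pi, rate)
```

For this chain the stationary distribution is [5/6, 1/6] and the entropy rate is about 0.557 bits per step.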
The future of marketing development in social media
Marketing in social media is still a rapidly developing area among the marketing techniques used on the Internet. On the one hand, some of the largest online technology companies have built their business concept on social media marketing or are entering this field.
On the other hand, there are startups of technology companies acquiring data from the Internet and processing information in Big Data database systems for the purpose of providing information services to other entities as support for strategic and operational management, including planning advertising campaigns.
Therefore, the question arises:
What tools for social media marketing will be developed in the future?
Please answer and comment.
I invite you to the discussion
Hello Dear colleagues:
it seems to me this could be an interesting thread for discussion:
I would like to center the discussion around the concept of entropy, addressing the explanation-description-exemplification side of the concept.
i.e. What do you think is a good, helpful explanation of the concept of entropy (at a technical level, of course)?
A manner (or manners) of explaining it that settles the concept as clearly as possible. Maybe first in a more general scenario, and then (if required) in a more specific one ....
Kind regards!
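To seed the discussion, here is the explanation I find most helpful at a technical level: entropy measures average surprise, the expected number of yes/no questions needed to identify the outcome, and it is maximal when outcomes are equally likely. A minimal sketch with a biased coin:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a coin with bias p."""
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(p, round(entropy(p), 4))
# Peaks at 1 bit for p = 0.5 (maximum uncertainty), 0 bits when the outcome is certain.
```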
An interesting thing is the algorithm according to which specific search results appear in the Google search engine for a given query.
Formulas for this type of algorithm can be constructed in various ways, so that different search results can be obtained for the same query.
Added to this is the issue of promoting search results for companies that have paid fees for a high position in the search results. Unfortunately, this is not an objective process of finding information available on the Internet but a formula based on commercial marketing. In this situation, a question arises about competitiveness, which is limited in this way.
In view of the above, I am asking you: Does Google's search engine algorithm restrict competition in the availability of information on the Internet?
Please, answer, comments. I invite you to the discussion.
What kind of scientific research dominates the field of functionality and applications of smartphones?
Please, provide your suggestions for a question, problem or research thesis in the issues: Functionality and applications of smartphones.
Please reply.
I invite you to the discussion
Thank you very much
Best wishes
I have been pondering the relationship between these two important topics of our data-driven world for a while. I have bits and pieces, but I have been hoping to find a neat and systematic set of connections that would somehow (surprisingly) bind them and fill the empty spots I have carried in my mind for the last few years.
In the past, while I was dealing with a multi-class classification problem (not so long ago), I came to realize that multiple binary classifications are a viable way to address this problem through error correcting output coding (ECOC) - a well-known coding technique in the literature whose construction requirements are a bit different from those of classical block or convolutional codes. I would like to remind you that grouping multiple classes into two superclasses (a.k.a. class binarization) can be addressed in various ways. You can group them totally randomly, which does not depend on the problem at hand, or based on a set of problem-dependent constraints that can be derived from the training data. The way I like most sits at the intersection of information theory and machine learning. To be more precise, class groupings can be done based on the resulting mutual information so as to maximize class separation. In fact, the main objective of this method is to maximize class separation so that your binary classifiers are exposed to less noisy data and hopefully achieve better performance. On the other hand, the ECOC framework calls for coding theory and efficient encoder/decoder architectures that can be used to handle the classification problem efficiently. The nature of the problem is not something we usually come across in communication theory and classical coding applications, though. Binarization of classes implies different noise and defect structures being inserted into the so-called "channel model", which is not common in classical communication scenarios. In other words, the solution itself changes the nature of the problem at hand. Also, the way we choose the classifiers (margin-based, etc.) will affect the characterization of the noise that impacts the detection (classification) performance. I do not know if it is possible, but what is the capacity of such a channel? What is the best code structure that addresses these requirements?
Even more interestingly, can the recurrent issues of classification (such as overfitting) be solved with coding? Maybe we can maintain a trade-off between training and generalization errors with an appropriate coding strategy?
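The ECOC framework mentioned above can be made concrete with a toy sketch (a hypothetical code matrix, not drawn from any cited method): each class gets a binary codeword, each column would train one binary classifier, and decoding picks the class with the nearest codeword in Hamming distance, so a code with minimum distance d tolerates floor((d-1)/2) classifier errors:

```python
from itertools import combinations

# Toy ECOC: 4 classes, 6 binary classifiers (columns). Hypothetical code matrix
# with pairwise Hamming distance 4, so any single classifier error is corrected.
CODE = {
    0: (0, 0, 0, 0, 0, 0),
    1: (0, 1, 1, 0, 1, 1),
    2: (1, 0, 1, 1, 0, 1),
    3: (1, 1, 0, 1, 1, 0),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Return the class whose codeword is nearest in Hamming distance."""
    return min(CODE, key=lambda c: hamming(CODE[c], bits))

# One binary classifier flips the last bit of class 2's codeword;
# minimum-distance decoding still recovers class 2.
noisy = (1, 0, 1, 1, 0, 0)
print(decode(noisy))
```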
Similar trends can be observed in the estimation theory realm. Parameter estimation, or likewise "regression" (including model fitting, linear programming, density estimation, etc.), can be thought of as the problem of finding the "best parameters" or "best fit", which are the ultimate targets to be reached. The errors due to the methods used, the collected data, etc., are problem-specific and usually dependent. For instance, density estimation is a hard problem in itself, and kernel density estimation is one approach for estimating probability density functions. Various kernels and data transformation techniques (such as Box-Cox) are used to normalize data and propose new estimation methods to meet today's performance requirements. To measure how well we do, or how different two distributions are, we again resort to information theory tools (such as the Kullback-Leibler (KL) divergence and the Jensen-Shannon divergence) and use the concepts and techniques (including entropy, etc.) therein from a machine learning perspective. Such an observation separates the typical problems posed in the communication theory arena from those in the machine learning arena, requiring a distinct and careful treatment.
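For concreteness, the two divergences mentioned above take only a few lines each for discrete distributions (toy distributions below; KL is asymmetric and unbounded, JS is symmetric and bounded by 1 bit):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; assumes q > 0 wherever p > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence: symmetric, bounded by 1 bit."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl(p, q), js(p, q))  # KL is asymmetric: kl(p, q) != kl(q, p) in general
```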
Last but not least, I think there is a deep-rooted relationship between deep learning methods (and many machine learning methods per se) and the core concepts of information and coding theory. Since the hype around deep learning appeared, I have observed many studies applying deep learning methods (autoencoders, etc.) to decoding specific codes (polar, turbo, LDPC, etc.), claiming efficiency, robustness, etc., thanks to the parallel implementation and model-deficit nature of neural networks. However, I am wondering about the other way around. I wonder if, say, back-propagation can be replaced with more reasonable and efficient techniques already well known in the information theory world. Perhaps rate-distortion theory has something to say about the optimal number of layers we ought to use in deep neural networks. Belief propagation, turbo equalization, list decoding, and many other known algorithms and models may apply quite well to known machine learning problems and will perhaps deliver better and more efficient results in some cases. I know a few folks have already begun investigating neural-network-based encoder and decoder designs for feedback channels. There are many open problems, in my opinion, concerning the explicit design of encoders and the use of the network without feedback. A few recent works have considered various application areas, such as molecular communications and coded computation, to which a deep learning background can be applied to secure performance that otherwise cannot be achieved using classical methods.
In the end, I just wanted to toss out a few short notes here to instigate further discussion and thought. This interface will attract more attention as we see the connections clearly and bring out new applications down the road...
I would like to know if there is an expression that shows the (maximum) channel capacity of a downlink multiuser MIMO channel when imperfect CSI is assumed.
Any references in this direction would be useful for me. Thanks!
Hi, How can we calculate the entropy of chaotic signals? Is there a simple method or formula for doing this?
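One simple (if crude) approach is a histogram estimate: discretize the signal's amplitude into bins and take the Shannon entropy of the bin occupancies. A minimal sketch on the logistic map at r = 4 (a standard chaotic signal); note that permutation entropy, sample entropy, or the Kolmogorov-Sinai entropy (via Lyapunov exponents and Pesin's identity) are more principled choices:

```python
import math

# Generate a chaotic signal: logistic map x -> 4x(1 - x).
x, signal = 0.3, []
for _ in range(10000):
    x = 4.0 * x * (1.0 - x)
    signal.append(x)

# Histogram-based (binned) Shannon entropy in bits.
bins = 32
counts = [0] * bins
for v in signal:
    counts[min(int(v * bins), bins - 1)] += 1
n = len(signal)
H = -sum((c / n) * math.log2(c / n) for c in counts if c)
print(round(H, 3))  # upper-bounded by log2(32) = 5 bits
```

Keep in mind the binned value depends on the number of bins; it estimates the entropy of the discretized amplitude distribution, not an intrinsic rate of the dynamics.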
Greetings,
I am working on my grad project implementing an LDPC DVB-S2 decoder, and the best resources I found explaining LDPC decoding unfortunately follow the 5G standard. So, if I follow along with these resources discussing the 5G implementation, what should I look out for so as not to get confused between the two implementations?
Thanks in advance!
Is there an equation connecting the wave function and the entropy of the quantum system?
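One standard connection, offered here as context rather than a full answer: a wave function alone has zero von Neumann entropy, but tracing out part of a system yields a mixed state with nonzero entanglement entropy:

```latex
S(\rho) = -\operatorname{Tr}(\rho \ln \rho), \qquad
\rho_{\text{pure}} = |\psi\rangle\langle\psi| \;\Rightarrow\; S = 0, \qquad
\rho_A = \operatorname{Tr}_B\, |\psi\rangle\langle\psi| \;\Rightarrow\; S(\rho_A) \ge 0.
```

So for a pure state the entropy lives in the entanglement between subsystems, not in the wave function itself.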
I have the following data set (attached) and I would like to calculate the mutual information and joint entropy between multiple columns (like for A,B,D,E or C,D,E,F,G etc.). I have gone through the R package entropy and other related packages, but as I am very new to information theory, I am having some problems computing it.
I am specifically looking for R code or online calculator options to calculate this.
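While the asker wants R (the `infotheo` package's `entropy` and `mutinformation` functions cover this, if I recall correctly), the plug-in computation itself is short; a sketch in Python with hypothetical stand-in columns:

```python
import math
from collections import Counter

def H(*columns):
    """Plug-in joint Shannon entropy (bits) of one or more discrete columns."""
    n = len(columns[0])
    counts = Counter(zip(*columns))
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(x, y):
    """I(X; Y) = H(X) + H(Y) - H(X, Y)."""
    return H(x) + H(y) - H(x, y)

# Hypothetical discrete columns standing in for the attached data set.
A = [0, 0, 1, 1, 0, 1, 0, 1]
B = [0, 0, 1, 1, 1, 1, 0, 0]
print(H(A, B), mutual_information(A, B))
```

`H` accepts any number of columns, so joint entropies over four or five columns (A,B,D,E etc.) work the same way; just beware that plug-in estimates from few samples over many columns are biased.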
Can we affirm that whenever one has a prediction algorithm, one can also get a correspondingly good compression algorithm for data one already have, and vice versa?
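Broadly yes: this is the standard prediction-compression duality. An arithmetic coder driven by a sequential predictor produces output whose length is essentially the predictor's cumulative log-loss, and any compressor implicitly defines a predictor. A minimal sketch of the bookkeeping (ideal code lengths only, with a toy Laplace-smoothed predictor; no actual encoder):

```python
import math

# A predictor assigns each next symbol a probability given the past.
# Ideal (arithmetic-coding) length for a sequence = sum of -log2 p(symbol | past).
def ideal_code_length(sequence, predict):
    total = 0.0
    for i, sym in enumerate(sequence):
        p = predict(sequence[:i], sym)
        total += -math.log2(p)
    return total

# Toy Laplace-smoothed binary predictor: p(1 | past) = (ones + 1) / (len + 2).
def laplace(past, sym):
    p1 = (sum(past) + 1) / (len(past) + 2)
    return p1 if sym == 1 else 1 - p1

seq = [1, 1, 1, 0, 1, 1, 1, 1]
print(ideal_code_length(seq, laplace))  # under 8 bits: good prediction => compression
```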
Please consider a set of pairs of probability measures (P, Q) with given means (m_P, m_Q) and variances (v_P, v_Q).
For the relative entropy (KL-divergence) and the chi-square divergence, a pair of probability measures defined on the common two-element set (u_1, u_2) attains the lower bound.
Regarding general f-divergence, what is the condition of f such that a pair of probability measures defined on the common two-element set attains the lower bound ?
Intuitively, I think that the divergence between localized probability measures seems to be smaller.
Thank you for taking your time.
Dear researchers,
Let's share our opinion about recent attractive topics on communication systems and the potential future directions.
Thanks.
By definition, the capacity of a communication channel is given by the maximum of the mutual information between the input (X) and the output (Y) of the channel, where the maximization is with respect to the input distribution, that is, C = sup_{p_X(x)} I(X; Y).
From my understanding (please correct me if I'm wrong), when we have a noisy channel, such that some of the input symbols may be confused in the output of the channel, we can draw a confusability graph of such a channel where nodes are symbols and two nodes are connected if and only if they could be confused in the output.
If we had to communicate using messages made out of single symbols only, then the largest number of messages that could be sent over such a channel would be α(G), the size of the largest independent set of vertices in the graph (the Shannon capacity of the graph is defined over many channel uses, Θ(G) = lim_n α(G^n)^{1/n}, and satisfies Θ(G) ≥ α(G)).
Does this mean that for such a channel, the maximum mutual information of the input and output of the channel (channel capacity) is α(G), and it is achieved by sending the symbols of the largest independent set?
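A concrete illustration of why single-use independent sets can undersell the capacity: for the pentagon graph C5, α(C5) = 2, yet over two channel uses α(C5²) = 5, so the zero-error capacity is log2 √5 ≈ 1.16 bits per use (Lovász 1979), strictly more than log2 α(C5) = 1 bit. A brute-force check of α(C5), assuming the confusability structure described above:

```python
from itertools import combinations

# Confusability graph C5: symbol i can be confused with i +/- 1 (mod 5).
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

def is_independent(vertices):
    """True if no two of the given vertices are confusable."""
    return all(frozenset(pair) not in edges for pair in combinations(vertices, 2))

alpha = max(len(s) for r in range(n + 1)
            for s in combinations(range(n), r) if is_independent(s))
print(alpha)  # 2: only 2 of the 5 symbols are pairwise non-confusable
```

Also note the zero-error capacity is a different quantity from the ordinary (mutual-information) capacity, which tolerates vanishing rather than zero error probability.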
In what ways, in your opinion, will the applications of the technology for analyzing large information collections in Big Data database systems be developed in the future?
In which areas of industry, science, research, information services, etc., in your opinion, will the applications of technology for the analysis of large collections of information in Big Data database systems be developed in the future?
Please reply
I invite you to the discussion
I described these issues in my publications below:
I invite you to discussion and cooperation.
Best wishes
The development of IT and information technologies increasingly affects economic processes taking place in various branches and sectors of contemporary developed and developing economies.
Information technology and advanced information processing are increasingly affecting people's lives and business ventures.
The current technological revolution, known as Industry 4.0, is determined by the development of the following technologies of advanced information processing: Big Data database technologies, cloud computing, machine learning, Internet of Things, artificial intelligence, Business Intelligence and other advanced data mining technologies.
In connection with the above, I would like to ask you:
How to measure the value added in the national economy resulting from the development of information and IT technologies?
Please reply
Best wishes
In information theory, the entropy of a variable is the amount of information contained in the variable. One way to understand the concept of the amount of information is to tie it to how difficult or easy it is to guess the value. The easier it is to guess the value of the variable, the less “surprise” in the variable and so the less information the variable has.
Rényi entropy of order q is defined for q ≥ 0, q ≠ 1, by the equation
S_q = (1/(1 - q)) log (Σ_i p_i^q),
with the Shannon entropy recovered in the limit q → 1. As the order q increases, the entropy is non-increasing: higher orders weight the most probable outcomes more heavily.
Why are we concerned with higher orders? What is the physical significance of the order when calculating the entropy?
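A quick numerical sketch of how the order reweights the distribution (a hypothetical four-outcome distribution):

```python
import math

def renyi(p, q):
    """Rényi entropy of order q in bits; q -> 1 recovers Shannon entropy."""
    if q == 1:
        return -sum(x * math.log2(x) for x in p if x > 0)
    return math.log2(sum(x ** q for x in p)) / (1 - q)

p = [0.5, 0.25, 0.125, 0.125]
for q in (0, 0.5, 1, 2, 5):
    print(q, round(renyi(p, q), 4))
# Non-increasing in q: q = 0 just counts the support (log2 4 = 2 bits), while
# large q is dominated by the most probable outcome (-log2 0.5 = 1 bit as q -> inf).
```

This is one answer to the physical-significance question: sweeping q interpolates between counting all accessible states (q = 0) and measuring only the most likely one (q -> infinity, the min-entropy).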
Normalized Mutual Information (NMI) and B3 are extrinsic clustering evaluation metrics used when each instance (sample) has only one label.
What are the equivalent metrics when each instance (sample) has multiple labels?
For example, in the first image we see [apple, orange, pear]; in the second image, [orange, lime, lemon]; in the third image, [apple]; and in the fourth image, [orange]. Then putting the first and fourth images in one cluster is good, while putting the third and fourth images in one cluster is bad.
Application: many popular datasets for object detection or image segmentation have multiple labels per image. If we use such data for classification (not detection or segmentation), we have multiple labels for each image.
Note: my task is unsupervised clustering, not supervised classification. I know that for supervised classification we can use a top-5 or top-10 score, but I do not know what the equivalent is for unsupervised clustering.
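For reference, the single-label NMI the question starts from can be computed directly from contingency counts. Below is a minimal plain-Python sketch of the standard formula (with the sqrt normalization); extending it to multi-label instances, e.g. by scoring pairs of instances with their label overlap, is exactly the open design question asked above:

```python
import math
from collections import Counter

def nmi(labels_true, labels_pred):
    """Normalized mutual information: NMI = I(T;P) / sqrt(H(T) * H(P)).
    Single-label case, computed from the joint label counts."""
    n = len(labels_true)
    ct = Counter(labels_true)                  # cluster sizes in the truth
    cp = Counter(labels_pred)                  # cluster sizes in the prediction
    joint = Counter(zip(labels_true, labels_pred))
    h_t = -sum(c / n * math.log(c / n) for c in ct.values())
    h_p = -sum(c / n * math.log(c / n) for c in cp.values())
    mi = sum(c / n * math.log(c * n / (ct[t] * cp[p]))
             for (t, p), c in joint.items())
    denom = math.sqrt(h_t * h_p)
    return mi / denom if denom > 0 else 0.0

# identical clusterings (up to renaming) score 1, independent ones score 0
assert abs(nmi([0, 0, 1, 1], [1, 1, 0, 0]) - 1.0) < 1e-9
assert abs(nmi([0, 0, 1, 1], [0, 1, 0, 1])) < 1e-9
```

Note that NMI is invariant to renaming the clusters, which is why it suits unsupervised evaluation; any multi-label generalization should keep that property.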
In the question, Why is entropy a concept difficult to understand? (November 2019) Franklin Uriel Parás Hernández commences his reply as follows: "The first thing we have to understand is that there are many Entropies in nature."
His entire answer is worth reading.
It leads to this related question. I suspect the answer is yes, and that the common principle is degrees of freedom and dimensional capacity. Your views?
The information inside the volume of a black hole is proportional to its surface area.
However, what if information does not cross the horizon, but rather is constrained to stay on the horizon's surface, progressively increasing the black hole's radius? What if the black hole is empty, and its force comes just from a spacetime distortion inside it? Reversing Einstein, what if the black hole's attraction is caused not by its mass but only by the spacetime deformation inside it? This would explain the paradoxes of the holographic principle...
Thanks
Clues: material isn’t doomed to be sucked into the hole. Only a small amount of it falls in, while some of it is ejected back out into space.
For compressive sensing (CS), we can use only M measurements to reconstruct an original N-dimensional signal, where M << N and the measurement matrix satisfies the restricted isometry property (RIP). Can we combine the concept of entropy from information theory with CS? Intuitively, if the data are successfully reconstructed, no information is lost by the CS measurement. Can we claim that the entropy of the compressed measurements equals the entropy of the original signal, since entropy quantifies the information contained?
To make the problem easier to understand, here is an example:
Suppose that in a data-gathering wireless sensor network we deploy N machines in an area. To quantify the amount of information collected by each machine, we assume a Gaussian source field, where the collection of data gathered by all machines follows a multivariate Gaussian distribution N(μ, Σ). The joint entropy of all the data is H(X). Now we reconstruct these data from M measurements via CS. The joint entropy of these M measurements is H(Y). Can we say that H(X) equals H(Y)?
Thanks for your response.
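One way to see why H(X) and H(Y) need not coincide: Y = ΦX is an M-dimensional Gaussian with covariance ΦΣΦᵀ, and its differential entropy h = ½·log((2πe)^M · det(ΦΣΦᵀ)) lives in a different dimension than h(X). A small numpy sketch with toy, assumed parameters (N = 8, M = 4, Σ = I; these numbers are illustrative, not from the question):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of N(mu, cov):
    h = 0.5 * log((2*pi*e)^k * det(cov)) for a k-dimensional Gaussian."""
    k = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

rng = np.random.default_rng(0)
N, M = 8, 4                                     # toy dimensions (assumed)
Sigma = np.eye(N)                               # assumed source covariance
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix
h_x = gaussian_entropy(Sigma)                   # h(X) for the N-dim source
h_y = gaussian_entropy(Phi @ Sigma @ Phi.T)     # h(Y) for Y = Phi @ X
```

Since Y is a deterministic function of X, Y can carry no information beyond X; what RIP guarantees is recoverability for the sparse signal class, not equality of the two entropies.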
When studying information theory, we do not usually consider any directionality of the channel.
Nothing changes if the receiver and the transmitter are interchanged (i.e. Lorentz reciprocity is obeyed).
However, suppose the channel is a non-reciprocal device, such as an isolator or a Faraday rotator, rather than a simple transmission cable. What are the consequences for information theory?
What would be the consequences for Shannon entropy and for theorems such as Shannon's coding theorem and the Shannon–Hartley theorem? I have been googling terms like "non-reciprocal networks", but I have not been able to find anything. Any help will be appreciated.
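One concrete observation: the Shannon–Hartley formula C = B·log2(1 + S/N) is evaluated per direction, so a non-reciprocal link simply yields two different capacities for the two directions. A minimal sketch with assumed numbers (my own illustration, not from any reference on non-reciprocal networks):

```python
import math

def shannon_hartley(bandwidth_hz, snr_linear):
    """Channel capacity in bits/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 1e6  # 1 MHz bandwidth (assumed)
# An isolator or Faraday rotator can make the attenuation, and hence the
# SNR, direction-dependent; each direction then has its own capacity.
c_forward = shannon_hartley(B, 1000.0)  # ~30 dB SNR, forward direction
c_reverse = shannon_hartley(B, 10.0)    # ~10 dB SNR, reverse direction
```

In other words, the theorems themselves are unaffected; non-reciprocity just means the channel must be modelled separately in each direction, as is already done for asymmetric links.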
We very frequently use cross-entropy loss in neural networks. Cross-entropy originally came from information theory, from the notions of entropy and KL divergence.
My question is this: if I want to design a new objective function, does it always need to be consistent with information theory?
For example, in my objective function, I want to add a probability measure of something, say A, to cross-entropy loss. A ranges from 0 to 1. So, the objective function will look like this:
= A + (cross-entropy between actual and prediction)
= A + (-(actual)*log(prediction))
Say the above objective function works well for neural networks but violates information theory in the sense that we are adding a probability value, A, to a loss value, the cross-entropy -(actual)*log(prediction).
So my question is: even if it is not a valid quantity from the viewpoint of information theory, is it acceptable as an objective function for neural networks?
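For what it's worth, here is a minimal sketch of the proposed objective in plain Python (the function names and clipping constant are my own illustrative choices). Gradient-based training only requires the objective to be differentiable and bounded below; it does not have to be an information-theoretic quantity:

```python
import math

def cross_entropy(actual, predicted, eps=1e-12):
    """Binary cross-entropy for a single example."""
    predicted = min(max(predicted, eps), 1 - eps)  # clip for numerical safety
    return -(actual * math.log(predicted)
             + (1 - actual) * math.log(1 - predicted))

def custom_loss(a_term, actual, predicted):
    """Proposed objective: a probability-valued term A added to the
    cross-entropy. The optimizer does not care that the sum mixes a
    probability with an information quantity; it only needs the value
    to be differentiable and bounded below."""
    return a_term + cross_entropy(actual, predicted)
```

What matters in practice is the gradient with respect to the network parameters: if A does not depend on them, it only shifts the loss value; if it does, it acts as a regularizer and should be weighted accordingly.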
When Carrier Aggregation and cross-carrier scheduling are applied in LTE-Advanced system, UE may support multiple Component Carriers (CC) and control information on one CC can allocate radio resource on another CC. Search space of all CCs and control information is only transmitted from a chosen CC. In this case, if search spaces of different CCs are not properly defined, high blocking probability of control information will be very harmful to system performance.
My question is: what is the cause of this blocking? Is it a shortage of control channel elements (CCEs) with which to serve the scheduled UEs, or something else?
My guess is that it is not, but I have no proof of this. Can any expert help?
For now, I assume that either self-overlap or high mutual overlap of the UEs' search spaces is the likely cause of blocking.
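To make the overlap hypothesis testable, here is a toy Monte Carlo sketch (all parameters are illustrative placeholders, not 3GPP values) in which each UE's search space is a block of candidates in a shared CCE pool and a UE is blocked when every candidate in its search space is already occupied:

```python
import random

def estimate_blocking(n_cce=32, agg_level=4, n_candidates=4, n_ues=8,
                      trials=2000, seed=1):
    """Toy Monte Carlo: each UE gets a random contiguous search space of
    n_candidates aggregation-level-L candidates; UEs are scheduled in
    order, and a UE is blocked if all of its candidates are taken."""
    rng = random.Random(seed)
    n_slots = n_cce // agg_level           # candidate positions in the pool
    blocked = 0
    for _ in range(trials):
        taken = set()
        for _ in range(n_ues):
            start = rng.randrange(n_slots)  # random search-space start
            space = [(start + k) % n_slots for k in range(n_candidates)]
            free = [s for s in space if s not in taken]
            if free:
                taken.add(free[0])          # greedy: first free candidate
            else:
                blocked += 1                # whole search space occupied
    return blocked / (trials * n_ues)
```

With such a sketch one can vary the search-space size and overlap and watch the blocking probability respond, which is a cheap way to check whether overlap alone reproduces the effect before running a full system simulation.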
Hi
I have published this paper recently
In that paper, we used an abstracted simulation to get an initial result. Now I need to do a detailed simulation in a network simulator.
So I need a network simulator that implements or supports MQTT-SN, or an implementation of MQTT-SN that would work in a network simulator.
Any hints please?
Goal of the theory:
Informational efficiency is a natural consequence of competition, relatively free entry, and low costs of information. If there is a signal, not yet incorporated in market prices, that future values will be high, competitive traders will buy on that signal. In doing so, they bid the price up until it fully reflects the information in the signal.
Free access to information should prevail on the Internet.
This is the main factor in the dynamic development of many websites, new internet services, the growth of users of social media portals and many other types of websites.
In my opinion, all information that can be publicly disseminated should be available on the Internet without restrictions, universally and free of charge.
Please reply and comment.
I invite you to the discussion.
Best wishes
Hello, for my research paper I need to select researchers who have written papers/works/articles in journals about how they "see" a single person in Informatology or Information Science.
It is connected with my MA thesis, so answers from this question could help me with my choices. Appreciate every answer!
Hi Francis
Greetings from India
Do you use information theory in your work?
What is the framework you are using for integrating the two?
Thanks in advance
Safeer
Do we lose information when we project a manifold?
For example, do we lose information about a manifold such as the Earth (a globe) when we project it onto a chart in a book (using the stereographic, Mercator, or any other method)?
Similarly, we should be losing information when we create a Bloch sphere for a two-state system in quantum mechanics, which is also a space projected from a higher dimension, i.e. 4 dimensions.
Also, is there a way to quantify this information loss, if there is any?
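As a concrete version of the question, here is a small sketch (my own illustration) of the stereographic chart: away from the projection pole it is a bijection, so set-theoretically no information is lost; what a single flat chart does distort is the metric, and the local area scaling factor is one way to quantify that distortion:

```python
import math

def stereographic(x, y, z):
    """Project a unit-sphere point (excluding the north pole (0, 0, 1))
    from the north pole onto the plane z = 0."""
    return x / (1 - z), y / (1 - z)

def inverse_stereographic(u, v):
    """Map a plane point back to the unit sphere; inverse of the chart."""
    s = u * u + v * v
    return 2 * u / (s + 1), 2 * v / (s + 1), (s - 1) / (s + 1)

def area_scale(u, v):
    """Local area distortion of the chart, (2 / (1 + u^2 + v^2))^2:
    this metric distortion, not a loss of points, is what the map 'loses'."""
    return (2.0 / (1.0 + u * u + v * v)) ** 2
```

A round trip through `stereographic` and `inverse_stereographic` returns the original point, so any function of the sphere point is recoverable from the chart; the genuinely lost item is the single excluded pole, which is why an atlas needs at least two charts.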
Apparently, in some countries, banks holding large collections of information on the achievements of human civilization, recorded on digital data carriers, are being founded, usually underground, in specially created bunkers capable of surviving climatic disasters.
These are properly secured Big Data database systems, data warehouses, and underground information banks, recorded digitally.
The underground bunkers themselves can survive various climatic and other calamities for perhaps hundreds or thousands of years.
But how long will the large collections of information survive in these Big Data systems and data warehouses stored on digital media?
Perhaps a better solution would be to record this data in analog form on specially created discs?
Already in the 1970s, some data concerning the achievements of human civilization were placed on the Pioneer 10 probe sent into space, which recently left the solar system and will spend the next roughly 10,000 years carrying information about human civilization toward the Alpha Centauri constellation.
At that time, the data sent into the Universe regarding the achievements of human civilization were recorded on gold discs.
Is there currently a better form of data storage, given that this data should last for thousands of years?
Please reply
Best wishes
Given that:
1) Alice and Bob have access to a common source of randomness,
2) Bob's random values are displaced by some (nonlinear) function, i.e. B_rand = F(A_rand).
Are there protocols which allow the two to securely agree on the function (or its inverse) without revealing any information about it?
Black holes cause event horizons, depending on the mass compressed into a narrow space (a point?). By this analogy, could the quantity (mass?) of information in a narrow space lead to an "insight horizon", which is why we cannot see into it from the outside, and why no 100-percent simulation of a real system filled with a lot of information can succeed?
The more factors we use to model a system, the closer we get to reality (e.g. ecosystems), but this process is asymptotic: reality is approximated asymptotically with every additional correct factor. Interestingly, it also seems to us that an object red-shifts into infinity as it approaches a black hole (also asymptotically).
Can we learn anything from this analogy? And if so, what?