Information Analysis - Science topic

Explore the latest questions and answers in Information Analysis, and find Information Analysis experts.
Questions related to Information Analysis
  • asked a question related to Information Analysis
Question
5 answers
Several leading technology companies are currently working on developing smart glasses that will be able to take over many of the functions currently contained in smartphones.
It will no longer be just Augmented Reality, Street View, interactive connection to Smart City systems, and Virtual Reality used in online computer games, but many other remote communication and information services as well.
In view of the above, I address the following questions to the esteemed community of researchers and scientists:
Will smart glasses replace smartphones in the next few years?
Or will thin, flexible interactive panels stuck on the hand prove more convenient to use?
What new technological gadget could replace smartphones in the future?
What do you think about this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Greetings,
Dariusz Prokopowicz
Relevant answer
Answer
Smart glasses have been positioned as the window into the future. They've been a promising technology for many years now. With the recent developments, it's likely that by the end of this decade they very well may become a tech accessory as common as the smartwatch is today.
  • asked a question related to Information Analysis
Question
5 answers
Do new information and communication technologies (ICT) facilitate the development of scientific collaboration, and of science itself?
Do new ICT facilitate scientific research and the conduct of research activities?
Do new ICT, internet technologies and/or Industry 4.0 facilitate research?
If so, to what extent, in which areas of your research has this facilitation occurred?
What examples do you know of from your own research and scientific activity that support the claim that new ICT information technologies facilitate research?
What is your opinion on this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
I agree with the insights given by @Shafagat.
  • asked a question related to Information Analysis
Question
6 answers
Hi everyone,
We'd like to open up a huge topic with a systematic literature review. However, the topic is so broad that the initial search on Web of Science returned over 25,000 papers meeting our search criteria. (This can surely be reduced, but only slightly.)
I'd like to explore the possibilities of computer-assisted review - there must be software capable of performing some kind of analysis. Does anyone have experience in this field?
Thank you for your thoughts.
Best regards,
Martin
Relevant answer
Answer
Here are some references which should be helpful!
• Michelson, M., Chow, T., Martin, N. A., Ross, M., Ying, A. T. Q., & Minton, S. (2020). Artificial intelligence for rapid meta-analysis: Case study on ocular toxicity of hydroxychloroquine. Journal of Medical Internet Research, 22(8), e20007.
• Lämsä, J., Espinoza, C., Tuhkala, A., & Hämäläinen, R. (2021). Staying at the front line of literature: How can topic modelling help researchers follow recent studies? Frontline Learning Research, 9(3), 1-12. https://doi.org/10.14786/flr.v9i3.645
• Ferdinands, G., Schram, R., de Bruin, J., Bagheri, A., Oberski, D. L., Tummers, L., & van de Schoot, R. (2020, September 16). Active learning for screening prioritization in systematic reviews - A simulation study. https://doi.org/10.31219/osf.io/w6qbg
• van de Schoot, R., de Bruin, J., Schram, R., et al. (2021). An open source machine learning framework for efficient and transparent systematic reviews. Nature Machine Intelligence, 3, 125-133. https://doi.org/10.1038/s42256-020-00287-7
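The ASReview framework from the last reference uses active learning to prioritise which abstracts a human screens next. As a toy illustration of the underlying idea only (not the actual ASReview pipeline), one can rank an unlabeled pool by bag-of-words similarity to the abstracts already labeled relevant; all document strings below are invented for illustration:

```python
from collections import Counter
import math

def bow(text):
    # simple bag-of-words vector as a Counter of lowercase tokens
    return Counter(text.lower().split())

def cosine(a, b):
    # cosine similarity between two sparse Counter vectors
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_for_screening(labeled_relevant, unlabeled):
    # centroid of the already-labeled relevant abstracts
    centroid = Counter()
    for doc in labeled_relevant:
        centroid.update(bow(doc))
    # screen the most similar unlabeled documents first
    return sorted(unlabeled, key=lambda d: cosine(centroid, bow(d)), reverse=True)

seeds = ["machine learning screening systematic review",
         "active learning for abstract screening"]
pool = ["deep sea coral ecology survey",
        "active learning reduces screening workload in reviews",
        "vaccine trial protocol"]
ranked = rank_for_screening(seeds, pool)
print(ranked[0])
```

In the real tools, the ranking model is retrained after each batch of human decisions, which is what makes screening 25,000 records tractable.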
  • asked a question related to Information Analysis
Question
8 answers
Do you have any experience with, or opinions about, the accuracy of scientific information? The paper describes the accuracy of Wikipedia. I am experiencing resistance from wiki-bots/automatic responses that prevent me from correcting wrong information. Thank you.
Relevant answer
Answer
I share the concerns of Ehtisham Lodhi that Wikipedia is insufficient and unreliable. If I needed reliable information (such as a reference to quote in my own paper), I would not hesitate to search the peer-reviewed literature instead, or even (if the style, or the lecturer, is old-fashioned) try a textbook.
  • asked a question related to Information Analysis
Question
5 answers
What strategies do you personally follow to manage information overload?
Relevant answer
Answer
Addressing information overload in scholarly literature
Information overload is a common problem, and it is an old problem. It is not a problem of the internet age, and it is not specific to scholarly literature, but the growth of preprints in the last five years presents us with a proximal example of the challenge.
We want to tackle this information overload problem and have some ideas on how to do this – presented at the end of this post. Are you willing to help? This post tells some of the back story of how preprints solve part of the problem – speedy access to academic information – yet add to the growing information that we need to filter to find results that we can build on. It is written to inspire the problem solvers in our community to step forward and help us to realise some practical solutions....
  • asked a question related to Information Analysis
Question
4 answers
Hi, does anyone know theories related to (improvement of) product information and/or product (detail) page for online retailing? It will be appreciated a lot - thanks!
Relevant answer
Answer
Hi
The selling of a product largely depends on the quality of the product since the consumer does not buy the product rather he buys the product for its quality attributes. Therefore, you can follow the two following articles for information on product quality:
1. Caswell, J. A., Noelke, C. M., & Mojduszka, E. M. (2002). Unifying two frameworks for analyzing quality and quality assurance for food products. In Global food trade and consumer demand for quality (pp. 43-61). Springer, Boston, MA.
2. Caswell, J. A. (2006). Quality assurance, information tracking, and consumer labeling. Marine pollution bulletin, 53(10-12), 650-656.
You can also follow one of my articles, accepted for publication, on online buying behavior for clothing products in online retailing. It will be published very soon; you can then reach me at hossainafjal@gmail.com
  • asked a question related to Information Analysis
Question
2 answers
How can we judge the correctness of information related to COVID-19, and how reliable are the various online sources of this information?
What should and shouldn't we trust? Where should we get information?
Relevant answer
Answer
I totally agree with you.
I therefore recommend you take a look at:
  • asked a question related to Information Analysis
Question
102 answers
This is my only question on logic in RG; there are other questions on applications of logic, which I recommend.
Truth values can be of any type and number, not just binary, or two or three; it depends on the finesse desired. Information processing and communication seem to be described by a tri-state system or more in classical systems such as FPGAs, ICs, CPUs, and others, in multiple applications programmed in SystemVerilog, an IEEE standard. This has replaced the Boolean algebra of the two-state system indicated by Shannon, also in gate construction with physical systems. The primary reason, in my opinion, is dealing more effectively with noise.
Although, constructionally, a three-state system can always be embedded in a two-state system, efficiency and scalability suffer. This should be more evident in quantum computing, offering new vistas, as explained in the preprint
As new evidence accumulates, including from modern robots interacting with humans in complex cyber-physical systems, this question asks first whether only a mathematical description of reality is evident, while a physical description is denied. Ternary logic would then replace the physical description of choices with a possible third truth value, one we already face in physics, biology, psychology, and everyday life, where choices amount to more than a coin toss.
The physical description of "heads or tails" is denied in favor of opening up to a third possibility, and so on, to as many possibilities as needed. Are we no longer black or white, but accept a blended reality as well?
Relevant answer
Answer
Great idea
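Since the question contrasts two-state Boolean logic with ternary systems, a minimal sketch of Kleene's strong three-valued logic (one standard way to add an "unknown" third truth value) may help make the discussion concrete; the numeric encoding below is just one common convention:

```python
# Kleene's strong three-valued logic: T (true), F (false), U (unknown),
# encoded numerically so that NOT/AND/OR become 1-x, min, max.
T, F, U = 1.0, 0.0, 0.5

def k_not(a):
    return 1.0 - a

def k_and(a, b):
    return min(a, b)

def k_or(a, b):
    return max(a, b)

print(k_and(T, U))  # true AND unknown is unknown
print(k_or(T, U))   # true OR anything is true
print(k_not(U))     # not-unknown is still unknown
```

Note how the two-valued Boolean tables survive unchanged on {T, F}; the third value only fills in the previously impossible "noise" cases, which is essentially the tri-state idea from the question.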
  • asked a question related to Information Analysis
Question
20 answers
How does one combine the basis of Quantum Physics that the information cannot be destroyed with the GR statement that black holes destroy the info?
Relevant answer
Answer
Indeed, some of these topics are open: they are connected with the theory of quantum gravity, yet to be constructed (string theory and holography, with the AdS/CFT correspondence, or loop quantum gravity are only attempts).
However, I think that the "black hole information paradox" is surrounded by too much hype. The reason is, of course, the attraction of Hawking's public figure and his wager. There was much theatre in Hawking's conceding that black hole evaporation in fact preserves information.
The paradox arises because the initial matter configuration is assumed to be constructed as a pure quantum state. As I have already remarked, this is unphysical. The article in Wikipedia about the "black hole information paradox" cites Penrose saying that the loss of unitarity in quantum systems is not a problem and that quantum systems do not evolve unitarily as soon as gravitation comes into play. This is most patent in theories of cosmological inflation.
Of course, the definitive answer to Natalia S Duxbury's question will come with the final theory of quantum gravity. We can keep looking forward to it :-)
Best wishes to the seekers of final theories!
  • asked a question related to Information Analysis
Question
3 answers
Eric Kandel (2006) has revealed that the consolidation of memory at the level of the nucleus is a bipolar process: chemical agents exist in our cells that can either potentiate or suppress memory. Having a double-ended system prevents extremes: remembering everything or remembering nothing. As children we are rewarded for remembering everything we are taught in school, under the assumption that all knowledge is good. But what happens if the knowledge is tainted, such as the claims that black slaves on plantations enjoyed being supported by the white slave owners, that the Holocaust was a fabrication, that the recent election in the United States was rigged, that vaccines produce massive side effects, that drinking Clorox is an effective way to kill Covid-19, and so on? It is instructive that Albert Einstein was not a great student (i.e., he did not like to memorize things, and he had difficulty with his second language, French, which he needed to complete his university entrance exams; Strauss 2016), yet his ability to zero in on the important data while excluding nonsense is what made him an extraordinary scientist. Ergo, the management of one's memory may be as important as having a good memory.
References
Kandel ER (2006) In Search of Memory. The Emergence of a New Science of Mind. W.W. Norton & Company Inc., New York.
Strauss V (2016) Was Albert Einstein really a bad student who failed math? The Washington Post, Feb.
Relevant answer
Answer
Thomas Ryan (MIT) and others have talked about this matter, and they even go further, stating that they found that memory (offline consciousness) does not reside in the brain. I, however, see the whole of consciousness as residing in extra physical dimensions; kindly see the following video: https://youtu.be/I1G3Jx-Q1YY
  • asked a question related to Information Analysis
Question
9 answers
Are there any studies that show a positive relationship between the length of a text (word count or number of characters) and its information content?
Relevant answer
Answer
I agree with some of the replies above, but there is little new information here. When did we first start talking about short-form content - outlines of epistles (letters), epigrams, and so on? And what about the Renaissance "adagia" of Erasmus and Luis Vives? Compare the shorter tale (Guy de Maupassant; Lovecraft), involving adventure or magic, with the novel, a book that tells an invented story. Consider also the aphorisms of Nietzsche: a short, clever sentence expressing a general truth that the novel, built on a particular mode, does not, even though the novel influences a huge universe of accounts and reports of our feelings. To end with a question: might an adage or an aphorism contain a thousand expressions, or only a very few?
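On the underlying question of text length versus information content: under a zeroth-order (character-frequency) model, total information grows roughly linearly with length, because per-character entropy stabilises quickly. A toy sketch of that crude model (the studies the question asks for use far richer language models):

```python
import math
from collections import Counter

def char_entropy(text):
    # Shannon entropy (bits per character) of the character distribution
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = "the quick brown fox jumps over the lazy dog " * 20
for k in (10, 100, 500):
    prefix = text[:k]
    # total information under a zeroth-order model: length * bits/char
    print(k, round(k * char_entropy(prefix), 1))
```

The caveat is exactly what the question is after: repetition inflates length without adding content, which a zeroth-order model cannot see, so word count is only a weak proxy for information.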
  • asked a question related to Information Analysis
Question
16 answers
Could anyone provide any comment and/or references on the measurement of "information depth" ?
By "information depth", I mean more than just the minimum number of bits needed to reproduce a given piece of information. It would also have to involve aspects of the content, and perhaps corollary aspects of the information that form a full part of it (how it is collected as a function of the environment? its added value in a given context? its "strength" for further progress? ...).
For instance, there are the two verses (in French):
Gal, amant de la reine, alla, tour magnanime
Galamment de l'arène à la Tour Magne à Nîmes.
Both verses mean something coherent. But there is also additional information in them :
- they are alexandrines ;
- they are pronounced exactly the same way (the second verse is a full rhyme of the first verse) ;
- there is geographical information : there is an (ancient) arena and a tower called "Magne" in the French city of Nîmes (Provence) ;
- ....
Similarly, how could one measure the information exchange by which a transcription factor (in molecular biology) recognizes a given DNA sequence to be transcribed: for instance A-C-A-G-G-T-A-G-T-C ... (and, by the way, how can it "instantaneously" recognize that sequence, and only that one, which yields the relevant protein?)? And how could the process of information exchange be described for, e.g., the methylation or demethylation of the right DNA base (at the right moment), or for cellular division (chromatin-histone compacting/decompacting, ...), or for reprogramming in meiosis, etc. (epigenetics)?
Relevant answer
Answer
Hi Sang Ho Lee,
Yes, I acknowledge the (wilfully) naive question! Thank you for the references and the nice summary of the state of the art of this issue in biology.
I was already amazed by the fantastic discoveries of the last century in biology on how information is accurately exchanged, handled, and developed with such a high level of accuracy and reliability (DNA, RNA, the genetic code, cell organization, mitosis, meiosis, regulation, operons, promoters, repressors, the 3D complexity of proteins, etc.) so many times per second, allowing life to thrive in billions of cells and organisms over millions of years. So indeed, this is the question everybody asks: how is it possible? What is the secret of such high information quality at all levels? How can we grasp it, measure it?
And now, over the past two or three decades, we have seen the progress of epigenetics (the "how" of gene expression) with precise, timely methylations and demethylations, transposable elements, mobile DNA, reprogramming, return to pluripotency, epigenetic barriers, Polycomb, Trithorax, ... somatic and germinal memory, even epigenetic heritability of learned behaviours, etc. (all the work since the pioneering work of Barbara McClintock).
Of course it takes time, and it is a huge amount of work all over the world to discover, describe, and prove all this.
But the "information problem" is still there!
I have found the following article: maybe a starting point for further thinking?
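One concrete, classical way to put a number on the "information" in transcription-factor/DNA recognition, raised in the question, is the per-position information content used in sequence logos (Schneider-style R = 2 - H bits against a uniform nucleotide background). A minimal sketch:

```python
import math

def position_information(counts):
    # information content (bits) of one motif column, relative to a
    # uniform background over the four nucleotides: R = 2 - H
    total = sum(counts.values())
    H = -sum((c / total) * math.log2(c / total) for c in counts.values() if c > 0)
    return 2.0 - H

# aligned binding sites: a column where A appears in 8 of 8 sequences
print(position_information({"A": 8}))                                     # fully conserved: 2 bits
print(position_information({"A": 4, "C": 4}))                             # 1 bit
print(round(position_information({"A": 2, "C": 2, "G": 2, "T": 2}), 6))   # no information
```

Summing this over the positions of a binding motif gives the total bits a factor must "read" to pick out its sites, which is one partial, operational answer to the "information depth" question for DNA recognition.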
  • asked a question related to Information Analysis
Question
8 answers
Great attention should be paid to methods of searching for and selecting sources, to establish their credibility and the value of the information they provide.
Relevant answer
Answer
The use of the internet for educational research:
Since the internet is a public domain, no one guarantees or takes responsibility for what is written or spread there. It may be obsolete knowledge or information, partially tested information, or incomplete information. It is therefore risky to use internet information as a basis for academic writing, especially information spread on anonymous sites.
Although there are anonymous and incomplete sites, we can also find millions of popular journals, economic magazines, and bulletins, such as the New York Post, The Straits Times, The Washington Post, university libraries, etc.
  • asked a question related to Information Analysis
Question
4 answers
I want to discover possible association rules among different events in the environment, for example an association between height and the incidence of a disease. I am working with polygons, but it is possible to convert them to point features. Could you suggest some methods and software that support this?
Relevant answer
Answer
I don't know if my answer fits your question exactly. If by "hidden associations" you mean associations mediated by "hidden variables" related to the events you measured, I suggest exploring structural equation modeling.
I learned how to use it on the Mplus software, which has a demo version https://www.statmodel.com/demo.shtml
and I know that there are also R packages.
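For the association-rule part of the question specifically (as opposed to SEM), the core computation is just support and confidence over binarised events. A brute-force sketch for one-item rules; the attribute names are invented for illustration:

```python
from itertools import combinations

def association_rules(transactions, min_support=0.5, min_conf=0.7):
    # brute-force one-item -> one-item rules with support and confidence
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    rules = []
    for a, b in combinations(items, 2):
        for lhs, rhs in ((a, b), (b, a)):
            n_lhs = sum(1 for t in transactions if lhs in t)
            n_both = sum(1 for t in transactions if lhs in t and rhs in t)
            if n_lhs == 0:
                continue
            support, conf = n_both / n, n_both / n_lhs
            if support >= min_support and conf >= min_conf:
                rules.append((lhs, rhs, support, conf))
    return rules

# each "transaction" is the set of binarised attributes of one polygon/point
data = [{"high_altitude", "disease"},
        {"high_altitude", "disease"},
        {"high_altitude"},
        {"low_altitude"}]
for lhs, rhs, s, c in association_rules(data):
    print(f"{lhs} -> {rhs}  support={s:.2f} conf={c:.2f}")
```

Dedicated tools (e.g. Apriori/FP-growth implementations in data mining packages) do the same thing efficiently for multi-item rules; the GIS step is only the conversion of spatial features into such attribute sets.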
  • asked a question related to Information Analysis
Question
4 answers
I know how Random Forest works when there are two choices: if the apple is red go left, if the apple is green go right, and so on.
But in my case the data is text features. I trained the classifier with training data, and I would like to understand in depth how the algorithm splits a node. Based on what: the tf-idf weight, or the word itself? In addition, how does it predict the class for each example?
I would really appreciate a detailed explanation with a text example.
Relevant answer
Answer
Hi Sultan,
I am not familiar with which implementation of Random Forest you are using. In any case, you are a bit off from what it actually does.
Random Forest for classification builds multiple decision trees (not just one). Each tree outputs a label (using a process similar to the one you described with the apple). The final label is then decided by majority voting.
Each tree is grown from a randomly selected subset of the initial training set (a bootstrap sample; roughly one third of the samples are left out of each tree, depending on the defaults of the implementation you are using). Each tree also uses just a few features (e.g., if your features are frequency counts of words, it will consider only a small subset of the words, randomly selected from the initial ones). The number of candidate features per split is typically small, such as the square root of the total number of features, but again, it depends on your implementation's default.
The splitting criterion at each tree node is a condition over (typically) one feature that divides the data into two parts. The criterion for selecting which feature to consider at each node is, by default, the Gini criterion (see here to know more about this). This process stops when all the remaining data samples have the same label; that node becomes a leaf, which assigns labels to test samples.
Please let me know if you got everything from here.
Best,
Luis
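To make the splitting criterion concrete for text data: with tf-idf features, each node tries thresholds on a feature's numeric weight and keeps the split minimising the weighted Gini impurity of the two halves. A self-contained sketch (illustrative data, not any particular library's internals):

```python
def gini(labels):
    # Gini impurity of a set of class labels
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(values, labels):
    # try a threshold at every observed feature value and keep the one
    # minimising the weighted Gini impurity of the two resulting halves
    best_t, best_score = None, float("inf")
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

# tf-idf weight of one word across six training documents, with labels
tfidf = [0.0, 0.1, 0.1, 0.7, 0.8, 0.9]
label = ["neg", "neg", "neg", "pos", "pos", "pos"]
print(best_split(tfidf, label))
```

So the split is on the numeric tf-idf weight, not the word itself; the word only determines which feature column is being thresholded.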
  • asked a question related to Information Analysis
Question
30 answers
I have 600 examples in my dataset for a classification task. The number of examples in each class differs: ClassA has 300 examples, ClassB has 150, and ClassC has 150.
I have read many papers and resources about splitting data into two or three parts: train, validation, and test. Some say that with limited data there is no need to waste data on three parts; two parts (train-test) are enough, with 70% for training and 30% for testing, and 5-fold cross-validation is also ideal for limited data.
Others say to use 70% for training (with the validation data, 30%, taken from the training data itself) and to test on the remaining 30% of the original data.
From your experience, could you tell me your thoughts and suggestions about this mystery? 
Thank you 
Relevant answer
Answer
According to Andrew Ng, in the Coursera MOOC on Introduction to Machine Learning, the general rule of thumb is to partition the data set into the ratio of 3:1:1 (60:20:20) for training, validation and testing respectively.
When a learning system is trained with some data samples, you might not know to which extent it can predict unseen samples correctly. The concept of cross validation is done to tweak the parameters used for training in order to optimize its accuracy and to nullify the effect of over-fitting on the training data. This shouldn't be done on the test set itself and hence the separation between testing set and cross validation set.
In cases where cross validation is not applicable, it is common to separate the data  in the ratio of 7:3 (70:30) for training and testing respectively.
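For the 300/150/150 class imbalance in the question, whatever ratio is chosen, the split should be stratified so each part preserves the class proportions. A dependency-free sketch of a 60/20/20 stratified split (the function and ratios are illustrative, not from any particular library):

```python
import random

def stratified_split(examples, labels, ratios=(0.6, 0.2, 0.2), seed=42):
    # split each class separately so the 2:1:1 class balance survives
    rng = random.Random(seed)
    train, val, test = [], [], []
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    for y, items in by_class.items():
        rng.shuffle(items)
        n = len(items)
        n_train = round(ratios[0] * n)
        n_val = round(ratios[1] * n)
        train += [(x, y) for x in items[:n_train]]
        val += [(x, y) for x in items[n_train:n_train + n_val]]
        test += [(x, y) for x in items[n_train + n_val:]]
    return train, val, test

# 600 examples: 300 in ClassA, 150 each in ClassB and ClassC
X = list(range(600))
y = ["A"] * 300 + ["B"] * 150 + ["C"] * 150
tr, va, te = stratified_split(X, y)
print(len(tr), len(va), len(te))
```

With k-fold cross-validation instead of a fixed validation set, the same stratification idea applies per fold.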
  • asked a question related to Information Analysis
Question
4 answers
Does anybody have an example of research conducted in one area of science that, when cross-pollinated with research outputs from a different field, led to a breakthrough in a completely new area?
I am specifically looking for examples of research conducted in two completely different areas, with no obvious connection, that when brought together, by whatever means, led to a new discovery, process, or solution to an outstanding problem.
Relevant answer
Answer
I think that there are many potentially fruitful combinations; for example, economics and physics. But I think that both coauthors should have at least an elementary education in both sciences, otherwise they may not understand each other, partly due to different terminology. I have an article about this on RG:
Yegorov Y. (2007) Econo-physics: A Perspective of Matching Two Sciences
I think that the social sciences can also be matched with computer science, in particular with the theory of networks. Here, if you are a social scientist who can formulate a good problem (like the propagation of drugs), you need not know computer science yourself; you can find someone who can run network simulations, study their statistical properties, etc.
  • asked a question related to Information Analysis
Question
1 answer
I have read your paper, "Finding Opinion Strength Using Fuzzy Logic on Web Reviews". It is really a good paper.
I want to ask about page 42 (page 6), where you mention seven products: Nikon D3 SLR, Olympus FE-210, Canon 300, Canon EOS 40D, Fujifilm S9000, Sony Cyber-shot DSC-H10, and Kodak M1033.
How did you extract these data from the corresponding websites? Do you have a public package for these data? I am very interested and would like to test them. Thanks very much.
You also mention that "our system has a good accuracy in predicting product ranking". I am sorry, but I did not see the comparative experiments in the paper. How did you reach this conclusion? Thanks for any advice.
Relevant answer
Answer
Sorry, I do not have the expertise to answer this question.
  • asked a question related to Information Analysis
Question
19 answers
In our Age of Information, Shannon's physical theory of information is too narrow: he excluded "psychological considerations". But today the importance of this term is too great; we need a unified definition for all sciences!
My results can be seen at http://www.plbg.at, but they are only my ideas. They attempt to find a valid and acceptable abstraction across all sciences.
Relevant answer
Answer
Many definitions of information as a construct include the notion that information is the result of data analysis or visualization that is useful (actionable) or valuable (can be bought or sold). So the question of useful or valuable 'how' and 'to whom' naturally arises.
If you use the above factors in a definition, information is subjective and "psychological [or cultural] considerations" could not be excluded. Measurement of the construct defined this way is not easy but is a matter of perspective (as in judicial courts knowing obscenity when they see it).
David Kroenke is a well-recognized database and data modeling expert, and he has worked on differentiating data from information in practical ways for years (see his various articles and textbooks). He contends that information resides in the mind of the user. As an example, he points out that some people can make good use of a graph or map or other visualization of the results of data analysis while other people can't or won't. Further, an animal looking at that graph/visualization can't make use of it. Therefore, the information resides in the mind of the user, making it subjective.
In this context, perhaps the successful or unsuccessful use of data analyses/visualizations would be a good surrogate measure for information. If you can use it, it's there. If you make good use of it (a successful outcome), the quality of the information was good.
A counter argument is that use of the results of data analysis/visualization is really applying strategies and knowledge and therefore falls into the category of expertise. I see this idea in discussions of expert systems and artificial intelligence.
Just my two cents worth on the topic.
  • asked a question related to Information Analysis
Question
17 answers
Thanks to number theory, we have been studying numbers and their properties for a long time now. Dealing with numbers usually involves trying to discover the special, almost magical powers they may possess, if any. My question revolves around some immediate practical aspects:
Is there a general, generic manner in which numbers can be used as a memory storage unit? Is there a measure of how much information can be stored in numbers and their representations? Is it possible to find out how many such numbers there are?
Relevant answer
Answer
Also, if you are not aware of Chaitin's constant, it's something I'd recommend to you: it's almost the reverse of what you asked for: a number about which we know almost nothing!  
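On the practical side of "numbers as a memory storage unit": any byte string can be packed into a single (arbitrarily large) integer, and that integer's bit length measures exactly how much information it stores. A small sketch using Python's built-in arbitrary-precision integers:

```python
def encode(text):
    # pack a text string (via UTF-8 bytes) into a single integer
    return int.from_bytes(text.encode("utf-8"), "big")

def decode(number, length):
    # unpack the integer back into the original text
    return number.to_bytes(length, "big").decode("utf-8")

msg = "store me in a number"
n = encode(msg)
print(n.bit_length(), "bits")                       # capacity used, ~8 bits per byte
print(decode(n, len(msg.encode("utf-8"))) == msg)   # round trip succeeds
```

This also answers the "measure" part of the question in one direction: an integer n can store at most n.bit_length() bits, no more, by a simple counting argument.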
  • asked a question related to Information Analysis
Question
13 answers
Let's use an example. We have a function y = f(x), in which x is the input (the probability) and y is the output (the entropy). If we change y to y', can we find an x' such that f(x') = y'?
In other words, I know that when p changes, H changes; is the opposite possible, so that if H changes, p changes?
Relevant answer
Answer
Considering the Gibbs-Shannon entropy S, we know that
dS = -∑_i (ln p_i) dp_i
We consider i = 1, 2, ..., n with n > 2. Since ∑_i dp_i = 0, there are n - 1 independent variations dp_i.
You cannot uniquely determine n - 1 > 1 independent quantities dp_i from the single quantity dS.
I am not sure if this is what you were asking?
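The n = 2 case complements the counting argument above: with only one independent probability, H does determine p (up to the symmetry p ↔ 1 - p), so the inversion the question asks about works numerically. A sketch using bisection on [0, 1/2], where the binary entropy is strictly increasing:

```python
import math

def H(p):
    # Shannon entropy of a Bernoulli(p) source, in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def H_inv(h, tol=1e-12):
    # invert H on [0, 1/2], where it is strictly increasing, by bisection
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if H(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = H_inv(1.0)
print(round(p, 6))  # H(p) = 1 bit only at the fair coin, p = 1/2
```

For n > 2 the same trick fails for exactly the reason given above: one equation cannot pin down n - 1 unknowns.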
  • asked a question related to Information Analysis
Question
11 answers
Relevant answer
Answer
I always admired the work of François Quesnay (Tableau économique). It is an early (18th-century, 1758) precursor to input-output analysis. It is also connected to the modern work of Walter Isard (inter-regional input-output models) and to the classical-economics, Marx-inspired "production of commodities by means of commodities" (Sraffa). Modern economic geographers (Michael Webber and David Rigby) have also explored the theoretical connections [The Golden Age Illusion: Rethinking Postwar Capitalism by Michael J. Webber and David L. Rigby]. Surely another ill-fated example is the attempt at Soviet-style central planning, notorious for making too many of the wrong items due to the lack of market-driven price signals. So we come back to the market, and to the big-data efforts of retailers like Walmart, who have long explored patterns in terabytes of data.
Until your nice question / comment I had not really joined these ideas to connect to big data, but you are correct -- there is a nice lineage. At least I think so!
  • asked a question related to Information Analysis
Question
13 answers
Information analysis is a discipline belonging to information science. Specifically, I am interested in the behavior of information published electronically by governments on social networks.
Relevant answer
Answer
Hi, Yarenia, good day.
Wish the below 5 research articles suit your need.
  • asked a question related to Information Analysis
Question
3 answers
Thinking, insight, equations, and gedanken experiments are all other words for information in a general sense. If this is true, then it would seem that information has to be exchanged in the universe before cause and effect can be observed or constructed. Information, therefore, applies to every discipline, although it may be called something else in each one.
Relevant answer
Answer
What do you mean by "groups of threes"?
  • asked a question related to Information Analysis
Question
8 answers
What are Semantics of Business Vocabulary and Business Rules (SBVR) models, and what are information system models? Does UML model the business or the information system?
Relevant answer
Answer
There is a problem of terminology here: whereas meta-models are used to translate from one language to another, stereotypes are used within a language and therefore have nothing to do with meta-models.
  • asked a question related to Information Analysis
Question
10 answers
I have taken the effort to produce such software and am seeking people who have suitable applications for such sequences. Currently, the software will compute shustrings on the integers and the nucleotides, but will generate maximally (or uniformly) disordered sequences only on the integers; the nucleotides are coming, as are binary and a user-selectable symbol set. For now, I am soliciting an understanding of the application areas for such sequences, and offering them to interested parties in the hope that users will provide feedback.
This software generates sequences in integral powers of ten, up to a length of one hundred million digits. A sequence of one hundred million digits takes just 30 seconds to produce.
Relevant answer
Answer
The shuffle algorithm is now functioning properly and mixes the shustrings very quickly. The result is a UDS without lumps; instead, the UDS is rather smooth, with combinations of all symbols sprinkled throughout the sequence. Again, a UDS is a UDS; all are equivalently disordered. It is just that some UDSs have better visual appeal than others.
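For readers unfamiliar with the term, a shustring is a shortest unique substring: the shortest substring starting at a given position that occurs nowhere else in the sequence. A brute-force sketch (fine for short strings only, nothing like the poster's optimised software):

```python
def shustring_lengths(s):
    # for each position i, the length of the shortest substring starting
    # at i that occurs exactly once in s (brute force, cubic-ish time);
    # str.count is non-overlapping, which is adequate for this demo
    n = len(s)
    out = []
    for i in range(n):
        L = None
        for k in range(1, n - i + 1):
            if s.count(s[i:i + k]) == 1:
                L = k
                break
        out.append(L)  # None if no unique substring starts at i
    return out

print(shustring_lengths("ACACGT"))
```

In a highly disordered sequence these lengths stay uniformly short everywhere, which is one way to see the connection between shustrings and the "uniformly disordered" sequences discussed in this thread.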
  • asked a question related to Information Analysis
Question
6 answers
Hi gurus, I have a set of documents and I want to know what topic these documents are about. Is this an issue of topic modeling? Is there any software or technique where I give this set of documents as input and it gives me their topic, perhaps using some kind of taxonomy? Can anybody explain, both theoretically and practically? Thanks a lot.
Relevant answer
Answer
Hello Kims,
In data mining, topic models are a set of algorithms for uncovering the hidden topics in a set of documents (a corpus).
For a review of text data mining, please see: http://www.tandfonline.com/doi/full/10.1080/21642583.2014.970732
and for implementations of Latent Dirichlet Allocation methods and other algorithms.
Regards
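A full LDA implementation is beyond a forum post, but the taxonomy-based variant the question mentions is easy to sketch: score each document against hand-built topic keyword lists and return the best match. The taxonomy below is invented purely for illustration:

```python
def label_topic(document, taxonomy):
    # count keyword hits per topic and return the best-scoring topic
    tokens = set(document.lower().split())
    scores = {topic: len(tokens & set(words)) for topic, words in taxonomy.items()}
    return max(scores, key=scores.get)

taxonomy = {
    "genetics": ["dna", "gene", "genome", "mutation"],
    "finance": ["market", "stock", "price", "asset"],
}
doc = "The mutation rate across the genome varies by gene"
print(label_topic(doc, taxonomy))
```

Proper topic models (LDA and friends) go further: they learn the topics themselves from the corpus instead of requiring a predefined taxonomy, at the cost of having to interpret and name the discovered word distributions afterwards.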
  • asked a question related to Information Analysis
Question
5 answers
Products in final assembly are becoming more and more complex. I am investigating how companies develop their information support for the operator, and I would like to know if anyone else has done research in this area.
Example of RQs:
Which media (text, pictures, videos) are best for presenting information, in terms of:
quality?
personalised instructions?
time savings?
cost savings?
social sustainability?
available ICT?
Relevant answer
Answer
Do you mean a 'generic' best way of presenting work instructions to operators? A confluence of factors (e.g., the effectiveness of the underlying technologies, cost, availability, and the unique situation on the ground, such as the literacy level of the operator) should determine the best mix of multimedia presentation solutions, which would obviously vary from case to case.
  • asked a question related to Information Analysis
Question
4 answers
I want to study the process of appropriation of information through virtual Facebook communities. Would you help me with this?
Could you advise me on a review of the literature on these concepts:
-a / the concept of information (not the information system)
-b / the adoption of information (not the adoption of information systems)
-c / the acquisition of information (and not the acquisition of information systems)
-d / the ownership of the information (not the ownership of the information system)
Can we classify them in this order:
-1 / Adoption of information;
-2 / Acquisition of information;
-3 / Ownership of information
Thanks a lot
Relevant answer
Answer
I think there is one more question about the meaning of acquisition of information: in your study, you will probably have to identify the difference between acquisition/ownership of information and learning.
  • asked a question related to Information Analysis
Question
5 answers
Hello, I want to compute the so-called IRT Item Information Function for individual items as well as for the latent variable, using Stata. Several methods are available for IRT analysis, such as clogit, gllamm or raschtest, but so far I could not find any syntax to draw the Information Function using any one of these methods. So, any help or example is much appreciated.
Relevant answer
Answer
Dear Richard,
This morning I did not see the test-information options in the help file of icc_2pl.
Now I do, and the great news is that, thanks to icc_2pl, I am also able to create the item information plots and the test information plot.
So, I really have to thank you again for your ado-file, and I think you should share it with the Stata community too. It is a great time saver to have this.
Best regards,
Eric
  • asked a question related to Information Analysis
Question
10 answers
Non-market production of information has been gaining ground for the last 15 years or so. New social forms of information production, facilitated by networks, run counter to the intuitions of people living in market-based economies. Individuals can reach and inform millions of others around the world. This fact has led to the emergence of coordinated effects, where the aggregate effect of individual action, even if it is not self-consciously cooperative, produces a rich information environment. Based on this empirical state of affairs, do you think that information networks present an alternative to traditional market production of information?
Relevant answer
Answer
Yes, with every passing day we see greater use of information networks to provide information about events, products, ideas, technologies, etc. The internet and the mass media are allowing this to happen.
It is definitely a supplement to, or even an alternative to, traditional market-based information, but it is unlikely to replace the latter. Of course, it enables people to take a closer look at the information that the market provides, since they already have that information through the networks. This further supports the "customer is king" theory, which is a key postulate of the globalization philosophy.
The seminal book in the attached link, published by Professor Benkler of Harvard Law School in 2006, aroused a debate on the power of these information networks.
  • asked a question related to Information Analysis
Question
8 answers
Brain-to-brain transfer of information has been illustrated between a pair of rats (Pais-Vieira et al. 2013). We evaluate the scientific validity of this study. First, the rats receiving the electrical stimulation were performing at 62 to 64% correctness when chance was 50% correctness using one of two discrimination paradigms, tactile or visual. This level of performance is not sustainable without being embedded within a behavioural paradigm that delivers reward periodically. Second, we estimated that the amount of information transferred between the rats was 0.004 bits per second employing the visual discrimination paradigm and 0.015 bits per second employing the tactile discrimination paradigm. The reason for these low transfer scores (i.e. rates that are 1 to 2 orders of magnitude lower than that transferred by brain-machine interfaces) is that overall the rats were performing close to chance. Nevertheless, based on these results Pais-Vieira et al. have suggested that the next step is to extend their studies to multiple brain communication. We would suggest that the information transfer rate for brain-to-brain communication be enhanced before performing such an experiment. Note that the information transfer rate for human language can be as high as 40 bits per second (Reed and Durlach 1998).
For more information see: Tehovnik EJ & Teixeira e Silva Z (2014) Brain-to-brain interface for real-time sharing of sensorimotor information: a commentary. OA Neurosciences, Jan 01;2(1):2.
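Estimates of this kind can be approximated by treating each trial as a binary symmetric channel: with accuracy p on a two-alternative task, the mutual information per trial is 1 - H(p), where H is the binary entropy; dividing by the trial duration gives bits per second. This is a standard channel-capacity approximation, not necessarily the authors' exact calculation, and the trial duration below is an assumption for illustration.

```python
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def bits_per_trial(accuracy):
    """Mutual information of a binary symmetric channel at the given accuracy."""
    return 1.0 - binary_entropy(accuracy)

def bits_per_second(accuracy, trial_seconds):
    """Spread the per-trial information over the trial duration."""
    return bits_per_trial(accuracy) / trial_seconds

# At 63% correct on a two-choice task, each trial carries roughly 0.05 bits;
# over a multi-second trial (duration assumed here), the rate falls into the
# 0.004-0.015 bits/s range reported above.
rate = bits_per_second(0.63, trial_seconds=5.0)
```

At 50% accuracy (chance) the formula correctly yields zero bits, which is why near-chance performance produces such low transfer rates.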
Relevant answer
Answer
Thank you very much Edward for bringing this interesting piece of work to limelight through your question.
There is no reason why two physical objects such as a pair of brains cannot interact directly through the intermediary of appropriate fields. But the coupling constant, or the strength of the interaction, may be very low and sensitively dependent on many factors that cannot be directly controlled when the brains belong to two living creatures such as humans.
This kind of research is clearly a big pain for traditional thinkers, who tend to believe that brains are isolated from each other.
Regards
Rajat
  • asked a question related to Information Analysis
Question
3 answers
From an information management point of view.
Relevant answer
Answer
Not really my area, but Wikipedia (which, I know, isn't always the best!) has a good and very well-referenced article, though it does not deal in any depth with information management:
  • asked a question related to Information Analysis
Question
2 answers
How can I calculate information entropy between wavelet coefficients and signals?
Relevant answer
Answer
May I ask what the aim of calculating information entropy between wavelet coefficients and signals is? Is it for coupling analysis, or to measure the complexity of the signals?
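One common approach, usable for either aim, is wavelet entropy: normalize the squared wavelet coefficients into a probability-like energy distribution and take its Shannon entropy. Low values mean the energy is concentrated in a few coefficients (an ordered signal); high values mean the energy is spread out (a complex or noisy signal). A stdlib-only sketch, assuming the coefficients have already been obtained from a wavelet package such as PyWavelets:

```python
import math

def wavelet_entropy(coeffs):
    """Shannon entropy (in bits) of the normalized energy distribution
    of a list of wavelet coefficients."""
    energies = [c * c for c in coeffs]
    total = sum(energies)
    if total == 0:
        return 0.0                       # an all-zero signal carries no energy
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log2(p) for p in probs)

# Energy spread evenly over 4 coefficients -> entropy log2(4) = 2 bits;
# all energy in a single coefficient -> entropy 0.
spread = wavelet_entropy([1.0, -1.0, 1.0, -1.0])
peaked = wavelet_entropy([5.0, 0.0, 0.0, 0.0])
```

For coupling analysis between two signals, the same normalized energy distributions can instead feed a mutual-information estimate, but the entropy above is the usual starting point.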