Science topic

Information Analysis - Science topic

Explore the latest questions and answers in Information Analysis, and find Information Analysis experts.
Questions related to Information Analysis
  • asked a question related to Information Analysis
Question
4 answers
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet, built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, be designed?
The scale of disinformation on the Internet, including fake news, has been growing in recent years, mainly on social media sites that are popular among children, teenagers and young adults. This growth is particularly damaging socially because cybercriminals and certain organisations pursue it deliberately, for example by publishing posts and banners containing fake news from fake profiles of fictitious Internet users. The aim is to influence public opinion and the general social awareness of citizens; to shape assessments of specific policies of governments, national and international organisations and public institutions; to affect the ratings, credibility, reputation and recognition of particular institutions, companies, their product and service offerings, or individuals; and to sway the results of parliamentary, presidential and other elections. In parallel, both the scale of cybercriminal activity and the sophistication of cyber-security techniques have been growing on the Internet in recent years. As part of efforts to reduce the scale of disinformation spread deliberately by specific national and international organisations, computerised platforms are therefore being built to detect fake news and other forms of disinformation, using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies. Since the cybercriminals and organisations generating disinformation already use Industry 4.0 technologies to create fake profiles on popular social networks, the same technologies, including Big Data Analytics, artificial intelligence, deep learning and machine learning, should be used to reduce the scale of such activities harmful to citizens.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How should the architecture of an effective computerised platform for detecting fake news and other forms of disinformation on the Internet, built using Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, be designed?
And what do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
A multi-faceted computerised platform is needed for detecting fake news and other disinformation online, especially one that uses Big Data Analytics, AI and other Industry 4.0 technologies. Here is an outline of such a platform's architecture, followed by my thoughts on the major components and strategies:
Components of the architecture:
1. Data Collection and Aggregation:
- Collect data from Internet sources, such as social media platforms, using web crawlers and APIs.
- Use Big Data technologies such as Hadoop or Spark to aggregate and store very large volumes of data.
2. Data Preprocessing and Normalisation:
- Remove noise and normalise data formats.
- Use NLP to parse and interpret text.
3. Feature Extraction:
- Use NLP to extract sentiment, subjectivity, writing style and other linguistic traits.
- Analyse metadata (source credibility, user profiles, network patterns).
4. Classification and Detection (a minimal classification sketch follows this list):
- Use AI and machine learning algorithms (e.g. SVM, Random Forest, neural networks) to categorise content as genuine or deceptive.
- Transformer-based models such as BERT and other deep learning methods can help capture language context and nuance.
5. Real-Time Analysis:
- Apply a stream processing system for real-time data analysis.
- Complex event processing engines can identify patterns and anomalies in the data stream.
6. Verification and Fact-Checking:
- Use fact-checking APIs and databases to verify and cross-check information.
- Create a semi-automated workflow in which specialists verify flagged content.
7. Feedback Mechanism:
- Establish a feedback loop so that detection models are retrained on current misinformation trends and techniques.
8. User Interface and Reporting:
- Create an easy-to-use interface for monitoring and reporting.
- Visualise trends and risks with dashboards.
9. Security and Privacy:
- Protect the platform and user data with strong security measures.
- Follow ethics guidelines and privacy laws.
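To make component 4 concrete, below is a minimal, illustrative sketch of the classification step, assuming a small labelled corpus of article texts. The toy texts and labels are invented for the example; a production platform would instead train far richer models (e.g. fine-tuned transformers) on large, curated data sets fed in from the collection and preprocessing layers.

```python
# Minimal sketch of the classification step (component 4), using standard
# scikit-learn calls; the texts and labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report

texts = [
    "Official statistics confirm moderate GDP growth this quarter.",
    "Scientists agree the new vaccine passed all clinical trials.",
    "SHOCKING: secret cure hidden from the public by doctors!!!",
    "Anonymous insider reveals election results were decided in advance.",
]
labels = [0, 0, 1, 1]  # 0 = credible, 1 = suspected disinformation

# TF-IDF features feeding a linear classifier; in practice this block could be
# swapped for a fine-tuned transformer without changing the surrounding plumbing.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test), zero_division=0))
```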
Personal opinion:
- Interdisciplinarity: to understand and counteract disinformation, computer science, journalism, psychology and political science must be combined.
- AI limitations: AI is powerful but not perfect. Over-reliance on AI can introduce biases and inaccuracies, so human oversight is crucial.
- Ethics: disinformation detection must be balanced against free expression and privacy.
- Adaptability: disinformation methods evolve, so the platform must evolve with them.
In conclusion, developing a disinformation detection platform for the digital age is difficult but essential. It demands combining modern technologies with human expertise and ethics, and the fight against fake news and disinformation requires cross-disciplinary and cross-sector coordination.
References for designing and developing a computerised platform to detect fake news and disinformation using Big Data Analytics, AI and Industry 4.0 technologies:
1. Onur Savas and Julia Deng, "Big Data Analytics in Cybersecurity". Discusses big data analytics in cybersecurity, including disinformation detection.
2. Palash Goyal and Sumit Pandey, "Deep Learning for Natural Language Processing: Creating Neural Networks with Python". Covers deep learning models for processing and understanding textual data, which are essential for fake news identification.
3. Clarence Chio and David Freeman, "Machine Learning and Security: Protecting Systems with Data and Algorithms". Discusses machine learning for security, with ideas applicable to disinformation detection.
4. Gabor Szabo and Gungor Polatkan, "Social Media Data Mining and Analytics". Social media data mining is crucial to disinformation analysis and detection.
5. Jay Jacobs and Bob Rudis, "Data-Driven Security: Analysis, Visualisation and Dashboards". Covers security data analysis and visualisation relevant to a disinformation-monitoring platform.
6. Yuri Diogenes and Erdal Ozkaya, "Cybersecurity – Attack and Defence Strategies: Infrastructure security with Red Team and Blue Team tactics". Provides cybersecurity strategies relevant to platform development.
7. Steven Finlay, "Artificial Intelligence and Machine Learning for Business: A No-Nonsense Guide to Data Driven Technologies". Explains how AI and ML can be applied in business, including cybersecurity and disinformation contexts.
These references cover big data analytics, AI, cybersecurity and social media analytics, and provide a foundation in the technologies and methods needed to develop an effective Internet disinformation detection platform.
  • asked a question related to Information Analysis
Question
15 answers
WHAT IS INFORMATION? WHAT IS ITS CAUSAL (OR NON-CAUSAL?) CORE? A Discussion. Raphael Neelamkavil, Ph.D. (Quantum Causality), Dr. phil. (Gravitational Coalescence Cosmology)
Questions Addressed: What is information? Is it the same as the energy or matter-energy that is basic to it? Is it merely what is being communicated via energy and different from the energy? If it is different, is it causally or non-causally different or a-causally? Is it something purely physical, if it is based on and/or identifiable to energy? What is the symbolic nature of information? How does information get symbolized? Does it have a causal basis and core? If yes, how to systematize it? Can the symbolic aspect of information be systematized? Is information merely the symbolic core being transmitted via energy? If so, how to connect systematically and systemically the causal core and the symbolic core of languages? If language is a symbolizing production based on consciousness and life – both human and other – and if the symbolic aspect may be termed the a-causal but formatively causal core or even periphery of it, can language possess a non-causal aspect-core or merely a causal and an a-causal aspect-cores? If any of these is the case, what are the founding aspects of language and information within consciousness and life? These are the direct questions involved in the present work. I shall address these and the following more general but directly related questions together in the proposed work.
From a general viewpoint, the causal question engenders a multitude of other associated paradoxical questions at the theoretical foundations of the sciences. What are the foundations of all sciences and philosophy together, upon which the concepts of information, language, consciousness which is the origin of language, and the very existent matter-energy processes are based? Are there commonalities between information, language, consciousness, and existent matter-energy processes? Could a grounding of information, language, etc. be helped if their common conceptual base on To Be can be unearthed, and their consciousness-and-life-related and matter-energy-related aspects may be discovered? How to connect them to the causal (or non-causal?) core of all matter-energy? These are questions more foundational than the former set.
Addressing and resolving the foundational question of the apriority of Causality is, in my opinion, possibly the most fundamental solution. Hence, addressing these is the first task. This should be done in such a manner that the rest follows axiomatically and thus naturally. Hence, the causal question is to be formulated first, and then the possible ways in which it is reflected in mental concepts may be demonstrated axiomatically to follow suit. This task appears over-ambitious, but I would attempt to demonstrate as rationally as possible that the connections are strongly based on the very implications of To Be. As regards language, I deal only with verbal, nominal, and attributive (adverbs and adjectives) words, because (1) including other parts of speech would more than double the number of pages and (2) these other parts of speech are much more complicated and hence may be thought through and integrated into the mainline theory here, say, in the course of another decade or more!
Relevant answer
Answer
For you, thermodynamic information is negentropy. What is negentropy? Is this non-causal?
What is for you the difference between thermodynamic energy and thermodynamic information / negentropy?
What is intrinsic information? Does it not involve time? Something can be intrinsic to anything, but it should involve its own processual quantity of time. If "intrinsic" is just an observer-independent sense, then it belongs to our manner of sensing it or meaning something with it. Right? To tell the truth, your statement that intrinsic means an observer-independent sense gives me more confusion than answers.
A holarchy is self-referential, as you said. Are conscious processes holarchical? Has there been experimental work, with results, on this? If so, I am interested. Kindly suggest some books and articles on this.
  • asked a question related to Information Analysis
Question
3 answers
If ChatGPT is merged into search engines developed by internet technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be involved?
Leading Internet technology companies that also develop search engines as part of their online information services are working on technological solutions for implementing ChatGPT-type artificial intelligence into those search engines. There are currently discussions about the social and ethical implications of such a combination of technologies being offered in open access on the Internet. The considerations concern the possible risk of manipulation of the information message in the new media, potential disinformation resulting from a specific algorithm model, disinformation affecting the overall social consciousness of globalised societies, the possibility of a planned shaping of public opinion, and so on. This raises a further issue: the legitimacy of creating a control institution to carry out ongoing monitoring of the objectivity, independence and ethics of the algorithms used in solutions that implement ChatGPT-type artificial intelligence in Internet search engines, including the search engines that top the rankings of tools Internet users rely on for increasingly precise and efficient information searches. If such a system of institutional control by the state is not established, or if a control system involving the companies developing these solutions does not function effectively or does not keep up with technological progress, there may be serious negative consequences in the form of increased disinformation in the new Internet media. How important this may become is evident from what is currently happening around TikTok. On the one hand, it has been the fastest-growing new social medium in recent months, with more than 1 billion users worldwide; on the other, a growing number of countries are imposing restrictions or bans on the use of TikTok on computers, laptops and smartphones used for professional purposes by employees of public institutions and commercial entities. It cannot be ruled out that new types of social media will emerge in which ChatGPT-type artificial intelligence implemented in search engines will find application: search engines operated on the basis of intuitive feedback and automated profiling of the search engine to a specific user, or on the basis of multi-option, multi-criteria searches controlled by the Internet user for precisely defined information and data. New opportunities may also arise when the artificial intelligence implemented in a search engine is applied to multi-criteria searches for specific content, publications, persons, companies and institutions on social media sites, publication-indexing sites and web-based knowledge bases.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If ChatGPT is merged into search engines developed by online technology companies, will search results be shaped by algorithms to a greater extent than before, and what risks might be associated with this?
What is your opinion on the subject?
What do you think about this topic?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
If tools such as ChatGPT, after the necessary updating and adaptation to current Internet technologies, are combined with search engines developed by Internet technology companies, search results may be shaped by complex algorithms: by generative artificial intelligence trained to use and improve models for advanced, intelligent searching of precisely defined topics, and by intelligent search systems based on artificial neural networks and deep learning. If such solutions are created, there is a risk of deliberate shaping of the algorithms of advanced Internet search systems, which in turn creates the risk that the technology companies operating these search engines interfere with and influence search results, and thereby shape the general social awareness of citizens on specific topics.
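As a purely illustrative sketch of how an algorithmic layer can shape what users see, the example below (invented documents and query, standard scikit-learn calls) ranks documents for a query by TF-IDF cosine similarity. The choice of representation and weighting, not the user, determines which result appears first; a generative re-ranking or answer-synthesis layer on top would amplify this effect.

```python
# Hypothetical ranking layer: score invented documents against a query by
# cosine similarity of TF-IDF vectors and print them in ranked order.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Independent audit questions the safety record of the new reactor.",
    "Government press release praises the safety of the new reactor.",
    "Blog post speculates about reactor accidents without citing sources.",
]
query = "is the new reactor safe"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
ranking = scores.argsort()[::-1]  # highest similarity first
for rank, idx in enumerate(ranking, start=1):
    print(rank, round(float(scores[idx]), 3), documents[idx])
```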
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
  • asked a question related to Information Analysis
Question
7 answers
The fourth technological revolution currently underway is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning, deep learning, artificial intelligence, ... what's next? Intelligent thinking autonomous robots?
The fourth technological revolution currently underway is characterised by rapidly advancing ICT and Industry 4.0 technologies, including but not limited to machine learning, deep learning and artificial intelligence. Machine learning, machine self-learning and machine learning systems are near-synonymous terms for the field of artificial intelligence focused on algorithms that improve automatically through experience, i.e. through exposure to large data sets. Machine learning algorithms build a mathematical model of data processing from sample data, called a training set, in order to make predictions or decisions without being explicitly programmed by a human to do so. They are used in a wide variety of applications, such as spam filtering (screening Internet messages for unwanted correspondence) or image recognition, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks. Deep learning is a subcategory of machine learning that involves the creation of deep neural networks, i.e. networks with multiple layers of neurons. Deep learning techniques are designed to improve, among other things, automatic speech processing, image recognition and natural language processing. The structure of deep neural networks consists of multiple layers of artificial neurons. Simple neural networks can be designed manually so that a specific layer detects specific features and performs specific data processing, while learning consists of setting appropriate weights on the basis of processing and learning from large amounts of data. In large neural networks, the deep learning process is automated and, to a certain extent, self-contained: the network is not designed to detect specific features but detects them itself from appropriately labelled data sets. Both the data sets and the operation of the neural networks must be prepared by specialists, but the features are detected by the program itself. Large amounts of data can therefore be processed and the network can automatically learn higher-level feature representations, which means it can detect complex patterns in the input data. Accordingly, deep learning systems are built on Big Data Analytics platforms designed so that the deep learning process is performed on a sufficiently large amount of data. Artificial intelligence (AI) is the 'intelligent', multi-criteria, advanced, automated processing of complex, large amounts of data carried out in a way that alludes to characteristics of human intelligence exhibited by thought processes. As such, it is intelligence exhibited by artificial devices, including certain advanced ICT and Industry 4.0 systems and devices equipped with these technological solutions. The concept of artificial intelligence is contrasted with that of natural intelligence, i.e. the intelligence of humans. Artificial intelligence thus has two basic meanings. On the one hand, it is a hypothetical intelligence realised through a technical rather than a natural process.
On the other hand, it is the name of a technology and a research field of computer science and cognitive science that also draws on the achievements of psychology, neurology, mathematics and philosophy. In computer science and cognitive science, artificial intelligence also refers to the creation of models and programs that simulate at least partially intelligent behaviour. Artificial intelligence is also considered in philosophy, within which a theory concerning the philosophy of artificial intelligence is being developed, and it is a subject of interest in the social sciences. The main task of research and development work on artificial intelligence and its new applications is the construction of machines and computer programs capable of performing selected functions analogously to the human mind working with the human senses, including processes that do not lend themselves to numerical algorithmisation. Such problems are sometimes referred to as AI-hard and include decision-making in the absence of complete data, analysis and synthesis of natural language, logical (rational) reasoning, automatic theorem proving, computer logic games such as chess, intelligent robots, and expert and diagnostic systems, among others. Artificial intelligence can be developed and improved by integrating it with machine learning, fuzzy logic, computer vision, evolutionary computation, neural networks, robotics and artificial life. AI technologies have been developing rapidly in recent years, driven by their combination with other Industry 4.0 technologies, the use of microprocessors, digital machines and computing devices with ever-increasing capacity for multi-criteria processing of ever-larger amounts of data, and the emergence of new fields of application. Recently, the development of artificial intelligence has become a topic of discussion in various media thanks to ChatGPT, an open-access, automated, AI-based solution with which Internet users can hold a kind of conversation. The solution is based on, and learns from, a collection of large amounts of data extracted in 2021 from specific data and information resources on the Internet. The development of artificial intelligence applications is so rapid that it is outpacing the adaptation of regulations. The new applications do not always generate exclusively positive impacts; the potentially negative effects include disinformation on the Internet, i.e. information crafted using artificial intelligence that is not in line with the facts and is disseminated on social media sites. This raises a number of questions regarding the development of artificial intelligence and its new applications, the possibilities that will arise under the next generations of artificial intelligence, and the possibility of teaching artificial intelligence to think, i.e. to realise artificial thought processes in a manner analogous or similar to those realised in the human mind.
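As a small, hedged illustration of the machine-learning idea described above (a model improving from sample data rather than from explicit programming), the sketch below trains a tiny multi-layer neural network on scikit-learn's bundled digits data set. It is a toy example, not a claim about any production Industry 4.0 system.

```python
# Illustrative sketch only: a small multi-layer ("deep") neural network that
# learns an image-recognition task from sample data using standard scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # small bundled image data set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of artificial neurons; the weights are set automatically
# during training rather than being programmed explicitly by a human.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```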
In view of the above, I address the following question to the esteemed community of scientists and researchers:
The fourth technological revolution currently taking place is characterised by rapidly advancing ICT information technologies and Industry 4.0, including but not limited to machine learning technologies, deep learning, artificial intelligence, .... what's next? Intelligent thinking autonomous robots?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
The drive to build autonomous, thinking, intelligent robots and androids raises many ethical controversies and potential risks. In addition, the drive to build artificial consciousness, as a kind of continuation of the development of artificial intelligence, is also controversial.
What is your opinion on this topic?
Best regards,
Dariusz Prokopowicz
  • asked a question related to Information Analysis
Question
2 answers
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining quantum computing, Big Data Analytics of data and information extracted from, for example, large numbers of websites and social media sites, cloud computing, satellite analytics and artificial intelligence in joint applications for the construction of integrated analytical platforms, is it possible to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thus significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes?
Ongoing technological progress is increasing the technical possibilities of conducting research, collecting and assembling large amounts of research data, and processing those data against multiple criteria using ICT and Industry 4.0 technologies. Before the development of ICT, IT tools and personal computers in the second half of the 20th century as part of the third technological revolution, computerised, semi-automated processing of large data sets was very difficult or impossible. As a result, building multi-criteria, multi-faceted models of complex macro-process structures from large volumes of data and information, as well as simulation and forecasting models, was limited or practically impossible. The technological advances of the current fourth technological revolution and the development of Industry 4.0 technologies have changed a great deal in this regard. The current fourth technological revolution is, among other things, a revolution in the improvement of multi-criteria, computerised analytical techniques based on large data sets. Industry 4.0 technologies, including Big Data Analytics, are used for multi-criteria processing and analysis of large data sets, and artificial intelligence (AI) can be useful for scaling up the automation of research processes and the multi-faceted processing of big data obtained from research.
The technological advances taking place are contributing to the improvement of computerised analytical techniques conducted on increasingly large data sets. The application of the technologies of the fourth technological revolution, including ICT information technologies and Industry 4.0 in the process of conducting multi-criteria analyses and simulation and forecasting models conducted on large sets of information and data increases the efficiency of research and analytical processes. Increasingly, in research conducted within different scientific disciplines and different fields of knowledge, analytical processes are carried out, among others, using computerised analytical tools including Big Data Analytics in conjunction with other Industry 4.0 technologies.
When these analytical tools are augmented with Internet of Things technology, cloud computing and satellite-implemented sensing and monitoring techniques, opportunities arise for real-time, multi-criteria analytics of large areas, e.g. nature, climate and others, conducted using satellite technology. When machine learning technology, deep learning, artificial intelligence, multi-criteria simulation models, digital twins are added to these analytical and research techniques, opportunities arise for creating predictive simulations for multi-factor, complex macro processes realised in real time. Complex, multi-faceted macro processes, the study of which is facilitated by the application of new ICT information technologies and Industry 4.0, include, on the one hand, multi-factorial natural, climatic, ecological, etc. processes and those concerning changes in the state of the environment, environmental pollution, changes in the state of ecosystems, biodiversity, changes in the state of soils in agricultural fields, changes in the state of moisture in forested areas, environmental monitoring, deforestation of areas, etc. caused by civilisation factors. On the other hand, complex, multifaceted macroprocesses whose research processes are improved by the application of new technologies include economic, social, financial, etc. processes in the context of the functioning of entire economies, economic regions, continents or in global terms.
Year on year, due to technological advances in ICT, including the use of new generations of microprocessors characterised by ever-increasing computing power, the possibilities for increasingly efficient, multi-criteria processing of large collections of data and information are growing. Artificial intelligence can be particularly useful for the selective and precise retrieval of specific, defined types of information and data extracted from many selected types of websites and the real-time transfer and processing of this data in database systems organised in cloud computing on Big Data Analytics platforms, which would be accessed by a system managing a built and updated model of a specific macro-process using digital twin technology. In addition, the use of supercomputers, including quantum computers characterised by particularly large computational capacities for processing very large data sets, can significantly increase the scale of data and information processed within the framework of multi-criteria analyses of natural, climatic, geological, social, economic, etc. macroprocesses taking place and the creation of simulation models concerning them.
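As a deliberately simplified, hypothetical illustration of the predictive-modelling idea discussed above (not a description of any existing platform), the sketch below forecasts a synthetic macro-level indicator from its own lagged values using an ordinary regression model. In a real system the input series would come from the Big Data pipelines, satellite monitoring and other sources mentioned in the question, and far richer models would be used.

```python
# Illustrative sketch: forecast a synthetic macro-level indicator from its own
# lagged values with a simple regression model (toy data, standard scikit-learn).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.arange(200)
series = 0.05 * t + np.sin(t / 12.0) + rng.normal(scale=0.2, size=t.size)  # toy indicator

def make_lagged(y, n_lags=6):
    """Build a supervised data set: predict y[t] from y[t-n_lags..t-1]."""
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

X, y = make_lagged(series)
split = int(0.8 * len(y))                      # train on the first 80 per cent
model = LinearRegression().fit(X[:split], y[:split])
print("out-of-sample R^2:", round(model.score(X[split:], y[split:]), 3))
```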
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Is it possible, by combining quantum computing, Big Data Analytics of data and information extracted from, among other sources, a large number of websites and social media portals, cloud computing, satellite analytics and artificial intelligence in joint applications for building integrated analytical platforms, to create systems for the multi-criteria analysis of large quantities of quantitative and qualitative data, and thereby significantly improve predictive analyses of various multi-faceted macro-processes concerning local, regional and global climate change, the state of the biosphere, and natural, social, health, economic and financial processes?
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence and other Industry 4.0 technologies, is it possible to significantly improve the predictive analyses of various multi-faceted macroprocesses?
By combining the technologies of quantum computers, Big Data Analytics, artificial intelligence, is it possible to improve the analysis of macroprocesses?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Dariusz Prokopowicz
Relevant answer
Answer
In my opinion, thanks to the combination of the above-mentioned technologies, there are new opportunities to expand research and analytical capabilities, to process large data sets within the framework of Big Data Analytics, to develop predictive models for various types of macro-processes.
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
  • asked a question related to Information Analysis
Question
6 answers
Can Threads replace Twitter?
Relevant answer
Answer
It is important to understand Twitter's users and its advertising base to answer this question. For an advertiser focused on Business-to-Business (B2B) or Business-to-Government (B2G) marketing, especially in the tech industry, Twitter provided the marketing personas that we sought. Personas are the user-type profiles sought, i.e. C-level executives, business decision makers, technical decision makers and business influencers. Facebook and Instagram, by contrast, started as a great place to appeal to Business-to-Consumer (B2C) personas.
For Threads to capture Twitter's users and its B2B/B2G personas, it must deliver a better experience and give advertisers the confidence that jumping to that platform will be successful. Threads' first steps have not been successful in doing so.
  • asked a question related to Information Analysis
Question
10 answers
In your opinion, will the addition of mandatory sustainability reporting according to the European Sustainability Reporting Standards (ESRS) to company and corporate reporting motivate business entities to scale up their sustainability goals?
In your opinion, will the introduction of mandatory enhanced disclosure of sustainability issues help to scale up the implementation of sustainability goals and accelerate the processes of transforming the economy towards a sustainable, green circular economy?
Taking into account the negative aspects of unsustainable economic development, including the over-consumption of natural resources, the increasing scale of environmental pollution, still-high greenhouse gas emissions, progressing global warming and the intensifying negative effects of climate change, it is necessary to accelerate the pro-environmental and pro-climate transformation of the classic brown, linear economy of excess into a sustainable, green, zero-carbon, closed-loop economy. One of the key determinants of this green transformation is the implementation of the Sustainable Development Goals, i.e. the UN's 17 Sustainable Development Goals. In recent years many companies and enterprises, noticing the growing importance of this issue and the increasing pro-environmental and pro-climate awareness of citizens, i.e. the customers for their offerings, have added the implementation of sustainable development goals to their missions and development strategies, and in advertising campaigns and other forms of marketing communication present themselves and their product and service offerings as green, environmentally and climate friendly, and aligned with specific sustainable development goals. Unfortunately, this is not always consistent with the facts. Research shows that in the European Union the majority of companies and enterprises already carry out this type of marketing communication to a greater or lesser extent. However, a significant proportion of businesses that present themselves as green, pursuing specific sustainability goals and environmentally and climate friendly, and that present their product and service offerings as green, made exclusively from natural raw materials and produced fully in line with sustainability goals, do so unreliably and mislead potential customers. Many companies and businesses engage in greenwashing. It is therefore necessary to improve the systems for verifying, against the facts, what economic operators claim about themselves and their offerings in marketing communications. By significantly reducing the scale of greenwashing, it will be possible to increase the effectiveness of the green transformation of the economy and genuinely increase the scale of achievement of the Sustainable Development Goals. Significant instruments for motivating businesses to conduct marketing communications reliably include extending the scope of their reporting to cover sustainability issues. The addition of sustainability reporting obligations in line with the European Sustainability Reporting Standards (ESRS) should motivate economic actors to scale up their implementation of the Sustainable Development Goals. In November 2022 the Council of the European Union finally approved the Corporate Sustainability Reporting Directive (CSRD), which requires companies to report on sustainability in accordance with the ESRS. This means that, under the Directive, more than 3,500 companies in Poland will have to disclose sustainability data.
The ESRS standards developed by EFRAG (the European Financial Reporting Advisory Group) have been submitted to the European Commission, and their final form, as delegated acts, is currently awaited. This does not mean, however, that companies should not already be preparing for the new obligations, especially if they have not reported on sustainability issues so far, or have done so only to a limited extent. Companies will have to disclose sustainability issues in accordance with the ESRS, so it is essential to build systemic reporting standards for business entities enriched with sustainability issues. If the addition of sustainability reporting obligations in accordance with the ESRS to company and corporate reporting is carried out effectively, there should be an increased incentive for business entities to scale up their sustainability goals. In this regard, the introduction of enhanced disclosure of sustainability issues should help to increase the scale of implementation of the Sustainable Development Goals and accelerate the transformation of the economy towards a sustainable, green circular economy.
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
In your opinion, will the introduction of mandatory enhanced disclosure of sustainability issues help to scale up the implementation of the Sustainable Development Goals and accelerate the processes of transformation of the economy towards a sustainable, green circular economy?
In your opinion, will the addition of mandatory sustainability reporting to companies and businesses in line with the European Sustainability Reporting Standards (ESRS) motivate business entities to scale up the implementation of the Sustainable Development Goals?
Will the extension of sustainability reporting by business entities motivate companies to scale up their sustainability goals?
What challenges do companies and businesses face in relation to the obligation for expanded disclosure of sustainability issues?
What do you think about it?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Counting on your opinions, on getting to know your personal opinion, on an honest approach to discussing scientific issues and not the ready-made answers generated in ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
Dear Prof. Prokopowicz!
You raised a very important topic to address. May I kindly argue that at this stage one cannot really say which future scenario will become the actual one. Predictions are difficult to make on the basis of historical data, as the business environment in Europe is about to become increasingly turbulent due to the war in Ukraine and global competition between the EU and China on the one hand and the USA on the other. Hoping for the best, but still preparing for the worst:
1) Greenwashing in Corporate Sustainability Reporting: Towards Successful Environmental Sustainability Management (2023): https://link.springer.com/collections/gjgafbdgdi
2) Hahn, R., Reimsbach, D., & Wickert, C. (2023). Nonfinancial Reporting and Real Sustainable Change: Relationship Status—It’s Complicated. Organization & Environment, 36(1), 3–16, https://doi.org/10.1177/10860266231151653, Open access:
Yours sincerely, Bulcsu Szekely
  • asked a question related to Information Analysis
Question
351 answers
A fundamental question for artificial intelligence (AI) and informatics scientists: Are information and artificial and biological intelligence non-causal, not based on energy?
I am now finalizing a book on this theme. It is theoretically very fundamental to AI and biological intelligence (BI).
I create a system of thought that yields Universal Causality in all sciences and also in AI and BI.
I invite your ideas. I have already uploaded a short document on this in my RG page. Kindly read it and comment here.
The book is supposed to appear at some time after Dec 2023 in English and Italian, and then in Spanish. Will keep you informed.
Relevant answer
Answer
"a modification of a degree of freedom" must be caused a certain impact of certain material carrier.
  • asked a question related to Information Analysis
Question
5 answers
What, in your opinion, are the negative effects of the low level of economic knowledge of society and what can the low level of economic knowledge of a significant part of citizens in society lead to?
A recent survey shows that only 60 per cent of the public in Poland knows what inflation is, including the awareness that a drop in inflation from a high level means that prices are still rising, only more slowly. In Poland, in February 2023, the government-controlled Central Statistical Office reported consumer inflation of 18.4 per cent. Since March, disinflation has been under way; in April 2023 the Central Statistical Office reported consumer inflation of 14.7 per cent. The most optimistic forecasts of the central bank that informally cooperates with the government, i.e. the National Bank of Poland, suggest that Poland's falling inflation may reach single-digit levels only in December. After deducting international factors, i.e. the prices of energy raw materials, energy and foodstuffs, core inflation, i.e. that determined by domestic factors, still stands at around 12 per cent. The drop in inflation since March has been largely determined by a reduction in the until-recently excessive margins and prices of motor fuels by the government-controlled, monopolistically operating, state-owned gas and fuel concern, which accounts for over 90 per cent of domestic production and sales of motor fuels. These reductions are the result of criticism in the independent media that this government-controlled concern was acting anti-socially, making excessive profits by maintaining increased margins and not reducing the price of motor fuels until early 2023, despite the fact that the prices of energy raw materials, including oil and natural gas, had already fallen to their levels from before the war in Ukraine. Citizens can only find out from media independent of the government what is really happening in the economy; in the government-controlled mainstream media, including the government-controlled so-called public television, other media, including the independent media, are constantly criticised and harassed. But back to the issue of the public's economic knowledge. Among the media in Poland, it is those independent of the PiS government that play an important role in increasing economic awareness and knowledge, including the objective presentation of events in the economy and explanations of economic processes that are consistent with the fundamentals of economics. The aforementioned research shows that as many as 40 per cent of citizens in Poland still do not know what inflation is and do not fully understand what a successive decrease in inflation means; some of them assume that a fall in inflation, even from a high level, i.e. the disinflation currently taking place, means that the prices of purchased products and services are falling. The level of economic knowledge is therefore still low, and various dishonest economic actors and institutions take advantage of this. The low level of economic knowledge among the public has often been exploited by para-financial companies which, in their advertising campaigns and by presenting themselves as banks, have created financial pyramids that took money from the public for unreliable deposits. Many citizens lost their life savings this way. In Poland this was the case when the authorities overseeing the financial system inadequately informed citizens about the high risk of losing money deposited with such para-banking and pseudo-investment companies as Kasa Grobelnego and AmberGold.
In addition, the low level of economic knowledge in society makes it easier for unreliable political options to find support among a significant proportion of citizens for populist pseudo-economic policy programmes, to win parliamentary elections on that basis, and then to conduct economic policy in a way that leads to financial or economic crises after a few years. It is therefore necessary to develop a system of economic education from primary school onwards, but also in the so-called Universities of the Third Age, which are attended mainly by senior citizens. This is important because it is seniors who are most exposed to unreliable, misleading publicity campaigns run by such money-extracting companies. Thanks to the low level of economic knowledge, the government in Poland, through the mainstream media it controls, persuades a significant part of the population to support what is in reality an anti-social, anti-environmental, anti-climate, financially unsustainable pseudo-economic policy, which leads to high indebtedness of the state financial system and to recurring financial and economic crises.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What, in your opinion, are the negative consequences of the low level of economic knowledge of society and what can the low level of economic knowledge of a significant part of citizens in society lead to?
What are the negative consequences of the low level of economic knowledge of the public?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Counting on your opinions, on getting to know your personal opinion, on an honest approach to the discussion of scientific issues and not ready-made answers generated in ChatGPT, I deliberately used the phrase "in your opinion" in the question.
The above text is entirely my own work written by me on the basis of my research.
I have not used other sources or automatic text generation systems such as ChatGPT in writing this text.
Copyright by Dariusz Prokopowicz
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
Time constraints limit the acquisition of economic knowledge.
  • asked a question related to Information Analysis
Question
3 answers
Is analytics based on Big Data and artificial intelligence already capable of predicting what we will think about tomorrow, that we need something, that we should perhaps buy something we think we need?
Can an AI-equipped internet robot using the results of research carried out by Big Data advanced socio-economic analytics systems and employed in the call centre department of a company or institution already forecast, in real time, the consumption and purchase needs of a specific internet user on the basis of a conversation with a potential customer and, on this basis, offer internet users the purchase of an offer of products or services that they themselves would probably think they need in a moment?
On the basis of analytics of a bank customer's purchases of products and services, and analytics of online payments, settlements and bank card payments, will banks refine their models of customers' preferences for specific banking products and financial services? For example, will the purchase of a certain type of product or service result in a specific insurance policy or bank loan being offered to a specific customer of the bank?
Will this be an important part of the automation of the processes carried out within the computerised systems concerning customer relations etc. in the context of the development of banking in the years to come?
For years, in databases, data warehouses and Big Data platforms, Internet technology companies have been collecting information on citizens, Internet users, customers using their online information services.
Continuous technological progress increases the possibilities of both obtaining, collecting and processing data on citizens in their role as potential customers, consumers of Internet offers and other media, Internet information services, offers of various types of products and services, advertising campaigns that also influence the general social awareness of citizens and the choices people make concerning various aspects of their lives. The new Industry 4.0 technologies currently being developed, including Big Data Analytics, cloud computing, Internet of Things, Blockchain, cyber security, digital twins, augmented reality, virtual reality and also machine learning, deep learning, neural networks and artificial intelligence will determine the rapid technological progress and development of applications of these technologies in the field of online marketing in the years to come as well. The robots being developed, which collect information on specific content from various websites and webpages, are able to pinpoint information written by internet users on their social media profiles. In this way, it is possible to obtain a large amount of information describing a specific Internet user and, on this basis, it is possible to build up a highly accurate characterisation of a specific Internet user and to create multi-faceted characteristics of customer segments for specific product and service offers. In this way, digital avatars of individual Internet users are built in the Big Data databases of Internet technology companies and/or large e-commerce platforms operating on the Internet, social media portals. The descriptive characteristics of such avatars are so detailed and contain so much information about Internet users that most of the people concerned do not even know how much information specific Internet-based technology companies, e-commerce platforms, social media portals, etc. have about them.
Geolocalisation added to 5G high-speed broadband and information technology and Industry 4.0 has, on the one hand, made it possible to develop analytics for identifying Internet users' shopping preferences, topics of interest, etc., depending on where, specifically geographically, they are at any given time with the smartphone on which they are using certain online information services. On the other hand, the combination of the aforementioned technologies in the various applications developed in the applications installed on the smartphone has made it possible, on the one hand, to increase the scale of data collection on Internet users, and, on the other hand, also to increase the efficiency of the processing of this data and its use in the marketing activities of companies and institutions and the implementation of these operations increasingly in real time in the cloud computing, the presentation of the results of the data processing operations carried out on Internet of Things devices, etc.
It is becoming increasingly common for us to experience situations in which, while walking with a smartphone past some physical shop, bank, company or institution offering certain services, we receive an SMS, banner or message on the Internet portal we have just used on our smartphone informing us of a new promotional offer of products or services of that particular shop, company, institution we have passed by.
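As a hedged, purely illustrative sketch of the kind of purchase-propensity modelling described above, the example below trains a simple model on synthetic customer features to estimate how likely a customer is to accept a follow-on offer. The feature names, coefficients and data are invented for the example and are not drawn from any real bank or platform.

```python
# Illustrative sketch only: estimating the propensity that a customer accepts a
# follow-on offer (e.g. travel insurance) from simple, invented behavioural features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
# Hypothetical features: monthly card spend, online payments per month,
# and whether the customer recently bought a travel-related product.
X = np.column_stack([
    rng.normal(2000, 600, n),     # card spend
    rng.poisson(15, n),           # online payments per month
    rng.integers(0, 2, n),        # recent travel purchase (0/1)
])
# Synthetic target: travel buyers with higher spend are more likely to accept.
logits = 0.001 * X[:, 0] + 0.05 * X[:, 1] + 1.5 * X[:, 2] - 3.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted offer-acceptance probability:",
      model.predict_proba([[2500, 20, 1]])[0, 1].round(2))
```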
In view of the above, I would like to address the following question to the esteemed community of scientists and researchers:
Is analytics based on Big Data and artificial intelligence, conducted in the field of market research, market analysis, the creation of characteristics of target customer segments, already able to forecast what we will think about tomorrow, that we need something, that we might need to buy something that we consider necessary?
Is analytics based on Big Data and artificial intelligence already capable of predicting what we will think about tomorrow?
The text above is my own, written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems such as ChatGPT.
Copyright by Dariusz Prokopowicz
What do you think about this topic?
What is your opinion on this subject?
Please answer,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
Predicting individual thoughts and opinions is a complex task due to the inherent complexity and subjectivity of human cognition. Thoughts and opinions are influenced by a myriad of factors such as personal experiences, values, beliefs, cultural background, and individual idiosyncrasies. These factors create a highly nuanced and dynamic landscape that is challenging to capture accurately through data analysis alone.
While analytics based on Big Data and AI can provide valuable insights into general trends and patterns, predicting individual thoughts requires a deep understanding of the context and personal factors that shape an individual's thinking. AI algorithms typically analyze historical data to identify correlations and patterns, which can be useful in predicting collective behavior or trends at a broader level. For example, analyzing social media data can help identify sentiments about a particular topic within a given population.
However, predicting individual thoughts requires accounting for unique and specific circumstances that can significantly impact an individual's perspectives. These circumstances may not be adequately captured in the available data sources or may change rapidly over time. Furthermore, individual thoughts and opinions are not solely influenced by external factors but are also shaped by internal cognitive processes that can be highly subjective and difficult to quantify.
Another challenge lies in the interpretability of AI algorithms. While AI can make predictions based on complex models, explaining how those predictions were generated can be challenging. This lack of interpretability makes it difficult to gain a deep understanding of the underlying factors influencing individual thoughts and opinions, limiting the reliability and trustworthiness of such predictions.
It is important to note that the field of AI is rapidly advancing, and new techniques and approaches are continually emerging. Researchers are working on developing more sophisticated models that can better capture and understand human cognition. However, the ability to predict individual thoughts with complete accuracy still remains a significant challenge.
In summary, while analytics based on Big Data and AI can provide valuable insights and predictions at a collective level, accurately predicting individual thoughts and opinions is a complex task due to the multifaceted nature of human cognition and the limitations of available data sources. While advancements are being made, predicting individual thoughts with certainty remains beyond the current capabilities of AI.
  • asked a question related to Information Analysis
Question
2 answers
How can one build an artificial-intelligence-based Big Data Analytics system, better than ChatGPT, that learns only real, verified information and data?
How can one build a Big Data Analytics system that analyses information taken from the Internet, runs its analytics in real time, is integrated with an Internet search engine, and, unlike ChatGPT, improves its data verification through discussion with Internet users so that it learns only real information and data?
Well, ChatGPT is not perfect at self-learning new content and refining the answers it gives: it sometimes confirms statements even when the question formulated by the Internet user contains factually incorrect information or data. In this way, ChatGPT can absorb false information and fictitious data in the course of these 'discussions'. Various technology companies are currently planning to create, develop and implement computerised analytical systems based on artificial intelligence technology similar to ChatGPT, which will find application in many fields of big data analytics, in business and research work, and in business entities and institutions operating in different sectors and industries of the economy. One direction of development for this kind of technology is a Big Data Analytics system that analyses information taken from the Internet, conducts its analytics in real time, is integrated with an Internet search engine, and, unlike ChatGPT, improves its data verification through discussion with Internet users so that it learns only real information and data. Some technology companies are already working on such solutions and applications of artificial intelligence technology similar to ChatGPT. Presumably many technology start-ups that plan to create, develop and implement technological innovations based on this generation of artificial intelligence are also considering research in this area, and perhaps developing a start-up whose business concept has Industry 4.0 technological innovation, including the aforementioned artificial intelligence technologies, as its key determinant.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can one build a Big Data Analytics system that analyses information taken from the Internet, conducts its analytics in real time, is integrated with an Internet search engine, and, unlike ChatGPT, improves its data verification through discussion with Internet users so that it learns only real information and data?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
Relevant answer
Answer
In recent years, more and more managers in organizations have come to rely on a wide range of information systems that provide them with analytics methods and features to support their decisions and planning activities. With the growing automation of business processes and the resulting growth of the huge amounts of data stored in databases, big data analytics methods can now take full advantage of this trend to support decision makers in their decision-making process.
Regards,
Shafagat
  • asked a question related to Information Analysis
Question
3 answers
How can the implementation of artificial intelligence help to automate the process of analysing the sentiment of the content of posts, entries, banners, etc. published by Internet users on popular online social media, and of analysing changes in opinion on specific topics and changes in trends of general social awareness, conducted using computerised Big Data Analytics platforms?
How can the computerised analytics system architecture of Big Data Analytics platforms used to analyse the sentiment of Internet users' social media activity be improved using the new technologies of Industry 4.0, including but not limited to artificial intelligence, deep learning, machine learning, etc.?
In recent years, analytics conducted on large data sets downloaded from multiple websites using Big Data Analytics platforms has been developing rapidly. This type of analysis includes sentiment analyses of changes in Internet users' opinions on specific topics and issues, on product and service offers, company brands, public figures, political parties, etc., based on the verification of thousands of posts, comments and replies given in discussions on social media sites. With the ever-increasing computing power of new generations of microprocessors and the speed of processing data stored on increasingly large digital storage media, the importance of increasing the scale of automation of the processes carried out during such sentiment analyses is growing. Certain new Industry 4.0 technologies, including machine learning, deep learning and artificial intelligence, can support this. I am conducting research on sentiment analysis of the content of posts, entries, banners, etc. published by Internet users on popular online social media, and on the analysis of changes in opinion on specific topics and changes in trends of general social awareness, conducted using computerised Big Data Analytics platforms. I have presented the results of these studies in my articles on this subject, which I have posted after publication on my profile on this Research Gate portal. I would like to invite you to join me in scientific cooperation on this issue.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can the implementation of artificial intelligence help to automate the process of analysing the sentiment of the content of posts, entries, banners, etc. published by Internet users on popular online social media, and of analysing changes in opinion on specific topics and changes in trends of general social awareness, conducted using computerised Big Data Analytics platforms?
What do you think about this topic?
What is your opinion on this subject?
Please respond,
Please answer with reasons,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
Salesforce Tableau puts AI in driver’s seat for big data
“Data-driven” is a mantra for countless organizations, but it’s a back-seat driver, useful only to the extent that decision-makers can parse and interpret it and comprehend its implications.
Indeed, a 2022 survey-based study from strategy firm NewVantage found organizations continue to struggle to become data-driven, with only 26.5% reporting having achieved this goal, and only 19.3% reporting having established a data culture.
Business software titan Salesforce said a raft of enhancements to its Tableau project management platform will help decision makers live up to the phrase “data driven” by making big data more user-friendly. The company said the changes feature automation to processes and new integration features meant to make data easy to visualize, manipulate and share on Slack, its hub for communications, collaboration and customer engagement...
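Setting the Tableau news aside, here is a minimal sketch of the kind of automated sentiment scoring the question asks about, using NLTK's VADER lexicon model. The example posts are invented, and a production platform would substitute models tuned to the language and the social media platform being monitored.
```python
# Minimal sketch: lexicon-based sentiment scoring of a few invented social media posts.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off download of the VADER lexicon

posts = [
    "I love the new features, great update!",
    "This service keeps crashing, very disappointed.",
    "The announcement was made on Tuesday.",
]

analyzer = SentimentIntensityAnalyzer()
for post in posts:
    scores = analyzer.polarity_scores(post)          # neg / neu / pos / compound in [-1, 1]
    if scores["compound"] > 0.05:
        label = "positive"
    elif scores["compound"] < -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8s} {scores['compound']:+.2f}  {post}")
```
The per-post scores produced this way (or by a supervised deep-learning classifier) are the raw material that a Big Data Analytics platform then aggregates over time, topics and user groups.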
  • asked a question related to Information Analysis
Question
12 answers
How can artificial intelligence, such as ChatGPT, and Big Data Analytics be used to analyse the level of innovativeness of new economic projects that startups plan to develop, implementing innovative business solutions, technological innovations, environmental innovations, energy innovations and other types of innovations?
The economic development of a country is determined by a number of factors, which include the level of innovativeness of economic processes, the creation of new technological solutions in research and development centres, research institutes, university laboratories and business entities, and their implementation in the economic processes of companies and enterprises. In the modern economy, the level of innovativeness of the economy is also shaped by the effectiveness of innovation policy, which influences the formation of innovative startups and their effective development. The economic activity of innovative startups carries high investment risk, and for the institutions financing their development it generates high credit risk. As a result, many banks do not finance business ventures led by innovative startups. Within systemic programmes financing startup development from national public funds or international innovation support funds, financial grants are organised; these can be provided as non-refundable financial assistance if a startup successfully develops a business venture according to the plan set out in its application for external funding. Non-refundable grant programmes can thus activate the development of innovative business ventures in specific areas, sectors and industries of the economy, including, for example, innovative green business ventures that pursue sustainable development goals and are part of the green economy transformation. Institutions distributing non-refundable financial grants should constantly improve their systems for analysing the level of innovativeness of the business ventures that startups describe as innovative in their funding applications. In improving systems for verifying the level of innovativeness of business ventures and the fulfilment of specific goals, e.g. sustainable development goals or green economy transformation goals, new Industry 4.0 technologies implemented in Business Intelligence analytical platforms can be used, including machine learning, deep learning, artificial intelligence (e.g. ChatGPT), Business Intelligence platforms with Big Data Analytics, cloud computing, multi-criteria simulation models, etc. Given appropriate IT equipment, including computers with new-generation, high-performance processors, it is therefore possible to use artificial intelligence such as ChatGPT, Big Data Analytics and other Industry 4.0 technologies to analyse the level of innovativeness of the new economic projects that startups plan to develop, implementing innovative business, technological, ecological, energy and other types of innovations.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How can artificial intelligence, such as ChatGPT, and Big Data Analytics be used to analyse the level of innovativeness of new economic projects that startups plan to develop, implementing innovative business solutions, technological innovations, ecological innovations, energy innovations and other types of innovations?
What do you think?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
Enhancements to Tableau for Slack focuses on sharing, search and insights with automated workflows for tools like Accelerator. The goal: empower decision makers and CRM teams to put big data to work...
The changes also presage what’s coming next: integration of recently announced generative AI model Einstein GPT, the fruit of Salesforce’s collaboration with ChatGPT maker OpenAI, with natural language-enabled interfaces to make wrangling big data a low-code/no-code operation...
  • asked a question related to Information Analysis
Question
3 answers
Does analytics based on sentiment analysis of changes in Internet user opinion using Big Data Analytics help detect fakenews spread as part of the deliberate spread of disinformation on social media?
The spread of disinformation on social media, carried out by setting up fake profiles and spreading fake news through them, is becoming increasingly dangerous for the security not only of specific companies and institutions but also of the state. The various social media, including those that dominate this segment of new online media, differ considerably in this respect. The problem is more acute for those social media that are among the most popular and are used mainly by young people, whose world view can be more easily influenced by fake news and other disinformation techniques used on the Internet. Currently, among children and young people, the most popular social media include TikTok, Instagram and YouTube. Consequently, in recent months some governments have begun restricting social media sites such as TikTok by banning the installation and use of the application on smartphones, laptops and other devices used for official purposes by employees of public institutions. These governments justify such actions by the need to maintain a certain level of cyber security and to reduce the risk of surveillance and theft of data and of sensitive, strategic and security-critical information belonging to individual institutions, companies and the state. In addition, there have already been numerous data leaks at social media portals, telecoms, public institutions, local authorities and others, based on hacking into the databases of specific institutions and companies. In Poland, however, the opposite is true: not only does the governing PIS party not restrict the use of TikTok by employees of public institutions, it also encourages its politicians to use the portal to publish videos as part of the ongoing electoral campaign, in order to increase its chances of winning parliamentary elections for the third time in autumn 2023. According to analysts researching the problem of growing disinformation on the Internet, in highly developed countries it is enough to create around 100,000 avatars, i.e. fictitious persons who appear to function on the Internet through fake profiles on social media portals, to seriously influence the world view and general social awareness of Internet users, i.e. usually the majority of citizens in a country. In third world countries and in countries with undemocratic systems of power, about 1,000 such avatars suffice, with back-stories modelled, for example, on famous people, such as a well-known singer in Poland claiming that there is no pandemic and that vaccines are an instrument for increasing state control over citizens. The analysis of changes in the world view of Internet users, of trends in social opinion on specific issues, of evaluations of specific product and service offers, and of the brand recognition of companies and institutions can be conducted on the basis of sentiment analysis of changes in Internet users' opinions using Big Data Analytics. Consequently, this type of analytics can be applied, and can be of great help, in detecting fake news disseminated as part of the deliberate spread of disinformation on social media.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Does analytics based on sentiment analysis of changes in the opinions of Internet users using Big Data Analytics help in detecting fakenews spread as part of the deliberate spread of disinformation on social media?
What is your opinion on this topic?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
Yes, sentiment analysis based on Big Data Analytics can help in detecting fake news spread as part of the deliberate spread of disinformation on social media. Sentiment analysis involves the use of natural language processing and machine learning techniques to analyze large amounts of textual data, such as social media posts, to identify the sentiment expressed in the text. By analyzing changes in the sentiment of Internet users towards a particular topic or event, it is possible to identify patterns of misinformation and disinformation.
For example, if there is a sudden surge in negative sentiment towards a particular politician or political party, it could be an indication of a disinformation campaign aimed at spreading negative propaganda. Similarly, if there is a sudden increase in positive sentiment towards a particular product or service, it could be an indication of a paid promotion or marketing campaign.
However, it is important to note that sentiment analysis alone may not be enough to detect fake news and disinformation. It is also important to consider other factors such as the source of the information, the credibility of the information, and the context in which the information is being shared. Therefore, a comprehensive approach involving multiple techniques and tools may be necessary to effectively detect and combat fake news and disinformation on social media.
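To illustrate the "sudden surge in sentiment" signal described above, here is a minimal sketch that assumes per-post sentiment scores have already been computed (by a lexicon or a machine-learning model), aggregates them by day, and flags days that deviate sharply from the recent baseline. The data below are synthetic.
```python
# Minimal sketch: flag abrupt shifts in aggregated daily sentiment (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2023-01-01", periods=60, freq="D")
sentiment = rng.normal(0.1, 0.05, size=60)
sentiment[45:50] -= 0.4            # inject a sudden negative swing, as a coordinated campaign might cause

daily = pd.Series(sentiment, index=dates, name="mean_sentiment")
baseline = daily.rolling(window=14, min_periods=7).mean().shift(1)   # trailing 2-week mean
spread = daily.rolling(window=14, min_periods=7).std().shift(1)      # trailing 2-week std
zscore = (daily - baseline) / spread

alerts = daily[zscore.abs() > 3]   # candidate days for manual fact-checking and source analysis
print(alerts)
```
As the answer above stresses, such a statistical alert is only a trigger for further checks on sources, credibility and context, not proof of disinformation by itself.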
  • asked a question related to Information Analysis
Question
5 answers
We know that knowledge management transcends or goes beyond information management, but what process do you think should be followed so as not to evaluate them separately?
Relevant answer
Answer
Information management refers to the management of data (facts and figures) obtained from different sources. This data is structured, organized and processed.
Knowledge is obtained via experience, education and the understanding of information. In my own thinking, knowledge management is deeper than information management. Knowledge management may also involve the storage and processing of knowledge ontologies, or natural language understanding (NLU) techniques for representing and managing knowledge, rather than information, for further processing.
  • asked a question related to Information Analysis
Question
5 answers
Do new ICT information technologies facilitate the development of scientific collaboration, the development of science?
Do new ICT information technologies facilitate scientific research, the conduct of research activities?
Do new ICT information technologies, internet technologies and/or Industry 4.0 facilitate research?
If so, to what extent, in which areas of your research has this facilitation occurred?
What examples do you know of from your own research and scientific activity that support the claim that new ICT information technologies facilitate research?
What is your opinion on this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
Hello all, and thanks to Dr. Dariusz Prokopowicz for the important discussion.
ICT technologies are entering all aspects of life. They facilitate everything mentioned in the discussion question, and more.
Without these technologies, life would have been nearly impossible during the pandemic, and the growing number of people around the world would have difficulty achieving the same level of communication, collaboration, smart planning, resource utilization, etc. without the current ICT tools.
  • asked a question related to Information Analysis
Question
5 answers
Several leading technology companies are currently working on developing smart glasses that will be able to take over many of the functions currently contained in smartphones.
It will no longer be just Augmented Reality, Street View, interactive connection to Smart City systems, or Virtual Reality used in online computer games, but many other functions of remote communication and information services.
In view of the above, I address the following questions to the esteemed community of researchers and scientists:
Will smart glasses replace smartphones in the next few years?
Or will thin, flexible interactive panels stuck on the hand prove more convenient to use?
What new technological gadget could replace smartphones in the future?
What do you think about this topic?
Please reply,
I invite you all to discuss,
Thank you very much,
Greetings,
Dariusz Prokopowicz
Relevant answer
Answer
  1. A novel technological device that has surprised the world since it hit the market is smart glasses.
  2. The concept of computer-enhanced glasses that allow the user to access information is something some people still can't wrap their heads around.
  3. What's more, some of the more recent models even feature most of the perks offered by smartphones, and the market for them is growing.
  • asked a question related to Information Analysis
Question
6 answers
Hi everyone,
We'd like to open up a huge topic with a systematic literature review. However, the topic is so broad that the initial search on Web of Science returned over 25 000 papers that met our search criteria. (Sure, this can be reduced, but only slightly.)
I'd like to explore the possibilities of computer-assisted review; there must be some software capable of performing an analysis of some sort. Is there anyone who has experience in this field?
Thank you for your thoughts.
Best regards,
Martin
Relevant answer
Answer
One option is to restrict the review to papers from the last 10 years in top journals, or to select the models and frameworks related to the particular field for analysis.
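As one concrete, hedged example of computer-assisted screening for such a large corpus: if the Web of Science records are exported with titles and abstracts, a TF-IDF plus k-means clustering pass can give a rough thematic map that helps decide which subsets to screen in full. The file name, column names and cluster count below are assumptions, not a prescription.
```python
# Minimal sketch: rough thematic clustering of exported bibliographic records.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

records = pd.read_csv("wos_export.csv")                      # hypothetical export file
texts = records["Title"].fillna("") + ". " + records["Abstract"].fillna("")

vectorizer = TfidfVectorizer(max_features=20000, stop_words="english", min_df=5)
X = vectorizer.fit_transform(texts)

kmeans = KMeans(n_clusters=25, random_state=0, n_init=10).fit(X)
records["cluster"] = kmeans.labels_

# Print the top terms of each cluster to decide which clusters fall inside the review's scope.
terms = vectorizer.get_feature_names_out()
for c in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:8]
    print(c, [terms[i] for i in top])
```
Dedicated review tools (e.g. Rayyan, ASReview) automate similar screening steps; the sketch only shows the underlying idea.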
  • asked a question related to Information Analysis
Question
8 answers
Do you have any experience or opinion about the accuracy of scientific information? The paper describes the accuracy of Wikipedia. I am experiencing resistance from wiki-bot/automatic response that prevents me from correcting the wrong knowledge. Thank you.
Relevant answer
Answer
I share the concerns of Ehtisham Lodhi that Wikipedia is insufficient and unreliable. If I needed reliable information (such as a reference to quote in my own paper), I would not hesitate to search the peer-reviewed literature, or even (if the style, or the lecturer, is old-fashioned) try a textbook.
  • asked a question related to Information Analysis
Question
5 answers
What strategies do you personally follow to manage information overload?
Relevant answer
Answer
Addressing information overload in scholarly literature
Information overload is a common problem, and it is an old problem. It is not a problem of the internet age, and it is not specific to scholarly literature, but the growth of preprints in the last five years presents us with a proximal example of the challenge.
We want to tackle this information overload problem and have some ideas on how to do this – presented at the end of this post. Are you willing to help? This post tells some of the back story of how preprints solve part of the problem – speedy access to academic information –  yet add to the growing information that we need to filter to find results that we can build on. It is written to inspire the problem solvers in our community to step forward and help us to realise some practical solutions....
  • asked a question related to Information Analysis
Question
4 answers
Hi, does anyone know theories related to (improvement of) product information and/or product (detail) page for online retailing? It will be appreciated a lot - thanks!
Relevant answer
Answer
Hi
The sale of a product largely depends on its quality, since the consumer does not buy the product as such but rather buys it for its quality attributes. Therefore, you can consult the two following articles for information on product quality:
1. Caswell, J. A., Noelke, C. M., & Mojduszka, E. M. (2002). Unifying two frameworks for analyzing quality and quality assurance for food products. In Global food trade and consumer demand for quality (pp. 43-61). Springer, Boston, MA.
2. Caswell, J. A. (2006). Quality assurance, information tracking, and consumer labeling. Marine pollution bulletin, 53(10-12), 650-656.
You can also follow one of my articles, accepted for publication, on the online buying behaviour of clothing products in online retailing. It will be published very soon. You can then contact me at hossainafjal@gmail.com
  • asked a question related to Information Analysis
Question
2 answers
How can one judge the correctness of information obtained about COVID-19, and how reliable are the various online sources of this information?
What should and should not we trust? Where should we get information?
Relevant answer
Answer
I totally agree with you.
Therefore, I recommend you to take a look at:
  • asked a question related to Information Analysis
Question
102 answers
This is my only question on logic in RG; there are other questions on applications of logic, that I recommend.
There can be any type and number of truth values, not just binary, or two or three. It depends on the finesse desired. Information processing and communication seem to be described by a tri-state system or more, in classical systems such as FPGAs, ICs, CPUs, and others, in multiple applications programmed with SystemVerilog, an IEEE standard. This has replaced the Boolean algebra of a two-state system indicated by Shannon, also in gate construction with physical systems. The primary reason, in my opinion, is dealing more effectively with noise.
Although, constructionally, a three-state system can always be embedded in a two-state system, efficiency and scalability suffer. This should be more evident in quantum computing, offering new vistas, as explained in the preprint
As new evidence accumulates, including from modern robots interacting with humans in complex cyber-physical systems, this question asks first whether only a mathematical description of reality is evident, while a physical description is denied. Ternary logic would then replace the physical description of choices, with 'possible' as a third truth value, which one already faces in physics, biology, psychology, and life, requiring more than a coin toss to represent choices.
The physical description of "heads or tails" is denied in favour of opening up to a third possibility, and so on, to as many possibilities as needed. Are we no longer black or white, but accept a blended reality as well?
Relevant answer
Answer
Great idea
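For readers who want to see the "third truth value" concretely, here is a small sketch of strong Kleene three-valued logic (True / Unknown / False), encoded as 1, 0.5 and 0. This is only one of several possible ternary logics, not a claim about which one the question intends.
```python
# Strong Kleene three-valued logic with values encoded on [0, 1].
T, U, F = 1.0, 0.5, 0.0

def k_not(a):        # negation: swaps True and False, leaves Unknown fixed
    return 1.0 - a

def k_and(a, b):     # conjunction: the "worst" of the two values
    return min(a, b)

def k_or(a, b):      # disjunction: the "best" of the two values
    return max(a, b)

for a in (T, U, F):
    for b in (T, U, F):
        print(f"a={a}, b={b}: AND={k_and(a, b)}, OR={k_or(a, b)}, NOT a={k_not(a)}")
```
The min/max/1-x encoding also generalises smoothly to fuzzy logics with arbitrarily many truth values, which is one way to read the question's "as many possibilities as needed".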
  • asked a question related to Information Analysis
Question
20 answers
How does one combine the basis of Quantum Physics that the information cannot be destroyed with the GR statement that black holes destroy the info?
Relevant answer
Answer
Indeed, some of these topics are open: they are connected with the theory of quantum gravity, yet to be constructed (string theory and holography, with the AdS/CFT correspondence, or loop quantum gravity are only attempts).
However, I think that the "black hole information paradox" is surrounded by too much hype. The reason is, of course, the attraction of Hawking's public figure and his wager. There was much theatre in Hawking's conceding that black hole evaporation in fact preserves information.
The paradox arises because the initial matter configuration is assumed to be constructed as a pure quantum state. As I have already remarked, this is unphysical. The article in Wikipedia about the "black hole information paradox" cites Penrose saying that the loss of unitarity in quantum systems is not a problem and that quantum systems do not evolve unitarily as soon as gravitation comes into play. This is most patent in theories of cosmological inflation.
Of course, the definitive answer to Natalia S Duxbury's question will come with the final theory of quantum gravity. We can keep looking forward to it :-)
Best wishes to the seekers of final theories!
  • asked a question related to Information Analysis
Question
3 answers
Eric Kandel (2006) has revealed that the consolidation of memory at the level of the nucleus is a bipolar process: chemical agents exist in our cells that can either potentiate or suppress memory. Having a double-ended system prevents extremes: remembering everything or remembering nothing. As a child we are rewarded for remembering everything we are taught in school under the assumption that all knowledge is good. But what happens if the knowledge is tainted such as that the black slaves on plantations enjoyed being supported by the white slave owners, that the Holocaust was a fabrication, that the recent election in the United States was rigged, that vaccines produce massive side-effects, that drinking Clorox is an effective way to kill Covid-19, and so on. It is instructive that Albert Einstein was not a great student (i.e., did not like to memorize things and he had difficulty in his second language, French, which he needed to complete his university entrance exams, Strauss 2016) yet his ability to zero-in on the important data while excluding nonsense is what made him an extraordinary scientist. Ergo, the management of one’s memory may be as important as having a good memory.
References
Kandel ER (2006) In Search of Memory. The Emergence of a New Science of Mind. W.W. Norton & Company Inc., New York.
Strauss V (2016) Was Albert Einstein really a bad student who failed math? The Washington Post, Feb.
Relevant answer
Answer
Thomas Ryan (MIT) and others have talked about this matter, and they even go further by stating that they found that memory (offline consciousness) does not reside in the brain. I, however, see the entire consciousness as residing in extra physical dimensions; kindly see the following video: https://youtu.be/I1G3Jx-Q1YY
  • asked a question related to Information Analysis
Question
9 answers
Are there any studies that show a positive relationship between the length of a text (word count or number of characters) and its information content?
Relevant answer
Answer
I agree with some of the replies above, but would add a few points. When did short forms first appear: the outlines of epistles (letters), epigrams, and so on? And what about the Renaissance "adagia" of Erasmus and Luis Vives? Compare also the short tale (Guy de Maupassant; Lovecraft), involving adventure or magic, with the novel, a book that tells an invented story at length. With the aphorisms of Nietzsche we have a short, clever sentence expressing a general truth in a way a novel does not, a particular mode that can carry a huge universe of meaning, even a report of our feelings. So, to end with a question: might an adage or an aphorism contain a thousand expressions, or only a very few?
  • asked a question related to Information Analysis
Question
16 answers
Could anyone provide any comment and/or references on the measurement of "information depth" ?
By "information depth", I mean more than just the minimum amount of bits to reproduce a given information. It would also have to involve some stuff related to the content and maybe corollary aspects of the information making full part of it (how it is collected in function of the environment ? Its added value in given context ? Its "strength" for further progress ? ...).
For instance, there are the two verses (in French):
Gal, amant de la reine, alla, tour magnanime
Galamment de l'arène à la Tour Magne à Nîmes.
Both verses mean something coherent. But there is also additional information in them :
- they are alexandrines ;
- they are pronounced exactly the same way (the second verse is a full rhyme of the first verse) ;
- there is geographical information : there is an (ancient) arena and a tower called "Magne" in the French city of Nîmes (Provence) ;
- ....
Similarly, how could one measure the exchange of information by which a transcription factor (in molecular biology) recognizes a given DNA sequence to be transcribed: for instance A-C-A-G-G-T-A-G-T-C .... (and by the way, how can it "instantaneously" recognize the sequence, and only that one sequence, which yields the relevant needed protein?)? And how could the process of information exchange be described for, e.g., the methylation or demethylation of the right DNA base (at the right moment), and likewise for cellular division (chromatin-histone compacting / decompacting, ...), for reprogramming in meiosis, etc. (epigenetics)?
Relevant answer
Answer
Hi Sang Ho Lee,
Yes, I acknowledge the (wilful) naive question ! Thank you for the references and the nice summary on the "state of art" of the issue in biology.
I was already amazed by the fantastic discoveries in biology over the last century on how information is accurately exchanged, handled and developed with such a high level of accuracy and reliability (DNA, RNA, the genetic code, cell organization, mitosis, meiosis, regulation, operons, promoters, repressors, the 3D complexity of proteins, etc.), so many times per second, allowing life to thrive in billions of cells and organisms over millions of years. So indeed this is the question everybody asks: how is it possible? What is the secret of such high information quality at all levels? How can we grasp it, measure it?
And now, for the last two or three decades, we have seen the progress of epigenetics ("how genes are expressed"), with precisely timed methylations and demethylations, transposable elements, mobile DNA, reprogramming, return to pluripotency, epigenetic barriers, Polycomb, Trithorax, somatic and germinal memory, even epigenetic heritability of learned behaviours, etc. (all the work since the pioneering work of Barbara McClintock).
Of course it takes time, and it is a huge effort all over the world, to discover, describe and prove all this.
But the "information problem" is still there!
I have found the following article: maybe a starting point for further thinking?
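One crude, assumption-laden way to probe "depth" beyond plain symbol-frequency entropy is to compare the Shannon entropy per symbol with the compressed length per symbol, a rough practical stand-in for Kolmogorov-style complexity. The sketch below does this for a repetitive string, the verse quoted in the question, and a pseudo-random DNA-like string; it is only an illustration, not a measure of semantic or biological information.
```python
# Minimal sketch: symbol-frequency entropy vs. compressed size as crude information measures.
import math
import zlib
from collections import Counter

def shannon_entropy_per_symbol(s: str) -> float:
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_bits_per_symbol(s: str) -> float:
    return 8 * len(zlib.compress(s.encode())) / len(s)

samples = {
    "repetitive": "ACGT" * 250,
    "verse-like": "Gal, amant de la reine, alla, tour magnanime " * 20,
    "random-ish": "TTAGCCGATCAGGCTAACGT" * 50,
}
for name, s in samples.items():
    print(f"{name:11s} H = {shannon_entropy_per_symbol(s):.2f} bits/symbol, "
          f"compressed = {compressed_bits_per_symbol(s):.2f} bits/symbol")
```
Neither number captures the rhyme, the geography or the biological function discussed in the question, which is precisely the gap that a notion of "information depth" would have to fill.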
  • asked a question related to Information Analysis
Question
8 answers
Great attention should be paid to methods of searching for and selecting sources, in order to establish the credibility and value of information sources.
Relevant answer
Answer
The use of the internet for educational research:
Since the internet is a public domain, no one guarantees, or is responsible for, what is written or spread there. It may be obsolete knowledge or information, partially tested information, or incomplete information. Therefore, it is riskier to use internet information as a basis for academic writing, especially information spread on anonymous sites.
Although there are anonymous and incomplete sites, we can also find millions of popular journals, economic magazines and bulletins, such as the New York Post, the Straits Times, the Washington Post, university libraries, etc.
Reference
  • asked a question related to Information Analysis
Question
4 answers
I'm trying to discover possible association rules among different events in the environment, for example an association between height and the incidence of a disease. I'm working with polygons, but it is possible to convert them to point features. Could you suggest methods and software that support this?
Relevant answer
Answer
I don't know if my answer will fit your question exactly. If by "hidden associations" you mean associations that can be mediated by "hidden variables" related to the events that you measured, I suggest you explore structural equation modeling.
I learned how to use it on the Mplus software, which has a demo version https://www.statmodel.com/demo.shtml
and I know that there are also R packages.
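As one possible route for the association rules themselves, here is a minimal sketch using the mlxtend package, assuming the polygon or point layers have already been reduced to a boolean presence/absence table (one row per spatial unit, one column per event). The tiny table below is invented purely for illustration.
```python
# Minimal sketch: classical Apriori-style association rules on a presence/absence table.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical presence/absence observations for four spatial units.
data = pd.DataFrame(
    {
        "high_elevation": [True, True, False, True],
        "disease_case":   [True, True, False, True],
        "wetland":        [False, True, True, False],
    }
)

frequent = apriori(data, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```
For explicitly spatial association rules (taking neighbourhood relations into account), GIS-oriented tools or a prior spatial join step would be needed; the sketch only covers the tabular part.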
  • asked a question related to Information Analysis
Question
4 answers
I know how Random Forest works when we have two choices: if the apple is red go left, if the apple is green go right, etc.
But in my case the data are text features. I trained the classifier on training data, and I would like to understand in depth how the algorithm splits a node: based on what? The tf-idf weight, or the word itself? In addition, how does it predict the class of each example?
I would really appreciate a detailed explanation with a text example.
Relevant answer
Answer
Hi Sultan,
I am not sure which implementation of Random Forest you are using, but you are a bit off from what it actually does.
A random forest for classification builds multiple decision trees (not only one). Each tree outputs a label (using a process similar to the one you described with the apple), and the final label is decided by majority voting over the trees.
Each tree is trained on a bootstrap sample of the training set (drawn randomly with replacement, typically the same size as the original set, so roughly one third of the samples are left out of any given tree). At each node, only a random subset of the features is considered; for example, if your features are tf-idf weights of words, only a small, randomly selected subset of the words is examined at that node (a common default is the square root of the total number of features).
The splitting criterion at each node is a threshold condition on one of those candidate features, chosen to separate the classes as well as possible; by default the Gini impurity reduction is used (see the documentation of your implementation for details). Splitting stops when the remaining samples at a node all share the same label (or another stopping criterion is met); that node becomes a leaf, which assigns labels to test samples.
Please let me know if you got everything from here.
Best,
Luis
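A minimal sketch of the setup discussed above, with scikit-learn: tf-idf features from a few invented short texts, a random forest trained on them, and a look at which tf-idf features the forest found most useful for its splits.
```python
# Minimal sketch: random forest on tf-idf text features (tiny invented dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "battery life is great", "excellent battery and screen",
    "terrible battery, broke quickly", "awful screen, very disappointed",
]
labels = ["pos", "pos", "neg", "neg"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)            # each column is the tf-idf weight of one word

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, labels)                          # each tree splits on thresholds over tf-idf values

print(forest.predict(vectorizer.transform(["screen is excellent"])))

# Which tf-idf features contributed most to impurity reduction across all trees:
top = np.argsort(forest.feature_importances_)[::-1][:5]
print([vectorizer.get_feature_names_out()[i] for i in top])
```
So the trees never split "on the word itself"; they split on numeric thresholds over the tf-idf values, and the word identity only determines which column that value sits in.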
  • asked a question related to Information Analysis
Question
30 answers
I have 600 examples in my dataset for a classification task. The number of examples labeled in each class differs: ClassA has 300 examples, ClassB has 150 examples, ClassC has 150 examples.
I have read many papers and resources about splitting data into two or three parts: train, validation and test. Some say that if you have limited data there is no need to waste it on three parts; two parts (train-test) are enough, giving 70% for training and 30% for testing, and 5-fold cross-validation is also ideal for limited data.
Others suggest using 70% for training (with the validation data taken from the training data itself, 30% of it), and testing on the remaining 30% of the original data.
From your experience, could you tell me your thoughts and suggestions about this mystery? 
Thank you 
Relevant answer
Answer
According to Andrew Ng, in the Coursera MOOC on Introduction to Machine Learning, the general rule of thumb is to partition the data set into the ratio of 3:1:1 (60:20:20) for training, validation and testing respectively.
When a learning system is trained with some data samples, you might not know to which extent it can predict unseen samples correctly. The concept of cross validation is done to tweak the parameters used for training in order to optimize its accuracy and to nullify the effect of over-fitting on the training data. This shouldn't be done on the test set itself and hence the separation between testing set and cross validation set.
In cases where cross validation is not applicable, it is common to separate the data  in the ratio of 7:3 (70:30) for training and testing respectively.
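For the 600-example case described in the question (300/150/150), a stratified 60/20/20 split keeps the class proportions in every part. The sketch below uses dummy stand-ins for the real features and labels.
```python
# Minimal sketch: stratified 60/20/20 train/validation/test split with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the real data: 300 ClassA, 150 ClassB, 150 ClassC examples.
X = np.random.rand(600, 10)
y = np.array(["A"] * 300 + ["B"] * 150 + ["C"] * 150)

X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=0)                  # 60% training
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.5, stratify=y_temp, random_state=0)   # 20% validation, 20% test

print([len(s) for s in (y_train, y_val, y_test)])                     # 360 / 120 / 120
```
With so few examples, replacing the fixed validation set by stratified k-fold cross-validation on everything except the held-out test set is the common alternative mentioned in the answer above.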
  • asked a question related to Information Analysis
Question
4 answers
Does anybody have an example of research conducted in one area of science that when cross pollinated with research outputs from a different field of research, led to a breakthrough in a completely new area.
I am specifically looking for examples of research being conducted in two completely different areas and with no obvious connection that when brought together, by whatever means, led to a new discovery, process, or solution to an outstanding problem.
Relevant answer
Answer
I think that there are many potentially fruitful combinations. For example, economics and physics. But I think that both coauthors should have at least an elementary education in both sciences, otherwise they may not understand each other, also due to different terminology. I have an article about that on RG:
Yegorov Y. (2007) Econo-physics: A Perspective of Matching Two Sciences
I think that the social sciences can also be matched with computer science, in particular with the theory of networks. Here, if you are a social scientist and can formulate a good problem (like the propagation of drugs), you might not know computer science, but you can find someone who can run network simulations, study their statistical properties, etc.
  • asked a question related to Information Analysis
Question
1 answer
I have read your paper, "Finding Opinion Strength Using Fuzzy Logic on Web Reviews". It is really a good paper.
I want to ask about page 42 (page 6), where you mention seven datasets, namely Nikon D3SLR, Olympus FE-210, Cannon 300, Cannon EOS40D, FUZIFLIM S9000, Sony Cyber shot DSCH 10, and Kodak M1033.
How did you extract these data from the corresponding website? Do you have a public package for these data? I am very interested and would like to test these data. Thanks very much.
You also mention that "our system has a good accuracy in predicting product ranking". I am sorry, but I have not seen comparative experiments in the paper. How did you reach this conclusion? Thanks for any advice.
Relevant answer
Answer
sorry I do not have the expertise to answer this question.
  • asked a question related to Information Analysis
Question
19 answers
In our Age of Information, the physical Theory of Information by SHANNON is too narrow: he excluded "psychological considerations". But today the importance of this term is too great; we need a unified definition for  a l l  sciences!
My results can be seen at http://www.plbg.at, but they are only my ideas. They try to find a valid and acceptable abstraction over all sciences.
Relevant answer
Answer
Many definitions of information as a construct include the notion that information is the result of data analysis or visualization that is useful (actionable) or valuable (can be bought or sold). So the question of useful or valuable 'how' and 'to whom' naturally arises.
If you use the above factors in a definition, information is subjective and "psychological [or cultural] considerations" could not be excluded. Measurement of the construct defined this way is not easy but is a matter of perspective (as in judicial courts knowing obscenity when they see it).
David Kroenke is a well-recognized database and data modeling expert, and he has worked on differentiating data from information in practical ways for years (see his various articles and textbooks). He contends that information resides in the mind of the user. As an example, he points out that some people can make good use of a graph or map or other visualization of the results of data analysis while other people can't or won't. Further, an animal looking at that graph/visualization can't make use of it. Therefore, the information resides in the mind of the user, making it subjective.
in this context, perhaps the successful/unsuccessful use of data analysis/visualizations would be a good surrogate measure for information. If you can use it, it's there. If you make good use of it (successful outcome), the quality of the information was good.
A counter argument is that use of the results of data analysis/visualization is really applying strategies and knowledge and therefore falls into the category of expertise. I see this idea in discussions of expert systems and artificial intelligence.
Just my two cents worth on the topic.
  • asked a question related to Information Analysis
Question
17 answers
Thanks to Number Theory, we have been studying numbers and their properties for a long time now. Dealing with numbers usually involves trying to find out whether they possess certain special, almost magical, powers. My question revolves around some of the immediate clinical aspects:
Is there a general, generic, genetic manner in which numbers can be used as a memory storage unit? Is there a measure of how much information can be stored in numbers and in representations of them? Is it possible to find out how many numbers there are?
Relevant answer
Answer
Also, if you are not aware of Chaitin's constant, it's something I'd recommend to you: it's almost the reverse of what you asked for: a number about which we know almost nothing!  
  • asked a question related to Information Analysis
Question
13 answers
Let's use an example. We have a function y = f(x), in which x is the input (the probability) and y is the output (the entropy).  If we change y in y', can we find an x' such that f(x') = y'?
In other words, I know that when p changes, H changes; is it possible the opposite, such that if H changes, p changes?
Relevant answer
Answer
Considering the Gibbs-Shannon entropy S, we know that
dS = -∑_i (ln p_i) dp_i
We will consider i = 1, 2, ..., n with n > 2. Then there are n - 1 independent variations dp_i.
You cannot uniquely determine n - 1 > 1 independent quantities dp_i from a single quantity dS.
I am not sure if this is what you were asking?
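A small numerical illustration of this point: for n = 2 the map from p to H can be inverted (up to the symmetry p <-> 1 - p), whereas for n > 2 many different distributions share the same entropy, so H alone does not determine the p_i.
```python
# Minimal sketch: inverting entropy is possible for n = 2 but ambiguous for n > 2.
import numpy as np
from scipy.optimize import brentq

def H(ps):                                    # Shannon entropy in nats
    ps = np.asarray(ps, dtype=float)
    return float(-(ps * np.log(ps)).sum())

# n = 2: for a target entropy there is a unique p on (0, 1/2], so H is invertible there.
target = 0.5
p = brentq(lambda x: H([x, 1 - x]) - target, 1e-9, 0.5)
print(p, H([p, 1 - p]))

# n = 3: construct a second, different distribution with the same entropy as (0.2, 0.3, 0.5).
h_ref = H([0.2, 0.3, 0.5])
q = brentq(lambda x: H([x, x, 1 - 2 * x]) - h_ref, 0.05, 1 / 3 - 1e-9)
print([0.2, 0.3, 0.5], [round(q, 4), round(q, 4), round(1 - 2 * q, 4)], h_ref)
```
This matches the answer above: a single value of H (or a single dS) cannot pin down n - 1 independent probabilities once n exceeds 2.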
  • asked a question related to Information Analysis
Question
11 answers
Relevant answer
Answer
I always admired the work of Francois Quesnay (Tableau Economique). It is an early (18th C / 1758) precursor to input output analysis. It is also connected to the modern work of Walter Isard (inter-regional input output models) and to the classical economics / Marx inspired "production of commodities in terms of commodities." (Sraffa). Modern economic geographers (Michael Webber and David Rigby) have also explored the theoretical connections. [The Golden Age Illusion: Rethinking Postwar Capitalism by Michael J. Webber; David L. Rigby]. Surely another ill-fated example is the attempt at Soviet style central planning -- notorious for making too many of the wrong items, due to lack of market driven price signals. So we come back to the market and big data efforts of retailers like Walmart who have long explored patterns with terabytes of data.
Until your nice question / comment I had not really joined these ideas to connect to big data, but you are correct -- there is a nice lineage. At least I think so!
  • asked a question related to Information Analysis
Question
13 answers
Information analysis as a discipline belonging to information science; specifically, the behavior of the information published electronically by governments on social networks.
Relevant answer
Answer
Hi Yarenia, good day.
I hope the five research articles below suit your needs.
  • asked a question related to Information Analysis
Question
3 answers
Thinking, insight, equations and gedanken experiments are all other words for information in a general sense. If this is true, then it would seem that information has to be exchanged in the universe before cause and effect is observed or constructed. Information, therefore, applies to every discipline, although it may be called something else in that discipline.
Relevant answer
Answer
What do you mean by "groups of threes"?
  • asked a question related to Information Analysis
Question
8 answers
What are Semantics of Business Vocabulary and Business Rules (SBVR) models, and what are information system models? Does UML model the business or the information system?
Relevant answer
Answer
There is a problem of terminology here: whereas meta-models are used to translate from one language to another, stereotypes are used within a language and therefore have nothing to do with meta-models.
  • asked a question related to Information Analysis
Question
10 answers
I have made the effort to produce such software and am seeking people who have suitable applications for such sequences. Currently, the software will compute shustrings on the integers and the nucleotides, but will generate maximally (or uniformly) disordered sequences only on the integers; the nucleotides are coming, as are binary and a user-selectable symbol set. For now, I am trying to understand the application areas for such sequences and am offering to provide them to interested parties, in the hope that users will give feedback.
This software generates sequences in integral powers of ten, up to length one hundred million digits.  A sequence of one hundred million digits takes just 30 seconds to produce.
Relevant answer
Answer
The shuffle algorithm is now properly functioning and mixes the shustrings very quickly. The result is a UDS without lumps; instead, the UDS is rather smooth, with combinations of all symbols sprinkled throughout the sequence. Again, a UDS is a UDS; all are equivalently disordered. It is just that some UDSs have better visual appeal than others.
  • asked a question related to Information Analysis
Question
6 answers
Hi gurus, I have a set of documents and I want to know what topic these documents are about. Is this a topic modeling problem? Is there any software or technique to which I can give this set of documents as input and which gives me their topics, perhaps using a kind of taxonomy? Can anybody explain both the theoretical and the practical side? Thanks a lot.
Relevant answer
Answer
Hello Kims,
In data mining, topic models are a set of algorithms for uncovering hidden topics in a set of documents (a corpus).
for a review of text data mining please see: http://www.tandfonline.com/doi/full/10.1080/21642583.2014.970732
for implementation of Latent Dirichlet allocation methods and other algorithms.
Regards
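For a concrete starting point with the LDA approach mentioned above, here is a minimal scikit-learn sketch on four invented documents; in practice the number of topics has to be tuned and the corpus preprocessed far more carefully.
```python
# Minimal sketch: Latent Dirichlet Allocation topic modeling with scikit-learn.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the central bank raised interest rates to fight inflation",
    "inflation and interest rates worry the markets",
    "the team won the match with a late goal",
    "the player scored twice and the team won",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)                       # word counts per document

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):              # word weights per topic
    top = topic.argsort()[::-1][:5]
    print(f"topic {k}:", [words[i] for i in top])

print(lda.transform(X).round(2))                         # topic mixture of each document
```
The word lists per topic still have to be labelled by a human (or matched against a taxonomy), which is the "theoretical" step no tool automates completely.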
  • asked a question related to Information Analysis
Question
5 answers
Products in final assembly become more and more complex. I am investigating how companies develop their information support to the operator and would like to know if anyone else has done some research in this area?
Example of RQs:
Which media (text, pictures, movies) are best for presenting information, in terms of
Quality?
Personalised instructions?
time-saving?
cost-saving?
social sustainability?
available ICT?
Relevant answer
Answer
Do you mean a 'generic' best way of presenting work instructions for operators? A confluence of factors (including, e.g., the effectiveness of the underlying technologies, cost, availability, and the unique situation on the ground, such as the literacy level of the operator) should determine the best mix of multimedia presentation solutions, which would obviously vary from case to case.
  • asked a question related to Information Analysis
Question
4 answers
I want to study the process of appropriation of the information through virtual communities of Facebook. Would you help me with this?
I would like to know if you could advise me on a review of the literature on these concepts:
-a / the concept of information (not the information system)
-b / the adoption of information (not the adoption of information systems)
-c / the acquisition of information (and not the acquisition of information systems)
-d / the ownership of the information (not the ownership of the information system)
Can we classify them in this order:
-1 / Adoption of information;
-2 / Acquisition of information;
-3 / Ownership of information
Thanks a lot
Relevant answer
Answer
I think there is one more question about the meaning of acquisition of information, because probably you will have to identify, in your study, the difference between acquisition / ownership of information and  learning.
  • asked a question related to Information Analysis
Question
5 answers
Hello, I want to compute the so-called IRT Item Information Function for individual items as well as the latent variable using Stata. Several methods are available for IRT analysis, like clogit, gllamm or raschtest, but so far I could not find any syntax to draw the Information Function using anyone of these methods. So, any help or example is much appreciated.
Relevant answer
Answer
Dear Richard,
This morning I did not see the options test info in the help file of icc_2pl.
Now I did, and the great news is that thanks to icc_2pl, I am also able to create the item information plots and the test information plot.
So, I really have to thank you again for your ado file, and I think you should share it with the Stata community too. It is a great time saver to have this.
Best regards,
Eric
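For readers who want to see what icc_2pl-style plots are computing, here is a small illustration (in Python rather than Stata syntax) of the underlying 2PL formulas: the item information is I(theta) = a^2 * P(theta) * (1 - P(theta)), and the test information is the sum over items. The a (discrimination) and b (difficulty) parameters below are invented.
```python
# Minimal sketch: 2PL item information and test information curves (illustrative parameters).
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-4, 4, 200)
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 1.0)]     # hypothetical (a, b) per item

def item_information(a, b, theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))    # 2PL probability of a correct response
    return a ** 2 * p * (1 - p)

test_info = np.zeros_like(theta)
for a, b in items:
    info = item_information(a, b, theta)
    test_info += info
    plt.plot(theta, info, label=f"item a={a}, b={b}")

plt.plot(theta, test_info, "k--", label="test information")
plt.xlabel("theta (latent trait)")
plt.ylabel("information")
plt.legend()
plt.show()
```
In Stata the same curves are produced from the estimated a and b parameters; the sketch only makes the formula behind those plots explicit.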
  • asked a question related to Information Analysis
Question
10 answers
Non-market production of information has been gaining ground for the last 15 years or so. New social forms of information production, facilitated by networks, are becoming counterintuitive to people living in market-based economies. Individuals can reach and inform millions of others around the world. This fact has led to the emergence of coordinated effects, where the aggregate effect of individual action, even if it is not self-consciously cooperative, produces the coordinated effect of a rich information environment. Based on this empirical state of affairs, do you think that information networks present an alternative to the traditional market production of information?
Relevant answer
Answer
Yes, every successive day, we have larger use of information networks to provide information about events, products, ideas, technologies, etc. The internet and the mass media are allowing this to happen.
Definitely, it is a supplement, or even an alternative, to traditional market-based information. But it is unlikely to replace the latter. Of course, it would enable people to have a closer look at the information that the market provides, as they already have the information through the networks. This would further add to the "customer is king" theory, which is a key postulate of the globalization philosophy.
The seminal book in the attached link that was published by Professor Benkler of Harvard Law School in 2006, aroused a debate on the power of these information networks.
  • asked a question related to Information Analysis
Question
8 answers
Brain-to-brain transfer of information has been illustrated between a pair of rats (Pais-Vieira et al. 2013). We evaluate the scientific validity of this study. First, the rats receiving the electrical stimulation were performing at 62 to 64% correctness when chance was 50% correctness using one of two discrimination paradigms, tactile or visual. This level of performance is not sustainable without being imbedded within a behavioural paradigm that delivers reward periodically. Second, we estimated that the amount of information transferred between the rats was 0.004 bits per second employing the visual discrimination paradigm and 0.015 bits per second employing the tactile discrimination paradigm. The reason for these low transfer scores (i.e. rates that are 1 to 2 orders of magnitude lower than that transferred by brain-machine interfaces) is that overall the rats were performing close to chance. Nevertheless, based on these results Pais-Vieira et al. have suggested that the next step is to extend their studies to multiple brain communication. We would suggest that the information transfer rate for brain-to-brain communication be enhanced before performing such an experiment. Note that the information transfer rate for human language can be as high as 40 bits per second (Reed and Durlach 1998).
For more information see: Tehovnik EJ & Teixeira e Silva Z (2014) Brain-to-brain interface for real-time sharing of sensorimotor information: a commentary. OA Neurosciences, Jan 01;2(1):2.
Relevant answer
Answer
Thank you very much Edward for bringing this interesting piece of work to limelight through your question.
There is no reason why two physical objects like a pair of brains cannot interact directly between them through the intermediary of appropriate fields. But, the coupling constant or the strength of the interaction may be very low and sensitively dependent on many factors which cannot be directly controlled when the brains belong to two living creatures like humans.
This kind of research is clearly a big pain for traditional thinkers, who tend to believe that brains are isolated from each other.
Regards
Rajat
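To make the bits-per-second estimates quoted in the question concrete, here is a minimal sketch of one common way such numbers are obtained for a two-choice task: treat performance as a binary symmetric channel, so the information per trial is 1 - H(p) bits for accuracy p, then divide by the trial duration. The 10-second trial duration below is purely an assumption for illustration, not the value used by Pais-Vieira et al.
```python
# Minimal sketch: information transfer estimate for a two-alternative discrimination task.
import math

def bits_per_trial(p_correct: float) -> float:
    h = -(p_correct * math.log2(p_correct) + (1 - p_correct) * math.log2(1 - p_correct))
    return 1.0 - h                       # binary-symmetric-channel style estimate

for p in (0.62, 0.64):
    per_trial = bits_per_trial(p)
    per_second = per_trial / 10.0        # assumed ~10 s per trial, purely illustrative
    print(f"p = {p}: {per_trial:.3f} bits/trial, ~{per_second:.4f} bits/s")
```
The calculation makes the commentary's point vivid: accuracies of 62 to 64% yield only a few hundredths of a bit per trial, which is why the resulting transfer rates are so far below those of brain-machine interfaces or human language.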
  • asked a question related to Information Analysis
Question
3 answers
From an information management point of view.
Relevant answer
Answer
Not really my area, but Wikipedia (which, I know, isn't always the best!) has a good and very well referenced article, though it does not deal in any depth with information management:
  • asked a question related to Information Analysis
Question
2 answers
How can I calculate information entropy between wavelet coefficients and signals?
Relevant answer
Answer
May I ask what the aim of calculating the information entropy between wavelet coefficients and signals is? Is it for coupling analysis, or to measure the complexity of the signals?
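If the aim is to measure the complexity of a signal, one common recipe is the wavelet entropy: decompose the signal with a discrete wavelet transform, normalise the energy of each decomposition level into a probability distribution, and take its Shannon entropy. Below is a minimal sketch with PyWavelets on a synthetic signal; the wavelet and the number of levels are arbitrary choices, not a recommendation.
```python
# Minimal sketch: wavelet entropy of a synthetic signal using PyWavelets.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024, endpoint=False)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)          # approximation + detail coefficients
energies = np.array([np.sum(c ** 2) for c in coeffs])
p = energies / energies.sum()                          # relative wavelet energy per level

wavelet_entropy = -np.sum(p * np.log2(p))
print(f"wavelet entropy: {wavelet_entropy:.3f} bits (max {np.log2(len(p)):.3f})")
```
For coupling between two signals, the analogous step would be to compare such distributions (e.g. via mutual information or a divergence measure) rather than the entropy of a single signal.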